Abstract:
An electronic device includes camera modules and an expandable flexible display. The display is expanded based on user input. In an example, a processor detects an event for expanding the flexible display while a first preview image, acquired through a first camera module, is output to the flexible display, and then outputs a second preview image to the expanded flexible display. The second preview image is based on second image data acquired through a second camera module.
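A minimal sketch of the preview-switching behavior described above, assuming a simple expansion event and two camera sources; all names (CameraModule, FlexibleDisplay, PreviewController) are hypothetical and not taken from any real device API.

```kotlin
// Hypothetical types illustrating: first camera's preview on the collapsed display,
// second camera's preview once an expansion event is detected.
enum class DisplayState { COLLAPSED, EXPANDED }

class CameraModule(val id: String) {
    fun acquireImageData(): String = "image-data-from-$id"   // stand-in for raw frames
}

class FlexibleDisplay(var state: DisplayState = DisplayState.COLLAPSED) {
    fun output(preview: String) = println("[$state] showing $preview")
}

class PreviewController(
    private val firstCamera: CameraModule,
    private val secondCamera: CameraModule,
    private val display: FlexibleDisplay
) {
    // Show the first camera's preview while the display is collapsed.
    fun start() = display.output("preview of " + firstCamera.acquireImageData())

    // On an expansion event, switch to a preview built from the second camera's data.
    fun onExpandEvent() {
        display.state = DisplayState.EXPANDED
        display.output("preview of " + secondCamera.acquireImageData())
    }
}

fun main() {
    val controller = PreviewController(CameraModule("cam1"), CameraModule("cam2"), FlexibleDisplay())
    controller.start()          // first preview on the collapsed display
    controller.onExpandEvent()  // second preview on the expanded display
}
```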
Abstract:
An electronic device is provided. The electronic device includes a hinge that allows the electronic device to be folded or unfolded, a foldable display disposed on at least one side of the electronic device, a processor that controls an output screen of the foldable display, sensor circuitry that detects a folding angle of the electronic device, and input circuitry for touch input. The foldable display includes a first area disposed on one side of the hinge and a second area disposed on the other side. When an execution input for an application is received while the folding angle is within a predetermined range, the processor provides an execution screen of the application to a target area, which is either the first area or the second area.
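A sketch of how an execution input might be routed to one of the two display areas when the folding angle lies in a predetermined range; the angle range, FoldArea names, and return convention are illustrative assumptions.

```kotlin
// Hypothetical router: launch the app's screen on the configured target area only
// while the measured folding angle is inside the predetermined range.
enum class FoldArea { FIRST_AREA, SECOND_AREA }

class FoldableScreenRouter(private val targetArea: FoldArea = FoldArea.FIRST_AREA) {
    private val angleRange = 60.0..120.0  // assumed "predetermined range", in degrees

    fun onExecutionInput(app: String, foldingAngle: Double): FoldArea? =
        if (foldingAngle in angleRange) {
            println("Launching $app on $targetArea")
            targetArea
        } else {
            println("Angle $foldingAngle outside range; no routing applied")
            null
        }
}

fun main() {
    val router = FoldableScreenRouter(FoldArea.SECOND_AREA)
    router.onExecutionInput("gallery", 95.0)   // within range -> routed to target area
    router.onExecutionInput("gallery", 170.0)  // outside range -> ignored
}
```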
Abstract:
A foldable electronic device includes a first housing structure, a second housing structure, a first display that folds about a folding area according to an arrangement state corresponding to an angle between the first housing structure and the second housing structure, a second display, a hinge structure disposed between the first housing structure and the second housing structure, a driver that operates according to at least one of a plurality of drive signals to rotate the hinge structure, a processor, and a memory that stores instructions that, when executed, cause the processor to change the arrangement state when a first event occurs in the foldable electronic device.
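One way to picture the event-driven hinge actuation described above: a first event selects a drive signal, and the driver rotates the hinge to change the arrangement state. The event names, drive-signal values, and state mapping below are assumptions made only for illustration.

```kotlin
// Hypothetical hinge driver and controller; nothing here reflects a real drive-signal scheme.
enum class ArrangementState { FOLDED, UNFOLDED }
enum class DriveSignal { OPEN_SLOW, OPEN_FAST, CLOSE }

class HingeDriver {
    fun rotate(signal: DriveSignal): ArrangementState = when (signal) {
        DriveSignal.OPEN_SLOW, DriveSignal.OPEN_FAST -> ArrangementState.UNFOLDED
        DriveSignal.CLOSE -> ArrangementState.FOLDED
    }
}

class FoldableController(private val driver: HingeDriver) {
    var state = ArrangementState.FOLDED
        private set

    // "First event" handler: choose a drive signal and let the driver change the state.
    fun onEvent(event: String) {
        val signal = when (event) {
            "incoming_call" -> DriveSignal.OPEN_FAST
            "alarm" -> DriveSignal.OPEN_SLOW
            else -> DriveSignal.CLOSE
        }
        state = driver.rotate(signal)
        println("event=$event signal=$signal -> state=$state")
    }
}

fun main() {
    FoldableController(HingeDriver()).onEvent("incoming_call")
}
```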
Abstract:
A foldable electronic device is disclosed. The electronic device includes a first housing and a second housing adjacent to the first housing; a hinge unit configured to connect the first housing and the second housing; a flexible touch display disposed across the first housing and the second housing; at least one sensor configured to detect an angle formed by the first housing and the second housing; a processor operatively connected with the flexible touch display and the at least one sensor; and a memory operatively connected with the processor. The memory stores instructions that, when executed, cause the processor to determine, using the at least one sensor, whether the first housing and the second housing are in an unfolded state; when they are in the unfolded state, set a touch sensitivity of the flexible touch display to a first state; determine, using the at least one sensor, a change in the angle formed by the first housing and the second housing; and, while the angle is changing, change the touch sensitivity of the flexible touch display to a second state lower than the first state.
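A compact sketch of that touch-sensitivity policy, assuming periodic hinge-angle samples: normal sensitivity while the device rests unfolded, reduced sensitivity while the angle is changing. The threshold, sensitivity values, and change-detection rule are illustrative assumptions.

```kotlin
// Hypothetical policy: compare consecutive angle samples to detect folding in progress.
class TouchSensitivityPolicy {
    private var lastAngle: Double? = null
    var sensitivity = FIRST_STATE
        private set

    fun onAngleSample(angle: Double) {
        val previous = lastAngle
        lastAngle = angle
        val unfolded = angle >= UNFOLDED_THRESHOLD
        val changing = previous != null && kotlin.math.abs(angle - previous) > 1.0
        sensitivity = when {
            changing -> SECOND_STATE          // angle is moving: lower sensitivity
            unfolded -> FIRST_STATE           // stable and unfolded: normal sensitivity
            else -> sensitivity               // otherwise keep the current setting
        }
        println("angle=$angle changing=$changing -> sensitivity=$sensitivity")
    }

    companion object {
        const val FIRST_STATE = 1.0           // nominal touch sensitivity
        const val SECOND_STATE = 0.4          // reduced sensitivity while folding
        const val UNFOLDED_THRESHOLD = 170.0  // degrees treated as "unfolded"
    }
}

fun main() {
    val policy = TouchSensitivityPolicy()
    listOf(178.0, 177.5, 150.0, 120.0, 120.0).forEach(policy::onAngleSample)
}
```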
Abstract:
The present disclosure relates to a method for providing a push service using web push, and an electronic device supporting the same. Methods for providing a push service according to various embodiments of the present disclosure may include: displaying a user interface of a software program; when a first user input is detected in the user interface, receiving a first web page from a first server associated with a push service according to a user's subscription; displaying an indicator for the push service together with the first web page; and, when a second user input for subscribing to the push service is detected, transmitting a signal indicative of the push service subscription to a second server that is independent of the first server and manages a plurality of web sites providing the push service. Other embodiments are possible.
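A rough sketch of that two-step flow with placeholder server clients; the ContentServer and PushRegistry interfaces stand in for the first and second servers and are hypothetical, not part of any real Web Push API.

```kotlin
// First input: fetch and display the page with a push indicator.
// Second input: send the subscription signal to the independent push registry.
interface ContentServer { fun fetchPage(): String }                 // "first server"
interface PushRegistry { fun registerSubscription(site: String) }   // "second server"

class PushServiceFlow(
    private val contentServer: ContentServer,
    private val pushRegistry: PushRegistry
) {
    private var currentPage: String? = null

    fun onFirstInput() {
        currentPage = contentServer.fetchPage()
        println("Showing push indicator and page: $currentPage")
    }

    fun onSecondInput(site: String) {
        requireNotNull(currentPage) { "page must be shown before subscribing" }
        pushRegistry.registerSubscription(site)
        println("Subscription signal sent for $site")
    }
}

fun main() {
    val flow = PushServiceFlow(
        contentServer = object : ContentServer { override fun fetchPage() = "news front page" },
        pushRegistry = object : PushRegistry {
            override fun registerSubscription(site: String) = println("registered: $site")
        }
    )
    flow.onFirstInput()
    flow.onSecondInput("example-news-site")
}
```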
Abstract:
A mobile device user interface method activates a camera module to support a video chat function and acquires an image of a target object using the camera module. In response to detecting a face in the captured image, the facial image data is analyzed to identify an emotional characteristic of the face by identifying a facial feature and comparing the identified feature with a predetermined feature associated with an emotion. The identified emotional characteristic is compared with the corresponding emotional characteristic of previously acquired facial image data of the target object. Based on the comparison, an emotion-indicative image is generated and transmitted to a destination terminal used in the video chat.
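A minimal sketch of the comparison step, assuming a single facial feature (mouth curvature) already extracted per frame: classify the current emotion, compare it with the emotion from the previous frame, and send an emotion-indicative image only when it changes. The feature, emotion labels, and thresholds are assumptions for illustration only.

```kotlin
// Hypothetical per-frame emotion comparison for a video chat peer.
enum class Emotion { NEUTRAL, HAPPY, SAD }

// Stand-in for feature analysis: a real system would analyze facial image data.
fun classifyEmotion(mouthCurvature: Double): Emotion = when {
    mouthCurvature > 0.3 -> Emotion.HAPPY
    mouthCurvature < -0.3 -> Emotion.SAD
    else -> Emotion.NEUTRAL
}

class EmotionNotifier(private val send: (String) -> Unit) {
    private var previous: Emotion? = null

    fun onFrame(mouthCurvature: Double) {
        val current = classifyEmotion(mouthCurvature)
        if (current != previous) {
            send("emotion-image:$current")   // generated emotion-indicative image
        }
        previous = current
    }
}

fun main() {
    val notifier = EmotionNotifier { payload -> println("sending $payload to peer") }
    listOf(0.0, 0.5, 0.6, -0.4).forEach(notifier::onFrame)
}
```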