Patent classifications
H04N23/632
Devices, Methods, and Graphical User Interfaces for Assisted Photo-Taking
An electronic device with a camera obtains, with the camera, one or more images of a scene. The electronic device detects a respective feature within the scene. In accordance with a determination that a first mode is active on the device, the electronic device provides a first audible description of the scene. The first audible description provides information indicating a size and/or position of the respective feature relative to a first set of divisions applied to the one or more images of the scene. In accordance with a determination that the first mode is not active on the device, the electronic device provides a second audible description of the scene. The second audible description is distinct from the first audible description and does not include the information indicating the size and/or position of the respective feature relative to the first set of divisions.
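The "first set of divisions" can be pictured as a grid overlaid on the image. The sketch below is a hypothetical illustration, assuming a 3x3 grid and a bounding-box feature; the function name and the spoken-string format are inventions for this example, not the patent's.

```python
# Hypothetical sketch of the first-mode description: the scene is split into a
# 3x3 grid (an assumed "first set of divisions") and a detected feature's
# bounding box is described relative to the cell its center falls in.

GRID = 3  # assumed 3x3 division

def describe_feature(box, image_w, image_h, grid=GRID):
    """box = (left, top, width, height) in pixels; returns a spoken-style string."""
    x, y, w, h = box
    col = min(int((x + w / 2) / image_w * grid), grid - 1)
    row = min(int((y + h / 2) / image_h * grid), grid - 1)
    cols = ["left", "center", "right"]
    rows = ["top", "middle", "bottom"]
    frac = (w * h) / (image_w * image_h)  # size relative to the whole frame
    return f"Face near {rows[row]} {cols[col]}, filling {frac:.0%} of the frame"

print(describe_feature((640, 80, 320, 320), 1280, 720))
# -> Face near middle center, filling 11% of the frame
```

When the first mode is off, the same detector output could instead drive a plainer description that omits the grid-relative size and position.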
Hysteretic multilevel touch control
According to some embodiments, a processor reads a force from a touch control. Hysteretic behavior is provided through at least three different actuation states and at least four different force threshold values, allowing user-friendly control of disparate actions through one touch interface with applications in volume control, photography, and user authentication. Other possibilities are shown and discussed.
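The three-state, four-threshold behavior can be sketched as a small state machine in which each state boundary has a higher "rise" threshold than its "fall" threshold, so force jitter near a boundary does not toggle the state. The threshold values below are assumptions chosen for illustration, not values from the patent.

```python
# A minimal sketch of hysteretic multilevel force sensing, assuming three
# actuation states (0 = idle, 1 = half-press, 2 = full-press) and four
# thresholds: each boundary rises at a higher force than it falls.

RISE_1, FALL_1 = 0.30, 0.20   # idle <-> half-press (assumed values)
RISE_2, FALL_2 = 0.70, 0.55   # half-press <-> full-press (assumed values)

def next_state(state, force):
    if state == 0:
        return 1 if force >= RISE_1 else 0
    if state == 1:
        if force >= RISE_2:
            return 2
        return 0 if force < FALL_1 else 1
    # state == 2
    return 1 if force < FALL_2 else 2

# A slowly varying force trace: the dip to 0.25 between the two rises does not
# drop back to idle, because 0.25 is still above FALL_1.
trace = [0.0, 0.35, 0.25, 0.75, 0.60, 0.50, 0.10]
state, states = 0, []
for f in trace:
    state = next_state(state, f)
    states.append(state)
print(states)
# -> [0, 1, 1, 2, 2, 1, 0]
```

Each state can then be bound to a different action (e.g. half-press to focus, full-press to shoot), which is what makes one touch surface control disparate actions.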
Electronic device capable of controlling image display effect, and method for displaying image
An electronic device includes a first camera, a second camera, a display, a memory, and a processor. The processor collects a first image obtained by the first camera with respect to an external object and a second image obtained by the second camera with respect to the external object, generates a third image with respect to the external object using a first area of the first image and a second area of the second image, which corresponds to the first area, identifies an input associated with the third image displayed through the display, and displays an image generated using at least one of the first image, the second image, or depth information in response to the input. The generating operation of the third image includes generating the depth information with respect to the third image.
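One common way to derive depth from two cameras viewing the same object, offered here as a hedged illustration rather than this patent's specific method, is stereo triangulation: a point seen at a pixel offset (disparity) between the two views lies at depth = focal_length * baseline / disparity.

```python
# Stereo-triangulation depth sketch. The focal length (in pixels) and the
# 12 mm baseline between the two cameras are assumed example values.

def depth_from_disparity(disparity_px, focal_px=1000.0, baseline_m=0.012):
    """Return scene depth in meters for a given pixel disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

print(depth_from_disparity(24.0))  # -> 0.5 (meters)
```

A depth value like this, computed per pixel over the overlapping first and second areas, is the kind of "depth information" that can then drive display effects such as background blur.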
ELECTRONIC DEVICE COMPRISING MULTI-CAMERA, AND PHOTOGRAPHING METHOD
An electronic device having a multi-camera, according to various embodiments of the present disclosure, includes: the multi-camera, a display, a memory, and a processor operatively connected to the multi-camera, the display, and the memory. The processor may be configured to receive a first image being photographed at a first angle of view of the multi-camera, receive a second image being photographed at a second angle of view of the multi-camera, identify a subject in the first image according to predetermined criteria, generate a third image in which the identified subject is cropped according to a predetermined area of interest, and display the second image and the third image on at least a portion of an area in which the first image is displayed. Various other embodiments may be possible.
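The subject-cropping step can be sketched as padding the detected subject's bounding box into an area of interest and clamping it to the frame. The margin value and function name are assumptions for illustration.

```python
# Sketch of the third-image generation: crop the identified subject out of the
# wide (first) image using an assumed region of interest that pads the
# subject's bounding box by a margin and clamps the result to the frame.

def crop_roi(box, image_w, image_h, margin=0.2):
    """box = (left, top, width, height); returns the clamped crop rectangle."""
    x, y, w, h = box
    mx, my = int(w * margin), int(h * margin)
    left = max(0, x - mx)
    top = max(0, y - my)
    right = min(image_w, x + w + mx)
    bottom = min(image_h, y + h + my)
    return (left, top, right - left, bottom - top)

print(crop_roi((100, 50, 200, 300), 1920, 1080))
# -> (60, 0, 280, 410)  (top edge clamped to the frame)
```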
Photography-based 3D modeling system and method, and automatic 3D modeling apparatus and method
The present disclosure discloses a photography-based 3D modeling system and method, and an automatic 3D modeling apparatus and method, including: (S1) attaching a mobile device and a camera to the same camera stand; (S2) obtaining multiple images used for positioning from the camera or the mobile device during movement of the stand, and obtaining a position and a direction of each photo capture point, to build a tracking map that uses a global coordinate system; (S3) generating 3D models on the mobile device or a remote server based on an image used for 3D modeling at each photo capture point; and (S4) placing the individual 3D models of all photo capture points in the global three-dimensional coordinate system based on the position and the direction obtained in S2, and connecting the individual 3D models of multiple photo capture points to generate an overall 3D model that includes multiple photo capture points.
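Step S4 amounts to a rigid transform: each capture point's local 3D model is rotated by the heading and shifted by the position recovered in S2. The sketch below uses a yaw-only rotation about the vertical axis, which is an assumed simplification of the full pose.

```python
# Place a local 3D model into the global coordinate system using the capture
# point's position and direction (here reduced to a single yaw angle).

import math

def to_global(local_points, position, yaw_rad):
    """Rotate local (x, y, z) points by yaw about the z axis, then translate."""
    c, s = math.cos(yaw_rad), math.sin(yaw_rad)
    px, py, pz = position
    out = []
    for x, y, z in local_points:
        out.append((c * x - s * y + px, s * x + c * y + py, z + pz))
    return out

# A point one unit ahead of a capture point at (10, 5, 0) facing 90 degrees
# ends up one unit along the global y axis from that position:
print(to_global([(1.0, 0.0, 0.0)], (10.0, 5.0, 0.0), math.pi / 2))
```

Applying the same transform to every capture point's model expresses all of them in one global frame, after which adjacent models can be stitched into the overall 3D model.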
Electronic device for acquiring image by using light-emitting module having polarizing filter and method for controlling same
An electronic device includes a display; a camera; a first light emitting module, wherein each of the camera and the first light emitting module comprises a first type polarizing filter; a second light emitting module including a second type polarizing filter that is different from the first type polarizing filter; and at least one processor configured to: obtain a first image by using the camera, based on a first light output from the first light emitting module and a second light output from the second light emitting module; identify at least one feature point in the first image; and control the display to display the first image and information related to the at least one feature point.
Imaging apparatus, method of controlling imaging apparatus and computer-readable medium
An imaging apparatus includes: a display unit configured to display on a display a live image of a subject and previously captured images; an image capturing unit configured to capture an image of the subject; and a recording unit configured to record in a recording medium the captured image of the subject which has been captured by the image capturing unit and the previously captured images in association with disposition information which includes a display position of the captured image of the subject and display positions of the previously captured images.
Universal control interface for camera
The present invention relates to a new universal control interface for cameras and other audio-visual recording/using instruments, and more specifically to a multi-axis visual interface for simultaneous display and control of aperture (Av), shutter speed (Tv), ISO, and/or other parameters such as exposure value (EV). The invention relates to a triangular, rectangular, or clover-shaped interface in which the parameters are visually represented on one of the axes, sides, or branches of the interface, and in which the user, instead of altering the parameters directly, provides an intention such as (a) depth of field, (b) motion blur, (c) granularity, or the composite (d) exposure. The invention further describes how, in some cases, one or more of these parameters can be locked or made unavailable based on the technology used for the display and control interface.
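The coupling between these parameters that such an intention-based interface must respect can be grounded in the standard exposure relation (general photography math, not specific to this patent): EV100 = log2(N^2 / t) - log2(ISO / 100), where N is the f-number and t the shutter time in seconds. If the user asks for more depth of field (larger N) while exposure is locked, the shutter time must compensate.

```python
# Standard exposure-value bookkeeping: hold EV constant while trading the
# aperture intention against shutter speed. Numbers are illustrative.

import math

def ev100(n, t, iso):
    """Exposure value normalized to ISO 100."""
    return math.log2(n * n / t) - math.log2(iso / 100)

def shutter_for(ev, n, iso):
    """Solve EV100 = log2(N^2 / t) - log2(ISO / 100) for t."""
    return n * n / (2 ** (ev + math.log2(iso / 100)))

ev = ev100(2.8, 1 / 250, 100)   # metered scene at f/2.8, 1/250 s
t = shutter_for(ev, 8.0, 100)   # stop down to f/8 at the same EV
print(round(ev, 2), round(t, 4))
# -> 10.94 0.0327  (about three stops slower shutter)
```

Locking one axis (say, ISO at a sensor's base sensitivity) then simply removes one degree of freedom from this equation, which is the "locked or unavailable" behavior the abstract describes.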
TECHNIQUES TO SELECTIVELY CAPTURE VISUAL MEDIA USING A SINGLE INTERFACE ELEMENT
Techniques to selectively capture media using a single user interface element are described. In one embodiment, an apparatus may comprise a touch controller, a visual media capture component, and a storage component. The touch controller may be operative to receive a haptic engagement signal. The visual media capture component may be operative to be configured in a capture mode based on whether a haptic disengagement signal is received by the touch controller before expiration of a first timer, the capture mode one of a photo capture mode or video capture mode, the first timer started in response to receiving the haptic engagement signal, the first timer configured to expire after a first preset duration. The storage component may be operative to store visual media captured by the visual media capture component in the configured capture mode. Other embodiments are described and claimed.
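The timer-gated mode selection can be sketched very compactly: start a timer on haptic engagement; if disengagement arrives before the timer expires, capture a photo, otherwise capture video. The window duration below is an assumption, not a value from the claims.

```python
# Single-interface-element capture: a short tap yields a photo, a press that
# outlives the first timer yields video.

PHOTO_WINDOW_S = 0.4  # assumed first-timer duration

def capture_mode(press_duration_s, window_s=PHOTO_WINDOW_S):
    """Return 'photo' for a tap released before the timer, else 'video'."""
    return "photo" if press_duration_s < window_s else "video"

print(capture_mode(0.15), capture_mode(1.2))
# -> photo video
```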
METHOD, ELECTRONIC DEVICE, AND RECORDING MEDIUM FOR NOTIFYING OF SURROUNDING SITUATION INFORMATION
According to various embodiments, a method by which an electronic device provides notification of surrounding situation information may comprise the operations of: monitoring a value indicating a movement of the electronic device; determining, on the basis of the value indicating a movement of the electronic device, whether the electronic device is in a stopped state; acquiring surrounding situation information of the electronic device, to be notified to a user, when the electronic device is in a stopped state; and outputting the surrounding situation information.
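The stopped-state determination can be sketched as a threshold test on the movement value. In the hedged example below, the "value indicating a movement" is taken to be a motion-sensor jitter magnitude, and the device counts as stopped when that value stays under a threshold for several consecutive samples; both numbers are assumptions for illustration.

```python
# Stopped-state detection from a stream of movement-magnitude samples.

MOVE_THRESHOLD = 0.05   # assumed movement-magnitude threshold
STILL_SAMPLES = 3       # assumed consecutive-sample requirement

def is_stopped(movement_values, threshold=MOVE_THRESHOLD, run=STILL_SAMPLES):
    """True once `run` consecutive samples fall below `threshold`."""
    streak = 0
    for v in movement_values:
        streak = streak + 1 if v < threshold else 0
        if streak >= run:
            return True
    return False

print(is_stopped([0.30, 0.04, 0.02, 0.01]))   # still for 3 samples -> True
print(is_stopped([0.30, 0.04, 0.20, 0.01]))   # never 3 in a row -> False
```

Only once this test passes would the device go on to acquire and output the surrounding situation information, avoiding spurious notifications while the user is walking.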