Patent classifications
H04N5/2226
USER INTERFACES FOR ALTERING VISUAL MEDIA
The present disclosure generally relates to user interfaces for altering visual media. In some embodiments, user interfaces are described for capturing visual media (e.g., via a synthetic depth-of-field effect), playing back visual media (e.g., via a synthetic depth-of-field effect), editing visual media (e.g., that has a synthetic depth-of-field effect applied), and/or managing media capture.
IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM
An image processing device including an image acquisition unit configured to acquire an image containing a subject via a lens unit; a distance information acquisition unit configured to acquire distance information indicating a distance to the subject; an auxiliary data generation unit configured to generate auxiliary data related to the distance information; a data stream generation unit configured to generate a data stream in which the image, the distance information, and the auxiliary data are superimposed; and an output unit configured to output the data stream to an external device.
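The abstract does not specify a wire format for the combined stream, but the idea of superimposing the image, distance information, and auxiliary data can be sketched with a simple length-prefixed framing; the layout below is purely an illustrative assumption:

```python
import json
import struct

def build_data_stream(image_bytes, distance_map_bytes, auxiliary):
    """Pack image, distance information, and auxiliary metadata into one
    stream using length-prefixed framing:
    [len][image][len][distance][len][aux JSON]."""
    aux_bytes = json.dumps(auxiliary).encode("utf-8")
    stream = b""
    for chunk in (image_bytes, distance_map_bytes, aux_bytes):
        stream += struct.pack(">I", len(chunk)) + chunk
    return stream
```

A receiver would read each 4-byte big-endian length and then that many payload bytes, recovering the three components in order.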
DEPTH ACQUISITION DEVICE AND DEPTH ACQUISITION METHOD
A depth acquisition device includes a memory and a processor. The processor performs: acquiring timing information indicating a timing at which a light source irradiates a subject with infrared light; acquiring, from the memory, an infrared light image generated by imaging a scene including the subject with the infrared light according to the timing indicated by the timing information; acquiring, from the memory, a visible light image generated by imaging a substantially same scene as the scene of the infrared light image, with visible light from a substantially same viewpoint as a viewpoint of imaging the infrared light image at a substantially same time as a time of imaging the infrared light image; detecting a flare region from the infrared light image; and estimating a depth of the flare region based on the infrared light image, the visible light image, and the flare region.
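A minimal sketch of the pipeline the abstract describes: detect a flare region in the infrared image and re-estimate depth there. The saturation threshold and the simple median fill-in are illustrative assumptions, not the patented estimation method (which also uses the visible light image to guide the estimate):

```python
import numpy as np

def estimate_depth_with_flare_correction(ir_image, visible_image, raw_depth,
                                         flare_threshold=0.95):
    """Detect flare as near-saturated IR pixels, then replace depth in the
    flare region with a value estimated from reliable measurements.
    `visible_image` would guide the estimate in the full method; it is
    unused in this simplified fill-in."""
    flare_mask = ir_image >= flare_threshold  # detect flare region
    corrected = raw_depth.copy()
    if flare_mask.any() and (~flare_mask).any():
        # Illustrative fill-in: median of the non-flare depth measurements.
        corrected[flare_mask] = np.median(raw_depth[~flare_mask])
    return flare_mask, corrected
```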
DATA PROCESSING SYSTEM, METHOD FOR DETERMINING COORDINATES, AND COMPUTER READABLE STORAGE MEDIUM
The embodiments of the disclosure provide a data processing system, a method for determining coordinates, and a computer readable storage medium. The method includes: receiving a plurality of positioning data, wherein the plurality of positioning data correspond to device positions of a plurality of positioning devices in a real world, and the positioning devices comprise a first positioning device and a second positioning device; in response to determining that the first positioning device is selected as a reference point of a coordinate system of a virtual world, determining a coordinate of the second positioning device in the coordinate system of the virtual world based on a relative position between the device positions of the first positioning device and the second positioning device.
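The core coordinate determination above reduces to a relative-position computation: once the first positioning device is chosen as the origin of the virtual-world coordinate system, another device's virtual coordinate is its real-world position relative to that reference. A hedged sketch, with the uniform `scale` factor as an illustrative assumption:

```python
import numpy as np

def virtual_coordinate(reference_pos, device_pos, scale=1.0):
    """Virtual-world coordinate of a device, given that `reference_pos`
    (the first positioning device) defines the origin of the virtual
    coordinate system."""
    reference_pos = np.asarray(reference_pos, dtype=float)
    device_pos = np.asarray(device_pos, dtype=float)
    return (device_pos - reference_pos) * scale
```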
IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM
An image processing device that can facilitate setting related to layer information based on distance information is provided.
The image processing device includes an image acquisition unit configured to acquire an image including a subject through a lens unit, a distance information acquisition unit configured to acquire distance information indicating a distance to the subject, a layer information generation unit configured to generate layer information on a layer for each distance based on the distance information, and a setting unit configured to set a reference for generating the layer information and to switch the display of settable setting values in accordance with lens information of the lens unit.
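Generating "a layer for each distance" amounts to bucketing each pixel's distance into a layer index using reference boundaries. A minimal sketch, assuming the setting unit supplies the boundaries (possibly constrained by the lens information); the boundary values below are invented for illustration:

```python
import numpy as np

def generate_layers(distance_map, boundaries):
    """Assign each pixel to a layer index based on its distance.
    `boundaries` is a sorted list of distance thresholds; a pixel whose
    distance falls between boundaries i-1 and i gets layer index i."""
    return np.digitize(distance_map, boundaries)
```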
Time-of-flight depth measurement using modulation frequency adjustment
In a method for time-of-flight (ToF) based measurement, a scene is illuminated using a ToF light source modulated at a first modulation frequency F_MOD^(1). While the light is modulated using F_MOD^(1), depths are measured to respective surface points within the scene, where the surface points are represented by a plurality of respective pixels. At least one statistical distribution parameter is computed for the depths. A second modulation frequency F_MOD^(2) higher than F_MOD^(1) is determined based on the at least one statistical distribution parameter. The depths are then re-measured using F_MOD^(2) to achieve a higher depth accuracy.
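The rationale is that a ToF camera's unambiguous range is d_max = c / (2 * F_MOD), so a higher modulation frequency gives better precision but a shorter range. One illustrative way (not necessarily the patented algorithm) to pick the second frequency is to take the highest frequency whose unambiguous range still covers the measured depth distribution:

```python
import statistics

C = 299_792_458.0  # speed of light, m/s

def second_modulation_frequency(depths_m, n_sigma=3.0):
    """Pick the highest second modulation frequency whose unambiguous
    range d_max = c / (2 * F_MOD) still covers the scene, using the mean
    and standard deviation of the first-pass depths as the statistical
    distribution parameters."""
    mean = statistics.mean(depths_m)
    stdev = statistics.pstdev(depths_m)
    required_range = mean + n_sigma * stdev
    return C / (2.0 * required_range)
```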
Methods and apparatus for using a controllable physical light filter as part of an image capture system and for processing captured images
Methods and apparatus for using a controllable filter, e.g., a liquid crystal panel, in front of a camera are described. The filter is controlled based on the luminosity of objects in a scene being captured by the camera to reduce or eliminate luminosity-related image defects such as flaring, blooming or ghosting. Multiple cameras and filters can be used to capture multiple images as part of a depth determination process where pixel values captured by cameras at different locations are matched to determine the depth, e.g., distance from the camera or camera system to objects in the environment. Pixel values are normalized in some embodiments based on the amount of filtering applied to a sensor region and sensor exposure time. The filtering allows for regional sensor exposure control at an individual camera even though the overall exposure time of the pixel sensors may be and often will be the same.
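The normalization step can be sketched as scaling each pixel by its region's filter transmission and exposure time so that values from differently filtered regions or cameras are comparable for depth matching. Linear sensor response is an assumption here, and the function name is illustrative:

```python
def normalize_pixel(raw_value, filter_transmission, exposure_time_s):
    """Scale a raw pixel value so that pixels behind a strongly
    attenuating filter region, or exposed for less time, become
    comparable to unfiltered, fully exposed pixels."""
    if filter_transmission <= 0 or exposure_time_s <= 0:
        raise ValueError("transmission and exposure must be positive")
    return raw_value / (filter_transmission * exposure_time_s)
```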
HEAD-MOUNTED ELECTRONIC VISION AID DEVICE AND AUTOMATIC IMAGE MAGNIFICATION METHOD THEREOF
Disclosed in the present invention is a head-mounted electronic vision aid device and an automatic image magnification method thereof. The head-mounted electronic vision aid device comprises a memory unit, a processing unit, an image zooming unit, and at least one ranging unit. The ranging unit is configured to obtain distance data between the device and a target object of interest to a user and/or three-dimensional profile data of the object, and to output the data to the processing unit; the memory unit stores a correspondence table between the distance data and the magnification of the image zooming unit; the processing unit confirms the target object of interest to the user, performs operations on the distance data and/or the three-dimensional profile data of the object, and outputs a magnification matching the distance data to the image zooming unit according to the correspondence table; and the image zooming unit automatically adjusts to the matching magnification. For visually impaired users, accurate, intuitive and rapid automatic magnification of the target objects of interest can be realized on demand. Compared with the prior art, repeated and tedious manual adjustment is avoided, and the user experience is greatly improved.
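The correspondence-table lookup described above can be sketched as a simple distance-to-magnification mapping; the breakpoints and magnification values below are invented for illustration, not taken from the patent:

```python
# Each entry: (maximum distance in metres, magnification to apply).
MAGNIFICATION_TABLE = [
    (0.5, 8.0),   # objects closer than 0.5 m -> 8x
    (1.0, 4.0),
    (2.0, 2.0),
]

def magnification_for_distance(distance_m, table=MAGNIFICATION_TABLE):
    """Return the magnification matching the measured distance, per the
    stored correspondence table."""
    for max_distance, magnification in table:
        if distance_m <= max_distance:
            return magnification
    return 1.0  # beyond the table: no magnification
```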
Under-display image sensor
A device includes a display and a first light source configured to emit light, wherein the first light source is proximate to the display. The device further includes a first camera disposed behind the display, wherein the first camera is configured to detect reflections of the light emitted by the first light source. The first camera is further configured to capture a first image based at least in part on the reflections, wherein the reflections are partially occluded by the display. The device also includes a second camera proximate to the display, wherein the second camera is configured to capture a second image. In addition, the device includes a depth map generator configured to generate depth information about one or more objects in a field-of-view (FOV) of the first and second cameras based at least in part on the first and second images.
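The abstract does not name the depth algorithm, but with two cameras at a known baseline the depth map generator could apply the classic stereo relation depth = f * B / d, shown here as a hedged per-pixel sketch:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Classic two-camera stereo relation: depth = focal length (pixels)
    times baseline (metres) divided by disparity (pixels). Assumed for
    illustration; the patent does not specify this method."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```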
Composite imaging systems using a focal plane array with in-pixel analog storage elements
Various embodiments of a 3D+imaging system include a focal plane array with in-pixel analog storage elements. In embodiments, an analog pixel circuit is disclosed for use with an array of photodetectors for a sub-frame composite imaging system. In embodiments, a composite imaging system is capable of determining per-pixel depth, white point and black point for a sensor and/or a scene that is stationary or in motion. Examples of applications for the 3D+imaging system include advanced imaging for vehicles, as well as for industrial and smart phone imaging. An extended dynamic range imaging technique is used to reproduce a greater dynamic range of luminosity.