Patent classifications
H04N5/2226
Material generation apparatus, image generation apparatus, and image processing apparatus
A material generation apparatus includes an acquisition unit configured to acquire a plurality of camera images, and a material data generation unit configured to generate, based on a camera image selected from among the camera images, at least one of a foreground image and a background image as material data to be used for generation of an image corresponding to a designated viewpoint.
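The abstract does not specify how the foreground and background images are separated; as a rough, conventional illustration only, a background-subtraction split (function names and the threshold below are hypothetical, not from the patent) might look like:

```python
import numpy as np

def split_foreground(frame, background, threshold=30):
    """Toy background-subtraction split: pixels that differ from a
    reference background image in any channel become the foreground
    mask; everything else is treated as background."""
    diff = np.abs(frame.astype(np.int32) - background.astype(np.int32))
    mask = diff.max(axis=-1) > threshold  # any-channel difference
    foreground = np.where(mask[..., None], frame, 0)
    return foreground, mask
```

In a multi-camera system like the one described, such a split would run per selected camera image before the viewpoint-dependent rendering step.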
Adaptive-Flash Photography, Videography, and/or Flashlight Using Camera, Scene, or User Input Parameters
A light source module includes an array of illumination elements and an optional projecting lens. The light source module is configured to receive or generate a control signal for adjusting different ones of the illumination elements to control a light field emitted from the light source module. In some embodiments, the light source module is also configured to adjust the projecting lens responsive to objects in an illuminated scene and a field of view of an imaging device. A controller for a light source module may determine a light field pattern based on various parameters including a field of view of an imaging device, an illumination sensitivity model of the imaging device, depth, ambient illumination and reflectivity of objects, configured illumination priorities including ambient preservation, background illumination and direct/indirect lighting balance, and so forth.
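As a loose illustration of just one parameter the controller might weigh (object depth), the toy policy below scales each illumination element's drive power with squared distance to counter inverse-square falloff, clamped to a maximum. The function, units, and cap are invented for this sketch and do not come from the patent:

```python
def element_intensities(depths_m, target_level, max_power=1.0):
    """Toy light-field policy: drive each illumination element with
    power proportional to the squared distance of the object it
    illuminates, compensating inverse-square falloff, clamped to the
    element's maximum power."""
    return [min(max_power, target_level * d * d) for d in depths_m]
```

For objects at 1 m and 2 m with a target level of 0.1, this yields drive powers of 0.1 and 0.4, so the farther object receives four times the power.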
Tracking objects using sensor rotation
An example apparatus for tracking objects includes a controller to receive a depth map, a focus distance, and an image frame of an object to be tracked. The controller is to detect the object to be tracked in the image frame and generate an object position for the object in the image frame. The controller is to calculate a deflection angle for the object based on the depth map, the focus distance, and the object position. The controller is to further rotate an imaging sensor based on the deflection angle.
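The abstract does not give the deflection-angle computation; a minimal pinhole-camera sketch that uses only the object's pixel offset and the focal length (the patent's use of the depth map and focus distance is omitted here, and all names are assumptions) might be:

```python
import math

def deflection_angle(object_px, frame_center_px, focal_length_px):
    """Angle (radians) the imaging sensor must rotate so the tracked
    object moves to the frame centre, under the pinhole camera model:
    angle = atan(pixel_offset / focal_length)."""
    offset = object_px - frame_center_px  # horizontal offset in pixels
    return math.atan2(offset, focal_length_px)

# Object detected 200 px right of a 640 px centre, focal length 1000 px:
angle = deflection_angle(840, 640, 1000.0)
```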
Method and apparatus of depth detection, and computer-readable storage medium
Spherical or hemispherical non-visible light depth detection includes projecting a hemispherical non-visible light static structured light pattern, in response to projecting the hemispherical non-visible light static structured light pattern, detecting non-visible light, determining three-dimensional depth information based on the detected non-visible light and the projected hemispherical non-visible light static structured light pattern, and outputting the three-dimensional depth information.
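Structured-light systems of this kind typically recover depth by triangulating between the projector and the detector; a textbook sketch of that step (parameter names and values assumed for illustration, not taken from the patent):

```python
def depth_from_disparity(disparity_px, baseline_m, focal_length_px):
    """Textbook structured-light triangulation: depth is inversely
    proportional to the disparity between where a pattern feature is
    projected and where the camera detects it."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return baseline_m * focal_length_px / disparity_px

# 5 cm projector-camera baseline, 800 px focal length, 20 px disparity:
z = depth_from_disparity(20.0, 0.05, 800.0)  # about 2 m
```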
IMAGE SENSOR MODULE, IMAGE PROCESSING SYSTEM, AND OPERATING METHOD OF IMAGE SENSOR MODULE
An image sensor module includes an image sensor configured to generate image data and memory including at least a memory bank storing the image data and a processor-in-memory (PIM) circuit, the PIM circuit including a plurality of processing elements. The memory is configured to read the image data from the memory bank; generate optical flow data and pattern density data using the plurality of processing elements, the optical flow data indicating time-sequential motion of at least one object included in the image data, and the pattern density data indicating a density of a pattern of the image data; and output the image data, the optical flow data, and the pattern density data.
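The abstract leaves "density of a pattern" undefined; one plausible reading is local edge density, sketched below with invented names and an invented threshold (this is not the PIM circuit's actual computation):

```python
import numpy as np

def pattern_density(image, threshold=10):
    """One plausible 'pattern density' measure: the fraction of pixels
    whose horizontal or vertical intensity gradient exceeds a
    threshold, i.e. the local edge density of the image."""
    img = image.astype(np.int32)
    gx = np.abs(np.diff(img, axis=1))  # horizontal gradient, (H, W-1)
    gy = np.abs(np.diff(img, axis=0))  # vertical gradient,   (H-1, W)
    edges = (gx[:-1, :] > threshold) | (gy[:, :-1] > threshold)
    return edges.mean()
```

A flat image scores 0.0 and a one-pixel checkerboard scores 1.0, so the measure rises with the busyness of the captured pattern.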
CAMERA MODULE
Disclosed according to an embodiment of the present invention is a camera module comprising: a light source; an optical unit that converts light output by the light source into a planar or multi-point form and outputs it; and an image sensor. The light source is turned on and off periodically; when the light source is turned on, the optical unit moves to a first position, and when the light source is turned off, the optical unit returns to its initial position.

Depth sensing systems and methods
A depth sensing system includes a sensor having first and second sensor pixels to receive light from a surface. The system also includes a filter to allow transmission of full spectrum light to the first sensor pixel and visible light to the second sensor pixel while preventing transmission of infrared light to the second sensor pixel. The system further includes a processor to analyze the full spectrum light and the visible light to determine a depth of the surface. The filter is disposed between the sensor and the surface.
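Because the filter gives the first pixel full-spectrum light and the second pixel visible-only light, the infrared component can be approximated by per-pixel subtraction; a minimal sketch of that idea (an assumption for illustration, not the patent's actual analysis):

```python
import numpy as np

def estimate_ir(full_spectrum, visible):
    """Per-pixel IR estimate: the first sensor pixel sees full-spectrum
    light and the second sees visible-only light, so their difference
    approximates the infrared component (clamped at zero)."""
    diff = full_spectrum.astype(np.int32) - visible.astype(np.int32)
    return np.clip(diff, 0, None)
```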
Depth acquisition device and depth acquisition method
A depth acquisition device includes a memory and a processor. The processor performs: acquiring timing information indicating a timing at which a light source irradiates a subject with infrared light; acquiring, from the memory, an infrared light image generated by imaging a scene including the subject with the infrared light according to the timing indicated by the timing information; acquiring, from the memory, a visible light image generated by imaging a substantially same scene as the scene of the infrared light image, with visible light from a substantially same viewpoint as a viewpoint of imaging the infrared light image at a substantially same time as a time of imaging the infrared light image; detecting a flare region from the infrared light image; and estimating a depth of the flare region based on the infrared light image, the visible light image, and the flare region.
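The abstract does not say how the flare-region depth is estimated from the visible light image; the crude stand-in below borrows, for each flare pixel, the depth of the non-flare pixel whose visible intensity is most similar. Every name and the strategy itself are assumptions for illustration only:

```python
import numpy as np

def fill_flare_depth(depth, flare_mask, visible):
    """Toy visible-guided fill: for each pixel in the flare region,
    borrow the depth of the non-flare pixel whose visible-light
    intensity is closest (a crude stand-in for the patent's
    visible-image-based estimation)."""
    out = depth.astype(float).copy()
    valid = ~flare_mask
    valid_vis = visible[valid].astype(float)
    valid_depth = depth[valid].astype(float)
    for idx in zip(*np.nonzero(flare_mask)):
        j = np.argmin(np.abs(valid_vis - float(visible[idx])))
        out[idx] = valid_depth[j]
    return out
```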
Depth prediction from dual pixel images
Apparatus and methods related to using machine learning to determine depth maps for dual pixel images of objects are provided. A computing device can receive a dual pixel image of at least a foreground object. The dual pixel image can include a plurality of dual pixels. A dual pixel of the plurality of dual pixels can include a left-side pixel and a right-side pixel that both represent light incident on a single dual pixel element used to capture the dual pixel image. The computing device can be used to train a machine learning system to determine a depth map associated with the dual pixel image. The computing device can provide the trained machine learning system.
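As a classical, non-learned baseline for the disparity signal that the machine learning system exploits, a brute-force one-dimensional matching sketch between the left-side and right-side dual-pixel views (names and parameters assumed, not from the patent):

```python
import numpy as np

def dual_pixel_disparity(left, right, max_shift=4):
    """Brute-force 1-D matching: shift the right view horizontally and
    pick the shift minimising mean absolute difference. This is a
    classical baseline for the learned depth the patent describes."""
    best_shift, best_cost = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        shifted = np.roll(right, s, axis=1)
        cost = np.mean(np.abs(left.astype(float) - shifted.astype(float)))
        if cost < best_cost:
            best_shift, best_cost = s, cost
    return best_shift
```

In real dual-pixel data the left/right shift is tiny and defocus-dependent, which is why the patent trains a model instead of matching directly.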
CHROMA KEY CONTENT MANAGEMENT SYSTEMS AND METHODS
A system for properly displaying chroma key content is presented. The system obtains a digital representation of a 3D environment, for example a digital photo, and gathers data from that digital representation. The system renders the digital representation in an environmental model and displays it on an output device. Depending on the context, content anchors of the environmental model are selected to be altered with suitable chroma key content. Rendering takes into account the position and orientation of the chroma key content relative to the content anchor and relative to the point of view from which the environmental model is displayed, so that the chroma key content is displayed accurately and realistically.
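A minimal sketch of the keying step itself, i.e. replacing pixels near a key colour with the corresponding content pixels (the Euclidean distance metric and tolerance below are assumptions for illustration, not the patent's method):

```python
import numpy as np

def chroma_key(frame, content, key=(0, 255, 0), tol=60):
    """Simple chroma keying: pixels within Euclidean distance `tol` of
    the key colour are replaced by the corresponding content pixels;
    all other pixels are left untouched."""
    dist = np.linalg.norm(frame.astype(float) - np.array(key, float), axis=-1)
    mask = dist < tol
    out = frame.copy()
    out[mask] = content[mask]
    return out
```

The system described above goes further: it warps the content by the anchor's pose and the viewer's point of view before a composite like this one is formed.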