Patent classifications
H04N23/56
REARVIEW ASSEMBLY INCORPORATING HIGH-TRANSMITTANCE NIR FILM
A rearview assembly includes an electrochromic element. The electrochromic element includes a first substrate including a first surface and a second surface. The electrochromic element further includes a second substrate comprising a third surface and a fourth surface. The first substrate and the second substrate form a cavity. The electrochromic element includes an electrochromic medium contained in the cavity. The rearview assembly includes an image sensor directed toward the fourth surface and configured to capture near-infrared light reflected from an object and projected through the electrochromic element. The rearview assembly also includes a transflective film disposed adjacent to the fourth surface; the transflective film has a near-infrared light transmission level and a visible light reflectance level.
SYSTEMS AND METHODS FOR DIFFRACTION LINE IMAGING
A novel class of imaging systems that combines diffractive optics with 1D line sensing is disclosed. When light passes through a diffraction grating or prism, it disperses as a function of wavelength. This property is exploited to recover 2D and 3D positions from line images. A detailed image formation model and a learning-based algorithm for 2D position estimation are disclosed. The disclosure includes several extensions of the imaging system to improve the accuracy of the 2D position estimates and to expand the effective field-of-view. The invention is useful for fast passive imaging of sparse light sources (such as streetlamps, headlights at night, and LED-based motion capture markers), and for structured light 3D scanning with line illumination and line sensing.
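The wavelength-dependent dispersion the abstract refers to can be illustrated with the first-order grating equation, sin θ_out = sin θ_in + mλ/d. A minimal sketch of how a measured wavelength on the line sensor constrains a source's incidence angle follows; the function names and numeric values are illustrative assumptions, not the patent's actual image formation model:

```python
import math

def diffraction_angle(theta_in_deg, wavelength_nm, pitch_nm, order=1):
    """Grating equation: sin(theta_out) = sin(theta_in) + m * lambda / d.

    Returns the diffracted angle in degrees, or None if the requested
    order is evanescent (|sin| > 1, no propagating diffracted beam).
    """
    s = math.sin(math.radians(theta_in_deg)) + order * wavelength_nm / pitch_nm
    if abs(s) > 1.0:
        return None
    return math.degrees(math.asin(s))

def incidence_from_sensor(theta_out_deg, wavelength_nm, pitch_nm, order=1):
    """Invert the grating equation: given the wavelength observed on the
    1D line sensor at a fixed diffraction angle, recover the incidence
    angle, i.e. one coordinate of the source direction."""
    s = math.sin(math.radians(theta_out_deg)) - order * wavelength_nm / pitch_nm
    return math.degrees(math.asin(s))

# Round trip: a source at 10 degrees, 550 nm light, 600 lines/mm grating
# (pitch ~1666 nm) disperses to some output angle; inverting the equation
# from the observed wavelength recovers the 10-degree incidence angle.
out = diffraction_angle(10.0, 550.0, 1666.0)
recovered = incidence_from_sensor(out, 550.0, 1666.0)
```

Because different wavelengths from the same source land at different output angles, a single line of pixels effectively samples a 2D family of (angle, wavelength) constraints, which is what makes 2D position recovery from 1D measurements possible.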
Systems and methods for generating virtual item displays
Systems, methods, and devices of the various embodiments enable virtual displays of an item, such as a vehicle, to be generated. In an embodiment, a plurality of images of an item may be captured and annotations may be provided to one or more of the images. In an embodiment, the plurality of images may be displayed, and the transition between each of the plurality of images may be an animated process. In an embodiment, an item imaging system may comprise a structure including one or more cameras and one or more lights, and the item imaging system may be configured to automate at least a portion of the process for capturing the plurality of images of an item.
Image capturing and display apparatus and wearable device
An image capturing and display apparatus comprises a plurality of photoelectric conversion elements for converting incident light from the outside of the image capturing and display apparatus to electrical charge signals, and a plurality of light-emitting elements for emitting light of an intensity corresponding to the electrical charge signals acquired by the plurality of photoelectric conversion elements. A pixel region is defined as a region in which the plurality of photoelectric conversion elements are arranged in an array. Signal paths for transmitting signals from the plurality of photoelectric conversion elements to the plurality of light-emitting elements lie within the pixel region.
Visual, depth and micro-vibration data extraction using a unified imaging device
A unified imaging device detects and classifies objects in a scene, including their motion and micro-vibrations. The device comprises a light source adapted to project onto the scene a predefined structured light pattern constructed of a plurality of diffused light elements, and an imaging sensor that captures a plurality of images of the scene. One or more objects present in the scene are classified by visually analyzing the images. Depth data for the objects is extracted by analyzing the position of the diffused light elements reflected from them, and micro-vibrations are identified by analyzing changes in the speckle pattern of the reflected diffused light elements across at least some consecutive images. The classification, the depth data, and the micro-vibration data are all derived from images captured by the same imaging sensor, and are hence inherently registered in a common coordinate system.
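The micro-vibration cue rests on speckle decorrelation: when a surface is static, the speckle pattern of a reflected diffused element is nearly identical between consecutive frames, while sub-pixel surface motion scrambles it. A minimal sketch of that comparison, using normalized cross-correlation between the same image patch in two frames (the function name and threshold interpretation are illustrative assumptions, not the patent's actual algorithm):

```python
import numpy as np

def speckle_correlation(patch_a, patch_b):
    """Normalized cross-correlation between the same speckle patch in two
    consecutive frames. Values near 1 indicate a static surface; a drop in
    correlation signals that the speckle pattern changed, i.e. a candidate
    micro-vibration of the reflecting surface."""
    a = patch_a.astype(np.float64) - patch_a.mean()
    b = patch_b.astype(np.float64) - patch_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    if denom == 0:
        return 1.0  # both patches are flat: treat as unchanged
    return float((a * b).sum() / denom)

# A patch compared with itself correlates perfectly; an unrelated speckle
# realization of the same size decorrelates toward zero.
rng = np.random.default_rng(0)
static = rng.random((16, 16))
vibrating = rng.random((16, 16))
same = speckle_correlation(static, static)
changed = speckle_correlation(static, vibrating)
```

In a full pipeline this score would be computed per diffused light element and tracked over consecutive frames, with sustained dips flagged as micro-vibration events.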
Vehicle detecting device and vehicle lamp system
A vehicle detecting device includes a region setting unit that sets a plurality of regions of interest having different ranges on image data acquired from an image capturing device that captures an image of the space in front of a host vehicle, and a vehicle determining unit that determines, for each region of interest, the presence of a front vehicle based on a luminous point present in that region, executing the determination at a different frequency for each of the regions of interest.
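The two units described above can be sketched as a luminous-point check per region of interest, with each region evaluated on its own schedule. This is a hypothetical illustration of the idea (the ROI layout, threshold, and modulo-based scheduling are assumptions, not the patent's claimed implementation):

```python
import numpy as np

def luminous_points_in_roi(frame, roi, threshold=200):
    """Count bright pixels ('luminous points') inside a region of interest.

    roi is (x, y, w, h) in pixel coordinates of the grayscale frame.
    """
    x, y, w, h = roi
    patch = frame[y:y + h, x:x + w]
    return int(np.count_nonzero(patch >= threshold))

def detect_front_vehicle(frame, rois, frame_idx, periods, threshold=200):
    """Evaluate each ROI at its own frequency: ROI i is checked only on
    frames where frame_idx % periods[i] == 0, so near-range regions can be
    scanned more often than far-range ones. Returns {roi_index: bool} for
    the ROIs that were actually evaluated this frame."""
    results = {}
    for i, roi in enumerate(rois):
        if frame_idx % periods[i] == 0:
            results[i] = luminous_points_in_roi(frame, roi, threshold) > 0
    return results

# One bright point (e.g. a taillight) inside the first ROI only; on an
# odd frame index, only the every-frame ROI is evaluated.
frame = np.zeros((10, 10), dtype=np.uint8)
frame[2, 3] = 255
rois = [(0, 0, 5, 5), (5, 5, 5, 5)]
hits = detect_front_vehicle(frame, rois, frame_idx=1, periods=[1, 2])
```

Staggering the check frequency this way spends computation where vehicles appear and move fastest while still covering the full set of ranges over time.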
Multi-channel depth estimation using census transforms
A depth estimation system is described capable of determining depth information using two images from two cameras. A first camera captures a first image and a second camera captures a second image, both images including a plurality of light channels. A scan direction is selected from a plurality of scan directions. For the selected scan direction, along each of a plurality of scanlines, the system compares pixels from the first image to pixels from the second image. The comparison is based on calculating a census transform for each pixel in the first image and a census transform for each pixel in the second image. This comparison is used to determine a stereo correspondence between the pixels in the first image and the pixels in the second image. The system generates a depth map based on the stereo correspondence.
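The census transform and scanline comparison described above can be sketched concretely: each pixel is encoded as a bit string of brightness comparisons against its neighborhood, and correspondence along a scanline is found by minimizing the Hamming distance between codes. This is a simplified single-channel, winner-take-all sketch under assumed window and disparity parameters, not the system's actual multi-channel pipeline:

```python
import numpy as np

def census_transform(img, window=5):
    """Census transform of a grayscale image: each pixel becomes a bit
    string recording, for every neighbor in a window x window patch,
    whether that neighbor is darker than the center pixel."""
    h, w = img.shape
    r = window // 2
    padded = np.pad(img, r, mode='edge')
    codes = np.zeros((h, w), dtype=np.uint64)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            neighbor = padded[r + dy:r + dy + h, r + dx:r + dx + w]
            codes = (codes << np.uint64(1)) | (neighbor < img).astype(np.uint64)
    return codes

def hamming_cost(c1, c2):
    """Matching cost between two census codes: number of differing bits."""
    return bin(int(c1) ^ int(c2)).count('1')

def match_scanline(left_codes, right_codes, max_disp=8):
    """Winner-take-all stereo correspondence: for each left pixel, pick the
    disparity whose right-image census code has the lowest Hamming cost."""
    h, w = left_codes.shape
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(h):
        for x in range(w):
            best_cost, best_d = None, 0
            for d in range(min(max_disp, x) + 1):
                cost = hamming_cost(left_codes[y, x], right_codes[y, x - d])
                if best_cost is None or cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp

# Synthetic stereo pair: the right image is the left image shifted by a
# uniform disparity of 2 pixels, which the matcher should recover.
rng = np.random.default_rng(1)
left = rng.integers(0, 256, size=(20, 40), dtype=np.uint8)
right = np.roll(left, -2, axis=1)  # right[y, x] = left[y, x + 2]
disp = match_scanline(census_transform(left), census_transform(right))
```

Because the census transform compares only relative brightness, the Hamming cost is robust to per-camera gain and exposure differences, which is why it is a common choice for stereo correspondence; a full system would aggregate such costs across channels and scan directions before producing the depth map.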