Patent classifications
H04N13/204
Intraoral 3D scanner employing multiple miniature cameras and multiple miniature pattern projectors
A method for generating a 3D image includes driving one or more structured light projectors to project a pattern of light onto an intraoral 3D surface, and driving one or more cameras, each comprising an array of pixels, to capture images that each include at least a portion of the projected pattern. A processor compares the series of images captured by each camera and determines which portions of the projected pattern can be tracked across the images. The processor then constructs a three-dimensional model of the intraoral surface based at least in part on this comparison. Other embodiments are also described.
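The core of the method is tracking portions of the projected pattern across a series of frames. As a rough illustration only, and not the patented algorithm, the Python sketch below greedily associates detected pattern spots between consecutive images by nearest neighbour and keeps the spots that remain trackable across the whole series; the function names and the max_move threshold are invented for the example.

```python
import numpy as np

def track_pattern_points(frames_points, max_move=5.0):
    """Greedy nearest-neighbour tracking of projected-pattern spots.

    frames_points: one (N_i, 2) array of detected spot centroids per
    captured image. A spot whose nearest candidate in the next frame is
    farther than max_move pixels is considered lost (untrackable)."""
    tracks = [[p] for p in frames_points[0]]
    for pts in frames_points[1:]:
        remaining = list(range(len(pts)))        # spots not yet claimed
        for tr in tracks:
            if tr[-1] is None or not remaining:  # already lost, or no candidates
                tr.append(None)
                continue
            dists = np.linalg.norm(pts[remaining] - tr[-1], axis=1)
            j = int(np.argmin(dists))
            tr.append(pts[remaining.pop(j)] if dists[j] <= max_move else None)
    # keep only the pattern portions trackable across every image
    return [tr for tr in tracks if all(p is not None for p in tr)]

# Three images of four spots, each spot drifting by one pixel per frame:
frames = [np.array([[10, 10], [50, 12], [30, 40], [70, 70]], float) + i
          for i in range(3)]
print(len(track_pattern_points(frames)))  # -> 4 (all spots trackable)
```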
Multi-channel depth estimation using census transforms
A depth estimation system is described that determines depth information using two images from two cameras. A first camera captures a first image and a second camera captures a second image, both images including a plurality of light channels. A scan direction is selected from a plurality of scan directions. For the selected scan direction, along each of a plurality of scanlines, the system compares pixels from the first image to pixels from the second image by calculating a census transform for each pixel in both images. This comparison is used to determine a stereo correspondence between the pixels in the first image and the pixels in the second image, and the system generates a depth map based on that stereo correspondence.
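The census transform at the heart of this comparison is a standard stereo-matching tool: each pixel is encoded as a bit string recording how it compares to its window neighbours, and correspondence is found by minimising the Hamming distance between codes along a scanline. A minimal Python sketch follows; summing costs over channels is one plausible reading of the multi-channel comparison, and max_disp, win, and the fixed left-to-right scan direction are illustrative choices, not the patented design.

```python
import numpy as np

def census_transform(channel, win=5):
    """Census transform of one channel: each pixel becomes a bit string
    encoding, for every neighbour in a win x win window, whether that
    neighbour is darker than the centre pixel."""
    r, (h, w) = win // 2, channel.shape
    padded = np.pad(channel, r, mode='edge')
    code = np.zeros((h, w), dtype=np.uint64)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if (dy, dx) == (0, 0):
                continue
            neigh = padded[r + dy:r + dy + h, r + dx:r + dx + w]
            code = (code << np.uint64(1)) | (neigh < channel).astype(np.uint64)
    return code

def hamming(a, b):
    """Hamming distance between two census codes."""
    return bin(int(a) ^ int(b)).count('1')

def scanline_depth(left, right, max_disp=16, win=5):
    """Winner-take-all correspondence along horizontal scanlines for a
    left-to-right scan direction; census costs are summed over all
    light channels of the (H, W, C) input images."""
    C = left.shape[-1]
    cl = np.stack([census_transform(left[..., c], win) for c in range(C)], -1)
    cr = np.stack([census_transform(right[..., c], win) for c in range(C)], -1)
    h, w = cl.shape[:2]
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(h):                       # one scanline at a time
        for x in range(w):
            costs = [sum(hamming(cl[y, x, c], cr[y, x - d, c]) for c in range(C))
                     for d in range(min(max_disp, x) + 1)]
            disp[y, x] = int(np.argmin(costs))
    return disp                              # depth ∝ baseline * focal / disparity

rng = np.random.default_rng(0)
left = rng.random((20, 40, 3))               # three light channels
right = np.roll(left, -4, axis=1)            # synthetic 4-pixel disparity
print(scanline_depth(left, right)[10, 20])   # -> 4 (away from the borders)
```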
METHOD AND APPARATUS FOR OPERATING A COMPANION PROCESSING UNIT
Embodiments of an apparatus and method for operating a companion processing unit are described. In an example, an apparatus includes an application processor, a memory, a companion processing unit, and an image sensor. The application processor is configured to turn on the image sensor, perform scene detection on an image received from the image sensor, and determine whether the scene category of the image is supported by the companion processing unit. If the scene category is supported, the application processor controls the companion processing unit to start a boot-up sequence corresponding to that scene category. The boot-up sequence brings the companion processing unit into a mission mode in which it is ready to receive and process image data from the image sensor and send the processed image data to the application processor.
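Here is a minimal sketch of that control flow in Python rather than firmware; the scene categories, class names, and the contents of the boot-up sequence are placeholders, since the abstract does not specify them.

```python
from enum import Enum, auto

SUPPORTED_SCENES = {"portrait", "night", "hdr"}   # illustrative categories

class Mode(Enum):
    OFF = auto()
    BOOTING = auto()
    MISSION = auto()      # ready to receive and process image data

class CompanionUnit:
    mode = Mode.OFF
    def boot_for(self, scene):
        self.mode = Mode.BOOTING
        self.pipeline = scene          # e.g. load scene-specific firmware
        self.mode = Mode.MISSION
    def process(self, image):
        assert self.mode is Mode.MISSION
        return f"enhanced({image}, {self.pipeline})"

def handle_frame(image, detect_scene, companion):
    """Application-processor side: detect the scene, and only boot the
    companion unit when it supports that scene category."""
    scene = detect_scene(image)
    if scene in SUPPORTED_SCENES:
        companion.boot_for(scene)      # scene-specific boot-up sequence
        return companion.process(image)
    return image                       # fall back: AP handles the frame itself

print(handle_frame("raw0", lambda img: "night", CompanionUnit()))
```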
Methods and apparatus for initializing object dimensioning systems
Methods, systems, and apparatus are disclosed for initializing a dimensioning system based on the location of a vehicle carrying an object to be dimensioned. An example method includes: receiving, from a location system, location data indicating the location of a vehicle carrying an object; responsive to the location data indicating that the vehicle is approaching an imaging area, initializing, using a logic circuit, a sensor so that it is primed for capturing data representative of the object; receiving, from a motion detector carried by the vehicle, motion data indicating the speed of the vehicle; and triggering, using the logic circuit, the sensor to capture data representative of the object at a sample rate based on the speed of the vehicle.
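To make the two trigger paths concrete (prime the sensor on approach, then capture at a speed-based rate), here is a hedged Python sketch; the proportional speed-to-rate policy and all names are assumptions, as the abstract only states that the sample rate is based on the vehicle's speed.

```python
class Sensor:                # stand-in for the real capture hardware
    def initialize(self):
        print("sensor primed")
    def trigger(self, rate_hz):
        print(f"capturing at {rate_hz:.1f} Hz")

def sample_rate_hz(speed_mps, samples_per_meter=10.0):
    """Scale the capture rate with vehicle speed so samples stay evenly
    spaced along the direction of travel (an assumed policy)."""
    return speed_mps * samples_per_meter

class DimensioningController:
    def __init__(self, sensor):
        self.sensor, self.primed = sensor, False
    def on_location(self, approaching_imaging_area):
        if approaching_imaging_area and not self.primed:
            self.sensor.initialize()   # prime before the vehicle arrives
            self.primed = True
    def on_motion(self, speed_mps):
        if self.primed:
            self.sensor.trigger(rate_hz=sample_rate_hz(speed_mps))

ctrl = DimensioningController(Sensor())
ctrl.on_location(approaching_imaging_area=True)   # location-system event
ctrl.on_motion(speed_mps=1.5)                     # motion-detector event
```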
METHODS FOR DISPLAYING USER INTERFACE ELEMENTS RELATIVE TO MEDIA CONTENT
In some embodiments, a computer system displays a caption for a media item at different depths depending on the depth of the portion of the media item over which the caption is displayed. In some embodiments, a computer system displays a user interface element containing information associated with the media item at different locations relative to the media item depending on the user's attention. In some embodiments, a computer system displays such a user interface element with different visual appearances depending on the visual characteristics of the portion of the media item over which the element is displayed.
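These three behaviours (depth-matched captions, attention-dependent placement, appearance adapted to the underlying region) can be summarised in a few lines of illustrative Python; the depth convention, the placement choices, and the luminance threshold are all assumptions, not the claimed method.

```python
def caption_depth(media_depth_under_caption, offset=0.05):
    """Place the caption slightly in front of whatever part of the media
    it overlays (smaller value = closer to the viewer, an assumed
    convention), so its depth follows the content underneath."""
    return media_depth_under_caption - offset

def info_element_location(user_looking_at_media):
    # move the info panel out of the way while attention is on the media
    return "docked_below_media" if user_looking_at_media else "over_media"

def element_appearance(mean_luminance_under_element):
    # pick a text style that stays legible against the underlying region
    return {"text": "black"} if mean_luminance_under_element > 0.5 else {"text": "white"}

print(caption_depth(1.20), info_element_location(True), element_appearance(0.8))
```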
Method, system, and medium having stored thereon instructions that cause a processor to execute a method for obtaining image information of an organism comprising a set of optical data
The present disclosure relates to methods and systems for obtaining image information of an organism comprising a set of optical data; calculating a growth index based on the set of optical data; and calculating an anticipated harvest time based on the growth index. The image information includes at least one of: (a) visible image data and non-visible image data obtained from the same image sensor, and (b) a set of image data captured by at least two image capture devices from at least two positions.
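As one concrete reading of the growth-index calculation, the sketch below uses an NDVI-style index over a visible channel and a non-visible (near-infrared) channel, and a constant-rate extrapolation to a harvest threshold; both modelling choices are assumptions for illustration, since the abstract does not fix them.

```python
import numpy as np

def growth_index(visible_red, near_infrared):
    """NDVI-style index from a visible channel and a non-visible (NIR)
    channel of the same image sensor; higher means more vigorous growth."""
    red = np.asarray(visible_red, dtype=float)
    nir = np.asarray(near_infrared, dtype=float)
    return float(np.mean((nir - red) / (nir + red + 1e-9)))

def anticipated_harvest_days(index, harvest_index=0.8, growth_per_day=0.02):
    """Days until the growth index reaches a harvest threshold, assuming
    the index rises at a constant rate (a deliberately crude model)."""
    return max(0.0, (harvest_index - index) / growth_per_day)

red = np.array([[0.20, 0.30], [0.25, 0.20]])
nir = np.array([[0.60, 0.70], [0.65, 0.60]])
gi = growth_index(red, nir)
print(f"growth index {gi:.2f}, harvest in ~{anticipated_harvest_days(gi):.0f} days")
```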