Patent classifications
H04N13/271
PLANT IDENTIFICATION USING HETEROGENOUS MULTI-SPECTRAL STEREO IMAGING
A farming machine identifies and treats a plant as the farming machine travels through a field. The farming machine includes a pair of image sensors for capturing images of a plant. The image sensors are different, and their output images are used to generate a depth map to improve the plant identification process. A control system identifies a plant using the depth map. The control system captures images, identifies a plant, and actuates a treatment mechanism in real time.
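The depth-from-stereo step that this abstract relies on can be sketched with the standard triangulation relation Z = f·B/d for a rectified pair. The function name, the focal length in pixels, and the baseline below are illustrative assumptions, not details from the patent.

```python
# Minimal sketch of the stereo depth-map step described in the abstract,
# assuming a rectified image pair with known focal length (pixels) and
# baseline (meters).  Names and values are illustrative, not the patent's.

def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Convert a stereo disparity (pixels) to metric depth: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# A plant feature matched 40 px apart by cameras 0.1 m apart, at a focal
# length of 800 px, lies 2 m from the rig.
depth = disparity_to_depth(40.0, 800.0, 0.1)
```

A per-pixel map of such depths is what the control system would consult to separate plants from soil and background before actuating the treatment mechanism.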
IMAGING APPARATUS, IMAGE PROCESSING APPARATUS, IMAGING SYSTEM, IMAGING METHOD, IMAGE PROCESSING METHOD, AND RECORDING MEDIUM
An imaging apparatus includes an imaging device, a first imaging optical system and a second imaging optical system that form respective input images from mutually different viewpoints onto the imaging device, and a first modulation mask and a second modulation mask that modulate the input images formed by the first imaging optical system and the second imaging optical system. The imaging device captures a superposed image composed of the two input images that have been formed by the first imaging optical system and the second imaging optical system, modulated by the first modulation mask and the second modulation mask, and optically superposed on each other, and the first modulation mask and the second modulation mask have mutually different optical transmittance distribution characteristics.
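The image-formation model in this abstract, two views attenuated by masks with different transmittance distributions and optically summed on one sensor, can be written as a per-pixel weighted sum. The mask patterns and array sizes below are illustrative assumptions.

```python
import numpy as np

# Sketch of the superposed-capture model described in the abstract: each
# viewpoint's image is attenuated by its own modulation mask, and the
# sensor records the optical sum of the two.  Mask patterns are invented
# for illustration; the patent only requires that they differ.

rng = np.random.default_rng(0)
view1 = rng.random((4, 4))              # input image from viewpoint 1
view2 = rng.random((4, 4))              # input image from viewpoint 2

mask1 = np.tile([[1.0, 0.2]], (4, 2))   # two masks with mutually
mask2 = np.tile([[0.2, 1.0]], (4, 2))   # different transmittances

superposed = mask1 * view1 + mask2 * view2   # single captured frame
```

Because the two transmittance distributions differ, the two views contribute with distinguishable per-pixel weights, which is what makes later separation of the superposed image possible in principle.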
APPARATUS FOR GENERATING A THREE-DIMENSIONAL COLOR IMAGE AND A METHOD FOR PRODUCING A THREE-DIMENSIONAL COLOR IMAGE
An apparatus comprises an input interface for receiving a color image of at least an object and a low resolution depth image of at least the object. The apparatus further comprises an image processing module configured to produce data for generating a three-dimensional color image based on a first color pixel image data of a first color pixel and a first derived depth pixel image data of a first derived depth pixel. The image processing module is configured to calculate the first derived depth pixel image data based on a measured depth pixel image data of a measured depth pixel and a weighting factor. The weighting factor is based on a color edge magnitude summation value of a path between the first color pixel and a reference color pixel. The apparatus further comprises an output interface for providing the generated three-dimensional color image.
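The weighting idea in this abstract, that a derived depth pixel should trust a measured depth less when many color edges lie on the path between them, can be sketched in one dimension. The straight-line path, the exponential falloff, and the sigma parameter are assumptions for illustration, not the patent's formulation.

```python
import numpy as np

# Illustrative sketch of a path-based weighting factor: sum the color
# edge magnitudes along the path between two pixels, then map that sum
# to a weight that decays as more edges intervene.  The horizontal path
# and exponential kernel are assumptions, not from the patent.

def edge_magnitude_sum(gray, p, q):
    """Sum absolute intensity steps along the same-row path p -> q,
    a 1-D stand-in for a color edge magnitude summation value."""
    r, c0 = p
    _, c1 = q
    lo, hi = sorted((c0, c1))
    row = gray[r, lo:hi + 1]
    return float(np.abs(np.diff(row)).sum())

def path_weight(gray, p, q, sigma=10.0):
    """Weight decays as more color edges separate p from q."""
    return float(np.exp(-edge_magnitude_sum(gray, p, q) / sigma))

img = np.zeros((1, 8))
img[0, 4:] = 100.0                          # one sharp edge in the row
w_flat = path_weight(img, (0, 0), (0, 3))   # no edge crossed -> weight 1
w_edge = path_weight(img, (0, 0), (0, 7))   # edge crossed -> near zero
```

The effect is that measured depths propagate freely within a uniformly colored region but are blocked at object boundaries, which is why the upsampled depth map stays sharp at color edges.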
3D IMAGING SYSTEM AND METHOD
A 3D imaging system includes an optical modulator for modulating a returned portion of a light pulse as a function of time. The returned light pulse portion is reflected or scattered from a scene for which a 3D image or video is desired. The 3D imaging system also includes an element array receiving the modulated light pulse portion and a sensor array of pixels, corresponding to the element array. The pixel array is positioned to receive light output from the element array. The element array may include an array of polarizing elements, each corresponding to one or more pixels. The polarization states of the polarizing elements can be configured so that time-of-flight information of the returned light pulse can be measured from signals produced by the pixel array, in response to the returned modulated portion of the light pulse.
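One common readout for a time-varying modulator of this kind is a transmission ramp: the ratio of a modulated pixel to an unmodulated reference pixel then encodes the pulse's arrival time within the gate, and hence range. The linear ramp, the 20 ns duration, and the function names below are assumptions for illustration, not claims from the patent.

```python
# Hedged sketch of one possible readout for the modulation scheme above:
# assume the modulator applies a linear transmission ramp of duration T
# over the range gate, so modulated/reference intensity maps linearly to
# arrival time.  All parameters here are illustrative assumptions.

C = 299_792_458.0  # speed of light, m/s

def time_of_flight_from_ratio(modulated, reference, ramp_s):
    """Arrival time within the gate from the intensity ratio (0..1)."""
    return (modulated / reference) * ramp_s

def range_from_tof(tof_s):
    """Convert round-trip time of flight to one-way range."""
    return C * tof_s / 2.0

# A pixel at half the ramp transmission on a 20 ns ramp sits ~1.5 m out.
rng_m = range_from_tof(time_of_flight_from_ratio(0.5, 1.0, 20e-9))
```

In the polarization variant the abstract describes, the "modulated" and "reference" signals would come from pixels behind polarizing elements in different states, measured from the same returned pulse.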
Method and apparatus for scanning and printing a 3D object
A smartphone may be freely moved in three dimensions as it captures a stream of images of an object. Multiple image frames may be captured in different orientations and distances from the object and combined into a composite image representing a three-dimensional image of the object. The image frames may be formed into the composite image based on representing features of each image frame as a set of points in a three-dimensional depth map. Coordinates of the points in the depth map may be estimated with a level of certainty. The level of certainty may be used to determine which points are included in the composite image. The selected points may be smoothed and a mesh model may be formed by creating a convex hull of the selected points. The mesh model and associated texture information may be used to render a three-dimensional representation of the object on a two-dimensional display. Additional techniques include processing and formatting of the three-dimensional representation data to be printed by a three-dimensional printer so a three-dimensional model of the object may be formed.
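The certainty-based point selection and smoothing steps described above can be sketched as follows. The threshold value and the centroid-blend smoothing rule are illustrative assumptions; the patent does not specify them.

```python
import numpy as np

# Sketch of the selection-and-smoothing steps in the abstract: each
# depth-map point carries a certainty estimate, only sufficiently
# certain points enter the composite, and the survivors are lightly
# smoothed (here, a simple blend toward the centroid of the kept set).
# Threshold and smoothing strength are illustrative assumptions.

def select_points(points, certainty, threshold=0.8):
    """Keep only points whose certainty meets the threshold."""
    return points[certainty >= threshold]

def smooth_points(points, strength=0.1):
    """Pull each kept point slightly toward the cloud centroid."""
    centroid = points.mean(axis=0)
    return points + strength * (centroid - points)

pts = np.array([[0.0, 0.0, 0.0],
                [1.0, 0.0, 0.0],
                [0.0, 1.0, 0.0],
                [5.0, 5.0, 5.0]])
cert = np.array([0.9, 0.95, 0.85, 0.2])   # last point is unreliable
kept = select_points(pts, cert)           # low-certainty outlier dropped
smoothed = smooth_points(kept)
```

The smoothed set would then be handed to a convex-hull routine to produce the mesh model the abstract describes, with texture mapped onto the resulting faces for display or 3D printing.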
User interface for camera effects
The present disclosure generally relates to user interfaces. In some examples, the electronic device transitions between user interfaces for capturing photos based on data received from a first camera and a second camera. In some examples, the electronic device provides enhanced zooming capabilities that result in visually pleasing results for a displayed digital viewfinder and for captured videos. In some examples, the electronic device provides user interfaces for transitioning a digital viewfinder between a first camera with an applied digital zoom to a second camera with no digital zoom. In some examples, the electronic device prepares to capture media at various magnification levels. In some examples, the electronic device provides enhanced capabilities for navigating through a plurality of values.
3D image acquisition apparatus and method of generating depth image in the 3D image acquisition apparatus
Provided are a three-dimensional (3D) image acquisition apparatus, and a method of generating a depth image in the 3D image acquisition apparatus. The method may include sequentially projecting a light transmission signal, which is generated from a light source, to a subject, modulating reflected light, which is reflected by the subject, using a light modulation signal, calculating a phase delay using a combination of a first plurality of images of two groups, from among a second plurality of images of all groups obtained by capturing the modulated reflected light, and generating a depth image based on the phase delay.
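The two computations this abstract ends with, recovering a phase delay from a combination of captured images and converting that phase to depth, can be sketched with the standard four-sample estimator and the relation Z = c·φ/(4πf). The four-sample form and the 20 MHz modulation frequency are common choices used here for illustration, not details taken from the patent.

```python
import math

# Sketch of the phase-delay and depth steps in the abstract.  The
# four-phase estimator (samples at 0, 90, 180, 270 degrees) is one
# standard instance of "a combination of images"; the modulation
# frequency below is an illustrative assumption.

C = 299_792_458.0  # speed of light, m/s

def phase_from_samples(a0, a90, a180, a270):
    """Phase delay from four intensity samples a quarter-period apart."""
    return math.atan2(a90 - a270, a0 - a180)

def depth_from_phase(phase_rad, mod_freq_hz):
    """Depth from ToF phase delay: Z = c * phi / (4 * pi * f)."""
    return C * phase_rad / (4.0 * math.pi * mod_freq_hz)

# A pi/2 phase delay at 20 MHz corresponds to roughly 1.87 m.
z = depth_from_phase(math.pi / 2, 20e6)
```

Repeating this per pixel over the captured image groups yields the depth image the method generates; the factor of 4π (rather than 2π) accounts for the round trip of the projected light.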