G06V10/143

SENSING DEVICE AND ELECTRONIC DEVICE
20230051302 · 2023-02-16

A sensing device includes a substrate, a first circuit, a second circuit, a first photodetector, and a second photodetector. The substrate has a sensing region. The first circuit is disposed on the substrate and in the sensing region, and configured to sense a fingerprint. The second circuit is disposed on the substrate and in the sensing region, and configured to sense a living body. The first photodetector is electrically connected to the first circuit. The second photodetector is electrically connected to the second circuit. The area of the second photodetector is larger than the area of the first photodetector.

IMAGE PROCESSING SYSTEM, ENDOSCOPE SYSTEM, AND IMAGE PROCESSING METHOD
20230050945 · 2023-02-16

An image processing system includes a processor. Based on association information associating a biological image captured under a first imaging condition with a biological image captured under a second imaging condition, the processor outputs a prediction image that predicts how an object captured in an input image would appear if captured under the second imaging condition. The association information is indicative of a trained model obtained through machine learning of a relationship between a first training image captured under the first imaging condition and a second training image captured under the second imaging condition. The processor is capable of outputting a plurality of different kinds of prediction images based on a plurality of trained models and the input image, and selects, based on a given condition, the prediction image to be output from among the plurality of prediction images.
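The multi-model prediction and selection described above can be sketched as follows. The model behaviors, condition names, and selection rule are illustrative stand-ins, not details from the patent:

```python
# Hypothetical sketch: several trained models (stand-in callables here) each
# map an input biological image to a predicted image under a different second
# imaging condition; a given condition then selects which prediction to output.
import numpy as np

def predict_images(input_image, trained_models):
    """Run every trained model on the input image, yielding one candidate
    prediction image per target imaging condition."""
    return {condition: model(input_image)
            for condition, model in trained_models.items()}

def select_prediction(candidates, given_condition):
    """Select the prediction image to output based on a given condition
    (here simply the requested second imaging condition)."""
    return candidates[given_condition]

# Stand-in "trained models": each maps a normal-light image to a predicted
# image under a different second imaging condition.
trained_models = {
    "narrow_band": lambda img: np.clip(img * [0.2, 1.0, 0.6], 0, 255),
    "dye_enhanced": lambda img: np.clip(img * [1.0, 0.5, 0.5], 0, 255),
}

input_image = np.full((4, 4, 3), 128.0)  # dummy biological image
candidates = predict_images(input_image, trained_models)
output = select_prediction(candidates, "narrow_band")
print(output.shape)  # (4, 4, 3)
```

In a real system the selection condition could also be an image-quality score or an operator choice; the dictionary lookup above is only the simplest possible rule.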

Systems and methods for reconstruction and rendering of viewpoint-adaptive three-dimensional (3D) personas

An exemplary method includes maintaining a receiver-side mesh-vertices list; receiving duplicative-vertex information from a sender and responsively reducing the receiver-side mesh-vertices list in accordance with the received duplicative-vertex information; and rendering, using the reduced receiver-side mesh-vertices list, viewpoint-adaptive three-dimensional (3D) personas of a subject at least in part by weighting video pixel colors from the different vantage points of the video cameras that capture video streams of the subject, the weighting being performed according to the geometric relationship of each video-camera vantage point to a user-selected viewpoint.
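The viewpoint-dependent color weighting can be illustrated with a minimal sketch. The cosine-similarity weighting and the back-facing cutoff are assumptions chosen for illustration, not necessarily the patented scheme:

```python
# Illustrative sketch: blend per-camera pixel colors with weights derived
# from how closely each camera's viewing direction matches the user-selected
# viewpoint direction; cameras facing away contribute nothing.
import numpy as np

def blend_colors(colors, camera_dirs, view_dir):
    """Weighted average of per-camera colors, weights proportional to the
    cosine similarity between each camera direction and the view direction."""
    view_dir = view_dir / np.linalg.norm(view_dir)
    dirs = camera_dirs / np.linalg.norm(camera_dirs, axis=1, keepdims=True)
    weights = np.clip(dirs @ view_dir, 0.0, None)  # ignore back-facing cameras
    weights /= weights.sum()
    return weights @ colors                         # blended color

# Three cameras observing the subject; the user views from near camera 0.
camera_dirs = np.array([[0.0, 0.0, 1.0],
                        [1.0, 0.0, 1.0],
                        [-1.0, 0.0, 1.0]])
colors = np.array([[255.0, 0.0, 0.0],   # camera 0 sees red
                   [0.0, 255.0, 0.0],   # camera 1 sees green
                   [0.0, 0.0, 255.0]])  # camera 2 sees blue
view_dir = np.array([0.1, 0.0, 1.0])
print(blend_colors(colors, camera_dirs, view_dir))
```

Because the user's viewpoint leans toward camera 0, the red channel dominates the blended color; moving the viewpoint toward another camera would shift the weights accordingly.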

Device for a color-based detection of image contents, computing device, and motor vehicle including the device
11580716 · 2023-02-14

An apparatus for color-dependent detection of image contents includes a light input coupling apparatus, a carrier medium, a measuring region, an output coupling region, and a camera apparatus. The light input coupling apparatus includes a light source that emits light at a first wavelength. The carrier medium receives the light and transmits it by internal reflection to the measuring region. The measuring region includes a first diffraction structure that outputs light at the first wavelength. The first diffraction structure is formed as a multiplex diffraction structure so as to also couple in light in a second wavelength range. The output coupling region includes a second diffraction structure, likewise formed as a multiplex diffraction structure, that outputs light at the first wavelength and in the second wavelength range. The camera apparatus captures the light coupled out of the carrier medium and provides image data that correlates with the captured light.

Depth image acquiring apparatus, control method, and depth image acquiring system

The present disclosure aims to enhance the performance of depth image acquisition. A depth image acquiring apparatus includes a light emitting diode, a TOF sensor, and a filter. The light emitting diode emits modulated light toward a detection area, that is, the area in which a depth image is to be acquired for distance detection. The TOF sensor receives the light emitted from the light emitting diode and reflected by an object in the detection area, and outputs a signal used to produce the depth image. The filter passes more of the light incident toward the TOF sensor having a wavelength within a predetermined pass band than light having wavelengths outside that pass band. At least one of the light emitting diode, the TOF sensor, or the arrangement of the filter is controlled in accordance with a temperature of the light emitting diode or the TOF sensor. The present technique can be applied, for example, to a system that acquires a depth image using a TOF method.
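The distance measurement underlying such an apparatus follows the standard indirect time-of-flight relation between phase shift and distance. A minimal sketch, in which the 20 MHz modulation frequency and the linear temperature-drift compensation are illustrative assumptions rather than values from the patent:

```python
# Indirect TOF principle: the sensor measures the phase shift between the
# emitted modulated light and its reflection; distance follows from
# d = c * dphi / (4 * pi * f_mod). Temperature handling is a hypothetical
# stand-in for the temperature-dependent control described above.
import math

C = 299_792_458.0  # speed of light, m/s

def depth_from_phase(phase_shift_rad, mod_freq_hz):
    """Distance for an indirect TOF measurement."""
    return C * phase_shift_rad / (4 * math.pi * mod_freq_hz)

def adjust_mod_freq(base_freq_hz, temp_c, ppm_per_c=50.0, ref_temp_c=25.0):
    """Hypothetical temperature compensation: shift the modulation frequency
    linearly with LED/sensor temperature (ppm_per_c is an assumed drift)."""
    return base_freq_hz * (1.0 + ppm_per_c * 1e-6 * (temp_c - ref_temp_c))

f = adjust_mod_freq(20e6, temp_c=45.0)  # 20 MHz base modulation, warm device
d = depth_from_phase(math.pi / 2, f)    # quarter-cycle phase shift
print(round(d, 3))                      # distance in metres
```

Note the unambiguous range of such a measurement is c / (2 f_mod), about 7.5 m at 20 MHz, which is one reason the modulation frequency is a natural control parameter.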

Apparatus and method for image-guided agriculture

A method for image-guided agriculture includes receiving images; processing the images to generate reflectance maps respectively corresponding to spectral bands; synthesizing the reflectance maps to generate a multispectral image including vegetation index information of a target area; receiving crop information for regions of the target area; and assessing crop conditions for the regions based on the received crop information and the vegetation index information.
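The vegetation-index step can be sketched compactly with NDVI computed from red and near-infrared reflectance maps. The band values, region layout, and health threshold below are illustrative assumptions, not from the patent:

```python
# NDVI per pixel from red and near-infrared reflectance, then a per-region
# mean as a stand-in for the region-level crop-condition assessment.
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index: (NIR - RED) / (NIR + RED)."""
    return (nir - red) / (nir + red + eps)

# Dummy reflectance maps for a 2x2 target area (values in [0, 1]).
red = np.array([[0.10, 0.12],
                [0.40, 0.35]])
nir = np.array([[0.60, 0.55],
                [0.45, 0.30]])

index_map = ndvi(nir, red)
# Region assessment: mean NDVI per row (each row stands in for one region).
for i, row in enumerate(index_map):
    condition = "healthy" if row.mean() > 0.4 else "stressed"
    print(f"region {i}: mean NDVI {row.mean():.2f} -> {condition}")
```

Healthy vegetation reflects strongly in near-infrared and absorbs red, so high NDVI in the first region and near-zero NDVI in the second reproduce the healthy/stressed contrast the assessment step relies on.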