
Augmented reality display device and augmented reality display method
11508134 · 2022-11-22

An augmented reality (AR) display device includes a camera that captures a background image; a distance measuring sensor that measures the distance to a real object in the background image; a position and orientation sensor that detects the position and shooting direction of the camera; a controller that recognizes the real object in the background image captured by the camera and associates a predetermined AR object with the recognized real object; a display that displays an image of the associated AR object; and a memory that stores data of the real object and the AR object in association with each other. The controller determines from the measurement result of the distance measuring sensor whether or not the real object is movable, and, when the position of the real object associated with the AR object moves, arranges the AR object according to the current position of the real object.
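The controller behavior described above (track a real object's position via the distance sensor, treat it as movable when its measurements change, and re-anchor the associated AR object to its current position) can be sketched as follows. All class and function names here are hypothetical illustrations, not the patent's implementation:

```python
from dataclasses import dataclass, field

@dataclass
class TrackedObject:
    """A real object tracked via the camera and distance measuring sensor."""
    name: str
    position: tuple                       # (x, y, z) in camera coordinates
    history: list = field(default_factory=list)

    def update(self, new_position, tolerance=0.05):
        """Record a new measurement and report whether the object moved."""
        self.history.append(self.position)
        moved = max(abs(a - b) for a, b in zip(self.position, new_position)) > tolerance
        self.position = new_position
        return moved

@dataclass
class ARObject:
    """A virtual object associated with a recognized real object."""
    label: str
    anchor: tuple                         # where the AR object is displayed

def place_ar_object(real: TrackedObject, ar: ARObject, measurement):
    """Re-anchor the AR object when its associated real object has moved."""
    if real.update(measurement):
        ar.anchor = real.position         # follow the movable real object
    return ar.anchor
```

In this sketch, the movability decision is reduced to a position-change threshold on successive distance measurements; the patent's controller could use richer criteria.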

Image Detection Module and Information Management System

Environmental information is managed by a neural network.

An image detection module includes a first neural network, a first communication module, a first position sensor, a first processor, and a passive element. The first neural network includes an imaging device. The imaging device has a function of obtaining an image, and the first position sensor has a function of detecting positional information on where the image is obtained. When the first neural network determines that the image has learned features, the first processor can transmit the positional information on where the image is obtained. The first processor receives a detection result through the first communication module and can operate the passive element in accordance with the detection result.
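The module's control flow (classify the image on-device, transmit the capture position when learned features are found, then drive the passive element from the returned detection result) can be sketched as one cycle. The function names, the score threshold, and the callback shapes are all illustrative assumptions:

```python
def run_detection_cycle(image, position, classifier, transmit, actuate,
                        threshold=0.5):
    """One cycle of the image detection module.

    classifier -- stands in for the first neural network; returns a score
    transmit   -- sends the positional information, returns a detection result
    actuate    -- operates the passive element according to that result
    """
    score = classifier(image)            # first neural network's judgment
    if score >= threshold:               # image has the learned features
        result = transmit(position)      # report where the image was obtained
        actuate(result)                  # drive the passive element
        return True
    return False
```

A usage example would plug in a trained model for `classifier` and the first communication module's send/receive path for `transmit`.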

DETECTION DEVICE
20230178574 · 2023-06-08

According to an aspect, a detection device includes: a detection element formed on a substrate; a photodiode provided in the detection element; partial photodiodes included in the photodiode; an organic protective film covering the partial photodiodes; lenses provided so as to overlap the partial photodiodes; a first light-blocking layer between the organic protective film and the lenses and provided with first openings in regions overlapping the partial photodiodes; a second light-blocking layer between the first light-blocking layer and the lenses and provided with second openings in regions overlapping the partial photodiodes and the first openings; a first light-transmitting resin layer provided between the first light-blocking layer and the second light-blocking layer; and a second light-transmitting resin layer provided between the second light-blocking layer and the lenses. The first light-blocking layer is provided on the organic protective film so as to be directly in contact with the organic protective film.

DEVICES AND METHODS EMPLOYING OPTICAL-BASED MACHINE LEARNING USING DIFFRACTIVE DEEP NEURAL NETWORKS

An all-optical diffractive deep neural network (D²NN) architecture learns to implement various functions or tasks after deep learning-based design of the passive diffractive or reflective substrate layers that work collectively to perform the desired function or task. This architecture was confirmed experimentally by creating 3D-printed D²NNs that learned to implement handwritten-digit classification and a lens function in the terahertz spectrum. This all-optical deep learning framework can perform, at the speed of light, various complex functions and tasks that computer-based neural networks can implement, and will find applications in all-optical image analysis, feature detection, and object classification, also enabling new camera designs and optical components that can learn to perform unique tasks using D²NNs. In alternative embodiments, the all-optical D²NN is used as a front end in conjunction with a trained digital neural-network back end.
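The inference side of a D²NN (light propagating through a cascade of passive phase-only layers, read out as intensity at a detector plane) can be simulated with standard angular-spectrum propagation. This is a toy forward pass only; the deep learning-based design of the phase masks, which the abstract describes, is omitted, and all parameter values are illustrative assumptions rather than the patent's:

```python
import numpy as np

def propagate(field, dist, wavelength, pitch):
    """Angular-spectrum free-space propagation between diffractive layers."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pitch)               # spatial frequencies
    fxx, fyy = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength ** 2 - fxx ** 2 - fyy ** 2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    transfer = np.exp(1j * kz * dist) * (arg > 0)  # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * transfer)

def d2nn_forward(field, phase_masks, dist, wavelength, pitch):
    """Cascade passive phase-only layers; a detector reads out intensity."""
    for phase in phase_masks:
        field = propagate(field, dist, wavelength, pitch)
        field = field * np.exp(1j * phase)         # learned phase, no gain
    field = propagate(field, dist, wavelength, pitch)
    return np.abs(field) ** 2                      # detected intensity
```

At terahertz scales one might pick, say, a 0.75 mm wavelength with a 0.4 mm layer pitch, in the same regime as the 3D-printed layers the abstract mentions; since every layer is passive and phase-only, the cascade can never add optical energy.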

FINGERPRINT IDENTIFICATION MODULE AND DISPLAY DEVICE

There is provided a fingerprint identification module, including a substrate having a fingerprint identification area and a peripheral area; a photoelectric sensing structure in the fingerprint identification area, and including pixel units; each pixel unit includes a thin film transistor having a gate electrode coupled to a corresponding gate line and a first electrode coupled to a corresponding signal sensing line; the fingerprint identification area includes a photosensitive region, the pixel unit in the photosensitive region further includes a photoelectric sensor including a third electrode, a photosensitive pattern and a fourth electrode which are sequentially stacked along a direction away from the substrate, and the third electrode is coupled to a second electrode of the thin film transistor in the same pixel unit as that where the photoelectric sensor is located; an area ratio of the photoelectric sensor to the pixel unit corresponding thereto ranges from 40% to 90%.

DISPLAY DEVICE
20230180570 · 2023-06-08

A display panel can include a plurality of subpixels disposed on a substrate and configured to display an image; a plurality of light-emitting elements disposed on the substrate and configured to emit light; and a plurality of light-receiving elements disposed on the substrate and configured to receive reflected light based on the light emitted by the plurality of light-emitting elements, in which the plurality of light-emitting elements or the plurality of light-receiving elements are disposed between the plurality of subpixels. The plurality of light-emitting elements can emit infrared laser light and the plurality of light-receiving elements are configured to generate a three-dimensional map for recognizing a face of a user based on the infrared laser light emitted by the plurality of light-emitting elements.

Iris recognition camera system for mobile device
09824272 · 2017-11-21

An iris recognition camera system for a mobile device includes a sensor and associated circuitry, a lens, and lighting sources including a lighting source element, a display light source element, and IR LED light source elements. The lighting sources are installed around the center of the camera lens with a certain clearance, are attached to an FPCB arranged on the four sides of the center of the camera lens, and are equipped with an FPCB cover for protection and a guiding mirror, so that a user may conveniently acquire an image of his or her iris while watching that image, allowing only a living iris to be identified and processed.

Object behavior anomaly detection using neural networks
11501572 · 2022-11-15

In various examples, a set of object trajectories may be determined based at least in part on sensor data representative of a field of view of a sensor. The set of object trajectories may be applied to a long short-term memory (LSTM) network to train the LSTM network. An expected object trajectory for an object in the field of view of the sensor may be computed by the LSTM network based at least in part on an observed object trajectory. By comparing the observed object trajectory to the expected object trajectory, a determination may be made that the observed object trajectory is indicative of an anomaly.
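The comparison step (compute an expected trajectory, measure the observed trajectory's deviation from it, flag an anomaly past a threshold) can be sketched as follows. The trained LSTM is replaced here by a stand-in constant-velocity extrapolator so the sketch is runnable; `predict_trajectory`, `is_anomalous`, and the threshold are hypothetical names and values, not the patent's:

```python
import numpy as np

def predict_trajectory(history, steps):
    """Stand-in for the LSTM: extrapolate at the last observed velocity."""
    history = np.asarray(history, dtype=float)
    velocity = history[-1] - history[-2]
    return history[-1] + velocity * np.arange(1, steps + 1)[:, None]

def is_anomalous(observed, expected, threshold):
    """Flag a track whose mean deviation from the expected path is large."""
    err = np.linalg.norm(np.asarray(observed, dtype=float)
                         - np.asarray(expected, dtype=float), axis=1)
    return float(err.mean()) > threshold
```

In the described pipeline, `predict_trajectory` would instead be the LSTM network's expected-trajectory output for the observed track; the deviation metric and threshold are then what turn the comparison into an anomaly decision.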