Patent classification: G06V10/147
OPTICAL SENSOR
According to one embodiment, an optical sensor includes a display panel and a sensor panel under at least a part of the display panel. The display panel includes pixels arranged two-dimensionally. The sensor panel includes a sensor layer with sensor elements arranged two-dimensionally, a collimator layer on the sensor layer that includes openings, and lenses on the collimator layer. A first number of the openings lie over each one of the sensor elements, and the same first number of the lenses lie over those openings. The positions of these lenses differ for each of the sensor elements.
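The per-element lens arrangement can be pictured with a small layout calculation. This is an illustrative sketch, not the patented design: the grid parameters, the index-dependent shift rule, and all names below are assumptions chosen only to show a pattern in which every sensor element carries the same number of lenses at element-specific positions.

```python
# Illustrative sketch (not from the patent): lay out a fixed number of
# lenses per sensor element, with a cluster shift that depends on the
# element's grid index so no two elements share relative lens positions.

def lens_positions(rows, cols, pitch, lenses_per_element, step):
    """Return {(r, c): [(x, y), ...]} mapping each sensor element to
    its lens centers. The shift (dx, dy) varies with the element index,
    so the lens positions relative to each element's center differ
    from element to element."""
    layout = {}
    for r in range(rows):
        for c in range(cols):
            dx = (c - cols // 2) * step   # element-specific x shift
            dy = (r - rows // 2) * step   # element-specific y shift
            cx, cy = c * pitch, r * pitch  # element center
            layout[(r, c)] = [
                (cx + dx + k * step, cy + dy)
                for k in range(lenses_per_element)
            ]
    return layout

grid = lens_positions(rows=3, cols=3, pitch=100.0, lenses_per_element=2, step=5.0)
```

With these assumed parameters, every element gets two lenses, and the lens offsets relative to an element's own center change across the grid.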
Systems, methods and devices for monitoring betting activities
Systems, processes, and devices for monitoring betting activities using bet recognition devices and a server. Each bet recognition device has an imaging component that captures image data of a gaming table surface, and each device receives calibration data for calibrating itself. A server processor coupled to a data store processes the image data received from the bet recognition devices over a network to detect, for each betting area, the number of chips and the final bet value of those chips.
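The final step the abstract describes, turning detected chips into a bet value, reduces to a lookup-and-sum once chips have been classified. The sketch below assumes a chip-color classifier already exists; the denomination map and function names are invented for illustration and are not part of the patent.

```python
# Hypothetical last stage of the pipeline above: given chip color labels
# already produced by a detector, total the final bet value per betting
# area. The denomination table is an assumption, not from the patent.

CHIP_VALUES = {"white": 1, "red": 5, "green": 25, "black": 100}

def bet_summary(detected_chips):
    """detected_chips: list of chip color labels for one betting area.
    Returns (number_of_chips, final_bet_value)."""
    count = len(detected_chips)
    value = sum(CHIP_VALUES[color] for color in detected_chips)
    return count, value

count, value = bet_summary(["red", "red", "green"])  # 3 chips worth 5+5+25
```

The actual chip detection from table imagery would require a full vision pipeline and the device calibration data mentioned above; only the trivial accounting step is shown here.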
Electronic device
An electronic device is provided. The electronic device includes a cover plate, a main board, an iris camera, an infrared lamp, and a polarizing member. The iris camera is coupled with the main board and arranged on a side of the main board facing the cover plate. The infrared lamp is also coupled with the main board, arranged on the same side of the main board facing the cover plate, and spaced apart from the iris camera. The polarizing member is arranged between the infrared lamp and the cover plate to narrow the angle of the infrared light emitted from the infrared lamp.
Optical array for high-quality imaging in harsh environments
Methods and apparatus are disclosed for producing high quality images in uncontrolled or impaired environments. In some examples of the disclosed technology, groups of cameras for high dynamic range (HDR), polarization diversity, and optional other diversity modes are arranged to concurrently image a common scene. For example, in a vehicle checkpoint application, HDR provides discernment of dark objects inside a vehicle, while polarization diversity aids in rejecting glare. Spectral diversity, infrared imaging, and active illumination can be applied for better imaging through a windshield. Preprocessed single-camera images are registered and fused. Faces or other features of interest can be detected in the fused image and identified in a library. Impairments can include weather, insufficient or interfering lighting, shadows, reflections, window glass, occlusions, or moving objects.
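The HDR fusion step can be illustrated with a common exposure-fusion heuristic: weight each pixel by how well exposed it is, then blend. This is a generic sketch, not the patented pipeline; the weighting rule and function name are assumptions, and registration of the single-camera images is presumed already done.

```python
import numpy as np

# Illustrative exposure-fusion sketch (not the patented method): fuse
# registered frames of different exposures, weighting each pixel by its
# distance from saturation so mid-range pixels dominate the result.

def fuse_exposures(frames):
    """frames: list of aligned float arrays in [0, 1], one per exposure.
    Returns the per-pixel weighted blend."""
    stack = np.stack(frames)                   # shape (n, H, W)
    weights = 1.0 - np.abs(stack - 0.5) * 2.0  # "well-exposedness" weight
    weights = np.clip(weights, 1e-6, None)     # avoid divide-by-zero
    return (weights * stack).sum(0) / weights.sum(0)

dark = np.full((2, 2), 0.1)    # underexposed frame
bright = np.full((2, 2), 0.9)  # overexposed frame
fused = fuse_exposures([dark, bright])
```

With two frames equally far from mid-gray, the weights are equal and the blend is their average; in practice the well-exposed pixels of each frame would carry most of the weight.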
Vehicle-trailer distance detection device and method
A method is provided for determining the distance between a camera positioned on a rear portion of a tow vehicle and a trailer coupler supported by a trailer positioned behind the tow vehicle, as the tow vehicle approaches the trailer. The method includes identifying the trailer coupler within one or more images of the environment behind the tow vehicle, and receiving sensor data from an inertial measurement unit supported by the tow vehicle. The method also includes determining a pixel-wise intensity difference between the current received image and a previously received image, and then determining the distance based on the identified trailer coupler, the sensor data, and the pixel-wise intensity difference. The distance includes a longitudinal distance, a lateral distance, and a vertical distance.
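The pixel-wise intensity difference mentioned above is the simplest part of the method and can be sketched directly. How the patent then combines this cue with the coupler detection and IMU data into a three-axis distance is the claimed contribution and is not reproduced here; the function name below is an assumption.

```python
import numpy as np

# Sketch of a pixel-wise intensity difference between consecutive
# grayscale frames, plus its mean as a single scalar motion cue.
# This is illustrative only; the patented method fuses this with
# coupler detection and IMU data to estimate distance.

def intensity_difference(current, previous):
    """current, previous: grayscale frames with values in [0, 1].
    Returns (per-pixel absolute difference, mean difference)."""
    diff = np.abs(current.astype(float) - previous.astype(float))
    return diff, float(diff.mean())

prev = np.zeros((4, 4))
curr = np.zeros((4, 4))
curr[0, 0] = 1.0  # exactly one pixel changed between frames
diff, score = intensity_difference(curr, prev)
```

Here one changed pixel out of sixteen yields a mean difference of 1/16; as the trailer grows in the frame during approach, this cue rises with apparent motion.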
Auto-labeling of driving logs using analysis-by-synthesis and unsupervised domain adaptation
Acquiring labeled data can be a significant bottleneck in the development of machine learning models that are accurate and efficient enough to enable safety-critical applications, such as automated driving. The process of labeling driving logs can be automated. Unlabeled real-world driving logs, which include data captured by one or more vehicle sensors, can be automatically labeled to generate one or more labeled real-world driving logs. The automatic labeling can include analysis-by-synthesis on the unlabeled real-world driving logs to generate simulated driving logs, which can include reconstructed driving scenes or portions thereof. The automatic labeling can further include simulation-to-real automatic labeling on the simulated driving logs and the unlabeled real-world driving logs to generate one or more labeled real-world driving logs. The automatically labeled real-world driving logs can be stored in one or more data stores for subsequent training, validation, evaluation, and/or model management.
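The two-stage flow the abstract describes, analysis-by-synthesis followed by simulation-to-real labeling, has a simple control structure even though each stage hides substantial machinery. The sketch below shows only that structure; every function body is a stub, and all names and the log schema are assumptions for illustration, since real scene reconstruction and domain adaptation would involve trained models.

```python
# Structural sketch of the auto-labeling flow described above.
# Both stages are stubs standing in for heavy components:
#   stage 1: analysis-by-synthesis reconstructs a simulated scene,
#            where labels come for free from the simulator;
#   stage 2: simulation-to-real adaptation transfers those labels
#            back onto the unlabeled real-world log.

def analysis_by_synthesis(real_log):
    """Stub: reconstruct a simulated driving scene from a real log."""
    return {"scene": f"reconstruction of {real_log['id']}",
            "labels": ["car", "pedestrian"]}

def sim_to_real_labeling(sim_log, real_log):
    """Stub: attach the simulator's labels to the real-world log."""
    return {**real_log, "labels": sim_log["labels"]}

def auto_label(unlabeled_logs):
    labeled = []
    for log in unlabeled_logs:
        sim = analysis_by_synthesis(log)                 # stage 1
        labeled.append(sim_to_real_labeling(sim, log))   # stage 2
    return labeled

logs = auto_label([{"id": "log-001", "sensor_data": "..."}])
```

The output would then be written to a data store for training, validation, and model management, as the abstract notes.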