
METHOD FOR CONTROLLING A VEHICLE IN A DEPOT, TRAVEL CONTROL UNIT, AND VEHICLE HAVING SAID TRAVEL CONTROL UNIT
20220413508 · 2022-12-29

The disclosure is directed to a method for controlling a vehicle in a depot. The method includes the steps: allocating a three-dimensional target object to the vehicle; detecting a three-dimensional object in the environment around the vehicle and determining depth information for the detected three-dimensional object; classifying the detected three-dimensional object on the basis of the determined depth information and checking whether the determined three-dimensional object has the same object class as the three-dimensional target object; identifying the detected three-dimensional object if the determined three-dimensional object has the same object class as the three-dimensional target object by detecting an object identifier assigned to the three-dimensional object and checking whether the detected object identifier matches a target identifier assigned to the target object; outputting an approach signal to move the vehicle closer to the detected three-dimensional target object in an automated manner or manually if the object identifier matches the target identifier.
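The class-then-identifier gating described above can be sketched as a small decision routine; names such as `DetectedObject` and `approach_decision` are illustrative, not taken from the disclosure:

```python
from dataclasses import dataclass

@dataclass
class DetectedObject:
    object_class: str   # e.g. derived from depth-based classification
    identifier: str     # e.g. read from a marker on the object

def approach_decision(target_class: str, target_id: str,
                      detected: DetectedObject) -> bool:
    """Return True (emit the approach signal) only if the detected object
    matches the allocated target in both object class and identifier."""
    if detected.object_class != target_class:
        return False                      # wrong class: skip identification
    return detected.identifier == target_id

# A trailer of the right class but with the wrong identifier is rejected:
assert approach_decision("trailer", "T-042",
                         DetectedObject("trailer", "T-007")) is False
assert approach_decision("trailer", "T-042",
                         DetectedObject("trailer", "T-042")) is True
```

Classifying first on coarse depth information and only then reading the identifier matches the two-stage check in the abstract: the cheaper class test filters out most objects before the identifier comparison.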

FACE AUTHENTICATION ENVIRONMENT DETERMINATION METHOD, FACE AUTHENTICATION ENVIRONMENT DETERMINATION SYSTEM, FACE AUTHENTICATION ENVIRONMENT DETERMINATION APPARATUS, AND NON-TRANSITORY COMPUTER READABLE MEDIUM
20220415014 · 2022-12-29

A face authentication environment determination method according to the present disclosure includes: capturing an image of a skin color chart (10) including a first skin color display part (BK) configured to show a skin color of a Negroid and a second skin color display part (WT) configured to show a skin color of a Caucasoid, the first and the second skin color display parts being formed on a substrate; analyzing luminance of each of the first and the second skin color display parts (BK) and (WT) in the captured image of the skin color chart (10); and determining whether or not an image capturing environment is suitable for face authentication based on the luminance of each of the first and the second skin color display parts (BK) and (WT).
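The suitability decision might be sketched as a check on the measured patch luminances; the threshold values below are illustrative assumptions, not taken from the disclosure:

```python
def environment_suitable(lum_dark: float, lum_light: float,
                         min_dark: float = 20.0, max_light: float = 235.0,
                         min_contrast: float = 40.0) -> bool:
    """Decide whether the lighting is usable for face authentication.

    lum_dark / lum_light: mean luminance (0-255 scale) measured on the
    darker and lighter skin-color patches of the chart.
    """
    if lum_dark < min_dark:       # shadows crushed: darker patch underexposed
        return False
    if lum_light > max_light:     # highlights clipped: lighter patch overexposed
        return False
    return (lum_light - lum_dark) >= min_contrast   # patches still separable

# Balanced lighting passes; a crushed dark patch fails:
assert environment_suitable(60.0, 180.0) is True
assert environment_suitable(5.0, 180.0) is False
```

Checking both ends of the tonal range mirrors the abstract's use of two reference patches: an environment that exposes one skin tone well can still fail for the other.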

LASER EMITTER, DEPTH CAMERA AND ELECTRONIC DEVICE
20220412726 · 2022-12-29

A laser emitter includes an emitting assembly and a laser deflection assembly. The emitting assembly has a beam outlet configured to emit a laser beam. The laser deflection assembly is located at the beam outlet and is movable relative to it: when the laser deflection assembly is translated relative to the beam outlet, it changes the angle of deviation of the laser beam emitted from the beam outlet, and an included angle is formed between the translation direction of the laser deflection assembly and the center line of the laser beam emitted from the beam outlet.
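A rough sense of how translating a deflection element steers a beam can be given with the thin-prism approximation (deviation ≈ (n − 1) × wedge angle); the linearly varying wedge profile below is a hypothetical model for illustration, not the disclosed assembly:

```python
def wedge_deviation_deg(n: float, wedge_angle_deg: float) -> float:
    """Thin-prism approximation: deviation angle ≈ (n - 1) * wedge angle."""
    return (n - 1.0) * wedge_angle_deg

def local_wedge_angle_deg(x_mm: float, grad_deg_per_mm: float = 0.5) -> float:
    """Hypothetical element whose wedge angle varies linearly with position,
    so translating it past the fixed beam outlet changes which local wedge
    the beam passes through, and hence the deviation."""
    return grad_deg_per_mm * x_mm

# Translating the element from 0 mm to 4 mm across the beam:
n = 1.5
d0 = wedge_deviation_deg(n, local_wedge_angle_deg(0.0))   # 0.0 deg
d4 = wedge_deviation_deg(n, local_wedge_angle_deg(4.0))   # 1.0 deg
```

The point of the sketch is only that a pure translation of a refractive element, at an angle to the beam's center line, converts linear motion into a change of deflection angle.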

POLKA LINES: LEARNING STRUCTURED ILLUMINATION AND RECONSTRUCTION FOR ACTIVE STEREO
20220414913 · 2022-12-29

The present disclosure relates generally to image processing, and more particularly to techniques for structured illumination and reconstruction of three-dimensional (3D) images. Disclosed herein is a method to jointly learn structured illumination and reconstruction, parameterized by a diffractive optical element and a neural network in an end-to-end fashion. The disclosed approach uses a differentiable image formation model for active stereo, relying on both wave and geometric optics, and a trinocular reconstruction network. The jointly optimized pattern, dubbed “Polka Lines,” together with the reconstruction network, yields accurate active-stereo depth estimates across imaging conditions. The disclosed method is validated in simulation and with an experimental prototype, and several variants of the Polka Lines patterns specialized to different illumination conditions are demonstrated.
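One intuition behind learning an aperiodic dot pattern: correlation-based stereo matching is ambiguous under a periodic projection but far less so under an irregular one. The toy 1D autocorrelation comparison below illustrates this point only; it is not the paper's end-to-end optimization:

```python
import random

def circular_corr(p, shift):
    """Circular autocorrelation of pattern p at a given shift."""
    n = len(p)
    return sum(p[i] * p[(i + shift) % n] for i in range(n))

def peak_ambiguity(p):
    """Ratio of the largest off-zero autocorrelation to the zero-lag peak.
    Lower is better: the true disparity is easier to identify uniquely."""
    n = len(p)
    peak = circular_corr(p, 0)
    side = max(circular_corr(p, s) for s in range(1, n))
    return side / peak

n = 64
periodic = [1.0 if i % 4 == 0 else 0.0 for i in range(n)]            # regular dots
rng = random.Random(0)
aperiodic = [1.0 if rng.random() < 0.25 else 0.0 for i in range(n)]  # irregular dots

# The periodic pattern matches itself perfectly at shift 4 (ambiguity 1.0);
# the irregular pattern's best off-zero match is strictly weaker.
assert peak_ambiguity(periodic) == 1.0
assert peak_ambiguity(aperiodic) < 1.0
```

An end-to-end learned pattern can go further than a fixed pseudorandom one by adapting dot density and layout to the imaging conditions, which is what the jointly optimized "Polka Lines" pattern does.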

Projecting images captured using fisheye lenses for feature detection in autonomous machine applications
11538231 · 2022-12-27

In various examples, sensor data may be adjusted to represent a virtual field of view different from an actual field of view of the sensor, and the sensor data—with or without virtual adjustment—may be applied to a stereographic projection algorithm to generate a projected image. The projected image may then be applied to a machine learning model—such as a deep neural network (DNN)—to detect and/or classify features or objects represented therein.
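As a sketch of what such a reprojection does to image coordinates, assume an equidistant fisheye model (r = f·θ) remapped to a stereographic projection (r = 2f·tan(θ/2)), which is conformal and so preserves local shapes for downstream feature detectors; the focal length and radii below are illustrative:

```python
import math

def fisheye_to_stereographic_radius(r_fish: float, f: float) -> float:
    """Map a radial pixel distance in an equidistant fisheye image
    (r = f * theta) to the corresponding radius under a stereographic
    projection (r = 2 * f * tan(theta / 2))."""
    theta = r_fish / f                 # incidence angle recovered from fisheye radius
    return 2.0 * f * math.tan(theta / 2.0)

f = 300.0
# Near the optical axis the two projections agree closely...
small = fisheye_to_stereographic_radius(10.0, f)
# ...but the stereographic radius grows faster toward the edge of the field of view.
large = fisheye_to_stereographic_radius(400.0, f)
```

In practice the remap is applied per pixel (for example with a precomputed lookup table) to build the projected image that is then fed to the DNN.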

Locking beverage container
11535435 · 2022-12-27

A locking beverage container includes a container and a complementary removably securable lid. The lid includes a drinking aperture, a flap, and a lock. The drinking aperture is in fluid communication with the container when the lid is secured to the container and can be utilized by an individual to drink the contents of the container. The flap can be moved to a closed position to cover the drinking aperture and prevent access to the drinking aperture and the contents of the container. The lock is operably connected to the flap and can selectively lock the flap in the closed position such that the flap cannot be moved by an unauthorized individual. The lock can also be unlocked, thereby enabling the flap to move so that the individual can again access and utilize the drinking aperture.

System for synthesizing data

During a training phase, a first machine learning system is trained using actual data, such as multimodal images of a hand, to generate synthetic image data. During training, the first system determines latent vector spaces associated with identity, appearance, and so forth. During a generation phase, latent vectors from the latent vector spaces are generated and used as input to the first machine learning system to generate candidate synthetic image data. The candidate image data is assessed to determine its suitability for inclusion into a set of synthetic image data that may subsequently be used to train a second machine learning system to recognize the identity of a hand presented by a user. For example, the candidate synthetic image data is compared to previously generated synthetic image data to avoid duplicative synthetic identities. The second machine learning system is then trained using the approved candidate synthetic image data.
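The duplicate-identity screening step might look like the following greedy cosine-similarity filter over identity embeddings; the threshold, helper names, and toy 2-D vectors are assumptions for illustration:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def accept_candidates(candidates, threshold=0.95):
    """Greedily keep candidate identity vectors whose similarity to every
    previously accepted vector stays below the threshold, rejecting
    near-duplicate synthetic identities."""
    accepted = []
    for c in candidates:
        if all(cosine(c, a) < threshold for a in accepted):
            accepted.append(c)
    return accepted

cands = [[1.0, 0.0], [0.999, 0.01], [0.0, 1.0]]
kept = accept_candidates(cands)
# The near-duplicate of the first identity is rejected: 2 identities kept.
```

Filtering at generation time keeps the synthetic training set from over-representing a single identity, which would bias the second (recognition) model.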

Image-focusing method and associated image sensor
11539875 · 2022-12-27

An autofocusing method includes capturing an intensity image of a scene with a camera that includes a pixel array; computing a horizontal-difference image and a vertical-difference image; and combining the horizontal-difference image and the vertical-difference image to yield a combined image. The method also includes determining, from the combined image and the intensity image, an image distance with respect to a lens of the camera at which the camera forms an in-focus image. The pixel array includes horizontally-adjacent pixel pairs and vertically-adjacent pixel pairs, each located beneath a respective microlens. The horizontal-difference image includes, for each horizontally-adjacent pixel pair, a derived pixel value that is an increasing function of a difference between pixel values generated by the horizontally-adjacent pixel pair. The vertical-difference image includes, for each vertically-adjacent pixel pair, a derived pixel value that is an increasing function of a difference between pixel values generated by the vertically-adjacent pixel pair.
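A toy 1D version of the pair-difference idea: when defocus shifts what the two pixels of a pair see, the sum of absolute pair differences grows, so the in-focus hypothesis minimizes the combined measure. This is an illustration of the principle, not the patented method:

```python
def pair_difference_metric(signal, shift):
    """Toy dual-pixel model: the left/right pixels of each pair sample the
    scene offset by `shift` samples when defocused (shift = 0 means in
    focus). Returns the sum of absolute pair differences, an increasing
    function of each pair's difference, as in the difference images above."""
    n = len(signal)
    left = signal
    right = [signal[(i + shift) % n] for i in range(n)]
    return sum(abs(l - r) for l, r in zip(left, right))

scene = [((i * 37) % 11) / 10.0 for i in range(64)]    # textured 1D scene
metrics = {s: pair_difference_metric(scene, s) for s in (0, 1, 2, 3)}
best = min(metrics, key=metrics.get)                   # in-focus hypothesis: shift 0
```

Using both horizontal and vertical pairs, as the abstract describes, makes the measure sensitive to edges of any orientation rather than only vertical or only horizontal detail.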