G06V20/10

Viewpoint dependent brick selection for fast volumetric reconstruction

A method for culling parts of a 3D reconstruction volume is provided. The method makes fresh, accurate, and comprehensive 3D reconstruction data available to a wide variety of mobile XR applications with low use of computational resources and storage space. The method includes culling parts of the 3D reconstruction volume against a depth image. The depth image has a plurality of pixels, each of which represents a distance to a surface in a scene. In some embodiments, the method includes culling parts of the 3D reconstruction volume against a frustum. The frustum is derived from the field of view of the image sensor from which the image data used to create the 3D reconstruction are obtained.
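The frustum-culling step described above can be sketched as follows. This is a minimal illustration, not the patented method: bricks are approximated by bounding spheres, the frustum by a viewing cone with near/far limits, and all names, margins, and parameters are illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch of frustum-based brick culling: a brick (an axis-aligned
# block of the reconstruction volume) is kept only if its bounding sphere may
# intersect the camera's viewing cone between the near and far limits.

def cull_bricks(brick_centers, cam_pos, cam_dir, fov_deg, near, far, brick_radius):
    """Return a boolean mask: True = brick may be visible, False = cull."""
    cam_dir = cam_dir / np.linalg.norm(cam_dir)
    to_brick = brick_centers - cam_pos              # (N, 3) camera -> brick vectors
    dist = np.linalg.norm(to_brick, axis=1)
    # Conservative near/far test, expanded by the brick's bounding radius.
    in_range = (dist + brick_radius >= near) & (dist - brick_radius <= far)
    # Angle between view direction and brick direction vs. half field of view.
    cos_angle = (to_brick @ cam_dir) / np.maximum(dist, 1e-9)
    half_fov = np.radians(fov_deg) / 2.0
    # Widen the cone slightly so bricks straddling the boundary are kept.
    margin = np.arctan2(brick_radius, np.maximum(dist, 1e-9))
    in_cone = np.arccos(np.clip(cos_angle, -1.0, 1.0)) <= half_fov + margin
    return in_range & in_cone
```

A depth-image culling pass would then discard the surviving bricks that project entirely behind the observed surfaces.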

Method and device for reliably identifying objects in video images
11580332 · 2023-02-14

A computer-implemented method for reliably identifying objects in a sequence of input images received with the aid of an imaging sensor. Positions of light sources in each input image are ascertained with the aid of a first machine learning system, in particular an artificial neural network, and objects are identified from the resulting sequence of light-source positions, in particular with the aid of a second machine learning system, likewise in particular an artificial neural network.
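The two-stage structure of the abstract can be sketched as below. Both "models" are trivial stand-ins chosen for illustration (a brightness threshold and a heuristic), not the neural networks the patent envisions; every name and threshold is an assumption.

```python
import numpy as np

# Stage 1 stand-in: map a frame to light-source pixel positions.
def detect_light_positions(frame, threshold=200):
    """Return (x, y) coordinates of pixels at or above a brightness threshold."""
    ys, xs = np.nonzero(frame >= threshold)
    return list(zip(xs.tolist(), ys.tolist()))

# Stage 2 stand-in: map the sequence of positions to object identities.
def identify_objects(position_sequence):
    """A steady pair of lights across frames is classified as 'vehicle'."""
    if position_sequence and all(len(p) == 2 for p in position_sequence):
        return ["vehicle"]
    return []

# Usage: three frames with a fixed headlight pair.
frames = [np.zeros((4, 4), dtype=np.uint8) for _ in range(3)]
for f in frames:
    f[1, 0] = 255   # left headlight
    f[1, 3] = 255   # right headlight

positions = [detect_light_positions(f) for f in frames]
print(identify_objects(positions))   # -> ['vehicle']
```

The point of the split is that stage 2 sees only compact position sequences rather than raw pixels, which is what makes the temporal identification tractable.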

Systems and methods for automatically grading pre-owned electronic devices

Systems and methods for automatically grading a user device are provided. Such systems and methods can include (1) a lighting element positioned at an angle relative to a platform, (2) an imaging device positioned at the angle relative to the platform such that light emitted from the lighting element and a field of view of the imaging device form a right angle where the light emitted from the lighting element and the field of view meet at a user device when the user device is positioned at a predetermined location on the platform, and (3) control circuitry that can activate the lighting element, instruct the imaging device to capture an image of a screen of the user device while the user device is at the predetermined location and is being illuminated by the lighting element, and parse the image to determine whether the screen is damaged.
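The control-circuitry sequence (activate light, capture, parse) can be sketched as follows. Hardware calls are mocked out as callables, and the damage heuristic, a glare-pixel ratio on the assumption that cracked glass scatters the angled light toward the sensor, is an illustrative stand-in, not the patented parsing step.

```python
import numpy as np

# Hypothetical grading-station sketch: hardware is injected as functions so
# the control flow is testable without a real lighting element or camera.
class GradingStation:
    def __init__(self, capture_fn, light_on_fn, light_off_fn):
        self.capture = capture_fn
        self.light_on = light_on_fn
        self.light_off = light_off_fn

    def grade_screen(self, glare_threshold=240, damage_ratio=0.01):
        """Return 'damaged' or 'intact' for the device on the platform."""
        self.light_on()                 # illuminate at the fixed angle
        image = self.capture()          # sensor at a right angle to the light
        self.light_off()
        # Assumed heuristic: cracks scatter light into the sensor as glare.
        glare = np.mean(image >= glare_threshold)
        return "damaged" if glare > damage_ratio else "intact"
```

With the right-angle geometry, an intact screen reflects the light away from the sensor, so any bright streak in the capture is a candidate defect.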

Systems, devices and methods for imaging objects within or behind a medium using electromagnetic array

Systems, devices, and methods are provided for imaging at least one target object within a medium, including acquiring multiple sets of RF signals, generating a plurality of delay-and-sum (DAS) images, analyzing the plurality of DAS images to detect one or more target objects, and visualizing the at least one target object.
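The DAS image-formation step can be sketched as below. This is a textbook delay-and-sum beamformer under simplifying assumptions (monostatic elements, known propagation speed, nearest-sample delays); the element layout, sampling rate, and grid are illustrative choices, not the patented system.

```python
import numpy as np

# Minimal delay-and-sum (DAS) sketch: for each grid point, sum each element's
# RF signal at the sample matching the round-trip delay to that point.
def das_image(signals, element_pos, grid_points, fs, c):
    """signals: (n_elements, n_samples) array; returns DAS intensity per grid point."""
    image = np.zeros(len(grid_points))
    for gi, p in enumerate(grid_points):
        total = 0.0
        for ei, e in enumerate(element_pos):
            delay = 2.0 * np.linalg.norm(p - e) / c   # round-trip time of flight
            idx = int(round(delay * fs))              # nearest sample index
            if idx < signals.shape[1]:
                total += signals[ei, idx]             # coherent sum
        image[gi] = abs(total)
    return image
```

Echoes add coherently only at grid points matching a real scatterer, so analyzing the resulting images for bright peaks is what enables the detection step.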

Method for supporting a camera-based environment recognition by a means of transport using road wetness information from a first ultrasonic sensor
11580752 · 2023-02-14

A method and an apparatus for supporting a camera-based environment recognition by a means of transport using road wetness information from a first ultrasonic sensor. The method includes: recording a first signal representing an environment of the means of transport by the first ultrasonic sensor of the means of transport; recording a second signal representing the environment of the means of transport by a camera of the means of transport; obtaining road wetness information on the basis of the first signal; selecting a predefined set of parameters from a plurality of predefined sets of parameters as a function of the road wetness information; and performing an environment recognition on the basis of the second signal in conjunction with the predefined set of parameters.
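The parameter-selection step can be sketched as a lookup keyed on the inferred wetness class. The thresholds, the noise-based wetness model, and the parameter names are all made-up illustrations of the "predefined sets of parameters" the abstract describes.

```python
# Illustrative predefined parameter sets for the camera-based recognition;
# names and values are assumptions, not from the patent.
WETNESS_PARAMETER_SETS = {
    "dry":  {"contrast_boost": 1.0, "reflection_filter": False},
    "damp": {"contrast_boost": 1.2, "reflection_filter": True},
    "wet":  {"contrast_boost": 1.5, "reflection_filter": True},
}

def classify_wetness(ultrasonic_noise_level):
    """Assumed model: wet roads raise tire/spray noise in the ultrasonic band."""
    if ultrasonic_noise_level < 0.3:
        return "dry"
    if ultrasonic_noise_level < 0.7:
        return "damp"
    return "wet"

def select_parameters(ultrasonic_noise_level):
    """Pick the predefined set matching the road wetness information."""
    return WETNESS_PARAMETER_SETS[classify_wetness(ultrasonic_noise_level)]
```

The camera pipeline then runs unchanged; only its tuning (e.g. how aggressively reflections on wet asphalt are suppressed) switches with the ultrasonic cue.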

Augmented-reality-based video record and pause zone creation

According to one embodiment, a method, computer system, and computer program product for operating a camera to perform video capture of a subject is provided. The present invention may include pausing or recording video capture of the subject based on the camera's location within one or more designated recording zones, wherein the recording zones are two-dimensional or three-dimensional regions comprising or within view of the subject, and wherein the recording zones are designated within an augmented reality environment.
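The pause/record decision reduces to a point-in-zone test on the camera's position. The sketch below assumes zones are axis-aligned 3D boxes placed in the AR environment, a simplifying assumption; the abstract also allows two-dimensional regions and zones defined by view of the subject.

```python
# Hypothetical zone check: a zone is (min_corner, max_corner), each a 3-tuple.
def in_zone(pos, zone):
    """True if pos lies inside the axis-aligned box zone (inclusive)."""
    lo, hi = zone
    return all(lo[i] <= pos[i] <= hi[i] for i in range(3))

def capture_state(camera_pos, recording_zones):
    """'recording' while the camera is inside any designated zone, else 'paused'."""
    if any(in_zone(camera_pos, z) for z in recording_zones):
        return "recording"
    return "paused"
```

Evaluating this each frame against the camera's tracked AR pose yields the automatic pause/resume behavior the abstract describes.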

Multi-channel lidar sensor module

The present invention relates to a multi-channel lidar sensor module capable of measuring at least two target objects using one image sensor. The multi-channel lidar sensor module according to an embodiment of the present invention includes at least one pair of light emitting units configured to emit laser beams and a light receiving unit formed between the at least one pair of light emitting units and configured to receive at least one pair of reflected laser beams which are emitted from the at least one pair of light emitting units and reflected by target objects.