H04N13/00

VISUAL POSITIONING SYSTEM, BATTERY REPLACING DEVICE, AND BATTERY REPLACEMENT CONTROL METHOD
20230234234 · 2023-07-27

A visual positioning system, comprising a first visual sensor (501), a second visual sensor (502), and a position obtaining unit (503), wherein the first visual sensor (501) is used for obtaining a first image (G11) of a first position (A) of a target apparatus (7), the second visual sensor (502) is used for obtaining a second image of a second position (B) of the target apparatus (7), and the position obtaining unit (503) is used for obtaining position information of the target apparatus (7) according to the first image (G11) and the second image. Further disclosed are a battery swapping device and a battery swapping control method. Relatively high positioning accuracy is achieved visually, enabling accurate positioning between the battery swapping device and a vehicle undergoing battery swap.
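As a rough sketch of the two-sensor scheme, the planar pose of the target apparatus can be recovered once the two observed features are expressed in a common world frame (the per-camera calibration that produces those world coordinates from the first and second images is assumed and not shown; all names are illustrative):

```python
import math

def target_pose(point_a, point_b):
    """Estimate a planar pose from two feature points.

    point_a / point_b: (x, y) world coordinates of features A and B,
    assumed already recovered from the first and second images via
    per-camera calibration (not shown here).
    Returns (cx, cy, yaw) for the target apparatus: the midpoint of
    the two features and the orientation of the A->B axis.
    """
    ax, ay = point_a
    bx, by = point_b
    cx, cy = (ax + bx) / 2.0, (ay + by) / 2.0
    yaw = math.atan2(by - ay, bx - ax)
    return cx, cy, yaw
```

Using two widely separated features is what gives the orientation estimate its leverage; the farther apart A and B are, the less a per-feature pixel error perturbs the recovered yaw.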

Video data processing method and apparatus

Example video data processing methods and apparatus are disclosed. One example method includes a client receiving a first bitstream, where the first bitstream is obtained by encoding image data in a specified spatial object. The specified spatial object is part of panoramic space, and the size of the specified spatial object is larger than the size of the spatial object of the panoramic space corresponding to viewport information. The spatial object corresponding to the viewport information is located in the specified spatial object. The client receives a second bitstream, where the second bitstream is obtained by encoding image data of a panoramic image of the panoramic space at a lower resolution than the resolution of the image data in the specified spatial object. The client plays the second bitstream and the first bitstream.
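A minimal sketch of the client-side fallback logic this scheme implies, assuming simple axis-aligned rectangles in panorama coordinates (function and field names are illustrative, not from the patent):

```python
def pick_stream(viewport, spec_obj):
    """Choose which decoded bitstream backs the current viewport.

    Rectangles are (x, y, w, h) in panorama coordinates. The first
    bitstream covers spec_obj at high resolution; the second covers
    the full panorama at lower resolution and serves as the fallback
    when the viewport moves outside spec_obj.
    """
    vx, vy, vw, vh = viewport
    sx, sy, sw, sh = spec_obj
    inside = (vx >= sx and vy >= sy and
              vx + vw <= sx + sw and vy + vh <= sy + sh)
    return "first" if inside else "second"
```

Because the specified spatial object is deliberately larger than the viewport, small head movements stay inside the high-resolution region, and the low-resolution panorama is only shown during large, fast viewport changes.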

Methods and systems for displaying content

Disclosed are methods and systems for displaying content. In an aspect, a plurality of content items can be displayed on one user device according to user preference. An example method can comprise positioning a first set of pixels associated with a user device so that first content displayed via the first set of pixels can be viewable in a first viewing location. A second set of pixels associated with the user device can be positioned so that second content displayed via the second set of pixels can be viewable in a second viewing location. The second content can be different from the first content.
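One common way to realize two viewing locations on a single panel is spatial interleaving, e.g. assigning alternate pixel columns to each content item; a minimal sketch under that assumption (the optical separation of the columns toward the two viewing locations, by a lens or barrier, happens outside the code):

```python
def interleave_columns(first, second):
    """Interleave two equally sized row-major frames column by column.

    Even columns carry the first content (visible from the first
    viewing location), odd columns carry the second content (visible
    from the second viewing location).
    """
    out = []
    for row_a, row_b in zip(first, second):
        out.append([a if i % 2 == 0 else b
                    for i, (a, b) in enumerate(zip(row_a, row_b))])
    return out
```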

Stereo camera apparatus, vehicle, and parallax calculation method
11703326 · 2023-07-18

A stereo camera apparatus includes a first imaging unit including a first imaging optical system provided with a plurality of lens groups, and a first actuator configured to change a focal length by driving at least one of the plurality of lens groups of the first imaging optical system; a second imaging unit including a second imaging optical system provided with a plurality of lens groups, and a second actuator configured to change a focal length by driving at least one of the plurality of lens groups of the second imaging optical system; a focal length controller configured to output synchronized driving signals to the first and second actuators; and an image processing unit configured to calculate a distance to a subject by using images captured by the first imaging unit and the second imaging unit.
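For an ideal rectified pinhole pair, the distance calculation from the two captured images reduces to the standard relation Z = f·B/d; a minimal sketch, with the focal length passed in as a parameter since the actuators above make it variable:

```python
def distance_from_disparity(focal_px, baseline_m, disparity_px):
    """Pinhole stereo range: Z = f * B / d.

    focal_px:     focal length in pixels (varies with the zoom state
                  set by the synchronized actuators)
    baseline_m:   distance between the two imaging units in meters
    disparity_px: horizontal offset of the subject between the two
                  rectified images, in pixels
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

The synchronized driving signals matter here: if the two optical systems drifted to different focal lengths, the single f in this formula would no longer describe both images and the disparity-to-distance mapping would break.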

Methods and apparatus for initializing object dimensioning systems

Methods, systems, and apparatus for initializing a dimensioning system based on the location of a vehicle carrying an object to be dimensioned. An example method disclosed herein includes receiving, from a location system, location data indicating a location of a vehicle carrying an object; responsive to the location data indicating that the vehicle is approaching an imaging area, initializing, using a logic circuit, a sensor to be primed for capturing data representative of the object; receiving, from a motion detector carried by the vehicle, motion data indicating a speed of the vehicle; and triggering, using the logic circuit, the sensor to capture data representative of the object at a sample rate based on the speed of the vehicle.
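Triggering at "a sample rate based on the speed of the vehicle" can be read as keeping a roughly constant spatial spacing between captures; a minimal sketch under that assumption (the spacing parameter is illustrative, not from the patent):

```python
def capture_rate_hz(speed_mps, sample_spacing_m=0.05):
    """Sample rate that keeps an approximately constant spatial
    spacing between captures as the vehicle moves.

    speed_mps:        vehicle speed from the motion detector (m/s)
    sample_spacing_m: desired distance between successive captures
                      (illustrative default)
    """
    if speed_mps < 0:
        raise ValueError("speed must be non-negative")
    return speed_mps / sample_spacing_m
```

A faster vehicle then triggers the sensor proportionally more often, so the captured data covers the object at the same density regardless of how quickly it passes through the imaging area.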

Ultrafast, robust and efficient depth estimation for structured-light based 3D camera system

A system and a method are disclosed for a structured-light system to estimate depth in an image. An image is received of a scene onto which a reference light pattern has been projected. The projection of the reference light pattern includes a predetermined number of particular sub-patterns. A patch of the received image and a sub-pattern of the reference light pattern are matched based on either a hardcoded template matching technique or a probability that the patch corresponds to the sub-pattern. If a lookup table is used, the table may be a probability matrix, may contain precomputed correlation scores, or may contain precomputed class IDs. An estimate of the depth of the patch is determined based on the disparity between the patch and the sub-pattern.
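The hardcoded template matching variant can be sketched as scoring the image patch against each reference sub-pattern and keeping the best match (a plain dot-product correlation is assumed here as a stand-in for the probability-matrix and lookup-table variants described above):

```python
def best_subpattern(patch, subpatterns):
    """Match an image patch to the best-scoring reference sub-pattern.

    patch:       flattened intensity values of the image patch
    subpatterns: list of equally sized flattened reference sub-patterns
    Returns the index of the sub-pattern with the highest correlation;
    together with the sub-pattern's known position in the reference
    pattern, that index yields the disparity used for depth.
    """
    def score(a, b):
        # Dot-product correlation; a real system might normalize
        # or use a precomputed lookup table instead.
        return sum(x * y for x, y in zip(a, b))
    return max(range(len(subpatterns)),
               key=lambda i: score(patch, subpatterns[i]))
```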

Single depth tracked accommodation-vergence solutions

While a viewer is viewing a first stereoscopic image comprising a first left image and a first right image, a left vergence angle of the viewer's left eye and a right vergence angle of the viewer's right eye are determined. A virtual object depth is determined based at least in part on (i) the left vergence angle of the left eye of the viewer and (ii) the right vergence angle of the right eye of the viewer. A second stereoscopic image comprising a second left image and a second right image is rendered for the viewer on one or more image displays. The second stereoscopic image is subsequent to the first stereoscopic image. The second stereoscopic image is projected from the one or more image displays to a virtual object plane at the virtual object depth.
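Under a simple symmetric-gaze geometry, the virtual object depth follows from the two vergence angles and the interpupillary distance as z = IPD / (tan aL + tan aR); a minimal sketch under that assumption (the IPD default is an illustrative average, not from the patent):

```python
import math

def virtual_object_depth(left_vergence_rad, right_vergence_rad, ipd_m=0.063):
    """Depth of the fixation point from the two vergence angles.

    Angles are measured inward from each eye's straight-ahead
    direction. With eyes separated by ipd_m, the gaze rays cross at
    z = ipd_m / (tan aL + tan aR) in front of the viewer.
    """
    denom = math.tan(left_vergence_rad) + math.tan(right_vergence_rad)
    if denom <= 0:
        return float("inf")  # parallel or diverging gaze: optical infinity
    return ipd_m / denom
```

Driving the virtual object plane to this depth is what lets accommodation and vergence agree for the tracked fixation point in the subsequent stereoscopic image.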

System for displaying information to a user

The invention relates to a system for displaying information to a user. The system comprises an emission device arranged to emit light so as to display information to the user, the emission device being adapted to emit the light in a pulsed manner so that the intensity of the light varies between a high value and a low value. The system further comprises a selective viewing device comprising a panel, the panel being adapted so that the user can view the light emitted by the emission device through that panel so as to visually perceive the information being displayed, the panel having a variable transparency which can be varied between a state of high transparency and a state of low transparency. The system is adapted to synchronize the emission device and the selective viewing device so that the states in which the emission device emits light at the high-intensity value and the states in which the panel of the selective viewing device is highly transparent overlap in time. The emission device is adapted so that the light is emitted in a pulsed manner with a duty cycle of less than or equal to 1/10, and the panel of the selective viewing device is adapted to operate at essentially the same duty cycle.
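The synchronization constraint can be sketched as the emitter and panel sharing one clock, phase, and duty cycle, so that their active windows coincide; a minimal timing model under that assumption:

```python
def is_high(t, period, duty=0.1):
    """True while t (mod period) falls inside the active window.

    duty <= 0.1 models the claimed duty cycle of at most 1/10: the
    emitter is bright (or the panel transparent) for only that
    fraction of each period.
    """
    return (t % period) < duty * period

def viewer_sees_light(t, period, duty=0.1):
    """With emitter and panel synchronized to the same clock, phase,
    and duty cycle, light reaches the viewer exactly when both
    windows are open simultaneously."""
    emitting = is_high(t, period, duty)
    panel_open = is_high(t, period, duty)
    return emitting and panel_open
```

Anyone without the synchronized panel integrates mostly the low-intensity phase, while the intended viewer's panel is transparent precisely during the bright pulses.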

Cascaded architecture for disparity and motion prediction with block matching and convolutional neural network (CNN)

A CNN operates on the disparity or motion outputs of a block matching hardware module, such as a DMPAC module, to produce refined disparity or motion streams that improve results in images having ambiguous regions. Because the block matching hardware module performs most of the processing, the CNN can be small and thus able to operate in real time, in contrast to CNNs that perform all of the processing themselves. In one example, the CNN operation is performed only if the confidence level output by the block matching hardware module is below a predetermined amount. The CNN can have a number of different configurations and still be small enough to operate in real time on conventional platforms.
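The confidence-gated cascade can be sketched as keeping the block-matching output wherever its confidence clears a threshold and invoking the small CNN only elsewhere (the cnn callable and the threshold value are illustrative stand-ins, not from the patent):

```python
def refine_disparity(bm_disparity, bm_confidence, cnn, threshold=0.8):
    """Cascade of block matching and a small refinement CNN.

    bm_disparity:  per-element disparity from the block matching
                   hardware module
    bm_confidence: per-element confidence from the same module
    cnn:           callable applied only to low-confidence elements
    Keeps the hardware output where confidence >= threshold; the CNN
    runs only on the ambiguous remainder, which is what keeps it
    small and real-time capable.
    """
    return [d if c >= threshold else cnn(d)
            for d, c in zip(bm_disparity, bm_confidence)]
```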