G01S3/00

Method and apparatus for tracking eyes of user and method of generating inverse-transform image

There is provided a method and apparatus for tracking the eyes of a user. The method and apparatus may acquire an image of the user, acquire an illuminance of the viewpoint from which the image is captured, and output coordinates of the eyes tracked from the image by operating, based on the acquired illuminance, at least one of a high-illuminance eye tracker that operates at a high illuminance or a low-illuminance eye tracker that operates at a low illuminance.
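As an illustration only (the threshold value, the lux units, and the tracker names below are assumptions for the sketch, not disclosed in the abstract), the illuminance-based selection between the two trackers might look like:

```python
# Hypothetical sketch of choosing an eye tracker by measured illuminance.
# The 50-lux boundary and tracker identifiers are assumptions; the
# abstract only says the choice is "based on the acquired illuminance".

def select_tracker(illuminance_lux, threshold_lux=50.0):
    """Return which eye tracker to operate for the measured light level."""
    if illuminance_lux >= threshold_lux:
        return "high_illuminance_eye_tracker"
    return "low_illuminance_eye_tracker"
```

In a fuller pipeline the returned tracker would then be run on the captured image to produce the eye coordinates.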

ESTIMATING POSE IN 3D SPACE
20220101004 · 2022-03-31

Methods and devices for estimating the position of a device within a 3D environment are described. Embodiments of the methods include sequentially receiving multiple image segments that form an image representing a field of view (FOV) comprising a portion of the environment. The image includes multiple sparse points that are identifiable based in part on a corresponding subset of the image segments. The method also includes sequentially identifying one or more of the sparse points as each corresponding subset of image segments is received, and estimating a position of the device in the environment based on the identified sparse points.
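As a sketch only (the segment stream and the mapping from sparse points to required segments are assumed data structures, not taken from the abstract), identifying sparse points as soon as their covering segments arrive, instead of waiting for the whole image, might be expressed as:

```python
# Hypothetical sketch: emit a sparse-point id as soon as every image
# segment needed to identify it has been received.

def incremental_sparse_points(segment_stream, point_to_segments):
    """Yield sparse-point ids as their required segments become available.

    point_to_segments maps a point id to the set of segment indices
    needed to identify it (an assumption about the method's bookkeeping).
    """
    received = set()
    pending = {p: set(segs) for p, segs in point_to_segments.items()}
    for seg in segment_stream:
        received.add(seg)
        ready = [p for p, need in pending.items() if need <= received]
        for p in ready:
            del pending[p]
            yield p
```

The pose estimate could then be refined each time the generator yields a new point, rather than once per complete frame.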

Systems and methods for multi-target tracking and autofocusing based on deep machine learning and laser radar
11283986 · 2022-03-22

Systems and methods for recognizing, tracking, and focusing a moving target are disclosed. In accordance with the disclosed embodiments, the systems and methods may recognize the moving target traveling relative to an imaging device; track the moving target; and determine a distance to the moving target from the imaging device.
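Purely as an illustration (the interfaces are assumed; the patent does not disclose them), pairing a recognized target's image position with a laser-radar range reading to get its distance might be sketched as:

```python
# Hypothetical sketch: look up the laser-radar range at the centre of a
# detected target's bounding box. The bbox format and the column-to-range
# mapping are assumptions for illustration.

def distance_to_target(bbox, lidar_ranges):
    """Return the range (metres) at the centre column of a bounding box.

    bbox is (x0, y0, x1, y1) in pixels; lidar_ranges maps pixel columns
    to measured distances. Returns None if no reading covers the target.
    """
    cx = (bbox[0] + bbox[2]) // 2
    return lidar_ranges.get(cx)
```

A deep-learning detector would supply the bounding boxes; the distance could then drive the autofocus.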

Camera mount system
11300856 · 2022-04-12

Systems, apparatuses, and methods are described which provide a camera mount system. A camera mount is in communication with a positioning device. The positioning device has a compass and a gyroscope, and emits an infrared light beam. The camera mount has a sensor that receives infrared light reflected and/or scattered from a moving target. The camera mount changes its orientation based on information from its sensor and on gyroscope and compass information from the positioning device.
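As a minimal sketch (the angle convention is an assumption; the patent does not specify how the sensor and compass data are combined), updating the mount's heading from the compass reading plus the IR sensor's measured angular offset might look like:

```python
# Hypothetical sketch: combine the positioning device's compass bearing
# with the IR sensor's offset to the target to get a new mount heading.
# Degrees and the [0, 360) wrap are assumptions for illustration.

def new_heading(compass_deg, ir_offset_deg):
    """Return the mount heading in degrees, wrapped to [0, 360)."""
    return (compass_deg + ir_offset_deg) % 360.0
```

The wrap keeps the heading well-defined when the sum crosses north (e.g. a compass reading near 360 plus a positive offset).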

Tracking a point of interest in a panoramic video

A computer-implemented method, device, and computer program product are provided that obtain a panoramic video of a scene with a coordinate system. The method, device, and computer program product identify a point of interest (POI) in the scene within the panoramic video and track the position of the POI within the panoramic video. They record POI position data in connection with changes in the position of the POI during the panoramic video, and they support playback of the panoramic video and adjustment of the field of view based on the POI position data.
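As a sketch only (the per-frame yaw/pitch data model is an assumption, not from the abstract), recording POI positions during tracking and then centring the playback field of view on them might look like:

```python
# Hypothetical sketch: store the POI's panoramic coordinates per frame,
# then centre the playback FOV on the recorded position. The yaw/pitch
# representation is an assumption for illustration.

def record_poi(track, frame_idx, yaw_deg, pitch_deg):
    """Record the POI position for one frame of the panoramic video."""
    track[frame_idx] = (yaw_deg, pitch_deg)

def fov_center(track, frame_idx, default=(0.0, 0.0)):
    """Return the FOV centre for playback: the POI position at that frame."""
    return track.get(frame_idx, default)
```

During playback, each frame's view window would be positioned around the value returned by `fov_center`, so the POI stays in frame as it moves.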

Systems and methods for deep learning-based shopper tracking

Systems and techniques are provided for tracking puts and takes of inventory items by subjects in an area of real space. A plurality of cameras with overlapping fields of view produce respective sequences of images of corresponding fields of view in the real space. In one embodiment, the system includes first image processors, including subject image recognition engines, receiving corresponding sequences of images from the plurality of cameras. The first image processors process images to identify subjects represented in the images in the corresponding sequences of images. The system includes second image processors, including background image recognition engines, receiving corresponding sequences of images from the plurality of cameras. The second image processors mask the identified subjects to generate masked images. Following this, the second image processors process the masked images to identify and classify background changes represented in the images in the corresponding sequences of images.
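As an illustration of the masking step only (the list-of-rows image format and bounding-box subject regions are assumptions; the patent's image processors are recognition engines, not shown here), blanking identified subjects before background-change detection might be sketched as:

```python
# Hypothetical sketch: zero out the pixels inside each identified
# subject's bounding box so a downstream background-change detector
# sees only the shelves, not the shoppers.

def mask_subjects(image, subject_boxes):
    """Return a copy of the image with subject regions blanked to 0.

    image is a list of pixel rows; each box is (x0, y0, x1, y1) with
    exclusive upper bounds. The input image is left unmodified.
    """
    masked = [row[:] for row in image]
    for (x0, y0, x1, y1) in subject_boxes:
        for y in range(y0, y1):
            for x in range(x0, x1):
                masked[y][x] = 0
    return masked
```

The second image processors would then compare masked images over time to classify background changes (items put on or taken from shelves).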