Patent classifications
G01S3/00
SYSTEMS AND METHODS FOR DEEP LEARNING-BASED SHOPPER TRACKING
Systems and techniques are provided for tracking puts and takes of inventory items by subjects in an area of real space. A plurality of cameras with overlapping fields of view produce respective sequences of images of corresponding fields of view in the real space. In one embodiment, the system includes first image processors, including subject image recognition engines, receiving corresponding sequences of images from the plurality of cameras. The first image processors process images to identify subjects represented in the images in the corresponding sequences of images. The system includes second image processors, including background image recognition engines, receiving corresponding sequences of images from the plurality of cameras. The second image processors mask the identified subjects to generate masked images. Following this, the second image processors process the masked images to identify and classify background changes represented in the images in the corresponding sequences of images.
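As a rough illustration of the two-stage pipeline (subject masking followed by background-change detection), the following Python sketch assumes a hypothetical bounding-box subject representation and a simple per-pixel difference threshold; the patent's deep-learning recognition engines are abstracted away, and all function names are illustrative:

```python
import numpy as np

def mask_subjects(image, subject_boxes):
    """Zero out regions occupied by identified subjects.
    subject_boxes uses a hypothetical (x0, y0, x1, y1) pixel format."""
    masked = image.copy()
    for (x0, y0, x1, y1) in subject_boxes:
        masked[y0:y1, x0:x1] = 0
    return masked

def detect_background_changes(prev_masked, curr_masked, threshold=30):
    """Flag pixels whose intensity changed beyond a threshold.
    A real system would classify these regions as puts or takes of items."""
    diff = np.abs(curr_masked.astype(np.int16) - prev_masked.astype(np.int16))
    return diff > threshold
```

Differencing masked frames rather than raw frames is the point of the masking stage: moving shoppers are removed before comparison, so only changes to the shelves themselves survive as candidate puts and takes.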
Terminal and method of controlling the same
The present invention relates to a terminal and a method of controlling the same. According to one embodiment of the present invention, a terminal includes at least two camera sensors; a display configured to display processed image data in a preview window that includes a first object, to which a first focus is set by a first camera sensor, and a second object, to which a second focus is set by a second camera sensor; and a controller configured to track a movement of the first object, to set the focus to the first object using the second camera sensor if the first object deviates from the angle of view covered by the first camera sensor, and to obtain the image data.
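The handover logic described above can be sketched as follows, assuming each camera's angle of view is given as a simple (low, high) angular interval in degrees; the one-dimensional geometry and all names are illustrative assumptions, not the patent's implementation:

```python
def in_angle_of_view(obj_angle, aov):
    """Check whether an object's bearing falls inside a (low, high) interval."""
    lo, hi = aov
    return lo <= obj_angle <= hi

def select_camera(obj_angle, camera_aovs, current):
    """Keep the current camera while the tracked object stays inside its angle
    of view; otherwise hand focus over to a camera that does cover the object."""
    if in_angle_of_view(obj_angle, camera_aovs[current]):
        return current
    for cam, aov in camera_aovs.items():
        if cam != current and in_angle_of_view(obj_angle, aov):
            return cam
    return current  # no camera covers the object; keep the current one
```

Overlapping intervals are what make the handover seamless: while the object is in the overlap region, either camera can hold focus, so tracking never drops between sensors.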
Optical sensing apparatuses, method, and optical detecting module capable of estimating multi-degree-of-freedom motion
A method capable of estimating multi-degree-of-freedom motion of an optical sensing apparatus includes: providing an image sensor having a pixel array with a plurality of image zones to sense and capture a frame; providing and using a lens to vary the optical magnifications of a plurality of portion images of the frame to generate a plurality of reconstructed images with different fields of view, the portion images of the frame respectively corresponding to the image zones; and estimating and obtaining a motion result for each of the reconstructed images to estimate the multi-degree-of-freedom motion of the optical sensing apparatus.
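A minimal sketch of the per-zone motion step, using exhaustive block matching with a sum-of-absolute-differences criterion in place of whatever estimator the apparatus actually uses (the function name, the SAD criterion, and the search radius are assumptions):

```python
import numpy as np

def zone_motion(prev_zone, curr_zone, max_shift=2):
    """Exhaustive block matching: return the (dy, dx) shift that minimizes
    the mean absolute difference between two images of the same zone."""
    h, w = prev_zone.shape
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            # overlap regions of the two frames under the candidate shift
            a = prev_zone[max(0, -dy):h + min(0, -dy), max(0, -dx):w + min(0, -dx)]
            b = curr_zone[max(0, dy):h + min(0, dy), max(0, dx):w + min(0, dx)]
            err = np.abs(a.astype(float) - b.astype(float)).mean()
            if err < best_err:
                best, best_err = (dy, dx), err
    return best
```

Running this on each image zone yields one motion vector per zone; vectors that agree across zones indicate translation, while vectors that diverge suggest rotation or zoom, which is one way multiple degrees of freedom can be separated from a single sensor.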
Real-time multifocal displays with gaze-contingent rendering and optimization
Systems and methods for displaying an image across a plurality of displays are described herein. Pixel intensity values in the multifocal display are determined using correlation values and numerical iterations. An eye tracking system measures eye tracking information about a position of a user's eye, and the pixel intensity values are modified based on the eye tracking information. An image is displayed on the plurality of displays based on the determined pixel intensity values. The plurality of displays may be within a head-mounted display (HMD) and address vergence-accommodation conflict by simulating retinal defocus blur.
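One way to picture the numerical iteration is a gradient-style update that splits a target image across display layers. This is a toy additive compositing model; the correlation values and gaze-contingent weighting from the abstract are omitted, and all names are assumptions:

```python
import numpy as np

def optimize_layers(target, n_layers=2, iters=200, lr=0.1):
    """Iteratively adjust per-layer pixel intensities so the layers' sum
    approximates the target image (simple additive compositing model)."""
    layers = [np.full_like(target, target.mean() / n_layers) for _ in range(n_layers)]
    for _ in range(iters):
        err = sum(layers) - target          # residual between composite and target
        for layer in layers:
            layer -= lr * err / n_layers    # distribute the correction across layers
            np.clip(layer, 0.0, 1.0, out=layer)  # physical intensity bounds
    return layers
```

In an actual multifocal display the objective would also weight the residual by where the eye is focused, which is where the eye-tracking information enters the optimization.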
METHOD AND DEVICE FOR ESTIMATING A TIME OF ARRIVAL OF A RADIO SIGNAL
A method and device for estimating the time of arrival (ToA) of a radio signal are proposed. The radio signal comprises M subsignals carried on M subcarriers, where M is an integer greater than or equal to 2. The number of propagation paths and the time of arrival associated with a first path are estimated. The technique can improve ToA estimation accuracy, especially in a multipath fading channel, and reduce the need for channel information.
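For intuition, under a single-path model the ToA is recoverable from the phase slope of the per-subcarrier channel response. The sketch below assumes that simplified model with the channel H[m] = exp(-j·2π·m·Δf·τ), not the patent's multipath estimator:

```python
import numpy as np

def estimate_toa(channel, subcarrier_spacing):
    """Estimate time of arrival from the phase slope across subcarriers,
    assuming a single dominant propagation path."""
    phases = np.unwrap(np.angle(channel))    # remove 2*pi phase wraps
    m = np.arange(len(channel))
    slope = np.polyfit(m, phases, 1)[0]      # radians per subcarrier index
    return -slope / (2 * np.pi * subcarrier_spacing)
```

In a multipath fading channel the phase slope mixes contributions from several paths, which is why a practical estimator must first resolve the number of paths before attributing a ToA to the first one.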
INERTIAL MEASUREMENT UNIT PROGRESS ESTIMATION
Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. In particular, a multi-view interactive digital media representation can be generated from live images captured from a camera. The live images can include an object. An angular view of the object captured in the live images can be estimated using sensor data from an inertial measurement unit. The multi-view interactive digital media representation can include a plurality of images where each of the plurality of images includes the object from a different camera view. When the plurality of images is output to a display, the object can appear to undergo a 3-D rotation through the determined angular view where the 3-D rotation of the object is generated without a 3-D polygon model of the object.
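The angular-view estimation can be pictured as integrating gyroscope yaw rates over the capture interval and then mapping a requested view angle back to the nearest captured frame; the fixed timestep, the simple Euler integration, and all names are illustrative assumptions:

```python
def angular_view(yaw_rates, dt):
    """Integrate gyro yaw-rate samples (rad/s) with a fixed timestep to get the
    cumulative camera angle at each sample (simple Euler integration)."""
    angles, total = [0.0], 0.0
    for rate in yaw_rates:
        total += rate * dt
        angles.append(total)
    return angles

def frame_for_angle(angles, query):
    """Pick the index of the captured frame whose estimated angle is closest
    to the requested view angle."""
    return min(range(len(angles)), key=lambda i: abs(angles[i] - query))
```

Because each live image is tagged with an integrated angle rather than a 3-D position, playback can sweep through the captured views to give the appearance of rotation without ever building a polygon model of the object.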