Patent classifications
G01S3/00
Compensating for crosstalk in determination of an angle of arrival of an electromagnetic wave at a receive antenna
An angle of arrival (AoA) of an electromagnetic wave is determined. A phase of an antenna signal associated with each of two receive antennas is measured. A measured phase difference of arrival (PDoA) of the electromagnetic wave is determined based on the measured phase of each of the antenna signals. The measured PDoA is corrected based on one or more crosstalk factors associated with the two receive antennas. The AoA of the electromagnetic wave at the two receive antennas is generated based on the corrected measured PDoA.
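The two-element relation underlying this abstract can be sketched as follows. The function name and the additive crosstalk offset are illustrative assumptions; the patent's crosstalk factors may take a more elaborate form.

```python
import math

def aoa_from_pdoa(phase1_rad, phase2_rad, wavelength_m, spacing_m,
                  crosstalk_offset_rad=0.0):
    """Estimate the angle of arrival from a measured phase difference.

    The crosstalk correction here is a hypothetical additive phase
    offset; a real system may apply a more complex correction model.
    """
    # Measured phase difference of arrival (PDoA), wrapped to [-pi, pi]
    pdoa = phase2_rad - phase1_rad
    pdoa = (pdoa + math.pi) % (2 * math.pi) - math.pi

    # Correct the measured PDoA for crosstalk between the two antennas
    corrected = pdoa - crosstalk_offset_rad

    # Two-element array relation: pdoa = 2*pi*d*sin(theta)/lambda
    sin_theta = corrected * wavelength_m / (2 * math.pi * spacing_m)
    sin_theta = max(-1.0, min(1.0, sin_theta))  # clamp numeric noise
    return math.asin(sin_theta)  # AoA in radians
```

With half-wavelength spacing, a PDoA of π/2 corresponds to an AoA of 30 degrees (π/6 radians).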
IMAGE PICKUP DEVICE AND METHOD OF TRACKING SUBJECT THEREOF
The present invention provides an image pickup device that recognizes the object the user is attempting to capture as the subject, tracks the movement of that subject, and can continue tracking even when the subject leaves the capturing area, so that the subject can always be reliably brought into focus. The image pickup device includes a main camera that captures the subject; an EVF that displays the image captured by the main camera; a sub-camera that captures the subject using a wider capturing region than the main camera; and a processing unit that extracts the subject from the images captured by the main camera and the sub-camera, tracks the extracted subject, and brings the subject into focus when an image of the subject is actually captured. When the subject moves outside of the capturing region of the main camera, the processing unit tracks the subject extracted from the image captured by the sub-camera.
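The handoff between the two cameras can be rendered as a small selection routine. The function name and the dictionary-of-detections representation are illustrative assumptions, not the patent's actual processing unit.

```python
def track_subject(main_detections, sub_detections, subject_id):
    """Prefer the main camera's detection of the subject; fall back to the
    wider-view sub-camera when the subject has left the main camera's
    capturing region (a simplified rendering of the handoff above).

    Each detections argument maps a subject id to its bounding box.
    Returns the bounding box and which camera it came from.
    """
    if subject_id in main_detections:
        return main_detections[subject_id], "main"
    if subject_id in sub_detections:
        return sub_detections[subject_id], "sub"
    return None, None  # subject not visible to either camera
```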
Camera for Locating Hidden Objects
A distance substantially between a camera and an object is measured preferably with a rangefinder. Positional coordinates including an altitude of the camera are determined. A pose including pitch and azimuth of the camera directed at the object is determined from sensors. Positional coordinates of the object are determined using at least the positional coordinates of the camera, the pose of the camera and the distance substantially between the camera and the object which are used to determine a location volume. A database is searched for objects located at least partially inside the location volume. The camera is part of a computing device with a screen. Search results are listed on the screen and an outline of a hidden object in the location volume is drawn on the screen.
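The core geometry, projecting the measured distance along the camera's pose from the camera's positional coordinates, can be sketched in a local east/north/up frame. This is a simplification: the patent works with geodetic coordinates and expands the result into a location volume to search the database.

```python
import math

def locate_object(cam_east, cam_north, cam_alt, azimuth_rad, pitch_rad,
                  distance_m):
    """Project the rangefinder distance along the camera's azimuth and
    pitch to estimate the object's position in a local east/north/up
    frame. Azimuth is measured clockwise from north; pitch is measured
    upward from the horizontal.
    """
    horiz = distance_m * math.cos(pitch_rad)  # ground-plane component
    return (cam_east + horiz * math.sin(azimuth_rad),     # east
            cam_north + horiz * math.cos(azimuth_rad),    # north
            cam_alt + distance_m * math.sin(pitch_rad))   # up
```

Uncertainty in the distance and pose measurements is what turns this single point into the location volume the abstract describes.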
ZOOMING CONTROL APPARATUS, IMAGE CAPTURING APPARATUS AND CONTROL METHODS THEREOF
A zooming control apparatus comprises an object detection unit configured to detect an object from an image; a first acquisition unit configured to acquire information regarding a distance to the object; and a zooming control unit configured to perform zooming control for automatically changing a zoom magnification according to at least one of second information that includes information regarding a size of the object detected by the object detection unit and first information regarding the distance to the object acquired by the first acquisition unit, wherein a condition for automatically changing the zoom magnification in the zooming control differs according to a reliability of the first information.
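The reliability-dependent condition can be illustrated with a minimal controller that widens its dead band when the distance information is unreliable. The function name, the target size fraction, and the specific thresholds are hypothetical values chosen for the sketch.

```python
def choose_zoom(current_zoom, object_size_frac, distance_reliable,
                target_size_frac=0.3):
    """Pick a zoom magnification so the detected object fills roughly the
    target fraction of the frame. When the distance estimate is
    unreliable, require a larger size error before changing zoom,
    i.e. the condition for automatically changing the magnification
    differs with the reliability of the distance information.
    """
    threshold = 0.05 if distance_reliable else 0.15
    error = target_size_frac - object_size_frac
    if abs(error) < threshold:
        return current_zoom  # within the dead band: hold zoom
    # Scale the magnification to bring the object to the target size
    return current_zoom * (target_size_frac / max(object_size_frac, 1e-6))
```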
Measuring device and measuring method for emulating an angle of departure determining test signal
A measuring device for providing an angle of departure determining test signal to a device under test is provided. The measuring device comprises a signal generator and a single output port. The signal generator is adapted to generate the angle of departure determining test signal, emulating an antenna array angle of departure determining signal comprised of a plurality of individual array antenna signals, thereby emulating an angle of departure of the angle of departure determining test signal. The single output port is adapted to output the angle of departure determining test signal to the device under test.
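The emulation idea, combining the individual array antenna signals into one composite signal carried on a single port, can be sketched at baseband for a uniform linear array. The function name and the phasor model are assumptions made for illustration.

```python
import cmath
import math

def emulate_aod_phasor(n_elements, spacing_wavelengths, aod_rad):
    """Composite baseband phasor of an n-element uniform linear array
    driven with the progressive phase that corresponds to the chosen
    angle of departure. Summing the per-element phasors emulates what
    the whole array would radiate, on a single output.
    """
    total = sum(
        cmath.exp(1j * 2 * math.pi * k * spacing_wavelengths
                  * math.sin(aod_rad))
        for k in range(n_elements)
    )
    return total / n_elements  # normalized composite signal
```

At broadside (zero angle of departure) the elements add coherently; at the array's null directions the composite phasor vanishes.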
Estimating pose in 3D space
Methods and devices for estimating position of a device within a 3D environment are described. Embodiments of the methods include sequentially receiving multiple image segments forming an image representing a field of view (FOV) comprising a portion of the environment. The image includes multiple sparse points that are identifiable based in part on a corresponding subset of image segments of the multiple image segments. The method also includes sequentially identifying one or more sparse points of the multiple sparse points when each subset of image segments corresponding to the one or more sparse points is received, and estimating a position of the device in the environment based on the identified one or more sparse points.
Systems and methods for deep learning-based shopper tracking
Systems and techniques are provided for tracking puts and takes of inventory items by subjects in an area of real space. A plurality of cameras with overlapping fields of view produce respective sequences of images of corresponding fields of view in the real space. In one embodiment, the system includes first image processors, including subject image recognition engines, receiving corresponding sequences of images from the plurality of cameras. The first image processors process images to identify subjects represented in the images in the corresponding sequences of images. The system includes second image processors, including background image recognition engines, receiving corresponding sequences of images from the plurality of cameras. The second image processors mask the identified subjects to generate masked images. Following this, the second image processors process the masked images to identify and classify background changes represented in the images in the corresponding sequences of images.
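The second image processors' masking step can be sketched as zeroing out the pixels of each identified subject before background-change analysis. Representing subjects as bounding boxes is a simplification; the patent's subject image recognition engines would supply richer segmentations.

```python
import numpy as np

def mask_subjects(image, subject_boxes, fill_value=0):
    """Blank out the pixels inside each identified subject's bounding box
    so a downstream background-change detector sees only the scene
    (a simplified stand-in for masking identified subjects).

    subject_boxes is an iterable of (x0, y0, x1, y1) pixel rectangles.
    """
    masked = image.copy()
    for (x0, y0, x1, y1) in subject_boxes:
        masked[y0:y1, x0:x1] = fill_value  # remove the subject's pixels
    return masked
```

Differencing consecutive masked images then isolates background changes, such as an inventory item appearing on or disappearing from a shelf.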