Patent classifications
G01S3/00
Systems and methods for deep learning-based shopper tracking
Systems and techniques are provided for tracking puts and takes of inventory items by subjects in an area of real space. A plurality of cameras with overlapping fields of view produce respective sequences of images of corresponding fields of view in the real space. In one embodiment, the system includes first image processors, including subject image recognition engines, receiving corresponding sequences of images from the plurality of cameras. The first image processors process images to identify subjects represented in the images in the corresponding sequences of images. The system includes second image processors, including background image recognition engines, receiving corresponding sequences of images from the plurality of cameras. The second image processors mask the identified subjects to generate masked images. Following this, the second image processors process the masked images to identify and classify background changes represented in the images in the corresponding sequences of images.
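The two-stage pipeline described above (mask out identified subjects, then look for changes in what remains) can be sketched as follows. This is not code from the patent; it is a minimal illustration assuming subjects are given as bounding boxes and the background model is simply the previous masked frame, with zero used as the mask sentinel (a simplification, since a real pipeline would track valid-pixel masks separately):

```python
import numpy as np

def mask_subjects(image, subject_boxes):
    """Zero out regions occupied by identified subjects.

    subject_boxes is a hypothetical (x0, y0, x1, y1) box format; the patent's
    subject image recognition engines would supply these regions.
    """
    masked = image.copy()
    for x0, y0, x1, y1 in subject_boxes:
        masked[y0:y1, x0:x1] = 0
    return masked

def detect_background_changes(masked_now, masked_before, threshold=30):
    """Flag pixels whose background intensity changed between masked frames.

    Pixels masked (zeroed) in either frame are ignored, so only changes to
    the visible background (e.g. items put or taken from a shelf) are kept.
    """
    valid = (masked_now > 0) & (masked_before > 0)
    diff = np.abs(masked_now.astype(int) - masked_before.astype(int))
    return valid & (diff > threshold)
```

In practice the "background image recognition engines" would classify the flagged regions (e.g. item added vs. removed) rather than just thresholding pixel differences, but the masking step keeps moving subjects from polluting that comparison.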
System and method of processing video of a tileable wall
In one or more embodiments, one or more systems, processes, and/or methods may receive first video streams, of a user, from respective first cameras, at respective first locations and construct, from the first video streams, a single video stream that includes forward-facing images of the user. The single video stream constructed from the first video streams may be provided to a network. One or more movements of the user may be tracked, and based on the tracking, a hand-off to second cameras may occur. The one or more systems, processes, and/or methods may receive second video streams, of the user, from respective second cameras, at respective second locations and construct, from the second video streams, the single video stream that includes forward-facing images of the user. The single video stream constructed from the second video streams may be provided to the network.
CLASSIFYING FACIAL EXPRESSIONS USING EYE-TRACKING CAMERAS
Images of a plurality of users are captured concurrently with the plurality of users evincing a plurality of expressions. The images are captured using one or more eye tracking sensors implemented in one or more head mounted devices (HMDs) worn by the plurality of users. A machine learnt algorithm is trained to infer labels indicative of expressions of the users in the images. A live image of a user is captured using an eye tracking sensor implemented in an HMD worn by the user. A label of an expression evinced by the user in the live image is inferred using the machine learnt algorithm that has been trained to predict labels indicative of expressions. The images of the users and the live image can be personalized by combining the images with personalization images of the users evincing a subset of the expressions.
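The train-then-infer flow above can be illustrated with a toy classifier. This is not the patent's machine-learnt algorithm; it is a nearest-centroid stand-in assuming eye-tracking images are flattened to feature vectors and labels are expression names:

```python
import numpy as np

class ExpressionClassifier:
    """Toy nearest-centroid labeler standing in for the trained model.

    fit() averages the feature vectors per expression label; predict()
    returns the label whose centroid is closest to the live image.
    """
    def fit(self, images, labels):
        self.centroids = {
            label: np.mean([im for im, l in zip(images, labels) if l == label], axis=0)
            for label in set(labels)
        }
        return self

    def predict(self, image):
        return min(self.centroids,
                   key=lambda label: np.linalg.norm(image - self.centroids[label]))
```

A real system would use a deep network trained on many users' eye-region images, but the interface is the same: labeled training captures in, an inferred expression label for each live capture out.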
SYSTEMS AND METHODS FOR MULTI-TARGET TRACKING AND AUTOFOCUSING BASED ON DEEP MACHINE LEARNING AND LASER RADAR
Systems and methods for recognizing, tracking, and focusing a moving target are disclosed. In accordance with the disclosed embodiments, the systems and methods may recognize the moving target traveling relative to an imaging device; track the moving target; and determine a distance to the moving target from the imaging device.
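The recognize-track-range loop can be sketched as below. This is an illustrative assumption, not the patented method: the tracked center is smoothed with an exponential moving average, and the distance for autofocus is read from a laser-radar depth map at that center:

```python
import numpy as np

def track_and_range(prev_center, detection_center, depth_map, alpha=0.5):
    """Update a tracked target center and read its range from a depth map.

    prev_center / detection_center: (x, y) pixel coordinates.
    depth_map: per-pixel distances (e.g. from laser radar), indexed [y, x].
    alpha: smoothing weight given to the new detection (hypothetical parameter).
    Returns the smoothed center and the distance used to drive autofocus.
    """
    cx = alpha * detection_center[0] + (1 - alpha) * prev_center[0]
    cy = alpha * detection_center[1] + (1 - alpha) * prev_center[1]
    distance = float(depth_map[int(round(cy)), int(round(cx))])
    return (cx, cy), distance
```

Per frame, a detector (the "deep machine learning" component) supplies `detection_center`, the tracker smooths it, and the lens focus is set from the returned distance.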
Electronic apparatus, method, and storage medium
An electronic apparatus includes a control unit configured to perform control, while a specific screen on which a specific function can be set is displayed on a first display unit on which a touch operation can be performed, and, if a specific state regarding the specific function is set, to cancel the specific state in response to a touch operation within a first area on the display surface of the first display unit. In addition, the control unit is configured to perform control, while the specific screen is displayed on a second display unit, and, if the specific state is set, to cancel the specific state in response to a touch operation at any position within a second area that is larger than the first area on the display surface.
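The cancellation logic above amounts to a hit test whose accepted area depends on which display is showing the screen. A minimal sketch, assuming rectangular areas given as (x0, y0, x1, y1) and display identifiers of the author's choosing (both are illustrative, not from the patent):

```python
def should_cancel(touch, display, first_area, second_area):
    """Return True if the touch falls inside the cancellation area for the
    active display; the second display's area is the larger of the two."""
    x, y = touch
    x0, y0, x1, y1 = first_area if display == "first" else second_area
    return x0 <= x <= x1 and y0 <= y <= y1
```

The same touch can thus cancel the specific state on the second display unit while being ignored on the first, because the second area encloses more of the display surface.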
Composite tensor beamforming method for electromagnetic vector coprime planar array
The present invention belongs to the field of array signal processing and relates to a composite tensor beamforming method for an electromagnetic vector coprime planar array. The method includes: building an electromagnetic vector coprime planar array; performing tensor modeling of the signal received by the electromagnetic vector coprime planar array; designing a three-dimensional weight tensor corresponding to each coprime sparse uniform sub-planar array; forming a tensor beam power pattern of each coprime sparse uniform sub-planar array; and performing electromagnetic vector coprime planar array tensor beamforming based on coprime composite processing of the sparse uniform sub-planar arrays. Starting from the principle of tensor spatial filtering of the signals received by the two sparse uniform sub-planar arrays that compose the electromagnetic vector coprime planar array, the present invention forms a coprime composite processing method based on the sparse uniform sub-planar array output signals.