
DEVICE AND COMPUTER-IMPLEMENTED METHOD FOR OBJECT TRACKING
20230051014 · 2023-02-16 ·

A device and computer-implemented method for object tracking. The method comprises providing a sequence of digital images and determining a sequence of relational graph embeddings. A first relational graph embedding of the sequence comprises a first object embedding representing a first object in a first digital image of the sequence of digital images, and a first relation embedding of a relation for the first object embedding. The first relation embedding relates the first object embedding to embeddings representing other objects of the first digital image in the first relational graph embedding, and to embeddings in a second relational graph embedding of the sequence that represent objects of a second digital image of the sequence of digital images.
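The abstract leaves the form of the relation embedding open. One common way to relate an object embedding to its same-frame and next-frame neighbors is softmax-weighted (attention-style) aggregation; the sketch below is purely illustrative and is not the patented construction:

```python
import numpy as np

def relation_embedding(obj_emb, same_frame_embs, next_frame_embs):
    """Hypothetical sketch: relate one object embedding to the other
    object embeddings of its frame and of the next frame via
    softmax-weighted (attention-style) aggregation."""
    neighbors = np.vstack([same_frame_embs, next_frame_embs])  # (N, D)
    scores = neighbors @ obj_emb                # similarity to each neighbor
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                    # softmax over neighbors
    return weights @ neighbors                  # (D,) relation embedding

# toy example: one tracked object, two neighbors per frame
rng = np.random.default_rng(0)
obj = rng.normal(size=4)
rel = relation_embedding(obj, rng.normal(size=(2, 4)), rng.normal(size=(2, 4)))
```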

AIRCRAFT DOOR CAMERA SYSTEM FOR DOCKING ALIGNMENT MONITORING
20230052176 · 2023-02-16 ·

A camera with a field of view toward an external environment of an aircraft is disposed within an aircraft door such that a ground surface is within the field of view of the camera during taxiing of the aircraft. A display device is disposed within an interior of the aircraft, and a processor is operatively coupled to the camera and to the display device. The processor analyzes image data captured by the camera for docking guidance by: identifying, within the captured image data, a region on the ground surface corresponding to an alignment fiducial indicating a parking location for the aircraft; determining, based on that region, a relative location of the aircraft with respect to the alignment fiducial; and outputting an indication of the relative location of the aircraft with respect to the alignment fiducial.
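The "determining a relative location" step amounts to mapping the detected fiducial from image coordinates to ground coordinates. A minimal sketch, assuming a pre-calibrated image-to-ground homography (the matrix and scales below are illustrative, not from the patent):

```python
import numpy as np

def relative_location(fiducial_centroid_px, H_img_to_ground):
    """Hypothetical sketch: map the pixel centroid of the detected
    alignment fiducial through a calibrated homography to ground-plane
    coordinates (metres) in the aircraft's frame; that point is the
    fiducial's location relative to the aircraft."""
    u, v = fiducial_centroid_px
    p = H_img_to_ground @ np.array([u, v, 1.0])
    return p[:2] / p[2]            # dehomogenize -> (x_lateral, y_forward)

# toy calibration: diagonal homography, fiducial 3 px right, 40 px ahead
H = np.diag([0.05, 0.25, 1.0])     # assumed px->metre scale factors
x, y = relative_location((3.0, 40.0), H)
```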

Detecting interactions with non-discretized items and associating interactions with actors using digital images

Commercial interactions with non-discretized items such as liquids in carafes or other dispensers are detected and associated with actors using images captured by one or more digital cameras that include the carafes or dispensers within their fields of view. The images are processed to detect body parts of actors and other aspects therein, and not only to determine that a commercial interaction has occurred but also to identify the actor that performed it. Based on information or data determined from such images, movements of body parts associated with raising, lowering or rotating one or more carafes or other dispensers may be detected, and a commercial interaction involving such carafes or dispensers may be detected and associated with a specific actor accordingly.

Temporal information prediction in autonomous machine applications

In various examples, a sequential deep neural network (DNN) may be trained using ground truth data generated by correlating (e.g., by cross-sensor fusion) sensor data with image data representative of a sequence of images. In deployment, the sequential DNN may leverage the sensor correlation to compute various predictions using image data alone. The predictions may include velocities, in world space, of objects in the field of view of an ego-vehicle, current and future locations of the objects in image space, and/or a time-to-collision (TTC) between the objects and the ego-vehicle. These predictions may be used as part of a perception system for understanding and reacting to a current physical environment of the ego-vehicle.
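The TTC prediction rests on a simple kinematic relationship: remaining distance divided by closing speed. A minimal sketch of that arithmetic (the function and its arguments are illustrative, not the patent's interface):

```python
def time_to_collision(distance_m, ego_speed_mps, object_speed_mps):
    """Hypothetical sketch of the TTC idea: with a closing speed (ego
    minus object, along the line of sight), TTC is remaining distance
    over closing speed; there is no collision if the gap is not closing."""
    closing = ego_speed_mps - object_speed_mps
    if closing <= 0.0:
        return float("inf")                 # gap is constant or growing
    return distance_m / closing

# ego at 20 m/s approaching a lead vehicle at 12 m/s, 40 m ahead
ttc = time_to_collision(40.0, 20.0, 12.0)   # 40 / 8 = 5.0 s
```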

System and method for visually tracking persons and imputing demographic and sentiment data

A visual tracking system for tracking and identifying persons within a monitored location comprises a plurality of cameras and a visual processing unit. Each camera produces a sequence of video frames depicting one or more of the persons. The visual processing unit is adapted to maintain a coherent track identity for each person across the plurality of cameras using a combination of motion data and visual featurization data, and to further determine demographic data and sentiment data using the visual featurization data. The visual tracking system further has a recommendation module adapted to identify a customer need for each person using the sentiment data of the person in addition to context data, and to generate an action recommendation for addressing the customer need. The visual tracking system is operably connected to a customer-oriented device configured to perform a customer-oriented action in accordance with the action recommendation.

Navigation device capable of estimating contamination and denoising image frame
11582413 · 2023-02-14 ·

There is provided an optical navigation device including an image sensor and a processing unit. The image sensor outputs successive image frames. The processing unit calculates a contamination level and a motion signal based on filtered image frames, and determines whether to update a fixed pattern noise (FPN) stored in a frame buffer according to a level of FPN subtraction, the calculated contamination level and the calculated motion signal to optimize the update of the fixed pattern noise.
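The update decision described here is a gating rule over three signals. The sketch below shows one plausible form of that gate together with a running-average FPN refresh; all threshold values and the averaging scheme are illustrative assumptions, not details from the patent:

```python
def should_update_fpn(contamination, motion, fpn_level,
                      contamination_max=0.2, motion_max=0.5, fpn_level_min=0.1):
    """Hypothetical sketch of the gating in the abstract: refresh the
    stored fixed-pattern-noise frame only when the sensor window is
    clean, the device is (nearly) still, and the current level of FPN
    subtraction suggests the stored pattern has drifted."""
    return (contamination < contamination_max
            and motion < motion_max
            and fpn_level > fpn_level_min)

def update_fpn(fpn, frame, alpha=0.1):
    """Running-average FPN refresh (one common choice, not from the patent)."""
    return [(1 - alpha) * f + alpha * x for f, x in zip(fpn, frame)]

ok = should_update_fpn(contamination=0.05, motion=0.1, fpn_level=0.3)
new_fpn = update_fpn([1.0, 2.0], [2.0, 2.0])
```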

Passive wide-area three-dimensional imaging

Radar, lidar, and other active 3D imaging techniques require large, heavy sensors that consume lots of power. Passive 3D imaging techniques based on feature matching are computationally expensive and limited by the quality of the feature matching. Fortunately, there is a robust, computationally inexpensive way to generate 3D images from full-motion video acquired from a platform that moves relative to the scene. The full-motion video frames are registered to each other and mapped to the scene coordinates using data about the trajectory of the platform with respect to the scene. The time derivative of the registered frames equals the product of the height map of the scene, the projected angular velocity of the platform, and the spatial gradient of the registered frames. This relationship can be solved in (near) real time to produce the height map of the scene from the full-motion video and the trajectory.
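The stated relation, dI/dt = h · (ω · ∇I) for registered frames, can be solved per pixel in least squares over time. A minimal sketch under the simplifying assumptions of a constant projected angular velocity and unit frame spacing (the function and its parameters are illustrative):

```python
import numpy as np

def height_map(frames, omega, dt=1.0, eps=1e-8):
    """Sketch of solving dI/dt = h * (omega . grad I) per pixel, in
    least squares over time. `frames` is (T, H, W) registered video;
    `omega` = (wx, wy) is the projected angular velocity, assumed
    constant across frame pairs here."""
    dIdt = np.diff(frames, axis=0) / dt                 # (T-1, H, W)
    gy, gx = np.gradient(frames[:-1], axis=(1, 2))      # spatial gradients
    b = omega[0] * gx + omega[1] * gy                   # omega . grad I
    return (dIdt * b).sum(axis=0) / ((b * b).sum(axis=0) + eps)

# synthetic check: ramp image (grad_x = 1), omega = (2, 0), true h = 3
H, W = 4, 5
base = np.tile(np.arange(W, dtype=float), (H, 1))
frames = np.stack([base + 6.0 * t for t in range(3)])   # dI/dt = 6
h = height_map(frames, omega=(2.0, 0.0))                # 6 / (2*1) = 3
```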

HIGH-DEFINITION MAP CREATION METHOD AND DEVICE, AND ELECTRONIC DEVICE

A high-definition map creation method includes: obtaining point cloud data collected with respect to a target region, the point cloud data including K frames of point clouds and an initial pose of each frame of point cloud, K being an integer greater than 1; associating the K frames of point clouds with each other in accordance with the initial pose to obtain a first point cloud relation graph of the K frames of point clouds; performing point cloud registration on the K frames of point clouds in accordance with the first point cloud relation graph and the initial pose to obtain a target relative pose of each frame of point cloud in the K frames of point clouds; and splicing the K frames of point clouds in accordance with the target relative pose to obtain a point cloud map of the target region.
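The final splicing step reduces to transforming each frame's points into the map frame with its registered pose and concatenating. A minimal sketch of that step alone (pose representation and names are illustrative; the registration producing the poses is not shown):

```python
import numpy as np

def splice(point_clouds, poses):
    """Sketch of the splicing step: transform each frame's points into
    the map frame with its registered pose and concatenate.
    `poses` are 4x4 homogeneous transforms, map_T_frame."""
    merged = []
    for pts, T in zip(point_clouds, poses):             # pts: (N, 3)
        homo = np.hstack([pts, np.ones((len(pts), 1))])
        merged.append((homo @ T.T)[:, :3])
    return np.vstack(merged)

# two one-point "frames"; the second frame's pose shifts it +1 m in x
T0 = np.eye(4)
T1 = np.eye(4); T1[0, 3] = 1.0
cloud = splice([np.zeros((1, 3)), np.zeros((1, 3))], [T0, T1])
```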

CONTROL DEVICE, CONTROL METHOD, AND STORAGE MEDIUM
20230040374 · 2023-02-09 ·

A control device includes a storage device storing a program and a hardware processor. By executing the program stored in the storage device, the hardware processor: acquires a peripheral image of a mobile object, the image being captured by a fisheye camera mounted on the mobile object; calculates an instruction regarding future traveling of the mobile object as a base trajectory in an orthogonal coordinate system; coordinate-converts the base trajectory in the orthogonal coordinate system into a base trajectory in a fisheye camera coordinate system; calculates a risk of the base trajectory in the fisheye camera coordinate system on the basis of the peripheral image and the base trajectory in the fisheye camera coordinate system; and calculates a traveling trajectory by modifying the base trajectory in the orthogonal coordinate system on the basis of that risk.
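The coordinate conversion from an orthogonal (metric) frame into fisheye image coordinates is commonly done with the equidistant model r = f·θ. The abstract does not specify the camera model, so the sketch below is an assumption; the focal length and principal point are likewise illustrative:

```python
import numpy as np

def to_fisheye(points_cam, f=300.0, cx=640.0, cy=480.0):
    """Hypothetical sketch: project base-trajectory points (metres,
    camera frame, z forward) into fisheye image coordinates with the
    equidistant model r = f * theta. Intrinsics are illustrative."""
    x, y, z = points_cam.T
    rho = np.hypot(x, y)
    theta = np.arctan2(rho, z)             # angle off the optical axis
    r = f * theta                          # equidistant fisheye mapping
    scale = np.where(rho > 0, r / np.where(rho > 0, rho, 1.0), 0.0)
    return np.stack([cx + x * scale, cy + y * scale], axis=1)

# a point straight ahead lands on the principal point
uv = to_fisheye(np.array([[0.0, 0.0, 5.0], [1.0, 0.0, 1.0]]))
```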

SYSTEM AND METHOD FOR EVALUATING SPORT BALL DATA
20230040575 · 2023-02-09 ·

A system and a method for evaluating sport ball data. The method includes calibrating a first coordinate system of a first camera to a second coordinate system of a baseball field; capturing, with the first camera, one or more images including a first batter; determining biometric characteristics of the first batter based on the one or more images and the calibration of the first camera to the baseball field; mapping the biometric characteristics of the first batter to an upper positional limit and a lower positional limit of a first strike zone for the first batter; and determining positional limits of the first strike zone in the second coordinate system.
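The mapping from biometrics to strike-zone limits can be sketched from the MLB rulebook definition: upper limit at the midpoint between the top of the shoulders and the top of the pants, lower limit at the hollow beneath the kneecap. Which keypoints the patent actually uses is not specified, so the inputs below are illustrative:

```python
def strike_zone_limits(shoulder_y, waist_y, knee_hollow_y):
    """Hypothetical sketch: map batter keypoint heights (metres, in the
    field coordinate system) to strike-zone limits per the rulebook
    definition. Keypoint choice and units are assumptions."""
    upper = (shoulder_y + waist_y) / 2.0   # midpoint of shoulders and pants
    lower = knee_hollow_y                  # hollow beneath the kneecap
    return upper, lower

upper, lower = strike_zone_limits(1.45, 1.05, 0.45)
```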