G06T7/215

VEHICLE USING FULL-VELOCITY DETERMINATION WITH RADAR

A computer includes a processor and a memory storing instructions executable by the processor to receive radar data including a radar pixel having a radial velocity from a radar; receive camera data including an image frame including camera pixels from a camera; map the radar pixel to the image frame; generate a region of the image frame surrounding the radar pixel; determine association scores for the respective camera pixels in the region; select a first camera pixel of the camera pixels from the region, the first camera pixel having a greatest association score of the association scores; and calculate a full velocity of the radar pixel using the radial velocity of the radar pixel and a first optical flow at the first camera pixel. The association scores indicate a likelihood that the respective camera pixels correspond to a same point in an environment as the radar pixel.
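The score-then-fuse pipeline described above can be sketched in a few lines. This is an illustrative 2-D simplification under assumed conventions, not the patent's actual method: `association_scores` assumes a Gaussian-in-pixel-distance likelihood, and `full_velocity_2d` decomposes velocity into radial and tangential components; all function and variable names are hypothetical.

```python
import numpy as np

def association_scores(radar_px, region_pixels, sigma=5.0):
    # Hypothetical score: Gaussian in pixel distance, so camera pixels
    # nearer the projected radar pixel are more likely to image the
    # same point in the environment as the radar pixel.
    d2 = np.sum((region_pixels - radar_px) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def full_velocity_2d(point, v_radial, flow_tangential):
    # Combine the radar's radial speed with a tangential speed derived
    # from optical flow at the selected camera pixel (2-D simplification).
    d = point / np.linalg.norm(point)   # radial unit vector
    t = np.array([-d[1], d[0]])         # tangential unit vector
    return v_radial * d + flow_tangential * t

# Select the camera pixel in the region with the greatest association
# score, then fuse its optical flow with the radar radial velocity.
region = np.array([[3.0, 4.0], [10.0, 0.0], [0.0, 20.0]])
scores = association_scores(np.array([10.0, 1.0]), region)
best = region[int(np.argmax(scores))]
velocity = full_velocity_2d(best, 5.0, 2.0)
```

In practice the tangential term would come from projecting the optical flow at the selected pixel through the camera model; here it is passed in directly to keep the sketch self-contained.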

AUTONOMOUS MACHINE HAVING VISION SYSTEM FOR NAVIGATION AND METHOD OF USING SAME

Vision systems for autonomous machines and methods of using same during machine localization are provided. Exemplary systems and methods may reduce computing resources needed to perform vision-based localization by selecting the most appropriate camera from two or more cameras, and optionally selecting only a portion of the selected camera's field of view, from which to perform vision-based location correction. Other embodiments may provide camera lens coverings that maintain optical clarity while operating within debris-filled environments.
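The camera-selection idea above can be illustrated with a minimal sketch. The selection criterion (most visible landmarks) and the region-of-interest crop are assumptions for illustration only; the names are hypothetical and not drawn from the patent.

```python
def select_camera(visible_landmarks):
    # Pick the camera whose field of view currently contains the most
    # known map landmarks; vision-based location correction then runs
    # on that camera's image only, saving compute.
    return max(visible_landmarks, key=visible_landmarks.get)

def crop_roi(image, center, half=2):
    # Optionally restrict processing to a window around the expected
    # landmark location instead of the full field of view.
    r, c = center
    return [row[max(0, c - half):c + half]
            for row in image[max(0, r - half):r + half]]

camera = select_camera({"front": 12, "left": 3, "rear": 0})
```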

PHYSICAL ABILITY EVALUATION SERVER, PHYSICAL ABILITY EVALUATION SYSTEM, AND PHYSICAL ABILITY EVALUATION METHOD
20230230259 · 2023-07-20

A physical ability evaluation server includes an image processing unit that executes evaluation score calculation processing on a plurality of still images included in a measurement video to calculate a physical ability evaluation score, a physical ability evaluation unit that evaluates the physical ability based on the evaluation score, and an evaluation result notification unit that creates and outputs an evaluation report based on the evaluation result. The evaluation score calculation processing includes a first process of acquiring joint position coordinates by pose estimation for each still image, a second process of acquiring physique information by segmentation for a first still image corresponding to a first target period, and a third process of calculating the evaluation score of physical ability, for a second still image corresponding to a second target period, by a predetermined calculation formula using the information acquired in the first and second processes.
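The three-process structure can be sketched as follows. The patent only says a "predetermined calculation formula" is used, so the formula below (vertical hip travel normalized by segmented body height) is a hypothetical stand-in, and all names are illustrative.

```python
def evaluation_score(frames, body_height):
    # First process (assumed done upstream): each frame dict holds joint
    # coordinates from pose estimation, e.g. frames[i]["hip"] = (x, y).
    # Second process supplies `body_height` via segmentation of the
    # first target period's still image.
    # Third process: a hypothetical formula combining both -- here,
    # vertical hip travel over the window, normalized by body height.
    ys = [f["hip"][1] for f in frames]
    return (max(ys) - min(ys)) / body_height

score = evaluation_score(
    [{"hip": (0.0, 1.00)}, {"hip": (0.0, 0.55)}],
    body_height=1.8,
)
```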

Enhanced animation generation based on motion matching using local bone phases

Systems and methods are provided for enhanced animation generation based on motion matching using local bone phases. An example method includes accessing first animation control information generated for a first frame of an electronic game, the information including local bone phases that represent phase information associated with contacts of a plurality of rigid bodies of an in-game character with an in-game environment. The method further includes executing a local motion matching process for each of the plurality of local bone phases and generating, for a second frame of the electronic game, a second pose of the character model based on the plurality of matched local poses.
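The per-bone-phase matching step can be sketched as a nearest-neighbour search over a pose database, one query per bone group, followed by combining the matched local poses into a full second-frame pose. The database layout and the simple dict-merge blend are assumptions for illustration; real systems blend poses with weights and inertialization.

```python
import numpy as np

def match_local_pose(query_phase, database):
    # Nearest-neighbour lookup: return the stored local pose whose
    # bone-phase feature vector is closest to the query feature.
    feats = np.array([entry["phase"] for entry in database])
    idx = int(np.argmin(np.linalg.norm(feats - query_phase, axis=1)))
    return database[idx]["pose"]

def blend_poses(matched):
    # Combine the locally matched poses (one per bone group) into a
    # single second-frame pose; here a simple dict merge.
    pose = {}
    for p in matched:
        pose.update(p)
    return pose

db = [
    {"phase": [0.0, 0.0], "pose": {"arm": "idle"}},
    {"phase": [1.0, 1.0], "pose": {"arm": "swing"}},
]
second_pose = blend_poses([
    match_local_pose(np.array([0.9, 1.1]), db),
    {"leg": "plant"},
])
```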

System and method for generating accurate hyperlocal nowcasts
11561326 · 2023-01-24

A computing system includes at least one processor, and a memory communicatively coupled to the at least one processor. The processor is configured to receive at least two successive radar images of precipitation data, generate a motion vector field using the at least two successive radar images, forecast linear prediction imagery of future precipitation using the motion vector field, and generate corrected output imagery corresponding to the forecasted linear prediction imagery of the future precipitation corrected by a first neural network. In addition, the processor is further configured to receive, by a second neural network, the linear prediction imagery, and one of observed imagery and the corrected output imagery, and distinguish, by the second neural network, between the corrected output imagery and the observed imagery to produce conditioned output imagery. The processor is also configured to display the conditioned output imagery on a display.
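The motion-estimation and linear-prediction stages can be sketched minimally. The patent describes a dense motion vector field plus two neural networks (a corrector and a discriminator that conditions the output); the sketch below covers only the extrapolation stage, and simplifies the field to a single global integer shift found by brute-force matching.

```python
import numpy as np

def estimate_shift(prev, curr, max_shift=2):
    # Stand-in for a dense motion vector field: find the single integer
    # (dy, dx) shift that best maps the earlier radar image onto the
    # later one, by exhaustive search over small displacements.
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            err = np.sum((np.roll(prev, (dy, dx), axis=(0, 1)) - curr) ** 2)
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

def linear_forecast(curr, shift):
    # Linear prediction: advect the current precipitation field one
    # more step along the estimated motion vector.
    return np.roll(curr, shift, axis=(0, 1))
```

In the described system, the forecast produced this way would then be refined by the first neural network and scored against observed imagery by the second.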
