G06T7/231

Vision system for a motor vehicle and method of controlling a vision system
10706589 · 2020-07-07

A motor vehicle vision system (10) includes a pair of imaging devices (12a, 12b) forming a stereo imaging apparatus (11) and a data processing apparatus (14) for rectifying images captured by the stereo imaging apparatus (11), matching the rectified images, and detecting an object in the surroundings of the motor vehicle. For each image element (43) of a rectified image from one imaging device, the data processing apparatus (14) searches for a best-matching image element (44) in the corresponding rectified image from the other imaging device. The search yields vertical shift information from which a vertical shift from the image element (43) to the best-matching image element (44) is derivable. The data processing apparatus (14) calculates a pitch angle error and/or a roll angle error of or between the imaging devices (12a, 12b) from the vertical shift information.
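The core idea above, deriving a pitch angle error from the vertical component of stereo matches, can be sketched as follows. The small-angle pinhole model and the simple averaging over matches are illustrative assumptions, not the patent's exact computation:

```python
import math

def pitch_error_from_vertical_shifts(vertical_shifts_px, focal_length_px):
    """Estimate a pitch angle error (radians) between two stereo cameras
    from the vertical shifts of best-matching image elements.

    In perfectly rectified stereo images, matching elements lie on the
    same image row; a consistent vertical offset therefore indicates a
    pitch misalignment. Averaging over many matches suppresses noise.
    """
    mean_shift = sum(vertical_shifts_px) / len(vertical_shifts_px)
    # Pinhole model: a vertical image shift s relates to a pitch angle
    # error a via s ~ f * tan(a), hence a = atan(s / f).
    return math.atan2(mean_shift, focal_length_px)
```

A roll angle error would be estimated analogously, but from how the vertical shift varies across image columns rather than from its mean.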

ELECTRONIC CIRCUIT AND ELECTRONIC DEVICE PERFORMING MOTION ESTIMATION BASED ON DECREASED NUMBER OF CANDIDATE BLOCKS
20200193618 · 2020-06-18

An electronic circuit includes a block determinator, a candidate selector, and a motion vector generator to perform motion estimation between images. The block determinator determines a current block corresponding to a current location on an image and candidate blocks corresponding to relative locations with respect to the current location for each recursion for blocks constituting the image. The candidate selector selects some of the candidate blocks. The motion vector generator generates a motion vector for the current block based on one reference patch which is determined from reference patches indicated by candidate motion vectors of the selected candidate blocks. At least one of the relative locations corresponding to the candidate blocks selected in a first recursion is different from each of the relative locations corresponding to the candidate blocks selected in a second recursion following the first recursion.
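A minimal sketch of the recursion-dependent candidate selection follows. The specific offsets are hypothetical; the point the abstract makes is only that the reduced candidate sets differ between consecutive recursions:

```python
def select_candidate_offsets(recursion_index):
    """Return a reduced set of relative block locations for one recursion.

    Alternating the offsets between consecutive recursions lets the
    estimator sample different spatial neighbours over time while keeping
    the per-recursion candidate count (and thus the workload) low.
    """
    even_offsets = [(-1, 0), (0, -1), (1, -1)]  # recursions 0, 2, ...
    odd_offsets = [(1, 0), (0, -1), (-1, -1)]   # recursions 1, 3, ...
    return even_offsets if recursion_index % 2 == 0 else odd_offsets

def candidate_blocks(current_xy, recursion_index):
    """Map the selected relative offsets to absolute candidate-block
    locations around the current block."""
    x, y = current_xy
    return [(x + dx, y + dy)
            for dx, dy in select_candidate_offsets(recursion_index)]
```

The motion vector generator would then evaluate only the reference patches pointed to by these candidates' motion vectors, rather than an exhaustive search window.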

Method and system for estimating motion between images, particularly in ultrasound spatial compounding
10679349 · 2020-06-09

Methods are provided for estimating motion between images associated with a common region of interest, the method comprising: providing frames including a reference frame and a target frame; determining a global motion vector based on a comparison of the reference and target frames; for a plurality of local blocks, determining local motion vectors between the reference and target frames based on the global motion vector to form globally adjusted local motion vectors; and using the globally adjusted local motion vectors as the motion estimate. A corresponding system is also disclosed.
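A toy block-matching sketch of the globally adjusted local search: for each block, a small window centred on the global motion vector is searched. The SAD cost and the tiny search window are illustrative assumptions; the abstract does not fix a particular matching cost:

```python
def sad(ref, tgt, bx, by, dx, dy, bs):
    """Sum of absolute differences between the bs x bs block of `ref` at
    (bx, by) and the block of `tgt` displaced by (dx, dy)."""
    return sum(
        abs(ref[by + j][bx + i] - tgt[by + dy + j][bx + dx + i])
        for j in range(bs)
        for i in range(bs)
    )

def adjusted_local_vector(ref, tgt, bx, by, bs, global_mv, search=1):
    """Refine the global motion vector for one local block by searching a
    small window centred on it; returns the globally adjusted local
    motion vector (the displacement with the lowest matching cost)."""
    gx, gy = global_mv
    best_cost, best_mv = None, None
    for dy in range(gy - search, gy + search + 1):
        for dx in range(gx - search, gx + search + 1):
            cost = sad(ref, tgt, bx, by, dx, dy, bs)
            if best_cost is None or cost < best_cost:
                best_cost, best_mv = cost, (dx, dy)
    return best_mv
```

Centring each local search on the global vector keeps the per-block search window small while still correcting for block-specific motion, which is the benefit claimed for spatial compounding.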

HAND DETECTION AND TRACKING METHOD AND DEVICE
20200134838 · 2020-04-30

For each frame of a video, a determination is made whether an image of a hand exists in the frame. When at least one frame of the video includes the image of the hand, locations of the hand in the frames of the video are tracked to obtain a tracking result. A verification is performed to determine whether the tracking result is valid in a current frame of the frames of the video. When the tracking result is valid in the current frame of the video, a location of the hand is tracked in a next frame. When the tracking result is not valid in the current frame, localized hand image detection is performed on the current frame.
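The detect-then-track control flow described above can be sketched as a simple loop. Here `detect`, `track`, and `is_valid` are hypothetical callables standing in for the hand detector, the tracker, and the verification step:

```python
def track_hand(frames, detect, track, is_valid):
    """Per-frame hand tracking with validity verification: track the hand
    from frame to frame while the tracking result is verified as valid,
    and fall back to (localized) detection on the current frame when the
    tracking result is not valid."""
    location = None
    locations = []
    for frame in frames:
        if location is None:
            location = detect(frame)  # no prior location: detect from scratch
        else:
            candidate = track(frame, location)
            # keep the tracked location only if verification passes
            location = candidate if is_valid(frame, candidate) else detect(frame)
        locations.append(location)
    return locations
```

The verification step is what lets the method avoid running the (more expensive) detector on every frame while still recovering from tracking drift.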

System and method for evaluating the perception system of an autonomous vehicle

A method and apparatus are provided for optimizing one or more object detection parameters used by an autonomous vehicle to detect objects in images. The autonomous vehicle may capture the images using one or more sensors. The autonomous vehicle may then determine object labels and their corresponding object label parameters for the detected objects. The captured images and the object label parameters may be communicated to an object identification server. The object identification server may request that one or more reviewers identify objects in the captured images. The object identification server may then compare the identification of objects by reviewers with the identification of objects by the autonomous vehicle. Depending on the results of the comparison, the object identification server may recommend or perform the optimization of one or more of the object detection parameters.
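The comparison step on the object identification server might be summarized as computing agreement metrics between the vehicle's detections and the reviewers' identifications. The sketch below matches by label string only, which is a simplification; a real system would also match detections spatially:

```python
def compare_object_labels(vehicle_labels, reviewer_labels):
    """Compare objects identified by the autonomous vehicle with objects
    identified by human reviewers for the same image, returning precision
    and recall as simple evidence for whether the detection parameters
    need re-tuning."""
    vehicle, reviewer = set(vehicle_labels), set(reviewer_labels)
    true_positives = len(vehicle & reviewer)
    precision = true_positives / len(vehicle) if vehicle else 1.0
    recall = true_positives / len(reviewer) if reviewer else 1.0
    return precision, recall
```

Low precision would suggest the vehicle is over-detecting (false positives), low recall that it is missing objects the reviewers found; either outcome could trigger the recommended parameter optimization.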

Method for motion estimation between two images of an environmental region of a motor vehicle, computing device, driver assistance system as well as motor vehicle

The invention relates to a method for motion estimation between two images of an environmental region (9) of a motor vehicle (1) captured by a camera (4) of the motor vehicle (1), wherein the following steps are performed: a) determining at least two image areas of a first image as at least two first blocks (B) in the first image; b) for each first block (B), defining a respective search region in a second image for searching the respective search region in the second image for a second block (B) corresponding to the respective first block (B); c) determining a cost surface (18) for each first block (B) and its respective search region; d) determining an averaged cost surface (19) for one of the at least two first blocks (B) based on the cost surfaces (18); e) identifying a motion vector (v) for the one of the first blocks (B) describing a motion between the location of the first block (B) in the first image and that of the corresponding second block (B) in the second image. The invention also relates to a computing device (3), a driver assistance system (2) as well as a motor vehicle (1).
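A minimal sketch of steps c) through e), assuming cost surfaces stored as 2-D lists indexed by candidate displacement (the representation and the element-wise mean are illustrative choices, not the patent's exact formulation):

```python
def average_cost_surfaces(cost_surfaces):
    """Element-wise average of several blocks' cost surfaces (step d).
    Averaging makes the cost minimum less sensitive to noise or weak
    texture in any single block."""
    n = len(cost_surfaces)
    rows, cols = len(cost_surfaces[0]), len(cost_surfaces[0][0])
    return [
        [sum(cs[y][x] for cs in cost_surfaces) / n for x in range(cols)]
        for y in range(rows)
    ]

def motion_vector_from_surface(surface, search_radius):
    """Locate the minimum of a cost surface indexed over displacements
    -search_radius..+search_radius and return it as a motion vector
    (step e)."""
    cost, mv = min(
        (surface[y][x], (x - search_radius, y - search_radius))
        for y in range(len(surface))
        for x in range(len(surface[0]))
    )
    return mv
```

Each entry of a cost surface holds the matching cost of one candidate displacement within the block's search region; the minimum of the averaged surface then gives a motion vector supported by several blocks at once.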