G06T2207/30261

METHOD FOR DETERMINING OBJECT INFORMATION RELATING TO AN OBJECT IN A VEHICLE ENVIRONMENT, CONTROL UNIT AND VEHICLE
20220398852 · 2022-12-15

The disclosure relates to a method for determining object information relating to an object in an environment of a vehicle having a camera. The method includes: capturing a first image of the environment with the camera from a first position; changing the position of the camera; capturing a second image of the environment with the camera from a second position; and determining object information relating to an object by selecting at least one first pixel in the first image and at least one second pixel in the second image such that they are assigned to the same object point of the object, and determining object coordinates of the assigned object point by triangulation. Changing the position of the camera is brought about by controlling an active actuator system in the vehicle. The actuator system adjusts the camera by an adjustment distance without changing a driving condition of the vehicle.
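For illustration, a minimal sketch of the triangulation step follows, assuming a pinhole camera with a known focal length and an adjustment distance along the camera's horizontal axis; the function name and interface are hypothetical, not from the publication.

```python
import numpy as np

def triangulate_point(p1, p2, f, baseline):
    """Recover 3-D coordinates of an object point seen from two camera
    positions separated by a known horizontal adjustment distance.

    p1, p2   : (u, v) pixel coordinates of the same object point in the
               first and second image (principal point already subtracted)
    f        : focal length in pixels
    baseline : adjustment distance between the two positions in metres
    """
    disparity = p1[0] - p2[0]          # horizontal pixel shift
    if abs(disparity) < 1e-6:
        raise ValueError("object point too distant or pixels mismatched")
    z = f * baseline / disparity       # depth from similar triangles
    x = p1[0] * z / f                  # back-project to metric coordinates
    y = p1[1] * z / f
    return np.array([x, y, z])

# Example: a point shifts 8 px when the actuator moves the camera 0.10 m
print(triangulate_point((120.0, -35.0), (112.0, -35.0), f=800.0, baseline=0.10))
```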

Method for Controlling a Flight Movement of an Aerial Vehicle and Aerial Vehicle
20220397919 · 2022-12-15

The preferred embodiments pertain to a method for controlling a flight movement of an aerial vehicle. The method includes acquiring first image data by means of a first camera device that is arranged on an aerial vehicle and configured for monitoring an environment of the aerial vehicle while flying, wherein the first image data are indicative of a first sequence of first camera images. The method also includes acquiring second image data by means of a second camera device that is arranged on the aerial vehicle and configured for monitoring the environment of the aerial vehicle while flying, wherein the second image data are indicative of a second sequence of second camera images. The image data are processed by a first image analysis and a second image analysis, and the processing includes determining object parameters for a position of a flight obstacle in the environment of the aerial vehicle if the first image analysis identifies the flight obstacle in at least one camera measurement image and the second image analysis likewise identifies the flight obstacle in the at least one camera measurement image. An aerial vehicle is furthermore disclosed.
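The consensus step, in which an obstacle is only accepted when both image analyses report it, might look like the following sketch; the detection format and tolerance are assumptions for illustration.

```python
def confirm_obstacles(detections_cam1, detections_cam2, max_offset_px=20.0):
    """Cross-validate obstacle candidates from two independent image
    analyses: a candidate is confirmed only if both camera devices
    report it at roughly the same image location.

    detections_cam1/2 : lists of (x, y) obstacle centers in a shared
                        measurement-image frame (hypothetical format)
    """
    confirmed = []
    for x1, y1 in detections_cam1:
        for x2, y2 in detections_cam2:
            if abs(x1 - x2) <= max_offset_px and abs(y1 - y2) <= max_offset_px:
                # both analyses agree -> emit fused object parameters
                confirmed.append(((x1 + x2) / 2.0, (y1 + y2) / 2.0))
                break
    return confirmed

print(confirm_obstacles([(100, 50), (400, 300)], [(105, 48)]))
# -> [(102.5, 49.0)]  only the consistently detected obstacle survives
```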

Method for Controlling a Flight Movement of an Aerial Vehicle for Landing or for Dropping a Cargo, and Aerial Vehicle
20220397913 · 2022-12-15

The preferred embodiments relate to a method for controlling a flight movement of an aerial vehicle for landing the aerial vehicle, including: recording first image data by means of a first camera device, which is provided on an aerial vehicle and is configured to record an area of ground, wherein the first image data are indicative of a first sequence of first camera images. The method also includes recording second image data by means of a second camera device, which is provided on the aerial vehicle and is configured to record the area of ground, wherein the second image data are indicative of a second sequence of second camera images.
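Although the abstract stops short of the processing step, two cameras recording the same ground area suggest a standard stereo depth estimate; the sketch below uses the textbook relation Z = f·B/d and is not taken from the publication.

```python
import numpy as np

def height_above_ground(disparities_px, focal_px, baseline_m):
    """Estimate height above a ground area from per-pixel disparity
    between two downward-facing cameras (standard stereo relation,
    assumed here rather than quoted from the patent)."""
    d = np.asarray(disparities_px, dtype=float)
    d = d[d > 0.5]                      # discard unmatched pixels
    depths = focal_px * baseline_m / d  # Z = f * B / d
    return float(np.median(depths))     # robust single height estimate

# Example: disparities around 16 px, 800 px focal length, 0.30 m baseline
print(height_above_ground([15.8, 16.1, 16.0, 0.0], 800.0, 0.30))  # ~15 m
```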

SYSTEMS AND METHODS FOR DISPLAYING BIRD'S EYE VIEW OF A ROADWAY

A vehicle navigation system includes an electronic control unit. The electronic control unit receives image data regarding a source of a traffic jam on a roadway from a plurality of sensors of a plurality of vehicles in a mesh network. Moreover, the electronic control unit generates a bird's eye view of the traffic jam based on the image data, wherein the bird's eye view includes a graphical representation of the source of the traffic jam and a graphical representation of vehicles on the roadway within the traffic jam. A display device displays the bird's eye view.
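A rough sketch of the aggregation step follows, assuming each mesh-network report has already been reduced to a position and a type in a common road frame; the grid rendering is purely illustrative.

```python
def render_birds_eye(reports, cell_m=5.0, half_extent_m=30.0):
    """Aggregate positions reported over a mesh network into a coarse
    top-down ASCII grid centered on the traffic jam. The report format
    (x, y, kind) and the rendering are illustrative assumptions."""
    n = int(2 * half_extent_m / cell_m)
    grid = [["." for _ in range(n)] for _ in range(n)]
    for x, y, kind in reports:            # kind: 'V' vehicle, 'S' jam source
        col = int((x + half_extent_m) / cell_m)
        row = int((half_extent_m - y) / cell_m)
        if 0 <= row < n and 0 <= col < n:
            grid[row][col] = kind
    return "\n".join("".join(r) for r in grid)

print(render_birds_eye([(0, 0, "S"), (-10, -5, "V"), (15, -5, "V")]))
```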

IMAGE ANNOTATION FOR DEEP NEURAL NETWORKS

A first image can be acquired from a first sensor included in a vehicle and input to a deep neural network to determine a first bounding box for a first object. A second image can be acquired from the first sensor. Latitudinal and longitudinal motion data, corresponding to the time between inputting the first image and inputting the second image, can be input from second sensors included in the vehicle. A second bounding box can be determined by translating the first bounding box based on the latitudinal and longitudinal motion data. The second image can be cropped based on the second bounding box. The cropped second image can be input to the deep neural network to detect a second object. The first image, the first bounding box, the second image, and the second bounding box can be output.
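A minimal sketch of the translate-and-crop step follows, assuming the motion data have already been converted to an image-space pixel offset; the helper names are hypothetical.

```python
import numpy as np

def translate_bbox(bbox, dx_px, dy_px, img_w, img_h):
    """Shift a (x1, y1, x2, y2) bounding box by the image-space motion
    predicted from the vehicle's latitudinal/longitudinal motion data,
    clamped to the image bounds. Conversion from metric motion to
    pixels is assumed done upstream (e.g. via camera calibration)."""
    x1, y1, x2, y2 = bbox
    x1, x2 = np.clip([x1 + dx_px, x2 + dx_px], 0, img_w)
    y1, y2 = np.clip([y1 + dy_px, y2 + dy_px], 0, img_h)
    return int(x1), int(y1), int(x2), int(y2)

def crop(image, bbox):
    """Crop the second image to the translated box before re-detection."""
    x1, y1, x2, y2 = bbox
    return image[y1:y2, x1:x2]

img2 = np.zeros((480, 640, 3), dtype=np.uint8)       # stand-in second image
box2 = translate_bbox((300, 200, 360, 260), dx_px=-12, dy_px=4,
                      img_w=640, img_h=480)
patch = crop(img2, box2)                             # goes to the DNN
print(box2, patch.shape)
```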

SYSTEMS AND METHODS FOR JOINTLY TRAINING A MACHINE-LEARNING-BASED MONOCULAR OPTICAL FLOW, DEPTH, AND SCENE FLOW ESTIMATOR

Systems and methods described herein relate to jointly training a machine-learning-based monocular optical flow, depth, and scene flow estimator. One embodiment processes a pair of temporally adjacent monocular image frames using a first neural network structure to produce an optical flow estimate and to extract, from at least one image frame in the pair of temporally adjacent monocular image frames, a set of encoded image context features; triangulates the optical flow estimate to generate a depth map; extracts a set of encoded depth context features from the depth map using a depth context encoder; and combines the set of encoded image context features and the set of encoded depth context features to improve performance of a second neural network structure in estimating depth and scene flow.
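The data flow can be sketched with stand-in components, as below; the network internals are replaced by random stand-ins, and the triangulation is reduced to an illustrative flow-magnitude heuristic rather than the paper's geometric formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def flow_network(frame_a, frame_b):
    """Stand-in for the first neural network structure: returns a dense
    optical-flow estimate and encoded image context features."""
    h, w = frame_a.shape[:2]
    return rng.normal(size=(h, w, 2)), rng.normal(size=(h, w, 16))

def triangulate_flow(flow, focal_px=800.0, baseline_m=0.2):
    """Turn flow magnitude into a coarse depth map (illustrative only;
    the actual triangulation uses camera geometry elided here)."""
    disparity = np.linalg.norm(flow, axis=-1) + 1e-3
    return focal_px * baseline_m / disparity

def depth_context_encoder(depth):
    """Stand-in encoder producing depth context features."""
    return np.stack([depth, np.gradient(depth, axis=0),
                     np.gradient(depth, axis=1)], axis=-1)

frame_t, frame_t1 = rng.normal(size=(2, 64, 64, 3))   # adjacent frames
flow, image_ctx = flow_network(frame_t, frame_t1)
depth = triangulate_flow(flow)
depth_ctx = depth_context_encoder(depth)

# Combined context conditions the second network structure that
# estimates depth and scene flow.
combined = np.concatenate([image_ctx, depth_ctx], axis=-1)
print(combined.shape)   # (64, 64, 19)
```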

CONTROL APPARATUS, MOVING OBJECT, CONTROL METHOD, AND COMPUTER-READABLE STORAGE MEDIUM

A control apparatus comprises: an area identification unit configured to identify an unrecognizable area that cannot be recognized from a moving object; an arrival time calculation unit configured to calculate an arrival time taken by the moving object to arrive at a location in the unrecognizable area; a region setting unit configured to set a first region and a second region in the unrecognizable area based on the arrival time; an alerting level setting unit configured to set an alerting level for the first region and an alerting level for the second region that differ from each other; and a transmission control unit configured to control transmission of warning information including: location information of the first region and location information of the second region; and information indicating the alerting level for the first region and the alerting level for the second region set by the alerting level setting unit.
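A compact sketch of the region and alerting-level assignment, assuming arrival time is simply distance over speed; the time thresholds and level names are invented for illustration.

```python
def set_alert_regions(distance_m, speed_mps,
                      near_threshold_s=2.0, far_threshold_s=5.0):
    """Split an unrecognizable area into two regions by the moving
    object's arrival time and assign differing alerting levels.
    Thresholds are illustrative, not taken from the patent."""
    if speed_mps <= 0:
        return None                        # area never reached
    arrival_s = distance_m / speed_mps
    if arrival_s <= near_threshold_s:
        return {"region": "first", "alert_level": "high", "eta_s": arrival_s}
    if arrival_s <= far_threshold_s:
        return {"region": "second", "alert_level": "medium", "eta_s": arrival_s}
    return {"region": "outside", "alert_level": "none", "eta_s": arrival_s}

print(set_alert_regions(distance_m=25.0, speed_mps=14.0))   # first region
print(set_alert_regions(distance_m=60.0, speed_mps=14.0))   # second region
```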

Method and device for fusion of measurements from different information sources
11521027 · 2022-12-06

The invention relates to a method and a device for fusion of measurements from various information sources (I₁, I₂, …, Iₘ) in conjunction with filtering of a filter vector, wherein the information sources (I₁, I₂, …, Iₘ) comprise one or more environment detection sensors of an ego vehicle, wherein in each case at least one measured quantity derived from the measurements is contained in the filter vector, wherein the measurements from at least one individual information source (I₁; I₂; …; Iₘ) are mapped nonlinearly to the respective measured quantity, wherein at least one of these mapping operations depends on at least one indeterminate parameter, wherein the value to be determined of the at least one indeterminate parameter is estimated from the measurements of the different information sources (I₁, I₂, …, Iₘ), and wherein the filter vector is not needed for estimating the at least one indeterminate parameter.
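One way to read this is that the indeterminate parameter (here, a scale factor relating one source to another) is estimated directly from paired measurements before any filtering; the sketch below pairs such an estimate with a scalar Kalman update and is an interpretation, not the patented method.

```python
import numpy as np

def estimate_scale(z_ref, z_biased):
    """Estimate an indeterminate scale parameter relating a nonlinearly
    mapped source to a reference source, using the raw measurements only
    (the filter vector is not involved). Least squares over pairs."""
    z_ref, z_biased = np.asarray(z_ref), np.asarray(z_biased)
    return float(z_biased @ z_ref / (z_biased @ z_biased))

def kalman_update(x, P, z, R):
    """Scalar Kalman measurement update for a filter-vector entry."""
    K = P / (P + R)
    return x + K * (z - x), (1 - K) * P

# Source I1 (e.g. radar range) and source I2 with an unknown scale factor
z1 = [10.1, 12.0, 14.2, 16.1]
z2 = [8.3, 9.9, 11.9, 13.3]            # systematically scaled measurements
s = estimate_scale(z1, z2)             # estimated outside the filter
x, P = 15.0, 4.0                       # prior state and variance
x, P = kalman_update(x, P, s * z2[-1], R=1.0)
print(round(s, 3), round(x, 2), round(P, 2))
```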

OBJECT DETECTION DEVICE, OBJECT DETECTION SYSTEM, MOBILE OBJECT, AND OBJECT DETECTION METHOD
20220383643 · 2022-12-01

A processor of an object detection device is configured to search, in a second direction of a disparity map, for a pair of coordinates associated with a disparity approximately equal to a target disparity satisfying a predetermined condition, update a first pair of coordinates to the found pair of coordinates, and calculate a height of an object corresponding to the target disparity on the basis of the first pair of coordinates. In the disparity map, a disparity acquired from a captured image is associated with a pair of two-dimensional coordinates formed by a first direction and the second direction intersecting the first direction. When a second pair of coordinates associated with a disparity approximately equal to the target disparity is present within a predetermined interval from the first pair of coordinates, the processor is configured to update the first pair of coordinates to the second pair of coordinates.
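A sketch of the column search follows: it walks a disparity-map column, tracks the span of cells approximately equal to the target disparity, and bridges small gaps by jumping to a nearby matching cell, loosely mirroring the coordinate-update step; parameters are illustrative.

```python
import numpy as np

def object_height_px(disparity_map, col, target_d, tol=0.5, max_gap=2):
    """Walk up one column (the second direction) of a disparity map,
    following cells whose disparity is approximately the target value.
    Gaps up to max_gap rows are bridged by updating to the next matching
    pair of coordinates. Returns the object height in rows; the metric
    conversion is elided."""
    rows = disparity_map.shape[0]
    bottom = top = None
    r = rows - 1
    while r >= 0:
        if abs(disparity_map[r, col] - target_d) <= tol:
            if bottom is None:
                bottom = r
            top = r                    # update first pair of coordinates
            r -= 1
        elif bottom is not None:
            # look for a second pair of coordinates within max_gap rows
            jump = next((g for g in range(1, max_gap + 1)
                         if r - g >= 0 and
                         abs(disparity_map[r - g, col] - target_d) <= tol),
                        None)
            if jump is None:
                break                  # object ends here
            r -= jump
        else:
            r -= 1
    return 0 if bottom is None else bottom - top + 1

dmap = np.zeros((10, 1)); dmap[3:9, 0] = 20.0; dmap[5, 0] = 0.0  # 1-row gap
print(object_height_px(dmap, col=0, target_d=20.0))  # 6 rows, gap bridged
```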

RENDERING SYSTEM, DISPLAY SYSTEM, MOVING VEHICLE, RENDERING METHOD, AND NON-TRANSITORY STORAGE MEDIUM

A rendering system includes a rendering unit and a correction unit. The rendering unit renders, based on a result of detection by a detection system, a marker corresponding to a location of a target. The detection system is installed in a moving vehicle for the purpose of detecting the target. The correction unit corrects, based on correction data, the location of the target in accordance with the result of detection by the detection system and thereby determines a location of the marker to be rendered by the rendering unit. The correction data is obtained based on at least traveling information about a traveling condition of the moving vehicle.
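A minimal sketch of one plausible correction, assuming the correction data reduce to a detection latency compensated with the vehicle's speed from the traveling information; the straight-line motion model is an assumption.

```python
def corrected_marker_position(detected_x_m, detected_y_m,
                              speed_mps, latency_s):
    """Shift a detected target location by the distance the moving
    vehicle travelled during the detection system's latency, so the
    rendered marker lines up with where the target appears now.
    Straight-line motion along the heading axis is assumed."""
    travelled = speed_mps * latency_s
    # In vehicle coordinates the target appears to move backwards while
    # the vehicle moves forwards, so subtract along the heading axis.
    return detected_x_m - travelled, detected_y_m

# Target detected 20 m ahead; 80 ms latency at 25 m/s shifts it ~2 m
print(corrected_marker_position(20.0, 1.5, speed_mps=25.0, latency_s=0.08))
```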