
Robot and control method thereof

A method of controlling a robot includes obtaining a first image and a second image of a plurality of objects, the first and second images being captured from different positions; obtaining, from the first and second images, a plurality of candidate positions corresponding to each of the plurality of objects, based on a capturing position of each of the first and second images and a direction to each of the plurality of objects from each capturing position; obtaining distance information between each capturing position and each of the plurality of objects in the first and second images by analyzing the first and second images; and identifying a position of each of the plurality of objects from among the plurality of candidate positions based on the distance information.
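The candidate-position step described above reduces, in two dimensions, to intersecting bearing rays from the two capture positions and then using the per-object distance estimates to pick the correct intersection for each object. A minimal 2-D sketch under that reading (the function names and the least-error selection rule are illustrative, not taken from the patent):

```python
import math

def ray_intersection(p1, theta1, p2, theta2):
    """Intersect two 2-D bearing rays; return the crossing point or None if parallel."""
    d1 = (math.cos(theta1), math.sin(theta1))
    d2 = (math.cos(theta2), math.sin(theta2))
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-9:
        return None
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    t = (dx * d2[1] - dy * d2[0]) / denom
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])

def select_positions(p1, bearings1, p2, bearings2, distances1):
    """For each bearing from the first capture position, intersect it with every
    bearing from the second position to form candidates, then keep the candidate
    whose range from p1 best matches the estimated distance."""
    positions = []
    for theta1, dist in zip(bearings1, distances1):
        candidates = [ray_intersection(p1, theta1, p2, th2) for th2 in bearings2]
        candidates = [c for c in candidates if c is not None]
        best = min(candidates,
                   key=lambda c: abs(math.hypot(c[0] - p1[0], c[1] - p1[1]) - dist))
        positions.append(best)
    return positions
```

With two objects and two viewpoints there are four ray intersections, and the distance information is what disambiguates the two true positions from the two ghost intersections.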

Three-dimensional object detection

Generally, the disclosed systems and methods implement improved detection of objects in three-dimensional (3D) space. More particularly, an improved 3D object detection system can exploit continuous fusion of multiple sensors and/or integrated geographic prior map data to enhance effectiveness and robustness of object detection in applications such as autonomous driving. In some implementations, geographic prior data (e.g., geometric ground and/or semantic road features) can be exploited to enhance three-dimensional object detection for autonomous vehicle applications. In some implementations, object detection systems and methods can be improved based on dynamic utilization of multiple sensor modalities. More particularly, an improved 3D object detection system can exploit both LIDAR systems and cameras to perform very accurate localization of objects within three-dimensional space relative to an autonomous vehicle. For example, multi-sensor fusion can be implemented via continuous convolutions to fuse image data samples and LIDAR feature maps at different levels of resolution.
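The continuous-fusion idea of projecting LIDAR points into a camera feature map and sampling at fractional pixel coordinates can be sketched with a pinhole projection and bilinear interpolation; this is a deliberate simplification of the continuous convolutions described above, and all names are hypothetical:

```python
def bilinear_sample(fmap, x, y):
    """Sample a 2-D feature map at a continuous (x, y) location."""
    h, w = len(fmap), len(fmap[0])
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    top = fmap[y0][x0] * (1 - fx) + fmap[y0][x1] * fx
    bot = fmap[y1][x0] * (1 - fx) + fmap[y1][x1] * fx
    return top * (1 - fy) + bot * fy

def project_and_fuse(lidar_points, fmap, focal, cx, cy):
    """Project 3-D LIDAR points into the image plane (pinhole model) and gather
    a bilinearly interpolated image feature for each point."""
    fused = []
    for X, Y, Z in lidar_points:
        u = focal * X / Z + cx
        v = focal * Y / Z + cy
        fused.append(((X, Y, Z), bilinear_sample(fmap, u, v)))
    return fused
```

Because the projection lands between pixels, interpolation is what lets image features be attached to LIDAR points at arbitrary (continuous) locations, which is the core of the fusion scheme.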

METHOD FOR NAVIGATION AND JOINT COORDINATION OF AUTOMATED DEVICES

The invention relates to methods for controlling automated devices. The method comprises locating at least one automated device in an area being controlled and, before the automated device starts operation, placing an observation device over that area on a flying device or a tower, said observation device being capable of receiving and transmitting a control signal to the automated device and of determining the coordinates of the flying device, whereupon said observation device controls the at least one automated device. The invention simplifies control of the automated device and improves the accuracy with which its coordinates are determined.

VEHICLE AND METHOD FOR AVOIDING A COLLISION OF A VEHICLE WITH ONE OR MORE OBSTACLES
20220358768 · 2022-11-10

According to various aspects, a vehicle may include: one or more image sensors configured to provide sensor image data representing a sensor image of a vicinity of the vehicle; and one or more processors configured to determine one or more obstacles from the sensor image data, the one or more obstacles corresponding to one or more image objects of the sensor image, determine a distance from ground for each of the one or more obstacles based on its corresponding image object, and trigger a safety operation when the distance from ground is equal to or less than a safety height associated with the vehicle.
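The triggering condition reduces to comparing each obstacle's estimated clearance above ground with the vehicle's safety height. A minimal sketch (the `Obstacle` type and the function name are assumptions, not from the publication):

```python
from dataclasses import dataclass

@dataclass
class Obstacle:
    label: str
    distance_from_ground: float  # metres, estimated from the obstacle's image object

def needs_safety_operation(obstacles, safety_height):
    """Return the obstacles whose clearance is at or below the vehicle's
    safety height, i.e. those that should trigger the safety operation."""
    return [o for o in obstacles if o.distance_from_ground <= safety_height]
```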

Imaging sensor, imaging system, and moving body
11265493 · 2022-03-01

A third line that supplies a first potential to a first semiconductor region of a first detection pixel and a fourth line that supplies a second potential to the first semiconductor region of a second detection pixel are provided. An interval between a partial line of the third line and a partial line of the fourth line is longer than an interval between a partial line of a first line and a partial line of a second line which extend along the partial line of the third line and the partial line of the fourth line.

Determining lane position of a partially obscured target vehicle
11263771 · 2022-03-01

A computing device including processor circuitry. The processor circuitry may perform operations comprising obtaining images representative of features within an environment of a host vehicle; identifying, from the images, a target object partially obscured in the environment; obtaining map data corresponding to the environment of the host vehicle, the map data comprising information of the features within the environment; localizing a position of the partially obscured target object within the environment based on comparing the features in the images to the information of the features obtained from the map data; and identifying a predicted trajectory of the partially obscured target object based on the localized position of the partially obscured target object within the environment.
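A toy version of the localization step above: if observed features (in the vehicle frame) have already been matched to map features (in the world frame), a translation-only least-squares alignment is simply the mean of the per-pair differences; real systems solve a full pose, and all names here are illustrative. A constant-velocity step then sketches the trajectory prediction:

```python
def localize_by_landmarks(observed, map_points):
    """Estimate the translation that aligns observed features (vehicle frame)
    with their matched map features (world frame). For translation-only
    least squares this is the mean of the per-pair differences."""
    n = len(observed)
    tx = sum(m[0] - o[0] for o, m in zip(observed, map_points)) / n
    ty = sum(m[1] - o[1] for o, m in zip(observed, map_points)) / n
    return tx, ty

def predict_position(pos, velocity, dt):
    """Constant-velocity prediction for the localized target object."""
    return (pos[0] + velocity[0] * dt, pos[1] + velocity[1] * dt)
```

The point of the map comparison is that even a partially obscured object can be placed in the world frame as long as enough surrounding features are matched.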

Systems and methods for identifying air traffic objects
11263911 · 2022-03-01

An identification system for a digital air traffic control center includes a sensor with a field of view and having a pulse detection array, a user interface to display air traffic objects in the field of view of the sensor, and a controller. The controller includes a pulse detection module disposed in communication with the pulse detection array to identify an air traffic object using pulsed illumination emitted by a pulsed illuminator carried by the air traffic object within the field of view of the sensor. Digital air traffic control centers, airfields, and air traffic object identification methods are also described.
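Identification by pulsed illumination can be sketched as thresholding the detector samples into a bit pattern and looking that pattern up in a registry of known air traffic objects; the encoding and all names below are assumptions, not the patented scheme:

```python
def decode_pulse_id(samples, threshold, registry):
    """Threshold pulse-detection samples into a bit string and look it up in a
    registry mapping pulse codes to air traffic object identities.
    Returns None when the code is unknown."""
    bits = ''.join('1' if s > threshold else '0' for s in samples)
    return registry.get(bits)
```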

OCCLUSION AWARE PLANNING AND CONTROL

Techniques are discussed for controlling a vehicle, such as an autonomous vehicle, based on occluded areas in an environment. An occluded area represents a region where sensors of the vehicle are unable to sense portions of the environment due to obstruction by another object. An occlusion grid representing the occluded area can be stored as map data or can be dynamically generated. An occlusion grid can include occlusion fields, which represent discrete two- or three-dimensional areas of the drivable environment. An occlusion field can indicate an occlusion state and an occupancy state, determined using LIDAR data and/or image data captured by the vehicle. An occupancy state of an occlusion field can be determined by ray casting LIDAR data or by projecting an occlusion field into segmented image data. The vehicle can be controlled to traverse the environment when a sufficient portion of the occlusion grid is visible and unoccupied.
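The occlusion grid can be sketched as a 2-D grid whose fields are marked occupied, occluded, or visible by stepping along the ray from the sensor to each field, with traversal allowed once a sufficient fraction of fields is visible; the grid size, state names, and 0.8 threshold are illustrative assumptions:

```python
def compute_occlusion_grid(size, sensor, occupied):
    """Mark each cell of a size x size grid as 'visible', 'occupied', or
    'occluded' by stepping along the ray from the sensor to the cell."""
    grid = [['visible'] * size for _ in range(size)]
    for (ox, oy) in occupied:
        grid[oy][ox] = 'occupied'
    for y in range(size):
        for x in range(size):
            if grid[y][x] == 'occupied':
                continue
            steps = max(abs(x - sensor[0]), abs(y - sensor[1]))
            for i in range(1, steps):
                px = sensor[0] + round(i * (x - sensor[0]) / steps)
                py = sensor[1] + round(i * (y - sensor[1]) / steps)
                if (px, py) in occupied:
                    grid[y][x] = 'occluded'
                    break
    return grid

def safe_to_traverse(grid, threshold=0.8):
    """Allow traversal when a sufficient fraction of occlusion fields is
    visible and unoccupied."""
    cells = [c for row in grid for c in row]
    return cells.count('visible') / len(cells) >= threshold
```

Cells directly behind the occupied cell relative to the sensor fall in its shadow, which is exactly the occluded area the planner must reason about.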

Object Detecting Method and Object Detecting Apparatus

An enhanced object detecting method and apparatus are presented. A plurality of successive frames is captured by a monocular camera and the image data of the captured frames are transformed with respect to a predetermined point of view. For instance, the images may be transformed in order to obtain a top-down view. Particular features such as lines are extracted from the transformed image data, and corresponding features of successive frames are matched. An angular change of corresponding features is determined and boundaries of an object are identified based on the angular change of the features.
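After the top-down transform, line features lying on the ground plane keep their orientation between frames, while features belonging to raised objects do not, so object boundaries can be flagged by the angular change of matched segments. A minimal sketch assuming segments are already matched by index (names and the threshold are illustrative):

```python
import math

def segment_angle(p, q):
    """Orientation of a line segment in the top-down view, in radians."""
    return math.atan2(q[1] - p[1], q[0] - p[0])

def object_boundaries(prev_segments, curr_segments, threshold=0.1):
    """Compare matched segments across two transformed frames and flag the
    indices whose orientation change exceeds the threshold as object
    boundaries."""
    flagged = []
    for i, (s0, s1) in enumerate(zip(prev_segments, curr_segments)):
        d = abs(segment_angle(*s1) - segment_angle(*s0))
        d = min(d, 2 * math.pi - d)  # handle wrap-around at +/- pi
        if d > threshold:
            flagged.append(i)
    return flagged
```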

STEREO CAMERA APPARATUS AND VEHICLE COMPRISING THE SAME
20170318279 · 2017-11-02

The stereo camera apparatus includes a stereo camera and a first controller configured to: detect a target object based on at least one first region among a plurality of regions located at different positions in a predetermined direction in an image captured by the stereo camera; generate interpolation pixels by performing pixel interpolation based on at least the original pixels that constitute an image of the detected object; and detect a distance from a reference position to a position of the detected object based on at least the interpolation pixels. As a result, a stereo camera apparatus capable of detecting an object located far from the vehicle with high accuracy while suppressing the processing load, and a vehicle comprising the same, can be provided.
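The patent interpolates image pixels to refine measurements of distant objects; a related, standard stand-in is parabolic sub-pixel refinement of the stereo matching cost, followed by the pinhole stereo relation Z = f·B/d. This sketch shows that stand-in technique, not necessarily the patented interpolation, and all names are illustrative:

```python
def subpixel_disparity(costs, d_best):
    """Refine an integer disparity by fitting a parabola through the matching
    cost at d_best and its two neighbours (parabolic interpolation)."""
    c0, c1, c2 = costs[d_best - 1], costs[d_best], costs[d_best + 1]
    denom = c0 - 2 * c1 + c2
    if denom == 0:
        return float(d_best)
    return d_best + 0.5 * (c0 - c2) / denom

def stereo_depth(disparity, focal_px, baseline_m):
    """Pinhole stereo geometry: depth Z = focal * baseline / disparity."""
    return focal_px * baseline_m / disparity
```

Sub-pixel precision matters most for far objects, where a disparity of only a pixel or two makes the quantization error in depth very large.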