Patent classifications
G06T2207/30261
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND COMPUTER PROGRAM
To provide an information processing apparatus, an information processing method, and a computer program which enable highly safe travel.
An information processing apparatus according to the present disclosure includes: a recognition processing portion configured to perform recognition processing of a recognition target based on a captured image of a surrounding environment of a mobile body; and
an output control portion configured to cause, when the recognition target is not recognized, an output apparatus to output non-recognition notification information indicating that the recognition target is not recognized.
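As a rough illustration of the control flow this abstract describes, the sketch below uses hypothetical names (RecognitionResult, notification) and a made-up confidence threshold; the patent does not prescribe any implementation:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RecognitionResult:
    label: Optional[str]       # None when nothing was recognized
    confidence: float

def notification(result, threshold=0.5):
    """Return the message the output apparatus would present; the key
    point is that non-recognition is reported rather than left silent."""
    if result.label is None or result.confidence < threshold:
        return "NOTICE: the recognition target is not recognized."
    return f"Recognized: {result.label} ({result.confidence:.2f})"

print(notification(RecognitionResult(None, 0.0)))
print(notification(RecognitionResult("traffic signal", 0.91)))
```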
REMOTE MONITORING SYSTEM, DISTRIBUTION CONTROL APPARATUS, AND METHOD
An image reception unit receives an internal image from a mobile object. An accident risk prediction unit predicts a risk of occurrence of an accident inside the mobile object based on the internal image and situation information indicating a situation of the mobile object. A quality determination unit determines internal image quality information indicating the quality of the internal image based on the predicted risk of an accident inside the mobile object. A quality adjustment unit adjusts the quality of the internal image based on the internal image quality information.
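A minimal sketch of the risk-to-quality pipeline, with hypothetical functions and placeholder quality parameters; the risk heuristic stands in for whatever predictor the patent actually uses:

```python
def predict_accident_risk(internal_image, situation):
    """Hypothetical risk predictor returning a score in [0, 1]; a real
    system would run a trained model on the internal image together
    with situation information (speed, braking, occupancy, ...)."""
    risky = situation.get("standing_passengers") and situation.get("braking")
    return 0.9 if risky else 0.1

def determine_quality(risk):
    """Map the predicted risk to image-quality parameters (the
    'internal image quality information'): higher risk, higher quality."""
    if risk > 0.5:
        return {"resolution": (1920, 1080), "fps": 30, "bitrate_kbps": 4000}
    return {"resolution": (640, 360), "fps": 10, "bitrate_kbps": 500}

situation = {"standing_passengers": True, "braking": True}
risk = predict_accident_risk(None, situation)     # image omitted in the sketch
print(determine_quality(risk))                    # high-quality stream settings
```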
DEPTH IMAGE PROCESSING METHOD, SMALL OBSTACLE DETECTION METHOD AND SYSTEM, ROBOT, AND MEDIUM
Provided are a depth image processing method, and a small obstacle detection method and system. The method comprises calibration of sensors, distortion and epipolar rectification, data alignment, and sparse stereo matching. The depth image processing method and the small obstacle detection method and system of the present invention require sparse stereo matching only on hole portions of a structured light depth image, and do not require stereo matching of the entire image, thereby significantly reducing the overall computation load for processing a depth image and enhancing the robustness of the system.
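The sketch below illustrates the hole-only matching idea under stated assumptions: rectified image pairs, an SAD block-matching cost, and a made-up baseline_fx constant standing in for focal length times baseline. It is not the patented algorithm, only a correct instance of matching restricted to hole pixels:

```python
import numpy as np

def fill_depth_holes_sparse(depth, left, right, baseline_fx, max_disp=64):
    """Run block matching only where the structured-light depth map has
    holes (depth == 0), instead of matching the entire image."""
    h, w = depth.shape
    out, half = depth.copy(), 3                   # 7x7 matching window
    for y, x in zip(*np.nonzero(depth == 0)):     # hole pixels only
        if y < half or y >= h - half or x < half + max_disp or x >= w - half:
            continue
        patch = left[y-half:y+half+1, x-half:x+half+1].astype(np.float32)
        best_d, best_cost = 0, np.inf
        for d in range(1, max_disp):              # search the epipolar line
            cand = right[y-half:y+half+1, x-d-half:x-d+half+1].astype(np.float32)
            cost = np.abs(patch - cand).sum()     # SAD matching cost
            if cost < best_cost:
                best_cost, best_d = cost, d
        if best_d > 0:
            out[y, x] = baseline_fx / best_d      # depth = f * B / disparity
    return out

rng = np.random.default_rng(0)
left = rng.integers(0, 255, (120, 160), dtype=np.uint8)
right = np.roll(left, -4, axis=1)                 # synthetic 4-px disparity
depth = np.full((120, 160), 25.0)
depth[40:60, 80:100] = 0.0                        # a hole in the depth map
print(fill_depth_holes_sparse(depth, left, right, baseline_fx=100.0)[50, 90])
```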
PARKING-STOPPING POINT MANAGEMENT DEVICE, PARKING-STOPPING POINT MANAGEMENT METHOD, AND VEHICLE DEVICE
A parking-stopping point management device includes a determination criterion acquisition unit configured to acquire at least one of vehicle behavior data indicative of a behavior of at least one vehicle and sensing information of a surrounding monitoring sensor mounted in the at least one vehicle, in association with position information of the at least one vehicle; a parking-stopping point detection unit configured to detect, as a parking-stopping point, a street parking point, which is a point where a parked or stopped vehicle stands on a normal road; an existence state determination unit configured to determine, based on the information acquired by the determination criterion acquisition unit, whether the parked or stopped vehicle still exists at the street parking point detected by the parking-stopping point detection unit; and a distribution processing unit configured to distribute information on the street parking point detected by the parking-stopping point detection unit to the at least one vehicle.
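A minimal sketch of the four described units as methods of a hypothetical ParkingPointManager class; the report fields and detection criteria are assumptions, since the patent does not specify data formats:

```python
class ParkingPointManager:
    """Sketch of the four units with hypothetical names and fields."""

    def __init__(self):
        self.street_points = {}            # position -> latest report

    def acquire(self, report):
        """Determination criterion acquisition: behavior/sensing data
        tied to the reporting vehicle's position."""
        return {"position": report["position"],
                "vehicle_present": report["vehicle_present"],
                "on_normal_road": report["on_normal_road"]}

    def detect(self, report):
        """Parking-stopping point detection: a vehicle sensed standing
        on a normal road marks a street parking point."""
        if report["on_normal_road"] and report["vehicle_present"]:
            self.street_points[report["position"]] = report

    def update_existence(self, report):
        """Existence state determination: drop the point when newer
        sensing shows the parked vehicle is gone."""
        pos = report["position"]
        if pos in self.street_points and not report["vehicle_present"]:
            del self.street_points[pos]

    def distribute(self):
        """Distribution processing: payload pushed to vehicles."""
        return sorted(self.street_points)

mgr = ParkingPointManager()
mgr.detect(mgr.acquire({"position": (35.01, 139.75),
                        "vehicle_present": True, "on_normal_road": True}))
print(mgr.distribute())                    # [(35.01, 139.75)]
mgr.update_existence({"position": (35.01, 139.75), "vehicle_present": False})
print(mgr.distribute())                    # []
```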
ESTIMATING AUTO EXPOSURE VALUES OF CAMERA BY PRIORITIZING OBJECT OF INTEREST BASED ON CONTEXTUAL INPUTS FROM 3D MAPS
Systems and methods are provided for operating a vehicle. The method includes, by a vehicle control system of the vehicle, identifying map data for the vehicle's present location using the vehicle's location, pose, and trajectory data; identifying a field of view of a camera of the vehicle; and analyzing the map data to identify an object that is expected to be in the field of view of the camera. The method further includes selecting an automatic exposure (AE) setting for the camera based on (a) a class of the object, (b) characteristics of a region of interest in the field of view of the camera, or (c) both. The method additionally includes causing the camera to use the AE setting when capturing images of the object, and capturing the images of the object using the camera.
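A sketch of the class-prioritized AE selection, assuming a coarse 2D field-of-view test and a hypothetical AE_SETTINGS table; the priority order and exposure values are illustrative, not the patented parameters:

```python
import math

# Hypothetical AE table; real values would come from camera calibration.
AE_SETTINGS = {
    "traffic_light": {"exposure_us": 500,  "gain_db": 2.0},  # avoid LED blowout
    "pedestrian":    {"exposure_us": 4000, "gain_db": 6.0},
    "default":       {"exposure_us": 2000, "gain_db": 4.0},
}

def objects_in_fov(map_objects, camera_pose, fov_deg=90.0):
    """Coarse 2D check: keep map objects whose bearing from the camera
    falls inside the horizontal field of view."""
    cx, cy, heading = camera_pose
    def bearing(o):
        return math.degrees(math.atan2(o["y"] - cy, o["x"] - cx))
    return [o for o in map_objects
            if abs((bearing(o) - heading + 180) % 360 - 180) <= fov_deg / 2]

def select_ae(map_objects, camera_pose):
    """Pick the AE setting for the highest-priority expected object class."""
    visible = objects_in_fov(map_objects, camera_pose)
    for cls in ("traffic_light", "pedestrian"):   # hypothetical priority order
        if any(o["class"] == cls for o in visible):
            return AE_SETTINGS[cls]
    return AE_SETTINGS["default"]

pose = (0.0, 0.0, 0.0)                            # x, y, heading (degrees)
objs = [{"class": "traffic_light", "x": 30.0, "y": 5.0}]
print(select_ae(objs, pose))                      # traffic-light AE setting
```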
PROJECTING IMAGES CAPTURED USING FISHEYE LENSES FOR FEATURE DETECTION IN AUTONOMOUS MACHINE APPLICATIONS
In various examples, live perception from wide-view sensors may be leveraged to detect features in an environment of a vehicle. Sensor data generated by the sensors may be adjusted to represent a virtual field of view different from an actual field of view of the sensor, and the sensor data—with or without virtual adjustment—may be applied to a stereographic projection algorithm to generate a projected image. The projected image may then be applied to a machine learning model—such as a deep neural network (DNN)—to detect and/or classify features or objects represented therein. In some examples, the machine learning model may be pre-trained on training sensor data generated by a sensor having a field of view less than the wide-view sensor such that the virtual adjustment and/or projection algorithm may update the sensor data to be suitable for accurate processing by the pre-trained machine learning model.
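A worked sketch of the projection step, assuming an equidistant fisheye lens model (r = f * theta) remapped to a stereographic projection (r = 2f * tan(theta/2)) with nearest-neighbor sampling; the focal length and lens model are assumptions, not taken from the patent:

```python
import numpy as np

def fisheye_to_stereographic(img, f):
    """Remap an equidistant-model fisheye image to a stereographic
    projection, assuming the optical axis sits at the image center."""
    h, w = img.shape[:2]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    yy, xx = np.mgrid[0:h, 0:w].astype(np.float32)
    dx, dy = xx - cx, yy - cy
    r_out = np.hypot(dx, dy)                    # output (stereographic) radius
    theta = 2.0 * np.arctan2(r_out, 2.0 * f)    # invert r = 2f*tan(theta/2)
    r_in = f * theta                            # equidistant fisheye radius
    scale = np.where(r_out > 0, r_in / r_out, 0.0)
    src_x = np.clip(np.rint(cx + dx * scale), 0, w - 1).astype(int)
    src_y = np.clip(np.rint(cy + dy * scale), 0, h - 1).astype(int)
    return img[src_y, src_x]                    # nearest-neighbor remap

fisheye = np.random.default_rng(0).integers(0, 255, (200, 200), dtype=np.uint8)
projected = fisheye_to_stereographic(fisheye, f=60.0)
print(projected.shape)                          # (200, 200)
```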
SYSTEMS AND METHODS FOR PRODUCING AMODAL CUBOIDS
This document discloses system, method, and computer program product embodiments for detecting an object. For example, the method includes generating a plurality of cuboids by performing the following operations: defining a plurality of first cuboids each encompassing lidar data points that are plotted on a respective 3D graph of a plurality of 3D graphs; accumulating the lidar data points encompassed by the plurality of first cuboids; computing an extent using the accumulated lidar data points; and defining a second cuboid that has dimensions specified by the extent. The first cuboids and/or the second cuboid may be used to detect the object.
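A minimal sketch of the accumulation step, assuming each frame provides the object's rigid pose (R, t) so its lidar points can be brought into a common object frame before taking the extent; the pose source and box fitting details are assumptions:

```python
import numpy as np

def amodal_extent(per_frame_points, per_frame_poses):
    """Transform each frame's lidar points into a common object frame
    using that frame's object pose (R, t), accumulate them, and take
    the axis-aligned extent as the amodal cuboid dimensions."""
    accumulated = []
    for pts, (R, t) in zip(per_frame_points, per_frame_poses):
        accumulated.append((pts - t) @ R)          # inverse rigid transform
    all_pts = np.vstack(accumulated)
    return all_pts.max(axis=0) - all_pts.min(axis=0)  # length, width, height

# Two partial views of a 4 x 2 x 1.5 m box, each covering a different half.
rng = np.random.default_rng(1)
front = rng.uniform([0, -1, 0], [2, 1, 1.5], (200, 3))
rear = rng.uniform([-2, -1, 0], [0, 1, 1.5], (200, 3))
eye, zero = np.eye(3), np.zeros(3)
print(amodal_extent([front, rear], [(eye, zero), (eye, zero)]))  # ~[4, 2, 1.5]
```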
EMERGENCY VEHICLE DETECTION SYSTEM AND METHOD
In an embodiment, a method includes: receiving ambient sound; determining whether the ambient sound includes a siren; in accordance with determining that the ambient sound includes a siren, determining a first location associated with the siren; receiving a camera image; determining whether the camera image includes a flashing light; in accordance with determining that the camera image includes a flashing light, determining a second location associated with the flashing light; receiving 3D data; determining whether the 3D data includes an object; in accordance with determining that the 3D data includes an object, determining a third location associated with the object; determining a presence of an emergency vehicle based on the detected siren, flashing light, and object; determining an estimated location of the emergency vehicle based on the first, second, and third locations; and initiating an action related to the vehicle based on the determined presence and location.
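A sketch of the fusion step alone, under assumed 2D locations and a made-up agreement radius; the detectors for siren, flashing light, and 3D object are out of scope here:

```python
from statistics import mean

def fuse_emergency_detection(siren_loc, light_loc, object_loc, max_spread=15.0):
    """Declare an emergency vehicle only when all three modalities fired,
    and estimate its location by averaging the per-modality locations
    when they roughly agree."""
    locs = [siren_loc, light_loc, object_loc]
    if any(l is None for l in locs):
        return None                               # a modality did not fire
    cx, cy = mean(l[0] for l in locs), mean(l[1] for l in locs)
    spread = max(((x - cx) ** 2 + (y - cy) ** 2) ** 0.5 for x, y in locs)
    if spread > max_spread:
        return None                               # modalities disagree
    return (cx, cy)

est = fuse_emergency_detection((10.0, 2.0), (11.0, 3.0), (9.5, 2.5))
print(est)              # e.g. trigger a pull-over action when not None
```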
IMAGE RECOGNIZING DEVICE AND IMAGE RECOGNIZING METHOD
An image recognizing device includes an image storing portion, a cylindrical distortion correcting portion, a vertical edge extracting portion, a column candidate extracting portion, a pole candidate evaluating portion, a pole foot position setting portion, a movement distance acquiring portion, a detected distance difference calculating portion, and a pole identifying portion. When the vehicle is moving toward a pole candidate, the movement distance acquiring portion acquires the distance moved by the vehicle during a prescribed time interval, and the detected distance difference calculating portion calculates a detected distance difference between the starting detected distance and the ending detected distance for the prescribed time interval. The pole identifying portion identifies a pole candidate for which the absolute value of the difference between the movement distance and the detected distance difference is less than a threshold value as a pole whose foot position contacts the ground.
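The identification test reduces to a single comparison, shown below with a hypothetical threshold; the intuition is that a static, ground-standing pole closes in on the vehicle by exactly the distance the vehicle moved:

```python
def is_pole(movement_distance, start_detected, end_detected, threshold=0.3):
    """|movement distance - detected distance difference| < threshold
    identifies the candidate as a ground-contacting pole."""
    detected_difference = start_detected - end_detected
    return abs(movement_distance - detected_difference) < threshold

# Vehicle moved 5.0 m; candidate's detected distance fell from 20.1 to 15.2 m.
print(is_pole(5.0, 20.1, 15.2))   # True: consistent with a static pole
# A moving object ahead closes by only 1.0 m over the same interval.
print(is_pole(5.0, 20.0, 19.0))   # False: not a static pole
```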
Robot climbing control method and device and storage medium and robot
A robot climbing control method is disclosed. The method obtains an RGB color image and a depth image of stairs, extracts an outline of a target object of a target step on the stairs from the RGB color image, determines relative position information of the robot and the target step according to the depth image and the outline of the target object, and controls the robot to climb the target step according to the relative position information. The embodiment of the present disclosure allows the robot to effectively adjust its posture and forward direction on stairs of any size, including non-standardized stairs, and avoids deviation of the walking direction, thereby improving the effectiveness and safety of the robot's stair climbing.
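A sketch of the relative-position step under stated assumptions: a pinhole camera with made-up intrinsics (fx, cx), the step's contour already given as a binary mask, and depth in meters. The contour extraction and the climbing controller itself are not modeled:

```python
import numpy as np

def step_relative_pose(depth, contour_mask, fx=500.0, cx=160.0):
    """Estimate forward distance, lateral offset, and yaw to the target
    step from the depth values inside the step's contour."""
    ys, xs = np.nonzero(contour_mask)
    z = depth[ys, xs]
    distance = float(np.median(z))                 # forward distance (m)
    lateral = float(np.median((xs - cx) / fx * z)) # back-projected offset (m)
    front = ys == ys.max()                         # bottom edge of the contour
    if front.sum() > 1:
        dz_dpx = np.polyfit(xs[front], z[front], 1)[0]  # depth slope on edge
        yaw = float(np.arctan(dz_dpx * fx / distance))  # edge tilt vs. camera
    else:
        yaw = 0.0
    return distance, lateral, yaw

depth = np.full((240, 320), 2.0, dtype=np.float32)
mask = np.zeros(depth.shape, dtype=bool)
mask[150:170, 100:220] = True                      # contour of the step face
depth[mask] = 1.0                                  # step face 1 m ahead
print(step_relative_pose(depth, mask))             # (~1.0, ~0.0, ~0.0)
```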