Patent classifications
B60W2554/4048
DRIVING ASSIST DEVICE, DRIVING ASSIST SYSTEM, AND DRIVING ASSIST METHOD
A driving assist device includes an input information determination unit configured to determine driving assist information on the basis of first vehicle information including positional information and operational information about a target vehicle, second vehicle information including positional information and operational information about each of nearby vehicles, and a running route for the target vehicle. The input information determination unit selects any of the nearby vehicles that is located at a position at which a video usable for driving assist can be recorded, and determines the video acquired from the selected nearby vehicle as the driving assist information. Consequently, driving assist can be performed that flexibly adapts to, for example, the running route for, and the operational information about, the target vehicle.
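As an illustrative sketch of the selection step, one plausible criterion (the abstract leaves the exact rule open) is to pick the nearby vehicle closest to a point on the target vehicle's running route, within a maximum distance. The function and parameter names here are hypothetical:

```python
import math

def select_video_source(route_points, nearby_vehicles, max_dist):
    """Pick a nearby vehicle positioned where driving-assist video can be
    recorded: here, the one closest to any point on the running route and
    within max_dist. Hypothetical criterion, not from the patent.
    Returns the selected vehicle id, or None if no vehicle qualifies."""
    best, best_d = None, max_dist
    for vid, (vx, vy) in nearby_vehicles.items():
        # distance from this vehicle to the nearest route point
        d = min(math.hypot(vx - rx, vy - ry) for rx, ry in route_points)
        if d < best_d:
            best, best_d = vid, d
    return best
```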
CONTROL DEVICE AND CONTROL METHOD FOR MOBILE OBJECT, AND STORAGE MEDIUM
A control device is provided for a mobile object that includes an imaging device fitted with a lens having a wide angle of view. The control device acquires, from the imaging device, an image captured of the outside of the mobile object, and detects a target object through image recognition based on the acquired image. In accordance with the detection result, the control device performs a distortion reduction process, which reduces distortion of the image, on a partial area of the image acquired from the imaging device. The partial area is an area whose center is set to either the detection position of the target object or a vicinity of that position. The control device recognizes the outside of the mobile object based on the image produced by the distortion reduction process.
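A minimal sketch of the partial-area selection, assuming the "partial area" is a fixed-size crop window centered on the detection position and clipped to the frame; the distortion reduction itself would then run only inside this window. All names are illustrative assumptions:

```python
def region_around_detection(width, height, center, size):
    """Compute the partial area (x0, y0, w, h) whose center is set at the
    detection position, clipped so it stays inside the image bounds.
    The distortion reduction process is then applied only to this window
    rather than to the full wide-angle frame. Illustrative sketch only."""
    cx, cy = center
    # shift the window so it never extends past the image edges
    x0 = min(max(cx - size // 2, 0), max(width - size, 0))
    y0 = min(max(cy - size // 2, 0), max(height - size, 0))
    return x0, y0, min(size, width), min(size, height)
```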
TRAVEL CONTROL DEVICE
The present invention makes it possible to accurately predict, at an earlier timing, whether a pedestrian will continue constant-speed movement or perform a route change more complex than constant-speed movement. Provided is a travel control device that can accurately determine a change in the pedestrian's route from a change in the pedestrian's posture, in particular a change in the orientation of the body or a change in an inverted angle, and that can appropriately control the travel of the vehicle.
HYBRID CHALLENGER MODEL THROUGH PEER-PEER REINFORCEMENT FOR AUTONOMOUS VEHICLES
A driverless vehicle system comprises a processor configured to communicate information related to attributes of a focus autonomous vehicle (FAV) to another peer vehicle (PV) and/or a central repository system (CRS). The processor is further configured to communicate information about a corrective action by at least one of the FAV and a previously contacted vehicle to the CRS or to a further peer vehicle that is within a predefined region.
OCCLUSION AWARE PLANNING AND CONTROL
Techniques are discussed for controlling a vehicle, such as an autonomous vehicle, based on occluded areas in an environment. An occluded area represents a portion of the environment that sensors of the vehicle are unable to sense due to obstruction by another object. An occlusion grid representing the occluded area can be stored as map data or can be dynamically generated. An occlusion grid can include occlusion fields, which represent discrete two- or three-dimensional areas of the drivable environment. An occlusion field can indicate an occlusion state and an occupancy state, determined using LIDAR data and/or image data captured by the vehicle. An occupancy state of an occlusion field can be determined by ray casting LIDAR data or by projecting the occlusion field into segmented image data. The vehicle can be controlled to traverse the environment when a sufficient portion of the occlusion grid is visible and unoccupied.
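The ray-casting determination of occlusion-field states can be sketched in 2-D as follows: cells a LiDAR ray passes through before its return are visible and free, and the cell containing the return is occupied. The grid layout and state encoding are assumptions for illustration, not the patent's actual representation:

```python
import numpy as np

def occlusion_states(sensor_xy, lidar_points, grid_origin, cell_size, grid_shape):
    """Classify occlusion fields of a 2-D grid by ray casting LiDAR returns.
    Returns (visible, occupied) boolean grids; cells that are neither are
    occluded. Illustrative sketch of the technique only."""
    visible = np.zeros(grid_shape, dtype=bool)
    occupied = np.zeros(grid_shape, dtype=bool)
    for pt in lidar_points:
        ray = pt - sensor_xy
        dist = np.linalg.norm(ray)
        # cells traversed before the return are visible and free
        for t in np.linspace(0.0, 1.0, int(dist / cell_size) + 1, endpoint=False):
            cell = np.floor((sensor_xy + t * ray - grid_origin) / cell_size).astype(int)
            if 0 <= cell[0] < grid_shape[0] and 0 <= cell[1] < grid_shape[1]:
                visible[cell[0], cell[1]] = True
        # the cell containing the return itself is visible and occupied
        end = np.floor((pt - grid_origin) / cell_size).astype(int)
        if 0 <= end[0] < grid_shape[0] and 0 <= end[1] < grid_shape[1]:
            occupied[end[0], end[1]] = True
            visible[end[0], end[1]] = True
    return visible, occupied
```

The planner would then release the vehicle only once a sufficient fraction of the grid is both visible and unoccupied.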
DEVICE AND METHOD FOR DATA FUSION BETWEEN HETEROGENEOUS SENSORS
An apparatus and method for data fusion between heterogeneous sensors are disclosed. The method may include identifying image data and point cloud data for a search area using, respectively, a camera sensor and a LiDAR sensor that are calibrated using a marker board having a hole; recognizing a translation vector determined through the calibration of the camera sensor and the LiDAR sensor; and projecting the point cloud data of the LiDAR sensor onto the image data of the camera sensor using the recognized translation vector to fuse the identified image data and point cloud data.
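The projection step can be sketched as the standard rigid transform followed by a pinhole projection, where R and t are the extrinsic rotation and translation vector from calibration and K is the camera intrinsic matrix (lens distortion omitted for brevity; this is a generic sketch, not the patent's specific procedure):

```python
import numpy as np

def project_points(points_lidar, R, t, K):
    """Project an (N, 3) array of LiDAR points into camera pixel coordinates
    using extrinsics (R, t) and intrinsics K. Generic pinhole sketch."""
    pts_cam = (R @ points_lidar.T).T + t      # LiDAR frame -> camera frame
    pts_cam = pts_cam[pts_cam[:, 2] > 0]      # keep points in front of the camera
    uvw = (K @ pts_cam.T).T                   # pinhole projection (homogeneous)
    return uvw[:, :2] / uvw[:, 2:3]           # pixel coordinates (u, v)
```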
Systems and methods for implementing a tracking camera system onboard an autonomous vehicle
Systems, methods, and non-transitory computer-readable media are provided for implementing a tracking camera system onboard an autonomous vehicle. Coordinate data of an object can be received. The tracking camera system actuates, based on the coordinate data, to a position such that the object is in view of the tracking camera system. Vehicle operation data of the autonomous vehicle can be received. The position of the tracking camera system can be adjusted, based on the vehicle operation data, such that the object remains in view of the tracking camera system while the autonomous vehicle is in motion. A focus of the tracking camera system can be adjusted to bring the object into focus. The tracking camera system captures image data corresponding to the object.
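A simplified 2-D model of the actuation target: the camera's pan angle is the bearing from the vehicle to the object's coordinates minus the vehicle's heading, recomputed from vehicle operation data as the vehicle moves. Names and the planar geometry are assumptions for illustration:

```python
import math

def pan_angle(vehicle_xy, vehicle_heading, object_xy):
    """Pan angle (radians, relative to the vehicle body) that the tracking
    camera must actuate to so the object at object_xy stays in view.
    Simplified planar sketch; a real system would also control tilt and zoom."""
    dx = object_xy[0] - vehicle_xy[0]
    dy = object_xy[1] - vehicle_xy[1]
    return math.atan2(dy, dx) - vehicle_heading
```

Calling this on each vehicle-pose update keeps the object in view while the vehicle is in motion.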
Electronic apparatus and operation method thereof
Provided are an electronic apparatus and a method for recognizing, based on ambient road information of a vehicle and field-of-view information of the vehicle, a hidden region, that is, a region where another vehicle may be present in an area hidden from the field of view of the vehicle by an external object. In the present disclosure, one or more of the electronic apparatus, the vehicle, a vehicular terminal, and an autonomous driving vehicle may be associated with an artificial intelligence module, an unmanned aerial vehicle (UAV), a robot, an augmented reality (AR) device, a virtual reality (VR) device, a 5G service-related device, and the like.
Navigation based on liability constraints
A method including operations to obtain a planned driving action for accomplishing a navigational goal of a host vehicle on a roadway; identify a planned trajectory for the host vehicle corresponding to the planned driving action; identify, from sensor data representative of an environment of the host vehicle, an occluded location of a potential object that is occluded from view of the host vehicle; identify a possible trajectory of the potential object based on possible movement of the potential object from the occluded location into the roadway; identify an intersection of the planned trajectory for the host vehicle with the possible trajectory of the potential object; determine a safety action of the host vehicle to respond to the possible movement of the potential object; and apply the safety action to change the planned driving action of the host vehicle.
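A minimal sketch of the intersection check between the host's planned trajectory and a possible trajectory of the potential object, with both trajectories sampled at common time steps; the safety radius and the sampling scheme are assumptions, not from the source:

```python
import math

def trajectories_conflict(host_traj, object_traj, safety_radius):
    """Return True if the host's planned trajectory and a possible trajectory
    of the potential (occluded) object come within safety_radius of each other
    at the same time step. Both inputs are lists of (x, y) points sampled at
    common times. Illustrative sketch only."""
    return any(math.hypot(hx - ox, hy - oy) < safety_radius
               for (hx, hy), (ox, oy) in zip(host_traj, object_traj))
```

A conflict would trigger the safety action that changes the planned driving action.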
Obstacle avoidance action
A vehicle can traverse an environment along a first region and detect an obstacle impeding progress of the vehicle. The vehicle can determine a second region that is adjacent to the first region and associated with a direction of travel opposite the first region. A cost can be determined based on an action (e.g., an oncoming action) to utilize the second region to overtake the obstacle. By comparing the cost to a cost threshold and/or to a cost associated with another action (e.g., a “stay in lane” action), the vehicle can determine a target trajectory that traverses through the second region. The vehicle can traverse the environment based on the target trajectory to avoid, for example, the obstacle in the environment while maintaining a safe distance from the obstacle and/or other entities in the environment.
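The cost comparison can be sketched as picking the lowest-cost action while gating the oncoming action behind a cost threshold; the action names, cost values, and threshold are hypothetical placeholders for whatever the planner actually computes:

```python
def select_action(costs, oncoming_cost_threshold):
    """Pick the lowest-cost action from a dict of {action: cost}, but allow
    the 'oncoming' action (using the opposing-direction region) only when its
    cost is below a safety threshold. Hypothetical sketch of the comparison."""
    allowed = {a: c for a, c in costs.items()
               if a != "oncoming" or c < oncoming_cost_threshold}
    return min(allowed, key=allowed.get)
```

With a blocked lane, a low oncoming cost under the threshold yields the overtake; otherwise the vehicle falls back to the "stay in lane" action.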