Patent classifications
G06T2207/30256
Method for updating road signs and markings on basis of monocular images
A method for updating road signs and markings on the basis of monocular images comprises the following steps: acquiring street images of urban roads, together with the GPS phase center coordinates and spatial attitude data corresponding to each image; extracting the image coordinates of road signs and markings; constructing a sparse three-dimensional model and then generating a street-view depth map; calculating the spatial position of each road sign and marking from the semantic and depth values of the image, the collinearity equation, and the spatial distance relation; if the same road sign or marking is visible in multiple views, jointly solving for its position; and vectorizing the resulting position information and fusing it into the original data to update the road sign data.
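The core geometric step of this abstract — recovering a 3D position from a pixel with a known depth value, camera attitude, and GPS phase center — can be sketched with the pinhole (collinearity) model. This is a minimal illustration, not the patented method; the intrinsic matrix values and function name are hypothetical.

```python
import numpy as np

def pixel_to_world(u, v, depth, K, R, C):
    """Back-project a pixel (u, v) with a known depth value to a world point.

    K: 3x3 camera intrinsic matrix
    R: 3x3 world-to-camera rotation (from the image's spatial attitude data)
    C: camera (GPS phase) center in world coordinates
    """
    # Ray through the pixel in camera coordinates (collinearity equation)
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])
    # Scale the ray so its z-component equals the depth-map value
    p_cam = ray_cam * (depth / ray_cam[2])
    # Rotate back to world coordinates and translate by the camera center
    return R.T @ p_cam + C
```

A sign visible in multiple views would yield one such point per view, which the method then reconciles into a single position before vectorization.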
METHOD AND DEVICE TO DETERMINE THE CAMERA POSITION AND ANGLE
The present disclosure provides a method and an apparatus for determining an attitude angle of a camera fixed to a rigid object in a vehicle along with an Inertial Measurement Unit (IMU). In some embodiments, the method includes: obtaining IMU attitude angles outputted from the IMU and images captured by the camera; determining a target IMU attitude angle corresponding to each frame of image based on respective capturing time of the frames of images and respective outputting time of the IMU attitude angles; and determining an attitude angle of the camera corresponding to each frame of image based on a predetermined conversion relationship between a camera coordinate system for the camera and an IMU coordinate system for the IMU and the target IMU attitude angle corresponding to each frame of image.
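The two steps this abstract describes — matching each image frame to the IMU attitude output closest in time, then applying a fixed camera-to-IMU conversion — can be sketched as follows. The nearest-neighbour timestamp rule and the function names are illustrative assumptions; the patent only requires that the target attitude be determined from capture and output times.

```python
import numpy as np

def nearest_imu_attitude(frame_time, imu_times, imu_attitudes):
    """Pick the IMU attitude whose output time is closest to the frame's capture time."""
    idx = min(range(len(imu_times)), key=lambda i: abs(imu_times[i] - frame_time))
    return imu_attitudes[idx]

def camera_attitude(R_world_to_imu, R_imu_to_cam):
    """Convert the target IMU attitude to the camera attitude using the
    predetermined IMU-to-camera rotation (the coordinate-system conversion)."""
    return R_imu_to_cam @ R_world_to_imu
```

With rotation matrices as the attitude representation, the conversion is a single matrix product per frame.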
METHOD FOR DETECTING WHETHER AN EGO VEHICLE CHANGES FROM A CURRENTLY TRAVELED TRAFFIC LANE OF A ROADWAY TO AN ADJACENT TRAFFIC LANE OR WHETHER IT STAYS IN THE CURRENTLY TRAVELED TRAFFIC LANE
A method for detecting whether an ego vehicle will leave a currently traveled traffic lane of a roadway to the left or right or whether it will stay in the currently traveled traffic lane. In the method, an image of a measuring space, which includes the vehicle area in front of the ego vehicle, is generated using an image sensor; an expected trajectory of the ego vehicle is projected into the image; at least one traffic lane boundary laterally adjacent to the trajectory is detected; and a decision is made whether the traffic lane will be changed or maintained by comparing the trajectory to the at least one detected traffic lane boundary.
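The decision step — comparing the projected ego trajectory against a detected lane boundary — can be reduced to checking whether the trajectory crosses the boundary within a look-ahead horizon. This one-dimensional sketch (lateral image coordinates only, hypothetical function name and horizon) is an assumption, not the patented decision logic.

```python
def lane_change_decision(trajectory_xs, boundary_x, horizon=10):
    """Compare projected trajectory positions (image x-coordinates at
    successive look-ahead points) with a detected lane-boundary position.
    Returns 'change' if the trajectory crosses the boundary within the horizon."""
    start_side = trajectory_xs[0] < boundary_x
    for x in trajectory_xs[1:horizon + 1]:
        if (x < boundary_x) != start_side:
            return "change"  # trajectory crossed to the other side of the boundary
    return "keep"
```

A real implementation would compare the full 2D trajectory against boundary curves for both adjacent lanes, but the side-crossing test is the essence of the comparison.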
Camera pose estimation techniques
Techniques are described for estimating pose of a camera located on a vehicle. An exemplary method of estimating camera pose includes obtaining, from a camera located on a vehicle, an image including a lane marker on a road on which the vehicle is driven, and estimating a pose of the camera such that the pose of the camera provides a best match according to a criterion between a first position of the lane marker determined from the image and a second position of the lane marker determined from a stored map of the road.
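The "best match according to a criterion" can be illustrated with a tiny search over candidate camera yaw angles, choosing the one whose projection of the map-stored lane marker lands closest to the marker detected in the image. The focal length, principal point, and one-parameter search are simplifying assumptions; the actual technique would optimize the full 6-DoF pose.

```python
import numpy as np

def project_x(point_cam, f=500.0, cx=320.0):
    """Horizontal pixel coordinate of a camera-frame point (pinhole model)."""
    return f * point_cam[0] / point_cam[2] + cx

def rot_yaw(yaw):
    """Rotation about the camera's vertical axis."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def estimate_yaw(marker_world, observed_x, candidates):
    """Pick the yaw whose projection of the map lane marker best matches
    the marker position detected in the image (the 'best match' criterion)."""
    return min(candidates,
               key=lambda y: abs(project_x(rot_yaw(y) @ marker_world) - observed_x))
```

The same residual, summed over many markers, is what a least-squares pose solver would minimize.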
Self-position estimation device
A self-position estimation device equipped to a vehicle: captures an image of a periphery of the vehicle; detects a state quantity of the vehicle; acquires position information indicating a position of the vehicle from a satellite system; stores map data that defines a map in which a road is expressed by links and nodes; estimates a self-position of the vehicle on the map, as an estimation position, based on the captured image, the state quantity, the position information, and the map data; recognizes, based on the captured image, a road section in which the lane quantity increases or decreases; and sets a relatively smaller weighting on the estimation position derived from the map data in response to recognizing such a road section.
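The weighting idea — fusing several position estimates and down-weighting the map-based one where the lane count changes — can be sketched as a weighted average. The weight keys, down-weighting factor, and one-dimensional positions are illustrative assumptions.

```python
def fuse_positions(estimates, weights):
    """Weighted average of position estimates (e.g., from image, vehicle
    state quantity, satellite position, and map matching)."""
    total = sum(weights[k] for k in estimates)
    return sum(estimates[k] * weights[k] for k in estimates) / total

def adjust_map_weight(weights, lane_count_changing, factor=0.25):
    """Set the map-based weighting relatively smaller in a road section
    where the lane quantity increases or decreases."""
    w = dict(weights)
    if lane_count_changing:
        w["map"] *= factor
    return w
```

Away from lane-count transitions the map estimate regains its full weight, so the fusion trusts map matching only where the link/node model is reliable.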
Navigation Based on Detected Size of Occlusion Zones
A navigation system for a host vehicle is provided. The system may comprise at least one processing device programmed to receive, from a camera, a plurality of images representative of an environment of the host vehicle; analyze the plurality of images to identify at least one vehicle-induced occlusion zone in an environment of the host vehicle; and cause a navigational change for the host vehicle based, at least in part, on a size of a target vehicle that induces the identified occlusion zone.
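One plausible way to quantify a vehicle-induced occlusion zone is to scale the target vehicle's image bounding-box width by its estimated distance. The focal length, threshold, and action labels below are hypothetical; the patent does not specify this computation.

```python
def occlusion_zone_size(bbox_width_px, distance_m, f_px=500.0):
    """Approximate physical width (m) of the zone occluded by a target
    vehicle, from its bounding-box width in pixels and estimated distance."""
    return bbox_width_px * distance_m / f_px

def navigational_change(zone_width_m, threshold_m=2.0):
    """Cause a navigational change (here: slow down) when the occlusion
    zone exceeds a size threshold."""
    return "slow_down" if zone_width_m > threshold_m else "maintain"
```

Larger or closer occluders hide more of the scene, so the response scales with the size of the inducing vehicle, as the abstract describes.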
Method and device for assisting the driving of an aircraft moving on the ground
A method and device for assisting the driving of an aircraft (AC) moving on the ground, on a taxiing circuit (CP) including a taxi line (TL) to be followed by the aircraft (AC). The taxi line (TL) has different portions (PR) forming between them intersections (IP). The device is configured to use a digital modeling of the taxi line (TL), called digital trajectory (TR), including nodes corresponding to the intersections (IP). In addition, the device includes a detection unit (4) configured to detect at least one of the intersections (IP), as well as an increment unit (6) configured to increment a counter associated with the digital trajectory (TR), after detection of the intersection (IP), the counter being designed to count a series of the nodes.
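The increment-unit behaviour — stepping a counter through the series of nodes of the digital trajectory each time an intersection is detected — can be sketched with a small class. The class and method names are illustrative, not from the patent.

```python
class TaxiProgressCounter:
    """Track progress along the digital trajectory (TR) by counting
    detected intersection nodes (IP)."""

    def __init__(self, nodes):
        self.nodes = nodes  # ordered nodes of the digital trajectory
        self.count = 0

    def on_intersection_detected(self):
        """Increment the counter after the detection unit reports an intersection."""
        if self.count < len(self.nodes):
            self.count += 1

    @property
    def current_node(self):
        """Most recently passed node, or None before the first intersection."""
        return self.nodes[self.count - 1] if self.count else None
```

The counter thus localizes the aircraft along the taxi line topologically, without needing continuous metric positioning between intersections.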
MULTI-SENSOR SEQUENTIAL CALIBRATION SYSTEM
Techniques for performing sensor calibration using sequential data are disclosed. An example method includes receiving, from a first camera located on a vehicle, a first image comprising at least a portion of a road with lane markers, where the first image is obtained by the first camera at a first time; obtaining a calculated value of a position of an inertial measurement (IM) device at the first time; obtaining an optimized first extrinsic matrix of the first camera by optimizing a function of a first actual pixel location of a lane marker in the first image and an expected pixel location of that lane marker; and performing autonomous operation of the vehicle using the optimized first extrinsic matrix of the first camera when the vehicle is operated on another road or at another time.
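The optimization step compares where a lane marker actually appears in the image with where the current extrinsic matrix (given the IM device position) says it should appear, and adjusts the extrinsics to shrink that residual. A minimal one-parameter sketch, with a hypothetical projection function and candidate set:

```python
def optimize_extrinsic_offset(actual_px, expected_px_fn, candidates):
    """Pick the extrinsic offset minimizing the residual between the lane
    marker's actual pixel location and its expected pixel location.

    expected_px_fn: maps a candidate extrinsic offset to the expected
    pixel location of the lane marker (illustrative stand-in for a full
    extrinsic-matrix projection).
    """
    return min(candidates, key=lambda t: abs(expected_px_fn(t) - actual_px))
```

In practice the residual is summed over many markers and frames (the sequential data) and minimized over the full extrinsic matrix rather than a single offset.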
VEHICLE AND METHOD OF CONTROLLING THE SAME
A vehicle includes a controller and a first camera, a second camera, and a third camera. The first camera is installed in the vehicle to have a first field of view and is configured to obtain first image data for the first field of view. The second camera is installed in the vehicle to have a second field of view and is configured to obtain second image data for the second field of view. The third camera is installed in the vehicle to have a third field of view and is configured to obtain third image data for the third field of view. The controller is configured to perform vehicle dynamic compensation (VDC) based on a result of processing any one of the first image data, the second image data, and the third image data and to perform automated online calibration (AOC) based on a result of the VDC to determine abnormality of the cameras.
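One way the controller could flag an abnormal camera after automated online calibration is to compare each camera's calibrated mounting angle against the consensus of the three. The median-based check, tolerance, and camera labels below are assumptions for illustration; the patent does not disclose this specific test.

```python
def detect_abnormal_camera(aoc_angles, tolerance_deg=1.0):
    """Flag cameras whose AOC-estimated mounting angle deviates from the
    median over all cameras by more than the tolerance."""
    values = sorted(aoc_angles.values())
    median = values[len(values) // 2]
    return [cam for cam, a in aoc_angles.items() if abs(a - median) > tolerance_deg]
```

Running VDC first removes the shared dynamic component (pitch from acceleration, braking), so a remaining per-camera deviation points to a miscalibrated or displaced camera rather than vehicle motion.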
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND COMPUTER-READABLE RECORDING MEDIUM
An information processing apparatus according to the embodiment includes a control unit (corresponding to an example of a “controller”). The control unit performs attitude estimation processing to estimate the attitude of an onboard camera based on optical flows of feature points in a region of interest set in an image captured by the onboard camera. When the onboard camera is mounted in a first state, the control unit performs first attitude estimation processing using a first region of interest set in a rectangular shape, and, when the onboard camera is mounted in a second state, the control unit performs second attitude estimation processing using a second region of interest set in accordance with the shape of a road surface.
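The switch between the two regions of interest can be sketched as follows: a fixed rectangle for the first mounting state, and a region derived from the road-surface shape for the second. The lower-half rectangle, the row-mask representation of the road surface, and the function name are illustrative assumptions.

```python
def select_roi(mount_state, image_w, image_h, road_mask=None):
    """Choose the region of interest for optical-flow attitude estimation:
    a rectangular ROI in the first mounting state, or a ROI set in
    accordance with the road-surface shape (here: the rows where a
    per-row road mask is set) in the second state."""
    if mount_state == "first":
        # Illustrative fixed rectangle: the lower half of the image
        return ("rect", (0, image_h // 2, image_w, image_h))
    rows = [r for r, has_road in enumerate(road_mask) if has_road]
    return ("road", (min(rows), max(rows)))
```

Feature points for the optical flows would then be tracked only inside the returned region, so the attitude estimate is driven by road-surface motion in both mounting states.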