Patent classifications
G01S13/867
Method and system for generating and updating digital maps
A method and control system are provided for generating and updating digital maps using a plurality of passages along a road portion by at least one road vehicle. The method comprises obtaining positioning data and sensor data for each passage from the at least one road vehicle. Further, the method comprises forming a sub-map representation of the surrounding environment at each obtained longitudinal position based on the obtained sensor data, and estimating a longitudinal error for each obtained longitudinal position within each segment. Furthermore, the method comprises determining a new plurality of longitudinal positions of each road vehicle for each passage by applying the estimated longitudinal error to each corresponding obtained longitudinal position, and applying the determined new plurality of longitudinal positions to the associated sensor data in order to generate a first layer of a map representation of the surrounding environment along the road portion.
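The core of the correction step can be sketched in a few lines: subtract each estimated longitudinal error from the corresponding obtained position, then re-index the sensor data by the corrected positions. This is only a minimal illustration of the idea; the function names, the sign convention for the error, and the dictionary layer format are assumptions, not taken from the patent.

```python
def correct_positions(positions, errors):
    """Apply the estimated longitudinal error to each obtained
    longitudinal position to obtain the new positions."""
    return [p - e for p, e in zip(positions, errors)]

def build_first_layer(positions, errors, sensor_data):
    """Re-index the associated sensor data by the corrected longitudinal
    positions to form a first layer of the map representation."""
    corrected = correct_positions(positions, errors)
    return dict(zip(corrected, sensor_data))
```

For example, a passage observed at positions 10.0 m and 20.0 m with estimated errors +0.5 m and -0.5 m would be re-anchored at 9.5 m and 20.5 m.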
Bad weather judgment apparatus and bad weather judgment method thereof
A bad weather judgment apparatus and a bad weather judgment method thereof are disclosed. The apparatus includes a target recognizer configured to recognize targets in detection areas of a plurality of heterogeneous sensors based on sensor recognition information received from the heterogeneous sensors, a counter configured to count the number of cases based on detection states of the heterogeneous sensors about a same target among the targets, and a bad weather judger configured to determine whether the same target is present in bad weather judgment zones of the detection areas of the heterogeneous sensors, control the counter to increment or decrement the number of the cases based on detection states of the heterogeneous sensors about whether the same target is present in the bad weather judgment zones, and judge current weather to be bad weather when the number of the cases is greater than a threshold value.
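The counting logic described above can be sketched as follows: when one sensor reports a target inside the bad weather judgment zone while another does not, the counter increments; when the sensors agree, it decrements; bad weather is declared once the count exceeds a threshold. The zone geometry, sensor names, and threshold value are illustrative assumptions.

```python
THRESHOLD = 5  # assumed threshold; the patent does not specify a value

def in_judgment_zone(pos, zone):
    """Check whether a target position (x, y) lies in a sensor's
    bad weather judgment zone, given as ((xmin, xmax), (ymin, ymax))."""
    (xmin, xmax), (ymin, ymax) = zone
    return xmin <= pos[0] <= xmax and ymin <= pos[1] <= ymax

def update_counter(counter, detections, zones):
    """Increment when the heterogeneous sensors disagree about the same
    target inside the judgment zones, decrement (clamped at zero) when
    they agree. `detections` maps sensor name -> position or None."""
    states = []
    for sensor, pos in detections.items():
        states.append(pos is not None and in_judgment_zone(pos, zones[sensor]))
    if any(states) and not all(states):
        counter += 1          # disagreement -> evidence of bad weather
    else:
        counter = max(0, counter - 1)
    return counter

def is_bad_weather(counter):
    return counter > THRESHOLD
```

A radar that keeps seeing a target the camera has lost, for example, drives the counter upward until the bad weather judgment fires.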
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING PROGRAM, AND INFORMATION PROCESSING METHOD
A processing load in a case where a plurality of different sensors is used can be reduced. An information processing apparatus according to an embodiment includes: a recognition processing unit (15, 40b) configured to perform recognition processing for recognizing a target object by adding, to an output of a first sensor (23), region information that is generated according to object likelihood detected in a process of object recognition processing based on an output of a second sensor (21) different from the first sensor.
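One way to read this load-reduction scheme is that the second sensor's object-likelihood map yields a region of interest, and the first sensor's output is cropped to that region before recognition runs. The grid representation, threshold, and function names below are assumptions made for illustration only.

```python
def roi_from_likelihood(likelihood, threshold):
    """Bounding box (row0, col0, row1, col1) around grid cells whose
    object likelihood exceeds the threshold; None when nothing qualifies."""
    hits = [(r, c) for r, row in enumerate(likelihood)
            for c, v in enumerate(row) if v > threshold]
    if not hits:
        return None
    rows = [r for r, _ in hits]
    cols = [c for _, c in hits]
    return (min(rows), min(cols), max(rows) + 1, max(cols) + 1)

def crop(image, roi):
    """Restrict the first sensor's output to the region information,
    so recognition processing only runs on the cropped window."""
    r0, c0, r1, c1 = roi
    return [row[c0:c1] for row in image[r0:r1]]
```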
SENSOR AIMING DEVICE, DRIVING CONTROL SYSTEM, AND CORRECTION AMOUNT ESTIMATION METHOD
A sensor aiming device includes: a target positional relationship processing unit for outputting positional relationship information of first and second targets; a sensor observation information processing unit configured to convert the observation result of the first and second targets into a predetermined unified coordinate system according to a coordinate conversion parameter, perform time synchronization at a predetermined timing, and extract first target information indicating a position of the first target and second target information indicating a position of the second target; a position estimation unit configured to estimate a position of the second target using the first target information, the second target information, and the positional relationship information; and a sensor correction amount estimation unit configured to calculate a deviation amount of the second sensor using the second target information and an estimated position of the second target and estimate a correction amount.
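The processing chain can be sketched as: convert observations into a unified frame, estimate the second target's position from the first target plus the known positional relationship, then take the deviation between observed and estimated positions. A planar rotation-plus-translation conversion is assumed here for simplicity; all function names are illustrative.

```python
import math

def to_unified(obs, yaw, tx, ty):
    """Convert a sensor-frame observation (x, y) into the unified
    coordinate system using a conversion parameter (yaw, tx, ty)."""
    x, y = obs
    c, s = math.cos(yaw), math.sin(yaw)
    return (c * x - s * y + tx, s * x + c * y + ty)

def estimate_second(first_pos, relationship):
    """Estimate the second target's position from the first target's
    position and the positional relationship between the two targets."""
    return (first_pos[0] + relationship[0], first_pos[1] + relationship[1])

def deviation(observed_second, estimated_second):
    """Deviation amount between the observed and estimated second-target
    positions; its negation can serve as the correction amount."""
    return (observed_second[0] - estimated_second[0],
            observed_second[1] - estimated_second[1])
```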
SYNTHETIC GEOREFERENCED WIDE-FIELD OF VIEW IMAGING SYSTEM
An imaging system for an aircraft is disclosed. A plurality of image sensors are attached, affixed, or secured to the aircraft. Each image sensor is configured to generate sensor-generated pixels based on an environment surrounding the aircraft. Each of the sensor-generated pixels is associated with respective pixel data including position data, intensity data, time-of-acquisition data, sensor-type data, pointing angle data, latitude data, and longitude data. A controller generates a buffer image including synthetic-layer pixels, maps the sensor-generated pixels to the synthetic-layer pixels in the buffer image, fills a plurality of regions of the buffer image with the sensor-generated pixels, and presents the buffer image on a head-mounted display (HMD) to a user of the aircraft.
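The buffer-fill step can be illustrated as placing each sensor-generated pixel, carried with its metadata, at its mapped location in a synthetic buffer image. The pixel record layout and overwrite policy below are assumptions for illustration, not details from the patent.

```python
def blank_buffer(height, width):
    """Synthetic buffer image initialised with empty synthetic-layer pixels."""
    return [[None] * width for _ in range(height)]

def fill_buffer(buffer, pixels):
    """Map sensor-generated pixels into the buffer image; each pixel record
    carries its mapped (row, col) plus metadata, and later pixels
    overwrite earlier ones at the same location."""
    for px in pixels:
        r, c = px["row"], px["col"]
        if 0 <= r < len(buffer) and 0 <= c < len(buffer[0]):
            buffer[r][c] = px["intensity"]
    return buffer
```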
Object recognition device and object recognition method
In this object recognition device, an association between a first object detection result and a second object detection result is performed in a region excluding an occlusion area. When the first object detection result and the second object detection result are determined to be detection results for an identical object, a recognition result of the surrounding object is calculated from the first object detection result and the second object detection result. Thus, occurrences of erroneous recognition of an object can be decreased as compared to a conventional object recognition device of a vehicle.
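A minimal sketch of occlusion-excluded association: detections from either sensor that fall inside the occlusion area are skipped, and the remaining ones are paired by nearest distance within a threshold. The rectangular occlusion region and point-based detections are simplifying assumptions.

```python
def in_region(pos, region):
    """Check whether a 2D detection lies in a rectangular region
    given as (xmin, ymin, xmax, ymax)."""
    xmin, ymin, xmax, ymax = region
    return xmin <= pos[0] <= xmax and ymin <= pos[1] <= ymax

def associate(first, second, occlusion, thresh):
    """Pair first/second detection results by nearest distance, taking the
    association only in the region excluding the occlusion area."""
    pairs = []
    for a in first:
        if in_region(a, occlusion):
            continue
        best, best_d = None, thresh
        for b in second:
            if in_region(b, occlusion):
                continue
            d = ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
            if d < best_d:
                best, best_d = b, d
        if best is not None:
            pairs.append((a, best))  # treated as the identical object
    return pairs
```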
Automatic wall climbing type radar photoelectric robot system for non-destructive inspection and diagnosis of damages of bridge and tunnel structure
An automatic wall climbing type radar photoelectric robot system for non-destructive inspection and diagnosis of damages of a bridge and tunnel structure, mainly including a control terminal, a wall climbing robot and a server. The wall climbing robot generates a reverse thrust by rotor systems and moves flexibly along the surface of a rough bridge and tunnel structure by adopting an omnidirectional wheel technology; during inspection by the wall climbing robot, bridges and tunnels do not need to be closed, and traffic is not affected. By means of UWB localization, laser SLAM and IMU navigation technologies, bridges and tunnels can be divided into different working regions simply by arranging a plurality of UWB base stations and charging and data receiving devices on the bridge and tunnel structure; a plurality of wall climbing robots are supported to work at the same time, automatic path planning and automatic obstacle avoidance are realized, and unattended regular automatic patrolling can be achieved.
System and method for ordered representation and feature extraction for point clouds obtained by detection and ranging sensor
A method is described which includes receiving a point cloud having a plurality of data points each representing a 3D location in a 3D space, the point cloud being obtained using a detection and ranging (DAR) sensor. For each data point, the method includes associating the data point with a 3D volume containing the 3D location of the data point, the 3D volume being defined using a 3D lattice that partitions the 3D space based on spherical coordinates. For at least one 3D volume, the data points within the 3D volume are sorted based on at least one dimension of the 3D lattice, and the sorted data points are stored as a set of ordered data points. The method also includes performing feature extraction on the set of ordered data points to generate a set of ordered feature vectors and providing the set of ordered feature vectors to perform a machine learning inference task.
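The partition-and-sort steps can be sketched as: convert each Cartesian point to spherical coordinates, bin it into a (range, azimuth, elevation) lattice cell, then sort each cell's points along one lattice dimension (range is used here). The cell resolutions and the choice of sort dimension are illustrative assumptions.

```python
import math
from collections import defaultdict

def spherical_bin(point, r_res=1.0, az_res=math.radians(1.0),
                  el_res=math.radians(1.0)):
    """Map a Cartesian 3D point to a (range, azimuth, elevation)
    cell of a spherical-coordinate lattice."""
    x, y, z = point
    r = math.sqrt(x * x + y * y + z * z)
    az = math.atan2(y, x)
    el = math.atan2(z, math.sqrt(x * x + y * y))
    return (int(r // r_res), int(az // az_res), int(el // el_res))

def order_point_cloud(points):
    """Group points into lattice cells, then sort each cell's points by
    range so downstream feature extraction sees ordered data points."""
    cells = defaultdict(list)
    for p in points:
        cells[spherical_bin(p)].append(p)
    return {cell: sorted(pts, key=lambda p: p[0] ** 2 + p[1] ** 2 + p[2] ** 2)
            for cell, pts in cells.items()}
```

Feature extraction (not shown) would then run per cell over the ordered points.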
Determining relevant signals using multi-dimensional radar signals
A method and electronic device for determining relevant signals in radar signal processing. The electronic device includes a radar transceiver, a memory, and a processor. The processor is configured to cause the electronic device to obtain, via the radar transceiver of the electronic device, radar measurements for one or more modes in a set of modes; process the radar measurements to obtain a set of radar images; identify relevant signals in the set of radar images based on signal determination criteria for an application; and perform the application using only the relevant signals.
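One plausible form of the signal determination criteria is an application-specific SNR threshold over the radar image cells, keeping only cells strong enough above the noise floor. The power-grid image format and the dB criterion are assumptions for illustration.

```python
def relevant_signals(radar_image, noise_floor, snr_db):
    """Return the (row, col) cells of a radar power image whose power
    exceeds the noise floor by the application's SNR criterion."""
    thresh = noise_floor * 10 ** (snr_db / 10)
    return [(r, c) for r, row in enumerate(radar_image)
            for c, v in enumerate(row) if v > thresh]
```

The application would then run on only these cells rather than the full image, as the abstract describes.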
OBJECT RECOGNITION DEVICE AND OBJECT RECOGNITION METHOD
Provided is an object recognition device including a prediction processing unit, a temporary setting unit, and an association processing unit. The prediction processing unit predicts, as a prediction position on an object model obtained by modeling a tracking target, a position of a movement destination of the tracking target based on a trajectory formed by movement of at least one object of a plurality of objects as the tracking target. The temporary setting unit sets, based on specifications of a sensor that has detected the tracking target, a position of at least one candidate point on the object model. The association processing unit sets, based on the position of the candidate point and the prediction position, a reference position on the object model. The association processing unit then determines whether a detection point, obtained at a time when the sensor has detected the at least one object of the plurality of objects, and the prediction position are associated with each other, based on a positional relationship between the detection point and an association range that is set with the reference position on the object model as a reference.
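The gating decision at the end can be sketched in 2D: shift the prediction by the candidate point's offset to obtain the reference position, then associate the detection point when it falls inside a circular association range around that reference. The circular gate and the offset representation are simplifying assumptions, not the patent's actual geometry.

```python
def reference_position(predicted, candidate_offset):
    """Reference position on the object model: the prediction position
    shifted by the candidate point's offset on the model."""
    return (predicted[0] + candidate_offset[0],
            predicted[1] + candidate_offset[1])

def associates(detection, predicted, candidate_offset, gate_radius):
    """Decide whether the detection point and the prediction position are
    associated, i.e. whether the detection lies inside the association
    range centred on the reference position."""
    ref = reference_position(predicted, candidate_offset)
    dx, dy = detection[0] - ref[0], detection[1] - ref[1]
    return dx * dx + dy * dy <= gate_radius ** 2
```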