G01S7/4808

Method and apparatus for detecting obstacle

Embodiments of the present disclosure provide a method and apparatus for detecting an obstacle. The method may include: acquiring first point cloud data collected by a first vehicle-mounted laser radar and second point cloud data collected by a second vehicle-mounted laser radar, where the first vehicle-mounted laser radar is mounted higher above the ground than the second vehicle-mounted laser radar and has a greater number of laser beams (channels) than the second vehicle-mounted laser radar; performing ground estimation based on the first point cloud data; filtering out ground points in the second point cloud data according to the ground estimation result of the first point cloud data; and performing obstacle detection based on the second point cloud data after the ground points are filtered out.
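The abstract does not specify how the ground is estimated or how ground points are filtered; a minimal sketch, assuming a RANSAC-style plane fit on the first (higher, denser) point cloud and a distance threshold applied to the second cloud, could look like this. The function names `fit_ground_plane` and `remove_ground` are illustrative, not from the patent:

```python
import numpy as np

def fit_ground_plane(points, iters=100, tol=0.2, seed=0):
    """RANSAC-style plane fit on an (N, 3) cloud.
    Returns (normal, d) with normal . p + d = 0 for points p on the plane."""
    rng = np.random.default_rng(seed)
    best_inliers, best_model = 0, None
    for _ in range(iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:          # degenerate (collinear) sample, try again
            continue
        normal /= norm
        d = -normal @ sample[0]
        inliers = int((np.abs(points @ normal + d) < tol).sum())
        if inliers > best_inliers:
            best_inliers, best_model = inliers, (normal, d)
    return best_model

def remove_ground(points, model, tol=0.2):
    """Drop points within tol of the estimated ground plane."""
    normal, d = model
    return points[np.abs(points @ normal + d) >= tol]
```

In the claimed arrangement, the plane would be fit on the first lidar's cloud and `remove_ground` applied to the second lidar's cloud before clustering-based obstacle detection.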

Multi-channel lidar sensor module

The present invention relates to a multi-channel lidar sensor module capable of measuring at least two target objects using one image sensor. The multi-channel lidar sensor module according to an embodiment of the present invention includes at least one pair of light emitting units configured to emit laser beams, and a light receiving unit formed between the at least one pair of light emitting units and configured to receive at least one pair of reflected laser beams which are emitted from the at least one pair of light emitting units and reflected by target objects.

System and method for detecting unmanned aerial vehicles

A method for detecting unmanned aerial vehicles (UAVs) includes detecting an unknown flying object in a monitored zone of airspace. An image of the detected unknown flying object is captured. The captured image is analyzed to classify the detected unknown flying object. A determination is made, based on the analyzed image, whether the detected unknown flying object comprises a UAV.
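The claimed pipeline is detect, capture, classify, decide. A minimal sketch of that control flow, with hypothetical `capture_image` and `classify` callables standing in for the camera and image-analysis stages (the threshold 0.5 is an assumption, not from the patent):

```python
def detect_uav(track, capture_image, classify, min_confidence=0.5):
    """Hypothetical pipeline: unknown flying-object track -> image ->
    (label, confidence) -> UAV yes/no decision."""
    if track is None:            # nothing detected in the monitored zone
        return False
    image = capture_image(track)
    label, confidence = classify(image)
    return label == "uav" and confidence >= min_confidence
```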

Surveying data processing device, surveying data processing method, and surveying data processing program
11580696 · 2023-02-14

A surveying data processing device includes a point cloud data acquiring unit, a three-dimensional model acquiring unit, a first correspondence relationship determining unit, an extended three-dimensional data generating unit, and a second correspondence relationship determining unit. The point cloud data acquiring unit acquires first point cloud data obtained by laser scanning at a first viewpoint, and acquires second point cloud data obtained by laser scanning at a second viewpoint. The three-dimensional model acquiring unit acquires data of a three-dimensional model. The first correspondence relationship determining unit obtains a correspondence relationship between the first point cloud data and the three-dimensional model. The extended three-dimensional data generating unit generates extended three-dimensional data, in which the first point cloud data is extended, on the basis of the correspondence relationship. The second correspondence relationship determining unit determines a correspondence relationship between the extended three-dimensional data and the second point cloud data.
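The abstract leaves open how a correspondence relationship between two point clouds is determined; one common choice is nearest-neighbor matching within a distance gate. A minimal brute-force sketch under that assumption (the function name and `max_dist` parameter are illustrative):

```python
import numpy as np

def correspondences(src, dst, max_dist=1.0):
    """Brute-force nearest-neighbor matching: for each point in src,
    the closest point in dst within max_dist. Returns (i, j) index pairs."""
    pairs = []
    for i, p in enumerate(src):
        d = np.linalg.norm(dst - p, axis=1)
        j = int(np.argmin(d))
        if d[j] <= max_dist:
            pairs.append((i, j))
    return pairs
```

In the claimed device this step would run between the extended three-dimensional data and the second point cloud; a k-d tree would replace the brute-force search at scale.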

INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING PROGRAM, AND INFORMATION PROCESSING METHOD

An information processing apparatus that can reduce the processing load when a plurality of different sensors are used. The information processing apparatus according to an embodiment includes: a recognition processing unit (15, 40b) configured to perform recognition processing for recognizing a target object by adding, to an output of a first sensor (23), region information that is generated according to an object likelihood detected in the course of object recognition processing based on an output of a second sensor (21) different from the first sensor.

THREE-DIMENSIONAL MEASUREMENT DEVICE FOR GENERATING THREE-DIMENSIONAL POINT POSITION INFORMATION
20230043994 · 2023-02-09

A three-dimensional measurement device includes a camera for acquiring position information for three-dimensional points on the surface of an object on the basis of the time of flight of light, and a control device. The camera acquires, at a plurality of relative positions of the camera with respect to a workpiece, three-dimensional point position information. A plurality of evaluation regions are defined for the workpiece. The control device specifies, for each evaluation region, the three-dimensional point closest to a reference plane from among three-dimensional points detected in the evaluation region. The control device generates, on the basis of the multiple three-dimensional points specified for the respective evaluation regions, three-dimensional point position information in which multiple pieces of three-dimensional point position information acquired by the camera are combined.
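The core selection step, picking, per evaluation region, the detected 3D point closest to a reference plane, can be sketched directly. This is a minimal illustration assuming region membership is given as an integer label per point; the function name and argument layout are not from the patent:

```python
import numpy as np

def closest_point_per_region(points, region_ids, plane_normal, plane_d):
    """For each evaluation region, return the 3D point closest to the
    reference plane n . p + d = 0.
    points: (N, 3); region_ids: (N,) integer region label per point."""
    n = np.asarray(plane_normal, float)
    n = n / np.linalg.norm(n)
    dist = np.abs(points @ n + plane_d)      # point-to-plane distances
    result = {}
    for rid in np.unique(region_ids):
        mask = region_ids == rid
        idx = np.flatnonzero(mask)[np.argmin(dist[mask])]
        result[int(rid)] = points[idx]
    return result
```

The selected points, one per region and per camera position, would then be merged into the combined three-dimensional point position information described in the abstract.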

Single-camera particle tracking system and method

A method for tracking moving particles in a fluid. The method includes illuminating the moving particles with an illumination sequence of patterns generated by a light projector; measuring with a single camera light intensities reflected by the moving particles; calculating, based on the measured light intensities, digital coordinates (x′, y′, z′) of the moving particles; determining a mapping function f that maps the digital coordinates (x′, y′, z′) of the moving particles to physical coordinates (x, y, z) of the moving particles; and calculating the physical coordinates (x, y, z) of the moving particles based on the mapping function f. The illumination sequence of patterns is generated with a single wavelength, and light emitted by the projector is perpendicular to light received by the single camera.
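The abstract does not state the form of the mapping function f. As a minimal sketch, assuming f is approximated by an affine map fitted by least squares from calibration pairs of digital and physical coordinates (a real calibration may well use a higher-order polynomial), one could write:

```python
import numpy as np

def fit_linear_mapping(digital, physical):
    """Least-squares affine map f: (x', y', z') -> (x, y, z).
    digital, physical: (N, 3) calibration pairs.
    Returns (A, b) such that physical ~= digital @ A + b."""
    X = np.hstack([digital, np.ones((len(digital), 1))])
    coef, *_ = np.linalg.lstsq(X, physical, rcond=None)
    return coef[:3], coef[3]

def apply_mapping(A, b, digital):
    """Map digital coordinates to physical coordinates."""
    return digital @ A + b
```

Once fitted on a calibration target, `apply_mapping` converts the per-frame digital particle coordinates to physical ones.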

Automatic wall climbing type radar photoelectric robot system for non-destructive inspection and diagnosis of damages of bridge and tunnel structure

An automatic wall climbing type radar photoelectric robot system for non-destructive inspection and diagnosis of damages of a bridge and tunnel structure, mainly including a control terminal, a wall climbing robot and a server. The wall climbing robot generates a reverse thrust with its rotor systems and moves flexibly along the rough surface of a bridge and tunnel structure using an omnidirectional wheel technology; during inspection by the wall climbing robot, bridges and tunnels do not need to be closed and traffic is not affected. By means of UWB localization, laser SLAM and IMU navigation technologies, bridges and tunnels can be divided into different working regions simply by arranging a plurality of UWB base stations and charging and data receiving devices on the bridge and tunnel structure; a plurality of wall climbing robots are supported to work at the same time, automatic path planning and automatic obstacle avoidance are realized, and unattended regular automatic patrolling can be realized.

Method of localization using multi sensor and robot implementing same

Disclosed herein are a method of localization using multiple sensors and a robot implementing the same. The method includes: sensing the distance between the robot and an object placed outside of the robot and generating a first LiDAR frame with the robot's LiDAR sensor while a moving unit moves the robot; capturing an image of an object placed outside of the robot and generating a first visual frame with the robot's camera sensor; comparing a LiDAR frame stored in a map storage of the robot with the first LiDAR frame; comparing a visual frame registered in a frame node of a pose graph with the first visual frame; determining the accuracy of the comparison results of the first LiDAR frame; and calculating, by a controller, a current position of the robot.
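The abstract does not say how the controller combines the LiDAR and visual comparison results into one position. A minimal sketch, assuming the two sensor-specific position estimates are simply blended with weights proportional to their comparison accuracies (the function and parameter names are illustrative):

```python
def fuse_pose(lidar_pose, visual_pose, lidar_accuracy, visual_accuracy):
    """Accuracy-weighted blend of two position estimates (tuples of equal
    length). A stand-in for the controller's final localization step."""
    w = lidar_accuracy / (lidar_accuracy + visual_accuracy)
    return tuple(w * l + (1 - w) * v for l, v in zip(lidar_pose, visual_pose))
```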

System and method for vehicle position and velocity estimation based on camera and LIDAR data
11557128 · 2023-01-17

A system and method for vehicle position and velocity estimation based on camera and LIDAR data are disclosed. A particular embodiment includes: receiving input object data from a subsystem of an autonomous vehicle, the input object data including image data from an image generating device and distance data from a distance measuring device; determining a two-dimensional (2D) position of a proximate object near the autonomous vehicle using the image data received from the image generating device; tracking a three-dimensional (3D) position of the proximate object using the distance data received from the distance measuring device over a plurality of cycles and generating tracking data; determining a 3D position of the proximate object using the 2D position, the distance data received from the distance measuring device, and the tracking data; determining a velocity of the proximate object using the 3D position and the tracking data; and outputting the 3D position and velocity of the proximate object relative to the autonomous vehicle.
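The velocity-from-tracking step can be sketched in its simplest form: a finite difference over the most recent tracking cycles. This is an illustrative assumption; the patented embodiment may use a filter (e.g., Kalman) rather than a raw difference:

```python
import numpy as np

def estimate_velocity(positions, dt):
    """Finite-difference velocity from tracked 3D positions.
    positions: (N, 3) array, one row per tracking cycle; dt: cycle period
    in seconds. Returns the (3,) velocity of the most recent cycle."""
    if len(positions) < 2:
        return np.zeros(3)       # not enough history to difference
    return (positions[-1] - positions[-2]) / dt
```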