Patent classifications
G01S7/4808
MONITORING CONSTRUCTION OF A STRUCTURE
Methods, apparatuses, and embodiments related to a technique for monitoring construction of a structure. In an example, a robot with a sensor, such as a LIDAR device, enters a building and obtains sensor readings of the building. The sensor data is analyzed and components related to the building are identified. The components are mapped to corresponding components of an architect's three-dimensional design of the building, and the installation of the components is checked for accuracy. When a discrepancy above a certain threshold is detected, an error is flagged and project managers are notified. Construction progress updates do not give credit for completed construction that contains an error, resulting in more accurate progress updates and correspondingly more accurate project schedule and cost estimates.
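The check described in this abstract can be sketched in Python; the function names, the 5 cm deviation threshold, and the equal-weight progress metric are illustrative assumptions, not details from the patent:

```python
import math

def check_components(design, as_built, threshold=0.05):
    """Compare as-built component positions (from sensor data) against the
    design model's positions; flag any deviation above the threshold (metres).

    Only error-free components earn progress credit, so the returned
    progress fraction excludes flawed work."""
    errors = []
    completed = 0
    for name, pos in as_built.items():
        deviation = math.dist(pos, design[name])
        if deviation > threshold:
            errors.append((name, deviation))  # flag for project managers
        else:
            completed += 1
    return completed / len(design), errors
```

With a two-component design where one duct is installed 20 cm off its designed position, the duct is flagged and progress is reported as 50%, not 100%.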
Systems and Methods for Controlling an Autonomous Vehicle with Occluded Sensor Zones
Systems and methods for controlling an autonomous vehicle are provided. In one example embodiment, a computer-implemented method includes obtaining sensor data indicative of a surrounding environment of the autonomous vehicle, the surrounding environment including one or more occluded sensor zones. The method includes determining that a first occluded sensor zone of the occluded sensor zone(s) is occupied based at least in part on the sensor data. The method includes, in response to determining that the first occluded sensor zone is occupied, controlling the autonomous vehicle to travel clear of the first occluded sensor zone.
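The decision flow in this abstract can be sketched as a toy 2D occupancy check; the zone representation, the speed cap, and the "steer clear" flag are illustrative assumptions rather than the patent's actual motion-planning logic:

```python
from dataclasses import dataclass

@dataclass
class OccludedZone:
    """Axis-aligned 2D region the sensors cannot fully observe."""
    x_min: float
    x_max: float
    y_min: float
    y_max: float

def zone_occupied(zone, points):
    """A zone counts as occupied if any sensor return falls inside it."""
    return any(zone.x_min <= x <= zone.x_max and zone.y_min <= y <= zone.y_max
               for x, y in points)

def control_command(zone, points, current_speed):
    """If the occluded zone is occupied, command the vehicle to travel
    clear of it (here: cap speed and request a lateral avoidance)."""
    if zone_occupied(zone, points):
        return {"speed_mps": min(current_speed, 5.0), "steer_clear": True}
    return {"speed_mps": current_speed, "steer_clear": False}
```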
METHOD AND DEVICE OF LABELING LASER POINT CLOUD
The present application discloses a method and a device for labeling a laser point cloud. The method comprises: receiving data of a laser point cloud; constructing a 3D scene and establishing a 3D coordinate system corresponding to the 3D scene; converting the coordinate of each laser point in the laser point cloud into a 3D coordinate in the 3D coordinate system; mapping the laser points included in the laser point cloud into the 3D scene according to their respective 3D coordinates; and labeling the laser points in the 3D scene.
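The convert-map-label pipeline can be sketched with NumPy; treating the coordinate conversion as a pure translation and labeling by an axis-aligned box are simplifying assumptions for illustration:

```python
import numpy as np

def to_scene_coords(raw_points, scene_origin):
    """Convert each laser point's raw coordinate into the 3D scene's
    coordinate system (modelled here as a translation by the origin)."""
    return np.asarray(raw_points, dtype=float) - np.asarray(scene_origin, dtype=float)

def label_in_box(scene_points, box_min, box_max, label):
    """Assign a label to every mapped point inside an axis-aligned 3D box;
    points outside stay unlabeled (None)."""
    pts = np.asarray(scene_points, dtype=float)
    inside = np.all((pts >= box_min) & (pts <= box_max), axis=1)
    return [label if hit else None for hit in inside]
```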
TIME-OF-FLIGHT SENSING FOR HORTICULTURE
The invention provides a sensing system (1000), e.g. for agricultural application, comprising a radiation generator (100), a sensing apparatus (200), and a control system (300) functionally coupled to the radiation generator (100) and the sensing apparatus (200), wherein the sensing system (1000) has one or more time-of-flight sensing modes of operation, wherein the generator (100) is configured to generate a pulse of radiation (111) in the one or more time-of-flight sensing modes of operation, and wherein the sensing apparatus (200) is configured to sense wavelength dependent spectral intensities of radiation received by the sensing apparatus (200) as a function of time in the one or more time-of-flight sensing modes, to provide a sensing system signal; wherein the sensing system signal is indicative of the wavelength dependent spectral intensity distribution of the received radiation as a function of time in the one or more time-of-flight sensing modes.
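The core time-of-flight relation, and one simple way to read a "spectral intensity as a function of time" signal, can be sketched as follows; the dict-of-samples signal format is an illustrative assumption, not the patent's signal representation:

```python
C_M_PER_S = 299_792_458.0  # speed of light in vacuum

def distance_from_tof(delta_t_s):
    """Round-trip time of flight to one-way distance: d = c * delta_t / 2."""
    return C_M_PER_S * delta_t_s / 2.0

def peak_arrival_per_wavelength(spectra):
    """For each wavelength band (nm), pick the arrival time whose sampled
    intensity is highest -- one minimal reading of a wavelength-dependent
    spectral intensity recorded as a function of time."""
    return {wl: max(samples, key=lambda s: s[1])[0]
            for wl, samples in spectra.items()}
```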
TIME-OF-FLIGHT IMAGING CIRCUITRY, TIME-OF-FLIGHT IMAGING SYSTEM, TIME-OF-FLIGHT IMAGING METHOD
The present disclosure generally pertains to a time-of-flight imaging circuitry configured to: obtain first image data from an image sensor, the first image data being indicative of a scene, which is illuminated with spotted light; determine a first image feature in the first image data; obtain second image data from the image sensor, the second image data being indicative of the scene; determine a second image feature in the second image data; estimate a motion of the second image feature with respect to the first image feature; and merge the first and the second image data based on the estimated motion.
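The estimate-then-merge step can be sketched with NumPy; reducing the motion to a single global translation and merging by averaging after an integer pixel shift are illustrative simplifications:

```python
import numpy as np

def estimate_motion(features_1, features_2):
    """Estimate a global (row, col) translation between matched features
    of two frames (assumes index-wise correspondence between the lists)."""
    return np.mean(np.asarray(features_2, float) - np.asarray(features_1, float),
                   axis=0)

def merge_frames(image_1, image_2, motion):
    """Merge frame 2 into frame 1 after compensating the estimated motion
    with an integer pixel shift, then averaging the aligned frames."""
    dr, dc = np.round(motion).astype(int)
    aligned = np.roll(np.roll(np.asarray(image_2, float), -dr, axis=0), -dc, axis=1)
    return (np.asarray(image_1, float) + aligned) / 2.0
```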
INFORMATION PROCESSING DEVICE, CONTROL METHOD, PROGRAM AND STORAGE MEDIUM
The control unit 15 of the in-vehicle device 1 is configured to extract, from voxel data VD, which represents position information of an object for each of the unit areas (voxels) into which a space is divided, the voxel data VD of plural voxels located at or around the own vehicle. The control unit 15 then calculates a normal vector of an approximate plane fitted to the extracted voxel data VD of the plural voxels. Finally, the control unit 15 calculates at least one of a pitch angle or a roll angle of the own vehicle based on the orientation of the own vehicle and the normal vector.
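The plane fit and angle computation can be sketched with NumPy; fitting the approximate plane via SVD of the centred points and deriving pitch/roll from the normal projected onto heading-aligned axes are one plausible realisation, not the device's documented algorithm:

```python
import numpy as np

def ground_normal(voxel_points):
    """Fit an approximate plane to voxel positions around the vehicle and
    return its unit normal: the singular vector of the centred points
    with the smallest singular value, oriented upward."""
    pts = np.asarray(voxel_points, dtype=float)
    centred = pts - pts.mean(axis=0)
    _, _, vt = np.linalg.svd(centred)
    n = vt[-1]
    return n if n[2] > 0 else -n

def pitch_roll(normal, yaw):
    """Pitch and roll (radians) from the ground normal and the vehicle's
    heading: project the normal onto the forward and left axes."""
    fwd = np.array([np.cos(yaw), np.sin(yaw), 0.0])
    left = np.array([-np.sin(yaw), np.cos(yaw), 0.0])
    return np.arcsin(np.dot(normal, fwd)), np.arcsin(np.dot(normal, left))
```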
DETECTION DEVICE OF LIDAR, LIDAR, AND DETECTION METHOD THEREFOR
A detection device of a light detection and ranging (lidar) device, a detection method, and a lidar device are provided. The detection device predicts the location of the light spots of a reflected echo on a detector array and reads the electric signals of the subset of photodetectors corresponding to the light spots. According to the detection method, the location of the light spots of a reflected echo on the detector array is predicted from the time of flight of the detection beam, the subset of photodetectors corresponding to the light spots is activated, and their electric signals are read. As a result, all received light is detected without increasing the receiving field of view, ambient light interference is suppressed, and the shift of the light spots on the focal plane caused by optical path distortion is effectively handled.
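One way the spot location can depend on time of flight is transmit/receive parallax in a biaxial lidar, which can be sketched as follows; the pinhole parallax model and every parameter value (baseline, focal length, pixel pitch, window width) are illustrative assumptions:

```python
C_M_PER_NS = 0.299792458  # speed of light, metres per nanosecond

def predict_spot_pixel(tof_ns, baseline=0.05, focal=0.02, pixel_pitch=1e-5):
    """Predict which pixel of a detector row the echo's spot lands on:
    shorter time of flight means a nearer target and a larger parallax
    shift on the focal plane (simple pinhole model)."""
    distance = C_M_PER_NS * tof_ns / 2.0
    shift = focal * baseline / distance   # lateral displacement on focal plane
    return round(shift / pixel_pitch)

def active_pixels(centre, half_width=2):
    """Activate and read out only a window of photodetectors around the
    predicted spot, leaving the rest dark to ambient light."""
    return list(range(centre - half_width, centre + half_width + 1))
```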
Method for capturing at least one object, device of a sensor apparatus, sensor apparatus and driver assistance system with at least one sensor apparatus
A method is described for the capture, in particular the optical capture, of at least one object (18, 20) with at least one sensor apparatus (14) of a vehicle (10), together with a device (34) of a sensor apparatus (14), a sensor apparatus (14), and a driver assistance system (12) with at least one sensor apparatus (14). In the method, transmitted signals (36), in particular optical signals, are transmitted into a monitoring region (16) with the at least one sensor apparatus (14), and transmitted signals (36) reflected from object points (40) of the at least one object (18, 20) are captured as received signals (38) with angular resolution relative to a main monitoring direction (42) of the at least one sensor apparatus (14). A spatial distribution of the object points (40) of the at least one object (18, 20) relative to the at least one sensor apparatus (14) is determined from the relationship between the transmitted signals (36) and the received signals (38), and the at least one object (18, 20) is categorized as stationary or non-stationary. A spatial density of the captured object points (40) in at least one region of the at least one object (20) is determined; if this density is smaller than a predetermined or predeterminable threshold value, the at least one object (20) is categorized as stationary.
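The density-based categorization at the end of this abstract can be sketched as a points-per-area check over a 2D region; the rectangular region, the threshold value, and the string labels are illustrative assumptions:

```python
def point_density(object_points, region):
    """Captured object points per unit area inside a rectangular region
    (x0, x1, y0, y1)."""
    x0, x1, y0, y1 = region
    inside = sum(1 for x, y in object_points if x0 <= x <= x1 and y0 <= y <= y1)
    return inside / ((x1 - x0) * (y1 - y0))

def categorize(object_points, region, threshold=5.0):
    """Below-threshold point density -> categorize the object as
    stationary, per the described method."""
    if point_density(object_points, region) < threshold:
        return "stationary"
    return "non-stationary"
```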
Method and apparatus for optimizing scan data and method and apparatus for correcting trajectory
A method and an apparatus optimize scan data obtained by sensors on a vehicle and correct the trajectory of a vehicle/robot based on the optimized scan data. The method for optimizing the scan data obtained by scanning environment elements includes: a step of obtaining the scan data, including obtaining at least two frames of scan data respectively corresponding to different timings; a step of cluster processing based on the characteristics of the data points, including classifying the plurality of data points in each frame of the scan data into one or more clusters; a step of establishing correspondence among the at least two frames of scan data, including searching for and obtaining at least one set of clusters having correspondence; a step of optimizing clusters among the at least two frames of scan data, including conducting a calculation on each set of the at least one set of clusters having correspondence to obtain optimized clusters respectively corresponding to each set; and a step of optimizing the scan data, including accumulating all optimized clusters to obtain optimized scan data for the at least two frames of scan data.
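The cluster / correspond / optimize steps can be sketched with NumPy; greedy single-linkage clustering, nearest-centroid matching, and centroid averaging are simple stand-ins chosen for illustration, not the patented calculations:

```python
import numpy as np

def cluster(points, eps=1.0):
    """Greedy single-linkage clustering: a point joins the first cluster
    containing a neighbour within eps, else starts a new cluster."""
    clusters = []
    for p in map(np.asarray, points):
        for c in clusters:
            if min(np.linalg.norm(p - q) for q in c) <= eps:
                c.append(p)
                break
        else:
            clusters.append([p])
    return clusters

def match_and_optimize(clusters_a, clusters_b, max_dist=2.0):
    """Establish correspondence between two frames by nearest centroid,
    then optimize each matched pair by averaging its centroids."""
    cents_a = [np.mean(c, axis=0) for c in clusters_a]
    cents_b = [np.mean(c, axis=0) for c in clusters_b]
    optimized = []
    for ca in cents_a:
        cb = min(cents_b, key=lambda c: np.linalg.norm(ca - c))
        if np.linalg.norm(ca - cb) <= max_dist:
            optimized.append((ca + cb) / 2.0)
    return optimized
```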
Laser scanner with real-time, online ego-motion estimation
A method comprises accessing a data set comprising a LIDAR acquired point cloud comprising a plurality of points each of which are attributed with at least a geospatial coordinate, sub-sampling at least a portion of the plurality of points to derive a representative sample of the plurality of points and displaying the representative sample of the plurality of points.
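The sub-sampling step can be sketched as follows; uniform random sampling with a fixed seed is one simple way to derive a representative sample (the abstract does not specify the sampling scheme, and voxel-grid thinning would be an equally valid choice):

```python
import random

def subsample(points, fraction=0.1, seed=0):
    """Derive a representative sample of a point cloud by uniform random
    sub-sampling; the seed makes the sample reproducible for display."""
    rng = random.Random(seed)
    k = max(1, int(len(points) * fraction))
    return rng.sample(points, k)
```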