Patent classifications
G06T2207/10044
Point cloud registration for LiDAR labeling
The subject disclosure relates to techniques for detecting an object. A process of the disclosed technology can include steps for receiving three-dimensional (3D) Light Detection and Ranging (LiDAR) data of the object at a first time, generating a first point cloud based on the 3D LiDAR data at the first time, receiving 3D LiDAR data of the object at a second time, generating a second point cloud based on the 3D LiDAR data at the second time, aggregating the first point cloud and the second point cloud to form an aggregated point cloud, and placing a bounding box around the aggregated point cloud. Systems and machine-readable media are also provided.
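As an illustrative sketch only (the abstract does not disclose an implementation, and all function names here are hypothetical), the aggregation and bounding-box steps could be expressed in NumPy as:

```python
import numpy as np

def aggregate_point_clouds(cloud_a, cloud_b):
    """Stack two (N, 3) point clouds captured at different times."""
    return np.vstack([cloud_a, cloud_b])

def bounding_box(cloud):
    """Axis-aligned bounding box: (min corner, max corner) of an (N, 3) cloud."""
    return cloud.min(axis=0), cloud.max(axis=0)

# Two sparse LiDAR scans of the same object at a first and a second time.
scan_t1 = np.array([[0.0, 0.0, 0.0], [1.0, 2.0, 0.5]])
scan_t2 = np.array([[0.5, 1.0, 1.0], [1.5, 0.5, 0.2]])

aggregated = aggregate_point_clouds(scan_t1, scan_t2)
lo, hi = bounding_box(aggregated)   # the box placed around the aggregated cloud
```

The aggregated cloud is denser than either single-time scan, which is why a box fit to it can be tighter and more stable than one fit to a single sweep.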
Super-resolution radar for autonomous vehicles
Examples disclosed herein relate to an autonomous driving system in a vehicle. The autonomous driving system includes a radar system configured to detect a target in a path and a surrounding environment of the vehicle and produce radar data with a first resolution that is gathered over a continuous field of view on the detected target. The system includes a super-resolution network configured to receive the radar data with the first resolution and produce radar data with a second resolution different from the first resolution using first neural networks. The system also includes a target identification module configured to receive the radar data with the second resolution and to identify the detected target from the radar data with the second resolution using second neural networks. Other examples disclosed herein include a method of operating the radar system in the autonomous driving system of the vehicle.
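The two-stage pipeline (super-resolve, then identify) can be sketched as below. This is a minimal illustration only: the stand-in functions use nearest-neighbour upsampling and peak thresholding in place of the first and second neural networks, which the abstract does not specify.

```python
import numpy as np

def super_resolve(radar_lowres, factor=2):
    """Stand-in for the super-resolution network: nearest-neighbour upsampling
    from the first resolution to a finer second resolution."""
    return np.kron(radar_lowres, np.ones((factor, factor)))

def identify_target(radar_highres, threshold=0.5):
    """Stand-in for the target-identification network: return the indices of
    cells whose return strength exceeds a detection threshold."""
    return np.argwhere(radar_highres > threshold)

low = np.array([[0.1, 0.9],
                [0.2, 0.3]])        # radar data at the first resolution
high = super_resolve(low)           # radar data at the second resolution
hits = identify_target(high)        # cells attributed to the detected target
```

The design point is the ordering: identification operates on the super-resolved data, so the second network sees finer structure than the raw radar provides.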
Method and apparatus for localization of position data
Methods, systems, apparatuses, and computer program products are provided that are configured to perform localization of position data, specifically using a trained localization neural network. In the context of an apparatus, the apparatus is caused to receive observed feature representation data. The apparatus is further configured to transform the observed feature representation data into standardized feature representation data utilizing a trained localization neural network. The apparatus is further configured to compare the standardized feature representation data with map feature representation data and identify local position data based on the comparison.
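In outline, the transform-then-compare flow might look like the following sketch, where a fixed linear map stands in for the trained localization network and the comparison is a nearest-neighbour search over map features (both are assumptions; the abstract does not describe either step concretely):

```python
import numpy as np

def standardize(observed, net):
    """Stand-in for the trained localization network: a fixed linear map."""
    return observed @ net

def localize(standardized, map_features, map_positions):
    """Return the position whose map feature is nearest to the standardized
    observed feature (Euclidean distance)."""
    dists = np.linalg.norm(map_features - standardized, axis=1)
    return map_positions[np.argmin(dists)]

net = np.eye(2)                     # placeholder "trained" transform
observed = np.array([1.0, 2.0])     # observed feature representation data
map_features = np.array([[0.0, 0.0], [1.1, 1.9], [5.0, 5.0]])
map_positions = np.array([[10.0, 10.0], [42.0, 7.0], [99.0, 99.0]])

pos = localize(standardize(observed, net), map_features, map_positions)
```

Standardizing both sides into a shared representation is what makes the direct distance comparison meaningful.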
Techniques for volumetric estimation
The present disclosure relates generally to the operation of autonomous machinery performing various tasks at industrial work sites, and more particularly to the volumetric and dimensional estimation of a pile of material or other object, and to the use of multiple sensors for such estimation at those work sites. An application and a framework are disclosed for volumetric and dimensional estimation of a pile of material or other object using at least one sensor, preferably a plurality of sensors, on an autonomous machine (e.g., robotic machines or autonomous vehicles) in various work-site environments applicable to industries such as construction, mining, manufacturing, warehousing, logistics, sorting, packaging, and agriculture.
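One common way to estimate a pile's volume from sensor data, shown here purely as a sketch (the abstract does not disclose the estimation method), is to rasterise the point cloud into a ground-plane grid and integrate a height map over the occupied cells:

```python
import numpy as np

def pile_volume(points, cell=1.0):
    """Estimate pile volume from an (N, 3) point cloud by rasterising a
    height map: each occupied grid cell contributes cell**2 * max height."""
    ij = np.floor(points[:, :2] / cell).astype(int)
    ij -= ij.min(axis=0)                      # shift indices to be non-negative
    heights = {}
    for (i, j), z in zip(map(tuple, ij), points[:, 2]):
        heights[(i, j)] = max(heights.get((i, j), 0.0), z)
    return cell * cell * sum(heights.values())

# Four points spanning a 2 m x 2 m footprint, 1 m tall on one half.
pts = np.array([[0.1, 0.1, 1.0], [1.2, 0.3, 1.0],
                [0.4, 1.5, 0.5], [1.6, 1.7, 0.5]])
vol = pile_volume(pts, cell=1.0)   # 1 + 1 + 0.5 + 0.5 = 3.0 cubic metres
```

Readings from multiple sensors can simply be concatenated into `points` before rasterising, which is one way the plurality of sensors mentioned above could feed a single estimate.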
IMAGE PROCESSING VIA ISOTONIC CONVOLUTIONAL NEURAL NETWORKS
A convolutional neural network system includes a sensor and a controller, wherein the controller is configured to: receive an image from the sensor; divide the image into patches, each patch of size p; extract, via a first convolutional layer, a feature map having a number of channels based on a feature detector of size p, wherein the feature detector has a stride equal to size p; refine the feature map by alternately applying depth-wise convolutional layers and point-wise convolutional layers to obtain a refined feature map, wherein the number of channels in the feature map and the size of the feature map remain constant throughout all operations in the refinement; and output the refined feature map.
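The first layer described above, a convolution whose kernel and stride both equal the patch size p so that each patch yields exactly one feature vector, can be sketched directly in NumPy (the weights here are placeholders, not values from the disclosure):

```python
import numpy as np

def patch_embed(image, p, weights):
    """Convolution with kernel size p and stride p: one output column per
    non-overlapping p x p patch.
    image: (H, W); weights: (C, p, p) -> output: (C, H // p, W // p)."""
    h, w = image.shape
    c = weights.shape[0]
    out = np.zeros((c, h // p, w // p))
    for i in range(h // p):
        for j in range(w // p):
            patch = image[i * p:(i + 1) * p, j * p:(j + 1) * p]
            out[:, i, j] = (weights * patch).sum(axis=(1, 2))
    return out

img = np.arange(16, dtype=float).reshape(4, 4)
w = np.ones((3, 2, 2))                     # 3 channels, 2x2 feature detector
fmap = patch_embed(img, p=2, weights=w)    # feature map of shape (3, 2, 2)
```

Because stride equals kernel size, the spatial size drops to H/p by W/p in this one layer; the subsequent depth-wise and point-wise refinement then leaves both channel count and spatial size unchanged.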
Image analysis device, image analysis method, and computer-readable recording medium
An image analysis device that eases the association between an SAR image and an object is provided. The image analysis device includes: a stable reflection point identification unit that identifies, based on a plurality of synthetic aperture radar (SAR) images, stable reflection points at which reflection is stable in the plurality of SAR images; a phase identification unit that identifies a phase at each of the stable reflection points, based on the plurality of SAR images and the location of each stable reflection point in the plurality of SAR images; and a clustering unit that clusters the stable reflection points based on the Euclidean distance between the stable reflection points and the correlation of the phases at the stable reflection points.
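A minimal sketch of clustering on the two stated criteria, spatial Euclidean distance and phase correlation, is given below. The greedy single-linkage scheme and both thresholds are assumptions for illustration; the abstract does not name a clustering algorithm.

```python
import numpy as np

def cluster_stable_points(positions, phases, d_max=2.0, corr_min=0.9):
    """Greedy clustering: a stable reflection point joins an earlier point's
    cluster when they lie within d_max of each other AND their phase
    histories across the SAR image stack are strongly correlated.
    Returns one integer cluster label per point."""
    n = len(positions)
    labels = -np.ones(n, dtype=int)
    next_label = 0
    for i in range(n):
        if labels[i] == -1:
            labels[i] = next_label
            next_label += 1
        for j in range(i + 1, n):
            d = np.linalg.norm(positions[i] - positions[j])
            corr = np.corrcoef(phases[i], phases[j])[0, 1]
            if d <= d_max and corr >= corr_min and labels[j] == -1:
                labels[j] = labels[i]
    return labels

pos = np.array([[0.0, 0.0], [1.0, 0.0], [10.0, 10.0]])           # locations
ph = np.array([[0.1, 0.2, 0.3], [0.1, 0.2, 0.31], [0.5, 0.1, 0.4]])  # phases
labels = cluster_stable_points(pos, ph)
```

Requiring both criteria at once is the key idea: nearby points with uncorrelated phases (likely different scatterers) stay in separate clusters.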
Intensity data visualization
Techniques for coloring a point cloud based on colors derived from LIDAR (light detection and ranging) intensity data are disclosed. In some embodiments, the coloring of the point cloud may employ an activation function that controls the colors assigned to different intensity values. Further, the activation function may be parameterized based on statistics computed for a distribution of intensities associated with a 3D scene and a user-selected sensitivity. Alternatively, a Fourier transform of the distribution of intensities or a clustering of the intensities may be used to estimate individual distributions associated with different materials, based on which the point cloud coloring may be determined from intensity data.
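One plausible form of the parameterized activation function, sketched here with a sigmoid centred on the scene's intensity statistics (the abstract does not fix the function's shape, so this is an assumption), maps raw intensities to gray values:

```python
import numpy as np

def intensity_to_gray(intensities, sensitivity=1.0):
    """Map raw LIDAR intensities to [0, 1] gray values with a sigmoid
    activation centred on the distribution's mean, scaled by its standard
    deviation and a user-selected sensitivity."""
    mu, sigma = intensities.mean(), intensities.std()
    z = sensitivity * (intensities - mu) / (sigma + 1e-12)
    return 1.0 / (1.0 + np.exp(-z))

raw = np.array([5.0, 10.0, 15.0, 200.0])     # one bright outlier return
gray = intensity_to_gray(raw, sensitivity=2.0)
```

Because the activation is parameterized by the scene's own statistics, a higher sensitivity steepens the curve and spreads nearby intensities across a wider range of colors, while the outlier saturates instead of compressing the rest of the palette.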
IMAGE PROCESSING METHOD
An image processing apparatus according to the present invention includes: an extracting unit configured to extract a candidate image, which is an image of a candidate region specified in accordance with a preset criterion, from a target image to be a target for an annotation process, and also extract a corresponding image, which is an image of a corresponding region corresponding to the candidate region, from a reference image that is an image corresponding to the target image; a displaying unit configured to display the candidate image and the corresponding image so as to be able to compare the images with each other; and an input accepting unit configured to accept input of input information for the annotation process for the candidate image.
Automated clinical documentation system and method
A method, computer program product, and computing system for visual diarization of an encounter is executed on a computing device and includes obtaining encounter information of a patient encounter. The encounter information is processed to: associate a first portion of the encounter information with a first encounter participant, and associate at least a second portion of the encounter information with at least a second encounter participant. A visual representation of the encounter information is rendered. A first visual representation of the first portion of the encounter information is rendered that is temporally-aligned with the visual representation of the encounter information. At least a second visual representation of the at least a second portion of the encounter information is rendered that is temporally-aligned with the visual representation of the encounter information.
PREDICTING VISIBLE/INFRARED BAND IMAGES USING RADAR REFLECTANCE/BACKSCATTER IMAGES OF A TERRESTRIAL REGION
The present invention relates to a method and apparatus that can predict the visible-infrared band images of a region of the Earth's surface that would be observed by an Earth Observation (EO) satellite or other high-altitude imaging platform, using data from radar reflectance/backscatter of the same region. The method and apparatus can be used to predict images of the Earth's surface in the visible-infrared bands when the view between an imaging instrument and the ground is obscured by cloud or some other medium that is opaque to electromagnetic (EM) radiation in the visible-infrared spectral range, approximately spanning 400-2300 nanometres (nm), but transparent to EM radiation in the radio-/microwave part of the spectrum. Regular, uninterrupted monitoring of the Earth's surface is important for a wide range of applications, from agriculture to defence.