G06T2207/20068

METHOD FOR DETERMINING THE SHORT AXIS IN A LESION REGION IN A THREE DIMENSIONAL MEDICAL IMAGE
20180005403 · 2018-01-04

A short axis in a three-dimensional (3D) image of a lesion is determined starting from the voxels defining the long axis and the voxels in the plane of the long axis. Voxels within the plane of the long axis are projected perpendicularly onto the long axis and receive an identifier indicative of the range on the long axis onto which they are projected. Distances between pairs of points (projected sub-voxels) within the same range and within adjacent ranges are then evaluated to determine the longest such distance, which defines the short axis.
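
As an illustration, a minimal NumPy sketch of this search, assuming the long axis is given by its two endpoint voxels `p0` and `p1` and the in-plane voxels as an (N, 3) array; the number of ranges (`n_bins`) is an arbitrary choice, not specified by the abstract:

```python
import numpy as np

def short_axis_length(points, p0, p1, n_bins=32):
    # Normalized position of each in-plane point along the long axis p0 -> p1.
    axis = p1 - p0
    t = (points - p0) @ axis / (axis @ axis)
    # Identifier of the range on the long axis each point projects onto.
    bins = np.clip((t * n_bins).astype(int), 0, n_bins - 1)
    best = 0.0
    for b in range(n_bins):
        idx = np.flatnonzero((bins == b) | (bins == b + 1))  # same + adjacent range
        if len(idx) < 2:
            continue
        sub = points[idx]
        d = np.linalg.norm(sub[:, None, :] - sub[None, :, :], axis=-1)
        best = max(best, float(d.max()))
    return best
```

Restricting the pairwise-distance evaluation to points in the same and adjacent ranges keeps the search local instead of quadratic over all in-plane voxels.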

DATA PROCESSING SYSTEMS
20180005351 · 2018-01-04

In a data processing system, an input data array to be downscaled is split along its horizontal extent into plural parts. Each part of the input data array is provided to a respective scaler of the data processing system and downscaled by that scaler to produce a downscaled output part. The plural downscaled output parts are then combined (merged) to provide the desired downscaled output data array.
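
A minimal sketch of the split-downscale-merge flow, with a simple box-filter average standing in for each hardware scaler (the actual scaling filter is not specified by the abstract):

```python
import numpy as np

def box_downscale(part, f):
    # Stand-in for one hardware scaler: average each f x f block.
    h, w = part.shape
    return part.reshape(h // f, f, w // f, f).mean(axis=(1, 3))

def downscale_split_merge(array, f, n_scalers=2):
    # Assumes height divisible by f and width divisible by n_scalers * f.
    parts = np.split(array, n_scalers, axis=1)   # split along the horizontal extent
    outs = [box_downscale(p, f) for p in parts]  # each part goes to its own scaler
    return np.concatenate(outs, axis=1)          # merge the downscaled output parts
```

In practice the parts would typically overlap by the filter support so that the merged output has no seam at the part boundaries; the sketch ignores this for brevity.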

LANE EXTRACTION METHOD USING PROJECTION TRANSFORMATION OF THREE-DIMENSIONAL POINT CLOUD MAP
20230005278 · 2023-01-05

A lane extraction method uses projection transformation of a 3D point cloud map. The amount of computation required to extract the coordinates of a lane is reduced by performing deep learning and lane extraction in a two-dimensional (2D) domain, so that lane information is obtained in real time. In addition, black-and-white brightness, the most important information for lane extraction in an image, is replaced with the reflection intensity of a light detection and ranging (LiDAR) sensor, so that a deep learning model capable of accurately extracting a lane is provided. Reliability and competitiveness are thereby enhanced in the fields of autonomous driving, road recognition, lane recognition, and HD road maps for autonomous driving, as well as in similar or related fields, and more particularly in road recognition and autonomous driving using LiDAR.
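
A minimal sketch of the projection-transformation step, assuming an orthographic bird's-eye-view raster in which LiDAR reflection intensity takes the place of black-and-white brightness; the ranges and resolution are illustrative:

```python
import numpy as np

def lidar_to_bev_intensity(points, intensity, x_range=(0.0, 80.0),
                           y_range=(-40.0, 40.0), res=0.1):
    # Rasterize (x, y) into a top-down grid; pixel brightness is the
    # LiDAR reflection intensity instead of camera black-and-white brightness.
    h = int((x_range[1] - x_range[0]) / res)
    w = int((y_range[1] - y_range[0]) / res)
    img = np.zeros((h, w), dtype=np.float32)
    r = ((points[:, 0] - x_range[0]) / res).astype(int)
    c = ((points[:, 1] - y_range[0]) / res).astype(int)
    ok = (r >= 0) & (r < h) & (c >= 0) & (c < w)
    np.maximum.at(img, (r[ok], c[ok]), intensity[ok])  # keep the brightest return
    return img  # feed to a 2D lane-segmentation network as an ordinary image
```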

Z-PLANE IDENTIFICATION AND BOX DIMENSIONING USING THREE-DIMENSIONAL TIME-OF-FLIGHT IMAGING
20230228883 · 2023-07-20

A sensor system that obtains and processes time-of-flight (TOF) data is provided. A TOF sensor obtains raw data describing various surfaces. A processor applies an averaging filter to the raw data to smooth it, increasing the signal-to-noise ratio (SNR) of flat surfaces represented in the raw data; performs a depth compute process on the filtered raw data to generate distance data; generates a point cloud based on the distance data; and identifies the Z-planes in the point cloud.
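
A hedged sketch of this pipeline using NumPy and SciPy, with a pinhole back-projection and a depth-histogram peak search standing in for the depth compute and Z-plane identification steps (the abstract does not specify either algorithm):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def find_z_planes(raw_depth, fx, fy, cx, cy, bin_mm=10.0):
    # Averaging filter: raises the SNR of flat surfaces in the raw TOF data.
    depth = uniform_filter(raw_depth.astype(np.float32), size=5)
    # Depth compute / point cloud: pinhole back-projection of every pixel.
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    cloud = np.stack([(u - cx) * depth / fx,
                      (v - cy) * depth / fy,
                      depth], axis=-1).reshape(-1, 3)
    # Z-plane identification: peaks in the histogram of z values mark
    # depths at which many points lie on a fronto-parallel plane.
    zp = cloud[:, 2][cloud[:, 2] > 0]
    hist, edges = np.histogram(
        zp, bins=np.arange(zp.min(), zp.max() + 2 * bin_mm, bin_mm))
    peaks = np.flatnonzero(hist > hist.mean() + 3 * hist.std())
    return [(edges[i], edges[i + 1]) for i in peaks]
```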

Partial point cloud-based pedestrians' velocity estimation method

A method, apparatus, and system for estimating the moving speed of a detected pedestrian at an autonomous driving vehicle (ADV) are disclosed. A pedestrian is detected in a plurality of frames of point clouds generated by a LIDAR device installed at the ADV. In each of at least two of the frames, a minimum bounding box is generated that encloses the points corresponding to the pedestrian, excluding the points corresponding to the pedestrian's limbs. The moving speed of the pedestrian is estimated based at least in part on the minimum bounding boxes across those frames. A trajectory for the ADV is planned based at least on the moving speed of the pedestrian. Thereafter, control signals are generated to drive the ADV based on the planned trajectory.
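
A minimal sketch of the speed estimate, assuming per-frame (N, 3) pedestrian point arrays with z up; the torso height band used to exclude limb points is a crude stand-in, since the abstract does not say how limb points are identified:

```python
import numpy as np

def estimate_pedestrian_speed(frames, timestamps, torso_band=(0.5, 1.4)):
    centers = []
    for pts in frames:  # pts: (N, 3) points for the pedestrian in one frame
        # Keep a torso height band as a crude proxy for excluding limb points.
        core = pts[(pts[:, 2] > torso_band[0]) & (pts[:, 2] < torso_band[1])]
        lo, hi = core.min(axis=0), core.max(axis=0)  # minimum bounding box
        centers.append((lo + hi) / 2.0)
    centers = np.asarray(centers)
    # Speed from the motion of the box centers in the ground plane.
    disp = np.linalg.norm(np.diff(centers[:, :2], axis=0), axis=1)
    dt = np.diff(np.asarray(timestamps, dtype=float))
    return float(np.mean(disp / dt))
```

Excluding limb points keeps the box center from oscillating with arm and leg swing, which would otherwise bias the frame-to-frame displacement.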

METHOD FOR RECONSTRUCTION OF A FEATURE IN AN ENVIRONMENTAL SCENE OF A ROAD
20220398856 · 2022-12-15

In a method for reconstruction of a feature in an environmental scene of a road, a 3D point cloud of the scene and a sequence of 2D images of the scene are generated. A subset of candidate 3D points of the 3D point cloud is identified by projecting the 3D points into each of the 2D images, determining candidate 3D points representing the feature by semantic segmentation in each of the images, projecting the candidate 3D points onto the plane of the road in each of the 2D images, and selecting those candidates that stay within a projection range on the road in each of the 2D images. The selected candidate 3D points are merged to determine estimated locations of the feature. The feature can then be modeled by generating a fitting curve along the estimated locations.
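
A minimal sketch of the candidate-selection step, assuming 3x4 projection matrices and boolean segmentation masks per image; the quadratic fitting curve at the end is an illustrative choice, not specified by the abstract:

```python
import numpy as np

def select_feature_points(cloud, cameras, masks):
    # A 3D point survives only if it projects into the feature's
    # segmentation mask in every 2D image.
    keep = np.ones(len(cloud), dtype=bool)
    pts_h = np.hstack([cloud, np.ones((len(cloud), 1))])  # homogeneous coords
    for P, mask in zip(cameras, masks):                    # P: 3x4 projection matrix
        uvw = pts_h @ P.T
        front = uvw[:, 2] > 1e-6                           # in front of the camera
        u = np.zeros(len(cloud), dtype=int)
        v = np.zeros(len(cloud), dtype=int)
        u[front] = (uvw[front, 0] / uvw[front, 2]).astype(int)
        v[front] = (uvw[front, 1] / uvw[front, 2]).astype(int)
        inside = (front & (u >= 0) & (u < mask.shape[1])
                  & (v >= 0) & (v < mask.shape[0]))
        hit = np.zeros(len(cloud), dtype=bool)
        hit[inside] = mask[v[inside], u[inside]]
        keep &= hit
    selected = cloud[keep]
    # Model the feature with a fitting curve along the estimated locations
    # (a quadratic in x is an illustrative choice).
    coeffs = np.polyfit(selected[:, 0], selected[:, 1], deg=2)
    return selected, coeffs
```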

METHODS FOR GENERATING THREE-DIMENSIONAL IMAGE DATA OF HUMAN BONES
20220392149 · 2022-12-08

The present invention relates to a method for generating a three-dimensional image from two-dimensional images and, more specifically, a method for generating a three-dimensional image of human bones from two 2D planar images thereof. The method comprises the steps of: providing a first X-ray planar image and a second X-ray planar image; predicting one set of posture parameters for each of the first X-ray planar image and the second X-ray planar image; and generating the data of a stereoscopic image according to the first X-ray planar image, the second X-ray planar image, and the predicted posture parameters. The present invention also relates to a method for training an artificial intelligence to perform the three-dimensional image generation described above.
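
A hedged PyTorch sketch of the posture-prediction step, assuming six posture parameters (three rotations, three translations) per image; the network shape and parameter count are illustrative only:

```python
import torch
import torch.nn as nn

class PosturePredictor(nn.Module):
    # Predicts one set of posture parameters from one X-ray planar image.
    def __init__(self, n_params=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, n_params)

    def forward(self, xray):
        return self.head(self.features(xray).flatten(1))

model = PosturePredictor()
params_1 = model(torch.randn(1, 1, 256, 256))  # first X-ray planar image
params_2 = model(torch.randn(1, 1, 256, 256))  # second X-ray planar image
# Both images plus the two predicted parameter sets then drive the
# stereoscopic-image generator (not shown).
```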

Point cloud feature-based obstacle filter system

A method, apparatus, and system for filtering obstacle candidates determined based on outputs of a LIDAR device in an autonomous vehicle are disclosed. A point cloud comprising a plurality of points is generated based on the outputs of the LIDAR device. One or more obstacle candidates are determined based on the point cloud. The obstacle candidates are filtered to remove a first set of candidates that correspond to noise, based at least in part on characteristics associated with the points that correspond to each of the obstacle candidates. One or more recognized obstacles, comprising the obstacle candidates that have not been removed, are determined. Operations of the autonomous vehicle are then controlled based on the recognized obstacles.
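
A minimal sketch of the filtering step; the characteristics tested here (point count, height span) and their thresholds are illustrative, since the abstract does not name the exact features:

```python
import numpy as np

def filter_obstacle_candidates(candidates, min_points=5, min_height=0.05):
    recognized = []
    for pts in candidates:        # pts: (N, 3) points of one obstacle candidate
        if len(pts) < min_points:           # too few LIDAR returns: likely noise
            continue
        if np.ptp(pts[:, 2]) < min_height:  # negligible height span: likely noise
            continue
        recognized.append(pts)
    return recognized  # recognized obstacles used to control the vehicle
```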

ELECTRONIC APPARATUS AND CONTROLLING METHOD THEREOF
20220343541 · 2022-10-27

An electronic apparatus is disclosed. The electronic apparatus includes a memory storing a first pattern image and a second pattern image, a communication interface comprising communication circuitry configured to communicate with an external terminal apparatus, a projection part including a projection lens, and a processor. The processor is configured to: control the projection part to project the first pattern image onto a screen member comprising a reflector located on a projection surface; based on receiving, from the external terminal apparatus through the communication interface, a first photographed image of the screen member, acquire transformation information based on the first photographed image and the first pattern image; control the projection part to project the second pattern image onto the projection surface; and, based on receiving, from the external terminal apparatus through the communication interface, a second photographed image of the projection surface, perform color calibration reflecting a characteristic of the projection surface, based on the second photographed image, the second pattern image, and the transformation information.
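
A minimal sketch of the color-calibration step, assuming corresponding patch colors have already been extracted from the second pattern image and from the second photographed image after warping it with the transformation information; fitting a 3x3 mixing matrix is one common choice, not necessarily the patent's:

```python
import numpy as np

def color_correction_matrix(projected_rgb, observed_rgb):
    # Least-squares fit of M such that observed_rgb @ M ~= projected_rgb,
    # compensating the tint the projection surface adds.
    M, *_ = np.linalg.lstsq(observed_rgb, projected_rgb, rcond=None)
    return M

# Hypothetical sampled patch colors (rows are RGB triples in [0, 1]):
proj = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]], dtype=float)
tint = np.array([[0.8, 0.05, 0.0], [0.1, 0.7, 0.05], [0.0, 0.1, 0.9]])
obs = proj @ tint                  # what the camera sees on the tinted surface
M = color_correction_matrix(proj, obs)
corrected = obs @ M                # ~= proj: surface characteristic compensated
```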

6D POSE AND SHAPE ESTIMATION METHOD

A computer-implemented method of estimating a 6D pose and shape of one or more objects from a 2D image comprises the steps of: detecting, within the 2D image, one or more 2D regions of interest, each 2D region of interest containing a corresponding object among the one or more objects; cropping out a corresponding pixel value array, coordinate tensor, and feature map for each 2D region of interest; concatenating the corresponding pixel value array, coordinate tensor, and feature map for each 2D region of interest; and inferring, for each 2D region of interest, a 4D quaternion describing a rotation of the corresponding object in the 3D rotation group, a 2D centroid, which is a projection of a 3D translation of the corresponding object onto the plane of the 2D image given a camera matrix associated with the 2D image, a distance from a viewpoint of the 2D image to the corresponding object, a size, and a class-specific latent shape vector of the corresponding object.
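
A minimal sketch of recovering the 3D translation from two of the inferred quantities, the 2D centroid and the viewpoint distance, given the camera matrix; the intrinsics and values are hypothetical:

```python
import numpy as np

def recover_translation(centroid_uv, distance, K):
    # Back-project the 2D centroid through the camera matrix K and scale
    # the unit ray to the inferred viewpoint distance.
    ray = np.linalg.inv(K) @ np.array([centroid_uv[0], centroid_uv[1], 1.0])
    return distance * ray / np.linalg.norm(ray)

K = np.array([[600.0, 0.0, 320.0],
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])
t = recover_translation((350.0, 260.0), 2.5, K)  # hypothetical centroid/distance
```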