G06T2207/20068

AUTOMATIC FOCUSING PROJECTION METHOD AND SYSTEM
20220210381 · 2022-06-30 ·

Embodiments of the present disclosure relate to the technical field of digital projection and display, and in particular, relate to an automatic focusing projection method and system. The embodiments provide an automatic focusing projection method, which is applicable to an automatic focusing projection system. The automatic focusing projection system includes a ranging unit, a projection unit, and a reflection unit. The method includes: acquiring a depth image from the ranging unit, and acquiring a vertical projection distance from the ranging unit to a projection plane based on the depth image; acquiring position information of a center point of a projection picture in the depth image based on an elevation angle of the reflection unit and the vertical projection distance; acquiring a projection distance between the projection unit and the projection picture based on the position information; and performing focus adjustment on the projection unit based on the projection distance.
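The geometric core of this method, turning the vertical projection distance and the reflection unit's elevation angle into a focus setting, can be sketched as follows. This is an illustrative simplification in Python: the right-triangle geometry, the thin-lens focus model, and the 20 mm focal length are assumptions, not details taken from the abstract.

```python
import math

def projection_distance(vertical_distance_m, elevation_angle_deg):
    """Distance from the projection unit to the picture centre, assuming
    the reflected beam leaves at `elevation_angle_deg` away from the
    normal of the projection plane, so the perpendicular distance is the
    adjacent side of a right triangle and the beam path is the hypotenuse."""
    theta = math.radians(elevation_angle_deg)
    return vertical_distance_m / math.cos(theta)

def focus_image_distance(object_distance_m, focal_length_m=0.02):
    """Thin-lens relation 1/f = 1/d_o + 1/d_i, solved for the image
    distance d_i that the focusing motor should set."""
    return 1.0 / (1.0 / focal_length_m - 1.0 / object_distance_m)
```

For example, with a 1 m perpendicular distance and a 60-degree tilt, the beam path to the picture centre is 2 m, and the focus position follows from the lens equation.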

Image handling and display in x-ray mammography and tomosynthesis

A method and system for acquiring, processing, storing, and displaying x-ray mammograms Mp, tomosynthesis images Tr representative of breast slices, and x-ray tomosynthesis projection images Tp taken at different angles to a breast, where the Tr images are reconstructed from the Tp images.

OBSTACLE RECOGNITION METHOD AND APPARATUS, COMPUTER DEVICE, AND STORAGE MEDIUM
20220198808 · 2022-06-23 ·

An obstacle recognition method and apparatus, a computer device, and a storage medium are provided. The method comprises: acquiring point cloud data scanned by LiDAR and time-sequence pose information of a vehicle; determining a spliced bird's-eye-view image according to the point cloud data, the time-sequence pose information, and a historical frame embedded image; inputting the spliced image into a preset first CNN model to obtain a current frame embedded image and pixel-level information of the bird's-eye view; and determining recognition information of at least one obstacle according to the current frame embedded image and the pixel-level information.
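The first rasterization step, projecting the LiDAR point cloud into a bird's-eye-view image, can be sketched as a simple occupancy grid. The grid extents and the 0.5 m resolution below are illustrative choices; the abstract's splicing with pose history and the historical frame embedded image is omitted.

```python
import numpy as np

def bev_occupancy(points, x_range=(0.0, 40.0), y_range=(-20.0, 20.0), res=0.5):
    """Project LiDAR points (N, 3) onto a 2D bird's-eye-view occupancy
    grid: a cell is 1.0 if at least one point falls into it, else 0.0.
    Extents/resolution are hypothetical, not from the patent."""
    h = int((x_range[1] - x_range[0]) / res)
    w = int((y_range[1] - y_range[0]) / res)
    grid = np.zeros((h, w), dtype=np.float32)
    xs = ((points[:, 0] - x_range[0]) / res).astype(int)
    ys = ((points[:, 1] - y_range[0]) / res).astype(int)
    keep = (xs >= 0) & (xs < h) & (ys >= 0) & (ys < w)
    grid[xs[keep], ys[keep]] = 1.0
    return grid
```

In practice such a grid (often with height and intensity channels) is what gets stacked with the historical embedded image before being fed to the CNN.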

METHOD, DEVICE, SERVER AND SYSTEM FOR CALIBRATING AT LEAST ONE CAMERA OF A DRIVER ASSISTANCE SYSTEM
20220191468 · 2022-06-16 ·

Disclosed herein is a method for calibrating at least one camera of a driver assistance system, comprising the steps of: displaying a calibration pattern on a display device, and transmitting a control command to the camera to capture the calibration pattern shown by the display device. Further disclosed are a method, a device, a server, and a system (100) for calibrating at least one camera of a driver assistance system.

IMAGE CORRECTION FOR OPHTHALMIC IMAGES

Generating a correction algorithm includes obtaining a model of an eye, the model including a front portion with optics to mimic a cornea and a lens of a human eye, and a rear portion having a generally hemispherical-shaped body to mimic a retina of the human eye. The rear portion includes physical reference lines on an inside surface of the generally hemispherical-shaped body. Images of the model are captured using an image capturing device aimed at the model. Vertices of the physical reference lines are identified in the captured images according to a given projection technique for displaying the generally hemispherical-shaped body in a two-dimensional image. Idealized placement of the vertices of the physical reference lines is obtained according to the given projection technique. The result is a correction algorithm used to adjust any pixel of an image of an actual eye along the x-axis and the y-axis.
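In the simplest case, matching the observed vertices against their idealized placement reduces to fitting a coordinate correction. The affine model below is an illustrative stand-in for the correction algorithm; the abstract does not specify the functional form of the per-pixel x/y adjustment.

```python
import numpy as np

def fit_correction(observed, ideal):
    """Least-squares fit of an affine map sending observed vertex
    positions (n, 2) to their idealized positions (n, 2).
    Returns a (3, 2) coefficient matrix."""
    observed = np.asarray(observed, dtype=float)
    A = np.hstack([observed, np.ones((len(observed), 1))])
    coef, *_ = np.linalg.lstsq(A, np.asarray(ideal, dtype=float), rcond=None)
    return coef

def apply_correction(coef, pixels):
    """Adjust arbitrary pixel coordinates (n, 2) with the fitted map."""
    pixels = np.asarray(pixels, dtype=float)
    A = np.hstack([pixels, np.ones((len(pixels), 1))])
    return A @ coef
```

Once fitted from the model-eye vertices, the same map can be applied to every pixel of an image of an actual eye.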

APPARATUS AND METHOD FOR MEASURING THE GAP
20220170734 · 2022-06-02 ·

Provided is a gap measuring method capable of accurately measuring a gap between adjacent members. The gap measuring method includes: preparing equipment to be measured including a first member and a second member; obtaining a first image by capturing the equipment to be measured using a photographing module at a first setting while irradiating the equipment to be measured with a laser, wherein the first image includes a first laser line image corresponding to a surface of the first member and a second laser line image corresponding to a surface of the second member; changing the photographing module from the first setting to a second setting based on a maximum width of the second laser line image in the first image; obtaining a second image by capturing the equipment to be measured using the photographing module at the second setting while irradiating the equipment to be measured with a laser; and calculating the gap between the first member and the second member based on a discontinuous region between the first laser line image and the second laser line image in the second image.
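Once both laser line images have been located, the final calculation reduces to measuring the unlit span between them. The sketch below assumes the two lines have already been segmented into pixel-column indices along the scan direction and that a pixel-to-millimetre scale is known; neither detail is given in the abstract.

```python
def gap_from_laser_lines(line1_cols, line2_cols, mm_per_pixel):
    """Estimate the gap between two members from the discontinuous
    region between their laser line images: the count of unlit pixel
    columns between the end of the first line and the start of the
    second, scaled to millimetres (illustrative simplification)."""
    gap_pixels = min(line2_cols) - max(line1_cols) - 1
    return max(gap_pixels, 0) * mm_per_pixel
```

For example, if the first line ends at column 99 and the second starts at column 110 with a 0.05 mm/pixel scale, the gap is 0.5 mm.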

METHOD FOR CONSTRUCTING THREE-DIMENSIONAL MODEL OF TARGET OBJECT AND RELATED APPARATUS
20220165031 · 2022-05-26 ·

A method for constructing a 3D model of a target object is provided, performed by a computer device. The method includes: obtaining at least two initial images of a target object from a plurality of shooting angles, each of the at least two initial images including depth information of the target object, the depth information indicating distances between a plurality of points of the target object and a reference position; obtaining first point cloud information corresponding to each of the at least two initial images according to the depth information in the at least two initial images; fusing the first point cloud information respectively corresponding to the at least two initial images into second point cloud information; and constructing a 3D model of the target object according to the second point cloud information.
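The depth-to-point-cloud and fusion steps can be sketched with a pinhole back-projection. The intrinsics (fx, fy, cx, cy) and the 4x4 camera-to-world poses are assumed inputs; the abstract specifies neither the camera model nor the fusion strategy.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (H, W) into camera-frame 3D points
    (H*W, 3) using a pinhole model."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

def fuse(point_sets, poses):
    """Fuse per-view point clouds into one cloud by applying each view's
    4x4 camera-to-world pose, then concatenating (no deduplication)."""
    fused = []
    for pts, T in zip(point_sets, poses):
        homo = np.hstack([pts, np.ones((len(pts), 1))])
        fused.append((homo @ T.T)[:, :3])
    return np.vstack(fused)
```

A real pipeline would additionally register the views (e.g. via ICP) and filter duplicates before meshing the fused cloud.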

INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM
20220165066 · 2022-05-26 ·

An information processing apparatus includes an input device and a controller. A captured image that is captured by a camera is input to the input device, the captured image including distance information for each pixel. The controller generates a transformed captured image obtained by transforming pairs of coordinates for respective pixels of the captured image on the basis of an amount of movement of the camera or a mobile body on which the camera is mounted. Further, the controller associates a pair of coordinates for a pixel of the transformed captured image with a pair of coordinates for a pixel of a post-movement captured image captured at a position of the camera after the movement, and the controller identifies a non-associated pixel that is included in the post-movement captured image and is not associated with the pixel of the transformed captured image.
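The transform-and-compare step can be illustrated with a toy forward warp. As assumptions not taken from the abstract: an integer per-pixel displacement field stands in for the motion-derived coordinate transform, and the returned mask marks post-movement pixels that received no source pixel, i.e. the non-associated pixels (such as regions newly revealed by the motion).

```python
import numpy as np

def non_associated_pixels(shape, flow):
    """Warp every pixel of the pre-movement image by an integer flow
    field `flow` (H, W, 2) holding (dy, dx) per pixel, and return a
    boolean mask of post-movement pixels with no associated source."""
    h, w = shape
    hit = np.zeros(shape, dtype=bool)
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    ty = ys + flow[..., 0]
    tx = xs + flow[..., 1]
    ok = (ty >= 0) & (ty < h) & (tx >= 0) & (tx < w)
    hit[ty[ok], tx[ok]] = True
    return ~hit
```

With a uniform one-pixel rightward motion, for instance, the leftmost column of the post-movement image has no associated source pixel.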

ORTHOPHOTO MAP GENERATION METHOD BASED ON PANORAMIC MAP

Disclosed is an orthophoto map generation method based on a panoramic map, the method comprising: overlapping and fusing a panoramic map with the panoramic depth data captured when the panoramic map is photographed, such that each pixel of the panoramic map has a corresponding depth value; acquiring the pitch angle and azimuth angle of a reference pixel relative to the photographing device and, in conjunction with the 360-degree coverage of a panorama, calculating the azimuth angle and pitch angle, relative to the camera, of the object represented by each pixel; and, in conjunction with these calculation results and the geographic coordinates of the camera, determined by a positioning device when the panoramic map is photographed, calculating the geographic position corresponding to the object represented by each pixel to form point cloud data.
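The per-pixel angle and position computation can be sketched for the common equirectangular panorama layout. The linear column-to-azimuth and row-to-pitch mapping, and the local east-north-up frame, are illustrative assumptions standing in for the patent's geographic-coordinate step.

```python
import math

def pixel_angles(u, v, width, height):
    """Equirectangular panorama: column u maps linearly to azimuth
    (0..360 deg); row v maps to pitch (+90 deg at the top row,
    -90 deg at the bottom)."""
    azimuth = u / width * 360.0
    pitch = 90.0 - v / height * 180.0
    return azimuth, pitch

def pixel_to_position(camera_xyz, azimuth_deg, pitch_deg, depth):
    """Scale the pixel's viewing ray by its depth value to get a point
    in a local east-north-up frame centred on the camera."""
    az, pt = math.radians(azimuth_deg), math.radians(pitch_deg)
    east = depth * math.cos(pt) * math.sin(az)
    north = depth * math.cos(pt) * math.cos(az)
    up = depth * math.sin(pt)
    x, y, z = camera_xyz
    return (x + east, y + north, z + up)
```

Applying this to every pixel, with the camera position from the positioning device, yields the point cloud from which the orthophoto is rasterized.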

METHOD AND DEVICE FOR CHARACTERIZING AT LEAST ONE OBJECT DEPICTED IN AN ULTRASOUND IMAGE

Disclosed are a method and a device for characterizing, for example identifying, at least one object depicted in a raster image (1), or determining the speed of sound of the object, the raster image (1) having pixel rows and pixel columns. In order to efficiently and accurately characterize the object, the invention provides that several pixel columns (Cn) are selected and each of the selected pixel columns (Cn) is converted into a line profile (L), the amplitude of the line profile (L) representing the value (V) of image information of selected pixels of the respective selected pixel column (Cn), wherein the method comprises determining characteristics of the line profiles (L) and using the characteristics to characterize the at least one object depicted in the raster image (1).
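The column-to-line-profile conversion can be sketched directly; the peak characteristic below is one illustrative choice, since the abstract leaves the exact profile characteristics open.

```python
import numpy as np

def column_line_profiles(image, cols):
    """Convert each selected pixel column of a raster image (H, W) into
    a line profile: the profile amplitude at each row is simply the
    pixel value down that column."""
    return {c: image[:, c].astype(float) for c in cols}

def profile_peak(profile):
    """One simple per-profile characteristic: the row index and
    amplitude of the strongest response (e.g. the brightest echo)."""
    i = int(np.argmax(profile))
    return i, float(profile[i])
```

Comparing such characteristics across the selected columns (peak depth, amplitude, profile shape) is what allows the object to be characterized without processing the full image.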