G06T3/06

Determining a two-dimensional mammography dataset
10702233 · 2020-07-07

A method is provided for determining a two-dimensional mammography dataset. The method includes receiving a three-dimensional mammography dataset of an examination region via an interface. The method furthermore includes a first determination of a two-dimensional mammography dataset of the examination region by applying a trained generator function to the three-dimensional mammography dataset via a processing unit, wherein the trained generator function is based on a trained GA (generative adversarial) network. With this method, two-dimensional mammography datasets can be created efficiently that are visually similar to real two-dimensional mammography datasets and can therefore be appraised with standardized methods.
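The role of the trained generator function can be pictured with a toy stand-in. The sketch below is an assumption for illustration only, not the patented GAN: it collapses a 3D volume to a 2D image with learned per-slice weights, which is the input/output shape the generator function has in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

def generator(volume, depth_weights):
    """Toy stand-in for the trained generator function: collapse a 3D
    volume of shape (D, H, W) to a 2D image of shape (H, W) using
    learned per-slice weights."""
    w = np.asarray(depth_weights, dtype=float)
    w = w / w.sum()                        # normalize the learned weights
    return np.tensordot(w, volume, axes=([0], [0]))

volume = rng.random((8, 64, 64))           # toy 3D mammography dataset
weights = rng.random(8)                    # stands in for trained parameters
image = generator(volume, weights)         # resulting 2D mammography dataset
print(image.shape)                         # (64, 64)
```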

Laser point cloud positioning method and system

The present disclosure provides a laser point cloud positioning method and system. The method comprises: converting laser point cloud reflection-value data and height-value data matched with a current location of an autonomous vehicle into laser point cloud projection data in a ground plane; assigning weights to a reflection-value matching probability and a height-value matching probability between the laser point cloud projection data and a laser point cloud two-dimensional grid map, and determining a matching probability of the laser point cloud projection data and the laser point cloud two-dimensional grid map; and determining a location of the autonomous vehicle in the laser point cloud two-dimensional grid map based on that matching probability. The present disclosure solves the prior-art problem that, when the laser point cloud is matched with the map, an undesirable matching effect results from considering only the reflection-value matching or only the height-value matching, or from simply superimposing the two. The present disclosure can improve laser point cloud positioning precision and enhance the robustness of the positioning system.
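The weighted fusion step can be sketched as follows. The linear combination and all names below are illustrative assumptions, not the patented formula: each candidate map offset has a reflection-value and a height-value matching probability, and a weight balances the two cues before the best offset is chosen.

```python
import numpy as np

def match_probability(p_reflect, p_height, alpha):
    """Weighted fusion of the reflection-value and height-value matching
    probabilities; alpha is the reflection weight, (1 - alpha) the
    height weight. A hypothetical linear form for illustration."""
    return alpha * p_reflect + (1.0 - alpha) * p_height

# Toy 2x2 grid of candidate offsets: one probability map per cue.
p_reflect = np.array([[0.2, 0.9], [0.4, 0.1]])
p_height  = np.array([[0.3, 0.6], [0.8, 0.2]])

fused = match_probability(p_reflect, p_height, alpha=0.7)
best = np.unravel_index(np.argmax(fused), fused.shape)
print(best)   # offset with the highest fused matching probability
```

Weighting the two cues (rather than using either alone or adding them unweighted) is exactly the deficiency of the prior art that the abstract targets.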

METHOD AND APPARATUS FOR GENERATING PROJECTION-BASED FRAME WITH 360-DEGREE IMAGE CONTENT REPRESENTED BY TRIANGULAR PROJECTION FACES ASSEMBLED IN TRIANGLE-BASED PROJECTION LAYOUT

A projection-based frame is generated according to an omnidirectional video frame and a triangle-based projection layout. The projection-based frame has 360-degree image content represented by triangular projection faces assembled in the triangle-based projection layout. The 360-degree image content of a viewing sphere is mapped onto the triangular projection faces via a triangle-based projection of the viewing sphere. One side of a first triangular projection face is in contact with one side of a second triangular projection face, and one side of a third triangular projection face is in contact with another side of the second triangular projection face. One image content continuity boundary exists between the side of the first triangular projection face and the side of the second triangular projection face, and another image content continuity boundary exists between the side of the third triangular projection face and the other side of the second triangular projection face.
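One concrete triangle-based projection is an octahedral one, where each octant of the viewing sphere maps to one triangular face. The sketch below uses that as an illustrative stand-in; the patent's exact layout and face count are not specified here, so this is an assumption.

```python
def face_index(direction):
    """Map a direction on the viewing sphere to one of the eight
    triangular faces of an octahedron. The three sign bits of the
    octant select the face index 0..7 -- an illustrative triangle-based
    projection, not necessarily the patented layout."""
    x, y, z = direction
    return (x < 0) * 4 + (y < 0) * 2 + (z < 0) * 1

print(face_index((0.5, 0.5, 0.7)))    # 0: the all-positive octant
print(face_index((-0.5, 0.5, -0.7)))  # 5
```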

IMAGE CAPTURING METHOD AND DEVICE, CAPTURING APPARATUS AND COMPUTER STORAGE MEDIUM
20200213513 · 2020-07-02

Embodiments of the present disclosure provide an image capturing method and device. The image capturing method includes: performing image capturing to generate first images corresponding to preset regions when a capturing apparatus is moved to preset capturing angles; stitching the first images corresponding to the preset regions to generate a spherical image; and projecting the spherical image onto a preset reference plane to generate a second image.
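The final projection step can be sketched with a gnomonic projection, which maps a point on the spherical image onto a tangent reference plane. This is one plausible choice of "preset reference plane" projection; the abstract does not fix which projection is used.

```python
import math

def sphere_to_plane(lon, lat):
    """Gnomonic projection of a spherical-image point (longitude,
    latitude in radians) onto the plane tangent at (lon=0, lat=0) --
    an illustrative choice for the 'preset reference plane' step."""
    cos_c = math.cos(lat) * math.cos(lon)   # cosine of angular distance
    x = math.cos(lat) * math.sin(lon) / cos_c
    y = math.sin(lat) / cos_c
    return x, y

print(sphere_to_plane(0.0, 0.0))            # (0.0, 0.0) at the tangent point
```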

IMAGE SIGNAL PROCESSOR FOR PROCESSING IMAGES
20200211229 · 2020-07-02

Techniques are provided for using one or more machine learning systems to process input data including image data. The input data including the image data can be obtained, and at least one machine learning system can be applied to at least a portion of the image data to determine at least one color component value for one or more pixels of at least the portion of the image data. Based on application of the at least one machine learning system to at least the portion of the image data, output image data for a frame of output image data can be generated. The output image data includes at least one color component value for one or more pixels of the frame of output image data. Application of the at least one machine learning system causes the output image data to have a reduced dimensionality relative to the input data.
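The dimensionality-reducing behavior can be illustrated with a minimal stand-in for the machine learning system: a single learned linear map from a 2x2 raw (Bayer-like) neighborhood to the three color component values of one output pixel. The linear form and all names are assumptions; the patent's actual system is a trained model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for trained parameters of the machine learning system.
W = rng.random((3, 4))

def process_pixel(raw_patch):
    """Predict the three color component values for one output pixel
    from a flattened 2x2 neighborhood of the input image data."""
    return W @ raw_patch.reshape(4)

raw = rng.random((2, 2))      # 4 input samples for this pixel
rgb = process_pixel(raw)      # 3 output color components: reduced dimensionality
print(rgb.shape)              # (3,)
```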

Image generation apparatus and image display control apparatus
10699372 · 2020-06-30

Disclosed is an image generation apparatus that generates and outputs a panoramic image obtained by converting, to a planar shape, a projection plane onto which a scene within at least a partial range of a virtual sphere, as viewed from an observation point, is projected. The panoramic image is such that a unit area on the virtual sphere containing a given attention direction, as viewed from the observation point, is converted to a broader area than other unit areas. The image generation apparatus generates the panoramic image corresponding to the projection plane such that the portion of a main line within the panoramic image, which links a position in the attention direction to a position in the direction opposite the attention direction, corresponding to a unit amount of the angle of rotation around the observation point is maximized in length at the position closest to the attention direction.
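The "maximized length per unit angle" property can be illustrated with a simple monotone mapping along the main line whose derivative peaks at the attention direction. The particular mapping theta + a*sin(theta) and the constant a are assumptions for illustration, not the patented function.

```python
import math

def main_line_position(theta, a=0.5):
    """Map the angle theta (radians from the attention direction, along
    the main line) to a position in the panoramic image. Illustrative
    mapping: theta + a*sin(theta), monotone for 0 < a < 1."""
    return theta + a * math.sin(theta)

def length_per_unit_angle(theta, a=0.5):
    """Derivative of the mapping: 1 + a*cos(theta). It is maximal at
    theta = 0, i.e. closest to the attention direction, matching the
    property the abstract describes."""
    return 1.0 + a * math.cos(theta)

print(length_per_unit_angle(0.0))          # 1.5, the maximum
print(length_per_unit_angle(math.pi))      # 0.5, the minimum, opposite side
```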

Embedding 3D information in documents
10688822 · 2020-06-23

Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for obtaining information to be embedded within an identification document. The information is modified to add depth when viewed through a three-dimensional (3D) viewing device, thereby yielding modified information. The modified information is embedded in a target image to yield a modified target image, such that the modified information is not viewable to the naked eye but is visible with added depth when viewed through a 3D viewing device. The modified target image is disposed on an identification document to yield embedded 3D information. An identification document includes the target image and the 3D information embedded within the target image; the 3D information embedded within the target image is not visible to the naked eye, but is visible with added depth when viewed through a 3D viewing device.
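One classic way to add depth that only appears through a 3D viewing device is an anaglyph-style embedding: a faint copy of the hidden pattern in one color channel and a horizontally shifted faint copy in the complementary channels. The sketch below is an illustrative stand-in, not the patented method; the amplitude and shift are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def embed(target_rgb, hidden, shift=2, amp=0.01):
    """Illustrative anaglyph-style embedding: the low amplitude keeps
    the hidden pattern invisible to the naked eye, while the horizontal
    disparity between the red and green/blue copies yields apparent
    depth through red/cyan 3D glasses."""
    out = target_rgb.astype(float).copy()
    out[..., 0] += amp * hidden                          # left-eye (red) copy
    out[..., 1] += amp * np.roll(hidden, shift, axis=1)  # right-eye copies
    out[..., 2] += amp * np.roll(hidden, shift, axis=1)
    return np.clip(out, 0.0, 1.0)

target = rng.random((8, 8, 3))                 # toy target image
hidden = rng.integers(0, 2, size=(8, 8)).astype(float)
modified = embed(target, hidden)
max_change = float(np.abs(modified - target).max())
print(modified.shape)                          # (8, 8, 3)
```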

Generating a three-dimensional model from a scanned object
10679408 · 2020-06-09

The present disclosure is directed toward systems and methods that facilitate scanning an object (e.g., a three-dimensional object) having custom mesh lines thereon and generating a three-dimensional mesh of the object. For example, a three-dimensional modeling system receives a scan of the object including depth information and a two-dimensional texture map of the object. The three-dimensional modeling system further generates an edge map for the two-dimensional texture map and modifies the edge map to generate a two-dimensional mesh including edges, vertices, and faces that correspond to the custom mesh lines on the object. Based on the two-dimensional mesh and the depth information from the scan, the three-dimensional modeling system generates a three-dimensional model of the object.
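The edge-map step can be sketched with a plain gradient threshold over the 2D texture map: pixels with a strong intensity gradient are flagged as candidates for the custom mesh lines. The threshold and this particular detector are assumptions for illustration, not the system's actual edge-map algorithm.

```python
import numpy as np

def edge_map(texture):
    """Minimal illustrative edge map: flag pixels whose gradient
    magnitude exceeds half the maximum -- a stand-in for detecting the
    custom mesh lines drawn on the scanned object."""
    gy, gx = np.gradient(texture.astype(float))
    mag = np.hypot(gx, gy)
    return mag > 0.5 * mag.max()

# Toy 2D texture map: a dark mesh line across a bright background.
tex = np.ones((5, 5))
tex[2, :] = 0.0
edges = edge_map(tex)
print(edges.shape)    # (5, 5) boolean map of mesh-line candidates
```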

PANORAMIC IMAGE COMPRESSION METHOD AND APPARATUS
20200177925 · 2020-06-04

A panoramic image compression method and apparatus are disclosed. The method comprises: obtaining a first spherical model formed by a first panoramic image to be compressed; generating a second spherical model in the first spherical model according to a main-view image of a user; establishing a first mapping relationship between plane 2D rectangular coordinates in a second panoramic image and plane 2D rectangular coordinates in the first panoramic image; and sampling, from the first panoramic image according to the first mapping relationship, the pixels corresponding to the plane 2D rectangular coordinates in the second panoramic image, so as to constitute the second panoramic image containing those pixels and thereby realize the compression of the first panoramic image.
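The mapping-then-sampling step can be sketched as follows. A plain uniform nearest-neighbor mapping stands in for the view-dependent mapping the abstract derives from the user's main view (that dependence is omitted here); the point is that the second panorama is built by sampling pixels of the first through a coordinate mapping, so it has fewer pixels.

```python
import numpy as np

def compress(first, out_shape):
    """Build a mapping from each pixel coordinate of the (smaller)
    second panoramic image back to a coordinate in the first, then
    sample those pixels. Uniform nearest-neighbor mapping is an
    illustrative stand-in for the patented mapping."""
    h2, w2 = out_shape
    h1, w1 = first.shape[:2]
    rows = np.arange(h2) * h1 // h2     # second-image row -> first-image row
    cols = np.arange(w2) * w1 // w2     # second-image col -> first-image col
    return first[np.ix_(rows, cols)]

first = np.arange(64, dtype=float).reshape(8, 8)   # toy first panorama
second = compress(first, (4, 4))                   # sampled second panorama
print(second.shape)                                # (4, 4): fewer pixels
```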

SPHERICAL VISUAL CONTENT TRANSITION
20200177860 · 2020-06-04

First visual information defining first spherical visual content, second visual information defining second spherical visual content, and/or other information may be obtained. Presentation of the first spherical visual content on a display may be effectuated. A spherical transition between the first spherical visual content and the second spherical visual content may be identified. The spherical transition may define a change in presentation of visual content on the display from the first spherical visual content to the second spherical visual content based on a transitional motion within a spherical space and/or other information. A change from presentation of the first spherical visual content on the display to presentation of the second spherical visual content on the display may be effectuated based on the spherical transition and/or other information. The change may be determined based on the transitional motion within the spherical space and/or other information.
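A minimal way to picture the transition is a smooth blending profile driven by the progress of the transitional motion. Reducing the motion to a scalar progress value, and the cosine profile itself, are assumptions for illustration; the abstract leaves the form of the transitional motion open.

```python
import math

def transition_weight(t):
    """Fraction of the second spherical visual content shown at
    transition progress t in [0, 1], using an illustrative smooth
    cosine ease-in/ease-out profile."""
    return 0.5 - 0.5 * math.cos(math.pi * t)

print(transition_weight(0.0))   # 0.0: only the first spherical content
print(transition_weight(1.0))   # 1.0: only the second spherical content
```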