G06T7/40

IMAGE QUALITY EVALUATION METHOD, IMAGE QUALITY EVALUATION APPARATUS, AND NON-TRANSITORY COMPUTER READABLE MEDIUM
20230230218 · 2023-07-20

An image quality evaluation method to be executed by an information processing apparatus includes: acquiring information about a step at a boundary portion between an image portion, which is formed using a recording material, and a base material portion, which is a ground of the image portion, and a gloss level of the image portion; and calculating an evaluation value of a relief feel on the basis of the acquired information about the step and the gloss level.
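The abstract does not disclose how the step information and gloss level are combined into an evaluation value, so the weighting function, normalisation ceilings, and units below are purely illustrative assumptions:

```python
def relief_feel_score(step_height_um: float, gloss_level: float,
                      w_step: float = 0.7, w_gloss: float = 0.3) -> float:
    """Combine the boundary step height and the gloss level of the image
    portion into a single relief-feel evaluation value.

    The weights and normalisation ranges are hypothetical; the patent
    abstract only states that both measurements feed the evaluation.
    """
    # Normalise each measurement to [0, 1] against assumed maxima:
    # a 50 um step ceiling and a 0-100 gloss-unit scale.
    step_norm = min(step_height_um / 50.0, 1.0)
    gloss_norm = min(gloss_level / 100.0, 1.0)
    # Weighted sum as one plausible way to fuse the two cues.
    return w_step * step_norm + w_gloss * gloss_norm
```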

Method and system for augmented reality visualisation

A method for visualizing an image combining an image (Ic) of a real object (200) originating from a video capture system (300) with digital information (In) originating from a three-dimensional model of the equipment, comprising: carrying out a processing operation to superimpose, in real time, a reference point (402) of the three-dimensional model with a reference point (302) of the video capture system and an object reference point (202) of the real object, and displaying at least some of the digital information superimposed on the image captured by the video capture system, further comprising: an initial step (Ei) of recording the reference texture (T200) of the real object, and a step (Ea) of analyzing the images transmitted by the video capture system, the analysis step comprising: generating a synthesis image from the captured image, and from the three-dimensional model of the equipment textured using the recorded texture; a step of calculating a composite image by mixing the synthesis image and the captured image.
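The final step above mixes the synthesis image (rendered from the textured three-dimensional model) with the captured image. The abstract leaves the mixing operation unspecified; a minimal sketch, assuming a global per-pixel linear blend with weight `alpha`:

```python
import numpy as np

def composite(captured: np.ndarray, synthesis: np.ndarray,
              alpha: float = 0.5) -> np.ndarray:
    """Mix a rendered synthesis image with the captured video frame.

    `alpha` weights the synthesis image; the actual mixing rule in the
    patent is not disclosed, so this blend is an assumption.
    """
    assert captured.shape == synthesis.shape
    # Blend in float to avoid uint8 overflow, then convert back.
    blended = (alpha * synthesis.astype(np.float32)
               + (1.0 - alpha) * captured.astype(np.float32))
    return blended.astype(np.uint8)
```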

Identifying spatial locations of images using location data from mobile devices
11562495 · 2023-01-24

A system determines spatial locations of pixels of an image. The system includes a processor configured to: receive location data from devices located within a hotspot; generate a density map for the hotspot including density pixels associated with spatial locations defined by the location data, each density pixel having a value indicating an amount of location data received from an associated spatial location; match the density pixels of the density map to at least a portion of the pixels of the image; and determine spatial locations of the at least a portion of the pixels of the image based on the spatial locations of the matching density pixels of the density map. In some embodiments, the image and density map are converted to edge maps, and a convolution is applied to the edge maps to match the density map to the pixels of the image.
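The edge-map matching in the final sentence can be sketched as a brute-force cross-correlation search: convert both images to edge maps, slide the density edge map over the image edge map, and take the offset with the highest response. The edge operator and search strategy below are illustrative stand-ins for whatever the patent actually uses:

```python
import numpy as np

def edge_map(a: np.ndarray) -> np.ndarray:
    """Crude edge map: gradient magnitude via finite differences."""
    gy, gx = np.gradient(a.astype(np.float32))
    return np.hypot(gx, gy)

def best_offset(image: np.ndarray, density: np.ndarray) -> tuple:
    """Return the (row, col) offset where the density edge map best
    matches the image edge map, by exhaustive correlation search."""
    ie, de = edge_map(image), edge_map(density)
    H, W = ie.shape
    h, w = de.shape
    best, best_rc = -np.inf, (0, 0)
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            # Correlation score of the density edges at this placement.
            score = float(np.sum(ie[r:r + h, c:c + w] * de))
            if score > best:
                best, best_rc = score, (r, c)
    return best_rc
```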

TEXTURE MAPPING

A texture cache comprises at least two banks of cache storage to cache texels for processing in texture mapping operations. Access to the cached texels corresponding to a given chunk of texels of a given texture image is controlled according to a selected bank mapping selected from two or more bank mappings supported by the texture cache access control circuitry. Each bank mapping corresponds to a different mapping of the respective texels within the given chunk to the banks of cache storage. In at least one operating mode, the selected bank mapping is selected for the given chunk of texels of the given texture image depending on: at least one of first/second chunk position coordinates associated with the given chunk of texels; and at least one further texture attribute associated with the given texture image.
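The hardware bank mappings themselves are not disclosed, but the selection scheme can be sketched with two hypothetical mappings: a direct interleave of texels across banks, and a swizzled variant that flips the mapping based on the chunk's position coordinates so adjacent texels across a chunk boundary land in different banks:

```python
def bank_for_texel(chunk_x: int, chunk_y: int, tx: int, ty: int,
                   n_banks: int = 2, swizzle: bool = True) -> int:
    """Map texel (tx, ty) inside chunk (chunk_x, chunk_y) to a cache bank.

    Both mappings here are hypothetical examples of the abstract's idea:
    the selected mapping depends on the chunk position coordinates.
    """
    # Direct interleave: neighbouring texels alternate banks.
    bank = (tx + ty) % n_banks
    if swizzle:
        # Alternate mapping selected by chunk-position parity, so the
        # interleave pattern flips between adjacent chunks.
        bank = (bank + chunk_x + chunk_y) % n_banks
    return bank
```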

METHOD FOR DECODING IMMERSIVE VIDEO AND METHOD FOR ENCODING IMMERSIVE VIDEO

A method of encoding an immersive image according to the present disclosure comprises classifying a plurality of view images into a basic image and an additional image, generating a plurality of texture atlases based on the plurality of view images, generating a first depth atlas including depth information of view images included in a first texture atlas among the plurality of texture atlases, and generating a second depth atlas including depth information of view images included in remaining texture atlases other than the first texture atlas.
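The depth-atlas grouping described above can be sketched as a simple partition: depth information for the views packed into the first texture atlas goes into the first depth atlas, and depth for all remaining views goes into the second. View images are represented here by name strings; the real pipeline operates on packed pixel data:

```python
def build_depth_atlases(texture_atlases: list) -> dict:
    """Partition per-view depth information into two depth atlases.

    `texture_atlases` is a list of texture atlases, each a list of the
    view-image identifiers it contains. The first depth atlas mirrors
    the first texture atlas; the second covers every other atlas.
    """
    first = list(texture_atlases[0])
    # Flatten the views of all remaining texture atlases.
    rest = [view for atlas in texture_atlases[1:] for view in atlas]
    return {"depth_atlas_1": first, "depth_atlas_2": rest}
```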

METHOD AND APPARATUS FOR MULTIMODAL SOFT TISSUE DIAGNOSTICS
20230222767 · 2023-07-13

A method and device for multimodal imaging of dermal and mucosal lesions. The method includes using at least two imaging modalities, one of which is a 3D scan of the lesion, additionally providing information on the distance and angulation between the scanning device and the dermis or mucosa, and mapping at least the second modality over the 3D data.