Information processing apparatus, information processing method and storage medium

An information processing apparatus outputs a histogram for inspecting the state of a target object based on the presence of a peak in a specific class of the histogram, the histogram representing a distribution of depth values from a measurement apparatus to the target object. The information processing apparatus includes an acquisition unit configured to acquire depth information obtained from a result of the measurement apparatus measuring the depth value to the target object, and an output unit configured to output a histogram based on the acquired depth information such that the frequency of the class containing a predetermined depth value, applied when no depth value is obtained, is reduced.
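A minimal sketch of the histogram step described above, assuming (as an illustration, not from the source) that the sensor reports `0.0` as its predetermined "no measurement" value; the class containing that sentinel is suppressed so failed measurements cannot masquerade as a surface peak:

```python
import numpy as np

# Hypothetical depth readings in metres; 0.0 is assumed to be the
# predetermined value applied when no depth value is obtained.
depths = np.array([0.0, 0.0, 1.2, 1.3, 1.25, 2.0, 0.0, 1.28])

INVALID_DEPTH = 0.0
bins = np.arange(0.0, 2.5, 0.25)  # histogram classes, 0.25 m wide

counts, edges = np.histogram(depths, bins=bins)

# Reduce (here: zero out) the frequency of the class containing the
# sentinel, so the peak search only sees real surfaces.
invalid_bin = np.searchsorted(edges, INVALID_DEPTH, side="right") - 1
counts[invalid_bin] = 0

peak_class = np.argmax(counts)
print(edges[peak_class], edges[peak_class + 1])  # class holding the dominant surface
```

With the sentinel bin left in, the three failed readings would have formed the tallest class; with it suppressed, the peak lands on the real surface near 1.25–1.5 m.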

SYSTEMS AND METHODS OF CREATING A THREE-DIMENSIONAL VIRTUAL IMAGE
20220191459 · 2022-06-16 ·

Embodiments of the present invention create a three-dimensional virtual model by: a user identifying a three-dimensional object; capturing a plurality of two-dimensional images of the object in succession from different orientations; recording the images on a storage medium; determining the relative change in position across the images by comparing two subsequent images, the relative change being determined from the difference in color intensity values between the pixels of one image and the next; generating a plurality of arrays from the determined differences; and generating from those arrays a computer image that represents the three-dimensional object.
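The comparison step can be sketched as follows; the frames, the intensity threshold, and the array layout are illustrative assumptions, not taken from the source:

```python
import numpy as np

# Two successive grayscale frames (synthetic): the object's appearance
# changes between captures taken from different orientations.
frame_a = np.zeros((4, 4), dtype=np.int16)
frame_b = np.zeros((4, 4), dtype=np.int16)
frame_b[1:3, 1:3] = 50  # region whose intensity changed between frames

diff = np.abs(frame_b - frame_a)  # per-pixel color-intensity difference
changed = diff > 10               # assumed threshold for a significant change

# One difference array per consecutive pair of images; the full
# plurality of arrays feeds the 3-D image generation stage.
arrays = [diff]
print(int(changed.sum()))
```

Here four pixels register as changed, giving one entry of the plurality of arrays from which the computer image is generated.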

A METHOD AND APPARATUS FOR ENCODING AND RENDERING A 3D SCENE WITH INPAINTING PATCHES
20220159231 · 2022-05-19 ·

Methods, devices and streams are disclosed for encoding, transporting and decoding a 3D scene prepared to be viewed from inside a viewing zone. A central view comprising texture and depth information is encoded by projecting points of the 3D scene visible from a central point of view onto an image plane. Patches are generated to encode small parts of the 3D scene not visible from the central point of view. At rendering, a viewport image is generated for the current point of view. Holes in the viewport, that is, dis-occluded areas, are filled using a patch-based inpainting algorithm adapted to take the patches, warped according to the rotation and translation between the virtual camera used for capturing the patch and the current virtual camera.
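The warping of a patch pixel between the capture camera and the current camera can be sketched with a simple pinhole model; the intrinsics, pose, and pixel values below are illustrative assumptions:

```python
import numpy as np

# Toy pinhole intrinsics (assumed): focal length in pixels, principal point.
f = 100.0
cx = cy = 64.0

def lift(u, v, z):
    """Lift a pixel with depth z to a 3-D point in the camera frame."""
    return np.array([(u - cx) * z / f, (v - cy) * z / f, z])

def project(p):
    """Project a 3-D camera-frame point back to pixel coordinates."""
    return f * p[0] / p[2] + cx, f * p[1] / p[2] + cy

R = np.eye(3)                   # relative rotation between the two virtual cameras
t = np.array([0.1, 0.0, 0.0])   # current camera 0.1 m to the right of the capture camera

u, v, z = 80.0, 64.0, 2.0       # a patch pixel and its decoded depth
p_capture = lift(u, v, z)
p_current = R @ p_capture - t   # express the point in the current camera's frame
u2, v2 = project(p_current)
print(round(u2, 2), round(v2, 2))
```

The reprojected location `(u2, v2)` is where the patch texel lands in the current viewport, ready for the patch-based inpainting of the dis-occluded area.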

METHOD AND DEVICE FOR MONITORING THE SHAPE OF HARD-TO-REACH COMPONENTS
20230267631 · 2023-08-24 ·

The group of inventions relates to measuring technology, in particular to methods for inspecting the shape of hard-to-reach parts, and can be used in power engineering, transport, mechanical engineering and other fields of technology to measure the geometric parameters of a part. The claimed technical result is increased accuracy, reliability and speed of measurements for determining the shape and defects of parts installed in cavities, as well as the shape of internal cavities of products and surface discontinuities. In the method, the executive part of the control system, equipped with a miniature stereo camera, is delivered inside the equipment under inspection. For navigation along its path, white-light illumination is used; the white light is transmitted through an optical fiber, on exit from which a given intensity indicatrix is formed by a first lens. The white light is then turned off or dimmed and a laser is turned on; the laser stream, transmitted through an optical fiber and passing through a second lens and then a diffractive optical element, forms a pattern of small laser intensity spots on the hard-to-reach surface of the part, from which a stereo image of that surface is obtained. The three-dimensional hard-to-reach surface is then reconstructed, the laser intensity spots on the pair of flat images being used to automatically identify the same object points in the stereo images with a given degree of confidence.
The inspection system consists of a miniature digital camera that forms a stereo image of the object under inspection, a white-light illumination unit, a laser illuminator operating in pulsed or continuous mode, a handle placed on the articulation control panel, a PC for processing and displaying information, articulation cables, two lenses, the DOE, a laser fiber, a lighting fiber, a power line and a video signal transmission line.
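Once a laser spot has been matched in both images of the stereo pair, its depth follows from the disparity. A minimal sketch, with the focal length and baseline as illustrative values for a miniature stereo camera:

```python
# Assumed toy parameters for a miniature stereo camera.
f = 800.0  # focal length, in pixels
B = 4.0    # stereo baseline, in mm

def triangulate(u_left, u_right):
    """Depth of a matched laser spot from its horizontal disparity."""
    disparity = u_left - u_right
    if disparity <= 0:
        raise ValueError("matched spots must have positive disparity")
    return f * B / disparity  # depth in mm

# A laser spot identified at column 412 in the left image and 404 in the right:
z = triangulate(412.0, 404.0)
print(z)  # → 400.0 mm to the inspected surface
```

Repeating this for every confidently matched spot yields the point cloud from which the three-dimensional hard-to-reach surface is reconstructed.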

Method for displaying a model of a surrounding area, control unit and vehicle

A method including recording a first and a second camera image, the two camera images having an overlap region. The method includes: assigning pixels of the first camera image and pixels of the second camera image to predefined points of a three-dimensional lattice structure, the predefined points lying in the region of the lattice structure that represents the overlap region; ascertaining a color-information difference for each predefined point as a function of the assigned color information; ascertaining a quality value as a function of the ascertained color-information difference at the specific predefined point; determining a global color transformation matrix as a function of the color-information differences, weighted by the corresponding quality values; and adapting the second camera image as a function of the determined color transformation matrix.
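The global color transformation can be sketched as a weighted least-squares fit over the overlap points; the color samples, the quality weights, and the choice of a 3×3 linear model are illustrative assumptions:

```python
import numpy as np

# RGB colors sampled at the predefined overlap points of camera 1 and
# camera 2 (synthetic: camera 2 is uniformly 20% darker), plus a quality
# weight per point derived from the local color difference.
C1 = np.array([[100., 50., 20.], [200., 120., 80.], [30., 90., 150.]])
C2 = C1 * 0.8
w = np.array([1.0, 0.5, 1.0])

# Weighted least squares for a 3x3 matrix M such that c1 ≈ M @ c2:
# scale each row of the system by sqrt(weight), then solve.
W = np.sqrt(w)[:, None]
M, *_ = np.linalg.lstsq(W * C2, W * C1, rcond=None)
M = M.T

corrected = (M @ C2.T).T  # adapt camera 2's samples toward camera 1
print(np.round(M, 3))
```

With the synthetic 20% brightness gap, the fit recovers a uniform scaling of 1.25, and applying it to the second camera's colors harmonizes the overlap region.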

SYSTEM AND METHOD FOR INTERACTIVELY RENDERING AND DISPLAYING 3D OBJECTS
20220124294 · 2022-04-21 ·

A 3D rendering solution that can include machine learning to convert static images into interactable 3D objects. The solution can analyze pixel data containing position, color and luminosity data. It can identify subobjects: collections of pixels at a specific location, with an infinite number of color-luminosity combinations tied to perspective. The solution can use perspective as an input, along with a product feed and a library of subobjects, to render the interactable 3D objects, and can capture relative perspective with a protractic device to measure it accurately. The libraries can include subobjects along with their metadata. The solution can analyze pixel patterns to identify appropriate sets of subobjects to create new objects of the same object class, further refined by its ability to capture relative perspective.

An Image Synthesis Method, Apparatus and Device for Free-viewpoint
20210368153 · 2021-11-25 ·

Embodiments of the present specification disclose an image synthesis method, apparatus and device for free viewpoints. The method includes: correcting an input first depth map based on an input first color map to obtain a second depth map, the first color map and the first depth map each including a left image and a right image; forward-projecting the second depth map to a virtual viewpoint position based on the pose of the virtual viewpoint to obtain a third depth map, the third depth map being a projection map located at the virtual viewpoint position; back-projecting the left and right color maps of the first color map closest to the virtual viewpoint position to the virtual viewpoint position based on the third depth map to obtain a second color map, the second color map being a projection of the first color map at the virtual viewpoint position; correcting the second color map using an optical flow algorithm to obtain a third color map, the optical flow algorithm aligning the left and right images of the second color map; and performing weighted blending on the third color map to obtain the composite image.
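The forward-projection step can be sketched as lifting each depth pixel to 3-D, transforming it by the virtual viewpoint's pose, and splatting it into the new depth map with a z-buffer; the toy intrinsics, pose, and scene are assumptions for illustration:

```python
import numpy as np

# Assumed toy pinhole intrinsics and virtual-viewpoint pose.
f, cx, cy = 50.0, 2.0, 2.0
R = np.eye(3)
t = np.array([0.0, 0.0, -1.0])  # virtual camera 1 m closer to the scene

depth2 = np.full((5, 5), 4.0)      # second depth map (after correction): a flat plane
depth3 = np.full((5, 5), np.inf)   # third depth map at the virtual viewpoint

for v in range(5):
    for u in range(5):
        z = depth2[v, u]
        p = np.array([(u - cx) * z / f, (v - cy) * z / f, z])  # lift to 3-D
        q = R @ p + t                                          # virtual camera frame
        u2 = int(round(f * q[0] / q[2] + cx))                  # reproject and splat
        v2 = int(round(f * q[1] / q[2] + cy))
        if 0 <= u2 < 5 and 0 <= v2 < 5 and q[2] < depth3[v2, u2]:
            depth3[v2, u2] = q[2]  # z-buffer: the nearest surface wins

print(depth3[2, 2])
```

The plane at 4 m reappears at 3 m in the third depth map, which then drives the back-projection of the color maps to the virtual viewpoint.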

Modification of a live-action video recording using volumetric scene reconstruction to replace a designated region

A main video sequence of a live-action scene is captured along with ancillary device data that provides corresponding volumetric information about the scene. The volumetric data can then be used to visually remove or replace objects in the main video sequence. A removed object is replaced by the view that the main video sequence would have captured had the object not been present in the live-action scene at the time of capture.
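Once the volumetric reconstruction supplies the background the object occluded, the replacement itself reduces to a masked composite. A toy sketch with synthetic frames and a simplified intensity-based mask:

```python
import numpy as np

# Synthetic main-sequence frame: a flat 200-intensity scene with an
# object (intensity 10) occupying the designated region.
frame = np.full((4, 4), 200, dtype=np.uint8)
frame[1:3, 1:3] = 10
mask = frame == 10  # designated region to remove (simplified mask)

# View rendered from the volumetric data: what the camera would have
# seen had the object not been present.
background = np.full((4, 4), 200, dtype=np.uint8)

result = np.where(mask, background, frame)  # composite: object visually removed
print(int((result == 200).all()))
```

In production the mask and background come from the volumetric scene reconstruction rather than a threshold, but the final compositing step has this per-pixel form.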

Image splicing method and apparatus, and storage medium

An image splicing method includes obtaining a first overlapping image and a second overlapping image from a first image and a second image to be spliced. The method also includes: determining a motion vector from each pixel in the first overlapping image to the corresponding pixel in the second overlapping image, to obtain an optical flow vector matrix; remapping, according to the optical flow vector matrix, the first overlapping image to obtain a first remapped image and the second overlapping image to obtain a second remapped image; and merging the first remapped image and the second remapped image to obtain a merged image of the two overlapping images, and determining the spliced image of the first image and the second image according to the merged image.
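A minimal 1-D sketch of the remap-and-merge step: each overlap row is warped halfway along the per-pixel flow so the two meet in the middle, then averaged. The rows and the flow field are synthetic illustrations:

```python
import numpy as np

row1 = np.array([10., 20., 30., 40.])
row2 = np.array([20., 30., 40., 50.])  # row1's content shifted left by one pixel
flow = np.full(4, -1.0)                # pixel i of row1 matches pixel i-1 of row2

x = np.arange(4, dtype=float)
remap1 = np.interp(x - flow / 2, x, row1)  # warp row1 halfway toward row2
remap2 = np.interp(x + flow / 2, x, row2)  # warp row2 halfway toward row1
merged = 0.5 * (remap1 + remap2)           # merged overlap image
print(merged)
```

Away from the borders the two remapped rows agree exactly, so the average is ghost-free; a full implementation would apply the same idea per pixel in 2-D and blend the merged overlap back into the spliced image.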