G06T3/0056

Signal processors and methods for estimating transformations between signals with phase estimation
09836807 · 2017-12-05

A phase estimation method estimates the phase of signal components using a point spread function. For a complex frequency of a signal at a non-integer location in a complex frequency domain, the method obtains a point spread function that expresses complex frequencies at the non-integer location in terms of integral frequencies. It obtains complex frequencies of the signal for the integral frequencies, and computes a sum of products of the complex frequencies of the signal at the integral frequencies with the corresponding complex values of the point spread function to provide an estimate of the phase of the signal at the non-integer location.
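As a concrete sketch of this scheme: if the point spread function is taken to be the DFT's Dirichlet (periodic sinc) kernel — an assumption, since the abstract does not specify the PSF's form — the sum of products over the integer bins can be written in a few lines of NumPy (function names are illustrative):

```python
import numpy as np

def dirichlet_psf(d, n):
    """Complex weight expressing the spectrum at a fractional bin offset d
    in terms of an integer DFT bin (assumed PSF: Dirichlet kernel)."""
    return (np.sin(np.pi * d) / (n * np.sin(np.pi * d / n))
            * np.exp(1j * np.pi * d * (n - 1) / n))

def phase_at_fractional_bin(x, f):
    """Estimate the phase of the spectrum of x at non-integer bin f."""
    n = len(x)
    big_x = np.fft.fft(x)                        # complex values at integral frequencies
    d = np.arange(n) - f                         # offsets k - f, never integer for non-integer f
    xf = np.sum(big_x * dirichlet_psf(d, n))     # sum of products with the PSF values
    return np.angle(xf)
```

For a non-integer f this reproduces the direct fractional-bin DFT exactly, so the recovered angle approximates the phase of a sinusoid at that frequency.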

Image processing apparatus, method, and medium to apply a restrictive condition
09836812 · 2017-12-05

An image processing apparatus includes: a restrictive condition storage unit in which at least one restrictive condition, which is to be applied to an image to be output and acquired from a subject, is stored; an accepting unit that accepts an image that is obtained by shooting the subject and has at least one field; an image changing unit that applies the at least one restrictive condition to the at least one field of the accepted image, changes the at least one field so that it satisfies the at least one restrictive condition, and acquires at least one new field; and an image output unit that outputs the at least one field acquired by the image changing unit, enabling an image having an overall balance to be output.
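The pipeline of stored conditions being applied field by field could be sketched as below. The abstract leaves the restrictive conditions unspecified, so the luminance-range condition here, its bounds, and all names are hypothetical:

```python
import numpy as np

def luminance_range_condition(field, low=60.0, high=180.0):
    """Hypothetical restrictive condition: the mean luminance of a field
    must fall inside a stored range; fields are shifted to satisfy it."""
    mean = field.mean()
    if low <= mean <= high:
        return field                       # already satisfies the condition
    target = np.clip(mean, low, high)
    return field + (target - mean)         # changed field satisfying the condition

def process_image(fields, conditions):
    """Accept an image's fields, apply every stored restrictive condition
    to each field, and return the changed fields for output."""
    out = []
    for field in fields:
        for condition in conditions:
            field = condition(field)
        out.append(field)
    return out
```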

Method and image-processing device for detecting foreign objects on a transparent protective cover of a video camera
11670074 · 2023-06-06

A method for determining whether or not a transparent protective cover of a video camera comprising a lens-based optical imaging system is partly covered by a foreign object is disclosed. The method comprises: obtaining (402) a first captured image frame captured by the video camera with a first depth of field; obtaining (404) a second captured image frame captured by the video camera with a second depth of field which differs from the first depth of field; and determining (406) whether or not the protective cover is partly covered by the foreign object by analysing whether or not the first and second captured image frames are affected by the presence of the foreign object on the protective cover such that the difference between the first depth of field and the second depth of field results in a difference in the luminance pattern of corresponding pixels of a first image frame and a second image frame. The first image frame is based on the first captured image frame and the second image frame is based on the second captured image frame.
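The core comparison might be sketched as follows, assuming grayscale (luminance) frames and a fixed per-pixel threshold; both the threshold and the coverage fraction are illustrative assumptions, not values from the patent:

```python
import numpy as np

def foreign_object_mask(frame_dof1, frame_dof2, threshold=10.0):
    """Flag pixels whose luminance changes between the two depths of field.
    A foreign object on the cover is far out of focus, so it blurs very
    differently at the two depths of field, producing such a difference."""
    diff = np.abs(frame_dof1.astype(float) - frame_dof2.astype(float))
    return diff > threshold

def cover_partly_covered(frame_dof1, frame_dof2, min_fraction=0.001):
    """Decide the cover is partly covered when enough pixels differ."""
    mask = foreign_object_mask(frame_dof1, frame_dof2)
    return mask.mean() > min_fraction
```

In practice the two frames would first be registered and exposure-normalised so that only depth-of-field effects remain; that step is omitted here.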

Virtual image generating apparatus, virtual image generating method, and recording medium storing virtual image generating program
09807354 · 2017-10-31

An image generation apparatus, system, and method that include: an image recording unit that records at least one image containing photography position information corresponding to a photography position of the image; an input unit for selecting a selected image from the at least one image recorded in the image recording unit; a map-image unit that acquires at least one map image corresponding to the photography position information of the selected image; a direction detection unit that detects a photography direction of the selected image from the photography position information and acquires at least one surrounding map image viewable from the photography position when the selected image was recorded; and a virtual image generation unit that generates, based on the photography position information and the at least one surrounding map image, at least one virtual image of a scene viewable from the photography position when the selected image was recorded.

DISPLAY DEVICE AND DISPLAY METHOD
20170307926 · 2017-10-26

An image signal line driver circuit includes first to third source drivers and fourth to sixth source drivers, which are respectively cascade-connected. The output duration of the data signals provided to these source drivers becomes shorter for source drivers connected further downstream (that is, the amount of pixel data to be output to the next stage becomes smaller). This reduces the power consumption and heat generation of the overall device. Moreover, the phases of the data signals are shifted, thereby reducing EMI. In this way, when a plurality of image signal line driver circuits are cascade-connected, heat generation and power consumption in each driver circuit and/or EMI between them are reduced.

System and method for processing video to provide facial de-identification

A system and method for real-time image and video face de-identification that removes the identity of the subject while preserving the facial behavior is described. The facial features of the source face are replaced with those of the target face while preserving the facial actions of the source face on the target face. The facial actions of the source face are transferred to the target face using personalized Facial Action Transfer (FAT), and the color and illumination are adapted. Finally, the source image or video containing the target facial features is output for display. Alternatively, the system can run in real time.

Color reconstruction

In one embodiment, coloring artifacts of a color image output by a camera are minimized by taking into account a distortion introduced by the lens. Based on the distortion, the color reconstruction determines which pixels in the grayscale image to include in the reconstruction process. Additionally, the color reconstruction can take into account edges depicted in the grayscale image to determine which pixels to include in the reconstruction process. In another embodiment, coloring artifacts in a 360-degree color image are minimized by performing the color reconstruction process on a three-dimensional surface. Before the color reconstruction takes place, the two-dimensional grayscale image is projected onto a three-dimensional surface, and the color reconstruction is performed on the three-dimensional surface. The color reconstruction on the three-dimensional surface can take into account the distortion produced by the lens and/or can take into account the edges depicted in the two-dimensional and three-dimensional grayscale images.

Input parameter based image waves
11671572 · 2023-06-06

A virtual wave creation system comprises an eyewear device that includes a frame, a temple connected to a lateral side of the frame, and a depth-capturing camera. Execution of programming by a processor configures the virtual wave creation system to generate, for each of multiple initial depth images, a respective warped wave image by applying, to the initial three-dimensional coordinates, a transformation function that is responsive to a selected input parameter. The virtual wave creation system creates a warped wave video including a sequence of the generated warped wave images, and presents the warped wave video via an image display.
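A minimal sketch of such a transformation function, assuming a sinusoidal displacement along one axis; the abstract does not specify the function's form, and `amplitude` merely stands in for the selected input parameter:

```python
import numpy as np

def wave_transform(coords, amplitude, wavelength=0.5, phase=0.0):
    """Displace each (x, y, z) vertex along z by a sine of its x position.
    The sinusoidal form and parameter names are illustrative assumptions."""
    out = coords.copy()
    out[:, 2] += amplitude * np.sin(2 * np.pi * coords[:, 0] / wavelength + phase)
    return out

def warped_wave_video(coords, amplitude, n_frames=8):
    """One warped wave image per frame, advancing the phase each frame so
    the sequence plays back as a travelling wave."""
    return [wave_transform(coords, amplitude, phase=2 * np.pi * i / n_frames)
            for i in range(n_frames)]
```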

Systems and methods for changing projection of visual content
09747667 · 2017-08-29

First visual information defining the visual content in a first projection may be accessed. Second visual information defining lower resolution versions of the visual content in the first projection may be accessed. A transformation of the visual content from the first projection to a second projection may be determined. The transformation may include a visual compression of a portion of the visual content in the first projection. The portion may be identified, and an amount of the visual compression of the portion may be determined. One or more lower resolution versions of the visual content may be selected based on the amount of the visual compression. The visual content may be transformed using the one or more selected lower resolution versions.
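The step from "amount of compression" to "which lower resolution version" could be sketched like this, assuming an equirectangular-style projection where compression grows toward the poles and a log2 mapping from compression ratio to resolution level; both are illustrative assumptions, not details from the abstract:

```python
import math

def equirect_row_compression(lat_radians):
    """In an equirectangular projection, a row at latitude phi is stretched
    by 1/cos(phi), so a transformation away from that projection compresses
    the row by roughly that factor (assumed projection model)."""
    return 1.0 / max(math.cos(lat_radians), 1e-6)

def select_resolution_level(compression_ratio, n_levels):
    """Map the compression amount to one of the pre-stored lower resolution
    versions: level 0 is full resolution, each level halves the resolution,
    so a region compressed 2**k times can come from level k."""
    level = int(math.floor(math.log2(max(compression_ratio, 1.0))))
    return min(level, n_levels - 1)
```

With four stored versions, a region compressed 4x would be filled from the quarter-resolution version, avoiding work on detail the projection discards anyway.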

X-ray CT system and medical image processing method

An X-ray CT system and a method of processing medical images are provided that enable combining of images with a reduced effect of differences in the coordinates of pixels in the overlapped areas of a plurality of constituent images. The X-ray CT system includes a processor and a synthesizer. Based on coordinates of first pixels in a first image of a first three-dimensional region of the subject and coordinates of second pixels in a second image of a second three-dimensional region of the subject, the processor combines the first pixels with the second pixels on a one-for-one basis within a predetermined range in the rostrocaudal direction. The synthesizer generates third pixels relative to the first pixels and the second pixels and generates a third image that includes the third pixels.
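The one-for-one combination over the overlap could be sketched as a linearly feathered blend along the rostrocaudal axis, assuming the two volumes are already registered over the overlap region; the linear weighting scheme is an illustrative assumption, not the patent's method:

```python
import numpy as np

def combine_overlap(first, second, axis=0):
    """Generate third pixels from paired first and second pixels across the
    overlap: weights ramp linearly along the given (rostrocaudal) axis, so
    the blend moves from the first image's values to the second's."""
    n = first.shape[axis]
    # weight shape broadcasts along every axis except the blend axis
    shape = [-1 if a == axis else 1 for a in range(first.ndim)]
    w = np.linspace(1.0, 0.0, n).reshape(shape)
    return w * first + (1.0 - w) * second
```

Feathering like this suppresses the visible seam that a hard cut between the two constituent images would leave at the overlap boundary.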