Patent classifications
G06T7/593
Method for image processing of image data for image and visual effects on a two-dimensional display wall
A captured scene of a live action scene, recorded while a display wall is positioned to be part of the live action scene, may be processed. To perform the processing, image data of the live action scene having a live actor and the display wall displaying a first rendering of a precursor image is received. Further, precursor metadata for the precursor image displayed on the display wall and display wall metadata for the display wall are determined. An image matte is accessed, where the image matte indicates a first portion associated with the live actor and a second portion associated with the precursor image on the display wall in the live action scene. Pixel display values to add or modify an image effect or a visual effect are determined, and the image data is adjusted using the pixel display values and the image matte.
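A minimal sketch of the matte-based adjustment the abstract describes: an effect is blended into the display-wall region selected by the matte while the live-actor region is left untouched. The function and variable names here are illustrative, not from any specific pipeline.

```python
import numpy as np

def adjust_with_matte(frame, effect_values, matte):
    """Blend per-pixel effect values into the frame where the matte
    selects the display wall (matte=1) and leave the live actor
    (matte=0) untouched. Inputs are float arrays in [0, 1]."""
    matte = matte[..., np.newaxis]          # broadcast over RGB channels
    return frame * (1.0 - matte) + effect_values * matte

frame = np.ones((2, 2, 3)) * 0.5            # captured image data
effect = np.zeros((2, 2, 3))                # e.g. darken the wall region
matte = np.array([[1.0, 0.0],
                  [0.0, 1.0]])              # 1 = wall, 0 = actor
out = adjust_with_matte(frame, effect, matte)
```

A real matte would be a soft (fractional) alpha at the actor's edges, which this linear blend already handles.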
Systems and methods for digitally representing a scene with multi-faceted primitives
Disclosed is a system and associated methods for generating and rendering a polyhedral point cloud that represents a scene with multi-faceted primitives. Each multi-faceted primitive stores multiple sets of values that represent different non-positional characteristics that are associated with a particular point in the scene from different angles. For instance, the system generates a multi-faceted primitive for a particular point of the scene that is captured in a first capture from a first position and a second capture from a different second position. Generating the multi-faceted primitive includes defining a first facet with a first surface normal oriented towards the first position and first non-positional values based on descriptive characteristics of the particular point in the first capture, and defining a second facet with a second surface normal oriented towards the second position and second non-positional values based on different descriptive characteristics of the particular point in the second capture.
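A hedged sketch of such a multi-faceted primitive: each facet stores a surface normal oriented toward its capture position plus non-positional values (e.g. color) seen from that side, and rendering picks the facet whose normal best faces the viewer. All class and method names are assumptions for illustration.

```python
import numpy as np

class MultiFacetedPrimitive:
    def __init__(self, position):
        self.position = np.asarray(position, dtype=float)
        self.facets = []                      # list of (normal, values) pairs

    def add_facet(self, capture_position, values):
        # Surface normal oriented toward the capture position
        normal = np.asarray(capture_position, dtype=float) - self.position
        normal /= np.linalg.norm(normal)
        self.facets.append((normal, values))

    def values_for_view(self, view_direction):
        """Return the non-positional values of the facet whose normal
        best faces the given viewing direction."""
        v = np.asarray(view_direction, dtype=float)
        v = v / np.linalg.norm(v)
        return max(self.facets, key=lambda f: np.dot(f[0], v))[1]

p = MultiFacetedPrimitive([0, 0, 0])
p.add_facet([0, 0, 5], values={"color": "red"})    # first capture position
p.add_facet([5, 0, 0], values={"color": "blue"})   # second capture position
best = p.values_for_view([0, 0, 1])                # viewer along +z
```

Selecting by largest dot product is one plausible reading of "from different angles"; a production renderer might instead interpolate between facets.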
Systems and methods for improved 3-D data reconstruction from stereo-temporal image sequences
In some aspects, the techniques described herein relate to systems, methods, and computer readable media for data pre-processing for stereo-temporal image sequences to improve three-dimensional data reconstruction. In some aspects, the techniques described herein relate to systems, methods, and computer readable media for improved correspondence refinement for image areas affected by oversaturation. In some aspects, the techniques described herein relate to systems, methods, and computer readable media configured to fill missing correspondences to improve three-dimensional (3-D) reconstruction. The techniques include identifying image points without correspondences, using existing correspondences and/or other information to generate approximated correspondences, and cross-checking the approximated correspondences to determine whether the approximated correspondences should be used for the image processing.
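A minimal sketch of the fill-and-cross-check step: a missing stereo correspondence is approximated from the disparities of neighboring matched points, then accepted only if matching back lands near the original point. Disparities here are 1-D scalars purely for illustration.

```python
def approximate_correspondence(point, neighbors):
    """Average the disparities of neighboring matched (left, right)
    points to guess a right-image position for an unmatched point."""
    disparities = [right - left for left, right in neighbors]
    guess = sum(disparities) / len(disparities)
    return point + guess

def cross_check(left_x, right_x, match_back, tolerance=1.0):
    """Accept the approximation only if matching the right point back
    to the left image lands within tolerance of the original point."""
    return abs(match_back(right_x) - left_x) <= tolerance

# Neighboring existing correspondences with disparity near 4 px
neighbors = [(10.0, 14.0), (12.0, 16.1), (11.0, 15.0)]
right_x = approximate_correspondence(11.5, neighbors)
ok = cross_check(11.5, right_x, match_back=lambda r: r - 4.0)
```

The `match_back` callable stands in for a right-to-left matcher; in a real system it would re-run the correspondence search in the opposite direction.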
Synthesizing three-dimensional visualizations from perspectives of onboard sensors of autonomous vehicles
Aspects of the disclosure provide for generating a visualization of a three-dimensional (3D) world view from the perspective of a camera of a vehicle. For example, images of a scene captured by a camera of the vehicle and 3D content for the scene may be received. A virtual camera model for the camera of the vehicle may be identified. A set of matrices may be generated using the virtual camera model. The set of matrices may be applied to the 3D content to create a 3D world view. The visualization may be generated using the 3D world view as an overlay on the captured image, and the visualization provides a real-world image from the perspective of the camera of the vehicle with one or more graphical overlays of the 3D content.
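An illustrative sketch of the matrix step: an intrinsic matrix from a (hypothetical) virtual camera model and an extrinsic matrix together map 3D world points to pixel coordinates, where overlay graphics can be drawn. All parameter values are assumptions.

```python
import numpy as np

def projection_matrices(fx, fy, cx, cy, rotation, translation):
    """Build the matrix set: intrinsics K and extrinsics [R|t]."""
    K = np.array([[fx, 0, cx],
                  [0, fy, cy],
                  [0,  0,  1]], dtype=float)
    Rt = np.hstack([rotation, translation.reshape(3, 1)])
    return K, Rt

def project(point_3d, K, Rt):
    """Map a 3D world point to 2D image coordinates for the overlay."""
    p = Rt @ np.append(point_3d, 1.0)   # world -> camera coordinates
    p = K @ p                           # camera -> image plane
    return p[:2] / p[2]                 # perspective divide

K, Rt = projection_matrices(1000, 1000, 640, 360,
                            rotation=np.eye(3),
                            translation=np.zeros(3))
pixel = project(np.array([0.0, 0.0, 10.0]), K, Rt)   # point 10 m ahead
```

A point on the camera's optical axis projects to the principal point (cx, cy), which is a quick sanity check on the matrices.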
Structural characteristic extraction using drone-generated 3D image data
A structural analysis computing device may generate a proposed insurance claim and/or generate a proposed insurance quote for an object pictured in a three-dimensional (3D) image. The structural analysis computing device may be coupled to a drone configured to capture exterior images of the object. The structural analysis computing device may include a memory, a user interface, an object sensor configured to capture the 3D image, and a processor in communication with the memory and the object sensor. The processor may access the 3D image including the object, and analyze the 3D image to identify features of the object, such as by inputting the 3D image into a trained machine learning or pattern recognition program. The processor may generate a proposed claim form for a damaged object and/or a proposed quote for an uninsured object, and display the form to a user for their review and/or approval.
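A hedged sketch of the analysis-and-drafting flow: a trained model identifies features in the 3D image, and the result drives whether a proposed claim or a proposed quote is drafted. The model here is a stub, not a real trained classifier, and all labels are placeholders.

```python
def analyze_structure(image_3d, model):
    """Identify object features (e.g. roof damage) from a 3D image."""
    return model.predict(image_3d)

def draft_document(features, insured):
    """Draft a proposed claim for a damaged insured object, otherwise
    a proposed quote, for the user to review and approve."""
    if insured and "damage" in features:
        return {"type": "proposed_claim", "features": features}
    return {"type": "proposed_quote", "features": features}

class StubModel:                  # stand-in for a trained ML program
    def predict(self, image_3d):
        return ["roof", "damage"]

features = analyze_structure(object(), StubModel())
doc = draft_document(features, insured=True)
```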
Binocular See-Through AR Head-Mounted Display Device and Information Display Method Therefor
A binocular see-through AR head-mounted display device is disclosed. Based on the mapping relationships f_c→s and f_d→i pre-stored in the head-mounted device, the position of the target object in the camera image is obtained through an image tracking method and is mapped to the screen coordinate system of the head-mounted device for calculating the left/right image display position. Through a monocular distance finding method, the distance between the target object and the camera is calculated in real time from the imaging scale of the camera, so as to calculate a left-right image distance, thereby calculating the left or the right image display position. Correspondingly, the present invention also provides an information display method for a binocular see-through AR head-mounted display device and an augmented reality information display system. The present invention is highly reliable with low cost. The conventional depth of field adjustment is to change an image distance of an optical element. However, the present invention breaks with convention by calculating the left and right image display positions for depth of field adjustment without changing the structure of the optical device. The present invention is novel and practical compared to changing an optical focal length.
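An illustrative sketch of the two calculations the abstract describes: (1) monocular distance from the camera's imaging scale (a pinhole-model relation), and (2) a left/right on-screen offset chosen so the fused binocular image appears at that distance, without moving any optics. All constants (object size, focal length, eye baseline) are assumptions.

```python
def monocular_distance(real_size, pixel_size, focal_length_px):
    """Pinhole imaging scale: distance = f * real_size / image_size,
    where image_size is the object's apparent size in pixels."""
    return focal_length_px * real_size / pixel_size

def left_right_offset(baseline, focal_length_px, distance):
    """Screen disparity that places the fused left/right image at
    `distance` (same triangle as stereo disparity: d = f * b / Z)."""
    return focal_length_px * baseline / distance

# A 0.2 m object seen as 100 px through a 1000 px focal length lens
distance = monocular_distance(real_size=0.2, pixel_size=100,
                              focal_length_px=1000)
# Offset between left and right images for a 64 mm eye baseline
offset = left_right_offset(baseline=0.064, focal_length_px=1000,
                           distance=distance)
```

Shifting the rendered left and right images by this disparity changes the perceived depth, which is the abstract's point about adjusting depth of field without changing the optical element's image distance.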
DETERMINING THE POSITION OF AN OBJECT IN A SCENE
A method of determining the position of an object in a scene, comprising: receiving captured images of the scene, each image being captured from a different field of view of the scene. A portion of the scene with a volume comprises a detectable object, the volume is divided into volume portions, and each volume portion is within the captured field of view of at least two of the captured images, so that an image of each volume portion appears in those at least two captured images. For each volume portion, in each captured image within which an image of that volume portion appears, it is detected whether or not an image of one of the detectable objects in the scene is positioned within a distance of the position of the image of that volume portion. A correspondence between the images of the detectable objects detected in the at least two images is established, the correspondence indicating that those images correspond to a single detectable object in the scene, and the position in the scene of that volume portion is established as a position in the scene of the single detectable object.
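A hedged sketch of the correspondence step: a volume portion is accepted as the object's position only if a detection lies near that portion's projected image position in at least two camera views. Projections are precomputed scalars here for brevity; a real system would project 3D volume portions through each camera model.

```python
def detected_near(projection, detections, max_dist):
    """Is any detection within max_dist of the projected position?"""
    return any(abs(projection - d) <= max_dist for d in detections)

def locate_object(volume_portions, detections_per_camera, max_dist=2.0):
    """Return volume portions with a detection nearby in >= 2 views.

    volume_portions: {portion_id: {camera_id: projected_x}}
    detections_per_camera: {camera_id: [detected_x, ...]}
    """
    positions = []
    for pid, projections in volume_portions.items():
        hits = sum(
            detected_near(x, detections_per_camera[cam], max_dist)
            for cam, x in projections.items())
        if hits >= 2:               # correspondence across two views
            positions.append(pid)
    return positions

portions = {"v1": {"camA": 100.0, "camB": 205.0},
            "v2": {"camA": 300.0, "camB": 410.0}}
detections = {"camA": [101.0], "camB": [204.0]}
found = locate_object(portions, detections)
```

Requiring agreement in at least two views is what lets the method resolve a single 3D object from independent 2D detections.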
Multifunctional Sky Camera System for Total Sky Imaging and Spectral Radiance Measurement
A multifunctional sky camera system and techniques for the use thereof for total sky imaging and spectral irradiance/radiance measurement are provided. In one aspect, a sky camera system is provided. The sky camera system includes an objective lens having a field of view of greater than about 170 degrees; a spatial light modulator at an image plane of the objective lens, wherein the spatial light modulator is configured to attenuate light from objects in images captured by the objective lens; a semiconductor image sensor; and one or more relay lenses configured to project the images from the spatial light modulator to the semiconductor image sensor. Techniques for using one or more of the sky camera systems for optical flow based cloud tracking and three-dimensional cloud analysis are also provided.
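A minimal sketch of optical-flow-style cloud tracking between two sky frames: the integer pixel displacement of a cloud patch is estimated by maximizing cross-correlation over candidate shifts. Real systems use dense optical flow; this block-matching toy only illustrates the idea, and all values are synthetic.

```python
import numpy as np

def estimate_shift(prev, curr, max_shift=3):
    """Return the (dy, dx) displacement that best aligns curr with prev,
    scored by cross-correlation over a small search window."""
    best, best_score = (0, 0), -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(curr, (-dy, -dx), axis=(0, 1))
            score = np.sum(prev * shifted)
            if score > best_score:
                best, best_score = (dy, dx), score
    return best

prev = np.zeros((16, 16))
prev[4:8, 4:8] = 1.0                        # "cloud" patch in frame 1
curr = np.roll(prev, (2, 1), axis=(0, 1))   # cloud moved down 2, right 1
shift = estimate_shift(prev, curr)
```

Tracking such displacements frame to frame gives cloud motion vectors, and combining them across two camera positions supports the three-dimensional cloud analysis the abstract mentions.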