Patent classifications
H04N5/2224
Live Teleporting System and Apparatus
A method of producing a Pepper's Ghost includes projecting an image of a subject onto a reflective and transparent screen to create a virtual image of the subject alongside an object, the subject in the virtual image having a colour temperature. The object is illuminated with light having a colour and intensity that results in a colour temperature of the object at least approximately matching the colour temperature of the subject in the virtual image. The subject in the virtual image also has a luminance, and the object may be illuminated with light having a colour and intensity that results in a luminance of the object at least approximately matching the luminance of the subject in the virtual image.
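The matching step above hinges on estimating colour temperature from the measured light. A common approximation (McCamy, 1992) derives correlated colour temperature from CIE 1931 (x, y) chromaticity; the function names and the 200 K tolerance below are illustrative assumptions, not values from the patent.

```python
def mccamy_cct(x, y):
    """Approximate correlated colour temperature (K) from CIE 1931 (x, y)."""
    n = (x - 0.3320) / (0.1858 - y)
    return 449.0 * n**3 + 3525.0 * n**2 + 6823.3 * n + 5520.33

def cct_matches(xy_subject, xy_object, tol_kelvin=200.0):
    """True when the object's lighting at least approximately matches the subject's."""
    return abs(mccamy_cct(*xy_subject) - mccamy_cct(*xy_object)) <= tol_kelvin

# D65 white point (~6504 K) vs. a slightly warmer light source
print(cct_matches((0.3127, 0.3290), (0.3150, 0.3300)))  # prints True
```

McCamy's cubic is only valid near the Planckian locus (roughly 2000 K to 12500 K), which covers typical stage and display lighting.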
Volumetric representation of digital objects from depth renderings
An image processing system includes a computing platform having processing hardware, a display, and a system memory storing a software code. The processing hardware executes the software code to receive a digital object, surround the digital object with virtual cameras oriented toward the digital object, render, using each one of the virtual cameras, a depth map identifying a distance of that one of the virtual cameras from the digital object, and generate, using the depth map, a volumetric perspective of the digital object from a perspective of that one of the virtual cameras, resulting in multiple volumetric perspectives of the digital object. The processing hardware further executes the software code to merge the multiple volumetric perspectives of the digital object to form a volumetric representation of the digital object, and to convert the volumetric representation of the digital object to a renderable form.
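The merge of per-camera depth renderings into a single volumetric representation can be illustrated with a classic space-carving sketch. This is a minimal stand-in, not the patented pipeline: it assumes six orthographic "virtual cameras" on the axis-aligned faces of a voxel grid, and each depth map records the first occupied voxel along each ray.

```python
import numpy as np

def render_depth(vox, axis, from_front):
    """Depth map for one axis-aligned view: distance to the first occupied voxel."""
    v = vox if from_front else np.flip(vox, axis)
    hit = np.argmax(v, axis=axis)            # index of first True along each ray
    depth = hit.astype(float)
    depth[~v.any(axis=axis)] = v.shape[axis]  # ray misses: depth beyond the volume
    return depth

def carve(occupancy, depth, axis, from_front):
    """Clear voxels that lie between the camera and the observed depth."""
    n = occupancy.shape[axis]
    dist = np.arange(n) if from_front else (n - 1 - np.arange(n))
    shape = [1, 1, 1]
    shape[axis] = n
    mask = dist.reshape(shape) < np.expand_dims(depth, axis)
    occupancy[mask] = False
    return occupancy

# Toy digital object: a sphere inside a 32^3 grid.
n = 32
g = np.arange(n)
x, y, z = np.meshgrid(g, g, g, indexing="ij")
sphere = (x - 16) ** 2 + (y - 16) ** 2 + (z - 16) ** 2 <= 8 ** 2

# Merge the six volumetric perspectives by intersecting their carved free space.
est = np.ones((n, n, n), dtype=bool)
for axis in range(3):
    for from_front in (True, False):
        est = carve(est, render_depth(sphere, axis, from_front), axis, from_front)
# est now contains the true object (carving never removes observed surfaces).
```

The `est` grid plays the role of the merged volumetric representation; a real system would convert it to a renderable form such as a mesh (e.g. via marching cubes).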
SYSTEMS AND METHODS FOR CREATING A 2D FILM FROM IMMERSIVE CONTENT
Systems, methods, and non-transitory computer-readable media can obtain data associated with a computer-based experience. The computer-based experience can be based on interactive real-time technology. At least one virtual camera can be configured within the computer-based experience in a real-time engine. Data associated with an edit cut of the computer-based experience can be obtained based on content captured by the at least one virtual camera. A plurality of shots that correspond to two-dimensional content can be generated from the edit cut of the computer-based experience in the real-time engine. Data associated with a two-dimensional version of the computer-based experience can be generated with the real-time engine based on the plurality of shots. The two-dimensional version can be rendered based on the generated data.
SYSTEM AND METHOD FOR DETERMINING MEDIATED REALITY POSITIONING OFFSET FOR A VIRTUAL CAMERA POSE TO DISPLAY GEOSPATIAL OBJECT DATA
In various aspects, there is provided a system and method for determining a mediated reality positioning offset for a virtual camera pose to display geospatial object data. The method comprises: determining or receiving positioning parameters associated with a neutral position; subsequent to determining or receiving the positioning parameters associated with the neutral position, receiving positioning data representing a subsequent physical position; determining updated positioning parameters associated with the subsequent physical position; determining an updated offset, comprising determining a geometric difference between the positioning parameters associated with the neutral position and the updated positioning parameters associated with the subsequent physical position; and outputting the updated offset.
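The "geometric difference" step can be sketched concretely. This is a minimal illustration under assumptions not stated in the abstract: the positioning parameters are reduced to a 3D position plus a heading angle, and all names are hypothetical.

```python
import numpy as np

def positioning_offset(neutral, updated):
    """Geometric difference between neutral and updated positioning parameters.

    Returns a translation vector and a heading delta wrapped to (-180, 180]
    degrees, so a small rotation across 0/360 does not read as a full turn.
    """
    dt = np.asarray(updated["position"]) - np.asarray(neutral["position"])
    dh = (updated["heading_deg"] - neutral["heading_deg"] + 180.0) % 360.0 - 180.0
    return dt, dh

neutral = {"position": [0.0, 0.0, 0.0], "heading_deg": 350.0}
updated = {"position": [1.5, 0.0, -0.5], "heading_deg": 10.0}
dt, dh = positioning_offset(neutral, updated)
# dt = [1.5, 0.0, -0.5]; dh wraps 350° -> 10° to +20°
```

The wrapped heading is the detail most often gotten wrong: a naive subtraction would report -340 degrees here, which would swing the virtual camera the long way around.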
Systems and methods for generating virtual item displays
Systems, methods, and devices of the various embodiments enable virtual displays of an item, such as a vehicle, to be generated. In an embodiment, a plurality of images of an item may be captured and annotation may be provided to one or more of the images. In an embodiment, the plurality of images may be displayed, and the transition between each of the plurality of images may be an animated process. In an embodiment, an item imaging system may comprise a structure including one or more cameras and one or more lights, and the item imaging system may be configured to automate at least a portion of the process for capturing the plurality of images of an item.
Apparatus and system for virtual camera configuration and selection
A system and method for virtual camera configuration and selection. For example, one embodiment of a system comprises: a decode subsystem comprising circuitry to concurrently decode a plurality of video streams captured by cameras at an event to generate decoded video streams from a perspective of corresponding virtual cameras (VCAMs); video quality evaluation logic to apply at least one video quality metric to determine a quality value for the decoded video streams or a subset thereof, and to rank the decoded video streams based, at least in part, on the quality values associated with the decoded video streams; preview logic to provide the decoded video streams or modified versions thereof to one or more computing devices accessible to one or more video production team members and to further provide the quality values and/or the rank generated by the video quality evaluation logic; stream selection hardware logic to select a subset of the plurality of decoded video streams based on input from the one or more video production team members; and transcoder hardware logic to transcode the subset of the plurality of decoded video streams for live transmission over a public or private network.
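The quality-ranking stage can be sketched in a few lines. The abstract does not specify the video quality metric, so this illustration substitutes a simple no-reference sharpness score (variance of a discrete Laplacian); the stream names and data are made up.

```python
import numpy as np

def sharpness(frame):
    """Variance of a discrete Laplacian over a 2D frame: higher means sharper."""
    lap = (-4 * frame[1:-1, 1:-1]
           + frame[:-2, 1:-1] + frame[2:, 1:-1]
           + frame[1:-1, :-2] + frame[1:-1, 2:])
    return float(lap.var())

def rank_streams(decoded):
    """Score each decoded stream and return (name, score) pairs, best first."""
    scores = {name: float(np.mean([sharpness(f) for f in frames]))
              for name, frames in decoded.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Synthetic streams: one with fine detail, one blocky (as if heavily compressed).
rng = np.random.default_rng(0)
sharp = rng.random((4, 32, 32))
blurry = np.repeat(np.repeat(rng.random((4, 8, 8)), 4, axis=1), 4, axis=2)
ranking = rank_streams({"vcam_sharp": sharp, "vcam_blurry": blurry})
```

In the described system these scores would feed the preview logic alongside the streams themselves, letting production team members confirm or override the automatic ranking.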
LED Panel Having Integrated Infrared Retroreflectors For Video Volume Production Environments
An LED panel, a plurality of which are employed to construct an LED volume for filming simulated virtual environments, wherein the LED panel of the present invention provides tracking of objects in its vicinity, such as but not limited to a camera, and wherein the tracking elements do not require post-production removal. The LED panel includes a housing having a perimeter frame that has a transparent LED display mounted thereto. On the rear surface of the LED display, or proximate thereto, are a plurality of retroreflectors. The retroreflectors function to provide inside-out tracking of a camera disposed within the LED volume. The present invention further includes a plurality of lidar sensors and optical sensors mounted to the perimeter frame. The lidar sensors and optical sensors provide data for outside-in tracking of a camera within the LED volume. The retroreflectors can be provided in multiple alternate embodiments.
Systems and methods for tracking objects in a field of view
Systems and methods for tracking objects in a field of view are disclosed. In one embodiment, a method may include capturing, via a camera, a real-world object in the field of view; generating a first object data associating the real-world object with a first position of the real-world object in a real-world environment at a first time; generating a virtual object representative of the real-world object depicting the real-world object in the first position at the first time; generating a second object data associating the real-world object with a second position of the real-world object in the real-world environment at a second time; determining a displacement value of the real-world object between the first position and the second position; and modifying the virtual object to include an indication that the real-world object has been displaced when the displacement value is greater than a threshold value.
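The displacement check at the end of that method is straightforward to sketch. The threshold value, the dictionary representation of the virtual object, and the `displaced` flag are all illustrative assumptions, not details from the abstract.

```python
import math

def update_virtual_object(obj, new_pos, threshold=0.1):
    """Flag the virtual object as displaced when it moved beyond the threshold.

    obj is a toy virtual-object record holding the last known position;
    new_pos is the position from the newer object data.
    """
    d = math.dist(obj["position"], new_pos)  # Euclidean displacement value
    obj["displaced"] = d > threshold
    obj["position"] = tuple(new_pos)
    return obj

obj = {"position": (0.0, 0.0, 0.0), "displaced": False}
obj = update_virtual_object(obj, (0.05, 0.0, 0.0))  # small jitter: not flagged
obj = update_virtual_object(obj, (1.0, 0.0, 0.0))   # large move: flagged
```

Thresholding like this suppresses sensor jitter so the indication only appears for genuine movement of the real-world object.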
Indoor Producing of High Resolution Images of the Commonly Viewed Exterior Surfaces of Vehicles, Each with the Same Background View
Disclosed is an apparatus and a process for producing, and viewing through the internet, high-resolution images of the commonly viewed exterior surfaces of a vehicle, while maintaining the same background view for multiple images of the vehicle. The background and the imaging device are revolved around a vehicle which is maintained in a fixed position between the background and the imaging device. There can be two or more opposed imaging devices and two or more opposed displays. The vehicle does not need to be rotated or moved during the imaging.
Creating and distributing interactive addressable virtual content
Systems and methods create and distribute addressable virtual content with interactivity. The virtual content may depict a live event and may be customized for each individual user based on dynamic characteristics (e.g., habits, preferences, etc.) of the user that are captured during user interaction with the virtual content. The virtual content is generated with low latency between the actual event and the live content that allows the user to interactively participate in actions related to the live event. The virtual content may represent a studio with multiple display screens that each show different live content (of the same or different live events), and may also include graphic displays that include related data such as statistics corresponding to the live event, athletes at the event, and so on. The content of the display screens and graphics may be automatically selected based on the dynamic characteristics of the user.