Patent classifications
H04N13/243
Efficient Delivery of Multi-Camera Interactive Content
Techniques are disclosed relating to encoding recorded content for distribution to other computing devices. In various embodiments, a first computing device records content of a physical environment in which the first computing device is located, the content being deliverable to a second computing device configured to present a corresponding environment based on the recorded content and content recorded by one or more additional computing devices. The first computing device determines a pose of the first computing device within the physical environment and encodes the pose in a manifest usable to stream the content recorded by the first computing device to the second computing device. The encoded pose is usable by the second computing device to determine whether to stream the content recorded by the first computing device.
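The core idea of the abstract — a manifest that encodes each recording device's pose so a receiving device can decide which streams to fetch — can be sketched as follows. This is a minimal illustration, not the patented method: the manifest structure, the `select_streams` function, and the distance-based selection rule are all hypothetical stand-ins for whatever criterion an actual implementation would use.

```python
import math

def select_streams(manifest, viewer_pos, max_distance=10.0):
    """Pick which recorded streams to fetch based on encoded camera poses.

    `manifest` is a hypothetical structure: a list of dicts, each with a
    stream `url` and the recording device's `pose` as (x, y, yaw) in the
    shared physical environment. Here the receiving device streams only
    cameras within `max_distance` of the viewer's position.
    """
    selected = []
    for entry in manifest:
        cam_x, cam_y, _yaw = entry["pose"]
        dist = math.hypot(cam_x - viewer_pos[0], cam_y - viewer_pos[1])
        if dist <= max_distance:
            selected.append(entry["url"])
    return selected

manifest = [
    {"url": "cam_a.m3u8", "pose": (1.0, 2.0, 0.0)},   # near the viewer
    {"url": "cam_b.m3u8", "pose": (50.0, 0.0, 3.1)},  # far across the venue
]
nearby = select_streams(manifest, viewer_pos=(0.0, 0.0))
```

The point is that the selection happens client-side from pose metadata alone, before any video bytes are transferred.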
360-DEGREE VIRTUAL-REALITY SYSTEM FOR DYNAMIC EVENTS
A dynamic event capturing and rendering system collects and aggregates video, audio, positional, and motion data to create a comprehensive, user-perspective 360-degree rendering of a field of play. An object associated with a user collects data that is stitched together and synchronized to provide post-event analysis and training. Through an interface, actions that occurred during an event can be recreated, providing the viewer with information on what the user associated with the object was experiencing, where the user was looking, and how certain actions may have changed the outcome. Using the collected data, a virtual reality environment is created that can be manipulated to present alternative courses of action and outcomes.
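Before any stitching or replay, the heterogeneous sensor streams described above must be aligned on a common timeline. A minimal sketch of that synchronization step, assuming each stream is already sorted by timestamp (stream names and sample formats here are illustrative, not from the patent):

```python
import heapq

def synchronize(streams):
    """Merge timestamped samples from several sensors (video frames,
    audio chunks, positional and motion readings) into one time-ordered
    event record. Each stream is an iterable of (timestamp, data)
    tuples already sorted by timestamp; heapq.merge interleaves them
    without loading everything into memory at once."""
    return list(heapq.merge(*streams, key=lambda sample: sample[0]))

video = [(0.000, "frame0"), (0.033, "frame1")]
imu = [(0.010, "accel0"), (0.020, "accel1")]
timeline = synchronize([video, imu])
```

A unified timeline like this is what lets a replay interface answer "where was the user looking at time t" across all modalities at once.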
AUTONOMOUS WALKING VEHICLE
In one aspect, a vehicle is provided that includes i) a plurality of wheel-leg components and ii) a surround view imaging system for generating a surround view image of the vehicle. The plurality of wheel-leg components can operate to provide locomotion to the vehicle. The surround view image comprises a 360-degree, three-dimensional view of an environment surrounding the vehicle. The vehicle is configured to operate autonomously, using the surround view image to control the locomotion of the plurality of wheel-leg components.
SYSTEM AND METHOD FOR GENERATING COMBINED EMBEDDED MULTI-VIEW INTERACTIVE DIGITAL MEDIA REPRESENTATIONS
Various embodiments describe systems and processes for capturing and generating multi-view interactive digital media representations (MIDMRs). In one aspect, a method for automatically generating a MIDMR comprises obtaining a first MIDMR and a second MIDMR. The first MIDMR includes a convex or concave motion capture using a recording device and is a general object MIDMR. The second MIDMR is a specific feature MIDMR. The first and second MIDMRs may be obtained using different capture motions. A third MIDMR is generated from the first and second MIDMRs, and is a combined embedded MIDMR. The combined embedded MIDMR may comprise the second MIDMR being embedded in the first MIDMR, forming an embedded second MIDMR. The third MIDMR may include a general view in which the first MIDMR is displayed for interactive viewing by a user on a user device. The embedded second MIDMR may not be viewable in the general view.
ELECTRONIC DEVICE AND OPERATING METHOD OF ELECTRONIC DEVICE
Disclosed is an operating method of an electronic device which includes a processor performing machine learning of a monocular depth estimation module. The operating method includes: obtaining, by the processor, a first image and a second image respectively photographed by a first camera and a second camera at different locations; inferring, by the processor, a plurality of multi-cyclic disparities by applying weights of the monocular depth estimation module to the first image a plurality of times, and calculating a plurality of multi-cyclic loss functions based on the first image, the second image, and the plurality of multi-cyclic disparities; and updating, by the processor, the weights of the monocular depth estimation module through machine learning, based on the plurality of multi-cyclic loss functions.
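The training loop in the abstract — running the depth estimator on the first image several times, collecting one disparity and one loss per cycle — can be sketched roughly as below. This is a toy interpretation under stated assumptions: the `predict` callable, the integer-shift warp, and the L1 photometric loss are all stand-ins; the patent's actual "multi-cyclic" formulation is not specified here.

```python
import numpy as np

def warp_with_disparity(right, disparity):
    """Warp the right image toward the left view by shifting each pixel
    horizontally by its (integer) disparity. A toy stand-in for the
    differentiable warping used in self-supervised stereo training."""
    h, w = right.shape
    warped = np.zeros_like(right)
    for y in range(h):
        for x in range(w):
            src = x + int(disparity[y, x])
            if 0 <= src < w:
                warped[y, x] = right[y, src]
    return warped

def multi_cyclic_losses(left, right, predict, cycles=3):
    """Apply the depth estimator `predict` (a hypothetical network call)
    to the left image several times, feeding each cycle's disparity back
    in, and collect one photometric reconstruction loss per cycle."""
    losses = []
    disparity = np.zeros_like(left, dtype=float)
    for _ in range(cycles):
        disparity = predict(left, disparity)
        reconstruction = warp_with_disparity(right, disparity)
        losses.append(float(np.mean(np.abs(left - reconstruction))))
    return losses
```

In the patented method these per-cycle losses would then drive the weight update of the monocular depth estimation module.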
Stereo correspondence search
Methods, systems, devices and computer software/program code products enable efficiently finding stereo correspondence between a feature or set of features in a first image or signal, and a search domain in a second image or signal.
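A baseline version of the search this abstract refers to is block matching with a sum-of-absolute-differences (SAD) cost: slide a patch from the first image along a search domain in the second and keep the offset with the lowest cost. This sketch shows only the standard textbook technique, not the patent's efficiency improvements over it.

```python
import numpy as np

def best_match(left_patch, right_row, search_range):
    """Find the horizontal offset in `right_row` whose window best
    matches `left_patch`, using sum of absolute differences (SAD).
    Returns the offset with the minimum cost within `search_range`."""
    w = left_patch.shape[0]
    best_offset, best_cost = 0, float("inf")
    for offset in range(search_range):
        if offset + w > right_row.shape[0]:
            break  # window would run off the end of the search domain
        cost = np.sum(np.abs(left_patch - right_row[offset:offset + w]))
        if cost < best_cost:
            best_offset, best_cost = offset, cost
    return best_offset

patch = np.array([5.0, 6.0, 7.0])
row = np.array([0.0, 0.0, 5.0, 6.0, 7.0, 0.0])
offset = best_match(patch, row, search_range=4)
```

The brute-force scan above is O(search range × patch size) per feature; the efficiency claims in such patents typically target exactly this cost.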