H04N13/332

DISPLAY HEADSET
20230164303 · 2023-05-25 ·

A headset can include a head mount; a support coupled to the head mount; and a multiply curved display coupled to the support.

TECHNIQUES FOR CAPTURING AND RENDERING VIDEOS WITH SIMULATED REALITY SYSTEMS AND FOR CONNECTING SERVICES WITH SERVICE PROVIDERS
20230111408 · 2023-04-13 ·

The disclosed techniques involve simulated reality systems, which can include a head-mounted display (HMD) device that can remotely control a three-dimensional (3D) stereoscopic camera rig based on position, motion, and/or orientation data of the HMD device. The system may include multiple video cameras arranged side by side on a rig to capture video feeds of a real-world environment that can be stitched together in real time into a single stereoscopic 3D, 180-degree video rendered on an HMD as a panoramic video. An example use case pairs automotive body shops with insurance claims adjusters, allowing them to perform insurance claim adjustments remotely via live peer-to-peer video. Further disclosed is a process for creating an algorithm that matches vehicle damage to insurance claim adjusters who have experience with particular vehicle makes and models.
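The remote-control loop described above can be sketched as a mapping from HMD pose to rig motion commands. This is a minimal illustration only; the function name, pose parameters, and mechanical limits are assumptions, not taken from the patent.

```python
# Toy sketch: map HMD yaw/pitch (degrees) to pan/tilt commands for a
# remote stereoscopic camera rig, clamped to assumed mechanical limits.
def hmd_pose_to_rig_command(yaw_deg, pitch_deg,
                            pan_limit=90.0, tilt_limit=45.0):
    def clamp(value, limit):
        return max(-limit, min(limit, value))
    return {"pan": clamp(yaw_deg, pan_limit),
            "tilt": clamp(pitch_deg, tilt_limit)}
```

In a real system this command would be sent over the network each time the HMD reports a new pose, so the rig tracks the wearer's head motion.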

Method for a telepresence system
20230115563 · 2023-04-13 ·

There is provided a method comprising: receiving, at a local site, one or more perspective video-plus-depth streams from one or more remote sites, the video-plus-depth streams comprising video data and corresponding depth data from a viewpoint of a user at the local site; decoding the one or more perspective video-plus-depth streams; receiving a unified virtual geometry determining at least positions of participants at the local site and the one or more remote sites; forming a combined panorama based on the decoded one or more perspective video-plus-depth streams and the unified virtual geometry; and forming a plurality of focal planes based on the combined panorama and the depth data.
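The claimed pipeline (decode per-site streams, order them by the unified virtual geometry, form a combined panorama, then slice focal planes by depth) can be sketched as follows. All names and data shapes here are illustrative assumptions; real implementations would warp frames by geometry rather than concatenate them.

```python
from dataclasses import dataclass

# Hypothetical decoded stream from one remote site: toy grayscale video
# rows plus per-pixel depth, with the participant's slot in the unified
# virtual geometry.
@dataclass
class PerspectiveStream:
    video: list[list[int]]    # rows of pixel intensities
    depth: list[list[float]]  # per-pixel depth in meters
    position: int             # slot in the unified virtual geometry

def form_combined_panorama(streams):
    """Concatenate decoded frames side by side, ordered by the unified
    virtual geometry's participant positions (toy stand-in for warping)."""
    ordered = sorted(streams, key=lambda s: s.position)
    rows = len(ordered[0].video)
    video = [sum((s.video[r] for s in ordered), []) for r in range(rows)]
    depth = [sum((s.depth[r] for s in ordered), []) for r in range(rows)]
    return video, depth

def form_focal_planes(video, depth, boundaries):
    """Split the panorama into focal planes: each plane keeps only the
    pixels whose depth falls inside that plane's [near, far) slab."""
    planes = []
    for near, far in boundaries:
        plane = [[px if near <= d < far else 0
                  for px, d in zip(vrow, drow)]
                 for vrow, drow in zip(video, depth)]
        planes.append(plane)
    return planes
```

The focal planes produced this way could then be shown on a multifocal display, with each slab rendered at its corresponding accommodation distance.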

Event-based trigger interval for signaling of RTCP viewport for immersive teleconferencing and telepresence for remote terminals
11470300 · 2022-10-11 ·

There is included a method and apparatus comprising computer code configured to cause a processor or processors to perform controlling a delivery of a video conference call to a viewport, setting an event-based threshold with respect to the video conference call, determining whether the event-based threshold has been triggered based on an event and whether an amount of time having elapsed from another event is less than a predetermined amount of time, and further controlling the delivery of the video conference call to the viewport based on determining whether the event-based threshold has been triggered and whether the amount of time having elapsed from the other event is less than the predetermined amount of time.
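The gating logic described in the abstract — signal only when the event-based threshold is triggered and the elapsed time since the previous signalled event is not below a predetermined interval — can be sketched as a small state machine. The class and parameter names are illustrative assumptions, not terms from the patent.

```python
class EventTriggerInterval:
    """Sketch of an event-based trigger gated by a minimum interval:
    a viewport update is signalled only if the event magnitude meets the
    threshold AND at least `min_interval` seconds have elapsed since the
    previously signalled event."""

    def __init__(self, threshold, min_interval):
        self.threshold = threshold
        self.min_interval = min_interval
        self.last_signalled = None  # timestamp of the previous signalled event

    def should_signal(self, magnitude, now):
        if magnitude < self.threshold:
            return False  # event-based threshold not triggered
        if (self.last_signalled is not None
                and now - self.last_signalled < self.min_interval):
            return False  # too little time elapsed since the other event
        self.last_signalled = now
        return True
```

Rate-limiting viewport feedback this way avoids flooding the sender with RTCP reports when the viewer's head moves continuously.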

SYSTEM AND METHOD FOR PRESENTING VIRTUAL REALITY CONTENT TO A USER BASED ON BODY POSTURE

A system and/or method that uses a body posture of a user to determine and modulate a content mode of a virtual reality system. The content mode may define the manner in which virtual reality content is presented to the user and/or the manner in which the user interacts with the virtual reality content. The user's body posture and/or a change in body posture may cause the content mode and/or the virtual reality content to change accordingly. In some implementations, primary content may be presented to the user according to a first content mode in response to the user sitting. Secondary virtual reality content may be presented to the user according to a second content mode in response to the user standing. As such, a user may initiate a change in the virtual reality content and/or the content mode by standing from a sitting posture and/or sitting from a standing posture.
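The posture-to-mode modulation described above reduces to a small state machine: detect a posture change, then switch to that posture's content mode. The enums and class below are illustrative assumptions about one possible mapping, not the patent's actual implementation.

```python
from enum import Enum

class Posture(Enum):
    SITTING = "sitting"
    STANDING = "standing"

class ContentMode(Enum):
    PRIMARY = "primary"      # presented while the user sits
    SECONDARY = "secondary"  # presented while the user stands

class PostureModulatedSession:
    """Track the user's posture and switch content modes on posture changes."""

    MODE_FOR_POSTURE = {
        Posture.SITTING: ContentMode.PRIMARY,
        Posture.STANDING: ContentMode.SECONDARY,
    }

    def __init__(self, initial_posture=Posture.SITTING):
        self.posture = initial_posture
        self.mode = self.MODE_FOR_POSTURE[initial_posture]

    def update_posture(self, new_posture):
        """Return True if the posture change caused a content-mode change."""
        if new_posture == self.posture:
            return False
        self.posture = new_posture
        self.mode = self.MODE_FOR_POSTURE[new_posture]
        return True
```

In practice the posture signal would come from the headset's tracking data (e.g. head height over time) rather than being supplied directly.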

SYSTEM AND METHOD FOR LOCATION DETERMINATION USING A MIXED REALITY DEVICE AND MULTIPLE IMAGING CAMERAS

A system and method for determining a location for a surgical jig in a surgical procedure includes providing a mixed reality headset, a 3D spatial mapping camera, an infrared or stereotactic camera, and a computer system configured to transfer data to and from the mixed reality headset and the 3D spatial mapping camera. The system and method also include attaching a jig to a bone, mapping the bone and jig using the 3D spatial mapping camera, and then identifying a location for the surgical procedure using the computer system. Then the system and method use the mixed reality headset to provide a visualization of the location for the surgical procedure.
