Patent classifications
G06T2219/024
Shared augmented reality sessions for rendering video effects
Systems and methods for generating a video including a plurality of graphic objects provided in a shared environment are described. The method includes acquiring, at a first computing device, a shared session identifier from a shared session manager, the shared session identifier being associated with a first user identifier; receiving a selection of a second user identifier; causing the shared session identifier and the first user identifier to be provided to a second computing device associated with the second user identifier; receiving, as input, a first graphic object for rendering to a display associated with the first computing device, the first graphic object being associated with the first user identifier; receiving, from a data synchronizer, a second graphic object associated with the second user identifier and the shared session identifier for rendering to the display associated with the first computing device; and generating a video including the graphic objects.
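The session flow above can be sketched as two small services: a session manager that issues identifiers tied to the creating user, and a data synchronizer that fans each user's graphic objects out to every session member. This is a minimal illustration, not the patent's implementation; all class and method names are hypothetical.

```python
import uuid


class SharedSessionManager:
    """Issues shared-session identifiers tied to the user who created them."""

    def __init__(self):
        self.sessions = {}  # session_id -> {"owner": user_id, "members": set()}

    def create_session(self, user_id):
        session_id = str(uuid.uuid4())
        self.sessions[session_id] = {"owner": user_id, "members": {user_id}}
        return session_id

    def join(self, session_id, user_id):
        # Called when the invited user's device receives the session identifier.
        self.sessions[session_id]["members"].add(user_id)


class DataSynchronizer:
    """Collects graphic objects per session so each member's device can render
    both its own objects and those contributed by the other members."""

    def __init__(self):
        self.objects = {}  # session_id -> list of (user_id, graphic_object)

    def publish(self, session_id, user_id, graphic_object):
        self.objects.setdefault(session_id, []).append((user_id, graphic_object))

    def objects_for(self, session_id):
        return list(self.objects.get(session_id, []))


manager = SharedSessionManager()
sync = DataSynchronizer()

sid = manager.create_session("alice")   # first user acquires the identifier
manager.join(sid, "bob")                # second user joins via the identifier
sync.publish(sid, "alice", "sticker_1") # first graphic object
sync.publish(sid, "bob", "sticker_2")   # second graphic object
shared = sync.objects_for(sid)          # both devices render both objects
```

A rendering loop on either device would then draw every object in `shared`, regardless of which user contributed it.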
Method And Apparatus For Real-time Data Communication in Full-Presence Immersive Platforms
Real-time two-way communication between a user immersed in a virtual reality (VR) environment and another party who is not in the VR environment. The sequence of video frames presented to the VR user on, e.g., a head-mounted display (the VR user's "field of view" in the VR environment) is shown to a guest user on the guest's computing device. Simultaneously, video from a camera in the guest's computing device is shown to the VR user in a window within the VR environment. Audio is also exchanged in real time between the VR user and the guest user by means of a microphone and loudspeaker in both the VR system and the guest's computing device. The VR system can also support both rotational and translational user movement, correctly changing the VR user's apparent distance from objects displayed in the VR environment.
METHOD AND SYSTEM FOR REMOTE COLLABORATION
A method for remote collaboration provides an augmented reality (AR)-based remote collaboration among a robot located at a worksite, a field worker terminal, and a remote administrator terminal located outside the worksite. The method includes acquiring a captured image comprising either a field image captured by the robot at the worksite or a user image captured by the field worker terminal; displaying the captured image of the worksite; generating virtual content based on input from a remote administrator and a field worker with respect to the displayed captured image; and displaying an AR image in which the virtual content is augmented onto the displayed captured image.
Mobile Viewer Object Statusing
An example computing platform is configured to (i) maintain a three-dimensional, federated model of a construction project, where the model includes respective objects created using at least two different authoring tools, (ii) receive, via a client device installed with a viewing tool for displaying the model, one or more user inputs that collectively (a) select a displayed representation of a given object within the model and (b) assign a value for a property of the given object, (iii) based on the one or more inputs, identify a GUID of the given object within a hierarchical data structure for the model and cause the model to be updated by associating the assigned value for the property with the GUID of the given object, and (iv) cause the client device to display, via the viewing tool, the updated model including an indication of the assigned value for the property of the given object.
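The property-assignment step above amounts to locating an object by its GUID inside a hierarchical model and attaching a value to it. The sketch below assumes a simple nested-dictionary representation of the federated model; the structure and function names are hypothetical, not the platform's actual data format.

```python
def find_by_guid(node, guid):
    """Depth-first search of a hierarchical model for the object with a GUID."""
    if node.get("guid") == guid:
        return node
    for child in node.get("children", []):
        found = find_by_guid(child, guid)
        if found is not None:
            return found
    return None


def assign_property(model, guid, prop, value):
    """Associate an assigned property value with the GUID of a given object."""
    obj = find_by_guid(model, guid)
    if obj is None:
        raise KeyError(guid)
    obj.setdefault("properties", {})[prop] = value
    return obj


# A toy federated model: objects from different authoring tools under one root.
model = {
    "guid": "root",
    "children": [
        {"guid": "duct-42", "children": []},
        {"guid": "beam-7", "children": []},
    ],
}

# A user input selects object "duct-42" and assigns an installation status.
assign_property(model, "duct-42", "install_status", "complete")
```

A viewing tool would then re-render the model and show the new value alongside the selected object.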
Systems And Methods For Providing Real-Time Composite Video From Multiple Source Devices Featuring Augmented Reality Elements
Systems and methods for superimposing the human elements of video generated by computing devices, wherein a first user device and a second user device capture and transmit video to a central server that analyzes the video to identify and extract human elements, superimposes those human elements upon one another, adds at least one augmented reality element, and then transmits the newly created superimposed video back to at least one of the user devices.
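The extract-and-superimpose step can be illustrated with toy frames represented as 2D grids and a segmentation mask marking which pixels belong to a person. This is a simplified sketch of mask-based compositing under those assumptions, not the patent's video pipeline.

```python
def extract_elements(frame, mask):
    """Keep only the pixels the segmentation mask marks as human; None elsewhere."""
    return [[p if m else None for p, m in zip(row, mrow)]
            for row, mrow in zip(frame, mask)]


def superimpose(base, overlay):
    """Overlay every non-None pixel from an extracted layer onto a base frame."""
    return [[o if o is not None else b for b, o in zip(brow, orow)]
            for brow, orow in zip(base, overlay)]


# Two 2x2 "frames" from two user devices, with per-pixel human masks.
frame_a = [["a1", "a2"], ["a3", "a4"]]
mask_a = [[True, False], [False, True]]
frame_b = [["b1", "b2"], ["b3", "b4"]]
mask_b = [[False, True], [True, False]]

# An augmented reality element serving as the shared background.
ar_background = [["bg", "bg"], ["bg", "bg"]]

layer_a = extract_elements(frame_a, mask_a)
layer_b = extract_elements(frame_b, mask_b)
composite = superimpose(superimpose(ar_background, layer_a), layer_b)
```

In a real system the masks would come from a person-segmentation model and the composite would be re-encoded as video before being sent back to the devices.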
VISUALIZATION OF CAMERA LOCATION IN A REAL-TIME SYNCHRONIZED 3D MESH
Embodiments include systems and methods for visualizing the position of a capturing device within a 3D mesh, generated from a video stream from the capturing device. A capturing device may provide a video stream along with point cloud data and camera pose data. This video stream, point cloud data, and camera pose data are then used to progressively generate a 3D mesh. The camera pose data and point cloud data can further be used, in conjunction with a SLAM algorithm, to indicate the position and orientation of the capturing device within the generated 3D mesh.
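Placing a capturing-device marker in the mesh reduces to reading a position and view direction out of the camera pose. The sketch below assumes a row-major 4x4 camera-to-world pose matrix and the common convention that the camera looks down its local -Z axis; both are assumptions for illustration, not details from the embodiment.

```python
def camera_marker(pose):
    """Derive a marker position and view direction from a 4x4 camera-pose
    matrix (row-major, camera-to-world) for display within the 3D mesh."""
    # Translation column gives the camera's position in the mesh's frame.
    position = (pose[0][3], pose[1][3], pose[2][3])
    # Assumed convention: the camera looks along its local -Z axis, so the
    # view direction is the negated third column of the rotation block.
    forward = (-pose[0][2], -pose[1][2], -pose[2][2])
    return position, forward


# Identity rotation with the camera at (1, 2, 3) in the mesh's frame.
pose = [
    [1.0, 0.0, 0.0, 1.0],
    [0.0, 1.0, 0.0, 2.0],
    [0.0, 0.0, 1.0, 3.0],
    [0.0, 0.0, 0.0, 1.0],
]
position, forward = camera_marker(pose)
```

As the SLAM algorithm refines the pose for each incoming frame, the marker is simply recomputed and redrawn inside the progressively generated mesh.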
System And Method For Capturing And Sharing A Location Based Experience
A system and method for capturing a location-based experience at an event, including a plurality of mobile devices with cameras employed near a point of interest to capture random, crowdsourced images and associated metadata near the point of interest. In a preferred form, the images include depth-camera information from prepositioned devices around the point of interest during the event. A network communicates the images, depth information, and metadata to build a 3D model of the region, preferably with the locations of contributors known. Users connect to this experience platform to view the 3D model from a user-selected location and orientation and to participate in experiences with, for example, a social network.
3D OBJECT ANNOTATION
Disclosed herein are systems and methods for presenting and annotating virtual content. According to an example method, a virtual object is presented to a first user at a first position via a transmissive display of a wearable device. A first input is received from the first user. In response to receiving the first input, a virtual annotation is presented at a first displacement from the first position. A first data is transmitted to a second user, the first data associated with the virtual annotation and the first displacement. A second input is received from the second user. In response to receiving the second input, the virtual annotation is presented to the first user at a second displacement from the first position. Second data is transmitted to a remote server, the second data associated with the virtual object, the virtual annotation, the second displacement, and the first position.
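Storing the annotation as a displacement from the object's position, rather than as an absolute coordinate, is what lets either user reposition it with a small update. A minimal sketch of that representation follows; the record layout and field names are hypothetical.

```python
def annotation_world_position(object_position, displacement):
    """An annotation is stored as a displacement from its object's position,
    so moving the annotation only updates the offset, not the object."""
    return tuple(p + d for p, d in zip(object_position, displacement))


object_pos = (0.0, 1.5, -2.0)          # first position of the virtual object
first_displacement = (0.1, 0.2, 0.0)   # where the first user's input placed it
second_displacement = (0.3, 0.0, 0.0)  # offset after the second user's input

# The data transmitted to the remote server pairs the object with the
# annotation's current displacement and the object's first position.
record = {
    "object_position": object_pos,
    "annotation_displacement": second_displacement,
    "annotation_world": annotation_world_position(object_pos, second_displacement),
}
```

Each wearable device can then resolve the annotation's world position locally from the shared object position and offset.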
System and Method for an Interactive Digitally Rendered Avatar of a Subject Person
A system and method for an interactive digitally rendered avatar of a subject person to participate in a web meeting are described. In one embodiment, the method includes receiving an invite to a web meeting on a video conferencing platform, wherein the invite identifies a subject person and the video conferencing platform. The method also includes generating an interactive avatar of the subject person based on a data collection associated with the subject person stored in a database. The method further includes instantiating a platform integrator associated with the video conferencing platform identified in the invite and joining, by the interactive avatar of the subject person, the web meeting on the video conferencing platform. The platform integrator transforms outputs and inputs between the video conferencing platform and an interactive digitally rendered avatar system so that the interactive avatar of the subject person participates in the web meeting.
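The platform integrator described above is essentially an adapter between a platform-specific event format and the avatar system's inputs and outputs. The sketch below assumes invented event shapes (`transcript` in, `chat_message` out) purely for illustration; neither the class names nor the payload fields come from the patent.

```python
class AvatarSystem:
    """Stand-in for the interactive avatar system: answers text prompts."""

    def respond(self, text):
        return f"[avatar] heard: {text}"


class PlatformIntegrator:
    """Transforms inputs from a conferencing platform into avatar prompts,
    and avatar outputs back into platform messages (hypothetical formats)."""

    def __init__(self, avatar):
        self.avatar = avatar

    def on_platform_event(self, event):
        # Inbound: unwrap the platform's message envelope.
        prompt = event["payload"]["transcript"]
        reply = self.avatar.respond(prompt)
        # Outbound: wrap the avatar's reply for the platform.
        return {"type": "chat_message", "payload": {"text": reply}}


integrator = PlatformIntegrator(AvatarSystem())
out = integrator.on_platform_event(
    {"type": "speech", "payload": {"transcript": "hello"}}
)
```

Supporting another conferencing platform would then mean instantiating a different integrator, leaving the avatar system itself unchanged.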
Systems and methods for creating and sharing virtual and augmented experiences
Techniques for creating compelling extended reality (XR) environments, including virtual reality (VR) and mixed reality (MR), and other computer-generated experiences, are provided. In some embodiments, a VR and MR system, including a computer hardware- and software-based control system, controls a specialized headset, hand controls, and a distributed array of sensors and actuators to produce a VR or MR environment with compelling VR and MR display and social interaction features. In some embodiments, the VR and MR system creates and provides escalating levels of data access, permissions and experiences for users, based on different, multi-phased ratings. In some embodiments, a first rating sets a level of access to gameplay leading to a second rating. In some such embodiments, one user's VR or MR experience related to another user is modified aesthetically, haptically or otherwise, depending on the levels granted by another user, and other attributes.