SYSTEMS AND METHODS FOR ANIMATING A SIMULATED FULL LIMB FOR AN AMPUTEE IN VIRTUAL REALITY
20230023609 · 2023-01-26 ·

A system and method for generating simulated full limb animations in real time based on sensor and tracking data. A computing environment for receiving and processing tracking data from one or more sensors, for mapping tracking data onto a 3D model having a skeletal hierarchy and a surface topology, and for rendering an avatar for display in virtual reality. A method for animating a full-bodied avatar from tracking data collected from an amputee. A means for determining, predicting, or modulating movements an amputee intends to make with his or her simulated full limb. A modified inverse kinematics method for arbitrarily and artificially overriding a position and orientation of a tracked end effector. Synchronous virtual reality therapeutic activities with predefined movement patterns that may modulate animations.
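The "modified inverse kinematics" idea — solving joint angles after arbitrarily overriding the tracked end-effector position — can be sketched for a planar two-link limb. This is a minimal illustration only: the two-link simplification, the scaling-based override, and all function names are my assumptions, not the patent's method.

```python
import math

def two_link_ik(target_x, target_y, l1, l2):
    """Analytic inverse kinematics for a planar two-link chain.

    Returns (shoulder, elbow) joint angles in radians that place the
    end effector at the target, clamping unreachable targets to the
    workspace boundary.
    """
    dist = min(math.hypot(target_x, target_y), l1 + l2)  # clamp to reach
    cos_elbow = (dist ** 2 - l1 ** 2 - l2 ** 2) / (2 * l1 * l2)
    elbow = math.acos(max(-1.0, min(1.0, cos_elbow)))
    shoulder = (math.atan2(target_y, target_x)
                - math.atan2(l2 * math.sin(elbow), l1 + l2 * math.cos(elbow)))
    return shoulder, elbow

def override_end_effector(tracked, residual_len, full_len):
    """Artificially extend a tracked residual-limb position so the IK
    solver animates a simulated full limb (hypothetical helper)."""
    x, y = tracked
    scale = full_len / residual_len
    return (x * scale, y * scale)
```

Here the tracked position from the residual limb's sensor is rescaled to where the intact limb's end effector would be, and the solver is then run against that artificial target instead of the raw tracking data.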

REALITY VS VIRTUAL REALITY RACING

A method for displaying a virtual vehicle includes: calculating a virtual world comprising the virtual vehicle and a representation of a physical object at a virtual position; calculating a virtual position of a point of view within the virtual world based on a position of the point of view at a racecourse; and calculating the portion of the virtual vehicle that is visible from the virtual position of the point of view, i.e., the portion that is unobscured, from that virtual position, by the representation of the physical object.
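The visibility calculation — which parts of the virtual vehicle are unobscured by the representation of the physical object — amounts to an occlusion test per sample point. A minimal sketch, assuming the physical object is approximated by a bounding sphere and the vehicle by sample points (both simplifications are mine, not the patent's):

```python
import math

def segment_hits_sphere(p0, p1, center, radius):
    """True if the segment p0->p1 passes within `radius` of `center`
    (a sphere standing in for the representation of a physical object)."""
    dx = [b - a for a, b in zip(p0, p1)]
    seg_len2 = sum(d * d for d in dx)
    if seg_len2 == 0:
        return math.dist(p0, center) <= radius
    # parameter of the closest point on the segment to the sphere center
    t = sum((c - a) * d for a, c, d in zip(p0, center, dx)) / seg_len2
    t = max(0.0, min(1.0, t))
    closest = [a + t * d for a, d in zip(p0, dx)]
    return math.dist(closest, center) <= radius

def visible_points(viewpoint, vehicle_points, obstacle_center, obstacle_radius):
    """Portion of the virtual vehicle unobscured, from the point of view,
    by the representation of the physical object."""
    return [p for p in vehicle_points
            if not segment_hits_sphere(viewpoint, p, obstacle_center, obstacle_radius)]
```

A renderer would run this test (or a depth-buffer equivalent) for each sample of the vehicle's surface and draw only the surviving points.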

ENFORCING VIRTUAL OBSTACLES IN A LOCATION-BASED EXPERIENCE
20230230327 · 2023-07-20 ·

Methods and systems for a virtual experience where a user moves physically to move his or her 'agent', a position in the virtual space that determines which virtual content the user may interact with. If the virtual space contains obstacles and the user moves such that the agent would intersect one, the system either (a) shifts the correlation vector between the virtual space and the real space to counteract the user's physical movement, preventing the agent from penetrating the obstacle, or (b) allows the user to proceed but become separated from his or her agent. Also described are applications of these methods when the real space is too small to fit the virtual space or when the agent has a speed limitation that the user violates, how to direct the agent while it is separated from the user, and how to handle three-dimensional obstacles, real-world obstacles, and multiple users.
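Option (a) — shifting the correlation vector so the agent stops at the obstacle boundary while the user keeps walking — can be sketched with a point agent and an axis-aligned box obstacle. The box shape, the resolution along the axis of shallowest penetration, and all names are illustrative assumptions:

```python
def would_collide(pos, box_min, box_max):
    """Point-inside-axis-aligned-box test (boundary counts as free)."""
    return all(lo < p < hi for p, lo, hi in zip(pos, box_min, box_max))

def resolve(agent, box_min, box_max):
    """Push a penetrating agent out along the axis of shallowest penetration."""
    best_axis, best_depth, best_val = None, float('inf'), None
    for i, (p, lo, hi) in enumerate(zip(agent, box_min, box_max)):
        for face in (lo, hi):
            depth = abs(p - face)
            if depth < best_depth:
                best_axis, best_depth, best_val = i, depth, face
    out = list(agent)
    out[best_axis] = best_val
    return tuple(out)

def step(real_pos, correlation, box_min, box_max):
    """Map a real-world position into virtual space via the correlation
    vector; if the agent would penetrate the obstacle, absorb the excess
    movement into the correlation vector (option (a))."""
    agent = tuple(r + c for r, c in zip(real_pos, correlation))
    if not would_collide(agent, box_min, box_max):
        return agent, correlation
    fixed = resolve(agent, box_min, box_max)
    new_corr = tuple(f - r for f, r in zip(fixed, real_pos))
    return fixed, new_corr
```

Because the updated correlation vector is returned, subsequent physical movement is measured against the shifted mapping, which is what makes the user feel "held" at the obstacle surface.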

IMAGE GENERATION DEVICE, IMAGE GENERATION METHOD, AND PROGRAM

There is provided an image generation device, an image generation method, and a program capable of observing any point on the ground from a free viewpoint in the sky at any time. The image generation device includes an image generation unit that generates a free viewpoint image of a predetermined point on the ground viewed from a predetermined virtual viewpoint in the sky using a 3D model of a stationary subject generated using a satellite image captured by an artificial satellite and dynamic subject identification information that identifies a dynamic subject. The present technology can be applied to, for example, an image generation device and the like that generate a free viewpoint image from a satellite image captured by an artificial satellite.
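At the core of rendering any ground point from a virtual viewpoint in the sky is a camera projection. A minimal pinhole sketch with the camera axis pointing straight down (the actual system operates on the full 3D model plus dynamic subject identification; this model and every parameter name here are my assumptions):

```python
def project(point, camera_pos, focal, image_center):
    """Project a ground point into a downward-looking virtual camera
    (simple pinhole model; camera looks along -z toward the ground).

    Returns pixel coordinates (u, v), or None if the point is not in
    front of the camera.
    """
    x, y, z = (p - c for p, c in zip(point, camera_pos))
    depth = -z  # distance below the camera
    if depth <= 0:
        return None
    u = image_center[0] + focal * x / depth
    v = image_center[1] + focal * y / depth
    return (u, v)
```

A free-viewpoint image is then the result of projecting every visible surface point of the stationary 3D model (plus any identified dynamic subjects) through such a camera placed at the chosen virtual viewpoint.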

THREE-DIMENSIONAL (3D) TERRAIN RECONSTRUCTION METHOD FOR SCOURED AREA AROUND BRIDGE PIER FOUNDATION BASED ON MECHANICAL SCANNED IMAGING SONAR
20230014144 · 2023-01-19 ·

A three-dimensional (3D) terrain reconstruction method for a scoured area around a bridge pier foundation, based on mechanical scanned imaging sonar, includes: scanning the overall terrain of the scoured area with the sonar from different azimuths to acquire n sonar images of the foundation scouring terrain; intercepting multiple analysis sections from each acquired sonar image at the same distance; extracting key parameters of the upper and lower edges of the terrain imaging strip in each analysis section and transforming the extracted parameters into 3D space, with the fan-shaped beam surface of the sonar represented as a fan-shaped arc; recognizing the scour terrain profile in each analysis section one by one and extracting the corresponding spatially scattered 3D coordinate data; and performing interpolation and fitting on the spatially scattered data, thereby implementing 3D reconstruction of the foundation scouring terrain.
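The final interpolation-and-fitting step turns the spatially scattered 3D coordinate data into a continuous surface on a regular grid. The patent does not name an interpolation method; inverse-distance weighting is one simple stand-in:

```python
import math

def idw_grid(points, xs, ys, power=2.0):
    """Inverse-distance-weighted interpolation of scattered (x, y, z)
    sonar points onto a regular grid defined by xs and ys.

    Returns a list of rows (one per y), each a list of z values.
    """
    grid = []
    for y in ys:
        row = []
        for x in xs:
            num = den = 0.0
            exact = None
            for px, py, pz in points:
                d = math.hypot(x - px, y - py)
                if d == 0.0:            # grid node coincides with a sample
                    exact = pz
                    break
                w = 1.0 / d ** power    # closer samples weigh more
                num += w * pz
                den += w
            row.append(exact if exact is not None else num / den)
        grid.append(row)
    return grid
```

A production pipeline would more likely use kriging or spline fitting over the extracted profile points, but the input/output shape — scattered (x, y, depth) triples in, a gridded depth surface out — is the same.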

Unmanned aerial vehicle (UAV) data collection and claim pre-generation for insured approval

Systems and methods are described for using data collected by unmanned aerial vehicles (UAVs) to generate insurance claim estimates that an insured individual may quickly review, approve, or modify. When an insurance-related event occurs, such as a vehicle collision, crash, or disaster, one or more UAVs are dispatched to the scene of the event to collect various data, including data related to vehicle or real property (insured asset) damage. With the insured's permission or consent, the data collected by the UAVs may then be analyzed to generate an estimated insurance claim for the insured. The estimated insurance claim may be sent to the insured individual, such as to their mobile device via wireless communication or data transmission, for subsequent review and approval. As a result, insurance claim handling and/or the online customer experience may be enhanced.

REDACTING CONTENT IN A VIRTUAL REALITY ENVIRONMENT

A method, executed by a computer, for generating a virtual reality environment utilizing a group of object representations. The method includes identifying an exclusion asset and modifying a set of common illustrative assets to exclude the exclusion asset, producing a redacted set of common illustrative assets.

The method further includes rendering a portion of the redacted set of common illustrative assets to produce a redacted set of common illustrative asset video frames, and selecting a subset of those video frames to produce a common portion of video frames for the virtual reality environment. The method further includes rendering the object representations to produce the remaining portions of the video frames for the virtual reality environment, and linking the common portion and the remaining portions of the video frames to produce the virtual reality environment for interactive consumption.
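The redact-then-link flow can be sketched as set exclusion followed by concatenating the common frames with the per-object frames. The asset ids, the `frame:` labels, and the stubbed-out rendering are all hypothetical stand-ins:

```python
def redact_assets(common_assets, exclusion_ids):
    """Produce the redacted set of common illustrative assets by
    dropping every asset whose id matches an exclusion asset."""
    return [a for a in common_assets if a["id"] not in exclusion_ids]

def build_environment(common_assets, exclusion_ids, object_frames):
    """Render (stubbed here as labeling) the redacted common assets,
    then link the common frames with the per-object frames."""
    redacted = redact_assets(common_assets, exclusion_ids)
    common_frames = [f"frame:{a['id']}" for a in redacted]
    return common_frames + object_frames
```

The point of the split is that the common frames can be rendered once and shared, while the object-representation frames are rendered per viewer and linked in afterward.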

Iterative ray-tracing for autoscaling of oblique ionograms

This invention relates generally to ionogram image processing, autoscaling, and inversion systems and methods for ionospheric monitoring, modeling, and estimation. One advantage of the present invention is a lightweight, low-power, fully autonomous ionospheric monitoring system able to deliver fully processed and highly accurate ionosphere characterization in near real time over a low data-rate satellite link.

Method, system, and computer-readable storage media for rendering of object data based on recognition and/or location matching

A system described herein uses data obtained from a wearable device of a first user to identify a second user and/or to determine that the first user is within a threshold distance of the second user. The system can then access an account of the second user to identify one or more items and retrieve model data for the item(s). The system causes the wearable device of the first user to render, for display in an immersive 3D environment (e.g., an augmented reality environment), an item associated with the account of the second user. The item can be rendered for display at a location on a display that is proximate to the second user (e.g., within a threshold distance of the second user) such that the item graphically corresponds to the second user. The item rendered for display may be an item of interest to the first user.
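The threshold test and item lookup can be sketched as follows; the data shapes (a position per user, an item list per account) and all names are assumptions for illustration:

```python
import math

def items_to_render(first_user_pos, user_positions, accounts, threshold):
    """Return (user_id, item) pairs for every second user within the
    distance threshold of the first user, drawing items from that
    user's account (hypothetical data shapes)."""
    out = []
    for user_id, pos in user_positions.items():
        if math.dist(first_user_pos, pos) <= threshold:
            for item in accounts.get(user_id, []):
                out.append((user_id, item))
    return out
```

The wearable device would then render each returned item at a display location proximate to the corresponding user, so the item graphically corresponds to that user.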

Stylized image painting
11699259 · 2023-07-11 ·

A photo filter (e.g., artistic/stylized painting) light field effect system includes an eyewear device having a frame, a temple connected to a lateral side of the frame, and a depth-capturing camera. Execution of programming by a processor configures the stylized image painting effect system to apply a photo filter selection to: (i) a left raw image or a left processed image to create a left photo filter image, and (ii) a right raw image or a right processed image to create a right photo filter image. The stylized image painting effect system generates a photo filter stylized painting effect image with an appearance of a spatial rotation or movement, by blending together the left photo filter image and the right photo filter image based on a left image disparity map and a right image disparity map.
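The blending step — combining the left and right photo-filter images under control of a disparity map — can be sketched in one dimension: each left pixel is shifted along its disparity by an interpolation factor and cross-faded with the right view. Grayscale rows, the single left disparity map, and all parameter names are simplifying assumptions:

```python
def blend_views(left, right, left_disp, alpha):
    """Blend a left and a right photo-filter image (1-D grayscale rows
    for brevity).

    alpha in [0, 1] sweeps the apparent viewpoint: each left pixel is
    shifted toward the right view along its disparity, then cross-faded
    with the right pixel at the shifted position.
    """
    width = len(left)
    out = list(right)                               # start from the right view
    for x in range(width):
        xr = int(round(x - alpha * left_disp[x]))   # disparity-shifted position
        if 0 <= xr < width:
            out[xr] = (1 - alpha) * left[x] + alpha * right[xr]
    return out
```

Animating alpha from 0 to 1 over successive frames yields the appearance of spatial rotation between the two filtered views; a full implementation would also blend symmetrically from the right image using the right disparity map to fill occlusion holes.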