Patent classifications
G06T2219/2024
AUGMENTED REALITY HOLOGRAM VIRTUAL AQUARIUM SYSTEM
An augmented reality hologram virtual aquarium system includes a control unit with a virtual fish video control unit that generates and controls a virtual fish video; a display that shows the virtual fish video generated and transmitted by the video control unit; an aquarium management unit, connected to the control unit, that includes a history management unit related to the growth of the virtual fish and an equipment supply unit that supplies growth-related equipment; and a user input unit through which the user selects the virtual fish and growth activities via the control unit.
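As an illustration only, here is a minimal Python sketch of how the claimed units might be wired together; all class and method names (ControlUnit, HistoryManagementUnit, and so on) are hypothetical stand-ins, not the patent's implementation.

```python
# Minimal sketch of the claimed unit structure; all names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class VirtualFish:
    species: str
    growth_stage: int = 0

@dataclass
class HistoryManagementUnit:
    """Records growth-related events for the virtual fish."""
    history: list = field(default_factory=list)
    def record(self, event: str):
        self.history.append(event)

@dataclass
class EquipmentSupplyUnit:
    """Supplies growth-related equipment (food, filters, decorations)."""
    inventory: dict = field(default_factory=lambda: {"food": 10, "filter": 1})
    def supply(self, item: str) -> bool:
        if self.inventory.get(item, 0) > 0:
            self.inventory[item] -= 1
            return True
        return False

class ControlUnit:
    """Generates the virtual fish video and routes user input."""
    def __init__(self):
        self.fish = None
        self.history = HistoryManagementUnit()
        self.equipment = EquipmentSupplyUnit()

    def select_fish(self, species: str):    # user input unit -> control unit
        self.fish = VirtualFish(species)
        self.history.record(f"selected {species}")

    def growth_activity(self, item: str):
        if self.fish and self.equipment.supply(item):
            self.fish.growth_stage += 1
            self.history.record(f"used {item}; stage={self.fish.growth_stage}")

    def render_frame(self) -> str:          # stand-in for the video stream
        return f"{self.fish.species} at growth stage {self.fish.growth_stage}"

control = ControlUnit()
control.select_fish("clownfish")
control.growth_activity("food")
print(control.render_frame())   # clownfish at growth stage 1
```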
Method and system for optimizing distance estimation
Distance estimation is optimized in virtual or augmented reality. A distance map from a surgical instrument to a region of interest is determined, at least at the beginning and whenever the position of the surgical instrument changes. A render image is rendered based on a medical 3D image and the position of the surgical instrument, likewise at the beginning and whenever the instrument's position changes. At least the region of interest and those parts of the surgical instrument positioned within the volume of the render image are shown in the render image. Based on the distance map, visual, acoustic, and/or haptic distance information is added, at least for a predefined area of the region of interest.
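A minimal numpy sketch of the recompute-on-move idea described above: distances from the instrument tip to region-of-interest points are recalculated only when the tip position changes, and a cue is emitted inside a warning radius. The point data and threshold are illustrative assumptions.

```python
# Sketch: recompute the instrument-to-ROI distance map only on movement.
import numpy as np

roi_points = np.random.rand(500, 3) * 100.0   # ROI surface points (mm), illustrative
WARN_MM = 5.0                                 # hypothetical warning radius

last_tip = None
distance_map = None

def update(tip: np.ndarray) -> float:
    """Recompute the distance map only if the instrument tip has moved."""
    global last_tip, distance_map
    if last_tip is None or not np.allclose(tip, last_tip):
        distance_map = np.linalg.norm(roi_points - tip, axis=1)
        last_tip = tip.copy()
    nearest = distance_map.min()
    if nearest < WARN_MM:
        print(f"visual/acoustic/haptic cue: nearest ROI point {nearest:.1f} mm away")
    return nearest

print(update(np.array([50.0, 50.0, 50.0])))
print(update(np.array([50.0, 50.0, 50.0])))   # unchanged position: map reused
```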
SYSTEMS AND METHODS FOR GENERATING INTERACTIVE 360-DEGREE CONTENT
Systems and methods for identifying certain objects within 360-degree content for user interaction. Objects within 360-degree content may be identified, and the corresponding segments may be assigned a score according to how likely they are to meet certain criteria, such as the likelihood that the user will interact with the object or its segment. Scores may be assigned in any manner, such as with reference to retrieved user information. In highly scored segments, users may be encouraged to pause the content and interact with its objects. Encouragement may take any form, such as highlighting the segment or some component of it. Likewise, interaction may take any form, such as allowing the user to alter the appearance of one or more segment objects. In this manner, user interaction with 360-degree video is increased, raising viewer interest and engagement.
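A minimal sketch of one way such scoring could work, assuming a retrieved per-user interest profile; the segment data, profile fields, scoring rule, and threshold are all illustrative assumptions, not the patent's method.

```python
# Sketch: score 360-degree segments by likely user interaction.
segments = [
    {"t": 0,  "objects": ["car", "tree"]},
    {"t": 10, "objects": ["sneakers", "dog"]},
    {"t": 20, "objects": ["sneakers", "watch"]},
]
user_profile = {"sneakers": 0.9, "watch": 0.6, "dog": 0.2}  # retrieved user info

def score(segment: dict) -> float:
    """Sum the user's interest in each object visible in the segment."""
    return sum(user_profile.get(obj, 0.0) for obj in segment["objects"])

for seg in segments:
    seg["score"] = score(seg)

THRESHOLD = 0.8  # hypothetical cut-off for "highly scored"
for seg in segments:
    if seg["score"] >= THRESHOLD:
        # e.g., highlight the segment and offer to pause for interaction
        print(f"t={seg['t']}s score={seg['score']:.1f}: highlight {seg['objects']}")
```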
MESSAGING SYSTEM WITH NEURAL HAIR RENDERING
A messaging system performs neural network hair rendering for images provided by users of the messaging system. A method of neural network hair rendering includes processing a three-dimensional (3D) model of fake hair and a first real hair image depicting a first person to generate a fake hair structure, and encoding, using a fake hair encoder neural subnetwork, the fake hair structure to generate a coded fake hair structure. The method further includes processing, using a cross-domain structure embedding neural subnetwork, the coded fake hair structure to generate a fake and real hair structure, and encoding, using an appearance encoder neural subnetwork, a second real hair image depicting a second person having a second head to generate an appearance map. The method further includes processing, using a real appearance renderer neural subnetwork, the appearance map and the fake and real hair structure to generate a synthesized real image.
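To make the subnetwork data flow concrete, here is a minimal PyTorch sketch of the described pipeline shape (fake hair encoder, cross-domain structure embedding, appearance encoder, renderer). The layer sizes and module definitions are invented stand-ins; only the wiring mirrors the abstract.

```python
# A minimal PyTorch sketch of the hair-rendering data flow; all module
# architectures and sizes are hypothetical stand-ins.
import torch
import torch.nn as nn

class ConvEncoder(nn.Module):
    """Tiny stand-in encoder: image or structure map -> feature map."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, out_ch, 3, stride=2, padding=1), nn.ReLU(),
        )
    def forward(self, x):
        return self.net(x)

fake_hair_encoder = ConvEncoder(in_ch=1, out_ch=64)   # -> coded fake hair structure
cross_domain = nn.Conv2d(64, 64, 1)                   # -> fake-and-real structure
appearance_encoder = ConvEncoder(in_ch=3, out_ch=64)  # -> appearance map
renderer = nn.Sequential(                             # -> synthesized real image
    nn.ConvTranspose2d(128, 32, 4, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
)

fake_structure = torch.rand(1, 1, 64, 64)   # rasterized 3D fake-hair model
real_image = torch.rand(1, 3, 64, 64)       # second person's hair image

coded = fake_hair_encoder(fake_structure)
structure = cross_domain(coded)
appearance = appearance_encoder(real_image)
synthesized = renderer(torch.cat([structure, appearance], dim=1))
print(synthesized.shape)                    # torch.Size([1, 3, 64, 64])
```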
EDITING A VIRTUAL REALITY SPACE
An editing terminal includes a simple display data acquisition unit that acquires simple display data from an item management server, an item selection processing unit that receives selection of an item from a plurality of items displayed using the simple display data, a three-dimensional data acquisition unit that acquires three-dimensional data of a selected item from the item management server, and an editing processing unit that displays an editing space on an editing screen on the basis of editing space information, receives an input of operation information regarding editing of the editing space using the three-dimensional data of the selected item, transmits the operation information to an editing server, and displays the editing space after editing on the editing screen.
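As a rough illustration of the described flow (browse lightweight display data, fetch full 3D data only on selection, send edit operations to a server), here is a Python sketch with in-memory stand-ins for the item management server and editing server; all names and data shapes are hypothetical.

```python
# Sketch of the editing-terminal flow with in-memory stand-in "servers".
ITEM_SERVER = {  # item id -> (simple display data, full 3D data)
    "chair-01": ({"thumb": "chair.png", "name": "Chair"}, {"mesh": "chair.obj"}),
    "lamp-02":  ({"thumb": "lamp.png",  "name": "Lamp"},  {"mesh": "lamp.obj"}),
}

class EditingTerminal:
    def __init__(self):
        self.editing_space = []   # edited state mirrored from the editing server

    def acquire_simple_display_data(self) -> dict:
        """Lightweight data for browsing, without the heavy 3D payloads."""
        return {item_id: simple for item_id, (simple, _) in ITEM_SERVER.items()}

    def select_item(self, item_id: str) -> dict:
        _, three_d = ITEM_SERVER[item_id]   # 3D data fetched on selection only
        return three_d

    def edit(self, item_id: str, position: tuple) -> list:
        three_d = self.select_item(item_id)
        op = {"op": "place", "item": item_id,
              "mesh": three_d["mesh"], "pos": position}
        self.editing_space.append(op)       # editing server applies the op
        return self.editing_space           # redisplay the edited space

terminal = EditingTerminal()
print(terminal.acquire_simple_display_data())   # list shown for item selection
print(terminal.edit("chair-01", position=(1.0, 0.0, 2.5)))
```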
POINT-BASED MODELING OF HUMAN CLOTHING
Provided are virtual try-on and telepresence applications relating to realistic modeling of clothing worn by humans, and of humans themselves, in three dimensions (3D). Proposed is hardware comprising software products that perform a method for imaging clothes on a person, adapted to the body pose and body shape, based on a point cloud draping model. The method includes using a point cloud and a neural network that synthesizes such point clouds to capture and model the geometry of clothing outfits, and using point-based differentiable neural rendering to capture their appearance.
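A minimal PyTorch sketch of the pose- and shape-adaptive draping idea: a small network predicts per-point offsets for a garment point cloud from pose and shape parameters. All dimensions and the network architecture are illustrative assumptions, not the proposed model.

```python
# Sketch: "drape" a garment point cloud by predicting per-point offsets
# from body pose and shape parameters. Sizes are illustrative.
import torch
import torch.nn as nn

N_POINTS, POSE_DIM, SHAPE_DIM = 2048, 72, 10   # hypothetical dimensions

class PointDraper(nn.Module):
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(POSE_DIM + SHAPE_DIM, 256), nn.ReLU(),
            nn.Linear(256, N_POINTS * 3),          # one offset per garment point
        )

    def forward(self, template, pose, shape):
        offsets = self.mlp(torch.cat([pose, shape], dim=-1))
        return template + offsets.view(-1, N_POINTS, 3)

draper = PointDraper()
template = torch.rand(1, N_POINTS, 3)    # canonical garment point cloud
pose, shape = torch.rand(1, POSE_DIM), torch.rand(1, SHAPE_DIM)
draped = draper(template, pose, shape)   # pose/shape-adapted point cloud
print(draped.shape)                      # torch.Size([1, 2048, 3])
# A point-based differentiable renderer would then rasterize these points
# with learned per-point appearance descriptors.
```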
DISPLAYING A PRODUCT IN A SELECTED ENVIRONMENT
Disclosed herein are a system and method for displaying a product in a customer's selected environment. In one aspect, the method comprises scanning, using a user device, a selected environment to obtain an image of it; processing the obtained image to create a 3D image of the selected environment; selecting a product for display; generating, using an augmented reality system, an augmented reality 3D image of the selected product superimposed onto the 3D image of the selected environment, wherein the generated augmented reality 3D image is at scale and anchored to the 3D image of the selected environment based on the location of the selected environment relative to the location of the user device and a first user-selected view; and rendering the augmented reality 3D image onto a 2D display device.
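A minimal numpy sketch of the anchoring step: a model matrix places the product at real-world scale at an environment point expressed relative to the device. The coordinates and helper name are illustrative assumptions.

```python
# Sketch: compose a model matrix that places a product at real-world scale,
# anchored at an environment point expressed relative to the device.
import numpy as np

def model_matrix(anchor_xyz: np.ndarray, scale: float = 1.0) -> np.ndarray:
    """4x4 transform: uniform scale, then translate to the anchor (metres)."""
    m = np.eye(4)
    m[:3, :3] *= scale
    m[:3, 3] = anchor_xyz
    return m

device_pos = np.array([0.0, 1.5, 0.0])    # user device in world frame (m)
floor_hit = np.array([0.5, 0.0, -2.0])    # scanned anchor in world frame (m)
anchor_rel = floor_hit - device_pos       # anchor relative to the device

M = model_matrix(anchor_rel, scale=1.0)   # at scale: 1 model unit = 1 metre
corner = np.array([0.4, 0.0, 0.3, 1.0])   # a product-space vertex
print(M @ corner)                         # vertex placed in the 3D scene
```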
Messaging system with augmented reality messages
A method of generating an augmented reality lens comprises: causing a list of lens categories to be displayed on a display screen of a client device; receiving a user choice from the displayed list; prepopulating a lens features display on the display screen based on the user choice, wherein each lens feature comprises image transformation data configured to modify or overlay video or image data; receiving a user selection of a lens feature from the prepopulated lens features display; receiving a trigger selection that activates the lens feature to complete the lens; and saving the completed lens to a memory of a computing device.
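A minimal sketch of that authoring flow as plain data; the categories, feature names, transform strings, and trigger names are illustrative, not the messaging system's actual schema.

```python
# Sketch: choose a category, pick a prepopulated feature, attach a trigger,
# and save the completed lens. All names are hypothetical.
import json

LENS_CATEGORIES = {
    "face":  [{"feature": "dog_ears", "transform": "overlay:dog_ears.png"}],
    "world": [{"feature": "confetti", "transform": "particles:confetti"}],
}

def build_lens(category: str, feature_name: str, trigger: str) -> dict:
    features = LENS_CATEGORIES[category]                 # prepopulated display
    feature = next(f for f in features if f["feature"] == feature_name)
    return {"category": category, "feature": feature, "trigger": trigger}

lens = build_lens("face", "dog_ears", trigger="open_mouth")
saved = json.dumps(lens)    # "saving the completed lens to a memory"
print(saved)
```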
METHOD & APPARATUS FOR PRODUCING A VIDEO IMAGE STREAM
Methods and apparatus are described for producing a computer-generated image by monitoring one or more kinds of user activity to identify at least one item of interest, and creating a computer-generated image including a representation of the item of interest overlaid onto an image of a model, environment, or contextually relevant scenario. The process begins at 102 with monitoring a user's online activity. Software tracks and stores which purchasable items on the website or mobile application the user appears to be most interested in. In a particular example, a user's abandoned-basket feed is tracked and stored. In another example, cookie information is stored. Another indicator of the items a user is most interested in is the length of time spent browsing a webpage associated with a particular item. Dwell-time and cursor-tracking software may also be used to identify which items the user viewed most. At 114, a model simulation is created. The object to be modelled relates to the items of interest. In the example where the items of interest are wearable items such as clothing garments, the model to be simulated is a human person, said human person resembling the user as closely as possible based on the available personal data.
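A minimal sketch of combining the signals named above (dwell time, cursor hovers, abandoned basket) into a per-item interest score; the weights and data are illustrative assumptions.

```python
# Sketch: rank items of interest from tracked activity signals.
activity = {
    "denim-jacket": {"dwell_s": 95, "hovers": 12, "abandoned_basket": True},
    "scarf":        {"dwell_s": 20, "hovers": 3,  "abandoned_basket": False},
}

def interest(a: dict) -> float:
    """Weighted combination of activity signals; weights are illustrative."""
    return (a["dwell_s"] * 0.5
            + a["hovers"] * 2.0
            + (30.0 if a["abandoned_basket"] else 0.0))

ranked = sorted(activity, key=lambda k: interest(activity[k]), reverse=True)
print(ranked[0])   # item of interest to overlay onto the user-like model
```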
GRAPHICAL USER INTERFACE FOR CONTROLLING A SOLAR RAY MAPPING
Systems, methods, and computer-readable media are described herein to model divergent beam ray paths between locations on a roof (e.g., of a structure) and modeled locations of the sun at different times of the day and different days during a week, month, year, or another time period. A graphical user interface allows for visualization of the modeled ray paths and graphical manipulation of the resolution and parameters of the modeling process.
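As a rough illustration of the modeling loop, here is a Python sketch that samples sun directions over a day on a simple arc and traces rays to roof sample points; the sun-path formula is a crude stand-in, not a solar-position algorithm, and all values are illustrative.

```python
# Sketch: sample sun directions over a day and trace rays to roof points.
import numpy as np

def sun_direction(hour: float, max_elevation_deg: float = 60.0) -> np.ndarray:
    """Sun on a simple east-to-west arc, peaking at solar noon (hour 12)."""
    frac = (hour - 6.0) / 12.0                 # 0 at 6:00, 1 at 18:00
    azimuth = np.pi * frac                     # east (0) to west (pi)
    elevation = np.radians(max_elevation_deg) * np.sin(np.pi * frac)
    return np.array([
        np.cos(elevation) * np.cos(azimuth),   # east component
        np.cos(elevation) * np.sin(azimuth),   # horizontal sweep, simplified
        np.sin(elevation),                     # up component
    ])

roof_points = np.array([[0.0, 0.0, 3.0], [2.0, 1.0, 3.2]])  # roof samples (m)
for hour in (9, 12, 15):                       # resolution a GUI could adjust
    d = sun_direction(hour)
    for p in roof_points:
        # ray from roof point toward the sun; an occlusion test would go here
        print(f"h={hour:02d} point={p} ray_dir={np.round(d, 2)}")
```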