Patent classifications
G06T19/20
3D Digital Imaging Technology for Apparel Sales and Manufacture
A manufacturing flow for apparel such as jeans uses a laser to finish the products. The products are designed using a digital design tool, which generates photorealistic previews in three dimensions and two dimensions. Imagery of the products is sent to retailers, where customers can order the products, such as through online orders. Imagery of the products is also sent to factories, where the products are finished. Based on the imagery, the factories adjust their processes as needed so that the actual products have the appearance shown in the received imagery. As orders are received by the retailers, the factories can manufacture the desired products on demand, and the products can be delivered to customers.
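The flow above is essentially a pipeline of hand-offs (design tool → preview imagery → retailer orders → factory finishing). The sketch below is a minimal, hypothetical illustration of that on-demand loop; none of the class or function names come from the patent.

```python
# Minimal sketch of the on-demand flow described above: a digital design yields
# preview imagery shared with retailers and factories, and each retail order
# triggers laser finishing at a factory. All names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Design:
    style: str
    laser_pattern: str          # finishing pattern applied by the laser
    preview_3d: str = ""        # stand-ins for rendered preview imagery
    preview_2d: str = ""

@dataclass
class Factory:
    adjustments: list = field(default_factory=list)

    def calibrate(self, design: Design) -> None:
        # Factories tune their process so output matches the preview imagery.
        self.adjustments.append(f"match laser output to {design.preview_2d}")

    def finish(self, design: Design, quantity: int) -> str:
        return f"{quantity} x {design.style} finished with {design.laser_pattern}"

def render_previews(design: Design) -> Design:
    # Placeholder for the photorealistic 3D/2D preview generation step.
    design.preview_3d = f"{design.style}-preview.glb"
    design.preview_2d = f"{design.style}-preview.png"
    return design

if __name__ == "__main__":
    jeans = render_previews(Design(style="501-custom", laser_pattern="whisker-fade"))
    factory = Factory()
    factory.calibrate(jeans)                  # imagery sent to the factory
    print(factory.finish(jeans, quantity=3))  # order received -> made on demand
```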
METHOD AND SYSTEM FOR GAZE-BASED CONTROL OF MIXED REALITY CONTENT
Systems and methods are presented for discovering and positioning content in augmented reality space. A method includes forming a three-dimensional (3D) map of the surroundings of a user of an augmented reality (AR) head-mounted display (HMD); determining a depth-wise location of the user's gaze point based on eye gaze direction and eye vergence; determining a visual guidance line pathway in the 3D map; guiding an action of the user along the visual guidance line pathway at one or more identified focal points; and rendering a mixed reality (MR) object along the visual guidance line pathway at a location corresponding to the direction of the user's gaze.
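The depth-wise gaze determination this abstract mentions can be pictured as triangulating the two eye rays. The sketch below is an illustrative take on that vergence step, assuming a closest-point-between-rays construction that the abstract itself does not specify; all names are hypothetical.

```python
# Rough sketch of depth from vergence: each eye contributes a ray (origin +
# gaze direction), and the 3D gaze point is taken as the midpoint of the rays'
# closest approach. This is an assumption, not the patent's actual method.
import numpy as np

def gaze_point_from_vergence(left_origin, left_dir, right_origin, right_dir):
    """Return the 3D point where the two eye rays (approximately) converge."""
    p1, u = np.asarray(left_origin, float), np.asarray(left_dir, float)
    p2, v = np.asarray(right_origin, float), np.asarray(right_dir, float)
    u, v = u / np.linalg.norm(u), v / np.linalg.norm(v)

    w0 = p1 - p2
    a, b, c = u @ u, u @ v, v @ v
    d, e = u @ w0, v @ w0
    denom = a * c - b * b
    if abs(denom) < 1e-9:          # parallel gaze rays: no usable vergence
        return p1 + u * 10.0       # fall back to a far point along the gaze
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    return 0.5 * ((p1 + s * u) + (p2 + t * v))

if __name__ == "__main__":
    # Eyes 6 cm apart, both fixating a point 1 m ahead and slightly to the right.
    target = np.array([0.1, 0.0, 1.0])
    left, right = np.array([-0.03, 0, 0]), np.array([0.03, 0, 0])
    point = gaze_point_from_vergence(left, target - left, right, target - right)
    print(np.round(point, 3))      # ~ [0.1, 0.0, 1.0]
```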
GENERATING AUGMENTED REALITY IMAGES FOR DISPLAY ON A MOBILE DEVICE BASED ON GROUND TRUTH IMAGE RENDERING
Systems and methods are disclosed herein for monitoring a location of a client device associated with a transportation service and generating augmented reality images for display on the client device. The systems and methods use sensor data from the client device and a device localization process to monitor the location of the client device by comparing renderings of images captured by the client device to renderings of the vicinity of the pickup location. The systems and methods determine navigation instructions from the user's current location to the pickup location and select one or more augmented reality elements associated with the navigation instructions and/or landmarks along the route to the pickup location. The systems and methods instruct the client device to overlay the selected augmented reality elements on a video feed of the client device.
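One step this abstract describes is selecting AR elements tied to the route and nearby landmarks. The following is a simplified, hypothetical sketch of such a selection pass; the distance rule, data shapes, and names are assumptions rather than the patent's actual logic.

```python
# Simplified sketch of the selection step: given a localized device position
# and a route to the pickup point, choose AR elements tied to nearby landmarks
# along the route to overlay on the camera feed. All names are hypothetical.
import math
from dataclasses import dataclass

@dataclass
class ARElement:
    label: str
    lat: float
    lon: float

def distance_m(lat1, lon1, lat2, lon2):
    # Equirectangular approximation; adequate over the short pickup-walk scale.
    k = 111_320.0
    dx = (lon2 - lon1) * k * math.cos(math.radians((lat1 + lat2) / 2))
    dy = (lat2 - lat1) * k
    return math.hypot(dx, dy)

def select_overlay_elements(device_pos, route, landmarks, radius_m=75.0):
    """Pick landmarks within `radius_m` of any upcoming route waypoint."""
    selected = [lm for lm in landmarks
                if any(distance_m(lm.lat, lm.lon, wlat, wlon) < radius_m
                       for wlat, wlon in route)]
    # Sort by distance from the device so the closest cue is rendered first.
    selected.sort(key=lambda lm: distance_m(lm.lat, lm.lon, *device_pos))
    return selected

if __name__ == "__main__":
    route = [(37.7750, -122.4195), (37.7753, -122.4190)]   # waypoints to pickup
    landmarks = [ARElement("green awning", 37.7751, -122.4194),
                 ARElement("parking sign", 37.7800, -122.4300)]
    for el in select_overlay_elements((37.7749, -122.4196), route, landmarks):
        print("overlay:", el.label)
```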
ROTATIONAL DEVICE FOR AN AUGMENTED REALITY DISPLAY SURFACE USING NFC TECHNOLOGY
A device for displaying AR markings comprises a top and a base, with the top rotatably attached to the base and the base configured to be held in a hand or placed on a fixed surface. The AR markings are positioned on the top such that when the top rotates with respect to the base, so do the AR markings. When the AR markings are scanned by an appropriate scanning and display device, such as a smartphone, a 3D image associated with the AR markings is displayed on the display device as an augmented reality projection. When the top rotates with respect to the base, so too does the augmented reality projection.
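The coupling between the rotating top and the projection can be pictured as re-posing the virtual content with the top's detected rotation each frame. The toy sketch below assumes a single vertical spin axis and is purely illustrative.

```python
# Toy sketch of the behaviour described above: the virtual projection is
# re-posed each frame with the rotation detected for the top of the device, so
# spinning the top spins the 3D content with it. Names are illustrative only.
import numpy as np

def rotation_about_vertical(angle_deg: float) -> np.ndarray:
    """Rotation matrix about the device's spin axis (here the world Y axis)."""
    a = np.radians(angle_deg)
    return np.array([[np.cos(a), 0.0, np.sin(a)],
                     [0.0,       1.0, 0.0],
                     [-np.sin(a), 0.0, np.cos(a)]])

def update_projection(model_vertices: np.ndarray, top_angle_deg: float) -> np.ndarray:
    # Re-pose the AR projection so it tracks the rotating top.
    return model_vertices @ rotation_about_vertical(top_angle_deg).T

if __name__ == "__main__":
    cube_corner = np.array([[1.0, 0.0, 0.0]])
    print(update_projection(cube_corner, 90.0).round(3))   # -> [[0., 0., -1.]]
```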
METHOD FOR RECONSTRUCTING THREE-DIMENSIONAL MODEL, METHOD FOR TRAINING THREE-DIMENSIONAL RECONSTRUCTION MODEL, AND APPARATUS
This application provides a method for reconstructing a three-dimensional model, a method for training a three-dimensional reconstruction model, an apparatus, a computer device, and a storage medium. The method for reconstructing a three-dimensional model includes: obtaining an image feature coefficient of an input image; obtaining, according to the image feature coefficient, a global feature map and an initial local feature map for each of the texture and the shape of the input image; performing edge smoothing on the initial local feature map to obtain a target local feature map; splicing the global feature map and the target local feature map for the texture and the shape, respectively, to obtain a target texture image and a target shape image; and performing three-dimensional model reconstruction according to the target texture image and the target shape image to obtain a target three-dimensional model.
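The abstract outlines a multi-stage pipeline (feature coefficient → global/local feature maps → edge smoothing → splicing → reconstruction). The sketch below only mirrors that ordering with placeholder tensors; every operation and shape is an assumption, not the patent's method.

```python
# Schematic sketch of the pipeline's ordering: derive texture and shape feature
# maps from an image feature coefficient, smooth the local map's edges, splice
# global and local maps, and hand the result to a reconstruction step. The
# blur used for "edge smoothing" and the splicing rule are placeholders.
import numpy as np

def encode(image: np.ndarray, dim: int = 16) -> np.ndarray:
    """Stand-in encoder: reduce the image to an 'image feature coefficient'."""
    rng = np.random.default_rng(0)
    return rng.standard_normal((dim,)) * image.mean()

def decode_maps(coeff: np.ndarray, size: int = 32):
    """Produce a global and an initial local feature map for one modality."""
    global_map = np.resize(np.outer(coeff, coeff), (size, size))
    local_map = global_map + np.random.default_rng(1).normal(0, 0.1, (size, size))
    return global_map, local_map

def edge_smooth(feature_map: np.ndarray, border: int = 2) -> np.ndarray:
    """Blend a border of the local map toward its mean to soften seams."""
    out, mean = feature_map.copy(), feature_map.mean()
    for i in range(border):
        w = (i + 1) / (border + 1)                  # more smoothing at the rim
        out[i, :] = w * out[i, :] + (1 - w) * mean
        out[-1 - i, :] = w * out[-1 - i, :] + (1 - w) * mean
        out[:, i] = w * out[:, i] + (1 - w) * mean
        out[:, -1 - i] = w * out[:, -1 - i] + (1 - w) * mean
    return out

def splice(global_map: np.ndarray, local_map: np.ndarray) -> np.ndarray:
    # Placeholder splice: average the two maps.
    return 0.5 * (global_map + local_map)

if __name__ == "__main__":
    coeff = encode(np.ones((64, 64)))
    for modality in ("texture", "shape"):
        g, l = decode_maps(coeff)
        target = splice(g, edge_smooth(l))
        print(modality, "map:", target.shape)       # fed to 3D reconstruction
```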
VIRTUAL SCENE DISPLAY METHOD AND APPARATUS, DEVICE, STORAGE MEDIUM, AND PROGRAM PRODUCT
Embodiments of this application disclose a virtual scene display method performed by a computer device, relating to the field of virtual scene technologies. The method includes: generating a virtual scene interface, the virtual scene interface including a scene image of a virtual scene captured by a virtual camera; displaying, in the virtual scene interface, a first scene image of one or more virtual objects in the virtual scene captured by the virtual camera using a first representation, the first representation being one of a 2D representation and a 3D representation; and in response to receiving a zoom operation on the virtual scene, displaying, in the virtual scene interface, a second scene image of the one or more virtual objects in the virtual scene captured by the virtual camera using a second representation, the second representation being the other one of the 2D representation and the 3D representation.
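The zoom-triggered switch between 2D and 3D representations amounts to a threshold on the virtual camera's zoom. The sketch below is a minimal, hypothetical rendition of that behaviour; the threshold value and class names are assumptions.

```python
# Minimal sketch of the switching behaviour: the scene image uses a 2D
# representation at far zoom and a 3D representation once the virtual camera
# zooms past a threshold. Threshold and names are hypothetical.
from dataclasses import dataclass

@dataclass
class VirtualCamera:
    zoom: float = 1.0     # larger = closer to the virtual objects

class VirtualSceneInterface:
    SWITCH_ZOOM = 2.0     # assumed threshold between the two representations

    def __init__(self):
        self.camera = VirtualCamera()
        self.representation = "2D"

    def on_zoom(self, factor: float) -> str:
        self.camera.zoom *= factor
        new_rep = "3D" if self.camera.zoom >= self.SWITCH_ZOOM else "2D"
        if new_rep != self.representation:
            self.representation = new_rep      # re-render objects in the new form
        return f"scene rendered with {self.representation} objects (zoom {self.camera.zoom:.1f})"

if __name__ == "__main__":
    ui = VirtualSceneInterface()
    print(ui.on_zoom(1.5))   # still 2D
    print(ui.on_zoom(2.0))   # zoom crosses the threshold -> 3D
```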