Patent classifications
G06T2219/2016
CREATING SYNTHETIC VISUAL INSPECTION DATA SETS USING AUGMENTED REALITY
In an approach for creating synthetic visual inspection data sets for training an artificial intelligence computer vision deep learning model utilizing augmented reality, a processor enables a user to capture a plurality of images of an anchor object using a camera on a user computing device. A processor receives the plurality of images of the anchor object from the user. A processor generates a baseline model of the anchor object. A processor generates a training data set. A processor trains the baseline model of the anchor object. A processor creates a trained Artificial Intelligence (AI) computer vision deep learning model. A processor enables the user to interact with the trained AI computer vision deep learning model in an access mode.
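The capture-and-augment step above (a few real images of the anchor object expanded into a training data set) could be sketched as follows; the transform names, dataset schema, and label are illustrative assumptions, not the patent's own vocabulary:

```python
import random

def generate_training_set(anchor_images, n_per_image=4, seed=0):
    """Expand captured anchor-object images into a larger synthetic
    training set by pairing each image with randomized variations."""
    rng = random.Random(seed)
    transforms = ["rotate", "rescale", "relight", "occlude", "background_swap"]
    dataset = []
    for image_id in anchor_images:
        for _ in range(n_per_image):
            dataset.append({
                "source": image_id,                   # which captured image
                "transform": rng.choice(transforms),  # synthetic variation
                "label": "anchor_object",             # supervision target
            })
    return dataset
```

A deterministic seed keeps the generated set reproducible across runs.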
TECHNIQUES FOR INTRODUCING ORIENTED BOUNDING BOXES INTO BOUNDING VOLUME HIERARCHY
Described herein is a technique for modifying a bounding volume hierarchy. The technique includes combining preferred orientations of child nodes of a first bounding box node to generate a first preferred orientation; based on the first preferred orientation, converting one or more child nodes of the first bounding box node into one or more oriented bounding box nodes; combining preferred orientations of child nodes of a second bounding box node to generate a second preferred orientation; and based on the second preferred orientation, maintaining one or more children of the second bounding box node as non-oriented bounding box nodes.
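The combine-then-decide step could be sketched as below, with the strong simplifying assumptions that a preferred orientation is a single 2D angle and that agreement is measured by the mean resultant length of a weighted circular mean (1.0 when children agree exactly, near 0.0 when scattered); the abstract itself does not specify either representation:

```python
import numpy as np

def combine_preferred_orientations(angles, weights):
    """Weighted circular mean of child-node preferred orientations
    (reduced to single 2D angles here for illustration)."""
    a = np.asarray(angles, float)
    w = np.asarray(weights, float)
    s, c = np.sum(w * np.sin(a)), np.sum(w * np.cos(a))
    return float(np.arctan2(s, c))

def should_convert_to_obb(angles, weights, agreement=0.9):
    """Convert children to oriented bounding box nodes only when they
    agree on an orientation; otherwise keep them non-oriented."""
    a = np.asarray(angles, float)
    w = np.asarray(weights, float)
    s, c = np.sum(w * np.sin(a)), np.sum(w * np.cos(a))
    resultant = np.hypot(s, c) / np.sum(w)  # 1.0 = perfect agreement
    return bool(resultant >= agreement)
```

The circular mean avoids the wrap-around artifact a plain arithmetic average would show near 0/2π.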
System and Method for Improved Generation of Avatars for Virtual Try-On of Garments
A system and a method for improved generation of 3D avatars for virtual try-on of garments are provided. Inputs from a first user type are received, via a first input unit, for generating one or more garment types in a graphical format. Further, a 3D avatar of a second user type is generated in a semi-automatic manner or an automatic manner based on capturing a first input type or a second input type respectively received via a second input unit. The first input type comprises measurements of body specifications of the second user type and the second input type comprises body images of the second user type. Further, the generated garments are rendered on the generated 3D avatar of the second user type for carrying out a virtual try-on operation.
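The two generation paths could be dispatched as in the sketch below; the parameter names, the height-normalized shape parameters, and the silhouette-mask format are all hypothetical stand-ins for whatever the system actually consumes:

```python
def generate_avatar(measurements=None, body_images=None):
    """Pick the semi-automatic path (body measurements, in cm, must
    include 'height') or the automatic path (binary silhouette masks);
    both return coarse parameters for a template 3D mesh."""
    if measurements is not None:
        # Semi-automatic path: normalize measurements by height into
        # dimensionless shape parameters.
        h = measurements["height"]
        return {"mode": "semi-automatic",
                "shape": {k: round(v / h, 3) for k, v in measurements.items()}}
    if body_images is not None:
        # Automatic path: a per-row width profile from the first
        # silhouette, as a stand-in for image-based shape estimation.
        widths = [sum(row) for row in body_images[0]]
        return {"mode": "automatic", "width_profile": widths}
    raise ValueError("provide measurements or body images")
```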
ELECTRONIC DEVICE AND METHOD FOR DISPLAYING NOTIFICATION ABOUT EXTERNAL OBJECT
An electronic device has a sensor, a display including a first area and a second area, and a processor electrically connected to the sensor and the display. The processor acquires information on an external object through the sensor, determines the importance of the external object through the information, and controls the display to display at least one augmented reality object or at least one virtual reality object in the first area in response to at least one of determining that the importance is less than a first importance and that the information on the external object has not been acquired. The processor also controls the display to display an indicator in the second area so that the indicator has a first size and removes at least a part of the at least one augmented reality object or the at least one virtual reality object according to priorities.
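The display decision described above could be sketched as a single function; the importance threshold, the priority representation, and keeping exactly the top-priority object are illustrative assumptions:

```python
def update_display(ar_objects, importance, info_acquired=True,
                   first_importance=0.5, keep_top=1):
    """ar_objects: list of (name, priority) pairs, higher = keep longer.
    Show immersive AR/VR objects when the external object is unimportant
    (or no information was acquired); otherwise show the indicator and
    remove all but the highest-priority objects."""
    if not info_acquired or importance < first_importance:
        return {"first_area": [name for name, _ in ar_objects],
                "indicator": False}
    ranked = sorted(ar_objects, key=lambda t: t[1], reverse=True)
    return {"first_area": [name for name, _ in ranked[:keep_top]],
            "indicator": True}
```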
AR ITEM PLACEMENT IN A VIDEO
Aspects of the present disclosure involve a system for presenting AR items. The system performs operations including receiving a video that includes a depiction of one or more real-world objects in a real-world environment, obtaining depth information related to the real-world environment, and generating a 3D model of the real-world environment. The operations further include determining 3D placement and orientation for an AR item based on data associated with the AR item and the 3D model of the real-world environment and causing display of a marker in the video that specifies the 3D placement and orientation of the AR item. The operations further include rendering a display of the AR item within the video according to the 3D placement and orientation in response to movement of the marker within the video.
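One way the 3D placement step could work, assuming the scene model is a reconstructed point cloud and the AR item is floor-anchored, is sketched below; the y-up convention and the 5 cm floor band are assumptions, not details from the abstract:

```python
import numpy as np

def place_floor_item(scene_points):
    """Given points of the reconstructed 3D scene model (N x 3, y-up),
    place a floor-anchored AR item at the centroid of the lowest
    horizontal band of points, oriented with the floor normal."""
    ys = scene_points[:, 1]
    floor_y = np.percentile(ys, 5)                    # approximate floor height
    band = scene_points[np.abs(ys - floor_y) < 0.05]  # points near the floor
    position = band.mean(axis=0)
    position[1] = floor_y                             # rest the item on the floor
    up = np.array([0.0, 1.0, 0.0])                    # orientation: floor normal
    return position, up
```

Using a low percentile rather than the minimum makes the floor estimate robust to a few noisy depth samples.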
AR BODY PART TRACKING SYSTEM
Aspects of the present disclosure involve a system for presenting AR items. The system performs operations including: receiving an image that includes a depiction of a first real-world body part in a real-world environment; applying a machine learning technique to the image to generate a plurality of dense outputs each associated with a respective pixel of a plurality of pixels in the image; applying a first task-specific decoder to the plurality of dense outputs to identify a pixel corresponding to a center of the first real-world body part; applying a second task-specific decoder using the identified pixel to retrieve a 3D rotation, translation and scale of the first real-world body part from the plurality of dense outputs; modifying an AR object based on the 3D rotation, translation, and scale of the first real-world body part; and modifying the image to include a depiction of the modified AR object.
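The two-decoder readout could be sketched as below, under the assumption that the dense outputs form an H x W x C array whose channel 0 is a "centerness" heatmap and whose next seven channels hold rotation (3), translation (3), and scale (1); that channel layout is illustrative, not stated in the abstract:

```python
import numpy as np

def center_decoder(dense):
    """First task-specific decoder: the pixel where the centerness
    channel peaks is taken as the body-part center."""
    heat = dense[..., 0]
    return np.unravel_index(np.argmax(heat), heat.shape)

def pose_decoder(dense, center):
    """Second task-specific decoder: read 3D rotation, translation,
    and scale from the dense outputs at the identified center pixel."""
    row, col = center
    vec = dense[row, col, 1:8]
    return vec[:3], vec[3:6], float(vec[6])
```

Reading the pose only at the decoded center pixel is what makes the second decoder cheap: the dense network runs once, and each task pays only for its own readout.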
Design Tool with 3D Garment Rendering and Preview
A tool allows a user to create new designs for apparel and preview these designs in three dimensions before manufacture. Software and lasers are used in finishing apparel to produce a desired wear pattern or other design. Based on a laser input file with a pattern, a laser will burn the pattern onto apparel. With the tool, the user will be able to create, make changes, and view images of a design, in real time, before burning by a laser. Input to the tool includes fabric template images, laser input files, and damage input. The tool allows adding of tinting and adjusting of intensity and bright point. The user can also move, rotate, scale, and warp the image input.
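The tinting, intensity, and bright-point adjustments could operate per pixel roughly as sketched here; the exact transfer function (multiplicative tint and intensity with a bright-point ceiling) is an illustrative assumption:

```python
def adjust_pixel(rgb, intensity=1.0, bright_point=255, tint=(1.0, 1.0, 1.0)):
    """Apply intensity scaling, a per-channel tint, and a bright-point
    ceiling to one (r, g, b) pixel of the rendered preview."""
    return tuple(min(int(round(c * intensity * t)), bright_point)
                 for c, t in zip(rgb, tint))
```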
Authorized gesture control methods and apparatus
A method for a system includes capturing with a biometric capture device biometric data associated with a user of a smart device, determining with a processor a user profile in response to the biometric data, determining with a physical sensor a plurality of physical perturbations in response to physical actions of the user, determining with the processor a requested user-perceptible action in response to the user profile and the plurality of physical perturbations, receiving with a short-range transceiver an authentication request from a reader device, and outputting with the short-range transceiver a token and identification of the user-perceptible action to the reader device in response to the authentication request, wherein the reader device performs or directs performance of the user-perceptible action in response to the identification of the user-perceptible action and to the token being valid.
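The token-plus-action response could be sketched as follows, with heavy simplifications: the perturbation classifier, the gesture-to-action mapping, and the HMAC token format are all stand-ins assumed for illustration, not details from the abstract:

```python
import hashlib
import hmac
import secrets

def classify_perturbations(samples, tap_threshold=1.5):
    """Toy gesture classifier: exactly two acceleration spikes are read
    as a double tap, anything else as a shake."""
    spikes = sum(1 for s in samples if abs(s) > tap_threshold)
    return "double_tap" if spikes == 2 else "shake"

GESTURE_TO_ACTION = {  # hypothetical mapping to user-perceptible actions
    "double_tap": "unlock_and_beep",
    "shake": "flash_light",
}

def respond_to_authentication_request(user_id, samples, secret_key):
    """Build the token and action identifier sent to the reader device;
    a reader sharing secret_key can validate the HMAC before acting."""
    action = GESTURE_TO_ACTION[classify_perturbations(samples)]
    nonce = secrets.token_hex(8)
    token = hmac.new(secret_key, f"{user_id}:{action}:{nonce}".encode(),
                     hashlib.sha256).hexdigest()
    return {"token": token, "nonce": nonce, "action": action}
```

Binding the action identifier into the HMAC means a reader that validates the token also validates that the action was not tampered with in transit.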
SYNTHESIZING THREE-DIMENSIONAL VISUALIZATIONS FROM PERSPECTIVES OF ONBOARD SENSORS OF AUTONOMOUS VEHICLES
Aspects of the disclosure provide for generating a visualization of a three-dimensional (3D) world view from the perspective of a camera of a vehicle. For example, images of a scene captured by a camera of the vehicle and 3D content for the scene may be received. A virtual camera model for the camera of the vehicle may be identified. A set of matrices may be generated using the virtual camera model. The set of matrices may be applied to the 3D content to create a 3D world view. The visualization may be generated using the 3D world view as an overlay with the image, and the visualization provides a real-world image from the perspective of the camera of the vehicle with one or more graphical overlays of the 3D content.
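Applying the matrix set to the 3D content could look like the sketch below, assuming a pinhole virtual camera model with a 3x4 extrinsic (world to camera) and a 3x3 intrinsic matrix; the abstract does not commit to a particular camera model:

```python
import numpy as np

def intrinsic_matrix(fx, fy, cx, cy):
    """Pinhole intrinsics for the virtual camera model."""
    return np.array([[fx, 0.0, cx],
                     [0.0, fy, cy],
                     [0.0, 0.0, 1.0]])

def project_to_image(points_world, extrinsic, intrinsic):
    """Apply the matrix set to 3D content: world -> camera with the
    extrinsic, camera -> pixels with the intrinsics and a perspective
    divide, yielding overlay coordinates on the camera image."""
    pts_h = np.hstack([points_world, np.ones((len(points_world), 1))])
    cam = extrinsic @ pts_h.T          # 3 x N camera-frame points
    pix = intrinsic @ cam              # homogeneous pixel coordinates
    return (pix[:2] / pix[2]).T        # N x 2 pixel positions
```

With matrices derived from the real camera's calibration, the projected 3D content lands on the image exactly where the camera would see it, which is what makes the overlay read as a single perspective.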
METHOD FOR DISPLAYING AR NAVIGATION SCREEN, AND AR NAVIGATION SYSTEM
The present specification discloses a method for displaying an AR navigation screen, and an AR navigation system. The method for displaying an AR navigation screen according to the present specification may comprise the steps of: overlapping and displaying images of the surroundings of a vehicle and a plurality of AR images corresponding to the images of the surroundings; generating a window for separately displaying a first area including AR images that overlap at least one of the plurality of AR images; and overlapping and displaying the window and the images of the surroundings. By presenting overlapping AR images in a separate window when they are difficult to recognize, the invention improves user convenience and promotes safe vehicle operation.
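Identifying the first area to pull out into a window could reduce to detecting overlapping AR image rectangles, as in this sketch; representing AR images as axis-aligned (x, y, w, h) rectangles and windowing the first overlapping pair are assumptions for illustration:

```python
def rects_overlap(a, b):
    """Axis-aligned overlap test for two AR image rectangles (x, y, w, h)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def area_needing_window(ar_rects):
    """Return the bounding region of the first pair of overlapping AR
    images (the area to display separately in a window), or None when
    nothing overlaps."""
    for i, a in enumerate(ar_rects):
        for b in ar_rects[i + 1:]:
            if rects_overlap(a, b):
                x1, y1 = min(a[0], b[0]), min(a[1], b[1])
                x2 = max(a[0] + a[2], b[0] + b[2])
                y2 = max(a[1] + a[3], b[1] + b[3])
                return (x1, y1, x2 - x1, y2 - y1)
    return None
```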