Patent classifications
G06T2200/24
Messaging system with augmented reality makeup
Systems, methods, and computer-readable media for a messaging system with augmented reality (AR) makeup are presented. Methods include processing a first image to extract a makeup portion representing the makeup in the first image, and training a neural network to process images of people to add AR makeup representing that makeup. The methods may further include receiving, via a messaging application implemented by one or more processors of a user device, input that indicates a selection to add the AR makeup to a second image of a second person. The methods may further include processing the second image with the neural network to add the AR makeup to the second image and causing the second image with the AR makeup to be displayed on a display device of the user device.
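An illustrative sketch of the extract-and-transfer idea (hypothetical, not the patented method): the "makeup portion" is taken as the per-pixel difference between a made-up face image and a bare reference, and those deltas are transferred to a second image. The patent trains a neural network for this step; a simple delta transfer stands in for that model here.

```python
# Hypothetical stand-in for the neural network: extract makeup as per-pixel
# deltas, then transfer those deltas to a second image.
THRESHOLD = 0.05  # assumed: minimum pixel change that counts as makeup

def extract_makeup(bare, made_up):
    """Return a sparse map {pixel_index: delta} where makeup was applied."""
    return {i: m - b for i, (b, m) in enumerate(zip(bare, made_up))
            if abs(m - b) > THRESHOLD}

def apply_makeup(image, makeup):
    """Add the extracted deltas to a second image, clamping to [0, 1]."""
    out = list(image)
    for i, delta in makeup.items():
        out[i] = min(1.0, max(0.0, out[i] + delta))
    return out

bare    = [0.8, 0.8, 0.8, 0.8]   # grayscale face pixels (toy 1-D "image")
made_up = [0.8, 0.5, 0.5, 0.8]   # lips darkened by lipstick
second  = [0.7, 0.7, 0.7, 0.7]

makeup = extract_makeup(bare, made_up)
print(apply_makeup(second, makeup))  # deltas applied at pixels 1 and 2
```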
Systems and methods for detecting and correcting data density during point cloud generation
A point cloud capture system is provided to detect and correct data density during point cloud generation. The system obtains data points that are distributed within a space and that collectively represent one or more surfaces of an object, scene, or environment. The system computes the different densities with which the data points are distributed in different regions of the space, and presents an interface with a first representation for a first region of the space in which a first subset of the data points are distributed with a first density, and a second representation for a second region of the space in which a second subset of the data points are distributed with a second density.
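A minimal sketch of the density computation the abstract describes, under the assumption that "regions of the space" are cells of a uniform grid: points are bucketed into cells and counted, so regions whose counts differ can be given different representations in the interface.

```python
# Bucket 3-D points into uniform grid cells and count points per cell.
from collections import Counter

CELL = 1.0  # assumed cell size of the uniform grid

def region_of(point):
    """Map a point to the integer grid cell (region) that contains it."""
    x, y, z = point
    return (int(x // CELL), int(y // CELL), int(z // CELL))

def densities(points):
    """Count how many points fall in each region of the space."""
    return Counter(region_of(p) for p in points)

points = [(0.1, 0.1, 0.0), (0.4, 0.9, 0.0), (0.7, 0.2, 0.0),  # dense region
          (2.5, 2.5, 0.0)]                                     # sparse region
d = densities(points)
print(d[(0, 0, 0)], d[(2, 2, 0)])  # → 3 1
```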
Systems and methods for gamification of instrument inspection and maintenance
Disclosed is a gamification system for overlaying user-controlled graphical targeting elements over a real-time video feed of an instrument being inspected, and providing interactive controls for firing virtual weapons or other graphical indicators to designate and/or record the presence of contaminants, defects, and/or other issues at specific locations within or on the instrument. The system may receive and present images of the instrument under inspection in a graphical user interface (“GUI”). The system may receive user input that tags a particular region of a particular image with an issue identifier, and may generate a visualization that is presented in conjunction with the particular image in the GUI in response to receiving the input. The visualization corresponds to firing of a virtual weapon and other gaming visuals associated with tagging the particular region of the particular image with the issue identifier.
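A toy sketch of the tagging flow (all names are assumptions): user input at an image coordinate is recorded as an issue tag, and a "virtual weapon fired" visualization event is returned for the GUI to render in conjunction with the image.

```python
# Record an issue tag at a clicked region and emit the gaming visualization.
tags = []

def tag_region(image_id, x, y, issue):
    """Record the issue at (x, y) and return the visualization to render."""
    tags.append({"image": image_id, "pos": (x, y), "issue": issue})
    return {"effect": "weapon_fire", "at": (x, y), "label": issue}

viz = tag_region("frame_042", 120, 80, "contaminant")
print(viz["effect"], len(tags))  # → weapon_fire 1
```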
Customized virtual store
Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing a program and a method for performing operations comprising: receiving, from a client device of a first user, a request from the first user to engage in an AR shopping experience curated by a store; identifying a first real-world product available for purchase from the store; receiving an image of a real-world environment of the first user; generating a first AR item that represents the first real-world product; comparing visual attributes of the first AR item to physical layouts of a plurality of real-world objects depicted in the image of the real-world environment; and overlaying the first AR item on a first real-world object of the plurality of real-world objects in the image responsive to comparing the visual attributes of the first AR item to the physical layouts of the plurality of real-world objects.
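A hypothetical sketch of the comparison step: the AR item's visual attributes (reduced here to a width and height footprint) are compared against the layout of each detected real-world object, and the item is overlaid on the closest fit.

```python
# Match an AR item to the real-world object whose layout fits it best.
def best_anchor(ar_item, objects):
    """Return the object whose footprint differs least from the AR item's."""
    def mismatch(obj):
        return abs(obj["w"] - ar_item["w"]) + abs(obj["h"] - ar_item["h"])
    return min(objects, key=mismatch)

lamp  = {"name": "lamp",  "w": 0.3, "h": 1.2}
table = {"name": "table", "w": 1.5, "h": 0.8}
ar_vase = {"w": 0.25, "h": 1.0}
print(best_anchor(ar_vase, [lamp, table])["name"])  # → lamp
```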
Single-pass object scanning
Various implementations disclosed herein include devices, systems, and methods that generate a three-dimensional (3D) model based on a selected subset of captured images and depth data corresponding to each image of the subset. For example, an example process may include acquiring sensor data during movement of the device in a physical environment including an object, the sensor data including images of the physical environment captured via a camera on the device; selecting a subset of the images based on assessing the images with respect to motion-based defects using device motion and depth data; and generating a 3D model of the object based on the selected subset of the images and the depth data corresponding to each image of the selected subset.
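A sketch of the frame-selection step, with an assumed threshold: frames captured while the device was moving quickly are likely to contain motion blur, so only frames whose recorded motion stays below a limit are kept for reconstruction.

```python
# Keep only frames whose motion-based defect score is acceptable.
MAX_MOTION = 0.5  # assumed device-motion magnitude above which blur is likely

def select_frames(frames):
    """Filter out frames likely to contain motion-based defects."""
    return [f for f in frames if f["motion"] <= MAX_MOTION]

frames = [{"id": 0, "motion": 0.1},
          {"id": 1, "motion": 0.9},   # fast pan → rejected
          {"id": 2, "motion": 0.3}]
print([f["id"] for f in select_frames(frames)])  # → [0, 2]
```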
Augmented reality placement for user feedback
Methods and systems are provided for generating augmented reality (AR) scenes that include one or more artificial intelligence elements (AIEs) rendered as visual objects. The method includes generating an AR scene for rendering on a display; the AR scene includes a real-world space and virtual objects projected into the real-world space. The method includes analyzing a field of view into the AR scene; the analyzing is configured to detect an action by a hand of the user reaching into the AR scene. The method includes generating one or more AIEs rendered as virtual objects in the AR scene, each AIE being configured to provide a dynamic interface that is selectable by a gesture of the hand of the user. In one embodiment, each of the AIEs is rendered proximate to a real-world object present in the real-world space; the real-world object is located in the direction in which the hand of the user is detected to be reaching when the user makes the action.
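A hypothetical sketch of the placement decision: the AIE is rendered next to the real-world object lying most directly along the hand's reach direction, chosen here by the highest cosine similarity between the reach vector and each object's bearing.

```python
# Pick the real-world object the hand is reaching toward (2-D toy case).
import math

def direction(vec):
    """Normalize a 2-D vector to unit length."""
    n = math.hypot(*vec)
    return (vec[0] / n, vec[1] / n)

def reached_object(hand_dir, objects):
    """Pick the object whose bearing best matches the reach direction."""
    d = direction(hand_dir)
    def alignment(obj):
        o = direction(obj["pos"])
        return d[0] * o[0] + d[1] * o[1]   # cosine similarity
    return max(objects, key=alignment)

cup  = {"name": "cup",  "pos": (1.0, 0.1)}
book = {"name": "book", "pos": (0.1, 1.0)}
print(reached_object((1.0, 0.0), [cup, book])["name"])  # → cup
```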
Systems, devices, and methods for in-field diagnosis of growth stage and crop yield estimation in a plant area
Methods, devices, and systems may be utilized for detecting one or more properties of a plant area and generating a map of the plant area indicating at least one such property. The system comprises an inspection system associated with a transport device, the inspection system including one or more sensors configured to generate data for a plant area, including capturing at least 3D image data and 2D image data and generating geolocational data. A datacenter is configured to: receive the 3D image data, 2D image data, and geolocational data from the inspection system; correlate the 3D image data, 2D image data, and geolocational data; and analyze the data for the plant area. A dashboard is configured to display a map with icons at the corresponding geolocations, along with the image data and the analysis.
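An assumed sketch of the correlation step: 3D scans, 2D images, and GPS fixes arrive as separate streams tagged with capture timestamps, and the datacenter joins them into per-location records for the dashboard map. The field names are illustrative, not from the patent.

```python
# Join 3-D, 2-D, and geolocation streams on their shared capture timestamp.
def correlate(scans_3d, images_2d, geo):
    """Merge the three streams into one record per timestamp."""
    by_t = {g["t"]: {"geo": g["pos"]} for g in geo}
    for s in scans_3d:
        by_t.setdefault(s["t"], {})["scan"] = s["data"]
    for im in images_2d:
        by_t.setdefault(im["t"], {})["image"] = im["data"]
    return by_t

records = correlate([{"t": 1, "data": "cloud_a"}],
                    [{"t": 1, "data": "rgb_a"}],
                    [{"t": 1, "pos": (45.0, -93.0)}])
print(records[1])
```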
Object time series system and investigation graphical user interface
Methods and systems for presenting time series for analysis. A method includes presenting a first visualization of summary information for an initial data set of a plurality of batches; presenting a filtered data set of the initial data set having a first batch identifier associated with a first batch and a second batch identifier associated with a second batch; and executing a time series connector, including transmitting a request to a time series application, the request comprising the first batch identifier, the second batch identifier, and time series configuration data. The method further includes causing presentation of a user interface comprising a chart that includes a first plot of first time series data for the first batch identifier and a second plot of second time series data for the second batch identifier, the chart being configured according to the time series configuration data, with the first plot aligned to the second plot.
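A sketch of one plausible alignment step (the data shape is an assumption): each batch's time series is re-indexed to elapsed time from its own start, so a plot for one batch lines up against the plot for another in the chart.

```python
# Re-index each batch's series to elapsed time so the two plots align.
def align(series):
    """Shift a [(timestamp, value), ...] series to start at t = 0."""
    t0 = series[0][0]
    return [(t - t0, v) for t, v in series]

batch_a = [(100, 20.0), (160, 21.5), (220, 23.0)]
batch_b = [(500, 19.8), (560, 21.4), (620, 22.9)]
print(align(batch_a)[0], align(batch_b)[0])  # → (0, 20.0) (0, 19.8)
```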
Systems, methods, and media for displaying interactive augmented reality presentations
Systems, methods, and media for displaying interactive augmented reality presentations are provided. In some embodiments, a system comprises: a plurality of head mounted displays, a first head mounted display comprising a transparent display; and at least one processor, wherein the at least one processor is programmed to: determine that a first physical location of a plurality of physical locations in a physical environment of the first head mounted display is located closest to the first head mounted display; receive first content comprising a first three dimensional model; receive second content comprising a second three dimensional model; present, using the transparent display, a first view of the first three dimensional model at a first time; and present, using the transparent display, a first view of the second three dimensional model at a second time subsequent to the first time based on one or more instructions received from a server.
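A hypothetical sketch of the "closest physical location" determination: given the headset's tracked position, pick the predefined presentation location with the smallest Euclidean distance, which then drives which content is presented.

```python
# Determine which predefined physical location the headset is nearest to.
import math

def closest_location(headset_pos, locations):
    """Return the physical location nearest to the head mounted display."""
    return min(locations, key=lambda loc: math.dist(headset_pos, loc["pos"]))

locations = [{"name": "station_1", "pos": (0.0, 0.0)},
             {"name": "station_2", "pos": (5.0, 5.0)}]
print(closest_location((1.0, 0.5), locations)["name"])  # → station_1
```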
System and method for advanced data management with video enabled software tools for video broadcasting environments
A video editing software tools platform utilizes a video display to provide access to specific video editing software tools, such as video-oriented applications or widgets, that can assist members of a video broadcasting team, such as a camera operator or video editor, with a video broadcast feed. Various video editing software tools can provide features and functions that add visual context to video data presented in the image stream from the video camera and provide archived information pertaining to the same. Various embodiments relate to systems and methods for simultaneously switching input image streams to output devices while providing optional image processing functions on the image streams. Certain embodiments may enable multiple users/viewers to collaboratively control such systems and methods.
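A minimal sketch (the structure is assumed) of the switching layer: a routing table maps each output device to one of several input image streams, with an optional per-route processing function applied to frames on the way out.

```python
# Route input image streams to output devices with optional processing.
routes = {}  # output_id -> (input_id, processing fn or None)

def switch(output_id, input_id, process=None):
    """Point an output device at an input stream."""
    routes[output_id] = (input_id, process)

def deliver(frames_by_input, output_id):
    """Fetch the routed input's frame, applying any processing function."""
    input_id, process = routes[output_id]
    frame = frames_by_input[input_id]
    return process(frame) if process else frame

switch("monitor_1", "cam_a")
switch("monitor_2", "cam_a", process=str.upper)  # stand-in for a filter step
frames = {"cam_a": "frame_data"}
print(deliver(frames, "monitor_1"), deliver(frames, "monitor_2"))
```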