Patent classifications
G06T2215/16
Augmented reality smart drawer system and method
Disclosed is a mobile apparatus for monitoring the state of cosmetic items in a drawer or on a shelf. The apparatus includes a camera, a display device, and processing circuitry. The processing circuitry is configured to capture an image of the drawer and its contents using the camera, decode the captured image to identify the cosmetic items contained in the drawer and the respective locations of the identified cosmetic items, and display on the display device a notification of identified cosmetic items that have expired. An augmented reality display device highlights a group of cosmetic items determined to be compatible and within a degree of freshness. The processing circuitry is further configured to display on the mobile display a notification of a cosmetic item to apply, highlight the cosmetic item in the augmented reality display device, and record usage information of the cosmetic item in a database.
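The expiry check and freshness notion described in this abstract could be sketched as follows. The item fields, the period-after-opening shelf-life model, and the notification format are illustrative assumptions, not details from the patent:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class CosmeticItem:
    name: str
    opened_on: date
    shelf_life_days: int   # assumed period-after-opening, e.g. "12M" on the label
    location: tuple        # (row, column) slot decoded from the drawer image

def is_expired(item, today):
    """An item expires once its period-after-opening has elapsed."""
    return today > item.opened_on + timedelta(days=item.shelf_life_days)

def freshness(item, today):
    """Fraction of shelf life remaining, clamped to [0, 1]."""
    used = (today - item.opened_on).days
    return max(0.0, 1.0 - used / item.shelf_life_days)

def expired_notifications(items, today):
    """Notifications of identified items that have expired, with locations."""
    return [f"{i.name} at {i.location} has expired"
            for i in items if is_expired(i, today)]
```

A grouping step for "compatible and within a degree of freshness" would then filter on `freshness(...)` exceeding a threshold before matching items against a compatibility table.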
INFORMATION PROCESSING APPARATUS, SYSTEM, INFORMATION PROCESSING METHOD, AND STORAGE MEDIUM
An information processing apparatus according to one embodiment of the present disclosure obtains image capturing information including information representing a position and an orientation of an image capturing device and movement information representing specific movement of the image capturing device; and generates viewpoint information representing a position of a virtual viewpoint and a view direction from the virtual viewpoint for generating a virtual viewpoint video based on the obtained image capturing information and the obtained movement information in a case where a captured image obtained by the image capturing device and the virtual viewpoint video generated based on a plurality of images are switched.
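One way to read the viewpoint-generation step is that, at the moment of switching, the virtual viewpoint is seeded from the camera's pose and continued along the detected movement. A minimal sketch under that assumption (all names and the movement representation are illustrative):

```python
def virtual_viewpoint_on_switch(cam_pos, cam_dir, movement_vec, step=1.0):
    """When output switches from the captured image to the virtual
    viewpoint video, seed the virtual viewpoint at the camera's current
    position and view direction, displaced along the specific movement
    detected for the image capturing device, so the virtual camera
    appears to continue the real camera's motion."""
    pos = tuple(p + step * m for p, m in zip(cam_pos, movement_vec))
    return {"position": pos, "view_direction": cam_dir}
```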
Augmented reality and virtual reality feedback enhancement system, apparatus and method
- Chandrasekaran Sakthivel
- Michael Apodaca
- Kai Xiao
- Altug Koker
- Jeffery S. Boles
- Adam T. Lake
- Nikos Kaburlasos
- Joydeep Ray
- John H. Feit
- Travis T. Schluessler
- Jacek Kwiatkowski
- James M. Holland
- Prasoonkumar Surti
- Jonathan Kennedy
- Louis Feng
- Barnan Das
- Narayan Biswal
- Stanley J. Baran
- Gokcen Cilingir
- Nilesh V. Shah
- Archie Sharma
- Mayuresh M. Varerkar
Systems, apparatuses and methods may provide a way to render augmented reality and virtual reality (AR/VR) environment information. More particularly, systems, apparatuses and methods may provide a way to selectively suppress and enhance AR/VR renderings of n-dimensional environments. The systems, apparatuses and methods may deepen a user's AR/VR experience by focusing on particular feedback information while suppressing other feedback information from the environment.
Utilizing augmented reality to virtually trace cables
Systems and methods for utilizing Augmented Reality (AR) processes to track cables among a tangled bundle of cables are provided. An AR method, according to one implementation, includes a step of obtaining an initial captured image showing a bundle of cables. The AR method also includes the step of processing the initial captured image to distinguish a selected cable from other cables of the bundle of cables. Also, the AR method includes displaying the initial captured image on a display screen while visually augmenting an image of the selected cable to highlight the selected cable with respect to the other cables.
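The final displaying step — visually augmenting the selected cable while leaving the rest of the bundle untouched — could be sketched as an alpha blend over a per-pixel mask. The mask would come from the image-processing step that distinguishes the selected cable; the colour, alpha, and plain-list image representation here are illustrative assumptions:

```python
HIGHLIGHT = (0, 255, 0)  # assumed highlight colour for the selected cable

def highlight_cable(image, mask, alpha=0.6):
    """Blend a highlight colour into pixels belonging to the selected
    cable; other cables and background are left untouched.
    image: H x W list of (r, g, b) tuples; mask: H x W list of 0/1."""
    out = []
    for row_px, row_m in zip(image, mask):
        out_row = []
        for px, m in zip(row_px, row_m):
            if m:
                out_row.append(tuple(
                    round((1 - alpha) * c + alpha * h)
                    for c, h in zip(px, HIGHLIGHT)))
            else:
                out_row.append(px)
        out.append(out_row)
    return out
```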
Distributed acceleration structures for ray tracing
A path tracing system in which the traversal task is distributed between one global acceleration structure, which is central in the system, and multiple local acceleration structures distributed among cells, each of high locality and autonomous processing. Accordingly, the centrality of the critical acceleration-structure resource is reduced, lessening bottlenecks while improving parallelism.
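The division of labour can be illustrated with a toy two-level traversal: the global structure only routes a ray to candidate cells, and each cell intersects its own primitives autonomously. Representing the local structure as a plain primitive list and pre-sorting cells front-to-back are simplifications for the sketch:

```python
class Cell:
    """A cell holding primitives behind a local acceleration structure
    (modelled here as a plain list for brevity). A primitive is a
    callable (origin, direction) -> hit distance or None."""
    def __init__(self, primitives):
        self.primitives = primitives

    def closest_hit(self, ray_origin, ray_dir):
        # Local, autonomous traversal: nearest positive hit, or None.
        hits = [prim(ray_origin, ray_dir) for prim in self.primitives]
        hits = [t for t in hits if t is not None and t > 0]
        return min(hits, default=None)

def trace(global_cells, ray_origin, ray_dir):
    """Global traversal visits cells front-to-back along the ray; the
    first cell reporting a hit terminates the search, so most local
    structures are never touched and can be processed in parallel."""
    for cell in global_cells:          # assumed pre-sorted front-to-back
        t = cell.closest_hit(ray_origin, ray_dir)
        if t is not None:
            return t
    return None
```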
Head-up display system
A head-up display system mounted on a vehicle, the system including: a head-up display device that displays an image in front of the vehicle; and a forward sensing device that detects an object ahead of the vehicle. The head-up display device includes an image data generation unit and an image display unit, and the image data generated by the image data generation unit includes a constantly displayed object and a real-scene overlaid object. A gyro sensor is installed in the vehicle, and the image data generation unit performs pitching correction on a display position of an object to be displayed, based on angular velocity information for two axial directions. In a case where the vehicle travels on a curve in an inclined state, the pitching correction is suppressed or stopped, and the brightness of the real-scene overlaid object is reduced, or its display is stopped.
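The correction logic could be sketched as below, assuming the two gyro axes map to pitch and roll rates and that a large roll rate indicates the inclined-curve case; the gain and threshold values are illustrative, not from the patent:

```python
def corrected_display_y(base_y, pitch_rate, roll_rate,
                        gain=0.5, roll_threshold=0.2):
    """Shift the overlaid object's vertical display position to cancel
    vehicle pitching, using two-axis angular velocity from the gyro.
    When the roll rate indicates the vehicle is cornering in an
    inclined state, the pitching correction is suppressed, since the
    gyro then mixes roll into the pitch axis and correcting would
    misplace the real-scene overlaid object."""
    if abs(roll_rate) > roll_threshold:
        return base_y  # inclined curve: suppress the correction
    return base_y - gain * pitch_rate
```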
INSURANCE UNDERWRITING AND RE-UNDERWRITING IMPLEMENTING UNMANNED AERIAL VEHICLES (UAVS)
Unmanned aerial vehicles (UAVs) may facilitate insurance-related tasks. UAVs may be actively dispatched to an area surrounding a property and collect data related to the property. A location for an inspection of a property to be conducted by a UAV may be received, and one or more images depicting a view of the location may be displayed via a user interface. Additionally, a geofence boundary may be determined based on an area corresponding to a property boundary, where the geofence boundary represents a geospatial boundary in which to limit flight of the UAV. Furthermore, a navigation route may be determined which corresponds to the geofence boundary for inspection of the property by the UAV, the navigation route having waypoints, each waypoint indicating a location at which the UAV is to obtain drone data. The UAV may be directed around the property using the determined navigation route.
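Constraining the navigation route to the geofence reduces, at its core, to a point-in-polygon test on each waypoint. A minimal sketch using the standard ray-casting algorithm, with the 2-D coordinate representation as a simplifying assumption:

```python
def inside_geofence(point, boundary):
    """Ray-casting point-in-polygon test: True if the waypoint lies
    within the geofence polygon (list of (x, y) vertices)."""
    x, y = point
    inside = False
    n = len(boundary)
    for i in range(n):
        x1, y1 = boundary[i]
        x2, y2 = boundary[(i + 1) % n]
        # Count edges crossed by a ray cast to the right of the point.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def clamp_route(waypoints, boundary):
    """Keep only waypoints inside the geofence so the UAV's navigation
    route never leaves the permitted flight area."""
    return [wp for wp in waypoints if inside_geofence(wp, boundary)]
```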
SPATIALLY-RESOLVED DYNAMIC DIMMING FOR AUGMENTED REALITY DEVICE
Techniques are described for operating an optical system. In some embodiments, light associated with a world object is received at the optical system. Virtual image light is projected onto an eyepiece of the optical system. A portion of a system field of view of the optical system to be at least partially dimmed is determined based on information detected by the optical system. A plurality of spatially-resolved dimming values for the portion of the system field of view may be determined based on the detected information. The detected information may include light information, gaze information, and/or image information. A dimmer of the optical system may be adjusted to reduce an intensity of light associated with the world object in the portion of the system field of view according to the plurality of dimming values.
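How spatially-resolved dimming values might be derived from the detected information can be sketched as below; the hard gaze-radius falloff, the lux normalisation, and the parameter values are illustrative assumptions rather than the patent's method:

```python
def dimming_map(width, height, gaze, ambient_lux, radius=3, max_dim=0.8):
    """Per-pixel dimming values for the portion of the system field of
    view around the gaze point, scaled by ambient light.
    Returns a height x width grid of values in [0, 1]; 0 = no dimming."""
    # Brighter environments call for stronger dimming; clamp to [0, 1].
    ambient_scale = min(1.0, ambient_lux / 10000.0)
    gx, gy = gaze
    dmap = []
    for y in range(height):
        row = []
        for x in range(width):
            dist2 = (x - gx) ** 2 + (y - gy) ** 2
            # Dim only within the gazed-at region (hard falloff for brevity).
            falloff = 1.0 if dist2 <= radius * radius else 0.0
            row.append(max_dim * ambient_scale * falloff)
        dmap.append(row)
    return dmap
```

The dimmer would then attenuate world light per pixel according to this map, leaving the rest of the field of view undimmed.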
Data-driven extraction and composition of secondary dynamics in facial performance capture
A modeling engine generates a prediction model that quantifies and predicts secondary dynamics associated with the face of a performer enacting a performance. The modeling engine generates a set of geometric representations that represents the face of the performer enacting different facial expressions under a range of loading conditions. For a given facial expression and specific loading condition, the modeling engine trains a machine learning model to predict how soft tissue regions of the face of the performer change in response to external forces applied to the performer during the performance. The modeling engine then combines the expression models associated with the different facial expressions to generate the prediction model. The prediction model can be used to predict and remove secondary dynamics from a given geometric representation of a performance, or to generate and add secondary dynamics to such a representation.
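The final add/remove step is symmetric: once the trained model predicts per-vertex secondary-dynamics offsets, composition adds them and extraction subtracts them. A minimal sketch, with the per-vertex offset representation as an assumption (the patent's geometric representations are not specified at this level):

```python
def apply_secondary_dynamics(vertices, predicted_offsets, sign=+1):
    """Compose (sign=+1) or remove (sign=-1) secondary dynamics.
    vertices: list of (x, y, z) tuples for one frame of the geometry;
    predicted_offsets: per-vertex (dx, dy, dz) from the trained model."""
    return [tuple(v + sign * d for v, d in zip(vert, off))
            for vert, off in zip(vertices, predicted_offsets)]
```

Removing previously composed dynamics with `sign=-1` recovers the original geometry, which is what makes the same prediction model usable in both directions.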
Method and system for augmented reality content production based on attribute information application
A method for augmented reality content production based on attribute information application according to an embodiment of the present disclosure, performed by a production application executed by one or more processors of a computing device, comprises: providing a virtual object authoring space, which is a virtual space for authoring a virtual object and includes one or more reference objects; providing a virtual object authoring interface for the virtual object authoring space; generating augmentation relationship attribute information based on a virtual object generated through the provided virtual object authoring interface and at least one reference object of the virtual object authoring space; storing the virtual object together with the generated augmentation relationship attribute information; and displaying the stored virtual object on a reference object in a space other than the virtual object authoring space, based on the augmentation relationship attribute information.
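The last step — re-displaying the stored object on a matching reference object in another space — can be illustrated if we assume the augmentation relationship attribute information is, at minimum, an offset relative to the reference object; the dictionary layout and field names are hypothetical:

```python
def anchor_in_new_space(stored_obj, target_reference):
    """Place a stored virtual object relative to a matching reference
    object in a different space, using the augmentation-relationship
    attributes saved with the object (here, a relative offset)."""
    ox, oy, oz = stored_obj["augmentation_attrs"]["offset_from_reference"]
    rx, ry, rz = target_reference["position"]
    return (rx + ox, ry + oy, rz + oz)
```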