Patent classifications
G06T2210/64
GENERATING AND MODIFYING AN ARTIFICIAL REALITY ENVIRONMENT USING OCCLUSION SURFACES AT PREDETERMINED DISTANCES
A method includes generating a depth map of a real environment as seen from a viewpoint, the depth map comprising pixels having corresponding depth values of one or more physical objects. Based on the depth map, a two-dimensional occlusion surface is generated representing at least a visible portion of the one or more physical objects that are located within a predetermined depth range defined relative to the viewpoint. The two-dimensional occlusion surface is posed in a three-dimensional coordinate system such that it is located at a predetermined distance from the viewpoint. The visibility of a virtual object relative to the one or more physical objects is determined by comparing a model of the virtual object with the two-dimensional occlusion surface, and an output image is generated based on the visibility of the virtual object.
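The occlusion test described above can be sketched in a few lines: build a binary 2D occlusion surface from the pixels of the depth map whose values fall inside the predetermined range, pose it at a fixed distance, and hide any virtual fragment that lies behind it. This is a minimal illustration, not the patented implementation; all function names and the toy depth values are assumptions.

```python
import numpy as np

def make_occlusion_surface(depth_map, near, far):
    """Binary 2D occlusion surface: pixels of physical objects
    whose depth falls within [near, far) of the viewpoint."""
    return (depth_map >= near) & (depth_map < far)

def virtual_pixel_visible(mask, u, v, surface_distance, virtual_depth):
    """A virtual fragment at (u, v) is occluded when the occlusion
    surface covers that pixel and is posed closer than the fragment."""
    return not (mask[v, u] and surface_distance < virtual_depth)

# Toy 4x4 depth map (metres from the viewpoint): a near object in the
# upper-left corner, background everywhere else.
depth = np.array([[0.5, 0.5, 5.0, 5.0],
                  [0.5, 0.5, 5.0, 5.0],
                  [5.0, 5.0, 5.0, 5.0],
                  [5.0, 5.0, 5.0, 5.0]], dtype=float)

mask = make_occlusion_surface(depth, near=0.0, far=1.0)
# Surface posed at 1.0 m; a virtual object at 2.0 m is hidden behind it
# where the mask is set, and visible elsewhere.
print(virtual_pixel_visible(mask, 0, 0, 1.0, 2.0))  # occluded -> False
print(virtual_pixel_visible(mask, 3, 3, 1.0, 2.0))  # no surface -> True
```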
TECHNIQUES FOR RE-AGING FACES IN IMAGES AND VIDEO FRAMES
Techniques are disclosed for re-aging images of faces and three-dimensional (3D) geometry representing faces. In some embodiments, an image of a face, an input age, and a target age, are input into a re-aging model, which outputs a re-aging delta image that can be combined with the input image to generate a re-aged image of the face. In some embodiments, 3D geometry representing a face is re-aged using local 3D re-aging models that each include a blendshape model for finding a linear combination of sample patches from geometries of different facial identities and generating a new shape for the patch at a target age based on the linear combination. In some embodiments, 3D geometry representing a face is re-aged by performing a shape-from-shading technique using re-aged images of the face captured from different viewpoints, which can optionally be constrained to linear combinations of sample patches from local blendshape models.
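The core image-space step, combining a model-predicted re-aging delta image with the input image, can be sketched as follows. The delta here is hand-written rather than produced by a trained re-aging model, and the clipping behavior is an assumption for 8-bit images.

```python
import numpy as np

def apply_reaging_delta(input_image, delta_image):
    """Combine a signed per-pixel re-aging delta (in practice predicted
    by a re-aging model from the input age and target age) with the
    input image to obtain the re-aged image."""
    out = input_image.astype(np.int16) + delta_image
    return np.clip(out, 0, 255).astype(np.uint8)  # stay in 8-bit range

# Toy 2x2 grayscale "face" and a delta that darkens one pixel
# (e.g. a deepening wrinkle) and brightens another slightly.
face = np.array([[200, 200], [200, 200]], dtype=np.uint8)
delta = np.array([[-60, 0], [0, 10]], dtype=np.int16)
aged = apply_reaging_delta(face, delta)
print(aged)  # [[140 200] [200 210]]
```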
Location-Specific Three-Dimensional Models Responsive to Location-Related Queries
Generating a location-specific three-dimensional model in response to a location query can provide users with a better understanding of a location through providing better interactivity, better perspective, and better understanding of dimensionality. Generation of the models can be enabled by leveraging a three-dimensional asset database and segmentation methods. The location-specific models can provide further utility by further including situation specific simulated effects, such as simulated weather or traffic.
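The described pipeline, a location query resolved against a three-dimensional asset database, with optional situation-specific simulated effects attached, might be sketched like this. The database contents, field names, and effect labels are all hypothetical.

```python
# Assumed asset database: location id -> segmented 3D asset record.
ASSET_DB = {
    "eiffel-tower": {"mesh": "eiffel.glb", "segments": ["tower", "plaza"]},
}

def model_for_query(query, weather=None):
    """Return a location-specific 3D model for a query, optionally
    augmented with a simulated situation-specific effect."""
    asset = ASSET_DB.get(query)
    if asset is None:
        return None  # no asset for this location
    model = dict(asset)
    if weather:
        model["effects"] = [f"simulated-{weather}"]
    return model

print(model_for_query("eiffel-tower", weather="rain"))
```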
GENERATING AND MODIFYING AN ARTIFICIAL REALITY ENVIRONMENT USING OCCLUSION SURFACES AT PREDETERMINED DISTANCES
A method includes generating a depth map of a real environment as seen from a viewpoint, the depth map comprising pixels having corresponding depth values of one or more physical objects. A first two-dimensional occlusion surface is generated representing at least a visible portion of the one or more physical objects located within a first predetermined depth range defined relative to the viewpoint. A second two-dimensional occlusion surface is generated for a second predetermined depth range whose minimum depth is greater than the maximum depth of the first predetermined depth range. The first and second occlusion surfaces are posed in a three-dimensional coordinate system. The visibility of a virtual object relative to the one or more physical objects is determined by comparing a model of the virtual object with the first and second occlusion surfaces, and an output image is generated based on the visibility of the virtual object.
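The two-surface variant extends the single-surface idea: one occlusion mask per non-overlapping depth range, each posed at its own distance, with a virtual fragment hidden if any closer surface covers its pixel. A minimal sketch, with the posed distances and depth values chosen only for illustration:

```python
import numpy as np

def layered_occlusion_surfaces(depth_map, ranges):
    """One binary 2D occlusion mask per depth range; each range's
    minimum must exceed the previous range's maximum."""
    for (_, hi), (lo2, _) in zip(ranges, ranges[1:]):
        assert lo2 > hi, "later range must start beyond the earlier one"
    return [(depth_map >= lo) & (depth_map < hi) for lo, hi in ranges]

def visible(u, v, virtual_depth, surfaces):
    """A virtual fragment is hidden if any occlusion surface covering
    its pixel is posed closer than the fragment."""
    return all(not (mask[v, u] and dist < virtual_depth)
               for mask, dist in surfaces)

depth = np.array([[0.5, 3.0],
                  [3.0, 9.0]])
near_mask, far_mask = layered_occlusion_surfaces(depth, [(0.0, 1.0), (2.0, 4.0)])

surfaces = [(near_mask, 1.0), (far_mask, 3.5)]  # posed distances (assumed)
print(visible(0, 0, 2.0, surfaces))  # behind near surface  -> False
print(visible(1, 0, 2.0, surfaces))  # in front of far one  -> True
```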
DECLARATIVELY DEFINED USER INTERFACE TIMELINE VIEWS
A device implementing a system to render user interface timeline views for display of dynamic application content includes a processor configured to retrieve a data structure corresponding to user interfaces of an application associated with respective times, and at least one declaratively defined user interface element. The processor is further configured to determine whether a rendering cost of a plurality of the user interfaces complies with an update budget of the application, where the rendering cost includes interpreting the at least one declaratively defined user interface element for the respective times. When the rendering cost is determined to comply, the processor is further configured to render the plurality of the user interfaces in advance of the respective times associated with the plurality of the user interfaces. The processor is further configured to display at least one of the rendered plurality of the user interfaces based on a current time.
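The budget-gated pre-rendering described above can be sketched as: estimate the cost of interpreting the declaratively defined elements for all timeline entries, pre-render only if that cost fits the application's update budget, then at display time pick the most recent pre-rendered view not after the current time. Names, the cost model, and the string stand-in for a rendered view are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class TimelineEntry:
    time: float   # when this user interface should be shown
    element: str  # declaratively defined UI element (string stand-in)

def prerender_timeline(entries, cost_per_entry, update_budget):
    """Pre-render every entry in advance only if the total
    interpretation cost complies with the update budget."""
    total_cost = cost_per_entry * len(entries)
    if total_cost > update_budget:
        return None  # over budget: fall back to on-demand rendering
    return {e.time: f"rendered({e.element})" for e in entries}

def view_for_now(rendered, now):
    """Display the latest pre-rendered view at or before the current time."""
    times = [t for t in rendered if t <= now]
    return rendered[max(times)] if times else None

timeline = [TimelineEntry(0.0, "sunrise"), TimelineEntry(12.0, "noon")]
rendered = prerender_timeline(timeline, cost_per_entry=1.0, update_budget=5.0)
print(view_for_now(rendered, now=13.0))  # rendered(noon)
```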
TEMPLATE-BASED WEATHER DATA QUERIES FOR COMPILING LOCATION-BASED WEATHER MONITORING DATA FOR DEFINED TRANSPORTATION ROUTES
Various embodiments are directed to systems and methods for monitoring and compiling weather information/data for a plurality of identified locations along a route. A central computing entity may store and retrieve a weather information/data inquiry template from memory and populate the template with information/data identifying the locations for which weather information/data is requested. The central computing entity may then transmit the populated inquiry template to one or more weather information/data sources, causing those sources to provide weather information/data to one or more computer processors, and may compile the weather information/data retrieved from the one or more weather information/data sources.
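The template workflow, store an inquiry template, populate it with route locations, send it to each source, and compile the responses, can be sketched as below. The template format, field names, and the stub weather source are hypothetical.

```python
from string import Template

# Assumed inquiry template with a placeholder for route locations.
INQUIRY_TEMPLATE = Template('{"locations": [$locations], "fields": ["temp", "wind"]}')

def populate_template(route_locations):
    """Fill the stored inquiry template with identified locations."""
    quoted = ", ".join(f'"{loc}"' for loc in route_locations)
    return INQUIRY_TEMPLATE.substitute(locations=quoted)

def compile_weather(sources, inquiry):
    """Transmit the populated inquiry to each weather data source
    (stubbed here as callables) and compile the responses."""
    return {name: fetch(inquiry) for name, fetch in sources.items()}

# Stub standing in for a real weather information/data source.
sources = {"source-a": lambda q: {"temp": 21, "wind": 5}}
inquiry = populate_template(["Louisville, KY", "Memphis, TN"])
print(compile_weather(sources, inquiry))
```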
METHOD AND SYSTEM FOR VIRTUAL SENSOR DATA GENERATION WITH DEPTH GROUND TRUTH ANNOTATION
Methods and systems for generating virtual sensor data for developing or testing computer vision detection algorithms are described. A system and a method may involve generating a virtual environment. The system and the method may also involve positioning a virtual sensor at a first location in the virtual environment. The system and the method may also involve recording data characterizing the virtual environment, the data corresponding to information generated by the virtual sensor sensing the virtual environment. The system and the method may further involve annotating the data with a depth map characterizing a spatial relationship between the virtual sensor and the virtual environment.
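Because the environment is virtual, exact per-point distances from the sensor are available for free, which is what makes the depth annotation "ground truth". A minimal sketch, using a flat point list as a stand-in for a full depth image:

```python
import numpy as np

def render_depth_ground_truth(sensor_pos, points):
    """Annotate recorded data with depth ground truth: the Euclidean
    distance from the virtual sensor to each point of the virtual
    environment (known exactly, since the scene is synthetic)."""
    sensor = np.asarray(sensor_pos, dtype=float)
    pts = np.asarray(points, dtype=float)
    return np.linalg.norm(pts - sensor, axis=1)

# Virtual sensor at the origin; two objects in the synthetic scene.
depths = render_depth_ground_truth([0, 0, 0], [[3, 4, 0], [0, 0, 2]])
print(depths)  # [5. 2.]
```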