Patent classifications
G06T2210/62
Spatial location presentation in head worn computing
Aspects of the present invention relate to the presentation of digital content, in a see-through display, representing a known location in an environment proximate to a head worn computer. Embodiments may involve a first wearable head device configured to be worn by a first person. The first wearable head device may comprise a see-through display. One or more processors may be configured for determining a first geo-spatial location of the first wearable head device and receiving a second geo-spatial location of a second wearable head device configured to be worn by a second person. The see-through display may be configured for presenting virtual content on the see-through display at a location associated with the second geo-spatial location. The virtual content may be aligned with a vector from the first geo-spatial location to the second geo-spatial location.
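The vector alignment described above can be illustrated with a minimal sketch. The function name and the equirectangular approximation are assumptions for illustration, not the patented method: it derives a unit east/north direction from the first wearer's geo-spatial location to the second's, which a renderer could then use to place the virtual content.

```python
import math

def bearing_vector(lat1, lon1, lat2, lon2):
    """Unit (east, north) vector from geo-location 1 to geo-location 2,
    using a local equirectangular approximation (adequate when the two
    wearers are close together)."""
    mean_lat = math.radians((lat1 + lat2) / 2.0)
    east = math.radians(lon2 - lon1) * math.cos(mean_lat)
    north = math.radians(lat2 - lat1)
    norm = math.hypot(east, north)
    if norm == 0.0:
        return (0.0, 0.0)  # co-located: no defined direction
    return (east / norm, north / norm)

# Second wearer due east of the first -> vector points east
print(bearing_vector(37.0, -122.0, 37.0, -121.99))  # → (1.0, 0.0)
```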
Augmented reality system and method for substrates, coated articles, insulating glass units, and/or the like
Certain example embodiments relate to an electronic device, including a user interface, and processing resources including at least one processor and a memory. The memory stores a program executable by the processing resources to simulate a view of an image through at least one viewer-selected product that is virtually interposed between a viewer using the electronic device and the image by performing functionality including: acquiring the image; facilitating viewer selection of the at least one product in connection with the user interface; retrieving display properties associated with the at least one viewer-selected product; generating, for each said viewer-selected product, a filter to be applied to the acquired image based on retrieved display properties; and generating, for display via the electronic device, an output image corresponding to the generated filter(s) being applied to the acquired image. The electronic device in certain example embodiments may be a smartphone, tablet, and/or the like.
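A toy sketch of the "generate a filter from retrieved display properties and apply it to the acquired image" step. The function name and the idea of representing a coated product by per-channel visible transmittance values are assumptions for illustration only; a real embodiment would retrieve measured optical properties for the selected product.

```python
def apply_product_filter(pixels, transmittance):
    """Simulate viewing an image through a viewer-selected glass product
    by scaling each RGB channel by the product's per-channel visible
    transmittance (each value in [0, 1])."""
    tr, tg, tb = transmittance
    return [(int(r * tr), int(g * tg), int(b * tb)) for (r, g, b) in pixels]

# Hypothetical slightly green-tinted low-E coating
image = [(200, 200, 200), (120, 80, 60)]
print(apply_product_filter(image, (0.70, 0.78, 0.72)))
# → [(140, 156, 144), (84, 62, 43)]
```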
Composite imagery rendering in diminished reality environment for medical diagnosis
Techniques for composite imagery rendering in a diminished reality environment for medical diagnosis are provided. In one aspect, a method for composite image rendering in a diminished reality environment for medical diagnosis includes: analyzing medical imagery scans for a patient visiting a medical office for consultation with a physician for a diagnosis; obtaining real-time images of the patient who is physically located in the medical office; creating a composite image of the patient comprising relevant portions of the medical imagery scans combined with the real-time images of the patient; selecting one or more portions of the composite image to diminish for the diagnosis; and rendering the composite image in a diminished reality environment.
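A minimal sketch of the compositing and diminishing steps, assuming pixels as RGB tuples and a boolean mask marking where scan content is overlaid; everywhere the mask is False the region is "diminished" to the real-time image alone. The function name, mask representation, and blend weight are illustrative assumptions, not the patented pipeline.

```python
def composite(real_px, scan_px, mask, alpha=0.6):
    """Blend medical-scan pixels over real-time patient pixels where
    mask is True; elsewhere (diminished regions) keep only the
    real-time image."""
    out = []
    for rp, sp, m in zip(real_px, scan_px, mask):
        if m:
            out.append(tuple(int(alpha * s + (1 - alpha) * r)
                             for r, s in zip(rp, sp)))
        else:
            out.append(rp)
    return out
```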
Vehicle display control device and vehicle display control method for displaying predicted wheel locus
A vehicle display control device includes: a locus determination unit determining a predicted locus of a wheel of a subject vehicle according to a steering angle of the subject vehicle; and a display processing unit displaying an image of a field of view of the subject vehicle, including a vehicle body of the subject vehicle, on a display device. The display processing unit superimposes a guide line indicating the predicted locus determined by the locus determination unit on the image of the field of view regardless of the steering angle of the subject vehicle, while semi-transparently displaying a portion of the guide line that overlaps with the vehicle body, thereby allowing visibility of the vehicle body.
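The locus determination step can be sketched with a standard bicycle model, in which the turning radius is R = wheelbase / tan(steering angle). The function name and parameter values are assumptions for illustration; the patent does not specify a particular vehicle model.

```python
import math

def predicted_locus(steering_angle_deg, wheelbase=2.7, steps=5, step_len=0.5):
    """Predict (x, y) points of the wheel path ahead of the vehicle
    using a simple bicycle model; a straight line when the steering
    angle is zero, an arc of radius wheelbase/tan(angle) otherwise."""
    delta = math.radians(steering_angle_deg)
    if abs(delta) < 1e-9:
        return [(0.0, (i + 1) * step_len) for i in range(steps)]
    radius = wheelbase / math.tan(delta)
    points = []
    for i in range(1, steps + 1):
        theta = (i * step_len) / radius  # arc angle traversed
        points.append((radius * (1 - math.cos(theta)),
                       radius * math.sin(theta)))
    return points
```

A display processing unit could then project these ground-plane points into the camera image and draw the guide line through them.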
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND INFORMATION PROCESSING PROGRAM
An information processing apparatus includes a control unit configured to generate display control information used as information regarding display control of a display image corresponding to scene information indicating a scene of a seminar.
METHOD OF GENERATING MULTI-LAYER REPRESENTATION OF SCENE AND COMPUTING DEVICE IMPLEMENTING THE SAME
The present disclosure relates to the field of artificial intelligence (AI) and neural rendering, and particularly to a method of generating a multi-layer representation of a scene using neural networks trained in an end-to-end fashion and to a computing device implementing the method. The method of generating a multi-layer representation of a scene includes: obtaining a pair of images of the scene, the pair of images comprising a reference image and a source image; performing a reprojection operation on the pair of images to generate a plane-sweep volume; predicting, using a geometry network, a layered structure of the scene based on the plane-sweep volume; and estimating, using a coloring network, color values and opacity values for the predicted layered structure of the scene to obtain the multi-layer representation of the scene; wherein the geometry network and the coloring network are trained in an end-to-end manner.
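Only the reprojection step is simple enough to sketch here; the networks themselves are not. This toy builds a 1-D plane-sweep "volume" for a single scanline, reprojecting the source by the disparity d = focal × baseline / depth at each depth hypothesis. The function name, parameters, and the 1-D simplification are illustrative assumptions; a real implementation warps full images with per-plane homographies.

```python
def plane_sweep_volume(ref, src, depths, focal=100.0, baseline=0.1):
    """Tiny 1-D plane-sweep volume: for each depth hypothesis, shift the
    source scanline by its disparity and stack the result. Pixels with
    the correct depth will line up with the reference scanline."""
    volume = []
    for depth in depths:
        disparity = int(round(focal * baseline / depth))
        if disparity > 0:
            shifted = src[disparity:] + [0] * disparity  # zero-pad edge
        else:
            shifted = src
        volume.append(shifted[:len(ref)])
    return volume
```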
USER INTERFACES FOR MAPS AND NAVIGATION
In some embodiments, an electronic device presents navigation routes from various perspectives. In some embodiments, an electronic device modifies display of representations of (e.g., physical) objects in the vicinity of a navigation route while presenting navigation directions. In some embodiments, an electronic device modifies display of portions of a navigation route that are occluded by representations of (e.g., physical) objects in a map. In some embodiments, an electronic device presents representations of (e.g., physical) objects in maps. In some embodiments, an electronic device presents representations of (e.g., physical) objects in maps in response to requests to search for (e.g., physical) objects.
Image processing device, image processing method, program, and display device
An image processing device includes circuitry configured to perform an effect process on at least one 3D model of a plurality of 3D models generated from a plurality of viewpoint images captured from a plurality of viewpoints at different times.
VOLUMETRIC LIGHTING OF 3D OVERLAYS ON 2D IMAGES
In some examples, one or more three-dimensional (3D) objects may be rendered in relation to a two-dimensional (2D) imaging slice. The 3D object may be rendered such that it casts a colored shadow on the 2D imaging slice. In some examples, the 3D object may be rendered in colors, where different colors indicate the distance of a portion of the 3D object from the 2D imaging slice.
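The distance-to-color mapping can be sketched as a simple linear interpolation; the function name, the red-to-blue ramp, and the distance range are assumptions for illustration, as the patent does not fix a particular color scheme.

```python
def depth_color(distance, max_distance=10.0):
    """Map the distance of a 3D-object point from the 2D imaging slice
    to an RGB color: near points red, far points blue."""
    t = max(0.0, min(1.0, distance / max_distance))  # clamp to [0, 1]
    return (int(255 * (1 - t)), 0, int(255 * t))

print(depth_color(0.0))   # nearest point → (255, 0, 0)
print(depth_color(10.0))  # farthest point → (0, 0, 255)
```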
Methods and systems for providing a notification in association with an augmented-reality view
The present disclosure is directed to providing a notification in association with an augmented reality (AR) view. In particular, one or more computing devices can generate, for display by at least one of the computing device(s), an interface depicting an AR view of at least a portion of a physical real-world environment. In accordance with aspects of the disclosure, based at least in part on detected movement of the at least one of the computing device(s), the computing device(s) can transition amongst multiple different stages of one or more elements included in the interface to notify a viewer of the interface to maintain situational awareness of the physical real-world environment.