Patent classifications
G06T19/00
Map display method, device, storage medium and terminal
The present disclosure describes a map display method performed at a terminal. The method includes: obtaining a real scene image of the current location and target navigation data for navigating from the current location to a destination; determining, according to the current location and the target navigation data, virtual navigation prompt information to be overlaid on the real scene image; determining, according to a device configuration parameter of the target device capturing the real scene image, a first location at which the virtual navigation prompt information is to be overlaid in the real scene image; performing verification detection on the current device configuration parameter; and overlaying the virtual navigation prompt information at the first location when the current device configuration parameter passes the verification detection. By applying AR technology to the navigation field, the present disclosure makes map display manners more diverse.
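The two computational steps above (deriving an overlay position from the capturing device's configuration parameter, then sanity-checking that parameter before drawing) can be sketched with a standard pinhole projection. This is a non-authoritative illustration: the function names, parameter values, and the plausibility check are assumptions, not the disclosure's actual method.

```python
def project_to_image(point_cam, fx, fy, cx, cy):
    """Project a 3D point (camera coordinates, z forward, metres) to pixels
    using pinhole intrinsics (the 'device configuration parameter')."""
    x, y, z = point_cam
    if z <= 0:
        return None  # behind the camera; nothing to overlay
    return (fx * x / z + cx, fy * y / z + cy)

def intrinsics_plausible(fx, fy, width, height):
    """Crude 'verification detection': only overlay if the calibration
    parameters fall in a physically plausible range."""
    return 0 < fx < 10 * width and 0 < fy < 10 * height

# A navigation arrow 2 m ahead and 0.5 m below eye level:
pos = project_to_image((0.0, 0.5, 2.0), fx=800, fy=800, cx=640, cy=360)
ok = intrinsics_plausible(800, 800, 1280, 720)
```

Overlaying only after the parameter check mirrors the abstract's gating of the overlay on successful verification.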
Waypoint creation in map detection
An augmented reality (AR) device can be configured to generate a virtual representation of a user's physical environment. The AR device can capture images of the user's physical environment to generate a mesh map. The AR device can project graphics at designated locations on a virtual bounding box to guide the user to capture images of the user's physical environment. The AR device can provide visual, audible, or haptic guidance to direct the user of the AR device to look toward waypoints to generate the mesh map of the user's environment.
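The guidance idea above (directing the user to look toward a waypoint so the environment gets captured) can be sketched as a yaw-offset computation; the function names, sign convention, and cue labels are hypothetical choices for illustration only.

```python
import math

def yaw_to_waypoint(head_pos, head_yaw_deg, waypoint):
    """Signed yaw (degrees, + meaning turn right under this convention)
    needed to face a waypoint on the horizontal plane."""
    dx = waypoint[0] - head_pos[0]
    dz = waypoint[1] - head_pos[1]
    target_yaw = math.degrees(math.atan2(dx, dz))
    # wrap the difference into (-180, 180]
    return (target_yaw - head_yaw_deg + 180) % 360 - 180

def guidance_cue(delta_deg, fov_deg=60):
    """Pick a cue: draw the waypoint graphic if it is already in view,
    otherwise prompt the user (visually/audibly/haptically) to turn."""
    if abs(delta_deg) <= fov_deg / 2:
        return "highlight"
    return "turn_right" if delta_deg > 0 else "turn_left"
```

Once every waypoint has been looked at, enough images exist to build the mesh map of the environment.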
Methods and apparatus for encoding, communicating and/or using images
Methods and apparatus for capturing, communicating, and using image data to support virtual reality experiences are described. Images, e.g., frames, are captured at a high resolution but a lower frame rate than is used for playback. Interpolation is applied to the captured frames to generate interpolated frames, and the captured frames, along with interpolated frame information, are communicated to the playback device. The combination of captured and interpolated frames corresponds to a second frame playback rate that is higher than the image capture rate. The cameras operate at a high image resolution but a slower frame rate than the same cameras could achieve at a lower resolution. Interpolation is performed prior to delivery to the user device, with the segments to be interpolated selected based on motion and/or lens field-of-view (FOV) information. A relatively small amount of interpolated frame data is communicated compared to captured frame data, making efficient use of bandwidth.
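The motion-based selection of segments to interpolate can be sketched minimally as follows; the threshold, the per-pixel averaging, and the function names are illustrative assumptions, not the described encoder.

```python
def select_segments_to_interpolate(motion_per_segment, threshold=0.5):
    """Return indices of segments whose measured motion exceeds a threshold.
    Only those segments receive interpolated-frame data, which keeps the
    interpolated payload small relative to the captured frames."""
    return [i for i, m in enumerate(motion_per_segment) if m > threshold]

def midpoint_interpolate(frame_a, frame_b):
    """Naive temporal interpolation: per-pixel average of two captured
    frames, standing in for a real motion-compensated interpolator."""
    return [(a + b) / 2 for a, b in zip(frame_a, frame_b)]
```

Inserting one interpolated frame between each captured pair would double the effective playback rate relative to the capture rate, as the abstract describes.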
Multiuser augmented reality method
A multiuser, collaborative augmented reality (AR) system employs individual AR devices for viewing real-world anchors, that is, physical models recognizable to the camera and image processing module of the AR device. To mitigate ambiguous configurations in the collaborative mode, each anchor is registered with a server to ensure that only uniquely recognizable anchors are simultaneously active at a particular location. The system permits collaborative AR to span multiple sites by associating a portal with an anchor at each site. The system provides AR renditions of the other participating users, using the location of each user's AR device as a proxy for that user's position. This AR system is particularly well suited for games.
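The server-side uniqueness rule (only uniquely recognizable anchors simultaneously active at one location) can be sketched as a small registry; the class and method names are hypothetical.

```python
class AnchorRegistry:
    """Server-side registry: at any one location, a given anchor signature
    may be active only once, so a recognized anchor is never ambiguous."""

    def __init__(self):
        self.active = set()  # pairs of (location_id, anchor_signature)

    def register(self, location_id, signature):
        key = (location_id, signature)
        if key in self.active:
            return False  # same anchor already active here: would be ambiguous
        self.active.add(key)
        return True

    def release(self, location_id, signature):
        self.active.discard((location_id, signature))
```

Note that the same physical model may be active at two different sites, which is exactly what the portal mechanism spanning multiple sites relies on.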
Head mounted display apparatus
A head mounted display apparatus faithfully expresses occlusion even in binocular AR viewing. The head mounted display apparatus 10 includes a lens 12, a lens 13, a camera 14, a camera 15, and a control processor 16. A CG image for the right eye is displayed on the lens 12, and a CG image for the left eye is displayed on the lens 13. The camera 14 captures an image for the right eye, and the camera 15 captures an image for the left eye. Based on the images captured by the cameras 14 and 15, the control processor 16 generates a CG image for the right eye that expresses the occlusion seen by the right eye and a CG image for the left eye that expresses the occlusion seen by the left eye, and projects the generated CG images onto the lenses 12 and 13, respectively. The center of the lens of the camera 14 is located at the same position as the center of the lens 12, and the center of the lens of the camera 15 is located at the same position as the center of the lens 13.
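Per-eye occlusion handling of this kind amounts to a depth test between the virtual surface and the real scene as seen by that eye's camera. The sketch below is an assumption-laden simplification (1-D pixel rows, known real-scene depth), not the apparatus's actual pipeline.

```python
def composite_with_occlusion(cg_color, cg_depth, real_depth):
    """Per-pixel occlusion for one eye: a CG pixel is drawn only where the
    virtual surface is nearer than the real surface seen by that eye's
    camera. Depths are in metres; None in cg_color means no CG content."""
    out = []
    for color, d_virtual, d_real in zip(cg_color, cg_depth, real_depth):
        if color is not None and d_virtual < d_real:
            out.append(color)  # virtual object in front: show it
        else:
            out.append(None)   # real scene occludes (or no CG): passthrough
    return out
```

Running this once per eye, with each camera co-axial with its lens as the abstract specifies, yields occlusion that is consistent between the two eyes' views.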
CORE MODEL AUGMENTED REALITY
A method of registering geological data at a formation core tracking system includes, at the tracking system: registering a formation core provided within the field of view of an optical imaging system of the tracking system; tracking the orientation and the distance of the formation core relative to the tracking system; obtaining data associated with a first section of the formation core located at a predetermined distance from the tracking system; displaying the data together with an image of the formation core such that an augmented reality image is provided on a display device of the tracking system; changing the distance between the tracking system and the formation core; and updating the displayed data by obtaining data associated with a second section of the formation core located at said predetermined distance from the tracking system.
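The update step above is essentially a lookup of geological data indexed by position along the core: as the tracking system moves, a different section sits at the predetermined distance and its data replaces the display. A minimal sketch, with hypothetical section boundaries and labels:

```python
def section_at(core_sections, distance_m):
    """core_sections: list of (start_m, end_m, data) intervals along the
    core's length. Return the data for the section containing distance_m,
    or None if no section covers that position."""
    for start, end, data in core_sections:
        if start <= distance_m < end:
            return data
    return None

# Illustrative core log (positions and lithologies are made up):
sections = [(0.0, 1.5, "sandstone"), (1.5, 2.2, "shale")]
```

Each change in the tracked distance simply re-invokes the lookup to refresh the augmented reality overlay.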
INFORMATION PROCESSING DEVICE THAT DISPLAYS A VIRTUAL OBJECT RELATIVE TO REAL SPACE
An information processing device including a display unit, a detector, and a first control unit, and a method of using the same. The display unit may be a head-mounted display and is capable of providing the user with a field of view of a real space and a virtual object. The detector detects an azimuth of the display unit around at least one axis, and display of the virtual object is controlled based on the detected azimuth.
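Azimuth-based control of a world-anchored virtual object can be sketched as a wrapped angle difference: the object's on-screen offset is the object's world azimuth minus the display's detected azimuth. The function names and field-of-view value below are illustrative assumptions.

```python
def object_screen_yaw(object_world_yaw_deg, display_azimuth_deg):
    """Yaw of the virtual object relative to the display's current azimuth,
    wrapped into (-180, 180]. A world-fixed object thus appears to stay
    put as the display (e.g., a head-mounted display) turns."""
    return (object_world_yaw_deg - display_azimuth_deg + 180) % 360 - 180

def is_in_view(object_world_yaw_deg, display_azimuth_deg, fov_deg=40):
    """Whether the object falls inside the display's horizontal field of view."""
    return abs(object_screen_yaw(object_world_yaw_deg, display_azimuth_deg)) <= fov_deg / 2
```

Objects outside the field of view need not be rendered at all, which is one natural way to "control display of the virtual object based on the detected azimuth".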
INTELLIGENT SYSTEM FOR CONTROLLING FUNCTIONS IN A COMBAT VEHICLE TURRET
A system for controlling turret functions of a land-based combat vehicle includes: a plurality of image detection sensors for recording sequences of images providing an at least partial view of the 360° environment of the land-based combat vehicle; at least one virtual, augmented, or mixed reality headset to be worn by an operator, the headset presenting the at least partial view of the environment of the land-based combat vehicle on a display, the headset including a direction sensor for tracking the orientation of the headset imparted by movement of the operator's head and eye tracking means for tracking the operator's eye movements; and a control unit including at least one computing unit for receiving as input and processing: images supplied by the plurality of image detection sensors; headset position and orientation data supplied by the direction sensor; and eye position data supplied by the eye tracking means.
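One small piece of the architecture above (routing head orientation to the 360° sensor ring) can be sketched as choosing the sensor whose viewing direction best matches the headset azimuth; the sensor layout and function name are purely hypothetical.

```python
def pick_camera(head_yaw_deg, cameras):
    """cameras: list of (center_azimuth_deg, camera_id) covering 360°.
    Return the id of the image detection sensor whose viewing direction is
    closest (by wrapped angular distance) to the operator's head azimuth."""
    def angular_distance(delta_deg):
        return abs((delta_deg + 180) % 360 - 180)
    return min(cameras, key=lambda c: angular_distance(c[0] - head_yaw_deg))[1]

ring = [(0, "front"), (90, "right"), (180, "rear"), (270, "left")]
```

A fuller control unit would blend adjacent sensor images rather than switch, and would use the eye tracking data for functions such as target designation.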
Method, Apparatus and Computer Program Product for Selecting a Kit of Parts that Traverse an Incline
A method of selecting a plurality of components as a kit of parts that traverse an incline, the method comprising: acquiring measurement information indicative of one or more physical dimensions of the incline S1202; generating a model of the incline in accordance with the measurement information S1204; confirming the model by overlaying the model upon the incline S1206; and selecting, on the basis of the model, a plurality of components, from a predetermined set of components, as a kit of parts for traversing the incline S1208.
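The final selection step (S1208) can be illustrated with a deliberately simple greedy sketch that chooses segment lengths to cover the measured incline; the covering criterion, the reuse of segment types, and the function name are assumptions for illustration, not the claimed selection method.

```python
def select_kit(incline_length_m, segment_lengths_m):
    """Greedy sketch: pick segments (longest first, each type reusable)
    until their combined length covers the incline model's length.
    Returns the chosen lengths, or None if the set cannot cover it."""
    if not segment_lengths_m or max(segment_lengths_m) <= 0:
        return None
    kit, covered = [], 0.0
    for seg in sorted(segment_lengths_m, reverse=True):
        while covered + seg <= incline_length_m:
            kit.append(seg)
            covered += seg
    if covered < incline_length_m:  # top up past the end with the shortest piece
        kit.append(min(segment_lengths_m))
    return kit
```

In practice the model confirmed in S1206 would constrain more than length (gradient, width, load), so the predetermined component set would be filtered on those dimensions first.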