Patent classifications
G06T19/006
Augmented reality placement for user feedback
Methods and systems are provided for generating augmented reality (AR) scenes that include one or more artificial intelligence elements (AIEs) rendered as visual objects in the AR scenes. The method includes generating an AR scene for rendering on a display, the AR scene including a real-world space and virtual objects projected into the real-world space. The method includes analyzing a field of view into the AR scene, the analyzing being configured to detect an action by a hand of the user when reaching into the AR scene. The method includes generating one or more AIEs rendered as virtual objects in the AR scene, each AIE being configured to provide a dynamic interface that is selectable by a gesture of the hand of the user. In one embodiment, each of the AIEs is rendered proximate to a real-world object present in the real-world space, the real-world object being located in the direction in which the hand of the user is detected to be reaching when the user makes the action.
Distinguishing real from virtual objects in immersive reality
Aspects of the subject disclosure may include, for example: a camera positioned to capture image information of an immersive experience presented to one or more users engaged in the immersive experience and located in an immersive experience space; a processing system and a memory that stores executable instructions to facilitate performance of operations including receiving the image information from the camera, detecting objects located in the immersive experience space with the one or more users, the objects including at least one virtual object created by the immersive experience, determining that the at least one virtual object is a projected virtual object of the immersive experience, and generating a signal indicating that the at least one virtual object is a projected virtual object; and a projector, responsive to the signal, to provide a visual indication in the immersive experience space that identifies the projected virtual object as a virtual object to the one or more users engaged in the immersive experience. Other embodiments are disclosed.
Four dimensional energy-field package assembly
A four-dimensional (4D) energy-field package assembly is provided for projecting energy fields according to a 4D coordinate function. The 4D energy-field package assembly includes an energy-source system having energy sources capable of providing energy to energy locations, and energy waveguides for directing energy from the energy locations, from one side of each energy waveguide to the other side, along energy propagation paths.
AR-based supplementary teaching system for guzheng and method thereof
An AR-based supplementary teaching system for guzheng, and a method thereof, are provided. The system includes an AR device, a data processing device, and positioning devices for key positions. The data processing device is signal-connected to the AR device, and the positioning devices are installed on the guzheng codes (bridges) of the guzheng, with the positioning devices corresponding to the guzheng codes one by one. The AR device is used to obtain real scene data. The data processing device is used to identify the guzheng and the positioning devices and to generate string distribution data; it is also used to obtain operation instructions based on user actions, execute the operation instructions, and generate virtual data. The AR device is further used to superimpose and display the virtual data and the real scene data based on the string distribution data.
Wireless devices with flexible monitors and keyboards
A portable device (e.g., a wireless device such as a cell phone) is provided with a flexible keyboard and a flexible display screen. Such flexible components may be stored in the housing of the portable device when not in use, and may be expanded from the housing when utilized by a user. Non-flexible display and input components may be provided on the exterior of the portable device such that the device may be used, in some form, while the flexible components are stored. In one embodiment, a portion of the flexible display (or flexible keyboard) may be utilized while the flexible display (or flexible keyboard) is stored in the housing.
Smart glasses including object distance adjustment driving gear
Smart glasses are provided in the present disclosure, including: a housing; a fixing bracket; a left lens barrel and a right lens barrel; an object distance adjustment mechanism including a left-eye object distance adjustment gear, a right-eye object distance adjustment gear, an object distance adjustment driving gear engaged with both the left-eye and right-eye object distance adjustment gears, and a driving motor that drives the object distance adjustment driving gear to rotate and is capable of moving back and forth on the fixing bracket along a second direction; a pupil distance adjustment mechanism connected to at least one lens barrel and configured to drive the lens barrel to move in a first direction when an external force is applied; and a linkage member arranged between the at least one lens barrel and the driving motor.
Systems, methods, and media for displaying interactive augmented reality presentations
Systems, methods, and media for displaying interactive augmented reality presentations are provided. In some embodiments, a system comprises: a plurality of head mounted displays, a first head mounted display comprising a transparent display; and at least one processor, wherein the at least one processor is programmed to: determine that a first physical location of a plurality of physical locations in a physical environment of the head mounted display is located closest to the head mounted display; receive first content comprising a first three dimensional model; receive second content comprising a second three dimensional model; present, using the transparent display, a first view of the first three dimensional model at a first time; and present, using the transparent display, a first view of the second three dimensional model at a second time subsequent to the first time based on one or more instructions received from a server.
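The "closest physical location" determination in this abstract reduces to a nearest-neighbor pick over station coordinates. The sketch below is illustrative only (the patent does not specify the metric or data layout); a 2D Euclidean distance and the name `closest_location` are assumptions.

```python
import math

def closest_location(hmd_pos: tuple[float, float],
                     locations: dict[str, tuple[float, float]]) -> str:
    """Return the named physical location nearest to the headset position."""
    def dist(p: tuple[float, float], q: tuple[float, float]) -> float:
        return math.hypot(p[0] - q[0], p[1] - q[1])
    return min(locations, key=lambda name: dist(hmd_pos, locations[name]))
```

Content for the selected location would then be presented in the order dictated by the server's instructions, as the abstract describes.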
Device and method for generating dynamic virtual contents in mixed reality
Dynamic virtual content(s) to be superimposed on a representation of a real 3D scene complies with a scenario defined before run-time and involving real-world constraints (23). Real-world information (22) is captured in the real 3D scene and the scenario is executed at runtime (14) in the presence of the real-world constraints. When the real-world constraints are not identified (12) from the real-world information, a transformation of the representation of the real 3D scene to a virtually adapted 3D scene is carried out (13) before executing the scenario, so that the virtually adapted 3D scene fulfills those constraints, and the scenario is executed in the virtually adapted 3D scene instead of the real 3D scene. Application to mixed reality.
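The control flow of this abstract (check constraints, virtually adapt the scene only when they are missing, then run the scenario) can be outlined as follows. All function names here are hypothetical placeholders for the identification, adaptation, and execution steps the patent describes abstractly.

```python
from typing import Callable, Sequence, TypeVar

Scene = TypeVar("Scene")

def run_scenario(scene: Scene,
                 constraints: Sequence[str],
                 identify: Callable[[Scene, str], bool],
                 adapt: Callable[[Scene, list[str]], Scene],
                 execute: Callable[[Scene], Scene]) -> Scene:
    """Run the scenario in the real scene if its constraints are identified;
    otherwise transform the scene into a virtually adapted one first."""
    missing = [c for c in constraints if not identify(scene, c)]
    if missing:
        scene = adapt(scene, missing)  # virtually adapted 3D scene
    return execute(scene)
```

With a toy scene represented as a set of available props, `adapt` would inject virtual stand-ins for the unmet constraints before execution.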
Apparatus and methods for augmented reality vehicle condition inspection
Methods, apparatus, systems and articles of manufacture are disclosed for augmented reality vehicle condition inspection. An example apparatus disclosed herein includes a location analyzer to determine whether a camera is at an inspection location and directed towards a first vehicle in an inspection profile, the inspection location corresponding to a location of the camera relative to the first vehicle, an interface generator to generate an indication on a display that the camera is at the inspection location, the indication associated with an inspection image being captured, and an image analyzer to compare the inspection image captured at the inspection location with a reference image taken of a reference vehicle of a same type as the first vehicle, and determine a vehicle part condition or a vehicle condition based on the comparison of the inspection image and the reference image.
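The image-analyzer step (comparing an inspection image against a reference image of the same vehicle type and deriving a condition) can be illustrated with a naive per-pixel comparison. This is a sketch under stated assumptions, not the patent's analyzer: images are taken to be equal-sized grayscale grids, and the `threshold` value is arbitrary.

```python
def damage_score(inspection: list[list[int]],
                 reference: list[list[int]]) -> float:
    """Mean absolute per-pixel difference between two equal-sized
    grayscale images; higher means more deviation from the reference."""
    total = count = 0
    for insp_row, ref_row in zip(inspection, reference):
        for a, b in zip(insp_row, ref_row):
            total += abs(a - b)
            count += 1
    return total / count

def assess_condition(inspection: list[list[int]],
                     reference: list[list[int]],
                     threshold: float = 10.0) -> str:
    """Flag the vehicle part when it deviates too far from the reference."""
    return "damaged" if damage_score(inspection, reference) > threshold else "ok"
```

A production analyzer would register the images first (which is why the claimed apparatus fixes the camera's inspection location relative to the vehicle) and use far more robust comparison than raw pixel differences.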
Eye image selection
Systems and methods for eye image set selection, eye image collection, and eye image combination are described. Embodiments of the systems and methods for eye image set selection can include comparing a determined image quality metric with an image quality threshold to identify an eye image passing the threshold, and selecting, from a plurality of eye images, a set of eye images that passes the image quality threshold.
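The selection described above (score each eye image with a quality metric, keep those passing a threshold) can be sketched directly. The metric itself is left as a caller-supplied function, since the abstract does not fix one; the `k` cap on the returned set is an added illustrative parameter.

```python
from typing import Callable, TypeVar

Image = TypeVar("Image")

def select_eye_images(images: list[Image],
                      quality_metric: Callable[[Image], float],
                      threshold: float,
                      k: int = 3) -> list[Image]:
    """Return up to k eye images whose quality metric passes the
    threshold, best-scoring first."""
    scored = [(quality_metric(img), img) for img in images]
    passing = [(score, img) for score, img in scored if score >= threshold]
    passing.sort(key=lambda pair: pair[0], reverse=True)
    return [img for _, img in passing[:k]]
```

In practice the metric might measure sharpness or iris visibility; here any callable returning a float works, which keeps the selection logic independent of the scoring choice.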