Patent classifications
G02B2027/0141
HEAD UP DISPLAY SYSTEM AND CONTROL METHOD THEREOF, AND VEHICLE
A head-up display system, a control method thereof, and a vehicle are provided. The head-up display system includes an imaging device group, a concave reflective element, a directional imaging device, and a processor. The imaging device group includes at least one single-layer imaging device; the directional imaging device includes a light source device, a direction control element, a diffusing element, and a light modulating layer. The processor is connected to the directional imaging device and to the at least one single-layer imaging device in the imaging device group.
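The abstract is purely structural, so a minimal sketch of the component composition may help; all class and field names below are hypothetical stand-ins for the parts the abstract lists.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class SingleLayerImagingDevice:
        layer_id: int  # one layer in the imaging device group

    @dataclass
    class DirectionalImagingDevice:
        # The four sub-components named in the abstract.
        light_source: str = "light source device"
        direction_control: str = "direction control element"
        diffuser: str = "diffusing element"
        modulator: str = "light modulating layer"

    @dataclass
    class HeadUpDisplaySystem:
        directional: DirectionalImagingDevice
        imaging_group: List[SingleLayerImagingDevice] = field(default_factory=list)

        def processor_connections(self):
            # The processor is connected to the directional imaging device
            # and to each single-layer imaging device in the group.
            return [self.directional, *self.imaging_group]

    hud = HeadUpDisplaySystem(DirectionalImagingDevice(),
                              [SingleLayerImagingDevice(layer_id=0)])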
INFORMATION DISPLAY SYSTEM AND INFORMATION DISPLAY METHOD
An information display system according to the present disclosure includes: a display device provided in a mask, the display device being configured to display a screen viewed by the person wearing the mask; and a display control unit for switching the display mode of the screen.
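The abstract gives no detail beyond mode switching; a minimal sketch of such a display control unit, with hypothetical mode names, might look like this:

    from enum import Enum, auto

    class DisplayMode(Enum):
        OFF = auto()
        TEXT = auto()      # hypothetical modes; the abstract names none
        GRAPHIC = auto()

    class DisplayControlUnit:
        def __init__(self):
            self.mode = DisplayMode.OFF

        def switch(self, mode: DisplayMode) -> DisplayMode:
            self.mode = mode  # the in-mask screen changes what it shows
            return self.mode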
VEHICLE DISPLAY CONTROL DEVICE, VEHICLE DISPLAY DEVICE, VEHICLE DISPLAY CONTROL METHOD AND COMPUTER-READABLE STORAGE MEDIUM
The appearance of content that guides a vehicle's traveling path is improved. A display control ECU shifts a path line, expressed by path line information included in navigation map information, in the vehicle transverse direction in accordance with the distance along that direction between the vehicle's position and the path line, and causes content to be displayed on a HUD at the corresponding position on the shifted path line.
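As a rough illustration of the shifting rule, here is a sketch that offsets a path line in proportion to the vehicle's transverse distance from it; the coordinate frame, the proportional gain, and all names are assumptions, since the abstract does not specify them:

    def shift_path_line(path, vehicle_lateral, gain=1.0):
        # path: (lateral_m, longitudinal_m) points from the map's path line
        # information, in a vehicle-centred frame; vehicle_lateral is the
        # vehicle's transverse position in that frame.
        nearest = min(path, key=lambda p: p[1])         # closest point ahead
        offset = gain * (vehicle_lateral - nearest[0])  # transverse distance
        return [(x + offset, y) for x, y in path]

    # The HUD would then draw guidance content at positions on this line.
    shifted = shift_path_line([(0.5, 5.0), (0.6, 15.0)], vehicle_lateral=0.0)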
Range of motion control in XR applications on information handling systems
More realistic experiences can be provided to a user through a wearable suit. The xR wearable suit may include materials with adjustable characteristics, such as friction, and electronics that control those materials to provide feedback to the wearer. In an xR game, the materials may be used to translate virtual damage into physical constraints on the user. For example, when an avatar is shot in the leg and debilitated, the user's leg motion can be constricted so that the user feels that impairment and stays in sync with the avatar. Examples of such feedback materials include inflating ribs, sheet jamming, and mechanical devices.
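A toy sketch of the damage-to-feedback mapping the example describes; the table, stiffness levels, and function names are invented for illustration:

    # Hypothetical mapping from a damaged body part to a suit mechanism
    # and the maximum constraint level it may apply.
    FEEDBACK = {
        "leg": ("sheet_jamming", 0.8),    # stiffen leg panels
        "torso": ("inflating_ribs", 0.5),
    }

    def apply_damage(body_part, severity):
        mechanism, max_level = FEEDBACK.get(body_part, (None, 0.0))
        if mechanism is None:
            return
        level = min(1.0, severity) * max_level
        print(f"engage {mechanism} on {body_part} at {level:.0%} stiffness")

    apply_damage("leg", severity=1.0)  # avatar shot in the leg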
Wearable device and control method therefor
A wearable device is disclosed. The wearable device comprises a camera, a sensor, a display, a laser projector, and a processor. The processor identifies the user's line of sight based on sensing data obtained by the sensor, identifies location information of at least one object included in an image obtained by the camera, and controls the laser projector to provide, on the display, augmented reality (AR) information related to the object based on the identified line of sight and the object's location information. The laser projector comprises a light emitting element that emits laser light, a focus adjusting element that adjusts the focus of the emitted laser light, and a scanning mirror that controls the scan direction of the focused light. The processor controls a driving state of the focus adjusting element based on the identified line of sight and the object's location information.
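One plausible reading of the focus control is: find the object nearest the user's sight line and drive the focus adjusting element to that object's depth. The geometry and the diopter conversion below are assumptions, not taken from the patent:

    def drive_focus(gaze_dir, objects):
        # gaze_dir: unit 3-vector of the identified sight line.
        # objects: list of (position_xyz, label) pairs from the camera image.
        def cos_to(p):
            norm = sum(c * c for c in p) ** 0.5 or 1.0
            return sum(g * c for g, c in zip(gaze_dir, p)) / norm

        target, _ = max(objects, key=lambda o: cos_to(o[0]))  # gazed object
        depth = sum(c * c for c in target) ** 0.5             # metres
        return 1.0 / max(depth, 0.1)   # focus setting in diopters

    setting = drive_focus((0.0, 0.0, 1.0),
                          [((0.1, 0.0, 2.0), "sign"), ((1.5, 0.0, 2.0), "tree")])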
Mixed-reality surgical system with physical markers for registration of virtual models
An example method includes: obtaining a virtual model of a portion of a patient's anatomy from a virtual surgical plan for an orthopedic joint repair surgical procedure to attach a prosthetic to the anatomy; identifying, based on data obtained by one or more sensors, positions of one or more physical markers positioned relative to the patient's anatomy; and registering, based on the identified positions, the virtual model of the portion of the anatomy with the corresponding observed portion of the anatomy.
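Marker-based registration of this kind is commonly solved as a rigid point-set alignment. The patent does not name a solver, so the sketch below uses the Kabsch algorithm as one standard choice, assuming the model and observed marker positions correspond one-to-one:

    import numpy as np

    def register_markers(model_pts, observed_pts):
        # Returns R, t such that R @ p + t maps virtual-model marker
        # positions onto the observed marker positions (least squares).
        P = np.asarray(model_pts, float)
        Q = np.asarray(observed_pts, float)
        cp, cq = P.mean(axis=0), Q.mean(axis=0)
        H = (P - cp).T @ (Q - cq)               # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))  # avoid reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = cq - R @ cp
        return R, t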
Methods and apparatuses for providing procedure guidance
Apparatuses and methods of operating the same are described. An apparatus includes a display, an input device, and a processing device coupled to the display and the input device. The processing device may send an output to the display, the output including a graphical object associated with a first step of a user-implemented procedure. The processing device may receive an input from the input device indicating an operator's progress on the execution of the first step. The processing device may determine whether the input indicates that the operator has completed the first step, and whether the first step is the final step in the user-implemented procedure. When the input indicates that the operator has completed the first step and the first step is not the final step, the processing device may identify a second step in the procedure.
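The step logic reads as a simple state machine; a sketch, with hypothetical get_input/show callables standing in for the input device and display:

    def run_procedure(steps, get_input, show):
        i = 0
        while i < len(steps):
            show(steps[i])              # graphical object for current step
            progress = get_input()      # operator's progress on this step
            if progress != "done":
                continue                # step not completed; keep showing it
            if i == len(steps) - 1:
                break                   # completed the final step
            i += 1                      # identify the second/next step

    inputs = iter(["working", "done", "done"])
    run_procedure(["Step 1: attach bracket", "Step 2: torque bolts"],
                  get_input=lambda: next(inputs), show=print)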
Systems and methods for controlling a head-up display in a vehicle
Systems and methods for controlling a head-up display (HUD) in a vehicle are disclosed herein. One embodiment deactivates the HUD in response to a command from a driver of the vehicle; assigns a level of urgency to an item of information associated with a current vehicular context; and activates the HUD to display the item of information to the driver when the level of urgency exceeds a predetermined threshold.
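The described embodiment amounts to a threshold check; a minimal sketch, with an invented 0-10 urgency scale and placeholder scoring:

    URGENCY_THRESHOLD = 7  # hypothetical scale 0-10

    class HudController:
        def __init__(self):
            self.active = True

        def driver_command_off(self):
            self.active = False          # deactivate on driver command

        def notify(self, item, context):
            if self.active or self.assess_urgency(item, context) > URGENCY_THRESHOLD:
                self.active = True       # reactivate for urgent items
                print("HUD:", item["text"])

        def assess_urgency(self, item, context):
            # Placeholder: score the item against the current vehicular
            # context (speed, hazards, navigation events, ...).
            return item.get("urgency", 0)

    hud = HudController()
    hud.driver_command_off()
    hud.notify({"text": "collision risk ahead", "urgency": 9}, context={})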
Human-powered advanced rider assistance system
A bicycle system with omnidirectional viewing that uses front-facing, stereoscopic video camera devices and computer vision. The front-facing stereoscopic cameras positioned on the bicycle help identify and classify obstacles and recommend a safe trajectory around them in real time using augmented reality. The bicycle system presents safety-related and guidance-related information.
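The trajectory recommendation step could be as simple as picking the lateral offset with the most clearance from detected obstacles; a toy stand-in (the real system presumably does far more, and every name here is illustrative):

    def recommend_trajectory(obstacles, lane_width=2.0):
        # obstacles: (lateral_m, distance_m, label) from the stereo cameras.
        candidates = [-lane_width / 2, 0.0, lane_width / 2]
        def clearance(c):
            return min((abs(c - x) for x, _, _ in obstacles),
                       default=float("inf"))
        return max(candidates, key=clearance)   # clearest lateral offset

    # A car slightly right of centre and a pothole on the left: steer right.
    print(recommend_trajectory([(0.4, 12.0, "car"), (-0.8, 6.0, "pothole")]))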
METHOD FOR PROVIDING USER INTERFACE THROUGH HEAD MOUNTED DISPLAY USING EYE RECOGNITION AND BIO-SIGNAL, APPARATUS USING SAME, AND COMPUTER READABLE RECORDING MEDIUM
A method for providing a user interface through a head mounted display using eye recognition and bio-signals comprises the steps of: (a) when the user gazes at a particular location on an output screen, moving a cursor to that location by referencing eye information obtained through a camera module from a first eyeball of the user; and (b) when an entity exists at that location, providing detailed selection items corresponding to the entity by referencing movement information obtained through a bio-signal acquisition module from the eyelid corresponding to a second eyeball of the user.
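A sketch of the two steps, assuming screen-space coordinates, a simple hit radius, and that the eyelid movement is a deliberate gesture reported by the bio-signal module; all structures are hypothetical:

    def update_ui(gaze_point, entities, eyelid_moved):
        # (a) the cursor follows the first eye's gaze point
        cx, cy = gaze_point
        # (b) an eyelid gesture on the second eye opens detail items
        for (ex, ey), name, items in entities:
            if abs(ex - cx) < 20 and abs(ey - cy) < 20 and eyelid_moved:
                return name, items     # detailed selection items
        return None, []

    entity = ((100, 120), "mail icon", ["open", "reply", "delete"])
    print(update_ui((105, 118), [entity], eyelid_moved=True))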