Patent classifications
G06F3/012
Pilot and passenger seat
The present invention achieves technical advantages as pilot and passenger seating. An aircraft employs a pilot seat comprising a contoured structure having ergonomically formed and padded surfaces, with left and right arm supports that include an articulated control knob, movable along three orthogonal axes and rotatable about a vertical axis to provide one or more steering functions for the aircraft, and a touch-sensitive control surface for controlling one or more power system components. A passenger seat has a contoured structure with ergonomically formed and padded surfaces, a headrest, a seat, and left and right support members adapted to cradle a portion of a passenger's body and support the passenger during travel.
Neural network processing for multi-object 3D modeling
Embodiments are directed to neural network processing for multi-object three-dimensional (3D) modeling. An embodiment of a computer-readable storage medium includes executable computer program instructions for obtaining data from multiple cameras, the data including multiple images, and generating a 3D model for 3D imaging based at least in part on the data from the cameras, wherein generating the 3D model includes one or more of performing processing with a first neural network to determine temporal direction based at least in part on motion of one or more objects identified in an image of the multiple images or performing processing with a second neural network to determine semantic content information for an image of the multiple images.
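The two-branch flow in this abstract (one network for temporal direction from object motion, one for semantic content) can be sketched as a minimal pipeline. The network calls below are hypothetical stubs standing in for trained models; all names, shapes, and the toy heuristics are assumptions, not the patent's method.

```python
# Minimal sketch of the two-branch pipeline: per-camera frames feed a
# temporal-direction stub and a semantic-content stub, whose outputs are
# combined into a 3D-model structure. Stubs are illustrative only.

def temporal_direction_net(prev_frame, frame):
    """Placeholder for the first network: infers temporal direction
    from object motion between consecutive frames."""
    # Toy heuristic: sign of the mean pixel change stands in for learned motion.
    delta = sum(b - a for a, b in zip(prev_frame, frame))
    return "forward" if delta >= 0 else "backward"

def semantic_content_net(frame):
    """Placeholder for the second network: labels image content."""
    return {"objects": ["person"] if max(frame) > 0.5 else []}

def build_3d_model(camera_streams):
    """Combine per-camera temporal and semantic cues into a 3D-model stub."""
    model = {"views": []}
    for cam_id, frames in camera_streams.items():
        direction = temporal_direction_net(frames[0], frames[1])
        semantics = semantic_content_net(frames[1])
        model["views"].append(
            {"camera": cam_id, "temporal": direction, "semantics": semantics}
        )
    return model

streams = {"cam0": [[0.1, 0.2], [0.3, 0.4]], "cam1": [[0.9, 0.8], [0.2, 0.1]]}
print(build_3d_model(streams))
```

A real system would replace both stubs with trained networks and fuse their outputs into actual geometry; the sketch only shows the claimed control flow.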
INFORMATION PROCESSING DEVICE AND INFORMATION PROCESSING METHOD
An information processing device according to the present disclosure includes: an acquisition unit that acquires outline information indicating an outline of a user who makes a body motion; and a specification unit that specifies, among body parts, a main part corresponding to the body motion and a related part, which is to be a target of correction processing of motion information corresponding to the body motion, on the basis of the outline information acquired by the acquisition unit.
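The specification step described here (pick a main part for the body motion, plus a related part targeted by correction) can be illustrated with a toy lookup. The motion labels, part names, and adjacency table below are invented for illustration; the patent's device derives these from the acquired outline information rather than fixed tables.

```python
# Hedged toy sketch: map a recognized body motion to its main part and to the
# kinematically adjacent "related" parts that correction processing targets.
# Both tables are illustrative assumptions, not the patent's actual data.

ADJACENT = {
    "right_hand": ["right_forearm", "right_upper_arm"],
    "left_foot": ["left_lower_leg", "left_upper_leg"],
}
MOTION_TO_MAIN = {"wave": "right_hand", "kick": "left_foot"}

def specify_parts(motion: str):
    """Return (main_part, related_parts) for a body motion."""
    main = MOTION_TO_MAIN[motion]
    return main, ADJACENT[main]

print(specify_parts("wave"))  # ('right_hand', ['right_forearm', 'right_upper_arm'])
```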
DISPLAY DEVICE AND DISPLAY METHOD
The present technology provides a display device capable of appropriately displaying information in a visual field range of a user. The display device includes: a display system configured to display information in a visual field range of a user by irradiating a retina of an eyeball with light using an element integrally provided on the eyeball of the user; a detection system configured to detect a change in an orientation and/or a position of the eyeball; and a control system configured to control a display position and/or a display mode of the information in the visual field range on the basis of a detection result in the detection system.
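Because the display element moves with the eyeball, the control system's job can be pictured as counter-shifting the drawn position against detected eye rotation so that information stays at a world-fixed spot. The geometry and names below are assumptions for illustration, not the patent's control law.

```python
# Minimal control sketch (assumed geometry): shift the in-display draw
# position opposite to the detected eye rotation, so content anchored at
# `anchor_deg` in the visual field stays world-fixed as the eye moves.

def corrected_display_position(anchor_deg, eye_yaw_deg, eye_pitch_deg):
    """Return the in-display position (degrees) that keeps `anchor_deg`
    world-fixed despite the detected eye rotation."""
    ax, ay = anchor_deg
    return (ax - eye_yaw_deg, ay - eye_pitch_deg)

print(corrected_display_position((10.0, 0.0), 4.0, -2.0))  # (6.0, 2.0)
```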
VIRTUAL REALITY SIMULATOR AND VIRTUAL REALITY SIMULATION PROGRAM
A VR (Virtual Reality) simulator projects or displays a virtual space image on a screen installed at a position distant from a user in a real space and not integrally moving with the user. More specifically, the VR simulator acquires a real user position being a position of the user's head in the real space. The VR simulator acquires a virtual user position being a position in a virtual space corresponding to the real user position. Then, the VR simulator acquires the virtual space image by imaging the virtual space by using a camera placed at the virtual user position in the virtual space, based on virtual space configuration information indicating a configuration of the virtual space. Here, the VR simulator acquires the virtual space image such that a vanishing point exists in a horizontal direction as viewed from the virtual user position.
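The distinctive step above, keeping the vanishing point in the horizontal direction as viewed from the virtual user position, amounts to placing the rendering camera at the virtual user position while never pitching its optical axis. The scale factor and field names in this sketch are assumptions.

```python
# Sketch of the real-to-virtual position mapping and camera setup. The key
# constraint from the abstract: the camera sits at the virtual user position
# with its optical axis held horizontal (zero pitch), so the vanishing point
# stays on the horizon regardless of head height.

SCALE = 1.0  # assumed real-to-virtual scale factor

def virtual_user_position(real_head_pos):
    """Map the measured real head position into the virtual space."""
    x, y, z = real_head_pos
    return (SCALE * x, SCALE * y, SCALE * z)

def camera_pose(real_head_pos):
    """Camera at the virtual user position; pitch forced to zero per the
    vanishing-point constraint."""
    return {"position": virtual_user_position(real_head_pos), "pitch_deg": 0.0}

print(camera_pose((0.2, 1.6, -0.5)))
```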
CAMERA CONTROL USING SYSTEM SENSOR DATA
A method for using cameras in an augmented reality headset is provided. The method includes receiving a signal from a sensor mounted on a headset worn by a user, the signal being indicative of a user intention for capturing an image. The method also includes identifying the user intention for capturing the image, based on a model to classify the signal from the sensor according to the user intention, selecting a first image capturing device in the headset based on a specification of the first image capturing device and the user intention for capturing the image, and capturing the image with the first image capturing device. An augmented reality headset, a memory storing instructions, and a processor to execute the instructions to cause the augmented reality headset as above are also provided.
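The selection logic, classifying the sensor signal into a user intention and then picking the camera whose specification matches, can be sketched as follows. The classifier is a stub standing in for the trained model, and the device specs and intention labels are invented for illustration.

```python
# Hedged sketch of the selection step: a stub classifier maps a sensor signal
# to a capture intention, and the headset picks the camera whose specification
# best matches that intention. Specs and labels are assumptions.

CAMERAS = [
    {"name": "wide", "fov_deg": 120, "zoom": 1.0},
    {"name": "tele", "fov_deg": 30, "zoom": 5.0},
]

def classify_intention(signal):
    """Stub for the trained model: high motion energy -> wide scene shot."""
    return "wide_scene" if sum(abs(s) for s in signal) > 1.0 else "detail"

def select_camera(signal):
    """Pick the camera whose spec fits the classified intention."""
    intent = classify_intention(signal)
    key = (lambda c: c["fov_deg"]) if intent == "wide_scene" else (lambda c: c["zoom"])
    return max(CAMERAS, key=key)["name"]

print(select_camera([0.9, 0.8]))  # wide
print(select_camera([0.1, 0.2]))  # tele
```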
VIRTUAL REALITY SIMULATOR AND VIRTUAL REALITY SIMULATION PROGRAM
A VR (Virtual Reality) simulator projects or displays a virtual space image on a screen installed at a position distant from a user in a real space and not integrally moving with the user. More specifically, the VR simulator acquires a real user position being a position of the user's head in the real space. The VR simulator acquires a virtual user position being a position in a virtual space corresponding to the real user position. Then, the VR simulator acquires the virtual space image by imaging the virtual space by using a camera placed at the virtual user position in the virtual space, based on virtual space configuration information indicating a configuration of the virtual space. Here, the VR simulator performs a lens shift process that shifts a lens of the camera such that the entire screen fits within a field of view of the camera.
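The lens-shift process here corresponds to an off-axis projection: rather than rotating the camera toward the screen, the lens is shifted so the whole screen lands inside the frustum. A common convention (assumed here, e.g. as in game-engine physical cameras) expresses the shift in half-screen units; the geometry below is a simplified 2D sketch, not the patent's exact computation.

```python
# Sketch of the lens-shift computation (assumed geometry): with the camera at
# the virtual user position, shift the lens by the offset of the screen center
# relative to the camera, expressed as a fraction of the half-screen size, so
# the entire screen falls inside the camera's field of view.

def lens_shift(cam_pos, screen_center, screen_w, screen_h):
    """Return (shift_x, shift_y) in half-screen units."""
    dx = screen_center[0] - cam_pos[0]
    dy = screen_center[1] - cam_pos[1]
    return (dx / (screen_w / 2), dy / (screen_h / 2))

# Camera 0.5 m left of, and level with, a 4 m x 2 m screen's center:
print(lens_shift((-0.5, 0.0), (0.0, 0.0), 4.0, 2.0))  # (0.25, 0.0)
```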
Systems and methods for data visualization in virtual reality environments
A computer-implemented method is provided for visualizing multiple objects in a computerized visual environment. The method includes displaying to a user a virtual three-dimensional space via a viewing device worn by the user, and determining a data limit of the viewing device for object rendering. The method includes presenting an initial rendering of the objects within the virtual space, where the visualization data used for the initial rendering does not exceed the data limit of the viewing device. The method also includes tracking user attention relative to the objects as the user navigates through the virtual space and determining, based on the tracking of user attention, one or more select objects from the multiple objects to which the user is paying attention. The one or more select objects are located within a viewing range of the user.
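The interplay of the device's data limit and tracked attention can be sketched as a budgeted rendering plan: attended objects are upgraded to high detail first, and upgrades stop once the limit would be exceeded. Object costs, names, and the greedy policy are assumptions for illustration.

```python
# Toy sketch of budgeted rendering under a viewing device's data limit:
# every object starts at low detail (the initial rendering), then objects
# the user attends to are upgraded in attention order while the total
# visualization data stays within the limit.

def plan_rendering(objects, attended_ids, data_limit):
    """objects: {id: (low_cost, high_cost)}. Returns {id: 'low' | 'high'}."""
    plan = {oid: "low" for oid in objects}
    used = sum(low for low, _ in objects.values())
    for oid in attended_ids:  # attended objects first, in attention order
        low, high = objects[oid]
        if used - low + high <= data_limit:
            plan[oid] = "high"
            used += high - low
    return plan

objs = {"a": (1, 5), "b": (1, 5), "c": (1, 5)}
print(plan_rendering(objs, ["b", "c"], data_limit=8))  # only "b" fits at high
```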
VIRTUAL REALITY SIMULATOR AND VIRTUAL REALITY SIMULATION PROGRAM
A VR (Virtual Reality) simulator projects or displays a virtual space image on a screen installed at a position distant from a user in a real space and not integrally moving with the user. More specifically, the VR simulator acquires a real user position being a position of the user's head in the real space. The VR simulator acquires a virtual user position being a position in a virtual space corresponding to the real user position. Then, the VR simulator acquires the virtual space image by imaging the virtual space by using a camera placed at the virtual user position in the virtual space, based on virtual space configuration information indicating a configuration of the virtual space. Here, the VR simulator adjusts a focal length of the camera such that perspective corresponding to a distance between the real user position and the screen is cancelled.
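One way to picture cancelling distance-dependent perspective is to scale the virtual camera's focal length linearly with the user-to-screen distance, which keeps an object's on-screen size constant as the user approaches or retreats. This proportional relation and the constants below are assumptions for illustration, not the patent's exact adjustment.

```python
# Hedged sketch of the focal-length adjustment: scaling focal length with the
# measured user-to-screen distance keeps an object's projected size constant,
# cancelling the perspective the physical screen distance would otherwise add.

BASE_FOCAL_MM = 50.0   # assumed base focal length
REF_DISTANCE_M = 2.0   # assumed distance at which BASE_FOCAL_MM is calibrated

def focal_length(user_to_screen_m):
    """Focal length grows linearly with the viewing distance."""
    return BASE_FOCAL_MM * (user_to_screen_m / REF_DISTANCE_M)

print(focal_length(2.0))  # 50.0
print(focal_length(1.0))  # 25.0
```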
Gaze and content aware rendering logic
A graphics rendering processor receives data related to a display and a user's gaze which is directed at the display. The user gaze may be detected based on inputs received from an optical sensor, such as a near-infrared sensor. The processor then renders different portions of the display based on the user gaze, such that an area where the user gaze is directed will receive higher rendering priority than an area at which the user gaze is not directed. In a processor with multiple cores which differ in precision, operation cost, etc. a controller may determine what portion of the display to render on which cores, based on the detected user gaze, content, or a combination thereof.