Patent classifications
G06G5/00
Temporal element restoration in augmented reality systems
Methods, apparatuses, computer program products, devices and systems are described that carry out accepting a request associated with at least one of an item, an aspect, or an element that is not present in a field of view of a user's augmented reality device; presenting in a display of the augmented reality device at least one augmented reality representation related to the at least one item, aspect, or element in response to accepting a request associated with at least one item, aspect, or element that is not present in a field of view of an augmented reality device; and processing the request and any related interaction of the user via the at least one augmented reality representation.
Interactive system and method
A system and method for receiving an ordered set of images and analyzing the images to determine at least one position in space and at least one motion vector in space and time for at least one object represented in the images is disclosed. Using these vectors, a four-dimensional model of at least a portion of the information represented in the images is formulated. This model generally obeys the laws of physics, though aberrations may be imposed. The model is then exercised with an input parameter, which, for example, may represent a different perspective than the original set of images. The four-dimensional model is then used to produce a modified set of ordered images in dependence on the input parameter and optionally the set of images, e.g., if only a portion of the data represented in the images is modeled. The set of images may then be rendered on a display device.
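As an illustrative sketch (not taken from the patent), the core step of deriving a position and motion vector from ordered images, then exercising a physics-obeying model with a time input parameter, could look like the following; the object centroids, frame rate, and ballistic extrapolation are all assumptions for illustration:

```python
import numpy as np

# Hypothetical object centroids (x, y, z) extracted from three
# consecutive frames of the ordered image set, dt seconds apart.
positions = np.array([[0.0, 0.0, 2.0],
                      [0.1, 0.0, 2.1],
                      [0.2, 0.0, 2.2]])
dt = 1 / 30  # assumed frame interval (30 fps)

# Motion vector in space and time: finite difference of the last
# two observed positions.
velocity = (positions[-1] - positions[-2]) / dt

def predict(t, g=np.array([0.0, -9.81, 0.0])):
    """Exercise the model: extrapolate the object forward by t seconds
    under simple ballistic physics (constant gravity, no drag)."""
    return positions[-1] + velocity * t + 0.5 * g * t ** 2

# The input parameter t selects a new instant from which modified
# images could be rendered.
new_position = predict(0.5)
```

A full system would fit such a model per object and re-render the scene from the requested perspective; this sketch only shows the position/velocity estimation and extrapolation step.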
Display apparatus and display system
According to an embodiment, a display block is a display apparatus including a plurality of surfaces. The display block includes a display device, a plurality of transmitting/receiving sections that perform communication within a predetermined distance, and a control section. The display device is provided on at least one surface of the plurality of surfaces. The plurality of transmitting/receiving sections are arranged to correspond to at least two side surfaces adjacent to the surface on which the display device is provided among the plurality of surfaces. The control section performs control of the plurality of transmitting/receiving sections and the display device.
Electrooptical device and electronic apparatus
A scanning line driving circuit sequentially selects each of a plurality of scanning lines for each unit period. A signal supply circuit supplies, to a signal line, a gradation potential in accordance with a designated gradation of a pixel in a write period within the unit period. The signal supply circuit supplies a pre-charge potential to the signal line in a pre-charge period before the start of the write period in a first unit period of the plurality of unit periods, and stops supplying the pre-charge potential to the signal line in a second unit period.
Maintenance assistance for an aircraft by augmented reality
A method for supporting aircraft maintenance, performed in a system comprising a display selection device and a portable device with a camera and an augmented reality display. The method comprises the steps of acquiring images of a piece of equipment of the aircraft with the camera, and sending them to the display selection device; identifying the equipment present in these images with the display selection device and determining the identifier thereof, referred to as the useful identifier; on the basis of the useful identifier, sending maintenance assistance data with the display selection device to the augmented reality display; in response, displaying, in augmented reality, images corresponding to the data with the augmented reality display device. The method also comprises steps for displaying guidance data guiding the user towards a particular piece of equipment. A device for implementing such a method is also disclosed.
System and method of producing certain video data
A system and method of incorporating additional video objects into source video data to produce output video data. A method includes identifying segments of the source video data, selecting identified segments for the inclusion of additional video objects, creating an intermediate working version of the source video data including video material corresponding to the selected segments, creating metadata which identifies at least one frame within the source video data which corresponds to the selected segments, transmitting the intermediate working version to a remote system for the creation of additional video data including additional video objects to be included in the output video data, receiving video file data associated with the additional video data, obtaining the additional video data based on the video file data, retrieving metadata and incorporating the additional video data with the source video data on the basis of the retrieved metadata to produce the output video data.
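As a hedged sketch (not from the patent), the final incorporation step, in which additional video data is spliced into the source video on the basis of frame-identifying metadata, could be modeled as follows; the frame representation, segment identifiers, and insertion scheme are assumptions for illustration:

```python
# Frames are stand-in strings; metadata records the frame index within
# the source video at which each selected segment's additional video
# objects are to be incorporated.
source_frames = [f"src{i}" for i in range(10)]
additional = {"seg1": ["obj_a", "obj_b"]}   # hypothetical additional video data
metadata = {"seg1": 4}                      # segment id -> source frame index

def incorporate(frames, extra, meta):
    """Splice each segment's additional frames into the source at the
    frame index identified by the retrieved metadata."""
    out = list(frames)
    # Insert later segments first so earlier indices remain valid.
    for seg_id, idx in sorted(meta.items(), key=lambda kv: -kv[1]):
        out[idx:idx] = extra[seg_id]
    return out

output_video = incorporate(source_frames, additional, metadata)
```

In practice the additional video data would be obtained from the received video file data and composited at the decoder or editor level; the sketch only illustrates the metadata-driven alignment.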
Providing tactile feedback to a user through actuators moving a portion of the user's skin
An input interface for a virtual reality (VR) system includes one or more actuators stressing or straining a portion of a user's skin, simulating interactions with presented virtual objects. For example, an actuator comprises a tendon contacting portions of a user's body and coupled to a motor that moves the tendon to move portions of the user's body contacting the tendon. Alternatively, an actuator includes a pad having a surface contacting a surface of the user's body. A driving mechanism moves the pad in one or more directions parallel to the surface of the user's body with varying levels of normal force. In another example, one or more pins contact portions of the user's body and a surface of a bladder. The pins move as the bladder is inflated or deflated, which moves the contacted portions of the user's body. Alternatively, another type of actuator may move the pins.
Display system and head mounted display
A display system includes a plurality of terminals and a head mounted display. The head mounted display performs wireless communication with the plurality of terminals. The head mounted display includes a main controller and a display controller. The main controller detects a line-of-sight direction of a user. The main controller specifies a single terminal out of the plurality of terminals based on the detected line-of-sight direction of the user. The display controller causes at least one setting screen of the terminal specified by the main controller to be displayed so as to be viewable to the user.
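As an illustrative sketch (not from the patent), specifying a single terminal from a detected line-of-sight direction can be posed as picking the terminal whose bearing is angularly closest to the gaze vector; the terminal names, 2-D directions, and nearest-angle rule are assumptions for illustration:

```python
import math

# Hypothetical bearings from the head mounted display to each terminal.
terminals = {"printer": (1.0, 0.0), "scanner": (0.0, 1.0)}

def specify_terminal(gaze, terminals):
    """Return the terminal whose direction makes the smallest angle
    with the detected line-of-sight vector."""
    def angle_to(direction):
        gx, gy = gaze
        dx, dy = direction
        dot = gx * dx + gy * dy
        norm = math.hypot(gx, gy) * math.hypot(dx, dy)
        return math.acos(max(-1.0, min(1.0, dot / norm)))
    return min(terminals, key=lambda name: angle_to(terminals[name]))

selected = specify_terminal((0.9, 0.1), terminals)  # gaze nearly along +x
```

The display controller would then fetch and render the setting screen of the selected terminal over the wireless link.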
Mobile computing device having video-in-video real-time broadcasting capability
A mobile computing device includes a first video camera on a first side of the mobile computing device producing a first camera video stream. A second video camera is on a second side of the mobile computing device producing a second camera video stream. A video processor is coupled to the first video camera and the second video camera to receive the first camera video stream and the second camera video stream, respectively. The video processor is coupled to merge the first camera video stream and the second camera video stream to generate a merged video stream. The video processor includes a network interface coupled to upload the merged video stream to a server in real-time using an Internet wireless network. The server broadcasts the merged video stream to a plurality of receivers in real-time.
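As a hedged sketch (not from the patent), the video processor's merge step can be illustrated as a picture-in-picture composite of the second (front) camera frame over the first (rear) camera frame; the frame sizes, inset scale, and nearest-neighbour downscaling are assumptions for illustration:

```python
import numpy as np

def merge_frames(rear, front, scale=4, margin=8):
    """Composite the front-camera frame as an inset in the top-right
    corner of the rear-camera frame (one frame of the merged stream)."""
    h, w = front.shape[0] // scale, front.shape[1] // scale
    # Nearest-neighbour downscale of the inset (stand-in for a real resizer).
    inset = front[::scale, ::scale][:h, :w]
    merged = rear.copy()
    merged[margin:margin + h, -margin - w:-margin] = inset
    return merged

# Hypothetical single frames from each camera stream.
rear = np.zeros((480, 640, 3), dtype=np.uint8)    # rear camera (black)
front = np.full((480, 640, 3), 255, dtype=np.uint8)  # front camera (white)
frame = merge_frames(rear, front)
```

In the described device this merge would run per frame before the network interface uploads the merged stream to the broadcasting server.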
Method for assessing the severity of underwater dive and use of said method in a dive computer
A method for assessing severity of an underwater dive includes calculating a dive severity index by: (a) determining the gas breathed by the diver; (b) measuring the dive time; (c) determining the depth profile of the dive; (d) calculating a dive severity index based on a function of steps (a), (b) and (c); and (e) assessing severity based on the dive severity index calculated in step (d), wherein the steps (a), (b), (c), (d) and (e) are carried out in real time, at each instant of the dive time of the diver, and step (d) is performed according to the formula: DSI = k₁ · f(GAS) · f(D) · f(T), where: DSI is the dive severity index, k₁ is an arbitrary constant, f(GAS) is a function of the inhaled gas, f(D) is a function of the dive depth, and f(T) is a function of the dive time.
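As an illustrative sketch, step (d) can be evaluated directly from the stated formula; the patent leaves the component functions f(GAS), f(D) and f(T) unspecified, so the forms below (oxygen-fraction, ambient-pressure, and linear-time terms) are assumptions for illustration only:

```python
K1 = 1.0  # arbitrary constant k1

def f_gas(fraction_o2):
    # Assumed form: leaner oxygen mixes score as more severe.
    return 1.0 / fraction_o2

def f_depth(depth_m):
    # Assumed form: ambient pressure in atmospheres (1 atm per 10 m).
    return 1.0 + depth_m / 10.0

def f_time(minutes):
    # Assumed form: severity grows linearly with dive time.
    return minutes

def dive_severity_index(fraction_o2, depth_m, minutes):
    """Step (d): DSI = k1 * f(GAS) * f(D) * f(T)."""
    return K1 * f_gas(fraction_o2) * f_depth(depth_m) * f_time(minutes)

# Evaluated at one instant of the dive, as steps (a)-(e) require.
dsi = dive_severity_index(0.21, 30.0, 20.0)  # air, 30 m, 20 min
```

A dive computer would recompute this at each instant using the measured depth profile and elapsed time, per the real-time requirement of the claim.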