Patent classifications
G06F3/0325
Virtual reality system with camera shock-mounted to head-mounted display
A virtual-reality system includes a head-mounted display, a camera mounted on and protruding from a surface of the head-mounted display, and a compressible shock mount securing the camera to the surface. When compressed, the shock mount retracts the camera toward the head-mounted display, protecting the camera from damage if the head-mounted display is dropped.
INPUT DEVICE
A light-transmitting substrate forming an input device has a light-transmitting region, where electrode parts and light-transmitting wiring parts are formed, and a light-non-transmitting region, where light-non-transmitting wiring parts are formed. The light-transmitting region, bounded by an upper end side, both lateral sides, and a bonding boundary part (bend part), is bonded to a panel. The light-non-transmitting region of the substrate is bent inward into a housing, with the bonding boundary part (bend part) as its start point, and is connected to a circuit board. Because the light-transmitting wiring parts are formed of a flexible light-transmitting conductive material layer, the substrate can be bent within the light-transmitting region.
Dynamic, free-space user interactions for machine control
Embodiments of display control based on dynamic user interactions generally include capturing a plurality of temporally sequential images of the user, or a body part or other control object manipulated by the user, and computationally analyzing the images to recognize a gesture performed by the user. In some embodiments, a scale indicative of an actual gesture distance traversed in performance of the gesture is identified, and a movement or action is displayed on the device based, at least in part, on a ratio between the identified scale and the scale of the displayed movement. In some embodiments, a degree of completion of the recognized gesture is determined, and the display contents are modified in accordance therewith. In some embodiments, a dominant gesture is computationally determined from among a plurality of user gestures, and an action displayed on the device is based on the dominant gesture.
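The scale-ratio mapping described above can be sketched as follows. This is a minimal illustration only; the function name, units, and parameters are assumptions, not details from the abstract:

```python
def display_motion(gesture_delta_mm, gesture_scale_mm, display_scale_px):
    """Map a physical hand displacement onto a displayed movement.

    gesture_scale_mm: actual distance traversed in performing the gesture
    display_scale_px: scale of the displayed movement (e.g. screen width)

    The displayed motion is based on the ratio between the two scales.
    """
    ratio = display_scale_px / gesture_scale_mm
    return gesture_delta_mm * ratio
```

For example, a 10 mm hand movement against a 100 mm gesture scale yields a 192 px displayed movement on a 1920 px display.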
Controller for interfacing with a computing program using position, orientation, or motion
A method for determining the position of a controller device comprises: receiving dimensions of the display input by a user of the computer-based system; capturing successive images of the display at the controller device; determining a position of the controller device relative to the display based on the dimensions of the display and a perspective distortion of the display in the captured successive images; and providing the determined position of the controller to the computer-based system to interface with the interactive program and cause an action by the interactive program.
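Under a simple pinhole-camera assumption, one component of this computation, recovering distance from the display's apparent size in a captured image, can be sketched as below; the function and parameter names are illustrative, not from the patent:

```python
def controller_distance(display_width_m, apparent_width_px, focal_length_px):
    """Estimate camera-to-display distance from perspective foreshortening.

    A display of known physical width appears in the image with a pixel
    width inversely proportional to its distance from the camera:
        distance = focal_length_px * real_width / apparent_width_px
    """
    return focal_length_px * display_width_m / apparent_width_px
```

A full implementation would also use the distortion of the display's quadrilateral outline across successive images to recover the controller's orientation and lateral offset, not just its range.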
Operating environment with gestural control and multiple client devices, displays, and users
Embodiments described herein include a system comprising a processor coupled to display devices, sensors, remote client devices, and computer applications. The computer applications orchestrate content of the remote client devices simultaneously across the display devices and the remote client devices, and allow simultaneous control of the display devices. The simultaneous control includes automatically detecting a gesture of at least one object from gesture data received via the sensors, where the detection identifies the gesture using only the gesture data. The computer applications translate the gesture to a gesture signal and control the display devices in response to the gesture signal.
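The detect-translate-control flow might be reduced to a simple dispatcher; the gesture names and signal strings below are hypothetical placeholders for whatever vocabulary such a system actually uses:

```python
# Hypothetical mapping from recognized gestures to display-control signals.
GESTURE_SIGNALS = {
    "swipe_left": "PREV_DISPLAY",
    "swipe_right": "NEXT_DISPLAY",
    "pinch": "ZOOM_OUT",
    "spread": "ZOOM_IN",
}

def translate_gesture(gesture):
    """Translate a recognized gesture into a gesture signal for the displays."""
    return GESTURE_SIGNALS.get(gesture, "NO_OP")
```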
Adaptive tracking system for spatial input devices
An adaptive tracking system for spatial input devices provides real-time tracking of spatial input devices for human-computer interaction in a Spatial Operating Environment (SOE). The components of an SOE include gestural input/output; network-based data representation, transit, and interchange; and spatially conformed display mesh. The SOE comprises a workspace occupied by one or more users, a set of screens which provide the users with visual feedback, and a gestural control system which translates user motions into command inputs. Users perform gestures with body parts and/or physical pointing devices, and the system translates those gestures into actions such as pointing, dragging, selecting, or other direct manipulations. The tracking system provides the requisite data for creating an immersive environment by maintaining a model of the spatial relationships between users, screens, pointing devices, and other physical objects within the workspace.
MULTI-PROCESS INTERACTIVE SYSTEMS AND METHODS
A multi-process interactive system is described. The system includes numerous processes running on a processing device. The processes include separable program execution contexts of application programs, such that each application program comprises at least one process. The system translates events of each process into data capsules. A data capsule includes an application-independent representation of the event data of an event and state information of the process originating the capsule's content. The system transfers the data capsules into pools, or repositories. Each process can operate as a recognizing process, which recognizes in the pools data capsules whose content corresponds to an interactive function of the recognizing process and/or an identification of the recognizing process. The recognizing process retrieves the recognized data capsules from the pools and executes processing appropriate to their contents.
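As a rough sketch of the capsule-and-pool pattern, the following shows capsules deposited into a shared pool and selectively retrieved by a recognizing process; the class and method names are assumptions for illustration, not the patent's API:

```python
from dataclasses import dataclass

@dataclass
class DataCapsule:
    event: dict        # application-independent representation of the event
    origin_state: dict # state information of the originating process

class Pool:
    """A shared repository into which processes deposit data capsules."""

    def __init__(self):
        self._capsules = []

    def deposit(self, capsule):
        self._capsules.append(capsule)

    def recognize(self, predicate):
        """Retrieve and remove capsules whose content matches the predicate."""
        matched = [c for c in self._capsules if predicate(c)]
        self._capsules = [c for c in self._capsules if not predicate(c)]
        return matched
```

A recognizing process would call `recognize` with a predicate selecting only the capsules whose content corresponds to its own interactive function, leaving the rest in the pool for other processes.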
Structured Light Sensing for 3D Sensing
Apparatus for structured light scanning, the structured light comprising one or more projected lines or other patterns, comprises at least two independent emitters for each projected line or pattern, typically arranged in a row, and a pattern generator that causes light from the respective emitters of a given row to overlap along the pattern axis to form the projected pattern. The independent emitters provide mutually incoherent light along the pattern, so that speckle noise is minimized despite the overlap.
Optical Steering of Component Wavelengths of a Multi-Wavelength Beam to Enable Interactivity
Briefly, in accordance with one or more embodiments, an information handling system comprises a scanning system to scan one or more component wavelength beams into a combined multi-component beam in a first field of view, and a redirecting system to redirect one or more of the component wavelength beams into a second field of view. A first subset of the component wavelength beams is projected in the first field of view, and a second subset is projected in the second field of view. The first subset may project a visible image in the first field of view, and a user can provide input to control the information handling system by interacting with the second subset in the second field of view.
Position detecting device and position detecting method
A projector includes: a pointer detecting unit that determines the type of each pointer; a coordinate calculating unit that calculates the pointed position of the pointer; a storage unit that stores, for each pointer type, a reference variation amount of the pointed position; and a projection control unit that acquires from the storage unit the reference variation amount corresponding to the pointer type determined by the pointer detecting unit, and corrects the pointed position detected by the coordinate calculating unit based on the acquired reference variation amount.
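One plausible reading of the correction step is a per-pointer-type jitter threshold: movements smaller than the stored reference variation amount are suppressed. A hedged sketch, in which the names and the specific thresholding rule are assumptions rather than the patent's method:

```python
def correct_position(prev, raw, reference_variation):
    """Hold the previous pointed position when the newly detected position
    has moved by less than the pointer type's reference variation amount."""
    dx, dy = raw[0] - prev[0], raw[1] - prev[1]
    if (dx * dx + dy * dy) ** 0.5 < reference_variation:
        return prev  # treat the small movement as jitter
    return raw

# Hypothetical stored amounts: a pen is steadier than a fingertip,
# so its reference variation amount would be smaller.
REFERENCE_VARIATION = {"pen": 2.0, "finger": 8.0}
```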