GAMING DEVICE WITH ROTATABLY PLACED CAMERAS
A method to identify positions of fingers of a hand is described. The method includes capturing images of a first hand using a plurality of cameras that are part of a wearable device. The wearable device is attached to a wrist of a second hand and the plurality of cameras of the wearable device is disposed around the wearable device. The method includes repeating capturing of additional images of the first hand, the images and the additional images captured to produce a stream of captured image data during a session of presenting the virtual environment in a head mounted display (HMD). The method includes sending the stream of captured image data to a computing device that is interfaced with the HMD. The computing device is configured to process the captured image data to identify changes in positions of the fingers of the first hand.
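The capture-and-compare loop this abstract describes can be sketched in Python. Everything below (the frame format, `detect_fingers`, the per-finger delta computation) is a hypothetical stand-in, not the patented method; camera capture and pose estimation are stubbed out:

```python
# Hypothetical sketch of the capture-and-stream loop: frames arrive from
# the wrist-worn cameras, fingers are detected per frame, and the
# computing device identifies changes in finger positions between frames.

from dataclasses import dataclass


@dataclass
class FingerPose:
    finger: str
    position: tuple  # (x, y, z) in device coordinates


def detect_fingers(frame):
    # Stub: a real system would run pose estimation on the captured image.
    return [FingerPose(name, pos) for name, pos in frame.items()]


def position_changes(prev, curr):
    """Compare two sets of finger poses and report per-finger deltas."""
    prev_map = {p.finger: p.position for p in prev}
    deltas = {}
    for pose in curr:
        if pose.finger in prev_map:
            deltas[pose.finger] = tuple(
                c - p for c, p in zip(pose.position, prev_map[pose.finger])
            )
    return deltas


# Two consecutive "frames" from the stream of captured image data.
frame_a = {"index": (0.0, 0.0, 0.0), "thumb": (1.0, 0.0, 0.0)}
frame_b = {"index": (0.0, 0.5, 0.0), "thumb": (1.0, 0.0, 0.2)}

changes = position_changes(detect_fingers(frame_a), detect_fingers(frame_b))
print(changes)  # per-finger displacement between frames
```

A real device would stream the image data to the HMD-interfaced computing device and run a pose-estimation model per frame; the stub only illustrates the change-detection step.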
Anamorphic display device
An anamorphic display device is provided. The anamorphic device includes a secondary display configured to be detachably coupled to a computing device including a primary display; and a non-transitory device operatively coupled to the primary and secondary displays and having instructions thereon that are configured, when executed, to render an anamorphic image on at least one of the primary and secondary displays so as to create, in combination, a three-dimensional effect from a point of view facing the primary and secondary displays.
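The three-dimensional effect described here relies on perspective projection toward a single point of view. The following is a minimal sketch of that underlying trick; the function name and coordinate setup are illustrative assumptions, not taken from the patent:

```python
def anamorphic_project(point, viewer, plane_z=0.0):
    """Find where to draw a 3D point on a flat display (the plane
    z = plane_z) so that, seen from the viewer's eye, it lines up
    with the intended 3D position."""
    vx, vy, vz = viewer
    px, py, pz = point
    t = (plane_z - vz) / (pz - vz)  # where the eye->point ray meets the plane
    return (vx + t * (px - vx), vy + t * (py - vy))


# A point meant to appear 1 unit above the display at depth 1,
# viewed head-on from (0, 0, 5):
print(anamorphic_project((0.0, 1.0, 1.0), (0.0, 0.0, 5.0)))  # → (0.0, 1.25)
```

With two displays (the primary and the detachable secondary), the same projection would be applied once per display plane so the two images combine into one apparent 3D object from the intended viewpoint.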
INFORMATION PROCESSING DEVICE, CONTROL METHOD OF INFORMATION PROCESSING DEVICE, AND PROGRAM
An information processing device obtains information regarding the position of each fingertip of a user in a real space, and determines contact between a virtual object set within a virtual space and a finger of the user. The information processing device sets the virtual object in a partly deformed state such that a part of the virtual object, the part corresponding to the position of the finger determined to be in contact with the object among the fingers of the user, is located more to a far side from a user side than the finger, and displays the virtual object having the shape set thereto as an image in the virtual space on a display device.
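A minimal sketch of the contact-and-deformation idea, assuming a spherical virtual object and treating "far side" as receding along the contact normal; the names and geometry are illustrative, not the patent's implementation:

```python
import math


def deform_contact_point(center, radius, fingertip, margin=0.05):
    """If the fingertip penetrates a spherical virtual object, return the
    deformed surface point: the contacted part of the surface recedes so
    it sits just beyond the fingertip (farther from the user side).
    Returns None when there is no contact."""
    d = math.dist(center, fingertip)
    if d >= radius:
        return None  # finger not touching: surface stays undeformed
    # Unit normal from the object's center toward the fingertip.
    n = tuple((f - c) / d for f, c in zip(fingertip, center))
    depth = d - margin  # surface pushed just past the fingertip
    return tuple(c + depth * ni for c, ni in zip(center, n))


# Fingertip pressed halfway into a unit sphere centered at the origin:
print(deform_contact_point((0.0, 0.0, 0.0), 1.0, (0.5, 0.0, 0.0)))
print(deform_contact_point((0.0, 0.0, 0.0), 1.0, (2.0, 0.0, 0.0)))  # → None
```

The display step would then render the object with the deformed surface so that the contacted part appears behind the finger rather than intersecting it.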
CARD GAME MATCHUP SYSTEM
A card game matchup system includes a first acquiring portion configured to acquire a first image obtained by imaging a first field where a game card of a first player is placed to play a card game. The system also includes a second acquiring portion configured to acquire a second image obtained by imaging a second field where a game card of a second player is placed. The system also includes a first state recognizing portion configured to recognize a first game state by performing image recognition on the first image using artificial intelligence, a second state recognizing portion configured to recognize a second game state by performing image recognition on the second image using artificial intelligence, and a judging portion configured to judge a state of the card game in accordance with a rule of the card game based on the first game state and the second game state.
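The two-stage pipeline (recognize each field, then judge by rule) can be sketched as follows. The AI recognizers are stubbed out, and the card names and the toy rule are invented for illustration; a real system would run image recognition on photos of each player's field:

```python
def recognize_state(field_image):
    # Stub for the AI state-recognizing portion: pretend the "image"
    # already tells us which cards are on the field and their values.
    return field_image  # e.g. {"Dragon": 2500}


def judge(first_state, second_state):
    """Toy judging rule: the side with the higher total attack is ahead."""
    a = sum(first_state.values())
    b = sum(second_state.values())
    if a > b:
        return "first player ahead"
    if b > a:
        return "second player ahead"
    return "even"


first = recognize_state({"Dragon": 2500})
second = recognize_state({"Magician": 2500, "Kuriboh": 300})
print(judge(first, second))  # → second player ahead
```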
Shifter Simulator System, Simulator Provided Therewith and Method for Operating
A shifter simulator system comprises a frame with a gear stick hinge, a gear stick that is hingedly connected to the frame via the gear stick hinge, and a moveable frame part configured to move relative to the frame and provided with a first contact surface and a second contact surface.
The shifter simulator system also includes a magnetic contact and a tilting element configured to tilt around a tilting axis via a tilting connection in response to a movement of the gear stick, thereby providing a first lever. The tilting element is provided with a first contact element and a second contact element configured to engage the respective first and second contact surfaces of the moveable frame part, providing a second lever, so that moving the moveable frame part defines a shifter movement with the magnetic contact. The first and/or second levers are adjustable.
Reconfiguring reality using a reality overlay device
Virtual entities are displayed alongside real world entities in a wearable reality overlay device worn by the user. Information related to an environment proximate to the wearable device is determined. For example, a position of the wearable device may be determined, a camera may capture an image of the environment, etc. Virtual entity image information representative of an entity desired to be virtually displayed is processed based on the determined information. An image of the entity is generated based on the processed image information as a non-transparent region of a lens of the wearable device, enabling the entity to appear to be present in the environment to the user. The image of the entity may conceal a real world entity that would otherwise be visible to the user through the wearable device. Other real world entities may be visible to the user through the wearable device.
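The concealment behavior (a non-transparent virtual region hiding the real entity behind it) amounts to per-pixel compositing. A toy one-row sketch, with entirely hypothetical pixel values:

```python
def composite_row(real_row, virtual_row):
    """Where the virtual entity has a pixel (non-None), the lens region is
    made non-transparent and the virtual pixel conceals the real scene;
    elsewhere the real world remains visible through the lens."""
    return [v if v is not None else r for r, v in zip(real_row, virtual_row)]


real = ["tree", "car", "wall", "sky"]
virtual = [None, "dragon", "dragon", None]
print(composite_row(real, virtual))  # → ['tree', 'dragon', 'dragon', 'sky']
```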
Game Delivery System
A distributed computer system for delivering a requested game experience at any venue of a plurality of distributed venues comprises: at each venue: a plurality of local units serving respective gameplay areas of the venue, each local unit coupled to a set of multimedia gaming equipment for delivering a game experience in its gameplay area, and a venue central unit configured to connect to each of the local units of that venue; a booking system for managing game bookings across the plurality of distributed venues, the booking system configured to receive, from a user device, a booking request denoting a requested venue of the plurality of distributed venues, and create a booking in response; and a master central server configured to connect to the booking system and the venue central unit of each venue; wherein the master central server is configured to generate a game session based on the booking, and automatically communicate the game session to the venue central unit of the requested venue, wherein the venue central unit receiving the game session is configured to render accessible, to the local unit serving one of the gameplay areas, game details of the game session, and wherein that local unit is configured to deliver the requested game experience within that gameplay area, using its set of multimedia gaming equipment, based on the game details of the game session.
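The booking-to-delivery flow can be sketched as message passing between the components named in the abstract; the class names, the session payload, and the in-memory wiring are all illustrative assumptions, not the patented architecture:

```python
from dataclasses import dataclass


@dataclass
class Booking:
    user: str
    venue: str
    gameplay_area: str


@dataclass
class GameSession:
    booking: Booking
    game_details: dict


class VenueCentralUnit:
    def __init__(self, venue):
        self.venue = venue
        self.details_by_area = {}

    def receive_session(self, session):
        # Render the game details accessible to the serving local unit.
        self.details_by_area[session.booking.gameplay_area] = session.game_details


class MasterCentralServer:
    def __init__(self, venue_units):
        self.venue_units = {u.venue: u for u in venue_units}

    def generate_session(self, booking):
        session = GameSession(booking, {"mode": "co-op", "players": 4})
        # Automatically communicate the session to the requested venue.
        self.venue_units[booking.venue].receive_session(session)
        return session


unit = VenueCentralUnit("venue-A")
server = MasterCentralServer([unit])
server.generate_session(Booking("alice", "venue-A", "area-1"))
print(unit.details_by_area["area-1"])
```

The local unit for `area-1` would then read those game details and drive its multimedia gaming equipment accordingly.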
Interactive environment with virtual environment space scanning
An interactive environment image may be displayed in a virtual environment space, and interaction with the interactive environment image may be detected within a three-dimensional space that corresponds to the virtual environment space. The interactive environment image may be a three-dimensional image, or it may be two-dimensional. An image is displayed to provide a visual representation of an interactive environment image including one or more virtual objects, which may be spatially positioned. User interaction with the visual representation in the virtual environment space may be detected and, in response to user interaction, the interactive environment image may be changed.
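Detecting interaction within the 3D space that corresponds to the displayed image reduces to a hit test followed by a change to the image. A minimal sketch, assuming axis-aligned object bounds and a stubbed hand tracker:

```python
def hit_test(point, box_min, box_max):
    """Return True if a detected interaction point falls inside the
    axis-aligned 3D region occupied by a virtual object."""
    return all(lo <= p <= hi for p, lo, hi in zip(point, box_min, box_max))


# A virtual object occupying a region of the virtual environment space:
obj = {"min": (0, 0, 0), "max": (1, 1, 1), "color": "blue"}

touch = (0.5, 0.2, 0.9)  # user's hand position from the tracker (stub)
if hit_test(touch, obj["min"], obj["max"]):
    obj["color"] = "red"  # change the interactive environment image
print(obj["color"])  # → red
```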
Virtual reality control system
Provided is a virtual environment control system for providing a virtual image related to at least part of a virtual environment to a player who plays in a plurality of divided spaces through a wearable display device which the player is wearing, including: at least one first detecting device obtaining first detecting data related to a first play space; at least one second detecting device obtaining second detecting data related to a second play space; at least one auxiliary computing device generating a first virtual image and a second virtual image; a first wearable display device displaying the first virtual image to a first player located in the first play space; and a second wearable display device displaying the second virtual image to a second player located in the second play space.
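The core routing, detecting data per play space in, one virtual image per player's display out, can be sketched as below; the data shapes and image descriptions are stand-ins, not the patented system:

```python
def generate_virtual_images(detecting_data):
    """Auxiliary-computing-device stub: turn each play space's detecting
    data into a virtual image description for that space's player."""
    return {space: f"view of {space} at {pos}"
            for space, pos in detecting_data.items()}


# Detecting data from the first and second play spaces (stub positions):
data = {"space-1": (1.0, 2.0), "space-2": (3.0, 4.0)}
images = generate_virtual_images(data)

# Each wearable display shows the image generated for its own play space:
print(images["space-1"])
print(images["space-2"])
```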
Mixed reality system for context-aware virtual object rendering
A computer-implemented method in conjunction with mixed reality gear (e.g., a headset) includes imaging a real scene encompassing a user wearing a mixed reality output apparatus. The method includes determining data describing a real context of the real scene, based on the imaging; for example, identifying or classifying objects, lighting, sound or persons in the scene. The method includes selecting a set of content including content enabling rendering of at least one virtual object from a content library, based on the data describing a real context, using various selection algorithms. The method includes rendering the virtual object in the mixed reality session by the mixed reality output apparatus, optionally based on the data describing a real context (“context parameters”). An apparatus is configured to perform the method using hardware, firmware, and/or software.
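One plausible selection algorithm for the step "selecting a set of content ... based on the data describing a real context" is tag overlap scoring. The context parameters, library entries, and scoring rule below are invented for illustration:

```python
def select_content(context, library):
    """Pick the library entry whose tags best overlap the detected
    context parameters (objects, lighting, persons in the real scene)."""
    def score(entry):
        return len(set(entry["tags"]) & set(context))
    return max(library, key=score)


# Context parameters derived from imaging the real scene (stub values):
context = ["kitchen", "daylight", "one-person"]
library = [
    {"name": "space-battle", "tags": ["outdoor", "night"]},
    {"name": "cooking-helper", "tags": ["kitchen", "daylight"]},
]
print(select_content(context, library)["name"])  # → cooking-helper
```

The mixed reality output apparatus would then render the virtual objects from the selected content set, optionally parameterized by the same context data.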