A63F2300/1087

PUZZLE COMPONENT POSITION DETERMINATION SYSTEM
20230080489 · 2023-03-16 ·

A three-dimensional puzzle has a monitoring puzzle piece and multiple monitored puzzle pieces. For puzzle pattern determination, the monitoring puzzle piece is equipped with sensors, a processor, a wireless transceiver, and optionally a gyroscope sensor. The monitored puzzle pieces are rotatably connected to each other and to the monitoring puzzle piece to form the puzzle. The sensors, together with the processor or, alternatively, with an external client, track rotation of the monitored puzzle pieces relative to the monitoring puzzle piece. The external client may provide feedback to a user of the puzzle. The system enables competitions between the user and users of other puzzles without requiring physical proximity of the competitors.
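The rotation tracking and remote-competition idea above can be sketched minimally as follows; the discrete 90-degree rotation model and all function names are assumptions for illustration, not the patented design.

```python
# Sketch: each monitored piece reports a discrete rotation step relative
# to the monitoring piece; two remote puzzles compete by exchanging this
# state over the wireless transceiver and comparing it.

def puzzle_state(rotations):
    """Normalize per-piece rotations (in 90-degree steps) to the range 0-3."""
    return tuple(r % 4 for r in rotations)

def is_solved(rotations):
    """A puzzle is solved when every piece is back at step 0."""
    return all(r == 0 for r in puzzle_state(rotations))

def compare_remote(state_a, state_b):
    """Count pieces whose rotation differs between two competitors."""
    return sum(1 for a, b in zip(state_a, state_b) if a != b)
```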

MIXED REALITY SYSTEM FOR CONTEXT-AWARE VIRTUAL OBJECT RENDERING
20230123933 · 2023-04-20 ·

A computer-implemented method in conjunction with mixed reality gear (e.g., a headset) includes imaging a real scene encompassing a user wearing a mixed reality output apparatus. The method includes determining data describing a real context of the real scene, based on the imaging; for example, identifying or classifying objects, lighting, sound or persons in the scene. The method includes selecting a set of content including content enabling rendering of at least one virtual object from a content library, based on the data describing a real context, using various selection algorithms. The method includes rendering the virtual object in the mixed reality session by the mixed reality output apparatus, optionally based on the data describing a real context (“context parameters”). An apparatus is configured to perform the method using hardware, firmware, and/or software.
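One simple selection algorithm consistent with the abstract is tag matching between scene context and library entries; the data model below (tag lists, entry names) is an assumption, not the claimed method.

```python
# Hedged sketch: select renderable content from a library by scoring
# each entry's tag overlap with tags derived from the real-scene context.

def select_content(library, context_tags):
    """Return the library entries whose tags best match the scene context
    (ties included); entries with no overlap are never selected."""
    scored = [(len(set(e["tags"]) & set(context_tags)), e) for e in library]
    best = max((s for s, _ in scored), default=0)
    return [e for s, e in scored if s == best and s > 0]
```

A context of `["indoor", "dim"]` would select an entry tagged for dim indoor scenes over one tagged `["outdoor", "bright"]`.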

INTERACTIVE ENVIRONMENT WITH VIRTUAL ENVIRONMENT SPACE SCANNING

An interactive environment image may be displayed in a virtual environment space, and interaction with the interactive environment image may be detected within a three-dimensional space that corresponds to the virtual environment space. The interactive environment image may be a three-dimensional image, or it may be two-dimensional. An image is displayed to provide a visual representation of an interactive environment image including one or more virtual objects, which may be spatially positioned. User interaction with the visual representation in the virtual environment space may be detected and, in response, the interactive environment image may be changed.

Information processing device, control method of information processing device, and program

An information processing device obtains information regarding the position of each fingertip of a user in a real space and determines contact between a virtual object set within a virtual space and a finger of the user. The information processing device sets the virtual object in a partly deformed state such that the part of the virtual object corresponding to the position of a finger determined to be in contact with the object is located farther from the user than that finger, and displays the virtual object having the set shape as an image in the virtual space on a display device.
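The deformation can be sketched with a depth-map model in which depth increases away from the user; treating the object surface as per-point depths is an assumption made here for illustration.

```python
# Sketch: where a fingertip is in contact, push the contacted surface
# point to lie at least as deep as the fingertip, producing the "partly
# deformed" state described in the abstract.

def deform_surface(depths, contacts):
    """depths: {point_id: surface_depth}; contacts: {point_id: fingertip_depth}.
    Returns a new depth map with contacted points pushed past the finger."""
    out = dict(depths)
    for point, finger_depth in contacts.items():
        if point in out and out[point] < finger_depth:
            out[point] = finger_depth  # surface now sits behind the finger
    return out
```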

Virtual reality control system

According to one aspect, there is provided a virtual reality control system including: a sensor detecting a light signal; a display displaying an image to a user; at least one controller controlling the display; and an input device transmitting an input signal input from the user to the controller, wherein the controller computes position data of the user using data based on the light signal and computes virtual position data based on the position data of the user, wherein a plurality of areas is displayed on the display based on the virtual position data, wherein the plurality of areas includes an accessible area, to which a character based on the virtual position data can move, and an inaccessible area, to which the character cannot move, and wherein an accessible mark is provided in any accessible area located within a reference distance from the character.
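The accessible-mark logic can be sketched as a distance filter over flagged areas; the 2-D positions and the area data model below are assumptions for illustration.

```python
# Sketch: marks are shown only on accessible areas within the reference
# distance of the character's virtual position.
import math

def marked_areas(areas, character_pos, reference_distance):
    """areas: list of dicts with 'pos' (x, y) and an 'accessible' flag.
    Returns the areas that should display an accessible mark."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    return [a for a in areas
            if a["accessible"] and dist(a["pos"], character_pos) <= reference_distance]
```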

Virtual-projectile delivery in an expansive environment

One method comprises receiving a hit signal from a device worn by a first player, receiving a position of the device, receiving an orientation of a launch axis of a virtual-projectile launcher, receiving a position of a second player, and outputting a hit assignment on determining, pursuant to receiving the hit signal, that a recognized object and the second player are coincident at an indicated launch of a virtual projectile. Another method comprises receiving an indication of launch of a virtual projectile by a virtual-projectile launcher of a first player, receiving an image aligned to a launch axis of the virtual-projectile launcher, outputting a hit signal to a server on determining, pursuant to receiving the indication of launch, that a recognized object is imaged in a projectile-delivery area of the image, and outputting a position of the device and an orientation of the launch axis to the server.
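The server-side coincidence test in the first method can be sketched as a tolerance check between two reported positions; the shared world frame and the tolerance value are assumptions, not claim limitations.

```python
# Sketch: assign a hit when the recognized object's position and the
# second player's position coincide (within a tolerance) at the
# indicated launch of the virtual projectile.
import math

def assign_hit(recognized_pos, second_player_pos, tolerance=0.5):
    """Positions are (x, y, z) in an assumed shared world frame."""
    dx, dy, dz = (a - b for a, b in zip(recognized_pos, second_player_pos))
    return math.sqrt(dx * dx + dy * dy + dz * dz) <= tolerance
```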

Gaming system and gaming table

A gaming system for card games is provided. The gaming system includes a plurality of gaming cards and a processor. Each of the gaming cards has a recognition code. The processor is configured to: generate a correspondence between the recognition codes of the gaming cards and a plurality of card faces; obtain an image of the gaming card placed in a recognition area captured by an image capturing device; recognize the recognition code in the image to generate game data according to a recognition result and the correspondence; and generate a game screen to be displayed by a display according to the game data. In addition, a gaming table is also provided.
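The code-to-face correspondence can be sketched as a per-game random pairing followed by lookup of recognized codes; the identifiers below are illustrative, not the patented schema.

```python
# Sketch: the processor generates a correspondence once per game, then
# resolves recognition codes found in the captured image into game data.
import random

def generate_correspondence(codes, faces, seed=None):
    """Randomly pair each recognition code with a card face."""
    shuffled = list(faces)
    random.Random(seed).shuffle(shuffled)
    return dict(zip(codes, shuffled))

def resolve(correspondence, recognized_codes):
    """Turn recognized codes from the recognition area into card faces."""
    return [correspondence[c] for c in recognized_codes if c in correspondence]
```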

Analyzing Team Game Play Interactions Using Gaze Data
20170361158 · 2017-12-21 ·

Systems, methods, and computer-readable media are disclosed for capturing, over the course of a sports match, gaze data for each participant on a team, identifying a team-level key performance indicator (KPI) associated with a sports domain to which the sports match corresponds, and generating KPI data corresponding to the KPI. A graph may then be constructed based at least in part on the KPI data, where each node in the graph may represent a player on the team or an object of interest, and each edge connecting adjacent nodes may be weighted to indicate a degree of interaction between the nodes connected by the edge. KPI data may be aggregated across multiple sports domain KPIs and analyzed to assess team performance characteristics during the sports match. Report data indicative of team performance for different game scenarios, and optionally including recommendations for improving team performance in such scenarios, may be generated.
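The graph construction can be sketched by counting gaze interactions between nodes; the `(observer, target)` event format below is an assumption about how the captured gaze data might be reduced.

```python
# Sketch: nodes are players or objects of interest; edge weights count
# gaze interactions between the pair of nodes the edge connects.
from collections import defaultdict

def build_interaction_graph(gaze_events):
    """gaze_events: iterable of (observer, target) pairs.
    Returns {frozenset({a, b}): weight} for an undirected weighted graph."""
    graph = defaultdict(int)
    for observer, target in gaze_events:
        if observer != target:
            graph[frozenset((observer, target))] += 1
    return dict(graph)
```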

Extensible dictionary for game events
11673061 · 2023-06-13 ·

A game-agnostic event detector can be used to automatically identify game events. Event data for detected events can be written to an event log in a form that is both human- and process-readable. Descriptive text for the event data can come from a common event dictionary that is hierarchical in nature, such that events of the same type can be correlated across different games even though the precise nature or appearance of those events may be different. The event data can be used for various purposes, such as to generate highlight videos or provide player performance feedback.
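A hierarchical dictionary of this kind can be sketched with dotted event paths, so events from different games correlate at any shared prefix; the event identifiers below are assumptions, not the patented vocabulary.

```python
# Sketch: look up an event type in a hierarchical dictionary, falling
# back to the nearest ancestor so same-type events correlate across
# games even when their precise appearance differs.
EVENT_DICTIONARY = {
    "combat.elimination.headshot": "Player eliminated an opponent (headshot)",
    "combat.elimination": "Player eliminated an opponent",
    "objective.capture": "Player captured an objective",
}

def describe(event_type):
    """Return descriptive text, walking up the hierarchy as needed."""
    parts = event_type.split(".")
    while parts:
        key = ".".join(parts)
        if key in EVENT_DICTIONARY:
            return EVENT_DICTIONARY[key]
        parts.pop()
    return "Unknown event"
```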

Driving simulator control with virtual skeleton

Depth-image analysis is performed with a device that analyzes a human target within an observed scene by capturing depth-images that include depth information from the observed scene. The human target is modeled with a virtual skeleton including a plurality of joints. The virtual skeleton is used as an input for controlling a driving simulation.
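One way such a skeleton could drive a simulation is mapping the tilt of the line between the hand joints to a steering angle; the joint representation and the atan2 mapping are assumptions for illustration, not the claimed control scheme.

```python
# Sketch: derive a steering input from the virtual skeleton's hand
# joints, as if the hands gripped an imaginary wheel.
import math

def steering_angle(left_hand, right_hand):
    """Each hand is an (x, y) joint position; the tilt of the line
    between the hands is the steering angle, in degrees."""
    dx = right_hand[0] - left_hand[0]
    dy = right_hand[1] - left_hand[1]
    return math.degrees(math.atan2(dy, dx))
```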