
MULTI-MODAL MODEL FOR DYNAMICALLY RESPONSIVE VIRTUAL CHARACTERS

The disclosed embodiments relate to a method for controlling a virtual character (or “avatar”) using a multi-modal model. The multi-modal model may receive various input information relating to a user and process that information using multiple internal models. The multi-modal model may combine the internal models to produce believable and emotionally engaging responses by the virtual character. A link to the virtual character may be embedded in a web browser, and the avatar may be dynamically generated when a user selects to interact with the virtual character. A report may be generated for a client that provides insights into characteristics of users interacting with a virtual character associated with the client.
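
Below is a minimal Python sketch of how such a multi-modal model might combine several internal models into a single avatar response. The internal models, input fields, weights, and response labels are all illustrative assumptions, not details taken from the abstract.

```python
from dataclasses import dataclass

@dataclass
class UserInput:
    """Container for the multi-modal signals gathered from a user (assumed fields)."""
    text: str
    voice_tone: float      # hypothetical normalized arousal score from audio, 0..1
    facial_valence: float  # hypothetical normalized positivity score from video, -1..1

class MultiModalModel:
    """Sketch: each internal model maps its modality to a score in 0..1,
    and the combined score selects a response style for the avatar."""

    def _text_model(self, inp: UserInput) -> float:
        # Placeholder sentiment model: positive words raise the score.
        positive = {"great", "love", "thanks", "happy"}
        words = inp.text.lower().split()
        return sum(w in positive for w in words) / max(len(words), 1)

    def _voice_model(self, inp: UserInput) -> float:
        return inp.voice_tone

    def _vision_model(self, inp: UserInput) -> float:
        return (inp.facial_valence + 1) / 2  # rescale -1..1 to 0..1

    def respond(self, inp: UserInput) -> str:
        # Weighted combination of the internal models (weights are assumptions).
        score = (0.5 * self._text_model(inp)
                 + 0.3 * self._voice_model(inp)
                 + 0.2 * self._vision_model(inp))
        if score > 0.6:
            return "smile_and_engage"
        if score > 0.3:
            return "neutral_acknowledge"
        return "concerned_check_in"

model = MultiModalModel()
print(model.respond(UserInput("I love this game", voice_tone=0.8, facial_valence=0.5)))
```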

Game processing system, method of processing game, and storage medium storing program for processing game
11642590 · 2023-05-09

A game processing system for processing a game that provides interaction with a virtual character includes, according to one embodiment, a storage that stores action data specifying one or more actions of the virtual character, and one or more computer processors. The game includes a VR mode in which the game progresses in accordance with detection information obtained by a head mounted display. The one or more processors determine an action performed by the player toward the virtual character based on the detection information obtained by the head mounted display attached to the player's head, cause the virtual character to interact with the player based on the player's action, and suspend execution of the VR mode if a suspension condition is satisfied.
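
A minimal sketch of the described control flow follows, assuming hypothetical HMD detection fields (yaw, pitch, a nod flag) and a simple look-away suspension condition; none of these specifics come from the abstract.

```python
from dataclasses import dataclass

@dataclass
class HmdDetection:
    """Hypothetical detection information from the head mounted display."""
    yaw: float    # head rotation in degrees
    pitch: float
    nod: bool     # whether a nod gesture was detected

def determine_player_action(det: HmdDetection) -> str:
    """Map raw HMD detection information to a player action (assumed mapping)."""
    if det.nod:
        return "nod"
    if abs(det.yaw) > 45:
        return "look_away"
    return "look_at_character"

def run_vr_mode(frames: list[HmdDetection]) -> None:
    for det in frames:
        action = determine_player_action(det)
        # The virtual character reacts to the player's action.
        print(f"character reacts to player action: {action}")
        # Suspension condition (assumed here: the player looks away).
        if action == "look_away":
            print("suspension condition satisfied; suspending VR mode")
            break

run_vr_mode([HmdDetection(0, 0, True), HmdDetection(60, 0, False)])
```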

Information processing system, storage medium storing information processing program, information processing apparatus, and information processing method
11654352 · 2023-05-23

A non-limiting example information processing system includes a first input apparatus including a strain sensor, a second input apparatus including a motion sensor, and an information processing apparatus. The strain sensor provides an output corresponding to a force applied to at least a portion of the first input apparatus. The motion sensor provides an output corresponding to a motion of the second input apparatus. The information processing apparatus includes a computer that obtains strain data corresponding to the output of the strain sensor and motion data corresponding to the output of the motion sensor, and executes first control on an object disposed in a virtual space based on the strain data and second control on the object based on the motion data, the second control being different from the first control.
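
The split between the two controls might look like the following sketch, where strain data drives a deformation (first control) and motion data drives a translation (second control); the specific effects are assumptions, and only the sensor-to-control mapping comes from the abstract.

```python
from dataclasses import dataclass

@dataclass
class VirtualObject:
    x: float = 0.0
    y: float = 0.0
    squash: float = 0.0  # deformation amount

def first_control(obj: VirtualObject, strain: float) -> None:
    """First control, driven by the strain sensor: deform the object."""
    obj.squash = strain

def second_control(obj: VirtualObject, motion: tuple[float, float]) -> None:
    """Second, different control, driven by the motion sensor: move the object."""
    obj.x += motion[0]
    obj.y += motion[1]

obj = VirtualObject()
first_control(obj, strain=0.7)          # force applied to the first input apparatus
second_control(obj, motion=(1.0, 0.5))  # motion of the second input apparatus
print(obj)
```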

WHOLE-BODY HUMAN-COMPUTER INTERFACE
20230205315 · 2023-06-29

A human-computer interface system having an exoskeleton including a plurality of structural members coupled to one another by at least one articulation configured to apply a force to a body segment of a user, the exoskeleton comprising a body-borne portion and a point-of-use portion, the body-borne portion being configured to be operatively coupled to the point-of-use portion; and at least one locomotor module including at least one actuator configured to actuate the at least one articulation, the at least one actuator being in operative communication with the exoskeleton.
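
A data-structure sketch of the claimed arrangement follows; the member names, the single elbow articulation, and the force value are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Articulation:
    """Couples two structural members and applies force to a body segment."""
    body_segment: str
    applied_force: float = 0.0  # newtons, illustrative

@dataclass
class Actuator:
    """Part of a locomotor module; actuates one articulation."""
    articulation: Articulation

    def actuate(self, force: float) -> None:
        self.articulation.applied_force = force

@dataclass
class Exoskeleton:
    body_borne_members: list[str] = field(default_factory=list)
    point_of_use_members: list[str] = field(default_factory=list)
    articulations: list[Articulation] = field(default_factory=list)

elbow = Articulation(body_segment="forearm")
exo = Exoskeleton(body_borne_members=["upper_arm_link"],
                  point_of_use_members=["base_frame"],
                  articulations=[elbow])
Actuator(elbow).actuate(force=12.0)  # force applied to the user's body segment
print(exo.articulations[0])
```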

OPERATION DEVICE AND OPERATION SYSTEM
20230201710 · 2023-06-29

Provided is a technology that allows a greater variety of operations to be performed on an operation target. An operation device includes at least one six-axis force-moment sensor and a button. The at least one six-axis force-moment sensor can detect a force in a direction of a first axis that intersects a first main surface of a housing, a moment about the first axis, a force in a direction of a second axis that lies along the first main surface, a moment about the second axis, a force in a direction of a third axis that lies along the first main surface and intersects the second axis, and a moment about the third axis. The button is provided on the first main surface and can detect at least the force in the direction of the first axis and the moments about the second and third axes.
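
The six components such a sensor reports, and the subset the button detects, might be modeled as in the sketch below; the axis naming and units are assumptions.

```python
from dataclasses import dataclass

@dataclass
class SixAxisReading:
    """One sample from a six-axis force-moment sensor.

    Axis 1 intersects the first main surface of the housing;
    axes 2 and 3 lie along that surface and intersect each other.
    """
    f1: float  # force along the first axis (N)
    f2: float  # force along the second axis (N)
    f3: float  # force along the third axis (N)
    m1: float  # moment about the first axis (N·m)
    m2: float  # moment about the second axis (N·m)
    m3: float  # moment about the third axis (N·m)

def button_reading(sample: SixAxisReading) -> tuple[float, float, float]:
    """The button detects at least f1 and the moments about axes 2 and 3."""
    return sample.f1, sample.m2, sample.m3

print(button_reading(SixAxisReading(1.2, 0.0, 0.1, 0.0, 0.05, -0.02)))
```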

Computer-controlled sidewalk tiles

Command instruction data can be generated via a computerized control system, with the instruction data formatted to prompt a plurality of tile units to change their output. The instruction data can be sent from the control system to the tile units, each of which includes a tile controller connected to one or more tiles embedded in one or more sidewalk floors, with each tile including a user interface output device. At least part of the instruction data from the control system can be received and processed via the tile controller of a tile unit. In response, the tile controller can signal the tile's user interface output device to change its output.
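
The command path might be sketched as follows, with hypothetical message fields and an LED-style color standing in for the tile's user interface output device.

```python
from dataclasses import dataclass

@dataclass
class Instruction:
    """Hypothetical command instruction data addressed to one tile unit."""
    tile_unit_id: int
    output: str  # e.g. an LED color for the tile's output device

class TileController:
    """Controller embedded with one or more sidewalk tiles."""

    def __init__(self, unit_id: int) -> None:
        self.unit_id = unit_id

    def receive(self, instr: Instruction) -> None:
        # Process the part of the instruction data addressed to this unit,
        # then signal the tile's user interface output device to change.
        if instr.tile_unit_id == self.unit_id:
            print(f"tile unit {self.unit_id}: output device set to {instr.output}")

class ControlSystem:
    def __init__(self, units: list[TileController]) -> None:
        self.units = units

    def broadcast(self, instructions: list[Instruction]) -> None:
        for instr in instructions:
            for unit in self.units:
                unit.receive(instr)

system = ControlSystem([TileController(1), TileController(2)])
system.broadcast([Instruction(1, "green"), Instruction(2, "red")])
```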

METHOD AND SYSTEM INCORPORATING REAL ENVIRONMENT FOR VIRTUALITY AND REALITY COMBINED INTERACTION

A method incorporating a real environment for combined virtuality-and-reality interaction includes: step 1: capturing a frame of image of the real environment, and determining a movement state between the previous frame and the current frame for at least one edge point in the image; step 2: for each virtual object in the virtual content, detecting whether an edge point exists on the periphery of the virtual object, and applying a corresponding function to the virtual object according to the movement state of the edge point when such an edge point exists; and step 3: displaying the virtual content and the real environment in a superimposed manner according to the effect of the edge point's function on the virtual object, and returning to step 1 until the interaction between virtuality and reality ends.
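
The three-step loop might be sketched as follows, assuming edge points with 2D positions, circular virtual-object peripheries, and a push-along-motion function; frame capture and compositing are stubbed out.

```python
from dataclasses import dataclass
import math

@dataclass
class EdgePoint:
    x: float
    y: float
    dx: float = 0.0  # movement since the previous frame
    dy: float = 0.0

@dataclass
class VirtualObject:
    x: float
    y: float
    radius: float

    def on_periphery(self, p: EdgePoint, tol: float = 0.5) -> bool:
        return abs(math.hypot(p.x - self.x, p.y - self.y) - self.radius) < tol

def interaction_step(points: list[EdgePoint], objects: list[VirtualObject]) -> None:
    # Step 2: for each virtual object, apply a function for any edge point
    # found on its periphery, driven by the point's movement state.
    for obj in objects:
        for p in points:
            if obj.on_periphery(p):
                obj.x += p.dx  # assumed function: push the object along
                obj.y += p.dy  # the edge point's motion
    # Step 3: display the virtual content over the real environment (stubbed).
    print("composited frame:", objects)

interaction_step([EdgePoint(3.0, 0.0, dx=0.2)], [VirtualObject(0.0, 0.0, radius=3.0)])
```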