Patent classifications
A63F2300/5553
Driving simulator control with virtual skeleton
Depth-image analysis is performed with a device that analyzes a human target within an observed scene by capturing depth images that include depth information about the scene. The human target is modeled with a virtual skeleton comprising a plurality of joints, and the virtual skeleton is used as an input for controlling a driving simulation.
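A minimal sketch of how joint positions from such a virtual skeleton might be mapped to driving-simulation controls. The joint names, coordinate convention, and steering/throttle formulas are illustrative assumptions, not the patent's method.

```python
import math
from dataclasses import dataclass

@dataclass
class Joint:
    x: float  # metres, camera space
    y: float
    z: float

def steering_angle(left_hand: Joint, right_hand: Joint) -> float:
    """Derive a steering angle from the tilt of the line joining both hands,
    as if the player were holding an imaginary wheel."""
    dy = right_hand.y - left_hand.y
    dx = right_hand.x - left_hand.x
    return math.degrees(math.atan2(dy, dx))

def throttle(right_foot: Joint, baseline_z: float) -> float:
    """Map forward displacement of the right foot to a 0..1 throttle value."""
    return max(0.0, min(1.0, (baseline_z - right_foot.z) / 0.15))

# Example frame: joints estimated from one depth image.
angle = steering_angle(Joint(-0.25, 0.05, 1.2), Joint(0.25, -0.05, 1.2))
power = throttle(Joint(0.1, -0.8, 1.05), baseline_z=1.2)
print(f"steer {angle:.1f} deg, throttle {power:.2f}")
```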
Multi-modal model for dynamically responsive virtual characters
The disclosed embodiments relate to a method for controlling a virtual character (or “avatar”) using a multi-modal model. The multi-modal model may receive various input information relating to a user and process that information using multiple internal models. The multi-modal model may combine the outputs of the internal models so that the virtual character produces believable and emotionally engaging responses. A link to the virtual character may be embedded in a web browser, and the avatar may be dynamically generated when a user selects to interact with the virtual character. A report may be generated for a client, providing insights into the characteristics of users interacting with the virtual character associated with that client.
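One way such internal models could be combined is a simple confidence-weighted vote across modalities. The modality names, emotions, and weighting scheme below are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class ModelOutput:
    emotion: str      # e.g. "happy", "neutral"
    confidence: float # 0..1

def combine(outputs: list[ModelOutput]) -> str:
    """Pick the emotion with the highest total confidence across
    the internal models (text, voice, vision, ...)."""
    totals: dict[str, float] = {}
    for out in outputs:
        totals[out.emotion] = totals.get(out.emotion, 0.0) + out.confidence
    return max(totals, key=totals.get)

# Hypothetical per-modality results for one user utterance.
response_emotion = combine([
    ModelOutput("happy", 0.6),    # text sentiment model
    ModelOutput("neutral", 0.3),  # prosody model
    ModelOutput("happy", 0.5),    # facial-expression model
])
print(f"avatar responds with a '{response_emotion}' expression")
```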
Video generation system to render frames on demand using a fleet of servers
A content controller system comprises a rendering server system that comprises a plurality of servers. The servers receive a plurality of segment render requests, each corresponding to a segment in a set of media content item segments. The servers render the requested segments using a media content identification and a main user identification: they retrieve metadata associated with the media content identification from a metadata database, render the segments using that metadata, generate a main user avatar based on the main user identification, and incorporate the avatar into the segments. The servers can then upload the segments to a segment database and update segment states in a segment state database to indicate that the segments are available. Other embodiments are disclosed herein.
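A sketch of the per-segment flow described above, with plain dictionaries standing in for the metadata, segment, and segment-state databases; all function and field names are placeholders, not the patent's interfaces.

```python
def render_avatar(user_id: str) -> str:
    """Placeholder: look up or synthesize the main user's avatar."""
    return f"avatar-for-{user_id}"

def render_segment(metadata: dict, avatar: str) -> str:
    """Placeholder: render one media segment and composite the avatar in."""
    return f"frames[{metadata['title']} + {avatar}]"

def handle_segment_request(request: dict, metadata_db: dict,
                           segment_db: dict, state_db: dict) -> None:
    """One segment render request, following the steps in the abstract."""
    metadata = metadata_db[request["media_content_id"]]   # retrieve metadata
    avatar = render_avatar(request["main_user_id"])        # main user avatar
    frames = render_segment(metadata, avatar)              # render + composite
    segment_db[request["segment_id"]] = frames             # upload segment
    state_db[request["segment_id"]] = "AVAILABLE"          # update segment state

metadata_db = {"film-42": {"title": "film-42"}}
segment_db, state_db = {}, {}
handle_segment_request(
    {"media_content_id": "film-42", "main_user_id": "user-7", "segment_id": "film-42/003"},
    metadata_db, segment_db, state_db)
print(state_db)   # {'film-42/003': 'AVAILABLE'}
```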
System and method for personalized avatar generation, especially for computer games
A system and method for generating a 3D personalized avatar includes a computerized server, a computerized client device, a bidirectional communications channel between the server and the client device, a memory in the client device storing 3D scan data of at least part of a user's body, and a memory in the server storing the 3D scan data received from the client device. A plurality of 3D model data sets are stored in the server memory. A gaming system selector provides information about the gaming system selected for personalized avatar generation. A personalized 3D avatar generation engine, responsive to the selected gaming system, merges the user's 3D scan data with a 3D model data set. An avatar package generator generates a personalized avatar package containing the merged data. An avatar package installer in the client device receives the package and makes the personalized 3D avatar accessible to the selected gaming system.
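The merge-and-package step might look roughly like the following; the gaming-system names, data-set contents, and package structure are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class AvatarPackage:
    gaming_system: str
    payload: dict

# Hypothetical 3D model data sets stored in server memory, keyed by gaming system.
MODEL_DATA_SETS = {
    "racing-game": {"rig": "driver_rig", "format": "fbx"},
    "rpg-game": {"rig": "humanoid_rig", "format": "gltf"},
}

def generate_avatar_package(scan_data: dict, gaming_system: str) -> AvatarPackage:
    """Merge the user's 3D scan with the model data set chosen for the
    selected gaming system and wrap the result in an installable package."""
    model = MODEL_DATA_SETS[gaming_system]
    merged = {**model, "head_mesh": scan_data["head_mesh"], "texture": scan_data["texture"]}
    return AvatarPackage(gaming_system=gaming_system, payload=merged)

package = generate_avatar_package(
    {"head_mesh": "scan_001.obj", "texture": "scan_001.png"}, "racing-game")
print(package.gaming_system, package.payload["format"])
```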
Game program and information processing device
The game program according to the present invention directs a computer to execute: a process of extracting, from among a plurality of groups each composed of a plurality of pre-associated characters, all of the groups that can be formed by combining characters owned by the player; a process of assembling a deck by combining groups selected in succession from the extracted groups; and a process of conducting a battle game in which the deck engages in battle and produces the special effects associated with each selected group.
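A toy version of the group-extraction and deck-assembly processes; the group names, members, and effects are made up for the example.

```python
from itertools import chain

# Hypothetical pre-associated groups and their special effects.
GROUPS = {
    "fire_trio":  {"members": {"A", "B", "C"}, "effect": "attack +10%"},
    "ice_pair":   {"members": {"D", "E"},      "effect": "defense +5%"},
    "wind_squad": {"members": {"F", "G", "H"}, "effect": "speed +8%"},
}

def extract_groups(owned: set[str]) -> list[str]:
    """All groups whose members are entirely owned by the player."""
    return [name for name, g in GROUPS.items() if g["members"] <= owned]

def assemble_deck(selected: list[str]) -> tuple[set[str], list[str]]:
    """Combine the selected groups into one deck and collect their effects."""
    characters = set(chain.from_iterable(GROUPS[n]["members"] for n in selected))
    effects = [GROUPS[n]["effect"] for n in selected]
    return characters, effects

available = extract_groups({"A", "B", "C", "D", "E"})   # ['fire_trio', 'ice_pair']
deck, effects = assemble_deck(available)
print(deck, effects)
```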
System and method for processing video to provide facial de-identification
A system and method for real-time image and video face de-identification that removes the identity of the subject while preserving the facial behavior is described. The facial features of the source face are replaced with those of the target face while the facial actions of the source face are preserved on the target face. The facial actions of the source face are transferred to the target face using personalized Facial Action Transfer (FAT), and the color and illumination are adapted. Finally, the source image or video containing the target facial features is output for display. The system can also run in real time.
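The pipeline can be read as a composition of three steps. The sketch below mirrors that composition with placeholder stubs; it does not implement FAT itself, and all function names and data shapes are assumptions.

```python
def detect_facial_actions(source_frame: dict) -> dict:
    """Placeholder: estimate facial action parameters (e.g. action units)
    from the source frame."""
    return {"smile": 0.7, "brow_raise": 0.2}

def apply_facial_actions(target_face: str, actions: dict) -> dict:
    """Placeholder for personalized Facial Action Transfer (FAT):
    re-pose the target face with the source's facial actions."""
    return {"face": target_face, **actions}

def match_color_and_illumination(rendered_face: dict, source_frame: dict) -> dict:
    """Placeholder: adapt colour and illumination to the source frame."""
    rendered_face["tone"] = source_frame.get("tone", "neutral")
    return rendered_face

def de_identify(source_frame: dict, target_face: str) -> dict:
    """Replace the source identity with the target face while
    preserving the source's facial behaviour."""
    actions = detect_facial_actions(source_frame)
    swapped = apply_facial_actions(target_face, actions)
    return match_color_and_illumination(swapped, source_frame)

print(de_identify({"tone": "warm"}, "synthetic_face_12"))
```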
Persistent customized social media environment
One or more persistent customized social media environments are created, allowing users to share content or an activity. The content or activity may comprise a shared media experience or a shared participatory experience. Each user accessing the environment uses a device, alone or in conjunction with other devices, to complete a sharing experience. A persistent customized social media environment definition establishes a user environment that provides social networking services as well as content sharing. Users who are connected to the persistent customized social media environment receive instant messages, while users who connect at a later time receive the messages once they enter the environment.
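The immediate-versus-deferred message behaviour could be modelled as below; the class, method names, and delivery mechanism are invented for illustration.

```python
from collections import defaultdict

class PersistentEnvironment:
    """Toy model of the message behaviour described above: connected users
    see messages immediately, later joiners receive them on entry."""

    def __init__(self) -> None:
        self.connected: set[str] = set()
        self.pending: dict[str, list[str]] = defaultdict(list)
        self.members: set[str] = set()

    def join(self, user: str) -> list[str]:
        self.members.add(user)
        self.connected.add(user)
        backlog, self.pending[user] = self.pending[user], []
        return backlog                      # messages sent while the user was away

    def leave(self, user: str) -> None:
        self.connected.discard(user)

    def post(self, sender: str, text: str) -> None:
        for user in self.members - {sender}:
            if user in self.connected:
                print(f"deliver to {user}: {text}")   # instant message
            else:
                self.pending[user].append(text)       # held until next entry

env = PersistentEnvironment()
env.join("alice"); env.join("bob"); env.leave("bob")
env.post("alice", "movie starts at 8")
print("bob's backlog on return:", env.join("bob"))
```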
Image processing method, avatar display adaptation method and corresponding image processing processor, virtual world server and communication terminal
When processing images in a virtual environment in which a plurality of avatars, each representing an associated user, move about, an image processing method is employed comprising the following stages: an adaptation request for the display of the avatars on the terminal of a given user is received, the request comprising at least one adaptation criterion for distinguishing the display of the avatars; the data representing the avatars is modified based on the adaptation criterion; and the modified data for an adapted display of the avatars is sent to the terminal of the given user. The display of avatars in a virtual environment may thereby be adapted. A corresponding image processing processor, virtual world server and communication terminal for implementing such methods are also provided.
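A small sketch of modifying avatar display data against one adaptation criterion before it is sent to the requesting user's terminal; the criterion name and the enlarge/highlight rule are illustrative assumptions.

```python
from dataclasses import dataclass, replace

@dataclass
class AvatarDisplayData:
    user_id: str
    group: str        # e.g. "friend", "stranger"
    scale: float = 1.0
    highlighted: bool = False

def adapt_avatars(avatars: list[AvatarDisplayData], criterion: str) -> list[AvatarDisplayData]:
    """Modify the avatar display data according to one adaptation criterion,
    here an illustrative rule that highlights and enlarges friends' avatars."""
    if criterion == "emphasize_friends":
        return [replace(a, scale=1.3, highlighted=True) if a.group == "friend" else a
                for a in avatars]
    return avatars

scene = [AvatarDisplayData("u1", "friend"), AvatarDisplayData("u2", "stranger")]
adapted = adapt_avatars(scene, "emphasize_friends")   # sent to the given user's terminal
print([(a.user_id, a.scale, a.highlighted) for a in adapted])
```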
In-Vehicle Gaming Systems and Methods
A gaming system of a vehicle includes: a game application embodying an interactive game and stored in memory; a sensor of the vehicle configured to determine a present condition while the vehicle is moving; and a gaming module of the vehicle configured to, while the vehicle is moving: execute the game application; display a virtual environment of the interactive game via one or more displays in the vehicle; output sound of the interactive game via one or more speakers in the vehicle; control action within the virtual environment of the interactive game based on user input received via one or more input devices of the vehicle; and adjust one or more characteristics of the virtual environment of the interactive game based on the present condition.
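One illustrative mapping from a sensed vehicle condition to virtual-environment characteristics; the specific signals (speed, wipers, steering angle) and the tuning constants are assumptions, not the claimed design.

```python
def adjust_environment(environment: dict, condition: dict) -> dict:
    """Illustrative mapping of a sensed vehicle condition (speed, weather,
    turning) onto characteristics of the game's virtual environment."""
    adjusted = dict(environment)
    adjusted["scroll_speed"] = condition["vehicle_speed_kmh"] / 10.0
    if condition["wipers_on"]:
        adjusted["weather"] = "rain"
    if abs(condition["steering_angle_deg"]) > 15:
        adjusted["camera_tilt"] = condition["steering_angle_deg"] * 0.5
    return adjusted

game_env = {"scroll_speed": 5.0, "weather": "clear", "camera_tilt": 0.0}
sensed = {"vehicle_speed_kmh": 80.0, "wipers_on": True, "steering_angle_deg": 20.0}
print(adjust_environment(game_env, sensed))
```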
Methods and apparatus for controlling an information processing system based on geographic position information
Provided is an information processing system that, while avoiding running out of storage capacity, increases the possibility that a program executed in the system can use data corresponding to a calculated position even when the system cannot receive data from a device. An item communication section (72) receives, from a server that stores data used by the program in association with positions, data associated with positions within a first-size area containing a position calculated by a positioning section (60). An item notification section (74) issues a notification when a position associated with data received by the item communication section (72) lies within a second-size area that contains the position calculated by the positioning section (60) and is smaller than the first-size area. An application executing section (66) executes the program using the data received by the item communication section (72).
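The two-area logic can be sketched as follows: data is fetched for a larger area around the calculated position, and a notification is raised only when cached data falls within the smaller area. The radii, item data, and flat-earth distance approximation are assumptions for illustration.

```python
import math

def within(pos: tuple, center: tuple, radius_m: float) -> bool:
    """Crude flat-earth distance test; adequate for small areas."""
    dx = (pos[0] - center[0]) * 111_000          # ~metres per degree latitude
    dy = (pos[1] - center[1]) * 111_000 * math.cos(math.radians(center[0]))
    return math.hypot(dx, dy) <= radius_m

# Hypothetical server data: item positions (lat, lon) -> payload.
SERVER_ITEMS = {(35.6600, 139.7000): "coin", (35.7200, 139.8000): "gem"}

FIRST_RADIUS_M = 5_000    # first-size area: data fetched and cached locally
SECOND_RADIUS_M = 300     # second-size area: triggers a notification

def update(calculated_position: tuple, cache: dict) -> list:
    # Item communication section: receive data associated with the first-size area.
    for pos, item in SERVER_ITEMS.items():
        if within(pos, calculated_position, FIRST_RADIUS_M):
            cache[pos] = item
    # Item notification section: notify when cached data lies in the second-size area.
    return [item for pos, item in cache.items()
            if within(pos, calculated_position, SECOND_RADIUS_M)]

cache = {}
print(update((35.6595, 139.7005), cache))   # ['coin'] once the position is close enough
```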