Patent classifications
A63F13/655
Systems and methods for presenting shared in-game objectives in virtual games
Method and system for presenting in-game objectives in a virtual game. For example, the method includes determining first real-world driving characteristics based upon first real-world telematics data of a first real-world user, determining second real-world driving characteristics based upon second real-world telematics data of a second real-world user, generating a shared virtual map based upon the first real-world driving characteristics and the second real-world driving characteristics, generating a shared in-game objective based upon the shared virtual map, presenting the shared in-game objective to a first virtual character associated with the first real-world driving characteristics of the first real-world user and a second virtual character associated with the second real-world driving characteristics of the second real-world user, and allowing the first virtual character and the second virtual character to accomplish the shared in-game objective in the shared virtual map.
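The pipeline in this abstract (telematics data → per-user driving characteristics → shared objective) can be sketched as follows. This is a minimal illustration, not the patented method: the `DrivingCharacteristics` fields, the sample format, and the objective thresholds are all hypothetical choices.

```python
from dataclasses import dataclass

@dataclass
class DrivingCharacteristics:
    """Hypothetical summary of one user's real-world telematics data."""
    avg_speed_kmh: float
    hard_brake_rate: float  # hard brakes per 100 samples

def characteristics_from_telematics(samples):
    """Reduce raw (speed_kmh, hard_brake_flag) samples to characteristics."""
    n = max(len(samples), 1)
    return DrivingCharacteristics(
        avg_speed_kmh=sum(speed for speed, _ in samples) / n,
        hard_brake_rate=100.0 * sum(1 for _, braked in samples if braked) / n,
    )

def shared_objective(a: DrivingCharacteristics,
                     b: DrivingCharacteristics) -> str:
    """Derive one shared in-game objective both virtual characters pursue."""
    combined_brake_rate = (a.hard_brake_rate + b.hard_brake_rate) / 2
    if combined_brake_rate > 5.0:
        return "Complete a lap together with zero hard brakes"
    return "Jointly maintain the speed limit for 10 km"
```

The same characteristics would also parameterise the shared virtual map (e.g. road layouts reflecting each user's typical routes), which is omitted here for brevity.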
Computing images of dynamic scenes
Computing an output image of a dynamic scene. A value of E is selected, a parameter describing the desired dynamic content of the scene in the output image. Using selected intrinsic camera parameters and a selected viewpoint, for individual pixels of the output image to be generated, the method computes a ray that goes from a virtual camera through the pixel into the dynamic scene. For individual rays, at least one point is sampled along the ray. For individual sampled points, a machine learning model is queried with the point, a viewing direction (being the direction of the corresponding ray), and E, to produce colour and opacity values at the sampled point with the dynamic content of the scene as specified by E. For individual rays, a volume rendering method is applied to the colour and opacity values computed along that ray to produce a pixel value of the output image.
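The per-ray procedure above (sample points, query a model for colour and opacity, volume-render) follows the standard alpha-compositing scheme used in neural radiance fields; a sketch for one ray is below. The `model(point, direction, E)` signature is an assumption standing in for the abstract's machine learning model.

```python
import numpy as np

def render_ray(model, origin, direction, E, t_vals):
    """Alpha-composite colour/opacity samples along one camera ray.

    `model(point, direction, E)` is the queried machine-learning model
    (hypothetical signature) returning (rgb, sigma) at a 3-D point,
    conditioned on viewing direction and dynamic-content parameter E.
    """
    points = origin + t_vals[:, None] * direction          # sample points on the ray
    rgbs, sigmas = zip(*(model(p, direction, E) for p in points))
    rgbs, sigmas = np.array(rgbs), np.array(sigmas)
    deltas = np.diff(t_vals, append=t_vals[-1] + 1e10)     # spacing between samples
    alphas = 1.0 - np.exp(-sigmas * deltas)                # opacity per segment
    trans = np.cumprod(np.append(1.0, 1.0 - alphas[:-1]))  # transmittance to each sample
    weights = alphas * trans
    return (weights[:, None] * rgbs).sum(axis=0)           # final pixel colour
```

Running this once per pixel ray, with rays derived from the selected intrinsics and viewpoint, yields the full output image.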
Identifying player engagement to generate contextual game play assistance
The present disclosure describes methods and systems directed towards identifying player engagement to generate contextual game play assistance. User gameplay information is monitored so that the user can be provided assistance within the video game where the user may have problems. User gameplay information is monitored in order to identify what type(s) of assistance could be provided to the user. The information can be based on the current level of frustration of the user with the video game.
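A trigger of the kind described (monitor gameplay events, estimate frustration, choose an assistance type) might look like the sketch below. The event names, weights, and threshold are illustrative assumptions, not values from the disclosure.

```python
def assistance_for(events, frustration_threshold=3):
    """Pick a contextual assistance type from recent gameplay events.

    `events` is a hypothetical list of event strings such as "death" or
    "repeat_section"; the scoring and thresholds are illustrative only.
    """
    # Weighted count of frustration signals observed in the session.
    frustration = events.count("death") + 2 * events.count("rage_quit_menu")
    if frustration >= frustration_threshold:
        if "repeat_section" in events:
            return "show_walkthrough_hint"   # player is stuck in one place
        return "offer_difficulty_adjustment"  # player is struggling broadly
    return None  # engagement is fine; no assistance needed
```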
Scavenger hunt facilitation
Methods and systems for facilitating a scavenger hunt. The systems and methods described herein involve receiving at an interface a list of a plurality of attractions, and communicating the list of the plurality of attractions to at least one device associated with a participant over a network. Scavenger hunt participants may then gather imagery of the required attractions. The systems and methods described herein then involve receiving imagery from the at least one participant and executing at least one computer vision procedure to determine whether the received imagery includes at least one of the plurality of attractions.
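The verification step (run a computer vision procedure on submitted imagery, check it against the attraction list) reduces to a set intersection once a detector is available. In this sketch, `classify` is a stand-in for whatever computer vision procedure is used; its label-set return type is an assumption.

```python
def verify_submission(image, attractions, classify):
    """Check whether a submitted image shows any required attraction.

    `classify(image)` is a hypothetical stand-in for the computer-vision
    procedure; it is assumed to return a set of label strings detected
    in the image.
    """
    detected = classify(image)
    matched = detected & set(attractions)  # attractions the image satisfies
    return (len(matched) > 0, matched)
```

In practice `classify` might wrap an object-detection or landmark-recognition model; the matching logic is unchanged either way.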
Image processing method and apparatus, computer device, and storage medium
An image processing method and apparatus, a computer device, and a computer-readable storage medium. The image processing method includes: displaying a first application page, the first application page including an original role object and a face fusion control; acquiring a user face image of a target user in a case that the face fusion control is triggered; and displaying a target role object on a second application page, the target role object being obtained by fusing the user face image and the original role object, a display angle of the target role object matching posture information of the target user, and the posture information of the target user being determined according to the user face image.
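Determining the display angle of the target role object from the user face image implies some pose estimation. A coarse sketch under stated assumptions: `landmarks` is a hypothetical dict of 2-D facial points, and the geometry (nose offset relative to the eye line approximating head yaw) is a simplification, not the disclosed method.

```python
def display_angle_from_face(landmarks):
    """Estimate a yaw angle (degrees) for the role object from 2-D face landmarks.

    `landmarks` is a hypothetical dict with "left_eye", "right_eye", and
    "nose" (x, y) points; the geometry here is a coarse approximation.
    """
    lx, _ = landmarks["left_eye"]
    rx, _ = landmarks["right_eye"]
    nx, _ = landmarks["nose"]
    mid = (lx + rx) / 2.0
    eye_span = max(rx - lx, 1e-6)
    # Nose offset from the eye midpoint, normalised by eye span, maps
    # roughly onto head yaw; clamp to +/-45 degrees for display.
    return max(-45.0, min(45.0, 90.0 * (nx - mid) / eye_span))
```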
Rendering method for drone game
A rendering method for a drone game includes the following steps. Firstly, a drone, a control device, a display device and an information node are provided. The drone includes a plurality of cameras. Then, a plurality of images acquired from the plurality of cameras of the drone are stitched as a panoramic image by the control device, and the panoramic image is displayed on the display device. Then, a ready signal is issued from the information node to the display device, and the control device accesses the drone game through an authorization of the information node in response to the ready signal. Then, at least one virtual object is generated in the panoramic image. Consequently, the sound, light and entertainment effects of the drone game are effectively enhanced, and the fun and diversity of the drone game are increased.
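The stitching and overlay steps can be sketched as below. Real multi-camera stitching uses feature matching and warping (e.g. OpenCV's Stitcher); this placeholder assumes pre-aligned, same-height frames, and the sprite overlay is a hypothetical stand-in for generating a virtual object in the panorama.

```python
import numpy as np

def naive_panorama(frames):
    """Naively concatenate camera frames side by side.

    A real pipeline would use feature-based stitching; this placeholder
    assumes the drone's cameras produce pre-aligned, same-height frames.
    """
    return np.concatenate(frames, axis=1)

def overlay_virtual_object(panorama, sprite, x, y):
    """Draw a virtual-object sprite into the panoramic image in place."""
    h, w = sprite.shape[:2]
    panorama[y:y + h, x:x + w] = sprite
    return panorama
```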
Information processing apparatus and application image distribution method
A game image generating section 120 generates a first image and a second image of an application. An image providing section 152 provides the first image to an output apparatus 4. A sharing processing section 160 streams the second image to a sharing server. A display image generating section 150 may generate a display image including at least the first image and information associated with the second image. In addition, the sharing processing section 160 may instruct the application to generate the second image in response to a request from a viewing user.