Patent classifications
A63F2300/1093
Visual target tracking
A method of tracking a target includes classifying a pixel having a pixel address with one or more pixel cases. The pixel is classified based on one or more observed or synthesized values. An example of an observed value for a pixel address includes an observed depth value obtained from a depth camera. Examples of synthesized values for a pixel address include a synthesized depth value calculated by rasterizing a model of the target; one or more body-part indices estimating a body part corresponding to that pixel address; and one or more player indices estimating a target corresponding to that pixel address. One or more force vectors are calculated for the pixel based on the pixel case, and the force vector is mapped to one or more force-receiving locations of the model representing the target to adjust the model representing the target into an adjusted pose.
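The per-pixel flow described above (compare observed vs. synthesized depth, assign a pixel case, derive a force, map it to the model) can be sketched as follows. This is a minimal illustration under assumed conventions: the pixel-case names, the tolerance value, and the simple scalar depth-difference force are invented for the example, not taken from the patent.

```python
# Sketch of per-pixel classification and force accumulation. Inputs are
# hypothetical: dicts mapping pixel addresses to an observed depth (from a
# depth camera), a synthesized depth (from rasterizing the model), and a
# body-part index.

def classify_pixel(observed_z, synthesized_z, tolerance=0.05):
    """Assign a pixel case by comparing observed and synthesized depth."""
    if observed_z is None:
        return "no-observation"
    if synthesized_z is None:
        return "grow"          # target observed where the model has no surface
    delta = observed_z - synthesized_z
    if abs(delta) <= tolerance:
        return "refine"        # model roughly matches the observation
    return "pull" if delta < 0 else "push"

def pixel_force(case, observed_z, synthesized_z):
    """Signed force along the camera axis for this pixel case."""
    if case in ("no-observation", "grow"):
        return 0.0
    return observed_z - synthesized_z   # correction toward the observation

def accumulate_forces(observed, synthesized, body_part):
    """Map per-pixel forces to force-receiving body parts of the model."""
    forces = {}
    for addr, obs_z in observed.items():
        syn_z = synthesized.get(addr)
        case = classify_pixel(obs_z, syn_z)
        f = pixel_force(case, obs_z, syn_z)
        part = body_part.get(addr)
        if part is not None and f != 0.0:
            forces[part] = forces.get(part, 0.0) + f
    return forces

observed = {(0, 0): 2.0, (0, 1): 2.5, (1, 0): 1.0}
synthesized = {(0, 0): 2.04, (0, 1): 2.0, (1, 0): 1.5}
body_part = {(0, 0): "torso", (0, 1): "torso", (1, 0): "left_arm"}
print(accumulate_forces(observed, synthesized, body_part))
```

Summing the per-pixel forces per body part yields one aggregate force vector per force-receiving location, which a pose solver could then use to adjust the model toward the observation.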
Crane Machine with Camera and Visual Targeting Assistance
Game machine having a housing with a number of upright walls, said housing being adapted to accommodate playing means, for instance pick-up means such as a claw. The claw has a downward-pointing camera mounted in it. The image from the camera is shown on the back wall of the machine so that the customer can see it as the claw descends toward the prizes.
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND COMPUTER PROGRAM
An information processing apparatus includes: a storage block configured to store at least one of (a) a correlation between a pattern of a vibration detected on a head-mounted display as a result of a predetermined contact action and a type of data processing, and (b) a correlation between a pattern of an image outputted from an image-taking apparatus in response to a predetermined gesture action and a type of data processing; an acquisition block configured to acquire at least one of information related to the detected vibration and the outputted image; and a data processing block configured to execute, according to the correlations stored in the storage block, at least one of data processing of the type corresponding to the detected vibration and data processing of the type corresponding to the outputted image.
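The stored correlations amount to lookup tables from a detected pattern to a type of data processing. A minimal sketch of that dispatch is below; the pattern names ("single-tap", "swipe-left", etc.) and the resulting actions are illustrative assumptions, not taken from the patent.

```python
# Hypothetical correlation tables: detected pattern -> data processing routine.
vibration_actions = {
    "single-tap": lambda: "pause playback",
    "double-tap": lambda: "next item",
}
gesture_actions = {
    "swipe-left": lambda: "previous item",
    "swipe-right": lambda: "next item",
}

def process_event(vibration_pattern=None, gesture_pattern=None):
    """Execute the data processing correlated with each detected pattern."""
    results = []
    if vibration_pattern in vibration_actions:
        results.append(vibration_actions[vibration_pattern]())
    if gesture_pattern in gesture_actions:
        results.append(gesture_actions[gesture_pattern]())
    return results

print(process_event(vibration_pattern="double-tap", gesture_pattern="swipe-left"))
```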
Method for realizing user interface using camera and mobile communication terminal for the same
A method for realizing a user interface using a camera module and a mobile communication terminal for the same. If a user makes a predetermined motion while the camera module of the mobile communication terminal is activated, the mobile communication terminal recognizes the user's motion, converts it into a motion pattern, and performs a predetermined action according to that pattern. The action performed according to the motion pattern corresponds to mouse control in a mouse mode, game control in a game mode, and character input in a character input mode.
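The key idea is that one recognized motion pattern maps to different actions depending on the current mode. A minimal sketch, in which the mode names, pattern names, and actions are all illustrative assumptions:

```python
# Hypothetical mode-dependent action table: mode -> motion pattern -> action.
ACTION_TABLE = {
    "mouse": {"move-right": "cursor right", "circle": "click"},
    "game":  {"move-right": "steer right",  "circle": "fire"},
    "input": {"circle": "letter O"},
}

def interpret(mode, motion_pattern):
    """Map a recognized motion pattern to an action for the current mode."""
    return ACTION_TABLE.get(mode, {}).get(motion_pattern, "no action")

print(interpret("mouse", "circle"))  # same pattern, different meaning...
print(interpret("game", "circle"))   # ...in a different mode
```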
Supplemental content on a mobile device
Methods, systems, devices, and software are described for providing supplemental content for presentation on a mobile device that identifies a video and a portion within the video. In one embodiment, a system includes a mobile device with an integrated video camera that tracks a display playing a movie. The mobile device automatically identifies the current scene in the movie and then accesses supplemental non-video content related to the identified scene. The accessed supplemental non-video content (e.g., audio, tactile, olfactory data) is then presented to the user at the same time the movie is played.
Driving simulator control with virtual skeleton
Depth-image analysis is performed with a device that analyzes a human target within an observed scene by capturing depth-images that include depth information from the observed scene. The human target is modeled with a virtual skeleton including a plurality of joints. The virtual skeleton is used as an input for controlling a driving simulation.
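One plausible way a virtual skeleton could drive the simulation is to derive a steering angle from the positions of two joints, as if the hands were holding a wheel. The joint names, coordinate convention (screen y grows downward), and angle convention below are assumptions for illustration:

```python
import math

def steering_angle(joints):
    """Angle (degrees) of the line between the two hand joints.

    0 means the hands are level; positive means the right hand is lower,
    i.e. the virtual wheel is turned clockwise.
    """
    lx, ly = joints["left_hand"]
    rx, ry = joints["right_hand"]
    # Screen y grows downward, so (ry - ly) > 0 when the right hand is lower.
    return math.degrees(math.atan2(ry - ly, rx - lx))

level = {"left_hand": (0.3, 0.5), "right_hand": (0.7, 0.5)}
turned = {"left_hand": (0.3, 0.4), "right_hand": (0.7, 0.6)}
print(round(steering_angle(level), 1))   # hands level -> 0.0
print(round(steering_angle(turned), 1))  # right hand lower -> positive angle
```

Other joints could feed other controls in the same way, e.g. the distance between a foot joint and the hip driving a throttle value.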
IMMERSIVE STORYTELLING ENVIRONMENT
Techniques for providing an immersive storytelling experience using a plurality of storytelling devices. Each of the storytelling devices may be configured to perform one or more actions based on a current context of a story and in response to a stimulus event. The actions include at least one of an auditory and a visual effect. Embodiments also provide a controller device configured to manage a playback of the story, by adjusting the stimulus event and the effect of each of the plurality of storytelling devices, based on the current context of the story, thereby creating an interactive and immersive storytelling experience.
Safety scheme for gesture-based game
Technologies are generally described for providing a notification to a player playing a gesture-based game of a potentially dangerous condition. In some examples, a safety component of a gesture-based game system includes a gesture range determination unit configured to determine a gesture range associated with a gesture-based game; a detection unit configured to detect a movement of an object; and an alarm unit configured to generate an alarm in response to a determination that the movement of the object falls within the gesture range.
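The safety check reduces to testing whether a detected object's position falls inside the gesture range around the player. A minimal sketch, assuming a circular range model (the shape of the range, the radius value, and the object names are illustrative, not from the patent):

```python
import math

def in_gesture_range(player_pos, object_pos, radius):
    """True when the object lies within the circular gesture range."""
    dx = object_pos[0] - player_pos[0]
    dy = object_pos[1] - player_pos[1]
    return math.hypot(dx, dy) <= radius

def check_alarm(player_pos, tracked_objects, radius=1.5):
    """Return the tracked objects that should trigger an alarm."""
    return [name for name, pos in tracked_objects.items()
            if in_gesture_range(player_pos, pos, radius)]

objects = {"chair": (1.0, 0.5), "lamp": (3.0, 3.0)}
print(check_alarm((0.0, 0.0), objects))  # -> ['chair']
```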
System for interactive image based game
A method for real time processing of images providing a real time preview of face images for interaction with a game environment. Images from one or more image sensors are processed into a modified image of a certain size and shape, in which faces are algorithmically detected and tracked for a real time preview of face images overlaid on a game environment. The real time preview of face images is configured for user control for interaction with the game environment. A facial gesture control functionality is disclosed for the execution of game commands, allowing control of the real time preview of face images for interaction with the game environment. The method further discloses real time processing of images from a second image sensor for a real time preview of a second set of face images for interaction with the game environment, and real time processing of images from a first and a second image sensor for a real time preview of face images overlaid on a real time preview of background images.
Controller for interfacing with a computing program using position, orientation, or motion
A method for determining the position of a controller device comprises: receiving dimensions of the display input by a user of the computer-based system; capturing successive images of the display at the controller device; determining a position of the controller device relative to the display based on the dimensions of the display and a perspective distortion of the display in the captured successive images of the display; and providing the determined position of the controller to the computer-based system to interface with the interactive program and cause an action by the interactive program.
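The core geometric step can be illustrated with the pinhole-camera relation: given the display's real width W (entered by the user), its apparent width w in the captured image, and the camera's focal length f in pixels, the distance is Z = f * W / w. The focal length value below is an assumed camera parameter, not from the patent, and a full implementation would use the perspective distortion of all four display edges to recover orientation and lateral offset as well.

```python
def distance_to_display(display_width_m, apparent_width_px, focal_length_px):
    """Pinhole estimate of camera-to-display distance: Z = f * W / w."""
    return focal_length_px * display_width_m / apparent_width_px

# A 1.0 m wide display appearing 500 px wide to a camera with an assumed
# 800 px focal length is estimated at 1.6 m away; as the controller moves
# closer, the apparent width grows and the estimate shrinks.
print(distance_to_display(1.0, 500.0, 800.0))  # -> 1.6
print(distance_to_display(1.0, 800.0, 800.0))  # -> 1.0
```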