A TOY SYSTEM FOR AUGMENTED REALITY
20220096947 · 2022-03-31
CPC classification
A63H33/042 · HUMAN NECESSITIES
A63F13/30 · HUMAN NECESSITIES
A63H33/086 · HUMAN NECESSITIES
Abstract
A toy system, the toy system comprising a data processing system and one or more toys, the one or more toys including at least one reference toy having a visual appearance recognisable by the data processing system in a captured image of a real-world scene including said at least one reference toy, the data processing system comprising an image capturing device, a processing unit, and a display, wherein the data processing system has stored thereon information associated with the at least one reference toy, the information including at least one predetermined reference position defined relative to the at least one reference toy; and wherein the data processing system is configured to: capture a sequence of images of a real-world scene, the real-world scene including said at least one reference toy; process the captured images to detect and recognize said at least one reference toy within the real-world scene; retrieve the at least one predetermined reference position from the stored information associated with the recognized at least one reference toy; process the captured images to identify the at least one predetermined reference position within at least a first image of the sequence of captured images; selectively process a sub-image within the first image, the sub-image depicting said identified reference position, to selectively detect a user manipulation of the real-world scene at the identified at least one predetermined reference position; and, responsive to detecting the user manipulation, generate and render computer-generated perceptual information associated with the detected user manipulation.
Claims
1. A toy system, the toy system comprising a data processing system and one or more toys, the one or more toys including at least one reference toy having a visual appearance recognisable by the data processing system in a captured image of a real-world scene including said at least one reference toy, the data processing system comprising an image capturing device, a processing unit, and a display, wherein the data processing system has stored thereon information associated with the at least one reference toy, the information including at least one predetermined reference position defined relative to the at least one reference toy; and wherein the data processing system is configured to: capture a sequence of images of a real-world scene, the real-world scene including said at least one reference toy; process the captured images to detect and recognize said at least one reference toy within the real-world scene; retrieve the at least one predetermined reference position from the stored information associated with the recognized at least one reference toy; process the captured images to identify the at least one predetermined reference position within at least a first image of the sequence of captured images; selectively process a sub-image within the first image, the sub-image depicting said identified reference position, wherein the sub-image has a shape and size smaller than the first image, to selectively detect a user manipulation of the real-world scene at the identified at least one predetermined reference position; responsive to detecting the user manipulation, generate and render computer-generated perceptual information associated with the detected user manipulation.
2. A toy system according to claim 1, wherein the data processing system is configured to: generate and render computer-generated perceptual information prompting the user to manipulate the real-world scene at the identified at least one reference position; process the captured images to detect a user manipulation of the real-world scene at the identified at least one predetermined reference position; responsive to detecting the user manipulation, generate and render computer-generated perceptual information associated with the detected user manipulation.
3. (canceled)
4. A toy system according to claim 2, wherein the data processing system is configured to selectively look for a detectable user manipulation at the reference position during a limited time window after prompting the user to perform the manipulation at the reference position.
5. A toy system according to claim 4, wherein the data processing system is configured to create and render computer-generated content depending on whether a user manipulation at the reference position has been detected within the time window or not.
6. A toy system according to claim 4, wherein the detected user manipulation includes one or more of the following user manipulations: moving an element of the reference toy, positioning an object at the reference position, moving an object away from the reference position, changing the orientation of an object at the reference position.
7. A toy system according to claim 4, wherein detecting the user manipulation comprises providing the sub-image as an input to a computer vision process, in particular a feature detection process or an object recognition process.
8. A toy system according to claim 7, wherein the computer vision process is an object recognition process based on a neural network.
9. A toy system according to claim 1, wherein the reference toy is a toy construction model constructed from a plurality of toy construction elements.
10. A toy system according to claim 1, wherein the data processing system is configured to selectively only detect one or a predetermined set of types of user interactions.
11. A toy system, the toy system comprising a data processing system and one or more toys, the one or more toys including at least one reference toy having a visual appearance recognisable by the data processing system in a captured image of a real-world scene including said at least one reference toy, the data processing system comprising an image capturing device, a processing unit, and a display, wherein the data processing system has stored thereon information associated with the at least one reference toy, the information including at least one predetermined reference position defined relative to the at least one reference toy; and wherein the data processing system is configured to: capture a sequence of images of a real-world scene, the real-world scene including said at least one reference toy; process the captured images to detect and recognize said at least one reference toy within the real-world scene; retrieve the at least one predetermined reference position from the stored information associated with the recognized at least one reference toy; process the captured images to identify the at least one predetermined reference position within at least a first image of the sequence of captured images; generate and render computer-generated perceptual information prompting the user to manipulate the real-world scene at the identified at least one reference position; process the captured images to detect a user manipulation of the real-world scene at the identified at least one predetermined reference position; responsive to detecting the user manipulation, generate and render computer-generated perceptual information associated with the detected user manipulation; wherein the data processing system is configured to selectively look for a detectable user manipulation at the reference position during a limited time window after prompting the user to perform the manipulation at the reference position.
12. A toy system according to claim 11, wherein the data processing system is configured to create and render computer-generated content depending on whether a user manipulation at the reference position has been detected within the time window or not.
13. A toy system according to claim 11, wherein the detected user manipulation includes one or more of the following user manipulations: moving an element of the reference toy, positioning an object at the reference position, moving an object away from the reference position, changing the orientation of an object at the reference position.
14. A toy system according to claim 11, wherein the reference toy is a toy construction model constructed from a plurality of toy construction elements.
15. A toy system according to claim 11, wherein the data processing system is configured to selectively only detect one or a predetermined set of types of user interactions.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION
[0094] Various aspects and embodiments of toy construction systems disclosed herein will now be described with reference to toy construction elements in the form of bricks. However, the invention may be applied to other forms of toy construction elements and other forms of toys.
[0097] The toy system further comprises a reference toy 440. In this example the reference toy 440 is a toy construction model constructed from a plurality of toy construction elements, e.g. toy construction elements of the type described above.
[0098] The display 411 is operatively coupled to (e.g. integrated into) the tablet computer 410, and operable to display, under the control of the processing unit of the tablet computer 410, a video image.
[0099] The digital camera 412 is a video camera operable to capture video images of a real-world scene 430. In this example, the digital camera is integrated into the tablet computer 410.
[0100] The digital camera 412 captures video images of the real-world scene 430, and the tablet computer displays the captured video images on the display 411.
[0101] The captured video images are displayed by the tablet computer 410 on its display 411. Therefore, a user may move the reference toy 440 around and/or otherwise manipulate the reference toy 440 within the field of view 420 of the digital camera 412 and view live video images from the digital camera 412 of the reference toy and at least parts of the real-world scene 430. Alternatively or additionally, the user may change the position and/or orientation of the digital camera so as to capture images of a (e.g. stationary) reference toy from different positions. Additionally, the computer may be operable to store the captured video images on a storage device, such as an internal or external memory, of the computer, and/or forward the captured video to another computer, e.g. via a computer network. For example, the computer may be operable to upload the captured video images to a website.
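Purely by way of illustration, such capture, live display and storage might be sketched as follows using OpenCV; the device index, codec, frame size and file name are assumptions of this sketch, not features of the system described above:

    # Sketch: capture frames from a camera, show a live view and store them
    # as a video file. Device index, codec, frame size and file name are
    # illustrative placeholders.
    import cv2

    cap = cv2.VideoCapture(0)                    # built-in camera, analogous to camera 412
    fourcc = cv2.VideoWriter_fourcc(*"mp4v")     # codec for an .mp4 container
    out = cv2.VideoWriter("capture.mp4", fourcc, 30.0, (1280, 720))

    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        frame = cv2.resize(frame, (1280, 720))   # match the writer's frame size
        out.write(frame)                         # store the captured frame
        cv2.imshow("live", frame)                # live view of the real-world scene
        if cv2.waitKey(1) & 0xFF == ord("q"):    # press 'q' to stop recording
            break

    cap.release()
    out.release()
    cv2.destroyAllWindows()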
[0102] The tablet computer 410 is suitably programmed to execute an AR-enabled digital game, during which the computer performs image processing on the captured video images so as to detect the reference toy 440 within the captured video image. Responsive to the detected reference toy, the computer may be programmed to generate a modified video image, e.g. a video image formed as the captured video image having a computer-generated image overlaid on it, e.g. a video image wherein at least a part of the captured video image is replaced by a computer-generated image. The computer 410 is operable to display the modified video image on the display 411. For the purpose of the present description, a computer operable to implement AR functionality operatively connected to a video camera and a display will also be referred to as an AR system.
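A minimal sketch of how such a modified video image might be formed, assuming NumPy and an RGBA overlay whose content and placement are hypothetical:

    # Sketch: form a modified video image by alpha-blending a computer-generated
    # RGBA overlay into a region of the captured frame. Assumes the overlay fits
    # entirely within the frame at the given position.
    import numpy as np

    def composite(frame, overlay_rgba, x, y):
        """Blend an RGBA overlay into the captured frame at pixel (x, y)."""
        h, w = overlay_rgba.shape[:2]
        roi = frame[y:y + h, x:x + w].astype(np.float32)
        rgb = overlay_rgba[:, :, :3].astype(np.float32)
        alpha = overlay_rgba[:, :, 3:4].astype(np.float32) / 255.0
        blended = alpha * rgb + (1.0 - alpha) * roi   # per-pixel mix of CG and camera image
        frame[y:y + h, x:x + w] = blended.astype(np.uint8)
        return frame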
[0103] Image processing methods for detecting AR markers and for generating modified video images responsive to detected objects are known as such in the art (see e.g. Daniel Wagner and Dieter Schmalstieg, “ARToolKitPlus for Pose Tracking on Mobile Devices”, Computer Vision Winter Workshop 2007, Michael Grabner, Helmut Grabner (eds.), St. Lambrecht, Austria, February 6-8, Graz Technical University).
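ARToolKitPlus itself is a C++ library; as an analogous illustration only, marker detection and pose estimation might be sketched with OpenCV's ArUco module (OpenCV 4.7+ API), with placeholder camera intrinsics and marker size:

    # Sketch: marker-based pose tracking analogous to the cited ARToolKit
    # approach, using OpenCV's ArUco module. Camera intrinsics and marker side
    # length are placeholder values; a real system would use calibrated data.
    import cv2
    import numpy as np

    K = np.array([[800.0, 0, 640], [0, 800.0, 360], [0, 0, 1]])  # assumed intrinsics
    dist = np.zeros(5)                                           # assume no lens distortion
    s = 0.05                                                     # marker side length in metres
    obj = np.array([[-s/2,  s/2, 0], [ s/2,  s/2, 0],
                    [ s/2, -s/2, 0], [-s/2, -s/2, 0]], dtype=np.float32)

    detector = cv2.aruco.ArucoDetector(
        cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50),
        cv2.aruco.DetectorParameters())

    def marker_pose(frame):
        """Return (rvec, tvec) of the first detected marker, or None."""
        corners, ids, _ = detector.detectMarkers(frame)
        if ids is None:
            return None
        ok, rvec, tvec = cv2.solvePnP(obj, corners[0].reshape(4, 2), K, dist)
        return (rvec, tvec) if ok else None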
[0105] Once the computer has recognized the reference toy, the user may manipulate the physical reference toy within the field of view of the digital camera, e.g. by moving and/or rotating the physical reference toy. The computer 410 tracks the position and orientation of the recognized reference toy. The computer displays the live video feed of the video camera on the display 411 and adds, responsive to the detected position and orientation of the reference toy, augmented reality special effects to the live video feed.
[0107] In initial step S1, the process recognizes a reference toy in one or more captured video images received from a digital camera, e.g. from the built-in camera 412 of the tablet computer 410. To this end, the process may initially allow the user to select one of a plurality of available reference toys, e.g. in an on-screen selection menu. In some embodiments, the process may optionally display building instructions for constructing the reference toy from toy construction elements of a toy construction set.
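As an illustrative sketch only of such a recognition step, a trained image classifier (cf. the neural-network-based object recognition of claim 8) might be applied to the captured frames as follows; the model file, class labels and confidence threshold are hypothetical:

    # Sketch of step S1: recognise the selected reference toy in captured
    # frames with an image classifier. The ONNX model file and label set are
    # hypothetical; any trained recognition network could be used.
    import cv2
    import numpy as np

    net = cv2.dnn.readNetFromONNX("toy_classifier.onnx")    # hypothetical model
    LABELS = ["castle_model", "pirate_ship", "background"]  # hypothetical classes

    def recognise(frame, threshold=0.8):
        """Return the recognised toy label, or None if confidence is too low."""
        blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (224, 224), swapRB=True)
        net.setInput(blob)
        scores = net.forward().flatten()
        probs = np.exp(scores) / np.exp(scores).sum()        # softmax over logits
        i = int(np.argmax(probs))
        if probs[i] >= threshold and LABELS[i] != "background":
            return LABELS[i]
        return None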
[0108] The user may then place the reference toy on a table or other surface and direct the digital camera to capture video images of the reference toy. During the initial recognition step, the computer may display a frame, object outline or other visual guides in addition to the live video feed in order to aid the user in properly directing the digital camera.
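A minimal sketch of such a visual guide, assuming OpenCV; the guide geometry and prompt text are illustrative:

    # Sketch: draw a visual guide over the live feed to help the user aim the
    # camera at the reference toy. Geometry and text are illustrative.
    import cv2

    def draw_guide(frame):
        h, w = frame.shape[:2]
        # Target frame within which the reference toy should appear.
        cv2.rectangle(frame, (w // 4, h // 4), (3 * w // 4, 3 * h // 4), (0, 255, 0), 2)
        cv2.putText(frame, "Place the model inside the frame",
                    (w // 4, h // 4 - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 0), 2)
        return frame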
[0111] Once the process has recognized the reference toy, the process proceeds to step S2, where the process enters a game mode in which the process receives captured video images from the digital camera in real time. The process tracks the position and orientation of the recognized reference toy in the captured images, creates computer-generated content, such as graphics, and displays the captured live video overlaid (i.e. augmented) with the generated content. The generated content may also be generated responsive to in-game events, e.g. user inputs to the computer, game level, etc.
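The game-mode loop of step S2 might, as a sketch, be structured as follows, reusing the hypothetical marker_pose() helper from the earlier sketch; render_effects() stands in for the game's content renderer:

    # Sketch of the step S2 game mode: track the recognised toy in every frame
    # and augment the live feed. marker_pose() is the hypothetical pose helper
    # sketched earlier; render_effects() stands in for the game's renderer.
    import cv2

    def game_mode(cap, game_state, render_effects):
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            pose = marker_pose(frame)          # track position and orientation
            if pose is not None:
                rvec, tvec = pose
                frame = render_effects(frame, rvec, tvec, game_state)  # augment
            cv2.imshow("AR game", frame)
            if cv2.waitKey(1) & 0xFF == 27:    # Esc leaves the game mode
                break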
[0113] In step S3, e.g. responsive to a game event, the process prompts the user to manipulate the reference toy at one or more of the reference positions. To this end, the process may create and render content, such as sound or graphical content.
[0115] At step S4, the process selectively processes a sub-image within the captured images, the sub-image depicting the identified reference position, so as to selectively detect the prompted user manipulation of the real-world scene at the identified reference position.
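As an illustration of this selective processing, the stored 3D reference position might be projected into the current frame and a small window cropped around it; the coordinates, window size and pose inputs below are assumptions:

    # Sketch of step S4: project a stored 3D reference position (defined
    # relative to the toy) into the current frame and crop a small sub-image
    # around it for selective processing. Coordinates and window size are
    # example values.
    import cv2
    import numpy as np

    REF_POS = np.array([[0.0, 0.10, 0.0]])   # example reference position, toy coordinates (m)

    def crop_sub_image(frame, rvec, tvec, K, dist, half=48):
        """Return the sub-image around the projected reference position, or None."""
        pts, _ = cv2.projectPoints(REF_POS, rvec, tvec, K, dist)
        u, v = pts.reshape(2).astype(int)
        h, w = frame.shape[:2]
        if not (half <= u < w - half and half <= v < h - half):
            return None                       # reference position not fully in view
        return frame[v - half:v + half, u - half:u + half]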
[0116] When the process has detected the prompted manipulation, e.g. the positioning of a physical figurine at the reference position, the process proceeds to step S5; otherwise the process proceeds to step S6.
[0117] At step S5, i.e. responsive to detecting the manipulation, the process generates and renders appropriate computer-generated AR content.
[0120] At step S6, i.e. when the process has not yet detected the manipulation as prompted (e.g. the positioning of a figurine at the reference position as prompted), the process reacts accordingly. For example, the process may determine whether a timer, started when the user was prompted at step S3, has expired; if it has not, the process may return to step S4 and continue to look for the prompted manipulation.
[0121] If the timer has expired, the process may proceed to step S7 and create and render computer-generated content reflecting the failure to perform the task the user was prompted to do.
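The time-window logic of steps S4 through S7 might, as a sketch, look as follows; detect_manipulation() stands in for any of the detection approaches discussed here, and the window length is illustrative:

    # Sketch of steps S4-S7: look for the prompted manipulation only during a
    # limited time window, then branch on success or failure.
    import time

    def await_manipulation(detect_manipulation, window_s=10.0):
        """Return True if the manipulation is seen before the timer expires."""
        deadline = time.monotonic() + window_s   # timer started at the prompt (step S3)
        while time.monotonic() < deadline:
            if detect_manipulation():            # step S4: selective detection
                return True                      # -> step S5: success content
            time.sleep(0.03)                     # roughly one video frame
        return False                             # -> step S7: failure content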
[0122] The process then proceeds with the digital game, e.g. by returning to step S3 and prompting the user to perform another manipulation action.
[0123] It will be appreciated that many variations of the above process are possible. For example, the detectable user manipulation of the physical reference toy need not be the addition of a figurine or other object, but may involve another type of manipulation of the physical reference toy.
[0124] Also, the detection of the manipulation may not require a complex object recognition process but may simply involve detecting a dominant color or texture in the sub-image. For example, the process may simply determine whether the dominant color within the sub-image matches an expected color, e.g. the color of the figurine the user was prompted to position at the reference position.
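A sketch of such a simple color-based test, assuming OpenCV; the expected color and tolerance are illustrative:

    # Sketch: detect a manipulation from the dominant color of the sub-image
    # alone, without full object recognition. Expected color and tolerance are
    # illustrative (BGR channel order, as used by OpenCV).
    import cv2
    import numpy as np

    def dominant_color_matches(sub_image, expected_bgr=(40, 40, 200), tol=60.0):
        """True if the sub-image's mean color is close to the expected color."""
        mean_bgr = np.array(cv2.mean(sub_image)[:3])  # average B, G, R values
        return np.linalg.norm(mean_bgr - np.array(expected_bgr)) < tol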
[0126] The reference toy 1040 defines three reference positions, each reference position having a respective sub-image associated with it, e.g. a sub-image surrounding or otherwise in a fixed spatial relationship with the reference position.
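Such stored information might, purely as an illustration, be organised as follows; all names and coordinates are hypothetical:

    # Sketch: stored information for a reference toy defining several reference
    # positions, each with its own sub-image window, along the lines of the
    # three positions of reference toy 1040. Names and coordinates are
    # illustrative placeholders.
    REFERENCE_TOYS = {
        "castle_model": {
            "reference_positions": [
                # (x, y, z) in toy coordinates (m), plus the sub-image half-size (px)
                {"name": "gate",   "xyz": (0.00, 0.05, 0.00),  "half": 48},
                {"name": "tower",  "xyz": (0.04, 0.12, 0.00),  "half": 32},
                {"name": "bridge", "xyz": (-0.06, 0.02, 0.03), "half": 40},
            ],
        },
    }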
[0129] In the claims enumerating several means, several of these means can be embodied by one and the same element, component or item of hardware. The mere fact that certain measures are recited in mutually different dependent claims or described in different embodiments does not indicate that a combination of these measures cannot be used to advantage.
[0130] It should be emphasized that the term “comprises/comprising” when used in this specification is taken to specify the presence of stated features, elements, steps or components but does not preclude the presence or addition of one or more other features, elements, steps, components or groups thereof.