SYSTEMS AND METHODS FOR CREATING A PLAYABLE VIDEO GAME FROM A THREE-DIMENSIONAL MODEL
20200316475 · 2020-10-08
Inventors
CPC classification
A63F13/214
HUMAN NECESSITIES
A63F13/60
HUMAN NECESSITIES
G05B2219/23258
PHYSICS
A63F13/63
HUMAN NECESSITIES
A63F13/213
HUMAN NECESSITIES
A63F13/65
HUMAN NECESSITIES
G05B2219/13144
PHYSICS
G05B2219/23291
PHYSICS
International classification
A63F13/63
HUMAN NECESSITIES
A63F13/213
HUMAN NECESSITIES
A63F13/60
HUMAN NECESSITIES
A63F13/65
HUMAN NECESSITIES
Abstract
Systems and methods for creating a playable video game, or playable video game levels, from a three-dimensional model comprising a plurality of various-colored blocks disposed on a grid. A set of software modules processes a digital image of the static model to translate its component elements into video game elements in a level file, which may then be played using a game driver.
Claims
1. A method of creating a playable video game comprising: providing a computer; at said computer: receiving an image of a design board comprising a plurality of recesses arranged in a grid and having at least one block having a color disposed in a recess of said plurality of recesses; recognizing in said received image a first block of said at least one block; determining in said received image the location in said grid of said first block; determining in said received image said color of said first block; translating, according to a glyph language, said determined color to a first functional video game element; and generating a video game level which, when rendered as a playable video game, implements said first functional video game element at a location in said playable video game corresponding to said determined location.
2. The method of claim 1, further comprising capturing said image.
3. The method of claim 2, further comprising capturing said image using an image processing system.
4. The method of claim 3, wherein said image processing system comprises a digital camera.
5. The method of claim 3, wherein said image processing system is integrated into said computer.
6. The method of claim 1, wherein said computer comprises a mobile device or tablet computer.
7. The method of claim 1, wherein said grid is arranged in a 13×13 array.
8. A non-transitory computer-readable medium having computer-readable instructions stored thereon which, when executed by a microprocessor, perform the steps of: receiving an image of a design board comprising a plurality of recesses arranged in a grid and having at least one block having a color disposed in a recess of said plurality of recesses; recognizing in said received image a first block of said at least one block; determining in said received image the location in said grid of said first block; determining in said received image said color of said first block; translating, according to a glyph language, said determined color to a first functional video game element; and generating a video game level which, when rendered as a playable video game, implements said first functional video game element at a location in said playable video game corresponding to said determined location.
9. The computer-readable medium of claim 8, further comprising capturing said image.
10. The computer-readable medium of claim 9, further comprising capturing said image using an image processing system.
11. The computer-readable medium of claim 10, wherein said image processing system comprises a digital camera.
12. The computer-readable medium of claim 8, further comprising a computer including said computer-readable medium.
13. The computer-readable medium of claim 12, wherein said computer comprises a mobile device or a tablet computer.
14. The computer-readable medium of claim 8, wherein said grid is arranged in a 13×13 array.
15. A method of creating a playable video game comprising: providing a computer; at said computer: receiving an image of a design board comprising a grid of recesses and having a plurality of colored blocks disposed in said grid of recesses; identifying in said received image said plurality of blocks; determining, for each block in said plurality of identified blocks, a color of said each block and a location of said each block in said grid; generating skin data, said skin data comprising an image corresponding to said determined colors and said determined locations; and generating a video game level which, when rendered as a playable video game, implements at least one video game element using said skin data for a visual appearance of said at least one video game element.
16. The method of claim 15, further comprising capturing said image.
17. The method of claim 16, further comprising capturing said image using an image processing system.
18. The method of claim 15, wherein said computer comprises a mobile device or tablet computer.
19. The method of claim 15, wherein said grid is arranged in a 13×13 array.
20. The method of claim 15, wherein said plurality of blocks are the only blocks disposed in said grid of recesses.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DESCRIPTION OF THE PREFERRED EMBODIMENT(S)
[0041] The following detailed description and disclosure illustrate by way of example and not by way of limitation. This description will clearly enable one skilled in the art to make and use the disclosed systems and methods, and describes several embodiments, adaptations, variations, alternatives and uses of the disclosed systems and apparatus. As various changes could be made in the above constructions without departing from the scope of the disclosures, it is intended that all matter contained in the above description or shown in the accompanying drawings shall be interpreted as illustrative and not in a limiting sense.
[0042] Throughout this disclosure, the term computer describes hardware which generally implements functionality provided by digital computing technology, particularly computing functionality associated with microprocessors. The term computer is not intended to be limited to any specific type of computing device, but it is intended to be inclusive of all computational devices including, but not limited to: processing devices, microprocessors, personal computers, desktop computers, laptop computers, workstations, terminals, servers, clients, portable computers, handheld computers, smart phones, tablet computers, mobile devices, server farms, hardware appliances, minicomputers, mainframe computers, video game consoles, handheld video game products, and wearable computing devices including but not limited to eyewear, wristwear, pendants, and clip-on devices.
[0043] As used herein, a computer is necessarily an abstraction of the functionality provided by a single computer device outfitted with the hardware and accessories typical of computers in a particular role. By way of example and not limitation, the term computer in reference to a laptop computer would be understood by one of ordinary skill in the art to include the functionality provided by pointer-based input devices, such as a mouse or track pad, whereas the term computer used in reference to an enterprise-class server would be understood by one of ordinary skill in the art to include the functionality provided by redundant systems, such as RAID drives and dual power supplies.
[0044] It is also well known to those of ordinary skill in the art that the functionality of a single computer may be distributed across a number of individual machines. This distribution may be functional, as where specific machines perform specific tasks; or, balanced, as where each machine is capable of performing most or all functions of any other machine and is assigned tasks based on its available resources at a point in time. Thus, the term computer as used herein, can refer to a single, standalone, self-contained device or to a plurality of machines working together or independently, including without limitation: a network server farm, cloud computing system, software-as-a-service, or other distributed or collaborative computer networks.
[0045] Those of ordinary skill in the art also appreciate that some devices which are not conventionally thought of as computers nevertheless exhibit the characteristics of a computer in certain contexts. Where such a device is performing the functions of a computer as described herein, the term computer includes such devices to that extent. Devices of this type include but are not limited to: network hardware, print servers, file servers, NAS and SAN, load balancers, and any other hardware capable of interacting with the systems and methods described herein in the manner of a conventional computer.
[0046] Throughout this disclosure, the term software refers to code objects, program logic, command structures, data structures and definitions, source code, executable and/or binary files, machine code, object code, compiled libraries, implementations, algorithms, libraries, or any instruction or set of instructions capable of being executed by a computer processor, or capable of being converted into a form capable of being executed by a computer processor, including without limitation virtual processors, or by the use of run-time environments, virtual machines, and/or interpreters. Those of ordinary skill in the art recognize that software can be wired or embedded into hardware, including without limitation onto a microchip, and still be considered software within the meaning of this disclosure. For purposes of this disclosure, software includes without limitation: instructions stored or storable in RAM, ROM, flash memory, BIOS, CMOS, mother and daughter board circuitry, hardware controllers, USB controllers or hosts, peripheral devices and controllers, video cards, audio controllers, network cards, Bluetooth and other wireless communication devices, virtual memory, storage devices and associated controllers, firmware, and device drivers. The systems and methods described here are contemplated to use computers and computer software typically stored in a computer- or machine-readable storage medium or memory.
[0047] Throughout this disclosure, terms used herein to describe or reference media holding software, including without limitation terms such as media, storage media, and memory, may include or exclude transitory media such as signals and carrier waves.
[0048] Throughout this disclosure, the terms web, web site, web server, web client, and web browser refer generally to computers programmed to communicate over a network using the HyperText Transfer Protocol (HTTP), and/or similar and/or related protocols including but not limited to HTTP Secure (HTTPS) and Secure Hypertext Transfer Protocol (S-HTTP). A web server is a computer receiving and responding to HTTP requests, and a web client is a computer having a user agent sending and receiving responses to HTTP requests. The user agent is generally web browser software.
[0049] Throughout this disclosure, the term glyph means a symbol, letter, number, pictogram, structure, gesture, tone, mark, or element which, in a given use case domain, has or is indicative of or contributive to semantic meaning. While in typography and linguistics, the term glyph generally means a written mark, in the present application the term is defined more broadly to include other indicators of meaning, as described herein. For example, a glyph as used herein may comprise a three-dimensional symbol or object, including but not necessarily limited to blocks or cubes, tactile languages such as Braille, poker chips, chess pieces, and so on. A glyph may also be four-dimensional, including but not necessarily limited to motion-based glyphs which acquire semantic meaning over time, such as sign languages and gestures. A glyph may also be non-visual in nature, such as auditory glyphs like musical notes, tones, animal noises, or spoken language. A particular glyph may have different semantic meanings in different use case domains.
[0050] Throughout this disclosure, the term use case domain means a particular field or application which may have or use conventional, standard, predefined or generally known symbols, glyphs, pictograms, gestures, tones, sounds, or structures to indicate elements used or known in the particular field or application. For example, it is common in network design to use a cloud to symbolize a network. Also by way of example and not limitation, it is common in electrical or circuit diagrams to indicate the presence of a resistor using a pictogram comprising a jagged line.
[0051] The terms level and video game level are terms of art hailing from the golden age of gaming, when video games generally comprised a sequence of playable levels with defined beginnings and endings. This includes, but is not limited to, games like Pac-Man, Donkey Kong, and the well-known Nintendo Entertainment System product Super Mario Brothers which was noted for its level notation (e.g., 1-1, 1-2, 1-3, 1-4, 2-1, etc.). One of ordinary skill in the art will understand that the term level has become a term of art referring to a defined playable space within a particular video game, and the particular structure and form of such space necessarily varies from genre to genre. By way of example and not limitation, a level in a side scroller-style game like Super Mario Brothers generally comprises a beginning point and goal and, when the player reaches or achieves the goal, the player has finished or beaten the level and begins play on an alternative level. For other genres, such as a first person shooter, a game level is generally a map defining a limited three-dimensional space in which game play occurs until a condition is met, such as the expiration of time, defeating a certain number of opponents, or locating an exit. In still other genres, a level may lack clearly defined beginning and end points. By way of example and not limitation, in an online role-playing game, players generally move smoothly from one map to another without having to achieve any particular condition, and a game level in such an embodiment may comprise data indicative of adjacent game levels on which the player can play by departing the current game level. In still further embodiments, a game level may comprise other data or take other forms particular to the genre. The concept of a game level and the meaning of that term will be understood by one of ordinary skill in the art as the term applies to a particular video game genre.
[0052] A video game element is an element of a playable video game which contributes to the user experience, including but not necessarily limited to: world design, system design, content design, event design, level design, audio design, graphic design, model design, user interface design, narrative design, and multiplayer design. Video game elements may comprise any audiovisual element of the game and generally comprise interactive elements of the game, such as the setting and geography of the game, and the objects or features in the game with which the player can interact.
[0053] Certain video game elements are defined functionally with respect to the player avatar in the game. The term avatar will be understood by one of ordinary skill in the art to refer to the character or other representation of the player which is manipulated by the player while playing the game and is generally the mechanism for player agency within the video game. The functional definition of interactive video game elements will vary from genre to genre and game to game. By way of example and not limitation, side scrollers such as Super Mario Brothers typically include interactive game elements which injure, damage, or heal the player upon collision detection, or which provide loot to the player upon collision. These interactive game elements may have further characteristics, such as that they are subject to gravity physics (or not), they are stationary (or not), or they only cause injury, damage, or healing if collision is detected from certain angles (e.g., dorsal collision does not cause damage, but lateral collision does). While interactive video game elements are defined functionally, they are typically represented visually as a game literal. For example, an element causing damage upon collision detection might have a game literal such as fire, spikes, thorns, or an enemy. Interactive game elements have a functional relationship to the player avatar, whereas non-interactive game elements, such as the score counter or game music, generally are experienced or used by the player but not directly used for interaction by the avatar.
[0054] A video game literal or game literal as used herein generally refers to the aesthetic appearance of a video game element. More than a matter of merely skinning a pre-defined game model, the selection of a game literal for a game element is effectively arbitrary, as the game literal defines the narrative or associative meaning of the game element, as opposed to the functional meaning. While the choice of game literal generally bears some relationship to the functional game element as a matter of design choice and information efficiency, it need not. For example, for a game element such as damage upon collision detection, the game literal will generally be something that a typical user will associate with causing injury when touched, such as a very sharp object (spikes) or a very hot object (fire). This is so that the game can quickly and efficiently communicate to the player information about how the avatar will interact with the game environment, without the player having to read lengthy instructions or tutorial lessons.
[0055] It should be understood that while terms such as level data and video game level are defined and used herein with respect to video game use case domains, other use case domains are specifically contemplated and alternative formats and engines/drivers/renderers may be used for other use case domains. By way of example and not limitation, in the use case domain of an electrical schematic diagram, the output format may not be video game level data, but rather a CAD file used or usable by a CAD engine, driver, or renderer as an input file. Generally speaking, the accumulator produces data in a format usable by productivity or business software, or used or usable directly by an end-user, for a particular use case domain.
[0056] At a high level, the systems and methods described herein comprise an image processing system, a translation system, and a generation system, though additional component systems or subsystems may be included. One or more of these components may be implemented as software instructions on a computer system. Generally, a user creates a model having glyphs selected from a pre-defined glyph language, the image processing system creates a digital image of the model, the translation system converts the glyphs in the digital image into game elements (generally interactive game elements), and the generation system assembles the translated glyphs into a playable video game or video game level. These and other components are discussed in further detail herein.
[0057]
[0058] In the embodiment of
[0059] Generally, a game level is drawn on graph paper divided into a plurality of grids or sectors, with each glyph being generally drawn on one or more sectors and conforming to the provided grid lines. Exemplary embodiments of such graph paper are depicted in
[0060] This sectoring technique may be used in an embodiment to establish a one-to-one correlation between a point on the model and a location on the screen. That is, the systems and methods may use sectoring to locate, on the display of the computing device on which the game is ultimately played, a display location for each sector corresponding to that sector's relative position on the model. This may or may not also be the corresponding position in the video game produced using this method, depending on factors such as the dimensions of the game space.
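By way of a non-limiting illustration, the sector-to-screen correlation described above may be sketched as follows. The 13×13 grid and the screen resolution used here are illustrative assumptions only, not requirements of the disclosed systems:

```python
def sector_to_screen(row, col, grid_rows=13, grid_cols=13,
                     screen_w=1280, screen_h=720):
    """Map a model grid sector (row, col) to the top-left pixel of the
    corresponding screen region, preserving the sector's relative
    position on the model.  Grid size and resolution are assumptions."""
    cell_w = screen_w / grid_cols
    cell_h = screen_h / grid_rows
    return (round(col * cell_w), round(row * cell_h))
```

A game space larger than the screen would instead map sectors into world coordinates and let the camera determine the on-screen position.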
[0061]
[0062] A similar board (1923) or template (1923) is also depicted in
[0063] It should be further noted that, as with two-dimensional hand-drawn models, the block (1919A) model (1917) may comprise blank glyphs (1920). In the three-dimensional block model (1917), the blank glyph (1920) is simply an empty grid sector or recess (1925). The bottom of the grid sector may be specially colored, such as white, to improve accurate recognition of a blank glyph (1920), or may alternatively or additionally comprise a special symbol printed or otherwise indicated, such as an X or O or other easily recognized character or symbol.
[0064] Users generally create the model at least in part according to a glyph language. A glyph language generally comprises a set of glyphs and an associated set of rules or instructions indicating how patterns, symbols, glyphs, marks, shapes, arrangements, or elements of a glyph or model correspond to or are associated with game elements in various contexts. Examples include, but are not limited to, terrain, sky, obstacles, levels, heroes, hazard pits, monkey bars, moving platforms, spikes, barriers, portals, powerups, floors, ceilings, boundaries, and the like. One embodiment of such rules or instructions is depicted in
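By way of a non-limiting illustration, one simple encoding of such a glyph language is a lookup table mapping block colors to functional game elements. The specific colors and element names below are hypothetical; entries whose meaning depends on context are marked for deferred resolution:

```python
# Hypothetical glyph language: block color -> functional game element.
# Context-dependent entries are resolved only after neighboring glyphs
# supply sufficient context.
GLYPH_LANGUAGE = {
    "green":  {"element": "static_traversable"},    # e.g., ground/platform
    "blue":   {"element": "moving_platform"},
    "yellow": {"element": "sky_marker"},             # marks passable space
    "red":    {"element": "context_dependent",
               "contexts": {"ground_adjacent": "damage_on_collision",
                            "air": "suspended_physics"}},   # hazard vs. monkey bars
    "blank":  {"element": "context_dependent",
               "contexts": {"sky": "passable", "ground": "impassable"}},
}
```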
[0065] By way of example and not limitation, in the depicted embodiment of
[0066] The depicted embodiment of
[0067] Typically, the image processing system includes a digital camera, which may also be integrated into or otherwise attached to, attachable to, able to communicate with, or communicating with a computer system, such as a tablet PC or mobile phone. An embodiment of such a system is depicted in
[0068] Raw image data is generally provided to computer software which identifies in the image data the glyphs in the real-world model and translates the identified glyphs into game elements. The raw image data (1804) may first be prepared for this processing, such as by the preprocessing module (1907) depicted in
[0069] The preprocessor (1907) generally generates altered image data, generally referred to herein as preprocessed image data. This data may replace or overwrite raw image data, or may be stored separately from raw image data. Preprocessed image data may be stored in a memory or non-transitory computer-readable storage and may be transmitted or provided to, or received from, a module or computer. In an embodiment, a plurality of preprocessed image data sets may be generated for a single model.
[0070] In an embodiment, geometric criteria are used to identify boundary markers (e.g., markers 2121 as depicted in
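By way of a non-limiting illustration, one simple geometric criterion for ordering four detected boundary-marker centroids, so that the image can subsequently be deskewed, is the familiar sum/difference heuristic. The function and coordinates below are hypothetical:

```python
def order_boundary_markers(points):
    """Order four boundary-marker centroids as (top-left, top-right,
    bottom-right, bottom-left).  Geometric criteria: the top-left point
    has the smallest x+y sum and the bottom-right the largest; the
    top-right has the smallest y-x difference and the bottom-left the
    largest.  Assumes a roughly upright photograph of the board."""
    pts = list(points)
    by_sum = sorted(pts, key=lambda p: p[0] + p[1])
    by_diff = sorted(pts, key=lambda p: p[1] - p[0])
    return by_sum[0], by_diff[0], by_sum[-1], by_diff[-1]
```

Once ordered, the four markers supply the correspondences needed for a standard perspective (homography) correction of the raw image.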
[0071] Preprocessed image data is generally processed by a recognizer module (1806), such as the recognizer module (1909) depicted in
[0072] The recognizer (1909) generally generates or creates a dataset comprising data indicative of one or more glyphs (1919), or other elements of the model (1917), recognized in the preprocessed image data. This dataset is generally referred to herein as glyph data. Glyph data may be stored in a memory or non-transitory computer-readable storage and/or may be transmitted to or received from a module or computer. In an embodiment, this dataset may further comprise other data indicative of characteristics of an identified glyph (1919), including but not necessarily limited to: the color of the glyph (1919); the identification of glyphs (1919) adjacent to the glyph (1919); the position or location of the glyph (1919) in the model (1917).
[0073] In an embodiment, the recognizer (1909) uses a segmenting process to process preprocessed image data on a sector-by-sector basis. For example, where the model is hand-drawn artwork on graph paper having grid lines, the grid lines may be used to segment the model into a plurality of grid locations, or sectors, with each grid location potentially having a glyph drawn thereon. Alternatively, where the model is a three-dimensional model using blocks in a grid system (1925), the raised grid edges defining the boundaries of the grid recesses into which blocks are placed map to a sector. Using the preprocessing features described herein, the graph paper (or board (1923)) may be oriented properly for such grid lines (or edges/recesses (1925)) to be algorithmically detected by software, increasing the speed and accuracy of other modules, such as the recognition module, in finding and identifying glyphs.
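By way of a non-limiting illustration, once the image has been cropped and deskewed, the segmenting process may be sketched as evenly dividing the image into grid sectors. The 13×13 default is an illustrative assumption:

```python
def segment_into_sectors(image, rows=13, cols=13):
    """Split a 2D image (a list of pixel rows) into a rows x cols grid
    of sectors, each sector itself a 2D sub-image.  Assumes the image
    has already been cropped and perspective-corrected so that sector
    boundaries fall at evenly spaced intervals."""
    h, w = len(image), len(image[0])
    sectors = {}
    for r in range(rows):
        for c in range(cols):
            y0, y1 = r * h // rows, (r + 1) * h // rows
            x0, x1 = c * w // cols, (c + 1) * w // cols
            sectors[(r, c)] = [row[x0:x1] for row in image[y0:y1]]
    return sectors
```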
[0074] In an embodiment, the recognizer (1909) is implemented iteratively, such as through sequential execution of a plurality of support vector machine classifiers for each glyph in the glyph language. Each such classifier may determine whether the preprocessed image data, or a portion thereof (such as a sector), matches a particular glyph or not. If a particular glyph is recognized, the classification process terminates as to that glyph, and/or that portion of the preprocessed image data. If the glyph is not recognized, the next classifier in the plurality of classifiers may be executed to search for another particular glyph. A classifier may determine whether preprocessed image data matches a particular glyph by, for example, analyzing whether preprocessed image data includes a data pattern identical, similar, indicative of, or related to data patterns associated with the particular glyph the classifier is programmed to recognize. The recognizer may also store in the glyph data the corresponding sector in which an identified glyph is located.
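By way of a non-limiting illustration, the sequential execution of per-glyph classifiers may be sketched as follows. The classifiers here are stand-in predicates rather than trained support vector machine classifiers:

```python
def recognize_sector(sector, classifiers):
    """Run a chain of per-glyph classifiers over one sector.  Each
    classifier is a (glyph_name, predicate) pair; the first classifier
    to match terminates the chain, mirroring the sequential execution
    described above.  Returns "blank" if no classifier matches."""
    for glyph_name, matches in classifiers:
        if matches(sector):
            return glyph_name
    return "blank"
```

In practice each predicate would be a trained classifier (e.g., a support vector machine) deciding whether the sector's pixel data matches the patterns associated with its glyph.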
[0075] In an embodiment, glyph data is processed by a semantic module, such as the semantic module (1911) depicted in
[0076] The semantic module (1911) generally generates or creates semantic data based at least in part on glyph data. Such semantic data generally comprises data indicative of one or more game elements, such elements generally being interactive game elements having functional meaning, translated from glyph data. The game elements are generally associated with a glyph (1919) in the glyph language.
[0077] In an embodiment, the semantic module performs multiple passes through the glyph data. This is because some glyphs require little or no context to be translated to a game element, but the corresponding functional meaning of other glyphs may not be determined without additional context, such as by translating adjacent glyphs. In a pass through the data, additional glyphs are translated, or attempted to be translated, to game elements. As successive passes through the glyph data provide incrementally more context, more and more glyphs can be translated to game elements, until all glyphs have been fully translated. The process of translating a glyph is referred to herein as resolving the glyph. Some glyphs may be only partially resolved during a pass, and then fully resolved in a subsequent pass.
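By way of a non-limiting illustration, the multi-pass resolution process may be sketched as a loop that repeats until every glyph is resolved or a pass makes no further progress. The resolver interface below is a hypothetical simplification:

```python
def resolve_glyphs(glyph_data, resolvers, max_passes=10):
    """Repeatedly pass over unresolved glyphs.  `resolvers` maps a
    glyph name to a function (location, resolved_so_far) -> game
    element, or None if the glyph cannot yet be resolved.  Successive
    passes accumulate context, so glyphs unresolvable in one pass may
    resolve in a later one."""
    resolved = {}
    for _ in range(max_passes):
        progress = False
        for loc, glyph in glyph_data.items():
            if loc in resolved:
                continue
            element = resolvers[glyph](loc, resolved)
            if element is not None:
                resolved[loc] = element
                progress = True
        if not progress or len(resolved) == len(glyph_data):
            break
    return resolved
```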
[0078] This may be better understood with reference to an illustrative example. Suppose the glyph language defines a rectangle glyph (or, for a block-based model as in
[0079] Likewise, the functional meaning of the arrow glyph cannot be fully determined until the rectangle glyph is at least partially resolved. Thus, multiple passes are needed: first to partially resolve the rectangle glyph into a static traversable, second to resolve the arrow glyph, and third to fully resolve the rectangle glyph. It should be noted that, in an embodiment, certain glyph resolutions can be combined in a single pass. This may be done for, among other things, processing or development efficiency. In this illustrative example, resolving the arrow glyph and resolving the movable traversable could be handled in a single pass. The particular composition of each pass will necessarily vary with the particular glyph language, and with the programming techniques used to identify glyphs. For a given glyph language, many different algorithmic approaches are possible to resolve all glyphs. This is but one illustrative example and should not be understood as limiting.
[0080] In an embodiment, the semantic module may create and/or maintain a data structure for tracking and updating glyph resolution. This is generally stored and maintained in a computer-readable medium operatively coupled to a processor and other typical computer hardware. By way of example and not limitation, this data structure may comprise a context table or context graph having context data. A context graph may comprise one or more datasets corresponding to a sector and each such dataset may be used to track and update context data and/or glyph resolution data for the corresponding sector. This approach is particularly useful for glyphs which resolve to functions that have few or no data analogs in the resulting video game level data (discussed elsewhere herein), such as empty or open space, as it can reduce the memory footprint and increase processing efficiency. By way of example and not limitation, a semantic module pass may indicate that a given blank glyph has a sky function (passable/no collision detection) and update the context graph data corresponding to that given sector to have a sky context.
[0081] This also may be better understood with reference to an illustrative example. In an embodiment, the glyph language defines the blank glyph as having a different meaning in different contexts. For example, a blank glyph may have a sky function (passable/no collision detection) in one context and a ground function (impassable/collision detection) in another (e.g., the blank glyph is enclosed in a polygon glyph defining a static traversable). To determine whether a given blank glyph is sky or ground, the semantic module may complete one or more passes through the glyph data to develop sufficient context data to determine which functional meaning to apply to each blank glyph. An example of this technique is depicted in the flow chart of
[0082] In the depicted flow chart, a context graph is created (2000) in memory comprising data corresponding to sectors. The context for each sector is initially defaulted to an unknown value (2002), such as a program constant. In the depicted embodiment, the system determines whether any sectors remain unknown (2004) at the beginning of each iteration, though this step may be carried out during or after each iteration in an alternative embodiment. The choice of when to perform this check is an implementation detail generally left to a programmer's design discretion. If no sectors remain unknown, the context parsing may terminate (2006). However, if sectors remain unknown, additional iterations may proceed to examine additional unknown contexts or glyphs and resolve them (2008).
[0083] Continuing the illustrative example, during early passes, glyphs whose functional meaning is not highly context-dependent may be resolved and functional meanings assigned. Similarly, glyphs which provide context meaning (whether or not they can be resolved during the pass) may be used to determine context meaning for themselves and/or for adjacent glyphs or sectors. By way of example and not limitation, if a + glyph (or, e.g., a yellow block in
[0084] Continuing the illustrative example, when the + glyph (or yellow block) is found in the glyph data, its corresponding function can be determined (2010) and its associated functional meaning can be applied (2012), such as by referring to the glyph language and/or use case domain. The resulting game element is then added to the semantic data (2012). Likewise, this glyph provides context (2014) to adjacent blank glyphs (e.g., also sky context) and the context graph for such adjacent glyphs can be updated (2016) to reflect the context discovered during this pass. This in turn allows additional blank glyphs, adjacent to the blank glyphs adjacent to the +/yellow glyph, to be assigned functional meaning: again, sky. A flood-fill algorithm may be used to repeat this process and locate all such blank glyph locations in the context graph and indicate sky context for such glyphs in the context graph. In an embodiment, the flood fill algorithm may be performed in the same pass as the identification of the +/yellow glyph, or in one or more subsequent passes.
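By way of a non-limiting illustration, the flood-fill step may be sketched as a breadth-first traversal that spreads a known context (here, sky) through connected sectors whose context remains unknown:

```python
from collections import deque

def flood_fill_context(context, start, value="sky"):
    """Spread a context value from a seed sector through all
    4-connected sectors whose context is still "unknown".  `context`
    maps (row, col) -> context string; sectors already assigned a
    context (e.g., ground) act as barriers to the fill."""
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        if context.get((r, c)) != "unknown":
            continue  # off-grid, already filled, or a barrier
        context[(r, c)] = value
        queue.extend([(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)])
    return context
```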
[0085] Continuing the illustrative example, some glyphs may have context-sensitive functional meanings, such that at least some context must be resolved before the glyph itself can be resolved. By way of example and not limitation, an X glyph (or red block) may have the functional meaning in the glyph language of avatar damage upon collision detection in one context (e.g., a game literal of lava, spikes, or fire) but of suspended avatar physics in another (e.g., monkey bars). Thus, when an X/red glyph is found, adjacent glyphs are evaluated to determine context and identify the corresponding functional meaning for that particular X/red glyph. If the adjacent glyphs have not yet been resolved, the functional meaning for X/red may not yet be determinable (2010), and the glyph is not yet assigned a functional meaning. However, the glyph may still provide context information (2014) whether or not its functional meaning is determinable during the pass.
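By way of illustration only, context-sensitive resolution of an X/red glyph might be sketched as follows. The context labels and returned meaning strings are illustrative assumptions chosen to mirror the examples above, not a defined glyph language:

```python
# Illustrative context-sensitive resolution per paragraph [0085].
def resolve_x_glyph(adjacent_contexts):
    """Return the functional meaning of an X/red glyph given the contexts
    of adjacent cells, or None if adjacent context is not yet resolved."""
    if None in adjacent_contexts:
        return None                           # (2010) not yet determinable
    if "sky" in adjacent_contexts:
        return "suspended_avatar_physics"     # e.g., monkey bars
    return "avatar_damage_on_collision"       # e.g., lava, spikes, fire
```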
[0086] Continuing the illustrative example, contexts may also be resolved by algorithmically locating the borders of a given context and assigning glyphs on the opposing side of the border with an opposing context. By way of example and not limitation, if a blank glyph is known to have a sky context and the borders of the sky context are defined as a closed polygon, the blank glyphs within the enclosed polygon on the opposing side of the border of the sky context are necessarily ground context and can be translated (2016) as such, and the context graph updated accordingly.
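By way of illustration only, a simplified form of this border-based resolution might be sketched as follows: once the known context (e.g., sky) has been fully propagated, any blank glyph still unresolved lies on the opposing side of the closed border and can be assigned the opposing context. The function name and context labels are illustrative assumptions:

```python
# Illustrative opposing-context assignment per paragraph [0086]:
# blank cells unreachable from the known context after propagation
# lie inside the closed border and take the opposing context.
def assign_opposing_context(blank_cells, context_graph, opposing="ground"):
    """Assign the opposing context to every blank cell still
    unresolved in the context graph."""
    for cell in blank_cells:
        if cell not in context_graph:
            context_graph[cell] = opposing    # (2016) update context graph
    return context_graph
```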
[0087] Continuing the illustrative example, still other glyphs may require a completed or nearly-completed context graph to be resolved. By way of example and not limitation, an arrow glyph such as > might apply motion physics to adjacent impassable/collision detection game elements to form a moving platform. This may, in an embodiment, require otherwise complete context data. As such, glyphs corresponding to directional movement and/or distance may be resolved in the latter passes after most, or all, of the context graph is complete and few or no unknown locations remain. By way of example and not limitation, where the object is a moving land mass such as a platform, the moving platform may be identified algorithmically and direction and distance determined from the motion physics glyph (or glyphs) applicable to that platform as provided in the glyph language and/or use case domain. Again, a context graph (2016) and/or semantic data (2012) may be updated with the resulting context data and/or game element data.
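By way of illustration only, attaching motion physics from an arrow glyph to an identified platform in a late pass might be sketched as follows; the arrow-to-vector table, distance parameter, and returned structure are illustrative assumptions:

```python
# Illustrative late-pass motion resolution per paragraph [0087].
ARROW_VECTORS = {">": (0, 1), "<": (0, -1), "^": (-1, 0), "v": (1, 0)}

def apply_motion(platform_cells, arrow_glyph, distance):
    """Attach motion physics from an arrow glyph to an algorithmically
    identified platform, once the context graph is (nearly) complete."""
    direction = ARROW_VECTORS[arrow_glyph]
    return {
        "cells": sorted(platform_cells),          # the moving land mass
        "motion": {"direction": direction, "distance": distance},
    }
```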
[0088] In an embodiment, semantic data is processed by an accumulator module, such as the accumulator module (1913) depicted in
[0089] For any given use case domain, the recognizer and/or semantic modules may be preprogrammed or otherwise supplied with the glyph language. The language may be generally known or associated in the use case domain with certain meanings, or may be developed for a particular or specific use case domain. By way of example and not limitation, a use case domain may be or comprise: video games; a platform video game; a racing video game; an isometric adventure video game; storyboarding; music and/or music notation; network design; electrical design; Braille; web design; architecture; software design; modeling; model trains and/or logistics; electric or circuit design; medical and/or biological; presentations; welding; HVAC; 3D printing; sports and play design; automotive; 3D models of blocks, including but not limited to an orthogonal layout of blocks such as that depicted in
[0090] In an alternative embodiment, specific use case domains may be defined or provided which define a glyph language for that domain and the associated functional meaning of such glyphs in such language. As such, the meaning of a glyph may vary between use case domains, and even between related or similar use case domains. By way of example and not limitation, a use case domain may be a video game genre, such as a platformer in which the glyph X has the semantic meaning of a surface capable of supporting a sprite, such as but not limited to an avatar. However, in an alternative use case domain, such as an isometric realtime roleplaying game, the glyph X may have the semantic meaning of impassable terrain. In a still further use case domain, such as landscape design, the glyph X may indicate an existing structure to be demolished and removed.
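By way of illustration only, per-domain glyph languages might be represented as follows; the domain keys and meaning strings are illustrative assumptions mirroring the examples in this paragraph:

```python
# Illustrative per-domain glyph languages per paragraph [0090]: the same
# glyph may carry different functional meanings in different domains.
GLYPH_LANGUAGES = {
    "platformer":       {"X": "support_surface"},
    "isometric_rpg":    {"X": "impassable_terrain"},
    "landscape_design": {"X": "structure_to_demolish"},
}

def translate(domain, glyph):
    """Look up a glyph's functional meaning in the given use case domain."""
    return GLYPH_LANGUAGES[domain].get(glyph)
```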
[0091] In an embodiment, the video game level data may also be generated at least in part using a sprite sheet, an example of which is depicted in
[0092] In an embodiment, the user may view, edit, and revise the video game level data, including but not limited to by drawing or re-drawing the level, or by changing images, sounds, and music included in the generated game level. This may be done, among other reasons, to correct rendering or translation errors. Generally, the user performs these functions using a computer application, an embodiment of which is depicted in
[0093] Also described herein is a system for creating a playable video game from a real-world model, such as the system depicted in
[0094] It should be noted that the depicted embodiments of
[0095] The depicted embodiments of
[0096] In an embodiment of the systems and/or methods described herein, the systems and/or methods further comprise displaying, conveying, or indicating to a user an image or representation of the model based at least in part on preprocessed image data and/or glyph data. The systems and/or methods may further comprise editing or modifying preprocessed image data, glyph data, and/or semantic data based at least in part on user input provided to a computer system. In an embodiment, edited or modified preprocessed image data, glyph data, and/or semantic data may be produced or provided to a user in a non-digital format, including but not necessarily limited to by rendering or generating a recreation of the model. By way of example and not limitation, the systems or methods may display or render, or cause to be displayed or rendered, to the user a digital representation or impression of the model. The user may use editing software to modify the data, and/or the user may print or otherwise generate or create a modified hard copy of the model. For example, if an error occurs where a glyph is incorrectly recognized, the user may correct the glyph in the system, such as by changing the identity of the detected glyph, and then reprint the model based on the modified glyph data. Display and editing may be performed before, between, during, or after any method step or steps described herein.
[0097] In an embodiment, the model is not necessarily created by a user, but may be a pre-existing model. By way of example and not limitation, the model may be a terrain or satellite image of a geographic space, or a floor plan, and the glyphs may comprise geographic features. A user could thus use the systems and methods to create, for example, level data representative or indicative of the geography, or other features, of a real-world location, such as a neighborhood, building, or skyline.
[0098] In an embodiment, the systems and methods further comprise splicing, wherein a model is segmented into a plurality of grids, sections, sectors, or markers, and each segment is processed separately and/or independently. Such segmentation may be done on a glyph-by-glyph basis, or may use larger segments comprising a plurality of glyphs. In such an embodiment, multiple datasets indicative of at least part of said model and/or glyphs thereon may be generated and/or processed together, separately, and/or independently or interdependently. In an alternative embodiment, splicing comprises combining data indicative of a plurality of models and/or glyphs into a single dataset indicative of a model or a unified model. This may be done in an embodiment by, for example, arranging or sequencing a plurality of models in a preferred or defined layout or manner and imaging the plurality of models as a single model. By way of example and not limitation, in an implementation for a top-down adventure game, such as a game in the spirit of The Legend of Zelda, multiple models may be drawn and imaged to represent each room of a dungeon or each section of the overworld. These models may be linked or joined in the video game level data into a cohesive video game level or video game world, such as by examining glyphs on the models indicating the relationship between different models, or using computerized editing tools to arrange the multiple models appropriately to generate the desired world.
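By way of illustration only, joining a layout of separately imaged model grids into a single unified grid might be sketched as follows; the list-of-rows grid representation and function name are illustrative assumptions:

```python
# Illustrative splicing per paragraph [0098]: combine a 2D layout of
# model grids (each a list of glyph rows) into one unified grid,
# left-to-right within each band, bands top-to-bottom.
def splice_models(bands):
    """Join grids arranged in horizontal bands into one unified grid.
    All grids within a band are assumed to share the same height."""
    unified = []
    for band in bands:
        height = len(band[0])
        for r in range(height):
            unified.append([g for grid in band for g in grid[r]])
    return unified
```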
[0099] By way of example and not limitation, a user may draft multiple game level models, image and process each model as provided herein, edit and refine each processed image, such as to correct errors and make modifications, reprint the modified models, arrange the printed models in proper sequence, and then re-image the sequenced levels as a single model.
[0100] While this invention has been disclosed in connection with certain preferred embodiments, this should not be taken as a limitation to all of the provided details. Modifications and variations of the described embodiments may be made without departing from the spirit and scope of this invention, and other embodiments should be understood to be encompassed in the present disclosure as would be understood by those of ordinary skill in the art.