Game aim assist
10035064 · 2018-07-31
Assignee
Inventors
- John Garvin (Bend, OR, US)
- Christopher Reese (Bend, OR, US)
- Darren Yager (Bend, OR, US)
- Ron Allen (Bend, OR, US)
CPC classification
A63F2300/306
HUMAN NECESSITIES
A63F13/422
HUMAN NECESSITIES
A63F13/22
HUMAN NECESSITIES
A63F13/30
HUMAN NECESSITIES
A63F2300/6054
HUMAN NECESSITIES
A63F13/56
HUMAN NECESSITIES
International classification
A63F13/422
HUMAN NECESSITIES
A63F13/30
HUMAN NECESSITIES
A63F13/56
HUMAN NECESSITIES
A63F13/22
HUMAN NECESSITIES
A63F13/219
HUMAN NECESSITIES
Abstract
Methods for game aim assist are provided. In electronic games, a game player may control the actions of a game character within a game environment. The game environment, which includes a focus area, may be displayed from a perspective of or with respect to a game character. During game play, one or more objects may be detected in the focus area. One of the objects may then be selected based on the distance between the object and the center of the focus area. The aim of the game character is then automatically adjusted toward the selected object, which allows the game player to direct an attack action on the object.
Claims
1. A method for game aim assist, the method comprising: executing instructions stored in memory, wherein execution of the instructions by a processor: detects one or more selectable objects in a defined focus area within a digital environment viewed from a perspective of a user, wherein the viewed digital environment includes the defined focus area, and wherein the digital environment view is displayed on a display screen of a user device; polls, for a pre-determined period of time, for user input associated with selecting one or more of the detected selectable objects in the defined focus area; generates automatic selection instructions for the user device whenever no user input associated with a selection is received for the pre-determined period of time, wherein the generated automatic selection instructions are executable to automatically select one of the detected selectable objects within the defined focus area; and overrides the generated automatic selection instructions when user input associated with the selection is received, wherein the received user input changes the selected selectable object to a different selectable object within the defined focus area; and automatically executing an object-related action when at least one of the detected selectable objects within the defined focus area has been selected.
2. The method of claim 1, wherein the generated automatic selection instructions are executable to select the selectable object within the defined focus area based on a monitored state of each of the detected selectable objects within the defined focus area.
3. The method of claim 2, wherein the monitored state of each of the detected selectable objects includes an activity of each of the detected selectable objects.
4. The method of claim 2, wherein the monitored state of each of the detected selectable objects includes a viability of each of the detected selectable objects.
5. The method of claim 2, wherein the monitored state of each of the detected selectable objects includes a distance of each of the detected selectable objects from a center location of the defined focus area.
6. The method of claim 2, wherein the monitored state of each of the detected selectable objects includes a distance of each of the detected selectable objects from a location of the perspective of the user.
7. The method of claim 1, wherein the automatic selection instructions are further executable to provide a notification to the user device when the selected one of the detected selectable objects has been selected.
8. The method of claim 7, wherein the notification is provided via visual effects on the display screen of the user device.
9. The method of claim 7, wherein the notification is provided via audio effects.
10. The method of claim 1, wherein the user input associated with the overriding of the generated automatic selection instructions satisfies pre-requisite conditions.
11. The method of claim 10, wherein the pre-requisite conditions specify that the user input be provided via a press of a button on a controller with at least a pre-determined amount of pressure.
12. The method of claim 10, wherein the pre-requisite conditions specify that the user input be provided via a controller for at least a pre-determined period of time.
13. The method of claim 10, wherein the pre-requisite conditions are customizable to the user.
14. The method of claim 1, further comprising delaying a pre-determined amount of time between the generation of the automatic selection instructions and the execution of the object-related action.
15. An apparatus for game aim assist, the apparatus comprising: a display device that displays a user perspective of a game environment including a defined focus area; a processor for executing instructions stored in memory, wherein execution of the instructions by the processor: detects one or more selectable objects in the defined focus area within a digital environment viewed from a perspective of a user, wherein the viewed digital environment includes the defined focus area, and wherein the digital environment view is displayed on a display screen of a user device; polls, for a pre-determined period of time, for user input associated with selecting one or more of the detected selectable objects in the defined focus area; generates automatic selection instructions for the user device whenever no user input associated with selection is received for the pre-determined period of time, wherein the generated automatic selection instructions are executable to automatically select one of the detected selectable objects within the defined focus area; and overrides the generated automatic selection instructions when user input associated with the selection is received, wherein the received user input changes the selected selectable object to a different selectable object within the defined focus area; and a controller interface that receives the user input associated with the selection of the selected selectable object within the defined focus area.
16. A non-transitory computer-readable storage medium having embodied thereon a program, the program being executable by a computer processor to perform a method for game aim assist, the method comprising: detecting one or more selectable objects in a defined focus area within a digital environment viewed from a perspective of a user, wherein the viewed digital environment includes the defined focus area, and wherein the digital environment view is displayed on a display screen of a user device; polling, for a pre-determined period of time, for user input associated with selecting one or more of the detected selectable objects in the defined focus area; generating automatic selection instructions for the user device whenever no user input associated with a selection is received for the pre-determined period of time, wherein the generated automatic selection instructions are executable to automatically select one of the detected selectable objects within the defined focus area; overriding the generated automatic selection instructions when user input associated with the selection is received, wherein the received user input changes the selected selectable object to a different selectable object within the defined focus area; and automatically executing an object-related action when at least one of the detected selectable objects within the defined focus area has been selected.
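The method recited in the independent claims — poll for a pre-determined window, auto-select an object when no input arrives, and let explicit user input override the automatic choice — can be sketched as follows. The function and parameter names here are illustrative assumptions, not language from the patent:

```python
def resolve_selection(detected, poll_for_input, poll_window_s, auto_select):
    """Return the object in the defined focus area to act on.

    `detected` is the list of selectable objects found in the focus area.
    `poll_for_input` is a callable that waits up to `poll_window_s` seconds
    and returns the user's chosen object, or None if no input was received.
    `auto_select` picks one object automatically (e.g., nearest to the
    focus-area center). All of these names are illustrative assumptions.
    """
    if not detected:
        return None
    user_choice = poll_for_input(poll_window_s)
    if user_choice is not None and user_choice in detected:
        return user_choice            # user input overrides auto-selection
    return auto_select(detected)      # no input received: auto-select
```

An object-related action (e.g., an attack) would then be executed on the returned object, per claim 1's final step.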
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION
(10) Various electronic games allow a game player to control the actions of a game character. The game environment may be displayed from a third-person perspective with respect to such a game character. In embodiments of the present invention, the display of the game environment includes a defined focus area. During game play, one or more objects may be detected in the focus area. One of the objects may then be selected as a target based on the distance between the object and the center of the focus area. The aim of the game character may then be automatically adjusted toward the selected object, which allows the game player to direct an attack action on the object.
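The distance-based selection described above can be sketched concretely. The object names, coordinate layout, and focus-radius parameter below are illustrative assumptions:

```python
import math

def select_target(objects, focus_center, focus_radius):
    """Select the detected object closest to the center of the focus area.

    `objects` maps object names to (x, y) screen positions. Returns the
    name of the nearest object inside the focus area, or None if the
    focus area contains no selectable object.
    """
    cx, cy = focus_center
    best_name, best_dist = None, float("inf")
    for name, (x, y) in objects.items():
        dist = math.hypot(x - cx, y - cy)   # distance to focus-area center
        if dist <= focus_radius and dist < best_dist:
            best_name, best_dist = name, dist
    return best_name
```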
(12) In step 210, a perspective of a game environment including a defined focus area may be displayed. The game environment may be displayed from a point of view that is close to (i.e., third-person) or that belongs to (i.e., first-person) a game character whose movement and actions may be controlled by the real-world game player. As the real-world game player moves the game character through the game environment, the display of the game environment may be adjusted to reflect the changes around the game character.
(13) In some embodiments, the display may reset after a period of inactivity in game play. Resetting of the display may include a return to an original state, which may include a particular line-of-sight or directional orientation of the game character. For example, the real-world game player may direct the game character to look up toward the sky or to focus on an object at extremely close range. After a period of inactivity, the display may automatically reset such that the display reflects the game environment from the original perspective of or with respect to the game character (e.g., straight ahead with no intensified focus).
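One way to model such an inactivity reset is with a timer that snaps the view back to its original orientation. The timeout value and the yaw/pitch orientation representation are illustrative assumptions:

```python
class ViewResetTimer:
    """Return the display to its original orientation after inactivity.

    Orientation is a (yaw, pitch) pair in degrees; `timeout_s` is the
    inactivity window. Both representations are illustrative assumptions.
    """

    def __init__(self, timeout_s=5.0, default=(0.0, 0.0)):
        self.timeout_s = timeout_s
        self.default = default
        self.orientation = default
        self.last_input_at = 0.0

    def on_input(self, yaw, pitch, now):
        # Player moved the view: record the orientation and the timestamp.
        self.orientation = (yaw, pitch)
        self.last_input_at = now

    def tick(self, now):
        # Called each frame: snap back once the inactivity window elapses.
        if now - self.last_input_at >= self.timeout_s:
            self.orientation = self.default
```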
(15) Returning to
(16) In step 230, an object may be selected from the objects detected in step 220. The selection of the object may be based on the distance between a particular object and the center of the focus area. Referring again to
(17) As illustrated in
(18) Selection of the object may be further based on a location depth of the object (i.e., distance between the object and a focal point of the game character).
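Combining the two criteria, candidates might be ranked first by distance to the focus-area center and then by location depth. The tuple layout below is an illustrative assumption:

```python
import math

def select_by_center_then_depth(objects, focus_center):
    """Pick the object nearest the focus-area center, breaking ties in
    favor of the object closest to the game character (smallest depth).

    Each entry is (name, (x, y), depth); the layout is an assumption.
    """
    if not objects:
        return None
    cx, cy = focus_center

    def rank(entry):
        name, (x, y), depth = entry
        return (math.hypot(x - cx, y - cy), depth)

    return min(objects, key=rank)[0]
```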
(19) Selection may alternatively be based on user input. For various reasons, a real-world game player may wish to select a particular object apart from proximity to the center of the focus area and/or proximity to the game character. For example,
(20) User input may include information concerning an amount of pressure on a button on the controller or a length of time that a button on the controller is pressed. For example, a user may press a button for a particular duration to indicate that the user wishes to select a different object than the object automatically selected. Pressure and time settings may be calibrated for each particular user and/or for each new game session.
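Such pressure and hold-time checks reduce to a simple predicate. The threshold values below stand in for the per-user calibration mentioned above and are illustrative assumptions:

```python
def qualifies_as_override(pressure, hold_time_s,
                          min_pressure=0.5, min_hold_s=0.3):
    """Decide whether a button press counts as an override selection.

    `pressure` is the normalized button pressure (0.0-1.0) and
    `hold_time_s` is how long the button was held. Both thresholds would
    be calibrated per user and/or per game session.
    """
    return pressure >= min_pressure and hold_time_s >= min_hold_s
```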
(21) Returning once again to
(22) The speed at which the targeting display 330 moves toward a selected object may be customized by the user or game player or may be defined by the particular game title. An indication of when the aim has been adjusted toward a selected object may be provided. Such indications may include various visual, audio, and/or textual indications. One such example of an indication may be a change in color, shape, or size of the targeting display 330.
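The customizable adjustment speed can be modeled by stepping the targeting display toward the target a bounded distance each frame. This is a sketch under assumed 2-D screen coordinates, not the patented implementation:

```python
import math

def step_toward_target(reticle, target, speed, dt):
    """Advance the targeting display toward the selected object.

    `speed` is the customizable adjustment rate in screen units per
    second and `dt` is the frame time. The reticle snaps onto the target
    once it is within one step of it.
    """
    rx, ry = reticle
    tx, ty = target
    dx, dy = tx - rx, ty - ry
    dist = math.hypot(dx, dy)
    max_step = speed * dt
    if dist <= max_step:
        return (tx, ty)            # close enough: lock onto the target
    scale = max_step / dist
    return (rx + dx * scale, ry + dy * scale)
```

A visual indication (e.g., the targeting display changing color) could be triggered at the moment the snap occurs.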
(23) In optional step 250 of
(25) In optional step 260 of
(26) In optional step 270, the aim of the game character may be automatically adjusted toward the selected next object. As in step 240, automatic adjustment allows the game character to direct an action with some degree of accuracy toward the selected next object without manual intervention, which may be complicated in game controllers like that illustrated in
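Steps 260-270 amount to dropping the eliminated target and re-applying the selection rule to the remaining objects. The names and tuple layout are illustrative assumptions:

```python
import math

def retarget(objects, eliminated, focus_center):
    """After the current target is eliminated, auto-select the next one.

    `objects` is a list of (name, (x, y)) entries; the eliminated name is
    removed and the nearest-to-center rule is applied to what remains.
    Returns None when no selectable objects remain in the focus area.
    """
    cx, cy = focus_center
    remaining = [o for o in objects if o[0] != eliminated]
    if not remaining:
        return None
    return min(remaining,
               key=lambda o: math.hypot(o[1][0] - cx, o[1][1] - cy))[0]
```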
(27) The present invention may be implemented in a game operable on a variety of end user devices. For example, an end user device may be a personal computer; a home entertainment system such as a PlayStation 2 or PlayStation 3 available from Sony Computer Entertainment Inc.; a portable gaming device such as a PSP (also from Sony Computer Entertainment Inc.); or a home entertainment system from another manufacturer. The methodologies described herein are intended to be operable on a variety of devices. The present invention may also be implemented with cross-title neutrality, wherein an embodiment of the present system may be utilized across a variety of titles from various publishers.
(28) It is noteworthy that any hardware platform suitable for performing the processing described herein is suitable for use with the invention. Computer-readable storage media refer to any medium or media that participate in providing instructions to a CPU for execution. Such media can take many forms, including, but not limited to, non-volatile and volatile media such as optical or magnetic disks and dynamic memory, respectively. Common forms of computer-readable media include, for example, a floppy disk, a flexible disk, a hard disk, magnetic tape, any other magnetic medium, a CD-ROM disk, a digital video disk (DVD), any other optical medium, RAM, PROM, EPROM, a FLASH EPROM, or any other memory chip or cartridge.
(29) Various forms of transmission media may be involved in carrying one or more sequences of one or more instructions to a CPU for execution. A bus carries the data to system RAM, from which a CPU retrieves and executes the instructions. The instructions received by system RAM can optionally be stored on a fixed disk either before or after execution by a CPU.
(30) While various embodiments have been described above, it should be understood that they have been presented by way of example only, and not limitation. The descriptions are not intended to limit the scope of the invention to the particular forms set forth herein. Thus, the breadth and scope of a preferred embodiment should not be limited by any of the above-described exemplary embodiments. It should be understood that the above description is illustrative and not restrictive. To the contrary, the present descriptions are intended to cover such alternatives, modifications, and equivalents as may be included within the spirit and scope of the invention as defined by the appended claims and otherwise appreciated by one of ordinary skill in the art. The scope of the invention should, therefore, be determined not with reference to the above description, but instead should be determined with reference to the appended claims along with their full scope of equivalents.