IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM FOR GENERATING IMAGE OF MIXED WORLD
20230058228 · 2023-02-23
CPC classification: G06T19/20 (Physics); G06F3/011 (Physics)
International classification: G06T19/20 (Physics); G06T19/00 (Physics)
Abstract
An image processing apparatus that generates an image of a mixed world by superimposing at least one virtual object on an image of a real world includes an acquisition unit configured to acquire information related to a motion status of a user of the image processing apparatus, a determination unit configured to determine a position for placing the at least one virtual object in the image of the real world, based on the motion status acquired by the acquisition unit, and an image processing unit configured to generate an image in which the at least one virtual object is placed at the position determined by the determination unit. The determination unit determines a position corresponding to a direction in which the user moves as the position for placing the at least one virtual object.
Claims
1. An image processing apparatus that generates an image of a mixed world by superimposing at least one virtual object on an image of a real world, the image processing apparatus comprising: at least one memory storing instructions; and at least one processor that, upon execution of the instructions, is configured to operate as: an acquisition unit configured to acquire information related to a motion status of a user of the image processing apparatus; a determination unit configured to determine, based on the acquired motion status information, a position for placing the at least one virtual object in the image of the real world; and an image processing unit configured to generate an image in which the at least one virtual object is placed at the determined position, wherein the determination unit determines a position corresponding to a direction in which the user moves as the position for placing the at least one virtual object.
2. The image processing apparatus according to claim 1, wherein the image processing unit places the at least one virtual object so that the at least one virtual object moves to a target position over a plurality of frames.
3. The image processing apparatus according to claim 2, wherein, in response to a forward movement as the motion status of the user, the determination unit determines the position for placing the at least one virtual object so that the target position of the at least one virtual object is above the center of the image of the real world.
4. The image processing apparatus according to claim 2, wherein, in response to a descending movement on stairs as the motion status of the user, the determination unit determines the position for placing the at least one virtual object so that the target position of the at least one virtual object is below the center of the image of the real world.
5. The image processing apparatus according to claim 3, wherein the determination unit further changes a range of the target position of the at least one virtual object, depending on a moving speed of the user.
6. The image processing apparatus according to claim 1, wherein the image processing unit changes a frame rate depending on the motion status of the user.
7. The image processing apparatus according to claim 1, wherein the image processing unit adjusts a size of the at least one virtual object over a plurality of frames.
8. The image processing apparatus according to claim 1, wherein the at least one virtual object comprises a plurality of virtual objects, and the image processing unit places each of the plurality of virtual objects based on a predetermined superimposition priority level.
9. The image processing apparatus according to claim 1, wherein the motion status includes information as to whether the user is moving, and information about a speed and a traveling direction of the user.
10. The image processing apparatus according to claim 1, wherein the image processing unit places the at least one virtual object after applying image processing to the at least one virtual object depending on the motion status.
11. The image processing apparatus according to claim 10, wherein the image processing is processing of applying shadow to the at least one virtual object.
12. The image processing apparatus according to claim 10, wherein the image processing is processing of applying lighting to the at least one virtual object.
13. The image processing apparatus according to claim 10, wherein the image processing is processing of changing quality of rendering the at least one virtual object.
14. An image processing method to be executed by an image processing apparatus that generates an image of a mixed world by superimposing a virtual object on an image of a real world, the image processing method comprising: acquiring information related to a motion status of a user of the image processing apparatus; determining a position for placing the virtual object in the image of the real world, based on the acquired motion status; and performing image processing of generating an image in which the virtual object is placed at the determined position, wherein a position corresponding to a direction in which the user moves is determined as the position for placing the virtual object.
15. A non-transitory computer-readable storage medium storing a program for causing a computer to execute a control method for an image processing apparatus that generates an image of a mixed world by superimposing a virtual object on an image of a real world, the control method comprising: acquiring information related to a motion status of a user of the image processing apparatus; determining a position for placing the virtual object in the image of the real world, based on the acquired motion status; and performing image processing of generating an image in which the virtual object is placed at the determined position, wherein a position corresponding to a direction in which the user moves is determined as the position for placing the virtual object.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DESCRIPTION OF THE EMBODIMENTS
[0020] A first exemplary embodiment of the present disclosure will be described with reference to the drawings.
[0022] In this process, the relocation is performed over a plurality of rendering frames, so that when the user is walking forward, each virtual object is displayed with a visual effect of gradually moving from the coordinates indicated in the image 101 to the coordinates indicated in the image 102. In the present exemplary embodiment, when the user is walking forward, the virtual objects are relocated above the center of the screen so as to raise the line of sight of the user and thereby help the user avoid danger.
[0023] In the present exemplary embodiment, when the movement of the virtual objects for the relocation starts, the sizes of the virtual objects, the method of displaying the condition of overlap between the virtual objects, the polygon model, and the method of rendering the shadow and lighting of the texture are also changed. In the present exemplary embodiment, the way of rendering each of the virtual objects is changed as described below.
[0024] An image 106 is an example of an image displayed when the user is descending stairs. In the present exemplary embodiment, when the user is moving forward and downward, such as when descending stairs, the virtual objects are relocated below the center of the screen so as to lower the line of sight of the user and thereby help the user avoid danger. The changes to the size of each virtual object and to the method of displaying the overlap condition are similar to those in the image 102 when the user is walking forward.
[0025] In the present exemplary embodiment, the frame rate for displaying the image is changed depending on whether the user is at a stop, walking forward, or descending stairs. High quality is set while the user is at a stop, and low quality is set for the other motion statuses. For example, the frame rate is 120 fps (frames per second) while the user is at a stop, and 30 fps for the other motion statuses.
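For illustration only (this code is not part of the disclosed apparatus), the frame-rate switching described in paragraph [0025] could be sketched as follows; the status labels "stop", "walking", and "descending" are assumed names, and only the 120 fps and 30 fps values come from the example above.

```python
# Hypothetical mapping from motion status to display frame rate.
# High quality (120 fps) only while the user is at a stop; low quality
# (30 fps) for the other motion statuses, as in paragraph [0025].
FRAME_RATES = {"stop": 120, "walking": 30, "descending": 30}

def frame_rate_for(motion_status: str) -> int:
    """Return the display frame rate (fps) for the given motion status."""
    # Unknown statuses fall back to the low-quality rate.
    return FRAME_RATES.get(motion_status, 30)
```

A concrete fps table like this could equally be replaced by the high/low quality labels used elsewhere in the description.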
[0026] The above-described setting is merely an example in the present exemplary embodiment, and settings or initial setting values different from it may be provided. The coordinates to which the virtual objects are relocated, as represented by the examples described above, may likewise be set differently.
[0028] The image processing apparatus 200 in the present exemplary embodiment includes the CPU 201, the ROM 202, the RAM 203, and the I/F 204, all of which are connected to one another by a bus 205. The CPU 201 controls the operation of the image processing apparatus 200, and a program loaded into the ROM 202 or the RAM 203 carries out the processing in the flowcharts described below. Further, the RAM 203 is also used as a work memory that stores temporary data for the processing performed by the CPU 201, and also functions as an image buffer that temporarily holds image data to be displayed. The I/F 204 is an interface for communicating with the outside; image data of the real world and data for determining the status of the user are input via the I/F 204, and image data to be displayed is output via the I/F 204.
[0029] Although one CPU is illustrated, the number of processors is not limited to one, and the processing may be shared among a plurality of processors.
[0033] An item “motion status” refers to information indicating stop, walking, or descending. Part or all of information such as the frontward, backward, leftward, rightward, upward, and downward directions in the screen may also be used. Alternatively, as for descending, the information may be obtained from the item “traveling direction” described below, in which case no descending information needs to be held as the motion status. An item “speed” refers to the moving speed of the user. An item “traveling direction” refers to the vector of the traveling direction of the user. An item “head angle” refers to the vector of the orientation of the head of the user.
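For illustration only, the items of the motion status data 302 could be modeled as a simple record; the field names, types, and default values below are assumptions and not part of the original disclosure.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class MotionStatus:
    # "motion status": stop, walking, or descending (labels assumed here)
    status: str = "stop"
    # moving speed of the user (units assumed to be m/s)
    speed: float = 0.0
    # vector of the traveling direction of the user
    traveling_direction: Tuple[float, float, float] = (0.0, 0.0, 0.0)
    # vector of the orientation of the user's head
    head_angle: Tuple[float, float, float] = (0.0, 0.0, 0.0)
```

Such a record would be updated by the information input unit 301 each time new status data arrives via the I/F 204.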
[0036] An item “size at relocation time” refers to the size of the virtual object at the time of relocation.
[0037] An item “shadow at relocation time” refers to shadow information to be applied to the texture of the virtual object to be rendered at the time of relocation. In the present exemplary embodiment, whether to render the texture using the shadow information is defined by being expressed as applied/not applied. An item “lighting at relocation time” refers to lighting information to be applied to the texture of the virtual object to be rendered at the time of relocation. In the present exemplary embodiment, whether to render the texture using the lighting information is defined by being expressed as applied/not applied.
[0038] For the shadow and the lighting at relocation time, an image processing technique to be applied to the texture may be defined.
[0039] An item “superimposition priority level” refers to information about the method for displaying the condition of overlap between the virtual objects after the relocation. In the present exemplary embodiment, this is expressed in three grades as highest, high, and low, and the higher the priority level, the further frontward the virtual object is displayed. An integer may be used as the superimposition priority level, and the priority level may be changed. An item “location coordinates when walking” refers to the relocation coordinates when the user is walking forward, and is defined using, for example, the coordinates indicated in the image 102 when the user is walking forward.
[0040] An item “relocation coordinates” refers to the relocation coordinates corresponding to the current motion status of the user. The item holds the pre-relocation coordinates, the location coordinates when walking, or the location coordinates when descending. An item “current coordinates” refers to the display coordinates in the corresponding frame during the relocation. An item “frame rate” refers to the quality of the frame rate for each motion status, and is defined as high quality or low quality. The frame rate may instead be defined using a specific value (fps) for each motion status.
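For illustration only, the items of the relocation data 305 described in paragraphs [0036] to [0040] could be grouped into one record per virtual object; all field names and the use of 2D screen coordinates are assumptions for this sketch.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

Coord = Tuple[float, float]  # assumed 2D screen coordinates

@dataclass
class RelocationRecord:
    size_at_relocation: Optional[float]   # None stands in for "no change"
    polygon_model: str                    # model label at relocation time
    shadow_applied: bool                  # "shadow at relocation time"
    lighting_applied: bool                # "lighting at relocation time"
    superimposition_priority: str         # "highest", "high", or "low"
    pre_relocation_coords: Coord
    coords_when_walking: Coord            # "location coordinates when walking"
    coords_when_descending: Coord         # "location coordinates when descending"
    relocation_coords: Coord              # target for the current motion status
    current_coords: Coord                 # display position in the current frame
    frame_rate_quality: str               # "high" or "low"
```

One such record per virtual object would then be read by the image processing unit 304 when rendering each frame.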
[0041] The types of values indicated as the size at relocation time, the polygon model at relocation time, the shadow at relocation time, the lighting at relocation time, the superimposition priority level, and the frame rate described in the present exemplary embodiment are not limited to the example described above. As the location coordinates when walking described in the present exemplary embodiment, one or more pieces of coordinate information may be held based on speed thresholds at the time of walking. For example, the information may be divided into walking and running, or a relocation to different coordinates may be performed for every 2 m/s increase in speed.
[0043] In step S601, the information input unit 301 determines whether to continue the experience of the mixed world. As a result of this determination, if the experience of the mixed world is to be continued (YES in step S601), the processing proceeds to step S602. Otherwise (NO in step S601), the processing ends.
[0044] In step S602, the information input unit 301 interprets the status data about the user obtained via the I/F 204, and updates the data about the motion status, the speed, the traveling direction, and the head angle in the motion status data 302 based on the interpreted status data.
[0045] In the present exemplary embodiment, the method has been described of constantly acquiring the status data about the user and updating the motion status data 302, but the timing of the update is not particularly limited. For example, the update may be performed every frame, or may be performed in response to a change in the status data about the user obtained via the I/F 204.
[0047] In step S701, the relocation determination unit 303 determines whether the information about the motion status data 302 is updated. As a result of this determination, if the information is updated (YES in step S701), the processing proceeds to step S702. Otherwise (NO in step S701), the processing proceeds to step S703.
[0048] In step S702, the relocation determination unit 303 updates the relocation coordinates of each of the virtual objects of the relocation data 305 based on the updated information about the motion status data 302. The processing will be specifically described below with reference to
[0049] In step S703, the relocation determination unit 303 determines whether to continue the experience of the mixed world. As a result of this determination, if the experience of the mixed world is to be continued (YES in step S703), the processing returns to step S701. Otherwise (NO in step S703), the processing ends.
[0051] In step S801, the relocation determination unit 303 determines whether the current motion status in the motion status data 302 indicates stop. As a result of this determination, if the motion status indicates stop (YES in step S801), the processing proceeds to step S802. Otherwise (NO in step S801), the processing proceeds to step S803.
[0052] In step S802, the relocation determination unit 303 updates the relocation coordinates of the relocation data 305 with the pre-relocation coordinates, and the processing ends.
[0053] In step S803, the relocation determination unit 303 determines whether the motion status in the motion status data 302 indicates descending. As a result of this determination, if the motion status indicates descending (YES in step S803), the processing proceeds to step S804. Otherwise (NO in step S803), the processing proceeds to step S805.
[0054] In step S804, the relocation determination unit 303 updates the relocation coordinates of the relocation data 305 with the location coordinates when descending, and the processing ends.
[0055] In step S805, because the motion status of the motion status data 302 indicates walking, the relocation determination unit 303 updates the relocation coordinates of the relocation data 305 with the location coordinates when walking, and the processing ends. The processing order depending on the type of the motion status described in the present exemplary embodiment is not limited to this example, and the processing order may be changed. Alternatively, the processing may be further changed depending on the speed, by referring to the speed of the motion status data 302 during walking.
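For illustration only, the decision in steps S801 to S805 could be sketched as follows; the status labels and record attribute names are assumptions that mirror the items of the relocation data 305, not the disclosed implementation.

```python
from types import SimpleNamespace

def update_relocation_coords(record, motion_status: str) -> None:
    """Select the relocation target for the current motion status
    (steps S801 to S805)."""
    if motion_status == "stop":               # S801 -> S802
        record.relocation_coords = record.pre_relocation_coords
    elif motion_status == "descending":       # S803 -> S804
        record.relocation_coords = record.coords_when_descending
    else:                                     # S805: remaining status is walking
        record.relocation_coords = record.coords_when_walking

# Example with placeholder coordinates:
rec = SimpleNamespace(pre_relocation_coords=(0.5, 0.5),
                      coords_when_walking=(0.5, 0.3),
                      coords_when_descending=(0.5, 0.7),
                      relocation_coords=(0.5, 0.5))
update_relocation_coords(rec, "descending")
# rec.relocation_coords now holds the coordinates when descending.
```

As paragraph [0055] notes, the order of these checks, and any further branching on speed, could be changed without altering the outcome for a given status.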
[0057] In step S901, the image processing unit 304 determines whether the “current coordinates” in the relocation data 305 are identical to the relocation coordinates. As a result of this determination, if the current coordinates are identical to the relocation coordinates (YES in step S901), the processing proceeds to step S905. Otherwise (NO in step S901), the processing proceeds to step S902.
[0058] In step S902, the image processing unit 304 calculates the coordinates of each of the virtual objects to be displayed in the next frame, and updates the current coordinates in the relocation data 305 with the calculated coordinates. The new current coordinates can be calculated on the assumption that the virtual object moves at a constant speed from the current coordinates before the update toward the relocation coordinates as the target coordinates.
[0059] In step S903, the image processing unit 304 creates the image data to be displayed on the display unit 306 by superimposing the virtual objects on the image of the real world based on the information in the relocation data 305, and stores the created image data into the image buffer. Here, the image processing unit 304 refers to the information about the polygon model at relocation time, the shadow at relocation time, and the lighting at relocation time in the relocation data 305 as image processing information. Further, if a value other than “no change” is predefined as the size at relocation time in the relocation data 305, the image processing unit 304 calculates a magnification on the assumption that the virtual object moves at a constant speed from the current coordinates before the update toward the relocation coordinates as the target coordinates, and enlarges or reduces the virtual object based on the calculated magnification.
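For illustration only, the constant-speed update of the current coordinates (step S902) and the progress-based magnification (step S903) could be sketched as follows; the use of 2D coordinates and a fixed per-frame step size are assumptions for this sketch.

```python
import math

def step_toward(current, target, step):
    """S902 sketch: move the object's coordinates toward the target by at
    most `step` units per frame, arriving exactly on the final frame."""
    dx, dy = target[0] - current[0], target[1] - current[1]
    dist = math.hypot(dx, dy)
    if dist <= step:
        return target  # close enough: snap to the relocation coordinates
    ratio = step / dist
    return (current[0] + dx * ratio, current[1] + dy * ratio)

def size_for_progress(start_size, target_size, total_dist, remaining_dist):
    """S903 sketch: enlarge or reduce the object in proportion to the
    progress of the movement toward the relocation coordinates."""
    if total_dist <= 0:
        return target_size
    progress = 1.0 - remaining_dist / total_dist
    return start_size + (target_size - start_size) * progress
```

Calling `step_toward` once per displayed frame yields the gradual movement described in paragraph [0022], and `size_for_progress` yields the gradual size adjustment of claim 7.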
[0060] In step S904, the image processing unit 304 awaits display on the display unit 306 of the image data stored into the image buffer in step S903, and the processing returns to step S901.
[0061] On the other hand, in step S905, the image processing unit 304 creates image data to be displayed on the display unit 306 by superimposing the virtual object on the image of the real world based on the information about the relocation data 305, and stores the created image data into the image buffer. In this case, the virtual object is displayed at the relocation coordinates, and its image data is generated based on the size, the polygon model, and other data predefined in the relocation data 305.
[0062] In step S906, the image processing unit 304 awaits display on the display unit 306 of the image data stored into the image buffer in step S905.
[0063] In step S907, the image processing unit 304 determines whether there is a frame to be displayed next. As a result of this determination, if there is a frame to be displayed next (YES in step S907), the processing returns to step S901. Otherwise (NO in step S907), the processing ends.
[0064] The above-described technique according to the present exemplary embodiment relocates a virtual object with a visual effect that prevents the user's experience from being impaired, while also preventing the loss of an opportunity to sense danger in the real world.
[0065] A second exemplary embodiment will be described. In the present exemplary embodiment, an example will be described in which, when virtual objects are relocated, information about the image area to which each virtual object is relocated is held, instead of coordinate information about each virtual object. The internal configurations of an image processing apparatus according to the present exemplary embodiment are similar to those in the first exemplary embodiment.
[0067] Likewise, although not illustrated, the image area to which the virtual objects are relocated when the user descends while moving forward, such as when descending stairs, also varies between walking and running.
[0070] In step S1201, the relocation determination unit 303 determines the image area of the mapping destination based on the speed in the motion status data 302, and further acquires information about the location area when walking and the location area when descending. First, the relocation determination unit 303 determines whether the user is walking or running based on the item “speed” in the motion status data 302; in this processing, the user is determined to be running if the speed is more than or equal to a threshold. Subsequently, the relocation determination unit 303 acquires the information about the “location area when walking” or the “location area when descending” corresponding to walking or running. The relocation determination unit 303 then calculates relocation coordinates mapped into the corresponding location area. The specific position within the location area when walking or the location area when descending to be used as the relocation coordinates is not limited; any relocation coordinates may be determined as long as they are within the corresponding location area.
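For illustration only, step S1201 could be sketched as follows; the rectangular layout of the location areas, the normalized screen coordinates, and the 2.0 m/s walking/running threshold are all assumed values, not part of the disclosure.

```python
import random

# Hypothetical layout: each location area is an axis-aligned rectangle
# (x_min, y_min, x_max, y_max) in normalized screen coordinates, with
# y = 0 at the top. The walking/running speed threshold is assumed.
RUN_THRESHOLD = 2.0  # m/s (assumed)
LOCATION_AREAS = {
    ("walking", False):    (0.3, 0.1, 0.7, 0.4),  # walking: above center
    ("walking", True):     (0.4, 0.1, 0.6, 0.3),  # running: narrower area
    ("descending", False): (0.3, 0.6, 0.7, 0.9),  # descending: below center
    ("descending", True):  (0.4, 0.7, 0.6, 0.9),
}

def relocation_coords(status, speed, rng=random):
    """S1201 sketch: decide walking vs. running by the speed threshold,
    then pick any point inside the corresponding location area."""
    running = speed >= RUN_THRESHOLD
    x0, y0, x1, y1 = LOCATION_AREAS[(status, running)]
    return (rng.uniform(x0, x1), rng.uniform(y0, y1))
```

Because any point inside the area is acceptable, the point is drawn uniformly here; a deterministic mapping (for example, the area's center) would satisfy the same condition.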
[0071] The above-described technique according to the present exemplary embodiment makes it possible, in relocating virtual objects, to hold image area information indicating the range to which each virtual object is relocated, without holding coordinate information about each virtual object beforehand.
OTHER EMBODIMENTS
[0072] Embodiment(s) of the present disclosure can also be realized by a computer of a system or apparatus that reads out and executes computer executable instructions (e.g., one or more programs) recorded on a storage medium (which may also be referred to more fully as a ‘non-transitory computer-readable storage medium’) to perform the functions of one or more of the above-described embodiment(s) and/or that includes one or more circuits (e.g., application specific integrated circuit (ASIC)) for performing the functions of one or more of the above-described embodiment(s), and by a method performed by the computer of the system or apparatus by, for example, reading out and executing the computer executable instructions from the storage medium to perform the functions of one or more of the above-described embodiment(s) and/or controlling the one or more circuits to perform the functions of one or more of the above-described embodiment(s). The computer may comprise one or more processors (e.g., central processing unit (CPU), micro processing unit (MPU)) and may include a network of separate computers or separate processors to read out and execute the computer executable instructions. The computer executable instructions may be provided to the computer, for example, from a network or the storage medium. The storage medium may include, for example, one or more of a hard disk, a random-access memory (RAM), a read only memory (ROM), a storage of distributed computing systems, an optical disk (such as a compact disc (CD), digital versatile disc (DVD), or Blu-ray Disc (BD)™), a flash memory device, a memory card, and the like.
[0073] While the present disclosure has been described with reference to exemplary embodiments, it is to be understood that the disclosure is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.
[0074] This application claims the benefit of Japanese Patent Application No. 2021-133923, filed Aug. 19, 2021, which is hereby incorporated by reference herein in its entirety.