METHOD FOR ADJUSTING VIRTUAL OBJECT, HOST, AND COMPUTER READABLE STORAGE MEDIUM
20230222625 · 2023-07-13
Abstract
The embodiments of the disclosure provide a method for adjusting a virtual object, a host, and a computer readable storage medium. The method includes: obtaining a first field of view (FOV) of a virtual world; obtaining a second FOV of a camera, wherein a first physical object is located within the second FOV of the camera; determining a FOV ratio based on the first FOV and the second FOV; determining a first position of a first virtual object in the virtual world relative to a reference object in the virtual world, wherein the first virtual object corresponds to the first physical object; determining a second position of the first virtual object in the virtual world based on the first position and the FOV ratio; and showing the first virtual object at the second position in the virtual world.
Claims
1. A method for adjusting a virtual object, adapted to a host, comprising: obtaining a first field of view (FOV) of a virtual world; obtaining a second FOV of a camera, wherein a first physical object is located within the second FOV of the camera; determining a FOV ratio based on the first FOV and the second FOV; determining a first position of a first virtual object in the virtual world relative to a reference object in the virtual world, wherein the first virtual object corresponds to the first physical object; determining a second position of the first virtual object in the virtual world based on the first position and the FOV ratio; and showing the first virtual object at the second position in the virtual world.
2. The method according to claim 1, wherein the first FOV and the second FOV are characterized by a first viewing angle and a second viewing angle, and the step of determining the FOV ratio based on the first FOV and the second FOV comprises: obtaining the FOV ratio via dividing the first viewing angle by the second viewing angle.
3. The method according to claim 1, wherein the first FOV comprises a sub-FOV, the first FOV and the sub-FOV are characterized by a first viewing angle and a third viewing angle, and the step of determining the FOV ratio based on the first FOV and the second FOV comprises: obtaining the FOV ratio via dividing the third viewing angle by the second viewing angle.
4. The method according to claim 1, wherein the reference object comprises at least one reference plane in the virtual world, and the step of determining the first position of the first virtual object in the virtual world relative to the reference object in the virtual world comprises: obtaining a distance between the first virtual object and each of the at least one reference plane as the first position of the first virtual object in the virtual world relative to the reference object in the virtual world.
5. The method according to claim 4, wherein the at least one reference plane comprises at least one of a Y-Z plane, an X-Z plane, and an X-Y plane in the virtual world, and the method comprises: in response to determining that the reference object is the Y-Z plane, obtaining an X component of a coordinate of the first virtual object in the virtual world as the distance between the first virtual object and the Y-Z plane; in response to determining that the reference object is the X-Z plane, obtaining a Y component of the coordinate of the first virtual object in the virtual world as the distance between the first virtual object and the X-Z plane; and in response to determining that the reference object is the X-Y plane, obtaining a Z component of the coordinate of the first virtual object in the virtual world as the distance between the first virtual object and the X-Y plane.
6. The method according to claim 4, wherein the step of determining the second position in the virtual world based on the first position and the FOV ratio comprises: obtaining the second position via multiplying the FOV ratio by the distance between the first virtual object and each of the at least one reference plane.
7. The method according to claim 4, wherein before the step of showing the first virtual object at the second position in the virtual world, the method further comprises: correcting a depth of the second position.
8. The method according to claim 1, wherein the reference object comprises a user representative object in the virtual world, and the step of determining the first position of the first virtual object in the virtual world relative to the reference object in the virtual world comprises: obtaining a multi-axis angle of the first virtual object relative to the user representative object as the first position of the first virtual object in the virtual world relative to the reference object.
9. The method according to claim 8, wherein the step of determining the second position in the virtual world based on the first position and the FOV ratio comprises: obtaining the second position via multiplying the FOV ratio by the multi-axis angle.
10. The method according to claim 1, wherein the first FOV of the virtual world is larger than the second FOV of the camera.
11. A host, comprising: a non-transitory storage circuit, storing a program code; a processor, coupled to the non-transitory storage circuit and accessing the program code to perform: obtaining a first field of view (FOV) of a virtual world; obtaining a second FOV of a camera, wherein a first physical object is located within the second FOV of the camera; determining a FOV ratio based on the first FOV and the second FOV; determining a first position of a first virtual object in the virtual world relative to a reference object in the virtual world, wherein the first virtual object corresponds to the first physical object; determining a second position of the first virtual object in the virtual world based on the first position and the FOV ratio; and showing the first virtual object at the second position in the virtual world.
12. The host according to claim 11, wherein the first FOV and the second FOV are characterized by a first viewing angle and a second viewing angle, and the processor performs: obtaining the FOV ratio via dividing the first viewing angle by the second viewing angle.
13. The host according to claim 11, wherein the first FOV comprises a sub-FOV, the first FOV and the sub-FOV are characterized by a first viewing angle and a third viewing angle, and the processor performs: obtaining the FOV ratio via dividing the third viewing angle by the second viewing angle.
14. The host according to claim 11, wherein the reference object comprises at least one reference plane in the virtual world, and the processor performs: obtaining a distance between the first virtual object and each of the at least one reference plane as the first position of the first virtual object in the virtual world relative to the reference object in the virtual world.
15. The host according to claim 14, wherein the at least one reference plane comprises at least one of a Y-Z plane, an X-Z plane, and an X-Y plane in the virtual world, and the processor performs: in response to determining that the reference object is the Y-Z plane, obtaining an X component of a coordinate of the first virtual object in the virtual world as the distance between the first virtual object and the Y-Z plane; in response to determining that the reference object is the X-Z plane, obtaining a Y component of the coordinate of the first virtual object in the virtual world as the distance between the first virtual object and the X-Z plane; and in response to determining that the reference object is the X-Y plane, obtaining a Z component of the coordinate of the first virtual object in the virtual world as the distance between the first virtual object and the X-Y plane.
16. The host according to claim 14, wherein the processor performs: obtaining the second position via multiplying the FOV ratio by the distance between the first virtual object and each of the at least one reference plane.
17. The host according to claim 14, wherein before showing the first virtual object at the second position in the virtual world, the processor further performs: correcting a depth of the second position.
18. The host according to claim 11, wherein the reference object comprises a user representative object in the virtual world, and the processor performs: obtaining a multi-axis angle of the first virtual object relative to the user representative object as the first position of the first virtual object in the virtual world relative to the reference object.
19. The host according to claim 18, wherein the processor performs: obtaining the second position via multiplying the FOV ratio by the multi-axis angle.
20. A non-transitory computer readable storage medium, recording an executable computer program, the executable computer program being loaded by a host to perform steps of: obtaining a first field of view (FOV) of a virtual world; obtaining a second FOV of a camera, wherein a first physical object is located within the second FOV of the camera; determining a FOV ratio based on the first FOV and the second FOV; determining a first position of a first virtual object in the virtual world relative to a reference object in the virtual world, wherein the first virtual object corresponds to the first physical object; determining a second position of the first virtual object in the virtual world based on the first position and the FOV ratio; and showing the first virtual object at the second position in the virtual world.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0013] The accompanying drawings are included to provide a further understanding of the invention, and are incorporated in and constitute a part of this specification. The drawings illustrate embodiments of the invention and, together with the description, serve to explain the principles of the disclosure.
DESCRIPTION OF THE EMBODIMENTS
[0024] Reference will now be made in detail to the present preferred embodiments of the invention, examples of which are illustrated in the accompanying drawings. Wherever possible, the same reference numbers are used in the drawings and the description to refer to the same or like parts.
[0025] See
[0026] The storage circuit 202 is one or a combination of a stationary or mobile random access memory (RAM), a read-only memory (ROM), a flash memory, a hard disk, or any other similar device, and records a plurality of modules that can be executed by the processor 204.
[0027] The processor 204 may be coupled with the storage circuit 202, and the processor 204 may be, for example, a general purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuit (ASIC) circuits, Field Programmable Gate Array (FPGA) circuits, any other type of integrated circuit (IC), a state machine, and the like.
[0028] In one embodiment, the host 200 can be implemented as a tracking device that is capable of performing, for example, inside-out tracking and/or outside-in tracking. In one embodiment, the tracking device can be a wearable device such as a head-mounted display (HMD). In some embodiments, the HMD can be used to provide reality services (e.g., an AR service, a VR service, and/or the like) by displaying the corresponding visual contents to the wearer, but the disclosure is not limited thereto.
[0029] In one embodiment, the host 200 can be disposed with one or more (tracking) cameras for capturing images used to perform tracking functions, such as the inside-out tracking.
[0030] In the embodiments of the disclosure, the processor 204 may access the modules stored in the storage circuit 202 to implement the method for adjusting a virtual object provided in the disclosure, which would be further discussed in the following.
[0031] See
[0032] In step S310, the processor 204 obtains a first FOV F1 of a virtual world. In the embodiment, the virtual world can be the VR environment provided by the host 200 as the VR service to the user of the host 200, but the disclosure is not limited thereto. In
[0033] In step S320, the processor 204 obtains a second FOV F2 of a camera. In one embodiment, the camera can be the tracking camera disposed on the host 200 for capturing images of one or more to-be-tracked physical objects within the second FOV F2.
[0034] In some embodiments, the processor 204 can simply read the system parameters/settings corresponding to the virtual world and the camera to obtain the first FOV F1 and the second FOV F2, but the disclosure is not limited thereto.
[0035] In the embodiments of the disclosure, the first FOV F1 and the second FOV F2 are characterized by a first viewing angle AN1 and a second viewing angle AN2. In
[0036] In step S330, the processor 204 determines a FOV ratio based on the first FOV F1 and the second FOV F2. In one embodiment, the processor 204 can obtain the FOV ratio via dividing the first viewing angle AN1 by the second viewing angle AN2, i.e., the processor 204 obtains the FOV ratio by calculating AN1/AN2.
[0037] In another embodiment, the first FOV F1 can include a sub-FOV having a size between the first FOV F1 and the second FOV F2, and the sub-FOV can be characterized by a third viewing angle (referred to as AN3). In this case, the processor 204 can obtain the FOV ratio via dividing the third viewing angle AN3 by the second viewing angle AN2, i.e., the processor 204 obtains the FOV ratio by calculating AN3/AN2, but the disclosure is not limited thereto.
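As a non-limiting editorial sketch (the disclosure itself defines the ratio only in prose), the computation of steps S310-S330 may be expressed in Python as follows; the function name, argument names, and the use of degrees are assumptions for illustration only:

```python
def fov_ratio(an1_deg, an2_deg, an3_deg=None):
    """Return the FOV ratio used to adjust virtual-object positions.

    an1_deg: first viewing angle AN1, characterizing the first FOV F1
             of the virtual world.
    an2_deg: second viewing angle AN2, characterizing the second FOV F2
             of the camera.
    an3_deg: optional third viewing angle AN3, characterizing a sub-FOV
             of F1; when given, the ratio is AN3/AN2 instead of AN1/AN2.
    """
    numerator = an3_deg if an3_deg is not None else an1_deg
    return numerator / an2_deg
```

For example, with a hypothetical AN1 of 110 degrees and AN2 of 100 degrees, the ratio would be 1.1; supplying a hypothetical sub-FOV angle AN3 of 105 degrees would yield 1.05 instead.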
[0038] In the scenario of
[0039] In step S340, the processor 204 determines a first position of a first virtual object in the virtual world relative to a reference object in the virtual world, wherein the first virtual object corresponds to one of the physical objects within the second FOV F2. Taking
[0040] In one embodiment, the considered reference object can include at least one reference plane in the virtual world. In this case, the processor 204 can obtain a distance between the first virtual object and each of the at least one reference plane as the first position of the first virtual object in the virtual world relative to the reference object in the virtual world.
[0041] In
[0042] For example, the processor 204 can obtain the X component of the coordinate of the virtual object O1 in the virtual world as the distance X1 between the virtual object O1 and the Y-Z plane P1. In this case, the distance X1 can be the first position of virtual object O1 in the virtual world relative to the reference object in the virtual world.
[0043] For another example, the processor 204 can obtain the X component of the coordinate of the virtual object O2 in the virtual world as the distance X2 between the virtual object O2 and the Y-Z plane P1. In this case, the distance X2 can be the first position of virtual object O2 in the virtual world relative to the reference object in the virtual world.
[0044] In
[0045] For example, the processor 204 can obtain the Y component of the coordinate of the virtual object O1 in the virtual world as the distance Y1 between the virtual object O1 and the X-Z plane P2. In this case, the distance Y1 can be the first position of virtual object O1 in the virtual world relative to the reference object in the virtual world.
[0046] For another example, the processor 204 can obtain the Y component of the coordinate of the virtual object O2 in the virtual world as the distance Y2 between the virtual object O2 and the X-Z plane P2. In this case, the distance Y2 can be the first position of virtual object O2 in the virtual world relative to the reference object in the virtual world.
[0047] In other embodiments, the reference object can be an X-Y plane. In this case, the processor 204 can obtain a Z component of the coordinate of the first virtual object in the virtual world as the distance between the first virtual object and the X-Y plane.
[0048] For example, the processor 204 can obtain the Z component of the coordinate of the virtual object O1 in the virtual world as the distance between the virtual object O1 and the X-Y plane. For another example, the processor 204 can obtain the Z component of the coordinate of the virtual object O2 in the virtual world as the distance between the virtual object O2 and the X-Y plane.
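The selection of the coordinate component described in paragraphs [0041]-[0048] can be sketched as follows; this is an illustrative, non-limiting Python fragment by the editor, and the function name and plane labels are assumptions:

```python
def first_position(coord, plane):
    """Return the distance between a virtual object and a reference plane.

    coord: (x, y, z) coordinate of the virtual object in the virtual world.
    plane: "YZ", "XZ", or "XY", identifying the reference plane.
    """
    x, y, z = coord
    if plane == "YZ":
        return x  # X component is the distance to the Y-Z plane
    if plane == "XZ":
        return y  # Y component is the distance to the X-Z plane
    if plane == "XY":
        return z  # Z component is the distance to the X-Y plane
    raise ValueError("unknown reference plane: %r" % (plane,))
```

For instance, a virtual object at the hypothetical coordinate (X1, Y1, Z1) would yield X1 relative to the Y-Z plane and Y1 relative to the X-Z plane, consistent with the distances X1 and Y1 discussed above.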
[0049] In step S350, the processor 204 determines a second position of the first virtual object in the virtual world based on the first position and the FOV ratio. In one embodiment, the processor 204 obtains the second position via multiplying the FOV ratio by the distance between the first virtual object and each of the at least one reference plane.
[0050] For example, in
[0051] Similarly, the processor 204 can multiply the FOV ratio by the distance X2 between the virtual object O2 and the Y-Z plane P1 to obtain a distance X2′ for characterizing the second position of the virtual object O2. In the embodiment, the second position of the virtual object O2 can be represented by the to-be-shown position L22.
[0052] For another example, in
[0053] Similarly, the processor 204 can multiply the FOV ratio by the distance Y2 between the virtual object O2 and the X-Z plane P2 to obtain a distance Y2′ for characterizing the second position of the virtual object O2. In the embodiment, the second position of the virtual object O2 can be represented by the to-be-shown position L22′.
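The scaling of step S350, as applied in paragraphs [0050]-[0053], amounts to multiplying each plane distance by the FOV ratio. A minimal illustrative sketch (editorial, non-limiting; the function name is an assumption):

```python
def second_position(distances, ratio):
    """Scale each plane distance (e.g., X1 and Y1) by the FOV ratio to
    obtain the corresponding components (e.g., X1' and Y1') of the
    second position."""
    return tuple(ratio * d for d in distances)
```

For example, scaling hypothetical distances (2.0, 4.0) by a ratio of 1.5 yields (3.0, 6.0), mirroring how X1′ and Y1′ are obtained from X1 and Y1.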
[0054] In step S360, the processor 204 shows the first virtual object at the second position in the virtual world.
[0055] In
[0056] In this case, when the physical object corresponding to the virtual object O1 moves near the boundary of the second FOV F2, the virtual object O1 would accordingly be moved near the boundary of the first FOV F1. In addition, when the physical object corresponding to the virtual object O1 reaches the boundary of the second FOV F2 and leaves the second FOV F2, the virtual object O1 would also reach the boundary of the first FOV F1 and naturally leave the first FOV F1, rather than suddenly disappearing somewhere in the first FOV F1 as shown in
[0057] Accordingly, the visual experience of the user can be prevented from being affected by suddenly disappearing virtual objects even if the corresponding physical objects are in the dead zone of the camera.
[0058] In
[0059] Accordingly, the visual experience of the user can be guaranteed for the reasons stated above.
[0060] In one embodiment, the results in
[0061] In one embodiment, after obtaining the second position of the first virtual object, the processor 204 can further correct a depth of the second position of the first virtual object. Taking the virtual object O1 as an example, if the second position thereof is determined to be a specific position whose X component and Y component are the distances X1′ and Y1′, respectively, the distance between this specific position and the user representative object 499 would be longer than the distance between the to-be-projected position L11 and the user representative object 499. That is, if the virtual object O1 is directly shown at the specific position, the distance between this specific position and the user representative object 499 would be slightly distorted.
[0062] Therefore, the processor 204 can correct the depth of this specific position based on the distance between the to-be-projected position L11 and the user representative object 499, such that the distance between the user representative object 499 and the corrected specific position (i.e., the corrected second position) can be less distorted. In one embodiment, after correcting the depth of the specific position, the distance between the corrected specific position and the user representative object 499 can be substantially the same as the distance between the to-be-projected position L11 and the user representative object 499, but the disclosure is not limited thereto.
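The depth correction of paragraphs [0061]-[0062] can be understood as rescaling the second position along the ray from the user representative object so that its distance to the user matches the distance of the to-be-projected position. The following is an editorial, non-limiting Python sketch of that geometric step; all names are assumptions:

```python
import math

def correct_depth(second_pos, projected_pos, user_pos):
    """Rescale second_pos along the ray from the user representative
    object so that its distance to the user matches the distance between
    the to-be-projected position and the user, reducing depth distortion.
    """
    d_target = math.dist(projected_pos, user_pos)
    d_current = math.dist(second_pos, user_pos)
    if d_current == 0.0:
        return second_pos  # degenerate case: object at the user position
    scale = d_target / d_current
    return tuple(u + scale * (p - u) for p, u in zip(second_pos, user_pos))
```

After this correction, the distance between the corrected second position and the user representative object is substantially the same as the distance between the to-be-projected position and the user representative object, as stated in paragraph [0062].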
[0063] From another perspective, the concept of the disclosure can be understood as creating a mapping relationship between the first FOV F1 and the second FOV F2.
[0064] See
[0065] Therefore, for the virtual objects O1-O3, instead of showing the virtual objects O1-O3 at the corresponding to-be-projected positions 511-513, the processor 204 would show the virtual objects O1-O3 at the corresponding second positions 511′-513′, i.e., the mapped positions of the to-be-projected positions 511-513 in the first FOV F1. Hence, when the virtual objects O1-O3 reach the boundary of the second FOV F2 and leave the second FOV F2, the user would see the virtual objects reach the boundary of the first FOV F1 and leave the first FOV F1. Accordingly, the visual experience of the user can be prevented from being affected by suddenly disappearing virtual objects even if the corresponding physical objects are in the dead zone of the camera.
[0066] In one embodiment, the considered reference object can be the user representative object 499. In this case, when the processor 204 determines the first position of the first virtual object in the virtual world relative to the reference object in the virtual world, the processor 204 can obtain a multi-axis angle of the first virtual object relative to the user representative object 499 as the first position of the first virtual object in the virtual world relative to the reference object.
[0067] See
[0068] In
[0069] In one embodiment, the processor 204 obtains the second position via multiplying the FOV ratio by the multi-axis angle MA1. In
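The angle-based variant of step S350 multiplies each axis angle of the multi-axis angle by the FOV ratio. A minimal illustrative sketch (editorial, non-limiting; the function name and the choice of axes are assumptions):

```python
def scale_multi_axis_angle(angles_deg, ratio):
    """Scale each axis angle of the multi-axis angle (e.g., the angles of
    the first virtual object relative to the user representative object
    about each axis) by the FOV ratio to obtain the angles characterizing
    the second position."""
    return tuple(ratio * a for a in angles_deg)
```

For example, a hypothetical multi-axis angle of (10.0, 20.0) degrees scaled by a ratio of 1.5 becomes (15.0, 30.0) degrees.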
[0070] In this case, when the physical object corresponding to the virtual object O1 moves near the boundary of the second FOV F2, the virtual object O1 would accordingly be moved near the boundary of the first FOV F1. In addition, when the physical object corresponding to the virtual object O1 reaches the boundary of the second FOV F2 and leaves the second FOV F2, the virtual object O1 would also reach the boundary of the first FOV F1 and naturally leave the first FOV F1, rather than suddenly disappearing somewhere in the first FOV F1.
[0071] See
[0072] In the embodiment, the size of the sub-FOV F3 can be between the first FOV F1 and the second FOV F2. As mentioned in the above, the sub-FOV F3 can be characterized by the third viewing angle AN3. In this case, the processor 204 can obtain the FOV ratio via dividing the third viewing angle AN3 by the second viewing angle AN2, i.e., the processor 204 obtains the FOV ratio by calculating AN3/AN2, but the disclosure is not limited thereto.
[0073] With the FOV ratio (i.e., AN3/AN2), the processor 204 can accordingly perform steps S340-S360 based on the above teachings; for details, reference may be made to the descriptions of the above embodiments, which are not repeated herein.
[0074] Similar to the teachings of
[0075] Therefore, for the virtual objects O1-O2, instead of showing the virtual objects O1-O2 at the corresponding to-be-projected positions 711-712, the processor 204 would show the virtual objects O1-O2 at the corresponding second positions 711′-712′, i.e., the mapped positions of the to-be-projected positions 711-712 in the sub-FOV F3.
[0076] The disclosure further provides a computer readable storage medium for executing the method for adjusting a virtual object. The computer readable storage medium is composed of a plurality of program instructions (for example, a setting program instruction and a deployment program instruction) embodied therein. These program instructions can be loaded into the host 200 and executed by the same to execute the method for adjusting a virtual object and the functions of the host 200 described above.
[0077] In summary, the embodiments of the disclosure provide a mechanism for adjusting the shown position of the virtual object based on the size relationship between the first FOV of the virtual world and the second FOV of the camera. Accordingly, the visual experience of the user can be prevented from being affected by suddenly disappearing virtual objects even if the corresponding physical objects are in the dead zone of the camera. In addition, when tracking the physical object (e.g., the handheld controllers of the VR system), the host does not need to rely on the motion data provided by the auxiliary devices on the physical object.
[0078] It will be apparent to those skilled in the art that various modifications and variations can be made to the structure of the present invention without departing from the scope or spirit of the disclosure. In view of the foregoing, it is intended that the present disclosure cover modifications and variations of this invention provided they fall within the scope of the following claims and their equivalents.