METHOD FOR REAL AND VIRTUAL COMBINED POSITIONING
20220373692 · 2022-11-24
Inventors
CPC classification
G06T7/246
PHYSICS
G01S19/485
PHYSICS
G01S19/396
PHYSICS
Y02D30/70
GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
International classification
G01S19/39
PHYSICS
G01S19/48
PHYSICS
G06T7/246
PHYSICS
Abstract
The present invention discloses a method for real and virtual combined positioning. It not only sends positioning information from an electronic device to a server for tracking the location of the electronic device, but also captures an external scene image and scene sound through the electronic device, or has the server generate a corresponding scene image and scene sound based on the positioning information, which are further used to confirm the location of the electronic device.
Claims
1. A method for real and virtual combined positioning, which comprises: a server sends a positioning request to a first electronic device; according to the positioning request, the first electronic device generates positioning information, at least one scene image and at least one scene sound and sends them to the server, wherein the first electronic device uses an image capturing unit to capture the at least one scene image near the first electronic device, and the first electronic device uses a sound capturing unit to capture the at least one scene sound near the first electronic device; the server reads at least one reference image and at least one reference sound from a database according to the positioning information and compares them with the at least one scene image and the at least one scene sound to confirm the correctness of the positioning information; and the server confirms the location of the first electronic device based on the at least one scene image, the at least one scene sound and the positioning information.
2. The method for real and virtual combined positioning of claim 1, wherein the at least one scene image is an image of a building object, an indoor object or a person.
3. The method for real and virtual combined positioning of claim 1, wherein the at least one scene sound is a sound of a fluid, a running machine, an animal or an insect.
4. The method for real and virtual combined positioning of claim 1, wherein, in the step where the first electronic device generates positioning information, at least one scene image and at least one scene sound according to the positioning request and sends them to the server, the first electronic device obtains the positioning information through a global positioning system (GPS), a gyroscope, an accelerometer or a combination thereof and sends it to the server.
5. The method for real and virtual combined positioning of claim 1, further comprising: a second electronic device sends the positioning request to the server.
6. The method for real and virtual combined positioning of claim 1, wherein, in the step where the first electronic device generates positioning information, at least one scene image and at least one scene sound according to the positioning request and sends them to the server, the first electronic device further sends a time mark to the server, the time mark corresponding to the at least one scene image and the at least one scene sound, so that the server can compare the at least one scene image and the at least one scene sound based on the positioning information and further on the time mark.
7. A method for real and virtual combined positioning, which comprises: a server sends a positioning request to a first electronic device; according to the positioning request, the first electronic device generates positioning information and meanwhile generates virtual information, and sends them to the server, wherein the virtual information contains a virtual parameter of at least one virtual image and at least one virtual sound; based on the virtual information, the server creates the at least one virtual image and the at least one virtual sound, said at least one virtual image and at least one virtual sound corresponding to the positioning information; the server reads at least one reference image and at least one reference sound according to the positioning information and compares them with the at least one virtual image and the at least one virtual sound to confirm the correctness of the positioning information; and the server confirms the location of the first electronic device based on the at least one virtual image, the at least one virtual sound and the positioning information.
8. The method for real and virtual combined positioning of claim 7, wherein said at least one virtual image is an image of a building object, an indoor object or a person.
9. The method for real and virtual combined positioning of claim 7, wherein said at least one virtual sound is a sound of a fluid, a running machine, an animal or an insect.
10. The method for real and virtual combined positioning of claim 7, wherein, in the step where the first electronic device sends the positioning information and the virtual information to the server according to the positioning request, the first electronic device obtains the positioning information through a global positioning system (GPS), a gyroscope, an accelerometer or a combination thereof and sends it to the server.
11. The method for real and virtual combined positioning of claim 7, further comprising a step in which a second electronic device sends the positioning request to the server.
12. The method for real and virtual combined positioning of claim 7, wherein, in the step where the first electronic device generates positioning information according to the positioning request and generates virtual information based on the positioning information to send to the server, the first electronic device generates the virtual information based on the latitude and longitude and a region name recorded in the positioning information.
13. The method for real and virtual combined positioning of claim 7, wherein, in the step where the server creates the at least one virtual image and the at least one virtual sound based on the virtual information, the server reads a plurality of image data and a plurality of sound data in a region database based on the virtual information to produce the at least one virtual image and the at least one virtual sound.
14. The method for real and virtual combined positioning of claim 7, wherein the virtual information further contains a time mark of the at least one virtual image and the at least one virtual sound; specifically, in the step where the server creates the at least one virtual image and the at least one virtual sound based on the virtual information, the server creates the at least one virtual image and the at least one virtual sound based on the positioning information and further on the time mark.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION OF THE PRESENT INVENTION
[0031] The present invention is described in detail below with respect to specific embodiments and with reference to the accompanying drawings, so that those skilled in the art can have a complete understanding of the features and objects mentioned above.
[0032] Embodiment 1, Method for Real and Virtual Combined Positioning
[0034] Step S10: The second electronic device sends a positioning request to the server;
[0035] Step S20: The server sends a positioning request to the first electronic device;
[0036] Step S30: The first electronic device generates positioning information, at least one scene image and at least one scene sound according to the positioning request, and sends them to the server;
[0037] Step S40: The server reads at least one reference image and at least one reference sound according to the positioning information and compares them to the at least one scene image and the at least one scene sound;
[0038] Step S50: The server judges if the similarity is greater than the threshold; and
[0039] Step S60: The server confirms the location of the first electronic device based on the at least one scene image, the at least one scene sound and the positioning information.
[0041] In Step S10, the second electronic device 30 sends a positioning request RQ to the server 10 through an operation by the user; then, in Step S20, the positioning request RQ is transferred to the first electronic device 20 via the server 10. In this way, the second electronic device 30 sends the positioning request RQ to the first electronic device 20 through the server 10. In Step S30, according to the received positioning request RQ, the first electronic device 20 activates the electrically connected image capturing unit 22 and sound capturing unit 24 to capture at least one scene image IMG and at least one scene sound S near the first electronic device 20. Based on this, the first electronic device 20 sends the positioning information P, the at least one scene image IMG and the at least one scene sound S together to the server 10, wherein the first electronic device 20 obtains the positioning information P through a global positioning system (GPS), a gyroscope, an accelerometer or a combination thereof. In this embodiment, the first electronic device 20 is a mobile phone, and the image capturing unit 22 and sound capturing unit 24 are the built-in camera and microphone of the mobile phone. However, the present invention is not limited to this. The first electronic device 20 can also be a laptop computer, with the image capturing unit 22 and sound capturing unit 24 being the built-in camera and microphone of the laptop computer, respectively. Moreover, the second electronic device 30 can also be a mobile phone, a laptop computer or any other mobile device, such as a tablet or Google glasses.
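Although the patent specifies no data format, the report assembled by the first electronic device 20 in Step S30 can be sketched as follows. All field and function names (latitude, longitude, scene_images, time_mark, build_report) are illustrative assumptions, not terms from the claims.

```python
# Illustrative sketch only; the patent does not define a wire format.
from dataclasses import dataclass
from typing import Optional

@dataclass
class PositioningReport:
    """What the first electronic device 20 sends to the server 10 in Step S30."""
    latitude: float                   # positioning information P, e.g. from GPS,
    longitude: float                  # gyroscope, accelerometer or a combination
    scene_images: list                # at least one scene image IMG (e.g. JPEG bytes)
    scene_sounds: list                # at least one scene sound S (e.g. PCM bytes)
    time_mark: Optional[str] = None   # optional time mark (claim 6)

def build_report(latitude, longitude, images, sounds, time_mark=None):
    # Claim 1 requires at least one scene image and at least one scene sound.
    if not images or not sounds:
        raise ValueError("at least one scene image and one scene sound required")
    return PositioningReport(latitude, longitude, list(images), list(sounds), time_mark)
```

A device-side implementation would fill `scene_images` and `scene_sounds` from the image capturing unit 22 and sound capturing unit 24 before sending the report to the server 10.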
[0042] In Step S40, according to the positioning information P, the server 10 reads the corresponding reference image R1 and reference sound R2 in the database 12 and compares them with the at least one scene image IMG and the at least one scene sound S from the first electronic device 20. In Step S50, the server 10 determines whether the similarity between the reference image R1 and reference sound R2 and the at least one scene image IMG and at least one scene sound S is greater than a threshold (e.g., 80%), thereby confirming the correctness of the positioning information P. When the similarity is greater than the threshold, Step S60 is executed; when it is not, Step S30 is executed again. Finally, in Step S60, the server 10 confirms the correctness of the positioning information P on the basis of the at least one scene image IMG and the at least one scene sound S from the first electronic device 20, and the location of the first electronic device 20 is confirmed based on the at least one scene image IMG and the at least one scene sound S together with the positioning information P. The location of the first electronic device 20 is then sent back to the second electronic device 30; the scene image IMG and the scene sound S can also be sent to the second electronic device 30 together.
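The threshold decision of Steps S40 through S60 can be sketched as follows. The patent does not specify a similarity metric or feature representation; cosine similarity over feature vectors is an assumption chosen for illustration, as is every name in the snippet.

```python
# Illustrative sketch of the Step S50 decision; the 0.8 value mirrors the
# 80% example threshold, and cosine similarity is an assumed metric.
import math

THRESHOLD = 0.8

def cosine_similarity(a, b):
    """Similarity in [0, 1] for non-negative feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def confirm_positioning(scene_image_feat, scene_sound_feat,
                        ref_image_feat, ref_sound_feat):
    """Return True when both image and sound similarities exceed the
    threshold (proceed to Step S60); False means re-capture (back to S30)."""
    sim_image = cosine_similarity(scene_image_feat, ref_image_feat)
    sim_sound = cosine_similarity(scene_sound_feat, ref_sound_feat)
    return min(sim_image, sim_sound) > THRESHOLD
```

Requiring both modalities to pass is one possible reading of the claim; a weighted combination of the two similarities would be an equally plausible design.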
[0043] Specifically, the scene image IMG is an image of a building object, an indoor object or a person, and the scene sound S is a sound of a fluid, a running machine, an animal or an insect.
[0044] Referring further to
[0045] In the above embodiment, the method for real and virtual combined positioning of the present invention confirms the positioning information by capturing the external image and sound near the electronic device, and thereby confirms the position of the first electronic device 20. The above embodiment describes a case in which another user requests positioning to locate the first electronic device 20; in another embodiment, however, the user sets the server 10 to track the first electronic device 20, and the server 10 therefore sends a positioning request RQ directly, as shown in
[0046] In addition, the present invention can also directly retrieve the data in the database 12 of the server 10 to further confirm the position.
[0047] Embodiment 2, Method for Real and Virtual Combined Positioning (the Second Method)
[0049] Step S110: The second electronic device sends a positioning request to the server;
[0050] Step S120: The server sends a positioning request to the first electronic device;
[0051] Step S130: The first electronic device generates positioning information according to the positioning request and generates virtual information based on the positioning information to send to the server;
[0052] Step S140: The server creates at least one virtual image and at least one virtual sound according to the virtual information;
[0053] Step S150: The server reads at least one reference image and at least one reference sound according to the positioning information and compares them with the at least one virtual image and the at least one virtual sound;
[0054] Step S160: The server judges if the similarity is greater than the threshold; and
[0055] Step S170: The server confirms the location of the first electronic device based on the at least one virtual image, the at least one virtual sound and the positioning information.
[0056] The
[0057] Step S110 and Step S120 are the same as Step S10 and Step S20 and are therefore not described here. In Step S130, the first electronic device 20 generates positioning information P according to the positioning request RQ. Meanwhile, the first electronic device 20 generates corresponding virtual information VR based on the positioning information P, and sends the positioning information P and the virtual information VR to the server 10. Specifically, the first electronic device 20 generates the corresponding virtual information VR according to the latitude and longitude and the corresponding region name recorded in the positioning information P. In Step S140, the server 10 creates at least one virtual image V1 and at least one virtual sound V2 based on the virtual information VR, wherein the virtual image V1 is an image of a building object, an indoor object or a person, and the virtual sound V2 is a sound of a fluid, a running machine, an animal or an insect. In particular, the server 10 reads a plurality of image data VD and a plurality of sound data SD in the database 12 based on the virtual information VR to generate the at least one virtual image V1 and the at least one virtual sound V2. For example, for Taipei Ximen MRT Station, the image data VD and sound data SD will correspond to an image of a building near Taipei Ximen MRT Station, an indoor object or a person, and a sound of a fluid, a running machine, an animal or an insect, for example the sound of vehicles traveling outside the MRT station.
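Step S140 can be sketched as a lookup from the virtual information VR into the database 12. Keying the database by region name is an assumption (the patent only says the virtual information records latitude, longitude and a region name), and the sample entry and filenames are entirely hypothetical.

```python
# Illustrative sketch of Step S140; the region-name key, field names and
# the sample "Taipei Ximen MRT Station" entry are assumptions.
REGION_DATABASE = {
    "Taipei Ximen MRT Station": {
        "image_data": ["station_exit_building.jpg", "platform_sign.jpg"],
        "sound_data": ["street_traffic.wav", "train_arrival.wav"],
    },
}

def create_virtual_content(virtual_info):
    """From virtual information VR (latitude/longitude plus region name),
    read image data VD and sound data SD and return at least one virtual
    image V1 and at least one virtual sound V2."""
    region = REGION_DATABASE.get(virtual_info["region_name"])
    if region is None:
        # Unknown region: nothing to synthesize, comparison cannot proceed.
        return [], []
    return list(region["image_data"]), list(region["sound_data"])
```

The returned V1 and V2 would then be compared against the reference image R1 and reference sound R2 in Steps S150 and S160.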
[0058] In Step S150, based on the positioning information P, the server 10 reads the corresponding reference image R1 and reference sound R2 in the database 12 and compares them with the at least one virtual image V1 and the at least one virtual sound V2. In Step S160, the server 10 determines whether the similarity between the reference image R1 and reference sound R2 and the virtual image V1 and virtual sound V2 is greater than a threshold (e.g., 80%), thereby confirming the correctness of the positioning information P. When the similarity is greater than the threshold, Step S170 is executed; when it is not, Step S140 is executed again. Finally, in Step S170, the server 10 confirms the correctness of the positioning information P on the basis of the at least one virtual image V1 and the at least one virtual sound V2, and the location of the first electronic device 20 is confirmed based on the at least one virtual image V1 and the at least one virtual sound V2 together with the positioning information P. The location of the first electronic device 20 is then sent back to the second electronic device 30; meanwhile, the virtual image V1 and virtual sound V2 can be further sent to the second electronic device 30.
[0059] Moreover, in another embodiment, the virtual information VR further contains a time mark (not shown) of the virtual image V1 and the virtual sound V2. Therefore, in Step S140, the server 10 creates the virtual image V1 and the virtual sound V2 not only based on the positioning information P, but also based on the aforementioned time mark.
[0060] To sum up, the method for real and virtual combined positioning of the present invention captures scene images and scene sounds near the electronic device, or creates virtual information based on the positioning information so that the server can create corresponding virtual images and virtual sounds, so as to confirm the correctness of the positioning information and improve the reliability of the positioning.
[0061] Described above are only preferred embodiments of the present invention, and they are not intended to limit the scope of the present invention. Therefore, any equivalent changes and modifications to the shape, structure, characteristics and spirit defined in the claims of the present invention shall be covered by the protection scope of the present invention.