Method to determine a present position of an object, positioning system, tracker and computer program
11662456 · 2023-05-30
Inventors
- Stephan Otto (Heroldsberg, DE)
- Tobias Feigl (Gerhardshofen, DE)
- Christian Daxer (Happurg, DE)
- Alexander Bruckmann (Nuremberg, DE)
- Christoffer Loeffler (Nuremberg, DE)
- Christopher Mutschler (Cadolzburg, DE)
- Marc Faßbinder (Nuremberg, DE)
Cpc classification
G06F3/011
PHYSICS
Abstract
A method (100) to determine a present position (122) of an object (600). The method (100) comprises using (102) an optical positioning system (104) to determine a first preliminary position (112), using (106) a radio-based positioning system (108) to determine a second preliminary position (114), determining (110) a supposed position (116) on the basis of one of the preliminary positions (112, 114), and combining (118) the supposed position (116) with a previous position (212) of the object to determine the present position (122) of the object if the supposed position (116) is based on a different positioning system (104, 108) than a previous supposed position (116′). Also disclosed are a positioning system (500) with combined optical and radio-based determination of a position of a tracker (600) and a tracker (600) with an active light source (608).
Claims
1. A method to determine a present position of an object, comprising: using an optical positioning system to determine an optical position, being a position of an optical marker at the object; using a radio-based positioning system to determine a radio position, being a position of a transmitter at the object; determining a supposed position on the basis of one of the optical or the radio positions; determining whether the supposed position is based on a different positioning system than a previous position of the object, wherein the previous position is based on a previous supposed position; initially setting an offset when the supposed position is first based on the different positioning system than the previous supposed position; combining the supposed position and the offset with the previous position of the object to determine the present position of the object; and reducing the offset between the previous position and the supposed position.
2. The method of claim 1, further comprising: using an available optical position or an available radio position as the supposed position if either the optical or the radio position is not available.
3. The method of claim 1, further comprising: using the optical position as the supposed position if both the optical and the radio positions are available.
4. The method of claim 1, wherein reducing an offset takes place if the supposed position is changed with respect to a previous supposed position and the supposed position is based on a same positioning system as the previous supposed position.
5. The method of claim 4, wherein reducing an offset takes place only if the supposed position is changed towards the previous position.
6. The method of claim 1, wherein reducing the offset further comprises: reducing the offset by a predetermined fraction of the initial offset until the offset is compensated and/or the supposed position is based on a different positioning system than a previous supposed position.
7. The method of claim 1, wherein reducing the offset comprises: reducing the offset by up to 20% of a position change between the supposed position and the previous supposed position.
8. The method of claim 1, further comprising: determining an orientation of the object, by using an optical positioning system.
9. The method of claim 8, further comprising: reducing an initial offset between the orientation and a reference orientation, if the reference orientation is determined using a reference system.
10. The method of claim 1, further comprising: reducing a residual offset completely in one single step if a predefined condition is fulfilled.
11. The method of claim 1, further comprising: using an active light source at the object as an optical marker.
12. The method of claim 11, wherein the active light source comprises at least one infrared light.
13. The method of claim 12, wherein the active light source is modulated and emitting blinking or pulsing light with a predefined modulation pattern.
14. A positioning system with combined optical and radio-based determination of a position of a tracker, comprising: a tracker, comprising at least one optical and at least one radio-based marker for optical and radio-based position determination respectively; a tracking device, comprising a camera and an antenna configured to receive an optically determined and a radio-based determined position of the tracker as possible supposed positions of the tracker, wherein the tracking device is configured to, determine whether a supposed position of the possible supposed positions is based on a different positioning system than a previous position of the tracker, wherein the previous position is based on a previous supposed position of the possible supposed positions, initially set an offset when the supposed position is first based on the different positioning system than the previous supposed position, combine the supposed position and the offset with a previous position of the tracker to determine a present position of the tracker, and reduce the offset between the previous position and the supposed position.
15. The tracker for the combined optical and radio-based positioning system of claim 14, the tracker comprising: an optical marker, comprising an active light source; and a radio transmitter, configured to send an information signal comprising a position of the tracker.
16. The tracker of claim 15, wherein the active light source comprises an infrared light which emits a constant light signal or a pulsed light signal.
17. The tracker of claim 15, wherein the tracker is configured to be wearable on a head of a user or attachable to a helmet and/or the tracker is a virtual-reality-wearable and/or augmented-reality-wearable.
18. A non-transitory, computer-readable medium comprising program code which, when executed, causes a programmable processor to perform the method of claim 1.
Description
BRIEF DESCRIPTION OF THE FIGURES
(1) Some examples of apparatuses and/or methods will be described in the following by way of example only, and with reference to the accompanying figures, in which
DETAILED DESCRIPTION
(8) Various examples will now be described more fully with reference to the accompanying drawings in which some examples are illustrated. In the figures, the thicknesses of lines, layers and/or regions may be exaggerated for clarity.
(9) Accordingly, while further examples are capable of various modifications and alternative forms, some particular examples thereof are shown in the figures and will subsequently be described in detail. However, this detailed description does not limit further examples to the particular forms described. Further examples may cover all modifications, equivalents, and alternatives falling within the scope of the disclosure. Like numbers refer to like or similar elements throughout the description of the figures, which may be implemented identically or in modified form when compared to one another while providing for the same or a similar functionality.
(10) It will be understood that when an element is referred to as being “connected” or “coupled” to another element, the elements may be directly connected or coupled or via one or more intervening elements. If two elements A and B are combined using an “or”, this is to be understood to disclose all possible combinations, i.e. only A, only B, as well as A and B. An alternative wording for the same combinations is “at least one of A and B”. The same applies for combinations of more than two elements.
(11) The terminology used herein for the purpose of describing particular examples is not intended to be limiting for further examples. Whenever a singular form such as “a,” “an” and “the” is used and using only a single element is neither explicitly nor implicitly defined as being mandatory, further examples may also use plural elements to implement the same functionality. Likewise, when a functionality is subsequently described as being implemented using multiple elements, further examples may implement the same functionality using a single element or processing entity. It will be further understood that the terms “comprises,” “comprising,” “includes” and/or “including,” when used, specify the presence of the stated features, integers, steps, operations, processes, acts, elements and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, processes, acts, elements, components and/or any group thereof.
(12) Unless otherwise defined, all terms (including technical and scientific terms) are used herein in their ordinary meaning of the art to which the examples belong.
(14) In an example the supposed position 116 is based on the radio-based positioning system 108, i.e. it is determined on the basis of the second preliminary position 114. The previous position 120 is a previously determined position of the object, i.e. a present position determined in at least one earlier performance of the method. The previous position in this example is based on a previous supposed position 116′ which is based on the optical positioning system 104. Thus, for determining the present position 122 in this example, the supposed position 116 and the previous position 120 are combined because the supposed position 116 is based on a different positioning system (namely the radio-based positioning system 108) than the previous supposed position 116′ (which is based on the optical positioning system 104).
(15) In the same example, when the method 100 is performed again to determine a next present position 122, the supposed position 116 can again be based on the radio-based positioning system 108. Combining 118 can still be carried out although now a previous supposed position (namely the supposed position of the method 100 carried out before) is also based on the radio-based positioning system 108. However, the previous supposed position which counts in this example remains the previous supposed position 116′ which is based on the optical positioning system 104. This can be the case until combining 118 is no longer necessary (e.g. if the previous position 120 and the supposed position 116 are equal or adapted, or a later described offset is completely reduced to zero) or until the basis of a supposed position 116 changes again with respect to a previous position. In this example, this would mean that the supposed position 116 is again based on the optical positioning system 104, like the previous supposed position 116′. The method can also be used for other positioning systems using two different technologies for determining a same position.
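The selection of the supposed position and the offset handling described above can be sketched as follows. This is a minimal one-dimensional illustration; the class and function names are illustrative rather than taken from the patent, and the gradual offset reduction is omitted for brevity:

```python
# Hypothetical 1-D sketch of the supposed-position selection and of
# initially setting the offset when the underlying positioning system
# changes; names are illustrative assumptions, not from the patent.

def select_supposed(optical_pos, radio_pos):
    """Prefer the optical position when available, else fall back to radio."""
    if optical_pos is not None:
        return optical_pos, "optical"
    return radio_pos, "radio"

class PositionTracker:
    def __init__(self):
        self.previous_position = None
        self.previous_system = None
        self.offset = 0.0  # scalar offset for this 1-D illustration

    def update(self, optical_pos, radio_pos):
        supposed, system = select_supposed(optical_pos, radio_pos)
        if self.previous_position is None:
            self.previous_position = supposed
            self.previous_system = system
            return supposed
        if system != self.previous_system:
            # System changed: initially set the offset so the reported
            # position stays continuous (no jump perceived by the user).
            self.offset = self.previous_position - supposed
            self.previous_system = system
        present = supposed + self.offset
        self.previous_position = present
        return present
```

With this sketch, a switch from optical (reporting 10.0) to radio (reporting 9.5) yields an offset of 0.5, so the reported position stays at 10.0 at the moment of the switch and subsequent radio positions are shifted by that offset until it is reduced.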
(17) At a point t1 the optical system 104 shows a failure, e.g. an object moves between a camera of the optical system and an optical marker of the object such that a visual contact is interrupted and the first preliminary position temporarily cannot be determined. Thus, during the time span trans1 the supposed position is based on the radio-based positioning system 108. As shown in
(18) Reducing the offset 214 according to the example shown in
(20) For every axis x, y, z the adaptation of the position P to P′ and/or the distance d = P′ − P is carried out similarly to the angular adaptation.
(21) An equation for position calculation (without correction factor) could be given as:
P′_n = P_n + O * (d_(n-1) − |P_n − P_(n-1)| * α)
(22) wherein O is the direction vector of the distance, d is the length of the distance and α is the proportion of the position change.
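The position update formula can be sketched directly from the reconstructed notation, where P′_n = P_n + O * (d_(n-1) − |P_n − P_(n-1)| * α). The function name and array representation below are illustrative assumptions:

```python
import numpy as np

# Sketch of the position correction step: the remaining offset
# distance is reduced by a fraction (alpha) of the user's own
# movement, then applied along the offset direction vector O.

def corrected_position(p_n, p_prev, d_prev, direction, alpha):
    """P'_n = P_n + O * (d_(n-1) - |P_n - P_(n-1)| * alpha)."""
    p_n = np.asarray(p_n, dtype=float)
    p_prev = np.asarray(p_prev, dtype=float)
    moved = np.linalg.norm(p_n - p_prev)        # |P_n - P_(n-1)|
    remaining = d_prev - moved * alpha          # reduced offset length
    return p_n + np.asarray(direction, dtype=float) * remaining
```

For example, with a previous offset length of 2.0 along the x axis, a movement of one unit in x and α = 0.1, the applied offset shrinks to 1.9 for this step.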
(23) Some examples are given in pseudo code (without correction factor): An example shows distance correction Δd = (x, y, z) only in case of a position change in the respective axis. In an example the distance d is d = (4, 1, 1), i.e. 4 units in the x direction and one unit each for y and z. The user makes a movement b = (1, 0, 0), i.e. only in the x direction. Then a distance correction is executed only in the x direction, i.e. the distance correction Δd is non-zero for x and 0 for y and z. An example shows distance correction only along the axis of position change.
(24) An example shows distance correction only in the direction of position change, not in the opposite direction (and/or less in the opposite direction). In an example the distance correction Δd is a fraction of the measured movement b and is at most the distance in the respective axis (Δd ≤ d and Δd ≤ b). The executed distance correction Δd is subtracted from d, in order to execute the correction only until the distance d has been settled.
(25) In an example the correction is executed adaptively. An example shows that the distance correction Δd is executed with a proportion of 0-20% of the user movement b. An example shows the distance correction Δd is approx. 5% of the user movement b. An example shows the distance correction Δd is approx. 5% of the user movement b multiplied by a situation-dependent correction factor (e.g. ratio of error to Δd). An example shows the distance correction Δd may be executed via different functions (easing functions). An example shows resetting the distance (between radio/optical) when putting on the glasses (this may be unambiguously detected using a sensor at the glasses).
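The per-axis distance correction described in the preceding paragraphs can be sketched as follows. This is a minimal illustration assuming the approx. 5% fraction from the example above; the function name is illustrative:

```python
# Per-axis reduction of the offset distance d: correct only on axes
# where the user actually moved, by a fraction (here 5%) of the
# movement, and never by more than the remaining distance on that axis.

def reduce_distance(d, movement, fraction=0.05):
    corrected = []
    for d_axis, b_axis in zip(d, movement):
        if b_axis == 0:
            corrected.append(d_axis)  # no movement on this axis: no correction
            continue
        step = min(abs(b_axis) * fraction, abs(d_axis))
        # move d_axis towards zero by step, keeping its sign
        corrected.append(d_axis - step if d_axis > 0 else d_axis + step)
    return corrected
```

With d = (4, 1, 1) and a movement b = (1, 0, 0) as in the example above, only the x component is reduced (by 0.05 units), while the y and z components stay unchanged.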
(28) In some examples, in between two reference measurements of an (external) reference system the orientation system may already have drifted off due to different effects, so that an angular error Δα exists. This erroneous orientation is adapted so that a person does not perceive this correction. Here, if necessary, for a certain time period a deviation of the orientation from the actual orientation may be accepted in order to prevent a perceivable and/or visible reset of the angular error. In an example, an exception can occur, wherein the first reference measurement should be set directly without adaptation (e.g. when putting on the VR (virtual reality)/AR (augmented reality) headset).
(29) In other examples, in certain intervals, e.g. with every measurement of the rotation rate (ω) and/or with every rendered frame, a gradual correction of the currently erroneous orientation α towards the reference angle α_r is executed.
(30) An example shows an angular error correction only in case of a change of the rotation angle. An example shows an angular error correction only in the direction of the change of the rotation angle, not in the opposite direction (and/or less in the opposite direction). An example shows an angular correction Δα′ is only executed if a rotation (ω) around the vertical axis is measured, e.g. by a rotation rate sensor. An example shows the angular correction Δα′ is a fraction of the measured rotation ω and is at most the angular error Δα (Δα′ ≤ ω and Δα′ ≤ Δα). The executed angular correction Δα′ is subtracted from Δα, to execute the correction only until the angular error Δα is settled. An example shows an angular correction Δα′ is executed with a proportion of 0-20% of the rotation rate ω. An example shows an angular correction Δα′ is approx. 1% of the rotation rate ω. An example shows an angular correction Δα′ may be executed via different functions (easing functions).
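One gradual angular correction step as described above can be sketched as follows, assuming the approx. 1% fraction from the example; the function name is illustrative:

```python
# One gradual angular-error correction step: a fraction (here 1%) of
# the measured rotation rate, capped by the remaining angular error,
# and applied only while the user is actually rotating.

def correct_angle(angular_error, rotation_rate, fraction=0.01):
    if rotation_rate == 0:
        return angular_error  # user not rotating: no correction this step
    step = min(abs(rotation_rate) * fraction, abs(angular_error))
    # move the error towards zero, keeping its sign
    return angular_error - step if angular_error > 0 else angular_error + step
```

Because the step is capped by the remaining error, the correction settles exactly at zero instead of overshooting, and no correction is applied while the user's head is still (so the adjustment stays imperceptible).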
(33) In an example the optical marker can emit a modulated, pulsing signal to identify and synchronize the respective target. If no optical position is visible it is possible to use a radio-frequency signal to request or activate the optical beacon of a particular tracker or head-mounted device to blink in a detectable manner. Thus, the detectability of an optical tracker could be improved.
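Identifying a tracker by its modulation pattern could be sketched as follows. This is a hypothetical illustration, not the patent's implementation: each tracker blinks with a known on/off pattern, and the camera's per-frame detections are matched against that pattern at any cyclic shift (since the camera may start observing mid-cycle):

```python
# Hypothetical sketch: match an observed on/off frame sequence against
# a tracker's known modulation pattern at any cyclic shift.

def matches_pattern(observed, pattern):
    """Return True if the first len(pattern) observed frames equal the
    pattern at some cyclic shift."""
    n = len(pattern)
    if len(observed) < n:
        return False
    doubled = pattern + pattern  # covers all cyclic rotations
    return any(observed[:n] == doubled[s:s + n] for s in range(n))
```

A detection that matches no known tracker's pattern could then be treated as a reflection or as another light source and discarded.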
(34) In one example for determining the present position 204 of user 508, the optical system 104 is used, i.e. the supposed position is based on the first preliminary position 208. At t1 user 510 covers user 508 such that cameras 502 cannot provide the first preliminary position 208 due to an interruption of the visual contact to the virtual reality device 512. Thus, second preliminary position 210 is used as the basis of the supposed position and due to combining 118 the offset 214 is added such that changing to the radio-based system does not lead to jerks in a scene shown to user 508 by the virtual reality device 512. During trans1, e.g. during the next 2 seconds after changing to the radio-based system, the offset can be reduced e.g. by the position server 514, e.g. if user 508 moves such that she does not notice a reduction of the offset 214. After trans1 the determined present position of user 508 can be adapted to the radio-based system wherein the adaptation was carried out gradually. The position of user 508 can be provided by wireless transmission 414 e.g. to the other users 510 or their virtual reality devices 512 respectively.
(35) In some examples it is possible that the combined optical and radio-based positioning system uses both the first and the second preliminary position to determine two respective supposed positions that are combined into a present position. An effect may be a reduced latency in determining the position due to a possible predictive determination, as well as verifying or checking a basic functionality of the positioning system. If, for example, the optically and radio-based determined positions differ only within a predefined tolerance range, it is highly probable that the system operates without errors. Also fluctuations of radio signal data may be compensated. By comparing against the respective other positioning system it is possible to optimize calibration data.
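The plausibility check described above can be sketched as follows; the averaging, the function name and the tolerance value are illustrative assumptions rather than details from the patent:

```python
import math

# Sketch: if the optically and radio-determined positions agree within
# a tolerance, treat both systems as operating correctly and return a
# fused (averaged) position; otherwise flag the disagreement.

def fuse_positions(optical, radio, tolerance=0.3):
    if math.dist(optical, radio) > tolerance:
        return None  # disagreement: flag for error handling / recalibration
    return tuple((o + r) / 2 for o, r in zip(optical, radio))
```

Returning None on disagreement leaves it to the caller to decide, e.g., to fall back to a single system or to trigger a calibration check.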
(37) The tracker 600 comprises an optical marker 608 and a radio transceiver 610. The optical marker can be e.g. an active infrared light or comprise a plurality of active infrared lights or a plurality of passive markers. A transparent housing 612 can cover the infrared lights, e.g. infrared LEDs. The optical marker 608 is positioned above a head of the user 606 if the wearable is worn. Due to this exposed position, the probability that an infrared light beam can be sensed by a camera is increased, which improves the reliability of the optical system. The transparent housing can be configured such that infrared light can be emitted in all directions, i.e. 360° around the optical marker 608. In this way, a present position of the marker can be determined optically independently of an orientation of the user 606 or the optical marker 608, respectively. The tracker 600 comprises a hook 614 or clip 614, e.g. so that the tracker 600 can be clipped onto a headband or a helmet. The tracker can be attached in different directions depending on the given headset. One possible configuration is shown with an attachment of the hook side to the back of the headset.
(38) In some examples, the tracker can be positioned on the head in other ways. E.g., the tracker can be attached to a headset, in the front or on a back side.
(39) The shown approach describes a method which enables tracking via different systems in virtual/augmented reality (VR/AR). There are two tracking methods for VR/AR which each may have specific characteristics.
(40) Optical methods may be highly accurate and provide 6DOF (6 degrees of freedom: x, y, z, yaw, roll, pitch) or 3DOF (x, y, z); they may have a small range between camera and (passive) marker; a visual connection can be required; with 6DOF a small distance to the camera and/or a big target can be required; identification via a target can be required.
(41) Radio-based methods can be reliable, can work even without visual connection and have a high range, but can be less accurate; usually 3DOF (x, y, z) is provided per transmitter with high accuracy.
(42) The distance of the markers in the target and the distance of the target to the camera play a substantial role. If the distance of the target to the camera gets too big, neither may the markers in the target be clearly resolved (and consequently the target may not be detected) nor may the reflections of the markers be detected. The range of the distance for passive marker systems currently is a few meters. If the target is not detected any more, identification is not possible and a clear allocation of objects and/or users is lost. For large areas the only possibility is to clearly increase the number of cameras (linearly with the area). Several hundred cameras may soon be required here. Thus, in case of optical IR systems costs are strongly coupled to the size of the area.
(43) Examples of the shown method describe the combination of radio and optical systems which combines the advantages of both methods for VR/AR applications. Here, active optical markers are utilized.
(44) Examples relate to: active infrared markers; an individual marker per person instead of a target; big areas (more than 5×5 m); ceiling cameras; an applicable design of the marker; if applicable, several markers per user and/or props (objects); blinking vs. continuous transmission (currently continuous use).
(45) Other examples show a combined transmitter and/or marker (optical, radio); a combined transmitter with active optical emission via a marker; a combined transmitter with active optical emission via several markers and/or as a target; a position server which combines both systems and provides the fused position for each object/user; discarding all optical positions (e.g. reflections) without any corresponding radio marker nearby.
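The last point above, discarding optical positions without a corresponding radio marker nearby, can be sketched as follows; the function name and the radius value are illustrative assumptions:

```python
import math

# Sketch of the reflection filter: keep only optical detections that
# have a radio-determined position within a given radius; the rest are
# treated as likely reflections and discarded.

def filter_reflections(optical_detections, radio_positions, radius=0.5):
    return [opt for opt in optical_detections
            if any(math.dist(opt, rad) <= radius for rad in radio_positions)]
```

Because radio positions are unambiguous per transmitter, this check also implicitly resolves the identity of otherwise non-distinct optical markers.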
(46) Other examples show that the system may compensate the failure of a position (optically due to masking, radio due to shielding) by using the respectively associated other position; in order to reduce latencies the system may use any combination of the tracking information of both tracking systems (optical and radio) to execute a predictive positioning; the data of both position providing systems (optical and radio) may be used to check the input data of the respective individual systems and adaptively optimize the same (primarily this means a reduction of the fluctuation of the radio data); the system as a whole may compensate the failure of a tracking system, as (short-time) data recording enables an imperceptible transition; the utilization may in principle also be applied to alternative tracking systems providing positions and/or orientation and any temporal derivatives.
(47) Other examples show that the imperceptible transition corresponds to a gradual adaptation of the input data; a non-identifiable marker (here an optical marker) may also be identified by adding an identifiable marker (here radio); the two tracking systems may mutually optimize their calibration data by mutual comparison; an INS-based (acceleration, gyroscope, magnetic field, barometer) pose assessment may be used for motion classification (e.g. putting on glasses, person standing, walking, static), in parallel to a motion classification with the help of the optical trajectory: improvement of the identification (allocation of radio system/INS to optical) when putting on, and/or allocation between radio system/INS and optical system in ongoing operation; detection/removal of static reflections of the optical system.
(48) Some examples use splines (different input streams) for optimizing the virtual user trajectory. In this respect, initial learning using the combination of the different input streams in a regressor/classifier takes place: radio trajectory, INS trajectory and optical trajectory (a perfect reference when learning, e.g. on a small area).
(49) Then, in ongoing operation (big area) the data is also passed through the regressor/classifier which then, with the help of the learnt facts, completes the then incomplete/partial data into an optimal trajectory (user-specific movement model).
(50) With commercial infrared (IR) systems, conventionally a so-called target is used for the identification and measurement of the 6DOF pose (x, y, z, yaw, roll, pitch). Targets are a unique combination of reflecting/active markers.
(51) A pose can consist of these elements: a position, consisting of x, y, z, and an orientation α (and/or yaw, i.e. the rotation around the vertical axis).
(52) The further angles (roll, pitch) may not necessarily be required, as they may be determined unambiguously by sensors at the head of the user. For adapting the pose, orientation and position errors and/or distances are adapted.
(53) Regarding the systems, the radio system provides a high failure safety, so that radio positions can be utilized in case of a failure of the optical position. A typical course of position of the different systems and the resulting virtual position of the user over time is shown in
(54) Generally, adaptation is planned to be gradual. An exception may be: the first reference measurement ought to be set directly without an adaptation (e.g. when putting on the VR/AR headset). An abrupt adaptation may be sensible in some situations and be supported from the content side, e.g. by: a simulated earthquake, where the viewport and/or the virtual camera is shaken as in case of an earthquake and adaptation may be executed during that phase (both orientation and position); masking the viewport by effects such as particle effects, fog, or hiding the camera by virtual objects; fading out the virtual world (darkness (black), closing of a virtual door); transformation of the virtual world (shifting, stretching, compressing, distorting, . . . ), twisting paths.
(55) Thus, an optical and a radio system are combined so that two substantial advantages result: by using a combined transmitter (optical/radio), a distinct ID (via radio) is available for an otherwise non-distinct optical marker, even without an identification of the target; and a high accuracy is acquired by using optical transmitters while at the same time using as low a number of cameras as possible.
(56) To avoid unwanted effects or position jumps when changing between the systems, an adaptation of a position and/or an orientation is provided. The method can be provided for combined positioning systems other than optical and radio-based as well. Examples might relate to a positioning system comprising a piezoresistive and/or capacitive and/or magnetic determination of a position of a user (e.g. a finger position) on a plate or display. Other examples might relate to fingerprinting on a magnetic field.
(57) The aspects and features mentioned and described together with one or more of the previously detailed examples and figures, may as well be combined with one or more of the other examples in order to replace a like feature of the other example or in order to additionally introduce the feature to the other example.
(58) Examples may further be or relate to a computer program having a program code for performing one or more of the above methods, when the computer program is executed on a computer or processor. Steps, operations or processes of various above-described methods may be performed by programmed computers or processors. Examples may also cover program storage devices such as digital data storage media, which are machine, processor or computer readable and encode machine-executable, processor-executable or computer-executable programs of instructions. The instructions perform or cause performing some or all of the acts of the above-described methods. The program storage devices may comprise or be, for instance, digital memories, magnetic storage media such as magnetic disks and magnetic tapes, hard drives, or optically readable digital data storage media. Further examples may also cover computers, processors or control units programmed to perform the acts of the above-described methods or (field) programmable logic arrays ((F)PLAs) or (field) programmable gate arrays ((F)PGAs), programmed to perform the acts of the above-described methods.
(59) The description and drawings merely illustrate the principles of the disclosure. Furthermore, all examples recited herein are principally intended expressly to be only for pedagogical purposes to aid the reader in understanding the principles of the disclosure and the concepts contributed by the inventor(s) to furthering the art. All statements herein reciting principles, aspects, and examples of the disclosure, as well as specific examples thereof, are intended to encompass equivalents thereof.
(60) A functional block denoted as “means for . . . ” performing a certain function may refer to a circuit that is configured to perform a certain function. Hence, a “means for s.th.” may be implemented as a “means configured to or suited for s.th.”, such as a device or a circuit configured to or suited for the respective task.
(61) Functions of various elements shown in the figures, including any functional blocks labeled as “means”, “means for providing a signal”, “means for generating a signal.”, etc., may be implemented in the form of dedicated hardware, such as “a signal provider”, “a signal processing unit”, “a processor”, “a controller”, etc. as well as hardware capable of executing software in association with appropriate software. When provided by a processor, the functions may be provided by a single dedicated processor, by a single shared processor, or by a plurality of individual processors, some of which or all of which may be shared. However, the term “processor” or “controller” is by far not limited to hardware exclusively capable of executing software, but may include digital signal processor (DSP) hardware, network processor, application specific integrated circuit (ASIC), field programmable gate array (FPGA), read only memory (ROM) for storing software, random access memory (RAM), and non-volatile storage. Other hardware, conventional and/or custom, may also be included.
(62) A block diagram may, for instance, illustrate a high-level circuit diagram implementing the principles of the disclosure. Similarly, a flow chart, a flow diagram, a state transition diagram, a pseudo code, and the like may represent various processes, operations or steps, which may, for instance, be substantially represented in computer readable medium and so executed by a computer or processor, whether or not such computer or processor is explicitly shown. Methods disclosed in the specification or in the claims may be implemented by a device having means for performing each of the respective acts of these methods.
(63) It is to be understood that the disclosure of multiple acts, processes, operations, steps or functions disclosed in the specification or claims may not be construed as being within the specific order, unless explicitly or implicitly stated otherwise, for instance for technical reasons. Therefore, the disclosure of multiple acts or functions will not limit these to a particular order unless such acts or functions are not interchangeable for technical reasons. Furthermore, in some examples a single act, function, process, operation or step may include or may be broken into multiple sub-acts, -functions, -processes, -operations or -steps, respectively. Such sub-acts may be included and part of the disclosure of this single act unless explicitly excluded.
(64) Furthermore, the following claims are hereby incorporated into the detailed description, where each claim may stand on its own as a separate example. While each claim may stand on its own as a separate example, it is to be noted that—although a dependent claim may refer in the claims to a specific combination with one or more other claims—other examples may also include a combination of the dependent claim with the subject matter of each other dependent or independent claim. Such combinations are explicitly proposed herein unless it is stated that a specific combination is not intended. Furthermore, it is intended to include also features of a claim to any other independent claim even if this claim is not directly made dependent to the independent claim.