INTERACTIVE SYSTEM SETUP CONCEPT
20170236298 · 2017-08-17
Inventors
CPC classification
A46B15/0002
HUMAN NECESSITIES
A46B2200/1066
HUMAN NECESSITIES
H04N23/64
ELECTRICITY
G06V40/10
PHYSICS
International classification
Abstract
A device and a method for determining a position of a user's body portion. The device includes a camera, configured to capture the body portion, and a display for providing visual feedback. A sensor determines at least one of a roll angle, a pitch angle, and a yaw angle of the device, and an interface receives picture data related to a pictorial representation of the body portion captured and sensor data related to the determined angle of the device. An analyzer analyzes, based on the picture data, whether the captured body portion is within a predetermined region of the picture and, based on the sensor data, whether at least one of the roll angle, the pitch angle, and the yaw angle is within a predetermined angle range. The method includes capturing the body portion, providing visual feedback, receiving angle data, receiving picture data, and analyzing whether the captured body portion is within a predetermined region and whether at least one of the roll, pitch, and yaw angles is within a predetermined angle range.
Claims
1. A device (1) for determining a position of a body portion (2) of a user (3), the device (1) comprising: a camera (4) configured to capture the body portion (2) of the user (3) to obtain a pictorial representation (5) of the body portion (2) of the user (3), a display (6) for providing visual feedback to the user (3), at least one sensor (7) for determining at least one of a roll angle, a pitch angle and a yaw angle of the device (1), an interface (8) for receiving picture data related with the pictorial representation (5) of the body portion (2) captured by the camera (4) and for receiving sensor data related with the determined angle of the device (1) determined by the at least one sensor (7), and an analyzer (9) to analyze, based on the picture data, whether the captured body portion (2) is within a predetermined region (34) of the picture captured by the camera (4), and to analyze, based on the sensor data, whether the roll angle and/or the pitch angle and/or the yaw angle of the device (1) is within a predetermined angle range.
2. The device according to claim 1, wherein the analyzer (9) comprises a body portion detection algorithm to determine the position of the body portion (2) within the focus (10) of the camera (4).
3. The device according to claim 1, wherein the body portion (2) of the user (3) is the face of the user (3), and wherein the analyzer (9) comprises a face detection algorithm to determine the position of the face (2) within the focus (10) of the camera (4).
4. The device according to claim 2, wherein the analyzer (9) is configured to determine, based on at least the picture data and optionally on the sensor data, a relative orientation of the device (1) relative to the detected body portion (2) of the user (3), wherein said relative orientation is a relative distance between the detected body portion (2) and the device (1) and/or a relative position between the detected body portion (2) and the device (1) along a plane that is substantially perpendicular to the orientation of the camera (4).
5. The device according to claim 4, wherein the analyzer (9) is configured to overlay the pictorial representation (5) of the body portion (2) with the predetermined region (34) of the picture and, if the analyzer (9) analyzes that the body portion (2) is at least partly outside the predetermined region (34) of the picture, the device (1) is configured to display a message (77) and/or an image (46) on the display (6) in order to prompt the user (3) to alter the relative orientation between the body portion (2) and the device (1).
6. The device according to claim 1, wherein the at least one sensor (7) is configured to determine the at least one of a roll angle, a pitch angle and a yaw angle of the device (1) and to display the determined roll angle and/or pitch angle and/or yaw angle of the device (1) on the display (6).
7. The device according to claim 6, wherein, if the at least one sensor (7) determines that the roll angle and/or the pitch angle and/or the yaw angle lies outside the predetermined angle range, the device (1) is configured to display an image (31, 32a, 32b) and/or a message (42) on the display (6) prompting the user (3) to position the device (1) such that it comprises a roll angle and/or a pitch angle and/or a yaw angle that lies within said predetermined angle range.
8. The device according to claim 7, wherein the predetermined angle range of the roll angle and/or the pitch angle and/or the yaw angle lies between +3° and −3°.
9. The device according to claim 1, wherein the predetermined region (34) of the picture covers about 60% to 80%, and preferably 75% of the focus (10) of the camera (4).
10. The device according to claim 1, further comprising a communication interface that is configured to communicate with a personal care device in order to receive information from said personal care device.
11. A method for determining a position of a body portion (2) of a user (3), the method comprising capturing the body portion (2) of the user (3) in order to obtain a pictorial representation (5) of the body portion (2) of the user (3), providing visual feedback to the user (3), receiving angle data corresponding to at least one of a roll angle, a pitch angle and a yaw angle of a device (1, 4) by means of which the pictorial representation (5) was captured, receiving picture data related with the pictorial representation (5) of the body portion (2), and analyzing, based on the picture data, whether the captured body portion (2) is within a predetermined region (34) of the picture captured by the device (1, 4), and analyzing, based on the angle data, whether the roll angle and/or the pitch angle and/or the yaw angle of the device (1, 4) is within a predetermined angle range.
12. The method according to claim 11, further comprising detecting the body portion (2) of the user (3) and determining the position of the body portion (2) of the user (3) within the picture captured by the device (1, 4).
13. The method according to claim 11, further comprising determining a relative orientation between the device (1, 4) and the body portion (2) of the user (3), wherein said relative orientation is a relative distance between the body portion (2) of the user (3) and the device (1, 4) and/or a relative position between the body portion (2) of the user (3) and the device (1, 4) along a plane that is substantially parallel to the picture plane of the pictorial representation.
14. The method according to claim 13, further comprising overlaying the pictorial representation (5) of the body portion (2) of the user (3) with the predetermined region (34) of the picture and, if the pictorial representation of the body portion (2) is at least partly outside the predetermined region (34) of the picture, displaying a message (77) and/or an image (46) in order to prompt the user (3) to alter the relative orientation between the body portion (2) and the device (1, 4).
15. A computer program for performing, when running on a computer, the method according to claim 11.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0030] In the following, embodiments of the present invention are described in more detail with reference to the figures.
DETAILED DESCRIPTION
[0045] Equal or equivalent elements or elements with equal or equivalent functionality are denoted in the following description by equal or equivalent reference numerals.
[0047] The device 1 further comprises a display 6 for providing visual feedback to the user 3.
[0048] The device 1 further comprises at least one sensor 7 for determining at least one of a roll angle, a pitch angle and a yaw angle of the device 1.
[0049] The device 1 further comprises an interface 8 for receiving picture data related with the pictorial representation 5 of the body portion 2 captured by the camera 4 and for receiving sensor data related with the determined angle of the device 1 determined by the at least one sensor 7.
[0050] The device 1 further comprises an analyzer 9 to analyze, based on the picture data, whether the captured body portion 2 is within a predetermined region 33 of the picture captured by the camera 4, and to analyze, based on the sensor data, whether the roll angle and/or the pitch angle and/or the yaw angle of the device 1 is within a predetermined angle range.
[0052] The camera 4 obtains a pictorial representation of the user's head 2. Picture data that is related with the pictorial representation is fed to the interface 8. For this purpose, the camera 4 may be connected to the interface 8 via a physical or wireless data transmission channel 12. The data transmission channel 12 may be configured for unidirectional or bidirectional data transmission.
[0053] The interface 8 further receives sensor data related with the determined angle of the device 1. The sensor data may be provided by physical or wireless data transmission channel 13 between sensor 7 and interface 8. The data transmission channel 13 may be configured for unidirectional or bidirectional data transmission.
[0054] For determining an angle of the device 1, at least one sensor 7 is provided. The sensor 7 may, for instance, be an inertial sensor that is configured to determine at least one of a roll angle, a pitch angle and a yaw angle of the device 1. The inertial sensor may preferably be configured to determine all three angles. It may also be possible that the device 1 comprises an individual sensor for each of the aforementioned three angles, i.e. a first sensor for determining the roll angle of the device 1, a second sensor for determining the pitch angle of the device 1, and a third sensor for determining the yaw angle of the device 1. In any case, the interface 8 is configured to receive the respective sensor data related with the respective one of the pitch angle, roll angle and yaw angle.
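The sensor 7 is described only functionally. As an illustration, a minimal sketch of how roll and pitch could be derived from a smartphone accelerometer's gravity vector; the axis conventions and the function name are assumptions, not part of the disclosure, and yaw would additionally require a gyroscope or magnetometer:

```python
import math

def roll_pitch_from_gravity(ax, ay, az):
    """Derive roll and pitch (in degrees) from accelerometer readings.

    Assumes the device is held still, so (ax, ay, az) is dominated by
    gravity, and that an upright, wall-mounted device sees gravity along
    its -Y axis, which makes both angles read zero in the aligned pose.
    """
    roll = math.degrees(math.atan2(ax, -ay))   # sideways tilt of the screen
    pitch = math.degrees(math.atan2(az, -ay))  # forward/backward lean
    return roll, pitch

# Upright device: gravity along -Y, both angles zero.
print(roll_pitch_from_gravity(0.0, -9.81, 0.0))  # (0.0, 0.0)
```

With these conventions, tilting the device backwards rotates gravity into the +Z axis and the pitch reading grows accordingly.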
[0055] Accordingly, the interface 8 is configured to receive picture data related with the pictorial representation captured by the camera 4, as well as sensor data related with a current angle of the device 1 and being determined by the at least one sensor 7.
[0056] The device 1 further comprises an analyzer 9. The analyzer 9 may also be connected to the interface 8 via a physical or wireless data transmission channel. The data transmission channel may be configured for unidirectional or bidirectional data transmission. The analyzer 9 may, for instance, be a CPU or any other type of logical unit that is configured to process the picture data and the sensor data, respectively.
[0057] Based on the picture data related with the pictorial representation of the user 3, the analyzer 9 is configured to analyze whether the captured body portion, i.e. the user's head 2, particularly the pictorial representation of the user's head 2, is within a predetermined region of the picture captured by the camera 4. Said predetermined region of the picture may be a cross hair, a rectangle, a circle 33, or the like, as exemplarily shown in
[0058] Based on the sensor data related with the determined angle of the device 1, the analyzer 9 is configured to analyze whether the determined roll angle and/or pitch angle and/or yaw angle is within a predetermined angle range.
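The two analyses performed by the analyzer 9 can be sketched as two small predicates. This is a hypothetical illustration: the bounding-box representation of the body portion is an assumption, while the ±3° default matches the angle range mentioned in claim 8:

```python
def angles_in_range(roll=None, pitch=None, yaw=None, tol_deg=3.0):
    """True when every provided angle lies within +/- tol_deg degrees
    (claim 8 mentions a range between +3 and -3 degrees). Angles left
    as None are not checked."""
    provided = [a for a in (roll, pitch, yaw) if a is not None]
    return all(abs(a) <= tol_deg for a in provided)

def body_portion_in_region(portion_box, region_box):
    """Both boxes are (left, top, right, bottom) in picture coordinates.
    Returns True when the detected body portion lies entirely inside
    the predetermined region of the picture."""
    pl, pt, pr, pb = portion_box
    rl, rt, rr, rb = region_box
    return pl >= rl and pt >= rt and pr <= rr and pb <= rb

print(angles_in_range(roll=1.5, pitch=-2.0))                         # True
print(body_portion_in_region((40, 30, 80, 90), (20, 20, 100, 100)))  # True
```

Only when both predicates hold would the setup be considered complete, mirroring the combined picture-data and sensor-data analysis described above.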
[0059] The device 1 may, for instance, be a mobile phone. The interface 8 may be connected with a sensor 7 that may already be available within the mobile phone 1. The interface 8 may further be connected with a camera 4 that may already be available within the mobile phone 1. Furthermore, the interface 8 may be connected with a display 6 that may already be available within the mobile phone 1. The analyzer 9 may be the CPU of the mobile phone. The interface 8 may be connected with the CPU 9 or be a part of the CPU 9.
[0064] The device 1 may be configured to display the respective angle on the display 6 in order to provide visual feedback to the user 3. For example, the device 1 may be configured to display a level bar on the display 6. An example of a level bar and a possible way of visual angle indication is shown in
[0065] As shown in
[0066] As the device 1 rotates around the X-Axis 21 (
[0067] As the device 1 rotates around the Z-Axis 22 (
[0068] A different visualization may be chosen for the yaw angle. As can be seen in
[0069] As long as the device 1 is within its initial position (
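The figure-based description of the level bar is abridged here. As a sketch, one plausible way to drive the level-bar visualization from the measured pitch and roll; the pixel scale, screen size and sign conventions are assumed UI values:

```python
def level_bar_pose(pitch_deg, roll_deg, screen_height_px=1920, px_per_deg=40.0):
    """Map device pitch and roll onto the on-screen level bar 31.

    The bar slides vertically as the device pitches (tilting the device
    backwards moves the bar down) and counter-rotates by the roll angle
    so that it always stays parallel to the floor."""
    half = screen_height_px / 2.0
    offset_px = max(-half, min(half, -pitch_deg * px_per_deg))
    bar_angle_deg = -roll_deg  # counter-rotate to stay level with the floor
    return offset_px, bar_angle_deg

# Tilted 5 deg backwards and rolled 2 deg: bar below center, slightly rotated.
print(level_bar_pose(5.0, 2.0))  # (-200.0, -2.0)
```

The clamp keeps the bar on screen for extreme angles; the predetermined angle range itself is indicated separately by the fixed indicator marks.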
[0070] The following Figures may exemplarily show some possible visualizations on the display 6 of the device 1 in order to provide visual feedback to the user 3 regarding a current pitch angle and/or roll angle and/or yaw angle of the device 1 as well as visual feedback regarding the body portion that is currently captured by the camera 4 of the device 1.
[0071] In this example, the device 1 may be used to guide the user 3 through a setup process in order to arrange the device 1 relative to the user 3 such that the device 1 is usable by the user 3 as desired. For the following description with reference to the following figures, it is assumed that the device 1 will be set up for a subsequent usage with a personal care device, for instance a toothbrush that may be configured to communicate with the device 1. Therefore, the device 1 is configured to determine the position of a body portion 2 of the user 3. In the following examples regarding the toothbrush, it may be convenient that said body portion 2 of the user 3 is, just by way of example, the user's face. Thus, the device 1 may, for instance, be configured to determine the position of the user's face relative to the device 1.
[0074] The pictorial representation 5 is the picture or image of the user 3 that is captured by the camera 4 of the device 1. Preferably, the camera 4 captures a moving image sequence, e.g. a movie, of the user 3 and displays the user's image instantaneously on the display 6 of the device 1, i.e. in real time. Accordingly, when the user 3 moves relative to the device 1, this relative movement is instantaneously shown on the display 6 so that the user 3 may always see his current position relative to the device 1.
[0076] For example, the indicator marks 32 may be displayed on the left and on the right side of the display 6. In particular, upper indicator marks 32a and lower indicator marks 32b may be displayed. As described above, the level bar 31 and the indicator marks 32a, 32b may be used for indicating a pitch angle and/or a roll angle of the device 1. The region between the upper indicator marks 32a and the lower indicator marks 32b may represent the predetermined angle range.
[0078] The device 1 is configured to display a message 42 on the display 6 prompting the user 3 to position the device 1 such that it comprises a roll angle that lies within the predetermined angle range, i.e. between the upper indicator marks 32a and the lower indicator marks 32b. According to this example, the device displays a message 42 on the display 6 informing the user 3 to tilt the camera 4 forward, or to respectively tilt the device 1 comprising the camera 4 forward.
[0079] Alternatively, if the level bar 31 is positioned above the upper indicator mark 32a, the device may display an alternative message 42 on the display 6 informing the user 3 to tilt the camera 4 backward, or to respectively tilt the device 1 comprising the camera 4 backward.
[0080] Additionally or alternatively to the text message 42, an image, such as an upward or downward directed arrow or the like, may be presented to the user 3.
[0081] The message 42 and/or image may be dismissed once the device 1 has detected that it has been moved such that the level bar 31 lies within the predetermined angle range indicated by the indicator marks 32a, 32b, or after a certain time, for example after 3 seconds, whichever is longer. According to an example, messages 42 and/or images are not shown on the display 6 anymore once they are dismissed for the first time.
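The dismissal rule above (hide the prompt once the angle is back in range, but never before a minimum display time such as the 3 seconds mentioned, i.e. at the later of the two events) can be sketched with an injectable clock; the class and parameter names are illustrative:

```python
import time

class CorrectionPrompt:
    """Tracks when a message 42 and/or image may be dismissed: only once
    the angle is back within the predetermined range AND the minimum
    display time has elapsed ('whichever is longer')."""

    def __init__(self, min_display_s=3.0, clock=time.monotonic):
        self._clock = clock
        self._shown_at = clock()
        self.min_display_s = min_display_s

    def should_dismiss(self, angle_in_range):
        elapsed = self._clock() - self._shown_at
        return angle_in_range and elapsed >= self.min_display_s

# With an injectable fake clock the behavior is easy to verify:
now = [0.0]
prompt = CorrectionPrompt(min_display_s=3.0, clock=lambda: now[0])
now[0] = 1.0
print(prompt.should_dismiss(True))   # False: aligned, but shown < 3 s
now[0] = 3.5
print(prompt.should_dismiss(True))   # True
```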
[0082] Furthermore, the aforementioned user's selection regarding his/her preferred hand may still be displayed on the display 6 by means of softkey buttons 43, 44. Optionally, a further softkey button 45 may be displayed on the display 6. By clicking said button 45, the user 3 may input and signal his/her desire to the device 1 to continue even though the respective determined angles (roll, pitch, yaw) may not yet be within the predetermined angle range.
[0084] In other words, if the device 1 is to determine a different body portion, such as a leg or an arm, for example, the device 1, and in particular the analyzer 9, may comprise a respective body portion detection algorithm that is configured to detect the respective body portion within the focus of the camera 4.
[0086] In this example, the user's face, i.e. the first circle 33, is at least partly outside the second circle 34. The inner diameter of the second circle 34 may represent the predetermined region of the picture captured by the camera 4 inside of which the user's face, i.e. the first circle 33, shall be positioned.
[0087] As can be seen in the example shown in
[0088] As mentioned above, the face detection algorithm enables the analyzer 9 to detect the user's face which detection is represented by the first circle 33 that may be displayed on the display 6. In the present example, the analyzer 9 analyzes whether the user's face is within the predetermined region. Stated differently, the analyzer 9 analyzes whether the first circle 33 is within the second circle 34. If the analyzer 9 analyzes that the face is at least partly outside the predetermined region of the picture, the device 1 is configured to display a message and/or an image 46 on the display 6 in order to prompt the user 3 to alter the relative orientation between the face and the device 1.
[0090] The image 46 may point into the direction of the center of the second circle 34.
[0091] Accordingly, the user 3 may be prompted to alter the position of his/her face relative to the device 1, or at least relative to the camera 4. Additionally or alternatively, the position of the device 1 itself may be altered. In this example, the device 1 may be moved upward and right. Accordingly, the user 3 may be prompted to alter the position of the device 1 relative to his/her face. However, in the latter example, the user should take care that the level bar 31 is between the indicator marks 32a, 32b after having repositioned the device 1.
[0092] The image 46 and/or message may be displayed as long as the detected body portion, i.e. the user's face in this example, is at least partly outside the predetermined region 34. In other words, the image 46 and/or message may be displayed as long as the first circle 33 is at least partly outside the second circle 34. Accordingly, the image 46 and/or message may not be displayed anymore in case the analyzer 9 analyzes that the position of the device 1 relative to the user 3 is adjusted such that the user's face is within the predetermined region.
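The direction of the arrow heads 46 follows directly from the two centers involved; a minimal sketch, with illustrative names and coordinates:

```python
import math

def guidance_direction(face_center, region_center):
    """Unit vector from the detected face position toward the center of
    the second circle 34, used to orient the arrow heads 46."""
    dx = region_center[0] - face_center[0]
    dy = region_center[1] - face_center[1]
    dist = math.hypot(dx, dy)
    if dist == 0.0:
        return (0.0, 0.0)  # face already centered: no arrow needed
    return (dx / dist, dy / dist)

# Face below-left of the circle center -> arrow points up and to the right.
print(guidance_direction((100, 400), (400, 0)))  # (0.6, -0.8)
```

Recomputing this vector on every camera frame makes the arrows track the face, so they disappear (zero vector) exactly when the face reaches the center of the alignment circle.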
[0093] Once the analyzer 9 analyzes that the captured body portion, i.e. the face of the user 3 is within the predetermined region of the picture captured by the camera 4, and that the roll angle and/or pitch angle and/or yaw angle of the device 1 is within the predetermined angle range, the setup process may be terminated. As an example, the device 1 may then switch to an operational mode in which it communicates with a personal device, such as a toothbrush, in order to present brushing instructions to the user via the display 6.
[0095] As described above, the device 1, and in particular the analyzer 9, is configured to determine a relative distance, or a variation (e.g. by a forward or a backward movement of the user 3 relative to the camera 4) of the relative distance, between the camera 4 and the user 3. The device 1, and in particular the analyzer 9, is also configured to determine a movement of the user 3 in front of the camera 4, which movement may be any one of a left, a right, an upward and a downward directed movement. The device 1, and in particular the analyzer 9, is also configured to detect a combination of the aforementioned movements.
[0096] Stated in more general terms, the analyzer 9 is configured to determine, based on at least the picture data and optionally on the sensor data a relative orientation of the device 1 relative to the detected body portion of the user 3, wherein said relative orientation may be at least one of a relative distance between the detected body portion and the device 1 (or the camera 4, respectively) and/or a relative position between the detected body portion and the device 1 (or the camera 4, respectively) along a plane that is substantially perpendicular to the orientation of the camera 4.
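One common way to obtain the relative distance from the picture data alone is the pinhole-camera relation between the real and the apparent face size. This is a sketch only: the average face width and the focal length in pixels are assumed values, not part of the disclosure:

```python
def estimate_distance_cm(face_width_px, focal_length_px=600.0,
                         real_face_width_cm=14.0):
    """Pinhole-camera distance estimate: distance = f * W / w, where f
    is the focal length in pixels, W an assumed real face width and w
    the width of the detected face in the picture."""
    if face_width_px <= 0:
        raise ValueError("face not detected")
    return focal_length_px * real_face_width_cm / face_width_px

# A face that appears 140 px wide would be roughly 60 cm away.
print(estimate_distance_cm(140))  # 60.0
```

The relative position along the picture plane, by contrast, comes directly from the face's detected center relative to the picture center, with no camera model needed.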
[0097] These variations in the relative orientation between the camera 4 and the user 3 shall be explained in more detail with reference to the following Figures.
[0099] As the face of the user 3 is at least partly outside the predetermined region 34, the image 46 in the form of three consecutive arrow heads prompting the user 3 to move his/her face into the direction of the center of the second circle 34 is also displayed.
[0100] All of these graphical elements, i.e. the level bar 31, the indicator marks 32a, 32b, the first circle 33, the second circle 34, the arrow heads 46 and the pictorial representation of the user 3, are overlaid on the camera feed, i.e. over the picture captured by the camera 4, and may be used in order to provide visual feedback to the user 3.
[0101] The second circle 34 is a fixed circle located substantially in the middle of the screen. The second circle 34 indicates where the user 3 should have his/her face.
[0102] Whenever the user's face is at least partly outside the second circle 34, the first circle 33 is displayed. The first circle 33 is displayed as a translucent circle or dot which is overlaid on the face of the user 3. The first circle 33 follows the user's face if the user 3 moves relative to the camera 4.
[0103] The directional arrows 46 point from the detected face position towards the center of the second circle 34. The arrows 46 shall prompt the user 3 to alter the relative position between his/her face and the device 1, or the camera 4 respectively. As mentioned above, the user 3 may alter his position in order to move his face into the alignment circle 34 while the device 1 itself is not moved. Additionally or alternatively, the position of the device 1 may be altered such that the user's face appears within the alignment circle 34.
[0106] Stated differently, the level bar 31 measures the phone's roll and pitch so that the phone can be positioned properly vertical and facing towards the user 3. The level bar 31 should act as a level and move up and down as the pitch changes (e.g. as the device 1 is tilted backwards, the level bar 31 would be lower). The level bar 31 should also tilt diagonally if the phone is rolled and no longer perpendicular to the floor (i.e. the bar should always remain parallel to the floor regardless of the phone's orientation). The indicator marks 32a, 32b are fixed guides. The level bar 31 must be aligned between these two guides 32a, 32b.
[0109] In other words, if the analyzer 9 analyzes that the detected body portion (i.e. the user's face in this example) is located to a predetermined extent within the predetermined region of the picture (i.e. within the second circle 34), then the analyzer 9 is configured to display alignment guides 51, 52, 53, preferably a first horizontal alignment guide 51, a second horizontal alignment guide 52 and a vertical alignment guide 53, on the display 6.
[0110] The magnitude of the aforementioned ‘predetermined extent’ will be explained in more detail further below with respect to
[0111] The alignment guides 51, 52, 53 shall help the user 3 in aligning his face correctly inside the second circle 34. In particular, the user 3 shall be prompted to align his/her eye region with the first horizontal alignment guide 51, to align his/her mouth region with the second horizontal alignment guide 52, and to align the vertical center region of his/her face, e.g. the bridge of his/her nose, with the vertical alignment guide 53.
[0112] Stated differently, when the user's face is in the alignment circle 34, alignment guides 51, 52, 53 for the eyes, nose and mouth may be displayed on the display 6.
[0114] In other words, the analyzer 9 is configured to analyze the relative position between the user 3 and the device 1 (or the camera 4, respectively) such that the detected body portion is aligned with an alignment guide 50 displayed on the display 6.
[0115] As can be seen, if the detected body portion, i.e. the user's face in this example, is aligned with the alignment guides 50, then the second circle 34 may change its appearance, e.g. by switching to a different color or contrast. In this example, the second circle 34 may switch to a darker contrast.
[0116] Furthermore, when the user's face is aligned with the alignment guides 50 and when the level bar 31 is located within the indicator marks 32a, 32b, a softkey button 54 may be displayed. By clicking said softkey button 54, the screen may switch to a different state, such as shown in
[0117] Stated differently, the alignment circle 34 and the level 31 may change color to indicate correct alignment. The continue button 54 should become enabled once the user's face and the phone 1 are aligned correctly. The user 3 may tap the button 54 to go to a position detection or brushing challenge timer (
[0124] Stated differently, as the user 3 gets farther from the camera 4 the dot 33 tracking his/her face should get smaller. If the user's face gets too far away, the alignment arrows 46 should not be displayed anymore.
[0126] Stated differently, if the user's face is inside the second circle 34, the face tracking dot 33 should appear if the user 3 is a sufficient distance away to warrant warning him/her to get closer for alignment. For details as to when to trigger this tracking dot 33 while the user's face is still in the second circle 34, it is referred to
[0127] At a certain point, i.e. upon detection of a certain minimum size of the first circle 33, a screen such as shown in
[0128] Stated differently, if the user 3 gets far enough away that the device 1 may soon not be able to track him/her anymore, a full screen error message 61 may be displayed on the display 6 prompting the user 3 to move closer.
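The shrinking tracking dot and the trigger for the full-screen "move closer" message can be sketched together; all pixel constants and the reference distance are assumed UI values:

```python
def tracking_dot(distance_cm, ref_distance_cm=60.0, base_radius_px=120.0,
                 min_radius_px=30.0):
    """Returns (radius_px, too_far): the dot 33 scales inversely with
    the distance to the camera 4; once it would fall below
    min_radius_px, the caller switches to the full-screen error
    message 61 prompting the user to move closer."""
    radius = base_radius_px * ref_distance_cm / distance_cm
    return radius, radius < min_radius_px

print(tracking_dot(60.0))   # (120.0, False)
print(tracking_dot(300.0))  # (24.0, True)
```

The same inverse scaling covers the too-close case described next: a large radius above some maximum would trigger the "move further away" message instead.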
[0130] At a certain point, i.e. upon detection of a certain maximum size of the first circle 33, a screen (not shown) may be displayed on the display 6. A message 61 and/or an image may be displayed on the display 6 prompting the user 3 to move further away from the camera 4.
[0131] Stated differently, as the user 3 gets closer to the camera 4, the dot 33 tracking his/her face should get bigger. If the user's face gets too close, the alignment arrows 46 should not be displayed.
[0139] Stated differently, whenever the head is out of range and a face cannot be detected at all, the timer 79 as depicted in
[0140] If the user's face may be displayed on the display 6, such as previously discussed by way of example with reference to
[0148] A first distance region 1101 is a distance region that is between 0 cm and about 30 cm away from the camera 4. This distance is too close for properly detecting the user's face. If the analyzer 9 analyzes that the user's face is within said first distance region 1101 for a predetermined time, e.g. for more than three seconds, a screen 1106 may be displayed on the display 6. This screen may correspond to the screen that has been previously discussed with reference to
[0149] A second distance region 1102 is a distance region that is between about 30 cm and about 90 cm away from the camera 4. This distance is accurate for properly detecting the user's face. If the analyzer 9 analyzes that the user's face is within said second distance region 1102, a screen 1107 may be displayed on the display 6. This screen may correspond to the screen that has been previously discussed with reference to
[0150] A third distance region 1103 is a distance region that is between about 90 cm and about 110 cm away from the camera 4. This distance is too far away for properly detecting the user's face. If the analyzer 9 analyzes that the user's face is within said third distance region 1103 for a predetermined time, e.g. for more than three seconds, a screen 1108 may be displayed on the display 6. This screen may correspond to the screen that has been previously discussed with reference to
[0151] A fourth distance region 1104 is a distance region that is between about 110 cm and about 140 cm away from the camera 4. This distance is too far away for accurately detecting the user's face. If the analyzer 9 analyzes that the user's face is within said fourth distance region 1104 a screen 1109 may be immediately displayed on the display 6. This screen may correspond to the screen that has been previously discussed with reference to
[0152] A fifth distance region 1105 is a distance region that is more than about 140 cm away from the camera 4. This distance is too far away for properly detecting the user's face. If the analyzer 9 analyzes that the user's face is within said fifth distance region 1105 a screen 1110 may be immediately displayed on the display 6. This screen may correspond to the screen that has been previously discussed with reference to
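The five distance regions above can be summarized in a small classifier. The centimeter boundaries are taken from the text; the verdict labels are illustrative, and the `immediate` flag mirrors the distinction between screens shown only after a delay (regions 1101 and 1103, e.g. after three seconds) and screens shown at once (regions 1104 and 1105):

```python
def classify_distance(distance_cm):
    """Map the camera-to-face distance to the distance regions
    1101-1105. Returns (region, verdict, immediate)."""
    if distance_cm < 30:
        return 1101, "too close", False        # warn after a delay
    if distance_cm < 90:
        return 1102, "ok", False               # accurate detection range
    if distance_cm < 110:
        return 1103, "too far", False          # warn after a delay
    if distance_cm < 140:
        return 1104, "too far", True           # warn immediately
    return 1105, "out of range", True          # warn immediately

print(classify_distance(60))   # (1102, 'ok', False)
print(classify_distance(120))  # (1104, 'too far', True)
```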
[0153] If the user's head or face cannot be detected for more than 30 seconds, for example, then this session will count as a normal brushing session instead of Position Detection. A screen such as depicted in
[0154] A corresponding message 78 and/or image may be displayed for 10 seconds, for example, and will then automatically be dismissed. The user may also dismiss it via the exit button 56 in the top-right. A screen only showing the brush timer 79 may be displayed on the display 6, as shown in
[0155] After dismissal the device may return to the position detection screen once the user's head is detected again. When the session is finished, a regular Session Summary screen, as shown in
[0163] Summarizing the invention in other words, the invention may be an interactive system setup process that guides users through the installation of a brushing position determination system. The system may take into account sensor data from a smartphone 1 to continuously measure the orientation of the phone 1 during the setup and provide corresponding feedback to the user 3 in order to enable him/her to install everything correctly.
[0164] The same applies to the position of the user's face, which must always be visible to the smartphone's front camera 4. If the user starts to leave the focus, he is warned not to leave it and finally receives a message that the system cannot work anymore if he leaves the focus. Additionally, he needs to stay within a certain distance of the camera 4. The system guides the user through the setup and checks the face position and the smartphone position continuously during the usage of the brushing application.
[0165] To enable the consumer to cooperate, the smartphone application instructs, guides and educates the consumer to set the whole system up correctly. While guiding the consumer through the instructions, the system simultaneously checks for proper installation.
[0166] The system can detect when the consumer has completed the tasks. Appropriate feedback is provided. Completed setup tasks trigger the system to start working and providing the actual feedback during brushing. If the consumer changes the adjustment during usage, the system will detect this and provide corresponding instructions or actions.
[0167] For the system to work as desired it is advantageous when the following criteria are fulfilled:
[0168] Consumer may stand in an upright position, gazing straight ahead into the front camera 4
[0169] The smartphone 1 may be affixed to a vertical wall with the front camera 4 at the height of the consumer's nose
[0170] Roll of the phone should ideally be 0°, but may have a tolerance of e.g. +/−3°, depending on the sensitivity of the video processing algorithm.
[0171] Pitch of the phone should ideally be 0°, but may have a tolerance of e.g. +/−3°, depending on the sensitivity of the video processing algorithm.
[0172] The distance between the consumer's face and the front camera 4 depends on the lens of the camera 4, but should result in the face covering approximately 75% of the camera's sensor area (advantageously, the consumer's complete face, especially the mouth area including parts of the brush body and the consumer's hand, may always be visible to the camera 4)
[0173] Bluetooth connection between power brush and smartphone shall be established
[0174] Lighting conditions shall illuminate the face above a certain level. Rooms that are too bright or too dark may affect the result negatively.
[0175] Smartphone App shall be executed
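The criteria listed above can be combined into a single setup check. The following is a minimal sketch under the stated assumptions (a +/-3° roll/pitch tolerance and roughly 75% sensor coverage); the function and parameter names are illustrative, not part of the application:

```python
def setup_criteria_met(roll_deg, pitch_deg, face_coverage,
                       bluetooth_connected, lighting_ok,
                       angle_tolerance_deg=3.0):
    """True when all setup criteria listed above are satisfied."""
    return (abs(roll_deg) <= angle_tolerance_deg       # roll ideally 0°, +/-3°
            and abs(pitch_deg) <= angle_tolerance_deg  # pitch ideally 0°, +/-3°
            and face_coverage >= 0.75                  # ~75% of the sensor area
            and bluetooth_connected                    # power brush paired
            and lighting_ok)                           # face sufficiently lit
```

Only when every criterion holds would the system proceed to the brushing application.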
[0176] The process described below can, for instance, be used for setting the system up before using it, but can also serve as a feedback system during the usage of the brushing app. If the smartphone's position or orientation were changed during usage of the system, all data gathered by the at least one sensor would be invalid, so appropriate feedback during usage is required. The sensitivity of the sensor measurement (e.g. roll and pitch of the phone, face distance, face position) during usage of the brushing app can differ from the sensitivity during setup.
[0177] Roll, Pitch, Yaw of the Smartphone and Head Position Measurements
[0178] This is the definition of what the smartphone measures with its at least one sensor, e.g. with built-in inertial sensors, during the setup and usage of the system. The head/face alignment is measured by the front camera 4 of the smartphone 1 and may use the Fraunhofer SHORE face detection algorithm to determine the position of the face within the camera focus.
[0179] All measurements may have an ideal value range, a tolerance value range and an out-of-range value range. As long as the required face position/smartphone orientation is in the ideal value range, the system indicates that the next step can be taken, e.g. starting the brushing application. If a value is in the tolerance value range, the user is prompted to correct the orientation of the phone, the position of the face or the distance to the camera. If a value is out of range, another message asks the user to correct the position/orientation and informs him that the system will otherwise not work at all.
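The three-band evaluation described above (ideal, tolerance, out of range) can be sketched like this; the concrete band limits in the example are assumptions, not values from the application:

```python
def classify_measurement(value, ideal_range, tolerance_range):
    """Map a measurement to one of the three value bands described above."""
    if ideal_range[0] <= value <= ideal_range[1]:
        return "ideal"           # next step can be taken
    if tolerance_range[0] <= value <= tolerance_range[1]:
        return "tolerance"       # prompt the user to correct
    return "out_of_range"        # warn that the system will not work

# e.g. phone roll with an assumed ideal band of +/-1° inside the +/-3° tolerance
band = classify_measurement(2.0, (-1.0, 1.0), (-3.0, 3.0))  # "tolerance"
```

The same classifier could be applied to roll, pitch, face distance and face position, each with its own band limits, and with different (e.g. looser) limits during usage than during setup.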
[0180] On-Screen Instructions During Setup
[0181] The user may be asked which hand he prefers to use for brushing. This may help the algorithm reduce the variances of the image processing, which may increase feedback quality.
[0182] On-screen guidance may provide instant feedback on what the consumer needs to do with the smartphone 1 in order to position it correctly. Every movement and orientation change of the phone 1 may be continuously measured and fed back to the user. In this case, the roll and pitch may be measured.
[0183] In parallel, the system detects a face in the camera focus and provides guidance on how far away the user should stand and whether the phone 1 is mounted at the right height.
[0184] If the ambient lighting conditions are too poor for the system to detect the face properly and stably, the system may provide feedback that the user should turn on the light, close the blinds, disable direct face illumination, or remove background light that blinds the camera.
[0185] On-Screen Instruction During Usage
[0186] 1. Whenever the head is out of range and a face cannot be detected, the DZM timer may be displayed in such a way that it is clear that the position is not currently being tracked because the user is out of range. Also, a corresponding message may be shown. The handle of the toothbrush may trigger a vibration every second while the user's face cannot be detected. The timer should continue counting as long as the toothbrush is on. The disabled UI (user interface) for position, the corrective message and the vibrations should automatically be dismissed once the user's face is in view again. Once the face is detected again, another message may be shown, for example for two seconds, such as a message containing the information: “Oops, we couldn't see your face. Always face forward to get the most accurate results.”
[0187] 2. If the user's brush position cannot be detected, even though his/her head can be, Position Detection may be disabled, but the timer may continue to count up. The DZM timer may be displayed in such a way that it is clear that the position is not currently being tracked because the user is out of range. No message is shown.
[0188] 3. If the user's head cannot be detected for more than 30 seconds, for instance, then this session will count as a normal brushing session instead of Position Detection. None of the position detection data may be recorded for this session. A corresponding message, such as message 1201 shown in
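The usage-time rules in items 1 to 3 above can be sketched as a small monitor. The 30-second timeout and the per-second vibration come from the description; the class name and the action strings are illustrative placeholders:

```python
class PositionDetectionMonitor:
    """Sketch of the usage-time behavior described above: disable the
    position UI while the face is lost, vibrate the handle roughly once
    per second, downgrade the session to a normal brushing session after
    30 s without detection, and show a message when the face is regained.
    All names are illustrative, not from the original application."""

    TIMEOUT_S = 30.0  # taken from the description above

    def __init__(self):
        self.lost_since = None          # timestamp when the face was lost
        self.session_downgraded = False

    def update(self, face_detected, now):
        """Return the list of actions to take for this update tick."""
        actions = []
        if face_detected:
            if self.lost_since is not None:
                actions.append("show_oops_message_2s")  # face regained
            self.lost_since = None
        else:
            if self.lost_since is None:
                self.lost_since = now
            actions.append("disable_position_ui")
            actions.append("vibrate_handle")  # triggered about once per second
            if (not self.session_downgraded
                    and now - self.lost_since >= self.TIMEOUT_S):
                self.session_downgraded = True
                actions.append("count_as_normal_brushing_session")
        return actions
```

Calling `update` once per second with the current face-detection result would reproduce the escalation described in the three items above.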
[0189] Although some aspects have been described in the context of an apparatus, it is clear that these aspects also represent a description of the corresponding method, where a block or device corresponds to a method step or a feature of a method step. Analogously, aspects described in the context of a method step also represent a description of a corresponding block or item or feature of a corresponding apparatus.
[0190] The data generated by the inventive method can be stored on a digital storage medium or can be transmitted on a transmission medium such as a wireless transmission medium or a wired transmission medium such as the Internet.
[0191] Depending on certain implementation requirements, embodiments of the invention can be implemented in hardware or in software. The implementation can be performed using a digital storage medium, for example a floppy disk, a DVD, a CD, a ROM, a PROM, an EPROM, an EEPROM or a FLASH memory, having electronically readable control signals stored thereon, which cooperate (or are capable of cooperating) with a programmable computer system such that the respective method is performed.
[0192] Some embodiments according to the invention comprise a non-transitory data carrier having electronically readable control signals, which are capable of cooperating with a programmable computer system, such that one of the methods described herein is performed.
[0193] Generally, embodiments of the present invention can be implemented as a computer program product with a program code, the program code being operative for performing one of the methods when the computer program product runs on a computer. The program code may for example be stored on a machine readable carrier.
[0194] Other embodiments comprise the computer program for performing one of the methods described herein, stored on a machine readable carrier.
[0195] In other words, an embodiment of the inventive method is, therefore, a computer program having a program code for performing one of the methods described herein, when the computer program runs on a computer.
[0196] A further embodiment of the inventive methods is, therefore, a data carrier (or a digital storage medium, or a computer-readable medium) comprising, recorded thereon, the computer program for performing one of the methods described herein.
[0197] A further embodiment of the inventive method is, therefore, a data stream or a sequence of signals representing the computer program for performing one of the methods described herein. The data stream or the sequence of signals may for example be configured to be transferred via a data communication connection, for example via the Internet.
[0198] A further embodiment comprises a processing means, for example a computer, or a programmable logic device, configured to or adapted to perform one of the methods described herein.
[0199] A further embodiment comprises a computer having installed thereon the computer program for performing one of the methods described herein.
[0200] In some embodiments, a programmable logic device (for example a field programmable gate array) may be used to perform some or all of the functionalities of the methods described herein. In some embodiments, a field programmable gate array may cooperate with a microprocessor in order to perform one of the methods described herein. Generally, the methods are preferably performed by any hardware apparatus.
[0201] The above described embodiments are merely illustrative of the principles of the present invention. It is understood that modifications and variations of the arrangements and the details described herein will be apparent to others skilled in the art. It is the intent, therefore, to be limited only by the scope of the pending patent claims and not by the specific details presented by way of description and explanation of the embodiments herein.
[0202] The dimensions and values disclosed herein are not to be understood as being strictly limited to the exact numerical values recited. Instead, unless otherwise specified, each such dimension is intended to mean both the recited value and a functionally equivalent range surrounding that value. For example, a dimension disclosed as “40 mm” is intended to mean “about 40 mm.”
[0203] Every document cited herein, including any cross referenced or related patent or application and any patent application or patent to which this application claims priority or benefit thereof, is hereby incorporated herein by reference in its entirety unless expressly excluded or otherwise limited. The citation of any document is not an admission that it is prior art with respect to any invention disclosed or claimed herein or that it alone, or in any combination with any other reference or references, teaches, suggests or discloses any such invention. Further, to the extent that any meaning or definition of a term in this document conflicts with any meaning or definition of the same term in a document incorporated by reference, the meaning or definition assigned to that term in this document shall govern.
[0204] While particular embodiments of the present invention have been illustrated and described, it would be obvious to those skilled in the art that various other changes and modifications can be made without departing from the spirit and scope of the invention. It is therefore intended to cover in the appended claims all such changes and modifications that are within the scope of this invention.