Method for Determining a Viewing Direction of a Person

20170287163 · 2017-10-05

    Abstract

    A method for determining a viewing direction of a person, such as a driver in a motor vehicle, is provided. The method includes detecting a surface contour of a surface of an eye of the person, ascertaining a normal vector on the surface of the eye depending on the detected surface contour, and determining the viewing direction of the person depending on the normal vector.

    Claims

    1. A method for determining a viewing direction of a person, the method comprising the acts of: acquiring a surface contour of a surface of an eye of the person; detecting a normal vector on the surface of the eye as a function of the acquired surface contour; and determining the viewing direction of the person as a function of the normal vector.

    2. The method according to claim 1, wherein the act of acquiring the surface contour of the surface of the eye of the person is performed by a depth sensor.

    3. The method according to claim 1, wherein the normal vector is detected as a function of a pupil region detected from the surface contour and a curvature of the pupil region.

    4. The method according to claim 3, wherein the pupil region is determined as an elevation on the surface of the eye and the pupil region projects beyond a spherical surface of an eyeball.

    5. The method according to claim 3, wherein the normal vector at the center of the pupil region is detected as a function of the curvature at the center of the pupil region.

    6. The method according to claim 1, wherein the act of acquiring the surface contour of the surface of the eye includes acquiring the position of the eye via a camera and acquiring the surface contour based on the position of the eye.

    7. The method according to claim 1, wherein the viewing direction is determined by applying one or two correction angles to the normal vector, the one or two correction angles being determined by a calibration method.

    8. The method according to claim 1, wherein the person is a driver in a motor vehicle.

    9. The method according to claim 2, wherein the depth sensor is a TOF camera or a LIDAR sensor.

    10. A viewing direction detection system for determining a viewing direction of a person comprising: a depth sensor that acquires a surface contour of a surface of an eye of the person; and a control unit for executing stored instructions to: detect a normal vector on the surface of the eye as a function of the acquired surface contour, and determine the viewing direction of the person as a function of the normal vector.

    11. The viewing direction detection system according to claim 10, further comprising an additional device for determining the viewing direction of the person, wherein the control unit uses an additional viewing direction acquired by the additional device to provide the viewing direction of the person.

    12. The viewing direction detection system according to claim 10, wherein the viewing direction detection system is included in a motor vehicle in order to detect the viewing direction of a driver of the motor vehicle.

    Description

    BRIEF DESCRIPTION OF THE DRAWINGS

    [0023] FIG. 1 illustrates a viewing direction detection system for use in a motor vehicle in accordance with one or more aspects of the disclosure.

    [0024] FIG. 2 illustrates a flow chart of a method for determining a viewing direction from an acquired three-dimensional pupil shape in accordance with one or more aspects of the disclosure.

    DETAILED DESCRIPTION OF THE DRAWINGS

    [0025] FIG. 1 illustrates a viewing direction detection system 1, by which a viewing direction B of an eye 2 may be detected. The eye, for example, may be the eye of a driver of a motor vehicle and, by way of the detected viewing direction B, functions of the motor vehicle, such as driver assistance and warning functions, may be activated.

    [0026] The viewing direction detection system 1 may include a camera 3, which is directed at the driver's head. The camera 3 is connected to a control unit 4, which detects the position of at least one eye 2 of the driver via known detection methods. The detected eye position is then used by a depth sensor 5 to record or acquire a detailed three-dimensional surface contour of the eye 2, including the pupil of the eye 2. Alternatively, the camera 3 may be omitted, and the depth sensor 5 may be used for the detection of the position of the at least one eye 2.

    [0027] The depth sensor 5 is a 3D camera system that is capable of recording a three-dimensional surface contour of an object.

    [0028] An example of the depth sensor 5 is a TOF camera which, via the time-of-flight (TOF) method, measures distances between an optical recording device and a surface of the object to be imaged. For this purpose, the object is illuminated by way of light pulses. For each pixel, the 3D camera measures the time between the emission of a light pulse and the reception of the pulse reflected at the object. This round-trip time is directly proportional to the distance between the optical recording device and the object, so the optical recording device supplies, for each pixel, the distance to a point on the surface of the object to be acquired.
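
    The per-pixel distance computation follows directly from the round-trip time of the light pulse. The following is a minimal sketch of this relationship (Python; the function name and the assumption that round-trip times arrive in seconds are illustrative, not taken from the patent):

        import numpy as np

        SPEED_OF_LIGHT = 299_792_458.0  # meters per second

        def tof_depth_map(round_trip_times_s):
            """Convert per-pixel round-trip times (s) to distances (m).

            The pulse travels to the object and back, so the one-way
            distance is half of (speed of light x elapsed time).
            """
            return 0.5 * SPEED_OF_LIGHT * np.asarray(round_trip_times_s)

    For example, a round-trip time of about 6.7 nanoseconds corresponds to a distance of roughly one meter.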

    [0029] While such a TOF camera acquires several pixels for each emitted light pulse, a laser scanner acquires the 3D contour of the object point by point, scanning the region to be acquired with an individual light pulse per point and/or pixel.

    [0030] The depth sensor 5 transmits contour data, which describes the surface contour of the eye 2, to the control unit 4, which determines the location of the pupil 21 from the detected surface contour. The pupil 21 can be detected as an elevation on the eyeball. The depth sensor 5 is configured such that the surface contour of the eye can be detected with sufficient precision.

    [0031] By determining the location and the dimensions of the pupil 21 and its curvature, a normal vector on the pupil originating from the center of the pupil 21 may be determined as viewing direction B. The viewing direction B of the eye 2 has a fixed relationship to this normal vector and deviates from it by no more than 5 degrees. Via the control unit 4, the viewing direction B may be correspondingly corrected, for example, by a calibration method.

    [0032] FIG. 2 is a flow chart describing a method for detecting a viewing direction B in accordance with one or more aspects of the disclosure.

    [0033] In step S1, via the camera 3, a head region at the driver's position in the motor vehicle is first recorded. The head region corresponds to the space which the driver's head takes up when the driver is sitting normally in the driver's seat. From the acquired image of the head region, in step S2, the position of the driver's head is detected by an image detection algorithm executed by the control unit 4, and the eye position is detected therefrom.
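
    The patent leaves the choice of image detection algorithm open. Purely as an illustration of such a known detection method, the following sketch localizes eyes in the 2D camera image with OpenCV's stock Haar cascade detector (the cascade file and parameters are ordinary defaults, not values specified by the patent):

        import cv2

        def detect_eye_positions(frame_bgr):
            """Illustrative eye localization on the camera image (steps S1/S2)."""
            gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
            cascade = cv2.CascadeClassifier(
                cv2.data.haarcascades + "haarcascade_eye.xml")
            eyes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
            # Return the center pixel of each detected eye rectangle.
            return [(x + w // 2, y + h // 2) for (x, y, w, h) in eyes]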

    [0034] In step S3, the eye position is transmitted to the depth sensor 5, which aligns itself with the eye position and, in step S4, carries out a three-dimensional acquisition of the surface contour of the eye 2.

    [0035] In step S5, the contour data, which three-dimensionally describe the surface contour of the eye 2, is transmitted to the control unit 4.

    [0036] In step S6, the control unit determines the position of the pupil 21 of the eye 2. The pupil 21 corresponds to an elevation of the surface contour, i.e., a region which projects above the essentially spherical surface of the eyeball 22 of the eye 2 and whose curvature differs from that of the eyeball. The pupil region P is an essentially round region on the surface of the eyeball 22.
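
    One possible realization of step S6, sketched below under the assumption that the contour data arrives as an N x 3 point cloud in millimeters: the eyeball 22 is approximated by a least-squares sphere, and points rising above that sphere by more than a threshold (the threshold value is an assumption, not taken from the patent) form the pupil region P.

        import numpy as np

        def fit_sphere(points):
            """Least-squares sphere fit; returns (center, radius).

            Linearizes |p - c|^2 = r^2 into the linear system A w = b
            with w = (cx, cy, cz, r^2 - |c|^2).
            """
            A = np.column_stack([2.0 * points, np.ones(len(points))])
            b = (points ** 2).sum(axis=1)
            w, *_ = np.linalg.lstsq(A, b, rcond=None)
            center = w[:3]
            radius = np.sqrt(w[3] + center @ center)
            return center, radius

        def pupil_region_mask(points, min_elevation=0.3):
            """Flag points that project above the fitted eyeball sphere.

            min_elevation is in millimeters and purely illustrative.
            """
            center, radius = fit_sphere(points)
            elevation = np.linalg.norm(points - center, axis=1) - radius
            return elevation > min_elevation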

    [0037] In step S7, a center M of the pupil region P is determined, and a normal vector N at the center of the pupil 21 is determined by way of the circumferential line of the pupil region P or of a curvature of the pupil 21 derived from the surface contour.
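
    Step S7 can proceed via the circumferential line or via the curvature of the pupil 21. The sketch below takes the curvature route, reusing fit_sphere from the previous sketch: a second, smaller sphere is fitted to the pupil region, and the normal vector N at the region's centroid M then points radially outward from that sphere's center. This particular construction is an illustrative assumption, not numerics prescribed by the patent.

        def pupil_normal(points, mask):
            """Estimate the normal vector N at the pupil center M.

            Fits a local sphere to the elevated pupil region; the surface
            normal at the region's centroid points radially away from the
            fitted sphere's center.
            """
            pupil_points = points[mask]
            m = pupil_points.mean(axis=0)        # pupil center M
            center, _ = fit_sphere(pupil_points)
            n = m - center
            return n / np.linalg.norm(n)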

    [0038] The normal vector N determined via the depth sensor 5 corresponds to an optical axis of the eye 2 and may itself be taken as the viewing direction B, or the viewing direction B may be derived from the normal vector N by applying one or two (e.g., differently directed) correction angles of up to 5 degrees.

    [0039] In step S8, the determined viewing direction B is corrected by a calibration process, for example by comparing it with the direction between the pupil 21 and a known object being observed. In a learning process, one or two correction angles oriented at a right angle to one another are determined, which indicate the deviation between the normal vector N and the viewing direction B of the eye 2.
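
    Applying the calibrated correction amounts to two small rotations of the normal vector N about mutually perpendicular axes. The sketch below uses Rodrigues' rotation formula; the choice of the sensor frame's horizontal and vertical axes as rotation axes is an illustrative assumption.

        import numpy as np

        def rotate(v, axis, angle_deg):
            """Rotate vector v about a unit axis by angle_deg (Rodrigues' formula)."""
            axis = axis / np.linalg.norm(axis)
            a = np.radians(angle_deg)
            K = np.array([[0.0, -axis[2], axis[1]],
                          [axis[2], 0.0, -axis[0]],
                          [-axis[1], axis[0], 0.0]])
            R = np.eye(3) + np.sin(a) * K + (1.0 - np.cos(a)) * (K @ K)
            return R @ v

        def apply_correction(n, alpha_deg, beta_deg):
            """Turn the normal vector N into the viewing direction B using two
            mutually perpendicular correction angles (each up to 5 degrees).
            """
            b = rotate(n, np.array([0.0, 1.0, 0.0]), alpha_deg)  # horizontal
            b = rotate(b, np.array([1.0, 0.0, 0.0]), beta_deg)   # vertical
            return b / np.linalg.norm(b)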

    [0040] As a result of the use of brief light pulses by the depth sensor 5, the susceptibility to interference during the acquisition of the surface contour of the eye is low, and a depth sensor is therefore suitable for use in a motor vehicle even under rapidly changing lighting conditions.

    [0041] Since a conventional camera 3 is advantageous for the alignment of the depth sensor 5, the eye shape acquisition method may also be combined with other methods for determining the viewing direction that are based on a simple two-dimensional image capture by the camera 3. As a result, the acquired viewing direction may become more precise, in that the viewing direction acquired by one method is checked for plausibility against the viewing direction acquired by the additional method(s).
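
    The plausibility check between two independently acquired viewing directions reduces to comparing the angle between two direction vectors; a minimal sketch, with the tolerance value chosen purely for illustration:

        import numpy as np

        def directions_consistent(b1, b2, max_angle_deg=5.0):
            """Return True if two viewing-direction estimates agree to within
            max_angle_deg (the tolerance is an illustrative value).
            """
            cos_angle = np.dot(b1, b2) / (np.linalg.norm(b1) * np.linalg.norm(b2))
            angle = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
            return angle <= max_angle_deg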

    LIST OF REFERENCE NUMBERS

    [0042]

    1 Viewing direction detection system
    2 Eye
    21 Pupil
    22 Eyeball
    3 Camera
    4 Control unit
    5 Depth sensor
    B Viewing direction
    N Normal vector
    P Pupil region

    [0043] The foregoing disclosure has been set forth merely to illustrate the invention and is not intended to be limiting. Since modifications of the disclosed embodiments incorporating the spirit and substance of the invention may occur to persons skilled in the art, the invention should be construed to include everything within the scope of the appended claims and equivalents thereof.