METHOD FOR ADAPTING A TRIGGERING ALGORITHM OF A PERSONAL RESTRAINT DEVICE AND CONTROL DEVICE FOR ADAPTING A TRIGGERING ALGORITHM OF A PERSONAL RESTRAINT DEVICE

20230356682 · 2023-11-09

    Abstract

    A method for adapting a triggering algorithm of a personal restraint device of a vehicle on the basis of a detected vehicle interior state of the vehicle. The method comprises detecting key points of a vehicle occupant by an optical sensor device, ascertaining a vehicle occupant posture of the vehicle occupant based on the connection of the detected key points to a skeleton-like representation of body parts of the vehicle occupant, wherein the skeleton-like representation reflects the relative position and orientation of individual body parts of the vehicle occupant, predicting a future vehicle occupant posture of the vehicle occupant based on a predicted future position of at least one of the key points, and modifying the triggering algorithm of the personal restraint device based on the predicted future posture of the vehicle occupant. A control device for adapting a triggering algorithm of a personal restraint device is also disclosed.

    Claims

    1. A method for adapting a triggering algorithm of a personal restraint device of a vehicle on the basis of a detected vehicle interior state of the vehicle, the method comprising: detecting key points of a vehicle occupant by an optical sensor device; ascertaining a vehicle occupant posture of the vehicle occupant based on the connection of the detected key points to a skeleton-like representation of body parts of the vehicle occupant, wherein the skeleton-like representation reflects the relative position and orientation of individual body parts of the vehicle occupant; predicting a future vehicle occupant posture of the vehicle occupant based on a predicted future position of at least one of the key points; and modifying the triggering algorithm of the personal restraint device based on the predicted future posture of the vehicle occupant.

    2. The method according to claim 1, further comprising: detecting, by the optical sensor device, a mobile object held by the vehicle occupant; predicting a future mobile object state; and modifying the triggering algorithm of the personal restraint device based on the predicted future posture of the vehicle occupant and the predicted future mobile object state.

    3. The method according to claim 2, wherein the predicted future mobile object state comprises at least one of a future position of the mobile object inside the vehicle interior, a future velocity of the mobile object, or a future orientation of the mobile object.

    4. The method according to claim 2, further comprising: correlating the position and/or movement of the mobile object with respect to the position and/or movement of the key points representing the vehicle occupant wrists; and determining from the correlation which hand of the vehicle occupant is holding the mobile object.

    5. The method according to claim 1, wherein the predicted future position of at least one of the key points is estimated based on at least one of a vehicle occupant size, a vehicle occupant seat position, a vehicle occupant seat backrest angle, a seat belt status, a vehicle velocity, or a vehicle acceleration, and wherein the vehicle occupant size is estimated based on the size of a body part of the vehicle occupant.

    6. The method according to claim 2, wherein the predicted future state of the mobile object is estimated based on at least one of a vehicle occupant size, a vehicle occupant seat position, a vehicle occupant seat backrest angle, a seat belt status, a vehicle velocity, or a vehicle acceleration and wherein the vehicle occupant size is estimated based on the size of a body part of the vehicle occupant.

    7. The method according to claim 1, wherein the predicted future position of at least one of the key points is estimated based on a calculated future vectorial key point velocity, wherein the future vectorial key point velocity is formed by a vector sum of an estimated vectorial key point velocity and a current vectorial vehicle velocity multiplied with a first scalar parameter derived from a signal of a crash sensor.

    8. The method according to claim 7, wherein the future vectorial key point velocity is calculated by the formula
    $\vec{v}_{KP,t_{n+1}}=\vec{v}_{KP,t_{n+1},Est}+\alpha\cdot\vec{v}_{Vehicle}+\beta\cdot\vec{v}_{CT}$, wherein $\vec{v}_{KP,t_{n+1}}$ is the future vectorial key point velocity, $\vec{v}_{KP,t_{n+1},Est}$ is the estimated future vectorial key point velocity, $\alpha$ is the first scalar parameter derived from a signal of a crash sensor, $\vec{v}_{Vehicle}$ is the current vectorial vehicle velocity, $\beta$ is a second scalar parameter derived from a signal of a crash sensor, and $\vec{v}_{CT}$ is a current vectorial colliding target velocity.

    9. The method according to claim 2, wherein the predicted future position of the mobile object is estimated based on a calculated future vectorial mobile object velocity, wherein the future vectorial mobile object velocity is formed by a vector sum of an estimated vectorial mobile object velocity and a current vectorial vehicle velocity multiplied with a first scalar parameter derived from a signal of a crash sensor.

    10. The method according to claim 9, wherein the future vectorial mobile object velocity is calculated by the formula
    $\vec{v}_{Obj,t_{n+1}}=\vec{v}_{Obj,t_{n+1},Est}+\alpha\cdot\vec{v}_{Vehicle}+\beta\cdot\vec{v}_{CT}$, wherein $\vec{v}_{Obj,t_{n+1}}$ is the future vectorial mobile object velocity, $\vec{v}_{Obj,t_{n+1},Est}$ is the estimated future vectorial mobile object velocity, $\alpha$ is the first scalar parameter derived from a signal of a crash sensor, $\vec{v}_{Vehicle}$ is the current vectorial vehicle velocity, $\beta$ is a second scalar parameter derived from a signal of a crash sensor, and $\vec{v}_{CT}$ is a current vectorial colliding target velocity.

    11. The method according to claim 2, wherein the detecting of key points of the vehicle occupant is performed using at least an IR camera and a 3D camera.

    12. The method according to claim 11, wherein 2D key points of the vehicle occupant detected by the IR camera are converted into 3D key points of the vehicle occupant by fusing the information provided by the 3D camera, and wherein the ascertaining of the vehicle occupant posture of the vehicle occupant is based on the 3D key points of the vehicle occupant.

    13. The method according to claim 12, wherein the detecting of the mobile object held by the vehicle occupant is performed using only the IR camera.

    14. The method according to claim 1, wherein the key points of the vehicle occupant are the vehicle occupant skeleton joints.

    15. A control device for adapting a triggering algorithm of a personal restraint device of a vehicle on the basis of a detected vehicle interior state of the vehicle, wherein the control device is configured to: detect key points of a vehicle occupant by an optical sensor device; ascertain a vehicle occupant posture of the vehicle occupant based on the connection of the detected key points to a skeleton-like representation of body parts of the vehicle occupant, wherein the skeleton-like representation reflects the relative position and orientation of individual body parts of the vehicle occupant; predict a future vehicle occupant posture of the vehicle occupant based on a predicted future position of at least one of the key points; and modify the triggering algorithm of the personal restraint device based on the predicted future posture of the vehicle occupant.

    Description

    BRIEF DESCRIPTION OF THE DRAWINGS

    [0036] The present disclosure will become more fully understood from the detailed description and the accompanying drawings, wherein:

    [0037] FIG. 1 shows in a schematic diagram a vehicle comprising a control device for adapting a triggering algorithm of a personal restraint device based on a detected vehicle interior state;

    [0038] FIG. 2 shows a flow chart for adapting a triggering algorithm of a personal restraint device carried out by the control device of FIG. 1;

    [0039] FIG. 3 shows a control device for adapting a triggering algorithm of a personal restraint device of FIG. 2; and

    [0040] FIG. 4 shows a flow chart for adapting a triggering algorithm of a personal restraint device.

    DETAILED DESCRIPTION

    [0041] The following describes the present disclosure in detail with reference to the accompanying drawings and in combination with embodiments. It should be noted that, without conflicts, the embodiments in the present disclosure and features in the embodiments may be combined with each other. Parts corresponding to each other are always provided with the same reference signs in all figures.

    [0042] FIG. 1 shows in a schematic diagram a vehicle 1 comprising a control device 2 for adapting a triggering algorithm of a personal restraint device (not shown) of the vehicle 1 on the basis of a detected vehicle interior state. The vehicle 1 comprises, as an optical sensor device, a 3D camera 3 and an IR camera 4, wherein the 3D camera 3 and the IR camera 4 are integrated in the roof lining between the vehicle front seats and in an area close to the rear-view mirror of the vehicle 1, and wherein these cameras 3, 4 monitor the interior 5 of the vehicle 1. The vehicle 1 further comprises a surrounding sensor (not shown), such as a radar sensor, a lidar sensor, an ultrasonic sensor, a surrounding camera or a combination thereof, and at least one collision sensor (not shown).

    [0043] The control device 2 is configured to detect the skeleton joints of a vehicle occupant 6 as key points 7 of the vehicle occupant 6 by the cameras 3, 4. Thereby, 2D key points of the vehicle occupant 6 detected by the IR camera 4 are converted into 3D key points 7 of the vehicle occupant 6 by fusing the information provided by the 3D camera 3. In FIG. 1 only individual key points 7 are provided with the reference sign 7 in order not to overload the illustration. Based on these key points 7, a vehicle occupant posture of the vehicle occupant 6 is ascertained from the connection of the detected key points 7 to a skeleton-like representation of body parts of the vehicle occupant 6, wherein the skeleton-like representation reflects the relative position and orientation of individual body parts of the vehicle occupant 6.
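The 2D-to-3D key point fusion described above can be sketched as follows. This is a minimal illustration, not the disclosed implementation: it assumes a pinhole camera model with hypothetical intrinsics fx, fy, cx, cy and a depth map co-registered with the IR image; all function and parameter names are illustrative.

```python
import numpy as np

def lift_keypoints_to_3d(keypoints_2d, depth_map, fx, fy, cx, cy):
    """Back-project 2D IR-image key points to 3D camera coordinates
    using a co-registered depth map and a pinhole camera model.

    keypoints_2d: iterable of (u, v) pixel coordinates
    depth_map: 2D array of depth values aligned with the IR image
    fx, fy, cx, cy: camera intrinsics (focal lengths, principal point)
    """
    points_3d = []
    for u, v in keypoints_2d:
        # Read the depth at the key point pixel (row = v, column = u).
        z = float(depth_map[int(round(v)), int(round(u))])
        # Back-project with the pinhole model: x = (u - cx) * z / fx, etc.
        x = (u - cx) * z / fx
        y = (v - cy) * z / fy
        points_3d.append((x, y, z))
    return np.array(points_3d)
```

The resulting 3D key points can then be connected into the skeleton-like representation used for posture ascertainment.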

    [0044] The control device 2 is further configured to predict a future vehicle occupant posture of the vehicle occupant 6 based on a predicted future position of at least one of the key points 7 and to modify the triggering algorithm of the personal restraint device based on the predicted future posture of the vehicle occupant 6.

    [0045] The method 100 carried out by the control device 2 is shown and described in more detail in FIG. 2.

    [0046] FIG. 2 shows a flow chart of a method 100 for adapting a triggering algorithm of a personal restraint device of the vehicle 1 shown in FIG. 1 on the basis of a detected vehicle interior state of the vehicle 1, wherein the vehicle 1 collides or will collide with a colliding target, and wherein the colliding target is a moving target vehicle. In step 110 key points 7 of the vehicle occupant 6 are detected by the optical sensor device, wherein the optical sensor device is the combination of the IR camera 4 and the 3D camera 3, and wherein 2D key points of the vehicle occupant 6 detected by the IR camera 4 are converted into 3D key points 7 of the vehicle occupant 6 by fusing the information provided by the 3D camera 3.

    [0047] In step 112 a vehicle occupant posture of the vehicle occupant 6 is ascertained based on the connection of the detected key points 7 to a skeleton-like representation of body parts of the vehicle occupant 6, wherein the skeleton-like representation reflects the relative position and orientation of individual body parts of the vehicle occupant 6.

    [0048] In step 114 a future vehicle occupant posture of the vehicle occupant 6 is predicted based on a predicted future position of at least one of the key points 7. The predicted future position of at least one of the key points 7 is estimated based on a calculated future vectorial key point velocity, wherein the future vectorial key point velocity is formed by a vector sum of an estimated vectorial key point velocity, a current vectorial vehicle velocity multiplied with a first scalar parameter derived from a signal of a crash sensor and a current vectorial colliding target velocity multiplied with a second scalar parameter derived from a signal of the same crash sensor. In this way, the future vectorial key point velocity is calculated by the following formula:


    $\vec{v}_{KP,t_{n+1}}=\vec{v}_{KP,t_{n+1},Est}+\alpha\cdot\vec{v}_{Vehicle}+\beta\cdot\vec{v}_{CT}$, [0049] wherein $\vec{v}_{KP,t_{n+1}}$ is the future vectorial key point velocity, $\vec{v}_{KP,t_{n+1},Est}$ is the estimated future vectorial key point velocity, $\alpha$ is the first scalar parameter derived from a signal of a crash sensor, $\vec{v}_{Vehicle}$ is the current vectorial vehicle velocity, $\beta$ is the second scalar parameter derived from a signal of the crash sensor and $\vec{v}_{CT}$ is a current vectorial colliding target velocity.

    [0050] The first scalar parameter and the second scalar parameter are each determined by also taking into account the vehicle size and the vehicle weight of the vehicle 1. The vectorial colliding target velocity is derived from a signal of a surrounding sensor of the vehicle 1, like a radar sensor, a lidar sensor, an ultrasonic sensor or a surrounding camera.

    [0051] This enables dynamic key point tracking and particularly precise prediction of the future key point position, and therefore precise prediction of a future vehicle occupant posture.
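The velocity formula of paragraph [0049] can be sketched in code as follows. This is an illustrative sketch only: the time step dt used to forward-integrate the position, and all variable names, are assumptions not taken from the disclosure.

```python
import numpy as np

def predict_key_point(p_kp, v_kp_est, v_vehicle, v_ct, alpha, beta, dt):
    """Predict a future key point velocity and position per
    v_KP(t_n+1) = v_KP,Est(t_n+1) + alpha * v_Vehicle + beta * v_CT.

    p_kp: current 3D key point position
    v_kp_est: estimated future vectorial key point velocity
    v_vehicle: current vectorial vehicle velocity
    v_ct: current vectorial colliding target velocity
    alpha, beta: scalar parameters derived from crash sensor signals
    dt: assumed prediction horizon (not specified in the disclosure)
    """
    # Vector sum of the three weighted velocity contributions.
    v_future = v_kp_est + alpha * v_vehicle + beta * v_ct
    # Forward-integrate to obtain the predicted future key point position.
    p_future = p_kp + v_future * dt
    return v_future, p_future
```

For example, with an estimated key point velocity of 1 m/s forward, a 10 m/s vehicle velocity weighted by alpha = 0.1, and a 5 m/s oncoming target velocity weighted by beta = 0.2, the contributions partially cancel, yielding a small net forward key point velocity.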

    [0052] In step 116 the triggering algorithm of the personal restraint device is modified based on the predicted future posture of the vehicle occupant 6.

    [0053] Thus, an accurate determination of the vehicle interior state is ensured, and thus maximum protection of the vehicle occupant 6 in the event of the restraint device being triggered is enabled.

    [0054] FIG. 3 shows the control device 2 for adapting a triggering algorithm of a personal restraint device of a vehicle 1 on the basis of a detected vehicle interior state of the vehicle 1. The control device 2 is configured or designed for carrying out the method 100 according to FIG. 2.

    [0055] FIG. 4 shows a flow chart of a method 100 for adapting a triggering algorithm of a personal restraint device of a vehicle 1 on the basis of a detected vehicle interior state of the vehicle 1 according to another embodiment. The method 100 essentially corresponds to the method 100 described with reference to FIG. 2, wherein the method 100 according to FIG. 4 comprises some further aspects.

    [0056] In step 120 key points 7 of a vehicle occupant 6 are detected by an optical sensor device, wherein the optical sensor device is a combination of an IR camera 4 and a 3D camera 3, and wherein 2D key points of the vehicle occupant 6 detected by the IR camera 4 are converted into 3D key points 7 of the vehicle occupant 6 by fusing the information provided by the 3D camera 3. Furthermore, a mobile object held by the vehicle occupant 6 is detected by the IR camera 4.

    [0057] In step 122 a vehicle occupant posture of the vehicle occupant 6 is ascertained based on the connection of the detected key points 7 to a skeleton-like representation of body parts of the vehicle occupant 6, wherein the skeleton-like representation reflects the relative position and orientation of individual body parts of the vehicle occupant 6.

    [0058] In step 124 a future vehicle occupant posture of the vehicle occupant 6 is predicted based on a predicted future position of at least one key point 7. The predicted future position of at least one key point 7 is estimated based on a calculated future vectorial key point velocity, wherein the future vectorial key point velocity is formed by a vector sum of an estimated vectorial key point velocity, a current vectorial vehicle velocity multiplied with a first scalar parameter derived from a signal of a crash sensor and a current vectorial colliding target velocity multiplied with a second scalar parameter derived from the signal of another crash sensor. In this way, the future vectorial key point velocity is calculated by the following formula:


    $\vec{v}_{KP,t_{n+1}}=\vec{v}_{KP,t_{n+1},Est}+\alpha\cdot\vec{v}_{Vehicle}+\beta\cdot\vec{v}_{CT}$,

    [0059] wherein $\vec{v}_{KP,t_{n+1}}$ is the future vectorial key point velocity, $\vec{v}_{KP,t_{n+1},Est}$ is the estimated future vectorial key point velocity, $\alpha$ is the first scalar parameter derived from a signal of a crash sensor, $\vec{v}_{Vehicle}$ is the current vectorial vehicle velocity, $\beta$ is a second scalar parameter derived from a signal of another crash sensor and $\vec{v}_{CT}$ is a current vectorial colliding target velocity.

    [0060] The first scalar parameter and the second scalar parameter are each determined by also taking into account the vehicle size and the vehicle weight of the vehicle 1. The vectorial colliding target velocity is derived from a signal of a surrounding sensor of the vehicle 1, like a radar sensor, a lidar sensor, an ultrasonic sensor or a surrounding camera.

    [0061] This enables dynamic key point tracking and particularly precise prediction of the future key point position, and therefore precise prediction of a future vehicle occupant posture.

    [0062] Furthermore, a future mobile object state is predicted, wherein the predicted future mobile object state comprises a future position of the mobile object inside the vehicle interior 5 relative to the steering wheel, a future velocity of the mobile object and a future orientation of the mobile object. The predicted future position of the mobile object is estimated based on a calculated future vectorial mobile object velocity, wherein the future vectorial mobile object velocity is formed by a vector sum of an estimated vectorial mobile object velocity, the current vectorial vehicle velocity multiplied with the first scalar parameter and the current vectorial colliding target velocity multiplied with the second scalar parameter. In this way, the future vectorial mobile object velocity is calculated by the following formula:


    $\vec{v}_{Obj,t_{n+1}}=\vec{v}_{Obj,t_{n+1},Est}+\alpha\cdot\vec{v}_{Vehicle}+\beta\cdot\vec{v}_{CT}$,

    [0063] wherein $\vec{v}_{Obj,t_{n+1}}$ is the future vectorial mobile object velocity, $\vec{v}_{Obj,t_{n+1},Est}$ is the estimated future vectorial mobile object velocity, $\alpha$ is the first scalar parameter derived from a signal of a crash sensor, $\vec{v}_{Vehicle}$ is the current vectorial vehicle velocity, $\beta$ is the second scalar parameter derived from the signal of another crash sensor and $\vec{v}_{CT}$ is the current vectorial colliding target velocity.

    [0064] The first scalar parameter and the second scalar parameter are each determined by also taking into account the vehicle size and the vehicle weight of the vehicle 1. The vectorial colliding target velocity is derived from a signal of a surrounding sensor of the vehicle 1, like a radar sensor, a lidar sensor, an ultrasonic sensor or a surrounding camera.

    [0065] This enables dynamic mobile object tracking and particularly precise prediction of the future mobile object position.
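The mobile object prediction of paragraphs [0062] and [0063] can be sketched as below. This is an illustrative sketch under stated assumptions: the time step dt, the steering-wheel reference position, the proximity radius, and all names are hypothetical and not taken from the disclosure; the proximity flag merely illustrates one way the predicted object position relative to the steering wheel could inform the triggering algorithm.

```python
import numpy as np

def predict_mobile_object(p_obj, v_obj_est, v_vehicle, v_ct,
                          alpha, beta, dt, p_wheel, radius):
    """Predict a future mobile object position per
    v_Obj(t_n+1) = v_Obj,Est(t_n+1) + alpha * v_Vehicle + beta * v_CT,
    and flag whether the object will lie within `radius` of the
    steering wheel reference position p_wheel.
    """
    # Vector sum of the three weighted velocity contributions.
    v_future = v_obj_est + alpha * v_vehicle + beta * v_ct
    # Forward-integrate to obtain the predicted future object position.
    p_future = p_obj + v_future * dt
    # Illustrative proximity check against the steering wheel position.
    near_wheel = bool(np.linalg.norm(p_future - p_wheel) < radius)
    return p_future, near_wheel
```

A triggering algorithm could, for instance, be modified when the object is predicted to end up between the occupant and a deploying airbag.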

    [0066] In step 126 the triggering algorithm of the personal restraint device is modified based on the predicted future posture of the vehicle occupant 6 and the predicted future mobile object state.

    [0067] In this way, a mobile object held by the vehicle occupant 6 is also taken into account, so that a more accurate determination of the vehicle interior state, and therefore further maximization of the protection of the vehicle occupant 6 in the event of a collision or crash, can be achieved.