INTERIOR OBSERVATION FOR SEATBELT ADJUSTMENT
20190359169 · 2019-11-28
CPC classification: B60R22/48; B60R21/01552; B60W10/30; B60R21/015; B60W30/08; B60R21/01538 (all in section B: Performing Operations; Transporting)
International classification: B60R22/195; B60W30/08; B60W10/30; B60R21/015 (all in section B: Performing Operations; Transporting)
Abstract
A driver assistance system for a vehicle may comprise a control unit that is configured to determine a state of a vehicle occupant via a neural network. The control unit may also activate a safety belt system for positioning and securing the vehicle occupant based on the identified state of the vehicle occupant.
Claims
1. A driver assistance system for a vehicle comprising: a control unit; and a safety belt system, wherein the control unit is configured to determine a state of a vehicle occupant via a neural network, and wherein the control unit is also configured to activate the safety belt system for at least one of positioning and securing the vehicle occupant based on the identified state of the vehicle occupant.
2. The driver assistance system according to claim 1, wherein the control unit is configured to identify a predefined driving situation, and wherein the control unit is configured to activate the safety belt system for positioning or securing the vehicle occupant when a predefined driving situation has been identified.
3. The driver assistance system according to claim 1, wherein the control unit is configured to identify parameters of a predefined driving situation, and wherein the control unit is configured to activate the safety belt system for positioning or securing the vehicle occupant based on these parameters.
4. The driver assistance system according to claim 1, wherein the control unit is configured to determine the state of the vehicle occupant through the analysis of one or more camera images from one or more vehicle interior cameras by the neural network.
5. The driver assistance system according to claim 1, wherein the state of the vehicle occupant is defined by the posture of the vehicle occupant and the weight of the vehicle occupant.
6. The driver assistance system according to claim 1, wherein the control unit is configured to generate a 3D model of the vehicle occupant based on the camera images from one or more vehicle interior cameras and to determine the state of the vehicle occupant through the analysis of the 3D model by the neural network.
7. The driver assistance system according to claim 1, wherein the safety belt system is composed of a plurality of units, and wherein each unit of the plurality of units is activated independently of the other units of the plurality of units.
8. The driver assistance system according to claim 1, wherein the safety belt system comprises one or more controllable belt tensioners.
9. The driver assistance system according to claim 1, wherein the safety belt system comprises a controllable belt lock.
10. A driver assistance system for a vehicle comprising: a control unit, wherein the control unit is configured to determine a state of a vehicle occupant via a neural network, and wherein the control unit is also configured to activate a safety belt system for securing the vehicle occupant based on the identified state of the vehicle occupant.
11. A method for a driver assistance system, the method comprising: determining a state of a vehicle occupant via a neural network of a control unit, and activating a safety belt system for securing the vehicle occupant based on the identified state of the vehicle occupant.
12. The method of claim 11, wherein the control unit is configured to identify a predefined driving situation, and wherein the control unit is configured to activate the safety belt system for positioning or securing the vehicle occupant when a predefined driving situation has been identified.
13. The method of claim 11, wherein the control unit is configured to identify parameters of a predefined driving situation, and wherein the control unit is configured to activate the safety belt system for positioning or securing the vehicle occupant based on these parameters.
14. The method of claim 11, wherein the control unit is configured to determine the state of the vehicle occupant through the analysis of one or more camera images from one or more vehicle interior cameras by the neural network.
15. The method of claim 11, wherein the state of the vehicle occupant is defined by the posture of the vehicle occupant and the weight of the vehicle occupant.
16. The method of claim 11, wherein the control unit is configured to generate a 3D model of the vehicle occupant based on the camera images from one or more vehicle interior cameras and to determine the state of the vehicle occupant through the analysis of the 3D model by the neural network.
17. The method of claim 11, wherein the safety belt system is composed of a plurality of units, and wherein each unit of the plurality of units is activated independently of the other units of the plurality of units.
18. The method of claim 11, wherein the safety belt system comprises one or more controllable belt tensioners.
19. The method of claim 11, wherein the safety belt system comprises a controllable belt lock.
20. The method of claim 11, further comprising installing the control unit.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION
[0025] According to the exemplary embodiments described below, a driver assistance system for a vehicle is created that comprises a control unit that is configured to determine a state of a vehicle occupant by means of a neural network, and to activate a safety belt system for positioning or securing the vehicle occupant based on the identified state of the vehicle occupant.
[0026] If the occupant is leaning forward, such that he would not be optimally protected by an airbag in the event of an accident, he may then be pulled back into a normal sitting position by tensioning the safety belt system, and restrained there. By way of example, the vehicle may skid prior to a collision. As a result, the occupants of the vehicle are displaced, e.g. to the side, toward the windshield, or toward the B-pillar of the vehicle, resulting in an increased risk of injury.
[0027] The control unit may be a control device, for example (electronic control unit, ECU, or electronic control module, ECM), which comprises a processor or the like. The control unit can be the control unit for an on-board computer in a motor vehicle, for example, and can assume, in addition to the generation of a 3D model of a vehicle occupant, other functions in the motor vehicle. The control unit can also be a dedicated component for generating a virtual image of the vehicle interior.
[0028] The processor may be a processing unit, e.g. a central processing unit (CPU), that executes program instructions.
[0029] According to one exemplary embodiment, the control unit is configured to identify a predefined driving situation, and to activate the safety belt system for positioning or securing the vehicle occupants when the predefined driving situation has been identified. By restraining the occupants prior to an accident, in particular prior to a collision, a braking procedure, or skidding, the occupants can be retained in an optimized position, such that the risk of injury to the occupants is reduced. Moreover, the vehicle driver is brought into a position in which he can react better to the critical situation, and potentially contribute to stabilizing the vehicle.
[0030] The control unit may be configured to identify parameters of a predefined driving situation, and activate the safety belt system for positioning or securing the vehicle occupants based on these parameters. The control unit is configured to activate a safety belt system, for example. In particular, the control unit is configured to activate the safety belt system based on the detection of an impending collision, depending on the posture and weight of the vehicle occupant.
[0031] The safety belt system may be composed of numerous units that are activated independently of one another. By way of example, the safety belt system can comprise one or more belt tensioners. Alternatively or additionally, the safety belt system can comprise a controllable belt lock.
[0032] The control unit may also be configured to determine the state of the vehicle occupant through the analysis of one or more camera images from one or more vehicle interior cameras by the neural network. The one or more vehicle interior cameras can be black-and-white or color cameras, stereo cameras, or time-of-flight cameras. The cameras preferably have wide-angle lenses. The cameras can be positioned such that every location in the vehicle interior lies within the viewing range of at least one camera. Typical postures of the vehicle occupants can be taken into account when installing the cameras, such that people do not block the view, or only block it to a minimal extent. The camera images are composed, e.g., of numerous pixels, each of which defines a gray value, a color value, or a depth value.
[0033] Additionally or alternatively, the control unit can be configured to generate a 3D model of the vehicle occupant based on camera images of one or more vehicle interior cameras, and to determine the state of the vehicle occupant through the analysis of the 3D model by the neural network. The control unit can also be configured to identify common features of a vehicle occupant in numerous camera images in order to generate a 3D model of the vehicle occupant. The identification of common features of a vehicle occupant can take place, for example, by correlating camera images with one another. A common feature can be a correlated pixel or group of pixels, or it can be certain structural or color patterns in the camera images. By way of example, camera images can be correlated with one another in order to identify correlating pixels or features, wherein the person skilled in the art can draw on appropriate image correlation methods that are known to him, e.g. methods such as those described by Olivier Faugeras et al. in the research report, Real-time correlation-based stereo: algorithm, implementations and applications, RR-2013, INRIA 1993. By way of example, two camera images can be correlated with one another. In order to increase the precision of the reconstruction, numerous camera images can be correlated with one another.
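The correlation step described above can be illustrated with a minimal sketch. The patent does not specify an algorithm, so zero-mean normalized cross-correlation over small pixel patches, the classical approach in the cited Faugeras et al. report, is assumed here; the window size and search range are hypothetical parameters:

```python
import numpy as np

def ncc(patch_a: np.ndarray, patch_b: np.ndarray) -> float:
    """Zero-mean normalized cross-correlation of two equally sized patches."""
    a = patch_a.astype(float) - patch_a.mean()
    b = patch_b.astype(float) - patch_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def best_match_column(left: np.ndarray, right: np.ndarray,
                      row: int, col: int, half: int = 2,
                      max_disparity: int = 16) -> int:
    """For a rectified stereo pair, search along the same row of the right
    image for the column whose patch correlates best with the left-image
    patch centered at (row, col)."""
    ref = left[row - half:row + half + 1, col - half:col + half + 1]
    best_col, best_score = col, -1.0
    for d in range(max_disparity):
        c = col - d  # corresponding points appear at smaller columns in the right image
        if c - half < 0:
            break
        cand = right[row - half:row + half + 1, c - half:c + half + 1]
        score = ncc(ref, cand)
        if score > best_score:
            best_score, best_col = score, c
    return best_col
```

The column offset `col - best_match_column(...)` is the disparity of the correlated pixel, which the stereoscopic reconstruction of the following paragraph can then convert into depth.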
[0034] The control unit may be configured to reconstruct the model of the vehicle occupant from current camera images by means of stereoscopic techniques. As such, the generation of a 3D model can comprise a reconstruction of the three dimensional position of a vehicle occupant, e.g. a pixel or feature, by means of stereoscopic techniques. The 3D model of the vehicle occupant obtained in this manner can be generated, for example, as a collection of the three dimensional coordinates of all of the pixels identified in the correlation process. In addition, this collection of three dimensional points can also be approximated by planes, in order to obtain a 3D model with surfaces.
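As a sketch of the stereoscopic techniques mentioned above, the standard pinhole-stereo relations can be used to turn a correlated pixel into a three-dimensional point; the focal length, baseline, and principal point below are assumed example values, not figures from the patent:

```python
def depth_from_disparity(focal_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Pinhole stereo: depth Z = f * B / d for a disparity of d pixels."""
    return focal_px * baseline_m / disparity_px

def point_from_pixel(u: float, v: float, cx: float, cy: float,
                     focal_px: float, depth_m: float):
    """Back-project pixel (u, v) at a known depth to camera coordinates,
    yielding one 3D point of the occupant model."""
    x = (u - cx) * depth_m / focal_px
    y = (v - cy) * depth_m / focal_px
    return (x, y, depth_m)
```

Applying this to every correlated pixel yields the collection of three-dimensional coordinates described above, which can then be approximated by planes to obtain a surface model.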
[0035] The state of the vehicle occupant can be defined, for example, by the posture of the vehicle occupant and the weight of the vehicle occupant. The control unit is configured, for example, to determine a posture and a weight of a vehicle occupant, and to activate the safety belt system on the basis of the posture and the weight of the vehicle occupant. The posture and weight of an occupant can be determined in particular by an image analysis of camera images from the vehicle interior cameras. In particular, the control unit can be configured to generate a 3D model of a vehicle occupant by evaluating camera images from one or more interior cameras or by correlating camera images from numerous vehicle interior cameras, which allows for conclusions to be drawn regarding the posture and weight. Posture refers herein to the body and head positions of the vehicle occupant, for example. Moreover, conclusions can also be drawn regarding further aspects of the occupant's state, e.g. the line of vision and the position of the wrists of the occupant.
[0036] The control unit may also be configured to generate the model of the vehicle occupant taking into account depth data provided by at least one of the cameras. Such depth data is provided, for example, by stereoscopic cameras or time-of-flight cameras. These cameras provide depth data for individual pixels, which can be drawn on in conjunction with the pixel coordinates for generating the model.
[0037] According to some embodiments, the safety belt system according to the invention is provided such that, after tensioning the belt tensioners, a controllable belt lock retains the occupants in a retracted position.
[0038] The exemplary embodiments described in greater detail below also relate to a method for a driver assistance system in which a state of a vehicle occupant (Ins) is determined by means of a neural network, and a safety belt system is activated for positioning or securing the vehicle occupant (Ins) based on the detected state of the vehicle occupant.
[0039] Now referring to the figures,
[0041] The environment sensors 6 are configured to record the environment of the vehicle, wherein the environment sensors 6 are mounted on the vehicle, and record objects or states in the environment of the vehicle. These include, in particular, cameras, radar sensors, lidar sensors, and ultrasound sensors. The recorded sensor data from the environment sensors 6 is transferred via the vehicle communication network 5 to the control unit 3, in which it is analyzed with regard to the presence of a critical driving situation, as is described below in reference to
[0042] Vehicle sensors 7 are preferably sensors that record a state of the vehicle or a state of vehicle components, in particular their state of movement. The sensors can comprise a vehicle speed sensor, a yaw rate sensor, an acceleration sensor, a steering wheel angle sensor, a vehicle load sensor, temperature sensors, pressure sensors, etc. By way of example, sensors can also be located along the brake lines in order to output signals indicating the brake fluid pressure at various locations along the hydraulic brake lines. Other sensors can be provided in the proximity of the wheels, which record the wheel speeds and the brake pressure applied to the wheel.
[0044] The processor 40 in the control unit 3 is configured to continuously receive camera images from the vehicle interior cameras Cam1-Cam8, and execute image analyses. The processor 40 in the control unit 3 is also, or alternatively, configured to generate a 3D model of one or more vehicle occupants by correlating camera images, as is shown in
[0045] The control unit 3 also comprises a memory and an input/output interface. The memory can be composed of one or more non-volatile computer-readable media, and comprises at least one program storage region and a data storage region. The program storage region and the data storage region can comprise combinations of various types of memory, e.g. a read-only memory 43 (ROM) and a random access memory 42 (RAM) (e.g. dynamic RAM (DRAM), synchronous DRAM (SDRAM), etc.). The control unit 3 can also comprise an external memory drive 44, e.g. an external hard disk drive (HDD), a flash memory drive, or a non-volatile solid state drive (SSD).
[0046] The control unit 3 also comprises a communication interface 45, via which the control unit can communicate with the vehicle communication network (5 in
[0053] Neural networks, in particular convolutional neural networks (CNNs), enable modeling of complex spatial relationships in image data, for example, and consequently a data-driven status classification (weight and posture of a vehicle occupant). With a capable computer, both the vehicle behavior and the behavior and state of the occupant can be modeled, in order to derive predictions for actions by passive safety systems, e.g. belt tensioners and belt locks.
[0054] The properties and implementation of neural networks are known to the relevant experts. In particular, reference is made here to the comprehensive literature regarding the structure, types of networks, learning rules, and known applications of neural networks.
[0055] In the present case, image data from cameras Cam1-Cam8 are sent to the neural network. The neural network can receive filtered image data, or the pixels P1, . . . , Pn thereof, as input, and process them in order to determine a driver's state as output, e.g. whether the vehicle occupant is in an upright position (output neuron P1) or in a slouched position (output neuron P2), and whether the vehicle occupant is light (output neuron G1), of medium weight (output neuron G2), or heavy (output neuron G3). The neural network can classify a recorded vehicle occupant, for example, as an occupant in an upright position or an occupant in a slouched posture, and as a light occupant, medium-weight occupant, or heavy occupant.
[0056] The neural network can contain a neural network constructed according to a multi-level (or deep) model. A multi-level neural network model can contain an input layer, numerous inner layers, and an output layer. A multi-level neural network model can also contain a loss layer. For the classification of sensor data (e.g. a camera image), values in the sensor data (e.g. pixel values) are assigned to input nodes and then fed through the numerous inner layers of the neural network. The numerous inner layers can execute a series of non-linear transformations. After the transformations, an output node produces a value corresponding to the classification (e.g. upright or slouched) that is deduced by the neural network.
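The multi-level model described above can be sketched as a toy forward pass. This is not the patent's network: the layer sizes are arbitrary, the weights are random stand-ins for trained parameters, and only the two output heads (posture P1/P2 and weight G1-G3) mirror the classification scheme described above:

```python
import numpy as np

def softmax(z: np.ndarray) -> np.ndarray:
    """Turn raw output-node values into class probabilities."""
    e = np.exp(z - z.max())
    return e / e.sum()

class OccupantStateNet:
    """Toy two-head classifier: one shared hidden layer, a 2-way posture
    head (upright / slouched) and a 3-way weight head (light / medium /
    heavy). The weights are random placeholders; a real system would
    learn them from labeled interior-camera images."""

    def __init__(self, n_pixels: int, n_hidden: int = 16, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.1, (n_hidden, n_pixels))
        self.Wp = rng.normal(0.0, 0.1, (2, n_hidden))  # posture head (P1, P2)
        self.Wg = rng.normal(0.0, 0.1, (3, n_hidden))  # weight head (G1, G2, G3)

    def forward(self, pixels: np.ndarray):
        h = np.maximum(self.W1 @ pixels, 0.0)  # ReLU: the non-linear transformation
        posture = softmax(self.Wp @ h)         # P1 = upright, P2 = slouched
        weight = softmax(self.Wg @ h)          # G1 = light, G2 = medium, G3 = heavy
        return posture, weight
```

Each head outputs a probability distribution over its classes; taking the argmax of each yields the discrete status classification used by the belt-tensioning logic.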
[0057] The neural network is configured (trained) such that for certain known input values, the expected responses are obtained. If such a neural network has been trained, and its parameters have been set, the network is normally used as a type of black box, which also produces associated and appropriate output values for unfamiliar input values.
[0058] In this manner, the neural network can be trained to distinguish between desired classifications, e.g. occupant in upright position, or occupant in slouched position, light occupant, medium weight occupant, and heavy occupant, based on camera images.
[0060] The status classifications listed herein are schematic and exemplary. Additionally or alternatively, other states can be defined, and it would also be conceivable to draw conclusions regarding the behavior of the vehicle occupant from a camera image from the interior cameras Cam1-Cam8, or a 3D model of the vehicle occupant. By way of example, a line of vision, a wrist position, etc. could be derived from the image data, and classified by means of a neural network.
[0062] According to the invention, the belt tensioners GSTO and GSTU are activated by the control unit (3 in
[0063] The intention is to bring the occupant into an optimal position prior to a collision, with a corresponding pulling direction and tensile force applied by the belt tensioner GST in conjunction with the belt lock GSP. The optimal position is defined herein as the position in which the passive safety system (airbag, etc.) attains its optimal level of efficiency. It is assumed that this corresponds to the upright position of the occupant, with the belt tensioned. If, for example, a passenger assumes a slouched position, he is then no longer in the position in which optimal protection by the airbag is ensured, and his position can be corrected by tensioning the safety belt.
[0064] The optimal position is obtained more quickly as a result of the belt lock GSP, because the length of belt that is to be retracted between the upper belt tensioner GSTO and the belt lock GSP is decisive, and there is no need to retract the entire length of belt between the two belt tensioners.
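The advantage of the belt lock can be illustrated with a back-of-the-envelope calculation; the belt lengths and retraction speed below are hypothetical values, not figures from the patent:

```python
def retraction_time(belt_length_m: float, retract_speed_mps: float) -> float:
    """Time to retract a given length of belt at a constant retraction speed."""
    return belt_length_m / retract_speed_mps

# Hypothetical numbers: 0.9 m of belt runs between the two belt tensioners,
# of which 0.3 m lies between the upper tensioner GSTO and the belt lock GSP.
without_lock = retraction_time(0.9, 1.5)  # entire belt length must be retracted
with_lock = retraction_time(0.3, 1.5)     # lock clamps; only the upper segment counts
```

With these assumed values the locked configuration reaches the optimal position in a third of the time, which is the effect paragraph [0064] describes.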
[0065] A belt tensioner can be in the form of an electric motor, for example. In this case, a voltage that is higher than the nominal voltage of the electric motor can be supplied to the electric motor serving as a belt tensioner, in order to generate the increased belt tensioning force. Alternatively, a gearing ratio of the electric motor can be altered. In a further alternative embodiment, the increased belt tensioning force can be obtained by means of a mechanical or electrical energy store.
[0066] According to the invention, the control unit is configured to activate the belt tensioner in the safety belt system 4 and introduce defined forces when a critical driving situation has been identified, e.g. in the event of a predicted collision or a predicted emergency braking, which may be triggered by an actuation of the brake pedal, by the detection of an object with forward-looking sensors, or by the braking assistance.
[0067] The control unit 3 is also configured such that the state of a vehicle occupant determined by the image processing is incorporated into the control of the belt tensioner. As a result, the level of force can be increased for heavier occupants, and reduced for lighter occupants, in order to thus ensure not only optimal safety, but also maximum comfort for the occupant.
[0068] A heuristic is provided for the adapted use of the belt tensioner, for example, which defines a corresponding belt tensioning routine based on the posture and weight of the occupant, as well as the vehicle status/driving situation. Additionally or alternatively, this heuristic can be learned based on data, and thus optimized.
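Such a heuristic might be sketched as a simple lookup. All force values, scale factors, and category names below are invented for illustration and do not come from the patent:

```python
def belt_force(posture: str, weight_class: str, situation: str) -> float:
    """Hypothetical heuristic mapping occupant state and driving situation
    to a belt-tensioning force in newtons; all values are illustrative."""
    base = {"braking": 150.0,
            "skidding": 200.0,
            "collision_predicted": 300.0}[situation]
    # Heavier occupants receive more force, lighter ones less (see [0067]).
    weight_scale = {"light": 0.7, "medium": 1.0, "heavy": 1.3}[weight_class]
    # A slouched occupant needs an extra pull to be re-seated upright.
    posture_add = 50.0 if posture == "slouched" else 0.0
    return base * weight_scale + posture_add
```

A data-driven variant, as the paragraph suggests, would replace the hand-set table entries with parameters learned from recorded tensioning outcomes.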
[0070] As can be seen from the table in
[0071] In a preferred embodiment of the present invention, the belt parameters are also adapted taking into account a predicted deceleration that the driver would experience in a collision or braking procedure. In order to anticipate the deceleration, a collision prediction is first carried out. The aim is to estimate, for example, the point of no return, at which point a collision can no longer be avoided and impact is imminent. The deceleration strategy and the resulting decelerations are then derived on the basis of this point of no return and the resulting impact speed.
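The point-of-no-return estimate can be illustrated with elementary kinematics; the assumed maximum braking deceleration is a placeholder value, and a real predictor would also account for reaction and actuator delays:

```python
def required_deceleration(speed_mps: float, distance_m: float) -> float:
    """Constant deceleration needed to stop within a distance: a = v^2 / (2 s)."""
    return speed_mps ** 2 / (2.0 * distance_m)

def collision_unavoidable(speed_mps: float, distance_m: float,
                          max_decel_mps2: float = 9.0) -> bool:
    """The 'point of no return' is passed once the needed deceleration
    exceeds what the brakes can physically deliver."""
    return required_deceleration(speed_mps, distance_m) > max_decel_mps2
```

At 20 m/s with 25 m to the obstacle, 8 m/s² suffices and the collision is still avoidable; at 20 m to the obstacle, 10 m/s² would be needed, the assumed braking limit is exceeded, and the belt parameters would be set for the resulting impact speed.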
[0074] The upper table in
[0075] The lower table in
[0076] The use of a neural network for determining the driver state enables, for example, a determination of a so-called attention map, which indicates which parts of a vehicle occupant are particularly relevant for the detection of the occupant's state.