Method and System for Seat Belt Status Detection
20230166687 · 2023-06-01
Inventors
CPC classification
G06V20/59
PHYSICS
B60R22/48
PERFORMING OPERATIONS; TRANSPORTING
G06V10/811
PHYSICS
B60R2022/4866
PERFORMING OPERATIONS; TRANSPORTING
B60R2022/4808
PERFORMING OPERATIONS; TRANSPORTING
G06V10/22
PHYSICS
International classification
B60R22/48
PERFORMING OPERATIONS; TRANSPORTING
G06V10/22
PHYSICS
G06V10/80
PHYSICS
G06V20/59
PHYSICS
Abstract
Disclosed are techniques and apparatuses for seat belt status detection in a vehicle. In an aspect, a method includes the operations of: obtaining first infrared (IR) images and second visible light images of a seat belt region where a seat belt is expected to be visible when it is worn by a person in the vehicle; transmitting the first images to a first classifier and the second images to a second classifier; determining a first estimate of probability and a second estimate of probability that the seat belt is correctly worn, from the first and second classifiers; combining the two estimates of probability, using respective scores attributed to the first and second images, each score being indicative of the trustworthiness of the corresponding image to detect a seat belt status, to determine a combined probability; and comparing the combined probability to a predetermined threshold to determine a seat belt status.
Claims
1. A method comprising: obtaining at least one first image in the infrared spectrum of at least part of a seat belt region where a seat belt is expected to be visible when it is worn by a person in a vehicle; obtaining at least one second image in the visible light spectrum of at least part of the seat belt region used to obtain the at least one first image; transmitting the at least one first image to a first classifier, dedicated for images in the infrared spectrum; transmitting the at least one second image to a second classifier, dedicated for images in the visible light spectrum; determining from the first classifier a first estimate of probability that the at least one first image includes a seat belt correctly worn; determining from the second classifier a second estimate of probability that the at least one second image includes a seat belt correctly worn; combining the first estimate of probability and the second estimate of probability, using respective scores attributed to the at least one first image and to the at least one second image, each score being indicative of a trustworthiness of the corresponding image to detect a seat belt status, to determine a combined probability; and comparing the combined probability to a predetermined threshold to determine a seat belt status.
2. The method according to claim 1, further comprising: performing image analysis to extract information from each of the at least one first image and the at least one second image; and attributing the score to each of the at least one first image and at least one second image based on the information extracted from the corresponding image.
3. The method according to claim 2, wherein the operation of performing image analysis comprises, for each image, at least one of the following image analysis tasks: analysis of statistical properties of the image; detection of an occlusion to detect that vision of the seat belt is occluded in the image; seat belt color profile recognition to recognize a predetermined profile of a seat belt color in the image; or clothing detection to detect clothing occluding a back area in the image.
4. The method of claim 3, wherein the image analysis tasks are performed by a digital image processor.
5. The method according to claim 1, wherein the operation of obtaining the at least one first image further comprises: obtaining the at least one first image from a first image capturing device, the first image capturing device configured for capturing the at least one first image in the infrared spectrum inside the vehicle; wherein the operation of obtaining the at least one second image further comprises: obtaining the at least one second image from a second image capturing device, the second image capturing device configured for capturing the at least one second image in the visible light spectrum inside the vehicle; and the method further comprising: generating, from said at least one first image, a plurality of first images of different parts of the seat belt region respectively; and generating, from said at least one second image, a plurality of second images of different parts of the seat belt region respectively.
6. The method according to claim 5, wherein the operation of obtaining the at least one first image further comprises: selecting one or more of the plurality of first images; wherein the operation of obtaining the at least one second image further comprises: selecting one or more of the plurality of second images; and wherein only the selected first and second images are transmitted to first and second classifiers, respectively.
7. The method according to claim 6, wherein the selection of the first and second images is performed based on the scores attributed to the first images and second images.
8. The method according to claim 5, further comprising: detecting body key points for one or more occupants of the vehicle visible in each of the first and second images; and determining the seat belt region, based on the detected body key points, in each of the first and second images.
9. The method according to claim 5, wherein the operations of obtaining at least one first image and obtaining at least one second image are repeated cyclically, and, at each cycle, N first images are generated from the at least one first image, N second images are generated from the at least one second image, and M first images from the N first images and M second images from the N second images are selected, with M<N, wherein image positions of the selected images change at each cycle.
10. The method according to claim 9, wherein a result of the combining operation is stabilized by a feedback loop where the scores of the at least one first image and of the at least one second image and the outputs of the first and second classifiers are provided as inputs for the next execution of the cycle.
11. The method according to claim 1, wherein, in the operation of combining, the first estimate of probability of the at least one first image and the second estimate of probability of the at least one second image are weighted using the scores respectively attributed to the at least one first image and to the at least one second image to compute the combined probability.
12. The method according to claim 1, wherein the operation of combining uses predefined rules for combining the estimates of probability of the at least one first image and the at least one second image.
13. The method of claim 1, wherein at least one of: obtaining the at least one first image comprises: obtaining the image from a first image capturing device; obtaining the at least one second image comprises: obtaining the image from a second image capturing device; or obtaining the at least one first image comprises obtaining the first image from a first image capturing device and obtaining the at least one second image comprises obtaining the second image from the first image capturing device.
14. A system for a vehicle, comprising: an image capturing device; and a processing device operable to: obtain, from the image capturing device, at least one first image in the infrared spectrum of at least part of a seat belt region where a seat belt is expected to be visible when it is worn by a person in the vehicle; obtain, from the image capturing device, at least one second image in the visible light spectrum of at least part of the seat belt region used to obtain the at least one first image; transmit the at least one first image to a first classifier, dedicated for images in the infrared spectrum; transmit the at least one second image to a second classifier, dedicated for images in the visible light spectrum; determine from the first classifier a first estimate of probability that the at least one first image includes a seat belt correctly worn; determine from the second classifier a second estimate of probability that the at least one second image includes a seat belt correctly worn; combine the first estimate of probability and the second estimate of probability, using respective scores attributed to the at least one first image and to the at least one second image, each score being indicative of a trustworthiness of the corresponding image to detect a seat belt status, to determine a combined probability; and compare the combined probability to a predetermined threshold to determine a seat belt status.
15. The system of claim 14, further comprising the vehicle.
16. The system of claim 14, wherein the processing device is further operable to: perform image analysis to extract information from each of the at least one first image and the at least one second image; and attribute the score to each of the at least one first image and at least one second image based on the information extracted from the corresponding image.
17. The system of claim 16, wherein the operation of performing image analysis comprises, for each image, at least one of the following image analysis tasks: analysis of statistical properties of the image; detection of an occlusion to detect that vision of the seat belt is occluded in the image; seat belt color profile recognition to recognize a predetermined profile of the seat belt color in the image; or clothing detection to detect clothing occluding a back area in the image.
18. The system of claim 17, further comprising: a digital image processor, wherein the digital image processor performs the image analysis tasks.
19. The system of claim 14, wherein the operation of obtaining the at least one first image further comprises: obtaining the at least one first image from a first image capturing device, the first image capturing device configured to capture the at least one first image in the infrared spectrum inside the vehicle; wherein the operation of obtaining the at least one second image further comprises: obtaining the at least one second image from a second image capturing device, the second image capturing device configured to capture the at least one second image in the visible light spectrum inside the vehicle; and wherein the processing device is further operable to: generate, from said at least one first image, a plurality of first images of different parts of the seat belt region respectively; and generate, from said at least one second image, a plurality of second images of different parts of the seat belt region respectively.
20. An apparatus comprising: a processor; and a computer-readable medium having stored thereon instructions that, responsive to execution by the processor, cause the processor to execute operations comprising: obtaining, from an image capturing device, at least one first image in the infrared spectrum of at least part of a seat belt region where a seat belt is expected to be visible when it is worn by a person in a vehicle; obtaining, from the image capturing device, at least one second image in the visible light spectrum of at least part of the seat belt region used to obtain the at least one first image; transmitting the at least one first image to a first classifier, dedicated for images in the infrared spectrum; transmitting the at least one second image to a second classifier, dedicated for images in the visible light spectrum; determining from the first classifier a first estimate of probability that the at least one first image includes a seat belt correctly worn; determining from the second classifier a second estimate of probability that the at least one second image includes a seat belt correctly worn; combining the first estimate of probability and the second estimate of probability, using respective scores attributed to the at least one first image and to the at least one second image, each score being indicative of a trustworthiness of the corresponding image to detect a seat belt status, to determine a combined probability; and comparing the combined probability to a predetermined threshold to determine a seat belt status.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0017] Other features, purposes and advantages of the disclosure will become more apparent from the following detailed description of non-restrictive embodiments, made with reference to the accompanying drawings.
DETAILED DESCRIPTION
[0023] The present disclosure relates to techniques and apparatuses for seat belt status detection in a vehicle.
[0024] The image capturing device 200 is operable for capturing images in the infrared (IR) part and/or in the near-infrared (NIR) part of the electromagnetic spectrum, for example wavelengths between 750 nm and 1000 nm, and images in the visible light part of the electromagnetic spectrum, for example wavelengths between 380 nm and 750 nm. In an embodiment, the image capturing device 200 may have four spectral channels for red, green, blue and IR and/or NIR wavelengths. For example, the image capturing device 200 is an RGB-IR camera. Two images of the same scene, respectively in IR and in visible light or color, may be captured simultaneously by the device 200 at a given point in time. When one or more occupants are in the vehicle, the captured images include one or more seat belt regions, each seat belt region being an area where a seat belt is expected to be visible when it is worn by an occupant in the vehicle. For example, the images captured by the capturing device 200 correspond to a front view of the vehicle interior including the two front seats and part of the rear seats, as illustrated in
[0025] In the following description, the images captured in the infrared or near-infrared spectrum will be referred to as “IR images”, and the images captured in the visible light spectrum will be referred to as “RGB images” or “color images”. However, the present disclosure is not limited to IR and RGB.
[0026] The processing device 300 has the function of receiving and processing initial images from the image capturing device 200 and, based on the received initial images, detecting whether an occupant of the vehicle is correctly wearing a seat belt, in other words detecting a seat belt status.
[0027] The processing device 300 may include a processor 310, a first classifier 320, a second classifier 330, a ranking module 340, a fusion module 350, and a seat belt status detection module 360. The two classifiers 320, 330, the ranking module 340, the fusion module 350 and the seat belt status detection module 360 may be implemented by software or computer programs running on the processor 310.
[0028] The two classifiers 320, 330 may have a function of seat belt recognition in an image, for determining whether a seat belt is present or visible in an image, and/or a function of identifying different cases of seat belt misuse in an image, for detecting whether a seat belt is misused, not correctly worn, in the image. The classifiers 320, 330 may be two separate modules 320, 330, one for the IR and/or NIR channel and the other one for the visible light channel, for example a RGB channel. The first classifier 320 is dedicated for processing IR and/or NIR images, and the second classifier 330 is dedicated for processing the RGB or color images. In operation, each classifier 320, 330 receives an image as an input and may determine an estimate of probability that the image includes a seat belt correctly worn.
[0029] The two classifiers 320, 330 may be neural networks trained for seat belt recognition and/or for identification of different cases of seatbelt misuse in an image. In that case, the first classifier 320 is trained with infrared images and the second classifier 330 is trained with RGB or color images.
[0030] In another embodiment, the classifiers 320, 330 may implement another algorithm suited for classification problems, such as support vector machines (SVM), randomized decision forests, etc.
[0031] The ranking module 340 has the function of attributing scores to images. The score attributed to each image is indicative of the trustworthiness of the corresponding image to detect a seat belt status; it represents how trustworthy, or relevant, the image is for detecting a seat belt status. A main purpose of the ranking module 340 is to analyze different aspects of an image that are relevant for determining whether the content in the image is likely to help in determining a seat belt status. In an embodiment, the process of attributing a score to an image may be based on analysis of information in the image. This analysis extracts from the image information that is useful for evaluating how trustworthy the image is for detecting the seat belt and/or the seat belt status. The ranking module 340 may include one or more functional blocks operable for performing different tasks of analysis of information in the image, such as: a block 341 for analyzing statistical properties of the image, such as brightness, contrast, etc.; a block 342 for occlusion detection, that detects that vision of the seat belt is occluded in the image, for example by an arm; a block 343 for seat belt color profile recognition, that recognizes a predetermined profile of the seat belt color in the image; and a block 344 for clothing detection, that detects clothing occluding a back area in the image.
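As an illustrative sketch of the kind of statistical analysis block 341 might perform, the following Python fragment scores an image from its brightness and contrast. The target brightness, minimum contrast, and equal weighting are hypothetical choices, not values taken from the disclosure:

```python
import numpy as np

def statistical_score(image: np.ndarray,
                      target_brightness: float = 128.0,
                      min_contrast: float = 20.0) -> float:
    """Return a trustworthiness score in [0, 1]; higher is better."""
    brightness = float(image.mean())
    contrast = float(image.std())
    # Penalize over- or under-exposed images (mean far from mid-gray).
    brightness_term = 1.0 - min(abs(brightness - target_brightness) / target_brightness, 1.0)
    # Penalize flat, low-contrast images where a belt edge is hard to see.
    contrast_term = min(contrast / min_contrast, 1.0)
    return 0.5 * brightness_term + 0.5 * contrast_term

# A well-exposed, textured crop scores higher than a nearly black one.
rng = np.random.default_rng(0)
good = rng.integers(80, 180, size=(64, 64)).astype(np.uint8)
dark = np.full((64, 64), 5, dtype=np.uint8)
```

In a full system, scoring block 345 would fold results like this together with the occlusion, color-profile, and clothing checks according to its predefined rules.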
[0032] The ranking module 340 could include other functional blocks for implementing other types of tasks of analysis of information in images and/or, optionally, in the vehicle environment. The analysis of information in the environment makes it possible to determine current environmental conditions such as luminosity, weather conditions, time of day, etc.
[0033] The ranking module 340 may further include a scoring block 345 that receives, for each image to be scored, image analysis results from the functional blocks 341-344, and computes a score to be attributed to the image, based on predefined rules. The score attributed to each image may correspond to a weighting factor, as explained later. The scores of images are transmitted to the fusion module 350.
[0034] The fusion module 350 has the function of combining or fusing estimates of probability of a plurality of images, that are determined by the first classifier 320 and second classifier 330, to compute a combined probability, as will be explained later in more detail. The combined probability may then be transmitted by the fusion module 350 to the seat belt status detection module 360.
[0035] The seat belt status detection module 360 is responsible for comparing the combined probability to a predetermined detection threshold, to detect a status of the seat belt, indicating whether the seat belt is correctly worn by the occupant of the vehicle or not.
[0036] In an embodiment, the system 100 may further include an image generator 370 operable for: generating a plurality of first images, such as IR or NIR images, that may be termed as sub-images, respectively corresponding to different parts of a seat belt region, from an initial IR image from the image capturing device 200; generating a plurality of second images, such as visible light images or RGB images or color images, that may be termed as sub-images, respectively corresponding to different parts of the seat belt region, from an initial RGB or color image from the image capturing device 200.
[0037] The first images and second images may be cropped images from the initial IR image and the initial visible light image, respectively. For example, the first images and second images are delimited by bounding boxes, for example of rectangular shape, as illustrated in
[0038] The system 100 may further include a body key points detection module 380 for detecting and locating body key points for one or more occupants of the vehicle visible in an image. The body key points may include one or more points for left and right shoulders, hip, elbow, hands, and/or face of the occupant. The detected body key points are intended to be used by the generator 370 to identify and locate one or more seat belt regions, where a seat belt is expected to be visible when it is worn by an occupant in the vehicle in the image. For example, a seat belt region may be defined by an area around a line, possibly curved, extending from a given shoulder point—left or right depending on the seat—to an opposite hip point of an occupant.
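A minimal sketch of how a seat belt region could be derived from two body key points, as described above: the region is modeled as points sampled along the line from a shoulder to the opposite hip, with a band half-width around it. The key-point coordinates, the number of sample points, and the half-width are illustrative assumptions:

```python
import numpy as np

def seat_belt_region(shoulder, hip, n_points: int = 5, half_width: int = 30):
    """Sample points along the shoulder-to-hip line.

    Returns (list of center points, band half-width in pixels).
    """
    s = np.asarray(shoulder, dtype=float)
    h = np.asarray(hip, dtype=float)
    # Evenly spaced interpolation parameters from shoulder (0) to hip (1).
    ts = np.linspace(0.0, 1.0, n_points)
    centers = [tuple(s + t * (h - s)) for t in ts]
    return centers, half_width

# Hypothetical key points for one occupant: shoulder at (100, 80), opposite hip at (220, 300).
centers, hw = seat_belt_region((100, 80), (220, 300))
```

A curved line (e.g., a spline through additional key points) could replace the straight segment, as the paragraph above notes the line is "possibly curved".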
[0039] The system 100 may further include a module 390 for selection of images among all the first IR images and second RGB or color images produced by the generator 370 respectively from one initial IR image and one initial RGB image, both captured simultaneously or approximately simultaneously. The task of the selection module 390 is to determine which images from the first IR images are to be transmitted to the first classifier 320, and which images from the second RGB images are to be transmitted to the second classifier 330. A purpose of the image selection is to limit the computational effort required for seat belt status detection, as only the selected images are passed to the classifiers 320, 330 for classification. The image selection saves computational resources, since the tasks performed by the classifiers are only executed on the selected images, and not on images that are not selected.
[0040] In one instance of the detection process or algorithm, the selection module 390 might determine that all first and second images may be transmitted to the corresponding first classifier and second classifier in case the computational budget allows it, in other words in case the available computational resources are sufficient. In another instance of the process, the selection module 390 might ignore, not select, one or more particular images, IR image(s) and/or color image(s), with the lowest scores provided by the ranking module 340, or with scores under a predetermined critical threshold. Low scores may indicate that the corresponding images are not trustworthy for seat belt status detection, for example because the location of the image or sub-image in the initial image or the modality of the image, IR or RGB, is not informative for detecting the seat belt status at a current point in time.
[0041] The generator 370, the module for body key points detection 380, and the module 390 for image selection may be implemented by software or computer programs running on the processor 310.
[0042] A computer-implemented method for seat belt status detection in a vehicle, executed by the system 100, is illustrated in
[0043] The method includes the execution of the following detection process or algorithm, under control of the processor 310.
[0044] At a given point in time, in a step S1, the image capturing device 200 captures one initial IR image IR_IM.sub.0 and one initial RGB or color image RGB_IM.sub.0 of a same scene in the vehicle, for example a wide-angle view of the two front seats and partially the rear seats.
[0045] In a step S2, the two initial IR and RGB images IR_IM.sub.0 and RGB_IM.sub.0 are transmitted from the image capturing device 200 to the processing device 300.
[0046] In a step S3, the body key points module 380 processes each of the initial IR image IR_IM.sub.0 and the initial RGB image RGB_IM.sub.0, and detects body key points, for example left/right shoulders, hip, hands, etc., for one or more occupants of the vehicle visible in the initial images.
[0047] In a step S4, the image generator 370 generates a plurality of sub-images from each initial image IR_IM.sub.0 and RGB_IM.sub.0, using detected body key points provided by the module 380. The sub-images in the IR or NIR spectrum are termed as “first images” and the sub-images in the RGB or color spectrum are termed as “second images”. More precisely, based on the body key points detected in each initial image IR_IM.sub.0, RGB_IM.sub.0, the image generator 370 determines, for each occupant visible in the initial image, one seat belt region corresponding to an area in the image where the seat belt is expected to be visible when it is correctly worn by the occupant in the vehicle. For example, a curved line extending from a shoulder to an opposite hip, along which the seat belt is expected to be located may be determined. In case two occupants are seated in the two front seats of the vehicle and are both visible in each of the IR image IR_IM.sub.0 and the RGB image RGB_IM.sub.0, the module 380 detects body key points for the two front occupants and the generator 370 determines two corresponding seat belt regions in each of the IR image IR_IM.sub.0 and the RGB image RGB_IM.sub.0. Then, the generator 370 generates: a plurality of first IR images or sub-images IR_IM.sub.1i . . . , from the initial IR image IR_IM.sub.0, with i=1, 2, . . . , corresponding to different parts of each seat belt region respectively, and a plurality of second RGB images or sub-images RGB_IM.sub.2j . . . , from the initial RGB image RGB_IM.sub.0, with j=1, 2, . . . , corresponding to different parts of each seat belt region.
[0048] The sub-images may be located along a line defining the seat belt region, adjacent to each other.
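The generation of adjacent sub-images along the seat belt line can be sketched as follows. The box size, image dimensions, and straight-line placement are illustrative assumptions; the disclosure leaves these parameters open:

```python
import numpy as np

def crop_sub_images(initial: np.ndarray, start, end, n: int, box: int):
    """Return n square crops of side `box`, centered on points evenly
    spaced along the line from `start` to `end` ((row, col) coordinates)."""
    s = np.asarray(start, dtype=float)
    e = np.asarray(end, dtype=float)
    crops = []
    for t in np.linspace(0.0, 1.0, n):
        cy, cx = (s + t * (e - s)).round().astype(int)
        half = box // 2
        # Clamp to image bounds so crops near the border stay valid.
        y0 = max(cy - half, 0)
        x0 = max(cx - half, 0)
        crops.append(initial[y0:y0 + box, x0:x0 + box])
    return crops

ir_im0 = np.zeros((480, 640), dtype=np.uint8)   # stand-in for initial IR image IR_IM.sub.0
subs = crop_sub_images(ir_im0, (80, 100), (300, 220), n=4, box=64)
```

The same routine would run on the initial RGB image to produce the second images, so the IR and RGB sub-images at a given index cover matching parts of the seat belt region.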
[0049] Optionally, the image generator 370 may determine a buckle region, that is expected to include the seat belt buckle assembled with the buckle receiver, in each initial image IR_IM.sub.0 and RGB_IM.sub.0, for each occupant in the image, using the detected body key points. In that case, the image generator may further generate a first IR image, or sub-image, corresponding to the buckle region from the initial IR image IR_IM.sub.0 and a second RGB image, or sub-image, corresponding to the buckle region in the initial RGB image RGB_IM.sub.0.
[0050] The first images may be cropped from the initial IR image IR_IM.sub.0, and the second images may be cropped from the initial RGB image RGB_IM.sub.0. For example, the cropped images may be delimited by bounding boxes, for example rectangular, as illustrated in
[0051] The generation of the first IR images, or sub-images, IR_IM.sub.1i and second RGB images, or sub-images, RGB_IM.sub.2j from the initial IR image and from the initial RGB image, respectively, is not limited to using body key points. In another embodiment, the locations or orientations of the first IR images, or sub-images, and second RGB images, or sub-images, are based on fixed positions in the initial image, or on other parameters that give a hint as to which location in the image is most informative for extracting the seat belt status.
[0052] In a variant, the generator 370 could generate one sub-image, or first image, from the initial IR image IR_IM.sub.0 and one sub-image, or second image, from the initial color image RGB_IM.sub.0.
[0053] In a step S5, the ranking module 340 performs an image analysis of each of the images generated by the generator 370, including the first IR images and the second RGB images, to extract information from each of the generated images. Thus, the ranking module 340 may analyze different aspects in each of the generated images that are relevant for determining whether the content in the analyzed image is likely to help in determining a seat belt status, in order to attribute a score to the analyzed image. The ranking module 340 may execute one or more tasks of analysis of information in the image, such as: a statistical analysis of the properties of the image, such as brightness, contrast, etc.; an occlusion detection, to detect if the seat belt is not visible because it is occluded by something; a seat belt color profile recognition, to determine if the seat belt in the image matches a predetermined color profile of the vehicle seat belt; and a clothing detection, to detect clothing, for example a jacket, occluding a back area in the image.
[0054] The tasks of analysis of information in image may be performed by digital image processing means (e.g., a digital image processor).
[0055] Then, in a step S6, the ranking module attributes a score to each of the images generated by the generator 370, using the information extracted by image analysis. The extracted information may include the results of the tasks of analysis of information.
[0056] In a step S7, the selection module 390 selects images, among all the first IR images IR_IM.sub.1i and second RGB images RGB_IM.sub.2j generated by the generator 370, based on the scores. For example, the images having a score that is higher than a predetermined threshold are selected, or the p images with the highest scores are selected, p being a predetermined number. The selection may be configurable. For example, it may be adjusted depending on the available computational resources and/or a computational budget. The fewer the available computational resources, the more important the selection. The selected images may include a mix of one or more first IR images IR_IM.sub.1i and one or more second RGB images RGB_IM.sub.2j. In some cases, all the generated images IR_IM.sub.1i and RGB_IM.sub.2j may be selected. It might happen that only one or more first IR images IR_IM.sub.1i are selected, and no second RGB images are selected, or vice versa. The image selection among all the generated sub-images also makes it possible to dynamically determine which location in the initial IR and color images to trust more in the current situation.
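Both selection strategies of step S7 (score threshold, or top-p by score) can be sketched as below. The identifiers, scores, threshold, and p values are hypothetical:

```python
def select_images(scored, threshold=None, top_p=None):
    """Select images by score.

    `scored` is a list of (image_id, score) pairs. Exactly one of
    `threshold` (keep scores strictly above it) or `top_p` (keep the
    p highest-scoring images) should be given.
    """
    if threshold is not None:
        return [img for img, s in scored if s > threshold]
    # Rank by score, highest first, and keep the top p.
    ranked = sorted(scored, key=lambda pair: pair[1], reverse=True)
    return [img for img, _ in ranked[:top_p]]

# A mix of IR and RGB sub-images with their ranking-module scores.
scored = [("IR_IM_11", 0.9), ("IR_IM_12", 0.2),
          ("RGB_IM_21", 0.7), ("RGB_IM_22", 0.4)]
```

Note that either strategy can naturally yield only IR images, only RGB images, or a mix, matching the behavior described above.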
[0057] Then, in a step S8, only the selected images are passed to the classifiers 320, 330. More precisely, when the selected images include a mix of one or more first IR images IR_IM.sub.1i and one or more second RGB images RGB_IM.sub.2j, the one or more selected first IR images IR_IM.sub.1i are transmitted to the first classifier 320, and the one or more selected second RGB images RGB_IM.sub.2j are transmitted to the second classifier 330.
[0058] In a step S9, the first classifier 320 processes each received IR first image and determines a first estimate of probability that the IR first image includes a seat belt that is correctly worn.
[0059] In a step S10, the second classifier 330 concomitantly processes each received RGB second image and determines a second estimate of probability that the RGB second image includes a seat belt that is correctly worn.
[0060] Then, in a step S11, the fusion module 350 combines the first estimate(s) of probability and the second estimate(s) of probability determined by the two classifiers 320, 330 for the received IR first image(s) and RGB second image(s), using respective scores attributed to the images, each score representing how trustworthy the corresponding image is for detecting the seat belt, and determines a combined probability. In an embodiment, the estimates of probability of the different IR or RGB images may be weighted using the respective scores of the images. For example, the combined probability is computed using the following expression:

p.sub.combined=(Σ.sub.i α.sub.i·p.sub.i)/(Σ.sub.i α.sub.i)

where
p.sub.combined is the combined probability,
α.sub.i is the score attributed to an image of index i, and
p.sub.i is the estimate of probability of the image of index i, determined by the corresponding classifier.
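A short Python sketch of steps S11 and S12 under a normalized weighted-average reading of the score-weighted combination, followed by the threshold comparison. The score and probability values and the 0.5 detection threshold are illustrative assumptions:

```python
def combine_and_detect(scores, probs, detection_threshold=0.5):
    """Fuse per-image estimates and compare to the detection threshold.

    scores: the alpha_i score of each selected image.
    probs:  the p_i classifier estimate of each image (same order).
    Returns (combined probability, True if belt detected as correctly worn).
    """
    total = sum(scores)
    # Normalized weighted average: images with higher scores count more.
    p_combined = sum(a * p for a, p in zip(scores, probs)) / total
    return p_combined, p_combined > detection_threshold

# Hypothetical example: two IR sub-images and one RGB sub-image,
# with their scores and classifier probability estimates.
p, worn = combine_and_detect(scores=[0.9, 0.3, 0.8], probs=[0.95, 0.4, 0.9])
```

Here the low-scored middle image (score 0.3, estimate 0.4) barely drags the result down, which is the point of score weighting: untrustworthy crops contribute little.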
[0061] Alternatively, or additionally, the fusion module 350 could use predefined rules for combining the estimates of probability of the at least one first image and the at least one second image. For example, the predefined rules may be based on an experimental and/or statistical knowledge about which combination of sub-images are more trustworthy than others, in other words better at detecting the seat belt status.
[0062] Then, in a step S12, the combined probability is compared, by the detection module 360, to a predetermined detection threshold to determine a seat belt status detection result. If the combined probability is higher than the detection threshold, see branch “Y” in
[0063] The detection threshold may be configurable. For example, it may be adjusted depending on a level of reliability desired for the detection. The more reliable the detection needs to be, the higher the detection threshold.
[0064] The process described above, including the steps S1 to S12, may be repeated at successive points in time, for example iteratively or cyclically.
[0065] In that case, the result or output of the combining step S11 may be stabilized by a feed-back loop where the scores of the IR first images and of the RGB second images and the outputs of the two classifiers 320, 330 are provided as inputs for the next execution of the process.
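One simple way to realize such a feed-back loop is to smooth the combined probability across successive cycles; the exponential-smoothing form and the class interface below are illustrative assumptions, not the only possible stabilization:

```python
class StabilizedFusion:
    """Stabilizes the combined probability across successive cycles."""

    def __init__(self, smoothing=0.7):
        self.smoothing = smoothing  # weight given to the new cycle's output
        self.previous = None        # combined probability of the last cycle

    def update(self, combined_probability):
        """Blend the new combined probability with the previous output."""
        if self.previous is None:
            self.previous = combined_probability
        else:
            self.previous = (self.smoothing * combined_probability
                             + (1 - self.smoothing) * self.previous)
        return self.previous
```

The smoothed value, rather than the raw per-cycle probability, would then be compared to the detection threshold in step S12, reducing spurious status changes between cycles.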
[0066] The step S4 of generating sub-images from each initial image, in IR and visible light, is optional. The step S7 of image selection is also optional.
[0067] In an embodiment, the selection of sub-images in an initial image is not based on the image scores, but on positions of the sub-images in the initial images, the positions of the selected images changing at each cycle or for each new pair of initial images. A purpose of such a sub-image selection is to reduce the computational resources needed for the seat belt status detection.
[0068] Consider the following illustrative and non-limiting example:
[0069] The step S1 is performed iteratively or cyclically, so that the capturing device 200 captures a sequence of pairs of initial images including an IR image and a color image, at successive points in time t.sub.0, t.sub.1, t.sub.2, . . . : {IR_IM.sub.0(t.sub.0), RGB_IM.sub.0(t.sub.0)}, {IR_IM.sub.0(t.sub.1), RGB_IM.sub.0(t.sub.1)}, {IR_IM.sub.0(t.sub.2), RGB_IM.sub.0(t.sub.2)}, {IR_IM.sub.0(t.sub.3), RGB_IM.sub.0(t.sub.3)}, {IR_IM.sub.0(t.sub.4), RGB_IM.sub.0(t.sub.4)}, . . . .
[0070] At each cycle, after the step S1, the steps S2 to S6 are performed as previously described. In the step S4, N first images or sub-images are generated from the initial IR image and N second images or sub-images are generated from the initial color image. The N first/second images correspond to N different sub-image positions pos.sub.1, . . . , pos.sub.N. For example, N=4 as illustrated in
[0071] After the step S6, a selection step S7 is performed. In the selection step S7, M first images from the N first images and M second images from the N second images, with M<N, are selected. For example, M=2. The sub-image positions of the selected images change at each cycle, for example by a one-position shift, so that all the sub-image positions are selected after a few cycles. For example, the selection is as follows:
[0072] at t.sub.0: selection of the sub-images at positions pos.sub.1, pos.sub.2;
[0073] at t.sub.1: selection of the sub-images at positions pos.sub.2, pos.sub.3;
[0074] at t.sub.2: selection of the sub-images at positions pos.sub.3, pos.sub.4;
[0075] at t.sub.3: selection of the sub-images at positions pos.sub.4, pos.sub.1;
[0076] at t.sub.4: selection of the sub-images at positions pos.sub.1, pos.sub.2;
[0077] at t.sub.5: selection of the sub-images at positions pos.sub.2, pos.sub.3;
[0078] and so on . . . .
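The cyclic one-position shift listed above can be sketched as follows, with N=4 positions and M=2 selections per cycle; the function name is illustrative:

```python
def selected_positions(cycle, n=4, m=2):
    """Return the m sub-image positions (1-based) selected at a given cycle.

    The window of selected positions shifts by one position at each cycle,
    so that every one of the n positions is visited after n cycles.
    """
    return [((cycle + k) % n) + 1 for k in range(m)]

# Reproduces the selection sequence above:
# t=0 -> [1, 2], t=1 -> [2, 3], t=2 -> [3, 4], t=3 -> [4, 1], t=4 -> [1, 2]
for t in range(5):
    print(t, selected_positions(t))
```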
[0079] The present disclosure also concerns: a computer program, or instruction code corresponding to a software program, that includes instructions to cause the seat belt status detection system 100 to execute the steps of the method previously disclosed; a computer-readable medium having stored thereon the above computer program; and a vehicle including the seat belt status detection system 100.
[0080] As previously described, the seat belt status detection uses one or more first images in the infrared spectrum and one or more second images in the visible light spectrum, which are respectively transmitted to a first classifier, dedicated for images in the infrared spectrum, and to a second classifier, dedicated for images in the visible light spectrum. More generally, the seat belt status detection may use one or more first images in a first spectrum and one or more second images in a second spectrum, different from the first spectrum, which are respectively transmitted to a first classifier dedicated for images in the first spectrum and a second classifier dedicated for images in the second spectrum. The first spectrum and the second spectrum are two different parts of the electromagnetic spectrum, preferably two separate parts of the electromagnetic spectrum.
[0081] Although implementations for techniques and apparatuses for seat belt status detection in a vehicle have been described in language specific to certain features and/or methods, the subject of the appended claims is not necessarily limited to the specific features or methods described. Rather, the specific features and methods are disclosed as example implementations for techniques and apparatuses for seat belt status detection.
[0082] Unless context dictates otherwise, use herein of the word “or” may be considered use of an “inclusive or,” or a term that permits inclusion or application of one or more items that are linked by the word “or” (e.g., a phrase “A or B” may be interpreted as permitting just “A,” as permitting just “B,” or as permitting both “A” and “B”). Also, as used herein, a phrase referring to “at least one of” a list of items refers to any combination of those items, including single members. For instance, “at least one of a, b, or c” can cover a, b, c, a-b, a-c, b-c, and a-b-c, as well as any combination with multiples of the same element (e.g., a-a, a-a-a, a-a-b, a-a-c, a-b-b, a-c-c, b-b, b-b-b, b-b-c, c-c, and c-c-c, or any other ordering of a, b, and c). Further, items represented in the accompanying figures and terms discussed herein may be indicative of one or more items or terms, and thus reference may be made interchangeably to single or plural forms of the items and terms in this written description.