FITTING WITH AUTOMATIC OBJECT RECOGNITION AND METHOD FOR CONTROLLING A FITTING BY MEANS OF AUTOMATIC OBJECT RECOGNITION
20210355663 · 2021-11-18
Inventors
- Roland Obrist (Scharans, CH)
- Daniel KNUPFER (Zizers, CH)
- Philipp TRIET (Bad Ragaz, CH)
- Patric CATHOMAS (Flims, CH)
CPC classification
A47K2005/1218
HUMAN NECESSITIES
A47K5/1217
HUMAN NECESSITIES
International classification
E03C1/05
FIXED CONSTRUCTIONS
E03C1/04
FIXED CONSTRUCTIONS
Abstract
A fitting, in particular an outlet fitting, for a sanitary installation having a control unit and a sanitary element controllable with the control unit is disclosed. The fitting has at least one imaging sensor. The control unit is adapted to trigger an action of the sanitary element depending on an object recognition based on at least one output signal from the at least one imaging sensor. A fitting arrangement having a fitting as well as a method for controlling a fitting is also disclosed.
Claims
1. A fitting (1) for a sanitary installation (2), comprising a control unit (3) and a sanitary element (4) controllable with the control unit (3), characterized in that the fitting (1) comprises at least one imaging sensor (5), wherein the control unit (3) is adapted to trigger an action of the sanitary element (4) depending on an object recognition based on at least one output signal from the at least one imaging sensor (5).
2. The fitting (1) according to claim 1, characterized in that the at least one imaging sensor (5) is a 2D or 3D camera, a thermal imaging camera, an ultrasonic sensor, a radar sensor or a laser distance measuring sensor, in particular a LIDAR or ToF sensor or a laser scanner, or a combination of said sensor types.
3. The fitting (1) according to claim 1, characterized in that the fitting (1) additionally comprises an illumination unit (6) for illuminating objects (7), wherein the illumination unit (6) comprises in particular one or more LEDs or laser diodes or a UV or IR light source.
4. The fitting (1) according to claim 1, characterized in that the fitting (1) additionally comprises an image processing unit (8) to perform the object recognition.
5. The fitting (1) according to claim 1, characterized in that the fitting (1) additionally comprises a communication unit (9) for sending the at least one output signal from the at least one imaging sensor (5) to a remote image processing unit (8), e.g. a server, in particular in the cloud (10), for object recognition and for receiving a result of the object recognition.
6. The fitting (1) according to claim 1, characterized in that the image processing unit (8) is adapted to perform the object recognition by means of a neural network, in particular by means of a neural network trained before the fitting (1) is put into operation.
7. The fitting (1) according to claim 1, characterized in that the image processing unit (8) is adapted to perform an object classification as part of the object recognition in order to assign an object (7) to a specific class from a plurality of predefined classes.
8. The fitting (1) according to claim 7, characterized in that by means of the object classification different classes of kitchen utensils, such as plates, glasses, cups, cutlery, cooking pots, pans, etc., or of limbs, such as hands, arms, feet or legs, of a user of the fitting or of cleaning utensils, such as a cleaning brush, a cleaning sponge, steel wool or a cleaning cloth, can be recognized.
9. The fitting (1) according to claim 1, characterized in that the image processing unit (8) is adapted to determine at least one property of an object (7), such as transparency, color, size or degree of soiling.
10. The fitting (1) according to claim 1, characterized in that the image processing unit (8) is adapted to determine a position, in particular relative to a reference position, and/or a movement of an object (7).
11. The fitting (1) according to claim 7, characterized in that the control unit (3) is adapted to trigger a specific action of the sanitary element (4) depending on the object classification and/or the property of the object (7) and/or the position of the object (7) and/or the movement of the object (7).
12. The fitting (1) according to claim 11, characterized in that one or more of the following actions are triggerable: selecting a preset of the sanitary element (4) and executing a behavior of the sanitary element (4) defined according to the preset; dispensing a specified amount of a fluid (11); dispensing a specified amount of a fluid (11) per unit of time; dispensing a fluid (11) with a preset maximum, minimum or preferred temperature of the fluid (11) to be dispensed; dispensing a specific fluid (11) from a plurality of different fluids, in particular depending on a position of the object (7); switching on and/or off the delivery of a fluid (11).
13. A fitting arrangement (12), comprising a sanitary installation (2) and a fitting (1) according to claim 1, wherein the imaging sensor (5) is arranged in such a way that an inlet region (13), in particular for dispensing a fluid (11), and/or an outlet region (14), in particular for leading away the dispensed fluid (11), of the fitting (1) is/are detectable by the imaging sensor (5).
14. The fitting arrangement (12) according to claim 13, wherein the sanitary installation (2) is a sink or washtub, bidet, shower, bathtub, soap dispenser, lotion dispenser, sanitizer dispenser, hand dryer, hair dryer, toilet, shower toilet, urinal, or wash station.
15. A method for triggering an action of a sanitary element (4) of a fitting (1) according to claim 1, comprising the steps of: detecting a scene at the sanitary element (4) with at least one imaging sensor (5), in particular in an inlet region (13) and/or an outlet region (14) of the fitting (1), wherein the scene is detected by means of one or more images as output of the at least one imaging sensor (5), wherein in particular in the case of several images these are recorded staggered in time, e.g. as a film or video, or from different viewing angles; performing object recognition for recognizing at least one object (7), such as a hand of a user (15) of the fitting (1), a kitchen utensil, or a cleaning utensil; and triggering an action of the sanitary element (4) depending on the object recognition.
16. The method according to claim 15, characterized in that the scene is illuminated by means of an illumination unit (6), in particular one or more LEDs or laser diodes or a UV or IR light source.
17. The method according to claim 15, characterized in that the scene detected with the at least one imaging sensor (5) is sent as an output of the at least one imaging sensor (5) to a remote image processing unit (8), e.g. a server, in particular in the cloud (10), for object recognition, and a result of the object recognition is received from the remote image processing unit (8).
18. The method according to claim 15, characterized in that the object recognition is performed by means of a neural network, in particular by means of a neural network trained before the fitting (1) is put into operation.
19. The method according to claim 15, characterized in that as part of the object recognition, an object classification is performed to assign an object (7) to a particular class from a plurality of predefined classes.
20. The method according to claim 15, characterized in that different classes of kitchen utensils, such as plates, glasses, cups, cutlery, cooking pots, pans, etc., or of limbs, such as hands, arms, feet or legs, of a user of the fitting or of cleaning utensils, such as a cleaning brush, a cleaning sponge, steel wool or a cleaning cloth, are recognized by means of the object classification.
21. The method according to claim 15, characterized in that at least one property of an object (7), such as transparency, color, size or degree of contamination, is determined.
22. The method according to claim 15, characterized in that a position, in particular relative to a reference position, and/or a movement of an object (7) is determined.
23. The method according to claim 20, characterized in that a specific action of the sanitary element (4) is triggered depending on the object classification and/or the property of the object (7) and/or the position of the object (7) and/or the movement of the object (7).
24. The method according to claim 23, characterized in that the action comprises at least one of the following: selecting a preset of the sanitary element (4) and executing a behavior of the sanitary element (4) defined according to the preset; dispensing a specified amount of a fluid (11); dispensing a specified amount of a fluid (11) per unit of time; dispensing a fluid (11) with a preset maximum, minimum or preferred temperature of the fluid (11) to be dispensed; dispensing a specific fluid (11) from a plurality of different fluids, in particular depending on a position of the object (7); switching on and/or off the delivery of a fluid (11).
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0058] Non-limiting exemplary embodiments of the present invention are explained in further detail below with reference to figures, wherein:
[0070] In the figures, the same reference numerals stand for the same elements.
DETAILED DESCRIPTION OF THE INVENTION
[0071] In order to explain the basic principle of the present invention,
[0072] The advantage of object recognition is that different actions can be performed depending on the recognized object and its movement(s). For example, the flow rate of water can be controlled depending on the distance of the object 7 (e.g., the hands) from the imaging sensor 5: the further away the hands are from the camera 5, the more water is dispensed per unit time. In addition, in the case of a mixing faucet, i.e., a combined hot- and cold-water outlet valve, the water temperature can be adjusted depending on a movement of the object 7: for example, a movement/shift of the hands 7 to the right leads to colder water and a movement/shift of the hands 7 to the left leads to warmer water.
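The distance-to-flow-rate and movement-to-temperature behavior described above can be sketched as follows. This is a minimal illustration, not part of the patent text; all function names, units, and constants are hypothetical assumptions.

```python
# Hypothetical sketch of the control behavior described in [0072]:
# farther hands -> more water per unit time; a lateral shift of the
# hands adjusts the mixing-faucet temperature (right = colder).

def flow_rate_ml_per_s(distance_cm: float,
                       min_rate: float = 10.0,
                       max_rate: float = 100.0,
                       max_distance_cm: float = 40.0) -> float:
    """Map the object's distance from the sensor to a flow rate.

    The rate rises linearly from min_rate (object at the sensor) to
    max_rate (object at max_distance_cm or beyond)."""
    frac = max(0.0, min(distance_cm / max_distance_cm, 1.0))
    return min_rate + frac * (max_rate - min_rate)

def adjust_temperature(current_c: float, shift_cm: float,
                       gain_c_per_cm: float = 0.5,
                       min_c: float = 10.0, max_c: float = 45.0) -> float:
    """Shift of the hands to the right (positive shift_cm) gives colder
    water, to the left (negative) warmer water, clamped to safe limits."""
    new_c = current_c - gain_c_per_cm * shift_cm
    return max(min_c, min(new_c, max_c))
```

In a real fitting these mappings would of course be tuned (and possibly made nonlinear) for the specific sensor geometry and valve hardware.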
[0076] For example, as part of object recognition, object classification is performed to assign an object to a particular class from a variety of predefined classes. Each object class comprises a certain type of object, such as kitchen utensils (e.g. plates, glasses, cups, cutlery, cooking pots, pans, etc.) or limbs (e.g. hands, fingers, arms, feet or legs) of a user of the fitting or cleaning utensils (e.g. cleaning brush, cleaning sponge, steel wool or cleaning cloth). Depending on the detected object, various actions of the sanitary element can then be triggered, i.e., each object (or each type of object) is assigned a specific action.
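The assignment of each object class to a specific action, as described above, amounts to a simple dispatch table. The sketch below is illustrative only; the class names and action identifiers are invented assumptions, not taken from the patent.

```python
# Hypothetical class-to-action table for the behavior in [0076]:
# each recognized object class triggers one specific action of the
# sanitary element.

ACTION_TABLE = {
    "hand":            "dispense_water_hand_washing_preset",
    "plate":           "dispense_hot_water_rinse",
    "glass":           "dispense_cold_water_fill",
    "cooking_pot":     "dispense_high_flow_fill",
    "cleaning_sponge": "dispense_warm_water_low_flow",
}

def action_for(object_class: str, default: str = "no_action") -> str:
    """Return the action assigned to a recognized object class,
    or a safe default for unknown classes."""
    return ACTION_TABLE.get(object_class, default)
```

A user-configurable fitting could expose such a table for editing, e.g. via an app, as the description later suggests.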
[0077] Object recognition can be carried out in particular by means of a neural network. The neural network was trained for this purpose before the fitting was put into operation (e.g. by the manufacturer of the fitting). This means that the settings of the neural network were determined, e.g., with the help of training data. The training data for training the neural network consist of output signals/data of the imaging sensor 5 together with an assignment to a specified (known) object (called "data labeling" in the field of machine learning). The training of the neural network is performed offline as mentioned above and is typically carried out using powerful computers (e.g. in the cloud) and specialized software tools. Local object recognition (e.g., "inference" using the neural network) at the fitting has the advantage that the recorded images do not have to be transmitted to an external server, which is preferred especially for data protection and privacy reasons. It is conceivable that the user records training data for new objects to be recognized by means of the imaging sensor and transmits these to an (automatic) service, which sends back to the user new settings (e.g. in the form of a "TensorFlow Lite" model) for the neural network or new firmware for object recognition or classification.
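After the (e.g. TensorFlow Lite) model has produced per-class scores on the fitting, a small post-processing step turns them into a class label and confidence. The sketch below shows only this last step in pure Python; the class list and the assumption that the model outputs raw logits are illustrative, since the patent does not prescribe a model output format.

```python
# Hedged sketch of the local "inference" post-processing in [0077]:
# convert raw class scores (logits) from the neural network into the
# most likely object class and its softmax confidence.
import math

CLASSES = ["hand", "plate", "glass", "cleaning_brush"]  # assumed classes

def classify(logits: list) -> tuple:
    """Return (class_name, confidence) for the highest-scoring class."""
    m = max(logits)                            # subtract max for stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    best = probs.index(max(probs))
    return CLASSES[best], probs[best]
```

On-device, the `logits` list would come from the interpreter's output tensor; thresholding the confidence before triggering any action would guard against misclassification.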
[0084] Analogously to the embodiments described for the shower application, the fitting according to the invention can also be used for a wall-mounted hand or hair dryer, wherein the air flow can be switched on and off and its temperature, strength and direction can be regulated depending on the object recognition.
[0086] If a certain (new, previously unknown) object is to be recognized, corresponding images (or film or videos) can be taken by the user as training data with the camera of the fitting (or with another camera, e.g. of a smartphone). This data can be transmitted to a server, as mentioned above, which then returns an appropriately trained neural network to the fitting (e.g., in the form of a “TensorFlow Lite” model). The assignment of a certain object (or a gesture concerning this object) to a certain action can be done or changed by the user himself, for example by means of a web app or an app on his smartphone.
[0087] Further areas of application of the fitting according to the invention are conceivable, for example in beverage dispensers or automatic washing systems. Depending on the recognized container (large, small, transparent, of a certain color), a beverage dispenser dispenses a different beverage, e.g. plain water, sparkling water, tea, coffee, soup, etc.
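The beverage-dispenser variant selects a beverage from determined container properties (size, transparency, color). The decision rules below are invented purely for illustration; the patent only states that different containers lead to different beverages.

```python
# Hypothetical property-to-beverage rules for the dispenser in [0087].
# Size, transparency, and color would come from the image processing
# unit's property determination (see claims 9 and 21).

def select_beverage(size: str, transparent: bool, color: str = "") -> str:
    """Pick a beverage from assumed container properties."""
    if transparent and size == "small":
        return "sparkling_water"       # e.g. a small glass
    if transparent and size == "large":
        return "plain_water"           # e.g. a large carafe
    if color == "white":
        return "coffee"                # e.g. a white cup
    if size == "large":
        return "soup"                  # e.g. a large opaque bowl
    return "tea"
```

In practice such rules could also be learned or user-configured rather than hard-coded.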
[0088] The proposed fitting can be used both in the private sector and in public sanitary facilities, although in private applications a much higher degree of personalization and a greater variety of different actions are possible. For the public sector, the proposed fitting also has the advantage that an installer, plumber or service technician can configure the system without a separate control element, simply by using the built-in imaging sensor (e.g. camera), and thus does not need a special configuration/programming device.
LIST OF REFERENCE NUMERALS
[0089] 1 Fitting, outlet fitting, water faucet, shower inlet fitting
[0090] 2 Sanitary installation, sink, shower
[0091] 3 Control unit, processor
[0092] 4 Sanitary element, (outlet) pipe with valve
[0093] 5 Imaging sensor, (2D/3D) camera
[0094] 6 Illumination unit, display/indicator
[0095] 7 Object, hands, dishes, kitchen utensils, head
[0096] 8 Image processing unit, processor
[0097] 9 Communication unit, transmitter/receiver (transceiver)
[0098] 10 (Cloud) server
[0099] 11, 11′ Liquid (water, disinfectant, soap)
[0100] 12 Fitting arrangement
[0101] 13 Inlet region
[0102] 14 Outlet region
[0103] 15 User
[0104] 16 Valve
[0105] 17, 17′ (Liquid) outlet
[0106] P1, P2 Position