TRAINING A SMART HOUSEHOLD APPLIANCE

20220351482 · 2022-11-03


    Abstract

    A method trains a recognition system for recognizing an object in an interior space of a household appliance. The method includes the steps of capturing images from a plurality of predetermined perspectives of the object placed on an alignment sheet; producing training data on the basis of the images; and training the adaptive recognition system using the training data.

    Claims

    1-10. (canceled)

    11. A method for training an adaptive identification facility for identifying an object in an interior space of a household appliance, which comprises the following steps of: recording images of the object placed on an adjustment sheet from multiple perspectives; generating training data on a basis of the images; and training the adaptive identification facility using the training data.

    12. The method according to claim 11, which further comprises creating a three-dimensional model of the object on a basis of the images, wherein the training data is generated on a basis of the three-dimensional model.

    13. The method according to claim 11, which further comprises moving the adjustment sheet with the object to predetermined positions with respect to a camera for recording the images.

    14. The method according to claim 13, which further comprises providing instructions to move the adjustment sheet with the object to a predetermined position with respect to the camera.

    15. The method according to claim 14, which further comprises recording the adjustment sheet with the object being located at the predetermined position with respect to the camera.

    16. The method according to claim 11, which further comprises recording the images of the object placed on the adjustment sheet from multiple, predetermined perspectives.

    17. A method for identifying an object in an interior space of a household appliance, which comprises the following steps of: recording images of the object placed on an adjustment sheet from multiple perspectives; generating training data on a basis of the images; training an adaptive identification facility using the training data; and recording an image of the object in the interior space and identifying the object on a basis of the image.

    18. A system, comprising: an adjustment sheet for placing an object on said adjustment sheet; a camera for recording images of the object placed on said adjustment sheet from multiple perspectives; and a processor configured to generate training data on a basis of the images and to train an adaptive identification facility using the training data.

    19. The system according to claim 18, wherein said camera includes a depth-sensing camera.

    20. The system according to claim 18, further comprising a projection facility for projecting a position mark on a surface, on which said adjustment sheet with the object is to be placed.

    21. The system according to claim 18, wherein said camera is part of a smartphone.

    Description

    [0021] The invention will now be described in more detail with reference to the accompanying figures, in which

    [0022] FIG. 1 shows an exemplary system with a household appliance;

    [0023] FIG. 2 shows an exemplary method for training a household appliance;

    [0024] FIG. 3 shows exemplary variants of apparatuses for recording images of an object; and

    [0025] FIG. 4 shows an exemplary adjustment sheet with an object.

    [0026] FIG. 1 shows an exemplary system 100 with a household appliance 105, which here is designed as a refrigerator by way of example. The household appliance 105 comprises an interior space 110, in which an object 115 can be arranged. The object 115 usually comprises a foodstuff, for example a food, a dish or an ingredient. In this context, a container of the object 115 may vary; for example, the same foodstuff may be present in different packaging or sizes. In the present case, the object 115 is placed on an adjustment sheet 120, which is positioned in the interior space 110.

    [0027] An identification facility 125 comprises a camera 130 that can be directed into the interior space 110, a processing facility 135, as well as optionally an output apparatus 140, here in the form of a graphical output apparatus 140, and a communication facility 145. The processing facility 135 preferably comprises a microcomputer. The output apparatus 140 may provide textual or graphical outputs, for example. In this context, the output may be provided on the inside and/or the outside of the household appliance 105. Optionally, an acoustic output apparatus 140 is provided.

    [0028] The communication facility 145 is configured for communication with an external facility 150. In a usual operation of the household appliance 105, a content of the household appliance 105 can be identified and processed and the processed information can be transmitted to the external facility 150, for example in text form. The external facility 150 can forward the information, for example to a fixed or mobile device of a user of the household appliance 105. The information can also be routed directly to the device of the user by means of the communication facility 145.

    [0029] For a technique described herein, the external facility 150 can be configured for the training of the identification facility 125. To this end, a dedicated facility 150 may be provided, which differs from the facility 150 for the processing or transmitting of information regarding identified objects 115. The tasks of the external facility 150 can also be performed locally by the processing facility 135 of the identification facility 125 or another local processing facility. The external facility 150 preferably comprises a processing facility 155, a communication facility 160 and an optional storage apparatus 165.

    [0030] It is proposed to record, by means of the camera 130, a number of images of the object 115 placed on the adjustment sheet 120, and on the basis of the images to train the processing facility 135 in order to identify the object 115. To this end, the images are preferably transmitted to the external facility 150, where a three-dimensional model of the object 115 is determined therefrom. On the basis of the model, it is possible to generate training data, which in particular may comprise views of the object 115 from various perspectives or with varying degrees of coverage by other items. The training data may be used to train a computer-implemented system that is capable of learning. The system, or a characteristic part thereof, may be transmitted back to the identification facility 125, in order to identify the object 115 in the interior space 110 of the household appliance 105 on an image recorded by means of the camera 130. In particular, the trained system may comprise an artificial neural network, and characteristic parameters, in particular regarding an arrangement and/or interconnection of artificial neurons, can be transmitted.
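The flow described above can be sketched end to end as plain function composition. All function names below are hypothetical placeholders for the stages named in the text; no specific 3D-reconstruction or training library is implied.

```python
def train_identification(images, build_model, render_views, train, export_params):
    """Images -> 3D model -> training views -> trained system -> parameters
    to be transmitted back to the identification facility 125."""
    model = build_model(images)          # photogrammetric reconstruction
    training_data = render_views(model)  # views from various perspectives
    system = train(training_data)        # e.g. an artificial neural network
    return export_params(system)         # characteristic parameters
```

A caller would supply concrete implementations for each stage; the composition itself is independent of whether the stages run locally or on the external facility 150.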

    [0031] FIG. 2 shows a flow diagram of a method 200 for training an identification facility 125. In particular, the method may be carried out by means of a system 100. It should be noted that the elements shown in FIG. 1 are preferably used primarily to identify the object 115 once the identification facility 125 has already been trained accordingly. The training described in the following can be carried out with such elements; preferably, however, other facilities are used, which are explained in more detail further below.

    [0032] In a step 205, the object 115 is placed on the adjustment sheet 120, wherein the adjustment sheet 120 is brought to a predetermined position, from which the camera 130 has a predetermined perspective of the object 115. The position can be determined in a dynamic manner, for example on the basis of a size of the object 115. An indication of the predetermined position may be output by means of the output apparatus 140. If the adjustment sheet 120 has assumed the position, this can be identified on the basis of an image taken by the camera 130, or an actuation of an input apparatus can be recorded.

    [0033] In a step 210, an image of the object 115 on the adjustment sheet 120 can be recorded. In this context, the entire object 115 and at least a predetermined section of the adjustment sheet 120 are depicted, wherein the section may show a visual marking that can be used to determine a position and/or orientation of the adjustment sheet 120.

    [0034] In a step 215, it can be determined whether there are already sufficient images of the object 115 on the adjustment sheet 120 from different, predetermined positions with respect to the camera 130. If this is not the case, the steps 205 and 210 may be run through once again. It should be noted in step 205 that, although the adjustment sheet 120 can be moved with respect to the camera 130, an orientation and position of the object 115 with respect to the adjustment sheet 120 preferably remains unchanged.
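Steps 205 through 215 form a capture loop: the sheet is moved to each predetermined position and an image is recorded once the sheet is detected there. A minimal sketch follows; the names `capture_image`, `sheet_at_position` and `PRESET_POSITIONS` are illustrative assumptions, not part of the described system.

```python
PRESET_POSITIONS = ["front", "left", "right", "back"]  # assumed perspectives

def capture_all_views(capture_image, sheet_at_position):
    """Collect one image per predetermined position of the adjustment sheet
    (steps 205-215); callers supply the camera and detection callbacks."""
    images = {}
    for position in PRESET_POSITIONS:
        # In the real system the target position would be indicated via the
        # output apparatus 140 and confirmed via the visual markings 410.
        while not sheet_at_position(position):
            pass  # wait for the user to move the adjustment sheet
        images[position] = capture_image()
    return images
```

The object itself stays fixed relative to the sheet throughout; only the sheet is repositioned with respect to the camera.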

    [0035] In a step 220, a three-dimensional model of the object 115 can be determined. This step is preferably performed on the part of the external facility 150. The three-dimensional model is configured to show the object 115 to the greatest possible extent from all views that the object 115 is able to assume with respect to the camera 130. To this end, information of the images can accordingly be combined and aligned with one another. The model preferably only reflects visual features of the object 115.

    [0036] In a step 225, training data can be generated on the basis of the model. Each item of the training data may comprise a view of the object 115 from a predetermined perspective. Optionally, the view is subjected to a predetermined impairment, for example being partially obscured by another object.
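A minimal sketch of step 225, assuming the three-dimensional model has already been rendered to 2D views: here a "view" is simply a grayscale image as a list of rows, and the "predetermined impairment" is a rectangular occluder. The function names and the representation are assumptions for illustration only.

```python
import random

def occlude(view, x, y, w, h, fill=0):
    """Return a copy of the view with a w-by-h rectangle at (x, y) blanked
    out, simulating partial coverage by another item in the interior space."""
    out = [row[:] for row in view]
    for r in range(y, min(y + h, len(out))):
        for c in range(x, min(x + w, len(out[r]))):
            out[r][c] = fill
    return out

def make_training_data(views, n_occluded=2, seed=0):
    """Pair each clean view with randomly occluded variants (step 225)."""
    rng = random.Random(seed)
    data = []
    for view in views:
        data.append(view)  # the unimpaired view
        h, w = len(view), len(view[0])
        for _ in range(n_occluded):
            data.append(occlude(view, rng.randrange(w), rng.randrange(h),
                                w // 2, h // 2))
    return data
```

Real systems would draw occluder shapes and positions from distributions matched to typical refrigerator contents; the fixed half-size rectangle here is only a stand-in.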

    [0037] In a step 230, the identification facility 125 can be trained on the basis of the training data. In practice, it is not the identification facility 125 of the household appliance 105 that is trained, but rather a copy or a derivative of characteristic parts of the identification facility 125, in particular in the form of an artificial neural network.

    [0038] In a step 235, the identification facility 125 can be used to produce an image of the object 115 in the interior space 110 by means of the camera 130 and to identify the object 115, or to segment the image in order to isolate, identify or single out the object 115.

    [0039] The use of the household appliance 105 to produce images, which ultimately can be used by the method 200 to train the identification facility 125, may be time-consuming: for each arrangement of the object 115 on the adjustment sheet 120, a door of the household appliance 105 has to be opened and closed again in order to record an image. In addition, the quality of the camera 130 may be limited, and its perspective may be suboptimal for the present purpose. Lighting in the household appliance 105 furthermore may be relatively weak, meaning that the images cannot achieve a high quality.

    [0040] FIG. 3 shows exemplary variants of apparatuses that may be better suited to recording images of an object 115 for the generation of training data. Without restricting the generality, it is assumed that the object 115 placed on the adjustment sheet 120 is located on a surface 305 that in particular is able to run horizontally and may form the top side of a countertop.

    [0041] A first apparatus 310 comprises a mobile device, for example a laptop computer, a tablet computer or a smartphone. Usually, the device comprises a camera 130 as well as a processing facility 135 and a communication facility 145. In order to perform the method 200, in particular the steps 205-215, the device can be brought into a constant position with respect to the surface 305 by means of a stand.

    [0042] A second apparatus 315 comprises a PAI, which typically may be attached above the surface 305, for example to the underside of a wall cupboard or shelf, or to a vertical wall. In a further embodiment, the apparatus 315 may also be held above the surface 305 by means of a mast.

    [0043] Usually, the PAI comprises a camera 130, a processing facility 135 and a communication facility 145. Additionally provided as an output apparatus 140 is a projector 320, which may be attached with a slight lateral offset from the camera 130. The projector 320 is preferably configured to project a representation on the surface 305, and the camera 130 may be configured to determine a position of an object, in particular a hand of a user, with respect to the representation. The PAI is particularly well suited to projecting a desired position for the adjustment sheet 120 onto the surface 305. If the adjustment sheet 120 assumes the projected position, then this can be determined by means of the camera 130. Alternatively, an input of a user can be recorded. The input may take place in relation to a button projected onto the surface 305.
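The check that the adjustment sheet has assumed the projected position can be sketched as a simple tolerance test: the projector 320 shows a target outline on the surface 305, and the camera 130 verifies that the detected corners of the sheet lie close enough to it. The corner detection itself is abstracted away here, and the pixel tolerance is an assumed illustrative value.

```python
def sheet_at_target(detected_corners, target_corners, tol=10.0):
    """True if every detected sheet corner lies within `tol` pixels of its
    corresponding projected target corner (both in camera image coordinates)."""
    return all(
        ((dx - tx) ** 2 + (dy - ty) ** 2) ** 0.5 <= tol
        for (dx, dy), (tx, ty) in zip(detected_corners, target_corners)
    )
```

In practice the detected corners would come from locating the visual markings 410 in the camera image, after mapping projector and camera coordinates into a common frame.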

    [0044] Both apparatuses 310, 315 can be easily used by a user of the household appliance 105. Other embodiments of apparatuses 310, 315 are likewise possible.

    [0045] FIG. 4 shows an exemplary adjustment sheet 120, on which an object 115 is placed. The representation is produced from an elevated position and with the optics of the camera 130 at a short focal length, meaning that noticeable perspective distortions arise. By way of example, the object 115 is substantially cuboid in shape and may, for example, comprise a carton of milk. Print on the packaging is not shown.

    [0046] The adjustment sheet 120 preferably carries an arrangement 405 with at least one visual marking 410. The markings 410 shown are arranged at even relative distances on a circular line, in the region of which the object 115 is placed. Due to the size of the object 115, it is not possible for all markings 410 to be seen by the camera 130 at the same time. By way of example, the markings 410 each comprise a centering point, about which one or more circular arcs are shown.
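Since the markings 410 lie at even angular spacing on a circle, their nominal positions on the sheet can be computed from a known radius and center; a pose estimator would match detected markings against these nominal coordinates. The following sketch assumes such a known layout; the counts and radius passed in are illustrative values, not taken from the source.

```python
import math

def marker_positions(n_markers, radius, center=(0.0, 0.0)):
    """Nominal (x, y) coordinates of n markings spaced evenly on a circle,
    as laid out on the adjustment sheet 120."""
    cx, cy = center
    return [
        (cx + radius * math.cos(2 * math.pi * k / n_markers),
         cy + radius * math.sin(2 * math.pi * k / n_markers))
        for k in range(n_markers)
    ]
```

Because the object covers some markings from any given viewpoint, a practical matcher only needs the subset of these nominal positions whose markings are actually visible in the image.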

    REFERENCE CHARACTERS

    [0047] 100 System

    [0048] 105 Household appliance

    [0049] 110 Interior space

    [0050] 115 Object

    [0051] 120 Adjustment sheet

    [0052] 125 Identification facility

    [0053] 130 Camera

    [0054] 135 Processing facility

    [0055] 140 Output apparatus

    [0056] 145 Communication facility

    [0057] 150 External facility

    [0058] 155 Processing facility

    [0059] 160 Communication facility

    [0060] 165 Storage apparatus

    [0061] 200 Method

    [0062] 205 Place object on adjustment sheet

    [0063] 210 Record image of the object

    [0064] 215 Are there sufficient images?

    [0065] 220 Create 3D model of the object

    [0066] 225 Generate training data

    [0067] 230 Train identification facility

    [0068] 235 Use identification facility

    [0069] 305 Surface

    [0070] 310 First apparatus

    [0071] 315 Second apparatus

    [0072] 320 Projector

    [0073] 405 Arrangement

    [0074] 410 Marking