TRAINING A SMART HOUSEHOLD APPLIANCE
20220351482 · 2022-11-03
Inventors
CPC classification
G06F18/214
PHYSICS
G06T2200/08
PHYSICS
F25D29/00
MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
International classification
Abstract
A method trains a recognition system for recognizing an object in an interior space of a household appliance. The method includes the steps of capturing images from a plurality of predetermined perspectives of the object placed on an alignment sheet; producing training data on the basis of the images; and training the adaptive recognition system using the training data.
Claims
1-10. (canceled)
11. A method for training an adaptive identification facility for identifying an object in an interior space of a household appliance, which comprises the following steps of: recording images of the object placed on an adjustment sheet from multiple perspectives; generating training data on a basis of the images; and training the adaptive identification facility using the training data.
12. The method according to claim 11, which further comprises creating a three-dimensional model of the object on a basis of the images and the training data is generated on a basis of the three-dimensional model.
13. The method according to claim 11, which further comprises moving the adjustment sheet with the object to predetermined positions with respect to a camera for recording the images.
14. The method according to claim 13, which further comprises providing instructions to move the adjustment sheet with the object to a predetermined position with respect to the camera.
15. The method according to claim 14, which further comprises recording the adjustment sheet with the object being located at the predetermined position with respect to the camera.
16. The method according to claim 11, which further comprises recording the images of the object placed on the adjustment sheet from multiple, predetermined perspectives.
17. A method for identifying an object in an interior space of a household appliance, which comprises the steps of: recording images of the object placed on an adjustment sheet from multiple perspectives; generating training data on a basis of the images; training an adaptive identification facility using the training data; recording an image of the object in the interior space; and identifying the object on a basis of the image.
18. A system, comprising: an adjustment sheet for placing an object on said adjustment sheet; a camera for recording images of the object placed on said adjustment sheet from multiple perspectives; and a processor configured to generate training data on a basis of the images and to train an adaptive identification facility using the training data.
19. The system according to claim 18, wherein said camera includes a depth-sensing camera.
20. The system according to claim 18, further comprising a projection facility for projecting a position mark on a surface, on which said adjustment sheet with the object is to be placed.
21. The system according to claim 18, wherein said camera is part of a smartphone.
Description
[0021] The invention will now be described in more detail with reference to the accompanying figures.
[0027] An identification facility 125 comprises a camera 130 that can be directed into the interior space 110, a processing facility 135, as well as optionally an output apparatus 140, here in the form of a graphical output apparatus 140, and/or a communication facility 145. The processing facility 135 preferably comprises a microcomputer. The output apparatus 140 may provide textual or graphical outputs, for example. In this context, the output may be provided on the inside and/or the outside of the household appliance 105. Optionally, an acoustic output apparatus 140 is provided.
[0028] The communication facility 145 is configured for communication with an external facility 150. In a usual operation of the household appliance 105, a content of the household appliance 105 can be identified and processed and the processed information can be transmitted to the external facility 150, for example in text form. The external facility 150 can forward the information, for example to a fixed or mobile device of a user of the household appliance 105. The information can also be routed directly to the device of the user by means of the communication facility 145.
[0029] For a technique described herein, the external facility 150 can be configured for the training of the identification facility 125. To this end, a dedicated facility 150 may be provided, which differs from the facility 150 for the processing or transmitting of information regarding identified objects 115. The tasks of the external facility 150 can also be performed locally by the processing facility 135 of the identification facility 125 or another local processing facility. The external facility 150 preferably comprises a processing facility 155, a communication facility 160 and an optional storage apparatus 165.
[0030] It is proposed to record, by means of the camera 130, a number of images of the object 115 placed on the adjustment sheet 120, and on the basis of the images to train the processing facility 135 in order to identify the object 115. To this end, the images are preferably transmitted to the external facility 150, where a three-dimensional model of the object 115 is determined therefrom. On the basis of the model, it is possible to generate training data, which in particular may comprise views of the object 115 from various perspectives or with various coverage by other items. The training data may be used to train a computer-implemented system that is capable of learning. The system, or a characteristic part thereof, may be transmitted back to the identification facility 125, in order to identify the object 115 in the interior space 110 of the household appliance 105 on an image recorded by means of the camera 130. In particular, the trained system may comprise an artificial neural network, and characteristic parameters, in particular regarding an arrangement and/or interconnection of artificial neurons, can be transmitted.
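By way of a non-limiting illustration, the flow of paragraph [0030] on the part of the external facility 150, recording images, determining a three-dimensional model, generating training data and training the system, may be sketched as follows; all function and class names are hypothetical and the bodies are mere placeholders, not the disclosed implementation:

```python
from dataclasses import dataclass

@dataclass
class TrainedSystem:
    # Characteristic parameters of the trained system, e.g. weights of an
    # artificial neural network, which may be transmitted back to the
    # identification facility 125 (paragraph [0030]).
    parameters: list

def build_3d_model(images):
    # Placeholder for step 220: combine and align image information.
    return {"views": list(images)}

def generate_views(model):
    # Placeholder for step 225: render views from predetermined perspectives.
    return model["views"]

def fit(training_data):
    # Placeholder for step 230: derive characteristic parameters.
    return TrainedSystem(parameters=[len(training_data)])

def train_identification(images):
    """Hypothetical end-to-end flow on the external facility 150."""
    model = build_3d_model(images)
    training_data = generate_views(model)
    return fit(training_data)
```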
[0032] In a step 205, the object 115 is placed on the adjustment sheet 120, wherein the adjustment sheet 120 is brought to a predetermined position, from which the camera 130 has a predetermined perspective of the object 115. The position can be determined in a dynamic manner, for example on the basis of a size of the object 115. An indication of the predetermined position may be output by means of the output apparatus 140. If the adjustment sheet 120 has assumed the position, this can be identified on the basis of an image taken by the camera 130, or an actuation of an input apparatus can be recorded.
[0033] In a step 210, an image of the object 115 on the adjustment sheet 120 can be recorded. In this context, the entire object 115 and at least a predetermined section of the adjustment sheet 120 are depicted, wherein the section may show a visual marking that can be used to determine a position and/or orientation of the adjustment sheet 120.
[0034] In a step 215, it can be determined whether there are already sufficient images of the object 115 on the adjustment sheet 120 from different, predetermined positions with respect to the camera 130. If this is not the case, the steps 205 and 210 may be run through once again. It should be noted in step 205 that, although the adjustment sheet 120 can be moved with respect to the camera 130, an orientation and position of the object 115 with respect to the adjustment sheet 120 preferably remains unchanged.
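The loop formed by steps 205, 210 and 215 may be sketched, purely illustratively, as follows; the three callables stand in for the output apparatus 140, the position check and the camera 130, and are assumptions rather than part of the disclosure:

```python
def capture_images(positions, instruct, confirm, record):
    """Illustrative capture loop over predetermined sheet positions.

    instruct(pos): output an indication of the predetermined position (step 205)
    confirm(pos):  True once the adjustment sheet has assumed the position
    record(pos):   return an image of the object on the sheet (step 210)
    """
    images = []
    for pos in positions:
        instruct(pos)            # step 205: indicate the target position
        while not confirm(pos):  # wait until the sheet is at the position
            pass
        images.append(record(pos))  # step 210: record the image
    return images  # step 215: sufficient images have been collected
```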
[0035] In a step 220, a three-dimensional model of the object 115 can be determined. This step is preferably performed on the part of the external facility 150. The three-dimensional model is configured to show the object 115 to the greatest possible extent from all views that the object 115 is able to assume with respect to the camera 130. To this end, information of the images can accordingly be combined and aligned with one another. The model preferably only reflects visual features of the object 115.
[0036] In a step 225, training data can be generated on the basis of the model. In each case, the training data may comprise a view of the object 115 from a predetermined perspective. Optionally, the view is subjected to a predetermined impairment, for example being partially obscured by another object.
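The "predetermined impairment" of paragraph [0036] may, for example, take the form of a rectangular occlusion applied to a rendered view. The following sketch assumes a view represented as a two-dimensional list of pixel values; the function name and the default occlusion fraction are illustrative assumptions:

```python
import random

def occlude(view, frac=0.3, rng=None):
    """Partially obscure a rendered view, as one possible impairment.

    A rectangle covering roughly 'frac' of each dimension is zeroed
    out at a random location; the original view is left unchanged.
    """
    rng = rng or random.Random(0)
    h, w = len(view), len(view[0])
    oh, ow = max(1, int(h * frac)), max(1, int(w * frac))
    top = rng.randrange(h - oh + 1)
    left = rng.randrange(w - ow + 1)
    out = [row[:] for row in view]  # copy, so the input stays intact
    for r in range(top, top + oh):
        for c in range(left, left + ow):
            out[r][c] = 0
    return out
```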
[0037] In a step 230, the identification facility 125 can be trained on the basis of the training data. In practice, it is not the identification facility 125 of the household appliance 105 that is trained, but rather a copy or a derivative of characteristic parts of the identification facility 125, in particular in the form of an artificial neural network.
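Purely as an illustration of step 230, a minimal learning system whose weights play the role of the "characteristic parameters" of paragraph [0030] could look as follows; a perceptron is used here only as a stand-in for the artificial neural network mentioned in the disclosure:

```python
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Fit a linear classifier; the returned weights and bias are the
    'characteristic parameters' that would be transmitted back to the
    identification facility 125 in this illustrative sketch."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    # Apply the transmitted parameters to classify a new input.
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
```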
[0038] In a step 235, the identification facility 125 can be used to produce an image of the object 115 in the interior space 110 by means of the camera 130 and to identify the object 115, or to segment the image in order to isolate, identify or single out the object 115.
[0039] The use of the household appliance 105 to produce images, which ultimately can be used by the method 200 to train the identification facility 125, may be time-consuming, as for the correct arrangement of the object 115 on the adjustment sheet 120 in each case a door of the household appliance has to be opened and closed again in order to record an image. In addition, a quality of the camera 130 may be limited. A perspective of the camera 130 may be suboptimal for the present purpose. Lighting in the household appliance 105 furthermore may be relatively weak, meaning that the images are unable to achieve a high quality.
[0041] A first apparatus 310 comprises a mobile device, for example a laptop computer, a tablet computer or a smartphone. Usually, the device comprises a camera 130 as well as a processing facility 135 and a communication facility 145. In order to perform the method 200, in particular the steps 205-215, the device can be brought into a constant position with respect to the surface 305 by means of a stand.
[0042] A second apparatus 315 comprises a PAI, which usually may be attached above the surface 305, for example on the bottom side of a wall cupboard or shelf, or on a vertical wall. In a further embodiment, the apparatus 315 may also be held above the surface 305 by means of a mast.
[0043] Usually, the PAI comprises a camera 130, a processing facility 135 and a communication facility 145. Additionally provided as an output apparatus 140 is a projector 320, which may be attached with a slight lateral offset from the camera 130. The projector 320 is preferably configured to project a representation on the surface 305 and the camera 130 may be configured to determine a position of an object, in particular a hand of a user, with respect to the representation. The PAI may be advantageously used in a particular manner to project a desired position for the adjustment sheet 120 onto the surface 305. If the adjustment sheet 120 assumes the projected position, then this can be determined by means of the camera 130. Alternatively, an input of a user can be recorded. The input may take place in relation to a button projected onto the surface 305.
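The check of whether the adjustment sheet 120 has assumed the projected position may, for example, compare a pose determined from the camera image with the projected target pose; the pose representation and the tolerances below are illustrative assumptions, not values from the disclosure:

```python
import math

def at_projected_position(detected, target, pos_tol=5.0, ang_tol=3.0):
    """Decide whether the adjustment sheet has assumed the position
    projected onto the surface 305.

    'detected' and 'target' are (x, y, angle_in_degrees) poses; the
    angle difference is wrapped so that 359 deg and 1 deg are close.
    """
    dx = detected[0] - target[0]
    dy = detected[1] - target[1]
    dist = math.hypot(dx, dy)
    dang = abs((detected[2] - target[2] + 180.0) % 360.0 - 180.0)
    return dist <= pos_tol and dang <= ang_tol
```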
[0044] Both apparatuses 310, 315 can be easily used by a user of the household appliance 105. Other embodiments of apparatuses 310, 315 are likewise possible.
[0046] The adjustment sheet 120 preferably carries an arrangement 405 with at least one visual marking 410. The markings 410 shown are arranged at even relative distances on a circular line, in the region of which the object 115 is placed. Due to the size of the object 115, it is not possible for all markings 410 to be seen by the camera 130 at the same time. By way of example, the markings 410 each comprise a centering point, about which one or more circular arcs are shown.
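The even angular spacing of the markings 410 on a circular line may be expressed, purely illustratively, as follows; the function name and coordinate convention are assumptions:

```python
import math

def marker_positions(n, radius, center=(0.0, 0.0)):
    """Coordinates of n markings at even angular spacing on a
    circular line of the given radius around the given center."""
    cx, cy = center
    return [
        (cx + radius * math.cos(2 * math.pi * k / n),
         cy + radius * math.sin(2 * math.pi * k / n))
        for k in range(n)
    ]
```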
REFERENCE CHARACTERS
[0047] 100 System
[0048] 105 Household appliance
[0049] 110 Interior space
[0050] 115 Object
[0051] 120 Adjustment sheet
[0052] 125 Identification facility
[0053] 130 Camera
[0054] 135 Processing facility
[0055] 140 Output apparatus
[0056] 145 Communication facility
[0057] 150 External facility
[0058] 155 Processing facility
[0059] 160 Communication facility
[0060] 165 Storage apparatus
[0061] 200 Method
[0062] 205 Placed object on adjustment sheet
[0063] 210 Recorded image of the object
[0064] 215 Are there sufficient images?
[0065] 220 Create 3D model of the object
[0066] 225 Generate training data
[0067] 230 Train identification unit
[0068] 235 Use identification unit
[0069] 305 Surface
[0070] 310 First apparatus
[0071] 315 Second apparatus
[0072] 320 Projector
[0073] 405 Arrangement
[0074] 410 Marking