Determining a control mechanism based on a surrounding of a remote controllable device
11475664 · 2022-10-18
Assignee
Inventors
- Bartel Marinus Van De Sluis (Eindhoven, NL)
- Dzmitry Viktorovich Aliakseyeu (Eindhoven, NL)
- Mustafa Tolga Eren (Eindhoven, NL)
- Dirk Valentinus Rene Engelen (Heusden-Zolder, BE)
Cpc classification
G06F2203/0383
PHYSICS
G06F3/04842
PHYSICS
H04N21/42222
ELECTRICITY
G06F3/017
PHYSICS
G06F3/011
PHYSICS
Y02B20/40
GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
H04N21/42204
ELECTRICITY
H04N7/18
ELECTRICITY
H04N21/4131
ELECTRICITY
International classification
Abstract
The invention relates to a system (1) for identifying a device using a camera and for remotely controlling the identified device. The system is configured to obtain an image (21) captured with a camera. The image captures at least a surrounding of a remote controllable device (51). The system is further configured to analyze the image to recognize one or more objects (57) and/or features in the surrounding of the remote controllable device and select an identifier associated with at least one of the one or more objects and/or features from a plurality of identifiers stored in a memory. The memory comprises associations between the plurality of identifiers and remote controllable devices and the selected identifier is associated with the remote controllable device. The system is further configured to determine a control mechanism for controlling the remote controllable device and control the remote controllable device using the determined control mechanism.
Claims
1. A system for identifying a given remote controllable device using a camera and for remotely controlling said identified given remote controllable device, said system comprising: at least one input interface; at least one output interface; and at least one processor configured to: obtain an image via said at least one input interface, said image captured with a camera, said image capturing at least a surrounding of said given remote controllable device and said given remote controllable device, recognize one or more objects and/or features in said surrounding of said given remote controllable device by analyzing said image, wherein said one or more objects and/or features is distinct from said given remote controllable device and wherein said one or more objects and/or features is primarily configured for a function that is distinct from controlling said given remote controllable device, identify said given remote controllable device by selecting a given identifier associated with at least one of said one or more objects and/or features from a plurality of identifiers stored in a memory, said memory comprising associations between said plurality of identifiers and remote controllable devices, and said selected given identifier being associated with said given remote controllable device, determine a control mechanism for controlling said identified given remote controllable device, and use said at least one output interface to control said identified given remote controllable device using said determined control mechanism; wherein said at least one processor is configured to recognize said given remote controllable device in said image and select said given identifier based on the recognizing in the image of both the given remote controllable device and the at least one of said one or more objects and/or features.
2. A system as claimed in claim 1, wherein said at least one processor is configured to recognize a plurality of remote controllable devices in said image, and wherein said given identifier is associated with both said recognized at least one of said one or more objects and/or features in said surrounding and an unidentified one of said plurality of remote controllable devices.
3. A system as claimed in claim 2, wherein said at least one processor is configured to: select a further identifier associated with both at least one of said one or more objects and/or features and a further remote controllable device of said plurality of remote controllable devices, and determine that said given remote controllable device is nearer to the center or other reference point of said image than said further remote controllable device, wherein said identifier of said given remote controllable device is selected in response to determining that said given remote controllable device is nearer to the center or other reference point of said image than said further remote controllable device.
4. A system as claimed in claim 2, wherein said at least one processor is configured to: select a further identifier associated with both at least one of said one or more objects and/or features and a further remote controllable device of said plurality of remote controllable devices, and determine that a user is looking at said given remote controllable device in a rendering of said image, wherein said given identifier of said given remote controllable device is selected in response to determining that said user is looking at said given remote controllable device in said rendering of said image.
5. A system as claimed in claim 1, wherein said at least one processor is configured to: select a further identifier associated with at least one object and/or feature of said one or more objects and/or features from said plurality of identifiers stored in said memory, said further identifier being associated with a further remote controllable device, and use said at least one input interface to allow a user to use said determined control mechanism to select said given remote controllable device from at least said given remote controllable device and said further remote controllable device, and wherein said control mechanism is further associated with said further identifier and said control mechanism is further suitable for controlling said further remote controllable device.
6. A system as claimed in claim 5, wherein said at least one processor is configured to associate at least one further object and/or feature of said one or more objects and/or features with said given identifier of said given remote controllable device in said memory in dependence on said user selecting said given remote controllable device.
7. A system as claimed in claim 1, wherein said at least one processor is configured to determine a light effect in said surrounding of said given remote controllable device in said image and said selected given identifier is further associated with one or more characteristics of said determined light effect.
8. A system as claimed in claim 1, wherein said memory comprises one or more descriptions of said at least one of said one or more objects and/or features associated with said given identifier, the one or more descriptions comprising one or more words derived from a room in which said at least one of said one or more objects and/or features is located and one or more words describing said at least one of said one or more objects and/or features.
9. A system of claim 8, wherein said at least one processor is configured to use said at least one input interface to receive one or more given descriptions of objects within a certain distance of said given remote controllable device, select one or more object models based on said one or more given descriptions and associate said selected one or more object models with said given identifier of said given remote controllable device in said memory.
10. A system as claimed in claim 1, wherein said given remote controllable device comprises a controllable light source.
11. A lighting system comprising the system of claim 1 and said given remote controllable lighting device.
12. A method of identifying a given remote controllable device using a camera and remotely controlling said identified given remote controllable device, said method comprising: obtaining an image captured with a camera, said image capturing at least a surrounding of said given remote controllable device and said given remote controllable device; recognizing one or more objects and/or features in said surrounding of said given remote controllable device by analyzing said image, wherein said one or more objects and/or features is distinct from said given remote controllable device and wherein said one or more objects and/or features is primarily configured for a function that is distinct from controlling said given remote controllable device; identifying said given remote controllable device by selecting a given identifier associated with at least one of said one or more objects and/or features from a plurality of identifiers stored in a memory, said memory comprising associations between said plurality of identifiers and remote controllable devices and said selected given identifier being associated with said given remote controllable device; determining a control mechanism for controlling said identified given remote controllable device; and controlling said identified given remote controllable device using said determined control mechanism; wherein said method further comprises recognizing said given remote controllable device in said image, and wherein said given identifier is selected based on the recognizing in the image of both the given remote controllable device and the at least one of said one or more objects and/or features.
13. A non-transitory computer readable medium comprising a code of instructions, wherein the code of instructions is configured to cause at least one processor to perform the method of claim 12 when the at least one processor executes the code of instructions.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) These and other aspects of the invention are apparent from and will be further elucidated, by way of example, with reference to the drawings.
(10) Corresponding elements in the drawings are denoted by the same reference numeral.
DETAILED DESCRIPTION OF THE EMBODIMENTS
(12) The mobile device 1 comprises a receiver 3, a transmitter 4, a processor 5, memory 7, a camera 8 and a touchscreen display 9. The processor 5 is configured to use an internal interface, e.g. bus, to obtain an image captured with the camera 8. The image captures at least a surrounding of a remote controllable device, e.g. one or more of the lighting devices 51, 53 and 55 and the TV 73. In an alternative embodiment, the processor 5 is configured to use the receiver 3 to receive an image captured with an external camera.
(13) In the embodiment of
(14) In the embodiment of
(15) The image may be a photograph captured when the user presses a button or may be part of a video captured by the user, for example. In the latter case, identification of the remote controllable device that a user wants to control may be started as soon as an object corresponding to a remote controllable device is in the center of the image or in another control area and/or receives attention for a predefined minimal time duration, e.g. if a user does not move the device for this time duration.
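The dwell-time trigger described above can be sketched as follows; the class name, the default threshold and the frame-by-frame interface are illustrative assumptions, not part of the patent.

```python
import time


class DwellTrigger:
    """Start device identification once an object stays in the control
    area for a minimum dwell time (hypothetical sketch)."""

    def __init__(self, min_dwell_s=1.5):
        self.min_dwell_s = min_dwell_s
        self._entered_at = None

    def update(self, object_in_control_area, now=None):
        """Feed one video frame's detection result; returns True once
        identification should start."""
        now = time.monotonic() if now is None else now
        if not object_in_control_area:
            self._entered_at = None  # object left the control area: reset
            return False
        if self._entered_at is None:
            self._entered_at = now  # object just entered the control area
        return (now - self._entered_at) >= self.min_dwell_s
```

Passing an explicit `now` makes the trigger easy to drive from timestamped video frames as well as from live capture.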
(16) Not only may identification of the remote controllable device that a user wants to control be started as soon as an object corresponding to a remote controllable device is in the control area, but this control area may also be used to determine which device from a plurality of remote controllable devices the user wants to control. The control area may be indicated on a photo or video stream and the user may be instructed to ensure that the device of interest is within this control area. The size of the control area may be predefined or dynamic, e.g. based on the number of controllable devices detected.
(17) However, the use of a control area is not required. For example, identification of the remote controllable device that a user wants to control may be started if the user is detected to be looking at the remote controllable device in the image for a predefined minimal time duration. The latter may be implemented using gaze detection, for example. Gaze detection may also be used to determine which device from a plurality of remote controllable devices the user is looking at. The gaze of the user may be detected with a camera directed to the user's eyes, e.g. with a selfie camera in the mobile device 1. This camera may also be used to receive explicit input from the user. For example, the user may be able to provide explicit input indicating that he wishes to access the control mechanism, e.g. by a double blink of the eyes. In this case, it is not necessary to wait until the user has looked at the remote controllable device in the image for a predefined minimal time duration.
(18) Objects, including the objects surrounding the remote controllable device, may be recognized using known object recognition algorithms. Features of the objects are detected and compared with object models. Object models comprise visual identification properties. The object models may be learned by training the system with calibration images. Alternatively, descriptions of objects may be provided, and object models may be obtained based on these descriptions. Alternatively, connected devices could broadcast their object models. Connected devices could also broadcast information identifying which control mechanism, e.g. application, can be used to control them.
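As a rough sketch of this matching step, with feature sets standing in for real visual descriptors (the model names, feature strings and overlap threshold are illustrative assumptions):

```python
def match_object(detected_features, object_models, min_overlap=0.6):
    """Return the name of the best-matching object model, or None if no
    model matches well enough. Each model maps a name to a set of visual
    identification properties; matching is by fractional overlap."""
    best_name, best_score = None, 0.0
    for name, model_features in object_models.items():
        overlap = len(detected_features & model_features) / len(model_features)
        if overlap > best_score:
            best_name, best_score = name, overlap
    return best_name if best_score >= min_overlap else None


# Example models; these could be learned from calibration images or
# broadcast by connected devices, as described above.
models = {
    "painting": {"gilded frame", "canvas", "landscape"},
    "table": {"four legs", "wooden top"},
}
```

A production system would replace the set overlap with descriptor matching from a computer-vision library, but the selection structure stays the same.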
(19) Normally, by recognizing objects and/or features in the surrounding of the remote controllable device, it should be possible to identify the remote controllable device's control mechanism uniquely. However, additional properties may also be used to help to uniquely identify the remote controllable device. For example, an identifier of a remote controllable device may be further associated with a location and RF-based localization or GPS-based position detection may be used to determine whether the mobile device 1 is located near this remote controllable device and thus, that this remote controllable device might be the remote controllable device that the user wishes to control.
(20) In the embodiment of the mobile device 1 shown in
(21) The receiver 3 and the transmitter 4 may use one or more wireless communication technologies such as Wi-Fi (IEEE 802.11) to communicate with the wireless LAN access point 12, for example. In an alternative embodiment, multiple receivers and/or multiple transmitters are used instead of a single receiver and a single transmitter. In the embodiment shown in
(22) In the embodiment of
(24) In this example, an object model corresponding to the painting 57 as well as an object model corresponding to the lighting device 51 are associated with an identifier of the lighting device 51, e.g. in the memory 7 of the mobile device 1. Although the lighting device 55 looks the same as the lighting device 51, as the lighting device 55 has not been associated with an object model corresponding to the painting 57, the mobile device 1 is able to select the identifier of the lighting device 51 and determine which control mechanism can be used to control the lighting device 51.
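A minimal sketch of this association memory, using hypothetical identifiers and object-model names to mirror the painting example:

```python
# Each identifier maps to the object models associated with it, i.e. the
# device's own model plus models of surrounding objects (hypothetical data).
ASSOCIATIONS = {
    "light-51": {"pendant lamp", "painting"},
    "light-55": {"pendant lamp"},  # identical lamp, but no painting nearby
}


def select_device(recognized, associations=ASSOCIATIONS):
    """Pick the identifier whose associated object models overlap most
    with the objects recognized in the image."""
    return max(associations, key=lambda d: len(associations[d] & recognized))
```

Because "light-55" was never associated with the painting, recognizing both the lamp and the painting resolves the ambiguity in favour of "light-51".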
(26) In this example, an object model corresponding to the TV cabinet 71 as well as characteristics of the light effects 75 and 77 are associated with an identifier of the TV 73, e.g. in the memory 7 of the mobile device 1. Although the TV 73 looks similar to another TV in the same home, as this other TV has not been associated with an object model corresponding to the TV cabinet 71 and characteristics of the light effects 75 and 77, the mobile device 1 is able to select the identifier of the TV 73 and determine which control mechanism can be used to control the TV 73.
(27) If the lighting devices inside the TV cabinet 71 are also remote controllable, then it may be beneficial to determine the control mechanism for controlling these lighting devices as well. For example, an identifier of the lighting device at the top of the TV cabinet 71 may be associated with the object model corresponding to the TV cabinet 71 and with characteristics of the light effect 75. If an image would be captured that comprises both the area 85 and the object 81, but not the area 87 and the object 83, then the identifier of the lighting device at the top of the TV cabinet 71 or both this identifier and the identifier of the TV 73 could be selected. In the latter case, an application might be launched or recommended that is able to control both the TV 73 and the lighting device at the top of the TV cabinet 71.
(29) In this example, an object model corresponding to the painting 57 as well as an object model corresponding to the lighting device 51 are associated with an identifier of the lighting device 51, e.g. in the memory 7 of the mobile device 1. Furthermore, an object model corresponding to the table 59 as well as an object model corresponding to the lighting device 53 are associated with an identifier of the lighting device 53, e.g. in the memory 7 of the mobile device 1.
(30) Initially, both the identifier corresponding to the lighting device 51 and the identifier corresponding to the lighting device 53 are selected from the plurality of identifiers. Next, either an application is launched which is able to control both lighting device 51 and lighting device 53 and in which the user can select the desired lighting device or the identifier of one of the two lighting devices 51 and 53 is selected and an application is launched which is able to control this specific lighting device and in which the user does not need to select the desired lighting device.
(31) In order to select one identifier from the two identifiers, the method may involve determining whether the user is looking at object 61 or at object 63 or the method may involve determining whether object 61 or object 63 is nearer to the center or other reference point of the image 41. In the example of
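The "nearer to the center or other reference point" test can be sketched as below; coordinates are normalised to [0, 1] and the field names are assumptions:

```python
import math


def nearest_to_reference(devices, reference=(0.5, 0.5)):
    """Return the detected device whose centre is nearest to the
    reference point, by default the image centre."""
    return min(devices, key=lambda d: math.dist(d["centre"], reference))
```

The same helper works for any other reference point, e.g. a gaze location projected into the image, by passing a different `reference`.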
(32) A first embodiment of the method of identifying a device using a camera and remotely controlling the identified device is shown in
(33) After the calibration steps, steps 111, 112, 115, 117 and 119 are performed. Step 111 comprises obtaining an image captured with a camera. The image captures at least a surrounding of a remote controllable device. Step 112 comprises analyzing the image. In the embodiment of
(34) Step 125 comprises analyzing the image to determine one or more light effects in the surrounding of the remote controllable device, e.g. by comparing a recognized illumination pattern with stored illumination patterns or by comparing other characteristics of a recognized light effect. If at least a certain quantity of adjacent pixels has an increased brightness/lightness compared to non-adjacent pixels and shows a decrease in brightness/lightness in directions away from a center region, the area formed by these adjacent pixels may be recognized as a light effect and the shape of this area and the shape of the center region may be used as characteristics of the illumination pattern, for example.
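A very small sketch of this light-effect test follows; the threshold, the minimum blob size and the 2D-list input format are assumptions, and a real implementation would also check pixel connectivity:

```python
def detect_light_effect(gray, threshold=200, min_pixels=4):
    """Sketch of step 125: collect adjacent pixels brighter than the
    threshold and accept the blob as a light effect if it is large
    enough and its brightness falls off from the blob's centre toward
    its border. `gray` is a 2D list of 0-255 brightness values."""
    h, w = len(gray), len(gray[0])
    bright = {(y, x) for y in range(h) for x in range(w)
              if gray[y][x] >= threshold}
    if len(bright) < min_pixels:
        return False
    # centre of the bright area
    cy = sum(y for y, _ in bright) / len(bright)
    cx = sum(x for _, x in bright) / len(bright)
    centre_val = gray[round(cy)][round(cx)]
    # brightness should decrease away from the centre region
    border = [gray[y][x] for y, x in bright
              if abs(y - cy) + abs(x - cx) > 1]
    return all(centre_val >= v for v in border)
```

The shape of the bright area and of its centre region could then be compared against stored illumination patterns, as the paragraph above describes.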
(35) Step 115 comprises selecting an identifier associated with at least one of the one or more recognized objects and/or features from a plurality of identifiers stored in a memory. The memory comprises associations between the plurality of identifiers and remote controllable devices. The selected identifier is associated with the remote controllable device. In the embodiment of
(36) Step 131 comprises determining all identifiers associated with at least one of the objects and/or features recognized in the image. If no such identifier exists, step 111 may be repeated. Otherwise, step 133 is performed. If distances between objects and/or features are associated with the identifiers and recognized in step 112, then identifiers associated with distances between objects and/or features, e.g. between a sprinkler and a door, that do not match recognized distances between these objects and/or features may be removed before proceeding to step 133.
(37) Step 133 comprises determining how many remote controllable devices have been recognized in the image. At this stage, it has been determined that the image captures one or more remote controllable devices, e.g. the features of an object in the image match one or more remote controllable devices, but no remote controllable devices have been identified yet, e.g. because the object matches multiple remote controllable devices.
(38) If multiple unidentified remote controllable devices have been recognized, i.e. there are multiple objects in the image that match one or more remote controllable devices, then step 135 is performed. Step 135 comprises determining which of the objects is nearest to the center or other reference point of the image. Step 137 comprises removing the identifiers not associated with the nearest object, i.e. not associated with the unidentified remote controllable device nearest to the center or other reference point of the image.
(39) Step 141 is performed after step 137. Step 141 comprises determining whether there are multiple remaining identifiers. If not, then the one remaining identifier is selected and step 117 is performed. If multiple identifiers remain, step 143 is performed. Step 143 comprises selecting one of these multiple identifiers. If an identifier is associated with a surrounding object or feature that is not recognized in the image, this does not necessarily mean that the remote controllable device corresponding to this identifier is not the one captured in the image, as the image might only capture part of the surroundings of the remote controllable device, for example.
(40) However, if an identifier is associated with multiple recognized objects and/or features (e.g. an object corresponding to the remote controllable device itself and a surrounding object or feature), then it is more likely that this identifier corresponds to the remote controllable device captured in the image than if the identifier is associated with a single recognized object (e.g. only an object corresponding to the remote controllable device itself). In the embodiment of
(41) If it is determined in step 133 that a single unidentified remote controllable device has been recognized, then step 139 is performed. Step 139 comprises removing the identifiers not associated with the recognized unidentified remote controllable device. In the embodiment of
(42) If it is determined in step 133 that no remote controllable device has been recognized, then step 141 is performed immediately. If a first identifier is associated with multiple recognized surrounding objects and/or features or a recognized surrounding object and/or feature and a recognized light effect and a second identifier is associated with a single recognized surrounding object and/or feature, then it is more likely that the first identifier corresponds to the remote controllable device that the user wishes to control, and the second identifier may be removed in step 143.
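The selection logic of steps 131-143 can be sketched as a filter-then-rank procedure; the identifiers, model names and the "most matched objects" tie-break are illustrative assumptions:

```python
def resolve_identifier(candidates, recognized, nearest_device=None):
    """Sketch of steps 131-143. `candidates` maps identifiers to their
    associated object models, `recognized` is the set of objects and/or
    features recognized in the image, and `nearest_device` is the model
    of the unidentified device nearest the reference point, if any."""
    # step 131: keep identifiers matching at least one recognized object
    remaining = {i: m for i, m in candidates.items() if m & recognized}
    # steps 135-139: keep identifiers associated with the nearest
    # (or the single) recognized remote controllable device
    if nearest_device is not None:
        remaining = {i: m for i, m in remaining.items() if nearest_device in m}
    if not remaining:
        return None
    # step 143: prefer the identifier matched by the most recognized objects
    return max(remaining, key=lambda i: len(remaining[i] & recognized))
```

Returning `None` corresponds to the branch where no identifier survives and step 111 is repeated with a new image.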
(43) Step 117 comprises determining a control mechanism, e.g. an application or control commands, for controlling the remote controllable device. The application may be automatically downloaded and/or automatically started. Step 119 comprises controlling the remote controllable device using the determined control mechanism. The determined control mechanism may be used by a user of a device or may be used by the device itself without user involvement. For example, based on the selected identifier, an associated app may be activated. If the app is not yet available on the device, it may be downloaded. This can either depend on user preferences or be based on detecting that the user is seriously interested in or interacting with the detected object. After step 119, step 111 may be repeated.
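Steps 117 and 119 then amount to a lookup plus an optional download; the registry and app names below are hypothetical:

```python
# Hypothetical registry mapping device identifiers to the application
# (control mechanism) able to control them.
CONTROL_APPS = {"light-51": "acme-lighting", "tv-73": "acme-tv"}


def determine_control_mechanism(identifier, installed, download):
    """Look up the app for the selected identifier (step 117) and
    download it first if it is not installed yet."""
    app = CONTROL_APPS[identifier]
    if app not in installed:
        download(app)  # e.g. fetch the app from an app store
        installed.add(app)
    return app
```

Injecting the `download` callback keeps the sketch testable and lets the policy (user preference vs. detected interest) decide when it is actually invoked.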
(44) A second embodiment of the method is shown in
(45) Step 117 is performed after steps 141 and 151. In the embodiment of
(46) A step 153 is performed after step 117. Step 153 comprises determining whether there are multiple remaining identifiers. If not, then step 155 is performed. Step 155 comprises launching the application determined in step 117 and informing the application which remote controllable device the user wants to control such that he is immediately able to control his remote controllable device. Step 119 is performed after step 155.
(47) If multiple identifiers remain, step 157 is performed. Step 157 comprises launching the application determined in step 117. A step 158 is performed after step 157. Step 158 allows the user to select the remote controllable device that he wants to control from within the application. The application may allow the user to select from among all devices that are known to the application or from a subset thereof. The subset includes the remote controllable devices corresponding to the remaining identifiers.
(48) Steps 119 and 159 are performed after step 158. Step 159 comprises associating at least one further object and/or feature of the one or more surrounding objects and/or features with the identifier of the remote controllable device selected in step 158. Step 119 comprises controlling the remote controllable device using the determined control mechanism.
(49) A third embodiment of the method is shown in
(50) After steps 181-185, steps 111 and 112 of
(51) In the embodiments of
(52) In the embodiment of
(53) In the embodiment of
(54) As a first example of a use case, a lighting control app may be launched upon detecting a connected lighting device. In this use case, the identified object is a connected lighting device and upon detecting and identifying it, the corresponding lighting control app is activated. If the lighting device is detected for the first time, the associated lighting control app may be downloaded. In this case, additional inputs may be used to determine whether the user is really interested in configuring or controlling the connected lighting device, in order to avoid downloading apps which are unlikely to be relevant or used. This can be done by detecting that the user is really focused on the lighting device, e.g. looking at it or moving towards it while the area is also (getting) dark.
(55) Alternatively, the app may already be available on the user device and be launched or activated upon detecting the lighting device. The lighting device is uniquely identified by using features from the surrounding area to distinguish between lighting devices having an identical appearance. This step may be executed by the operating system before the app is activated, for example.
(56) Activated apps may run in the background to enable fast access or may launch in the foreground to allow the user to control the lighting device immediately. The latter may be performed if it is clear that it is the user's intention to control the device, for example. If the app is not activated yet, a subtle indication may be displayed enabling the user to activate the associated app, e.g. by presenting an app icon in the image with a visual relation to the device that can be controlled.
(57) As a second example of a use case, a music streaming app may be launched upon detecting a connected speaker. When pointing an augmented reality (AR) device (e.g. a mobile phone or augmented reality glasses) to the connected speaker, the streaming app is started on the AR device and/or the speaker. The speaker can already start playing a preferred playlist, while the user can still modify the music selection via the streaming app.
(58) As a third example of a use case, the determined control mechanism uses voice control. When a user is watching a certain (detected) object corresponding to a remote controllable device and provides a voice command, the voice command is routed to the associated app. For example, in a setting with a speaker and a few lamps, the speaker is streaming some music while the lamps provide light effects supporting the music. The command “stop playing” when looking at the speaker, stops the music playback and/or streaming. When looking at one of the lamps, the rendering of the supporting light effects is stopped, while the music streaming continues.
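The gaze-based routing in this use case reduces to a dispatch on the gazed-at device; the handler mapping and return strings are illustrative assumptions:

```python
def route_voice_command(command, gazed_device, handlers):
    """Dispatch a voice command to the app associated with the device
    the user is looking at; return None if no handler is registered."""
    handler = handlers.get(gazed_device)
    return handler(command) if handler is not None else None


# Example handlers: the speaker and lamp identifiers would come from
# the identification steps described earlier.
handlers = {
    "speaker": lambda cmd: f"speaker app handles '{cmd}'",
    "lamp": lambda cmd: f"lighting app handles '{cmd}'",
}
```

The same spoken command thus has different effects depending on gaze, matching the "stop playing" example above.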
(60) As shown in
(61) The memory elements 304 may include one or more physical memory devices such as, for example, local memory 308 and one or more bulk storage devices 310. The local memory may refer to random access memory or other non-persistent memory device(s) generally used during actual execution of the program code. A bulk storage device may be implemented as a hard drive or other persistent data storage device. The processing system 300 may also include one or more cache memories (not shown) that provide temporary storage of at least some program code in order to reduce the number of times program code must be retrieved from the bulk storage device 310 during execution. The processing system 300 may also be able to use memory elements of another processing system, e.g. if the processing system 300 is part of a cloud-computing platform.
(62) Input/output (I/O) devices depicted as an input device 312 and an output device 314 optionally can be coupled to the data processing system. Examples of input devices may include, but are not limited to, a keyboard, a pointing device such as a mouse, a microphone (e.g. for voice and/or speech recognition), or the like. Examples of output devices may include, but are not limited to, a monitor or a display, speakers, or the like. Input and/or output devices may be coupled to the data processing system either directly or through intervening I/O controllers.
(63) In an embodiment, the input and the output devices may be implemented as a combined input/output device (illustrated in
(64) A network adapter 316 may also be coupled to the data processing system to enable it to become coupled to other systems, computer systems, remote network devices, and/or remote storage devices through intervening private or public networks. The network adapter may comprise a data receiver for receiving data that is transmitted by said systems, devices and/or networks to the data processing system 300, and a data transmitter for transmitting data from the data processing system 300 to said systems, devices and/or networks. Modems, cable modems, and Ethernet cards are examples of different types of network adapter that may be used with the data processing system 300.
(65) As pictured in
(66) Various embodiments of the invention may be implemented as a program product for use with a computer system, where the program(s) of the program product define functions of the embodiments (including the methods described herein). In one embodiment, the program(s) can be contained on a variety of non-transitory computer-readable storage media, where, as used herein, the expression “non-transitory computer readable storage media” comprises all computer-readable media, with the sole exception being a transitory, propagating signal. In another embodiment, the program(s) can be contained on a variety of transitory computer-readable storage media. Illustrative computer-readable storage media include, but are not limited to: (i) non-writable storage media (e.g., read-only memory devices within a computer such as CD-ROM disks readable by a CD-ROM drive, ROM chips or any type of solid-state non-volatile semiconductor memory) on which information is permanently stored; and (ii) writable storage media (e.g., flash memory, floppy disks within a diskette drive or hard-disk drive or any type of solid-state random-access semiconductor memory) on which alterable information is stored. The computer program may be run on the processor 302 described herein.
(67) The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
(68) The corresponding structures, materials, acts, and equivalents of all means or step plus function elements in the claims below are intended to include any structure, material, or act for performing the function in combination with other claimed elements as specifically claimed. The description of embodiments of the present invention has been presented for purposes of illustration, but is not intended to be exhaustive or limited to the implementations in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the present invention. The embodiments were chosen and described in order to best explain the principles and some practical applications of the present invention, and to enable others of ordinary skill in the art to understand the present invention for various embodiments with various modifications as are suited to the particular use contemplated.