Patent classifications
H05B47/125
Emergency notification system
An emergency notification system for a building is provided. The emergency notification system includes one or more input devices mounted throughout the building. The one or more input devices can be configured to receive a manual user input. The emergency notification system can include one or more output devices mounted throughout the building. The one or more output devices can include a plurality of LED arrays. Each of the plurality of LED arrays can be configured to emit light of a different color to indicate a different type of emergency. The emergency notification system can include one or more control devices configured to determine a type of emergency occurring within the building based on the manual user input. Furthermore, in response to determining the type of emergency, the one or more control devices can activate one of the LED arrays to emit light of a color indicative of the type of emergency.
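The control flow described in the abstract can be sketched as follows. The emergency types, colors, and class names below are illustrative assumptions (the abstract specifies no particular mapping): a manual input selects an emergency type, the control device resolves it to a color, and the matching LED array is activated.

```python
# Hypothetical mapping of emergency types to LED colors; the patent
# abstract does not specify which colors indicate which emergencies.
EMERGENCY_COLORS = {
    "fire": "red",
    "medical": "blue",
    "intruder": "amber",
}


class LedArray:
    """One of the plurality of LED arrays, each emitting a single color."""

    def __init__(self, color):
        self.color = color
        self.active = False

    def activate(self):
        self.active = True


class EmergencyController:
    """Control device: determines the emergency type from a manual
    user input and activates the matching LED array."""

    def __init__(self):
        self.arrays = {c: LedArray(c) for c in EMERGENCY_COLORS.values()}

    def handle_input(self, emergency_type):
        color = EMERGENCY_COLORS[emergency_type]
        self.arrays[color].activate()
        return color
```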
Vehicle in-cabin lighting system, method for actuating vehicle in-cabin lighting system, and storage medium
A vehicle in-cabin lighting system includes: an outward gaze guiding section that is provided inside a vehicle cabin at a vehicle front side and at a vehicle width direction outer side of a vehicle seat, and that becomes brighter than its surroundings on activation of an outer side lighting device; and a control section that is configured to determine whether or not an occupant sitting on the vehicle seat has an interest in interacting with another occupant inside the vehicle cabin based on detection of operation of an operation section by the occupant or based on detection of a state of the occupant, and to activate the outer side lighting device so as to make the outward gaze guiding section brighter in a case in which determination is made that there is no interest in interacting.
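The control section's determination reduces to a simple decision, sketched here under two assumed inputs (a flag for operation of the operation section, and a coarse occupant-state label from a detector; both are illustrative, not from the claims):

```python
def should_activate_outer_light(operation_detected, occupant_state):
    """Return True when no interest in interacting is determined,
    i.e. when the outward gaze guiding section should be brightened.
    `occupant_state` is an assumed label from an occupant detector."""
    interested = operation_detected or occupant_state == "facing_other_occupant"
    return not interested
```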
Controller for controlling light sources and a method thereof
A controller and a method for controlling at least two light sources is disclosed. The method 700 comprises: obtaining 702 an image captured with a camera, wherein the image comprises the at least two light sources and at least two objects, analyzing 704 the image to detect the at least two light sources and the at least two objects in the image, identifying 706 the at least two light sources, determining 708 positions of the at least two objects relative to the at least two light sources in the image, obtaining 710 color information related to colors of the at least two objects, determining 712 a first color for a first light source of the at least two light sources and a second color for a second light source of the at least two light sources based on the color information and based on the determined relative positions and controlling 714 the first light source according to the first color and the second light source according to the second color, wherein the image comprises a third light source, and the method further comprises determining a third color for the third light source based on the first color and/or the second color, and controlling the third light source according to the third color.
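Steps 706–714 can be illustrated in one dimension under assumed rules (the claims leave the color-assignment and blending rules open): each light source takes the color of its nearest detected object, and the third light source's color is derived from the first two, here by a mean blend.

```python
def nearest_object_color(light_pos, objects):
    """objects: list of (position, (r, g, b)) pairs; return the color
    of the object closest to the given light source position."""
    return min(objects, key=lambda o: abs(o[0] - light_pos))[1]


def determine_colors(light_positions, objects):
    """Assign the first and second light sources the color of their
    nearest object; derive the third color from the first two."""
    first = nearest_object_color(light_positions[0], objects)
    second = nearest_object_color(light_positions[1], objects)
    # Assumed blending rule: per-channel integer mean of first and second.
    third = tuple((a + b) // 2 for a, b in zip(first, second))
    return first, second, third
```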
Device state interface
A system and method for visually automated interface integration that includes collecting image data; detecting a device interface source in the image data; processing the image data associated with the device interface source into an extracted interface representation; and exposing at least one access interface to the extracted interface representation.
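The claimed pipeline can be sketched with stand-in stages; the data shapes and the detection/extraction rules below are assumptions, since the abstract describes the stages only abstractly:

```python
def detect_interface_source(image_data):
    """Stand-in detector: pick the region tagged as a device panel
    from assumed pre-segmented image data."""
    return next(r for r in image_data["regions"] if r["kind"] == "panel")


def extract_representation(region):
    """Stand-in extraction: read state fields from the detected region."""
    return dict(region["fields"])


def make_access_interface(representation):
    """Expose a read accessor over the extracted interface representation."""
    return lambda key: representation[key]
```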
LIGHT SOCKET SURVEILLANCE SYSTEMS
A light socket surveillance system can be used to enable a first party to communicate with a second party. The light socket surveillance system can be used to detect an audible notification from the first party. In response to detecting the audible notification from the first party, the light socket surveillance system can initiate a communication session with a remote computing device of the second party.
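The described trigger can be sketched as follows; the loudness threshold and the session descriptor are assumptions, since the abstract does not define how an audible notification is detected or what initiating a session entails:

```python
# Assumed normalized loudness threshold for "audible notification".
NOTIFICATION_THRESHOLD = 0.6


def maybe_initiate_session(audio_level, remote_device_id):
    """Return a session descriptor toward the second party's remote
    computing device if an audible notification is detected from the
    first party, otherwise None."""
    if audio_level >= NOTIFICATION_THRESHOLD:
        return {"remote_device": remote_device_id, "status": "connecting"}
    return None
```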
SYSTEMS AND METHODS FOR MULTIMODAL SENSOR FUSION IN CONNECTED LIGHTING SYSTEMS USING AUTOENCODER NEURAL NETWORKS FOR OCCUPANT COUNTING
A system for determining an occupancy of an environment is provided. The system may include an image sensor, a motion sensor, and a controller in communication with the image sensor and the motion sensor. The controller may be configured to generate an encoded image representation by encoding an image signal from the image sensor based on an autoencoder. The controller may be further configured to generate an encoded motion representation by encoding a motion signal from the motion sensor based on the autoencoder. The controller may be further configured to train the autoencoder with the image signal and/or the motion signal. The controller may be further configured to generate a fused representation based on the encoded image representation and the encoded motion representation. The controller may be further configured to determine the occupancy of the environment based on the fused representation. The occupancy of the environment may be determined by applying the fused representation to a machine learning module.
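The fusion path can be sketched with stand-in components: here a trivial average-pooling "encoder" takes the place of a trained autoencoder, the two latent vectors are concatenated into the fused representation, and a placeholder estimator stands in for the machine learning module. All three components are assumptions for illustration only.

```python
def encode(signal, dim=4):
    """Stand-in encoder: average-pool the signal into `dim` buckets
    (a trained autoencoder's encoder would go here)."""
    n = max(1, len(signal) // dim)
    return [sum(signal[i:i + n]) / n for i in range(0, n * dim, n)]


def fuse(image_signal, motion_signal):
    """Fused representation: concatenation of the two latent vectors."""
    return encode(image_signal) + encode(motion_signal)


def estimate_occupancy(fused, per_person=1.0):
    """Placeholder 'machine learning module': derive an occupant count
    from the mean activation of the fused representation."""
    return round(sum(fused) / len(fused) / per_person)
```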