G06V10/235

METHODS, SYSTEMS, AND MEDIA FOR ROBUST CLASSIFICATION USING ACTIVE LEARNING AND DOMAIN KNOWLEDGE

Methods, computing systems, and computer-readable media for robust classification using active learning and domain knowledge are disclosed. In embodiments described herein, global feature data (such as a list of keywords) is generated for use in a classification task (such as an NLP text classification task). Expert knowledge, based on decisions made by human users, is combined with existing domain knowledge, which may be derived from existing trained classification models in the problem domain, such as keyword models trained on various datasets. By combining the expert knowledge with the domain knowledge, global feature data may be generated that is more effective at the classification task than either a classifier using the expert knowledge alone or a classifier using the domain knowledge alone.
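The abstract does not disclose an implementation; as a minimal sketch, the combination step might merge expert-chosen keywords into weights taken from an existing domain keyword model. All names (`merge_global_features`, `expert_boost`, the example keywords) are illustrative assumptions, not from the patent.

```python
# Hypothetical sketch: merge expert-labeled keywords with keyword weights
# from an existing domain model to form global feature data.
def merge_global_features(expert_keywords, domain_model_weights, expert_boost=2.0):
    """Start from domain-model weights; boost keywords confirmed by experts."""
    features = dict(domain_model_weights)          # existing domain knowledge
    for kw in expert_keywords:                     # human-user decisions
        features[kw] = features.get(kw, 1.0) * expert_boost
    return features

global_features = merge_global_features(
    expert_keywords={"refund", "chargeback"},
    domain_model_weights={"refund": 0.8, "invoice": 0.5},
)
```

A downstream classifier would then score documents against `global_features` rather than against either knowledge source alone.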

Methods and systems for presenting alert event indicators

A method is performed at a client device with a display screen, processor(s), and memory storing program(s) for execution by the processor(s). The method comprises obtaining alert events from smart devices at a physical location. The smart devices include a camera located at or in proximity to the physical location. The method further comprises displaying in a scrollable list a chronological sequence of camera event items. Each of the camera event items includes a thumbnail image, a time of the alert event, and one or more activity alert indicators corresponding to predefined activity alert types. The method further comprises receiving a user selection of a first thumbnail image corresponding to a first one of the camera event items, and responsive to the user selection, enabling playback of a video of a first alert event in a video player interface while maintaining display of the scrollable list.
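A rough data-model sketch of the claimed UI state, assuming an in-memory list sorted newest-first and a player that coexists with the list; the class and field names are invented for illustration.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class CameraEventItem:
    thumbnail: str                  # thumbnail image reference
    timestamp: float                # time of the alert event
    alert_indicators: List[str]     # predefined activity alert types

@dataclass
class AlertEventList:
    items: List[CameraEventItem] = field(default_factory=list)
    now_playing: Optional[CameraEventItem] = None   # video player state

    def add(self, item: CameraEventItem) -> None:
        self.items.append(item)
        # keep the scrollable list in chronological order, newest first
        self.items.sort(key=lambda i: i.timestamp, reverse=True)

    def select(self, index: int) -> CameraEventItem:
        # playback starts while the scrollable list stays displayed
        self.now_playing = self.items[index]
        return self.now_playing

feed = AlertEventList()
feed.add(CameraEventItem("a.jpg", 100.0, ["motion"]))
feed.add(CameraEventItem("b.jpg", 200.0, ["person"]))
selected = feed.select(0)
```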

Trigger regions

Example implementations may relate to methods and systems for detecting an event in a physical region within a physical space. Accordingly, a computing system may receive from a subscriber device an indication of a virtual region within a virtual representation of the physical space such that the virtual region corresponds to the physical region. The system may also receive from the subscriber a trigger condition associated with the virtual region, where the trigger condition corresponds to a particular physical change in the physical region. The system may also receive sensor data from sensors in the physical space and a portion of the sensor data may be associated with the physical region. Based on the sensor data, the system may detect an event in the physical region that satisfies the trigger condition and may responsively provide to the subscriber a notification that indicates that the trigger condition has been satisfied.
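As a sketch of the detection step, assuming the virtual region is an axis-aligned box and the trigger condition is a predicate over a sensor reading (the data shapes here are assumptions, not from the disclosure):

```python
def point_in_region(point, region):
    """Virtual region as an axis-aligned box: ((xmin, ymin), (xmax, ymax))."""
    (xmin, ymin), (xmax, ymax) = region
    x, y = point
    return xmin <= x <= xmax and ymin <= y <= ymax

def detect_trigger(readings, region, condition):
    """Notify for sensor readings inside the region that satisfy the
    subscriber's trigger condition."""
    return [
        {"reading": r, "trigger_satisfied": True}
        for r in readings
        if point_in_region(r["pos"], region) and condition(r)
    ]

# illustrative sensor data: an object appears inside the region
region = ((0, 0), (5, 5))
readings = [
    {"pos": (2, 3), "change": "object_added"},
    {"pos": (9, 9), "change": "object_added"},   # outside the region
    {"pos": (1, 1), "change": "none"},           # no physical change
]
notifications = detect_trigger(readings, region,
                               lambda r: r["change"] == "object_added")
```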

Generic card feature extraction based on card rendering as an image
11599571 · 2023-03-07

Methods and apparatus for using features of images representing content items to improve the presentation of the content items are disclosed. In one embodiment, a plurality of digital images are obtained, where each of the images represents a corresponding one of a plurality of content items. Image features of each of the digital images are determined. Additional features including at least one of user features pertaining to a user of a client device or contextual features pertaining to the client device are ascertained. At least a portion of the content items are provided via a network to the client device using features that include or are derived from both the image features of each of the plurality of digital images and the additional features.
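One plausible reading of "using both the image features and the additional features" is a weighted ranking over the concatenated feature vectors. This sketch assumes dense feature vectors and invented weights; none of the numbers come from the patent.

```python
def rank_content_items(image_feats, extra_feats, weights):
    """Rank content items by a weighted score over each item's image
    features concatenated with user/contextual features."""
    def score(item_id):
        combined = image_feats[item_id] + extra_feats   # concatenate features
        return sum(w * f for w, f in zip(weights, combined))
    return sorted(image_feats, key=score, reverse=True)

# two content items ("cards"), each rendered as an image and featurized
image_feats = {"card_a": [0.9, 0.3], "card_b": [0.2, 0.8]}
extra_feats = [0.5]            # e.g. a user-preference signal
weights = [1.0, 1.0, 0.5]      # image features plus the additional feature
ranked = rank_content_items(image_feats, extra_feats, weights)
```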

Configuration of a visible light sensor

A visible light sensor may be configured to sense environmental characteristics of a space using an image of the space. The visible light sensor may be controlled in one or more modes, including a daylight glare sensor mode, a daylighting sensor mode, a color sensor mode, and/or an occupancy/vacancy sensor mode. In the daylight glare sensor mode, the visible light sensor may be configured to decrease or eliminate glare within a space. In the daylighting sensor mode and the color sensor mode, the visible light sensor may be configured to provide a preferred amount of light and color temperature, respectively, within the space. In the occupancy/vacancy sensor mode, the visible light sensor may be configured to detect an occupancy/vacancy condition within the space and adjust one or more control devices according to the occupation or vacancy of the space. The visible light sensor may be configured to protect the privacy of users within the space via software, a removable module, and/or a special sensor.
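The mode descriptions above suggest a per-frame dispatch on the configured mode. A minimal sketch, with thresholds, field names, and actions that are purely illustrative assumptions:

```python
def process_frame(mode, stats):
    """Dispatch sensor behaviour by configured mode (values illustrative)."""
    if mode == "daylight_glare":
        # reduce glare, e.g. by lowering a shade when luminance is too high
        return {"action": "lower_shade"} if stats["max_luminance"] > 0.8 \
            else {"action": "none"}
    if mode == "daylighting":
        # dim electric lighting as the daylight contribution rises
        return {"dim_level": round(max(0.0, 1.0 - stats["avg_luminance"]), 2)}
    if mode == "color":
        # report a preferred correlated color temperature
        return {"cct_kelvin": stats["color_temp"]}
    if mode == "occupancy":
        # occupancy/vacancy from the amount of motion in the image
        return {"occupied": stats["motion_pixels"] > 50}
    raise ValueError(f"unknown mode: {mode}")
```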

Structural design systems and methods to define areas of interest for modeling and simulation-based space planning

Structural design systems, methods, and computer readable media for selective simulation of coverage in a floor plan are disclosed. The system may include a processor configured to: access a floor plan demarcating multiple rooms; perform a machine learning method, semantic analysis, or geometric analysis on the floor plan to identify at least one opening associated with at least one room from the multiple rooms; access a functional requirement associated with the at least one opening; access at least one rule associating the functional requirement with the at least one opening; define at least one area of interest or disinterest using the at least one rule and the functional requirement; access a technical specification associated with the functional requirement; generatively analyze the at least one room, the technical specification and the area of interest or disinterest to define a solution that conforms to the functional requirement; and output the solution.
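The rule-application step ("at least one rule associating the functional requirement with the at least one opening") could be sketched as a lookup from opening to requirement to rule. Everything below, including the example requirements, is an invented illustration:

```python
def define_areas(openings, requirements, rules):
    """Map each identified opening to an area of interest or disinterest
    using the rule associated with its functional requirement."""
    areas = []
    for opening in openings:
        req = requirements.get(opening["id"])       # functional requirement
        rule = rules.get(req)                       # rule for that requirement
        if rule is not None:
            areas.append({"opening": opening["id"], "area": rule(opening)})
    return areas

# a door requires clearance coverage; a window is excluded from simulation
openings = [{"id": "door_1", "pos": (4, 0)}, {"id": "win_1", "pos": (0, 3)}]
requirements = {"door_1": "egress_clearance", "win_1": "no_coverage"}
rules = {
    "egress_clearance": lambda o: {"kind": "interest", "center": o["pos"], "radius": 1.5},
    "no_coverage": lambda o: {"kind": "disinterest", "center": o["pos"], "radius": 0.5},
}
areas = define_areas(openings, requirements, rules)
```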

Eye contact prompting communication device

A communication device, method, and computer program product prompt correct face/eye positioning to enable perceived eye-to-eye contact for a user of a video capturing device having a camera on the same device side as the viewable display. A first communication device includes a first display device having a first graphical user interface (GUI). A first image capturing device of the first communication device has a field of view that captures a face of a first user viewing the first GUI. The first image capturing device generates a first image stream of the field of view. A controller of the communication device identifies a look target area of the first GUI proximate to the first image capturing device. The controller presents visual content on the first GUI within the look target area to prompt the first user viewing the first GUI to look towards the look target area.
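A geometric sketch of identifying the look target area and placing content there, assuming pixel coordinates with the front camera at a known position along the top edge; the radius and coordinates are arbitrary illustrative values.

```python
def look_target_area(camera_pos, gui_size, radius=80):
    """GUI region nearest the front camera, clamped to display bounds.
    Returns (x0, y0, x1, y1) in pixels."""
    cx, cy = camera_pos
    w, h = gui_size
    return (max(0, cx - radius), max(0, cy - radius),
            min(w, cx + radius), min(h, cy + radius))

def place_content(content, area):
    """Center visual content inside the look target area to draw the gaze
    toward the camera."""
    x0, y0, x1, y1 = area
    return {"content": content, "x": (x0 + x1) // 2, "y": (y0 + y1) // 2}

# front camera centered on the top edge of a 1080x2400 display
area = look_target_area((540, 0), (1080, 2400))
placed = place_content("remote_user_video", area)
```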

Object recognition for improving interfaces on an eyewear device and other wearable and mobile devices
11598976 · 2023-03-07

A wearable or a mobile device includes a camera to capture an image of a scene with an unknown object. Execution of programming by a processor configures the device to perform functions, including a function to capture, via the camera, the image of the scene with the unknown object. To create lightweight human-machine user interactions, execution of programming by the processor further configures the device to determine a recognized-object-based adjustment and produce visible output to the user via a graphical user interface presented on the image display of the device, based on the recognized-object-based adjustment. Examples of recognized-object-based adjustments include launching, hiding, or displaying an application for the user to interact with or utilize; displaying a menu of applications related to the recognized object for execution; or enabling or disabling a system-level feature.
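The examples in the abstract amount to a mapping from recognized object class to UI adjustment. A sketch, with an entirely hypothetical table; the object classes and actions are not taken from the patent.

```python
# hypothetical mapping from a recognized object to a UI adjustment
OBJECT_ADJUSTMENTS = {
    "qr_code": {"action": "launch_app", "app": "scanner"},
    "keyboard": {"action": "display_menu", "apps": ["notes", "email"]},
    "bed": {"action": "enable_feature", "feature": "do_not_disturb"},
}

def adjustment_for(recognized_object):
    """Return the recognized-object-based adjustment, or a no-op if the
    object class has no associated adjustment."""
    return OBJECT_ADJUSTMENTS.get(recognized_object, {"action": "none"})
```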

Assisting users in visualizing dimensions of a product

A computer readable medium for sizing a product includes instructions, that when executed by at least one processor, cause a computing device to: retrieve from a webpage information on a product including product dimensions; present on a display of a client device a graphical button that upon access by a user activates a camera for capturing an image of an object positioned at a focal distance from the camera, the object having a surface; prompt the user to enter boundary information of an imaginary housing to be placed on the surface; generate the imaginary housing dimensions in two dimensions (2D) based on the boundary information and the focal distance; and determine whether the product fits within the imaginary housing by comparing the product dimensions against the imaginary housing dimensions.
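The sizing comparison lends itself to a worked example. This sketch assumes a simple pinhole-camera conversion from boundary pixels to real-world size and a 2D fit check allowing rotation; the focal length and dimensions are illustrative, not from the disclosure.

```python
def housing_from_boundary(pixel_w, pixel_h, distance, focal_length_px):
    """Pinhole-model conversion (illustrative):
    real size = pixel size * focal distance / focal length (in pixels)."""
    return (pixel_w * distance / focal_length_px,
            pixel_h * distance / focal_length_px)

def product_fits(product_dims, housing_dims):
    """Compare the product's 2D footprint against the imaginary housing,
    allowing a 90-degree rotation of the product."""
    pw, pd = sorted(product_dims)
    hw, hd = sorted(housing_dims)
    return pw <= hw and pd <= hd

# boundary drawn as 100x200 px, surface 2.0 m from the camera, focal length 1000 px
housing = housing_from_boundary(100, 200, 2.0, 1000)   # 0.2 m x 0.4 m
```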

SYSTEM AND METHOD FOR DISPLAYING OBJECTS OF INTEREST AT AN INCIDENT SCENE

A system and method for displaying an image of an object of interest located at an incident scene. The method includes receiving, from an image capture device, a first video stream of the incident scene, and displaying the first video stream. The method includes receiving an input indicating a pixel location in the first video stream, and detecting the object of interest in the first video stream based on the pixel location. The method includes determining an object class, an object identifier, and metadata for the object of interest. The metadata includes the object class, an object location, an incident identifier corresponding to the incident scene, and a time stamp. The method includes receiving an annotation input for the object of interest, and associating the annotation input and the metadata with the object identifier. The method includes storing, in a memory, the object of interest, the annotation input, and the metadata.
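The metadata and annotation association described above can be sketched as a record keyed by the object identifier; the field names and example values are assumptions for illustration only.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class ObjectOfInterest:
    object_id: str
    object_class: str
    location: Tuple[float, float]   # object location at the incident scene
    incident_id: str                # incident identifier
    timestamp: float                # time stamp of detection
    annotations: List[str] = field(default_factory=list)

store = {}   # in-memory stand-in for the device's memory

def annotate(obj, text):
    """Associate an annotation input and metadata with the object identifier."""
    obj.annotations.append(text)
    store[obj.object_id] = obj

obj = ObjectOfInterest("obj-1", "vehicle", (12.5, 3.0), "incident-42", 1700000000.0)
annotate(obj, "blue sedan, damaged bumper")
```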