Patent classifications
G06V20/36
AUTOMATIC TOPOLOGY MAPPING PROCESSING METHOD AND SYSTEM BASED ON OMNIDIRECTIONAL IMAGE INFORMATION
An automatic topology mapping processing method and system. The automatic topology mapping processing method includes the steps of: obtaining, by the automatic topology mapping processing system, a plurality of images, wherein at least two of the plurality of images include a common area in which a common space is captured; extracting, by the automatic topology mapping processing system, from respective images, features of the respective images through a feature extractor using a neural network; and determining, by the automatic topology mapping processing system, mapping images of the respective images on the basis of the features extracted from the respective images.
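The final determining step can be sketched as a nearest-neighbour search over the extracted features. In this illustrative Python sketch, plain lists stand in for the neural-network feature vectors named in the abstract, and the function names (`cosine_similarity`, `best_matches`) are hypothetical, not from the patent:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors (plain lists of floats)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def best_matches(features):
    """For each image id, find the other image whose extracted feature
    vector is most similar -- a stand-in for determining mapping images."""
    matches = {}
    for i, fi in features.items():
        best, best_sim = None, -1.0
        for j, fj in features.items():
            if i == j:
                continue
            sim = cosine_similarity(fi, fj)
            if sim > best_sim:
                best, best_sim = j, sim
        matches[i] = best
    return matches
```

Images that share a common area would be expected to yield similar features, so their mutual nearest neighbours identify the mapping.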
IMAGE-BASED KITCHEN TRACKING SYSTEM WITH ORDER ACCURACY MANAGEMENT
The subject matter of this specification can be implemented in, among other things, methods, systems, and computer-readable storage media. A method can include receiving, by a processing device, image data including one or more image frames indicative of a current state of a meal preparation area. The processing device determines one of a meal preparation item or a meal preparation action associated with the current state of the meal preparation area based on the image data. The processing device receives order data comprising one or more pending meal orders. The processing device can determine an order preparation error based on the order data and at least one of the meal preparation item or the meal preparation action. The processing device causes the order preparation error to be displayed on a graphical user interface (GUI).
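One way the error-determination step could work is a set comparison between items observed in the image frames and items called for by pending orders. This is a minimal sketch under the assumption that each pending order is represented as a set of item names; that representation, and the function name, are illustrative, not from the patent:

```python
def find_preparation_errors(observed_items, pending_orders):
    """Return, in sorted order, observed items that no pending order
    calls for. observed_items: set of item names seen in the frames;
    pending_orders: list of sets of expected item names."""
    expected = set().union(*pending_orders) if pending_orders else set()
    return sorted(set(observed_items) - expected)
```

The returned list would feed the GUI display step described in the abstract.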
COGNITION ASSISTANCE
A system for providing cognition assistance includes a contextual memory trainer, which receives preprocessed data including facial data, scene data, and activity data related to a video, in association with temporal data and geographical location data of the camera that captured the video; the scene data, the activity data, the geographical location data, and the temporal data collectively define spatiotemporal data. The trainer identifies an unknown aspect in the preprocessed data based on historical data and determines a predefined priority factor for it. The priority factor is one of a frequency of occurrence within a set period or the relative proximity of the unknown aspect to the camera, a known face, a known place, or a known scene. The unknown aspect is prioritized for annotation when the value of its priority factor exceeds a predefined threshold, and on that basis the facial data is associated with the spatiotemporal data to provide contextually annotated data.
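The prioritization step, using frequency of occurrence as the priority factor, can be sketched as a threshold filter over observation counts. The function name and the list-of-ids input format are assumptions for illustration:

```python
from collections import Counter

def annotation_queue(sightings, threshold):
    """sightings: iterable of unknown-aspect ids observed within a set
    period. The occurrence count serves as the priority factor; ids whose
    count exceeds the threshold are queued for annotation, most frequent
    first."""
    counts = Counter(sightings)
    queued = [(n, aid) for aid, n in counts.items() if n > threshold]
    return [aid for n, aid in sorted(queued, reverse=True)]
```

A proximity-based priority factor would follow the same shape, with a distance score replacing the count.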
SMART OCCUPANT EMERGENCY LOCATOR AND HEADCOUNTER
A method and system for evacuating a building during an incident are provided. An exemplary method includes tracking the location of personnel in the building, monitoring incident sensors, detecting an incident based, at least in part, on the incident sensors, activating alert systems, locating unevacuated personnel in the building, and displaying the location of the unevacuated personnel on a fire alarm control panel (FACP).
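The locating step reduces to filtering tracked personnel against those known to have exited. This minimal sketch assumes locations are keyed by a badge or personnel id; the data shapes are illustrative, not from the patent:

```python
def unevacuated(personnel_locations, evacuated_ids):
    """Return the last known locations of personnel still inside the
    building, keyed by id, for display on the FACP.
    personnel_locations: dict id -> location; evacuated_ids: set of ids
    confirmed to have left (e.g. via an exit-door reader)."""
    return {pid: loc for pid, loc in personnel_locations.items()
            if pid not in evacuated_ids}
```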
Electronic device and control method thereof
An electronic device is provided. The electronic device includes a display, a camera, a memory, and a processor configured to: divide a video captured in real time through the camera into a plurality of image sections; obtain spatial information corresponding to each image section; map the obtained spatial information to each of the plurality of image sections and store the mapping in the memory; and, when a user command for adding a virtual object image to the video is input, control the display to add the virtual object image to the video and display it on the basis of the spatial information mapped to each of the plurality of image sections.
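The stored mapping from image sections to spatial information can be sketched as an interval lookup: given a playback time, find the section containing it and return the spatial information needed to place the virtual object. The tuple representation is an assumption for illustration:

```python
def section_for_time(sections, t):
    """sections: list of (start, end, spatial_info) tuples covering the
    video, one per image section. Return the spatial info mapped to the
    section containing time t, or None if t falls outside all sections."""
    for start, end, info in sections:
        if start <= t < end:
            return info
    return None
```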
Toilet configured to distinguish excreta type
A system for distinguishing the type of excreta deposited in a toilet is disclosed. The system includes a toilet and a processor. The toilet has a bowl adapted to receive multiple types of excreta from a user and a sensor that monitors the volume of excreta deposited in the toilet. The processor compares excreta volume data derived from the sensor to a database of excreta-type volume data and identifies a time segment in the excreta volume data as representing an excreta type. The system can provide data that may be used to determine the rate of excreta deposit into the toilet and to associate those rates with excreta event types such as urination or defecation.
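The rate-based association can be sketched as differencing the cumulative volume readings and comparing the peak rate to a threshold. The classification rule below (high peak rate suggests urination) is an illustrative assumption, not the patent's actual decision logic, and the units are unspecified:

```python
def classify_excreta_event(volumes, dt, rate_threshold):
    """volumes: cumulative volume readings sampled at fixed interval dt.
    Compute deposit rates by finite differences and classify the event by
    peak rate (illustrative rule: above threshold -> urination)."""
    rates = [(b - a) / dt for a, b in zip(volumes, volumes[1:])]
    peak = max(rates) if rates else 0.0
    return "urination" if peak > rate_threshold else "defecation"
```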
Automated understanding of three dimensional (3D) scenes for augmented reality applications
An electronic device is configured to perform a three-dimensional (3D) scan of an interior space. In some cases, the electronic device acquires information and depth measurements relative to the electronic device. The electronic device acquires voxels in a 3D grid that is generated from the 3D scan. The voxels represent portions of the volume of the interior space. The electronic device determines its trajectory and poses concurrently with performing the 3D scan of the interior space. The electronic device labels voxels representing objects in the interior space based on the trajectory and the poses. In some cases, the electronic device uses queries to perform spatial reasoning at an object level of granularity; positions, overlays, or blends virtual objects into an augmented reality representation of the interior space; or modifies the positions or orientations of the objects by applying a transformation to corresponding connected components.
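Grouping labeled voxels into the connected components mentioned at the end of the abstract can be sketched with a standard flood fill over the 3D grid. This assumes 6-connectivity and represents occupied voxels as a set of integer coordinates; both choices are illustrative:

```python
def connected_components(voxels):
    """Group occupied voxels (a set of (x, y, z) integer tuples) into
    6-connected components -- the object-level units to which a
    transformation could be applied."""
    remaining = set(voxels)
    components = []
    while remaining:
        seed = remaining.pop()
        comp, stack = {seed}, [seed]
        while stack:
            x, y, z = stack.pop()
            for dx, dy, dz in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                               (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                nb = (x + dx, y + dy, z + dz)
                if nb in remaining:
                    remaining.remove(nb)
                    comp.add(nb)
                    stack.append(nb)
        components.append(comp)
    return components
```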
IMAGE MATCHING METHOD AND APPARATUS AND NON-TRANSITORY COMPUTER-READABLE MEDIUM
Disclosed are an image matching method and apparatus, and a non-transitory computer-readable medium. The image matching method includes the steps of: obtaining a panoramic image of at least one subspace in a 3D space and a 2D image of the 3D space; obtaining a 2D image of the at least one subspace in the 3D space; performing 3D reconstruction on the panoramic image of the at least one subspace and obtaining a projection image corresponding to the panoramic image of the at least one subspace; and determining a matching relationship between the panoramic image of the at least one subspace and the 2D image of the at least one subspace, and establishing an association relationship between the panoramic image and the 2D image of the at least one subspace for which the matching relationship has been determined.
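The final matching step, given similarity scores between each panorama's projection image and each candidate 2D image, can be sketched as a greedy one-to-one assignment. The score matrix and function name are illustrative assumptions; the patent does not specify the assignment strategy:

```python
def match_panoramas(scores):
    """scores[p][q]: similarity between the projection image of panorama p
    and 2D image q. Greedily pair the highest-scoring (p, q) combinations,
    each panorama and each 2D image used at most once."""
    pairs = sorted(((s, p, q) for p, row in scores.items()
                    for q, s in row.items()), reverse=True)
    used_p, used_q, matching = set(), set(), {}
    for s, p, q in pairs:
        if p not in used_p and q not in used_q:
            matching[p] = q
            used_p.add(p)
            used_q.add(q)
    return matching
```

An optimal assignment (e.g. the Hungarian algorithm) could replace the greedy pass without changing the interface.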
Method for judging rotating characteristics of light sources based on summation calculation in visible light indoor positioning
A method for judging the rotating characteristics of light sources based on summation calculation in visible light indoor positioning is disclosed. The method is implemented on an LED positioning system and comprises the following steps: firstly, arranging the light sources into a convex pattern in order, and setting the emitted sequence of each light source according to set conditions; secondly, fixing the position and attitude of a cell phone serving as the receiving end, continuously shooting with the cell phone camera to obtain a set of light-source pictures, and performing image processing to obtain the emitted sequences of the light sources; thirdly, performing a sequence correlation operation on adjacent light sources to obtain emitted-sequence delays, and performing a summation calculation on those delays to distinguish true light sources from false ones; and finally, excluding the false light sources and completing positioning using a positioning algorithm.
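The third step can be sketched in two parts: estimating each adjacent-pair delay by circular correlation of the recovered chip sequences, then summing the delays around the closed convex arrangement. The consistency rule used here (true sources' delays sum to a multiple of the sequence period) is an assumption for illustration; the patent does not state its exact summation criterion:

```python
def sequence_delay(a, b):
    """Estimate the cyclic delay of chip sequence b relative to a
    (both equal-length lists of 0/1 chips) by circular correlation."""
    n = len(a)
    def corr(shift):
        return sum(a[i] * b[(i + shift) % n] for i in range(n))
    return max(range(n), key=corr)

def has_false_source(delays, period):
    """Summation check: around the closed arrangement, adjacent-source
    delays of true light sources should sum to a multiple of the
    sequence period; a nonzero remainder flags a false source."""
    return sum(delays) % period != 0
```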
AUTOMATED TRAINING DATA COLLECTION FOR OBJECT DETECTION
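The product-gap identification described in the abstract can be sketched as a set difference between the reference products and everything either detection pass found. The data shapes and function name are illustrative assumptions:

```python
def product_gaps(reference_products, detected_first_pass, detected_second_pass):
    """Products in the reference set that neither detection pass found in
    the image stream are treated as gaps needing more training data.
    All arguments are collections of product identifiers."""
    detected = set(detected_first_pass) | set(detected_second_pass)
    return sorted(set(reference_products) - detected)
```

The resulting gap list, together with the reference images and both detected sets, would feed the training of the product detection model.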
A method, system, and computer program product for automated collection of training data and training object detection models is provided. The method generates a set of reference images for a first set of products. Based on the set of reference images, the method identifies a subset of products within an image stream. Based on the subset of products, a second set of products is determined within the image stream. The method identifies a set of product gaps based on the subset of products and the second set of products. The method generates a product detection model based on the set of reference images, the subset of products, the second set of products, and the product gaps.