Patent classifications
G06V20/36
Detecting changes of items hanging on peg-hooks
A method for reacting to changes of items hanging on peg-hooks connected to pegboards may include: determining a location of a store shelf within a retail store; obtaining a first coverage parameter corresponding to a first product type and a second coverage parameter corresponding to a second product type; accessing a database to determine a first height of products of the first product type and a second height of products of the second product type; determining a position for placing a camera configured to capture images of at least a portion of the store shelf by analyzing the location of the store shelf, the first coverage parameter, the second coverage parameter, the first height, and the second height; and providing, to a user interface of a user device, information relating to the determined position of the camera.
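A minimal geometric sketch of the camera-placement step, assuming a pinhole camera with a known vertical field of view; the function name, the 60-degree default, and the example heights and coverage fractions are hypothetical stand-ins for the patent's analysis of shelf location, coverage parameters, and product heights.

    import math

    def min_camera_distance(product_height_m, coverage_fraction, vfov_deg=60.0):
        # A pinhole camera with vertical FOV `vfov_deg` sees a vertical span
        # of 2 * d * tan(vfov/2) at distance d; solve for the d at which that
        # span equals the coverage this product type requires.
        required_span_m = product_height_m * coverage_fraction
        half_fov = math.radians(vfov_deg) / 2.0
        return required_span_m / (2.0 * math.tan(half_fov))

    # The mounting position must satisfy the stricter (larger) of the two
    # product types' requirements.
    d1 = min_camera_distance(0.30, 0.90)   # first product type
    d2 = min_camera_distance(0.12, 0.90)   # second product type
    print(f"place camera at least {max(d1, d2):.2f} m from the shelf")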
Automated location capture system
A locating system includes a mobile smart device and/or a central server to receive and process data, and a device secured to or assigned to a person or asset. That device carries an RFID tag with communication technology to receive commands from one or more smart devices, such that its location is updated without the need for real-time infrastructure.
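A rough sketch of how such opportunistic updates might be reconciled on the central server, assuming each smart-device read of a tag produces a timestamped sighting; all class and field names here are hypothetical.

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class TagSighting:
        tag_id: str
        reader_id: str   # smart device that interrogated the tag
        lat: float
        lon: float
        seen_at: datetime

    @dataclass
    class LocationRegistry:
        # Latest sighting wins: a tag's location is refreshed whenever any
        # smart device happens to read it, with no always-on tracking needed.
        latest: dict = field(default_factory=dict)

        def ingest(self, s: TagSighting) -> None:
            cur = self.latest.get(s.tag_id)
            if cur is None or s.seen_at > cur.seen_at:
                self.latest[s.tag_id] = s

    registry = LocationRegistry()
    registry.ingest(TagSighting("tag-42", "phone-7", 40.7128, -74.0060,
                                datetime.now(timezone.utc)))
    print(registry.latest["tag-42"].reader_id)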
ACTIVE COMPRESSIVE SENSING VIA A THERMAL SENSOR FOR HUMAN SCENARIO RECOGNITION
Disclosed and described herein are a system and a method for the thermal detection of static and moving objects.
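A toy illustration of separating static from moving warm objects, assuming a stack of low-resolution thermal frames in degrees Celsius; the thresholds and array shapes are illustrative and not taken from the patent.

    import numpy as np

    def thermal_masks(frames, hot_c=30.0, motion_c=1.5):
        # frames: (time, rows, cols) array of temperatures in deg C.
        frames = np.asarray(frames, dtype=float)
        mean = frames.mean(axis=0)   # persistent warmth
        std = frames.std(axis=0)     # frame-to-frame fluctuation
        static_mask = (mean > hot_c) & (std <= motion_c)
        moving_mask = (mean > hot_c) & (std > motion_c)
        return static_mask, moving_mask

    rng = np.random.default_rng(0)
    frames = 20.0 + rng.normal(0, 0.2, size=(8, 4, 4))
    frames[:, 1, 1] = 36.0                        # static warm object
    frames[:, 2, 3] = np.tile([31.0, 41.0], 4)    # moving warm object
    static, moving = thermal_masks(frames)
    print(static[1, 1], moving[2, 3])             # True True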
Method for categorizing a scene comprising a sub-scene with machine learning
A method for identifying a scene comprises: a computing device receiving a plurality of data points corresponding to a scene; the computing device determining one or more subsets of the data points that are indicative of at least one sub-scene in the scene, the sub-scene being displayed on a display device that is part of the scene and not itself representing the scene; and the computing device categorizing the scene while disregarding the sub-scene, such that a computer vision system interprets the scene without taking the sub-scene into account.
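A minimal sketch of classification that disregards a sub-scene, assuming the sub-scene (e.g., a display region) has already been segmented into a boolean mask; `classify` stands in for any scene classifier.

    import numpy as np

    def categorize_scene(image, subscene_mask, classify):
        # Blank out the sub-scene (e.g. whatever a TV in the room is showing)
        # so the classifier judges only the surrounding scene.
        masked = np.asarray(image).copy()
        masked[subscene_mask] = 0
        return classify(masked)

    image = np.full((240, 320, 3), 127, dtype=np.uint8)
    mask = np.zeros((240, 320), dtype=bool)
    mask[60:180, 80:240] = True   # detected display region
    print(categorize_scene(image, mask, lambda im: "living_room"))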
Automated analysis of image contents to determine the acquisition location of the image
Techniques are described for using computing devices to perform automated operations for determining the acquisition location of an image by analyzing the image's visual contents. In at least some situations, the images to be analyzed include panorama images acquired at acquisition locations in the interior of a multi-room building, and the determined acquisition location information includes a location on a floor plan of the building and, in some cases, orientation direction information. In at least some such situations, the acquisition location determination is performed without having or using information from any distance-measuring devices about distances from an image's acquisition location to objects in the surrounding building. The acquisition location information may be used in various automated manners, including for controlling navigation of devices (e.g., autonomous vehicles) and for display on one or more client devices in corresponding graphical user interfaces.
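One way to sketch such distance-free localization, assuming reference embeddings have been precomputed at candidate floor-plan positions and a panorama has been reduced to an embedding by some upstream vision model; names and dimensions are hypothetical.

    import numpy as np

    def locate_on_floorplan(query_embedding, candidates):
        # candidates: {(x_m, y_m): reference embedding} precomputed by
        # sampling or rendering views at known floor-plan positions.
        q = query_embedding / np.linalg.norm(query_embedding)
        def score(pos):
            v = candidates[pos]
            return float(np.dot(q, v / np.linalg.norm(v)))
        return max(candidates, key=score)

    rng = np.random.default_rng(1)
    cands = {(2.0, 3.5): rng.normal(size=128), (7.0, 1.0): rng.normal(size=128)}
    query = cands[(7.0, 1.0)] + rng.normal(scale=0.05, size=128)
    print(locate_on_floorplan(query, cands))   # -> (7.0, 1.0)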
LEARNING-BASED SYSTEM AND METHOD FOR ESTIMATING SEMANTIC MAPS FROM 2D LIDAR SCANS
A system and method are disclosed herein for developing robust semantic mapping models for estimating semantic maps from LiDAR scans. In particular, the system and method enable the generation of realistic simulated LiDAR scans based on two-dimensional (2D) floorplans, for the purpose of providing a much larger set of training data that can be used to train robust semantic mapping models. These simulated LiDAR scans, as well as real LiDAR scans, are annotated using automated and manual processes with a rich set of semantic labels. Based on the annotated LiDAR scans, one or more semantic mapping models can be trained to estimate the semantic map for new LiDAR scans. The trained semantic mapping model can be deployed in robot vacuum cleaners, as well as similar devices that must interpret LiDAR scans of an environment to perform a task.
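A crude ray-casting stand-in for the scan simulator, assuming the floorplan has been rasterized into a binary occupancy grid (1 = wall); beam count, step size, and units are illustrative.

    import numpy as np

    def simulate_lidar(grid, x, y, n_beams=360, max_range=200.0, step=0.5):
        # Cast n_beams rays from pose (x, y) and return the first hit
        # distance per beam, or max_range if nothing is hit.
        h, w = grid.shape
        ranges = np.full(n_beams, max_range, dtype=float)
        for i, theta in enumerate(np.linspace(0, 2 * np.pi, n_beams,
                                              endpoint=False)):
            dx, dy = np.cos(theta), np.sin(theta)
            r = 0.0
            while r < max_range:
                cx, cy = int(x + r * dx), int(y + r * dy)
                if not (0 <= cx < w and 0 <= cy < h) or grid[cy, cx]:
                    ranges[i] = r
                    break
                r += step
        return ranges

    grid = np.zeros((50, 50), dtype=np.uint8)
    grid[0, :] = grid[-1, :] = grid[:, 0] = grid[:, -1] = 1   # room walls
    print(np.round(simulate_lidar(grid, x=25, y=25, n_beams=8), 1))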
IDENTIFYING PRODUCTS FROM ON-SHELF SENSOR DATA AND VISUAL DATA
A non-transitory computer-readable medium includes instructions that when executed by a processor cause the processor to perform a method for identifying products from on-shelf sensors and image data. The method may include receiving data captured using a plurality of sensors positioned between at least part of a retail shelf and one or more products placed on the at least part of the retail shelf. The method may also include receiving an image of the at least part of the retail shelf and at least one of the one or more products. The method may also include analyzing the captured data and the image to determine a product type of the one or more products.
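A small sketch of the sensor/image fusion, assuming the on-shelf sensors yield a weight reading and the image model yields per-type confidences; the catalog structure and tolerances are hypothetical stand-ins for the patent's analysis.

    def identify_product(weight_grams, image_scores, catalog):
        # Keep only catalog entries whose nominal unit weight matches the
        # shelf sensor, then pick the visually most likely survivor.
        plausible = {
            ptype for ptype, spec in catalog.items()
            if abs(weight_grams - spec["unit_weight_g"]) <= spec["tolerance_g"]
        }
        if not plausible:
            return None
        return max(plausible, key=lambda p: image_scores.get(p, 0.0))

    catalog = {"cola_330": {"unit_weight_g": 350, "tolerance_g": 15},
               "water_500": {"unit_weight_g": 515, "tolerance_g": 15}}
    print(identify_product(348, {"cola_330": 0.8, "water_500": 0.6}, catalog))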
MOBILE ROBOT AND METHOD OF CONTROLLING THE SAME
A mobile robot of the present disclosure includes a first pattern emission unit configured to emit a first patterned light downward and forward from the main body onto the floor of an area to be cleaned, and an image acquisition unit configured to acquire an image of the first patterned light emitted by the first pattern emission unit and incident on an obstacle. A pattern is detected in the acquired image to identify an obstacle, and a cliff is detected based on at least one of the shape or the position of the pattern in the image. The mobile robot may then identify a travel path that does not lead over the cliff.
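A minimal sketch of the cliff test, assuming an upstream detector reports the image row of the projected pattern per column (NaN where the pattern is not found). On an image sensor, rows grow downward, so a pattern landing well below its flat-floor row implies the floor ahead falls away; the pixel threshold is illustrative.

    import numpy as np

    def detect_cliff(pattern_rows, expected_row, drop_px=25):
        rows = np.asarray(pattern_rows, dtype=float)
        missing = np.isnan(rows)                       # pattern fell off a ledge
        rows = np.nan_to_num(rows, nan=expected_row)   # neutralize NaNs
        dropped = rows - expected_row > drop_px        # pattern appears too low
        return bool(np.any(missing | dropped))

    print(detect_cliff([240, 241, np.nan, 300], expected_row=242))   # True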
ON DEMAND VISUAL RECALL OF OBJECTS/PLACES
Aspects of the subject disclosure may include, for example, observing a plurality of objects viewed through a smart lens, wherein the plurality of objects are in a frame of an image viewed by the smart lens, determining an identification for an object of the plurality of objects, assigning tag information for the object based on the identification, storing the tag information for the object and the frame in which the object was observed, receiving a recall request for the object, retrieving the tag information for the object and the frame responsive to receiving the recall request, and displaying the tag information and the frame. Other embodiments are disclosed.
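A bare-bones sketch of the store that could back such recall, assuming observations arrive as (label, frame, tag) records; all names here are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class Observation:
        label: str      # identification assigned to the object
        frame_id: int   # frame in which the object was observed
        tag: str        # tag information assigned to the object

    class RecallStore:
        # Observations are indexed by object label so a later request like
        # "where are my keys" returns the saved tag and its frame.
        def __init__(self):
            self._by_label = {}

        def observe(self, obs: Observation) -> None:
            self._by_label[obs.label] = obs   # newest sighting wins

        def recall(self, label: str) -> Observation | None:
            return self._by_label.get(label)

    store = RecallStore()
    store.observe(Observation("keys", frame_id=1041, tag="on hallway table"))
    print(store.recall("keys"))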
User presence/absence recognition during robotic surgeries using deep learning
Various user-presence/absence recognition techniques based on deep learning are provided. More specifically, these techniques include building and training a CNN-based image recognition model, including a user-presence/absence classifier, on training images collected from the user-seating area of a surgeon console under various clinically relevant conditions. The trained user-presence/absence classifier can then be used during teleoperated surgical procedures to monitor and track users in the user-seating area of the surgeon console, continuously classifying the real-time video images of that area as indicating either a user-presence state or a user-absence state. In some embodiments, the classifier can be used to detect a user-switching event at the surgeon console, i.e., when a second user is detected entering the user-seating area after a first user is detected exiting it. If the second user is identified as a new user, the disclosed techniques can trigger a recalibration procedure for the new user.
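A toy PyTorch version of the two pieces described above: a small CNN that classifies a seat-area frame as user-present or user-absent, and a check for the exit-then-reentry sequence that signals a user switch. The architecture and input size are illustrative, not the patent's model.

    import torch
    import torch.nn as nn

    class PresenceNet(nn.Module):
        # Maps a seat-area video frame to logits over {absent, present}.
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.head = nn.Linear(32, 2)

        def forward(self, x):
            return self.head(self.features(x).flatten(1))

    def user_switched(states):
        # True once an exit (present -> absent) is later followed by a
        # re-entry (absent -> present), i.e. a possible user switch.
        seen_exit = False
        for prev, cur in zip(states, states[1:]):
            if prev == "present" and cur == "absent":
                seen_exit = True
            if seen_exit and prev == "absent" and cur == "present":
                return True
        return False

    model = PresenceNet()
    logits = model(torch.randn(1, 3, 128, 128))
    print(logits.argmax(dim=1).item(),
          user_switched(["present", "absent", "present"]))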