Patent classifications
G06V20/36
Image Recognition Method and Related Device
In an image recognition method, a terminal determines, based on first positioning information, target object information in desensitized map data that corresponds to building information in a to-be-recognized image. The desensitized map data does not include sensitive buildings. When the terminal determines that the target object information does not include the building information, the terminal determines that the map data does not include the building information, and therefore recognizes the building information as corresponding to a sensitive building. In other words, the terminal may use the desensitized map data to recognize building information corresponding to a sensitive building in the to-be-recognized image.
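As an illustrative sketch only (not the patent's actual implementation), the decision rule reduces to a lookup in the desensitized map: a building detected in the image but absent from the map data near the terminal's position is treated as sensitive. The data layout and names below are assumptions.

```python
# Hypothetical sketch: the desensitized map is modeled as a dict mapping a
# coarse position (e.g. a grid cell from the first positioning information)
# to the set of non-sensitive buildings known at that location.

def classify_building(building_id, grid_cell, desensitized_map):
    """Return 'sensitive' if the building seen in the image has no
    counterpart in the desensitized map data for this position."""
    nearby = desensitized_map.get(grid_cell, set())
    return "ordinary" if building_id in nearby else "sensitive"

# The map omits sensitive buildings by construction, so any detected
# building it cannot account for is flagged.
dmap = {"cell_12": {"library", "mall"}}
print(classify_building("library", "cell_12", dmap))   # ordinary
print(classify_building("barracks", "cell_12", dmap))  # sensitive
```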
OBJECT INFORMATION MANAGEMENT METHOD, APPARATUS AND DEVICE, AND STORAGE MEDIUM
Provided are an object information management method, apparatus and device, and a storage medium. The method includes: acquiring, in response to an object state change event corresponding to a placement area, at least one object identifier obtained by detecting an object in the placement area using a communication identification system; determining a first identification result based on the at least one object identifier and an object information mapping table, where the object information mapping table includes a mapping relationship between an object identifier and object information, and the first identification result includes first object information of the object in the placement area; acquiring a second identification result obtained by detecting the object in the placement area using a visual identification system, where the second identification result includes second object information of the object in the placement area; and determining, based on the first object information and the second object information, real object information corresponding to the object state change event.
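The two identification paths can be pictured as set operations over object information. The fusion rule below (intersection, falling back to union) is an assumption for illustration, since the abstract does not specify how the two results are reconciled.

```python
def fuse_results(tag_ids, mapping_table, visual_info):
    """Combine the communication-based and visual identification results
    into 'real' object information (hypothetical fusion rule)."""
    # First identification result: map detected identifiers (e.g. RFID
    # tags) to object information via the mapping table.
    first = {mapping_table[t] for t in tag_ids if t in mapping_table}
    # Second identification result: object information from the vision system.
    second = set(visual_info)
    # Objects confirmed by both systems are high confidence; if the systems
    # fully disagree, fall back to the union for downstream resolution.
    agreed = first & second
    return agreed if agreed else first | second

table = {"tag1": "cola", "tag2": "chips"}
print(fuse_results({"tag1", "tag2"}, table, {"cola"}))  # {'cola'}
```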
METHOD AND DEVICE FOR IDENTIFYING PRESENCE OF THREE-DIMENSIONAL OBJECTS USING IMAGES
Provided are a method and apparatus for identifying the presence of a three-dimensional (3D) object using an image. Two-dimensional images are used to determine whether a 3D object exists in the imaged space. Because the presence of a 3D object in space can be identified accurately and quickly from two-dimensional images alone, productivity is improved.
Automated training data collection for object detection
A method, system, and computer program product for automated collection of training data and training of object detection models is provided. The method generates a set of reference images for a first set of products. Based on the set of reference images, the method identifies a subset of products within an image stream. Based on the subset of products, the method determines a second set of products within the image stream. The method identifies a set of product gaps based on the subset of products and the second set of products. The method then generates a product detection model based on the set of reference images, the subset of products, the second set of products, and the product gaps.
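One way to picture the gap-identification step is as a set difference between the reference products and everything found in the image stream. This is a simplified assumption of how "product gaps" might be derived, not the patent's actual detection pipeline.

```python
def product_gaps(reference_products, detected_subset, inferred_products):
    """Products expected from the reference images but not observed in the
    image stream, either directly detected or inferred (sketch)."""
    observed = detected_subset | inferred_products
    return reference_products - observed

reference = {"soda", "soap", "rice", "tea"}   # first set of products
detected = {"soda"}                           # subset identified in the stream
inferred = {"rice"}                           # second set determined from the stream
print(sorted(product_gaps(reference, detected, inferred)))  # ['soap', 'tea']
```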
CONTEXT AND STATE AWARE TREATMENT ROOM EFFICIENCY
A system and method are provided for performing operations comprising: receiving one or more images from an image capture device of a medical treatment location; applying a trained machine learning model to the one or more images to detect the presence of a patient in the medical treatment location, the trained machine learning model being trained to establish a relationship between one or more features of images of the medical treatment location and patient presence; generating a context assessment for the medical treatment location based on the detected presence of the patient; and transmitting, over a network, the context assessment for presentation on a user interface of a client device.
REMOTE OPERATION APPARATUS AND COMPUTER-READABLE MEDIUM
In a robot (3) at a remote location, an action scene of the robot (3) is determined based on a feature amount derived from its position data, motion detection data, and video data, and a video parameter or an imaging mode corresponding to the determined action scene is selected. The selected video parameter is then applied to the video data, or the selected imaging mode is set on the camera, and the processed video data is transmitted to the information processing apparatus (2) on the user side via the network (4) and displayed on the HMD (1).
INTERACTIVE IMAGE GENERATION
A content generation platform is generally described herein. More specifically, interactive image generation techniques and features thereof are disclosed. One or more sets of images of a scene are captured in an imaging studio and processed using one or more machine-learning-based networks to generate an interactive image of the scene comprising a plurality of interactive features. One or more of the interactive features of the generated interactive image may be modified or edited according to user preferences.
FLOORPLAN GENERATION BASED ON ROOM SCANNING
Various implementations disclosed herein include devices, systems, and methods that generate floorplans and measurements using a three-dimensional (3D) representation of a physical environment generated based on sensor data.
Methods and system for occupancy class prediction and occlusion value determination
The present disclosure describes a method for occupancy class prediction, such as occupancy class detection in a vehicle. In aspects, the method includes determining, for a plurality of points of time, measurement data related to an area and determining, for the plurality of points of time, occlusion values based on the measurement data. The method further includes selecting, for a present point of time, one of a plurality of modes for occupancy class prediction based on the occlusion values for at least one of the present point of time and a previous point of time, and/or based on the mode for occupancy class prediction selected for the previous point of time. The method additionally includes determining, for the present point of time, one of a plurality of predetermined occupancy classes of the area based on the mode selected for the present point of time.
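The mode-selection step can be sketched as a small state machine over occlusion values. The threshold and mode names below are assumptions for illustration; the disclosure does not enumerate the actual modes.

```python
def select_mode(occ_now, occ_prev, prev_mode, threshold=0.5):
    """Choose an occupancy-prediction mode from the current and previous
    occlusion values, with the previous mode as a tiebreaker (sketch)."""
    if occ_now < threshold and occ_prev < threshold:
        return "direct"   # view is clear: predict from current measurements
    if occ_now >= threshold and occ_prev >= threshold:
        return "hold"     # heavily occluded: keep the last confident class
    return prev_mode      # transition: stay in the previously selected mode

print(select_mode(0.1, 0.2, "direct"))  # direct
print(select_mode(0.9, 0.8, "direct"))  # hold
print(select_mode(0.9, 0.2, "direct"))  # direct (falls back to previous mode)
```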
SEATING POSITION MANAGEMENT SYSTEM AND SEATING POSITION MANAGEMENT METHOD
Provided is a system that enables inexpensive and accurate identification of the seating position of each user in a free-address office without incurring additional equipment costs. The system identifies a user who has entered the office through a face verification operation, which compares a face image of the entering person, acquired from an image captured by a first entrance camera, with the registered face image of each user. The system then identifies the user's seating position in the office through a person verification operation, which compares a first person image, acquired from an image captured by a second entrance camera, with a second person image acquired from an image captured by an in-area camera, thereby associating a person who has entered the office with the corresponding person seated in the office.
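The person verification association can be illustrated as nearest-neighbor matching over person-image embeddings. The cosine-similarity metric, the embeddings, and the threshold are all assumptions, since the abstract does not specify how the two person images are compared.

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(y * y for y in b)))

def match_seat(entrance_emb, seat_embs, threshold=0.8):
    """Associate the person seen at the second entrance camera with the
    most similar seated person from the in-area camera (sketch)."""
    best_seat, best_score = None, threshold
    for seat, emb in seat_embs.items():
        score = cosine(entrance_emb, emb)
        if score > best_score:
            best_seat, best_score = seat, score
    return best_seat  # None when no seat clears the threshold

seats = {"A3": [0.9, 0.1], "B7": [0.1, 0.95]}
print(match_seat([1.0, 0.0], seats))  # A3
```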