Patent classifications
G06V20/53
QUEUE ANALYSIS APPARATUS, CONTROL METHOD, AND NON-TRANSITORY COMPUTER-READABLE MEDIUM
A queue analysis apparatus (2000) estimates a position and an orientation of each object (20) included in a target image (10). The target image (10) is generated by a camera (50) that captures the objects (20). Based on the position and orientation estimated for each object (20) in a queue region (30), which is a region representing a queue in the target image (10), the queue analysis apparatus (2000) generates a queue line (40) that expresses the queue as a linear shape.
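The abstract does not fix how positions and orientations become a linear queue shape. One plausible sketch: pick the queue head as the person with nobody in front of them along their facing direction, then chain the remaining people by nearest neighbor. All names and the data layout are illustrative.

```python
import math

def queue_line(people):
    """Order people in a queue region into a polyline (a "queue line").

    people: list of dicts {"pos": (x, y), "dir": (dx, dy)} where "dir" is the
    unit facing direction. The queue head is taken to be the person with no
    other person in front of them along their facing direction.
    """
    def in_front(a, b):
        # True if b stands roughly in front of a (positive projection on a's dir).
        vx, vy = b["pos"][0] - a["pos"][0], b["pos"][1] - a["pos"][1]
        return vx * a["dir"][0] + vy * a["dir"][1] > 0

    head = next(p for p in people
                if not any(in_front(p, q) for q in people if q is not p))
    line = [head["pos"]]
    remaining = [p for p in people if p is not head]
    cur = head["pos"]
    while remaining:
        # Greedily extend the line toward the nearest remaining person.
        nxt = min(remaining, key=lambda p: math.dist(cur, p["pos"]))
        remaining.remove(nxt)
        line.append(nxt["pos"])
        cur = nxt["pos"]
    return line
```

For three people standing along the x-axis and all facing toward smaller x, the head is the frontmost person and the returned polyline runs from the front of the queue to the back.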
APPARATUS AND METHOD FOR IDENTIFYING CONDITION OF ANIMAL OBJECT BASED ON IMAGE
An image-based animal object condition identification apparatus includes: a communication module that receives an image of an object; a memory that stores a program configured to extract animal condition information from the received image; and a processor that executes the program. The program extracts continuous animal detection information for each object by inputting the received image into an animal detection model trained on learning data composed of animal images, and determines predetermined animal condition information for each class of each animal object by inputting that continuous animal detection information into an animal condition identification model.
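The two-stage structure (detection model, then condition model over each object's time series of detections) can be sketched generically. The `detect` and `classify` callables stand in for the trained models, whose form the abstract does not specify.

```python
def identify_condition(frames, detect, classify):
    """Two-stage pipeline: a detection model extracts per-object detection
    info from each frame; a condition model then maps each object's
    chronological detection sequence to a condition label.

    detect(frame)   -> dict {object_id: detection_info}
    classify(track) -> condition label for one object's detection sequence
    Both callables are stand-ins for the trained models.
    """
    tracks = {}  # object_id -> chronological detection info
    for frame in frames:
        for obj_id, info in detect(frame).items():
            tracks.setdefault(obj_id, []).append(info)
    return {obj_id: classify(track) for obj_id, track in tracks.items()}
```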
Tracking positions using a scalable position tracking system
A scalable tracking system processes video of a space to track the positions of people within a space. The tracking system determines local coordinates for the people within frames of the video and then assigns these coordinates to time windows based on when the frames were received. The tracking system then combines or clusters certain local coordinates that have been assigned to the same time window to determine a combined coordinate for a person during that time window.
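The windowing-and-combining step described above is straightforward to sketch: bucket each local coordinate by (time window, person), then average the coordinates in each bucket. The tuple layout and window size are illustrative assumptions.

```python
from collections import defaultdict

def combine_by_window(detections, window=1.0):
    """Assign local coordinates to time windows and combine coordinates that
    fall in the same window for the same person into one averaged position.

    detections: iterable of (timestamp, person_id, (x, y)) tuples.
    Returns {(window_index, person_id): (x_mean, y_mean)}.
    """
    buckets = defaultdict(list)
    for t, pid, (x, y) in detections:
        buckets[(int(t // window), pid)].append((x, y))
    return {
        key: (sum(x for x, _ in pts) / len(pts),
              sum(y for _, y in pts) / len(pts))
        for key, pts in buckets.items()
    }
```

Averaging is one simple way to "combine or cluster" the coordinates; a real system might instead run a clustering algorithm per window to separate nearby people.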
IMAGE PROCESSING SYSTEM AND METHOD
There is provided an image processing system and method for identifying a user. The system comprises a processor configured to identify a first user in an image, determine a plurality of characteristic vectors associated with the first user, compare the characteristic vectors associated with the first user with a plurality of predetermined characteristic vectors associated with a plurality of users including the first user, and identify the first user based on the comparison.
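The comparison step can be illustrated with mean cosine similarity between the user's characteristic vectors and each stored reference vector; the abstract does not name a similarity measure, so this choice is an assumption.

```python
import math

def identify(user_vectors, gallery):
    """Compare a user's characteristic vectors against predetermined vectors
    and return the best-matching identity by mean cosine similarity.

    user_vectors: list of feature vectors observed for the user.
    gallery: {identity: reference vector}. Names are illustrative only.
    """
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (math.hypot(*a) * math.hypot(*b))

    def score(ref):
        return sum(cos(v, ref) for v in user_vectors) / len(user_vectors)

    return max(gallery, key=lambda ident: score(gallery[ident]))
```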
NON-CONTACT TEMPERATURE MEASUREMENT IN THERMAL IMAGING SYSTEMS AND METHODS
- Louis Tremblay
- Pierre M. Boulanger
- Justin Muncaster
- James Klingshirn
- Robert Proebstel
- Giovanni Lepore
- Eugene Pochapsky
- Katrin Strandemar
- Nicholas Högasten
- Karl Rydqvist
- Theodore R. Hoelter
- Jeremy P. Walker
- Per O. Elmfors
- Austin A. Richards
- Sylan M. Rodriguez
- John C. Day
- Hugo Hedberg
- Tien Nguyen
- Fredrik Gihl
- Rasmus Loman
Systems and methods include an image capture component configured to capture infrared images of a scene, and a logic device configured to identify a target in the images, acquire temperature data associated with the target based on the images, evaluate the temperature data to determine a corresponding temperature classification, and process the identified target in accordance with the temperature classification. The logic device identifies a person and tracks the person across a subset of the images, identifies a measurement location for the target in a subset of the images based on target feature points identified by a neural network, and measures a temperature of the location using corresponding values from one or more captured thermal images. The logic device is further configured to calculate a core body temperature of the target using the temperature data, determine whether the target has a fever, and calibrate using one or more black bodies.
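The core-temperature-to-classification step can be sketched as below. The offset calibration and the fever thresholds are illustrative assumptions; the abstract specifies neither.

```python
def core_from_skin(skin_temp_c, offset_c=1.0):
    """Estimate core body temperature from a measured skin temperature using
    a simple calibration offset (e.g., derived from a black-body reference).
    The offset value is illustrative."""
    return skin_temp_c + offset_c

def classify_temperature(core_temp_c):
    """Map an estimated core body temperature to a coarse classification.
    Thresholds are illustrative; the patent leaves them unspecified."""
    if core_temp_c >= 38.0:
        return "fever"
    if core_temp_c >= 37.5:
        return "elevated"
    return "normal"
```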
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM
The present technology relates to an information processing apparatus, an information processing method, and a program capable of generating a path plan that avoids crowds.
A cost map indicating the risk of passing through each region is generated using crowd information. The present technology can be applied, for example, to unmanned aerial vehicle (UAV) traffic management (UTM) systems that control UAVs.
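A minimal sketch of such a cost map: a grid where cells near a reported crowd accumulate a cost that decays with distance, so a planner will route around them. The grid layout, crowd tuple format, and decay function are all assumptions.

```python
def crowd_cost_map(width, height, crowds, radius=2):
    """Build a grid cost map from crowd information: cells near a crowd get
    a high traversal cost so a planner can route a UAV around them.

    crowds: list of (cx, cy, density) tuples. Parameters are illustrative.
    """
    grid = [[0.0] * width for _ in range(height)]
    for cx, cy, density in crowds:
        for y in range(height):
            for x in range(width):
                d2 = (x - cx) ** 2 + (y - cy) ** 2
                if d2 <= radius ** 2:
                    grid[y][x] += density / (1 + d2)  # cost decays with distance
    return grid
```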
PEDESTRIAN SEARCH METHOD, SERVER, AND STORAGE MEDIUM
Provided are a pedestrian search method, a server, and a storage medium. The pedestrian search method is as follows: pedestrian detection is performed on each segment of monitoring video to obtain multiple pedestrian tracks, where each of the multiple pedestrian tracks includes multiple video frame images of a same pedestrian; pedestrian tracks belonging to the same pedestrian are determined according to the video frame images in the multiple pedestrian tracks, and the pedestrian tracks of the same pedestrian are merged.
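The merge step can be sketched with union-find, so that tracks judged to belong to the same pedestrian, even transitively, collapse into one. The `similar` predicate stands in for whatever appearance comparison the method actually uses.

```python
def merge_tracks(tracks, similar):
    """Merge pedestrian tracks that belong to the same pedestrian.

    tracks: {track_id: list of frame images (any feature objects here)}.
    similar(a, b): caller-supplied predicate comparing two tracks' frames.
    Union-find groups transitively similar tracks before merging.
    """
    parent = {tid: tid for tid in tracks}

    def find(t):
        while parent[t] != t:
            parent[t] = parent[parent[t]]  # path halving
            t = parent[t]
        return t

    ids = list(tracks)
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            if similar(tracks[a], tracks[b]):
                parent[find(a)] = find(b)

    merged = {}
    for tid in ids:
        merged.setdefault(find(tid), []).extend(tracks[tid])
    return list(merged.values())
```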
MOVEMENT STATE ESTIMATION DEVICE, MOVEMENT STATE ESTIMATION METHOD AND PROGRAM RECORDING MEDIUM
[Problem] To provide a movement state estimation device, a movement state estimation method, and a movement state estimation program capable of accurately estimating the movement state of monitored subjects even in a crowded environment. [Solution] A movement state estimation device according to the present invention includes a quantity estimating means (81) and a movement state estimating means (82). The quantity estimating means (81) uses a plurality of chronologically consecutive images to estimate the quantity of monitored subjects in each local region of each image. The movement state estimating means (82) estimates the movement state of the monitored subjects from chronological changes in the quantities estimated for each local region.
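A coarse sketch of both means, assuming subjects are given as grid-cell positions: count subjects per local region (grid cell) in each image, then read movement from the change in counts between consecutive images, with rising counts marking inflow regions and falling counts marking outflow. The grid size and data layout are illustrative.

```python
def region_counts(positions, cell=10):
    """Count monitored subjects per local region (grid cell) in one image."""
    counts = {}
    for x, y in positions:
        key = (int(x // cell), int(y // cell))
        counts[key] = counts.get(key, 0) + 1
    return counts

def movement_state(frames, cell=10):
    """Estimate movement from chronological changes in per-region quantities:
    net positive change marks inflow regions, negative marks outflow.
    A coarse stand-in for the estimating means described above."""
    changes = {}
    prev = region_counts(frames[0], cell)
    for positions in frames[1:]:
        cur = region_counts(positions, cell)
        for key in set(prev) | set(cur):
            changes[key] = changes.get(key, 0) + cur.get(key, 0) - prev.get(key, 0)
        prev = cur
    return changes
```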
Identifying objects within images from different sources
Techniques are disclosed for providing a notification that a person is at a particular location. For example, a resident device may receive from a user device an image that shows a face of a first person, the image being captured by a first camera of the user device. The resident device may also receive, from another device having a second camera, a second image showing a portion of a face of a second person, the second camera having a viewable area showing a particular location. The resident device may determine a score indicating a level of similarity between a first set of characteristics associated with the face of the first person and a second set of characteristics associated with the face of the second person. The resident device may then provide to the user device a notification based on determining the score.
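One way to sketch the score-and-notify step: map the Euclidean distance between the two characteristic sets into a [0, 1] score and notify when it clears a threshold. The score mapping and threshold are illustrative choices, not the patent's.

```python
def similarity_score(feats_a, feats_b):
    """Score in [0, 1] from the Euclidean distance between two characteristic
    sets; 1.0 means identical. The mapping is an illustrative choice."""
    d = sum((a - b) ** 2 for a, b in zip(feats_a, feats_b)) ** 0.5
    return 1.0 / (1.0 + d)

def maybe_notify(feats_a, feats_b, threshold=0.5):
    """Return a notification payload whose 'match' flag reflects whether the
    similarity score clears the threshold."""
    score = similarity_score(feats_a, feats_b)
    return {"match": score >= threshold, "score": score}
```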
Surveillance information generation apparatus, imaging direction estimation apparatus, surveillance information generation method, imaging direction estimation method, and program
A surveillance information generation apparatus (2000) includes a first surveillance image acquisition unit (2020), a second surveillance image acquisition unit (2040), and a generation unit (2060). The first surveillance image acquisition unit (2020) acquires a first surveillance image (12) generated by a fixed camera (10). The second surveillance image acquisition unit (2040) acquires a second surveillance image (22) generated by a moving camera (20). The generation unit (2060) generates surveillance information (30) relating to object surveillance, using the first surveillance image (12) and the second surveillance image (22).