Patent classifications
G06K9/52
Vehicular surrounding-monitoring control apparatus
A vehicular surrounding-monitoring control apparatus is mounted in a vehicle having an imaging unit that captures an image of the vehicle's surroundings and a display. The vehicular surrounding-monitoring control apparatus includes: an extraction section extracting, as an object region, an image region corresponding to each object from a captured image obtained by the imaging unit; a determining section determining whether a plurality of objects are coordinated into a group on the basis of positional relations of the object regions and the types of the objects; and a display control unit that, when the objects are determined to be coordinated into a group, displays a symbol image, which is specific to the group and expresses the presence of the group, superimposed on the captured image on the display.
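The grouping step above could be sketched as a greedy proximity-and-type clustering. All names (`group_objects`, the `detections` dictionaries) and the distance threshold are illustrative assumptions, not details from the patent:

```python
def center(box):
    """Return the (x, y) center of an (x1, y1, x2, y2) box."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def group_objects(detections, max_dist=50.0, group_types=("pedestrian",)):
    """Greedily cluster detections of a groupable type whose centers lie
    within max_dist of a member already in the cluster. Only clusters
    with two or more members count as a 'group' to display."""
    groups = []
    for det in detections:
        if det["type"] not in group_types:
            continue
        cx, cy = center(det["box"])
        placed = False
        for g in groups:
            for member in g:
                mx, my = center(member["box"])
                if ((cx - mx) ** 2 + (cy - my) ** 2) ** 0.5 <= max_dist:
                    g.append(det)
                    placed = True
                    break
            if placed:
                break
        if not placed:
            groups.append([det])
    return [g for g in groups if len(g) >= 2]
```

A symbol image would then be rendered once per returned group rather than once per detection.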
Systems and methods for object tracking
A method performed by an electronic device is described. The method includes determining a local motion pattern by determining a set of local motion vectors within a region of interest between a previous frame and a current frame. The method also includes determining a global motion pattern by determining a set of global motion vectors between the previous frame and the current frame. The method further includes calculating a separation metric based on the local motion pattern and the global motion pattern. The separation metric indicates a motion difference between the local motion pattern and the global motion pattern. The method additionally includes tracking an object based on the separation metric.
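One plausible formulation of the separation metric described above is the distance between the mean local motion and the mean global motion; this is an assumed formulation for illustration, not the patent's exact definition:

```python
import numpy as np

def separation_metric(local_vectors, global_vectors):
    """Magnitude of (mean local motion - mean global motion).
    A large value indicates the region of interest moves
    distinctly from the background between two frames."""
    local_mean = np.mean(np.asarray(local_vectors, dtype=float), axis=0)
    global_mean = np.mean(np.asarray(global_vectors, dtype=float), axis=0)
    return float(np.linalg.norm(local_mean - global_mean))
```

A tracker could then keep or re-detect the object depending on whether this metric exceeds a tuned threshold.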
SHIFT-AND-MATCH FUSION OF COLOR AND MONO IMAGES
In general, techniques are described that facilitate processing of color image data using both monochrome image data and color image data. A device comprising a monochrome camera, a color camera, and a processor may be configured to perform the techniques. The monochrome camera may be configured to capture monochrome image data of a scene. The color camera may be configured to capture color image data of the scene. The processor may be configured to match features of the color image data to features of the monochrome image data, and compute a finite number of shift values based on the matched features of the color image data and the monochrome image data. The processor may further be configured to shift the color image data based on the finite number of shift values to generate enhanced color image data.
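The "finite number of shift values" step could be approximated by taking the most common per-match displacements, then applying an integer shift. This is a minimal sketch under those assumptions; function names and the zero-fill policy are illustrative, not from the patent:

```python
from collections import Counter
import numpy as np

def compute_shifts(mono_pts, color_pts, n_shifts=3):
    """Per-match displacement (mono - color); return up to n_shifts
    of the most common rounded displacements."""
    diffs = np.asarray(mono_pts, float) - np.asarray(color_pts, float)
    rounded = [tuple(int(v) for v in np.round(d)) for d in diffs]
    return [s for s, _ in Counter(rounded).most_common(n_shifts)]

def shift_image(img, shift):
    """Shift a 2-D array by integer (dy, dx), filling vacated pixels with zero."""
    dy, dx = shift
    out = np.zeros_like(img)
    h, w = img.shape
    out[max(dy, 0):min(h, h + dy), max(dx, 0):min(w, w + dx)] = \
        img[max(-dy, 0):min(h, h - dy), max(-dx, 0):min(w, w - dx)]
    return out
```

In the described pipeline, the shifted color data would then be fused with the sharper monochrome data to produce the enhanced color image.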
DEVICE AND METHOD FOR MONITORING PEOPLE, METHOD FOR COUNTING PEOPLE AT A LOCATION
A monitoring device to monitor and to count people in a certain area includes an extracting module to extract images of persons from a first signal, and a computing module to process the images of persons from the extracting module. The extracting module removes the background of the first signal and extracts images of persons for the computing module. The computing module obtains the coordinates of the center of each image of a person and a hue value of each image of a person. The computing module can match images of persons to the persons being monitored and can continuously determine the current number of persons being monitored.
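The matching step, pairing an extracted person image with an already-tracked person by centroid distance and hue similarity, could be sketched as follows. The tolerances and the `tracked` record layout are assumptions for illustration:

```python
def match_person(centroid, hue, tracked, max_dist=30.0, max_hue=10.0):
    """Return the index of the tracked person whose stored centroid and
    hue are both within tolerance (nearest wins), or None if the
    extracted image belongs to a new, uncounted person."""
    best, best_d = None, None
    for i, p in enumerate(tracked):
        dx = centroid[0] - p["centroid"][0]
        dy = centroid[1] - p["centroid"][1]
        d = (dx * dx + dy * dy) ** 0.5
        if d <= max_dist and abs(hue - p["hue"]) <= max_hue:
            if best_d is None or d < best_d:
                best, best_d = i, d
    return best
```

Unmatched extractions would increment the person count; matched ones would update the stored centroid and hue.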
DETECTION OF OBJECTS IN IMAGES USING REGION-BASED CONVOLUTIONAL NEURAL NETWORKS
A transformed image is received. The transformed image includes an other-than-visible light image that has been captured using a transformation device. A region of the transformed image is isolated, the region being less than an entirety of the transformed image. By applying to the region a Convolutional Neural Network (CNN), which executes using a processor and a memory, and by processing only the region of the transformed image, an object of interest is detected in the region. Upon detection, an indication is produced to indicate the presence of the object of interest in the region.
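The region isolation and region-only inference could be sketched as below, with a stand-in callable in place of a real CNN; all names and the 0.5 decision threshold are illustrative assumptions:

```python
import numpy as np

def isolate_region(image, box):
    """Crop a (y1, x1, y2, x2) region; processing only this crop avoids
    running the network on the entire transformed image."""
    y1, x1, y2, x2 = box
    return image[y1:y2, x1:x2]

def detect_in_region(image, box, cnn):
    """Run the supplied classifier (a stand-in for the CNN) on the crop
    and return an indication of whether the object of interest is present."""
    score = cnn(isolate_region(image, box))
    return {"present": score > 0.5, "score": score, "box": box}
```

In practice, `cnn` would be a trained network invoked through a deep-learning framework; here any callable returning a confidence score works.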
METHODS AND APPARATUS FOR AUTOMATED NOISE AND TEXTURE OPTIMIZATION OF DIGITAL IMAGE SENSORS
Systems and methods are disclosed for calibrating an image sensor using a source image taken from the image sensor and comparing it to a reference image. In one embodiment, the method may involve determining the luminance and chrominance values of portions of the image at successive frequency levels and calculating a standard deviation at each frequency level for both the source image and the reference image. The standard deviation values may be compared and a difference determined. Using unit-length search vectors, noise values may be calculated to determine sensor calibration values.
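The per-frequency-level standard deviations could be approximated with a simple averaging pyramid, where each level halves the resolution and thus represents a lower frequency band. This is a rough sketch under that assumption; the patent's actual frequency decomposition may differ:

```python
import numpy as np

def band_stds(img, levels=3):
    """Standard deviation of the image at successive frequency levels,
    approximated by repeated 2x2 averaging (a simple image pyramid)."""
    img = np.asarray(img, float)
    stds = []
    for _ in range(levels):
        stds.append(float(img.std()))
        h, w = img.shape
        img = img[: h - h % 2, : w - w % 2]          # trim odd edges
        img = (img[0::2, 0::2] + img[1::2, 0::2]
               + img[0::2, 1::2] + img[1::2, 1::2]) / 4.0
    return stds

def std_differences(source, reference, levels=3):
    """Per-level difference between source and reference deviations,
    feeding the subsequent noise-value search."""
    return [s - r for s, r in zip(band_stds(source, levels),
                                  band_stds(reference, levels))]
```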
Estimation of object properties in 3D world
Objects within two-dimensional video data are modeled by three-dimensional models, as a function of object type and motion, by manually calibrating a two-dimensional image to the three spatial dimensions of a three-dimensional modeling cube. Calibrated three-dimensional locations of an object in motion in the two-dimensional image field of view of a video data input are determined and used to derive a heading direction of the object as a function of the camera calibration and the determined movement between the three-dimensional locations. The two-dimensional object image is replaced in the video data input with an object-type three-dimensional polygonal model having a projected bounding box that best matches a bounding box of an image blob, the model oriented in the determined heading direction. The bounding box of the replacing model is then scaled to fit the object image blob bounding box, and rendered with extracted image features.
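Two of the steps above, deriving a heading from successive 3-D positions and scaling the model's bounding box to the blob's, reduce to short calculations. Axis conventions and function names here are illustrative assumptions:

```python
import math

def heading(p_prev, p_curr):
    """Heading angle (radians) in the ground plane between two calibrated
    3-D positions (x, y, z), treating y as the vertical axis."""
    dx = p_curr[0] - p_prev[0]
    dz = p_curr[2] - p_prev[2]
    return math.atan2(dz, dx)

def fit_scale(model_box, blob_box):
    """Scale factors (sx, sy) that fit a projected model bounding box
    (w, h) onto the image-blob bounding box (w, h)."""
    return (blob_box[0] / model_box[0], blob_box[1] / model_box[1])
```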
Field display system, field display method, and field display program
A field display system is provided which can intelligibly present to a user the range in which a camera can capture an image of an entire target to be monitored, or of a certain part or more of the target. A projecting unit 5 projects a position in an image captured by a camera onto a plurality of monitoring domains, obtained by moving in parallel a region to be monitored which defines a range to be checked for the image-capturing situation of the camera, each monitoring domain being determined based on a height of the target to be monitored whose image is captured by the camera, and specifies fields of the plurality of monitoring domains as a range whose image the camera can capture without being blocked by an obstacle. An integrating unit 6 integrates the fields in the monitoring domains. A display control unit 7 causes the display apparatus to display an integration result of the fields.
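The integrating unit's role could be sketched as counting, per ground cell, in how many height-domains the cell is visible: requiring visibility in every domain corresponds to the entire target being capturable, and a lower count to "a certain part or more". The grid-cell representation and threshold parameter are assumptions for illustration:

```python
from collections import Counter

def integrate_fields(domain_fields, min_domains=None):
    """Return the cells visible in at least min_domains of the monitoring
    domains. Each field is a set of grid cells the camera sees
    unobstructed at that domain's height; the default (all domains)
    corresponds to the entire target being capturable."""
    if min_domains is None:
        min_domains = len(domain_fields)
    counts = Counter()
    for field in domain_fields:
        counts.update(field)
    return {cell for cell, c in counts.items() if c >= min_domains}
```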
Methods and systems for determining user liveness
A method for determining user liveness is provided that includes calculating, by a device, eye openness measures for a frame included in captured authentication data, and storing the eye openness measures in a buffer of the device. Moreover, the method includes calculating confidence scores from the eye openness measures stored in the buffer, and detecting an eye blink when the maximum confidence score is greater than a threshold score.
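The decision rule above can be sketched directly. The mapping from openness to a closure-confidence score is an assumed scoring (1 minus openness), not the patent's exact formula, and the threshold is illustrative:

```python
def confidence_scores(openness_buffer):
    """Map each eye-openness measure (1.0 = fully open, 0.0 = closed)
    to a closure-confidence score; a crude assumed scoring."""
    return [1.0 - o for o in openness_buffer]

def blink_detected(openness_buffer, threshold=0.8):
    """A blink (and hence liveness evidence) is detected when the
    maximum confidence score in the buffer exceeds the threshold."""
    return max(confidence_scores(openness_buffer)) > threshold
```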
Method and system for assessing damage to infrastructure
A method and system may assess the damage to infrastructure using aerial images captured from an unmanned aerial vehicle (UAV), a manned aerial vehicle (MAV) or from a satellite device. Specifically, an item of infrastructure may be identified for assessing damage. The UAV, MAV, or satellite device may then capture aerial images within an area which surrounds the identified infrastructure item. Subsequently, the aerial images may be analyzed to determine a condition and the extent and/or severity of the damage to the infrastructure item. Furthermore, the aerial images along with indications of the extent of the damage may be displayed on a computing device, where one indication of the severity of the damage surrounds another indication of the severity of the damage on the display.