G06V10/435

Image processing apparatus and non-transitory computer readable medium
10318801 · 2019-06-11

An image processing apparatus includes a first extraction unit, a second extraction unit, and a third extraction unit. The first extraction unit extracts a first area in an image that includes plural document images, the first area having low-frequency characteristics. The second extraction unit extracts a second area in the image, the second area having high-frequency characteristics. The third extraction unit extracts areas of the document images by combining the first area and the second area in accordance with whether a background of the image is white.
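The two-branch extraction described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the thresholds, the gradient-based notion of "frequency characteristics", and the exact combination rule conditioned on a white background are all assumptions the abstract leaves open.

```python
import numpy as np

def extract_document_areas(img, low_thresh=5.0, high_thresh=20.0):
    """Sketch of the three-unit pipeline (assumed details throughout).

    img: 2-D grayscale array with values in 0..255.
    """
    img = img.astype(float)

    # First extraction unit: low-frequency characteristics ~ smooth
    # regions, found where the local gradient magnitude is small.
    gy, gx = np.gradient(img)
    grad = np.hypot(gx, gy)
    low_freq = grad < low_thresh

    # Second extraction unit: high-frequency characteristics ~ text
    # and edges, found where the local gradient magnitude is large.
    high_freq = grad > high_thresh

    # Decide whether the scan background is white from the border pixels.
    border = np.concatenate([img[0], img[-1], img[:, 0], img[:, -1]])
    white_background = border.mean() > 200

    # Third extraction unit: combine the two masks.  Assumed rule: on a
    # white background, smooth regions that differ from the background
    # (e.g. photos) are kept alongside the edge evidence; otherwise only
    # the high-frequency evidence is trusted.
    if white_background:
        mask = (low_freq & (np.abs(img - border.mean()) > 30)) | high_freq
    else:
        mask = high_freq
    return mask
```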

CAMERA BASED SYSTEM FOR DETERMINING ROBOT HEADING IN INDOOR ENVIRONMENTS
20190155301 · 2019-05-23

A method for controlling a vehicle, comprising generating image data at a camera system of a robotic vehicle, receiving the image data at a processor of the robotic vehicle, identifying a plurality of bright spots in the image data using the processor, computing a boundary for the plurality of bright spots and generating control data as a function of the boundary.
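A toy version of the claimed steps might look like the sketch below. The details are assumptions: bright spots are taken as pixels above a fixed threshold (e.g. ceiling lights), the "boundary" as their axis-aligned bounding box, and the control datum as the horizontal offset of the box centre from the image centre.

```python
import numpy as np

def heading_from_bright_spots(image, brightness_thresh=200):
    """Hypothetical pipeline: find bright-spot pixels, compute their
    boundary, and derive a steering offset from it."""
    rows, cols = np.nonzero(image > brightness_thresh)
    if rows.size == 0:
        return None  # no bright spots visible; no heading update

    # Boundary of the plurality of bright spots: a bounding box.
    top, bottom = rows.min(), rows.max()
    left, right = cols.min(), cols.max()

    # Control data as a function of the boundary: signed offset of the
    # box centre from the image centre, usable to steer the robot back
    # onto the axis of the overhead lighting.
    centre_col = (left + right) / 2.0
    return centre_col - image.shape[1] / 2.0
```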

Virtualization of Tangible Interface Objects
20190080173 · 2019-03-14

An example system includes a stand configured to position a computing device proximate to a physical activity surface. The system further includes a video capture device, a detector, and an activity application. The video capture device is coupled for communication with the computing device and is adapted to capture a video stream that includes an activity scene of the physical activity surface and one or more interface objects physically interactable with by a user. The detector is executable to detect motion in the activity scene from the video stream and, responsive to detecting the motion, to process the video stream to detect one or more interface objects included in the activity scene of the physical activity surface. The activity application is executable to present virtual information on a display of the computing device based on the one or more detected interface objects.
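The motion-gating step of the detector could be as simple as frame differencing, sketched below. The threshold values and the differencing approach are assumptions; the abstract only says motion detection triggers the heavier object-detection pass.

```python
import numpy as np

def detect_motion(prev_frame, frame, diff_thresh=25, pixel_frac=0.01):
    """Hypothetical motion gate: flag motion when enough pixels change
    between consecutive grayscale frames of the activity-scene stream.
    Only when this returns True would the object detector run."""
    changed = np.abs(frame.astype(int) - prev_frame.astype(int)) > diff_thresh
    return changed.mean() > pixel_frac
```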

IMAGE PROCESSING DEVICE, OBSERVATION DEVICE, AND PROGRAM

An image processing device includes an image processing unit that performs image processing on an observed image in which a cell is imaged and an image processing method selector that is configured to determine an observed image processing method for analyzing the imaged cell on the basis of information of a processed image obtained through image processing of the image processing unit.
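The abstract does not say what information drives the method selection, so the following is purely illustrative: a selector that picks an analysis method for the imaged cell from a simple statistic of the processed image. Both the statistic and the method names are invented for the sketch.

```python
import numpy as np

def select_processing_method(processed_image):
    """Illustrative method selector (all details assumed): choose an
    observed-image analysis method based on the contrast of the image
    produced by the image processing unit."""
    contrast = processed_image.std()
    # Assumed rule: high-contrast results suit plain thresholding,
    # low-contrast ones a texture/edge-based analysis.
    return "threshold" if contrast > 40 else "edge_based"
```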

METHOD AND SYSTEM FOR DETECTION OF CONTAMINANTS PRESENT ON A LENS OF AN IMAGING DEVICE

A method and system for detection of contaminants present on a lens of an imaging device is disclosed. An input image received from the imaging device is split into a plurality of patches of predefined size; a kurtosis value is calculated for each patch and compared with the median kurtosis value. Patches having a kurtosis value less than the median are selected. Based on a comparison of a first maximum likelihood of the selected patches with a predefined threshold, one or more selected patches are stored. Such patches are split into a top and a bottom portion for processing based on discrete wavelet transform and singular value decomposition, respectively. The top and bottom portions are then merged, and patches for which a second maximum likelihood is greater than a second predefined threshold are stored. Further, contaminants in the image are classified into predefined categories based on one or more image features.
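The first stage, kurtosis-based patch selection, can be sketched directly (blur from a lens contaminant flattens the local intensity distribution, lowering kurtosis). The patch size and the use of Pearson kurtosis are assumptions; the later likelihood, DWT, and SVD stages are omitted.

```python
import numpy as np

def kurtosis(x):
    """Pearson kurtosis of a flattened patch (0 for a constant patch)."""
    x = x.astype(float).ravel()
    m = x.mean()
    s2 = ((x - m) ** 2).mean()
    return ((x - m) ** 4).mean() / (s2 ** 2) if s2 > 0 else 0.0

def select_low_kurtosis_patches(image, patch=8):
    """Split the image into fixed-size patches, compute each patch's
    kurtosis, and keep the patches whose kurtosis falls below the
    median, as in the first stage of the described pipeline."""
    h, w = image.shape
    coords, scores = [], []
    for r in range(0, h - patch + 1, patch):
        for c in range(0, w - patch + 1, patch):
            coords.append((r, c))
            scores.append(kurtosis(image[r:r + patch, c:c + patch]))
    median = np.median(scores)
    return [rc for rc, k in zip(coords, scores) if k < median]
```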

APPARATUS, METHOD AND IMAGE PROCESSING DEVICE FOR SMOKE DETECTION IN IMAGE
20180260963 · 2018-09-13

Smoke detection based on video images includes performing background image modeling on a current image to acquire a foreground image and a background image of the current image; acquiring, based on the foreground image, one or more candidate areas in the current image used for detecting a moving object; calculating attribute information of a candidate area corresponding to the current image and/or the background image; and determining whether smoke exists in the candidate area according to the attribute information. Smoke can thus be detected quickly and accurately through video images, and detection accuracy is maintained under changing light and in complex environments.
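The background-modeling and candidate-area steps can be sketched with a running-average model, one common choice; the abstract does not name the modeling method, and the thresholds below are assumptions. The subsequent smoke-attribute tests are omitted.

```python
import numpy as np

def update_background(background, frame, alpha=0.05):
    """Running-average background model: slowly blend each new frame
    into the background estimate."""
    return (1 - alpha) * background + alpha * frame

def candidate_areas(background, frame, diff_thresh=30):
    """Foreground mask: pixels that deviate from the background model.
    Connected regions of this mask are the candidate moving areas the
    later smoke-attribute analysis would examine."""
    return np.abs(frame - background) > diff_thresh
```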

Method and system for identifying bleeding

A three-dimensional image of a patient is generated over time; blood-vessel-site voxels are compared to model arterial and venous signals; and clusters of voxels that exhibit spatial growth over time are identified as bleeding sites.
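The growth test on a candidate cluster could be sketched as below. This is a toy version under assumed details: a cluster is represented by the voxels above a signal threshold in each time point's volume, and "spatial growth" is taken as a monotonically increasing voxel count.

```python
import numpy as np

def growing_clusters(volumes, signal_thresh=0.5):
    """Toy growth test: `volumes` is a time series of 3-D arrays for one
    candidate cluster; flag it as a bleeding site if the number of
    voxels above threshold strictly increases over time."""
    sizes = [int((v > signal_thresh).sum()) for v in volumes]
    return all(b > a for a, b in zip(sizes, sizes[1:]))
```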

Method and image processing apparatus for image-based object feature description
09996755 · 2018-06-12

A method and an image processing apparatus for image-based object feature description are provided. In the method, an object of interest in an input image is detected and a centroid and a direction angle of the object of interest are calculated. Next, a contour of the object of interest is recognized and a distance and a relative angle of each pixel on the contour to the centroid are calculated, in which the relative angle of each pixel is calibrated by using the direction angle. Then, a 360-degree range centered on the centroid is equally divided into multiple angle intervals and the pixels on the contour are separated into multiple groups according to a range covered by each angle interval. Afterwards, a maximum among the distances of the pixels in each group is obtained and used as a feature value of the group. Finally, the feature values of the groups are normalized and collected to form a feature vector that serves as a feature descriptor of the object of interest.
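The descriptor construction above maps cleanly to code. The sketch below assumes the contour pixels, centroid, and direction angle are already computed, and assumes normalization by the maximum feature value, which the abstract does not pin down.

```python
import numpy as np

def contour_descriptor(contour, centroid, direction_angle, n_bins=36):
    """Sketch of the feature descriptor: distance and direction-angle-
    calibrated relative angle of each contour pixel to the centroid,
    binned into equal angle intervals, max distance per bin, normalized.
    contour: (N, 2) array of (x, y) pixel coordinates."""
    contour = np.asarray(contour, dtype=float)
    d = contour - np.asarray(centroid, dtype=float)
    dist = np.hypot(d[:, 0], d[:, 1])

    # Relative angle to the centroid, calibrated by the direction angle
    # so the descriptor is rotation-invariant.
    ang = (np.arctan2(d[:, 1], d[:, 0]) - direction_angle) % (2 * np.pi)

    # Divide the 360-degree range into equal angle intervals and group
    # contour pixels by the interval they fall in.
    bins = (ang / (2 * np.pi / n_bins)).astype(int) % n_bins

    feat = np.zeros(n_bins)
    for b, r in zip(bins, dist):
        feat[b] = max(feat[b], r)   # max distance within each interval
    if feat.max() > 0:
        feat /= feat.max()          # normalization (assumed: by the max)
    return feat
```

For a unit circle centred on its centroid, every bin holds points at distance 1, so the descriptor is a constant vector of ones regardless of the direction angle.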

IMAGE PROCESSING APPARATUS AND NON-TRANSITORY COMPUTER READABLE MEDIUM
20180150688 · 2018-05-31

An image processing apparatus includes a first extraction unit, a second extraction unit, and a third extraction unit. The first extraction unit extracts a first area in an image that includes plural document images, the first area having low-frequency characteristics. The second extraction unit extracts a second area in the image, the second area having high-frequency characteristics. The third extraction unit extracts areas of the document images by combining the first area and the second area in accordance with whether a background of the image is white.