Patent classifications
G06T2207/20024
IMAGE QUALITY METRIC FOR HDR IMAGES AND VIDEO
Methods and systems for generating an image quality metric are described. A reference image and a test image are first converted to the ITP color space. After calculating difference images ΔI, ΔT, and ΔP using the color channels of the two images, the difference images are convolved with low-pass filters: one for the intensity (I) channel and one for the chroma (T or P) channels. The image quality metric is computed as a function of the sum of squares of the filtered ΔI, ΔT, and ΔP values. The chroma low-pass filter is designed to maximize agreement between the image quality metric and subjective results.
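A minimal sketch of the described metric, assuming the two images have already been converted to ITP and stored as H×W×3 arrays. The function name and the box filters are illustrative stand-ins for the patent's tuned low-pass filters:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def itp_quality_metric(ref_itp, test_itp, intensity_size=3, chroma_size=7):
    """Hypothetical sketch: ref_itp/test_itp are HxWx3 arrays in I, T, P order.
    Simple box filters stand in for the patent's tuned low-pass filters,
    with a wider support for the chroma (T, P) channels."""
    d = ref_itp.astype(float) - test_itp.astype(float)
    dI = uniform_filter(d[..., 0], size=intensity_size)  # filtered ΔI
    dT = uniform_filter(d[..., 1], size=chroma_size)     # filtered ΔT
    dP = uniform_filter(d[..., 2], size=chroma_size)     # filtered ΔP
    # Metric as a function of the sum of squares of the filtered differences.
    return float(np.sqrt(np.mean(dI**2 + dT**2 + dP**2)))
```

Identical images yield a metric of zero; any per-channel deviation increases it monotonically.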
Auxiliary filtering device of electronic device and cellphone
An auxiliary filtering device for face recognition is provided. The auxiliary filtering device excludes ineligible objects from identification according to the relationship between object distance and image size, the variation of the image over time, and/or the feature differences between images captured by different cameras, preventing the face recognition from being defeated with a photo or a video.
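One of the checks described, the relationship between object distance and image size, can be sketched with a pinhole-camera consistency test. The function name, the assumed 150 mm face width, and the tolerance are all hypothetical illustration parameters:

```python
def plausible_face(distance_mm, face_width_px, focal_px,
                   real_width_mm=150.0, tol=0.3):
    """Sketch of the distance/size check: a real face's pixel width should
    match the pinhole-camera prediction focal_px * real_width_mm / distance_mm.
    A photo or screen replay at the wrong scale fails this consistency test.
    (150 mm face width and 30% tolerance are illustrative assumptions.)"""
    expected_px = focal_px * real_width_mm / distance_mm
    return abs(face_width_px - expected_px) / expected_px <= tol
```

A face measured at 500 mm that spans the predicted pixel width passes; one far off the prediction (e.g. a small printed photo held at the same distance) is excluded.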
Enhanced catheter navigation methods and apparatus
Methods, apparatus, and systems are provided for facilitating the navigation of a catheter between first and second locations within a subject based on display of serial images corresponding to positions of the catheter at successive incremental times. Image production includes sensing catheter positions to produce location data for each time increment. For each position P.sub.i, the corresponding location data is processed to respectively produce an image I.sub.i reflecting the position of the catheter at a time T.sub.i. Each image I.sub.i is successively displayed at a time equal to T.sub.i+d, where d is an image processing visualization delay. Upon a condition that the catheter is displaced to a selected interim location between the first and second locations, the processing of the location data is switched from being performed by a first process associated with a first visualization delay to a second process associated with a second different visualization delay.
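The display-timing rule described, showing each image I_i at T_i + d and switching the processing delay once the interim location is reached, can be sketched as follows. The function name and the index-based switch trigger are illustrative assumptions:

```python
def display_schedule(sample_times, switch_index, delay_a, delay_b):
    """Sketch of the serial display schedule: each image I_i is shown at
    T_i + d, where d is the visualization delay of the active process.
    Once the catheter reaches the interim location (modeled here as the
    sample index switch_index), the pipeline switches from the first
    process (delay_a) to the second (delay_b)."""
    return [t + (delay_a if i < switch_index else delay_b)
            for i, t in enumerate(sample_times)]
```

For example, with samples at t = 0, 10, 20, 30 and a switch after the second sample, the first two images appear with the longer delay and the rest with the shorter one.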
IMAGE FUSION
In general, techniques are described regarding fusing or combining frames of image data to generate composite frames of image data. Cameras comprising camera processors configured to perform the techniques are also disclosed. A camera processor may capture multiple frames at various focal lengths. The frames of image data may have various regions of the respective frame in focus, whereas other regions of the respective frame may not be in focus, due to particular configurations of lens and sensor combinations used. The camera processor may combine the frames to achieve a single composite frame having both a first region (e.g., a center region) and a second region (e.g., an outer region) in focus.
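A minimal sketch of per-pixel focal-stack fusion, assuming the frames are grayscale arrays of equal size. The Laplacian-based focus measure and the function name are illustrative choices, not the patent's specific method:

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def fuse_focal_stack(frames):
    """Sketch: frames is a list of HxW grayscale arrays captured at
    different focal lengths. A local focus measure (smoothed absolute
    Laplacian, an illustrative choice) scores sharpness per pixel, and
    each output pixel is taken from the frame where it is sharpest."""
    stack = np.stack([f.astype(float) for f in frames])
    focus = np.stack([uniform_filter(np.abs(laplace(f)), size=5)
                      for f in stack])
    best = np.argmax(focus, axis=0)  # index of sharpest frame per pixel
    return np.take_along_axis(stack, best[None], axis=0)[0]
```

In the composite, a center region in focus in one frame and an outer region in focus in another are both rendered sharp.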
Detection target positioning device, detection target positioning method, and sight tracking device
Disclosed are a detection target positioning method and device. The method comprises: acquiring an original image and pre-processing it to obtain a gradation for each pixel in a target gradation image corresponding to a target region that includes a detection target; calculating first gradation sets corresponding to the rows of pixels of the target gradation image and second gradation sets corresponding to the columns of pixels of the target gradation image; and determining the rows of the two ends of the detection target in the column direction according to the first gradation sets, determining the columns of the two ends of the detection target in the row direction according to the second gradation sets, and determining a center of the detection target from the rows of the two ends in the column direction and the columns of the two ends in the row direction.
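The row-wise and column-wise gradation aggregation can be sketched as below, assuming the pre-processing reduces to thresholding a grayscale image so the (darker) target stands out. The function name and threshold convention are illustrative:

```python
import numpy as np

def locate_target(gray, threshold):
    """Sketch: gray is a 2-D grayscale (gradation) image; the detection
    target is assumed darker than threshold. Per-row and per-column sums
    of the binary mask play the role of the first and second gradation
    sets; their nonzero extents give the target's two ends in each
    direction, and the center is the midpoint of those ends."""
    mask = gray < threshold
    row_sums = mask.sum(axis=1)   # first gradation sets (one per row)
    col_sums = mask.sum(axis=0)   # second gradation sets (one per column)
    rows = np.nonzero(row_sums)[0]
    cols = np.nonzero(col_sums)[0]
    top, bottom = rows[0], rows[-1]
    left, right = cols[0], cols[-1]
    return ((top + bottom) / 2, (left + right) / 2)
```

For a dark pupil-like blob on a bright background, the returned (row, column) midpoint locates the target center without scanning the 2-D image a second time.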
Machine-learning for enhanced machine reading of non-ideal capture conditions
Implementations of the present disclosure include receiving a training image, providing a hash pattern that is representative of the training image, applying a plurality of filters to the training image to provide a respective plurality of filtered training images, identifying a filter to be associated with the hash pattern based on the plurality of filtered training images, and storing a mapping of the filter to the hash pattern within a set of mappings in a data store.
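The select-and-map step can be sketched as follows. The function name, the callable filters, and the externally supplied scoring function (a stand-in for whatever readability criterion ranks the filtered images) are all illustrative assumptions:

```python
def best_filter_for_hash(training_image, hash_pattern, filters, score, mapping):
    """Sketch: apply each candidate filter to the training image, keep
    the one whose filtered result scores highest (score() stands in for
    the readability/quality criterion), and record the hash -> filter
    association in the mapping store."""
    best = max(filters, key=lambda f: score(f(training_image)))
    mapping[hash_pattern] = best
    return best
```

At inference time, an image whose hash matches a stored pattern would be pre-processed with the mapped filter before machine reading.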
Pixel value calibration method and pixel value calibration device
A pixel value calibration method includes: obtaining input image data generated by pixels, the input image data including a first group of pixel values in a first color plane and a second group of pixel values in a second color plane, generated by a first portion and a second portion of the pixels respectively; determining a difference function associated with filter response values and target values, the filter response values being generated by utilizing characteristic filter coefficients to filter first and second estimated pixel values of estimated pixel data in the first and second color planes, respectively; determining a set of calibration filter coefficients by calculating a solution of the estimated pixel data, the solution resulting in a minimum value of the difference function; and filtering the input image data, by a filter circuit using the set of calibration filter coefficients, to calibrate the first group of pixel values.
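The core step, choosing coefficients that minimize a difference function between filter responses and target values, is a least-squares problem. A minimal sketch, assuming the characteristic-filter responses have been arranged as rows of a matrix (the function name and matrix layout are illustrative):

```python
import numpy as np

def solve_calibration(response_matrix, targets):
    """Sketch: find calibration filter coefficients c minimising
    ||A c - t||^2, where each row of A holds characteristic-filter
    responses of the estimated pixel values and t holds the target
    values. The least-squares solution attains the minimum of this
    quadratic difference function."""
    coeffs, *_ = np.linalg.lstsq(response_matrix, targets, rcond=None)
    return coeffs
```

The resulting coefficient set would then be loaded into the filter circuit to calibrate the first group of pixel values.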
Image processing apparatus, image processing method, and image pickup apparatus for displaying image for use in focusing operation
An apparatus capable of displaying an image from which a user can easily recognize the brightness and colors of an area of interest and easily determine whether a subject is in focus. The amount of image shift between parallax image signals is calculated. The amount of blur in an area where an image shift occurs in the parallax image signals is determined based on the amount of image shift. A blurring process is performed on at least one of the parallax image signals based on the amount of blur. An image based on the display image signal generated from the resulting parallax image signal is displayed on a display. The determined amount of blur is greater than the blur exhibited by a subject image defocused by the amount of defocus converted from the amount of image shift.
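The exaggerated blurring step can be sketched as below, assuming a Gaussian blur whose radius scales with the measured image shift. The function name, the Gaussian choice, and the gain > 1 factor (which makes the displayed blur exceed the natural defocus blur) are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def emphasize_defocus(parallax_image, image_shift, gain=1.5):
    """Sketch: blur one parallax image with a radius exaggerated
    (gain > 1, an assumption) relative to the defocus implied by the
    measured parallax image shift, so out-of-focus regions are easier
    to judge on the display. Zero shift (in focus) leaves the image
    unchanged."""
    sigma = gain * abs(image_shift)
    if sigma == 0:
        return parallax_image.astype(float)
    return gaussian_filter(parallax_image.astype(float), sigma=sigma)
```

In-focus areas (zero shift) stay sharp, while shifted areas are rendered visibly softer than their true defocus would suggest.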
Apparatus and method for contrast-enhanced ultrasound imaging
An apparatus and a method for contrast-enhanced ultrasound (CEUS) including use of a fluid dynamics model for the analysis of dynamic contrast-enhanced ultrasound (DCEUS).
Identifying location of shreds on an imaged form
Disclosed herein is a machine learning application for automatically reading filled-in forms. There are multiple steps involved in using a computer to accurately read a handwritten form. First, the system identifies the form. Second, the system identifies what parts of the form are important. Third, the important parts are extracted as image data (known as shreds). Finally, fourth, the system interprets the shreds. This application is focused on steps two and three of that overall process. The disclosed techniques relate to training a machine learning system on a given series of forms such that when provided future filled-in forms within that series, the system is able to extract the portions of the filled-in form that are important/relevant.