Patent classifications
G06T3/0056
IMAGE FEATURE COMBINATION FOR IMAGE-BASED OBJECT RECOGNITION
Methods, systems, and articles of manufacture to improve image recognition searching are disclosed. In some embodiments, a first document image of a known object is used to generate one or more other document images of the same object by applying one or more techniques for synthetically generating images. The synthetically generated images correspond to different variations in the conditions under which a potential query image might be captured. Features extracted from the initial image of the known object and features extracted from the one or more synthetically generated images are stored, along with their locations, as part of a common model of the known object. In other embodiments, image recognition search effectiveness is improved by transforming the locations of features of multiple images of a same known object into a common coordinate system. This can enhance the accuracy of certain aspects of existing image search/recognition techniques, including, for example, geometric verification.
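A minimal sketch of the common-model idea, assuming OpenCV's ORB features and a single illustrative perspective warp standing in for the synthetic capture-condition variants (the specific warp, feature type, and function names are assumptions, not taken from the patent):

```python
import cv2
import numpy as np

def build_common_model(image):
    """Extract ORB features from the original image and from synthetically
    warped copies, mapping every keypoint back into the original image's
    coordinate system so all features form one common model."""
    orb = cv2.ORB_create()
    h, w = image.shape[:2]
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    # Identity plus one illustrative perspective warp standing in for a
    # different capture condition (viewpoint/tilt); real use would add more.
    tilted = np.float32([[10, 5], [w - 20, 15], [w - 5, h - 10], [5, h - 25]])
    variants = [np.eye(3), cv2.getPerspectiveTransform(src, tilted)]

    descriptors_all, locations_all = [], []
    for H in variants:
        warped = cv2.warpPerspective(image, H, (w, h))
        keypoints, descriptors = orb.detectAndCompute(warped, None)
        if descriptors is None:
            continue
        pts = np.float32([kp.pt for kp in keypoints]).reshape(-1, 1, 2)
        # Map feature locations back through the inverse warp into the
        # common (original-image) coordinate system.
        back = cv2.perspectiveTransform(pts, np.linalg.inv(H))
        descriptors_all.append(descriptors)
        locations_all.append(back.reshape(-1, 2))

    return np.vstack(descriptors_all), np.vstack(locations_all)
```

Because every keypoint is mapped back through the inverse warp, all stored locations share the original image's coordinate system, which is what the geometric-verification step mentioned in the abstract relies on.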
COMPUTER-IMPLEMENTED METHOD FOR VISUALIZATION OF AN ELONGATED ANATOMICAL STRUCTURE
A computer-implemented method for visualization of an elongated anatomical structure (20), for example a fetal spine imaged using ultrasound, is provided. The method comprises the steps of: receiving a plurality of 3D ultrasound image volumes, each image volume depicting at least a portion of an elongated anatomical structure (20); on each 3D ultrasound image volume, automatically or semi-automatically fitting a parametric curve (30) to the depicted portion of the elongated anatomical structure, the parametric curve being defined by curve parameters; reformatting each 3D ultrasound image volume by applying a transformation which straightens the parametric curve along at least one axis, so as to generate a plurality of reformatted image volumes and reformatted parametric curves (32, 34); registering the reformatted image volumes with one another by determining the joining point of their respective parametric curves; and fusing the reformatted image volumes with one another to yield a fused image depicting the whole elongated anatomical structure, or a larger portion thereof than any single one of the 3D ultrasound image volumes.
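A rough sketch of the curve-fit-and-straighten step under simplifying assumptions: SciPy's spline routines stand in for the unspecified parametric curve fit, centerline points are assumed already detected, and the resampled patches are kept axis-aligned rather than using true curve-normal frames:

```python
import numpy as np
from scipy.interpolate import splprep, splev
from scipy.ndimage import map_coordinates

def straighten_volume(volume, centerline_pts, half_width=20):
    """volume: (Z, Y, X) array; centerline_pts: (N, 3) points (z, y, x)
    sampled along the structure. Returns a reformatted volume in which
    the fitted curve runs along the first axis."""
    # Fit a smoothing spline through the centerline (the "parametric curve").
    tck, _ = splprep(centerline_pts.T, s=len(centerline_pts))
    n_samples = volume.shape[0]
    u = np.linspace(0, 1, n_samples)
    cz, cy, cx = splev(u, tck)

    # Sample a patch around each curve point; a full method would orient
    # patches by curve-normal frames, this sketch keeps them axis-aligned.
    offs = np.arange(-half_width, half_width + 1)
    dy, dx = np.meshgrid(offs, offs, indexing="ij")
    out = np.empty((n_samples, offs.size, offs.size), dtype=volume.dtype)
    for i in range(n_samples):
        coords = np.stack([
            np.full(dy.shape, cz[i]),
            cy[i] + dy,
            cx[i] + dx,
        ])
        out[i] = map_coordinates(volume, coords, order=1, mode="nearest")
    return out
```

Applying this to each received volume yields the reformatted volumes, which can then be registered along the straightened axis and fused.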
DEVICE AND METHOD FOR FOVEATED RENDERING
A display driver includes interface circuitry, image processing circuitry, and drive circuitry. The interface circuitry is configured to receive a full frame image and a foveal image from a source external to the display driver. The image processing circuitry is configured to: upscale the full frame image; and render a foveated image from the upscaled full frame image and the foveal image. The foveated image includes a foveal area based on the foveal image, a peripheral area based on the upscaled full frame image, and a border area based on both the foveal image and the upscaled full frame image. The border area is located between the foveal area and the peripheral area. The drive circuitry is configured to drive a display panel using the foveated image.
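A hedged sketch of composing such a foveated frame in software, assuming 3-channel images, a 2x upscale factor, and a linear alpha ramp across the border area (all assumptions; the abstract leaves the blend and scale unspecified):

```python
import cv2
import numpy as np

def compose_foveated(full_frame, foveal, fovea_top_left, border=16):
    """full_frame: low-resolution full image (H, W, 3); foveal: high-res
    crop placed at fovea_top_left = (x, y) in panel coordinates."""
    # Assumed 2x upscale from transmitted full frame to panel resolution.
    panel_size = (full_frame.shape[1] * 2, full_frame.shape[0] * 2)
    up = cv2.resize(full_frame, panel_size, interpolation=cv2.INTER_LINEAR)

    x, y = fovea_top_left
    fh, fw = foveal.shape[:2]

    # Weight is 1 deep inside the foveal rectangle and ramps linearly to 0
    # over `border` pixels; the ramp region is the border area.
    wy = np.minimum(np.arange(fh), np.arange(fh)[::-1])
    wx = np.minimum(np.arange(fw), np.arange(fw)[::-1])
    alpha = np.minimum(np.minimum.outer(wy, wx) / border, 1.0)[..., None]

    region = up[y:y + fh, x:x + fw].astype(np.float32)
    blended = alpha * foveal.astype(np.float32) + (1.0 - alpha) * region
    up[y:y + fh, x:x + fw] = blended.astype(up.dtype)
    return up
```

The ramp makes the foveal area pure foveal content, the peripheral area pure upscaled content, and the border a mix of both, matching the three regions the abstract names.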
METHOD AND IMAGE-PROCESSING DEVICE FOR DETECTING FOREIGN OBJECTS ON A TRANSPARENT PROTECTIVE COVER OF A VIDEO CAMERA
A method for determining whether or not a transparent protective cover of a video camera comprising a lens-based optical imaging system is partly covered by a foreign object is disclosed. The method comprises: obtaining (402) a first captured image frame captured by the video camera with a first depth of field; obtaining (404) a second captured image frame captured by the video camera with a second depth of field which differs from the first depth of field; and determining (406) whether or not the protective cover is partly covered by the foreign object by analysing whether or not the first and second captured image frames are affected by the presence of the foreign object on the protective cover, such that the difference between the first depth of field and the second depth of field results in a difference in a luminance pattern of corresponding pixels of a first image frame and a second image frame. The first image frame is based on the first captured image frame, and the second image frame is based on the second captured image frame.
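A simplified sketch of the comparison step, assuming grayscale frames of a static scene and a plain per-pixel luminance difference with fixed thresholds (the thresholds and the Gaussian pre-smoothing are illustrative choices, not from the patent):

```python
import cv2
import numpy as np

def cover_is_contaminated(frame_shallow_dof, frame_deep_dof,
                          diff_threshold=25, area_fraction=0.01):
    """Both frames are grayscale uint8 views of the same static scene,
    captured with different apertures (different depths of field)."""
    # Light smoothing suppresses sensor noise before differencing.
    a = cv2.GaussianBlur(frame_shallow_dof, (5, 5), 0).astype(np.int16)
    b = cv2.GaussianBlur(frame_deep_dof, (5, 5), 0).astype(np.int16)

    # Pixels whose luminance pattern differs between the two depths of
    # field are candidates for a near-lens foreign object: the in-focus
    # scene changes little with aperture, an object on the cover changes a lot.
    diff = np.abs(a - b).astype(np.uint8)
    candidate = diff > diff_threshold

    # Declare contamination when the candidate region is large enough.
    return candidate.mean() > area_fraction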
AUGMENTED REALITY PROCESSING DEVICE AND METHOD
An augmented reality processing device is provided, comprising an image capturing circuit and a processor. The processor is connected to the image capturing circuit and executes the following operations: generating an original point cloud image according to a first environment image and a physical object in the first environment image; generating, from a second environment image, an expanded point cloud image corresponding to the physical object according to the first environment image and a point cloud set of the physical object, and generating a superimposed point cloud image according to the expanded point cloud image and the original point cloud image; and generating a transformation matrix according to the original point cloud image and the expanded point cloud image, and superimposing a virtual object onto the second environment image according to the superimposed point cloud image and the transformation matrix.
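For the transformation-matrix step, a standard SVD-based (Kabsch) rigid registration between matched points of the original and expanded point clouds is one plausible reading; the sketch below assumes correspondences are already established:

```python
import numpy as np

def rigid_transform(src_pts, dst_pts):
    """src_pts, dst_pts: (N, 3) matched points from the original and
    expanded point clouds. Returns a 4x4 matrix T with dst ~= T @ src
    in homogeneous coordinates."""
    src_c, dst_c = src_pts.mean(axis=0), dst_pts.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (src_pts - src_c).T @ (dst_pts - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T
```

The resulting matrix can then place the virtual object consistently in the second environment image's frame.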
Image analysis well log data generation
A well log is scanned for one or more dimensions that describe one or more features of a well. Each dimension includes a plurality of values in a numerical format that represents that dimension. A missing value is detected in a first plurality of values of a first dimension of the well log. In response to the missing value, the first dimension of the well log is transformed into a first image that visually depicts the first dimension, including the first plurality of values and the missing value. Based on the first image and on an image analysis algorithm, a second image is created that visually depicts the first plurality of values and includes a found depiction visually depicting a found value in place of the missing value. Based on the second image, the found depiction is converted into a first value in the numerical format.
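A toy sketch of the round trip from numbers to image and back, with OpenCV's Telea inpainting standing in for the patent's unspecified image analysis algorithm (the rendering scheme and value read-back are illustrative assumptions):

```python
import cv2
import numpy as np

def fill_missing_value(values, v_min, v_max, height=256):
    """values: 1-D float array with np.nan at the missing sample; v_min,
    v_max: display range of the dimension. Returns a filled copy."""
    values = np.asarray(values, dtype=float)
    n = len(values)
    img = np.zeros((height, n), np.uint8)
    mask = np.zeros((height, n), np.uint8)

    for col, v in enumerate(values):
        if np.isnan(v):
            mask[:, col] = 255                # column to be inpainted
        else:
            row = int((v - v_min) / (v_max - v_min) * (height - 1))
            img[row, col] = 255               # plot the curve point

    # Image analysis step: synthesize plausible curve pixels in the gap.
    filled = cv2.inpaint(img, mask, 3, cv2.INPAINT_TELEA)

    out = values.copy()
    for col in np.flatnonzero(np.isnan(values)):
        row = int(np.argmax(filled[:, col]))  # brightest row -> found value
        out[col] = v_min + row / (height - 1) * (v_max - v_min)
    return out
```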
Projection device and projection image correction method thereof
A projection device and a projection image correction method are provided. Four target coordinates of four target vertices forming a target quadrilateral boundary are obtained. A first trapezoidal boundary is obtained according to a predetermined image boundary and a first coordinate component of each of the four target coordinates. At least one edge of the target quadrilateral boundary is extended until intersecting with at least one of two reference line segments to obtain a second trapezoidal boundary. Bases of the first trapezoidal boundary are perpendicular to bases of the second trapezoidal boundary. First direction scaling processing is performed according to the first trapezoidal boundary, and second direction scaling processing is performed according to the second trapezoidal boundary, to scale an original image into a target image block aligned with the target quadrilateral boundary in an output image. The projection device projects the output image to display a rectangular projection image.
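The patent decomposes the warp into two one-dimensional trapezoidal scalings; the sketch below reaches the same end state, an original image aligned with the four target vertices, using a single perspective warp in OpenCV instead (a deliberately swapped-in technique, not the patented two-pass method):

```python
import cv2
import numpy as np

def warp_into_quad(original, target_quad, out_size):
    """target_quad: four (x, y) target vertices in output-image order
    (top-left, top-right, bottom-right, bottom-left); out_size: (w, h)
    of the output image sent to the projector."""
    h, w = original.shape[:2]
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    dst = np.float32(target_quad)
    M = cv2.getPerspectiveTransform(src, dst)
    # Pixels outside the target quadrilateral stay black in the output,
    # so the projected result appears as a rectangle on the surface.
    return cv2.warpPerspective(original, M, out_size)
```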
ENCODING PROGRAM MEDIA, ENCODING METHOD, ENCODING APPARATUS, DECODING PROGRAM MEDIA, DECODING METHOD, AND DECODING APPARATUS
An encoding method includes: acquiring a first image; separating the first image into a plurality of second images by extracting a pixel from the first image after every predetermined number of pixels in each of the horizontal and vertical directions of the first image; and encoding each of the separated second images. By transmitting those pieces of encoded data, even if a packet loss occurs in one of the second images, the missing pixels can be regenerated based on the corresponding neighboring pixels in the other second images.
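A minimal sketch of the separation and of loss concealment for the case of a 2x2 polyphase split (even image dimensions, uint8 pixels, and the neighbor-averaging estimator are assumptions):

```python
import numpy as np

def separate(image):
    """Split into four sub-images: sub (i, j) holds pixels (i::2, j::2).
    Assumes even height and width."""
    return {(i, j): image[i::2, j::2] for i in range(2) for j in range(2)}

def reassemble(subs, shape, lost=None):
    """Rebuild the full image; if one sub-image was lost to a packet
    drop, estimate it from neighboring pixels of the other three."""
    out = np.zeros(shape, dtype=np.float32)
    for (i, j), sub in subs.items():
        if (i, j) != lost:
            out[i::2, j::2] = sub
    if lost is not None:
        i, j = lost
        # The horizontal and vertical neighbors of every missing pixel
        # belong to received sub-images, so average them as the estimate.
        est = (subs[(i, 1 - j)].astype(np.float32) +
               subs[(1 - i, j)].astype(np.float32)) / 2.0
        out[i::2, j::2] = est
    return out.astype(np.uint8)
```

Because adjacent pixels always land in different sub-images, any single lost packet only removes every fourth pixel, and each of those has all four neighbors available for concealment.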
METHOD AND APPARATUS TO EVALUATE RADAR IMAGES AND RADAR DEVICE
In an embodiment, a method to evaluate radar images includes providing a first raw radar image and a second raw radar image and determining whether a reliability criterion is fulfilled. The method further includes using a first coordinate and a second coordinate output by a trained neural network as an estimate of a position of an object if the reliability criterion is fulfilled, the trained neural network using the first raw radar image and the second raw radar image as an input. The method further includes using a third coordinate and a fourth coordinate output by another radar processing pipeline as the estimate of the position of the object if the reliability criterion is not fulfilled, that radar processing pipeline using the first raw radar image and the second raw radar image as an input.
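A skeletal sketch of the gating logic, with the network's own confidence score standing in for the unspecified reliability criterion and a simple peak search standing in for the other radar processing pipeline (both stand-ins are assumptions):

```python
import numpy as np

def estimate_position(raw_a, raw_b, net, fallback_pipeline,
                      confidence_threshold=0.8):
    """raw_a, raw_b: two raw radar images. net returns (x, y, confidence);
    fallback_pipeline returns (x, y)."""
    x, y, confidence = net(raw_a, raw_b)
    if confidence >= confidence_threshold:    # reliability criterion met
        return x, y
    return fallback_pipeline(raw_a, raw_b)    # criterion not met

def peak_search_pipeline(raw_a, raw_b):
    """Illustrative fallback: non-coherent integration of both raw images
    followed by a peak search (a stand-in, not the patented pipeline)."""
    power = np.abs(raw_a) ** 2 + np.abs(raw_b) ** 2
    iy, ix = np.unravel_index(np.argmax(power), power.shape)
    return float(ix), float(iy)
```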
Vehicle image processing apparatus and vehicle image processing method
A vehicle image processing apparatus includes a group of cameras, a drawing unit that converts a captured image into an image viewed along a line of sight running from a predetermined position in a predetermined direction, a viewing-line-of-sight changing unit that detects whether a first line of sight is switched to a second line of sight, and a viewing-line-of-sight generation/updating unit that acquires parameters concerning the first and second lines of sight after detecting the switching and generates a parameter which is gradually changed from the parameter of the first line of sight to the parameter of the second line of sight. Moreover, the drawing unit generates, on the basis of the gradually changed parameter, an image which is gradually changed from an image viewed along the first line of sight to an image viewed along the second line of sight.
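A small sketch of the gradual parameter change, assuming the viewing parameters reduce to a position and two view angles and that a fixed-length smoothstep blend is acceptable (both assumptions; the abstract does not fix the parameterization or easing):

```python
import numpy as np

def line_of_sight_transition(params_from, params_to, n_frames=30):
    """params_*: dicts with 'position' (3-vector) and 'yaw_pitch'
    (2-vector of angles in radians). Yields one interpolated parameter
    set per rendered frame of the transition."""
    p0 = np.asarray(params_from["position"], float)
    p1 = np.asarray(params_to["position"], float)
    a0 = np.asarray(params_from["yaw_pitch"], float)
    a1 = np.asarray(params_to["yaw_pitch"], float)
    # Wrap angle differences into (-pi, pi] so rotation takes the short way.
    da = (a1 - a0 + np.pi) % (2 * np.pi) - np.pi
    for k in range(1, n_frames + 1):
        t = k / n_frames
        s = 3 * t**2 - 2 * t**3           # smoothstep easing: gentle start/stop
        yield {"position": (1 - s) * p0 + s * p1,
               "yaw_pitch": a0 + s * da}
```

Feeding each yielded parameter set to the drawing unit produces the gradual change from the first line-of-sight view to the second that the abstract describes.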