Patent classifications
G06V10/752
Method and system to determine distance to an object in an image
A controller/application analyzes image data from a camera to determine the distance to an object in an image based on the size of the object in the image, a known focal length of the camera that captured the image, and a known dimension of the actual object. The known dimension of the object may be retrieved from a database that is indexed according to outline shape, color, markings, contour, or other similar features or characteristics. The distance determined from analysis of the image and the objects therein may be used to calibrate, or to verify the calibration of, complex distance-determining systems that may include LIDAR. Object distance determinations in different image frames taken a known time apart, whether to the same or different objects, may be used to determine the speed of the camera that took the images, or the speed of a vehicle associated with the camera.
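The geometry described above is the standard pinhole-camera similar-triangles relation. A minimal sketch, assuming a focal length expressed in pixels and a known real-world object height (all function names and the stop-sign numbers here are illustrative, not from the patent):

```python
def distance_to_object(focal_length_px: float,
                       real_height_m: float,
                       image_height_px: float) -> float:
    """Pinhole-camera estimate: Z = f * H / h."""
    return focal_length_px * real_height_m / image_height_px

def speed_between_frames(d1_m: float, d2_m: float, dt_s: float) -> float:
    """Camera (or vehicle) speed from two distance estimates taken
    a known time apart."""
    return abs(d1_m - d2_m) / dt_s

# A sign of known height 0.75 m spans 50 px in a camera with f = 1000 px:
d1 = distance_to_object(1000.0, 0.75, 50.0)   # 15.0 m
d2 = distance_to_object(1000.0, 0.75, 60.0)   # 12.5 m in the next frame
v = speed_between_frames(d1, d2, 0.5)         # 5.0 m/s closing speed
```

The same distance estimate, compared against a LIDAR reading of the same object, would serve as the calibration check the abstract mentions.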
Removing unwanted objects from a photograph
Methods, systems, and computer program products for removing unwanted objects from a photograph are provided. Aspects include identifying a plurality of objects in the photograph and classifying each of the plurality of objects as either a static object or a dynamic object. Aspects also include removing one or more of the plurality of objects classified as dynamic objects from the photograph and identifying one or more additional photographs that include one or more of the plurality of objects classified as static objects. Aspects further include integrating content from at least one of the one or more additional photographs into the locations of the removed dynamic objects in the photograph.
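The fill-in step reduces to a masked composite: wherever a pixel belonged to a removed dynamic object, take the corresponding pixel from an aligned additional photograph in which that spot shows only static scenery. A minimal sketch with toy single-channel pixel grids (the function name and values are hypothetical; real photographs would also need registration/alignment first):

```python
def remove_dynamic_objects(photo, dynamic_mask, fill_photo):
    """Replace pixels flagged as dynamic with content from an aligned
    additional photo that shows only static scenery at those locations."""
    h, w = len(photo), len(photo[0])
    return [[fill_photo[y][x] if dynamic_mask[y][x] else photo[y][x]
             for x in range(w)] for y in range(h)]

photo = [[1, 1, 9],
         [1, 9, 9]]                    # 9 = pixels covered by a passing car
mask  = [[False, False, True],
         [False, True,  True]]         # True where the dynamic object was
fill  = [[1, 1, 2],
         [1, 2, 2]]                    # same scene, car absent
result = remove_dynamic_objects(photo, mask, fill)
# result == [[1, 1, 2], [1, 2, 2]]
```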
IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND IMAGE PROCESSING PROGRAM
An image processing device includes a data processing unit that processes data of a face image captured so as to include a face. The data processing unit generates an edge image by filtering the face image to detect edges along a scanning direction; extracts, from each position in the edge image corresponding to the points of a sampling curve, a sampling value carrying the gradient magnitude and the gradient sign; calculates a likelihood for the sampling curve by taking points having a positive gradient and points having a negative gradient as likelihood evaluation targets in a first point group and a second point group, respectively; and detects, among a plurality of sampling curves, the sampling curve having the maximum likelihood as a pupil or an iris.
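The idea can be sketched with circles as the sampling curves: a dark pupil on a bright background produces a negative horizontal gradient on its left edge and a positive one on its right edge, so a curve scores well when its two half-point-groups show the expected signs. This is an illustrative reconstruction, not the patent's actual scoring; all names and the synthetic image are assumptions:

```python
import math

def curve_likelihood(grad, cx, cy, r, n=16):
    """Score a circular sampling curve: with a dark pupil and a left-to-
    right scan, points on the left half should show a negative gradient
    and points on the right half a positive one."""
    score = 0.0
    for k in range(n):
        t = 2 * math.pi * k / n
        x = int(round(cx + r * math.cos(t)))
        y = int(round(cy + r * math.sin(t)))
        if not (0 <= y < len(grad) and 0 <= x < len(grad[0])):
            continue
        g = grad[y][x]
        if math.cos(t) < 0:            # first point group: left half
            score += max(0.0, -g)
        else:                          # second point group: right half
            score += max(0.0, g)
    return score

def best_curve(grad, candidates):
    """Pick the candidate (cx, cy, r) with maximum likelihood."""
    return max(candidates, key=lambda c: curve_likelihood(grad, *c))

# Synthetic test: a dark pupil (disc of radius 3) on a bright background.
img = [[0 if (x - 5)**2 + (y - 5)**2 <= 9 else 10 for x in range(11)]
       for y in range(11)]
grad = [[img[y][min(x + 1, 10)] - img[y][max(x - 1, 0)] for x in range(11)]
        for y in range(11)]
best = best_curve(grad, [(5, 5, 2), (5, 5, 3), (5, 5, 4)])
# best == (5, 5, 3): the curve on the true pupil boundary wins
```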
MODEL-BASED SEGMENTATION OF AN ANATOMICAL STRUCTURE
A system and method are provided for segmentation of an anatomical structure in which a user may interactively specify a limited set of boundary points of the anatomical structure in a view of a medical image. The set of boundary points may, on its own, be considered an insufficient segmentation of the anatomical structure in the medical image, but is instead used to select a segmentation model from a plurality of different segmentation models. The selection is based on a goodness-of-fit measure between the boundary points and each of the segmentation models. For example, a best-fitting model may be selected and used for segmentation of the anatomical structure. The user therefore does not need to delineate the entire anatomical structure, which would be time-consuming and ultimately error-prone, nor does a segmentation algorithm need to autonomously select a segmentation model, which may yield an erroneous selection.
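One simple goodness-of-fit measure consistent with the abstract is the mean distance from each user-placed boundary point to the nearest point of a candidate model's mean shape; the model minimizing that error is selected. A minimal sketch under that assumption (the measure, shapes, and names are illustrative, not from the patent):

```python
import math

def fit_error(boundary_pts, model_pts):
    """Goodness-of-fit: mean distance from each user-placed boundary
    point to the nearest point of a candidate segmentation model."""
    total = 0.0
    for bx, by in boundary_pts:
        total += min(math.hypot(bx - mx, by - my) for mx, my in model_pts)
    return total / len(boundary_pts)

def select_model(boundary_pts, models):
    """Return the name of the best-fitting segmentation model."""
    return min(models, key=lambda name: fit_error(boundary_pts, models[name]))

# Two toy mean shapes: a circle of radius 10 and a 20-by-8 ellipse.
angles  = [k * math.pi / 8 for k in range(16)]
circle  = [(math.cos(t) * 10, math.sin(t) * 10) for t in angles]
ellipse = [(math.cos(t) * 20, math.sin(t) * 8) for t in angles]
models  = {"circle": circle, "ellipse": ellipse}
clicks  = [(10, 0), (0, 10), (-10, 0), (0, -10)]   # user clicks near r = 10
chosen  = select_model(clicks, models)
# chosen == "circle"
```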
Using obstacle clearance to measure precise lateral gap
A system and method are provided for identifying an object along a road, where the object may be represented by a bounding box, and projecting a set of obstacle points within the bounding box corresponding to the identified object. In one aspect, a two-dimensional plane oriented perpendicular to the direction of movement of the vehicle may be identified. In another aspect, the areas of the plane that may be occupied, based on the set of obstacle points, may be determined to generate a contour of the identified object. Thereafter, the height profiles of the identified object and of the vehicle may be determined. Based on the height profiles, a minimum clearance may be determined.
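Comparing height profiles matters because the narrowest gap may occur at a specific height (e.g., a mirror overhanging at roof level). A minimal sketch, assuming obstacle points already projected onto the perpendicular plane as (lateral offset, height) pairs and a vehicle half-width given per 1 m height band (the binning, names, and numbers are all illustrative assumptions):

```python
def min_lateral_clearance(obstacle_pts, vehicle_profile):
    """obstacle_pts: (lateral_offset_m, height_m) points on the plane
    perpendicular to travel, positive lateral = toward the obstacle.
    vehicle_profile: {height_band_m: half_width_m} of the vehicle.
    Returns the smallest gap over all height bands the two share."""
    clearance = float("inf")
    for lateral, height in obstacle_pts:
        band = int(height)                     # 1 m height bands
        if band in vehicle_profile:
            gap = lateral - vehicle_profile[band]
            clearance = min(clearance, gap)
    return clearance

# An overhanging point at 1.4 m height is the closest obstacle point:
obstacle = [(1.5, 0.3), (1.2, 1.4), (1.6, 2.7)]
vehicle  = {0: 0.9, 1: 0.9, 2: 0.7}            # narrower at roof height
gap = min_lateral_clearance(obstacle, vehicle)
# gap is approximately 0.3 m (1.2 - 0.9 in the 1 m band)
```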
METHODS AND APPARATUS FOR OXYGENATION AND/OR CO2 REMOVAL
Described is an apparatus for oxygenation and/or CO2 clearance of a patient, comprising: a flow source, or a connection for a flow source, for providing a gas flow; a gas flow modulator; and a controller to control the gas flow, wherein the controller is operable to: receive input relating to heart activity and/or trachea gas flow of the patient, and control the gas flow modulator to provide a varying gas flow with one or more oscillating components at a frequency or frequencies based on the heart activity and/or trachea gas flow of the patient.
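The control law amounts to superimposing an oscillation on a base flow, with the oscillation frequency tracking a physiological rate. A minimal sketch assuming a single sinusoidal component locked to heart rate (the function, units, and numbers are illustrative, not from the patent):

```python
import math

def gas_flow(t_s, base_lpm, amp_lpm, heart_rate_bpm):
    """Flow command (L/min): a constant base flow plus an oscillating
    component whose frequency tracks the patient's heart rate."""
    f_hz = heart_rate_bpm / 60.0
    return base_lpm + amp_lpm * math.sin(2 * math.pi * f_hz * t_s)

# 60 bpm -> 1 Hz oscillation around a 40 L/min base flow:
flow = gas_flow(0.25, 40.0, 5.0, 60.0)   # 45.0, the sine peak at t = 0.25 s
```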
DETECTION METHOD AND SYSTEM FOR BASE-STATION FEEDER LINE, AND RELATED APPARATUS
Provided in the present disclosure are a detection method and system for a base station feeder line, and a related apparatus. The method includes: storing, by an image acquisition terminal, a collected feeder line image of a base station to be detected in correspondence with an identifier of that base station; and searching for and acquiring, by an image processing terminal, a target feeder line image according to a detection request, preprocessing the target feeder line image to convert it into a feature grayscale image, comparing the obtained feature grayscale image with a standard feeder line feature grayscale image, and determining, according to the comparison result, whether the feeder line of the target base station is correctly installed.
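The grayscale comparison step can be sketched as a luma conversion followed by a per-pixel difference against the standard image, with a tolerance threshold deciding correct installation. The conversion weights are the common ITU-R BT.601 luma coefficients; the comparison metric and threshold are illustrative assumptions, not the patent's:

```python
def to_grayscale(rgb_img):
    """Luma-weighted grayscale conversion of an RGB pixel grid (BT.601)."""
    return [[0.299 * r + 0.587 * g + 0.114 * b for (r, g, b) in row]
            for row in rgb_img]

def feeder_installed_correctly(target_gray, standard_gray, tol=10.0):
    """Mean absolute per-pixel difference against the standard image;
    a value at or below the tolerance counts as correctly installed."""
    diff, n = 0.0, 0
    for row_t, row_s in zip(target_gray, standard_gray):
        for t, s in zip(row_t, row_s):
            diff += abs(t - s)
            n += 1
    return (diff / n) <= tol

standard = [[100.0, 100.0], [100.0, 100.0]]
target   = [[104.0,  98.0], [101.0, 100.0]]
ok = feeder_installed_correctly(target, standard)   # True: mean diff 1.75
```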
Automatic localized evaluation of contours with visual feedback
A localized evaluation network incorporates a discriminator acting as a classifier, which may be included within a generative adversarial network (GAN). The GAN may include a generative network, such as U-NET, for creating segmentations. The localized evaluation network is trained on image pairs comprising medical images of organs of interest and segmentation (mask) images; it learns to distinguish whether an image pair does or does not represent the ground truth. The system examines interior layers of the discriminator and evaluates how much each localized image region contributes to the final classification; the discriminator may identify the regions of the image pair that drive a classification by analyzing the layer weights of the machine learning model. Disclosed embodiments include a visual attribute, such as a heat map, that represents the contributions of localized regions of a contour to an overall confidence score. These localized regions may be highlighted and reported for quality-assurance review.
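The patent attributes the heat map to inspection of the discriminator's interior layer weights; a simpler, model-agnostic way to illustrate the same idea is occlusion saliency, where each region is blanked out and the drop in classifier confidence is recorded. This is a stand-in technique, not the patent's method, and the toy scoring function below plays the role of the discriminator:

```python
def contribution_map(image, score_fn):
    """Occlusion-style saliency: zero out each pixel region and record
    how much the classifier's confidence drops. score_fn stands in for
    the trained discriminator."""
    base = score_fn(image)
    h, w = len(image), len(image[0])
    heat = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            occluded = [row[:] for row in image]
            occluded[y][x] = 0.0
            heat[y][x] = base - score_fn(occluded)   # confidence drop
    return heat

score = lambda img: sum(map(sum, img)) / 4.0   # toy "confidence": mean pixel
heat = contribution_map([[4.0, 0.0], [0.0, 0.0]], score)
# heat == [[1.0, 0.0], [0.0, 0.0]]: the bright pixel drives the score
```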
Predictive information for free space gesture control and communication
Free-space machine interface and control can be facilitated by predictive entities useful in interpreting a control object's position and/or motion (including objects having one or more articulating members, e.g., humans, animals, and/or machines). Predictive entities can be driven using motion information captured from image information or its equivalents. Predictive information can be improved by applying techniques that correlate it with information from observations.
Robotic surveying of fruit plants
A method of machine vision includes identifying contours of fruits in a first image and a second image and performing two-way matching of contours to identify pairs of matched contours, each pair comprising a respective first contour in the first image that matches a respective second contour in the second image. For each pair of matched contours, a respective affine transformation that transforms points in the respective second contour to points in the respective first contour is identified. The second image is mapped to the first image using the affine transformations to form a composite image and the number of fruits in the composite image is counted.
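The pipeline can be sketched end to end with contours reduced to centroids: mutual (two-way) nearest-neighbour matching finds pairs, the matched pairs fit a transform mapping image 2 into image 1 (here a pure translation, a degenerate case of the affine transform the patent names), and fruits closer than a merge radius after mapping are counted once. All names and coordinates are illustrative assumptions:

```python
import math

def two_way_matches(cents1, cents2):
    """Mutual nearest-neighbour matching of contour centroids."""
    def nearest(p, pts):
        return min(range(len(pts)),
                   key=lambda i: math.hypot(p[0] - pts[i][0],
                                            p[1] - pts[i][1]))
    pairs = []
    for i, c1 in enumerate(cents1):
        j = nearest(c1, cents2)
        if nearest(cents2[j], cents1) == i:    # match must hold both ways
            pairs.append((i, j))
    return pairs

def count_fruits(cents1, cents2, merge_radius=5.0):
    """Fit a translation from the matched pairs, map image-2 centroids
    into image-1 coordinates, and count fruits in the composite,
    merging duplicates closer than merge_radius."""
    pairs = two_way_matches(cents1, cents2)
    dx = sum(cents1[i][0] - cents2[j][0] for i, j in pairs) / len(pairs)
    dy = sum(cents1[i][1] - cents2[j][1] for i, j in pairs) / len(pairs)
    composite = list(cents1)
    for x, y in cents2:
        mapped = (x + dx, y + dy)
        if all(math.hypot(mapped[0] - cx, mapped[1] - cy) > merge_radius
               for cx, cy in composite):
            composite.append(mapped)           # a fruit seen only in image 2
    return len(composite)

img1 = [(10, 10), (40, 12)]                    # two fruits in the first image
img2 = [(0, 8), (30, 10), (60, 11)]            # camera moved ~+10 px in x
count = count_fruits(img1, img2)
# count == 3: two shared fruits merge, the (60, 11) fruit is new
```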