Patent classifications
G06V10/457
System for automated text and halftone segmentation
A method and system for segmenting text from non-text portions of a digital image using the size, solidity, and run length characteristics of connected components within the image data. For a connected component comprising a rectangular group of pixels enclosing a set of connected pixels having the same binary state, the size characteristic may be based on a ratio of height to width of the connected component and the total number of pixels within the connected component, the solidity characteristic may be based on a ratio of pixels within the connected component to a total number of pixels within a convex hull of the set of connected pixels, and the run length characteristic may be based on a number of transitions within the connected component.
UNDERWATER CAMERA BIOMASS PREDICTION AGGREGATION
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for underwater camera biomass prediction aggregation. In some implementations, an exemplary method includes obtaining images of fish captured by an underwater camera; providing data of the images to a trained model; obtaining output of the trained model indicating likelihoods that the biomasses of the fish fall within multiple ranges; combining likelihoods of the output based on one or more ranges common to likelihoods of two or more fish to generate a biomass distribution; and determining an action based on the biomass distribution.
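The combining step can be sketched as follows. This is one plausible reading of the abstract (sum likelihoods over shared ranges, then normalize), not the patented aggregation rule; the range keys and input shape are assumptions.

```python
def aggregate_biomass(per_fish_likelihoods):
    """Combine per-fish likelihoods over weight ranges into a single
    biomass distribution.

    Each input is a dict mapping a (lo, hi) weight range to the model's
    likelihood that the fish's biomass falls in that range.  Likelihoods
    for ranges common to two or more fish accumulate, and the result is
    normalized into a distribution.
    """
    combined = {}
    for dist in per_fish_likelihoods:
        for rng, p in dist.items():
            combined[rng] = combined.get(rng, 0.0) + p
    total = sum(combined.values())
    return {rng: p / total for rng, p in combined.items()}
```

The resulting distribution could then drive the "determining an action" step, e.g. triggering a feeding or harvesting decision when mass concentrates in a target range.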
Method and system for visual based inspection of rotating objects
This disclosure relates to a method and system for visual inspection of rotating components. The method includes representing rotation cycles of a rotating component as spatial features derived from video or image frames, ascertaining and/or evolving Hidden Markov Model (HMM) chains for the cycles, ascertaining a rotation count of the rotating component across the frames, and/or labelling the frames with the ascertained states of the HMM chains.
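Labelling frames with HMM states typically reduces to a standard Viterbi decode; the sketch below assumes that reduction (the abstract's cycle-specific chain construction is not reproduced). Counting rotations then amounts to counting returns to a designated start-of-cycle state.

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Label a frame sequence with its most likely HMM state sequence.

    Generic Viterbi decode: `obs` is a list of per-frame observation
    symbols, `start_p`/`trans_p`/`emit_p` are dicts of probabilities.
    """
    # Initialize with the first observation.
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        V.append({})
        back.append({})
        for s in states:
            prob, prev = max(
                (V[t - 1][p] * trans_p[p][s] * emit_p[s][obs[t]], p)
                for p in states)
            V[t][s] = prob
            back[t][s] = prev
    # Trace back the best path from the most likely final state.
    last = max(V[-1], key=V[-1].get)
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return path[::-1]
```

Given decoded labels, `labels.count("cycle_start")` would give a rotation count under the stated assumption.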
Processing digitized handwriting
A handwritten text processing system processes a digitized document including handwritten text input to generate an output version of the digitized document that allows users to execute text processing functions on the textual content of the digitized document. Each word of the digitized document is extracted by converting the digitized document into images, binarizing the images, and segmenting the images into binary image patches. Each binary image patch is further processed to identify whether the word is machine-generated or handwritten. The output version is generated by combining underlying images of the pages of the digitized document with words from the pages superimposed in a transparent font at positions that coincide with the positions of the words in the underlying images.
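The binarization and patch-segmentation steps can be illustrated with a minimal sketch. This assumes a grayscale line image as a list of 0-255 rows and splits on blank columns; real systems also segment lines and handle skew, which this omits.

```python
def binarize(gray, threshold=128):
    """Threshold a grayscale image (rows of 0-255 ints) into a binary
    image: 1 = ink (dark), 0 = background."""
    return [[1 if v < threshold else 0 for v in row] for row in gray]

def segment_words(binary):
    """Split a binary line image into word patches at blank columns.

    Simplified sketch of the segmentation step: a word patch is a
    maximal run of columns that contain at least one ink pixel.
    """
    w = len(binary[0])
    ink_cols = [any(row[c] for row in binary) for c in range(w)]
    patches, start = [], None
    for c, has_ink in enumerate(ink_cols):
        if has_ink and start is None:
            start = c                      # a word begins
        elif not has_ink and start is not None:
            patches.append([row[start:c] for row in binary])
            start = None                   # a word ends
    if start is not None:                  # word runs to the edge
        patches.append([row[start:] for row in binary])
    return patches
```

Each resulting patch would then be fed to the machine-generated vs. handwritten classifier described in the abstract.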
Photo tagging system and method
A system for matching one or more participants of an event to a digital photograph they appear in. An image processing module analyzes a digital photograph of the event where one or more participants appear and identifies one or more barcodes worn by the participants. Each barcode comprises a matrix with a unique value associated with a participant ID. A matrix analysis module reads the identified matrix, calculates its value, and matches the participant with said digital photograph the participant appears in.
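The value-calculation and matching steps can be sketched as below. Reading the matrix row-major as a binary number is an illustrative assumption; real 2D codes add orientation detection and error correction, and the registry mapping values to participant IDs is hypothetical.

```python
def matrix_value(matrix):
    """Read a binary matrix barcode row-major as one integer value."""
    value = 0
    for row in matrix:
        for bit in row:
            value = value * 2 + bit
    return value

def tag_photo(matrix, registry):
    """Match a detected barcode matrix to a participant ID.

    `registry` is a dict mapping matrix values to participant IDs;
    returns None if the value is unregistered.
    """
    return registry.get(matrix_value(matrix))
```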
Method, system and apparatus for determining a property of an image
A method of determining a property of an image (176) captured by a camera (127). Vanishing points (320, 330, 340) are determined in the image (176). Each pixel of the image (176) is associated with one or more of the vanishing points (320, 330, 340) based on an orientation of the image gradient at the pixel. The image is partitioned into a set of regions associated with a pair of the determined vanishing points based on the vanishing point associations for pixels in the image (176). Boundaries of the regions are aligned with the associated vanishing points. For at least one of the regions, a confidence value is determined for the property of the image based on pixels in the region. The property of the image is determined for one or more pixels within the image (176) based on the confidence value.
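The pixel-to-vanishing-point association can be sketched geometrically: an edge aligned with a vanishing point has its image gradient perpendicular to the direction from the pixel to that point. The rule below is an assumed reading of the abstract, not the patented association criterion.

```python
import math

def associate_pixels(positions, gradients, vps):
    """Assign each pixel to the vanishing point whose direction from the
    pixel is most nearly perpendicular to the pixel's image gradient.

    `positions` are (x, y) pixel coordinates, `gradients` are (gx, gy)
    image gradients, `vps` are vanishing-point coordinates.  Returns a
    vanishing-point index per pixel.
    """
    labels = []
    for (x, y), (gx, gy) in zip(positions, gradients):
        best, best_score = None, None
        for i, (vx, vy) in enumerate(vps):
            dx, dy = vx - x, vy - y
            # |cos| of the angle between the gradient and the direction
            # to the vanishing point; near zero means the local edge
            # points at that vanishing point.
            norm = math.hypot(gx, gy) * math.hypot(dx, dy)
            score = abs(gx * dx + gy * dy) / norm if norm else 1.0
            if best_score is None or score < best_score:
                best, best_score = i, score
        labels.append(best)
    return labels
```

Region partitioning then groups pixels sharing a pair of associated vanishing points, per the abstract.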
System and method for determining dimensions of an object in an image
An information handling system includes a three dimensional camera and a processor. The three dimensional camera is configured to capture a three dimensional image. The processor is configured to communicate with the three dimensional camera, to provide the three dimensional image to be displayed on a display screen of the information handling system, to determine three dimensional coordinates for an object within the three dimensional image, and to calculate a dimension of the object based on the three dimensional coordinates.
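Once 3-D coordinates are known, the dimension calculation itself is straightforward geometry. A minimal sketch, assuming the object is reduced to corner points (the abstract does not specify the object model):

```python
import math

def edge_length(p, q):
    """Euclidean distance between two 3-D points from the depth camera."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def box_dimensions(corners):
    """Length, width, and height of an axis-aligned box given two
    opposite corners -- a minimal stand-in for the dimension step."""
    (x0, y0, z0), (x1, y1, z1) = corners
    return abs(x1 - x0), abs(y1 - y0), abs(z1 - z0)
```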
SYSTEMS AND METHODS FOR BUFFER-FREE LANE DETECTION
A method of performing lane detection without the use of a frame buffer may include capturing image frames with an image sensor in a vehicular imaging system. Feature extraction circuitry in the vehicular imaging system may analyze a frame to detect features corresponding to possible lane markers. The features may be extracted and stored in memory for further processing, while the rest of the frame may not be stored in memory and may not undergo further processing. Processing circuitry may perform a first estimation of lane marker location based on a continuous feature that is present in multiple different image frames and may perform a second estimation of lane marker location based on multiple features in a single image frame that form a connected feature. The processing circuitry may determine the presence of a lane marker based on the continuous feature and the connected feature.
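The buffer-free aspect can be illustrated by scanning each frame row by row and retaining only the candidate marker features, discarding the raw pixels. The threshold-and-run extraction and the least-squares line fit below are assumed simplifications of the abstract's feature extraction and connected-feature estimation.

```python
def extract_lane_features(frame, threshold=200):
    """Scan a frame row by row and keep only the centers of bright runs
    (candidate lane-marker pixels); the rest of the frame is discarded,
    so no full frame buffer is needed.  `frame` is a list of rows of
    0-255 intensities; returns (row, center_column) features.
    """
    features = []
    for y, row in enumerate(frame):
        start = None
        for x, v in enumerate(row + [0]):          # sentinel ends a run
            if v >= threshold and start is None:
                start = x
            elif v < threshold and start is not None:
                features.append((y, (start + x - 1) / 2))  # run center
                start = None
    return features

def fit_marker(features):
    """Least-squares line x = a*y + b through feature points, as one
    'connected feature' estimate within a single frame."""
    n = len(features)
    sy = sum(y for y, _ in features)
    sx = sum(x for _, x in features)
    syy = sum(y * y for y, _ in features)
    syx = sum(y * x for y, x in features)
    a = (n * syx - sy * sx) / (n * syy - sy * sy)
    return a, (sx - a * sy) / n
```

Tracking the same features across frames would give the continuous-feature estimate the abstract combines with this single-frame fit.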
Method and system for hierarchical parsing and semantic navigation of full body computed tomography data
A method and apparatus for hierarchical parsing and semantic navigation of a full or partial body computed tomography (CT) scan is disclosed. In particular, organs are segmented and anatomic landmarks are detected in a full or partial body CT volume. One or more predetermined slices of the CT volume are detected. A plurality of anatomic landmarks and organ centers are then detected in the CT volume using a discriminative anatomical network, each detected in a portion of the CT volume constrained by at least one of the detected slices. A plurality of organs, such as heart, liver, kidneys, spleen, bladder, and prostate, are detected in the sense of a bounding box and segmented in the CT volume, with detection of each organ bounding box constrained by the detected organ centers and anatomic landmarks. Organ segmentation is performed via a database-guided segmentation method.
System and method for partially occluded object detection
A method for partially occluded object detection includes obtaining a response map for a detection window of an input image, the response map based on a trained model and including a root layer and a parts layer. The method includes determining visibility flags for each root cell of the root layer and each part of the parts layer. The visibility flag is one of visible or occluded. The method includes determining an occlusion penalty for each root cell with a visibility flag of occluded and for each part with a visibility flag of occluded. The occlusion penalty is based on a location of the root cell or the part with respect to the detection window. The method determines a detection score for the detection window based on the visibility flags and the occlusion penalties and generates an estimated visibility map for object detection based on the detection score.
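The scoring rule can be sketched directly from the abstract: visible cells and parts contribute their filter responses, while occluded ones incur a location-dependent penalty. The flat indexing of root cells and parts below is an illustrative simplification.

```python
def detection_score(cell_responses, visibility, penalties):
    """Score a detection window from per-cell/part responses.

    `cell_responses` holds the trained-model response for each root cell
    and part, `visibility` holds the corresponding visibility flags
    (True = visible, False = occluded), and `penalties` holds the
    location-dependent occlusion penalty for each cell/part.
    """
    score = 0.0
    for i, r in enumerate(cell_responses):
        if visibility[i]:
            score += r             # visible: add the model response
        else:
            score -= penalties[i]  # occluded: pay the occlusion penalty
    return score
```

Thresholding this score would yield the detection decision, and the visibility flags themselves form the estimated visibility map mentioned in the abstract.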