Patent classifications
G06K9/34
Method and system for splitting scheduling problems into sub-problems
A computing system receives user input of scheduling problem data. The scheduling problem data relates to a scheduling problem and includes one or more stations and tasks to be performed by at least one station. The computing system constructs a graph problem using the scheduling problem data. The graph problem includes a graph. The computing system cuts the graph into sub-graphs using a cut algorithm to create a cut result that satisfies a threshold and identifies one or more task exceptions from the sub-graphs in the cut result. The one or more task exceptions are tasks that can be assigned to more than one sub-graph. The computing system creates scheduling sub-problems pertaining to the one or more task exceptions using the cut result.
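The decomposition this abstract describes can be sketched in a few lines. Everything below is an illustrative assumption, not the patent's actual algorithm: a toy cut (splitting the sorted task list in half) stands in for the real cut algorithm, and "task exceptions" are identified as tasks touched by any cut-crossing edge.

```python
# Toy sketch of the abstract's idea: cut a task graph into two sub-graphs,
# then flag "task exceptions" -- tasks whose edges cross the cut and so
# could be assigned to more than one sub-graph/sub-problem.
def cut_task_graph(tasks, edges):
    """Naive cut: split sorted tasks in half, then collect crossing tasks."""
    ordered = sorted(tasks)
    half = len(ordered) // 2
    part_a, part_b = set(ordered[:half]), set(ordered[half:])
    exceptions = set()
    for u, v in edges:
        if (u in part_a) != (v in part_a):  # edge crosses the cut
            exceptions.update((u, v))
    return part_a, part_b, exceptions
```

Each sub-graph then seeds one scheduling sub-problem, with the exception tasks handled separately since they straddle the cut.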
System for collecting and processing aerial imagery with enhanced 3D and NIR imaging capability
A system for guided geospatial image capture, registration, and 2D or 3D mosaicking that employs automated imagery processing and airborne image-mapping technologies to generate geo-referenced orthomosaics and digital elevation models from aerial images obtained by UAVs and/or manned aircraft.
Optical character recognition method
The optical character recognition method applies a first OCR engine to identify characters of at least a first type and zones containing at least a second type of characters in the character string image. A second OCR engine is then applied to the zones of the second type to identify the characters of that type. The characters identified by the first and second OCR engines are then combined to obtain the identification of the characters in the character string image.
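The two-pass combination can be sketched as follows. The data shapes here are assumptions: the first engine is taken to emit a sequence of resolved characters and unresolved zones, and the second engine is any callable that resolves one zone.

```python
# Hypothetical sketch of combining two OCR engines: the first pass yields
# ('char', c) items it could read and ('zone', patch) items it could not;
# a second engine resolves each zone, and results merge in reading order.
def combine_ocr(first_pass, second_engine):
    out = []
    for kind, value in first_pass:
        if kind == "char":
            out.append(value)           # resolved by the first engine
        else:
            out.append(second_engine(value))  # resolved by the second
    return "".join(out)
```

Merging by position preserves the original reading order of the character string.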
METHOD FOR DETERMINING AND DISPLAYING PRODUCTS ON AN ELECTRONIC DISPLAY DEVICE
A method for finding and displaying products on a display device of an electronic data processing device is provided. The method includes recognizing an image or image section containing at least one imaged object when the cursor of the data processing device moves into the area of that image or image section. The recognized image or image section is designated with a trigger, and the trigger is superimposed on its area on the display device. When the superimposed trigger is actuated, the displayed image is segmented and at least one object is detected by analyzing the visual content of the designated image or image section. The at least one detected object may then be displayed on the display device.
Image refocusing
A method including automatically segmenting regions of images of a focal stack into segment regions; and, based at least partially upon selection of one or more of the segment regions, generating a refocused image which includes different ones of the segment regions from at least two of the images of the focal stack. An apparatus including an image segmentator for a focal stack of images, where the image segmentator is configured to automatically form segment regions for each of the images; and a focal stack fuser configured to fuse the focal stack of images into a refocused image, where the refocused image comprises different ones of the segment regions from at least two of the images of the focal stack.
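A minimal sketch of the fusion step, under assumed data shapes: each stack image is a mapping from region id to pixel data, and a per-(image, region) sharpness score decides which focal plane contributes each region.

```python
# Illustrative focal-stack fuser: for every segment region, copy that
# region's pixels from whichever stack image scores sharpest there, so
# the fused result mixes regions from different focal planes.
def fuse_focal_stack(stack, region_ids, sharpness):
    """
    stack: list of images, each a dict region_id -> pixel data
    region_ids: iterable of segment-region identifiers
    sharpness: dict (image_index, region_id) -> focus score
    """
    fused = {}
    for region in region_ids:
        best = max(range(len(stack)), key=lambda i: sharpness[(i, region)])
        fused[region] = stack[best][region]
    return fused
```

A real implementation would compute sharpness from local contrast (e.g. a Laplacian response) rather than take it as input.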
Depth-based image element removal
Various embodiments herein each include at least one of systems, methods, and software to enable depth-based image element removal. Some embodiments may be implemented in a store checkout context, while other embodiments may be implemented in other contexts such as at price-checking kiosks or devices that may be deployed within a store or other retail establishment, a library at a checkout terminal, and the like. Some embodiments include removing elements of images based at least in part on depth data.
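The core removal idea can be sketched as a depth-band mask. The flat pixel lists, the band limits, and the fill value are illustrative assumptions; in a checkout context the band would bracket the scanner bed so background elements (hands, shelving) are removed.

```python
# Hedged sketch of depth-based element removal: keep only pixels whose
# measured depth falls inside a band of interest, replacing the rest.
def remove_by_depth(pixels, depths, near, far, fill=0):
    """pixels and depths are parallel sequences; fill replaces rejects."""
    return [p if near <= d <= far else fill
            for p, d in zip(pixels, depths)]
```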
SYSTEM AND METHOD FOR MONITORING DISPLAY UNIT COMPLIANCE
Examples of the disclosure provide a compliance monitoring system suitable for display units in a retail store. A wearable device is configured to be worn by a user while in use. The wearable device may be coupled to a server. The wearable device is configured to capture an image of a product label on a display unit. The wearable device may process the captured image to extract product identity information from a machine-readable portion of the product label. The wearable device may retrieve stored label information from the server based on the extracted product identity information. The wearable device may display the retrieved stored label information in a field of view of the user while the user views a human readable portion of the product label, thereby allowing a compliance comparison therebetween.
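The retrieval step reduces to a keyed lookup once the machine-readable portion is decoded. The record store and fallback message below are assumptions for illustration; the abstract does not specify the server protocol.

```python
# Illustrative lookup: the id decoded from the label's machine-readable
# portion keys into the server's stored label records; the returned
# record is what the wearable overlays next to the human-readable label.
def lookup_label(decoded_id, server_records):
    """server_records: dict product_id -> stored label information."""
    return server_records.get(decoded_id, "no record found")
```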
Translation device that determines whether two consecutive lines in an image should be translated together or separately
A condition determining section (24) determines whether or not two consecutive lines in an image meet a joining condition that is based on a characteristic of a language of a character string, the two consecutive lines being extracted from the character string composed of a plurality of lines. In a case where the joining condition is met, an extracted line joining section (25) and a translation section (26) join and then translate the two consecutive lines.
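One plausible joining condition can be sketched as follows. The rule (join when the first line lacks terminal punctuation) and the language-dependent joiner are assumptions for illustration, not the patent's exact condition.

```python
# Sketch of a language-aware joining test: two consecutive lines are
# joined when the first does not end a sentence; the joiner depends on
# the language (a space for English, nothing for unspaced scripts).
TERMINALS = (".", "!", "?")

def maybe_join(line1, line2, language="en"):
    joiner = " " if language == "en" else ""
    if not line1.rstrip().endswith(TERMINALS):
        return [line1.rstrip() + joiner + line2.lstrip()]
    return [line1, line2]
```

The joined (or separate) lines would then be passed to the translation section, which is why joining first matters: translating half a sentence per line degrades quality.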
TRANSITION BETWEEN BINOCULAR AND MONOCULAR VIEWS
An image processing system is designed to generate a canvas view that has a smooth transition between binocular views and monocular views. Initially, the image processing system receives top/bottom images and side images of a scene and calculates offsets to generate synthetic side images for the left and right views of a user. To realize a smooth transition between binocular views and monocular views, the image processing system first warps the top/bottom images onto the corresponding synthetic side images to generate warped top/bottom images, which realizes the smooth transition in terms of shape. The image processing system then morphs the warped top/bottom images onto the corresponding synthetic side images to generate blended images for the left and right eye views. The image processing system creates the canvas view, which has a smooth transition between binocular views and monocular views in terms of image shape and color, based on the blended images.
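The color-morphing step can be sketched as an alpha blend across the transition band. The per-pixel rows, the linear ramp, and the band orientation are assumptions for illustration; the warp that precedes this step is not shown.

```python
# Toy sketch of the shape-then-color transition: after the top image has
# been warped onto the synthetic side view, each pixel is morphed toward
# the side image with a weight that ramps from 1 (pure top) to 0 (pure
# side) across the transition band.
def blend_row(top_row, side_row):
    n = len(top_row)
    out = []
    for i, (t, s) in enumerate(zip(top_row, side_row)):
        alpha = 1.0 - i / (n - 1) if n > 1 else 0.5  # linear ramp
        out.append(alpha * t + (1 - alpha) * s)
    return out
```

Ramping the weight rather than switching abruptly is what makes the binocular-to-monocular seam invisible in the final canvas view.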
INTERACTIVE COMPETITIVE ADVERTISING COMMENTARY
A sponsoring brand may provide an application for mobile devices that allows users to take pictures of competitor advertisements and that provides responses to any assertions found in the competitor advertisements. The application may instruct a user to capture an image of an advertisement. Various types of detection and/or recognition components may be used to analyze the image to detect and recognize assertions, logos, and other objects or characteristics. The application then displays the image, and also displays responses or commentary relating to any assertions, logos, objects, or characteristics. The responses may point out errors, exaggerations, misstatements, deceptive statements, etc., and may also contain information that promotes the sponsoring brand.