Patent classifications
G06V30/18105
Image box filtering for optical character recognition
A method for box filtering includes obtaining, by a computing device, a form image, and identifying, by the computing device, a region of the form image that includes boxes. Vertical lines in the region of the form image are detected. The boxes in the region are detected according to the detected vertical lines, and image content is extracted from the boxes.
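The abstract's box-detection step can be sketched as follows. This is a minimal illustration, not the patented method: it assumes the region is already binarized to a 2D grid of 0/1 pixels, and the function names, the column-fill threshold, and the minimum box width are all hypothetical choices.

```python
# Hypothetical sketch: find vertical separator lines in a binarized
# region, then treat the spans between adjacent lines as boxes.

def find_vertical_lines(region, min_fill=0.9):
    """Return x-positions of columns that are mostly dark (1) pixels."""
    height = len(region)
    width = len(region[0])
    lines = []
    for x in range(width):
        fill = sum(region[y][x] for y in range(height)) / height
        if fill >= min_fill:
            lines.append(x)
    return lines

def boxes_from_lines(lines, min_width=2):
    """Pair consecutive vertical lines into (left, right) box bounds."""
    boxes = []
    for left, right in zip(lines, lines[1:]):
        if right - left > min_width:
            boxes.append((left, right))
    return boxes

# Toy region, 4 rows x 11 columns, with full-height lines at x = 0, 5, 10.
region = [[1 if x in (0, 5, 10) else 0 for x in range(11)] for _ in range(4)]
lines = find_vertical_lines(region)    # [0, 5, 10]
boxes = boxes_from_lines(lines)        # [(0, 5), (5, 10)]
```

Image content for OCR would then be cropped from each `(left, right)` span.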
Non-invasive data extraction from digital displays
Example implementations described herein are directed to systems and methods for non-invasive data extraction from digital displays. In an example implementation, a method includes receiving one or more video frames from a video capture device capturing an external display, where the external display is independent of the video capture device; determining one or more locations within the external display comprising time varying data of the external display; and for each identified location of the time varying data: determining a data type; applying one or more rules based on the data type; and determining an accuracy of the time varying data within the one or more frames based on the rules.
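The per-location rule check described above can be illustrated with a small sketch. The data types, rule thresholds, and function names below are assumptions for illustration; the abstract does not specify them.

```python
# Illustrative sketch: each detected display location has a data type,
# and type-specific rules flag implausible per-frame readings. The
# accuracy is the fraction of readings that satisfy the type's rule.

RULES = {
    # Plausibility ranges are hypothetical examples.
    "heart_rate": lambda v: 20 <= v <= 250,
    "temperature": lambda v: 30.0 <= v <= 45.0,
}

def accuracy(readings, data_type):
    """Fraction of per-frame readings that satisfy the type's rule."""
    rule = RULES[data_type]
    valid = sum(1 for v in readings if rule(v))
    return valid / len(readings)

# A misread frame (999) lowers the accuracy score.
score = accuracy([72, 75, 999], "heart_rate")  # 2/3
```

In practice the readings would come from OCR over the cropped location in each frame; here they are given directly.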
METHODS AND SYSTEMS FOR ADJUSTING TEXT COLORS IN SCANNED DOCUMENTS
The present disclosure discloses methods and systems for adjusting text colors in scanned documents. The method includes receiving a document for scanning from a user. Then, the document is scanned to generate scanned data. The scanned data is segmented into an image layer and one or more text layers, wherein the one or more text layers include textual content. Thereafter, the text color of the textual content in each text layer is identified. Then, the identified text color of the textual content in the text layer is compared with one or more pre-defined colors. Based on the comparison, the text color of the textual content in each text layer is adjusted to match the one or more pre-defined colors, thereby generating a modified text layer. Finally, the modified text layer and the image layer are combined to create a final scanned document.
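The comparison-and-adjustment step can be sketched as snapping an identified text color to the nearest pre-defined color. The palette and the squared-distance metric below are illustrative assumptions, not details from the patent.

```python
# Minimal sketch: replace an identified RGB text color with the
# closest pre-defined color (hypothetical palette, squared distance).

PREDEFINED = {
    "black": (0, 0, 0),
    "red": (255, 0, 0),
    "blue": (0, 0, 255),
}

def adjust_text_color(rgb):
    """Return the pre-defined color nearest to rgb."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(rgb, c))
    name = min(PREDEFINED, key=lambda n: dist(PREDEFINED[n]))
    return PREDEFINED[name]

# A dark, slightly noisy scan color snaps to pure black;
# a faded red snaps to pure red.
adjust_text_color((20, 10, 5))    # (0, 0, 0)
adjust_text_color((200, 30, 40))  # (255, 0, 0)
```

Applying this per text layer, then recombining with the image layer, mirrors the pipeline the abstract describes.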
Image processing apparatus and method for binarization of image data according to adjusted histogram threshold index values
An image processing apparatus has image data of a color image, the image data being constituted by multiple pixels, each of the multiple pixels having a gradation value, and a controller. The controller is configured to generate a histogram of index values corresponding to brightness values of the multiple pixels constituting the image data; set, based on the histogram, an original threshold value that is referred to for binarization; detect a mound-shaped part in the histogram satisfying a particular condition; set an adjusting direction in which the original threshold value is to be adjusted; set, as an adjusted threshold value, the index value at a base on a particular direction side of a particular mound-shaped part, which is one of the mound-shaped parts existing on the adjusting-direction side of the original threshold value in the histogram; and apply a binarizing process to the image data using the adjusted threshold value.
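The threshold-adjustment idea can be illustrated with a simplified sketch: starting from an original threshold index on a brightness histogram, climb the first mound in the adjusting direction and place the adjusted threshold at its base. The bin count, walk-based peak/base detection, and function names are hypothetical stand-ins for the patent's "particular condition".

```python
# Hedged sketch of histogram binarization with an adjusted threshold.

def histogram(pixels, bins=8, max_value=256):
    """Histogram of index values derived from pixel brightness."""
    hist = [0] * bins
    for p in pixels:
        hist[p * bins // max_value] += 1
    return hist

def adjust_threshold(hist, original, direction=1):
    """Walk from `original` in `direction`: climb the first mound,
    then descend to its base (the next local minimum)."""
    i = original
    while 0 <= i + direction < len(hist) and hist[i + direction] >= hist[i]:
        i += direction  # up the mound
    while 0 <= i + direction < len(hist) and hist[i + direction] < hist[i]:
        i += direction  # down to the base
    return i

def binarize(pixels, threshold_index, bins=8, max_value=256):
    """Binarize pixels against the adjusted threshold index."""
    return [1 if p * bins // max_value >= threshold_index else 0
            for p in pixels]

# Mound at index 4, base at index 6: threshold moves from 2 to 6.
hist = [5, 2, 1, 4, 6, 3, 1, 8]
adjusted = adjust_threshold(hist, 2, direction=1)  # 6
```

Placing the threshold at the base between mounds separates the dark-pixel population from the bright-pixel population.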
SYSTEM AND METHOD FOR A THERMOSTAT ATTRIBUTE RECOGNITION MODEL
A thermostat replacement system includes a handheld user computing device having an image capture device. The handheld user computing device is configured to communicate to a network. The thermostat replacement system also includes a server computing device communicatively coupled to the network. The server computing device includes an image analyzer configured to identify image elements in an image captured by the handheld user computing device, and a machine learning algorithm that includes an image elements table of correspondence of learned thermostat configurations. The server computing device also includes a configurator configured to determine a wirelist for connecting existing thermostat wires to a replacement thermostat back plate using a replacement thermostat identification and the image elements table of correspondence.
Systems and methods for recognizing faces using non-facial information
A wearable apparatus is provided for identifying a person in an environment of a user of the wearable apparatus based on non-facial information. The wearable apparatus includes a wearable image sensor configured to capture a plurality of images from the environment of the user, and a processing device programmed to analyze a first image of the plurality of images to determine that a face appears in the first image. The processing device also analyzes a second image of the plurality of images to identify an item of non-facial information appearing in the second image that was captured within a time period including a time when the first image is captured. The processing device also determines identification information of a person associated with the face based on the item of non-facial information.
Systems and methods for selecting content based on a user's behavior
A system is provided for selecting content for a user of a wearable apparatus based on the user's behavior. In one implementation, the system may include a memory storing executable instructions and at least one processing device. The at least one processing device may be programmed to execute the instructions to analyze a plurality of images captured by a wearable image sensor included in the wearable apparatus to identify one or more of the plurality of images that depict a behavior of the user; determine, based on the analysis, information associated with the one or more images depicting the behavior of the user; and select, based on the information associated with the one or more images depicting the behavior of the user, at least one content item.
METHOD AND SYSTEM FOR PREPARING TEXT IMAGES FOR OPTICAL-CHARACTER RECOGNITION
The current document is directed to methods and systems that acquire an image containing text with curved text lines and generate a corresponding corrected image in which the text lines are straightened and have a rectilinear organization. The method may include identifying a page sub-image within the text-containing image, generating a text-line-curvature model for the page sub-image that associates inclination angles with pixels in the page sub-image, generating local displacements, using the text-line-curvature model, for pixels in the page sub-image, and transferring pixels from the page sub-image to a corrected page sub-image using the local displacements so that, in the corrected page sub-image, the text lines are straightened and the text characters and symbols have a rectilinear arrangement.
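The displacement-and-transfer steps can be sketched in miniature. This toy assumes a per-column inclination-angle model (a simplification of the per-pixel model in the abstract); the function names and the integration of tan(angle) into vertical shifts are illustrative assumptions.

```python
import math

# Toy sketch: derive per-column vertical displacements from a
# curvature model, then transfer pixels to a corrected image.

def displacements(width, angle_of):
    """Integrate tan(angle) across columns to get vertical shifts."""
    shifts = [0.0]
    for x in range(1, width):
        shifts.append(shifts[-1] + math.tan(angle_of(x)))
    return [round(s) for s in shifts]

def straighten(page, shifts):
    """Move each pixel up by its column's shift; 0 fills vacated cells."""
    h, w = len(page), len(page[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            ny = y - shifts[x]
            if 0 <= ny < h:
                out[ny][x] = page[y][x]
    return out

# A 45-degree "text line" running diagonally down the page...
page = [[1, 0, 0],
        [0, 1, 0],
        [0, 0, 1]]
shifts = displacements(3, lambda x: math.atan(1.0))  # [0, 1, 2]
# ...becomes a straight horizontal line after correction.
corrected = straighten(page, shifts)  # [[1, 1, 1], [0, 0, 0], [0, 0, 0]]
```

A real implementation would interpolate fractional displacements rather than rounding, but the pixel-transfer structure is the same.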
Systems and methods for determining an emotional environment from facial expressions
A wearable apparatus is provided for capturing and processing images from an environment of a user. In one implementation, the wearable apparatus may determine an emotional environment of the user of the wearable apparatus. The wearable apparatus may include an image sensor and capture one or more images from an environment around the user. The wearable apparatus may also be configured to analyze the one or more images to identify facial expressions of a person in the images. In some embodiments, the wearable apparatus may also identify the person in the one or more images. The wearable apparatus may also transmit information associated with the facial expressions and/or identity to an external device.