G06V30/224

Battery test system with camera

The present disclosure relates to a battery test system for a vehicle that includes a camera configured to capture an image of a vehicle identification number located on the vehicle, the camera being coupled to a processor which determines characters of the vehicle identification number from the image of the camera and correlates the characters of the vehicle identification number to a vehicle identification number database to receive battery parameters for the vehicle, a battery tester that is removably connected to terminals of a battery of the vehicle and configured to receive battery test results, and a display which conveys information relating to the battery parameters and the battery test results.
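Before the processor correlates the recognized characters against the database, the OCR output can be sanity-checked with the standard North American VIN check digit. A minimal sketch, assuming 17-character VINs; the function name and this validation step are illustrative, not taken from the disclosure:

```python
# Transliteration table for VIN letters (I, O, Q are never used in VINs)
# and the per-position weights defined by the check-digit rule.
TRANSLIT = dict(zip("ABCDEFGHJKLMNPRSTUVWXYZ",
                    [1, 2, 3, 4, 5, 6, 7, 8, 1, 2, 3, 4, 5,
                     7, 9, 2, 3, 4, 5, 6, 7, 8, 9]))
TRANSLIT.update({str(d): d for d in range(10)})
WEIGHTS = [8, 7, 6, 5, 4, 3, 2, 10, 0, 9, 8, 7, 6, 5, 4, 3, 2]

def vin_check_digit_ok(vin: str) -> bool:
    """Return True if the VIN's 9th character matches its check digit."""
    if len(vin) != 17 or any(c not in TRANSLIT for c in vin):
        return False
    total = sum(TRANSLIT[c] * w for c, w in zip(vin, WEIGHTS))
    expected = "X" if total % 11 == 10 else str(total % 11)
    return vin[8] == expected
```

A failed check could prompt the camera to recapture the image rather than query the database with a misread VIN.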

User feedback for real-time checking and improving quality of scanned image

A smartphone may be freely moved in three dimensions as it captures a stream of images of an object. Multiple image frames may be captured at different orientations and distances from the object and combined into a composite image representing an image of the object. The image frames may be formed into the composite image based on representing features of each image frame as a set of points in a three-dimensional point cloud. Inconsistencies between the image frames may be adjusted when projecting respective points in the point cloud into the composite image. Quality of the image frames may be improved by processing the image frames to correct errors. Distracting features, such as the finger of a user holding the object being scanned, can be replaced with background content. As the scan progresses, a direction for capturing subsequent image frames is provided to the user as real-time feedback.
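One way to realize the real-time directional feedback is to track which parts of the object have already been covered by captured frames and point the user toward the nearest uncovered region. A minimal sketch, assuming coverage is tracked on a coarse grid; the grid representation and function name are illustrative assumptions, not details from the abstract:

```python
def next_scan_direction(covered, grid_w, grid_h, cur_x, cur_y):
    """Suggest 'left'/'right'/'up'/'down' toward the nearest uncovered
    grid cell, or 'done' when every cell has been captured.

    covered: set of (x, y) cells already seen in captured frames.
    (cur_x, cur_y): cell under the camera in the current frame.
    """
    best, best_d = None, None
    for y in range(grid_h):
        for x in range(grid_w):
            if (x, y) not in covered:
                d = abs(x - cur_x) + abs(y - cur_y)  # Manhattan distance
                if best_d is None or d < best_d:
                    best, best_d = (x, y), d
    if best is None:
        return "done"
    dx, dy = best[0] - cur_x, best[1] - cur_y
    if abs(dx) >= abs(dy):
        return "right" if dx > 0 else "left"
    return "down" if dy > 0 else "up"
```

In a real scanner the coverage set would be derived from the point cloud rather than maintained separately, but the feedback logic is the same.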

Systems, methods and computer program products for automatically extracting information from a flowchart image
11704922 · 2023-07-18 · ·

A method of extracting information from a flowchart image comprising a plurality of closed-shaped data nodes having text enclosed within, connecting lines connecting the plurality of closed-shaped data nodes, and free text adjacent to the connecting lines includes receiving the flowchart image, detecting the closed-shaped data nodes, localizing the text enclosed within the closed-shaped data nodes, and masking the localized text to generate an annotated image. Lines in the annotated image are then detected to reconstruct them as closed-shaped data nodes and connecting lines. A tree frame with the plurality of closed-shaped data nodes and the connecting lines is extracted. The free text is then localized. Chunks of the free text oriented and positioned proximally together are assembled into text blocks using an orientation-based two-dimensional clustering.
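The orientation-based two-dimensional clustering in the final step can be sketched as a union-find grouping: chunks are merged when they are both spatially close and similarly oriented. The chunk representation (center point plus angle) and the thresholds are illustrative assumptions, not values from the patent:

```python
import math

def cluster_chunks(chunks, max_dist=50.0, max_angle=10.0):
    """Group free-text chunks into blocks.

    chunks: list of (x, y, angle_deg) - chunk center and orientation.
    Returns a list of clusters, each a list of chunk indices.
    """
    parent = list(range(len(chunks)))

    def find(i):  # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i, (xi, yi, ai) in enumerate(chunks):
        for j in range(i + 1, len(chunks)):
            xj, yj, aj = chunks[j]
            close = math.hypot(xi - xj, yi - yj) <= max_dist
            d = abs(ai - aj) % 180            # orientation difference,
            aligned = min(d, 180 - d) <= max_angle  # wrap at 180 degrees
            if close and aligned:
                parent[find(j)] = find(i)

    groups = {}
    for i in range(len(chunks)):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())
```

Each resulting cluster would then be read in orientation order to form one text block associated with a nearby connecting line.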

Image processing apparatus, image processing method, and storage medium
11704921 · 2023-07-18 · ·

Character recognition processing suited to a handwritten character area and to a printed character area among the character areas in a scanned image of a document is performed. Next, the character recognition results for the handwritten character area and the character recognition results for the printed character area are integrated, a likelihood indicating the probability of being an extraction target is calculated for each candidate character string among the integrated character recognition results, and the character string that is the item value is determined. At the time of this determination, different evaluation indices are used depending on whether a character originating from the handwritten character area is included among the characters constituting the candidate character string.
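One simple way to apply different evaluation indices to mixed candidates is to score each candidate by a per-character confidence product, discounting characters that originate from the handwritten area (handwriting OCR is typically less reliable). A minimal sketch; the discount factor and function names are illustrative assumptions, not values from the disclosure:

```python
def candidate_likelihood(chars, handwritten_discount=0.8):
    """Score a candidate string built from integrated OCR results.

    chars: list of (char, confidence, is_handwritten) tuples.
    Handwritten-origin characters contribute a discounted confidence.
    """
    score = 1.0
    for _, conf, handwritten in chars:
        score *= conf * (handwritten_discount if handwritten else 1.0)
    return score

def best_item_value(candidates):
    """Pick the candidate character string with the highest likelihood."""
    return max(candidates, key=candidate_likelihood)
```

Under this scheme a candidate drawn entirely from the printed area outranks an equally confident candidate that includes handwritten characters, which mirrors the abstract's idea of evaluating the two cases differently.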

Item verification and authentication system and method
20230221679 · 2023-07-13 · ·

Disclosed are a system and method to verify and authenticate an item. The system includes a memory and a processor. The processor includes a reader module, an optical character recognition (OCR) module, an image recognition module, a camera module, and an item digital passport (IDP) module. The reader module scans and processes a matrix barcode and other barcodes. The OCR module scans one or more of a micro-size alphanumeric code, a nano-size alphanumeric code, and a visible alphanumeric code present on a hologram that is embossed, printed, or lasered on the item, and verifies the item by comparing the scanned data with the data corresponding to the item stored in the memory. The OCR module authenticates the item if the scanned data matches the data corresponding to the item stored in the memory, and provides a user interface to display the comparison data and the authentication data. The image recognition module verifies and authenticates one or more of a micro-size image, a nano-size image, and a visible holographic image present on the hologram. The camera module analyzes intricate details of the hologram features and elements, and of the security print design features and elements, captured by a camera lens, and measures individual features and elements of the hologram and of the security print design with an embedded ruler. The IDP module captures data pertaining to the movement of the item from a source point to a destination point.
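The OCR module's verify-then-authenticate flow reduces to comparing each scanned code against the record stored in memory for that item. A minimal sketch; the record layout, item identifier, and function names are illustrative assumptions, not details from the application:

```python
# Hypothetical in-memory store keyed by item identifier; in the
# disclosed system this data would live in the device's memory.
STORED_ITEMS = {
    "ITEM-001": {
        "micro_code": "MC93X1",
        "nano_code": "NC77Q2",
        "visible_code": "VIS-4518",
    },
}

def authenticate(item_id, scanned):
    """Compare scanned {field: code} data with the stored record.

    Returns (authentic, mismatched_fields): authentic is True only when
    every stored code matches its scanned counterpart.
    """
    record = STORED_ITEMS.get(item_id)
    if record is None:
        return False, ["unknown item"]
    mismatches = [k for k, v in record.items() if scanned.get(k) != v]
    return len(mismatches) == 0, mismatches
```

The mismatch list is what a user interface like the one described could display as comparison data alongside the authentication result.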

Identity document verification based on barcode structure

An identity document can be authenticated using format data of a barcode on the document, such as a barcode on a driver's license. Scan data is obtained by decoding a plurality of barcodes. Format features of the plurality of barcodes are extracted. The scan data is classified into two or more clusters, each characterized by a set of format features extracted from the scan data. A barcode on an ID to be verified is scanned, and format features from that barcode are compared to at least one of the two or more clusters to authenticate the ID.
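The comparison step can be sketched as reducing each decoded barcode to a format-feature signature and checking it against the clusters learned from known-good scans. The feature choices below (which data elements appear, the length of one element) are illustrative assumptions, not the patent's actual feature set; "DAQ" is the AAMVA element for the license number:

```python
def format_features(decoded_fields):
    """Reduce decoded barcode fields to a hashable feature signature."""
    return (
        tuple(sorted(decoded_fields)),       # which data elements appear
        len(decoded_fields.get("DAQ", "")),  # license-number length
    )

def matches_known_cluster(decoded_fields, clusters):
    """Authenticate an ID if its format signature matches a known cluster."""
    return format_features(decoded_fields) in clusters

# Clusters would be built offline from many known-good scans of an issuer;
# a forged barcode that encodes plausible data in the wrong format
# (extra fields, wrong element lengths) falls outside every cluster.
clusters = {format_features({"DAQ": "D12345678", "DBB": "19800101"})}
```

Real deployments would use many more format features per cluster, but the principle is the same: the structure of the barcode, not its decoded content, is what authenticates the document.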