G06K9/22

Printer method and apparatus

A printer has a light source that directs light toward a print medium path. An RGB sensor is positioned to receive light from the light source that is reflected by the print medium. An infrared light filter is situated between the RGB sensor and the print medium to filter out infrared light. A programmed processor is coupled to an output of the RGB sensor in order to detect form elements in the output of the RGB sensor, retrieve a stored electronic representation of a form, match the form elements to the stored electronic representation of the form to identify locations on the form, and control printing to the form.

Determining elongation of elastic bandage

The present invention is directed to new methods of determining elongation, tension, and applied pressure of elastic bandages comprising tension indicators. In one embodiment, a computer-implemented method of detecting elongation of an elastic bandage (e.g., on a mobile computing device having a processor and graphical user interface) is described. The method comprises receiving image data that includes a digital photograph of an elongated tension indicator of an elastic bandage; analyzing the image data to determine elongation of the elastic bandage by comparing geometric features of the elongated tension indicator to model geometric features that define a predetermined elongation state (such as an unelongated state); and providing output indicia associated with the determined elongation. Also described are various articles, some of which are intermediate articles of the methods described herein. Such articles include a non-transient computer-readable medium and a three-dimensional member comprising at least one layer of certain elastic bandages. In one embodiment, the elastic bandage comprises a tension indicator and a computer-readable code.
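The comparison step can be sketched as follows; a minimal Python sketch assuming the geometric feature is the spacing between printed indicator marks (`measured_spacing`, `model_spacing`, and the target window are illustrative names, not the patent's parameters):

```python
def elongation_percent(measured_spacing, model_spacing):
    """Percent elongation inferred by comparing one geometric feature of
    the photographed tension indicator (here, spacing between printed
    marks, a hypothetical feature) to the unelongated model value."""
    return (measured_spacing / model_spacing - 1.0) * 100.0

def output_indicia(percent, target=50.0, window=10.0):
    """Map the determined elongation to simple indicia for the user
    (target and window values are illustrative)."""
    if abs(percent - target) <= window:
        return "within target range"
    return "below target" if percent < target else "above target"

e = elongation_percent(6.0, 4.0)
print(e, output_indicia(e))  # 50.0 within target range
```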

Arrangement for, and method of, expeditiously adjusting reading parameters of an imaging reader based on target distance

A distance to a target to be read by image capture over a range of working distances is determined by directing an aiming light spot along an aiming axis to the target, and by capturing a first image of the target containing the aiming light spot, and by capturing a second image of the target without the aiming light spot. Each image is captured in a frame over a field of view having an imaging axis offset from the aiming axis. An image pre-processor compares first image data from the first image with second image data from the second image over a common fractional region of both frames to obtain a position of the aiming light spot in the first image, and determines the distance to the target based on the position of the aiming light spot in the first image.
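The spot-finding and distance steps above can be sketched in Python; this assumes 1-D scanlines standing in for the common fractional region and a simple pinhole triangulation model (the baseline and focal values are illustrative):

```python
def spot_position(with_spot, without_spot):
    """Locate the aiming spot as the pixel with the largest brightness
    increase between the spot-on and spot-off frames (1-D scanlines stand
    in for the common fractional region of the two frames)."""
    best, best_delta = None, 0
    for i, (a, b) in enumerate(zip(with_spot, without_spot)):
        if a - b > best_delta:
            best, best_delta = i, a - b
    return best

def distance_to_target(spot_px, axis_px, baseline_mm, focal_px):
    """Triangulate: the spot's offset from the imaging axis shrinks as the
    target gets farther away (assumed pinhole model, parallel axes)."""
    return baseline_mm * focal_px / abs(spot_px - axis_px)

with_spot = [10] * 64
with_spot[40] = 200           # aiming spot at pixel 40
without_spot = [10] * 64

px = spot_position(with_spot, without_spot)
print(px, distance_to_target(px, 32, 20.0, 800.0))  # 40 2000.0
```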

METHOD AND SYSTEM FOR MEASURING DIMENSIONS OF A TARGET OBJECT

The invention relates to a method for measuring dimensions of a target object. The method comprises acquiring depth data representative of the physical space, the depth data comprising data of the target object; converting the depth data into a point cloud; extracting at least one plane from the point cloud; identifying a ground plane; eliminating the ground plane from the point cloud; extracting at least one point cluster from the remaining point cloud; identifying a point cluster of the target object; and estimating dimensions of the target object based on the point cluster of the target object.
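The ground-removal and estimation steps can be sketched as follows; a simplified Python sketch that treats the lowest horizontal plane as the ground and uses an axis-aligned bounding box for the dimensions (the real method extracts planes and clusters from the point cloud):

```python
def remove_ground(points, tolerance=0.01):
    """Treat the lowest horizontal plane as the ground and drop it
    (a simplification of the plane-extraction step)."""
    ground_z = min(p[2] for p in points) + tolerance
    return [p for p in points if p[2] > ground_z]

def estimate_dimensions(cluster):
    """Axis-aligned bounding box of the target's point cluster."""
    xs, ys, zs = zip(*cluster)
    return (max(xs) - min(xs), max(ys) - min(ys), max(zs) - min(zs))

# Toy cloud: a flat ground at z = 0 plus the corners of a 2 x 1 x 0.5 box.
cloud = [(0.5 * x, 0.5 * y, 0.0) for x in range(10) for y in range(10)]
cloud += [(1.0 + dx, 1.0 + dy, 0.05 + dz)
          for dx in (0.0, 2.0) for dy in (0.0, 1.0) for dz in (0.0, 0.5)]

dims = estimate_dimensions(remove_ground(cloud))
print(dims)  # ≈ (2.0, 1.0, 0.5)
```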

Character recognition apparatus, character recognition processing system, and non-transitory computer readable medium
09792495 · 2017-10-17

A character recognition apparatus includes a stroke extracting unit, a noise-candidate extracting unit, a generating unit, a recognition unit, and a specifying unit. The stroke extracting unit extracts multiple strokes from a recognition target. The noise-candidate extracting unit extracts noise candidates from the strokes. The generating unit generates multiple recognition result candidates obtained by removing at least one of the noise candidates from the recognition target. The recognition unit performs character recognition on the recognition result candidates and obtains recognition scores. The specifying unit uses the recognition scores to specify a most likely recognition result candidate from the recognition result candidates as a recognition result.
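The generate-score-specify loop can be sketched in Python; an exhaustive sketch with a toy recognizer (the stroke names and scoring model are illustrative, and the real apparatus may prune candidates):

```python
from itertools import combinations

def best_candidate(strokes, noise_candidates, recognize):
    """Generate candidates by removing subsets of the noise candidates,
    score each with the recognizer, and keep the highest-scoring one."""
    best, best_score = list(strokes), recognize(strokes)[1]
    for r in range(1, len(noise_candidates) + 1):
        for removed in combinations(noise_candidates, r):
            candidate = [s for s in strokes if s not in removed]
            _, score = recognize(candidate)
            if score > best_score:
                best, best_score = candidate, score
    return best, best_score

# Toy recognizer: score is the fraction of strokes that belong to an
# idealized character model (illustrative only).
MODEL_STROKES = {"s1", "s2", "s3"}

def toy_recognize(strokes):
    if not strokes:
        return "", 0.0
    hits = sum(1 for s in strokes if s in MODEL_STROKES)
    return "A", hits / len(strokes)

result, score = best_candidate(["s1", "s2", "s3", "n1"], ["n1"], toy_recognize)
print(result, score)  # ['s1', 's2', 's3'] 1.0
```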

CODE RECOGNITION DEVICE
20170293788 · 2017-10-12

According to one embodiment, a code recognition device includes a reader and a processor configured for region detection, first extraction, and second extraction. The reader photographs a code image. The processor detects a code region and a letter region included in the code image from the code image photographed by the reader. The processor extracts first code information indicating the code image from the code region. The processor also extracts second code information indicating the code image from the letter region.
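One plausible use of the two extractions can be sketched as follows; an illustrative Python policy in which the letter region cross-checks the symbol decode (the reconciliation rule is an assumption, not the patent's claimed behavior):

```python
def reconcile(symbol_code, letter_code):
    """Combine the first code information (decoded from the code region)
    with the second (read from the letter region): agreement verifies the
    decode, otherwise fall back to whichever extraction succeeded."""
    if symbol_code and symbol_code == letter_code:
        return symbol_code, "verified"
    return (symbol_code or letter_code), "unverified"

print(reconcile("4901234567894", "4901234567894"))  # ('4901234567894', 'verified')
print(reconcile(None, "4901234567894"))             # ('4901234567894', 'unverified')
```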

Three dimensional image scan for vehicle

Systems and methods provide for an automated system for generating one or more three dimensional (3D) images of a vehicle and/or a baseline image for that vehicle. The system may receive 3D images of a plurality of vehicles of a same type (e.g., same make, model, year, etc.) and generate a 3D image of a baseline vehicle for vehicles of that same type based on 3D images of the plurality of vehicles of the particular type. The system may use a 3D image of the baseline vehicle to determine a characteristic of another vehicle, such as a modification made to the vehicle, damage to the vehicle, cost to repair the vehicle or replace parts of the vehicle, a value of the vehicle, an insurance quote for the vehicle, etc. In some aspects, the 3D images may comprise 3D point clouds, and 3D laser scanners may be used to capture 3D images of vehicles.
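The baseline-and-compare idea can be sketched in Python; a minimal sketch assuming the point clouds are already registered and have corresponding points, which real scans would not without an alignment step:

```python
def baseline_cloud(scans):
    """Average corresponding points across pre-registered 3D scans of
    same-type vehicles to form a baseline cloud (alignment is assumed)."""
    n = len(scans)
    return [tuple(sum(scan[i][k] for scan in scans) / n for k in range(3))
            for i in range(len(scans[0]))]

def max_deviation(cloud, baseline):
    """Largest point-to-baseline distance; a large value may indicate a
    modification or damage on the scanned vehicle."""
    return max(sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
               for p, q in zip(cloud, baseline))

scans = [
    [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)],
    [(0.0, 0.0, 0.2), (1.0, 0.0, 0.2)],
]
baseline = baseline_cloud(scans)          # midpoints of the two scans
damaged = [(0.0, 0.0, 0.1), (1.0, 0.0, 0.6)]
print(max_deviation(damaged, baseline))   # ≈ 0.5
```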

Pointing device using camera and outputting mark
09785253 · 2017-10-10

A pointing device such as a mouse or joystick comprises a camera for capturing the display screen and image processing means for recognizing and tracking the pointing cursor icon or mark in the captured image and producing the pointing signal. The pointing device of the present invention can be used with any type of display without any additional tracking means such as an ultrasonic sensor, infrared sensor, or touch sensor. The pointing device of the present invention includes a mark outputting portion, a camera portion for capturing the mark outputting portion, and an image processing portion for recognizing the mark outputting portion in the captured image and producing the pointing signal.

Depth sensor based auto-focus system for an indicia scanner

An indicia reading terminal has a three-dimensional depth sensor, a two-dimensional image sensor, an autofocus lens assembly, and a processor. The three-dimensional depth sensor captures a depth image of a field of view and creates a depth map from the depth image, the depth map having one or more surface distances. The two-dimensional image sensor receives incident light and captures an image therefrom. The autofocus lens assembly is positioned proximate to the two-dimensional image sensor such that the incident light passes through the autofocus lens assembly before reaching the two-dimensional image sensor. The processor is communicatively coupled to the two-dimensional image sensor, the three-dimensional depth sensor, and the autofocus lens assembly.
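Selecting a focus target from the depth map's surface distances can be sketched as follows; a simple mode-based Python policy (the terminal's actual focusing policy is not specified by the abstract):

```python
from collections import Counter

def focus_distance(depth_map):
    """Pick the dominant surface distance in the depth map as the
    autofocus target (an illustrative mode-based policy)."""
    counts = Counter(round(d, 1) for row in depth_map for d in row)
    return counts.most_common(1)[0][0]

depth_map = [
    [0.5, 0.5, 0.5],
    [0.5, 0.5, 1.2],
    [1.2, 0.5, 0.5],
]
print(focus_distance(depth_map))  # 0.5, so the lens is driven to 0.5 m
```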

Information processor having an input unit which inputs information on an input medium

The information processor includes an input unit having an illuminating part and an image pickup part, and an input medium having an input surface on which inputting of information is carried out by the input unit, the input surface having position coordinates coded by a dot pattern. The input unit irradiates light from the illuminating part onto the input surface of the input medium. The irradiated light reflects off the dots contained in the dot pattern and is picked up by the image pickup part, and position information of the dot pattern is obtained. The input unit further has an input unit angular position measuring part that measures an angular position of the input unit when the light is irradiated. Based on corrected position information obtained by correction processing using the angular position, input information specified by the position coordinates is obtained.
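The angular correction can be sketched in Python; an illustrative model in which the raw dot-pattern coordinates are rotated back by the measured angle to match the input-surface frame (the actual correction processing is not detailed in the abstract):

```python
import math

def corrected_position(x, y, angle_deg):
    """Rotate the raw dot-pattern coordinates by the negative of the
    measured angular position so they match the input-surface frame."""
    t = math.radians(-angle_deg)
    return (x * math.cos(t) - y * math.sin(t),
            x * math.sin(t) + y * math.cos(t))

# A dot read at (0, 1) with the input unit rotated 90 degrees maps back
# to (1, 0) on the input surface.
cx, cy = corrected_position(0.0, 1.0, 90.0)
print(round(cx, 6), round(cy, 6))  # 1.0 0.0
```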