Patent classifications
G06K7/10881
APPARATUSES, METHODS, AND COMPUTER PROGRAM PRODUCTS FOR FLICKER REDUCTION IN A MULTI-SENSOR ENVIRONMENT
Embodiments of the disclosure relate generally to flicker reduction in a multi-imager environment. Embodiments include methods, computer program products, and apparatuses configured for producing a near-field illumination using a near-field illuminator, the near-field illumination produced as a defined pulse train. A near-field image sensor may be exposed near the start of a near-field illumination pulse, and a far-field image sensor may be exposed between pulses of the near-field illumination. Some embodiments, additionally or alternatively, are configured for detecting an illuminator switch event, deactivating the near-field illuminator source, and producing, using a far-field illuminator source, a far-field illumination. Upon switching the illuminator source, some such embodiments are configured for exposing a far-field image sensor near the start of a far-field illumination pulse, and exposing a near-field image sensor near the start of the next available far-field illumination pulse. Such image capture may repeat until an image processing task such as barcode reading is successful.
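The interleaved timing described above can be sketched as a simple scheduler. All names and timing values below are illustrative assumptions, not taken from the patent: the near-field sensor is exposed at each pulse start and the far-field sensor midway through each dark gap between pulses.

```python
def exposure_schedule(pulse_period_ms: float, pulse_width_ms: float, n_pulses: int):
    """Return (near_field_starts, far_field_starts) in milliseconds for an
    interleaved scheme: near-field exposure at the start of each illumination
    pulse, far-field exposure centered in the gap between pulses."""
    near_starts, far_starts = [], []
    for i in range(n_pulses):
        pulse_start = i * pulse_period_ms
        near_starts.append(pulse_start)               # expose with the pulse
        gap_start = pulse_start + pulse_width_ms      # pulse ends, gap begins
        gap_len = pulse_period_ms - pulse_width_ms
        far_starts.append(gap_start + gap_len / 2)    # expose between pulses
    return near_starts, far_starts
```

Because the far-field exposure never overlaps an illumination pulse, the far-field frames are unaffected by the near-field pulse train, which is the flicker-reduction effect the abstract describes.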
Function execution system
A function execution system includes a function execution device configured to execute a plurality of functions, a storage device, an image acquisition device, and a reader device. The reader device reads code information based on an image representing an information code included in a captured image acquired by the image acquisition device from an image capturing unit. The function execution device switches among first to fourth functions to be executed depending on whether the code information is first code information or second code information, and whether user information stored in the storage device is first user information or second user information.
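The two-by-two selection above can be sketched as a lookup table. The mapping of (code, user) pairs to function numbers below is an assumed example for illustration; the patent does not specify which combination selects which function.

```python
def select_function(code_info: str, user_info: str) -> int:
    """Select one of four functions from the (code info, user info) pair."""
    table = {
        ("first_code", "first_user"): 1,
        ("first_code", "second_user"): 2,
        ("second_code", "first_user"): 3,
        ("second_code", "second_user"): 4,
    }
    return table[(code_info, user_info)]
```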
Methods and apparatus for providing out-of-range indications for imaging readers
Methods and apparatus for providing out-of-range indications are disclosed. An example imaging reader includes an image sensor and an optical assembly. The imaging reader may include a distance determining module configured to determine a distance to a target. The imaging reader may include an indication determining module configured to determine an out-of-range indication when the distance satisfies a first condition. An indicator may be included and configured to present the out-of-range indication. The image sensor may be configured to capture a representation of an image of the target when the distance satisfies a second condition. The imaging reader may include an indicia decoder configured to decode an indicia in the representation to determine an indicia payload and/or a communication interface to convey the indicia payload to a host system.
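A minimal sketch of the two distance conditions above, assuming a simple working range with lower and upper bounds (the threshold values and function name are illustrative, not from the patent):

```python
def reader_action(distance_mm: float, min_mm: float = 100.0, max_mm: float = 600.0) -> str:
    """Present an out-of-range indication when the distance fails the working
    range (first condition); capture an image when it falls inside (second
    condition)."""
    if distance_mm < min_mm or distance_mm > max_mm:
        return "indicate_out_of_range"
    return "capture_image"
```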
Methods and Apparatus for Locating Small Indicia in Large Images
Methods and apparatus for locating small indicia in large images are disclosed herein. An example method includes: identifying an aiming pattern zone that includes a detected or presumed location of an aiming light pattern, wherein an offset between the location and a center of image data varies due to parallax; determining one or more coordinates of the aiming pattern zone; capturing image data representing an image of an environment, including the indicia, appearing within a field of view (FOV) of a scanner; encoding the one or more coordinates into a tagline of the image; and providing the image with the tagline to an indicia decoder such that the indicia decoder attempts to decode the indicia from the image data starting in a region of the image data selected based upon the one or more coordinates.
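The tagline hand-off above can be sketched as follows. The dict-based image container and the JSON tagline format are assumptions for illustration; the patent does not specify how the coordinates are serialized.

```python
import json

def tag_image(image: dict, zone_xy: tuple) -> dict:
    """Attach aiming-pattern-zone coordinates as a tagline so a downstream
    decoder can begin its indicia search in that region."""
    tagged = dict(image)
    tagged["tagline"] = json.dumps({"zone_x": zone_xy[0], "zone_y": zone_xy[1]})
    return tagged

def decode_start_region(tagged: dict) -> tuple:
    """Recover the starting coordinates the decoder should search first."""
    meta = json.loads(tagged["tagline"])
    return (meta["zone_x"], meta["zone_y"])
```

Starting the decode near the aiming pattern zone avoids an exhaustive search of the full high-resolution frame, which is the point of locating a small indicium in a large image.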
Perspective distortion correction of discrete optical patterns in images using depth sensing
Depth information from a depth sensor, such as a LiDAR system, is used to correct perspective distortion for decoding an optical pattern in a first image acquired by a camera. Image data from the first image is spatially correlated with the depth information. The depth information is used to identify a surface in the scene and to distort the first image to generate a second image, such that the surface in the second image is parallel to an image plane of the second image. The second image is then analyzed to decode an optical pattern on the surface identified in the scene.
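One building block of the scheme above is identifying the surface from the depth samples; a least-squares plane fit is a common way to do that, sketched below. This is an illustrative fragment, not the patent's method: the recovered normal would then drive the warp that makes the surface parallel to the second image's plane.

```python
import numpy as np

def fit_plane_normal(points: np.ndarray) -> np.ndarray:
    """Least-squares plane fit to an (N, 3) array of 3-D depth samples;
    returns the unit normal of the best-fit plane."""
    centered = points - points.mean(axis=0)
    # The right singular vector for the smallest singular value of the
    # centered cloud is the direction of least variance: the plane normal.
    _, _, vt = np.linalg.svd(centered)
    n = vt[-1]
    return n / np.linalg.norm(n)
```

When the normal is not aligned with the camera's optical axis, the surface is tilted and the optical pattern is foreshortened; warping the image so the fitted plane faces the camera removes that perspective distortion before decoding.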
HOUSING OF AN IMAGE CAPTURING DEVICE
A housing of an imaging unit is disclosed. The housing comprises an outer surface and an inner surface, wherein the inner surface of the housing defines a lens channel sized to receive a lens barrel. Further, the inner surface of the housing defines a helical step in the lens channel, wherein the helical step protrudes outwardly into the lens channel and is angled at a first predetermined pitch. Further, the housing defines a glue pocket in the lens channel which extends from the inner surface of the housing to the outer surface of the housing, such that a first edge surface, defining a portion of a periphery of the glue pocket, is coplanar with the helical step.
METHODS AND SYSTEMS OF HARVESTING DATA FOR TRAINING MACHINE LEARNING (ML) MODEL
Various embodiments disclosed herein describe a method comprising receiving indicia data from an indicia scanner. The indicia data comprises at least decoded data obtained by decoding an indicium in an image, an image tile comprising a portion of the indicium, and/or locations of one or more corners of the portion of the indicium in the image. Further, the method includes generating an image of an ideal indicium based on at least the decoded data. Thereafter, the image of the ideal indicium is modified to generate a modified image of the ideal indicium, and a portion of the ideal indicium is retrieved from the modified image as a clean image tile. Furthermore, the method includes generating training data, wherein the training data includes the portion of the indicium and the portion of the ideal indicium.
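The pairing step above can be sketched as follows. Everything here is a stand-in for illustration: `render_ideal` is a stub producing a deterministic grid from the decoded payload, whereas a real pipeline would rasterize the actual symbology, and the "modification" is reduced to cropping the ideal image to the scanned tile's extent.

```python
def render_ideal(decoded: str, size: int = 8) -> list:
    """Stub: derive a deterministic binary grid from the decoded payload."""
    return [[(ord(decoded[(r * size + c) % len(decoded)]) >> 1) & 1
             for c in range(size)] for r in range(size)]

def make_training_pair(scanned_tile: list, decoded: str) -> dict:
    """Pair a noisy scanned tile with a clean tile cut from the ideal image."""
    ideal = render_ideal(decoded)
    tile_h, tile_w = len(scanned_tile), len(scanned_tile[0])
    ideal_tile = [row[:tile_w] for row in ideal[:tile_h]]
    return {"input": scanned_tile, "target": ideal_tile}
```

The resulting (noisy tile, clean tile) pairs are exactly the supervised training data the abstract describes: the scanner harvests real-world degradations while the decoded payload supplies a noise-free ground truth.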
Barcode scanner for use with a parcel delivery system
The barcode scanner for use with a parcel delivery system is an interface device configured for use in delivering packages. The scanner forms a wireless communication link with an appropriate authority, reads a bar code that appears on a package, and transmits the read bar code and the location of the package to the appropriate authority. The appropriate authority logs the delivery of the package with the bar code and transmits back to the scanner the operational details about the next assigned delivery location.
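The exchange above can be sketched as a request/response pair. The class, message shapes, and route data below are assumptions for illustration only; the patent does not specify a message format.

```python
class Authority:
    """Stand-in for the appropriate authority the scanner reports to."""

    def __init__(self, route: list):
        self.log = []              # delivered (barcode, location) records
        self.route = list(route)   # upcoming assigned delivery locations

    def report_delivery(self, barcode: str, location: str) -> str:
        """Log the delivery, then reply with the next assigned location."""
        self.log.append((barcode, location))
        return self.route.pop(0) if self.route else "route_complete"
```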
CONTROL OF CAPTURE AND DECODING OF ENCODED DATA MARKINGS VIA TOUCHSCREEN AND HAPTIC FEEDBACK
A decoding device includes an aiming component and a scanning component of a scanning engine, a display component and tactile components of a touch screen, and a processor. The processor is configured to operate the display component to display an initial icon and to monitor the tactile components to detect a commencement of a digit tip press on the touch screen at the initial icon. In response to the digit tip press having a pressure between predetermined lower and higher pressure levels, the processor operates the aiming component to project a visual guide and monitors the tactile components to detect a pressure increase of the digit tip press to higher than the predetermined higher pressure level. In response to the increase in pressure, the processor operates the scanning component to attempt to scan an encoded data marking, and operates the tactile components to provide a haptic indication of the attempt to the digit tip.
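The two-stage trigger above maps naturally onto a pair of pressure thresholds, sketched below. The threshold values and action names are assumptions for illustration; the patent only states that the levels are predetermined.

```python
def touch_action(pressure: float, low: float = 0.3, high: float = 0.7) -> str:
    """Two-stage touchscreen trigger: between the lower and higher thresholds,
    project the aiming guide; above the higher threshold, attempt the scan and
    fire the haptic indication; below the lower threshold, do nothing."""
    if pressure > high:
        return "scan_and_haptic"
    if pressure >= low:
        return "project_aimer"
    return "idle"
```

This mirrors a camera shutter button: a half-press aims, a full press fires, and the haptic pulse confirms the scan attempt to the finger that cannot see the screen it is covering.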