Patent classifications
G06V10/19
DATA PROCESSING
A method includes obtaining an image of a spatial object in a space. The spatial object is captured in the image by a camera component. The image includes one or more captured planar regions corresponding to one or more planes of the spatial object. A first captured planar region of the one or more captured planar regions includes an array of first captured identification codes and includes first captured straight lines associated with the first captured identification codes. The first captured straight lines in the image are associated with a first vanishing point. The method further includes identifying the first captured identification codes, identifying the first captured straight lines, determining first equations of the first captured straight lines, determining coordinates of the first vanishing point, and determining one or more intrinsic parameters of the camera component based on at least the first vanishing point.
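The vanishing-point steps could be sketched as follows. This is a minimal illustration, not the patent's implementation: the function names are ours, lines are parameterized as a·x + b·y + c = 0, and the focal-length step assumes zero skew, square pixels, a known principal point, and two vanishing points of orthogonal 3D directions.

```python
import numpy as np

def vanishing_point(lines):
    """Least-squares intersection of lines given as (a, b, c) with a*x + b*y + c = 0."""
    A = np.array([[a, b] for a, b, _ in lines], dtype=float)
    rhs = np.array([-c for _, _, c in lines], dtype=float)
    vp, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return vp  # (x, y) coordinates of the vanishing point

def focal_from_orthogonal_vps(v1, v2, principal_point):
    """Focal length from two vanishing points of orthogonal 3D directions,
    assuming zero skew and square pixels: f^2 = -(v1 - p) . (v2 - p)."""
    p = np.asarray(principal_point, dtype=float)
    d = -np.dot(np.asarray(v1) - p, np.asarray(v2) - p)
    return float(np.sqrt(d))
```

With more than two detected lines per direction, the least-squares intersection averages out small detection errors rather than relying on a single pair.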
ENCODING AND DECODING METHOD AND INFORMATION RECOGNITION DEVICE USING THE SAME
There is provided an encoding and decoding method and an information recognition device using the same. A code block includes a center coding region and a peripheral coding region arranged around the center coding region. The encoding and decoding method uses the feature of at least one microdot included in the center coding region as codes. The encoding and decoding method uses the feature of at least one microdot included in the peripheral coding region as codes. The encoding and decoding method uses the relative feature between the center coding region and the peripheral coding region as codes. The information recognition device compares the read feature with pre-stored features to decode information such as position codes, object codes, parameter codes and control codes.
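The decoding step, comparing a read feature against pre-stored features, could be sketched as a table lookup. The feature representation here (center dot count, peripheral dot count, relative-offset flag) and the table contents are illustrative assumptions, not taken from the patent.

```python
# Hypothetical pre-stored feature table mapping feature tuples to decoded
# information (position codes, object codes, control codes, etc.).
CODE_TABLE = {
    (1, 4, 0): {"type": "position", "value": (0, 0)},
    (1, 4, 1): {"type": "position", "value": (0, 1)},
    (2, 3, 0): {"type": "object", "value": 17},
    (2, 3, 1): {"type": "control", "value": "select"},
}

def decode_block(read_feature, table=CODE_TABLE):
    """Compare the feature read from a code block against pre-stored features
    and return the decoded information, or None if no entry matches."""
    return table.get(tuple(read_feature))
```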
SIMULTANEOUS POST-TREATMENT DETECTION PROCESS OF A PLURALITY OF OBJECTS TREATED AND POSITIONED ON A TREATMENT BASE
A simultaneous post-treatment detection process is provided for a plurality of objects treated and positioned on a treatment base, wherein each object is initially coupled to a device identified by an identification code. The process comprises: decoupling each object from its device to prepare the object for treatment; positioning each object, before treatment, on a treatment base; associating, by recording on a digital storage medium by means of an electronic computer, the code of each device with an allocation position on the treatment base of the respective object initially coupled thereto, by one or more phases selected from a phase of indicating the allocation position by means of a visual support adapted to allow a user to view the allocation position on the base, and a phase of recognizing an outline of an object on the base to determine its allocation position, the outline being defined by each object, previously acquired by an optical instrument, and associated with the respective code; and indicating to the user the position of the object on the base for each code whenever the user recalls a code recorded on the storage medium, wherein the recall is performed by visual inspection by the user if the association on the digital storage medium is reproduced by the computer as a graphic mapping viewable by the user, and/or by scanning a code by means of a scanning instrument operatively connected to the computer.
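The association and recall phases amount to a code-to-position record on the storage medium. A minimal sketch, with class and method names that are our assumptions:

```python
# Illustrative mapping of device identification codes to allocation positions
# on the treatment base; the real process also stores acquired outlines.
class TreatmentBaseMap:
    def __init__(self):
        self._positions = {}

    def associate(self, code, position):
        """Record the allocation position of the object coupled to `code`."""
        self._positions[code] = position

    def recall(self, code):
        """Return the recorded position for a scanned or displayed code."""
        return self._positions.get(code)
```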
APPARATUS AND METHODS OF ALIGNING COMPONENTS OF DIAGNOSTIC LABORATORY SYSTEMS
A method of aligning a component to a structure in a diagnostic laboratory system. The method includes aligning a position sensor to the structure; sensing a position of the component using the position sensor; and calculating the position of the component relative to the structure based at least in part on the sensing. Other methods, apparatus, and systems are disclosed.
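The calculation step could be sketched as a frame transform: once the sensor is aligned to the structure, a reading in the sensor frame is mapped into the structure frame. The (R, t) pose parameterization and the function name are assumptions for illustration.

```python
import numpy as np

def component_position_relative_to_structure(sensor_reading, sensor_pose_in_structure):
    """Transform a position sensed in the sensor frame into the structure frame.
    `sensor_pose_in_structure` is (R, t): the rotation matrix and translation of
    the sensor relative to the structure (an assumed parameterization)."""
    R, t = sensor_pose_in_structure
    return np.asarray(R) @ np.asarray(sensor_reading) + np.asarray(t)
```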
METHOD AND SYSTEM FOR LIVESTOCK MONITORING AND MANAGEMENT
A system and method for monitoring livestock is disclosed. In the illustrative embodiment, calibration patterns are placed on the ground in the field of view of a camera. The calibration patterns are used to generate homographies usable to determine a 3D position of a 2D position on the ground in images captured by the camera. If a gravity direction is also determined, then the 3D position of objects can be determined if a point on the ground along the gravity shadow of the object can also be identified. The identified positions of objects may be used to determine if livestock is lame.
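Applying one of the generated homographies to an image point is the core operation: a 2D pixel position on the ground maps to ground-plane coordinates via a 3x3 matrix and a perspective division. A minimal sketch (the function name is ours):

```python
import numpy as np

def image_to_ground(H, u, v):
    """Map an image point (u, v) to ground-plane coordinates using a
    homography H (3x3) estimated from a calibration pattern on the ground."""
    X = np.asarray(H, dtype=float) @ np.array([u, v, 1.0])
    return X[:2] / X[2]  # perspective division by the homogeneous coordinate
```

Combining this with a known gravity direction lets the system lift an object's ground-shadow point to the object's 3D position, as the abstract describes.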
APPARATUS AND METHODS FOR AUGMENTING VISION WITH REGION-OF-INTEREST BASED PROCESSING
Systems, apparatus, and methods for augmenting vision with region-of-interest based processing. In one specific example, smart glasses may use an eye-tracking camera to monitor the user's gaze and determine the user's gaze point. When triggered, the camera assembly captures a high-resolution image. The high-resolution image may be cropped to a much smaller region-of-interest (ROI) image based on computer-vision analysis of the user's gaze point. For example, if the smart glasses detect a human face at the gaze point, then the ROI is cropped to the human face. In this manner, the smart glasses may leverage specific capabilities of the smart glasses to augment the user experience; for example, telephoto lenses provide long distance vision, or computer-assisted search may direct the user to interesting activity. Other aspects may include e.g., external database assisted operation and/or ongoing cataloging throughout the day.
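The cropping step could be sketched as clamping a detected bounding box (e.g., a face found at the gaze point) to the high-resolution frame, with a small margin. The signature and margin convention are illustrative assumptions.

```python
def crop_roi(image_width, image_height, box, margin=0.1):
    """Expand a detected bounding box (x, y, w, h) by a relative margin and
    clamp it to the image. Returns crop bounds (x0, y0, x1, y1)."""
    x, y, w, h = box
    mx, my = int(w * margin), int(h * margin)
    x0 = max(0, x - mx)
    y0 = max(0, y - my)
    x1 = min(image_width, x + w + mx)
    y1 = min(image_height, y + h + my)
    return x0, y0, x1, y1
```

Transmitting or storing only the ROI, rather than the full high-resolution capture, is what makes ongoing cataloging and database-assisted lookup practical on a wearable power budget.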
MULTI-RESOLUTION IN SITU DECODING
Methods and systems for performing multi-resolution in situ decoding are described. The method may comprise, for example, acquiring at least one first image of a biological sample at a first optical resolution; identifying locations for a plurality of target analytes based on the at least one first image; acquiring at least one second image of the biological sample at a second optical resolution in at least one decoding cycle of a plurality of decoding cycles used for in situ decoding of the target analytes; and extracting signal intensity data for signals associated with all or a portion of the target analytes from the at least one second image based on the locations for the target analytes identified in the at least one first image. In some instances, the method may further comprise using the signal intensity data extracted from the at least one second image to decode the target analytes.
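The extraction-and-decode step could be sketched as reading per-channel intensities at the locations identified in the first image, once per decoding cycle, and emitting the brightest channel per cycle as the barcode. The data layout (one `(channels, H, W)` array per cycle) and argmax decoding rule are assumptions for illustration.

```python
import numpy as np

def decode_targets(cycle_images, locations):
    """For each target location identified in the first (e.g. lower-resolution)
    image, read the per-channel signal intensity at that pixel in each decoding
    cycle and emit the brightest channel index per cycle as the decoded barcode.
    `cycle_images` is a list of arrays shaped (channels, H, W)."""
    barcodes = []
    for (r, c) in locations:
        barcodes.append(tuple(int(np.argmax(img[:, r, c])) for img in cycle_images))
    return barcodes
```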
PRINT CARTRIDGE REGISTRATION SYSTEMS
In example implementations, an apparatus is provided. The apparatus includes a movable stage, a print cartridge holder, a camera, and a processor. The movable stage is to hold a well plate. The print cartridge holder is to hold a print cartridge that includes a print die over the movable stage. The camera is located above the movable stage and the print cartridge holder, and is positioned such that the movable stage and the print die are within its field of view. The processor is to calculate a first offset of the well plate and a second offset of the print die. The movement of the movable stage is controlled by the processor in accordance with the first offset and the second offset.
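Combining the two offsets into a stage move could be sketched as follows; the sign convention (stage carries the plate toward the die) is an assumption, not stated in the abstract.

```python
def stage_correction(well_plate_offset, print_die_offset):
    """Net stage move that registers the well plate under the print die,
    given camera-measured offsets of each. Sign convention is assumed:
    the stage moves the plate to cancel its own offset and match the die's."""
    (px, py), (dx, dy) = well_plate_offset, print_die_offset
    return (dx - px, dy - py)
```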
Scanning device and method therefor
An aspect is to provide a scanning device, a system, and a method in which the available communication band is not used up even if a plurality of scanning devices are connected to one unit of dedicated hardware. According to one embodiment, a scanning device includes: a camera configured to capture an image; a reduction unit configured to reduce the data volume of an image of merchandise, derived from the image captured by the camera, before output to an image recognition device; and an output unit configured to output the image with the reduced data volume to the image recognition device.
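One plausible reduction (the patent does not fix a method, so the downsampling choice here is our assumption) is block-averaged downsampling of the merchandise region before transmission:

```python
import numpy as np

def reduce_for_recognition(image, factor=2):
    """Reduce the data volume of a captured grayscale frame before sending it
    to the image recognition device, via block-averaged downsampling by
    `factor` along each axis (edges that don't divide evenly are trimmed)."""
    h, w = image.shape[:2]
    h, w = h - h % factor, w - w % factor
    img = image[:h, :w].astype(float)
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
```

A factor of 2 alone cuts the transmitted data volume to a quarter, which is how many devices can share one recognition unit's bandwidth.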