IDENTIFYING NON-UNIFORM WEIGHT OBJECTS USING A SENSOR ARRAY

An object tracking system that includes a sensor and a tracking system. The sensor is configured to capture a frame of at least a portion of a rack within a global plane for a space. The tracking system is configured to detect that an item was removed from the rack. The tracking system is further configured to receive the frame of the rack, to identify a marker on an item within a predefined zone in the frame, and to identify the item associated with the identified marker. The tracking system is further configured to determine a pixel location for a person, to determine that the person is within the predefined zone associatedated with the rack, and to add the identified item to a digital cart associated with the person.
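The zone check described in this abstract can be sketched as below; the function names, the rectangular `(x0, y0, x1, y1)` zone format, and the list-based cart are illustrative assumptions, not details taken from the patent.

```python
def in_zone(pixel, zone):
    """Return True if an (x, y) pixel location lies inside a rectangular zone."""
    x, y = pixel
    x0, y0, x1, y1 = zone
    return x0 <= x <= x1 and y0 <= y <= y1

def assign_item(person_pixel, zone, item_id, cart):
    """Add the identified item to the person's digital cart only when the
    person's pixel location falls within the rack's predefined zone."""
    if in_zone(person_pixel, zone):
        cart.append(item_id)
    return cart

# Person at (120, 340) is inside zone (100, 300)-(200, 400), so the item is added.
cart = assign_item((120, 340), (100, 300, 200, 400), "sku-42", [])
```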

A COMPUTER-IMPLEMENTED METHOD FOR READING A DIGITAL GRAPHICAL DIAGRAM REPRESENTING AN ELECTRIC CIRCUIT
20220254181 · 2022-08-11

A method for reading a digital graphical diagram representing an electric circuit is provided. The graphical diagram includes one or more diagram pages, each representing a portion of the electric circuit. The method includes, for each diagram page: detecting the graphical objects included in the diagram page; obtaining, based on the detected graphical objects, predictive information related to the components included in the portion of the electric circuit represented in the diagram page; and harmonizing the predictive information related to the components of the portion of the electric circuit represented in the diagram page to obtain an identification list of the components of the electric circuit.
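The harmonization step can be read as merging per-page component predictions into a single identification list. A minimal sketch, assuming a dictionary of `reference -> (label, confidence)` predictions per page and a keep-the-highest-confidence merge rule (both assumptions for illustration):

```python
def harmonize(pages):
    """Merge per-page component predictions into one identification list,
    keeping the highest-confidence label per component reference."""
    best = {}
    for predictions in pages:
        for ref, (label, conf) in predictions.items():
            if ref not in best or conf > best[ref][1]:
                best[ref] = (label, conf)
    return {ref: label for ref, (label, _) in best.items()}

# Page 2's more confident reading of C1 wins; R1 appears on page 1 only.
pages = [{"R1": ("resistor", 0.9), "C1": ("capacitor", 0.6)},
         {"C1": ("capacitor", 0.95)}]
```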

Systems and methods for identifying a service qualification of a unit of a community

A community mapping platform may receive an image that depicts a community layout of a community and may process, using a computer vision model, the image to identify a unit, of the community, that is depicted in the image (e.g., based on identifying a text string and/or a polygon in the image). The community mapping platform may determine sets of community geographical coordinates for a set of reference locations of the community and may map the sets of community geographical coordinates to corresponding reference pixel locations of the image. The community mapping platform may determine, using a geographical information system, unit geographical coordinates of the unit based on the reference pixel locations and may perform an action associated with the unit geographical coordinates.
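The pixel-to-coordinate step can be sketched with a simple axis-aligned interpolation between two reference locations; a production geographical information system would use a proper affine or projected transform, and the reference-tuple format below is an assumption for illustration.

```python
def pixel_to_geo(px, py, ref_a, ref_b):
    """Interpolate a unit's (lat, lon) from two reference locations whose
    pixel positions and geographical coordinates are both known.
    Each reference is (pixel_x, pixel_y, lat, lon)."""
    xa, ya, lat_a, lon_a = ref_a
    xb, yb, lat_b, lon_b = ref_b
    lon = lon_a + (px - xa) * (lon_b - lon_a) / (xb - xa)
    lat = lat_a + (py - ya) * (lat_b - lat_a) / (yb - ya)
    return lat, lon

# A pixel halfway between the two references maps to the midpoint coordinates.
pixel_to_geo(50, 50, (0, 0, 40.0, -75.0), (100, 100, 41.0, -74.0))
```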

RECOGNIZING TEXT IN IMAGE DATA
20210192202 · 2021-06-24

A device may receive image data representing a document, the document including text and edges. Based on the edges, the device may identify a segment of interest within the image data and crop the segment of interest to obtain a portion of the image data. In addition, the device may perform optical character recognition on the portion of the image data, the optical character recognition producing recognized text. The device may obtain, based on the recognized text, validation data that includes verification text, and determine whether the recognized text is verified based on the verification text. Based on a result of the determination, the device may perform an action.
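The verification step, after OCR has already produced recognized text, can be sketched as a similarity comparison against the verification text; the threshold value and the use of `difflib` are illustrative assumptions, not the patent's method.

```python
import difflib

def verify(recognized, verification, threshold=0.8):
    """Treat the recognized text as verified when its similarity to the
    verification text meets a threshold (0.8 here is illustrative)."""
    ratio = difflib.SequenceMatcher(None, recognized.lower(),
                                    verification.lower()).ratio()
    return ratio >= threshold

verify("Invoice 1234", "invoice 1234")  # case differences still verify
```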

Font capture from images of target decorative character glyphs
11126788 · 2021-09-21

Embodiments of the present invention are directed towards generating a captured font from an image of a target font. Character glyphs of the target font can be detected from the image. A character glyph can be selected from the detected character glyphs. A character mask can be generated for the selected character glyph. The character mask can be used to identify a similar font. A character from the similar font corresponding to the selected character glyph can be transformed to match the character mask. This transformed corresponding character can be presented and used to generate a captured font. In addition, a texture from the image can be applied to the captured font based on the transformed corresponding character.
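One piece of the transform step, resizing a similar font's character to match the character mask's dimensions, can be sketched with a nearest-neighbour rescale of a binary bitmap; the list-of-lists bitmap representation is an assumption for illustration, and a real implementation would also align stroke shape, not just size.

```python
def fit_to_mask(glyph, target_h, target_w):
    """Nearest-neighbour rescale of a binary glyph bitmap so a similar
    font's character matches the target character mask's dimensions."""
    h, w = len(glyph), len(glyph[0])
    return [[glyph[r * h // target_h][c * w // target_w]
             for c in range(target_w)]
            for r in range(target_h)]

# A 2x2 diagonal glyph scaled up to a 4x4 mask.
fit_to_mask([[1, 0], [0, 1]], 4, 4)
```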

XBRL-based intelligent financial cloud platform system, construction method and business implementation method thereof

The invention belongs to the field of cloud technology and cloud processing and discloses an XBRL-based intelligent financial cloud platform system that provides rich accounting services for users in an efficient and convenient manner. The platform system comprises a tenant document, a document tool, an accounting tool and an administration center deployed on a server. The tenant document implements functions such as order creation, order status query and historical order viewing; the document tool provides such cloud services as image preprocessing, element correction and total element correction; the accounting tool provides such cloud services as rule checking, simulated accounting and accounting reviewing; the administration center is used to provide private cloud management and operation services for the financial cloud in an automated, intelligent and standardized manner, and consists of a grain center, a definition center, a construction center, a business center, a user center and an operation center. Furthermore, the invention also provides a construction method and a business implementation method corresponding to the cloud platform system, making it suitable for providing efficient and convenient accounting cloud services.

IDENTIFYING NON-UNIFORM WEIGHT OBJECTS USING A SENSOR ARRAY

An object tracking system that includes a sensor, a weight sensor, and a tracking system. The sensor is configured to capture a frame of at least a portion of a rack within a global plane for a space. The tracking system is configured to detect a weight decrease on the weight sensor. The tracking system is further configured to receive the frame of the rack, to identify a marker on an item within a predefined zone in the frame, and to identify the item associated with the identified marker. The tracking system is further configured to determine a pixel location for a person, to determine that the person is within the predefined zone associated with the rack, and to add the identified item to a digital cart associated with the person.

On-shelf image based out-of-stock detection
10949799 · 2021-03-16

An out-of-stock detection system notifies store management that a product is out of stock. The system captures images of a shelf and determines the positions of product labels thereon. For each product label, a bounding box is generated based on the position of each product label on the shelf. The system then identifies a product for each product label based on information within each product label and, for each product label, stores the product identified for each bounding box. Accordingly, the system performs an out-of-stock detection process that includes capturing additional image data of the shelf periodically that includes each bounding box, providing a portion of the additional image data for each bounding box to a model trained to determine whether the bounding box contains products, and sending a notification for a product determined to be out of stock to a store client device based on output from the model.
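The detection loop over bounding boxes can be sketched as follows; `classify` stands in for the trained model (here an arbitrary callable returning True when products are visible in a crop), and the parallel-list layout is an assumption for illustration.

```python
def out_of_stock(crops, classify, products):
    """Run the trained model over each label's bounding-box crop and
    collect the products whose shelf region the model deems empty.

    crops    -- image crop per bounding box (one per product label)
    classify -- model callable: crop -> True if products are present
    products -- product identified for each bounding box, same order
    """
    return [products[i] for i, crop in enumerate(crops)
            if not classify(crop)]

# The model sees products only in the first crop, so "tea" is reported.
out_of_stock(["full", "empty"], lambda c: c == "full", ["soap", "tea"])
```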

FONT CAPTURE FROM IMAGES OF TARGET DECORATIVE CHARACTER GLYPHS
20210073323 · 2021-03-11

Embodiments of the present invention are directed towards generating a captured font from an image of a target font. Character glyphs of the target font can be detected from the image. A character glyph can be selected from the detected character glyphs. A character mask can be generated for the selected character glyph. The character mask can be used to identify a similar font. A character from the similar font corresponding to the selected character glyph can be transformed to match the character mask. This transformed corresponding character can be presented and used to generate a captured font. In addition, a texture from the image can be applied to the captured font based on the transformed corresponding character.