G06V10/235

Combined 2D and 3D processing of images or spaces

2D and 3D data of a scene are linked by associating points in the 3D data with corresponding points in multiple different 2D images within the 2D data. Labels assigned to points in either dataset can be propagated to the other. Labels propagated to a point in the 3D data are aggregated, and the highest-ranked labels are kept and propagated back to the 2D images. 3D data labeled in this manner allows objects that are partially obscured in certain views to be identified more accurately. Thus, an object can be manipulated in all 2D views of the 2D data in which it is at least partially visible, in order to digitally remove, alter, or replace it.
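The aggregation step described above can be sketched as a per-point vote over the labels propagated from each 2D view in which the point is visible. A minimal illustration, assuming hypothetical point IDs and labels (the abstract does not specify the ranking scheme; simple majority voting is used here as a stand-in):

```python
from collections import Counter

def aggregate_labels(votes_per_point):
    """For each 3D point, tally the labels propagated from the 2D views
    in which the point is visible, and keep the highest-ranked label."""
    return {point: Counter(labels).most_common(1)[0][0]
            for point, labels in votes_per_point.items()}

# Hypothetical example: point p1 is visible in three views; two label it
# "car", one (where the object is partially obscured) labels it "bush".
votes = {"p1": ["car", "car", "bush"], "p2": ["tree"]}
winners = aggregate_labels(votes)
# The winning 3D label can then be propagated back to every 2D view,
# correcting the view in which the object was partially obscured.
```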

Mobile application for automatic identification enrollment using information synthesis and biometric liveness detection
11527087 · 2022-12-13

Methods and systems are described for synthesizing information from multiple discrete and unrelated documents and, from the synthesized information, verifying the identity of an individual to a high degree of trust. Information is adaptively synthesized from varied documents through the generation of document confidence scores. Enrollment requirements for a trusted identification are evaluated in a real-time environment. The enrollment requirements may represent the minimum level of documentation required to sufficiently verify an individual's true identity in order to permit issuance of the trusted identification. Once sufficient documentation has been obtained and validated to meet or exceed the enrollment requirements, the documentation (including any original source copies) is securely submitted to the trusted identification issuing authority.
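The enrollment check above amounts to comparing aggregated document confidence against a minimum threshold. A toy sketch, with illustrative document types, scores, and threshold (the abstract does not define the scoring function):

```python
def meets_enrollment_requirement(doc_scores, threshold=1.0):
    """Sum per-document confidence scores and compare against the
    minimum level of documentation required for a trusted ID.
    Scores and threshold here are purely illustrative."""
    return sum(doc_scores.values()) >= threshold

# Hypothetical documents: no single one suffices, but together they
# meet the real-time enrollment requirement.
docs = {"passport": 0.6, "utility_bill": 0.2, "bank_statement": 0.3}
ok = meets_enrollment_requirement(docs)
```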

DETERMINING IMAGE SENSOR SETTINGS USING LIDAR
20220394172 · 2022-12-08

Methods and devices related to determining image sensor settings using LiDAR are described. In an example, a method can include receiving, at a processing resource via a LiDAR sensor, first signaling indicative of location data, elevation data, and/or light energy intensity data associated with an object, receiving, at the processing resource via an image sensor, second signaling indicative of data representing an image of the object, generating, based at least in part on the first signaling, additional data representing a frame of reference for the object, transmitting to a user interface third signaling indicative of the data representing the frame of reference for the object and the data representing the image of the object, and displaying, at the user interface and based at least in part on the third signaling, another image that comprises a combination of the frame of reference and the data representing the image.
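The data flow above can be sketched with two illustrative helpers: one that derives a frame of reference from LiDAR returns, and one that combines it with the image for display. Projection of LiDAR location/elevation data into image coordinates is assumed to have happened upstream, and marking corner pixels is a stand-in for real rendering; all names are hypothetical:

```python
def frame_of_reference(lidar_points):
    """Derive a simple 2D frame of reference (a bounding rectangle) for
    an object from LiDAR returns, given here as (x, y) pairs already
    projected into image coordinates."""
    xs = [p[0] for p in lidar_points]
    ys = [p[1] for p in lidar_points]
    return (min(xs), min(ys), max(xs), max(ys))

def overlay(image, rect):
    """Combine the frame of reference with the image for display at the
    user interface: mark the rectangle's corner pixels."""
    x0, y0, x1, y1 = rect
    for (x, y) in [(x0, y0), (x1, y1)]:
        image[y][x] = 255
    return image

rect = frame_of_reference([(0, 0), (2, 3), (1, 1)])
img = overlay([[0] * 4 for _ in range(4)], rect)
```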

Sharing of user markings between printed and digital documents
11520974 · 2022-12-06

Techniques are disclosed for sharing user markings between digital documents and corresponding physically printed documents. The sharing is facilitated using an Augmented Reality (AR) device, such as a smartphone or a tablet. The device streams images of a page of a book on a display and accesses a corresponding digital document that is a digital version of the content printed in the book. In an example, the digital document has a digital user marking, e.g., a comment associated with a paragraph of the digital document, whereas the corresponding paragraph of the physical book lacks any such comment. When the device streams the images of the page of the book on the display, the device appends the digital comment to the paragraph of the page of the book within the image stream. Thus, the user can view the digital comment in the AR environment while reading the physical book.
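The matching step implied above — deciding which stored digital markings to append to which paragraph in the camera stream — can be sketched with naive substring matching. The anchor/comment structure and all names are hypothetical; a real system would match recognized text far more robustly:

```python
def markings_for_page(ocr_paragraphs, digital_markings):
    """Given paragraphs recognized in the streamed page images and the
    user markings stored with the digital document, return the comments
    to append to each matching paragraph in the AR view."""
    overlays = []
    for i, text in enumerate(ocr_paragraphs):
        for marking in digital_markings:
            if marking["anchor"] in text:  # naive text matching
                overlays.append((i, marking["comment"]))
    return overlays

# Hypothetical page text and a single stored comment.
paras = ["Intro to neural networks in practice", "Unrelated paragraph"]
marks = [{"anchor": "neural networks", "comment": "re-read this"}]
result = markings_for_page(paras, marks)
```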

EFFICIENT PLANT SELECTION

A plant selection apparatus has a storage means for storing a plurality of image datasets of plants and associated plant information including at least one of genotype information of the plants, phenotype information of the plants, and pedigree information of the plants. A selection unit preselects a subset of the plants. A display device displays the image datasets of the subset of the plants. A tracking unit generates observation information including information on which of the subset of the plants is observed, and the storage means is configured for storing the observation information. A training unit trains a classifier based on the observation information, the input selection, and the plant information including said at least one of the genotype information, the phenotype information, and the pedigree information.
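The training unit above learns from which plants were actually observed. A toy stand-in, assuming a single numeric phenotype feature per plant and a nearest-centroid rule (the abstract does not specify the classifier; all names and values are illustrative):

```python
def train_centroids(features, observed):
    """Toy training unit: average the numeric phenotype features of
    observed vs. unobserved plants from the preselected subset."""
    groups = {True: [], False: []}
    for plant, f in features.items():
        groups[plant in observed].append(f)
    def centroid(rows):
        return [sum(col) / len(rows) for col in zip(*rows)]
    return {k: centroid(v) for k, v in groups.items()}

def predict_observed(centroids, f):
    """Classify a new plant's features by the nearer centroid."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return dist(f, centroids[True]) < dist(f, centroids[False])

# Hypothetical phenotype measurements; plants a and b were observed.
cents = train_centroids({"a": [1.0], "b": [1.2], "c": [5.0]}, {"a", "b"})
```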

Learning user interface controls via incremental data synthesis

A User Interface (UI) object detection system employs an initial dataset comprising a set of images, which may include synthesized images, to train a Machine Learning (ML) engine and generate an initial trained model. A data point generator is employed to generate an updated synthesized image set, which is used to further train the ML engine. The data point generator may employ images generated by an application program as a reference by which to generate the updated synthesized image set. The images generated by the application program may be tagged in advance. Alternatively, or in addition, the images generated by the application program may be captured dynamically by a user using the application program.
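The incremental loop described above — train, synthesize an updated image set from application-generated references, train again — can be sketched as follows. The `train` and `synthesize` callables are stand-ins; the toy versions below merely count and grow the dataset:

```python
def incremental_training(train, synthesize, initial_set, rounds=3):
    """Sketch of incremental data synthesis: train on the current image
    set, use a data point generator to produce an updated synthesized
    set, and repeat."""
    model, dataset = None, list(initial_set)
    history = []
    for _ in range(rounds):
        model = train(dataset)
        dataset = synthesize(model, dataset)
        history.append(len(dataset))
    return model, history

# Toy stand-ins: "training" returns the dataset size; "synthesis"
# adds one image derived from an application-generated reference.
def toy_train(ds):
    return len(ds)

def toy_synthesize(model, ds):
    return ds + ["synthetic"]

model, history = incremental_training(toy_train, toy_synthesize,
                                      ["img1", "img2"])
```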

SEMI-AUTOMATIC IMAGE DATA LABELING METHOD, ELECTRONIC APPARATUS, AND STORAGE MEDIUM

Disclosed are a semi-automatic image data labeling method, an electronic apparatus, and a non-transitory computer-readable storage medium. The semi-automatic image data labeling method may include: displaying a to-be-labeled image, the to-be-labeled image comprising a selected area and an unselected area; acquiring a coordinate point of the unselected area and a first range value; executing a grabcut algorithm based on the acquired coordinate point of the unselected area and the first range value, and obtaining a binarized image divided by the grabcut algorithm; executing an edge tracking algorithm on the binarized image to acquire current edge coordinates; updating a local coordinate set based on the acquired current edge coordinates; and updating the selected area of the to-be-labeled image based on the acquired local coordinate set.
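The segment-then-trace pipeline above can be sketched with toy stand-ins: a flood fill from the clicked coordinate within the first range value in place of grabcut, and a 4-neighbour boundary test in place of a full edge-tracking algorithm. Grayscale values and coordinates are illustrative:

```python
def segment(image, seed, rng):
    """Toy stand-in for grabcut: flood-fill from the clicked coordinate
    over pixels within `rng` of the seed intensity, yielding a
    binarized mask."""
    h, w = len(image), len(image[0])
    base = image[seed[1]][seed[0]]
    mask = [[0] * w for _ in range(h)]
    stack = [seed]
    while stack:
        x, y = stack.pop()
        if 0 <= x < w and 0 <= y < h and not mask[y][x] \
                and abs(image[y][x] - base) <= rng:
            mask[y][x] = 1
            stack += [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return mask

def edge_coordinates(mask):
    """Toy edge tracking: a foreground pixel is an edge pixel if any
    4-neighbour is background or out of bounds."""
    h, w = len(mask), len(mask[0])
    edges = []
    for y in range(h):
        for x in range(w):
            if mask[y][x] and any(
                    not (0 <= nx < w and 0 <= ny < h) or not mask[ny][nx]
                    for nx, ny in [(x+1, y), (x-1, y), (x, y+1), (x, y-1)]):
                edges.append((x, y))
    return edges

img = [[0, 0, 0], [0, 9, 0], [0, 0, 0]]
mask = segment(img, (1, 1), 0)          # click on the bright pixel
edges = edge_coordinates(mask)          # coordinates to merge into the
                                        # local coordinate set
```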

METHOD AND APPARATUS FOR GENERATING BOUNDING BOX, DEVICE AND STORAGE MEDIUM

The present disclosure provides a method for generating a bounding box and an apparatus for generating a bounding box, a device and a storage medium, which relate to the field of artificial intelligence, and in particular, to the technical fields of computer vision, cloud computing, intelligent search, Internet of Vehicles, and intelligent cockpits. The specific implementation solution is as follows: acquiring a depth map to be processed and depth information corresponding to the depth map; capturing a selection action by a user for a target object on the depth map; then, based on the selection action, determining, in the depth information, boundary point cloud information of the target object; and finally, based on the boundary point cloud information, generating a bounding box of the target object.
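The final two steps above — selecting boundary point cloud information near the user's click and generating a bounding box from it — can be sketched as follows. Each point-cloud entry here pairs a depth-map pixel with a 3D point; the radius-based selection and all values are illustrative:

```python
def points_near_selection(depth_points, click, radius):
    """Pick the point-cloud entries whose depth-map pixel lies within
    `radius` of the user's selection. Entries are ((u, v), (x, y, z))."""
    return [p for (u, v), p in depth_points
            if (u - click[0]) ** 2 + (v - click[1]) ** 2 <= radius ** 2]

def bounding_box(points):
    """Axis-aligned 3D bounding box of the boundary point cloud:
    (min corner, max corner)."""
    xs, ys, zs = zip(*points)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

cloud = [((0, 0), (1.0, 2.0, 3.0)),
         ((10, 10), (9.0, 9.0, 9.0)),
         ((1, 0), (2.0, 1.0, 4.0))]
selected = points_near_selection(cloud, (0, 0), 2)
box = bounding_box(selected)
```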

Long running workflows for document processing using robotic process automation
11593599 · 2023-02-28

Systems and methods for executing a robotic process automation (RPA) workflow for document processing are provided. An input document is processed by a first robot executing one or more document processing activities of the RPA workflow. The document processing activities may include optical character recognition, digitization, classification, or data extraction. Execution of the RPA workflow is suspended by the first robot in response to a user validation activity of the RPA workflow. The user validation activity provides for user validation of the results of the one or more document processing activities. A user request that requests validation of the results from an end user is generated and the user request is transmitted to the end user. The execution of the RPA workflow is resumed by a second robot based on the validation received from the end user.
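The suspend/resume split between the two robots can be sketched as a small state machine. Activity names, the extracted fields, and the state labels are illustrative, not the patented implementation:

```python
class DocumentWorkflow:
    """Sketch of a long-running RPA workflow: robot 1 runs the document
    processing activities and suspends at the user-validation activity;
    robot 2 resumes once the end user's validation arrives."""

    def __init__(self):
        self.state, self.results, self.log = "running", None, []

    def run_first_robot(self, document):
        for activity in ("ocr", "classify", "extract"):
            self.log.append(("robot1", activity))
        self.results = {"doc": document, "fields": {"total": "42.00"}}
        self.state = "suspended"   # wait for user validation
        return self.results        # sent to the end user in a request

    def resume_second_robot(self, validated):
        assert self.state == "suspended"
        self.log.append(("robot2", "resume"))
        self.state = "completed" if validated else "rejected"
        return self.state

wf = DocumentWorkflow()
results = wf.run_first_robot("invoice.pdf")
```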

Electronic device, fingerprint sensing control method and fingerprint scanning control method

An electronic device, a fingerprint sensing control method and a fingerprint scanning control method are provided. A sensing region of a display panel is divided into a plurality of fingerprint zones. The electronic device determines at least one target fingerprint zone from the fingerprint zones according to a touched area. The electronic device scans the at least one target fingerprint zone to control the at least one target fingerprint zone for performing fingerprint sensing. The electronic device performs an accelerated scanning operation. The accelerated scanning operation includes: setting a scanning speed corresponding to at least one target scanning group coupled to at least the touched area to a first speed; and setting a scanning speed corresponding to one or more scanning groups other than the at least one target scanning group to a second speed higher than the first speed.
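The accelerated scanning operation above reduces to assigning per-group scan speeds based on which groups cover the touched area. A minimal sketch with illustrative zone indices and speed units:

```python
def scan_speeds(groups, target_groups, slow, fast):
    """Accelerated scanning: target scanning groups coupled to the
    touched area get the first (slower) speed for fingerprint sensing;
    every other group is skimmed at the higher second speed."""
    return {g: (slow if g in target_groups else fast) for g in groups}

# Hypothetical panel with 8 scanning groups; the touch covers groups 2-3.
speeds = scan_speeds(range(8), {2, 3}, slow=1, fast=4)
```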