Patent classifications
G06V20/50
Selecting and Reporting Objects Based on Events
Systems and methods for selecting and reporting objects based on events are provided. An indication of first and second objects, an indication of first events associated with the first object, and an indication of second events associated with the second object may be received. Based on the first events, it may be determined to include in a textual content a description of the first object based on the first events. Based on the second events, it may be determined not to include in the textual content any description of the second object based on the second events. Data associated with the first events may be analyzed to generate a particular description of the first object. The textual content, including the particular description of the first object and excluding any description of the second object based on the second events, may be generated and provided.
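The selection step this abstract describes could be sketched as follows. This is an illustrative sketch only, not the patented implementation: the significance-score threshold, the event fields (`score`, `kind`), and the function name are all assumptions introduced for the example.

```python
# Hypothetical sketch: objects whose associated events meet an inclusion
# criterion (here, a minimal significance score, an assumption) get a
# description in the generated textual content; other objects are omitted.

def generate_report(objects_with_events, threshold=0.5):
    """Return textual content describing only objects whose events qualify."""
    lines = []
    for obj_name, events in objects_with_events.items():
        # Decide, based on the object's events, whether to describe it.
        significance = max((e["score"] for e in events), default=0.0)
        if significance >= threshold:
            # Analyze the event data to produce a particular description.
            kinds = sorted({e["kind"] for e in events})
            lines.append(f"{obj_name} was involved in: {', '.join(kinds)}.")
    return " ".join(lines)

report = generate_report({
    "first object": [{"kind": "collision", "score": 0.9}],
    "second object": [{"kind": "idle", "score": 0.1}],
})
```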
Analyzing Objects Data to Generate a Textual Content Reporting Events
Systems, methods and non-transitory computer readable media for analyzing objects data to generate a textual content reporting events are provided. An indication of an event may be received. An indication of a group of one or more objects associated with the event may be received. For each object of the group of one or more objects, data associated with the object may be received. The data associated with the group of one or more objects may be analyzed to select an adjective. A particular description of the event may be generated. The particular description may be based on the group of one or more objects. The particular description may include the selected adjective. A textual content may be generated. The textual content may include the particular description. The generated textual content may be provided.
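The adjective-selection step named in this abstract might look like the sketch below. The mapping from analyzed object data (here simply the object count) to an adjective is purely illustrative; the patent does not specify any particular mapping, and all names are assumptions.

```python
# Hedged sketch: select an adjective from analyzed object data and embed it
# in a generated description of the event. The count-to-adjective table is
# an invented placeholder, not the patented analysis.

ADJECTIVE_BY_COUNT = {1: "solitary", 2: "paired"}

def describe_event(event_name, objects):
    """Generate a particular description of an event from its objects."""
    adjective = ADJECTIVE_BY_COUNT.get(len(objects), "crowded")
    nouns = " and ".join(o["type"] for o in objects)
    return f"A {adjective} {event_name} involving {nouns}."

text = describe_event("meeting", [{"type": "person"}, {"type": "vehicle"}])
```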
SYSTEM AND METHOD FOR ROBOTIC OBJECT PLACEMENT
A computing system including a processing circuit in communication with a robot and a camera having a field of view. The processing circuit obtains image information representing the objects in the field of view and a loading environment, which includes loading areas, an object queue, and a buffer zone. The computing system is configured to use the obtained image information in motion planning operations for the retrieval and placement of objects from the object queue into the loading environment. Pallets provided within the loading environment (i.e., within the loading areas) are dedicated to receiving objects having corresponding object type identifiers. The computing system further uses the image information to determine the fill status of pallets existing within the loading environment, and whether new pallets need to be brought into the loading environment and/or swapped out with existing pallets to account for future planning and placement operations.
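The pallet bookkeeping this abstract describes can be sketched as a small data model: each pallet is dedicated to one object type identifier, and pallets at capacity are flagged for swap-out. The class names, fields, and the capacity-based fill criterion are assumptions made for illustration.

```python
# Illustrative sketch of type-dedicated pallets and the fill-status check
# that drives swap-out decisions. Not the patented planning logic.

class Pallet:
    def __init__(self, object_type, capacity):
        self.object_type = object_type  # the type identifier this pallet accepts
        self.capacity = capacity
        self.count = 0

    def can_accept(self, object_type):
        # A pallet only receives objects with its corresponding type identifier.
        return object_type == self.object_type and self.count < self.capacity

    def place(self, object_type):
        assert self.can_accept(object_type)
        self.count += 1

def pallets_to_swap(pallets):
    """Pallets at capacity should be swapped out for empty ones."""
    return [p for p in pallets if p.count >= p.capacity]
```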
Deep learning-based method and device for calculating overhang of battery
A deep learning-based method for calculating an overhang of a battery includes the following steps: obtaining a training sample image set; training a neural network on the training sample image set to obtain a segmentation network model; processing an object detection image of the battery to be detected with the segmentation network model to obtain a corresponding first binarized image; obtaining the top coordinates of each of a positive electrode and a negative electrode of the battery to be detected from the first binarized image; and calculating the overhang of the battery to be detected from the top coordinates.
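The geometric step that follows segmentation could be sketched as below: given a binarized mask per electrode, take the topmost (minimum-row) foreground coordinate of each, and report the overhang as the vertical difference. The sign convention (anode top above cathode top) and the pixel-to-millimeter scale are assumptions; the segmentation network itself is out of scope here.

```python
# Sketch of the "top coordinates" step: extract the topmost foreground row
# of each electrode mask and compute the overhang as their difference.

import numpy as np

def top_row(mask):
    """Row index of the topmost foreground pixel in a binary mask."""
    rows = np.flatnonzero(mask.any(axis=1))
    return int(rows[0])

def overhang(anode_mask, cathode_mask, mm_per_pixel=1.0):
    # Assumed convention: the anode extends above the cathode, so the
    # overhang is the vertical margin between their top edges.
    return (top_row(cathode_mask) - top_row(anode_mask)) * mm_per_pixel

# Toy masks standing in for segmentation output.
anode = np.zeros((10, 5), dtype=bool); anode[2:, :] = True
cathode = np.zeros((10, 5), dtype=bool); cathode[5:, :] = True
```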
SYSTEMS AND METHODS FOR OBJECT DETECTION
A computing system including a processing circuit in communication with a camera having a field of view. The processing circuit is configured to perform operations related to detecting, identifying, and retrieving objects disposed amongst a plurality of objects. The processing circuit may be configured to perform operations related to object recognition template generation, feature generation, hypothesis generation, hypothesis refinement, and hypothesis validation.
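The staged pipeline this abstract names (template generation, feature generation, hypothesis generation, refinement, validation) might be sketched as below. The abstract gives no detail on any stage, so the similarity measure, threshold, and data shapes here are all invented placeholders.

```python
# Minimal sketch of a hypothesis-based detection pipeline. Each stage is a
# toy stand-in; the patented operations are not specified in the abstract.

def match_score(image_features, template_features):
    # Toy similarity: fraction of template features present in the image.
    shared = set(image_features) & set(template_features)
    return len(shared) / max(len(template_features), 1)

def detect(image_features, templates):
    hypotheses = []
    for name, feats in templates.items():
        score = match_score(image_features, feats)     # hypothesis generation
        if score > 0.0:
            hypotheses.append({"template": name, "score": score})
    hypotheses.sort(key=lambda h: -h["score"])         # refinement: rank candidates
    return [h for h in hypotheses if h["score"] >= 0.5]  # validation: keep strong ones

result = detect({"edge", "corner", "hole"},
                {"bracket": {"edge", "corner"}, "pipe": {"curve"}})
```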
SUBSTANCE PREPARATION EVALUATION SYSTEM
Automatic substance preparation and evaluation systems and methods are provided for preparing and evaluating a fluidic substance, such as a sample containing bodily fluid, in a container and/or in a dispense tip. The systems and methods can detect volumes, evaluate integrities, and check particle concentrations in the container and/or the dispense tip.
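The volume and particle-concentration checks mentioned here could reduce to simple tolerance tests, sketched below. The tolerance values, units, and function name are assumptions; the abstract does not state how the measurements are obtained or judged.

```python
# Hedged sketch: pass/fail evaluation of a prepared substance from a measured
# volume and particle concentration. Thresholds are illustrative only.

def evaluate_preparation(measured_volume_ul, expected_volume_ul,
                         particles_per_ul, max_particles_per_ul=100,
                         volume_tolerance=0.05):
    """True if volume is within tolerance and particle load is acceptable."""
    volume_ok = (abs(measured_volume_ul - expected_volume_ul)
                 <= volume_tolerance * expected_volume_ul)
    particles_ok = particles_per_ul <= max_particles_per_ul
    return volume_ok and particles_ok
```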
TOOL SYSTEM, TOOL, WORK TARGET IDENTIFICATION SYSTEM, WORK TARGET IDENTIFICATION METHOD, AND PROGRAM
A tool system includes a tool, an image capturing unit, a processing unit, and a set state detection unit. The tool is a portable tool including a driving unit to be activated with power supplied from a power source. The image capturing unit is provided for the tool and generates a captured image. The processing unit intermittently performs identification processing of identifying a work target based on the captured image. The set state detection unit detects a state where the tool is set in place on the work target.
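The control loop implied by this abstract, with identification performed only intermittently rather than on every captured frame, could be sketched as follows. The frame interval, callback shapes, and all names are assumptions introduced for the example.

```python
# Sketch of the intermittent identification loop: capture frames, identify
# the work target only every N frames, and log when the set-state detector
# reports the tool set in place on the identified target.

def run_tool_cycle(frames, identify, detect_set_state, every_n=3):
    """Process frames; re-identify the work target every `every_n` frames."""
    log = []
    target = None
    for i, frame in enumerate(frames):
        if i % every_n == 0:               # intermittent identification
            target = identify(frame)
        if target is not None and detect_set_state(frame, target):
            log.append((i, target))        # tool set in place on the target
    return log

events = run_tool_cycle(
    ["a", "b", "c", "d"],
    identify=lambda frame: frame.upper(),
    detect_set_state=lambda frame, target: frame == "c",
)
```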