G06V2201/10

SEMANTIC ANNOTATION OF SENSOR DATA USING UNRELIABLE MAP ANNOTATION INPUTS

Provided are methods for semantic annotation of sensor data using unreliable map annotation inputs, which can include training a machine learning model to accept inputs including images representing sensor data for a geographic area and unreliable semantic annotations for the geographic area. The machine learning model can be trained against validated semantic annotations for the geographic area, such that subsequent to training, additional images representing sensor data and additional unreliable semantic annotations can be passed through the trained model to provide predicted semantic annotations for the additional images. Systems and computer program products are also provided.
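A minimal sketch of the idea in this abstract: a model is "trained" against validated annotations to learn how much to trust each input (image evidence versus unreliable map annotation), then applied to new inputs. The single learned weight per input is a stand-in for the machine learning model; all data here is simulated and hypothetical.

```python
# Illustrative sketch only: learn, from validated annotations, how reliable
# the sensor-image evidence and the unreliable map annotation each are, then
# combine them by weighted vote. Not the patent's actual model.
import random

random.seed(0)

def make_example():
    truth = random.random() < 0.5                             # validated annotation
    sensor = truth if random.random() < 0.9 else not truth    # image evidence, ~90% reliable
    noisy = truth if random.random() < 0.6 else not truth     # map annotation, ~60% reliable
    return int(sensor), int(noisy), int(truth)

train = [make_example() for _ in range(5000)]

# "Training": estimate how often each input agrees with the validated label.
w_sensor = sum(s == t for s, n, t in train) / len(train)
w_noisy = sum(n == t for s, n, t in train) / len(train)

def predict(sensor, noisy):
    # Weighted vote; the more reliable input dominates when they disagree.
    score = w_sensor * (2 * sensor - 1) + w_noisy * (2 * noisy - 1)
    return int(score > 0)
```

After training, `predict` follows the image evidence when it conflicts with the unreliable annotation, which is the behavior the abstract's trained model is meant to exhibit.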

SYSTEM AND METHOD FOR ROBOTIC OBJECT PLACEMENT
20230052515 · 2023-02-16

A computing system includes a processing circuit in communication with a robot and a camera having a field of view. The processing circuit obtains image information based on the objects in the field of view and a loading environment, which includes loading areas, an object queue, and a buffer zone. The computing system is configured to use the obtained image information in motion planning operations for the retrieval and placement of objects from the object queue into the loading environment. Pallets provided within the loading environment (i.e., within the loading areas) are dedicated to receiving objects having corresponding object type identifiers. The computing system further uses the image information to determine the fill status of pallets existing within the loading environment, and whether new pallets need to be brought into the loading environment and/or swapped out with existing pallets to account for future planning and placement operations.
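The queue-to-pallet logic above can be sketched as follows. Each pallet is dedicated to one object type identifier and has a finite capacity; the planner assigns queued objects to matching, non-full pallets and reports the types for which a new pallet must be brought in or swapped. The data structures and names are illustrative, not the patent's.

```python
# Hypothetical sketch of the assignment and pallet-swap decision described
# in the abstract; capacities and type identifiers are invented.
from dataclasses import dataclass

@dataclass
class Pallet:
    object_type: str
    capacity: int
    count: int = 0

    @property
    def full(self) -> bool:
        return self.count >= self.capacity

def plan_placements(queue, pallets):
    """Assign each queued object to a matching, non-full pallet.

    Returns (placements, needs_new_pallet) where needs_new_pallet lists
    object types for which no usable pallet exists in the loading areas.
    """
    placements, needs_new = [], []
    for obj_type in queue:
        target = next((p for p in pallets
                       if p.object_type == obj_type and not p.full), None)
        if target is None:
            needs_new.append(obj_type)   # swap in a fresh pallet for this type
        else:
            target.count += 1
            placements.append((obj_type, target))
    return placements, needs_new
```

In a real system the `count` and `full` state would be derived from the camera's image information rather than tracked incrementally.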

APPARATUS OF SELECTING VIDEO CONTENT FOR AUGMENTED REALITY, USER TERMINAL AND METHOD OF PROVIDING VIDEO CONTENT FOR AUGMENTED REALITY
20230051112 · 2023-02-16

A video content selecting apparatus for augmented reality is provided. The apparatus includes a communication interface; and an operation processor configured to: (a) collect a plurality of video contents through the Internet; (b) extract feature information and metadata for each of the plurality of video contents, and generate a hash value corresponding to the feature information by using a predetermined hashing function; (c) manage a database to include at least the hash value and the metadata of each of the plurality of video contents; (d) receive object information corresponding to an object in a real-world environment from a user terminal through the communication interface; (e) search the database based on the object information and select a video content corresponding to the object information from among the plurality of video contents; and (f) transmit the metadata of the selected video content to the user terminal through the communication interface.
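Steps (b)-(e) can be sketched as a hash-keyed lookup. The feature extractor here is a stand-in (a keyword set), since the abstract does not specify one; SHA-256 plays the role of the predetermined hashing function, and exact matching stands in for whatever search the apparatus actually performs.

```python
# Illustrative sketch of the hash-and-lookup flow in steps (b)-(e);
# feature representation and matching strategy are assumptions.
import hashlib

def feature_hash(features: frozenset) -> str:
    # Deterministic hash of the feature set (the "predetermined hashing function").
    return hashlib.sha256(" ".join(sorted(features)).encode()).hexdigest()

database = {}   # hash value -> metadata, as in step (c)

def ingest(video_id, features, metadata):
    database[feature_hash(frozenset(features))] = dict(metadata, id=video_id)

def select_for_object(object_features):
    # Step (e): search by the same hash; a production system would likely
    # use similarity search rather than exact hash equality.
    return database.get(feature_hash(frozenset(object_features)))

ingest("v1", {"coffee", "cup"}, {"title": "Latte art"})
ingest("v2", {"car", "engine"}, {"title": "Engine teardown"})
```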

Whiteboard background customization system

Systems and methods are directed to automatically creating customized whiteboard backgrounds. A network system accesses metadata associated with a virtual presentation (e.g., title, topic, tenant identifier). First image data is identified based on first data of the metadata and second image data is identified based on second data of the metadata. Using the first image data and the second image data, the network system generates a plurality of whiteboard backgrounds by combining a first object obtained from the first image data with a second object obtained from the second image data to form each whiteboard background. The network system then causes presentation of a representation of each of the plurality of whiteboard backgrounds on a user interface of a host, who can select one of the representations. In response to receiving a selection, a whiteboard background corresponding to the selected representation is displayed as background on a whiteboard canvas.
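The generation step above amounts to pairing one object from each image source. A minimal sketch, assuming hypothetical lookup tables keyed by the metadata's topic and tenant identifier:

```python
# Sketch of background-candidate generation: first image data is found from
# one metadata field, second image data from another, and each candidate
# combines one object from each. Tables and object names are invented.
from itertools import product

TOPIC_OBJECTS = {"sales": ["chart", "handshake"], "design": ["palette"]}
TENANT_OBJECTS = {"acme": ["acme-logo", "acme-watermark"]}

def generate_backgrounds(metadata):
    first = TOPIC_OBJECTS.get(metadata["topic"], [])
    second = TENANT_OBJECTS.get(metadata["tenant"], [])
    # One candidate whiteboard background per (first, second) object pair.
    return [{"objects": (a, b)} for a, b in product(first, second)]

candidates = generate_backgrounds({"topic": "sales", "tenant": "acme"})
```

The host would then be shown a representation of each candidate and the selected one applied to the whiteboard canvas.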

Systems and methods for utilizing images to determine the position and orientation of a vehicle

Described are systems and methods to utilize images to determine the position and/or orientation of a vehicle (e.g., an autonomous ground vehicle) operating in an unstructured environment (e.g., environments such as sidewalks which are typically absent lane markings, road markings, etc.). The described systems and methods can determine the vehicle's position and orientation based on an alignment of annotated images captured during operation of the vehicle with a known annotated reference map. The translation and rotation applied to obtain alignment of the annotated images with the known annotated reference map can provide the position and the orientation of the vehicle.
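The core computation described above is a rigid alignment: the rotation and translation that map annotated image points onto the reference map directly give the vehicle's orientation and position. A standard 2-D Kabsch/Procrustes fit on hypothetical landmark correspondences illustrates this (the abstract does not specify the alignment algorithm):

```python
# Sketch: recover vehicle pose as the rigid transform aligning annotated
# image landmarks with the annotated reference map (2-D Kabsch algorithm).
import numpy as np

def align(image_pts, map_pts):
    """Return (R, t) such that map_pts ~ image_pts @ R.T + t."""
    ci, cm = image_pts.mean(0), map_pts.mean(0)
    H = (image_pts - ci).T @ (map_pts - cm)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = cm - R @ ci
    return R, t

# Hypothetical landmarks seen in the annotated image...
img = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
# ...and the same landmarks in the reference map: rotated 90 degrees, shifted.
theta = np.pi / 2
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
mp = img @ R_true.T + np.array([5.0, 3.0])

R, t = align(img, mp)
heading = np.arctan2(R[1, 0], R[0, 0])   # vehicle orientation
```

Here the recovered `t` is the vehicle's position offset and `heading` its orientation, exactly the quantities the abstract says the applied translation and rotation provide.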

Information processing apparatus, information processing method, and non-transitory computer readable medium

An information processing apparatus (10) supports work by a user who uses drawings for a plant. The information processing apparatus (10) includes a controller (15). The controller (15) is configured to convert a drawing including elements constituting the plant into an abstract model represented by element information indicating the elements and connection information indicating a connection relationship between the elements. When it is judged that a difference exists between one abstract model based on one drawing and another abstract model based on another drawing, the controller (15) is configured to generate display information for displaying the differing portion in a different form from the other portions.
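The comparison step can be sketched with sets: each drawing reduces to element information plus connection information, and the symmetric difference between two such models gives the portions to display differently. The graph contents below are hypothetical.

```python
# Sketch of the abstract-model diff described above; element and connection
# names are invented plant components.
def abstract_model(elements, connections):
    return {"elements": frozenset(elements),
            "connections": frozenset(frozenset(c) for c in connections)}

def diff(model_a, model_b):
    # The differing portions, to be displayed in a distinct form.
    return {
        "elements": model_a["elements"] ^ model_b["elements"],
        "connections": model_a["connections"] ^ model_b["connections"],
    }

m1 = abstract_model({"pump", "valve", "tank"},
                    [("pump", "valve"), ("valve", "tank")])
m2 = abstract_model({"pump", "valve", "tank", "sensor"},
                    [("pump", "valve"), ("valve", "sensor")])
delta = diff(m1, m2)
```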

Representative document hierarchy generation

In some aspects, a method includes performing optical character recognition (OCR) based on data corresponding to a document to generate text data, detecting one or more bounded regions from the data based on a predetermined boundary rule set, and matching one or more portions of the text data to the one or more bounded regions to generate matched text data. Each bounded region of the one or more bounded regions encloses a corresponding block of text. The method also includes extracting features from the matched text data to generate a plurality of feature vectors and providing the plurality of feature vectors to a trained machine-learning classifier to generate one or more labels associated with the one or more bounded regions. The method further includes outputting metadata indicating a hierarchical layout associated with the document based on the one or more labels and the matched text data.
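The matching and labeling steps can be sketched end to end: OCR'd text spans are attached to bounded regions by geometric containment, a classifier (replaced here by a trivial rule) labels each region, and the labels yield the hierarchical layout metadata. All boxes, text, and the rule itself are illustrative, not the claimed classifier.

```python
# Sketch of the pipeline: match text to bounded regions, label regions,
# and emit a simple hierarchy. Boxes are (x0, y0, x1, y1).
def contains(region, box):
    rx0, ry0, rx1, ry1 = region
    x0, y0, x1, y1 = box
    return rx0 <= x0 and ry0 <= y0 and x1 <= rx1 and y1 <= ry1

def match_text(regions, spans):
    # spans: (text, box); attach each span to the region enclosing it.
    return {r: [t for t, b in spans if contains(r, b)] for r in regions}

def classify(region, texts):
    # Stand-in for the trained machine-learning classifier in the abstract.
    return "heading" if texts and texts[0].isupper() else "body"

regions = [(0, 0, 100, 10), (0, 12, 100, 50)]
spans = [("INTRODUCTION", (2, 1, 60, 9)), ("Some text.", (2, 14, 90, 20))]
matched = match_text(regions, spans)
labels = {r: classify(r, ts) for r, ts in matched.items()}

# Hierarchy metadata: body regions nest under the preceding heading.
hierarchy = []
for r in regions:
    if labels[r] == "heading":
        hierarchy.append({"heading": matched[r][0], "children": []})
    elif hierarchy:
        hierarchy[-1]["children"].append(matched[r])
```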

Generating space models from map files

A map file includes two-dimensional or three-dimensional geometric data items collectively representing the layout of a building. The map file is parsed and the geometric data items are analyzed to identify building elements including rooms, floors, and objects of the building, and to identify containment relationships between the elements. A space model having a space graph is constructed. The space graph includes nodes that correspond to the respective building elements and links forming relationships between nodes that correspond to the identified containment relationships. Each node may include node metadata, rules or code that operate on the metadata, and a node type that corresponds to a type of physical space. Some nodes may include user representations or device representations that represent physical sensors associated therewith. The representations may receive data from the respectively represented sensors, and the sensor data becomes available via the space model.
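Space-graph construction can be sketched from geometric containment alone: each geometric item's smallest enclosing item becomes its parent, producing the "contains" links between floor, room, and object nodes. The item format and metadata fields below are assumptions for illustration.

```python
# Sketch of containment analysis over parsed geometric data items; items
# are axis-aligned rectangles (x0, y0, x1, y1) with an id and a type.
def inside(inner, outer):
    ix0, iy0, ix1, iy1 = inner["bounds"]
    ox0, oy0, ox1, oy1 = outer["bounds"]
    return ox0 <= ix0 and oy0 <= iy0 and ix1 <= ox1 and iy1 <= oy1

def area(item):
    x0, y0, x1, y1 = item["bounds"]
    return (x1 - x0) * (y1 - y0)

def build_space_graph(items):
    nodes = {it["id"]: {"type": it["type"], "metadata": {}} for it in items}
    links = []
    for child in items:
        # The smallest enclosing item is taken as the parent.
        parents = [o for o in items if o is not child and inside(child, o)]
        if parents:
            parent = min(parents, key=area)
            links.append((parent["id"], "contains", child["id"]))
    return nodes, links

items = [
    {"id": "floor1", "type": "floor", "bounds": (0, 0, 100, 100)},
    {"id": "room101", "type": "room", "bounds": (0, 0, 40, 40)},
    {"id": "desk", "type": "object", "bounds": (5, 5, 10, 10)},
]
nodes, links = build_space_graph(items)
```

Sensor and user representations would then be attached to the resulting nodes' metadata, making live sensor data reachable through the space model.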

Triage engine for document authentication

Computer systems and methods are provided for receiving a first authentication request that includes an image of an identification document. A risk value is determined using one or more information factors that correspond to the authentication request. A validation user interface that includes the image of the identification document is displayed. A risk category that corresponds to the risk value is determined using at least a first risk threshold. In accordance with a determination that the risk value corresponds to a first risk category, a visual indication that corresponds to the first risk category is displayed. In accordance with a determination that the risk value corresponds to a second risk category, a visual indication that corresponds to the second risk category is displayed.
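The triage decision can be sketched as a weighted score bucketed against a first risk threshold, with the resulting category driving the visual indication shown in the validation user interface. The factor weights, threshold, and badge colors below are hypothetical.

```python
# Minimal sketch of the triage logic in the abstract; weights and the
# threshold value are invented for illustration.
FACTOR_WEIGHTS = {"mismatched_address": 0.4, "expired_document": 0.3,
                  "prior_fraud_flag": 0.5}
FIRST_RISK_THRESHOLD = 0.5

def risk_value(factors):
    return min(1.0, sum(FACTOR_WEIGHTS.get(f, 0.0) for f in factors))

def risk_category(value):
    # The category selects which visual indication the UI displays.
    return "high" if value >= FIRST_RISK_THRESHOLD else "low"

def indication(request_factors):
    value = risk_value(request_factors)
    category = risk_category(value)
    return {"risk": value, "category": category,
            "badge": "red" if category == "high" else "green"}
```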

Method and an apparatus for predicting a future state of a biological system, a system and a computer program
20230011970 · 2023-01-12

An embodiment of a method 100 for predicting a future state of a biological system is provided. The method 100 comprises receiving 101 a microscope image depicting the biological system at an associated time and receiving 102 metadata corresponding to the microscope image. The method 100 further comprises extracting 103 features, having information on a state of the biological system, from the microscope image, and using 104 the features and the metadata to predict the future state of the biological system.
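The four steps (101-104) can be sketched concretely. The "image", the extracted features, and the growth model below are stand-ins; the embodiment does not prescribe a particular feature set or predictor.

```python
# Sketch of steps 101-104: extract state features from a (hypothetical,
# binarized) microscope frame and combine them with metadata, here a
# doubling time, to extrapolate the system's future state.
def extract_features(image):
    # Step 103: mean intensity and occupied fraction as crude confluence
    # proxies for, e.g., a cell culture.
    flat = [px for row in image for px in row]
    return {"mean": sum(flat) / len(flat),
            "occupied": sum(px > 0 for px in flat) / len(flat)}

def predict_future_state(image, metadata, horizon_h=24.0):
    # Step 104: use features plus metadata to extrapolate occupancy,
    # capped at full confluence.
    feats = extract_features(image)
    doublings = horizon_h / metadata["doubling_time_h"]
    return min(1.0, feats["occupied"] * 2 ** doublings)

image = [[0, 0, 1, 1],
         [0, 1, 1, 0],
         [0, 0, 0, 1]]   # hypothetical binarized microscope frame
future = predict_future_state(image, {"doubling_time_h": 12.0})
```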