G06K9/62

AUTOMATIC LOCALIZED EVALUATION OF CONTOURS WITH VISUAL FEEDBACK
20220414402 · 2022-12-29 ·

A localized evaluation network incorporates a discriminator acting as a classifier, which may be included within a generative adversarial network (GAN). The GAN may also include a generative network, such as a U-Net, for creating segmentations. The localized evaluation network is trained on image pairs comprising medical images of organs of interest and segmentation (mask) images, and learns to distinguish whether an image pair does or does not represent the ground truth. The evaluation network examines interior layers of the discriminator and evaluates how much each localized image region contributes to the final classification; the discriminator may identify regions of the image pair that contribute to a classification by analyzing layer weights of the machine learning model. Disclosed embodiments include a visual attribute, such as a heat map, that represents contributions of localized regions of a contour to an overall confidence score. These localized regions may be highlighted and reported for quality assurance review.
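
The heat-map idea can be illustrated with a minimal sketch: a CAM-style map that weights an interior discriminator layer's activations by the final classification-layer weights, then normalizes the result so high values mark the localized regions driving the real/fake decision. The function name, array shapes, and the 0.9 QA threshold are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

def contribution_heatmap(feature_maps, class_weights):
    """CAM-style localized contribution map (illustrative).

    feature_maps: (C, H, W) activations from an interior discriminator layer.
    class_weights: (C,) weights of the final classification layer.
    Returns an (H, W) map normalized to [0, 1], where high values mark
    regions that contributed most to the classification.
    """
    # Weighted sum of channel activations -> one spatial contribution map.
    cam = np.tensordot(class_weights, feature_maps, axes=([0], [0]))  # (H, W)
    cam = np.maximum(cam, 0.0)              # keep positive contributions only
    rng = cam.max() - cam.min()
    return (cam - cam.min()) / rng if rng > 0 else np.zeros_like(cam)

# Hypothetical activations for one image/mask pair.
gen = np.random.default_rng(0)
fmaps = gen.random((8, 16, 16))
w = gen.random(8)
heat = contribution_heatmap(fmaps, w)
flagged = np.argwhere(heat > 0.9)  # regions to surface for QA review
```

The flagged coordinates could then be rendered as a heat-map overlay on the contour for reviewer inspection.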

AUTOMATED COMPUTER SYSTEM AND METHOD OF ROAD NETWORK EXTRACTION FROM REMOTE SENSING IMAGES USING VEHICLE MOTION DETECTION TO SEED SPECTRAL CLASSIFICATION
20220414376 · 2022-12-29 ·

A fully automated, computer-implemented system and method generates a road network map with satisfactory classification accuracy from a remote sensing (RS) image by combining moving-vehicle detection with spectral classification to overcome the limitations of each. Moving-vehicle detections from an RS image are used as seeds to extract and characterize image-specific spectral roadway signatures from the same RS image. The RS image is then searched and the signatures matched against the scene to grow a road network map. The entire process can be performed using the radiance measurements of the scene, without the complicated geometric and atmospheric conversions, thus improving computational efficiency, the accuracy of moving-vehicle detection (location, speed, heading), and ultimately classification accuracy.
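
The seed-then-grow step can be sketched as follows, assuming a radiance cube and a list of vehicle-detection pixels; the spectral-angle matching and the tolerance value are illustrative choices, not the patent's specific classifier.

```python
import numpy as np

def grow_road_map(image, seed_pixels, tol=0.15):
    """Grow a road mask by matching a seed-derived spectral signature.

    image: (H, W, B) radiance cube (no atmospheric correction applied).
    seed_pixels: list of (row, col) moving-vehicle detections assumed to
    lie on roadway; they supply the image-specific spectral signature.
    """
    # Average the seed pixels' spectra into one roadway signature.
    sig = np.mean([image[r, c] for r, c in seed_pixels], axis=0)   # (B,)
    # Spectral-angle distance of every pixel to the signature.
    flat = image.reshape(-1, image.shape[-1])
    num = flat @ sig
    den = np.linalg.norm(flat, axis=1) * np.linalg.norm(sig) + 1e-12
    angle = np.arccos(np.clip(num / den, -1.0, 1.0))
    return (angle < tol).reshape(image.shape[:2])   # boolean road mask
```

In practice the grown mask would be post-processed (connectivity, thinning) into a road network map.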

CLASSIFICATION OF ORGAN OF INTEREST SHAPES FOR AUTOSEGMENTATION QUALITY ASSURANCE
20220414867 · 2022-12-29 ·

Embodiments described herein provide for receiving a second image comprising an overlay depicting organ-at-risk (OAR) segmentations. The overlay is generated by a first machine learning model based on a first image depicting the anatomical region of a current patient. A second machine learning model receives the second image and a set of third images depicting prior-patient OAR segmentations on which the second machine learning model was trained. The second machine learning model classifies the second image as one of a set of class names and characterizes the extent to which the second image is similar to, or dissimilar from, images with the same class name in the set of third images. The characterization may be based on outputs of internal layers of the second machine learning model. Dimensionality reduction may be performed on the outputs of the internal layers to present them in a form comprehensible to humans.
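
A minimal sketch of the internal-layer characterization, assuming the layer outputs are available as feature vectors: PCA (via SVD) reduces them to 2-D for human review, and distance to the same-class centroid quantifies similarity. Both function names and the centroid-distance measure are assumptions for illustration.

```python
import numpy as np

def reduce_2d(features):
    """Project high-dimensional internal-layer outputs to 2-D via PCA (SVD)."""
    centered = features - features.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:2].T   # (N, 2) points suitable for plotting

def similarity_to_class(new_feat, class_feats):
    """Distance of a new segmentation's embedding to the centroid of prior
    same-class segmentations; larger means more dissimilar (flag for QA)."""
    centroid = class_feats.mean(axis=0)
    return float(np.linalg.norm(new_feat - centroid))
```

A QA tool could plot the 2-D points and highlight the new segmentation against its class cluster.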

ZONE-BASED FEDERATED LEARNING
20220417108 · 2022-12-29 ·

A method for managing model updates by a first network device includes receiving, at the first network device associated with a first zone model of multiple zone models, a global model from a second network device associated with the global model. The method also includes transmitting, from the first network device, the global model to user equipment (UEs) in a first group of UEs associated with the first zone model, where a different group of UEs is associated with each of the multiple zone models. The method further includes receiving, at the first network device, weights associated with the global model from each UE in the first group. The method still further includes updating, at the first network device, the first zone model based on the received weights. The method also includes transmitting, from the first network device, the updated first zone model to each UE in the first group.
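
The zone-level aggregation step can be sketched as simple federated averaging of the weights returned by the zone's UE group; the dict-of-arrays model format and plain mean are assumptions, not the claimed aggregation rule.

```python
import numpy as np

def update_zone_model(ue_weight_sets):
    """Aggregate per-UE weights into an updated zone model (FedAvg-style).

    ue_weight_sets: list of dicts mapping layer name -> np.ndarray,
    one dict per UE in the zone's group.
    """
    names = ue_weight_sets[0].keys()
    return {n: np.mean([ue[n] for ue in ue_weight_sets], axis=0) for n in names}

# One round for a single zone: distribute the global model, collect the
# locally trained weights, average them, then redistribute the zone model.
global_model = {"w": np.zeros(3)}
ue_updates = [{"w": np.array([1.0, 2.0, 3.0])},
              {"w": np.array([3.0, 4.0, 5.0])}]
zone_model = update_zone_model(ue_updates)
```

The updated `zone_model` would then be transmitted back to each UE in the group.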

NETWORK FOR INTERACTED OBJECT LOCALIZATION
20220414371 · 2022-12-29 ·

A method for human-object interaction detection includes receiving an image. A set of features is extracted from multiple positions of the image. One or more human-object pairs may be predicted based on the extracted set of features. A human-object interaction may be determined based on a set of candidate interactions and the predicted human-object pairs.
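
The pairing-then-scoring flow can be sketched as follows; `score_fn` stands in for the network's interaction head and the 0.5 threshold is an arbitrary illustrative cutoff.

```python
def detect_interactions(humans, objects, candidate_interactions, score_fn):
    """Pair each detected human with each detected object and keep the
    best-scoring candidate interaction per pair, dropping weak pairs.

    score_fn(human, obj, interaction) -> confidence in [0, 1]; a stand-in
    for a learned interaction-scoring head.
    """
    results = []
    for h in humans:
        for o in objects:
            scored = [(score_fn(h, o, c), c) for c in candidate_interactions]
            best_score, best = max(scored)
            if best_score > 0.5:        # illustrative confidence cutoff
                results.append((h, o, best))
    return results
```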

TECHNIQUES FOR COMBINING OPERATIONS
20220414455 · 2022-12-29 ·

Apparatuses, systems, and techniques to combine operations. In at least one embodiment, a processor causes two or more operations in a graph to be combined based, at least in part, on another combination of two or more independent operations.
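
A toy illustration of combining operations in a graph, assuming a linear list of op names and a known fusible pattern (e.g. a multiply followed by an add fused into one op); real graph fusion operates on dataflow graphs, so this is only a sketch of the rewrite idea.

```python
def combine_operations(graph, pattern, combined_name):
    """Rewrite a linear operation sequence by fusing each adjacent run of
    ops matching `pattern` into a single combined op."""
    out, i = [], 0
    while i < len(graph):
        if graph[i:i + len(pattern)] == pattern:
            out.append(combined_name)   # emit the fused operation
            i += len(pattern)
        else:
            out.append(graph[i])
            i += 1
    return out
```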

IMAGE PROCESSING SYSTEM

The present invention discloses a system and method for image processing and for recognizing the scene of an image. The system utilizes a multi-mode scalable network and a regrouping pipeline. It is an AI-based system that uses a neural network and includes pre-processing, processing, and post-processing units. The system uses optical information recorded by the camera of a mobile device to extract and analyze the content of an image, such as a photo or video clip. Based on the retrieved information, a label is assigned that best describes the scene of the image.

APPARATUS AND METHOD FOR DETERMINING LANE CHANGE OF SURROUNDING OBJECTS
20220410942 · 2022-12-29 ·

A method for determining a lane change is performed by an apparatus for determining a lane change of an object located around a driving vehicle equipped with a sensor. The method includes detecting a plurality of objects located around the driving vehicle using scanning information obtained repeatedly, at every predetermined period of time, by the sensor scanning the surroundings of the driving vehicle; selecting, from among the plurality of objects, at least one candidate object estimated to be changing lanes based on previously detected lane edge information; and determining whether the candidate object changes lanes based on information on the movement of the candidate object.
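
The candidate-selection step can be sketched with per-scan lateral positions: an object whose distance to a previously detected lane edge is shrinking and already small becomes a lane-change candidate. The track format and the margin value are illustrative assumptions.

```python
def lane_change_candidates(tracks, lane_edge_x, margin=0.5):
    """Select objects whose lateral distance to a lane edge is shrinking
    across periodic scans -- candidates for a lane-change determination.

    tracks: dict of object id -> list of lateral positions, one entry per
    periodic sensor scan.
    lane_edge_x: lateral position of the previously detected lane edge.
    """
    candidates = []
    for obj_id, xs in tracks.items():
        d_first = abs(xs[0] - lane_edge_x)
        d_last = abs(xs[-1] - lane_edge_x)
        if d_last < d_first and d_last < margin:  # approaching and close
            candidates.append(obj_id)
    return candidates
```

A subsequent step would confirm the lane change from the candidate's continued motion across the edge.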

OBJECT IDENTIFICATION
20220413507 · 2022-12-29 ·

Object identification may be provided herein. A feature extractor may extract a first set of visual features and a second set of visual features; concatenate the first set of visual features, the second set of visual features, and a set of bounding box information; determine a number of object features and a global feature for a scene; and receive ego-vehicle feature information associated with an ego-vehicle. An object classifier may receive the object features, the global feature, and the ego-vehicle feature information; generate relational features representing relationships among the objects in the scene; and classify each of the objects in the scene based on the object features, the relational features, the global feature, the ego-vehicle feature information, and an intention of the ego-vehicle.
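
A minimal sketch of the relational-feature step and the per-object classification that consumes scene-level context; the pairwise-difference relational feature and the `classify_fn` stand-in are assumptions for illustration.

```python
import numpy as np

def relational_features(object_feats):
    """Pairwise relational features: one vector per ordered object pair,
    here simply the elementwise difference of the two feature vectors."""
    n = len(object_feats)
    return {(i, j): object_feats[i] - object_feats[j]
            for i in range(n) for j in range(n) if i != j}

def classify_objects(object_feats, global_feat, ego_feat, classify_fn):
    """Classify each object from its own features plus scene-level context
    (relational features, global feature, ego-vehicle feature)."""
    rel = relational_features(object_feats)
    return [classify_fn(f, rel, global_feat, ego_feat) for f in object_feats]
```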

INTELLIGENT CACHE MANAGEMENT FOR MOUNTED SNAPSHOTS BASED ON A BEHAVIOR MODEL

A client computing device receives a behavior model corresponding to a user group associated with a user. The behavior model has been trained on monitored user interactions with one or more files associated with the user group. The client computing device mounts a snapshot of a file and determines, based on the behavior model, which files of the mounted snapshot to transfer to a locally accessible cache. During use, the client computing device may determine whether the mounted snapshot is accessible. If the mounted snapshot is not accessible, the client computing device may selectively delete, based on the behavior model, one or more of the files stored in the locally accessible cache. If the mounted snapshot is accessible, the client computing device may update one or more files of the locally accessible cache based on monitored user interactions with the mounted snapshot.
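
The selective-eviction decision can be sketched as follows, assuming the behavior model reduces to a per-file predicted access likelihood; that representation and the keep-fraction policy are illustrative, not the patent's model.

```python
def manage_cache(cache, behavior_model, snapshot_accessible, keep_fraction=0.5):
    """Decide which cached snapshot files to keep.

    behavior_model: dict mapping file name -> predicted access likelihood,
    a hypothetical reduction of a model trained on the user group's
    monitored interactions.
    When the snapshot is unreachable, evict the least-likely-used files;
    when it is reachable, the whole cache can be refreshed from it.
    """
    if snapshot_accessible:
        return sorted(cache)                       # keep all; refresh later
    ranked = sorted(cache, key=lambda f: behavior_model.get(f, 0.0), reverse=True)
    keep = max(1, int(len(ranked) * keep_fraction))
    return sorted(ranked[:keep])                   # evict the unlikely tail
```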