G06V10/77

ANCHOR FOR LINE RECOGNITION
20230230394 · 2023-07-20 ·

A method for determining at least one anchor for anchor-based lane line and/or roadway marking recognition in a digital image representation, on the basis of sensor data obtained from at least one surroundings sensor of a system. The method includes at least the following steps: a) receiving a digital image representation; b) setting at least one row or one column of possible anchors in at least one area of the digital image representation, the row or column of possible anchors being situated at a distance from at least the upper and lower edges, or the left and right edges, of the area of the digital image representation.
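Step b) can be illustrated with a minimal sketch: a row of candidate anchors spaced evenly inside the area while keeping a margin from all four edges. The margin value and the even-spacing rule are assumptions; the abstract only requires a distance from the edges.

```python
def place_anchor_row(area_width, area_height, n_anchors, margin):
    """Place a horizontal row of possible anchors inside an image area,
    keeping every anchor at least `margin` pixels away from the
    upper/lower and left/right edges (even spacing is an assumption)."""
    y = area_height // 2                      # row sits between upper and lower edges
    usable = area_width - 2 * margin          # horizontal span available for anchors
    step = usable / (n_anchors - 1)
    return [(round(margin + i * step), y) for i in range(n_anchors)]

anchors = place_anchor_row(area_width=640, area_height=360, n_anchors=5, margin=40)
```

Each returned `(x, y)` pair is one possible anchor; a lane-line detector would then regress line offsets relative to these positions.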

METHODS AND SYSTEMS FOR IMAGE SELECTION
20230230356 · 2023-07-20 ·

Various methods and systems are provided for automatically classifying a plurality of image slices using body region bounding boxes identified from a localizer image. In one embodiment, a localizer image may be mapped to a plurality of bounding boxes, corresponding to a plurality of body regions, using a trained machine learning model. Coordinates of the plurality of bounding boxes may be used to determine body region boundaries, such that the body regions are non-intersecting and coherent. The body regions identified in the localizer image may then be correlated to image slice ranges, and image slices within each image slice range may be labeled as belonging to the corresponding body region.
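The boundary-determination step can be sketched as follows: given possibly overlapping bounding boxes ordered top-to-bottom, split at the midpoint of each overlap or gap so the resulting regions are non-intersecting and coherent, then label slices by the region containing them. The midpoint rule and the data shapes are assumptions, not from the abstract.

```python
def region_boundaries(boxes):
    """boxes: list of (name, top, bottom) in slice coordinates, ordered
    top-to-bottom, possibly overlapping. Returns non-intersecting
    (name, start, end) ranges split at overlap midpoints (assumed rule)."""
    regions = []
    for i, (name, top, bottom) in enumerate(boxes):
        start = top if i == 0 else (boxes[i - 1][2] + top) // 2
        end = bottom if i == len(boxes) - 1 else (bottom + boxes[i + 1][1]) // 2
        regions.append((name, start, end))
    return regions

def label_slices(regions, n_slices):
    """Assign each slice index the name of the region whose range contains it."""
    return [next((n for n, a, b in regions if a <= s < b), None)
            for s in range(n_slices)]
```

For boxes `("head", 0, 12)`, `("chest", 10, 25)`, `("abdomen", 23, 40)` the overlapping ranges become the disjoint regions 0-11, 11-24 and 24-40.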

AUTOMATICALLY CLASSIFYING ANIMAL BEHAVIOR

Systems and methods are disclosed to objectively identify sub-second behavioral modules in the three-dimensional (3D) video data that represents the motion of a subject. Defining behavioral modules based upon structure in the 3D video data itself—rather than using a priori definitions for what should constitute a measurable unit of action—identifies a previously-unexplored sub-second regularity that defines a timescale upon which behavior is organized, yields important information about the components and structure of behavior, offers insight into the nature of behavioral change in the subject, and enables objective discovery of subtle alterations in patterned action. The systems and methods of the invention can be applied to drug or gene therapy classification, drug or gene therapy screening, disease study including early detection of the onset of a disease, toxicology research, side-effect study, learning and memory process study, anxiety study, and analysis in consumer behavior.
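The core idea, segmenting continuous motion data into modules wherever the data's own structure changes, can be caricatured with a toy changepoint rule: start a new module whenever the frame-to-frame change of a 1-D pose summary exceeds a threshold. This is an illustrative stand-in only; the disclosed method discovers modules from structure in the full 3D video data, not from a fixed threshold.

```python
def split_into_modules(frames, change_threshold):
    """Segment a 1-D summary of 3D pose frames into behavioral modules:
    a new module begins whenever the frame-to-frame change exceeds the
    threshold (a toy stand-in for data-driven module discovery)."""
    modules = [[frames[0]]]
    for prev, cur in zip(frames, frames[1:]):
        if abs(cur - prev) > change_threshold:
            modules.append([cur])      # abrupt change: open a new module
        else:
            modules[-1].append(cur)    # smooth continuation of the module
    return modules
```

At typical 30 fps video, modules a few frames long correspond to the sub-second timescale the abstract describes.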

METHOD AND SYSTEM PERFORMING PATTERN CLUSTERING
20230230348 · 2023-07-20 ·

A method of clustering patterns of an integrated circuit includes: providing a pattern image and numeric data as input data corresponding to a first pattern to a first model, wherein the first model is trained with a plurality of sample images and a plurality of sample values; obtaining a content latent variable using the first model; and grouping a plurality of content latent variables corresponding to a plurality of patterns into a plurality of clusters based on a Euclidean distance, wherein the numeric data represents at least one attribute of the first pattern.
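The grouping step can be sketched with a simple greedy pass: each latent variable joins the first cluster whose running centroid lies within a Euclidean-distance threshold, otherwise it seeds a new cluster. The greedy rule and the threshold are assumptions standing in for whatever distance-based grouping the claimed method uses.

```python
import math

def euclid(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def cluster_latents(latents, threshold):
    """Greedy single-pass Euclidean clustering of content latent variables
    (an assumed stand-in for the claimed distance-based grouping)."""
    clusters, centroids = [], []
    for z in latents:
        for i, c in enumerate(centroids):
            if euclid(z, c) <= threshold:
                clusters[i].append(z)
                members = clusters[i]           # update the running centroid
                centroids[i] = tuple(sum(v) / len(members)
                                     for v in zip(*members))
                break
        else:
            clusters.append([z])
            centroids.append(z)
    return clusters
```

Two nearby latents and two distant ones therefore fall into two clusters.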

COMPUTER-READABLE RECORDING MEDIUM STORING INFORMATION PROCESSING PROGRAM, METHOD OF PROCESSING INFORMATION, AND INFORMATION PROCESSING APPARATUS
20230230357 · 2023-07-20 ·

A non-transitory computer-readable recording medium stores an information processing program for causing a computer to execute a process including: extracting a first feature from an image; detecting, from the extracted first feature, a plurality of visual entities included in the image; generating a second feature in which the visual entities in at least one combination of the plurality of detected visual entities are combined with each other in the first feature; generating, based on the first feature and the second feature, a first map that indicates the relation of each visual entity; extracting a fourth feature based on the first map and a third feature obtained by converting the first feature; and estimating the relation from the fourth feature.
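The pairwise-combination step can be illustrated minimally: for every ordered pair of detected entities, combine their feature vectors into a pair feature from which a relation could be estimated. Concatenation is an assumed stand-in for the learned combination the abstract describes.

```python
def pairwise_relation_map(entity_features):
    """Build a map from ordered entity-index pairs to combined pair
    features. Concatenation stands in for the learned combination;
    a relation estimator would then score each pair feature."""
    pairs = {}
    n = len(entity_features)
    for i in range(n):
        for j in range(n):
            if i != j:
                pairs[(i, j)] = entity_features[i] + entity_features[j]
    return pairs
```

With three detected entities this yields the six ordered pairs a relation estimator would score.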

ACTION RECOGNITION METHOD AND APPARATUS

An action recognition method and apparatus relate to artificial intelligence and include extracting a spatial feature of a to-be-processed picture, determining a virtual optical flow feature of the to-be-processed picture based on the spatial feature and X spatial features and X optical flow features in a preset feature library, where the X spatial features and the X optical flow features are in a one-to-one correspondence, determining a first type of confidence of the to-be-processed picture in different action categories based on similarities between the virtual optical flow feature and Y optical flow features, where each of the Y optical flow features in the preset feature library corresponds to one action category, X and Y are both integers greater than 1, and determining an action category of the to-be-processed picture based on the first type of confidence.
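The virtual-optical-flow step can be sketched as similarity-weighted averaging: weight each stored flow feature by how similar the picture's spatial feature is to the paired spatial feature in the library. Cosine similarity and the weighted average are assumed realizations, not stated in the abstract.

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def virtual_flow(spatial, bank_spatial, bank_flow):
    """Estimate a 'virtual' optical-flow feature for a single still image:
    each library flow feature is weighted by the (clipped) cosine
    similarity between the image's spatial feature and its paired
    spatial feature (similarity-weighted averaging is an assumption)."""
    weights = [max(cosine(spatial, s), 0.0) for s in bank_spatial]
    total = sum(weights) or 1.0
    dim = len(bank_flow[0])
    return tuple(sum(w * f[d] for w, f in zip(weights, bank_flow)) / total
                 for d in range(dim))
```

The same cosine similarity against the Y category-labelled flow features would then give the first type of confidence per action category.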

SAFETY BELT DETECTION METHOD, APPARATUS, COMPUTER DEVICE AND STORAGE MEDIUM
20230017759 · 2023-01-19 ·

A safety belt detection method, apparatus, computer device, and computer readable storage medium are disclosed. In the detection method, an image to be detected is obtained. The image to be detected is inputted into a detection network which includes an image classification branch network and an image segmentation branch network. A classification result, which indicates whether a driver is wearing a safety belt and is output from the image classification branch network, is obtained. A segmentation image, which indicates position information of the safety belt and is output from the image segmentation branch network, is obtained. A detection result of the safety belt, indicating whether the driver wears the safety belt normatively, is obtained based on the classification result and the segmentation image.
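The final fusion step can be sketched as a decision rule over the two branch outputs: the belt is judged worn normatively only if the classifier says worn and the segmentation mask shows a sufficiently large belt region. The pixel-count criterion is an assumption; the patent does not specify how the two outputs are combined.

```python
def seatbelt_verdict(is_wearing, belt_mask, min_pixels=50):
    """Fuse the two branch outputs: `is_wearing` from the classification
    branch, `belt_mask` (rows of 0/1) from the segmentation branch.
    The minimum-pixel-count criterion is an assumed fusion rule."""
    if not is_wearing:
        return "not_worn"
    pixels = sum(sum(row) for row in belt_mask)   # visible belt area
    return "worn_normatively" if pixels >= min_pixels else "worn_improperly"
```

A mask with too few belt pixels (e.g. a belt tucked behind the back) would thus flag improper wearing even when the classifier reports "worn".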

COMPUTER VISION-BASED SURGICAL WORKFLOW RECOGNITION SYSTEM USING NATURAL LANGUAGE PROCESSING TECHNIQUES
20230017202 · 2023-01-19 ·

Systems, methods, and instrumentalities are disclosed for computer vision-based surgical workflow recognition using natural language processing (NLP) techniques. Surgical video of surgical procedures may be processed and analyzed, for example, to achieve workflow recognition. Surgical phases may be determined based on the surgical video and segmented to generate an annotated video representation. The annotated video representation of the surgical video may provide information associated with the surgical procedure. For example, the annotated video representation may provide information on surgical phases, surgical events, surgical tool usage, and/or the like.
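The phase-segmentation step can be sketched as collapsing per-frame phase predictions into contiguous segments, which is the kind of annotated representation the abstract describes. The grouping rule and data shapes are assumptions.

```python
def segment_phases(frame_phases):
    """Collapse per-frame phase labels into contiguous
    (phase, start_frame, end_frame) segments for an annotated
    video representation (grouping rule assumed)."""
    segments = []
    for i, phase in enumerate(frame_phases):
        if segments and segments[-1][0] == phase:
            name, start, _ = segments[-1]
            segments[-1] = (name, start, i)    # extend the current segment
        else:
            segments.append((phase, i, i))     # a new phase begins
    return segments
```

Each segment could then carry further annotations, such as detected surgical events or tool usage within its frame range.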

ARRANGEMENT FOR PRODUCING HEAD RELATED TRANSFER FUNCTION FILTERS
20230222819 · 2023-07-13 ·

When three-dimensional audio is produced using headphones, particular head-related transfer function (HRTF) filters are used to modify the sound for the left and right channels of the headphones. As the morphology of every ear is different, it is beneficial to have HRTF filters designed particularly for the user of the headphones. Such filters may be produced by deriving ear geometry from a plurality of images taken with an ordinary camera, detecting the necessary features from the images, and fitting said features to a model that has been produced from accurately scanned ears and comprises representative values for different sizes and shapes. The captured images are sent to a server (52) that performs the necessary computations and submits the data further or produces the requested filter.
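The fitting step can be caricatured as nearest-neighbour matching: compare the ear-feature vector detected from the user's photos against a bank of entries derived from accurately scanned ears and pick the closest. Nearest-neighbour matching on a flat feature vector is an assumed stand-in for the actual model fitting.

```python
import math

def nearest_ear_model(measured, model_bank):
    """Match detected ear features to the closest entry in a bank built
    from accurately scanned ears. `model_bank` is a list of
    (model_name, feature_vector) pairs; nearest-neighbour matching
    is an assumed stand-in for the actual fitting procedure."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(model_bank, key=lambda item: dist(measured, item[1]))[0]
```

On the server side, the selected model would then determine which pre-computed or synthesized HRTF filter is returned to the user.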

VEHICLE INFORMATION PHOTO OVERLAY

An image information overlay system retrieves an image associated with a vehicle listing and uses machine learning models to classify the image, generating identification data that may comprise a vehicle make and model, a feature or part of the vehicle present in the image, and a location of the vehicle feature or part. The identification data or an individual identifier of the vehicle, such as a Vehicle Identification Number (VIN), may be used to retrieve overlay information related to the vehicle make and model, such as recalls or known maintenance issues or information specific to the vehicle, such as mileage, accident reports, or ownership history. The overlay information is displayed on the image as an overlay at the location of the vehicle feature or part corresponding to the overlay information.
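The final placement step can be sketched as joining the detected part locations with the retrieved per-part information. The field names and data shapes below are illustrative assumptions, not from any real listing API.

```python
def build_overlays(detections, info_by_part):
    """Attach overlay text to detected vehicle parts at their image
    locations. `detections` maps part name -> (x, y) image location;
    `info_by_part` maps part name -> info string (e.g. a recall note).
    Field names are illustrative, not from any real listing API."""
    return [{"part": part, "position": pos, "text": info_by_part[part]}
            for part, pos in detections.items() if part in info_by_part]
```

Parts with no associated information are simply skipped, so only relevant overlays are rendered on the listing image.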