G06V10/457

APPARATUS AND METHOD FOR IDENTIFYING CONDITION OF ANIMAL OBJECT BASED ON IMAGE
20230049090 · 2023-02-16

An image-based animal object condition identification apparatus includes: a communication module that receives an image of an object; a memory that stores therein a program configured to extract animal condition information from the received image; and a processor that executes the program. The program inputs the received image into an animal detection model, trained based on learning data composed of animal images, to extract continuous animal detection information for each object, and inputs that continuous animal detection information into an animal condition identification model to determine predetermined animal condition information for each class of each animal object.
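The two-stage pipeline described above can be sketched as follows. This is a minimal illustration, not the patented implementation: `detect` and `classify` stand in for the trained detection and condition models, and the per-object track accumulation is an assumed mechanism for building "continuous" detection information across frames.

```python
def identify_condition(frames, detect, classify):
    """Sketch of the described two-stage pipeline: a detection model
    produces per-object detections across frames, which are accumulated
    into continuous detection tracks; a condition model then maps each
    object's track to its condition information.
    `detect` and `classify` are assumed stand-ins for the trained models."""
    tracks = {}
    for frame in frames:
        # detect(frame) is assumed to return {object_id: detection_info}
        for obj_id, det in detect(frame).items():
            tracks.setdefault(obj_id, []).append(det)
    # classify each object's continuous detection information
    return {obj_id: classify(track) for obj_id, track in tracks.items()}
```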

System and method for ordered representation and feature extraction for point clouds obtained by detection and ranging sensor

A method is described that includes receiving a point cloud having a plurality of data points, each representing a 3D location in a 3D space, the point cloud being obtained using a detection and ranging (DAR) sensor. Each data point is associated with a 3D volume containing the 3D location of the data point, the 3D volume being defined using a 3D lattice that partitions the 3D space based on spherical coordinates. For at least one 3D volume, the data points within the 3D volume are sorted based on at least one dimension of the 3D lattice, and the sorted data points are stored as a set of ordered data points. The method also includes performing feature extraction on the set of ordered data points to generate a set of ordered feature vectors and providing the set of ordered feature vectors to a machine learning inference task.
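The binning-and-sorting step can be sketched as follows. The bin counts, the maximum range `r_max`, and the choice of the radial dimension as the sort key are illustrative assumptions, not values from the patent.

```python
import math

def order_point_cloud(points, r_bins=4, az_bins=8, el_bins=4, r_max=10.0):
    """Sketch of the described ordering: assign each (x, y, z) point to a
    3D volume of a spherical-coordinate lattice, then sort the points
    within each volume along one lattice dimension (here, range)."""
    volumes = {}
    for x, y, z in points:
        r = math.sqrt(x * x + y * y + z * z)
        az = math.atan2(y, x)                     # azimuth in [-pi, pi]
        el = math.atan2(z, math.hypot(x, y))      # elevation in [-pi/2, pi/2]
        ri = min(int(r / r_max * r_bins), r_bins - 1)
        ai = min(int((az + math.pi) / (2 * math.pi) * az_bins), az_bins - 1)
        ei = min(int((el + math.pi / 2) / math.pi * el_bins), el_bins - 1)
        volumes.setdefault((ri, ai, ei), []).append((x, y, z))
    # sort points within each volume by range, one dimension of the lattice
    for key in volumes:
        volumes[key].sort(key=lambda p: math.sqrt(p[0]**2 + p[1]**2 + p[2]**2))
    return volumes
```

Feature extraction would then run over each volume's ordered point list, yielding feature vectors in a deterministic order.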

Object detection device, method, and program

Even when an object to be detected is not visually salient in the images, and the input includes images containing regions that are not the object to be detected yet share a common appearance across the images, a region indicating the object to be detected is accurately detected. A local feature extraction unit 20 extracts a local feature at each feature point from each image included in an input image set. An image-pair common pattern extraction unit 30 extracts, from each image pair selected from the images in the image set, a common pattern constituted by a set of feature point pairs whose local features, extracted by the local feature extraction unit 20, are similar between the images of the pair, the feature point pairs being geometrically similar to each other. A region detection unit 50 detects, as a region indicating the object to be detected in each image of the image set, a region based on a common pattern that is prevalent throughout the image set, from among the common patterns extracted by the image-pair common pattern extraction unit 30.
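The core matching step of unit 30 can be sketched as pairing feature points whose local features are similar across the two images of a pair. This is a simplified illustration: the descriptors are scalar values for brevity, the geometric-consistency filtering is omitted, and the `tol` threshold is an assumption.

```python
def match_feature_points(feats_a, feats_b, tol=0.1):
    """Sketch of one step of the image-pair common pattern extraction:
    pair up feature points whose local features are similar between the
    two images. Each element of feats_a/feats_b is (point, descriptor);
    descriptors are scalars here for simplicity, and tol is an assumed
    similarity threshold. Geometric similarity checks are omitted."""
    matches = []
    for pa, da in feats_a:
        # nearest feature in the other image by descriptor distance
        best = min(feats_b, key=lambda fb: abs(da - fb[1]))
        if abs(da - best[1]) <= tol:
            matches.append((pa, best[0]))
    return matches
```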

Method for computation relating to clumps of virtual fibers
11593584 · 2023-02-28

A computer-implemented method for processing a set of virtual fibers into a set of clusters of virtual fibers, usable for manipulation on a cluster basis in a computer graphics generation system, may include determining aspects for virtual fibers in the set of virtual fibers, determining similarity scores between the virtual fibers based on their aspects, and determining an initial cluster comprising the virtual fibers of the set of virtual fibers. The method may further include instantiating a cluster list in at least one memory, adding the initial cluster to the cluster list, partitioning the initial cluster into a first subsequent cluster and a second subsequent cluster based on similarity scores among fibers in the initial cluster, adding the first subsequent cluster and the second subsequent cluster to the cluster list, and testing whether a number of clusters in the cluster list is below a predetermined threshold.
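The cluster-list loop described above can be sketched as follows. The splitting heuristic (seed the two subsequent clusters with the least similar pair, then assign each remaining fiber to the seed it is more similar to) is an illustrative assumption; `similarity(a, b)` is an assumed user-supplied score.

```python
def cluster_fibers(fibers, similarity, max_clusters):
    """Sketch of the described loop: instantiate a cluster list with one
    initial cluster holding all fibers, repeatedly partition a cluster in
    two based on pairwise similarity scores, and test whether the number
    of clusters is still below the predetermined threshold."""
    clusters = [list(fibers)]                 # cluster list with initial cluster
    while len(clusters) < max_clusters:       # threshold test
        # pick the largest cluster to partition
        idx = max(range(len(clusters)), key=lambda i: len(clusters[i]))
        cluster = clusters.pop(idx)
        if len(cluster) < 2:
            clusters.append(cluster)
            break
        # seed the two subsequent clusters with the least similar fiber pair
        a, b = min(
            ((x, y) for i, x in enumerate(cluster) for y in cluster[i + 1:]),
            key=lambda p: similarity(p[0], p[1]),
        )
        first, second = [a], [b]
        rest = list(cluster)
        rest.remove(a)
        rest.remove(b)
        for f in rest:
            (first if similarity(f, a) >= similarity(f, b) else second).append(f)
        clusters.extend([first, second])
    return clusters
```

With the resulting cluster list, a graphics system can manipulate fibers on a cluster basis rather than individually.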

SYSTEM AND METHOD FOR PARTIALLY OCCLUDED OBJECT DETECTION
20180005025 · 2018-01-04 ·

A method for partially occluded object detection includes obtaining a response map for a detection window of an input image, the response map based on a trained model and including a root layer and a parts layer. The method includes determining visibility flags for each root cell of the root layer and each part of the parts layer. The visibility flag is one of visible or occluded. The method includes determining an occlusion penalty for each root cell with a visibility flag of occluded and for each part with a visibility flag of occluded. The occlusion penalty is based on a location of the root cell or the part with respect to the detection window. The method determines a detection score for the detection window based on the visibility flags and the occlusion penalties and generates an estimated visibility map for object detection based on the detection score.
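The scoring step can be sketched as follows. This is a minimal illustration under assumptions: `cell_scores` maps each root cell or part location to its appearance score from the response map, and `occlusion_penalty(loc)` is an assumed function of the cell's location within the detection window.

```python
def detection_score(cell_scores, visibility, occlusion_penalty):
    """Sketch of the described scoring: each root cell or part contributes
    its appearance score when its visibility flag is 'visible', and a
    location-dependent occlusion penalty is applied when the flag is
    'occluded'. The penalty function is an assumed stand-in."""
    score = 0.0
    for loc, s in cell_scores.items():
        if visibility[loc] == "visible":
            score += s
        else:                                   # flag is "occluded"
            score -= occlusion_penalty(loc)
    return score
```

Thresholding this score over all detection windows, together with the visibility flags, yields the estimated visibility map for the detected object.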

DISEASE CHARACTERIZATION FROM FUSED PATHOLOGY AND RADIOLOGY DATA
20180012356 · 2018-01-11 ·

Methods and apparatus distinguish invasive adenocarcinoma (IA) from in situ adenocarcinoma (AIS). One example apparatus includes a set of circuits, and a data store that stores three dimensional (3D) radiological images of tissue demonstrating IA or AIS. The set of circuits includes a classification circuit that generates an invasiveness classification for a diagnostic 3D radiological image, a training circuit that trains the classification circuit to identify a texture feature associated with IA, an image acquisition circuit that acquires a diagnostic 3D radiological image of a region of tissue demonstrating cancerous pathology and that provides the diagnostic 3D radiological image to the classification circuit, and a prediction circuit that generates an invasiveness score based on the diagnostic 3D radiological image and the invasiveness classification. The training circuit trains the classification circuit using a set of 3D histological reconstructions combined with the set of 3D radiological images.

GEOLOGICALLY CONSTRAINED INFRARED IMAGING DETECTION METHOD AND SYSTEM FOR URBAN DEEPLY-BURIED STRIP-LIKE PASSAGE

Provided in the present invention are a geologically constrained infrared imaging detection method and system for an urban deeply-buried strip-like passage, pertaining to the intersecting fields of geophysics and remote sensing technology. The method includes: establishing an urban hierarchical three-dimensional temperature field model according to urban street DEM data and geological data corresponding to urban streets; acquiring urban stratum geological background heat flux according to the urban hierarchical three-dimensional temperature field model; using a total solar radiation energy distribution model to calculate urban surface total solar radiation energy; sequentially filtering out the urban surface total solar radiation energy and the urban stratum geological background heat flux from an infrared remote sensing image of a region corresponding to a strip-like underground target, to acquire a perturbation signal image of an urban street deeply-buried strip-like passage; and, after preprocessing the perturbation signal image, using a grayscale closing operation plus an edge detection algorithm to perform detection and positioning, to acquire location information of the urban strip-like underground passage. The present invention achieves inverse detection and positioning of an urban street deeply-buried strip-like passage.
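The central filtering step, subtracting the modeled solar and geological contributions from the infrared image to leave the perturbation signal, can be sketched per pixel. The same-shape 2D-list representation is an assumption for illustration.

```python
def perturbation_signal(ir_image, solar, background):
    """Sketch of the described filtering: subtract the urban surface total
    solar radiation energy and the stratum geological background heat flux
    from the infrared remote sensing image, leaving the perturbation
    signal attributable to the buried passage. All three inputs are
    assumed to be same-shape 2D lists of per-pixel values."""
    return [
        [ir - s - b for ir, s, b in zip(row_ir, row_s, row_b)]
        for row_ir, row_s, row_b in zip(ir_image, solar, background)
    ]
```

Morphological closing and edge detection would then run on this residual image to localize the passage.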

Contour-based detection of closely spaced objects

A system includes a sensor and a client. The client receives a set of frames of top-view depth images generated by the sensor. The client identifies a frame of the received frames in which a first contour associated with a first object is merged with a second contour associated with a second object. The client determines, at a first depth in the identified frame, a merged-contour region which is associated with the merged contours. The client detects a third contour at a second depth that is less than the first depth and determines a first region associated with the third contour. The client detects a fourth contour at the second depth and determines a second region associated with the fourth contour. If criteria are satisfied, the client associates the first region with a position of the first object and associates the second region with a position of the second object.
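The key observation, that contours merged at one depth can separate at a shallower depth, can be sketched with thresholded connected components. This is an illustration under assumptions: pixels with value at most `d` are treated as inside a contour at depth `d`, and connected components stand in for contour detection.

```python
def components(mask):
    """4-connected components of a boolean grid (list of lists),
    standing in for contour detection in this sketch."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    comps = []
    for i in range(h):
        for j in range(w):
            if mask[i][j] and not seen[i][j]:
                stack, comp = [(i, j)], []
                seen[i][j] = True
                while stack:
                    y, x = stack.pop()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                comps.append(comp)
    return comps

def split_merged(depth, first_depth, second_depth):
    """Sketch of the described idea: regions merged at a first depth may
    separate at a second, smaller depth (closer to the top-view sensor).
    Returns the regions found at each depth; the system's criteria checks
    on the candidate regions are omitted."""
    merged = components([[v <= first_depth for v in row] for row in depth])
    shallow = components([[v <= second_depth for v in row] for row in depth])
    return merged, shallow
```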

Shape-based graphics search
11704357 · 2023-07-18

Approaches are described for shape-based graphics search. Each graphics object of a set of graphics objects is analyzed. The analyzing includes determining an outline of the graphics object from graphics data that forms the graphics object. The outline of the graphics object is sampled resulting in sampled points that capture the outline of the graphics object. A shape descriptor of the graphics object is determined which captures local and global geometric properties of the sampled points. Search results of a search query are determined based on a comparison between a shape descriptor of a user identified graphics object and the shape descriptor of at least one graphics object of the set of graphics objects. At least one of the search results can be presented on a user device associated with the search query.
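The sampling-and-descriptor pipeline can be sketched as follows. This is a simplified illustration: the outline is a closed polygon, the descriptor is a scale-normalized sorted list of centroid distances (a global property only, whereas the patent's descriptor also captures local geometry), and the sample count is an assumption.

```python
import math

def sample_outline(polygon, n=32):
    """Uniformly sample n points along a closed polygonal outline by arc
    length, standing in for sampling the outline extracted from the
    graphics data that forms the object."""
    pts = polygon + [polygon[0]]
    seglens = [math.dist(pts[i], pts[i + 1]) for i in range(len(pts) - 1)]
    total = sum(seglens)
    samples = []
    for k in range(n):
        target = total * k / n
        acc, i = 0.0, 0
        while acc + seglens[i] < target:
            acc += seglens[i]
            i += 1
        t = (target - acc) / seglens[i]
        (x0, y0), (x1, y1) = pts[i], pts[i + 1]
        samples.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    return samples

def shape_descriptor(samples):
    """A simple descriptor: sorted centroid distances, normalized for
    scale. An illustrative stand-in for the patent's descriptor."""
    cx = sum(p[0] for p in samples) / len(samples)
    cy = sum(p[1] for p in samples) / len(samples)
    dists = sorted(math.dist(p, (cx, cy)) for p in samples)
    m = max(dists) or 1.0
    return [d / m for d in dists]

def descriptor_distance(d1, d2):
    """Euclidean distance between two descriptors, for ranking results."""
    return sum((a - b) ** 2 for a, b in zip(d1, d2)) ** 0.5
```

A search would compute the descriptor of the user-identified object and rank the set of graphics objects by `descriptor_distance`.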

METHOD AND SYSTEM FOR CONFIDENCE LEVEL DETECTION FROM EYE FEATURES

State-of-the-art techniques attempt to extract insights from eye features, specifically the pupil, with a focus on behavioral analysis rather than on confidence level detection. Embodiments of the present disclosure provide a method and system for confidence level detection from eye features using an ML-based approach. The method generates an overall confidence level label based on the subject's performance during an interaction, wherein the analyzed interaction is captured as a video sequence focusing on the subject's face. For each frame, facial features comprising an Eye-Aspect Ratio (EAR), a mouth movement, Horizontal Displacements (HDs), Vertical Displacements (VDs), Horizontal Squeezes (HSs) and Vertical Peaks (VPs) are computed, wherein the HDs, VDs, HSs and VPs are derived from points on the eyebrow with reference to the nose tip of the detected face. This is repeated for all frames in the window. A Bi-LSTM model is trained using the facial features to derive the confidence level of the subject.
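Two of the per-frame features can be sketched from facial landmarks. The EAR formula below is the commonly used six-landmark formulation for eye openness; the displacement helper is an assumed illustration of deriving eyebrow features with reference to the nose tip, not the patent's exact definition.

```python
import math

def eye_aspect_ratio(eye):
    """Eye-Aspect Ratio from six eye landmarks p1..p6 (the common
    formulation: vertical landmark distances over twice the horizontal
    distance), one of the per-frame features fed to the Bi-LSTM."""
    p1, p2, p3, p4, p5, p6 = eye
    return (math.dist(p2, p6) + math.dist(p3, p5)) / (2 * math.dist(p1, p4))

def eyebrow_displacements(brow_point, nose_tip):
    """Assumed sketch of the derived features: horizontal and vertical
    displacement of an eyebrow point with reference to the nose tip."""
    return brow_point[0] - nose_tip[0], brow_point[1] - nose_tip[1]
```

Stacking these features over all frames in a window yields the sequence on which the Bi-LSTM is trained.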