Patent classifications
G06V10/34
METHOD AND SYSTEM FOR DETECTING A TYPE OF SEAT OCCUPANCY
A computer-implemented method for detecting a type of seat occupancy comprises capturing, by means of an imaging device, an image of a seat, the image comprising depth data and intensity data, and performing, by means of a processor device, a classifier algorithm on the captured image to determine a level of occupancy. If the determination indicates that the level of occupancy is above a predetermined threshold, the method comprises processing, by means of the processor device, the depth data with a convolutional neural network to determine a type of occupation. If the determination indicates that the level of occupancy is below the predetermined threshold, the method comprises processing, by means of the processor device, the intensity data with a convolutional neural network to determine a type of occupation.
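The branching logic of the claim (depth data through a CNN above the threshold, intensity data below it) can be sketched as follows. This is an illustrative Python sketch, not the patented system: the classifier and both CNNs are replaced by trivial stand-in functions, and the threshold value and all function names are assumptions.

```python
def estimate_occupancy_level(intensity):
    """Stand-in for the classifier algorithm: fraction of bright pixels."""
    return sum(1 for p in intensity if p > 128) / len(intensity)

def classify_depth(depth):
    """Stand-in for the CNN operating on depth data."""
    return "adult" if sum(depth) / len(depth) > 1.0 else "object"

def classify_intensity(intensity):
    """Stand-in for the CNN operating on intensity data."""
    return "empty" if max(intensity) < 64 else "object"

def detect_occupancy_type(depth, intensity, threshold=0.5):
    """Route to the depth-data or intensity-data network depending on the
    determined level of occupancy, as in the claimed method."""
    level = estimate_occupancy_level(intensity)
    if level > threshold:
        return classify_depth(depth)      # high occupancy: use depth data
    return classify_intensity(intensity)  # low occupancy: use intensity data
```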
USER-GUIDED IMAGE SEGMENTATION METHODS AND PRODUCTS
A method for image segmentation includes (a) clustering, based upon k-means clustering, pixels of an image into first clusters, (b) outputting a cluster map of the first clusters, (c) re-clustering the pixels into a new plurality of non-disjoint pixel-clusters, and (d) classifying the non-disjoint pixel-clusters in categories, according to a user-indicated classification. Another method for image segmentation includes (a) forming a graph with each node of the graph corresponding to a first respective non-disjoint pixel-cluster of the image and connected to each terminal of the graph and to all other nodes corresponding to other respective non-disjoint pixel-clusters that, in the image, are within a neighborhood of the first respective non-disjoint pixel-cluster, (b) setting weights of connections of the graph according to a user-indicated classification in categories respectively associated with the terminals, and (c) segmenting the image into the categories by cutting the graph based upon the weights.
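Step (a) of the first method, k-means clustering of pixels with a per-pixel cluster map as output, can be illustrated in miniature. This is a generic 1-D k-means over pixel intensities, offered only as a sketch of the clustering step; the parameters (k, iteration count, seed) are assumptions, not values from the patent.

```python
import random

def kmeans_cluster_map(values, k=2, iters=20, seed=0):
    """Cluster pixel intensities into k clusters and return a cluster map,
    i.e. one cluster label per pixel (steps (a)-(b) of the first method)."""
    rng = random.Random(seed)
    centers = rng.sample(values, k)
    for _ in range(iters):
        # Assign each pixel to its nearest center.
        clusters = [[] for _ in range(k)]
        for v in values:
            clusters[min(range(k), key=lambda i: abs(v - centers[i]))].append(v)
        # Recompute centers; keep the old center if a cluster emptied out.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return [min(range(k), key=lambda i: abs(v - centers[i])) for v in values]
```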
MULTI-TASK DEEP LEARNING-BASED REAL-TIME MATTING METHOD FOR NON-GREEN-SCREEN PORTRAITS
A multi-task deep learning-based real-time matting method for non-green-screen portraits is provided. The method includes: performing binary classification adjustment on an original dataset, inputting an image or video containing portrait information, and performing preprocessing; constructing a deep learning network for person detection, extracting image features by using a deep residual neural network, and obtaining a region of interest (ROI) of the portrait foreground and a portrait trimap in the ROI through logistic regression; and constructing a portrait alpha mask matting deep learning network. An encoder sharing mechanism effectively accelerates the computation of the network. An alpha mask prediction result of the portrait foreground is output in an end-to-end manner to implement portrait matting. Green screens are not required during portrait matting. In addition, only the original images or videos need to be provided; manually annotated portrait trimaps are not required.
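The alpha mask the network predicts is used in the standard compositing relation I = alpha * F + (1 - alpha) * B (foreground F over background B). As an illustration of that relation only, not of the patented network, a minimal Python sketch:

```python
def composite(fg, bg, alpha):
    """Alpha compositing per pixel: I = alpha*F + (1-alpha)*B.
    A predicted alpha mask lets the foreground be re-composited
    over any new background."""
    return [a * f + (1 - a) * b for f, b, a in zip(fg, bg, alpha)]
```

For example, a half-transparent pixel (alpha 0.5) of a white foreground over a black background blends to mid-gray.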
IMAGE PROCESSING DEVICE, AND NON-TRANSITORY COMPUTER-READABLE MEDIUM
An image processing device includes a reception interface and a processor. The reception interface is configured to receive image data corresponding to an image in which a subject is captured. The processor is configured to perform, with respect to the image data, processing in which a skeleton model, in which a plurality of feature points corresponding to the four limbs are connected to a center feature point corresponding to a center of a human body, is applied to the subject.
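The skeleton model described above is star-shaped: limb feature points connected to a single center feature point. A simple geometric reading of "applying" such a model to detected feature points can be sketched as follows; the point names, coordinates, and the relative-vector interpretation are all hypothetical illustrations, not taken from the patent.

```python
# Star-shaped skeleton: each limb feature point connects to the center point.
SKELETON_EDGES = [("center", limb) for limb in
                  ("left_arm", "right_arm", "left_leg", "right_leg")]

def apply_skeleton(points):
    """Express each limb feature point as a vector relative to the
    center feature point, following the model's star topology."""
    cx, cy = points["center"]
    return {limb: (points[limb][0] - cx, points[limb][1] - cy)
            for _, limb in SKELETON_EDGES}
```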
Method for checking a static monitoring system
A system and method for inspecting a static monitoring installation installed in a traffic space. An evaluation circuit is able to create an image of the environment from a signal reflected from an object. At least one reference value of a reference image of the environment is stored in the evaluation circuit, the at least one reference value being formed from the signal reflected from at least one reference point.
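The check implied above, comparing currently measured reflections at the reference points against the stored reference values, can be sketched as a tolerance comparison. The 5% tolerance and the function name are assumptions for illustration, not values from the patent.

```python
def check_installation(measured, reference, tolerance=0.05):
    """Return True if every measured reflection at a reference point is
    within the relative tolerance of its stored reference value."""
    return all(abs(m - r) <= tolerance * abs(r)
               for m, r in zip(measured, reference))
```

A deviation beyond the tolerance at any reference point would indicate that the installation (e.g. the sensor's alignment or the scene) has changed and needs inspection.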
METHOD OF OPC MODELING
In a method of optical proximity correction (OPC) modeling, a resist image (RI) model is generated from an aerial image (AI) of a pattern. In an image profile of the RI model, the light intensity of any portion below a truncation level is replaced with the truncation level. The image profile is smoothed to remove sharp points. A Laplacian kernel is applied to the image profile to generate a contour image profile. Portions of the contour image profile with values below a given level are truncated. A radius-of-curvature kernel is applied to the contour image profile, and the reciprocal of the radius of curvature is applied to the RI model.