Patent classification: G06V10/84
Method of detecting wrinkles based on artificial neural network and apparatus therefor
According to various embodiments, a wrinkle detection service providing server for providing an artificial-intelligence-based wrinkle detection method may include: a data pre-processor for obtaining a skin image of a user from a skin measurement device and pre-processing the skin image based on feature points; a wrinkle detector for inputting the pre-processed skin image into an artificial neural network and generating a wrinkle probability map corresponding to the skin image; a data post-processor for post-processing the generated wrinkle probability map; and a wrinkle visualization service providing unit for superimposing the post-processed wrinkle probability map on the skin image and providing the resulting wrinkle visualization image to a user terminal of the user.
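The pipeline above (pre-process, predict a probability map, post-process, overlay) can be sketched as follows. This is a minimal illustration, not the patented method: the trained artificial neural network is replaced by a brightness-based stand-in, images are plain 2-D lists of grey values, and all function names are hypothetical.

```python
def preprocess(image, landmarks):
    """Crop the skin image to the bounding box of the facial feature points."""
    xs = [x for x, _ in landmarks]
    ys = [y for _, y in landmarks]
    return [row[min(xs):max(xs) + 1] for row in image[min(ys):max(ys) + 1]]

def wrinkle_probability_map(image):
    """Stand-in for the neural network: darker pixels -> higher probability."""
    return [[1.0 - px / 255.0 for px in row] for row in image]

def postprocess(prob_map, threshold=0.5):
    """Binarise the probability map, as a minimal post-processing step."""
    return [[1 if p >= threshold else 0 for p in row] for p_row in [prob_map] for row in p_row]

def visualize(image, mask):
    """Superimpose the mask on the image: mark wrinkle pixels as 0 (black)."""
    return [[0 if m else px for px, m in zip(irow, mrow)]
            for irow, mrow in zip(image, mask)]

image = [[200, 40, 210],
         [220, 30, 215],
         [205, 50, 225]]
landmarks = [(0, 0), (2, 2)]          # hypothetical feature points
cropped = preprocess(image, landmarks)
mask = postprocess(wrinkle_probability_map(cropped))
overlay = visualize(cropped, mask)
```

In the claimed system the probability map would come from the trained network and the overlay would be served to the user terminal; the threshold step here is only one plausible post-processing choice.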
Methods and systems for feature recognition of two-dimensional prints for manufacture
An apparatus for feature recognition of two-dimensional prints is illustrated. The apparatus comprises a processor and a memory communicatively connected to the processor. The memory contains instructions configuring the processor to receive a two-dimensional print of a part for manufacture, scale the two-dimensional print so that it fits within a predetermined area, identify a curve feature of the two-dimensional print as a function of the scaling, wherein the curve feature comprises a plurality of line segments, and classify a line type of the curve feature using line observations as a function of the curve feature identification.
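A rough sketch of the scale-then-classify steps described above, under simplifying assumptions: a curve feature is represented as a polyline of points, the "line observations" are the segment angles, and the names and tolerance are hypothetical rather than taken from the patent.

```python
import math

def scale_print(points, size=100.0):
    """Uniformly scale the print so its bounding box fits a size x size area."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    span = max(max(xs) - min(xs), max(ys) - min(ys)) or 1.0
    s = size / span
    return [((x - min(xs)) * s, (y - min(ys)) * s) for x, y in points]

def line_observations(points):
    """Angles of the successive line segments of a polyline curve feature."""
    return [math.atan2(y2 - y1, x2 - x1)
            for (x1, y1), (x2, y2) in zip(points, points[1:])]

def classify_line_type(points, tol=1e-6):
    """Classify the polyline as 'straight' or 'curved' from its observations."""
    angles = line_observations(points)
    return "straight" if max(angles) - min(angles) < tol else "curved"

straight_edge = [(0, 0), (1, 0), (2, 0)]
arc = [(0, 0), (1, 0.2), (2, 0.8)]
```

Scaling first means the classification tolerance can be fixed relative to the predetermined area rather than to the print's original units.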
VIDEO REPAIRING METHODS, APPARATUS, DEVICE, MEDIUM AND PRODUCTS
A video repairing method, apparatus, device, medium, and product are provided. The method includes: acquiring a to-be-repaired video frame sequence; determining a target category for each pixel in the to-be-repaired video frame sequence based on the frame sequence and a preset category detection model; determining, from the to-be-repaired video frame sequence, to-be-repaired pixels whose target category is the to-be-repaired category; and repairing the to-be-repaired areas corresponding to the to-be-repaired pixels to obtain a target video frame sequence.
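The per-frame flow (categorise every pixel, select the to-be-repaired ones, repair those areas) can be illustrated on a single frame. The category detection model is a made-up stand-in that flags saturated pixels, and neighbour averaging substitutes for whatever repair method the patent actually uses.

```python
TO_REPAIR = 1  # hypothetical category id for pixels needing repair

def detect_categories(frame):
    """Stand-in for the preset category detection model: flag saturated pixels."""
    return [[TO_REPAIR if px == 255 else 0 for px in row] for row in frame]

def repair(frame, categories):
    """Repair flagged pixels by averaging their valid, unflagged 4-neighbours."""
    h, w = len(frame), len(frame[0])
    out = [row[:] for row in frame]
    for y in range(h):
        for x in range(w):
            if categories[y][x] == TO_REPAIR:
                vals = [frame[ny][nx]
                        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                        if 0 <= ny < h and 0 <= nx < w
                        and categories[ny][nx] != TO_REPAIR]
                if vals:
                    out[y][x] = sum(vals) // len(vals)
    return out

frame = [[10, 255, 10],
         [10, 10, 10]]
repaired = repair(frame, detect_categories(frame))
```

Running the same two steps over every frame of the sequence yields the target video frame sequence.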
VEHICLE MOUNTED VIRTUAL VISOR SYSTEM WITH OPTIMIZED BLOCKER PATTERN
A virtual visor system is disclosed that includes a visor having a plurality of independently operable pixels that are selectively operated with a variable opacity. A camera captures images of the face of a driver or other passenger and, based on the captured images, a controller operates the visor to automatically and selectively darken a limited portion thereof to block the sun or other illumination source from striking the eyes of the driver, while leaving the remainder of the visor transparent. The virtual visor system advantageously updates the optical state with blocker patterns that include padding in excess of what is strictly necessary to block the sunlight. This padding provides robustness against errors, allows for a more relaxed response time, and minimizes frequent small changes to the position of the blocker in the optical state of the visor.
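The blocker-pattern-with-padding idea can be sketched as below, assuming the visor is a grid of opacity pixels and the controller already knows where the sun projects onto it (all parameter names are hypothetical). The padding margin is what lets the eyes drift slightly without the blocker having to move every frame.

```python
def blocker_pattern(rows, cols, sun_row, sun_col, radius=1, padding=1):
    """Opaque (1) pixels cover the sun's projected position plus a padding
    margin; every other pixel stays transparent (0)."""
    r = radius + padding
    return [[1 if abs(y - sun_row) <= r and abs(x - sun_col) <= r else 0
             for x in range(cols)]
            for y in range(rows)]

# Sun projected at the centre of a 5x5 visor; 1-pixel sun, 1-pixel padding.
pattern = blocker_pattern(rows=5, cols=5, sun_row=2, sun_col=2,
                          radius=0, padding=1)
```

With padding=1 a one-pixel sun is covered by a 3x3 opaque block, so small tracking errors or head movements stay inside the already-dark region instead of triggering an update.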
Neural network categorization accuracy with categorical graph neural networks
Neural network-based categorization can be improved by incorporating graph neural networks that operate on a graph representing the taxonomy of the categories into which a given input is to be categorized. The output of a graph neural network, operating on a graph representing the taxonomy of categories, can be combined with the output of a neural network operating upon the input to be categorized, such as through an interaction of multidimensional output data, for example a dot product of output vectors. In such a manner, information conveying the explicit relationships between categories, as defined by the taxonomy, can be incorporated into the categorization. To recapture information, incorporate new information, or reemphasize information, a second neural network can also operate upon the input to be categorized, with the output of such a second neural network being merged with the output of the interaction.
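The dot-product interaction between the two networks' outputs reduces to the following, assuming both networks have already been run: the category embeddings stand in for graph-neural-network outputs over the taxonomy, and the item embedding stands in for the input network's output. All vectors here are invented for illustration.

```python
def dot(u, v):
    """Dot-product interaction between two output vectors."""
    return sum(a * b for a, b in zip(u, v))

# Hypothetical graph-neural-network outputs, one embedding per taxonomy node.
# Siblings in the taxonomy ("laptop", "notebook") end up with similar vectors.
category_embeddings = {
    "laptop":   [0.9, 0.1, 0.0],
    "notebook": [0.8, 0.2, 0.1],
    "blender":  [0.0, 0.1, 0.9],
}
# Hypothetical output of the input-side neural network for one item.
item_embedding = [1.0, 0.0, 0.1]

scores = {c: dot(item_embedding, e) for c, e in category_embeddings.items()}
best = max(scores, key=scores.get)
```

Because taxonomy siblings receive nearby embeddings from the graph network, an item scoring highly for one sibling also scores reasonably for the other, which is exactly the relational information a flat classifier would lack.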
Generation and usage of semantic features for detection and correction of perception errors
Described is a system for detecting and correcting perception errors in a perception system. In operation, the system generates a list of detected objects from perception data of a scene, which allows for the generation of a list of background classes from backgrounds in the perception data associated with the list of detected objects. For each detected object in the list of detected objects, a closest background class is identified from the list of background classes. Vectors can then be used to determine a semantic feature, which is used to identify axioms. An optimal perception parameter is then generated, which is used to adjust perception parameters in the perception system to minimize perception errors.
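The closest-background-class and semantic-feature steps can be sketched with plain vectors, assuming each detected object and each background class already has an embedding; the vectors, names, and the choice of Euclidean distance are illustrative assumptions, not details from the patent.

```python
import math

def closest_background(obj_vec, background_vecs):
    """Pick the background class whose embedding is nearest the object's."""
    return min(background_vecs,
               key=lambda b: math.dist(obj_vec, background_vecs[b]))

def semantic_feature(obj_vec, bg_vec):
    """Object-background difference vector, later matched against axioms."""
    return [o - b for o, b in zip(obj_vec, bg_vec)]

# Hypothetical embeddings for one detected object and two background classes.
obj = [1.0, 0.0]
backgrounds = {"road": [0.9, 0.1], "sky": [0.0, 1.0]}
bg = closest_background(obj, backgrounds)
feature = semantic_feature(obj, backgrounds[bg])
```

A semantic feature built this way encodes how the object deviates from its most plausible context, which is the kind of signal an axiom such as "cars appear on roads, not in the sky" can test.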
SIGN LANGUAGE VIDEO SEGMENTATION METHOD BY GLOSS FOR SIGN LANGUAGE SENTENCE RECOGNITION, AND TRAINING METHOD THEREFOR
Provided are a method for segmenting a sign language video by gloss to recognize a sign language sentence, and a method for training it. According to an embodiment, the sign language video segmentation method receives an input sign language sentence video and segments it by gloss. Accordingly, a method is suggested for segmenting a sign language sentence video by gloss, analyzing various gloss sequences from a linguistic perspective, understanding meanings robustly despite various changes in sentences, and translating sign language into appropriate Korean sentences.
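The segmentation-by-gloss step can be sketched as splitting the frame sequence wherever a boundary detector fires. The boundary scores here are hypothetical; in the actual method they would come from the trained model rather than being supplied by hand.

```python
def segment_by_gloss(frames, boundary_scores, threshold=0.5):
    """Split a sign-language frame sequence at frames whose (hypothetical)
    gloss-boundary score exceeds the threshold, one span per gloss."""
    spans, start = [], 0
    for i, score in enumerate(boundary_scores):
        if score >= threshold and i > start:
            spans.append(frames[start:i])
            start = i
    spans.append(frames[start:])
    return spans

frames = list("abcdef")                      # stand-in for video frames
scores = [0.0, 0.0, 0.9, 0.0, 0.8, 0.0]     # hypothetical boundary scores
spans = segment_by_gloss(frames, scores)
```

Each resulting span would then be recognized as a single gloss, and the gloss sequence analyzed linguistically before translation into a Korean sentence.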
SYSTEM AND METHOD FOR AUTOMATED VIDEO SEGMENTATION OF AN INPUT VIDEO SIGNAL CAPTURING A TEAM SPORTING EVENT
There is provided a system and method for automated video segmentation of an input video signal, the input video signal capturing a playing surface of a team sporting event. The method includes: receiving the input video signal; determining player position masks from the input video signal; determining optic flow maps from the input video signal; determining visual cues using the optic flow maps and the player position masks; classifying temporal portions of the input video signal for game state using a trained hidden Markov model, the game state comprising either game in play or game not in play, the hidden Markov model receiving the visual cues as input features and being trained on training data comprising a plurality of visual cues for previously recorded video signals, each with labelled game states; and outputting the classified temporal portions.
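The hidden-Markov-model classification step can be sketched with standard Viterbi decoding, assuming the visual cues have been discretised to "motion_high" / "motion_low"; the transition and emission probabilities below are made-up stand-ins for the trained model's parameters.

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely state sequence for discrete observations (standard Viterbi)."""
    V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for o in obs[1:]:
        V.append({s: max(((V[-1][p][0] * trans_p[p][s] * emit_p[s][o], p)
                          for p in states), key=lambda t: t[0])
                  for s in states})
    # Backtrack from the most probable final state.
    last = max(states, key=lambda s: V[-1][s][0])
    path = [last]
    for col in reversed(V[1:]):
        path.append(col[path[-1]][1])
    return path[::-1]

states = ("in_play", "not_in_play")
start_p = {"in_play": 0.5, "not_in_play": 0.5}
# Sticky transitions: the game state rarely flips between frames.
trans_p = {"in_play":     {"in_play": 0.9, "not_in_play": 0.1},
           "not_in_play": {"in_play": 0.1, "not_in_play": 0.9}}
# Hypothetical emission model for one discretised visual cue.
emit_p = {"in_play":     {"motion_high": 0.8, "motion_low": 0.2},
          "not_in_play": {"motion_high": 0.2, "motion_low": 0.8}}

cues = ["motion_high", "motion_high", "motion_low", "motion_low"]
labels = viterbi(cues, states, start_p, trans_p, emit_p)
```

The sticky transition probabilities are what make the HMM smooth over noisy per-frame cues, so short motion blips do not flip the game state back and forth.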