Patent classifications
G06T7/11
APPARATUS AND METHOD FOR CLASSIFYING CLOTHING ATTRIBUTES BASED ON DEEP LEARNING
Disclosed herein are an apparatus and method for classifying clothing attributes based on deep learning. The apparatus includes memory for storing at least one program and a processor for executing the program, wherein the program includes a first classification unit for outputting a first classification result for one or more attributes of clothing worn by a person included in an input image, a mask generation unit for outputting a mask tensor in which multiple mask layers respectively corresponding to principal part regions obtained by segmenting a body of the person included in the input image are stacked, a second classification unit for outputting a second classification result for the one or more attributes of the clothing by applying the mask tensor, and a final classification unit for determining and outputting a final classification result for the input image based on the first classification result and the second classification result.
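The two-branch fusion this abstract describes can be sketched as follows. A first classifier scores clothing attributes from the whole image, a second classifier scores them after body-part masking, and a final unit fuses the two results. The function names and the weighted-average fusion rule are illustrative assumptions; the patent does not specify how the two results are combined.

```python
# Hedged sketch of the final classification unit: fuse per-attribute
# scores from the unmasked (first) and masked (second) branches.
# The 50/50 weighting is an assumption, not the patented rule.

def fuse_classifications(first_scores, second_scores, weight=0.5):
    """Combine per-attribute scores from the two branches.

    first_scores / second_scores: dict mapping attribute value -> score.
    Returns the attribute value with the highest fused score.
    """
    fused = {
        label: weight * first_scores[label] + (1 - weight) * second_scores[label]
        for label in first_scores
    }
    return max(fused, key=fused.get)

# Hypothetical scores for a "sleeve length" attribute from each branch.
first = {"short": 0.4, "long": 0.6}
second = {"short": 0.2, "long": 0.8}
print(fuse_classifications(first, second))  # "long" wins in both branches
```

In practice each branch would be a deep network head; the fusion shown here only illustrates how two per-attribute score vectors can be reduced to one final label.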
COMPUTER-IMPLEMENTED METHOD FOR PROVIDING AN OUTLINE OF A LESION IN DIGITAL BREAST TOMOSYNTHESIS
One or more example embodiments of the present invention relate to a computer-implemented method for providing an outline of a lesion in digital breast tomosynthesis. The method includes receiving input data, wherein the input data comprises a reconstructed tomosynthesis volume dataset based on projection recordings and a virtual target marker within a lesion in the tomosynthesis volume dataset; applying a trained function to at least a part of the tomosynthesis volume dataset to establish an outline enclosing the lesion, the part of the tomosynthesis volume dataset corresponding to a region surrounding the virtual target marker in the tomosynthesis volume dataset; and providing output data, wherein the output data is an outline of a two-dimensional area or a three-dimensional volume surrounding the target marker.
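The claimed pipeline can be sketched in three steps: crop a region of the reconstructed volume around the virtual target marker, apply a trained function to that region, and return an outline. The crop size, the volume layout (nested z/y/x lists), and the threshold-based stand-in for the trained function are all assumptions for illustration; the patent does not disclose the network itself.

```python
# Hedged sketch: crop around the marker, then outline with a stand-in
# "trained function" (a simple intensity threshold, an assumption).

def crop_region(volume, marker, half=1):
    """Extract a cube of side 2*half+1 centred on the marker (z, y, x)."""
    z, y, x = marker
    return [
        [row[x - half : x + half + 1] for row in plane[y - half : y + half + 1]]
        for plane in volume[z - half : z + half + 1]
    ]

def outline_lesion(region, threshold=0.5):
    """Stand-in for the trained function: voxels above threshold are
    treated as lesion and their coordinates returned as a crude outline."""
    return [
        (z, y, x)
        for z, plane in enumerate(region)
        for y, row in enumerate(plane)
        for x, v in enumerate(row)
        if v > threshold
    ]

# Toy 3x3x3 volume with one bright voxel at its centre.
vol = [[[0.0] * 3 for _ in range(3)] for _ in range(3)]
vol[1][1][1] = 0.9
region = crop_region(vol, marker=(1, 1, 1), half=1)
print(outline_lesion(region))  # [(1, 1, 1)]
```

Cropping before inference matches the claim that the trained function is applied only to the part of the volume surrounding the marker, not to the full dataset.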
MEDICAL IMAGE PROCESSING METHOD AND APPARATUS, DEVICE, STORAGE MEDIUM, AND PRODUCT
A computer device obtains a medical image set including a reference medical image and a target medical image. The device identifies a difference between the reference medical image and the target medical image to obtain a candidate non-lesion region in the target medical image. The device determines area size information of the candidate non-lesion region as candidate area size information. The device adjusts the candidate non-lesion region according to annotated area size information when the candidate area size information does not match the annotated area size information, so as to obtain a target non-lesion region in the target medical image.
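The flow above can be sketched as: diff the reference and target images to obtain a candidate region, measure its area, and adjust it toward the annotated area size when the two do not match. The difference threshold and the trimming policy used here are illustrative assumptions; the patent does not specify the adjustment rule.

```python
# Hedged sketch of candidate non-lesion region extraction and size-based
# adjustment. Images are nested lists of floats; the eps threshold and
# the "keep the first annotated_area pixels" policy are assumptions.

def candidate_region(reference, target, eps=0.1):
    """Pixels whose values differ by more than eps form the candidate."""
    return [
        (y, x)
        for y, (r_row, t_row) in enumerate(zip(reference, target))
        for x, (r, t) in enumerate(zip(r_row, t_row))
        if abs(r - t) > eps
    ]

def adjust_region(region, annotated_area):
    """Trim the candidate when its area exceeds the annotated area."""
    if len(region) <= annotated_area:
        return region
    return region[:annotated_area]  # placeholder adjustment policy

ref = [[0.0, 0.0], [0.0, 0.0]]
tgt = [[0.5, 0.0], [0.4, 0.3]]
cand = candidate_region(ref, tgt)           # three differing pixels
final = adjust_region(cand, annotated_area=2)
print(len(cand), len(final))                # 3 2
```

A real implementation would adjust by morphology or connected-component filtering rather than truncation; the sketch only shows where the annotated size information enters the flow.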
MODEL-BASED IMAGE SEGMENTATION
A method and system for mapping boundary detecting features of at least one source triangulated mesh of known topology to a target triangulated mesh of arbitrary topology. A region of interest in a volumetric image associated with each triangle of the target triangulated mesh is provided to a feature mapping network. The feature mapping network assigns a feature selection vector to each triangle of the target triangulated mesh. The associated region of interest and assigned feature selection vector for each triangle of the target triangulated mesh are provided to a boundary detection network. A predicted boundary based on features of the associated region of interest selected by the assigned feature selection vector is obtained from the boundary detection network.
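The per-triangle flow in this abstract can be sketched as: each target-mesh triangle has a feature vector from its associated region of interest, a feature selection vector (assigned by the feature mapping network) switches on the features the boundary detector may use, and the detector predicts a boundary from the selected features. The 0/1 selection mask and the thresholded-sum detector below are stand-in assumptions for the two networks.

```python
# Hedged sketch of feature selection and boundary detection for one
# triangle. Both functions are toy stand-ins for trained networks.

def select_features(roi_features, selection_vector):
    """Keep only the ROI features the selection vector switches on."""
    return [f * s for f, s in zip(roi_features, selection_vector)]

def detect_boundary(selected, threshold=1.0):
    """Stand-in boundary detector: fires when selected evidence is strong."""
    return sum(selected) > threshold

# One triangle: four ROI features, two of which the mapping network selected.
roi = [0.9, 0.3, 0.8, 0.1]
sel = [1, 0, 1, 0]
print(detect_boundary(select_features(roi, sel)))  # True (0.9 + 0.8 > 1.0)
```

The point of the selection vector is that the boundary detection network stays fixed while each triangle of an arbitrary-topology mesh picks the feature subset appropriate to it.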
METHOD, DEVICE, STORAGE MEDIUM, AND COMPUTER PROGRAM PRODUCT FOR DETECTING IMAGE FRAME LOSS
An image frame loss detection method is performed by a computer device, including: acquiring first coded data respectively corresponding to a plurality of first image frames and a color signal corresponding to at least one second image frame; obtaining second coded data corresponding to the at least one second image frame generated by a terminal device through image rendering of a color signal based on the coded data respectively corresponding to the plurality of first image frames; and comparing the first coded data respectively corresponding to the plurality of first image frames with the second coded data corresponding to the at least one second image frame to determine whether a frame loss occurs. The first coded data and the second coded data each include color-coded data respectively corresponding to M image blocks of a corresponding image frame, and each of the M image blocks has a color in the image frame.
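The comparison step can be sketched as: each frame carries M color-coded blocks encoding a frame identifier; decoding those blocks from the rendered (second) frames and comparing the recovered identifiers against the sent (first) coded data reveals dropped frames. The two-color (one bit per block) encoding below is an assumption; the patent only requires that each block carry a color.

```python
# Hedged sketch: encode a frame id into M binary colour blocks, decode
# captured blocks, and report ids that were sent but never seen.

def encode_frame_id(frame_id, m_blocks):
    """Encode an integer id as m_blocks binary colour blocks (MSB first)."""
    return [(frame_id >> (m_blocks - 1 - i)) & 1 for i in range(m_blocks)]

def decode_frame_id(blocks):
    """Inverse of encode_frame_id."""
    value = 0
    for bit in blocks:
        value = (value << 1) | bit
    return value

def find_lost_frames(sent_ids, captured_blocks):
    """Return ids that were sent but never decoded from a captured frame."""
    seen = {decode_frame_id(b) for b in captured_blocks}
    return [fid for fid in sent_ids if fid not in seen]

sent = [0, 1, 2, 3]
captured = [encode_frame_id(f, 4) for f in (0, 1, 3)]  # frame 2 dropped
print(find_lost_frames(sent, captured))  # [2]
```

Using block colors rather than pixel-exact comparison makes the check robust to rendering-side scaling and compression, which is presumably why the method codes M blocks per frame instead of hashing the whole image.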
VIRTUAL CONTENT EXPERIENCE SYSTEM AND CONTROL METHOD FOR SAME
Disclosed is a virtual content experience system. In the virtual content experience system, a central server for driving the system includes: a content conversion unit which converts two-dimensional image content, received by means of a data transmission and reception unit or input by a user, into a stereoscopic image; a motion information generation unit which recognizes text information extracted from the two-dimensional image content and converts the text information into motion information; a content playback control unit which is provided to transmit the motion information to a motion information management unit provided in a virtual reality experience chair, or to receive start information and end information about the motion information from the motion information management unit so as to generate and change control information for controlling whether to provide new two-dimensional image content; and a display unit for displaying the content converted by the content conversion unit, together with the motion information or control information.