Patent classifications
G16H30/40
Method for detecting image of object using convolutional neural network
The present application relates to a method for detecting an object image using a convolutional neural network. First, feature images are obtained by a convolution kernel; an object image to be detected is then positioned from the feature images by a default box and a bounding box. By comparison with sample images, the detected object image is classified as an esophageal cancer image or a non-esophageal cancer image. An input image from the image capturing device is thus examined by the convolutional neural network to judge whether it is an esophageal cancer image, helping the doctor interpret the detected object image.
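The abstract describes an SSD-style step in which default boxes are matched against a predicted bounding box to position the object. A minimal sketch of that matching step is below; the box format `(x1, y1, x2, y2)`, the IoU threshold, and the example boxes are illustrative assumptions, not the patent's actual parameters.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def position_object(default_boxes, predicted_box, iou_threshold=0.5):
    """Keep the default boxes that overlap the predicted bounding box
    strongly enough to be treated as containing the object."""
    return [d for d in default_boxes
            if iou(d, predicted_box) >= iou_threshold]

# Hypothetical default boxes on one feature map and one predicted box.
defaults = [(0, 0, 4, 4), (2, 2, 6, 6), (8, 8, 12, 12)]
matches = position_object(defaults, (1, 1, 5, 5), iou_threshold=0.3)
```

In a full detector the matched regions would then be passed to the classification head that labels them against the sample images.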
System for facilitating medical image interpretation
A system for facilitating medical image interpretation includes a processing unit and a display control unit. The processing unit includes a location information module generating a reference location indicator, and a feature marking module generating indication markers. The display control unit is in signal connection with the processing unit and a display device. The display control unit includes an image displaying module controlling the display device to display tissue images, and an auxiliary information displaying module controlling the display device to display, for each of the tissue images displayed by the display device, the reference location indicator and the indication markers together on the tissue image.
Method, system and computer readable medium for automatic segmentation of a 3D medical image
A method, a system and a computer readable medium for automatic segmentation of a 3D medical image, the 3D medical image comprising an object to be segmented, the method characterized by comprising: carrying out, by using a machine learning model, in at least two of a first, a second and a third orthogonal orientation, 2D segmentations for the object in slices of the 3D medical image to derive 2D segmentation data; determining a location of a bounding box (10) within the 3D medical image based on the 2D segmentation data, the bounding box (10) having predetermined dimensions; and carrying out a 3D segmentation for the object in the part of the 3D medical image corresponding to the bounding box (10).
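The localization step above can be sketched as follows: voxel coordinates covered by the 2D segmentation masks from the orthogonal orientations are pooled, their centroid gives the box center, and a box of predetermined dimensions is placed around it and clipped to the volume. The point-list mask representation and the clipping policy are assumptions for illustration, not the disclosed method.

```python
def centroid(points):
    """Mean coordinate of a list of equally-dimensioned points."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def place_bounding_box(center, dims, volume_shape):
    """Place a box of fixed `dims` centered on `center`, clipped so it stays
    inside a volume of shape `volume_shape`; returns (start, stop) indices."""
    start, stop = [], []
    for c, d, s in zip(center, dims, volume_shape):
        lo = min(max(int(round(c - d / 2)), 0), s - d)
        start.append(lo)
        stop.append(lo + d)
    return tuple(start), tuple(stop)

# Hypothetical voxels flagged by the 2D segmentations in two orientations.
voxels = [(10, 10, 10), (20, 20, 20)]
start, stop = place_bounding_box(centroid(voxels), (8, 8, 8), (64, 64, 64))
```

The final 3D segmentation would then run only on the sub-volume `start:stop`, which is what makes the two-stage scheme cheaper than segmenting the full volume in 3D.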
Oral information management system using smart toothbrush
Disclosed herein is an oral information management system using a smart toothbrush, the oral information management system including: a smart toothbrush configured to include a camera and at least one sensor; and a user terminal configured to acquire information collected from the smart toothbrush; wherein the user terminal determines the oral health state of a user based on the information collected from the smart toothbrush.
Luminance calibration system and method of mobile device display for medical images
A luminance calibration system and method of a mobile device display for medical images are provided, allowing the mobile device display to display medical images that comply with the grayscale standard display function (GSDF) defined by Digital Imaging and Communications in Medicine (DICOM) under any environmental light source; for example, the medical images displayed by the mobile device display can meet the Just-Noticeable Difference (JND) defined by DICOM to facilitate medical diagnosis by medical staff. In addition, the system and method adjust only the medical images inside the operating window of the mobile device display, while any image outside the operating window is preserved; as a result, the mobile device display can serve as both a medical image screen and a regular screen.
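The calibration target mentioned above is the DICOM GSDF, which maps a JND index j (1 to 1023) to a target luminance in cd/m². A hedged sketch of that curve is below; the polynomial form and coefficients are reproduced from DICOM PS3.14 from memory and should be verified against the standard before use.

```python
import math

# GSDF coefficients as published in DICOM PS3.14 (verify before relying on them).
_GSDF = dict(a=-1.3011877, b=-2.5840191e-2, c=8.0242636e-2, d=-1.0320229e-1,
             e=1.3646699e-1, f=2.8745620e-2, g=-2.5468404e-2, h=-3.1978977e-3,
             k=1.2992634e-4, m=1.3635334e-3)

def gsdf_luminance(j):
    """Target luminance (cd/m^2) for JND index j, 1 <= j <= 1023.
    log10 L is a rational polynomial in ln(j)."""
    g = _GSDF
    x = math.log(j)
    num = g['a'] + g['c']*x + g['e']*x**2 + g['g']*x**3 + g['m']*x**4
    den = (1 + g['b']*x + g['d']*x**2 + g['f']*x**3
           + g['h']*x**4 + g['k']*x**5)
    return 10 ** (num / den)
```

A calibration loop would measure the display (plus ambient reflection) at each gray level and adjust the lookup table inside the operating window until successive gray levels track this curve in equal JND steps.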
Electronic device for measuring skin condition of user and method for operating same
A first electronic device according to various embodiments may include: a display; a communication module comprising communication circuitry; a camera module including at least one camera; and a processor. The processor may be configured to: identify a request for measuring a skin condition of a user in a state in which the first electronic device is cradled on a second electronic device; acquire, based on information of the camera module and information regarding at least one light-emitting element included in the second electronic device, control information for controlling output of the at least one light-emitting element; control output of light from the at least one light-emitting element of the second electronic device based on the control information; acquire at least one image including at least a part of a body of the user through the camera module while light is output through the at least one light-emitting element controlled based on the control information; and provide information regarding the skin condition of the user using the at least one image.
Method and system for anatomical tree structure analysis
The present disclosure is directed to a computer-implemented method and system for anatomical tree structure analysis. The method includes receiving model inputs for a set of positions in an anatomical tree structure. The method further includes applying, by a processor, a learning network to the model inputs. The learning network comprises a set of encoders and a neural network modeling the anatomical tree structure, wherein each encoder provides features extracted from the model input at a corresponding position. The neural network has a plurality of nodes constructed according to the anatomical tree structure and each node is configured to process the extracted features from one or more of the encoders. The method additionally includes providing an output of the learning network as an analysis result of the anatomical tree structure analysis.
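The abstract describes per-position encoders feeding a neural network whose nodes mirror the anatomical tree. A minimal sketch of that tree-shaped aggregation is below: each node combines its own encoded feature with features propagated from its child branches. The mean-based encoder, the averaging combiner, and the dict-based tree are illustrative assumptions, not the disclosed architecture.

```python
def encode(position_input):
    """Stand-in encoder: reduces a raw per-position input to one feature
    (a real system would use a learned encoder network here)."""
    return sum(position_input) / len(position_input)

def tree_forward(node, children, features):
    """Recursively process the tree rooted at `node`: a leaf emits its own
    feature; an internal node blends its feature with its children's outputs."""
    child_out = [tree_forward(c, children, features)
                 for c in children.get(node, [])]
    own = features[node]
    if not child_out:
        return own
    return 0.5 * own + 0.5 * sum(child_out) / len(child_out)

# Hypothetical three-position tree (e.g. a vessel bifurcation) with raw inputs.
tree = {"root": ["left", "right"], "left": [], "right": []}
raw = {"root": [1.0, 3.0], "left": [2.0, 2.0], "right": [4.0, 4.0]}
feats = {n: encode(x) for n, x in raw.items()}
result = tree_forward("root", tree, feats)
```

Routing features along the tree in this way is what lets each node's analysis result depend on the connected branches rather than on its position in isolation.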