Patent classifications
G06T2207/30088
APPARATUS AND METHOD FOR SENSING AND ANALYZING SKIN CONDITION
A skin imaging and diagnostic method and apparatus comprising a frame configured to circumscribe a target tissue on the skin of a patient. An electro-optics unit of the apparatus comprises: an illuminator assembly comprising illuminating elements configured to provide illumination light on the target tissue; an imaging optics assembly; and an image sensor assembly comprising an image sensor, wherein the imaging optics assembly is configured to collect the illumination light backscattered from the target tissue and focus the collected light on the image sensor, and the image sensor is disposed to thereby sense an image of the target tissue. A controller is configured to activate the illuminating elements and to capture each image from the image sensor.
INFERRING USER POSE USING OPTICAL DATA
A tracking device monitors a portion of a user's skin to infer a pose or gesture made by a body part of a user that engages the portion of the user's skin as the pose or gesture is made. For example, the tracking device monitors a portion of skin on a user's forearm to infer a pose or gesture made by the user's hand. The tracking device may include an illumination source that illuminates the portion of the user's skin. An optical sensor of the tracking device may capture images of the illuminated portion of skin. A controller of the tracking device infers a pose or gesture of the body part based in part on a model (e.g., a machine-learned model) and the captured images. The model may map various configurations of the user's skin to different poses or gestures of the body part.
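The mapping from skin configurations to poses can be illustrated with a minimal sketch, substituting a nearest-neighbour lookup for the machine-learned model the abstract describes; the reference images, pose labels, and matching metric here are illustrative assumptions only.

```python
import numpy as np

def infer_pose(skin_image, reference_images, reference_poses):
    """Stand-in for the trained model: return the pose label of the
    labelled reference skin image closest to the captured frame."""
    flat = skin_image.ravel()
    # Euclidean distance in pixel space (illustrative metric).
    dists = [np.linalg.norm(flat - ref.ravel()) for ref in reference_images]
    return reference_poses[int(np.argmin(dists))]
```

A production system would replace this lookup with a model trained on many (skin image, pose) pairs, as the abstract indicates.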
Hyperspectral scanning to determine skin health
A system, method, and computer-readable media are provided for obtaining a first set of skin data from an image capture system, including at least one ultraviolet (UV) image of a user's skin; performing a correction on the skin data using a second set of skin data associated with the user; and quantifying a plurality of skin parameters of the user's skin based on the first skin data, including quantifying a bacterial load. The bacterial load is quantified by applying a brightness filter to isolate portions of the at least one UV image containing fluorescence, applying a dust filter, identifying portions of the at least one UV image that contain fluorescence due to bacteria, and determining a quantity of bacterial load in the user's skin. A machine learning model is used to determine an output associated with a normal skin state of the user and a current skin state of the user.
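The brightness-filter and dust-filter steps can be sketched on a normalised greyscale UV frame; the threshold value and the single-pixel dust heuristic are illustrative assumptions, not values from the patent.

```python
import numpy as np

def quantify_bacterial_load(uv_image, brightness_thresh=0.6):
    """Estimate bacterial load as the fluorescing fraction of a UV frame.
    `uv_image` is assumed to be a 2-D array of values in [0, 1]."""
    # Brightness filter: keep pixels bright enough to count as fluorescence.
    mask = uv_image > brightness_thresh
    # Crude dust filter: drop isolated bright pixels (no bright 4-neighbour),
    # since dust specks tend to be single-pixel artefacts.
    padded = np.pad(mask, 1, constant_values=False)
    neighbours = (padded[:-2, 1:-1] | padded[2:, 1:-1]
                  | padded[1:-1, :-2] | padded[1:-1, 2:])
    mask &= neighbours
    # Bacterial load reported as the fluorescing fraction of the frame.
    return mask.sum() / mask.size
```

A real pipeline would add the patent's correction step and a model distinguishing bacterial fluorescence from other fluorophores before quantifying.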
APPARATUS FOR, METHOD OF, AND COMPUTER PROGRAM PRODUCT HAVING PROGRAM OF DISPLAYING BIOLOGICAL INFORMATION
A biological information displaying apparatus according to an embodiment includes a picture obtaining apparatus and a processor. The picture obtaining apparatus obtains a picture signal of a predetermined site of a subject as a moving image. The processor generates a hue moving image by extracting a luminance or an image-based photoplethysmogram (iPPG) related to a pulse wave for each pixel of the moving image and assigning a hue in accordance with a value of luminance information or iPPG information. The processor displays the generated hue moving image such that the hue moving image is superimposed on an image of the subject.
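The hue-assignment step can be sketched as follows, assuming per-pixel iPPG or luminance values already normalised to [0, 1]; the blue-to-red hue range and the alpha-blend superimposition are illustrative choices, not specified in the abstract.

```python
import colorsys
import numpy as np

def hue_frame(ippg_values):
    """Map normalised per-pixel iPPG/luminance values to hues:
    0.0 -> blue (hue 2/3), 1.0 -> red (hue 0)."""
    h, w = ippg_values.shape
    rgb = np.empty((h, w, 3))
    for y in range(h):
        for x in range(w):
            hue = (1.0 - ippg_values[y, x]) * 2.0 / 3.0
            rgb[y, x] = colorsys.hsv_to_rgb(hue, 1.0, 1.0)
    return rgb

def superimpose(subject_rgb, hue_rgb, alpha=0.5):
    """Alpha-blend the generated hue image over the subject image."""
    return (1 - alpha) * subject_rgb + alpha * hue_rgb
```

Applied frame by frame, this yields the hue moving image the abstract describes, displayed over the moving image of the subject.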
METHOD OF AUTOMATICALLY RECOGNIZING WOUND BOUNDARY BASED ON ARTIFICIAL INTELLIGENCE AND METHOD OF GENERATING THREE-DIMENSIONAL WOUND MODEL
The present specification discloses a method capable of automatically recognizing an accurate wound boundary and a method of generating a 3D wound model based on the recognized boundary. The method of automatically recognizing a wound boundary according to the present specification is based on artificial intelligence and may include photographing several frames of the wound to be recognized with an RGB-D camera, separating measurement information in an image, amplifying image data for learning, and passing the amplified image data through an artificial neural network. The method may further include generating a three-dimensional (3D) model by performing boundary-recognition post-processing on the data passing through the artificial neural network to match a two-dimensional (2D) image with the 3D model.
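The data-amplification step can be sketched as standard augmentation of an (image, wound-mask) pair; the particular transforms (four rotations, each with a horizontal flip) are an illustrative assumption, not the patent's specified set.

```python
import numpy as np

def augment(image, mask):
    """Generate geometrically transformed copies of an (RGB-D image,
    wound mask) pair, applying identical transforms to both so the
    boundary labels stay aligned with the pixels."""
    out = []
    for k in range(4):  # four 90-degree rotations
        img_r = np.rot90(image, k)
        msk_r = np.rot90(mask, k)
        out.append((img_r, msk_r))
        out.append((np.fliplr(img_r), np.fliplr(msk_r)))
    return out
```

Eight labelled variants per captured frame then feed the artificial neural network training described above.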
CONNECTED BODY SURFACE CARE MODULE
A wearable treatment and analysis module is provided. The module is positioned on or near a body surface region of interest. The module provides remote access to sensor data, treatment administration, and/or other health care regimens via a network connection with a user device and/or management system.
MEDICAL IMAGE SEGMENTATION METHOD BASED ON U-NETWORK
A medical image segmentation method includes: 1) acquiring a medical image data set; 2) acquiring, from the medical image data set, an original image and a real segmentation image of a target region in the original image as a pair to serve as an input data set of a pre-built constant-scaling segmentation network, the input data set including a training set, a verification set, and a test set; 3) training the constant-scaling segmentation network by using the training set to obtain a trained segmentation network model, and verifying the constant-scaling segmentation network by using the verification set, the constant-scaling segmentation network including a feature extraction module and a resolution amplifying module; and 4) inputting the original image to be segmented into the segmentation network model for segmentation to obtain a real segmentation image.
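Step 2's partition of paired data into training, verification, and test sets can be sketched as follows; the 70/15/15 ratios and seeded shuffle are illustrative assumptions, not values from the patent.

```python
import random

def split_dataset(pairs, train=0.7, val=0.15, seed=0):
    """Split (original image, ground-truth segmentation) pairs into
    training, verification, and test sets for the segmentation network."""
    pairs = list(pairs)
    random.Random(seed).shuffle(pairs)  # deterministic, reproducible split
    n = len(pairs)
    n_train = int(n * train)
    n_val = int(n * val)
    return (pairs[:n_train],
            pairs[n_train:n_train + n_val],
            pairs[n_train + n_val:])
```

The training set then drives step 3's optimisation while the verification set guards against overfitting, leaving the test set untouched for final evaluation.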
Systems and methods for performing Gabor optical coherence tomographic angiography
Systems and methods are provided for performing optical coherence tomography angiography for the rapid generation of en face images. According to one example embodiment, differential interferograms obtained using a spectral domain or swept source optical coherence tomography system are convolved with a Gabor filter, where the Gabor filter is computed according to an estimated surface depth of the tissue surface. The Gabor-convolved differential interferogram is processed to produce an en face image without requiring a fast Fourier transform or k-space resampling. In another example embodiment, two interferograms are separately convolved with a Gabor filter, and the amplitudes of the Gabor-convolved interferograms are subtracted to generate a differential Gabor-convolved interferogram amplitude frame, which is then further processed to generate an en face image, again without a fast Fourier transform or k-space resampling. The example OCTA methods disclosed herein are shown to achieve faster data processing speeds compared to conventional OCTA algorithms.
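The core idea, extracting the fringe component at one depth with a single Gabor filter instead of a full FFT, can be sketched for one A-line; the kernel parameterisation (fringe frequency in cycles per sample standing in for the depth estimate, Gaussian width `sigma`) is an illustrative assumption.

```python
import numpy as np

def gabor_kernel(depth_freq, sigma, n):
    """Complex Gabor kernel tuned to the spectral-fringe frequency that
    corresponds to the estimated tissue surface depth (illustrative units:
    depth_freq in cycles/sample)."""
    k = np.arange(n) - n // 2
    return np.exp(-0.5 * (k / sigma) ** 2) * np.exp(2j * np.pi * depth_freq * k)

def en_face_pixel(interferogram, depth_freq, sigma=8.0):
    """Amplitude of the Gabor response for one A-line: a proxy for
    reflectance at the selected depth, obtained without an FFT or
    k-space resampling."""
    g = gabor_kernel(depth_freq, sigma, len(interferogram))
    return np.abs(np.sum(interferogram * np.conj(g)))
```

Evaluating this per A-line across a scan yields one en face pixel each; the filter responds strongly only when the interferogram's fringe frequency matches the depth the filter was tuned to.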
Facial Skin Detection Method and Apparatus
In a facial skin detection method performed by a terminal device having a front-facing camera, a first image including a face is obtained and feature information of pores in a target area of the face is extracted. The pores in the target area are classified into pore categories based on the feature information of the pores in the target area. Feature information of pores of each pore category is input into at least one pore detection model to obtain pore detection data of each pore category and a skin detection result of the face is determined based on pore detection data of the pore categories.
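The classification step can be sketched by bucketing pores on a single feature; the area-only feature and the thresholds are illustrative assumptions, since the abstract describes richer per-pore feature information.

```python
def classify_pores(pore_features, small_max=10.0, medium_max=25.0):
    """Bucket extracted pore features into categories. Each element of
    `pore_features` is assumed to be a dict with at least an 'area' key."""
    categories = {"small": [], "medium": [], "large": []}
    for feat in pore_features:
        if feat["area"] <= small_max:
            categories["small"].append(feat)
        elif feat["area"] <= medium_max:
            categories["medium"].append(feat)
        else:
            categories["large"].append(feat)
    return categories
```

Each category's features would then be fed to its corresponding pore detection model, and the per-category detection data combined into the face-level skin result.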
SYSTEM AND METHOD FOR EVALUATING EFFECTIVENESS OF A SKIN TREATMENT
A method for evaluating the effectiveness of a skin treatment on a skin feature includes correcting the color of a first image before the skin treatment, correcting the color of a second image after the skin treatment, determining the sizes of the skin feature in the first and second images, and comparing the corrected colors and sizes of the skin feature in the first and second images. In some embodiments, the first and second images include the skin feature and an indicator adjacent to the skin feature, where the indicator has a standard color and a known size.
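The role of the indicator can be sketched as follows: its standard colour gives a per-channel correction gain, and its known physical size converts pixel measurements to comparable units. The per-channel gain model and [0, 1] pixel range are illustrative assumptions.

```python
import numpy as np

def correct_color(image, indicator_rgb_measured, indicator_rgb_standard):
    """Normalise an image so the indicator patch matches its standard
    colour; a simple per-channel gain stands in for a full colour model."""
    gain = (np.asarray(indicator_rgb_standard, dtype=float)
            / np.asarray(indicator_rgb_measured, dtype=float))
    return np.clip(image * gain, 0.0, 1.0)

def size_change(size_before_px, size_after_px, px_per_mm_before, px_per_mm_after):
    """Compare feature sizes across images taken at different scales,
    using the indicator's known size to derive each image's px/mm."""
    before_mm = size_before_px / px_per_mm_before
    after_mm = size_after_px / px_per_mm_after
    return (after_mm - before_mm) / before_mm
```

Comparing the colour-corrected feature appearance and the scale-corrected size change before and after treatment then gives the effectiveness measure the abstract describes.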