Patent classifications
G06T7/62
UNDERWATER ORGANISM IMAGING AID SYSTEM, UNDERWATER ORGANISM IMAGING AID METHOD, AND STORAGE MEDIUM
An underwater organism imaging aid system according to an aspect of the present disclosure includes at least one memory configured to store instructions, and at least one processor configured to execute the instructions to: detect an underwater organism from an image acquired by a camera; determine a positional relationship between the detected underwater organism and the camera; and output, based on the positional relationship, auxiliary information for moving the camera in such a way that a side face of the underwater organism and an imaging face of the camera face each other.
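The abstract does not say how the auxiliary information is computed; a minimal sketch, assuming the upstream detector yields the organism's heading and the camera's bearing as compass angles (hypothetical inputs), is a signed rotation hint that turns the camera until its optical axis is perpendicular to the heading, i.e. facing the side face:

```python
def alignment_hint(organism_heading_deg, camera_bearing_deg):
    """Suggest a signed camera rotation (degrees) so the camera's imaging
    face views the organism's side face.

    organism_heading_deg: direction the organism faces, in degrees.
    camera_bearing_deg:   direction the camera's optical axis points.
    Both inputs are illustrative; the patented system may derive the
    relationship differently (and the side face could equally lie at
    heading - 90 degrees).
    """
    # The side face is seen when the optical axis is at 90 deg to the heading.
    target = (organism_heading_deg + 90.0) % 360.0
    # Smallest signed rotation from the current bearing to the target.
    return (target - camera_bearing_deg + 180.0) % 360.0 - 180.0
```

A positive result means "rotate counter-clockwise"; zero means the camera already faces the side.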
SYSTEMS AND METHODS FOR PREDICTING INDIVIDUAL PATIENT RESPONSE TO RADIOTHERAPY USING A DYNAMIC CARRYING CAPACITY MODEL
Systems and methods for predicting the outcome of radiation therapy are described herein. An example method includes receiving respective values for the tumor volume of a target patient's tumor at first and second time points, and calculating the change in tumor volume between the first and second time points. The method also includes estimating a patient-specific carrying capacity based on a logistic growth model and the change in tumor volume. Additionally, the method includes predicting the volume of the target patient's tumor at a future time point during radiation treatment based, at least in part, on a historical carrying-capacity reduction fraction distribution and the patient-specific carrying capacity. The method further includes predicting a patient-specific outcome of radiation therapy for the target patient based, at least in part, on the predicted volume of the target patient's tumor at the future time point.
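The carrying-capacity estimate can be illustrated with the standard logistic growth law dV/dt = r·V·(1 - V/K). A minimal sketch, assuming the growth rate r is known (e.g. from historical data) and approximating the derivative with a finite difference between the two measurements; the actual fitting procedure in the patent may differ:

```python
def estimate_carrying_capacity(v1, v2, t1, t2, growth_rate):
    """Estimate the patient-specific carrying capacity K from two tumor
    volume measurements (v1 at t1, v2 at t2), assuming logistic growth
    dV/dt = r*V*(1 - V/K).

    Finite-difference approximation: (v2 - v1)/(t2 - t1) ~ r*v1*(1 - v1/K),
    solved for K. `growth_rate` (r) is an assumed, externally supplied value.
    """
    dv_dt = (v2 - v1) / (t2 - t1)
    frac = dv_dt / (growth_rate * v1)   # equals 1 - v1/K under the model
    if frac >= 1.0:
        raise ValueError("observed growth exceeds exponential rate; K undefined")
    return v1 / (1.0 - frac)
```

For example, a tumor of volume 10 growing to 10.75 over one time unit with r = 0.1 implies K = 40, since the observed rate is 75% of the unconstrained exponential rate.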
LEARNING DATA GENERATION DEVICE AND DEFECT IDENTIFICATION SYSTEM
A learning data generation device generates learning data suitable for training an identification model. The learning data generation device has a function of cutting out part of first image data as second image data; a function of generating a two-dimensional graphic that corresponds to the area of the second image data and represents a pseudo defect; a function of generating third image data by combining the second image data and the two-dimensional graphic; and a function of assigning a label corresponding to the two-dimensional graphic to the third image data. By using the third image data to train the identification model, a highly accurate identification model can be generated.
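The cut-combine-label pipeline can be sketched in a few lines. This is a toy version in which the "two-dimensional graphic" is a dark rectangle stamped into the cropped patch; the real device presumably supports richer defect shapes, and all names here are illustrative:

```python
import random

def make_pseudo_defect_sample(first_image, crop, rng=None):
    """Cut `crop` = (row, col, h, w) out of `first_image` (a list of lists
    of grayscale values), stamp a rectangular pseudo defect into the
    cut-out, and return (third_image, label)."""
    rng = rng or random.Random(0)
    r0, c0, h, w = crop
    # Second image data: the cropped patch (row slices copy the data).
    patch = [row[c0:c0 + w] for row in first_image[r0:r0 + h]]
    # Two-dimensional graphic sized to the patch area: a dark rectangle.
    dr, dc = rng.randrange(h // 2), rng.randrange(w // 2)
    dh, dw = max(1, h // 4), max(1, w // 4)
    for r in range(dr, min(dr + dh, h)):
        for c in range(dc, min(dc + dw, w)):
            patch[r][c] = 0          # combine the graphic with the patch
    # Label corresponding to the graphic: its type and placement.
    label = {"defect": "rectangle", "bbox": (dr, dc, dh, dw)}
    return patch, label
```

Each call yields one labeled third-image sample; repeating with varied crops and shapes would build a training set for the identification model.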
PROGRAM, INFORMATION PROCESSING METHOD, METHOD FOR GENERATING LEARNING MODEL, METHOD FOR RELEARNING LEARNING MODEL, AND INFORMATION PROCESSING SYSTEM
A program and the like that make a catheter system relatively easy to use. The program comprises a non-transitory computer-readable medium (CRM) storing computer program code that, when executed by a computer processor, performs a process comprising: acquiring a tomographic image generated using a diagnostic imaging catheter inserted into a lumen organ; inputting the acquired tomographic image to a first model configured, when a tomographic image is input, to output the types of a plurality of objects included in the tomographic image and the ranges of the respective objects in association with each other; and outputting the types and ranges of the objects output from the first model.
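The key structural point is that the first model returns object types paired with their ranges. A minimal sketch, with the model abstracted as a callable and the "range" represented as a bounding box (both assumptions; the patent leaves the model and range representation open):

```python
from typing import Callable, List, Tuple

# (object type, range as a bounding box) -- an assumed representation.
Detection = Tuple[str, Tuple[int, int, int, int]]

def analyze_tomogram(image, first_model: Callable[[object], List[Detection]]):
    """Feed a tomographic image to the 'first model' and emit each
    object's type together with its range, keeping the pairing intact.
    The model itself is assumed to be trained elsewhere."""
    detections = first_model(image)
    return [{"type": obj_type, "range": bbox} for obj_type, bbox in detections]
```

Any segmentation or detection network satisfying the callable signature could be plugged in as `first_model`.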
IMAGE PROCESSING METHOD AND SENSOR DEVICE
Object detection is performed on an image acquired by imaging using an array sensor in which a plurality of imaging elements are arranged one-dimensionally or two-dimensionally. Some of the imaging elements are configured as color-filter-disposed pixels, in which a color filter is disposed in the incident optical path, and these pixels form color information acquisition points. Coloring processing is then performed in the pixel range of a detected object by referring to the color information acquired at the color information acquisition points located inside that pixel range.
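One plausible reading of the claimed coloring processing is a nearest-neighbour fill from the sparse acquisition points inside the detected object's range. A minimal sketch under that assumption (the patent does not specify the interpolation scheme):

```python
def colorize_region(bbox, color_points):
    """Assign each pixel of a detected object's bounding box the color of
    the nearest color-information acquisition point inside the box.

    bbox:         (r0, c0, r1, c1), half-open pixel range of the object.
    color_points: dict mapping (row, col) -> (r, g, b) at acquisition points.
    Returns a dict (row, col) -> color, or None if no acquisition point
    falls inside the object's range.
    """
    r0, c0, r1, c1 = bbox
    inside = {p: col for p, col in color_points.items()
              if r0 <= p[0] < r1 and c0 <= p[1] < c1}
    if not inside:
        return None
    out = {}
    for r in range(r0, r1):
        for c in range(c0, c1):
            # Nearest acquisition point by squared Euclidean distance.
            nearest = min(inside, key=lambda p: (p[0] - r) ** 2 + (p[1] - c) ** 2)
            out[(r, c)] = inside[nearest]
    return out
```

A smoother result could be had by distance-weighted blending of several acquisition points, at the cost of more computation per pixel.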
METHOD FOR AUTOMATICALLY RECONSTITUTING THE REINFORCING ARCHITECTURE OF A COMPOSITE MATERIAL
A method for automatically reconstituting the architecture, along a reinforcing axis, of the reinforcement of a composite material includes: acquiring images of the reinforcement of the composite material, each image being acquired along a section plane perpendicular to the reinforcing axis; for each acquired image, detecting, using a neural network, the barycentre and/or the circumference of each section of a reinforcing thread; for at least one acquired reference image, assigning a tag corresponding to a reinforcing thread to each detected barycentre or circumference; for each other acquired image, assigning, to each detected barycentre and/or circumference, the tag of the corresponding barycentre in the reference image; and reconstituting the architecture of each reinforcing thread from each detected barycentre and/or circumference carrying the thread's tag, together with the position on the reinforcing axis associated with the image in which the barycentre and/or circumference was detected.
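The tag-assignment step can be sketched as nearest-barycentre matching from the tagged reference slice to every other slice. This is a simplification, assuming threads deviate little between sections; the patent's actual correspondence criterion may be more elaborate:

```python
def propagate_tags(reference, others):
    """Propagate reinforcing-thread tags from a tagged reference slice to
    the other slices by nearest-barycentre matching.

    reference: dict tag -> (x, y) barycentre in the reference image.
    others:    list of slices, each a list of (x, y) detected barycentres.
    Returns one dict per slice mapping each barycentre to the tag of the
    closest reference barycentre.
    """
    tagged_slices = []
    for barycentres in others:
        tags = {}
        for b in barycentres:
            # Closest reference barycentre by squared Euclidean distance.
            tag = min(reference,
                      key=lambda t: (reference[t][0] - b[0]) ** 2
                                  + (reference[t][1] - b[1]) ** 2)
            tags[b] = tag
        tagged_slices.append(tags)
    return tagged_slices
```

Stacking the tagged barycentres with each slice's position along the reinforcing axis then yields each thread's 3D trajectory, i.e. the reconstituted architecture.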