
METHOD OF CAPTURING AND RECONSTRUCTING COURT LINES
20170337714 · 2017-11-23 ·

A method of extracting and reconstructing court lines includes the steps of binarizing a court image of a court including court lines to form a binary image; performing horizontal projection on the binary image; searching for plural corners in the binary image and defining a court line range by the corners; forming plural linear segments from images within the court line range by linear transformation; defining at least one first cluster and at least one second cluster according to the characteristics of the linear segments and categorizing the linear segments into plural groups; taking an average of each group as a standard court line and creating a linear equation of the standard court line to locate the point of intersection of the standard court lines; and reconstructing the court lines according to the point of intersection. This method can quickly extract the court-line portions of a dynamic or static image containing court lines, eliminating interference from noise outside the court lines, such as background color, ambient brightness, people, or advertisements, and can reconstruct the court lines quickly and accurately to facilitate determining a court-line boundary or computing related data.
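The grouping, averaging, and intersection steps can be sketched as below. The (theta, rho) line representation, the orientation threshold, and the two-cluster split are illustrative assumptions, not the patented procedure itself:

```python
import numpy as np

# Hypothetical sketch: each detected segment is represented in Hough form
# (theta, rho); segments are split into two clusters by orientation, each
# cluster is averaged into one "standard" court line, and the intersection
# of the two averaged lines is solved from their linear equations.
def cluster_and_intersect(segments):
    segments = np.asarray(segments, dtype=float)  # rows of (theta_rad, rho)
    # First cluster: near-horizontal lines; second: near-vertical ones.
    horizontal = segments[np.abs(np.sin(segments[:, 0])) > 0.5]
    vertical = segments[np.abs(np.sin(segments[:, 0])) <= 0.5]
    lines = []
    for group in (horizontal, vertical):
        theta, rho = group.mean(axis=0)  # average of each group
        # x*cos(theta) + y*sin(theta) = rho  ->  a*x + b*y = c
        lines.append((np.cos(theta), np.sin(theta), rho))
    (a1, b1, c1), (a2, b2, c2) = lines
    # Solve the 2x2 linear system for the intersection point.
    return np.linalg.solve([[a1, b1], [a2, b2]], [c1, c2])

# A horizontal baseline y = 100 (theta = 90 deg, rho = 100) and a vertical
# sideline x = 50 (theta = 0, rho = 50) intersect at the corner (50, 100).
corner = cluster_and_intersect([(np.pi / 2, 100.0), (0.0, 50.0)])
```

Averaging each cluster before intersecting, rather than intersecting raw segments, is what suppresses the residual jitter of individual detections.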

DRIVING DETERMINATION DEVICE AND DETECTION DEVICE

A driving determination device includes an acquirer configured to acquire at least a captured image of a driving body in a driving direction and information that changes with movement of the driving body; a driving level calculator configured to calculate a driving level for evaluating a driving method for the driving body for each predetermined determination item, using at least one of the acquired captured image and the acquired information that changes with the movement of the driving body; an itemized calculator configured to calculate values based on a plurality of the calculated driving levels for each determination item; and an evaluation result calculator configured to calculate a value for comprehensively evaluating the driving method for the driving body, using the values based on the driving levels for each determination item.
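The calculator chain (per-item driving levels, itemized values, comprehensive result) can be illustrated with a toy sketch; the item names, the mean reduction, and the weights are invented for illustration:

```python
# Minimal sketch, assuming hypothetical item names and weights: driving
# levels collected per determination item are reduced to one value per
# item (here a mean), then combined into a single comprehensive score by
# a weighted sum, mirroring the itemized -> evaluation-result flow.
def evaluate_driving(levels_by_item, weights):
    itemized = {item: sum(levels) / len(levels)  # itemized calculator
                for item, levels in levels_by_item.items()}
    # Evaluation result calculator: weighted comprehensive score.
    total_weight = sum(weights[item] for item in itemized)
    score = sum(itemized[item] * weights[item] for item in itemized) / total_weight
    return itemized, score

itemized, score = evaluate_driving(
    {"braking": [80, 90], "steering": [70, 70]},
    {"braking": 2, "steering": 1},
)
```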

COMPUTER VISION SYSTEM AND METHOD FOR ASSESSING ORTHOPEDIC SPINE CONDITION
20230169644 · 2023-06-01 ·

A computer vision system and method for an orthopedic assessment of the human spine condition. The system uses frontal and sagittal images of the human spine to detect the vertebrae of the spine. More specifically, the four edges of each vertebra are detected, and the corresponding straight lines, which can be used for assessment, diagnosis, and evaluation of various spinal disorders and diseases by orthopedic doctors, are exported. The system has two phases. In phase one, a deep learning algorithm for object detection is applied to detect and localize every vertebra in frontal and sagittal images of the human spine. In phase two, the system extracts the straight lines that correspond to each of the four edges. Using these straight lines, metrics for spinal assessment, such as the curvature of the spine, the distance between consecutive vertebrae, and crucial angles such as the Cobb angle, can be determined.
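Once the endplate lines are exported, the Cobb angle reduces to the angle between two straight lines. A minimal sketch (not the patented pipeline; the slope inputs are assumed to come from the extracted edge lines):

```python
import math

# Illustrative sketch: given the slope of the superior endplate line of
# the upper end vertebra and the slope of the inferior endplate line of
# the lower end vertebra, the Cobb angle is the angle between the two
# straight lines.
def cobb_angle(slope_upper, slope_lower):
    # Angle of each endplate line against the horizontal, in degrees.
    a1 = math.degrees(math.atan(slope_upper))
    a2 = math.degrees(math.atan(slope_lower))
    return abs(a1 - a2)

# Endplates tilted +10 and -15 degrees give a 25-degree Cobb angle.
angle = cobb_angle(math.tan(math.radians(10)), math.tan(math.radians(-15)))
```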

Lamella alignment based on a reconstructed volume

Apparatuses and methods for aligning lamella to charged particle beams based on a volume reconstruction are disclosed herein. An example method at least includes forming a reconstructed volume of a portion of a sample, the sample including a plurality of structures, and the reconstructed volume including a portion of the plurality of structures, performing, over a range of angles, a mathematical transform on each plane of a plurality of planes of the reconstructed volume, and based on the mathematical transform on each plane of the plurality of planes, determining a target orientation of the sample within the range of angles, wherein the target orientation aligns the plurality of structures parallel to an optical axis of a charged particle beam.
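The angle search can be sketched with a crude discrete analogue of the mathematical transform; the rotation scheme, scoring function, and test pattern are illustrative assumptions:

```python
import numpy as np

# Simplified sketch: for each candidate tilt, resample a 2D plane of the
# reconstructed volume at that rotation (nearest-neighbour sampling) and
# project it column-wise; periodic structures produce the sharpest
# projection profile when they are parallel to the projection direction,
# so the target orientation is the angle with the highest profile variance.
def best_alignment_angle(plane, angles_deg):
    h, w = plane.shape
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2, (w - 1) / 2
    best = None
    for a in angles_deg:
        t = np.deg2rad(a)
        # Rotate sampling coordinates about the plane centre.
        xr = np.clip(np.round(cx + (xx - cx) * np.cos(t) - (yy - cy) * np.sin(t)), 0, w - 1).astype(int)
        yr = np.clip(np.round(cy + (xx - cx) * np.sin(t) + (yy - cy) * np.cos(t)), 0, h - 1).astype(int)
        profile = plane[yr, xr].sum(axis=0)  # column-wise projection
        score = profile.var()  # sharp peaks when structures are parallel
        if best is None or score > best[1]:
            best = (a, score)
    return best[0]

# Vertical stripes are already aligned, so 0 degrees wins over +/-10.
plane = np.tile((np.arange(32) % 8 == 0).astype(float), (32, 1))
angle = best_alignment_angle(plane, [-10, 0, 10])
```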

SYSTEMS AND METHODS FOR IMAGE SEGMENTATION

The present disclosure relates to an image processing method. The method may include: obtaining image data; reconstructing an image based on the image data, the image including one or more first edges; obtaining a model, the model including one or more second edges corresponding to the one or more first edges; matching the model and the image; and adjusting the one or more second edges of the model based on the one or more first edges.
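The adjustment step can be reduced to a toy sketch with invented data; real matching would operate on 2D or 3D edge curves rather than scalar positions:

```python
# Toy sketch of the matching/adjustment idea: each second edge (from the
# model) is moved toward the closest first edge (from the reconstructed
# image), which is the essence of adjusting a prior shape model to fit
# the image.
def adjust_model_edges(model_edges, image_edges):
    adjusted = []
    for m in model_edges:
        nearest = min(image_edges, key=lambda e: abs(e - m))
        adjusted.append(nearest)  # snap the model edge onto the image edge
    return adjusted

# Model edges at 10 and 50 snap to detected image edges at 12 and 47.
edges = adjust_model_edges([10, 50], [12, 47, 90])
```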

IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM FOR EXTRACTING AN IRRADIATION FIELD OF A RADIOGRAPH
20220058423 · 2022-02-24 ·

An image processing apparatus configured to extract an irradiation field from an image obtained through radiation imaging, comprises: an inference unit configured to obtain an irradiation field candidate in the image based on inference processing; a contour extracting unit configured to extract a contour of the irradiation field based on contour extraction processing performed on the irradiation field candidate; and a field extracting unit configured to extract the irradiation field based on the contour.
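Under the simplifying assumption of a rectangular collimated field, the contour-and-extraction stages can be sketched as follows (the candidate mask stands in for the inference unit's output):

```python
import numpy as np

# Minimal illustration: the inference step yields a coarse candidate
# mask, the contour step reduces it to the field boundary (here a
# bounding box), and the irradiation field is extracted by cropping the
# image to that contour.
def extract_field(image, candidate_mask):
    rows = np.any(candidate_mask, axis=1)
    cols = np.any(candidate_mask, axis=0)
    top, bottom = np.where(rows)[0][[0, -1]]
    left, right = np.where(cols)[0][[0, -1]]
    contour = (top, left, bottom, right)  # field boundary
    field = image[top:bottom + 1, left:right + 1]
    return contour, field

# A 3x3 candidate region inside a 6x6 radiograph.
mask = np.zeros((6, 6), dtype=bool)
mask[2:5, 1:4] = True
contour, field = extract_field(np.arange(36).reshape(6, 6), mask)
```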

ROBOTIC SURGERY SYSTEMS AND SURGICAL GUIDANCE METHODS THEREOF

The invention in its various embodiments relates to a method of providing surgical guidance and targeting in robotic surgery systems. The method utilizes data from a navigation system in tandem with 2-dimensional (2D) intra-operative imaging data. The 2D intra-operative image data is superimposed with a pre-operative 3-dimensional (3D) image and with surgery plans made in the pre-operative image coordinate system. The superimposition augments real-time intra-operative navigation for achieving image-guided surgery in robotic surgery systems. A robotic surgery system that incorporates this method of providing surgical guidance and targeting is also disclosed. The advantages include minimizing radiation exposure to a patient by avoiding intra-operative volumetric imaging, mobility of the tools, imager, and robot in and out of the operating space without the need for re-calibration, and relaxing the need for repeating precise imaging positions.
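The superimposition of pre-operative 3D plan points onto the 2D intra-operative image can be sketched with a camera-style projection; the 3x4 matrix here is a made-up placeholder for the registration the navigation system would supply:

```python
import numpy as np

# Hypothetical sketch: surgery-plan points defined in the pre-operative
# 3D coordinate system are projected into the 2D intra-operative image
# with a 3x4 projection matrix, then overlaid for image-guided surgery.
def project_plan_points(points_3d, projection):
    pts = np.hstack([points_3d, np.ones((len(points_3d), 1))])  # homogeneous
    proj = pts @ projection.T
    return proj[:, :2] / proj[:, 2:3]  # perspective divide to pixel coords

# Trivial pinhole-like projection: a point at depth z = 2 lands at
# (x/2, y/2) in the image.
P = np.array([[1.0, 0, 0, 0], [0, 1.0, 0, 0], [0, 0, 1.0, 0]])
uv = project_plan_points(np.array([[4.0, 6.0, 2.0]]), P)
```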

RETINAL IMAGE PROCESSING
20170309014 · 2017-10-26 ·

The present disclosure relates to a computer-readable storage medium storing instructions that can cause a processor to process image data defining an image of a vascular structure of temporal vascular arcades of a retina to estimate a location of the fovea of the retina in the image by transforming received image data such that the vascular structure in the image defined by the transformed image data is more circular than the vascular structure defined by the image data; calculating, for each of a plurality of pixels of the transformed image data, a respective local orientation vector indicative of the orientation of any blood vessel present in the image; calculating a normalised local orientation vector for each of the pixels; operating on an array of accumulators; and estimating the location of the fovea in the image of the retina using the location of a pixel of the transformed image data.
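The local-orientation step can be sketched from image gradients; the gradient-based estimator and the test pattern are assumptions for illustration, not the claimed method:

```python
import numpy as np

# Sketch: the orientation of any vessel at a pixel is estimated from
# image gradients, since a vessel's intensity gradient is perpendicular
# to its direction; the local orientation vector is the gradient rotated
# by 90 degrees, then normalised.
def local_orientation_vectors(image):
    gy, gx = np.gradient(image.astype(float))
    ox, oy = -gy, gx             # rotate gradient 90 deg -> along the vessel
    norm = np.hypot(ox, oy)
    norm[norm == 0] = 1.0        # leave flat regions as zero vectors
    return ox / norm, oy / norm  # normalised local orientation vectors

# A vertical "vessel" (bright column) yields vertical orientation
# vectors at its edges.
img = np.zeros((5, 5))
img[:, 2] = 1.0
ox, oy = local_orientation_vectors(img)
```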

RETINAL IMAGE PROCESSING
20170309015 · 2017-10-26 ·

The disclosure relates to a non-transitory computer-readable storage medium storing computer program instructions which, when executed by a processor, cause the processor to process image data defining an image of a retina to determine a location of an anatomical feature of the retina in the image by receiving the image data; calculating, for each of a plurality of pixels of the image data, a respective local orientation vector indicative of the orientation of any blood vessel present in the image; calculating a normalised local orientation vector for each of the plurality of pixels; operating on an array of accumulators, wherein each accumulator in the array is associated with a respective pixel of the image data; and determining the location of the anatomical feature in the image of the retina using the location of a pixel of the image data which is associated with an accumulator having accumulated an accumulated value.
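The accumulator stage common to both retinal patents can be illustrated with a simple voting sketch; the fixed voting radius and the inward normal convention are assumptions, not the claimed procedure:

```python
import numpy as np

# Illustrative sketch: each vessel pixel casts a vote a fixed distance
# along the normal of its local orientation vector (vessels of the
# arcades curve around the feature), and the accumulator cell with the
# largest accumulated value marks the estimated feature location.
def vote_for_feature(pixels, radius, shape):
    acc = np.zeros(shape, dtype=int)  # array of accumulators
    for (x, y, ox, oy) in pixels:     # pixel plus unit orientation vector
        nx, ny = -oy, ox              # normal to the local orientation
        vx = int(round(x + radius * nx))
        vy = int(round(y + radius * ny))
        if 0 <= vx < shape[1] and 0 <= vy < shape[0]:
            acc[vy, vx] += 1
    y, x = np.unravel_index(acc.argmax(), acc.shape)
    return x, y

# Two pixels on a circle of radius 5 around (10, 10), each oriented along
# the circle, both vote toward the centre.
pixels = [(15, 10, 0.0, 1.0), (10, 15, -1.0, 0.0)]
center = vote_for_feature(pixels, 5, (20, 20))
```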

APPARATUS AND METHOD FOR TRANSFORMING AUGMENTED REALITY INFORMATION OF HEAD-UP DISPLAY FOR VEHICLE

Provided are an apparatus and method for transforming augmented reality information of a head-up display (HUD) for a vehicle. The apparatus includes a first projection transforming formula generation unit configured to extract a conjugate point by performing feature point matching on a first feature point of a forward recognition image of the vehicle input from a forward recognition camera and a second feature point of a first driver-based viewpoint image input from a driver-based viewpoint camera whose installation position differs from that of the forward recognition camera, and to generate a first projection transforming formula; a second projection transforming formula generation unit configured to generate a second projection transforming formula using a straight-line intersection extracted from a second driver-based viewpoint image input from the driver-based viewpoint camera and HUD coordinates pre-defined on the HUD; and an augmented reality information transformation unit configured to sequentially apply the generated first and second projection transforming formulas to recognition coordinates of a forward object recognized from the forward recognition image, to calculate primary and secondary transformation coordinates, and to render the secondary transformation coordinates on the HUD.
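The sequential application of the two projection transforming formulas can be sketched as a chain of homographies; the 3x3 matrices below are made-up placeholders for the formulas the two generation units would produce:

```python
import numpy as np

# Minimal sketch: recognition coordinates of a forward object are mapped
# by the first projection transforming formula (forward camera -> driver
# viewpoint), then by the second (driver viewpoint -> HUD coordinates),
# each applied as a 3x3 homography in homogeneous coordinates.
def to_hud(point, h1, h2):
    p = np.array([point[0], point[1], 1.0])
    p = h1 @ p                  # primary transformation coordinates
    p = p / p[2]
    p = h2 @ p                  # secondary transformation coordinates
    return p[:2] / p[2]

# h1 shifts by (5, 0); h2 scales by 2: (10, 20) -> (15, 20) -> (30, 40).
h1 = np.array([[1.0, 0, 5], [0, 1.0, 0], [0, 0, 1.0]])
h2 = np.array([[2.0, 0, 0], [0, 2.0, 0], [0, 0, 1.0]])
hud = to_hud((10.0, 20.0), h1, h2)
```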