Patent classifications
G06V10/42
METHOD AND APPARATUS FOR EXTRACTING A FINGERPRINT OF A VIDEO HAVING A PLURALITY OF FRAMES
A method for extracting a fingerprint of a video includes calculating 2D discrete cosine transform (DCT) coefficients from each of the plurality of frames of the video, extracting, from the 2D DCT coefficients, a coefficient having a basis satisfying at least one of up-down symmetry or left-right symmetry, and calculating a fingerprint of the video based on the extracted coefficient.
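The abstract's pipeline (per-frame 2D DCT, keep coefficients whose basis is up-down or left-right symmetric, derive a compact fingerprint) can be sketched as follows. This is an illustrative reading, not the patent's actual implementation: for the orthonormal DCT-II, a basis function with an even vertical frequency index is up-down symmetric and one with an even horizontal index is left-right symmetric, and the binarization step here (`frame_fingerprint`, `max_freq`) is an assumed hashing choice.

```python
import numpy as np

def dct2(block):
    # Separable orthonormal 2D DCT-II via matrix multiplication (square input assumed).
    n = block.shape[0]
    k = np.arange(n)
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)
    return C @ block @ C.T

def frame_fingerprint(frame, max_freq=8):
    coeffs = dct2(frame.astype(float))
    # Keep low-frequency coefficients whose basis is up-down symmetric
    # (even vertical index u) or left-right symmetric (even horizontal index v).
    picked = np.array([coeffs[u, v]
                       for u in range(max_freq) for v in range(max_freq)
                       if u % 2 == 0 or v % 2 == 0])
    # Binarize against the median for a compact, comparison-friendly hash.
    return (picked > np.median(picked)).astype(np.uint8)
```

A video-level fingerprint would then concatenate or aggregate these per-frame vectors across the plurality of frames.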
MEDICAL IMAGE PROCESSING APPARATUS AND MEDICAL IMAGE PROCESSING METHOD
A medical image processing apparatus according to an embodiment includes processing circuitry. The processing circuitry detects, for each of target points corresponding to feature points, a reference point having a spatial correlation with the target point in a medical image. The processing circuitry generates candidate points corresponding to the target point for each of the target points by using a detection model. The processing circuitry selects, for each of the target points, a candidate point based on a position feature indicating a spatial position relationship between the target point and the reference point. The processing circuitry selects, for each of a plurality of candidate point combinations, a candidate point combination based on a structural feature indicating a spatial structural relationship between the target points. The processing circuitry outputs feature points in the medical image based on the selected candidate point and candidate point combination.
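The two selection stages (a position feature relating each target point to its reference point, and a structural feature relating target points to each other) can be illustrated with a brute-force scorer. All names, cost terms, and inputs below are hypothetical stand-ins for the patent's features, not its actual scoring model.

```python
import itertools
import numpy as np

def select_candidates(candidates, reference, expected_offsets, expected_dists):
    """Pick one candidate per target point.

    candidates: per-target lists of (x, y) candidate positions from a detector
    reference: (x, y) reference point detected in the image
    expected_offsets: assumed expected offset of each target from the reference
    expected_dists: assumed expected pairwise distances, {(i, j): distance}
    """
    best, best_cost = None, np.inf
    for combo in itertools.product(*candidates):
        combo = np.asarray(combo, float)
        # Position feature: deviation from the reference-anchored expected position.
        pos_cost = sum(np.linalg.norm(p - (np.asarray(reference) + off))
                       for p, off in zip(combo, expected_offsets))
        # Structural feature: deviation of pairwise distances between targets.
        struct_cost = sum(abs(np.linalg.norm(combo[i] - combo[j]) - d)
                          for (i, j), d in expected_dists.items())
        cost = pos_cost + struct_cost
        if cost < best_cost:
            best, best_cost = combo, cost
    return best
```

The exhaustive product over combinations is only workable for small candidate sets; it simply makes the joint position/structure criterion explicit.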
Method and system for image processing to determine blood flow
Embodiments include a system for determining cardiovascular information for a patient. The system may include at least one computer system configured to receive patient-specific data regarding a geometry of the patient's heart, and create a three-dimensional model representing at least a portion of the patient's heart based on the patient-specific data. The at least one computer system may be further configured to create a physics-based model relating to a blood flow characteristic of the patient's heart and determine a fractional flow reserve within the patient's heart based on the three-dimensional model and the physics-based model.
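The patent couples a patient-specific 3D geometric model with a physics-based blood-flow model (in practice, computational fluid dynamics) to compute fractional flow reserve, the ratio of distal coronary pressure to aortic pressure. As a far simpler stand-in for that physics model, a reduced-order Hagen-Poiseuille estimate over tube segments conveys the idea; the segment representation and parameter values here are illustrative only.

```python
import math

def ffr_poiseuille(segments, flow, p_aortic, viscosity=0.0035):
    """Toy reduced-order FFR estimate (NOT the patent's 3D CFD model).

    segments: list of (length_m, radius_m) tubes along the vessel centerline
    flow: volumetric flow in m^3/s; p_aortic: aortic pressure in Pa
    """
    p = p_aortic
    for length, radius in segments:
        # Hagen-Poiseuille pressure drop for laminar flow in a rigid tube.
        dp = 8.0 * viscosity * length * flow / (math.pi * radius ** 4)
        p -= dp
    return p / p_aortic  # FFR = distal pressure / aortic pressure
```

Narrower segments (stenoses) raise the pressure drop with the fourth power of the radius, driving FFR below the clinical decision thresholds the full simulation is designed to resolve.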
Automated obscurity for digital imaging
Obfuscating a human or other subject in digital media preserves privacy. A user of a smartphone, for example, may enable a flag for obscuring her face in digital photos or movies. When any device captures digital media, the user's smartphone transmits the flag for receipt by the capturing device. That device is thus informed of the user's desire to obscure her face, or even her entire image, and may perform an obscuration in response to the flag.
Computer Vision Systems and Methods for Determining Structure Features from Point Cloud Data Using Neural Networks
Computer vision systems and methods for determining structure features from point cloud data using neural networks are provided. The system obtains point cloud data of a structure, or of a property parcel having a structure present therein, from a database. The system can preprocess the obtained point cloud data to generate another point cloud or 3D representation derived from it by spatial cropping and/or transformation, downsampling, upsampling, and filtering. The system can also preprocess point features to generate and/or obtain new features. The system then extracts a structure and/or a feature of the structure from the point cloud data utilizing one or more neural networks, and determines at least one attribute of the extracted structure and/or feature utilizing the same networks.
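Of the preprocessing steps listed, voxel-grid downsampling is a representative example: points are binned into voxels and each occupied voxel is reduced to the centroid of its members. This is a common generic technique, offered here as one plausible instance of the abstract's "down sampling", not the patent's specific method.

```python
import numpy as np

def voxel_downsample(points, voxel=0.5):
    """Down-sample an (N, 3) point cloud to one centroid per occupied voxel."""
    keys = np.floor(points / voxel).astype(np.int64)
    # Assign each point a voxel id, then average members voxel by voxel.
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    inv = inv.ravel()
    counts = np.bincount(inv).astype(float)
    out = np.empty((counts.size, 3))
    for d in range(3):
        out[:, d] = np.bincount(inv, weights=points[:, d]) / counts
    return out
```

The reduced cloud bounds the input size fed to the downstream neural networks while preserving coarse geometry.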
Barrier detection for support structures
A method of barrier detection in an imaging controller includes: obtaining an image of a support structure configured to support a plurality of items on a support surface extending between a shelf edge and a shelf back; extracting frequency components representing pixels of the image; based on the extracted frequency components, identifying a barrier region of the image, the barrier region containing a barrier adjacent to the shelf edge; and detecting at least one empty sub-region within the barrier region, wherein the empty sub-region is free of items between the barrier and the shelf back.
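The frequency-component idea can be illustrated with a column-wise spectrum test: stocked shelf areas are visually textured and carry high-frequency energy, while a uniform barrier or empty sub-region does not. The band cutoff and threshold below are illustrative tuning parameters, and the whole function is a sketch of the principle rather than the claimed detection pipeline.

```python
import numpy as np

def empty_columns(image, band=0.25, thresh=1.0):
    """Flag image columns with low high-frequency energy (candidate empty regions)."""
    spec = np.abs(np.fft.rfft(image.astype(float), axis=0))
    cut = int(spec.shape[0] * band)          # keep only the upper spectral band
    hf_energy = spec[cut:].sum(axis=0)       # per-column high-frequency energy
    return hf_energy < thresh
```

Grouping runs of flagged columns inside the barrier region would then yield the empty sub-regions between the barrier and the shelf back.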
Image processing apparatus, image capture apparatus, and control method for adding an effect of a virtual light source to a subject
With respect to a subject included in an image, the illumination condition created by an ambient light source in the environment where the image was captured is estimated and, based on the estimation result, the effect of a virtual light source that did not exist at the time of image capture is computed. More specifically, the effect of the virtual light source is computed using the illumination direction of the virtual light source and the reflective characteristics of the subject as illuminated by it, both determined from the estimation result, and an image with the effect of the virtual light source added is output.
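A minimal form of the add-a-virtual-light step is Lambertian shading: the virtual contribution at each pixel scales with the dot product of the surface normal and the light direction. This sketch assumes estimated per-pixel normals and uses the captured image itself as a crude albedo proxy; the patent's actual reflective-characteristic model is not specified here.

```python
import numpy as np

def add_virtual_light(image, normals, light_dir, intensity=0.5):
    """Add a Lambertian (n . l) contribution from a virtual light source.

    image:   (H, W, 3) floats in [0, 1], the ambient-lit capture
    normals: (H, W, 3) unit surface normals estimated for the subject
    """
    l = np.asarray(light_dir, float)
    l /= np.linalg.norm(l)
    # Clamp back-facing pixels to zero so the virtual light only adds energy.
    ndotl = np.clip(np.einsum('hwc,c->hw', normals, l), 0.0, None)
    lit = image + intensity * image * ndotl[..., None]
    return np.clip(lit, 0.0, 1.0)
```

Choosing `light_dir` and `intensity` from the estimated ambient illumination is what lets the virtual light blend plausibly with the original lighting.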