Patent classifications
G06V10/16
System and method for generating imaging report
The present disclosure provides a system and a method for generating an imaging report. The method may include obtaining first imaging information and second imaging information. The first and second imaging information may be acquired from an examination region of a subject using a first imaging device and a second imaging device, respectively. The method may include identifying at least one first target ROI based on the first imaging information, and determining first reporting information corresponding to the at least one first target ROI. The method may include identifying at least one second target ROI with respect to the second imaging information based on the at least one first target ROI, and determining second reporting information corresponding to the at least one second target ROI. The method may further include generating a report based on at least a part of the first reporting information or the second reporting information.
MOBILE OBJECT, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM
In a mobile object, a camera unit includes an optical system that forms an optical image having a high-resolution region and a low-resolution region on a light receiving surface of an image pickup element, and is disposed on a side of the mobile object. The camera unit is installed to meet the following conditions: arctan(h/(d1+x)) − θv/2 < φv < arctan(h/(d2+x)) + θv/2; φh_limit = max(arctan((w1−y)/(d1+x)) − θh/2, arctan((w2−y)/(d2+x)) − θh/2); φh_limit < φh < −arctan(y/(d1+x)) + θh/2, where θv and θh denote the vertical and horizontal field angles of the high-resolution region, φv and φh denote the vertical and horizontal direction angles of the optical axis of the optical system, x, y, and h denote offsets, and w1 and w2 denote predetermined widths on the ground at the distances d1 and d2.
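Written out in conventional mathematical notation (reading the abstract's "A tan" as the arctangent), the three installation conditions are:

```latex
\begin{aligned}
\arctan\!\frac{h}{d_1+x} - \frac{\theta_v}{2} \;&<\; \varphi_v \;<\; \arctan\!\frac{h}{d_2+x} + \frac{\theta_v}{2},\\[4pt]
\varphi_{h,\mathrm{limit}} \;&=\; \max\!\left(\arctan\!\frac{w_1-y}{d_1+x} - \frac{\theta_h}{2},\;\; \arctan\!\frac{w_2-y}{d_2+x} - \frac{\theta_h}{2}\right),\\[4pt]
\varphi_{h,\mathrm{limit}} \;&<\; \varphi_h \;<\; -\arctan\!\frac{y}{d_1+x} + \frac{\theta_h}{2}.
\end{aligned}
```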
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND STORAGE MEDIUM
An information processing apparatus configured to learn objectness of an object in an image includes an image acquisition unit configured to acquire an image, a GT data acquisition unit configured to acquire GT data including at least an object area where an object in the acquired image is present, an inference unit configured to infer a candidate area for the object in the image and a first score indicating objectness for the candidate area based on a learning model, a determination unit configured to determine a second score indicating objectness for the object area based on the inferred candidate area and an area included in the acquired GT data, and an update unit configured to update a parameter for the learning model based on a loss value calculated based on the second score and the inferred first score indicating the objectness for the candidate area.
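The abstract does not specify how the second score is computed from the inferred candidate area and the GT area. A minimal sketch of one plausible choice, assuming axis-aligned boxes `(x1, y1, x2, y2)` and intersection-over-union (IoU) as the overlap measure (`iou` and `second_score` are illustrative names, not from the patent):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def second_score(candidate_box, gt_box):
    # The GT object area inherits an objectness score from how well
    # the inferred candidate area covers it; this score and the model's
    # first score would then enter the loss used to update parameters.
    return iou(candidate_box, gt_box)
```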
Methods and Systems for Augmented Reality Tracking Based on Volumetric Feature Descriptor Data
An illustrative augmented reality tracking system obtains a volumetric feature descriptor dataset that includes: 1) a plurality of feature descriptors associated with a plurality of views of a volumetric target, and 2) a plurality of 3D structure datapoints that correspond to the plurality of feature descriptors. The system also obtains an image frame captured by a user equipment (UE) device. The system identifies a set of image features depicted in the image frame and detects, based on a match between the set of image features depicted in the image frame and a set of feature descriptors of the plurality of feature descriptors, that the volumetric target is depicted in the image frame. In response to this detection, and based on the 3D structure datapoints corresponding to the matched feature descriptors, the system determines a spatial relationship between the UE device and the volumetric target. Corresponding methods and systems are also disclosed.
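The matching step can be sketched with nearest-neighbour descriptor matching in NumPy. This is an assumed simplification (the patent does not prescribe a matcher or a distance threshold); `match_and_detect` is a hypothetical helper, and the 3D points it returns would feed a pose solver such as PnP to get the actual spatial relationship:

```python
import numpy as np

def match_and_detect(frame_descriptors, target_descriptors,
                     target_points_3d, max_dist=0.5, min_matches=3):
    """Nearest-neighbour match frame features against the stored
    volumetric feature descriptor dataset.

    Each row of `target_descriptors` has a corresponding 3D structure
    datapoint in `target_points_3d`. The target counts as depicted when
    enough descriptors match within `max_dist`; the matched 3D points
    are returned for downstream pose estimation.
    """
    matched = []
    for d in frame_descriptors:
        dists = np.linalg.norm(target_descriptors - d, axis=1)
        j = int(np.argmin(dists))
        if dists[j] < max_dist:
            matched.append(target_points_3d[j])
    detected = len(matched) >= min_matches
    return detected, np.array(matched)

# Toy data: four unit-vector descriptors, three slightly noisy queries.
target_descriptors = np.eye(4)
target_points_3d = np.arange(12.0).reshape(4, 3)
frame_descriptors = np.eye(4)[:3] + 0.01
detected, points = match_and_detect(frame_descriptors,
                                    target_descriptors, target_points_3d)
```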
OBJECT STITCHING IMAGE GENERATION
A method includes receiving, by a computing device, concepts of a domain; determining, by the computing device, objects relevant to the concepts; generating, by the computing device, a new image by stitching the relevant objects together; determining, by the computing device, whether the new image is accurate or inaccurate; and in response to determining the new image is inaccurate, propagating, by the computing device, the inaccurate new image back to a convolutional neural network (CNN).
ULTRASOUND IMAGING METHOD AND SYSTEM FOR IDENTIFYING AN ANATOMICAL FEATURE OF A SPINE
Ultrasound imaging methods for identifying an anatomical feature of a spine are described. In an embodiment, the method comprises: receiving a transverse ultrasound image of a portion of the spine; extracting features of the portion of the spine from the image based on a distinct pattern associated with the anatomical feature of the spine; identifying a midline of the portion of the spine in the image; extracting midline features using pixel intensity values of the image; and identifying, based on a combination of the extracted features of the portion of the spine and the extracted midline features, the anatomical feature in the image. In another embodiment, the method comprises: receiving a paramedian sagittal ultrasound image of a portion of the spine; identifying morphological features of the image; and determining if the portion of the spine includes a sacrum using a Support Vector Machine classifier.
METHOD, APPARATUS AND SYSTEM FOR VIDEO PROCESSING
A method for video processing is provided. The method comprises obtaining an image of a scene, obtaining a video, including a plurality of frames, that records an area included in the scene, selecting one or more frames from the plurality of frames, determining pairs of matched features, generating a plurality of composite frames by combining each of the selected one or more frames with the image of the scene based on the pairs of matched features, and generating a composite video based on the plurality of composite frames. Each of the pairs of matched features is related to an object that is in both the image and the one or more frames. Each of the pairs of matched features is associated with one or more pixels of the image of the scene and one or more pixels of a selected frame of the one or more frames.
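The abstract leaves open how the pairs of matched features drive the combination. A common approach (assumed here, not stated in the patent) is to fit a geometric transform to the pixel correspondences and then warp one image onto the other; a minimal least-squares 2D affine fit, with `estimate_affine` as an illustrative helper name:

```python
import numpy as np

def estimate_affine(src_pts, dst_pts):
    """Least-squares 2x3 affine transform mapping matched pixel
    coordinates in the frame (`src_pts`) to pixel coordinates in the
    scene image (`dst_pts`): solves dst ≈ A @ [x, y, 1]^T.
    """
    src = np.asarray(src_pts, dtype=float)
    dst = np.asarray(dst_pts, dtype=float)
    ones = np.ones((src.shape[0], 1))
    X = np.hstack([src, ones])                    # (N, 3) design matrix
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)   # (3, 2) solution
    return A.T                                    # (2, 3) affine matrix

# Four matched pairs related by a pure translation of (2, 3).
src = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], dtype=float)
dst = src + np.array([2.0, 3.0])
A = estimate_affine(src, dst)
```

With the transform in hand, each selected frame can be warped into the scene image's coordinates to produce a composite frame.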
IMPROVED ARCHITECTURE FOR COUPLING DIGITAL PIXEL SENSORS AND COMPUTING COMPONENTS
The disclosed system may include a first layer that includes multiple digital pixel sensors configured to detect light. The system may also include a second layer that includes various image processing components configured to process the light detected by the digital pixel sensors. Still further, the system may include a third layer that includes machine learning (ML) hardware processing components. The image processing components of the second layer may be communicatively connected to the ML hardware processing components of the third layer via multiple micro through-silicon vias (uTSVs). Various other methods of manufacturing, apparatuses, and computer-readable media are also disclosed.
SYSTEMS AND METHODS FOR PERFORMING COMPUTER VISION TASK USING A SEQUENCE OF FRAMES
Systems and methods are described for performing a computer vision task on a sequence of frames. A first frame and a second frame are obtained, corresponding to a first timestep and a second timestep, respectively, in a sequence of frames. A differential image is computed between the first frame and the second frame. A predicted output is generated by forward propagating the differential image through a neural network that is trained to perform a computer vision task.
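The pipeline above is compact enough to sketch: subtract consecutive frames, then forward the signed difference through a network. This assumes grayscale frames as NumPy arrays and uses a tiny stand-in two-layer network (the patent does not prescribe an architecture); `differential_image` and `forward` are illustrative names:

```python
import numpy as np

rng = np.random.default_rng(0)

def differential_image(frame_t1, frame_t2):
    """Signed per-pixel difference between consecutive frames; static
    background cancels, leaving mostly the changed content."""
    return frame_t2.astype(np.float32) - frame_t1.astype(np.float32)

def forward(diff, w1, w2):
    """Stand-in network: flatten -> ReLU layer -> linear output."""
    h = np.maximum(diff.reshape(-1) @ w1, 0.0)
    return h @ w2

# Two 8x8 frames that differ only in a 2x2 patch (simulated motion).
frame_a = rng.integers(0, 200, size=(8, 8)).astype(np.uint8)
frame_b = frame_a.copy()
frame_b[2:4, 2:4] += 1

diff = differential_image(frame_a, frame_b)
w1 = rng.normal(size=(64, 4))
w2 = rng.normal(size=(4,))
predicted_output = forward(diff, w1, w2)
```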
Image obtaining method
An image obtaining method comprises: by a projecting device, separately projecting an image acquisition light and a reference light onto a target object, wherein the light intensity of the image acquisition light is higher than the light intensity of the reference light; by an image obtaining device, obtaining a first image and a second image, both the first image and the second image comprising the image of the target object, with the target object of the first image being illuminated by the image acquisition light, and the target object of the second image being illuminated by the reference light, wherein the first image has a first area including a part of the target object, and the second image has a second area including the part of the target object; and by a computing device, performing a difference evaluation procedure to obtain a required light intensity based on a required amount.