Patent classification: G06V10/467
Methods, systems, and media for evaluating images
A method may include obtaining an image including a face. The method may further include determining at least one time domain feature related to the face in the image and at least one item of frequency domain information related to the face in the image. The method may further include evaluating the quality of the image based on the at least one time domain feature and the at least one item of frequency domain information.
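A minimal sketch of how spatial-domain and frequency-domain cues might be combined into one quality score. The specific features (Laplacian variance, high-frequency energy ratio), the cutoff, and the weights are illustrative assumptions, not details from the patent:

```python
import numpy as np

def quality_score(gray, hf_cutoff=0.25, w_spatial=0.5, w_freq=0.5):
    """Toy quality score mixing a spatial-domain sharpness term
    (variance of a 4-neighbour Laplacian response) with a
    frequency-domain term (fraction of spectral energy above a
    normalised radial cutoff). Weights and cutoff are hypothetical."""
    g = np.asarray(gray, dtype=float)
    # Spatial-domain feature: variance of the Laplacian over the interior.
    lap = (-4 * g[1:-1, 1:-1] + g[:-2, 1:-1] + g[2:, 1:-1]
           + g[1:-1, :-2] + g[1:-1, 2:])
    spatial = lap.var()
    # Frequency-domain feature: share of power above the cutoff radius.
    spec = np.abs(np.fft.fftshift(np.fft.fft2(g))) ** 2
    h, w = spec.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - h / 2, xx - w / 2) / (min(h, w) / 2)
    freq = spec[r > hf_cutoff].sum() / spec.sum()
    return w_spatial * spatial + w_freq * freq
```

A sharp, high-contrast patch should score above a flat one under this metric, mirroring the idea of evaluating quality from both domains jointly.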
Camera having two exposure modes and imaging system using the same
There is provided an imaging system including a camera and a control host. The camera identifies ambient light intensity and performs trigger event detection in a low power mode. When the camera detects a trigger event in the low power mode, the control host is woken up. The camera also determines an exposure mode according to the ambient light intensity and informs the control host of the exposure mode such that an operating mode of the control host after being woken up matches the exposure mode of the camera.
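The wake-up handshake described above can be sketched as a small state machine. The class names, the lux threshold, and the day/night mode labels are assumptions for illustration only:

```python
from dataclasses import dataclass

DAY_MODE, NIGHT_MODE = "day", "night"

@dataclass
class ControlHost:
    awake: bool = False
    operating_mode: str = ""

    def wake(self, exposure_mode):
        # The host wakes and matches its operating mode to the
        # exposure mode reported by the camera.
        self.awake = True
        self.operating_mode = exposure_mode

@dataclass
class Camera:
    host: ControlHost
    lux_threshold: float = 50.0  # hypothetical day/night boundary

    def exposure_mode(self, ambient_lux):
        return DAY_MODE if ambient_lux >= self.lux_threshold else NIGHT_MODE

    def on_trigger_event(self, ambient_lux):
        # On a trigger event in low-power mode: choose an exposure mode
        # from ambient light, wake the host, and inform it of the mode.
        mode = self.exposure_mode(ambient_lux)
        self.host.wake(mode)
        return mode
```

The key property is that the host never has to measure light itself; its mode after waking always matches the camera's.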
SINGLE-SHOT AUTOFOCUSING OF MICROSCOPY IMAGES USING DEEP LEARNING
A deep learning-based offline autofocusing method and system, termed Deep-R, is disclosed herein: a trained neural network that rapidly and blindly autofocuses a single-shot microscopy image of a sample or specimen acquired at an arbitrary out-of-focus plane. The efficacy of Deep-R is illustrated using various tissue sections imaged with fluorescence and brightfield microscopy modalities, demonstrating single-snapshot autofocusing under different scenarios, such as a uniform axial defocus as well as a sample tilt within the field-of-view. Deep-R is significantly faster than standard online algorithmic autofocusing methods. This deep learning-based blind autofocusing framework opens up new opportunities for rapid microscopic imaging of large sample areas, while also reducing the photon dose on the sample.
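For contrast, the conventional online autofocusing baseline that Deep-R avoids can be sketched as a physical search over axial positions; Deep-R replaces this whole acquisition loop with a single network inference on one defocused shot. The function names and the toy focus metric below are illustrative:

```python
def online_autofocus(capture, z_positions, focus_metric):
    """Baseline online autofocus: acquire an image at each axial (z)
    position and keep the one that maximises a focus metric. Each
    iteration costs a physical acquisition (and photon dose), which is
    exactly what a single-shot offline method sidesteps."""
    best_z, best_img, best_score = None, None, float("-inf")
    for z in z_positions:
        img = capture(z)           # one acquisition per candidate plane
        score = focus_metric(img)
        if score > best_score:
            best_z, best_img, best_score = z, img, score
    return best_z, best_img
```

With N candidate planes this costs N exposures, whereas a single-shot approach costs one.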
METHOD AND DEVICE FOR DETECTING DEFECTS IN AIRCRAFT TEMPLATE
A method for detecting defects of an aircraft template, including: scanning the template; establishing a local coordinate system of a template point cloud; fitting plane parameters of a target local point cloud; acquiring an average of the normal vectors of all points; calculating the heights of all points; calculating an angle between the normal vector of the sinking point and the normal vector of the template point cloud plane; binarizing a point cloud image of the template; obtaining a 3D digital model of the template; aligning the 3D digital model with the resulting point cloud; and determining whether an actual distance exceeds a preset distance threshold: if not, the template is qualified; otherwise, determining whether the number of points whose actual distance exceeds the preset distance threshold exceeds a preset number threshold: if not, the template is qualified; otherwise, the template is not qualified. A detection device is also provided.
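The final two-stage qualification decision can be sketched directly from the abstract; the threshold values in the usage are of course hypothetical:

```python
def template_qualified(distances, dist_threshold, count_threshold):
    """Two-stage decision: the template is qualified if no point's
    model-to-scan distance exceeds the distance threshold, or if the
    number of out-of-tolerance points does not exceed the preset
    count threshold; otherwise it is not qualified."""
    outliers = sum(1 for d in distances if d > dist_threshold)
    if outliers == 0:
        return True
    return outliers <= count_threshold
```

This tolerates a bounded number of isolated deviations while rejecting templates with widespread ones.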
SYSTEMS, METHODS, AND APPARATUSES FOR GENERATING PRE-TRAINED MODELS FOR nnU-Net THROUGH THE USE OF IMPROVED TRANSFER LEARNING TECHNIQUES
Described herein are means for generating pre-trained models for nnU-Net through the use of improved transfer learning techniques, in which the pre-trained models are then utilized for the processing of medical images. According to a particular embodiment, there is a system specially configured for segmenting medical images, in which such a system includes: a memory to store instructions; a processor to execute the instructions stored in the memory; wherein the system is specially configured to: execute instructions via the processor for executing a pre-trained model from Models Genesis within a nnU-Net framework; execute instructions via the processor for learning generic anatomical patterns within the executing Models Genesis through self-supervised learning; execute instructions via the processor for transforming an original image using distortion and cutout-based methods; execute instructions via the processor for learning the reconstruction of the original image from the transformed image using an encoder-decoder architecture of the nnU-Net framework to identify the generic anatomical representation from the transformed image by recovering the original image; and wherein the architecture determined by the nnU-Net framework is utilized with Models Genesis and is trained to minimize the L2 distance between the prediction and the ground truth. Other related embodiments are disclosed.
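The cutout-style transformation and the L2 training objective named above can be sketched in a few lines; the mask position, size, and fill value are illustrative, and a real pipeline would apply these inside a training loop:

```python
import numpy as np

def cutout(image, y, x, size, fill=0.0):
    """Cutout-based transformation: mask a square region of the input.
    A network is then trained to reconstruct the original image from
    this transformed version (self-supervised pre-training)."""
    out = image.copy()
    out[y:y + size, x:x + size] = fill
    return out

def l2_loss(prediction, ground_truth):
    """L2 (mean squared) distance between prediction and ground truth,
    the training objective named in the abstract."""
    diff = np.asarray(prediction, float) - np.asarray(ground_truth, float)
    return float(np.mean(diff ** 2))
```

Recovering the masked content forces the encoder-decoder to learn the generic anatomical structure around it.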
METHOD AND SYSTEM FOR IMAGE PROCESSING TO MODEL VASCULATURE
Systems and methods are disclosed for evaluating cardiovascular treatment options for a patient. One method includes creating a three-dimensional model representing a portion of the patient's heart based on patient-specific data regarding a geometry of the patient's heart or vasculature; and for a plurality of treatment options for the patient's heart or vasculature, modifying at least one of the three-dimensional model and a reduced order model based on the three-dimensional model. The method also includes determining, for each of the plurality of treatment options, a value of a blood flow characteristic, by solving at least one of the modified three-dimensional model and the modified reduced order model; and identifying one of the plurality of treatment options that solves a function of at least one of: the determined blood flow characteristics of the patient's heart or vasculature, and one or more costs of each of the plurality of treatment options.
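The final selection step, identifying the treatment option that best trades off a solved blood-flow characteristic against cost, might be sketched as follows. The scoring function, the weight, and all names are hypothetical illustrations, not the patent's method:

```python
def best_treatment(options, solve_flow, cost, alpha=1.0):
    """Score each treatment option by a function of its solved
    blood-flow characteristic (e.g. from a 3D or reduced-order model)
    and its cost, and return the lowest-scoring option. The linear
    combination and weight alpha are illustrative assumptions."""
    def score(opt):
        # Lower cost and higher flow characteristic are both preferred.
        return alpha * cost(opt) - solve_flow(opt)
    return min(options, key=score)
```

In practice `solve_flow` would wrap a patient-specific model solve per modified geometry, which is the expensive part the reduced-order model accelerates.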
FEATURE DENSITY OBJECT CLASSIFICATION, SYSTEMS AND METHODS
A system capable of determining which recognition algorithms should be applied to regions of interest within digital representations is presented. A preprocessing module utilizes one or more feature identification algorithms to determine regions of interest based on feature density. The preprocessing module leverages the feature density signature of each region to determine which of a plurality of diverse recognition modules should operate on the region of interest. A specific embodiment that focuses on structured documents is also presented. Further, the disclosed approach can be enhanced by the addition of an object classifier that classifies the types of objects found in the regions of interest.
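A minimal sketch of routing a region to a recognition module by feature density. The module names and density bands are made-up placeholders:

```python
def route_region(region_features, region_area, modules):
    """Pick the recognition module whose accepted feature-density band
    contains this region's density. `modules` maps a module name to a
    half-open (low, high) density interval; bands are hypothetical."""
    density = len(region_features) / region_area
    for name, (low, high) in modules.items():
        if low <= density < high:
            return name
    return None  # no module claims this density signature
```

Dense regions (e.g. text or barcodes) and sparse ones (e.g. large objects) thus reach different recognizers without running every algorithm everywhere.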
SYSTEMS AND METHODS FOR ASSISTING IN OBJECT RECOGNITION IN OBJECT PROCESSING SYSTEMS
An object recognition system includes: an image capture system for capturing at least one image of an object, and for providing image data representative of the captured image; a patch identification system in communication with the image capture system for receiving the image data, and for identifying at least one image patch associated with the captured image, each image patch being associated with a potential grasp location on the object, each potential grasp location being described as an area that may be associated with a contact portion of an end effector of a programmable motion device; a feature identification system for capturing at least one feature of each image patch, for accessing feature image data in a database and for providing feature identification data responsive to the image feature comparison data; and an object identification system for providing object identity data responsive to the image feature comparison data.
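The feature-comparison step that turns patch features into object identity data might look like the toy matcher below. The database layout, the feature representation, and the match threshold are all assumptions:

```python
def identify_object(patch_features, feature_db, min_matches=2):
    """Toy feature comparison: count how many of a patch's features
    appear in each known object's stored feature set, and return the
    best-matching object identity, or None if no object reaches the
    minimum match count."""
    best, best_hits = None, min_matches - 1
    for obj_id, stored in feature_db.items():
        hits = len(set(patch_features) & set(stored))
        if hits > best_hits:
            best, best_hits = obj_id, hits
    return best
```

A real system would use descriptor distances rather than exact set intersection, but the routing from patch features to an identity is the same shape.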
Image segmentation and modification of a video stream
Systems, devices, media, and methods are presented for segmenting an image of a video stream with a client device, identifying an area of interest, generating a modified area of interest within one or more images, identifying a first set of pixels and a second set of pixels, and modifying a color value for the first set of pixels.
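The per-frame recoloring step can be sketched with a boolean segmentation mask; how the mask is produced (the segmentation itself) is out of scope here, so it is simply assumed as input:

```python
import numpy as np

def recolor_region(frame, mask, color):
    """Replace the color value of the first set of pixels (those inside
    the segmented area of interest, given by a boolean mask) while
    leaving the second set (everything outside the mask) untouched."""
    out = frame.copy()
    out[mask] = color  # broadcast the RGB color over masked pixels
    return out
```

Applied frame by frame, this yields the modified video stream.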