Patent classifications: G06T2207/20084
METHOD OF IN-PROCESS DETECTION AND MAPPING OF DEFECTS IN A COMPOSITE LAYUP
A method of detecting defects in a composite layup includes capturing, using an infrared camera, reference images of a reference layup being laid up by a reference layup head. The method also includes manually reviewing the reference images for defects and generating reference defect masks indicating defects in the reference images. The method further includes training a neural network using the reference images and reference defect masks, creating a machine learning model that, given a production image as input, outputs a production defect mask indicating the location and type of each defect. The method also includes capturing, using an infrared camera, production images of a production layup being laid up by a production layup head, and applying the model to the production images to automatically generate production defect masks indicating each defect in the production images.
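The per-pixel defect mask described above can be sketched as follows; the class labels, the temperature thresholds, and the `apply_model` function are all hypothetical stand-ins, with a toy decision rule in place of the trained neural network:

```python
import numpy as np

# Hypothetical class labels for the defect mask; the abstract only says the
# mask encodes defect location and defect type, so this encoding is assumed.
NO_DEFECT, WRINKLE, FOREIGN_OBJECT = 0, 1, 2

def apply_model(production_image: np.ndarray) -> np.ndarray:
    """Stand-in for the trained model: maps an infrared image (2-D float
    array of surface temperatures) to a per-pixel defect mask."""
    mask = np.full(production_image.shape, NO_DEFECT, dtype=np.int64)
    # Toy rule in place of the learned network: unusually hot pixels are
    # labeled one defect type, unusually cold pixels another.
    mask[production_image > 80.0] = FOREIGN_OBJECT
    mask[production_image < 20.0] = WRINKLE
    return mask

# A 3x3 "production image" of surface temperatures (degrees, illustrative).
image = np.array([[50.0, 90.0, 50.0],
                  [50.0, 50.0, 10.0],
                  [50.0, 50.0, 50.0]])
mask = apply_model(image)
```

The mask has the same shape as the image, so each defect's location and type can be read off directly.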
SYSTEM AND METHOD FOR ADDITIVE MANUFACTURING CONTROL
An additive manufacturing apparatus, a computing system, and a method for operating an additive manufacturing apparatus are provided. The method includes obtaining two or more images corresponding to respective build layers at a build plate, wherein each image comprises a plurality of data points comprising a feature and corresponding location at the build plate; removing variation between the features of the plurality of data points; and normalizing each feature to remove location dependence in the plurality of data points.
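The normalization step can be illustrated with a minimal numpy sketch; the layer stack, the vignetting-style location bias, and the z-score normalization are assumptions standing in for whatever statistics the claimed method actually uses:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stack of per-layer images: shape (layers, height, width),
# one feature (e.g. melt-pool intensity) per (x, y) location at the plate.
layers = rng.normal(loc=100.0, scale=5.0, size=(8, 4, 4))

# Inject a location-dependent bias (e.g. optics vignetting toward a corner).
bias = np.linspace(0.0, 20.0, 16).reshape(4, 4)
layers = layers + bias

# Normalize each feature to remove location dependence: subtract the
# per-location mean and divide by the per-location std, both computed
# across build layers.
loc_mean = layers.mean(axis=0)
loc_std = layers.std(axis=0)
normalized = (layers - loc_mean) / loc_std
```

After normalization, every (x, y) location has zero mean and unit variance across layers, so remaining variation reflects the build rather than the location.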
APPARATUS AND METHOD FOR IDENTIFYING CONDITION OF ANIMAL OBJECT BASED ON IMAGE
An image-based animal object condition identification apparatus includes: a communication module that receives an image of an object; a memory that stores therein a program configured to extract animal condition information from the received image; and a processor that executes the program. The program extracts continuous animal detection information of each object by inputting the received image into an animal detection model that is trained based on learning data composed of animal images and determines predetermined animal condition information for each class of each animal object by inputting the continuous animal detection information of each object into an animal condition identification model.
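The two-stage pipeline, a detection model producing continuous per-object detections that feed a condition identification model, might look like the following sketch; the `Detection` record, the movement heuristic, and the condition names are hypothetical stand-ins for the trained models:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Detection:
    """One frame's detection for a tracked object: the 'continuous animal
    detection information' (fields are assumed for illustration)."""
    frame: int
    x: float
    y: float

def detection_model(frames: List[dict]) -> List[Detection]:
    """Stand-in for the trained animal detection model: extracts one
    object's position from each frame."""
    return [Detection(i, f["x"], f["y"]) for i, f in enumerate(frames)]

def condition_model(track: List[Detection]) -> str:
    """Stand-in for the condition identification model: classifies a
    condition from movement across the continuous detections."""
    moved = sum(abs(b.x - a.x) + abs(b.y - a.y)
                for a, b in zip(track, track[1:]))
    return "active" if moved > 1.0 else "resting"

frames = [{"x": 0.0, "y": 0.0}, {"x": 1.0, "y": 0.5}, {"x": 2.0, "y": 1.0}]
condition = condition_model(detection_model(frames))
```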
AIRCRAFT DOOR CAMERA SYSTEM FOR DOCKING ALIGNMENT MONITORING
A camera with a field of view toward an external environment of an aircraft is disposed within an aircraft door such that a ground surface is within the field of view of the camera during taxiing of the aircraft. A display device is disposed within an interior of the aircraft. A processor is operatively coupled to the camera and to the display device. The processor analyzes image data captured by the camera for docking guidance by identifying, within the captured image data, a region on the ground surface corresponding to an alignment fiducial indicating a parking location for the aircraft, determining, based on the region of the captured image data corresponding to the alignment fiducial, a relative location of the aircraft with respect to the alignment fiducial, and outputting an indication of the location of the aircraft relative to the alignment fiducial.
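The last two steps, deriving a relative location from the fiducial's position in the captured image, can be sketched as simple pixel geometry; the centroid input, image size, and left/right labels are illustrative assumptions:

```python
# Hypothetical geometry: given the pixel centroid of the detected alignment
# fiducial and the image dimensions, report which way the aircraft is offset
# from the marked parking location.
def relative_location(fiducial_px, image_size):
    cx = image_size[0] / 2  # image center column, assumed aligned with nose
    dx = fiducial_px[0] - cx
    lateral = "left" if dx < 0 else "right" if dx > 0 else "centered"
    return lateral, dx

# Fiducial detected left of center in a 1280x720 frame.
lateral, dx = relative_location(fiducial_px=(500, 360), image_size=(1280, 720))
```

A real system would convert the pixel offset into a ground-plane distance using the camera's calibration; this sketch only recovers the direction and pixel magnitude.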
NON-TRANSITORY COMPUTER READABLE MEDIUM AND METHOD FOR STYLE TRANSFER
According to one or more embodiments, a non-transitory computer readable medium storing a program which, when executed, causes a computer to perform processing comprising acquiring image data, applying style transfer to the image data a plurality of times based on one or more style images, and outputting data after the style transfer is applied.
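Applying style transfer "a plurality of times" is just a loop over a transfer step; in this sketch a simple linear blend toward the style image stands in for a neural style-transfer pass, so the function and its `strength` parameter are assumptions:

```python
import numpy as np

def style_transfer_step(content: np.ndarray, style: np.ndarray,
                        strength: float = 0.5) -> np.ndarray:
    """Stand-in for one style-transfer application: blends the image toward
    the style image (a real system would use a trained model)."""
    return (1.0 - strength) * content + strength * style

image = np.zeros((2, 2))          # acquired image data (toy)
style = np.full((2, 2), 10.0)     # one style image (toy)

# Apply style transfer a plurality of times, as the claim describes.
out = image
for _ in range(3):
    out = style_transfer_step(out, style)
```

Each pass moves the output further toward the style image, so repeated application strengthens the stylization.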
SYSTEM AND METHOD FOR CALIBRATING A TIME DIFFERENCE BETWEEN AN IMAGE PROCESSOR AND AN INERTIAL MEASUREMENT UNIT BASED ON INTER-FRAME POINT CORRESPONDENCE
Systems and methods are used for calibrating a time difference between an image signal processor (ISP) and an inertial measurement unit (IMU) of an image capture device. An image capture device includes a lens, an image sensor, an IMU, and an ISP. The image sensor detects images as frames and the IMU captures motion data. The ISP detects one or more key points on the frames and matches the one or more key points between the frames. The ISP computes one or more calibration parameters. The one or more calibration parameters are based on the matched key points and a time difference between the ISP and the IMU. The ISP performs a calibration using the calibration parameters.
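The calibration idea, finding the clock offset at which IMU motion best explains the inter-frame keypoint motion, can be sketched as a one-dimensional grid search over synthetic data; the angular-rate profile, frame rate, and search grid are all assumptions:

```python
import numpy as np

# Synthetic setup (all values hypothetical): the device rotates with a known
# angular-rate profile; the IMU clock lags the ISP by true_offset seconds.
true_offset = 0.02
rate = lambda t: np.sin(2 * np.pi * t)          # angular rate, rad/s

frame_times = np.arange(0.0, 1.0, 1.0 / 30.0)   # ISP frame timestamps (30 fps)
# Keypoint motion between matched frames is proportional to angular rate.
observed = rate(frame_times + true_offset)

def calibration_error(offset: float) -> float:
    """Mismatch between keypoint-derived motion and IMU rate at t + offset."""
    return float(np.sum((observed - rate(frame_times + offset)) ** 2))

# Grid-search the time difference between the ISP and IMU clocks.
candidates = np.arange(-0.05, 0.0501, 0.001)
best = min(candidates, key=calibration_error)
```

The recovered offset becomes the calibration parameter used to align IMU samples with frames; a production implementation would refine it continuously rather than by grid search.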
DIGITAL TISSUE SEGMENTATION AND MAPPING WITH CONCURRENT SUBTYPING
Accurate tissue segmentation is performed without a priori knowledge of tissue type or other extrinsic information not found within the subject image, and may be combined with classification analysis so that diseased tissue is not only delineated within an image but also characterized in terms of disease type. In various embodiments, a source image is decomposed into smaller overlapping subimages such as square or rectangular tiles. A predictor such as a convolutional neural network produces tile-level classifications that are aggregated to produce a tissue segmentation and, in some embodiments, to classify the source image or a subregion thereof.
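The decompose-classify-aggregate flow can be sketched as follows; the 2x2 tiles, the intensity-threshold "classifier", and the majority-vote aggregation are toy stand-ins for the convolutional neural network and whatever aggregation the embodiments actually use:

```python
import numpy as np

def tile_classifier(patch: np.ndarray) -> int:
    """Stand-in for the CNN's tile-level prediction: 1 = diseased,
    0 = normal (toy rule: mean intensity above a threshold)."""
    return int(patch.mean() > 0.5)

def segment(image: np.ndarray, tile: int = 2, stride: int = 1) -> np.ndarray:
    """Decompose into overlapping square tiles, classify each tile, and
    aggregate the tile votes per pixel into a segmentation (majority vote)."""
    votes = np.zeros(image.shape)
    counts = np.zeros(image.shape)
    h, w = image.shape
    for i in range(0, h - tile + 1, stride):
        for j in range(0, w - tile + 1, stride):
            label = tile_classifier(image[i:i + tile, j:j + tile])
            votes[i:i + tile, j:j + tile] += label
            counts[i:i + tile, j:j + tile] += 1
    return (votes / counts > 0.5).astype(int)

image = np.zeros((4, 4))
image[:, 2:] = 1.0          # right half "diseased" in this toy image
seg = segment(image)
```

Because tiles overlap, each pixel receives several votes, which smooths the boundary between regions relative to classifying disjoint tiles.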
SYSTEM AND METHOD FOR GENERATING VIRTUAL PSEUDO 3D OUTPUTS FROM IMAGES
A method for generating virtual pseudo three-dimensional (3D) 360-degree outputs from 2D images of an object 102 is provided. An image viewer plane of the object 102 in the 3D image to be rendered on a user device 108 is detected using an augmented reality technique. The image viewer plane is placed facing the user device 108 rendering ‘Image 0’, and movement coordinates of the user device 108 with respect to the image viewer plane are detected to calculate the virtual pseudo 3D image set to be displayed based on at least one angle of view by performing interpolation between two consecutive virtual pseudo 3D images. The image viewer plane is changed with respect to the movement of the user device 108 to change the virtual pseudo 3D image and the interpolated virtual pseudo 3D image on the plane, and that image is displayed as an augmented reality object in real time on the user device 108.
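Interpolating between two consecutive precomputed views based on the angle of view reduces to a bracketing lookup plus a linear blend; the 30-degree view spacing and the constant-valued placeholder "views" here are assumptions:

```python
import numpy as np

# Hypothetical precomputed view set: one rendered 2-D image every 30 degrees
# around the object (constant arrays stand in for real renders).
view_angles = range(0, 361, 30)
views = {a: np.full((2, 2), float(a)) for a in view_angles}

def view_for_angle(angle: float) -> np.ndarray:
    """Pick the two consecutive pseudo-3D views bracketing the device's
    angle of view and linearly interpolate between them."""
    lo = int(angle // 30) * 30
    hi = lo + 30
    t = (angle - lo) / 30.0
    return (1.0 - t) * views[lo] + t * views[hi]

frame = view_for_angle(45.0)   # halfway between the 30-degree and 60-degree views
```

As the device moves, the angle changes continuously and each intermediate frame is synthesized on the fly rather than stored.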
SYSTEMS AND METHODS FOR PROVIDING DISPLAYED FEEDBACK WHEN USING A REAR-FACING CAMERA
A system includes a processor and a non-transitory computer-readable medium containing instructions that, when executed by the processor, cause the processor to perform operations comprising displaying a prompt to a user of a mobile device, on a display of the mobile device, to capture an image representing at least a portion of a mouth of the user using a rear-facing camera of the mobile device, where the rear-facing camera is on the side of the mobile device opposite the side including the display. The operations further comprise controlling the rear-facing camera to enable the rear-facing camera to capture the image, receiving the image, and outputting user feedback based on the image, where the user feedback is outputted on the display that is on the opposite side of the mobile device from the rear-facing camera.
PART INSPECTION SYSTEM HAVING GENERATIVE TRAINING MODEL
A part inspection system includes a vision device configured to image a part being inspected and generate a digital image of the part. The system includes a part inspection module communicatively coupled to the vision device that receives the digital image of the part as an input image. The part inspection module includes a defect detection model. The defect detection model includes a template image. The defect detection model compares the input image to the template image to identify defects. The defect detection model generates an output image. The defect detection model is configured to overlay defect identifiers on the output image at the identified defect locations, if any.
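The template-comparison step can be sketched as a pixel-wise difference against the template image, with a marker value standing in for the overlaid defect identifiers; the threshold and the marker value are assumptions:

```python
import numpy as np

def inspect(input_image: np.ndarray, template: np.ndarray,
            threshold: float = 0.2):
    """Compare the input image to the template image pixel by pixel and
    return the output image plus coordinates of identified defects."""
    diff = np.abs(input_image - template)
    defect_coords = np.argwhere(diff > threshold)
    output = input_image.copy()
    for y, x in defect_coords:   # overlay a defect identifier (marker value)
        output[y, x] = -1.0
    return output, [tuple(c) for c in defect_coords]

template = np.zeros((3, 3))      # image of a known-good part
part = np.zeros((3, 3))
part[1, 2] = 1.0                 # simulated defect on the inspected part
output, defects = inspect(part, template)
```

In a real inspection system the template would come from reference imagery (or a generative training model), and the identifiers would be drawn as visible annotations rather than a sentinel pixel value.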
A part inspection system includes a vision device configured to image a part being inspected and generate a digital image of the part. The system includes a part inspection module communicatively coupled to the vision device and receives the digital image of the part as an input image. The part inspection module includes a defect detection model. The defect detection model includes a template image. The defect detection model compares the input image to the template image to identify defects. The defect detection model generates an output image. The defect detection model configured to overlay defect identifiers on the output image at the identified defect locations, if any.