Patent classifications
G06T3/0075
SYSTEM AND METHOD FOR COHESIVE MULTI-REGIONAL FUNCTIONAL-ANATOMICAL MEDICAL IMAGE REGISTRATION
A method includes applying both a first dedicated functional-anatomical registration scheme to a first volume of interest to deform the first volume of interest and a second dedicated functional-anatomical registration scheme to a second volume of interest to deform the second volume of interest, wherein the first volume of interest at least partially encompasses the second volume of interest. The method includes identifying or segmenting relevant organs or anatomical structures related to a first group and a second group in the first volume of interest and the second volume of interest, respectively; generating a spatially smooth-transition weight mask that gives higher weight to image data corresponding to the identified or segmented relevant organs or anatomical structures related to the first group and the second group; and generating a final cohesive registered image volume from the first image volume and the second image volume utilizing the spatially smooth-transition weight mask.
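A minimal sketch of the blending idea described above, not the patented method: two independently registered volumes are combined with a spatially smooth weight mask derived from an organ segmentation. The array names, the Gaussian smoothing step, and the sigma value are assumptions introduced for illustration.

import numpy as np
from scipy.ndimage import gaussian_filter

def blend_registered_volumes(vol_outer, vol_inner, inner_organ_mask, sigma=5.0):
    """Combine a registration of the encompassing volume (vol_outer) and a
    dedicated registration of the inner volume (vol_inner) using a smooth
    weight mask that favors vol_inner near the segmented organs."""
    # Binary segmentation -> spatially smooth weights in [0, 1]
    weights = gaussian_filter(inner_organ_mask.astype(np.float32), sigma=sigma)
    weights = np.clip(weights / (weights.max() + 1e-8), 0.0, 1.0)
    # Weighted combination yields the final cohesive registered volume
    return weights * vol_inner + (1.0 - weights) * vol_outer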
Photogrammetric alignment for immersive content production
A method of content production includes generating a survey of a performance area that includes a point cloud representing a first physical object, in a survey graph hierarchy, constraining the point cloud and a taking camera coordinate system as child nodes of an origin of a survey coordinate system, obtaining virtual content including a first virtual object that corresponds to the first physical object, applying a transformation to the origin of the survey coordinate system so that at least a portion of the point cloud that represents the first physical object is substantially aligned with a portion of the virtual content that represents the first virtual object, displaying the first virtual object on one or more displays from a perspective of the taking camera, capturing, using the taking camera, one or more images of the performance area, and generating content based on the one or more images.
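A minimal sketch of the alignment step under assumed structure, not the production pipeline: a single rigid transform is estimated from corresponding physical/virtual points and applied at the survey origin, so the point cloud and the taking camera (its child nodes) move together.

import numpy as np

def rigid_alignment(src_pts, dst_pts):
    """Kabsch-style rigid transform (R, t) mapping src_pts onto dst_pts."""
    src_c, dst_c = src_pts.mean(0), dst_pts.mean(0)
    H = (src_pts - src_c).T @ (dst_pts - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # correct an improper rotation
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

# Applying T to the survey origin moves the point cloud and the taking
# camera together, keeping their relative poses within the hierarchy intact.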
Automatic correction method for onboard camera and onboard camera device
There is provided an automatic correction method for an onboard camera and an onboard camera device. The automatic correction method includes the following steps: obtaining a lane image with the onboard camera and a current extrinsic parameter matrix, and identifying two lane lines in the lane image; converting the lane image into a top-view lane image, and obtaining two projected lane lines in the top-view lane image for the two lane lines; calculating a plurality of correction parameter matrices corresponding to the current extrinsic parameter matrix according to the two projected lane lines; and correcting the current extrinsic parameter matrix according to the plurality of correction parameter matrices. The method can be applied, whether the vehicle is stationary or travelling, to automatically correct the extrinsic parameter matrix of the onboard camera.
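A minimal sketch of the top-view check under assumed geometry, not the claimed correction procedure: the two detected lane lines are projected into a top view with a homography built from the current extrinsics, and their deviation from vertical/parallel gives a small yaw correction. The homography H_top and the angle-averaging rule are assumptions.

import numpy as np
import cv2

def top_view_yaw_error(H_top, line1, line2):
    """line1/line2: endpoints ((x0, y0), (x1, y1)) of the lane lines in the
    source image; H_top: 3x3 homography from the current extrinsics."""
    pts = np.float32([line1[0], line1[1], line2[0], line2[1]]).reshape(-1, 1, 2)
    top = cv2.perspectiveTransform(pts, H_top).reshape(4, 2)
    a1 = np.arctan2(top[1, 0] - top[0, 0], top[1, 1] - top[0, 1])
    a2 = np.arctan2(top[3, 0] - top[2, 0], top[3, 1] - top[2, 1])
    # In a correct top view both projected lane lines run parallel to the
    # driving direction, so their mean angle approximates the yaw error.
    return 0.5 * (a1 + a2)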
SYMBOL RECOGNITION FROM RASTER IMAGES OF P&IDs USING A SINGLE INSTANCE PER SYMBOL CLASS
Traditional systems that extract information from Piping and Instrumentation Diagrams (P&IDs) lack accuracy due to noise in the images, or require a significant volume of annotated symbols for training when deep learning models that provide good accuracy are utilized. Conventional few-shot/one-shot learning approaches require a significant number of training tasks for meta-training beforehand. The present disclosure provides a method and system that utilizes a one-shot learning approach enabling symbol recognition from a single instance per symbol class, which is represented as a graph with points (pixels) sampled along the boundaries of the symbols present in the P&ID, and subsequently utilizes a Graph Convolutional Neural Network (GCNN), or a GCNN appended to a Convolutional Neural Network (CNN), for symbol classification. Accordingly, given a clean symbol image for each symbol class, all instances of the symbol class may be recognized from noisy and crowded P&IDs.
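A minimal sketch of the graph-construction step, an assumption about the preprocessing rather than the disclosed model: pixels are sampled along a symbol's boundary and each point is connected to its k nearest neighbours, producing the node coordinates and adjacency that a GCNN classifier could consume. The parameter values are illustrative.

import numpy as np
import cv2

def symbol_to_graph(symbol_img, num_points=64, k=4):
    """symbol_img: single-channel binary image of one clean symbol instance."""
    contours, _ = cv2.findContours(symbol_img, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    boundary = np.vstack([c.reshape(-1, 2) for c in contours])
    idx = np.linspace(0, len(boundary) - 1, num_points).astype(int)
    pts = boundary[idx].astype(np.float32)          # node features (x, y)
    d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    neighbours = np.argsort(d, axis=1)[:, :k]       # k-NN adjacency per node
    return pts, neighbours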
STRUCTURAL MASKING FOR PROGRESSIVE HEALTH MONITORING
A method of structural masking for progressive health monitoring of a structural component includes receiving a current image of the structural component. A processor aligns the current image and a reference image of the structural component. The processor performs a structure estimation on the current image and the reference image to produce a current structure estimate image and a reference structure estimate image. The processor generates a structural mask from the reference structure estimate image. The processor masks the current structure estimate image with the structural mask to identify one or more health monitoring analysis regions including a potential defect or damaged area appearing in the masked current structure estimate image that does not appear in the reference structure estimate image.
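A minimal sketch with assumed operators, not the patented pipeline: edge maps stand in for the structure estimate, a dilated reference edge map serves as the structural mask, and masking the current estimate leaves candidate regions that appear in the current image but not in the reference.

import numpy as np
import cv2

def defect_candidates(current, reference, dilate_px=5):
    """current/reference: pre-aligned grayscale (uint8) images of the component."""
    cur_edges = cv2.Canny(current, 50, 150)          # current structure estimate
    ref_edges = cv2.Canny(reference, 50, 150)        # reference structure estimate
    kernel = np.ones((dilate_px, dilate_px), np.uint8)
    structural_mask = cv2.dilate(ref_edges, kernel)  # known, expected structure
    # Keep only structure in the current image that the mask does not explain
    candidates = cv2.bitwise_and(cur_edges, cv2.bitwise_not(structural_mask))
    return candidates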
Biometric Authentication Using Head-Mounted Devices
A head-mounted wearable device includes a frame mountable on a head of a user; an infrared imaging device arranged to image a face of the user when the frame is mounted on the head of the user; and a computing system configured to perform operations including causing the infrared imaging device to capture an image of the face of the user using infrared light received at the infrared imaging device and initiating a biometric authentication process based on the image. The head-mounted wearable device may include a visible-light imaging device to image the face of the user, with the computing system configured to perform operations including causing the visible-light imaging device to capture a second image of the face of the user using visible light received at the visible-light imaging device, with the biometric authentication process being based in part on the second image.
IMAGE ROTATION FOR STREAM OF INPUT IMAGES
A method for processing a stream of input images is described. A stream of input images that are from an image sensor is received. The stream comprises an initial sequence of input images including a subject having an initial orientation. A change in an angular orientation of the image sensor while receiving the stream of input images is determined. In response to determining the change in the angular orientation, a subsequent sequence of input images of the stream of input images is processed for rotation to counteract the change in the angular orientation of the image sensor and maintain the subject in the initial orientation. The stream of input images is transmitted to one or more display devices.
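A minimal sketch with assumed interfaces: once a change in the sensor's angular orientation is detected, each subsequent frame is rotated by the opposite angle so the subject keeps its initial orientation on the display. The function and variable names are illustrative.

import cv2

def counter_rotate(frame, delta_degrees):
    """Rotate a frame about its centre by -delta_degrees to undo the change."""
    h, w = frame.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), -delta_degrees, 1.0)
    return cv2.warpAffine(frame, M, (w, h))

# for frame in subsequent_frames:      # frames arriving after the detected change
#     display(counter_rotate(frame, measured_delta))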
Realtime image analysis and feedback
Performing realtime image analysis and providing realtime feedback are disclosed. A stream comprising a plurality of arriving RAW images is received. A RAW image included in the stream is sent to a graphics processing unit (GPU). A result from the GPU is used to generate a visualization corresponding to the RAW image. The visualization is co-presented with a realtime view of a scene in a display.
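A minimal sketch with an assumed metric and assumed libraries, not the disclosed system: a per-pixel saturation check on an arriving RAW frame is offloaded to the GPU (CuPy here, purely as an example), and the result is turned into an RGBA overlay that could be co-presented with the live view.

import numpy as np
import cupy as cp

def clipping_overlay(raw_frame, saturation=65000):
    """raw_frame: 2D uint16 RAW mosaic; returns an RGBA overlay (uint8)."""
    g = cp.asarray(raw_frame)                          # copy frame to the GPU
    clipped = (g >= saturation).astype(cp.uint8) * 255  # near-saturated pixels
    overlay = cp.zeros(raw_frame.shape + (4,), dtype=cp.uint8)
    overlay[..., 0] = clipped                          # red channel
    overlay[..., 3] = clipped                          # opaque only where clipped
    return cp.asnumpy(overlay)                         # composite over live view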
EYE CENTER LOCALIZATION METHOD AND LOCALIZATION SYSTEM THEREOF
An eye center localization method includes performing an image sketching step, a frontal face generating step, an eye center marking step and a geometric transforming step. The image sketching step is performed to drive a processing unit to sketch a face image from an input image. The frontal face generating step is performed to drive the processing unit to transform the face image into a frontal face image according to a frontal face generating model. The eye center marking step is performed to drive the processing unit to mark frontal eye center position information on the frontal face image. The geometric transforming step is performed to drive the processing unit to calculate two rotating variables between the face image and the frontal face image, and calculate eye center position information of the face image according to the two rotating variables and the frontal eye center position information.
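A minimal sketch using a planar approximation, clearly not the disclosed model: an eye centre marked on the frontalized face is mapped back toward the original face image using two estimated rotation variables (taken here to be yaw and pitch, which is an assumption).

import numpy as np

def map_eye_center_back(frontal_eye, face_center, yaw, pitch):
    """frontal_eye, face_center: (x, y) in image coordinates; yaw/pitch in
    radians, estimated between the face image and the frontal face image."""
    dx = frontal_eye[0] - face_center[0]
    dy = frontal_eye[1] - face_center[1]
    # Horizontal offsets foreshorten with yaw, vertical offsets with pitch
    x = face_center[0] + dx * np.cos(yaw)
    y = face_center[1] + dy * np.cos(pitch)
    return (x, y)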
METHOD, APPARATUS, AND COMPUTER PROGRAM PRODUCT FOR ENSURING CONTINUITY OF FEATURES BETWEEN SPATIALLY PARTITIONED MAPS
A method is provided to ensure continuity of features through spatially partitioned maps. Methods may include: identifying a map element extending from a first map tile to a second map tile; determining a first set of continuous features of the map element in the first map tile; determining a second set of continuous features of the map element in the second map tile; identifying a first set of locations in a plane separating the first map tile from the second map tile where the first set of continuous features intersect the plane; identifying a second set of locations where the second set of continuous features intersect the plane; correlating the first set of continuous features with the second set of continuous features; blending the first and second sets of continuous features; and updating map data including the first map tile and the second map tile with a blended map element.
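A minimal sketch with an assumed data layout, not the claimed method: the points where a map element's continuous features cross the shared tile boundary are correlated by nearest distance and snapped to their midpoints, so the blended element continues seamlessly from one tile to the next.

import numpy as np

def blend_boundary_points(pts_tile_a, pts_tile_b):
    """pts_tile_a / pts_tile_b: (N, 2) and (M, 2) intersection points of the
    continuous features with the plane separating the two tiles."""
    blended = []
    for p in pts_tile_a:
        j = np.argmin(np.linalg.norm(pts_tile_b - p, axis=1))  # correlate
        blended.append(0.5 * (p + pts_tile_b[j]))               # blend
    return np.array(blended)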