Patent classifications
G06T11/00
AI ENABLED COUPON CODE GENERATION FOR IMPROVED USER EXPERIENCE
An embodiment for generating an electronic coupon based on user preferences is provided. The embodiment may include receiving real-time and historical data relating to one or more reward coupons. The embodiment may also include identifying a contextual situation of the user and one or more preferences of the user regarding a coupon reward type. The embodiment may further include identifying one or more vendors that match the one or more preferences of the user. The embodiment may also include generating one or more electronic coupons and presenting the one or more generated electronic coupons to the user. The embodiment may further include, in response to determining that the one or more generated electronic coupons match at least one preference of the user, adding the one or more generated electronic coupons that match the at least one preference of the user to an account of the user.
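As an illustration of the preference-matching flow described in the abstract, below is a minimal Python sketch. All names (UserProfile, Vendor, Coupon, match_vendors, generate_and_add_coupons) and the matching rule are hypothetical assumptions, not drawn from the patent.

```python
# Minimal sketch of the coupon-matching flow; all names and the
# matching rule are illustrative assumptions, not the patented method.
from dataclasses import dataclass, field

@dataclass
class Vendor:
    name: str
    reward_types: set[str]          # e.g. {"cashback", "discount"}

@dataclass
class Coupon:
    vendor: str
    reward_type: str

@dataclass
class UserProfile:
    preferred_reward_types: set[str]
    account_coupons: list[Coupon] = field(default_factory=list)

def match_vendors(user: UserProfile, vendors: list[Vendor]) -> list[Vendor]:
    """Identify vendors whose reward types overlap the user's preferences."""
    return [v for v in vendors if v.reward_types & user.preferred_reward_types]

def generate_and_add_coupons(user: UserProfile, vendors: list[Vendor]) -> list[Coupon]:
    """Generate a coupon per matching vendor/reward type and, since each
    matches at least one user preference, add it to the user's account."""
    generated = [
        Coupon(v.name, rt)
        for v in match_vendors(user, vendors)
        for rt in (v.reward_types & user.preferred_reward_types)
    ]
    user.account_coupons.extend(generated)
    return generated

user = UserProfile(preferred_reward_types={"cashback"})
vendors = [Vendor("StoreA", {"cashback", "points"}), Vendor("StoreB", {"discount"})]
print(generate_and_add_coupons(user, vendors))  # one cashback coupon for StoreA
```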
SYSTEM AND METHOD FOR ADAPTIVE COINCIDENCE PROCESSING FOR HIGH COUNT RATES
A method for adaptive coincidence data processing is provided. The method includes detecting positron annihilation events with a detector array of a positron emission tomography (PET) scanner, wherein the PET scanner includes multiple detector rings disposed along a longitudinal axis of the PET scanner, and each detector ring includes multiple detectors. The method also includes, within a given time period, dynamically adjusting the number of positron annihilation events accepted and transmitted to acquisition circuitry for processing, utilizing the numerical difference in detector rings along the longitudinal axis between a first detector and a second detector that detect the respective annihilation photons of a positron annihilation event.
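The abstract implies gating coincidences by the axial ring difference between the two detectors and adjusting that gate dynamically with the count rate. The Python sketch below shows one way this could work; the threshold-update rule and all names (ring_of, accept_event, adjust_ring_limit) are assumptions, not the patented method.

```python
# Hedged sketch of ring-difference-based event gating. The update rule
# is an assumed example; the patent only states that the number of
# accepted events is adjusted using the detector-ring difference.

def ring_of(detector_id: int, detectors_per_ring: int) -> int:
    """Map a flat detector index to its ring index along the axis."""
    return detector_id // detectors_per_ring

def accept_event(det_a: int, det_b: int, max_ring_diff: int,
                 detectors_per_ring: int) -> bool:
    """Accept a coincidence only if the axial ring difference between
    the two detectors is within the current limit."""
    diff = abs(ring_of(det_a, detectors_per_ring)
               - ring_of(det_b, detectors_per_ring))
    return diff <= max_ring_diff

def adjust_ring_limit(count_rate: float, target_rate: float,
                      current_limit: int, n_rings: int) -> int:
    """Tighten the allowed ring difference when the count rate exceeds
    the acquisition target, relax it otherwise (assumed rule)."""
    if count_rate > target_rate and current_limit > 0:
        return current_limit - 1
    if count_rate < target_rate and current_limit < n_rings - 1:
        return current_limit + 1
    return current_limit
```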
SYSTEM AND METHOD FOR GENERATING 3D OBJECTS FROM 2D IMAGES OF GARMENTS
A system for generating three-dimensional (3D) objects from two-dimensional (2D) images of garments is presented. The system includes a data module configured to receive a 2D image of a selected garment and a target 3D model. The system further includes a computer vision model configured to generate a UV map of the 2D image of the selected garment. The system moreover includes a training module configured to train the computer vision model based on a plurality of 2D training images and a plurality of ground truth (GT) panels for a plurality of 3D training models. The system furthermore includes a 3D object generator configured to generate a 3D object corresponding to the selected garment based on the UV map generated by the trained computer vision model and the target 3D model. A related method is also presented.
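A rough Python sketch of the described pipeline, with stand-in components: the model internals, the names (ComputerVisionModel, generate_3d_object), and the data layout are assumptions, since the abstract specifies only the modules and their inputs and outputs.

```python
# Illustrative pipeline sketch; the networks are stand-in callables.
import numpy as np

class ComputerVisionModel:
    """Stand-in for the trained model that maps a 2D garment image to a
    UV map (in the system it is trained on 2D images and GT panels)."""
    def predict_uv_map(self, image: np.ndarray) -> np.ndarray:
        # A real model would be a trained network; here we return a
        # placeholder (u, v) texture of the same spatial size.
        h, w = image.shape[:2]
        return np.zeros((h, w, 2), dtype=np.float32)

def generate_3d_object(uv_map: np.ndarray,
                       target_model_vertices: np.ndarray,
                       target_model_uvs: np.ndarray) -> dict:
    """Combine the predicted UV map with the target 3D model by
    associating each mesh vertex with its UV lookup coordinate."""
    return {
        "vertices": target_model_vertices,  # (N, 3) positions
        "uvs": target_model_uvs,            # (N, 2) lookups into uv_map
        "texture": uv_map,
    }

model = ComputerVisionModel()
uv = model.predict_uv_map(np.zeros((256, 256, 3)))
obj = generate_3d_object(uv, np.zeros((100, 3)), np.zeros((100, 2)))
```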
Messaging system with augmented reality makeup
Systems, methods, and computer readable media for a messaging system with augmented reality (AR) makeup are presented. Methods include processing a first image to extract a makeup portion of the first image, the makeup portion representing the makeup from the first image, and training a neural network to process images of people to add AR makeup representing the makeup from the first image. The methods may further include receiving, via a messaging application implemented by one or more processors of a user device, input that indicates a selection to add the AR makeup to a second image of a second person. The methods may further include processing the second image with the neural network to add the AR makeup to the second image and causing the second image with the AR makeup to be displayed on a display device of the user device.
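A conceptual Python sketch of the two named steps, extraction and learned application. The mask-based extraction, the simple blending stand-in for the trained network, and all names are illustrative assumptions only.

```python
# Conceptual sketch of the AR-makeup flow; extraction and network
# details are assumptions, since the abstract only names the steps.
import numpy as np

def extract_makeup_portion(first_image: np.ndarray,
                           face_mask: np.ndarray) -> np.ndarray:
    """Keep only the pixels inside an (assumed) makeup region mask."""
    return first_image * face_mask[..., None]

class MakeupTransferNet:
    """Stand-in for the neural network trained to add the AR makeup."""
    def __init__(self, makeup_reference: np.ndarray):
        self.makeup_reference = makeup_reference  # extracted makeup portion

    def apply(self, second_image: np.ndarray, alpha: float = 0.5) -> np.ndarray:
        # Placeholder: a trained network would transfer the makeup to
        # the second person's face; we simply blend for illustration.
        return (1 - alpha) * second_image + alpha * self.makeup_reference
```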
Methods, systems, and computer readable media for mask embedding for realistic high-resolution image synthesis
The subject matter described herein includes methods, systems, and computer readable media for mask embedding for realistic high-resolution image synthesis. One method for mask embedding for realistic high-resolution image synthesis includes receiving, as input, a mask embedding vector and a latent features vector, wherein the mask embedding vector acts as a semantic constraint; generating, using a trained image synthesis algorithm and the input, a realistic image, wherein the realistic image is constrained by the mask embedding vector; and outputting, by the trained image synthesis algorithm, the realistic image to a display or a storage device.
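A minimal PyTorch sketch of a generator conditioned on a mask embedding, matching the described inputs (a mask embedding vector plus a latent features vector). The architecture, layer sizes, and names are assumptions; the source does not specify them.

```python
# Minimal sketch: the mask embedding acts as a semantic constraint by
# being concatenated with the latent vector before synthesis.
# Architecture and dimensions are assumed for illustration.
import torch
import torch.nn as nn

class MaskEmbeddingGenerator(nn.Module):
    def __init__(self, latent_dim: int = 128, embed_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + embed_dim, 256),
            nn.ReLU(),
            nn.Linear(256, 3 * 32 * 32),   # small image for illustration
            nn.Tanh(),
        )

    def forward(self, z: torch.Tensor, mask_embedding: torch.Tensor) -> torch.Tensor:
        x = torch.cat([z, mask_embedding], dim=1)  # condition on the mask
        return self.net(x).view(-1, 3, 32, 32)

gen = MaskEmbeddingGenerator()
img = gen(torch.randn(1, 128), torch.randn(1, 64))  # mask-constrained sample
```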
Multi-color quantitative magnetic nanoparticle imaging method and system based on trapezoidal wave excitation
A multi-color quantitative magnetic nanoparticle imaging method and system based on trapezoidal wave excitation addresses the inability of existing technology to implement multi-color quantitative magnetic particle imaging. The method includes: constructing, based on the hysteresis effect and the hysteresis inertial growth differences of n superparamagnetic iron oxide nanoparticles (SPIOs) under trapezoidal wave excitation, an equation set for the masses of the n SPIOs in a to-be-tested sample formed from any combination of n SPIO standard products; solving the equation set to obtain the mass distribution of the to-be-tested sample at position r; and performing rearrangement, color assignment, and image merging on the mass distribution to implement multi-color quantitative imaging of the various particles in magnetic particle imaging (MPI). The method broadens the functions of MPI to realize multi-color quantitative imaging, giving MPI greater potential for application in the medical field.
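If the per-position signal is modeled as a linear combination of the n SPIO standards' calibrated responses (an assumption made here for illustration), recovering the n masses at a position r reduces to solving an n-by-n system, as in this Python sketch. Variable names are ours, not from the patent.

```python
# Sketch of per-position mass recovery: signals measured under n
# excitation conditions are modeled as a linear combination of the
# n SPIO standards' responses; the linearity assumption is ours.
import numpy as np

def recover_masses(response_matrix: np.ndarray, measured: np.ndarray) -> np.ndarray:
    """Solve A @ m = s for the mass vector m at one position r, where
    A[i, j] is the calibrated response of standard j under condition i."""
    return np.linalg.solve(response_matrix, measured)

# Example with n = 2 SPIO types and 2 measurement conditions:
A = np.array([[1.0, 0.4],
              [0.3, 1.2]])
s = np.array([0.9, 1.1])
print(recover_masses(A, s))  # per-type masses at position r
```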
Systems and methods for gamification of instrument inspection and maintenance
Disclosed is a gamification system for overlaying user-controlled graphical targeting elements over a real-time video feed of an instrument being inspected, and providing interactive controls for firing virtual weapons or other graphical indicators to designate and/or record the presence of contaminants, defects, and/or other issues at specific locations within or on the instrument. The system may receive and present images of the instrument under inspection in a graphical user interface (“GUI”). The system may receive user input that tags a particular region of a particular image with an issue identifier, and may generate a visualization that is presented in conjunction with the particular image in the GUI in response to receiving the input. The visualization corresponds to the firing of a virtual weapon and other gaming visuals associated with tagging the particular region of the particular image with the issue identifier.
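A toy Python sketch of the tagging flow: a user "fire" action records an issue identifier at a location in a frame and returns the tag so a GUI layer could render the weapon-fire visualization there. GUI wiring is omitted and all names are illustrative.

```python
# Toy sketch of the tagging flow; names and structure are assumptions.
from dataclasses import dataclass

@dataclass
class IssueTag:
    frame_id: int
    x: int
    y: int
    issue_id: str       # e.g. "contaminant", "defect"

class InspectionSession:
    def __init__(self):
        self.tags: list[IssueTag] = []

    def on_user_fire(self, frame_id: int, x: int, y: int, issue_id: str) -> IssueTag:
        """Record the tagged region and return it so the GUI layer can
        render the weapon-fire visualization at (x, y)."""
        tag = IssueTag(frame_id, x, y, issue_id)
        self.tags.append(tag)
        return tag

session = InspectionSession()
session.on_user_fire(frame_id=42, x=310, y=128, issue_id="contaminant")
print(session.tags)
```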