Patent classifications
G06V10/54
Method and Device for Identification of Effect Pigments in a Target Coating
Disclosed herein is a computer-implemented method, a respective device, and a non-transitory computer-readable medium. The method includes: obtaining color values, texture values and digital images of a target coating, retrieving from a database one or more preliminary matching formulas based on the color and/or texture values obtained for the target coating, determining sparkle points within the respective obtained images and within the respective images associated with the one or more preliminary matching formulas, creating subimages of each sparkle point from the respective images, providing the created subimages to a convolutional neural network, the convolutional neural network being trained to correlate a respective subimage of a respective sparkle point with a pigment and/or pigment class, and determining, based on an output of the neural network, at least one of the one or more preliminary matching formulas as the formula(s) best matching the target coating.
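The sparkle-point pipeline in this abstract (detect bright sparkle points, crop a fixed-size subimage around each one for a CNN) can be sketched as follows. This is an illustrative sketch only: the function names, the plain brightness threshold, and the 5×5 patch size are assumptions, not the patented method, which leaves the detection technique and patch geometry open.

```python
import numpy as np

def find_sparkle_points(image, threshold=0.9):
    """Return (row, col) coordinates of pixels brighter than the threshold.

    A real system would likely use blob detection; a plain brightness
    threshold is an illustrative stand-in.
    """
    rows, cols = np.where(image >= threshold)
    return list(zip(rows.tolist(), cols.tolist()))

def crop_subimages(image, points, size=5):
    """Crop a size x size patch centred on each sparkle point.

    Points too close to the border are skipped so every patch has the
    same shape, as a CNN requires a fixed input size.
    """
    half = size // 2
    patches = []
    for r, c in points:
        if half <= r < image.shape[0] - half and half <= c < image.shape[1] - half:
            patches.append(image[r - half:r + half + 1, c - half:c + half + 1])
    return patches
```

Each returned patch would then be fed to the trained CNN, whose per-patch pigment-class output is aggregated to rank the preliminary matching formulas.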
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM
An information processing apparatus (20) according to the present disclosure includes a control unit (230). When a display image is displayed on a transmissive display in which real space is visually recognizable, the control unit (230) detects, from the display image, a transparent area through which the real space is seen. The control unit (230) corrects pixel values of at least a part of the transparent area of the display image.
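The two steps of this abstract (detect the transparent area, then correct pixel values within it) can be sketched on an RGBA image. This is a minimal sketch under assumed conventions: treating low alpha as "transparent" and filling with opaque black are illustrative choices; the disclosure does not specify how the transparent area is represented or what correction is applied.

```python
import numpy as np

def detect_transparent_area(rgba, alpha_threshold=16):
    """Boolean mask of pixels whose alpha is low enough that the real
    space behind a transmissive display would show through."""
    return rgba[..., 3] < alpha_threshold

def correct_transparent_pixels(rgba, mask, fill=(0, 0, 0, 255)):
    """Overwrite the pixel values inside the detected transparent area.

    Filling with an opaque value is one possible correction; the
    disclosure leaves the exact correction open.
    """
    out = rgba.copy()
    out[mask] = fill
    return out
```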
Systems and methods for property damage restoration predictions based upon processed digital images
Embodiments of the present invention provide methods, systems, apparatuses, and computer program products for predicting property damage restoration estimates. In one embodiment, a computing entity or apparatus is configured to receive, from a client device, a property damage restoration estimate request comprising one or more digital image files; retrieve policy data associated with a user of the client device, the policy data comprising user identification properties and policy properties; programmatically generate, by fraud detection/prediction circuitry and based on the one or more digital image files, a first predictive value, wherein the first predictive value represents a likelihood that at least one of the digital image files was fraudulently altered; upon identifying that the first predictive value does not exceed a fraud threshold, programmatically generate, by property restoration estimate prediction circuitry and based on the one or more digital image files, a second predictive value, wherein the second predictive value represents a property damage restoration estimate, wherein the second predictive value is based at least on the policy properties contained in the policy data and the one or more digital image files; and substantially instantaneously transmit a property damage restoration estimate response comprising the property damage restoration estimate to the client device.
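The gating logic above (compute the restoration estimate only when the fraud score stays under the threshold) can be sketched as a two-stage pipeline. The names `handle_estimate_request`, `fraud_model`, and `estimate_model` are hypothetical stand-ins for the claimed fraud detection/prediction and estimate prediction circuitry.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class EstimateResponse:
    fraud_score: float
    estimate: Optional[float]  # None when the request is flagged as possible fraud

def handle_estimate_request(images, policy_data, fraud_model, estimate_model,
                            fraud_threshold=0.5):
    """Two-stage pipeline: run the fraud model first, and only compute the
    restoration estimate when the fraud score does not exceed the threshold."""
    fraud_score = fraud_model(images)
    if fraud_score > fraud_threshold:
        return EstimateResponse(fraud_score, None)
    estimate = estimate_model(images, policy_data)
    return EstimateResponse(fraud_score, estimate)
```

With stub models in place of the trained ones, a clean request yields an estimate while a flagged one short-circuits with `estimate=None`.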
MAGNETIC RESONANCE (MR) IMAGE ARTIFACT DETERMINATION USING TEXTURE ANALYSIS FOR IMAGE QUALITY (IQ) STANDARDIZATION AND SYSTEM HEALTH PREDICTION
An apparatus (100) comprises at least one electronic processor (101, 113) programmed to: control an associated medical imaging device (120) to acquire an image (130); compute values of textural features (132) for the acquired image; generate a signature (140) from the computed values of the textural features; and at least one of: display the signature on a display device (105); and apply an artificial intelligence (AI) component (150) to the generated signature to output image artifact metrics (152) for a set of image artifacts and display an image quality assessment based on the image artifact metrics on the display device.
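The "compute textural features, then assemble them into a signature" step can be sketched with a few simple statistics. The particular features (mean, variance, histogram entropy) are illustrative assumptions; MR image-quality pipelines typically use richer texture descriptors such as gray-level co-occurrence features.

```python
import numpy as np

def textural_features(image):
    """Compute a few simple textural features for an intensity image in [0, 1].

    Mean, variance, and histogram entropy stand in for the richer feature
    sets (e.g. GLCM statistics) a production system would compute.
    """
    hist, _ = np.histogram(image, bins=16, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    entropy = float(-np.sum(p * np.log2(p)))
    return {
        "mean": float(image.mean()),
        "variance": float(image.var()),
        "entropy": entropy,
    }

def signature(features):
    """Order the feature values into a fixed-length signature vector, which
    can be displayed or fed to an AI component for artifact scoring."""
    return np.array([features[k] for k in sorted(features)])
```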
IMAGE PROCESSING APPARATUS AND OPERATING METHOD THEREOF
An image processing apparatus, including a memory configured to store one or more instructions; and at least one processor configured to execute the one or more instructions to: based on a first image and a probability model, optimize an estimated pixel value and estimated gradient values of each pixel of an original image corresponding to the first image, obtain an estimated original image based on the optimized estimated pixel value of the each pixel of the original image, obtain a decontour map based on the optimized estimated pixel value and the estimated gradient values of the each pixel of the original image, and generate a second image by combining the first image with the estimated original image based on the decontour map.
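The final step of this abstract (generate a second image by combining the first image with the estimated original image based on the decontour map) reads as a per-pixel blend. The sketch below assumes the decontour map is a per-pixel weight in [0, 1]; the abstract does not fix its exact form.

```python
import numpy as np

def blend_with_decontour_map(first_image, estimated_original, decontour_map):
    """Per-pixel blend: where the decontour map is 1 the estimated original
    replaces the input (contour artifacts are smoothed away), and where it
    is 0 the first image is kept unchanged."""
    w = np.clip(decontour_map, 0.0, 1.0)
    return (1.0 - w) * first_image + w * estimated_original
```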
Additional developments to the automatic rig creation process
The disclosure provides methods and systems for automatically generating an animatable object, such as a 3D model. In particular, the present technology provides fast, easy, and automatic animatable solutions based on unique facial characteristics of user input. Various embodiments of the present technology include receiving user input, such as a two-dimensional image or three-dimensional scan of a user's face, and automatically detecting one or more features. The methods and systems may further include deforming a template geometry and a template control structure based on the one or more detected features to automatically generate a custom geometry and custom control structure, respectively. A texture of the received user input may also be transferred to the custom geometry. The animatable object therefore includes the custom geometry, the transferred texture, and the custom control structure, which follow a morphology of the face.
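The core deformation step (warp a template geometry toward the features detected in the user's face) can be sketched in its simplest form, a rigid translation driven by landmark correspondences. This is an assumed minimal stand-in: the disclosure covers far richer deformations of both the geometry and the control structure.

```python
import numpy as np

def deform_template(template_points, template_landmarks, detected_landmarks):
    """Shift every template vertex by the mean offset between the template
    landmarks and the landmarks detected in the user's input.

    A rigid translation is the simplest possible deformation; production
    rigging systems use e.g. RBF or cage-based warps for the same step.
    """
    offset = detected_landmarks.mean(axis=0) - template_landmarks.mean(axis=0)
    return template_points + offset
```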
System for the automated, context sensitive, and non-intrusive insertion of consumer-adaptive content in video
Described herein are a method and system for the automated, context-sensitive, and non-intrusive insertion of consumer-adaptive content in video. The system assesses 'context' in the video that a consumer is viewing through multiple modalities and through metadata about the video. It analyzes relevance for a consumer based on multiple factors, such as the end-user's profile information, content history, social media and consumer interests, and professional or educational background, through patterns drawn from multiple sources. The system also implements local context through search techniques that localize sufficiently large, homogeneous regions in the image which do not obfuscate protagonists or in-focus objects but are viable candidate regions for inserting the intended content. This makes relevant, curated content available to a user effortlessly, without hampering the viewing experience of the main video.
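The local-context step, searching a frame for sufficiently large homogeneous regions that could host inserted content, can be sketched as a block scan keyed on intensity spread. The block size and standard-deviation cutoff are illustrative assumptions; the disclosure does not commit to a particular homogeneity measure.

```python
import numpy as np

def find_homogeneous_regions(image, block=4, max_std=0.05):
    """Scan the frame in non-overlapping block x block tiles and return the
    top-left corners of tiles whose intensity spread is small enough to
    host an inserted overlay without obscuring salient content."""
    h, w = image.shape
    regions = []
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            if image[r:r + block, c:c + block].std() <= max_std:
                regions.append((r, c))
    return regions
```

A fuller system would additionally reject homogeneous regions that overlap detected protagonists or in-focus objects, as the abstract requires.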