Patent classifications
G06V10/7747
IMAGE PROCESSING METHODS AND SYSTEMS FOR GENERATING A TRAINING DATASET FOR LOW-LIGHT IMAGE ENHANCEMENT USING MACHINE LEARNING MODELS
The present disclosure relates to an image processing method for generating a training dataset for training a machine learning model to enhance illumination of input images, said training dataset comprising target image/low-light image pairs to be used to train the machine learning model, said image processing method comprising, for generating a target image/low-light image pair: obtaining a target image representing a scene in a first color space, said first color space comprising a plurality of color channels including a color channel representative of the brightness of the scene, referred to as brightness channel, wherein the first color space comprises two color channels independent of the brightness of the scene, or is the L*a*b* color space; and applying a darkening function to the brightness channel of the target image, thereby obtaining a low-light image of the scene and the target image/low-light image pair in the first color space.
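The core operation of the abstract above can be sketched as follows. This is a minimal illustration, not the patented method: the power-law darkening function, its gamma parameter, and the synthetic L*a*b* image are all assumptions made for the example; only the L* (brightness) channel is modified, leaving the brightness-independent a*/b* channels intact.

```python
import numpy as np

def darken_lab(lab_image, gamma=2.0):
    """Darken only the L* (brightness) channel of an L*a*b* image.

    lab_image: float array of shape (H, W, 3); channel 0 is L* in [0, 100],
    channels 1-2 (a*, b*) are independent of the brightness of the scene.
    gamma > 1 pushes lightness toward darker values.
    """
    low_light = lab_image.copy()
    L = low_light[..., 0] / 100.0             # normalise L* to [0, 1]
    low_light[..., 0] = (L ** gamma) * 100.0  # power-law darkening
    return low_light

# Build one target/low-light pair from a synthetic target image.
rng = np.random.default_rng(0)
target = np.stack([rng.uniform(0, 100, (4, 4)),   # L*
                   rng.uniform(-50, 50, (4, 4)),  # a*
                   rng.uniform(-50, 50, (4, 4))], axis=-1)
pair = (target, darken_lab(target, gamma=2.5))
```

The chroma channels of the two images in the pair are identical; only the brightness channel differs.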
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD AND PROGRAM
An information processing apparatus includes a control unit that creates a plurality of learning information items including an input image and a teacher image as an expected value by image-processing the input image in accordance with a scenario described with a program code, and supplies the created plurality of learning information items to a machine learning module that composes an image processing algorithm by machine learning.
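A minimal sketch of the scenario-driven pair generation described above. The "scenario" here is a hypothetical contrast-stretching routine chosen purely for illustration; the abstract does not specify which image processing the scenario performs.

```python
import numpy as np

def scenario(image):
    """Hypothetical scenario code: the image processing whose result is the
    expected value (teacher image). Here: simple contrast stretching."""
    lo, hi = image.min(), image.max()
    return (image - lo) / (hi - lo + 1e-12)

rng = np.random.default_rng(4)
learning_information = []
for _ in range(3):                         # a plurality of learning items
    input_image = rng.uniform(0.2, 0.6, size=(8, 8))
    teacher_image = scenario(input_image)  # expected value for this input
    learning_information.append((input_image, teacher_image))
# learning_information would then be supplied to the machine learning module
```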
Model training system
According to one embodiment, a model training system includes a processor. The processor is configured to input a first image to a model and acquire a second image output from the model, and generate a third image by correcting the second image. The processor is configured to train the model by using the first image as input data and using the third image as teacher data.
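The self-correcting training loop above can be sketched with a toy linear model. The correction function (clipping to a valid range), the learning rate, and the model itself are invented for illustration; the point is only the loop structure: model output → corrected output → use corrected output as teacher.

```python
import numpy as np

def correct(image):
    # Hypothetical correction: clip the model output to the valid pixel range.
    return np.clip(image, 0.0, 1.0)

rng = np.random.default_rng(1)
W = rng.normal(scale=0.1, size=(16, 16))   # toy linear "model"
first = rng.uniform(0, 1, size=(16,))      # first image (flattened)

for _ in range(200):
    second = W @ first                     # second image: model output
    third = correct(second)                # third image: corrected output
    grad = np.outer(second - third, first) # MSE gradient w.r.t. W
    W -= 0.05 * grad                       # train with third as teacher data
```

After training, the model's output should (approximately) equal its own corrected version, i.e. it has learned to stay inside the valid range.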
Training system and analysis system
According to one embodiment, a training system includes a first generator, a second generator, a third generator, and a trainer. The first generator uses a human body model to generate a first image. The human body model models a human body and is three-dimensional and virtual. The second generator generates a teacher image by annotating body parts of the human body model in the first image. The third generator generates a second image including noise by performing, on the first image, at least one selected from first processing, second processing, third processing, fourth processing, or fifth processing. The trainer uses the second image and the teacher image to train a first model.
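A compact sketch of the three generators above. The rendered "body", the integer part labels, and the particular noise processes (additive Gaussian noise plus an erased patch) are stand-ins chosen for illustration; the abstract names five candidate processings without specifying them.

```python
import numpy as np

rng = np.random.default_rng(2)

# First generator: stand-in for a render of the 3-D virtual human body model.
first_image = np.zeros((32, 32))
first_image[8:24, 12:20] = 1.0            # "torso" region of the body

# Second generator: teacher image annotating body parts by integer labels.
teacher = np.zeros((32, 32), dtype=int)
teacher[8:24, 12:20] = 1                  # label 1 = torso

# Third generator: noise processing applied to the first image (here,
# additive Gaussian noise plus a randomly erased patch).
second_image = first_image + rng.normal(scale=0.05, size=first_image.shape)
ys, xs = rng.integers(0, 28, 2)
second_image[ys:ys + 4, xs:xs + 4] = 0.0  # erased patch simulates occlusion

training_pair = (second_image, teacher)   # input/teacher pair for the trainer
```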
Combination of radiomic and pathomic features in the prediction of prognoses for tumors
Embodiments discussed herein facilitate building and/or employing model(s) for determining tumor prognoses based on a combination of radiomic features and pathomic features. One example embodiment can perform actions comprising: providing, to a first machine learning model, at least one of: one or more intra-tumoral radiomic features associated with a tumor or one or more peri-tumoral radiomic features associated with a peri-tumoral region around the tumor; receiving a first predicted prognosis associated with the tumor from the first machine learning model; providing, to a second machine learning model, one or more pathomic features associated with the tumor; receiving a second predicted prognosis associated with the tumor from the second machine learning model; and generating a combined prognosis associated with the tumor based on the first predicted prognosis and the second predicted prognosis.
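The final fusion step above can be sketched as a late fusion of the two models' predictions. The weighted average, the weight, and the example probabilities are assumptions for illustration; the abstract does not specify how the two predicted prognoses are combined.

```python
def combined_prognosis(radiomic_prob, pathomic_prob, w=0.5):
    """Hypothetical fusion: weighted average of the two models' predicted
    probabilities of poor prognosis (late fusion of two predictions)."""
    return w * radiomic_prob + (1.0 - w) * pathomic_prob

# First model: prognosis from intra-/peri-tumoral radiomic features.
p1 = 0.72
# Second model: prognosis from pathomic features.
p2 = 0.58
print(round(combined_prognosis(p1, p2), 2))  # → 0.65
```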
METHOD AND DEVICE FOR OPTICALLY INSPECTING CONTAINERS IN A DRINKS PROCESSING SYSTEM
A method for optically inspecting containers in a drinks processing system, wherein the containers are transported as a container mass flow using a transporter and captured as camera images by an inspection unit arranged in the drinks processing system, and wherein the camera images are inspected for faults by a first evaluation unit using a conventional image processing method, wherein the camera images with faulty containers are classified as fault images and the faults are correspondingly assigned to the fault images as fault markings, wherein the camera images with containers considered to be good quality are classified as fault-free images, the fault images, the fault markings and the fault-free images are compiled as a specific training data set, and wherein, using the specific training data set, a second evaluation unit is trained in situ with an image processing method working on the basis of artificial intelligence.
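The dataset-compilation step above can be sketched as follows. The container records and fault names are invented; the sketch shows only how the first evaluation unit's classifications are compiled into the specific training data set for the second, AI-based unit.

```python
# Output of the first (conventional) evaluation unit; contents are invented.
camera_images = [
    {"id": 1, "fault": None},                 # container considered good quality
    {"id": 2, "fault": "crack_at_neck"},      # faulty container
    {"id": 3, "fault": "missing_cap"},        # faulty container
]

fault_images = [im for im in camera_images if im["fault"] is not None]
fault_markings = {im["id"]: im["fault"] for im in fault_images}
fault_free_images = [im for im in camera_images if im["fault"] is None]

# The specific training data set used to train the second unit in situ:
training_data_set = {
    "fault_images": fault_images,
    "fault_markings": fault_markings,
    "fault_free_images": fault_free_images,
}
```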
DIRECT CLASSIFICATION OF RAW BIOMOLECULE MEASUREMENT DATA
Disclosed herein are systems and methods for direct classification of biological datasets. The datasets may include raw mass spectrometry data. Some aspects include training a classifier for direct classification of raw data, and some aspects include applying the classifier.
Dynamic User Interface and Data Communications Via Extended Reality Environment
Methods and systems for entering content into fields via an extended reality (XR) environment are described herein. An XR device may provide an XR environment. The XR device may detect, in a physical environment around the XR device, a user interface element, displayed by a display device, that permits entry of content by a user of a first computing device. The XR device may determine a type of content to be entered via the user interface element. The XR device may receive an image of a physical object corresponding to the type of content to be entered via the user interface element. The XR device may then process the image of the physical object to determine first content to provide to the user interface element and transmit, to the first computing device, the first content for entry into the user interface element.
Timeline generation
Embodiments described herein provide for a non-transitory machine-readable medium storing instructions to cause one or more processors to select a set of content items from a content item collection based upon a temporal relevance and a contextual relevance to a period of time, rank the set of content items based on at least one of a content item category or a content item predefined relevance score, partition the period of time into a set of time slots to schedule for rendering content in an application, rank the set of time slots based on device usage analysis for the period of time, and schedule the set of content items into the set of time slots in accordance with the rankings.
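The scheduling step above can be sketched as ranking both lists and pairing them in rank order. The item names, relevance scores, and device-usage values are invented for illustration.

```python
# Content items with predefined relevance scores, and time slots with
# device-usage scores for the period of time (all values invented).
items = [("photo_memories", 0.9), ("trip_highlights", 0.7), ("pet_album", 0.4)]
slots = [("morning", 0.3), ("lunch", 0.6), ("evening", 0.8)]

ranked_items = sorted(items, key=lambda kv: kv[1], reverse=True)
ranked_slots = sorted(slots, key=lambda kv: kv[1], reverse=True)

# Schedule the highest-ranked item into the highest-ranked slot, and so on.
schedule = {slot: item for (slot, _), (item, _) in zip(ranked_slots, ranked_items)}
print(schedule)
# → {'evening': 'photo_memories', 'lunch': 'trip_highlights', 'morning': 'pet_album'}
```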
VIDEO SCREENING USING A MACHINE LEARNING VIDEO SCREENING MODEL TRAINED USING SELF-SUPERVISED TRAINING
Video content screening using a video screening model trained using self-supervised training includes automatically generating a training dataset by obtaining predicate screening data indicating a predicate temporal segment within a training video and a corresponding reference temporal segment within the reference video, obtaining candidate screening data for an extended temporal segment from the training video, wherein the extended temporal segment includes the predicate temporal segment and at least one frame from the training video adjacent to the predicate temporal segment, wherein the candidate screening data indicates a similarity between a screening frame from the reference video and a spatial portion of a candidate frame from the extended temporal segment, and, in response to a determination that a determined similarity between the candidate subframe and the screening frame satisfies a similarity criterion, including, in the automatically generated training dataset, training example data indicating the similarity between the candidate subframe and the screening frame.
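A minimal sketch of the dataset-generation step above. Cosine similarity, the 0.9 criterion, and the synthetic frames are all assumptions for illustration; the abstract does not specify the similarity measure or the threshold.

```python
import numpy as np

def similarity(a, b):
    """Cosine similarity between two flattened frames."""
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

rng = np.random.default_rng(3)
screening_frame = rng.normal(size=(8, 8))  # frame from the reference video

# Extended temporal segment: a frame matching the predicate segment plus one
# adjacent, unrelated frame from the training video.
extended_segment = [screening_frame + rng.normal(scale=0.01, size=(8, 8)),
                    rng.normal(size=(8, 8))]

training_dataset = []
for frame in extended_segment:
    candidate_subframe = frame[0:8, 0:8]   # spatial portion of the frame
    s = similarity(candidate_subframe, screening_frame)
    if s > 0.9:                            # similarity criterion
        training_dataset.append((candidate_subframe, screening_frame, s))
```

Only the frame that closely matches the screening frame contributes a training example; the unrelated adjacent frame is excluded by the criterion.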