G06T7/194

FOREGROUND EXTRACTION APPARATUS, FOREGROUND EXTRACTION METHOD, AND RECORDING MEDIUM

In a foreground extraction apparatus, an extraction result generation unit performs foreground extraction on an input image using a plurality of foreground extraction models and generates foreground extraction results. A selection unit selects one or more foreground extraction models from among the plurality of foreground extraction models using the respective foreground extraction results acquired by those models. A foreground region generation unit extracts a foreground region from the input image using each of the selected one or more foreground extraction models.
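
The generate-select-extract flow described above can be sketched in miniature. Everything here is illustrative, not from the patent: the "models" are simple threshold functions and the selection score is a toy criterion that prefers masks covering roughly half the image.

```python
# Hypothetical sketch of the described pipeline: several foreground
# extraction "models" produce candidate masks, a selection step scores
# each result, and the best-scoring model's extraction is kept.

def make_threshold_model(t):
    """Toy stand-in for a foreground extraction model."""
    def model(image):  # image: 2D list of grayscale pixel values
        return [[1 if px > t else 0 for px in row] for row in image]
    return model

def score_mask(mask):
    # Toy selection criterion: prefer masks that are neither empty nor full.
    flat = [v for row in mask for v in row]
    fg = sum(flat) / len(flat)
    return 1.0 - abs(fg - 0.5) * 2  # peaks when half the pixels are foreground

def select_and_extract(image, models):
    results = [m(image) for m in models]  # extraction result generation
    best = max(range(len(models)), key=lambda i: score_mask(results[i]))
    return best, results[best]            # selection + foreground region

image = [[10, 200], [30, 220]]
models = [make_threshold_model(t) for t in (50, 150, 250)]
idx, mask = select_and_extract(image, models)
```

In this toy run the threshold-250 model produces an empty mask and scores zero, so one of the lower-threshold models is selected instead.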

Method and System For Accelerating Rapid Class Augmentation for Object Detection in Deep Neural Networks
20230010033 · 2023-01-12

Object detection architectures for detecting and classifying objects in an image are modified to incorporate an extending Rapid Class Augmentation (XRCA) progressive learning algorithm, whose defining aspect is memory built into its optimizer. This memory allows joint optimization over both the old and new classes using just the new-class data, eliminating the issues associated with catastrophic forgetting.
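
The idea of "memory built into the optimizer" can be illustrated with a hedged least-squares analogue (this is not the patented XRCA algorithm): a linear classifier that stores the sufficient statistics A = XᵀX and B = XᵀY can widen itself with a brand-new class column and re-solve jointly over old and new classes using only the new-class samples, reproducing exactly what full retraining would give.

```python
# Minimal matrix helpers (pure Python, 2 features so a 2x2 inverse suffices).
def matT(A):
    return [list(col) for col in zip(*A)]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def inv2(M):
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def madd(A, B):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

# Memory accumulated while training on the two old classes.
X_old = [[1.0, 0.0], [0.0, 1.0]]
Y_old = [[1.0, 0.0], [0.0, 1.0]]          # one-hot labels for classes {0, 1}
A = matmul(matT(X_old), X_old)            # "optimizer memory": X^T X
B = matmul(matT(X_old), Y_old)            #                     X^T Y

# A brand-new class 2 arrives; only its own samples are available.
X_new = [[1.0, 1.0]]
Y_new = [[0.0, 0.0, 1.0]]                 # one-hot over classes {0, 1, 2}

A = madd(A, matmul(matT(X_new), X_new))   # update memory with new data only
B = [row + [0.0] for row in B]            # widen memory with the new class column
B = madd(B, matmul(matT(X_new), Y_new))
W_inc = matmul(inv2(A), B)                # joint solution over old AND new classes

# Reference: full retraining over all data yields identical weights.
X_all = X_old + X_new
Y_all = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
W_batch = matmul(inv2(matmul(matT(X_all), X_all)), matmul(matT(X_all), Y_all))
```

The old training data never has to be revisited, which is the property the abstract credits with avoiding catastrophic forgetting; a deep-network optimizer would realize it differently.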

VIDEO MATTING
20230044969 · 2023-02-09

The present disclosure describes techniques for improving video matting. The techniques comprise extracting features from each frame of a video by an encoder of a model, wherein the video comprises a plurality of frames; incorporating, by a decoder of the model, into any particular frame temporal information extracted from one or more frames previous to the particular frame, wherein the particular frame and the one or more previous frames are among the plurality of frames of the video, and the decoder is a recurrent decoder; and generating a representation of a foreground object included in the particular frame by the model, wherein the model is trained using a segmentation dataset and a matting dataset.
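
The encoder/recurrent-decoder structure can be sketched with a toy stand-in. The "encoder", the blending rule, and the thresholded matte below are all illustrative placeholders for the learned components the abstract describes.

```python
# Toy sketch of a recurrent matting loop: per-frame features are mixed with
# state carried from previous frames before a matte is emitted for the frame.

def encode(frame):
    # stand-in per-frame encoder: normalize pixels to [0, 1] "features"
    return [px / 255 for px in frame]

def recurrent_decode(frames, blend=0.7):
    state = None                          # temporal information carried forward
    mattes = []
    for frame in frames:
        feats = encode(frame)
        if state is None:
            state = feats
        else:
            # incorporate information from previous frames into this one
            state = [blend * f + (1 - blend) * s for f, s in zip(feats, state)]
        mattes.append([1.0 if v > 0.5 else 0.0 for v in state])
    return mattes

mattes = recurrent_decode([[0, 255, 255], [0, 0, 255]])
```

Because the state blends in earlier frames, a pixel that just flipped dark is pulled toward its previous value, which is the kind of temporal smoothing a recurrent decoder provides.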

HAIR IDENTIFYING DEVICE AND APPARATUS FOR AUTOMATICALLY SEPARATING HAIR FOLLICLES INCLUDING THE SAME
20230041440 · 2023-02-09

A follicle identifying device includes an image acquiring unit configured to acquire an image of a follicle and a hair included in the follicle, for each follicle separated from a scalp cut from the back of the head of an alopecic patient in an incisional hair transplant, or for each follicle directly extracted from the back of the head of an alopecic patient in a non-incisional hair transplant; an image processing unit configured to extract edges of a follicle and a hair from the image acquired by the image acquiring unit; a hair count determining unit configured to determine a hair count in the follicle based on the edges extracted by the image processing unit; and a control unit configured to output the hair count determined by the hair count determining unit.
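
The counting step can be illustrated with a toy stand-in for the edge-based pipeline: given a binarized scan line across the follicle image, the hair count can be estimated from the number of dark segments. The threshold and the one-dimensional simplification are assumptions, not the patented image processing.

```python
# Illustrative hair-count sketch: count contiguous dark (hair) segments
# along one scan line of a grayscale image.

def count_hairs(scanline, threshold=128):
    binary = [1 if px < threshold else 0 for px in scanline]  # hair pixels are dark
    # count rising edges into hair segments
    return sum(1 for i, v in enumerate(binary) if v and (i == 0 or not binary[i - 1]))

n = count_hairs([255, 10, 12, 255, 255, 8, 255])  # two dark segments
```

A real device would extract two-dimensional edges first; the transition-counting idea carries over once the hairs are delineated.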

Method for image processing of image data for image and visual effects on a two-dimensional display wall

A capture of a live action scene, taken while a display wall is positioned to be part of that scene, may be processed. To perform the processing, image data of the live action scene having a live actor and the display wall displaying a first rendering of a precursor image is received. Further, precursor metadata for the precursor image displayed on the display wall and display wall metadata for the display wall are determined. An image matte is accessed, where the image matte indicates a first portion associated with the live actor and a second portion associated with the precursor image on the display wall in the live action scene. Pixel display values to add or modify an image effect or a visual effect are determined, and the image data is adjusted using the pixel display values and the image matte.
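
The matte-guided adjustment can be sketched as follows. The matte convention (1 = live actor, 0 = display wall) and the brightness effect are illustrative assumptions; the patent does not prescribe them.

```python
# Hedged sketch of adjusting image data with a matte: pixels flagged as
# display wall receive an effect, while live-actor pixels are left untouched.

def apply_effect(image, matte, effect):
    return [[effect(px) if m == 0 else px
             for px, m in zip(row, mrow)]
            for row, mrow in zip(image, matte)]

image = [[100, 100], [100, 100]]
matte = [[1, 0], [0, 1]]  # 1 = live actor, 0 = display wall (assumed convention)
out = apply_effect(image, matte, lambda px: min(255, px + 50))
```

Because the matte separates the actor from the wall's precursor image, the effect can be re-rendered or corrected on the wall region without disturbing the actor's pixels.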

Monitoring device, and method for monitoring a man overboard situation
11594035 · 2023-02-28

The invention relates to a monitoring device 1 for monitoring a man-overboard situation in a ship section 5, wherein the ship section 5 is monitored by video technology using at least one camera 2, and the camera 2 is designed to provide surveillance in the form of video data. The monitoring device comprises an analysis device 9 having an interface 10 for transferring the video data, and the analysis device 9 is designed to detect a moving object in the ship section 5 on the basis of the video data and determine a kinematic variable of the moving object. The analysis device 9 is also designed to determine a scale on the basis of the video data and the kinematic variable in order to determine the extent 8 of the moving object, and to evaluate the moving object as a man-overboard event on the basis of that extent 8.
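
One way the kinematic variable can yield a scale is to assume the detected object is in free fall: the known metric drop over a time interval, compared with the observed pixel displacement, gives pixels per metre, and the object's pixel extent then converts to a metric extent. The free-fall assumption and the plausible-person size range below are illustrative, not taken from the patent.

```python
# Hedged sketch of scale estimation from a kinematic variable.
G = 9.81  # m/s^2, assumed free-fall acceleration of the falling object

def scale_from_fall(pixel_drop, fall_time):
    # A body in free fall drops 0.5 * G * t^2 metres; comparing that with
    # the observed pixel displacement gives the scene scale in px per metre.
    metres = 0.5 * G * fall_time ** 2
    return pixel_drop / metres

def is_man_overboard(extent_px, scale, lo=0.3, hi=2.5):
    # Evaluate the moving object as a person by its metric extent (assumed range).
    extent_m = extent_px / scale
    return lo <= extent_m <= hi

scale = scale_from_fall(pixel_drop=490.5, fall_time=1.0)
mob = is_man_overboard(extent_px=180, scale=scale)
```

An object spanning 180 px at this scale is 1.8 m, within the assumed person range, so it would be flagged; a small bird or splash would fall outside the range and be rejected.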