G06V10/54

DISEASE CHARACTERIZATION FROM FUSED PATHOLOGY AND RADIOLOGY DATA
20180012356 · 2018-01-11

Methods and apparatus distinguish invasive adenocarcinoma (IA) from adenocarcinoma in situ (AIS). One example apparatus includes a set of circuits and a data store that stores a set of three-dimensional (3D) radiological images of tissue demonstrating IA or AIS. The set of circuits includes a classification circuit that generates an invasiveness classification for a diagnostic 3D radiological image; a training circuit that trains the classification circuit to identify a texture feature associated with IA; an image acquisition circuit that acquires a diagnostic 3D radiological image of a region of tissue demonstrating cancerous pathology and provides it to the classification circuit; and a prediction circuit that generates an invasiveness score based on the diagnostic 3D radiological image and the invasiveness classification. The training circuit trains the classification circuit using a set of 3D histological reconstructions combined with the set of 3D radiological images.
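The texture-feature training and invasiveness-scoring flow above can be sketched as follows. This is a minimal illustrative sketch, not the patented method: the function names, the two toy texture descriptors (intensity variance and a neighbour-difference "roughness"), and the midpoint threshold rule are all assumptions introduced for the example.

```python
import statistics

def texture_features(volume):
    """Two toy texture descriptors for a 3D volume given as nested lists:
    voxel intensity variance and the mean absolute difference between
    adjacent voxels along one axis (a crude roughness measure)."""
    flat = [v for plane in volume for row in plane for v in row]
    variance = statistics.pvariance(flat)
    diffs = [abs(row[i + 1] - row[i])
             for plane in volume for row in plane
             for i in range(len(row) - 1)]
    roughness = sum(diffs) / len(diffs)
    return variance, roughness

def train_threshold(features, labels):
    """Stand-in for the training circuit: learn a 1-D decision threshold
    on the roughness feature as the midpoint between class means
    (label 1 = IA, label 0 = AIS)."""
    ia = [f[1] for f, y in zip(features, labels) if y == 1]
    ais = [f[1] for f, y in zip(features, labels) if y == 0]
    return (sum(ia) / len(ia) + sum(ais) / len(ais)) / 2

def invasiveness_score(volume, threshold):
    """Stand-in for the prediction circuit: score in [0, 1] that grows
    as the volume's roughness exceeds the learned threshold."""
    _, roughness = texture_features(volume)
    return max(0.0, min(1.0, 0.5 + (roughness - threshold)))
```

In the patent the classifier is trained jointly on 3D histological reconstructions and radiological images; the sketch keeps only the shape of the data flow from training to scoring.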

TEXTURE RECOGNITION DEVICE AND DISPLAY DEVICE

A texture recognition device and a display device are provided. The texture recognition device includes a backlight element, configured to provide first backlight; a light constraint element, configured to perform a light divergence angle constraint process on the first backlight to obtain second backlight with a divergence angle within a preset angle range, the second backlight being transmitted to a detection object; and a photosensitive element, configured to detect the second backlight reflected by a texture of the detection object to recognize a texture image of the texture of the detection object.

INFORMATION PROCESSING APPARATUS, LEARNING APPARATUS, IMAGE RECOGNITION APPARATUS, INFORMATION PROCESSING METHOD, LEARNING METHOD, IMAGE RECOGNITION METHOD, AND NON-TRANSITORY-COMPUTER-READABLE STORAGE MEDIUM
20230237777 · 2023-07-27

An information processing apparatus comprises a first generation unit configured to generate a synthesized image in which a second image is synthesized in a closed region in a first image, and a second generation unit configured to generate learning data, the learning data including a label and the synthesized image, the label indicating an object region including a region corresponding to the closed region in the synthesized image.
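The two generation units can be sketched for the simple case of a rectangular closed region. The function name and the 2-D list image representation are assumptions made for the example, not the apparatus's actual interface:

```python
def synthesize(first, second, top, left):
    """Paste `second` (a 2-D list) into a closed rectangular region of
    `first`, returning both the synthesized image and a binary label
    mask marking the object region that corresponds to the pasted
    closed region -- together, one learning-data pair."""
    out = [row[:] for row in first]               # copy of the first image
    mask = [[0] * len(first[0]) for _ in first]   # label, same size as first
    for r in range(len(second)):
        for c in range(len(second[0])):
            out[top + r][left + c] = second[r][c]
            mask[top + r][left + c] = 1
    return out, mask
```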

SEGMENT ACTION DETECTION
11704893 · 2023-07-18

Aspects of the present disclosure involve a system comprising a storage medium storing a program and method for receiving a video comprising a plurality of video segments; selecting a target action sequence that includes a sequence of action phases; receiving features of each of the video segments; computing, based on the received features, for each of the plurality of video segments, a plurality of action phase confidence scores indicating a likelihood that a given video segment includes a given action phase of the sequence of action phases; identifying a set of consecutive video segments of the plurality of video segments that corresponds to the target action sequence, wherein video segments in the set of consecutive video segments are arranged according to the sequence of action phases; and generating a display of the video that includes the set of consecutive video segments and skips other video segments in the video.
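The matching step can be sketched as a scan for a run of consecutive segments whose most-confident phases step through the target sequence in order. The greedy scan, function name, and confidence-matrix layout are illustrative assumptions, not the patented method:

```python
def find_action_span(segment_scores, num_phases):
    """segment_scores[i][p] is the confidence that video segment i shows
    action phase p. Return (start, end) indices of the first run of
    consecutive segments whose most-confident phases advance through
    phases 0..num_phases-1 in order, or None if no such run exists."""
    preds = [max(range(num_phases), key=lambda p: s[p])
             for s in segment_scores]
    n = len(preds)
    for start in range(n):
        if preds[start] != 0:          # a run must open with the first phase
            continue
        phase, i = 0, start
        # extend while segments repeat the current phase or step to the next
        while i < n and preds[i] in (phase, phase + 1):
            if preds[i] == phase + 1:
                phase += 1
            i += 1
        if phase == num_phases - 1:    # the run reached the final phase
            return start, i - 1
    return None
```

Segments outside the returned span are the ones the generated display would skip.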

MICROWAVE IDENTIFICATION METHOD AND SYSTEM
20230014948 · 2023-01-19

The present disclosure discloses a microwave identification method, which is implemented on at least one device, including at least one processor and at least one storage device, the method including: the at least one processor obtains microwave data; the at least one processor generates an image of one or more objects based on the microwave data; the at least one processor obtains a model of each of the one or more objects; and based on the model of each of the one or more objects, the at least one processor identifies the one or more objects in the image of the one or more objects.

VIRTUAL IMAGE GENERATION METHOD AND APPARATUS, ELECTRONIC DEVICE AND STORAGE MEDIUM

The present disclosure provides a virtual image generation method and apparatus, an electronic device and a storage medium, and relates to the field of artificial intelligence technologies such as augmented reality, computer vision and deep learning. A specific implementation scheme involves: acquiring base coefficients corresponding to key points of a target face based on a target face picture; generating a structure of a virtual image of the target face based on a mapping relationship of spatial alignment between a preset virtual model and a standard model, a base of the standard model and the base coefficients corresponding to the key points of the target face; and performing texture filling on the structure of the virtual image based on textures of the target face picture, to obtain the virtual image of the target face.
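The geometry step can be sketched as a linear-basis reconstruction followed by a vertex-correspondence transfer. The 1-D "vertices", function name, and index-array representation of the spatial-alignment mapping are simplifying assumptions for the example:

```python
def generate_structure(virtual_template, standard_bases, coeffs, mapping):
    """Reconstruct the face in standard-model space as a linear
    combination of bases weighted by the key-point base coefficients,
    then transfer each per-vertex offset onto the preset virtual model.
    mapping[j] is the standard-model vertex aligned with virtual
    vertex j (the spatial-alignment correspondence)."""
    n = len(standard_bases[0])
    offsets = [sum(c * base[i] for c, base in zip(coeffs, standard_bases))
               for i in range(n)]
    return [v + offsets[mapping[j]]
            for j, v in enumerate(virtual_template)]
```

Texture filling from the target face picture would then be applied to the returned structure; that stage is omitted here.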

System for the automated, context-sensitive, and non-intrusive insertion of consumer-adaptive content in video

Described herein is a method and system for the automated, context-sensitive, and non-intrusive insertion of consumer-adaptive content in video. The system assesses "context" in the video a consumer is viewing through multiple modalities and metadata about the video. It analyzes relevance for a consumer based on factors such as the end-user's profile information, content history, social media activity, consumer interests, and professional or educational background, using patterns drawn from multiple sources. The system also establishes local context through search techniques that localize sufficiently large, homogeneous regions in the image which do not obfuscate protagonists or objects in focus but are viable candidate regions for inserting the intended content. This makes relevant, curated content available to a user in the most effortless manner without hampering the viewing experience of the main video.
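The local-context search can be sketched as a sliding-window scan for the most homogeneous region of a frame. The variance criterion and the function name are assumptions standing in for the patent's search techniques:

```python
def find_insertion_region(frame, h, w):
    """Scan every h-by-w window of a grayscale frame (a 2-D list) and
    return the (top, left) corner of the most homogeneous one, i.e. the
    window with the lowest intensity variance, as a candidate region
    for non-intrusive content insertion."""
    best, best_var = None, float("inf")
    rows, cols = len(frame), len(frame[0])
    for top in range(rows - h + 1):
        for left in range(cols - w + 1):
            vals = [frame[top + r][left + c]
                    for r in range(h) for c in range(w)]
            mean = sum(vals) / len(vals)
            var = sum((v - mean) ** 2 for v in vals) / len(vals)
            if var < best_var:
                best, best_var = (top, left), var
    return best
```

A production system would add further constraints (minimum window size, distance from detected faces or salient objects) before accepting a candidate region.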