G06V20/00

Scene-aware object detection
11580723 · 2023-02-14

Embodiments described herein provide systems and processes for scene-aware object detection. This can involve an object detector that modulates its operations based on image location. The object detector can be a neural network detector or a scanning window detector, for example.
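The abstract does not specify how the detector's operation varies with image location, but the idea can be sketched minimally as a detection threshold that depends on a detection's position in the frame. All names and numbers below are illustrative assumptions, not taken from the patent:

```python
# Hypothetical sketch: a scene-aware detector that modulates its
# confidence threshold by image row (values are assumptions).

def location_threshold(y, img_height, base=0.5, near_bias=-0.15):
    """Lower the threshold toward the bottom of the image, where
    objects are assumed closer to the camera."""
    frac = y / img_height            # 0.0 at top, ~1.0 at bottom
    return base + near_bias * frac   # threshold shrinks toward bottom

def filter_detections(detections, img_height):
    """Keep detections whose score clears the location-dependent
    threshold. Each detection is (score, y_center)."""
    return [d for d in detections
            if d[0] >= location_threshold(d[1], img_height)]

dets = [(0.45, 50), (0.45, 550), (0.60, 50)]
kept = filter_detections(dets, img_height=600)
```

Here the same 0.45-confidence detection is rejected near the top of the frame but accepted near the bottom; the same modulation could instead adjust scanning-window stride or anchor density.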

Method and device for carrying out eye gaze mapping

The invention relates to a device and a method for performing eye gaze mapping (M), in which at least one point of vision (B) and/or a viewing direction of at least one person (10), in relation to at least one scene recording (S) of a scene (12) viewed by that person (10), is mapped onto a reference (R). At least part of an algorithm (A1, A2, A3) for performing the eye gaze mapping (M) is selected from multiple predetermined algorithms (A1, A2, A3) as a function of at least one parameter (P), and the eye gaze mapping (M) is performed on the basis of that part of the algorithm (A1, A2, A3).
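The core mechanism is parameter-driven selection among predetermined algorithms. A minimal dispatch sketch, with the parameter values and mapping functions entirely hypothetical:

```python
# Illustrative stand-ins for the predetermined mapping algorithms
# (A1, A2, A3); real implementations would map a gaze point from the
# scene recording onto the reference.

def map_homography(point):
    return ("homography", point)

def map_feature_match(point):
    return ("feature_match", point)

def map_marker_based(point):
    return ("marker", point)

# Hypothetical parameter values P that drive the selection.
ALGORITHMS = {
    "static_scene": map_homography,      # scene camera fixed
    "dynamic_scene": map_feature_match,  # scene changes between frames
    "marker_present": map_marker_based,  # fiducial markers available
}

def perform_gaze_mapping(point_of_vision, parameter):
    """Select the mapping algorithm as a function of parameter P,
    then map the point of vision B onto the reference R."""
    algorithm = ALGORITHMS[parameter]
    return algorithm(point_of_vision)
```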

Systems and methods for detecting environmental occlusion in a wearable computing device display
11580324 · 2023-02-14

Systems and methods for displaying a visual interface in a display of a head-mounted wearable device when the display may occlude objects in the user's physical environment while in use. An image detection device oriented generally in-line with the user's line of sight is used to capture at least one image. One or more objects are detected in the at least one image and, based on the detection of the one or more objects, an environmental interaction mode may be activated or deactivated. In the environmental interaction mode, the user interface may be modified or disabled to facilitate viewing of the physical environment.
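The detect-then-toggle logic can be sketched as two small functions; the proximity cutoff and object representation below are assumptions, since the abstract does not say how detections trigger the mode:

```python
NEAR_M = 2.0  # assumed proximity cutoff, in metres

def should_activate(detected_objects):
    """Activate environmental interaction mode when any object detected
    in the captured image is within the proximity cutoff. Each object
    is represented here as (label, distance_m)."""
    return any(dist <= NEAR_M for _, dist in detected_objects)

def render_interface(ui_elements, mode_active):
    """In environmental interaction mode the interface is disabled
    (here, emptied) so the physical environment stays visible."""
    return [] if mode_active else list(ui_elements)
```

A real device might instead dim or reposition interface elements rather than remove them; the abstract allows either ("modified or disabled").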

Construction zone segmentation

Systems and methods for construction zone segmentation are provided. The system aligns image-level features between a source domain and a target domain based on an adversarial learning process while training a domain discriminator. The target domain includes construction zone scenes containing various objects. The system selects, using the domain discriminator, unlabeled samples from the target domain that are far away from existing annotated samples in the target domain. From these, the system selects, based on the prediction score of each unlabeled sample, the samples with lower prediction scores, and annotates those samples.
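The sample-selection step (the active-learning part of the pipeline, after the adversarial alignment) can be sketched as a two-stage filter; the distance measure, cutoff, and budget below are assumptions:

```python
# Hedged sketch of the selection step: keep target-domain samples far
# from existing annotations (per the domain discriminator), then pick
# the lowest-prediction-score (least confident) ones for annotation.

def select_for_annotation(samples, distance_cutoff, budget):
    """Each sample is (id, discriminator_distance, prediction_score).
    Returns the ids chosen for annotation, up to the labeling budget."""
    far = [s for s in samples if s[1] >= distance_cutoff]
    far.sort(key=lambda s: s[2])         # lowest prediction score first
    return [s[0] for s in far[:budget]]

samples = [("a", 0.9, 0.30), ("b", 0.2, 0.10),
           ("c", 0.8, 0.75), ("d", 0.95, 0.20)]
picked = select_for_annotation(samples, distance_cutoff=0.5, budget=2)
```

Sample "b" is skipped despite its low score because it is close to already-annotated data, matching the abstract's emphasis on selecting samples "far away from existing annotated samples".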

OBJECT IDENTIFICATION METHOD, APPARATUS AND DEVICE
20230042208 · 2023-02-09

Provided are an object identification method, apparatus, and device. The object identification method comprises: acquiring a first image of at least part of an object; determining a feature portion of the object on the basis of the first image; acquiring a second image of the feature portion of the object; and identifying an object category of the object on the basis of the second image. Because a feature portion of the object is acquired and the object's category is identified on the basis of that feature portion, the operations are simple and the accuracy of object identification can be effectively improved.
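The two-stage flow (locate a feature portion, then classify from a second, closer image of it) can be sketched with stub detector and classifier functions; everything below is a stand-in, since the abstract specifies neither model:

```python
# Minimal two-stage sketch. Images are row-major lists of pixel
# intensities; the "second image" is approximated by cropping.

def find_feature_portion(first_image):
    """Stand-in detector: returns a bounding box (x0, y0, x1, y1).
    Here it simply assumes the feature portion is the center region."""
    h, w = len(first_image), len(first_image[0])
    return (w // 4, h // 4, 3 * w // 4, 3 * h // 4)

def crop(image, box):
    x0, y0, x1, y1 = box
    return [row[x0:x1] for row in image[y0:y1]]

def classify(second_image):
    """Stand-in classifier: category from mean intensity."""
    flat = [p for row in second_image for p in row]
    return "bright" if sum(flat) / len(flat) > 127 else "dark"

def identify(first_image):
    """Stage 1: find the feature portion; stage 2: classify from it."""
    box = find_feature_portion(first_image)
    return classify(crop(first_image, box))
```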

METHOD, DEVICE, AND COMPUTER PROGRAM PRODUCT FOR IMAGE RECOGNITION
20230038047 · 2023-02-09

Embodiments of the present disclosure relate to a method, a device, and a computer program product for image recognition. In some embodiments, characterization information for a first reference image in a reference image set is generated in an image recognition engine by using a Gaussian mixture model. First reference label information for the first reference image is generated based on the characterization information for the first reference image, the first reference label information being associated with a category of a first object in the first reference image. The image recognition engine is updated by determining the accuracy of the first reference label information for the first reference image. In this way, good characterization of images and generation of reference label information for the images can be achieved, thus both improving the robustness of the generated reference label information and significantly improving the accuracy of image recognition.
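The characterization-then-labeling step can be illustrated in one dimension: given a pre-fit Gaussian mixture, a reference image's feature value is characterized by the posterior responsibility of each component, and a reference label is derived from the most responsible one. The mixture parameters and category names below are assumptions:

```python
import math

# Hedged sketch: characterization under a fixed two-component Gaussian
# mixture, then label generation from the characterization.

def gaussian_pdf(x, mean, std):
    return math.exp(-0.5 * ((x - mean) / std) ** 2) / (std * math.sqrt(2 * math.pi))

def characterize(feature, components):
    """Posterior responsibility of each mixture component for `feature`.
    `components` is a list of (weight, mean, std)."""
    likes = [w * gaussian_pdf(feature, m, s) for w, m, s in components]
    total = sum(likes)
    return [l / total for l in likes]

def reference_label(feature, components, categories):
    """Reference label = category of the most responsible component."""
    resp = characterize(feature, components)
    return categories[resp.index(max(resp))]

mix = [(0.5, 0.0, 1.0), (0.5, 5.0, 1.0)]  # assumed, pre-fit parameters
label = reference_label(4.2, mix, ["cat_a", "cat_b"])
```

In practice the features would be high-dimensional image embeddings and the mixture fit by EM; the responsibility vector is the "characterization information" from which label information is generated.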

Scene-based automatic white balance

A method and apparatus may be used for performing a scene-based automatic white balance correction. The method may include obtaining an input image. The method may include obtaining a raw image thumbnail. The method may include obtaining an augmented image thumbnail. The method may include computing a histogram from an image thumbnail. The method may include determining a scene classification. The method may include learning a filter. The filter may be learned from one or several different instances of the raw image thumbnail, the augmented image thumbnail, the scene classification, or any combination thereof. The method may include applying the filter to the histogram to determine white balance correction coefficients and obtain a processed image.
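As a simple stand-in for the learned filter described above, the correction coefficients can be illustrated with a gray-world heuristic computed from a thumbnail: scale each channel so its mean matches the green channel's mean. This is an assumption for illustration, not the patent's learned method:

```python
# Illustrative gray-world sketch. A thumbnail is a list of (r, g, b)
# pixel tuples; the output gains are the white-balance coefficients.

def channel_means(thumbnail):
    n = len(thumbnail)
    r = sum(p[0] for p in thumbnail) / n
    g = sum(p[1] for p in thumbnail) / n
    b = sum(p[2] for p in thumbnail) / n
    return r, g, b

def white_balance_gains(thumbnail):
    """Gray-world assumption: a neutral scene averages to gray, so
    scale red and blue to match the green mean."""
    r, g, b = channel_means(thumbnail)
    return g / r, 1.0, g / b

def apply_gains(thumbnail, gains):
    gr, gg, gb = gains
    return [(p[0] * gr, p[1] * gg, p[2] * gb) for p in thumbnail]

thumb = [(100, 200, 50), (100, 200, 50)]
gains = white_balance_gains(thumb)
```

The patented method would instead feed the thumbnail histogram and scene classification through a learned filter to produce these coefficients, letting the correction adapt per scene class.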