Patent classification: G06V10/443
MATERIAL DETERMINING DEVICE, MATERIAL DETERMINING METHOD, AUTONOMOUS CLEANING DEVICE
A material determining device comprising a first image sensor, a second image sensor, and a light source is provided. The material determining method comprises: (a) sensing a first image by the first image sensor according to light from the light source; (b) sensing a second image by the second image sensor according to the light; and (c) determining whether the material corresponding to material images in the first image and the second image is a first type of material or a second type of material, according to locations of the material images in the first image and the second image and according to shapes of the material images in the first image and the second image. In this way, an electronic device using the material determining device can operate properly according to the type of material.
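A minimal decision sketch of step (c), assuming the device distinguishes materials by how far the material image shifts between the two sensors and by whether its shape stays stable; the cue, the shape labels, and the threshold are all illustrative assumptions, not the patent's actual criteria:

```python
def classify_material(loc1, loc2, shape1, shape2, shift_threshold=10.0):
    # loc1/loc2: (x, y) centers of the material image in each sensor's
    # image; shape1/shape2: coarse shape labels (hypothetical, e.g.
    # "round" or "spread").
    shift = ((loc1[0] - loc2[0]) ** 2 + (loc1[1] - loc2[1]) ** 2) ** 0.5
    # Toy rule: a small location shift with a stable shape suggests the
    # first type of material; otherwise the second type.
    if shift < shift_threshold and shape1 == shape2:
        return "first"
    return "second"
```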
SYSTEM AND METHOD FOR LATERAL VEHICLE DETECTION
A system and method for lateral vehicle detection is disclosed. A particular embodiment can be configured to: receive lateral image data from at least one laterally-facing camera associated with an autonomous vehicle; warp the lateral image data based on a line parallel to a side of the autonomous vehicle; perform object extraction on the warped lateral image data to identify extracted objects in the warped lateral image data; and apply bounding boxes around the extracted objects.
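The warp-then-extract pipeline can be sketched in simplified form. The homography `H`, the mask-based extraction, and the helper names below are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

def warp_points(H, pts):
    # Apply a 3x3 homography H to an Nx2 array of pixel coordinates,
    # standing in for warping the lateral image onto a line parallel
    # to the vehicle's side.
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    out = pts_h @ H.T
    return out[:, :2] / out[:, 2:3]

def bounding_box(mask):
    # Axis-aligned bounding box (x0, y0, x1, y1) of a binary object
    # mask produced by the object-extraction step.
    ys, xs = np.nonzero(mask)
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())
```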
Electronic apparatus, controlling method of electronic apparatus, and computer readable medium
An electronic apparatus is provided. The electronic apparatus includes: a camera; a processor configured to control the camera; and a memory configured to be electrically connected to the processor and to store a network model trained to determine a degree of matching between an input image frame and predetermined feature information. The memory stores at least one instruction, and the processor is configured, by executing the at least one instruction, to: identify a representative image frame based on a degree of matching obtained by applying image frames, selected from among a plurality of image frames captured through the camera, to the trained network model; identify a best image frame based on a degree of matching obtained by applying, to the trained network model, image frames within a specific section of the plurality of image frames that includes the identified representative image frame; and provide the identified best image frame.
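As a toy illustration of the two-stage selection, suppose the network's matching score for every frame is already available; the sampling stride and section half-window below are assumptions standing in for the patent's frame selection and "specific section":

```python
def best_frame(scores, stride=3, half_window=2):
    # Stage 1: a representative frame is picked using only every
    # `stride`-th frame (the model is applied to frames selected
    # from the stream, not to every frame).
    sampled = range(0, len(scores), stride)
    rep = max(sampled, key=lambda i: scores[i])
    # Stage 2: every frame within a section around the representative
    # frame is scored, and the best one is returned.
    lo = max(0, rep - half_window)
    hi = min(len(scores), rep + half_window + 1)
    return max(range(lo, hi), key=lambda i: scores[i])
```

Note how stage 2 can recover a frame the coarse sampling skipped, which is the point of re-examining the section around the representative frame.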
VEHICLE CONTROL DEVICE, VEHICLE CONTROL METHOD, AND COMPUTER PROGRAM PRODUCT
A vehicle control device is configured to control a vehicle equipped with (i) an optical sensor configured to acquire a reflected light image by sensing reflected light of irradiated light and (ii) a sensing camera configured to acquire a camera image according to intensity of outside light in a sensing area which overlaps with a sensing area of the optical sensor. The vehicle control device includes an extraction unit configured to extract an unmatched pixel group by comparing the reflected light image with the camera image, and a control unit configured to instruct the vehicle to perform control according to a water-related substance estimated to correspond to the unmatched pixel group.
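A minimal numpy sketch of the extraction step, assuming both images have been normalized to the same resolution and intensity range; the threshold is an arbitrary illustrative value:

```python
import numpy as np

def unmatched_pixel_group(reflected, camera, threshold=0.3):
    # Mark pixels where the reflected-light image and the camera image
    # disagree by more than the threshold; such unmatched regions are
    # candidates for a water-related substance that affects one sensor
    # differently than the other.
    diff = np.abs(reflected.astype(float) - camera.astype(float))
    return diff > threshold
```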
Polygon detection device, polygon detection method, and polygon detection program
An object is to provide a polygon detection device, a polygon detection method, and a polygon detection program that accurately detect a polygon resembling a reference polygon in an image. The polygon detection device acquires a ratio among lengths of sides of a reference polygon included in an appearance of a predetermined object. The polygon detection device acquires a photographic image of the predetermined object. The polygon detection device detects line segments from the acquired photographic image. The polygon detection device forms at least one polygon based on the detected line segments. The polygon detection device identifies, from among the formed polygons, a polygon corresponding to the reference polygon based on a degree of similarity between a ratio among lengths of sides of the formed polygon and the acquired ratio among the lengths of sides of the reference polygon.
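The ratio comparison can be sketched as follows; normalizing each side by the perimeter makes the comparison scale-invariant, and the exact similarity formula is an illustrative assumption:

```python
def side_ratio_similarity(sides, ref_sides):
    # Normalize each polygon's side lengths by its perimeter so that
    # only the ratio among sides matters, then score how closely the
    # two ratio vectors agree (1.0 = identical ratios).
    s = [x / sum(sides) for x in sides]
    r = [x / sum(ref_sides) for x in ref_sides]
    return 1.0 - sum(abs(a - b) for a, b in zip(s, r))
```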
Recurrent deep neural network system for detecting overlays in images
In one aspect, an example method includes a processor (1) applying a feature map network to an image to create a feature map comprising a grid of vectors characterizing at least one feature in the image and (2) applying a probability map network to the feature map to create a probability map assigning a probability to the at least one feature in the image, where the assigned probability corresponds to a likelihood that the at least one feature is an overlay. The method further includes the processor determining that the probability exceeds a threshold, and responsive to the processor determining that the probability exceeds the threshold, performing a processing action associated with the at least one feature.
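A toy version of the two stages, assuming the probability map is a per-cell sigmoid over the feature vectors; the linear weights below stand in for the trained probability map network:

```python
import numpy as np

def probability_map(feature_map, w, b):
    # feature_map: H x W x C grid of vectors; w: C weights; b: bias.
    # Returns an H x W grid of overlay probabilities.
    return 1.0 / (1.0 + np.exp(-(feature_map @ w + b)))

def overlay_cells(prob_map, threshold=0.5):
    # Grid cells whose overlay probability exceeds the threshold and
    # would therefore trigger the processing action.
    return np.argwhere(prob_map > threshold)
```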
COMPUTER-IMPLEMENTED DETECTION AND PROCESSING OF ORAL FEATURES
Described herein are computer-implemented methods for identifying and classifying one or more regions of interest in a facial region and augmenting an appearance of the regions of interest in an image. For example, a region of interest may include one or more of: a teeth region, a lip region, a mouth region, or a gum region. User selected templates for teeth, gums, smile, etc. may be used to replace the analogous facial features in an input image provided by the user, for example from an image library or taken with an image sensor. The computer-implemented methods described herein may use one or more trained machine learning models and one or more algorithms to identify and classify regions of interest in an input image.
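The augmentation step, once a region of interest has been classified, reduces to masked pixel replacement; this sketch assumes the classifier has already produced a boolean mask and that the template is aligned to the input image:

```python
import numpy as np

def apply_template(image, roi_mask, template):
    # Replace the pixels inside the region of interest (e.g. a teeth
    # or gum region) with the corresponding pixels from the
    # user-selected template.
    out = image.copy()
    out[roi_mask] = template[roi_mask]
    return out
```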
SYSTEM AND METHOD FOR FOLLOW-UP LOCAL FEATURE MATCHING BASED ON MULTIPLE FUNCTIONAL-ANATOMICAL FEATURE LAYERS
A method includes obtaining functional and anatomical image data sets from a subject acquired at different dates. The method includes receiving a volumetric coordinate of interest in a specified functional and anatomical image data set. The method includes generating a 3D feature matching map for at least one functional feature layer type and for at least one anatomical feature layer type for each non-specified functional and anatomical image data set relative to the specified functional and anatomical image data set utilizing the volumetric coordinate of interest. The method includes generating a best matching coordinate and a corresponding confidence metric value for each 3D feature matching map. The method includes calculating an optimal matching coordinate to the volumetric coordinate of interest based on the best matching coordinates and their corresponding confidence metric values and outputting a respective optimal matching coordinate for each of the non-specified functional and anatomical image data sets.
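The final combination step can be sketched as a confidence-weighted average of the per-layer best matching coordinates; the weighting scheme is an assumption, since the abstract only specifies that both the coordinates and their confidence metric values are used:

```python
import numpy as np

def optimal_coordinate(best_coords, confidences):
    # best_coords: one (x, y, z) best-matching coordinate per 3D
    # feature matching map; confidences: one metric value per map.
    # Higher-confidence maps pull the result toward their coordinate.
    c = np.asarray(confidences, dtype=float)
    pts = np.asarray(best_coords, dtype=float)
    return (pts * c[:, None]).sum(axis=0) / c.sum()
```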
Vehicle positioning method and system based on laser device
The present application discloses a positioning method for a movable platform, including: detecting, by a laser positioning system (LPS) mounted on the movable platform, a plurality of reflectors mounted on a target object while the movable platform is moving; calculating in real time, by the LPS, relative positions of the plurality of reflectors with respect to the LPS according to the current position information; and obtaining, by the LPS, a relative position of the movable platform with respect to the target object based on the relative positions of the plurality of reflectors with respect to the LPS. The present application also discloses a positioning system that performs the positioning method.
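Ignoring rotation, the last step reduces to comparing centroids: the known layout of the reflectors on the target versus their measured positions in the LPS frame. This is a translation-only sketch under that simplifying assumption; a real system would also have to solve for orientation:

```python
import numpy as np

def platform_relative_position(measured_lps, layout_on_target):
    # measured_lps: reflector positions in the LPS (platform) frame.
    # layout_on_target: the same reflectors in the target's frame.
    # With rotation ignored, the target origin sits at the offset
    # between the two centroids, and the platform's position relative
    # to the target is the negative of that offset.
    offset = np.mean(measured_lps, axis=0) - np.mean(layout_on_target, axis=0)
    return -offset
```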
Method to improve accuracy of quantized multi-stage object detection network
An apparatus includes a memory and a processor. The memory may be configured to store image data of an input image. The processor may be configured to detect one or more objects in the input image using a quantized multi-stage object detection network, where quantization of the quantized multi-stage object detection network includes (i) generating quantized image data by performing a first data range analysis on the image data of the input image, (ii) generating a feature map and proposal bounding boxes by applying a region proposal network (RPN) to the quantized image data, (iii) performing a region of interest pooling operation on the feature map and a plurality of ground truth boxes corresponding to the proposal bounding boxes generated by the RPN, (iv) generating quantized region of interest pooling results by performing a second data range analysis on results from the region of interest pooling operation, and (v) applying a region-based convolutional neural network (RCNN) to the quantized region of interest pooling results.
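The data range analysis in steps (i) and (iv) amounts to picking a quantization scale from observed value ranges. A common symmetric int8 scheme, shown here as an illustrative stand-in rather than the patented method, looks like:

```python
import numpy as np

def quantize_int8(x):
    # Data range analysis: find max |x| over the tensor, then choose
    # a scale so that the extreme value maps to 127.
    scale = float(np.max(np.abs(x))) / 127.0
    q = np.clip(np.round(x / scale), -128, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Recover approximate float values from the quantized tensor.
    return q.astype(np.float32) * scale
```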