Patent classifications
G06V10/759
IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND COMPUTER PROGRAM PRODUCT
According to an embodiment, an image processing device includes one or more processors. The one or more processors are configured to: acquire an image; detect a first repeated pattern from the image; detect an object included in the first repeated pattern; and output the object as a second repeated pattern.
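The abstract's two-stage idea (find a repeated pattern, then emit one repeating unit as the object) can be sketched in miniature on a 1-D sequence. This is an illustrative toy, not the patent's method; the function names and the period-search approach are assumptions.

```python
# Toy sketch: detect the period of a repeating sequence, then return
# one instance of the repeating unit as the detected "object".

def detect_period(row):
    """Return the smallest shift p such that row repeats with period p."""
    n = len(row)
    for p in range(1, n):
        if all(row[i] == row[i % p] for i in range(n)):
            return p
    return n

def extract_repeated_object(row):
    p = detect_period(row)
    return row[:p]  # one instance of the repeating unit

print(extract_repeated_object([1, 2, 3, 1, 2, 3, 1, 2, 3]))  # [1, 2, 3]
```

A 2-D image version would apply the same idea along both axes (e.g. via autocorrelation) before cropping one tile.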
Real-Time Alignment of Multiple Point Clouds to Video Capture
The presented invention includes the generation of point clouds, the identification of objects in the point clouds, and the calculation of the positions of those objects. The invention further includes capturing images, streaming data, performing digital image processing at different points in the system, and calculating object positions. The invention may use the cameras of mobile smart devices, smart glasses, or 3D cameras, but is not limited to these. The data streaming provides video streaming and sensor-data streaming from mobile smart devices. The invention further includes point clouds of buildings in which the positioning of separate objects can be performed, and a database of point clouds of isolated objects that helps calculate a position within the building. Finally, the invention comprises a method of extracting object features, comparing them against the point clouds, and calculating position.
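The feature-extraction-and-matching step could be sketched as follows: compute a simple feature for a detected object's points (here just a centroid and bounding extent), compare it against a database of isolated-object point clouds, and report the best match together with its position. The feature choice, names, and data layout are all illustrative assumptions, not the patent's actual method.

```python
# Hedged sketch: match an object's point cloud against a database of
# isolated-object clouds by comparing bounding extents, and report the
# matched object's centroid as its position in the building.

def features(points):
    xs, ys, zs = zip(*points)
    centroid = (sum(xs) / len(xs), sum(ys) / len(ys), sum(zs) / len(zs))
    extent = (max(xs) - min(xs), max(ys) - min(ys), max(zs) - min(zs))
    return centroid, extent

def match_object(points, database):
    """database maps object name -> reference point cloud.
    Returns (best matching name, object position)."""
    centroid, extent = features(points)
    def dist(name):
        _, ref_extent = features(database[name])
        return sum((a - b) ** 2 for a, b in zip(extent, ref_extent))
    best = min(database, key=dist)
    return best, centroid
```

A real system would use richer local features and a registration step (e.g. ICP) rather than bounding extents.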
System and method for determining a viewpoint of a traffic camera
A system and method for determining a viewpoint of a traffic camera includes: obtaining images of a real road captured by the traffic camera; segmenting the road surface from the captured images to generate a mask of the real road; generating, from geographical data of the real road, a 3D model of a simulated road corresponding to the real road; adding a simulated camera corresponding to the traffic camera at a location in the 3D model that corresponds to the location of the traffic camera on the real road; generating a plurality of simulated images of the simulated road using the 3D model, each corresponding to a set of viewpoint parameters of the simulated camera; selecting the simulated image that best fits the mask; and generating a mapping between pixel locations in the captured images and locations on the real road.
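The viewpoint-selection step above amounts to a search over candidate parameters: render a simulated road mask for each candidate and keep the one scoring best against the segmented real-road mask. A minimal sketch, assuming an intersection-over-union fit score and a caller-supplied renderer (masks here are flat 0/1 lists; a real system would rasterize the 3D model):

```python
# Sketch of best-fit viewpoint selection via IoU between a real road
# mask and rendered candidate masks. The renderer is a stand-in.

def iou(a, b):
    inter = sum(x & y for x, y in zip(a, b))
    union = sum(x | y for x, y in zip(a, b))
    return inter / union if union else 0.0

def best_viewpoint(real_mask, render, candidates):
    """candidates: iterable of viewpoint parameter sets;
    render(params) -> simulated mask of the same shape."""
    return max(candidates, key=lambda params: iou(real_mask, render(params)))
```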
MULTI-CHANNEL OBJECT MATCHING
A method may include obtaining first sensor data captured by a first sensor system and second sensor data captured by a second sensor system of a different type from the first sensor system. The method may include detecting a first object included in the first sensor data and a second object included in the second sensor data. The method may include assigning a first label to the first object and a second label to the second object after comparing the first and the second sensor data. The first and second labels may indicate degrees to which the first and the second objects match. Responsive to the first and second labels indicating that the first and the second objects match, the method may include designating a matched object representative of the first object and the second object and sending the matched object to a downstream computing system of an autonomous vehicle.
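The labeling-and-fusion step might look like the following sketch: score how well a detection from one sensor agrees with a detection from the other (here simply by center distance), attach that degree as a label to both, and emit a fused matched object only when the degree clears a threshold. The distance-based score, the threshold value, and the averaged fused representation are all assumptions for illustration.

```python
# Hedged sketch of cross-sensor object matching: label both detections
# with a match degree, and fuse them into one matched object for the
# downstream system when the labels indicate a match.

def match_degree(obj_a, obj_b, scale=5.0):
    dx = obj_a["x"] - obj_b["x"]
    dy = obj_a["y"] - obj_b["y"]
    return max(0.0, 1.0 - (dx * dx + dy * dy) ** 0.5 / scale)

def fuse_if_matched(obj_a, obj_b, threshold=0.5):
    degree = match_degree(obj_a, obj_b)
    obj_a["label"] = obj_b["label"] = degree  # degree to which the objects match
    if degree < threshold:
        return None  # no matched object designated
    return {"x": (obj_a["x"] + obj_b["x"]) / 2,  # fused object sent downstream
            "y": (obj_a["y"] + obj_b["y"]) / 2}
```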
DATA IDENTIFICATION METHOD AND APPARATUS
This disclosure relates to a data processing method and apparatus. The method includes: acquiring a first prediction region in a target image, the first prediction region being the prediction region corresponding to the maximum prediction category probability among N prediction regions in the target image, a prediction category probability being the probability that an object in a prediction region belongs to a prediction object category; determining a coverage region jointly covered by a second prediction region and the first prediction region, the second prediction region being a prediction region other than the first prediction region among the N prediction regions; and determining a target prediction region from among the prediction regions based on an area of the coverage region and a similarity associated with the second prediction region, the similarity indicating how similar an object in the second prediction region is to an object in the first prediction region.
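The procedure reads like a similarity-aware variant of non-maximum suppression: keep the highest-probability region, then decide for each other region whether its overlap with that region plus its object similarity justify discarding it as a duplicate detection. The sketch below is an interpretation under that assumption; the thresholds and the exact discard rule are illustrative.

```python
# Sketch: keep the max-probability region; discard another region only
# when it both overlaps the best region heavily (high IoU) and contains
# a very similar object, i.e. is likely a duplicate detection.

def area(box):
    x1, y1, x2, y2 = box
    return max(0, x2 - x1) * max(0, y2 - y1)

def overlap(a, b):
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    return area((x1, y1, x2, y2))

def target_regions(regions, iou_thresh=0.5, sim_thresh=0.8):
    """regions: list of (box, category_probability, similarity_to_best)."""
    best = max(regions, key=lambda r: r[1])
    kept = [best]
    for r in regions:
        if r is best:
            continue
        cover = overlap(r[0], best[0])
        iou = cover / (area(r[0]) + area(best[0]) - cover)
        if not (iou > iou_thresh and r[2] > sim_thresh):
            kept.append(r)  # distinct enough to be its own target region
    return kept
```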
COMPUTER-IMPLEMENTED DETECTION AND PROCESSING OF ORAL FEATURES
Described herein are computer-implemented methods for identifying and classifying one or more regions of interest in a facial region and augmenting an appearance of the regions of interest in an image. For example, a region of interest may include one or more of: a teeth region, a lip region, a mouth region, or a gum region. User selected templates for teeth, gums, smile, etc. may be used to replace the analogous facial features in an input image provided by the user, for example from an image library or taken with an image sensor. The computer-implemented methods described herein may use one or more trained machine learning models and one or more algorithms to identify and classify regions of interest in an input image.
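The template-replacement step can be reduced to a mask-guided pixel swap: given a classified region-of-interest mask (e.g. the teeth region) and a user-selected template, replace the masked pixels of the input image with the template's pixels. In practice trained models produce the mask; in this sketch the mask is given, and the flat pixel representation is an assumption.

```python
# Minimal sketch of mask-guided template replacement for a region of
# interest: keep original pixels where the mask is 0, take template
# pixels where the mask is 1.

def apply_template(image, mask, template):
    """image, template: rows of pixel values; mask: rows of 0/1 flags."""
    return [[t if m else p for p, m, t in zip(prow, mrow, trow)]
            for prow, mrow, trow in zip(image, mask, template)]
```

A production pipeline would also blend edges and color-match the template so the augmented region looks natural.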
Defect detection and image comparison of components in an assembly
A method is disclosed that includes receiving, by a processing device, a plurality of images of a test assembly. The processing device selects a component in the test assembly and an image from the received plurality of images. For the selected component and image, the processing device compares a plurality of portions of the selected image to a corresponding plurality of portions of a corresponding profile image and computes a matching score for each of the plurality of portions. The processing device selects the largest of these matching scores as a first matching score for the selected component and image. The first matching score is stored for the selected component and image.
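The portion-wise comparison can be sketched as: score each pair of corresponding portions and keep the largest score as the component's first matching score. The per-portion scoring function here is a simple normalized agreement measure chosen for illustration; a real system might use normalized cross-correlation or a learned similarity.

```python
# Sketch: compare corresponding portions of a test image and a profile
# image, score each pair, and take the maximum as the first matching
# score for the selected component and image.

def portion_score(a, b):
    matches = sum(1 for x, y in zip(a, b) if x == y)
    return matches / len(a)

def first_matching_score(test_portions, profile_portions):
    return max(portion_score(t, p)
               for t, p in zip(test_portions, profile_portions))
```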
METHOD AND APPARATUS FOR DETECTING GAME CURRENCY STATE, ELECTRONIC DEVICE AND STORAGE MEDIUM
Provided are a method and apparatus for detecting a game currency state, an electronic device, and a computer-readable storage medium. The method includes the following. An original image frame and an image frame sequence of a scene of a game table in a halted state are acquired; multiple regions for placing game currency are provided on the game table, and the original image frame is acquired when placement of the game currency is completed. In response to determining, based on the image frame sequence and the original image frame, that the game currency information of a first region among the multiple regions has changed, game currency alert information is output. In response to recognizing, based on the image frame sequence, that a game currency operating object associated with a preset object appears in the first region, process alert information is output.
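The change-detection step could be sketched as follows: compare the first region's pixels in each later frame against the original frame captured when placement completed, and raise the alert when the fraction of changed pixels exceeds a threshold. The pixel-difference criterion and the threshold value are illustrative assumptions.

```python
# Hedged sketch: flag a region as changed when too many of its pixels
# differ from the original (placement-complete) frame, and emit the
# corresponding alert for each frame in the sequence.

def region_changed(original_region, current_region, threshold=0.2):
    diffs = sum(1 for a, b in zip(original_region, current_region) if a != b)
    return diffs / len(original_region) > threshold

def alerts(original_region, frame_sequence):
    return ["game currency alert" if region_changed(original_region, f) else "ok"
            for f in frame_sequence]
```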
Medical image processing apparatus, and medical imaging apparatus
In medical examination for breast cancer, computer-aided detection of lesions is performed in real time and with high accuracy, reducing the burden on medical workers. A medical image processing apparatus that processes a medical image includes: a detection unit configured to detect a lesion candidate region; a validity evaluation unit configured to evaluate the validity of the lesion candidate region by using a normal tissue region corresponding to the detected lesion candidate region; and a display unit configured to determine the content displayed to a user by using the evaluation result.
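One plausible reading of the validity-evaluation unit is that a lesion candidate counts as valid only if it differs sufficiently from the corresponding normal tissue region. The contrast measure and threshold below are assumptions for illustration, not the patent's actual criterion.

```python
# Illustrative sketch: judge a lesion candidate region valid when its
# mean intensity contrasts strongly enough with the corresponding
# normal tissue region.

def mean(vals):
    return sum(vals) / len(vals)

def is_valid_candidate(candidate_pixels, normal_pixels, min_contrast=0.3):
    contrast = abs(mean(candidate_pixels) - mean(normal_pixels))
    return contrast >= min_contrast
```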
AUTONOMOUS VEHICLE SENSOR SECURITY, AUTHENTICATION AND SAFETY
A method includes receiving, from a sensing system of an autonomous vehicle (AV), image data including first image data and second image data. The method further includes: determining, for a frame, whether an amount of image data matching between the first image data and the second image data satisfies a first threshold condition; in response to determining that the amount of image data matching satisfies the first threshold condition, identifying the frame as invalid; determining whether a number of consecutive frames identified as invalid satisfies a second threshold condition; and, in response to determining that the number of consecutive invalid frames satisfies the second threshold condition, generating a notification that the sensing system is outputting invalid data.
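The two-threshold check can be sketched as a small state machine: a frame is invalid when the match between its two image streams falls outside bounds (here assumed to mean the match fraction drops below a threshold), and a notification fires once enough consecutive frames are invalid. Both threshold values and the "match fraction" input are illustrative assumptions.

```python
# Sketch of the two-threshold safety check: per-frame validity from a
# match score, plus a consecutive-invalid-frame counter that triggers
# the notification.

def frame_invalid(match_fraction, min_match=0.8):
    return match_fraction < min_match  # first threshold condition

def monitor(match_fractions, max_consecutive_invalid=3):
    consecutive = 0
    for m in match_fractions:
        consecutive = consecutive + 1 if frame_invalid(m) else 0
        if consecutive >= max_consecutive_invalid:  # second threshold condition
            return "sensing system is outputting invalid data"
    return None
```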