Patent classifications
G06V10/507
SIMULTANEOUS ORIENTATION AND SCALE ESTIMATOR (SOSE)
A method and hardware-based system provide for descriptor-based feature mapping during terrain relative navigation (TRN). A first reference image/premade terrain map and a second image are acquired. Features in the first reference image and the second image are detected. A scale and an orientation of the detected features are estimated based on an intensity centroid (IC) and moments of the detected features; the orientation is in turn based on an angle between the center of each detected feature and the IC, together with an orientation stability measure based on a radius. Signatures are computed for each of the detected features using the estimated scale and orientation and then converted into feature descriptors. The descriptors are used to match features between the two images, and the matches are then used to perform TRN.
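The intensity-centroid orientation step can be sketched as follows: the first-order image moments of a patch locate its intensity centroid, and the angle from the patch center to that centroid gives the orientation. This is a minimal illustration of the IC idea (function and variable names are mine, not from the patent):

```python
import numpy as np

def ic_orientation(patch):
    """Estimate a feature's orientation from its intensity centroid (IC).

    The first-order moments m10, m01 (taken about the patch center) give
    the IC; the orientation is the angle from the center to the IC.
    Illustrative sketch only -- not the claimed method.
    """
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    # Coordinates relative to the patch center
    xs -= (w - 1) / 2.0
    ys -= (h - 1) / 2.0
    m10 = np.sum(xs * patch)     # first-order moment in x
    m01 = np.sum(ys * patch)     # first-order moment in y
    return np.arctan2(m01, m10)  # angle from center toward the IC
```

A patch whose intensity increases left-to-right yields an orientation of 0; one increasing top-to-bottom yields pi/2.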
VIDEO PROCESSING METHOD, VIDEO SEARCHING METHOD, TERMINAL DEVICE, AND COMPUTER-READABLE STORAGE MEDIUM
A video processing method, comprising: editing a video to be edited according to a scenario to obtain a target video (S100); acquiring feature parameters of the target video (S200); generating a keyword for the target video according to the feature parameters (S300); and associatively storing the keyword and the target video (S400).
OBJECT DETECTION APPARATUS, SYSTEM, AND METHOD, DATA CONVERSION UNIT, AND NON-TRANSITORY COMPUTER READABLE MEDIUM
A receiver receives a radio wave transmitted to a target and scattered by the target to acquire a signal. An imaging unit generates a 3D complex image of the target based on the signal. A value extraction unit extracts intensity information and phase information to form a value set including an intensity matrix and a phase matrix, the extracted intensity information constituting the intensity matrix and the extracted phase information constituting the phase matrix. A subset selection unit selects a subset from the value set. A transformation unit changes a representation of the subset to generate a 2D real image. A detection unit detects whether there is an undesired object on the target based on the 2D real image.
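Splitting a 3D complex image into an intensity matrix and a phase matrix can be sketched with elementwise magnitude and angle. Collapsing the third axis by taking the strongest response per 2D location is one plausible reading of how a 3D image reduces to 2D matrices; it is an assumption, not the claimed method:

```python
import numpy as np

def extract_value_set(complex_image_3d):
    """Extract an intensity matrix and a phase matrix from a 3D complex image.

    The depth axis is collapsed by picking, per (row, col), the depth index
    with peak intensity -- an illustrative choice, not from the patent.
    """
    intensity = np.abs(complex_image_3d)    # magnitude of each complex sample
    phase = np.angle(complex_image_3d)      # phase of each complex sample
    peak = np.argmax(intensity, axis=2)     # strongest depth bin per location
    rows, cols = np.indices(peak.shape)
    intensity_matrix = intensity[rows, cols, peak]
    phase_matrix = phase[rows, cols, peak]
    return intensity_matrix, phase_matrix
```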
HALFTONE SCREENS
In an example, a method includes, by one or more processors, receiving a greyscale image having a plurality of pixels, each pixel being associated with a grey level, and the greyscale image having a first number of grey levels. An order of the pixels may be determined based on the grey level. A second number of grey levels may be determined, wherein the second number of grey levels is greater than the first number, and an indication of a target number of pixels per grey level of the second number of grey levels may further be determined. Taking the pixels in order, and based on the target number of pixels per grey level, a new grey level may be allocated to each pixel to provide the second number of grey levels. The new grey levels may be converted to thresholds of a threshold halftone screen.
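The grey-level reallocation step can be sketched as a stable sort followed by block assignment: pixels are taken in order of their original grey level, and new levels are handed out so that each of the second (larger) number of levels receives roughly its target number of pixels. The equal-share target used here is an illustrative assumption:

```python
import numpy as np

def expand_grey_levels(image, second_levels):
    """Reallocate pixel grey levels to a larger number of levels.

    Pixels are ordered by original grey level (stable sort), then new
    levels are assigned in order so each of `second_levels` levels gets
    roughly `target` pixels. Illustrative sketch, not the claimed method.
    """
    flat = image.ravel()
    order = np.argsort(flat, kind="stable")  # order pixels by grey level
    target = flat.size / second_levels       # target pixels per new level
    new = np.empty_like(flat)
    # Taking pixels in order, assign new levels in blocks of ~target size
    new[order] = (np.arange(flat.size) / target).astype(int)
    np.clip(new, 0, second_levels - 1, out=new)
    return new.reshape(image.shape)
```

A 2x2 image with two original levels and `second_levels=4` spreads its four pixels across all four new levels, one per level.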
Learning highlights using event detection
A highlight learning technique is provided to detect and identify highlights in sports videos. A set of event models are calculated from low-level frame information of the sports videos to identify recurring events within the videos. The event models are used to characterize videos by detecting events within the videos and using the detected events to generate an event vector. The event vector is used to train a classifier to identify the videos as highlight or non-highlight.
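The event-vector step can be sketched as a histogram over detected event types; the resulting fixed-length vector is what would feed a standard highlight/non-highlight classifier. Names are illustrative, not from the patent:

```python
import numpy as np

def event_vector(detected_events, num_event_models):
    """Build a fixed-length event vector: counts of each detected event type.

    `detected_events` is a sequence of event-model indices detected in one
    video; the vector has one slot per event model. Illustrative sketch.
    """
    vec = np.zeros(num_event_models, dtype=int)
    for e in detected_events:
        vec[e] += 1  # count occurrences of each recurring event
    return vec
```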
SYSTEM, APPARATUS, METHOD, PROGRAM AND RECORDING MEDIUM FOR PROCESSING IMAGE
An image processing system may include an imaging device for capturing an image and an image processing apparatus for processing the image. The imaging device may include an imaging unit for capturing the image, a first recording unit for recording information relating to the image, the information being associated with the image, and a first transmission control unit for controlling transmission of the image to the image processing apparatus. The image processing apparatus may include a reception control unit for controlling reception of the image transmitted from the imaging device, a feature extracting unit for extracting a feature of the received image, a second recording unit for recording the feature, extracted from the image, the feature being associated with the image, and a second transmission control unit for controlling transmission of the feature to the imaging device.
PIXEL-LEVEL BASED MICRO-FEATURE EXTRACTION
Techniques are disclosed for extracting micro-features at a pixel-level based on characteristics of one or more images. Importantly, the extraction is unsupervised, i.e., performed independent of any training data that defines particular objects, allowing a behavior-recognition system to forgo a training phase and for object classification to proceed without being constrained by specific object definitions. A micro-feature extractor that does not require training data is adaptive and self-trains while performing the extraction. The extracted micro-features are represented as a micro-feature vector that may be input to a micro-classifier which groups objects into object type clusters based on the micro-feature vectors.
METHOD AND CIRCUITRY FOR EXPOSURE COMPENSATION APPLIED TO HIGH DYNAMIC RANGE VIDEO
A method and a circuitry for exposure compensation applied to a high dynamic range video are provided. The circuitry is adapted to an image-acquisition device. In the method, when a video is received, the pixel values for each of the sequential frames can be obtained. Next, an exposure value ratio between two adjacent frames is obtained. An exposure value ratio of an image signal processor can be regarded as an initial exposure value ratio. A fixed adjustment ratio is used to control the image signal processor and an image sensor of the image-acquisition device so as to calculate an exposure value ratio for each of the frames. The exposure value ratio is referred to for performing the high dynamic range compensation for the frames so as to output an HDR video.
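The adjacent-frame ratio step can be sketched as a pairwise division over the per-frame exposure values, with compensation then scaling pixel values by the ratio. Both functions are minimal illustrations under assumed 8-bit pixels; they are not the claimed circuitry:

```python
def exposure_ratios(exposure_values):
    """Exposure value ratio between each pair of adjacent frames."""
    return [b / a for a, b in zip(exposure_values, exposure_values[1:])]

def compensate(frame_pixels, ratio, max_value=255):
    """Scale a frame's pixel values by an exposure ratio (illustrative),
    clamping to the sensor's maximum value."""
    return [min(p * ratio, max_value) for p in frame_pixels]
```

For exposure values doubling each frame, every adjacent ratio is 2.0.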
SYSTEMS AND METHODS FOR EFFICIENTLY SENSING COLLISION THREATS
A system for efficiently sensing collision threats has an image sensor configured to capture an image of a scene external to a vehicle. The system is configured to then identify an area of the image that is associated with homogeneous sensor values and is thus likely devoid of collision threats. In order to reduce the computational processing required for detecting collision threats, the system culls the identified area from the image, thereby conserving the processing resources of the system.
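Identifying areas with homogeneous sensor values can be sketched with a per-block variance test: low-variance blocks (e.g., clear sky) are marked for culling so the threat detector skips them. The block size and variance threshold here are illustrative assumptions:

```python
import numpy as np

def cull_homogeneous(image, block=8, var_thresh=4.0):
    """Mark which image blocks to keep for collision-threat detection.

    Blocks whose sensor values are homogeneous (variance below a
    threshold) are culled as likely devoid of threats, conserving
    processing resources. Illustrative sketch, not the claimed system.
    """
    h, w = image.shape
    keep = np.ones((h // block, w // block), dtype=bool)
    for by in range(h // block):
        for bx in range(w // block):
            tile = image[by * block:(by + 1) * block,
                         bx * block:(bx + 1) * block]
            if tile.var() < var_thresh:
                keep[by, bx] = False  # homogeneous: cull from detection
    return keep
```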
SYSTEMS, METHODS, STORAGE MEDIA, AND COMPUTING PLATFORMS FOR SCANNING ITEMS AT THE POINT OF MANUFACTURING
Systems, methods, storage media, and computing platforms for scanning items at the point of manufacturing are disclosed. Exemplary implementations may: receive a first set of images of an item from a first set of camera sources; detect a code in the first set of images; combine, responsive to detecting the code, the first set of images into a first set of combined images along a second axis perpendicular to the first axis; rotate the combined images parallel to the first axis; and combine along the first axis.