Patent classifications
G06F18/2134
Mobile-based positioning using assistance data provided by onboard micro-BSA
A method for estimating the position of a mobile device includes receiving, from a network server, observed time difference of arrival (OTDOA) assistance data for a first plurality of cells from a base station almanac (BSA) accessible to the network server. The OTDOA assistance data is stored, within a memory of the mobile device, as a first micro-BSA. A position estimate for the mobile device is determined based upon time difference of arrival (TDOA) measurements associated with an initial subset of the first plurality of cells and initial OTDOA assistance data corresponding to the initial subset of the first plurality of cells. The initial OTDOA assistance data may be generated from the first micro-BSA based upon an initial seed estimate.
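The abstract does not specify the estimator, but a common way to turn TDOA measurements plus a seed estimate into a position fix is iterative least squares. The sketch below is a hypothetical illustration: cell positions stand in for the micro-BSA assistance data, and Gauss-Newton refines the seed.

```python
import numpy as np

def tdoa_position_estimate(cells, tdoas, seed, iters=25):
    """Refine a 2-D position from TDOA measurements by Gauss-Newton
    least squares (an illustrative sketch, not the patented method).

    cells : (N, 2) cell positions, here standing in for micro-BSA data
    tdoas : (N-1,) measured range differences d_i - d_0 in metres,
            with cell 0 as the reference cell
    seed  : initial position estimate (the "initial seed estimate")
    """
    x = np.asarray(seed, dtype=float)
    for _ in range(iters):
        d = np.linalg.norm(cells - x, axis=1)      # range to each cell
        pred = d[1:] - d[0]                        # predicted differences
        # Jacobian of each range difference with respect to x
        J = (x - cells[1:]) / d[1:, None] - (x - cells[0]) / d[0]
        dx, *_ = np.linalg.lstsq(J, tdoas - pred, rcond=None)
        x = x + dx
        if np.linalg.norm(dx) < 1e-9:
            break
    return x
```

With noiseless measurements and a reasonable seed, a few iterations suffice; real receivers would also weight measurements by their uncertainty.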
COLOR SORTING METHOD FOR SMALL-GRAIN AGRICULTURAL PRODUCTS COMBINING AREA SCANNING PHOTOELECTRIC CHARACTERISTIC AND LINE SCANNING PHOTOELECTRIC CHARACTERISTIC
A color sorting method for small-grain agricultural products combining an area scanning photoelectric characteristic and a line scanning photoelectric characteristic is provided. The present invention obtains an area scan image of small-grain agricultural product materials on a conveyor belt by using an area scan camera, which can accurately extract area array features of the materials and realize accurate identification of the unqualified materials. At the same time, the present invention can provide key parameters for accurate positioning during free falling of the materials while identifying the unqualified materials by using the area scan image, and can cooperate with the line scan positioning camera and the pneumatic nozzle to achieve high-speed elimination of the unqualified materials.
Crop identification method and computing device
In a crop identification method, multi-temporal sample remote sensing images labeled with first planting blocks of a specific crop are acquired. NDVI data of the sample remote sensing images are calculated. Noise of the NDVI data is reduced. A first multivariate Gaussian model is fitted based on the de-noised NDVI data of the sample remote sensing images. Multi-temporal target remote sensing images are acquired. An NDVI time series of each pixel in the target remote sensing images is constructed. The NDVI time series is input to the first multivariate Gaussian model to obtain a likelihood value of each pixel displaying the specific crop in the target remote sensing images. Second planting blocks of the specific crop in the target remote sensing images are determined accordingly. An accurate and robust identification result is thereby achieved.
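The core of this method, fitting a multivariate Gaussian to per-date NDVI values and scoring each pixel's NDVI time series by its likelihood, can be sketched in a few lines. This is a minimal illustration assuming de-noised NDVI arrays are already in hand; names are hypothetical.

```python
import numpy as np

def fit_crop_model(ndvi_samples):
    """Fit a multivariate Gaussian to de-noised NDVI time series drawn
    from labeled planting blocks.
    ndvi_samples : (num_samples, num_dates), one row per sample pixel."""
    mu = ndvi_samples.mean(axis=0)
    cov = np.cov(ndvi_samples, rowvar=False)
    cov += 1e-6 * np.eye(cov.shape[0])   # keep the covariance invertible
    return mu, cov

def crop_log_likelihood(pixel_series, mu, cov):
    """Log-likelihood that each pixel's NDVI time series shows the crop.
    pixel_series : (num_pixels, num_dates)."""
    k = mu.size
    diff = pixel_series - mu
    maha = np.einsum('ij,jk,ik->i', diff, np.linalg.inv(cov), diff)
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (maha + logdet + k * np.log(2.0 * np.pi))
```

Thresholding the per-pixel log-likelihood (or comparing against a background model) then yields the second planting blocks.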
AUTOMATICALLY REMOVING MOVING OBJECTS FROM VIDEO STREAMS
The present disclosure describes systems, non-transitory computer-readable media, and methods for accurately and efficiently removing objects from digital images taken from a camera viewfinder stream. For example, the disclosed systems access digital images from a camera viewfinder stream in connection with an undesired moving object depicted in the digital images. The disclosed systems generate a temporal window of the digital images concatenated with binary masks indicating the undesired moving object in each digital image. The disclosed systems further utilize a 3D to 2D generator as part of a 3D to 2D generative adversarial neural network in connection with the temporal window to generate a target digital image with the region associated with the undesired moving object in-painted. In at least one embodiment, the disclosed systems provide the target digital image to a camera viewfinder display to show a user how a future digital photograph will look without the undesired moving object.
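The "temporal window of digital images concatenated with binary masks" is a straightforward tensor construction; a minimal sketch of that input-building step (function name and shapes are assumptions, not taken from the disclosure) might look like:

```python
import numpy as np

def build_temporal_window(frames, masks):
    """Concatenate each RGB frame with its binary object mask along the
    channel axis, then stack frames into a (T, H, W, 4) temporal window,
    i.e. the kind of input a 3D-to-2D generator would consume.
    frames : list of (H, W, 3) float arrays from the viewfinder stream
    masks  : list of (H, W) binary arrays (1 = undesired moving object)
    """
    window = [np.concatenate([f, m[..., None].astype(f.dtype)], axis=-1)
              for f, m in zip(frames, masks)]
    return np.stack(window)
```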
Defect detection using multiple models
A method for generating training models includes generating a preliminary training model based on a group of first images, the first images including different types of objects; processing a group of second images with the preliminary training model to generate a probability array for each of the second images, the probability array indicating likelihoods that an object is a particular type of object; generating correlations between the different types of objects based on the probability arrays; generating a plurality of object groups based on the correlations, where each object group includes a plurality of different types of objects that have a relatively low correlation with the other types of objects in the same object group; and for each object group, generating a final training model based on a group of third images, the third images each including an object having an object type corresponding to one of the object types in that object group.
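The grouping step, placing object types that correlate strongly (i.e. are easily confused) into different groups, can be sketched with a simple greedy assignment. This is one plausible reading of the abstract, not the patent's exact procedure:

```python
import numpy as np

def build_object_groups(prob_arrays, num_groups):
    """Split object types into groups so that highly correlated
    (easily confused) types end up in different groups.
    prob_arrays : (num_images, num_types) probability arrays produced
                  by the preliminary training model."""
    corr = np.abs(np.corrcoef(prob_arrays, rowvar=False))
    np.fill_diagonal(corr, 0.0)
    groups = [[] for _ in range(num_groups)]
    # place the most confusable types first
    for t in np.argsort(-corr.max(axis=1)):
        # choose the group whose members correlate least with this type
        cost = [max((corr[t, m] for m in g), default=0.0) for g in groups]
        groups[int(np.argmin(cost))].append(int(t))
    return groups
```

Each group then gets its own final training model, trained only on images of the types assigned to it.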
Semantic understanding of images based on vectorization
Identifying words to accurately describe, with a range of specificity, an image is provided. A vector space corresponding to the image is generated using a convolutional neural network to extract a hierarchy of features ranging from broad to specific from the image. The closest vocabulary terms, ranging from broad to specific, are identified for the image using Huffman coding on the vector space. Accurate words, ranging from broad to specific, that describe the image are identified based on the vocabulary output of the Huffman coding on the vector space. The accurate words ranging from broad to specific describing the image are output.
CLUSTERING TECHNIQUES FOR MACHINE LEARNING MODELS
In some aspects, systems and methods for efficiently clustering a large-scale dataset for improving the construction and training of machine-learning models, such as neural network models, are provided. A dataset used for training a neural network model can be clustered into a first set of clusters and a second set of clusters. The neural network model can be constructed with a number of nodes in a hidden layer that is based on the number of clusters in the first set of clusters. The neural network can be trained based on training samples selected from the second set of clusters. In some aspects, the trained neural network model can be utilized to satisfy risk assessment queries to compute output risk indicators for target entities. The output risk indicator can be used to control access to one or more interactive computing environments by the target entities.
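The two roles of clustering here, sizing the hidden layer and selecting training samples, can be sketched as follows. The tiny k-means and the stratified-selection policy are illustrative assumptions; the abstract does not fix the clustering algorithm or the selection rule.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Tiny k-means, just enough for the sketch."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)].astype(float)
    assign = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        assign = np.linalg.norm(X[:, None] - centers[None], axis=2).argmin(axis=1)
        for j in range(k):
            if np.any(assign == j):
                centers[j] = X[assign == j].mean(axis=0)
    return centers, assign

def size_and_select(X, k_hidden, k_sample, per_cluster, seed=0):
    """First clustering sets the hidden-layer width; second clustering
    drives stratified selection of training samples."""
    kmeans(X, k_hidden, seed=seed)                 # first set of clusters
    hidden_units = k_hidden                        # one node per cluster
    _, assign = kmeans(X, k_sample, seed=seed + 1) # second set of clusters
    rng = np.random.default_rng(seed)
    chosen = []
    for j in range(k_sample):
        members = np.where(assign == j)[0]
        take = min(per_cluster, len(members))
        if take:
            chosen.append(rng.choice(members, take, replace=False))
    return hidden_units, np.concatenate(chosen)
```

Sampling a fixed number of points per cluster keeps the training set small while covering every mode of the data.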
Method and apparatus for multi-category image recognition
A method and apparatus for image recognition are provided. The method includes: obtaining a vector of deep features of an input image; applying a Principal Component Analysis (PCA) transformation to the vector of the deep features; obtaining a sequence of principal components of the input image; dividing the sequence of the principal components into a predefined number of adjacent parts; and matching the input image to instances from a training image set.
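The pipeline in this abstract (PCA on deep features, splitting the principal-component sequence into adjacent parts, then matching against training instances) can be sketched directly. The part-wise distance sum below is one simple matching rule, assumed for illustration:

```python
import numpy as np

def pca_fit(train_feats, dim):
    """Learn a PCA transform from training deep features via SVD."""
    mu = train_feats.mean(axis=0)
    _, _, vt = np.linalg.svd(train_feats - mu, full_matrices=False)
    return mu, vt[:dim]

def match_by_parts(query_feat, train_feats, mu, components, num_parts):
    """Project onto principal components, divide the component sequence
    into adjacent parts, and match by summed per-part distances."""
    q = (query_feat - mu) @ components.T
    T = (train_feats - mu) @ components.T
    dists = sum(np.linalg.norm(tp - qp, axis=1)
                for qp, tp in zip(np.array_split(q, num_parts),
                                  np.array_split(T, num_parts, axis=1)))
    return int(np.argmin(dists))
```

Scoring parts separately lets early (broad) and late (fine) components be weighted or indexed independently if desired.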
Permutation invariant training for talker-independent multi-talker speech separation
The techniques described herein improve methods to equip a computing device to conduct automatic speech recognition (“ASR”) in talker-independent multi-talker scenarios. In some examples, permutation invariant training of deep learning models can be used for talker-independent multi-talker scenarios. In some examples, the techniques can determine a permutation-considered assignment between a model's estimate of a source signal and the source signal. In some examples, the techniques can include training the model generating the estimate to minimize a deviation of the permutation-considered assignment. These techniques can be implemented into a neural network's structure itself, solving the label permutation problem that prevented making progress on deep learning based techniques for speech separation. The techniques discussed herein can also include source tracing to trace streams originating from a same source through the frames of a mixed signal.