Unsupervised representation learning with contrastive prototypes
11263476 · 2022-03-01
Assignee
Inventors
CPC classification
G06V10/454 (PHYSICS)
G06F18/2155 (PHYSICS)
G06F18/2321 (PHYSICS)
G06V10/7753 (PHYSICS)
International classification
Abstract
The system and method are directed to prototypical contrastive learning (PCL). The PCL explicitly encodes the hierarchical semantic structure of the dataset into the learned embedding space and prevents the network from exploiting low-level cues for solving the unsupervised learning task. The PCL includes prototypes as the latent variables to help find the maximum-likelihood estimation of the network parameters in an expectation-maximization framework. The PCL iteratively performs an E-step for finding prototypes with clustering and an M-step for optimizing the network on a contrastive loss.
Claims
1. A method for training a prototypical contrastive learning (PCL) framework, comprising: receiving unstructured data at a momentum encoder of the PCL framework; determining, using the momentum encoder, features from the unstructured data; clustering, using the momentum encoder, the unstructured data into a number of clusters according to the features; determining prototypes, assignments, and concentrations of the clusters, a prototype, an assignment, and a concentration for each cluster in the clusters; determining a contrastive loss function of the PCL framework from the prototypes, the assignments, and the concentrations of the clusters; training an encoder of the PCL framework using the contrastive loss function and a subset of unstructured data; and updating the momentum encoder using weights of the encoder.
2. The method of claim 1, wherein the trained encoder is configured to determine a second cluster for second unstructured data.
3. The method of claim 1, wherein the momentum encoder and the encoder are convolutional neural networks.
4. The method of claim 3, wherein a neural network structure of the momentum encoder is the same as a neural network structure of the encoder.
5. The method of claim 1, wherein the updating further comprises: updating weights of the momentum encoder with the weights of the encoder.
6. The method of claim 5, wherein the updated weights of the momentum encoder are a moving average of the weights of the momentum encoder and the weights of the encoder.
7. The method of claim 1, wherein the momentum encoder is updated at each iteration in an epoch.
8. The method of claim 1, wherein the updated momentum encoder is trained to perform at least one task associated with processing the unstructured data.
9. The method of claim 1, wherein the encoder is trained and the momentum encoder is updated over a configurable number of iterations.
10. A system for training a prototypical contrastive learning (PCL) framework, comprising: a momentum encoder configured to: receive images; and determine features from the images; a clustering module configured to: cluster the images into a number of clusters according to the features; determine prototypes, assignments, and concentrations of the clusters, a prototype, an assignment, and a concentration for each cluster in the clusters; a ProtoNCE module configured to determine a contrastive loss function of the PCL framework from the prototypes, the assignments, and the concentrations of the clusters; and an encoder configured to: process a subset of the images using the contrastive loss function; and update the momentum encoder using weights of the encoder.
11. The system of claim 10, wherein the encoder is configured to determine a second cluster for second images.
12. The system of claim 10, wherein the momentum encoder and the encoder are convolutional neural networks.
13. The system of claim 12, wherein a neural network structure of the momentum encoder is the same as a neural network structure of the encoder.
14. The system of claim 10, wherein the encoder is further configured to update weights of the momentum encoder with the weights of the encoder.
15. The system of claim 14, wherein the updated weights of the momentum encoder are a moving average of the weights of the momentum encoder and the weights of the encoder.
16. The system of claim 10, wherein the momentum encoder is updated at each iteration in an epoch.
17. The system of claim 10, wherein the updated momentum encoder is trained to perform at least one task associated with processing an image.
18. The system of claim 10, wherein the encoder is trained and the momentum encoder is updated over a configurable number of iterations.
19. A non-transitory machine-readable medium having stored thereon machine-readable instructions executable to cause a machine to perform operations for training a prototypical contrastive learning (PCL) framework, the operations comprising: receiving unstructured data at a momentum encoder of the PCL framework; determining, using the momentum encoder, features from the unstructured data; clustering, using the momentum encoder, the unstructured data into a number of clusters according to the features; determining prototypes, assignments, and concentrations of the clusters, a prototype, an assignment, and a concentration for each cluster in the clusters; determining a contrastive loss function of the PCL framework from the prototypes, the assignments, and the concentrations of the clusters; training an encoder of the PCL framework using the contrastive loss function and a subset of unstructured data; and updating weights of the momentum encoder using weights of the encoder.
20. The non-transitory machine-readable medium of claim 19, wherein a neural network structure of the momentum encoder is the same as a neural network structure of the encoder.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION
(7) Unsupervised visual representation learning aims to learn image representations from pixels themselves and without relying on semantic annotations. Recent developments in unsupervised representation learning are largely driven by the task called instance discrimination. Methods based on instance discrimination usually consist of two key components: image transformation and contrastive loss. Image transformation aims to generate multiple embeddings that represent the same image, by data augmentation, patch perturbation, or by using momentum features. The contrastive loss, which may be in the form of a noise contrastive estimator, aims to bring closer samples from the same instance and separate samples from different instances. Instance-wise contrastive learning leads to an embedding space where all instances are well-separated, and each instance is locally smooth (i.e. input perturbation leads to similar representations).
(8) Despite their improved performance, methods based on instance discrimination share a common fundamental weakness: the semantic structure of data is not encoded by the learned representations. This problem arises because instance-wise contrastive learning considers two samples to be a negative pair as long as they are from different instances, regardless of the semantic similarity between instances. The problem is magnified by the fact that thousands of negative samples are generated to form the contrastive loss, leading to many negative pairs that share similar semantic meaning but are undesirably pushed apart in the embedding space.
(9) The embodiments are directed to a prototypical contrastive learning (PCL) framework for unsupervised representation learning. The PCL framework explicitly encodes the semantic structure into the embedding space. A prototype in the PCL framework may be defined as a representative embedding for a group of semantically similar instances. Each instance may be assigned to several prototypes of different granularity. The PCL framework may also construct a contrastive loss which may enforce the embedding of a sample to be more similar to its assigned prototypes compared to other prototypes. In practice, the PCL framework may find prototypes by performing standard clustering on the embeddings.
(10) In some embodiments, the PCL framework may use a bilevel Expectation-Maximization (E-M) algorithm. The E-M algorithm may find parameters of a deep neural network (DNN) that best describe the data by iteratively approximating and maximizing the likelihood function. The E-M algorithm may include additional latent variables, such as prototypes and instance assignments. The E-M algorithm may estimate the latent variables in the E-step of the E-M algorithm by performing k-means clustering. In the M-step of the E-M algorithm, the E-M algorithm may update the network parameters by minimizing the proposed contrastive loss. A proposed contrastive loss may be determined using a ProtoNCE function, described below. The E-M algorithm may determine that minimizing the ProtoNCE function is equivalent to maximizing the approximate likelihood function under the assumption that the data distribution around each prototype is an isotropic Gaussian. By using the E-M algorithm, the widely used instance discrimination task can be explained as a special case of the PCL framework, where the prototype for each instance is its augmented feature, and the Gaussian distribution around each prototype has the same fixed variance.
(11) The embodiments of the disclosure are directed to the PCL framework for unsupervised representation learning. The learned representation not only preserves the local smoothness of each image instance, but also captures the hierarchical semantic structure of the global dataset. Further, although described with respect to images, the PCL framework may also apply to any type of unstructured data such as video, text, speech, etc.
(12) The embodiments of the disclosure are directed to the PCL framework that includes an Expectation-Maximization (E-M) algorithm. In the E-M algorithm the iterative steps of clustering and representation learning can be interpreted as approximating and maximizing the log-likelihood function.
(13) The embodiments of the disclosure are also directed to using the ProtoNCE function for determining the contrastive loss. Notably, the ProtoNCE function dynamically estimates the concentration for the feature distribution around each prototype. The learned prototypes contain more information about the image classes.
(15) Memory 120 may be used to store software executed by computing device 100 and/or one or more data structures used during operation of computing device 100. Memory 120 may include one or more types of machine readable media. Some common forms of machine readable media may include floppy disk, flexible disk, hard disk, magnetic tape, any other magnetic medium, CD-ROM, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, RAM, PROM, EPROM, FLASH-EPROM, any other memory chip or cartridge, and/or any other medium from which a processor or computer is adapted to read.
(16) Processor 110 and/or memory 120 may be arranged in any suitable physical arrangement. In some embodiments, processor 110 and/or memory 120 may be implemented on a same board, in a same package (e.g., system-in-package), on a same chip (e.g., system-on-chip), and/or the like. In some embodiments, processor 110 and/or memory 120 may include distributed, virtualized, and/or containerized computing resources. Consistent with such embodiments, processor 110 and/or memory 120 may be located in one or more data centers and/or cloud computing facilities.
(17) In some embodiments, memory 120 may include non-transitory, tangible, machine readable media that includes executable code that when run by one or more processors (e.g., processor 110) may cause the one or more processors to perform the methods described in further detail herein. For example, as shown, memory 120 includes instructions for prototypical contrastive learning (PCL) framework 130 that may be used to implement and/or emulate the systems and models, and/or to implement any of the methods described further herein. The PCL framework 130 may be a “network” that may comprise any hardware or software-based framework that includes any artificial intelligence network or system, neural network or system and/or any training or learning models implemented thereon or therewith. The PCL framework 130 may include encoders that may be trained using images 140 (or other unstructured data such as video, speech, text, etc.) that the PCL framework 130 receives as input. The images 140 may comprise multiple pixels. Unlike conventional frameworks, the PCL framework 130 may be trained using images that do not include labels or tags that may identify different features of the images. After the PCL framework 130 is trained, an encoder in the PCL framework 130 may perform one or more tasks, e.g., identify a picture that is included in an image, determine image colorization, predict patch orderings, etc. In an unstructured data example, once trained, an encoder in the PCL framework 130 may perform one or more tasks, such as generating clusters that include similar subsets of the unstructured data.
(18) In some embodiments, the PCL framework 130 may be a neural network that includes one or more components.
(19) An expectation-maximization (E-M) algorithm 220 may act on the momentum encoder 205, clustering module 235, encoder 210, and ProtoNCE module 215. The E-M algorithm 220 may execute during multiple iterations that occur during a configurable time period called an epoch. Further, there may be multiple epochs during which the E-M algorithm 220 may execute and train encoder 210. The E-M algorithm 220 may be divided into an E-step 225 and an M-step 230, which are both performed at each iteration. In the E-step 225, the momentum encoder 205 may identify different features in images 140 and the clustering module 235 may generate a configurable number of clusters based on the identified features. Each cluster may include a prototype, an assignment, and a concentration of the similar features in the images 140 and may have different levels of granularity. The M-step 230 may include the encoder 210 and ProtoNCE module 215 and may train the PCL framework 130. For example, the ProtoNCE module 215 may approximate and maximize a likelihood function that is back propagated to the encoder 210. Encoder 210 is trained using the likelihood function. The momentum encoder 205 is then updated with weights of the trained encoder 210. Both E-step 225 and M-step 230 of the E-M algorithm 220 are discussed in detail below.
(20) Each of the momentum encoder 205 and encoder 210 may be a neural network, such as a convolutional neural network. Momentum encoder 205 and encoder 210 may be structural copies of each other. In other words, momentum encoder 205 and encoder 210 may be two instances of the same neural network, but may have different weights that are assigned to the nodes of the neural network.
(21) As discussed above, during each iteration in the epoch, the E-M algorithm 220 performs the E-step 225 and the M-step 230. At the beginning of E-step 225, momentum encoder 205 receives one or more images 140 as input. In some instances, prior to the momentum encoder 205 receiving images 140, images 140 may be augmented, e.g. cropped, color changed, etc. Momentum encoder 205 passes the received images 140 though the neural network to determine the features of images 140. The features may be the output of the last layer of the convolutional neural network that makes up the momentum encoder 205. The features may be embeddings of the momentum encoder 205.
(22) In some embodiments, clustering module 235 may receive the features that are the output of momentum encoder 205. Clustering module 235 may cluster the features into one or more clusters, such as clusters 240A-C. Each of the clusters 240A, 240B, and 240C may be associated with a prototype. Prototypes C are shown in
(23) During the M-step 230, the ProtoNCE module 215 may receive output 245 that includes prototypes C, concentrations M, and assignments S for clusters 240A-C determined during the E-step 225. The ProtoNCE module 215 may use the prototypes C, concentrations M, and assignments S to optimize the ProtoNCE function shown in Equation 6 below. The ProtoNCE module 215 may determine that minimizing the ProtoNCE function is equivalent to maximizing the approximate likelihood function under the assumption that the data distribution around each prototype is an isotropic Gaussian. The optimized ProtoNCE function may be back propagated from ProtoNCE module 215 to encoder 210.
(24) Encoder 210 may be trained using the ProtoNCE function determined by the ProtoNCE module 215 and the images 140. For example, encoder 210 may receive and process images 140 while applying the ProtoNCE function to the weights. As discussed above, the data in the images may be augmented. During training, the weights of the encoder 210 are trained. The trained weights of encoder 210 may then update the weights of the momentum encoder 205. Because the structures of encoder 210 and momentum encoder 205 are the same, the weights from encoder 210 may update the weights of momentum encoder 205 in the same position in the structure.
(25) In some embodiments, the weights of the momentum encoder 205 may be updated by determining an average or a moving average of the weights of the momentum encoder 205 and the weights of encoder 210. Once the weights of momentum encoder 205 are updated, the PCL framework 130 may begin the next iteration in the epoch. During the next iteration, the PCL framework 130 may be trained using the same or different images 140 to determine clusters 240A-C, the prototypes C, concentrations M, and assignments S, that are then used to further optimize the ProtoNCE function.
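The moving-average update described above can be sketched as follows. This is a minimal illustration only; the momentum coefficient m=0.999 is an assumed value in the style of momentum-encoder methods, not one specified by this disclosure:

```python
import numpy as np

def momentum_update(w_momentum, w_encoder, m=0.999):
    """Moving-average update of the momentum encoder's weights.

    w_momentum, w_encoder: arrays of corresponding weights from the two
    encoders (same structure, so positions line up one-to-one).
    m is an assumed momentum coefficient; values near 1 change the
    momentum encoder slowly and smoothly across iterations.
    """
    return m * w_momentum + (1 - m) * w_encoder

# the updated weight lies between the two inputs, close to w_momentum
updated = momentum_update(np.array([1.0]), np.array([0.0]))
```

With m near 1, the momentum encoder drifts slowly toward the trained encoder, which keeps the features used for clustering stable between iterations.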
(26) The iterative process may continue for a preconfigured number of epochs. Once the iterative process discussed above is concluded, the momentum encoder 205 is trained and may be applied to perform various tasks.
(28) At process 302, data from the images is received. For example, momentum encoder 205 may receive data from one or more images 140. The data from images 140 may be cropped or otherwise augmented. The data may include pixels from images 140.
(29) At process 304, features from the images are determined. For example, momentum encoder 205, which may be structured as a convolutional neural network, may generate embeddings which are features of images 140.
(30) At process 306, prototypes, assignments and concentrations are determined from the features. For example, clustering module 235 may receive the features determined in process 304 and generate clusters, such as clusters 240A-C, using the features. From the clusters 240A-C, clustering module 235 may determine prototypes C, assignments S, and concentrations M of each cluster 240A, 240B, and 240C. In some embodiments, the number of clusters that clustering module 235 may determine is preconfigured, and clustering module 235 determines which features are included in which one or more clusters 240A-240C.
(31) At process 308, a ProtoNCE function is determined. For example, the ProtoNCE module 215 receives the prototypes C, assignments S, and concentrations M and determines the ProtoNCE function that minimizes a proposed contrastive loss. For example, ProtoNCE module 215 may determine the ProtoNCE function by maximizing the approximate likelihood function under the assumption that the data distribution around each prototype is an isotropic Gaussian.
(32) At process 310, an encoder is trained. For example, encoder 210 is trained using the ProtoNCE function determined in process 308 and all or a subset of images 140. Like momentum encoder 205, encoder 210 may also receive images 140. During training, the contrastive loss function may be applied to the one or more weights of the nodes in the convolutional neural network included in the encoder 210 as the encoder determines features of images 140.
(33) At process 312, the momentum encoder is updated. For example, momentum encoder 205 may be updated with the weights of encoder 210. In some embodiments, momentum encoder 205 may be updated with an average of the weights of momentum encoder 205 and weights of encoder 210. In other embodiments, momentum encoder 205 may be updated with a moving average of the weights by taking an arithmetic mean of the weights of momentum encoder 205 at previous iterations and the weights received from encoder 210.
(34) After process 312 completes, method 300 may repeat another iteration of processes 302-312 until the iterations complete the epoch. At completion of an epoch, the method 300 may repeat for another epoch or for a configurable number of epochs. Once the PCL framework 130 completes training, encoder 210 may be included in other frameworks, including other image processing frameworks to perform different tasks.
(35) Going back to
(36) The instance-wise contrastive learning may achieve this objective by optimizing a contrastive loss function, such as an InfoNCE function. The InfoNCE function may be defined as:
\[
\mathcal{L}_{\text{InfoNCE}} = \sum_{i=1}^{n} -\log \frac{\exp(v_i \cdot v_i' / \tau)}{\sum_{j=0}^{r} \exp(v_i \cdot v_j' / \tau)} \tag{1}
\]
where v.sub.i′ is a positive embedding for instance i, v.sub.j′ includes one positive embedding and r negative embeddings for other instances, and τ is a temperature hyper-parameter. These embeddings are obtained by feeding x.sub.i to momentum encoder 205 parametrized by θ′, v.sub.i′=ƒ.sub.θ′(x.sub.i), where θ′ is a moving average of θ.
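A minimal numpy sketch of the InfoNCE loss of Equation 1, assuming l2-normalized embeddings; the function and variable names are illustrative only, not the patented implementation:

```python
import numpy as np

def info_nce(v, v_pos, v_neg, tau=0.1):
    """InfoNCE loss of Equation 1, averaged over the batch.

    v     : (n, d) l2-normalized query embeddings v_i
    v_pos : (n, d) positive embeddings v_i' from the momentum encoder
    v_neg : (r, d) negative embeddings for other instances
    """
    pos = np.exp((v * v_pos).sum(axis=1) / tau)   # exp(v_i . v_i' / tau)
    neg = np.exp(v @ v_neg.T / tau).sum(axis=1)   # sum over the r negatives
    # denominator sums over the positive plus the r negatives (j = 0..r)
    return float(np.mean(-np.log(pos / (pos + neg))))

def l2n(x):
    return x / np.linalg.norm(x, axis=1, keepdims=True)

rng = np.random.default_rng(0)
v = l2n(rng.normal(size=(4, 8)))
v_neg = l2n(rng.normal(size=(16, 8)))
loss_matched = info_nce(v, v, v_neg)                       # positives aligned
loss_mismatched = info_nce(v, np.roll(v, 1, axis=0), v_neg)  # positives shuffled
```

As expected for a contrastive loss, the loss is smaller when each query is paired with its own positive embedding than when the positives are shuffled.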
(38) In the PCL framework 130, the prototypes c may replace v and a concentration estimation μ (shown as concentration M in
(39) The PCL framework 130 may find the network parameters θ that maximizes the likelihood function of the observed n samples:
\[
\theta^{*} = \arg\max_{\theta} \sum_{i=1}^{n} \log p(x_i; \theta) \tag{2}
\]
(41) Further, the observed data {x.sub.i}.sub.i=1.sup.n is related to the latent variable C={c.sub.i}.sub.i=1.sup.K which denotes the prototypes C of the data. In this way, the likelihood function may be re-written as:
\[
\theta^{*} = \arg\max_{\theta} \sum_{i=1}^{n} \log \sum_{c_i \in C} p(x_i, c_i; \theta) \tag{3}
\]
(43) In order to optimize the function in Equation 3, the PCL framework 130 may use a surrogate function to lower-bound Equation 3, as follows:
\[
\sum_{i=1}^{n} \log \sum_{c_i \in C} p(x_i, c_i; \theta) \;\ge\; \sum_{i=1}^{n} \sum_{c_i \in C} Q(c_i) \log \frac{p(x_i, c_i; \theta)}{Q(c_i)} \tag{4}
\]
where Q(c.sub.i) denotes some distribution over the c's (with Σ_{c_i∈C} Q(c_i) = 1), and the inequality in Equation 4 follows from Jensen's inequality. To make the inequality hold with equality, the PCL framework 130 may require
\[
\frac{p(x_i, c_i; \theta)}{Q(c_i)}
\]
to be a constant. In this case:
\[
Q(c_i) = \frac{p(x_i, c_i; \theta)}{\sum_{c_i \in C} p(x_i, c_i; \theta)} = \frac{p(x_i, c_i; \theta)}{p(x_i; \theta)} = p(c_i; x_i, \theta) \tag{5}
\]
(47) Further, by ignoring the constant −Σ_{i=1}^{n} Σ_{c_i∈C} Q(c_i) log Q(c_i) in Equation 4, the PCL framework 130 may instead maximize:
\[
\sum_{i=1}^{n} \sum_{c_i \in C} Q(c_i) \log p(x_i, c_i; \theta) \tag{6}
\]
(48) During the E-step 225 of the E-M algorithm 220, the PCL framework 130 aims to estimate p(c.sub.i; x.sub.i, θ). To achieve this, the clustering module 235 may perform k-means clustering on the features v.sub.i′=ƒ.sub.θ′(x.sub.i) of images 140 identified by momentum encoder 205 to obtain k clusters. Prototype c.sub.i may be defined as the centroid of the i-th cluster. Then the clustering module 235 computes p(c.sub.i; x.sub.i, θ)=1(x.sub.i∈c.sub.i), where the indicator 1(x.sub.i∈c.sub.i)=1 if x.sub.i belongs to the cluster represented by c.sub.i, and 1(x.sub.i∈c.sub.i)=0 otherwise.
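The one-hot estimate of p(c.sub.i; x.sub.i, θ) described above can be sketched as follows, assuming the cluster centroids have already been produced by k-means; `estep_posterior` is an illustrative name, not part of the disclosure:

```python
import numpy as np

def estep_posterior(v, centroids):
    """E-step posterior p(c_i; x_i, theta) as a one-hot indicator.

    v         : (n, d) momentum features v_i' = f_theta'(x_i)
    centroids : (k, d) cluster centroids (the prototypes c)
    Returns the (n, k) one-hot matrix 1(x_i in c_i) and the assignments.
    """
    d2 = ((v[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    assign = d2.argmin(axis=1)              # nearest centroid per sample
    q = np.zeros_like(d2)
    q[np.arange(len(v)), assign] = 1.0      # indicator: 1 iff x_i is in cluster c_i
    return q, assign

v = np.array([[0.0, 0.1], [0.9, 1.0], [0.1, 0.0]])
centroids = np.array([[0.0, 0.0], [1.0, 1.0]])
q, assign = estep_posterior(v, centroids)
```

Each row of `q` is one-hot: the sample contributes probability mass only to the cluster whose centroid is nearest, matching the indicator function above.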
(49) During the M-step 230, the ProtoNCE module 215 maximizes the lower-bound of Equation 6 as follows:
\[
\max_{\theta} \sum_{i=1}^{n} \sum_{c_i \in C} Q(c_i) \log p(x_i, c_i; \theta) \tag{7}
\]
(51) Under the assumption of a uniform prior over cluster centroids, the p(x.sub.i, c.sub.i; θ) may be represented as follows:
\[
p(x_i, c_i; \theta) = p(x_i; c_i, \theta)\, p(c_i; \theta) \tag{8}
\]
where the prior probability p(c.sub.i; θ) for each c.sub.i is set to the uniform value 1/k.
In some embodiments, the distribution around each prototype is an isotropic Gaussian, which leads to:
\[
p(x_i; c_i, \theta) = \frac{\exp\!\left(-\dfrac{(v_i - c_s)^2}{2\sigma_s^2}\right)}{\sum_{j=1}^{k} \exp\!\left(-\dfrac{(v_i - c_j)^2}{2\sigma_j^2}\right)} \tag{9}
\]
where v.sub.i=ƒ.sub.θ(x.sub.i) and x.sub.i∈c.sub.s. If the ProtoNCE module 215 applies the l.sub.2-normalization to both v and c, then (v−c).sup.2=2−2v·c. Combining Equations 3, 4, 6, 7, 8, and 9, the maximum log-likelihood estimation may be written as:
\[
\theta^{*} = \arg\min_{\theta} \sum_{i=1}^{n} -\log \frac{\exp(v_i \cdot c_s / \mu_s)}{\sum_{j=1}^{k} \exp(v_i \cdot c_j / \mu_j)} \tag{10}
\]
which is in the same form as the InfoNCE loss in Equation 1. Here μ∝σ.sup.2 denotes the concentration level of the feature distribution around a prototype (smaller μ means a more concentrated distribution). Therefore, instance-wise contrastive learning can be interpreted as a special case of prototypical contrastive learning, where the prototypes are instance features (i.e. C=V′), and the concentration of the distribution around each instance is the same (i.e. μ=τ).
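The l2-normalization identity used in the derivation above, (v−c).sup.2=2−2v·c, can be checked numerically:

```python
import numpy as np

# check ||v - c||^2 = 2 - 2 v.c for l2-normalized v and c
rng = np.random.default_rng(0)
v = rng.normal(size=5)
v /= np.linalg.norm(v)
c = rng.normal(size=5)
c /= np.linalg.norm(c)
lhs = ((v - c) ** 2).sum()         # squared Euclidean distance
rhs = 2.0 - 2.0 * float(v @ c)     # follows from ||v|| = ||c|| = 1
```

This identity is what turns the Gaussian exponent of Equation 9 into the dot-product similarity of Equation 10.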
(56) The ProtoNCE module 215 may sample r negative prototypes to calculate the normalization term. The ProtoNCE module 215 may also cluster the samples M times with different numbers of clusters K={k.sub.m}.sub.m=1.sup.M, which yields a more robust probability estimation of prototypes that encodes the hierarchical structure. Furthermore, an instance discrimination loss may be added to retain the property of local smoothness. The ProtoNCE function used by the ProtoNCE module 215 to determine the ProtoNCE loss may be defined as:
\[
\mathcal{L}_{\text{ProtoNCE}} = \sum_{i=1}^{n} -\left( \log \frac{\exp(v_i \cdot v_i' / \tau)}{\sum_{j=0}^{r} \exp(v_i \cdot v_j' / \tau)} + \frac{1}{M} \sum_{m=1}^{M} \log \frac{\exp(v_i \cdot c_s^{m} / \mu_s^{m})}{\sum_{j=0}^{r} \exp(v_i \cdot c_j^{m} / \mu_j^{m})} \right) \tag{11}
\]
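A sketch of the ProtoNCE loss along the lines of Equation 11. For simplicity the prototype denominators here sum over all prototypes rather than r sampled negative prototypes, and all names are illustrative, not the patented implementation:

```python
import numpy as np

def proto_nce(v, v_pos, v_neg, protos, phis, assigns, tau=0.1):
    """Sketch of the ProtoNCE loss (Equation 11), averaged over the batch.

    v, v_pos : (n, d) embeddings and their momentum (positive) embeddings
    v_neg    : (r, d) negative instance embeddings
    protos   : list of M arrays (k_m, d), prototypes per clustering granularity
    assigns  : list of M arrays (n,), cluster index s of each sample
    phis     : list of M arrays (k_m,), concentration mu of each prototype
    """
    n = len(v)
    # instance term (identical in form to InfoNCE)
    pos = np.exp((v * v_pos).sum(axis=1) / tau)
    neg = np.exp(v @ v_neg.T / tau).sum(axis=1)
    loss = -np.log(pos / (pos + neg))
    # prototype terms, averaged over the M clustering granularities
    for C, phi, s in zip(protos, phis, assigns):
        logits = (v @ C.T) / phi           # each prototype j scaled by its own mu_j
        num = np.exp(logits[np.arange(n), s])
        loss = loss - np.log(num / np.exp(logits).sum(axis=1)) / len(protos)
    return float(loss.mean())

def l2n(x):
    return x / np.linalg.norm(x, axis=1, keepdims=True)

rng = np.random.default_rng(1)
v = l2n(rng.normal(size=(4, 8)))
v_neg = l2n(rng.normal(size=(8, 8)))
protos = [v.copy()]                  # toy case: each sample is its own prototype
phis = [np.full(4, 0.1)]
good = proto_nce(v, v, v_neg, protos, phis, [np.arange(4)])
bad = proto_nce(v, v, v_neg, protos, phis, [np.roll(np.arange(4), 1)])
```

In this toy case the loss is lower when each embedding is assigned to its own prototype than when assignments are shuffled, as the prototype term rewards similarity to the assigned prototype.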
(58) As illustrated in
\[
\mu = \frac{\sum_{z=1}^{Z} \lVert v_z' - c \rVert_2}{Z \log(Z + \alpha)} \tag{12}
\]
where Z is the number of momentum features v.sub.z′ within the same cluster as prototype c, and α is a smoothing parameter to ensure that small clusters do not have an overly large μ. Also μ may be normalized for each set of prototypes C.sup.m such that they have a mean of τ.
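The per-cluster concentration estimate of Equation 12 can be sketched as follows; the function name and the example data are illustrative:

```python
import numpy as np

def concentration(feats, centroid, alpha=10.0):
    """Concentration mu of one cluster per Equation 12.

    feats    : (Z, d) momentum features v_z' assigned to this cluster
    centroid : (d,) the cluster's prototype c
    alpha    : smoothing so that small clusters do not get an overly large mu
    """
    Z = len(feats)
    total = np.linalg.norm(feats - centroid, axis=1).sum()
    return float(total / (Z * np.log(Z + alpha)))

c = np.zeros(3)
spread = np.array([[1.0, 0, 0], [0, 1.0, 0], [0, 0, 1.0], [-1.0, 0, 0]])
tight = 0.05 * spread                # same cluster size, features far closer to c
mu_loose = concentration(spread, c)
mu_tight = concentration(tight, c)
```

For two clusters of the same size, the loosely spread one receives a larger μ than the tightly packed one, which is exactly the behavior the scaling argument in the next paragraph relies on.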
(60) In some embodiments, in the ProtoNCE loss (Equation 11), μ.sub.s.sup.m acts as a scaling factor on the similarity between an embedding v.sub.i and its prototype c.sub.s.sup.m. With the proposed μ, the similarity for embeddings in a loose cluster (larger μ) is down-scaled, pulling them closer to the prototype. On the contrary, embeddings in a tight cluster (smaller μ) have an up-scaled similarity, and are thus less encouraged to approach the prototype. Therefore, representation learning with the ProtoNCE function yields more balanced clusters with similar concentration. This prevents a trivial solution where most embeddings collapse to a single cluster, a problem that could only be heuristically addressed by data-resampling in DeepCluster.
(61) In some embodiments, minimizing the proposed ProtoNCE loss may be considered as simultaneously maximizing the mutual information between V and all the prototypes {V′, C.sup.1, . . . , C.sup.M}. This leads to better representation learning, for two reasons. First, the encoder 210 may learn the shared information among prototypes and ignore the individual noise that exists in each prototype. The shared information is more likely to capture higher-level semantic knowledge. Second, when compared to instance features, prototypes have a larger mutual information (MI) with the class labels. Furthermore, training the encoder 210 using the ProtoNCE loss function may increase the MI between the instance features (or their assigned prototypes) and the ground-truth class labels for all images in a training dataset.
(62) In some embodiments, the PCL framework 130 can provide more insights into the nature of the learned prototypes. The optimization in Equation 10 is similar to optimizing the cluster-assignment probability p(s; x.sub.i, θ) using the cross-entropy loss, where the prototypes c represent weights for a linear classifier. With k-means clustering, the linear classifier has a fixed set of weights, namely the mean vectors of the representations in each cluster:
\[
c = \frac{1}{Z} \sum_{z=1}^{Z} v_z'
\]
A similar idea has been used for few-shot learning, where a non-parametric prototypical classifier performs better than a parametrized linear classifier.
(64) In some embodiments, the PCL framework 130 may be trained using the ImageNet-1M dataset, which contains approximately 1.28 million images in 1000 classes. Momentum encoder 205 or encoder 210 may be a ResNet-50, whose last fully-connected layer outputs a 128-D, L2-normalized feature. PCL framework 130 may perform data augmentation on images 140 with random crop, random color jittering, random horizontal flip, and random grayscale conversion. The PCL framework 130 may use SGD as an optimizer, with a weight decay of 0.0001, a momentum of 0.9, and a batch size of 256. The PCL framework 130 may train for 200 epochs, where the PCL framework 130 may warm-up the network in the first 20 epochs by only using the InfoNCE loss. The initial learning rate is 0.03 and may be multiplied by 0.1 at 120 and 160 epochs. In terms of the hyper-parameters, we set τ=0.1, α=10, and number of clusters K={25000, 50000, 100000}. We use the GPU k-means implementation in faiss which takes approximately 10 seconds. The clustering is performed every epoch, which introduces approximately ⅓ computational overhead due to a forward pass through the dataset. The number of negatives for ProtoNCE module 215 is set as r=16000.
(66) At line 1, algorithm 1 receives input which includes an encoder function ƒ.sub.θ, the training dataset X which could be images 140 or other unstructured data, and a number of clusters K={k.sub.m}.sub.m=1.sup.M.
(67) At line 2, a momentum encoder 205 is initialized to θ, which may be the weights of encoder 210.
(68) At line 3, a number of epochs is initialized using the MaxEpoch variable.
(69) At line 4, the momentum features V′ from the training dataset X are generated using the momentum encoder 205.
(70) At lines 4-8, the E-step 225 is performed, where the clustering module 235 clusters the V′ features into k.sub.m clusters, returns prototypes C.sup.m (line 6), and estimates the concentration μ.sub.m around each prototype using Equation 12 (line 7).
(71) At lines 9-14, the M-step 230 is performed. In the M-step 230, the images 140 (or other unstructured data) in the training dataset X may be loaded in minibatches and passed through encoder 210 and momentum encoder 205 at line 10. The ProtoNCE module 215 determines a loss function using the features from the encoder 210 and momentum encoder 205 at line 11 and as shown in Equation 11. At line 12, the encoder 210 is trained using the loss function which updates the weights of the encoder 210. At line 13, the weights of the momentum encoder 205 are updated with the weights of the encoder 210.
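The overall E-M loop just described can be mirrored on a toy problem. The following sketch uses synthetic 2-D data, a linear stand-in for the encoder, a single clustering granularity, the prototype term of the loss only, and finite-difference gradients; every name and hyper-parameter here is illustrative, not the patented implementation:

```python
import numpy as np

def encode(W, X):
    """Toy linear 'encoder' standing in for the CNN: project, then l2-normalize."""
    V = X @ W
    return V / np.linalg.norm(V, axis=1, keepdims=True)

def kmeans(V, k, iters=20, seed=0):
    """Plain k-means on momentum features (the E-step clustering)."""
    rng = np.random.default_rng(seed)
    C = V[rng.choice(len(V), size=k, replace=False)].copy()
    for _ in range(iters):
        assign = ((V[:, None, :] - C[None, :, :]) ** 2).sum(2).argmin(1)
        for c in range(k):
            if np.any(assign == c):
                C[c] = V[assign == c].mean(0)
    return C / np.linalg.norm(C, axis=1, keepdims=True), assign

def proto_term(W, X, C, assign, phi=0.1):
    """Prototype term of the ProtoNCE loss for a single granularity."""
    V = encode(W, X)
    logits = V @ C.T / phi
    logits -= logits.max(1, keepdims=True)          # numerical stability
    p = np.exp(logits) / np.exp(logits).sum(1, keepdims=True)
    return -np.log(p[np.arange(len(V)), assign] + 1e-12).mean()

def num_grad(f, W, eps=1e-5):
    """Finite-difference gradient (keeps the toy free of autograd)."""
    g = np.zeros_like(W)
    for idx in np.ndindex(W.shape):
        Wp, Wm = W.copy(), W.copy()
        Wp[idx] += eps
        Wm[idx] -= eps
        g[idx] = (f(Wp) - f(Wm)) / (2 * eps)
    return g

# two well-separated blobs of 2-D "images"
rng = np.random.default_rng(1)
X = np.concatenate([rng.normal(m, 0.1, size=(20, 2)) for m in ([0, 2], [2, 0])])
W = rng.normal(size=(2, 2))          # encoder weights theta
W_mom = W.copy()                     # momentum encoder weights theta'
history = []
for epoch in range(3):
    # E-step: cluster the momentum features into prototypes + assignments
    C, assign = kmeans(encode(W_mom, X), k=2, seed=epoch)
    f = lambda w: proto_term(w, X, C, assign)
    before = f(W)
    # M-step: gradient descent (with backtracking) on the contrastive loss
    for _ in range(10):
        g, lr = num_grad(f, W), 0.2
        cand = W - lr * g
        while f(cand) > f(W) and lr > 1e-8:
            lr *= 0.5
            cand = W - lr * g
        if f(cand) <= f(W):
            W = cand
    history.append((before, f(W)))
    W_mom = 0.99 * W_mom + 0.01 * W  # momentum update of theta'
```

Within each epoch the M-step drives the contrastive loss down for the prototypes fixed by the preceding E-step, while the momentum update keeps the clustering features stable, mirroring the structure of Algorithm 1.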
(73) Some examples of computing devices, such as computing device 100 may include non-transitory, tangible, machine readable media that include executable code that when run by one or more processors (e.g., processor 110) may cause the one or more processors to perform the processes of method 300. Some common forms of machine readable media that may include the processes of method 300 are, for example, floppy disk, flexible disk, hard disk, magnetic tape, any other magnetic medium, CD-ROM, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, RAM, PROM, EPROM, FLASH-EPROM, any other memory chip or cartridge, and/or any other medium from which a processor or computer is adapted to read.
(74) This description and the accompanying drawings that illustrate inventive aspects, embodiments, implementations, or applications should not be taken as limiting. Various mechanical, compositional, structural, electrical, and operational changes may be made without departing from the spirit and scope of this description and the claims. In some instances, well-known circuits, structures, or techniques have not been shown or described in detail in order not to obscure the embodiments of this disclosure. Like numbers in two or more figures represent the same or similar elements.
(75) In this description, specific details are set forth describing some embodiments consistent with the present disclosure. Numerous specific details are set forth in order to provide a thorough understanding of the embodiments. It will be apparent, however, to one skilled in the art that some embodiments may be practiced without some or all of these specific details. The specific embodiments disclosed herein are meant to be illustrative but not limiting. One skilled in the art may realize other elements that, although not specifically described here, are within the scope and the spirit of this disclosure. In addition, to avoid unnecessary repetition, one or more features shown and described in association with one embodiment may be incorporated into other embodiments unless specifically described otherwise or if the one or more features would make an embodiment non-functional.
(76) This application is further described with respect to the attached document in Appendix I, entitled “Unsupervised Representation Learning with Contrastive Prototypes,” 14 pages, which is considered part of this disclosure and the entirety of which is incorporated by reference.
(77) Although illustrative embodiments have been shown and described, a wide range of modification, change and substitution is contemplated in the foregoing disclosure and in some instances, some features of the embodiments may be employed without a corresponding use of other features. One of ordinary skill in the art would recognize many variations, alternatives, and modifications. Thus, the scope of the invention should be limited only by the following claims, and it is appropriate that the claims be construed broadly and in a manner consistent with the scope of the embodiments disclosed herein.