Classifiers
SDR Classifier
Implementation of an SDR classifier.
The SDR classifier takes the form of a single-layer classification network that takes SDRs as input and outputs a predicted distribution over classes.
class nupic.algorithms.sdr_classifier.SDRClassifier(steps=[1], alpha=0.001, actValueAlpha=0.3, verbosity=0)
Bases: nupic.serializable.Serializable
The SDR Classifier accepts a binary input pattern from the level below (the “activationPattern”) and information from the sensor and encoders (the “classification”) describing the true (target) input.
The SDR classifier maps input patterns to class labels. There are as many output units as the number of class labels or buckets (in the case of scalar encoders). The output is a probabilistic distribution over all class labels.
During inference, the output is calculated by first doing a weighted summation of all the inputs, and then applying a softmax nonlinearity to obtain the predicted distribution over class labels.
During learning, the connection weights between input units and output units are adjusted to maximize the likelihood of the model.
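To make the inference step concrete, here is a minimal numpy sketch of a weighted summation followed by a softmax; the variable names and shapes are illustrative assumptions, not the classifier's internal API.

  import numpy as np

  # Illustrative shapes: rows are input bits, columns are class buckets.
  numInputBits, numBuckets = 2048, 10
  weightMatrix = np.zeros((numInputBits, numBuckets))

  patternNZ = [1, 5, 9]  # indices of the active input bits (the SDR)

  # Weighted summation over only the active bits, then a softmax nonlinearity.
  activation = weightMatrix[patternNZ].sum(axis=0)
  expActivation = np.exp(activation - activation.max())  # numerically stable softmax
  predictedDistribution = expActivation / expActivation.sum()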
Example Usage:
  c = SDRClassifier(steps=[1], alpha=0.1, actValueAlpha=0.1, verbosity=0)

  # learning
  c.compute(recordNum=0, patternNZ=[1, 5, 9],
            classification={"bucketIdx": 4, "actValue": 34.7},
            learn=True, infer=False)

  # inference
  result = c.compute(recordNum=1, patternNZ=[1, 5, 9],
                     classification={"bucketIdx": 4, "actValue": 34.7},
                     learn=False, infer=True)

  # Print the top three predictions for 1 step out.
  topPredictions = sorted(zip(result[1], result["actualValues"]), reverse=True)[:3]
  for probability, value in topPredictions:
    print("Prediction of {} has probability of {}.".format(value, probability * 100.0))
References:
- Alex Graves. Supervised Sequence Labeling with Recurrent Neural Networks. PhD thesis, 2008.
- J. S. Bridle. Probabilistic interpretation of feedforward classification network outputs, with relationships to statistical pattern recognition. In F. Fogelman-Soulie and J. Herault, editors, Neurocomputing: Algorithms, Architectures and Applications, pp. 227-236, Springer-Verlag, 1990.
Parameters: - steps – (list) Sequence of the different steps of multi-step predictions to learn
- alpha – (float) The alpha used to adapt the weight matrix during learning. A larger alpha results in faster adaptation to the data.
- actValueAlpha – (float) Used to track the actual value within each bucket. A lower actValueAlpha results in longer term memory
- verbosity – (int) verbosity level, can be 0, 1, or 2
Raises: (ValueError) when record number does not increase monotonically.
compute(recordNum, patternNZ, classification, learn, infer)
Process one input sample.
This method is called by outer loop code outside the nupic-engine. We use this instead of the nupic engine compute() because our inputs and outputs aren’t fixed size vectors of reals.
Parameters: - recordNum – Record number of this input pattern. Record numbers normally increase sequentially by 1 each time unless there are missing records in the dataset. Knowing this information insures that we don’t get confused by missing records.
- patternNZ – List of the active indices from the output below. When the input is from TemporalMemory, this list should be the indices of the active cells.
- classification – dict of the classification information, where:
  - bucketIdx: list of indices of the encoder bucket
  - actValue: list of actual values going into the encoder
  classification can be None in inference mode.
- learn – (bool) if true, learn this sample
- infer – (bool) if true, perform inference
Returns: Dict containing inference results. There is one entry for each step in self.steps, where the key is the number of steps and the value is an array containing the relative likelihood for each bucketIdx, starting from bucketIdx 0.
There is also an entry containing the average actual value to use for each bucket. The key is 'actualValues'.
For example:
  {1: [0.1, 0.3, 0.2, 0.7],
   4: [0.2, 0.4, 0.3, 0.5],
   'actualValues': [1.5, 3.5, 5.5, 7.6]}
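Given a result dict shaped like the example above, the most likely one-step prediction can be read off as follows (a small sketch using numpy):

  import numpy as np

  result = {1: [0.1, 0.3, 0.2, 0.7],
            4: [0.2, 0.4, 0.3, 0.5],
            "actualValues": [1.5, 3.5, 5.5, 7.6]}

  # Most likely bucket one step ahead, and the value associated with that bucket.
  bestBucket = int(np.argmax(result[1]))
  print("1-step prediction: {} (likelihood {})".format(
      result["actualValues"][bestBucket], result[1][bestBucket]))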
infer(patternNZ, actValueList)
Return the inference value from one input sample. The actual learning happens in compute().
Parameters: - patternNZ – list of the active indices from the output below
- actValueList – list of actual values going into the encoder (as supplied in the classification information); used to populate the 'actualValues' entry of the returned dict
Returns: dict containing inference results, one entry for each step in self.steps. The key is the number of steps, the value is an array containing the relative likelihood for each bucketIdx starting from bucketIdx 0.
For example:
  {'actualValues': [0.0, 1.0, 2.0, 3.0],
   1: [0.1, 0.3, 0.2, 0.7],
   4: [0.2, 0.4, 0.3, 0.5]}
inferSingleStep(patternNZ, weightMatrix)
Perform inference for a single step. Given an SDR input and a weight matrix, return a predicted distribution.
Parameters: - patternNZ – list of the active indices from the output below
- weightMatrix – numpy array of the weight matrix
Returns: numpy array of the predicted class label distribution
class nupic.algorithms.sdr_classifier_factory.SDRClassifierFactory
Factory for instantiating SDR classifiers.
static create(*args, **kwargs)
Create an SDR classifier. The implementation of the SDR Classifier can be specified with the "implementation" keyword argument.
If no implementation is given, the SDRClassifierFactory uses the implementation specified in the Default NuPIC Configuration.
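A minimal usage sketch; passing implementation="py" here is an assumption about an available implementation name and is not documented on this page.

  from nupic.algorithms.sdr_classifier_factory import SDRClassifierFactory

  # "implementation" selects the backend; "py" is assumed here for illustration.
  # Remaining keyword arguments are presumably passed through to the classifier.
  clf = SDRClassifierFactory.create(steps=[1], alpha=0.1, implementation="py")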
KNN Classifier
class nupic.algorithms.knn_classifier.KNNClassifier(k=1, exact=False, distanceNorm=2.0, distanceMethod='norm', distThreshold=0, doBinarization=False, binarizationThreshold=0.5, useSparseMemory=True, sparseThreshold=0.1, relativeThreshold=False, numWinners=0, numSVDSamples=0, numSVDDims=None, fractionOfMax=None, verbosity=0, maxStoredPatterns=-1, replaceDuplicates=False, cellsPerCol=0, minSparsity=0.0)
Bases: nupic.serializable.Serializable
This class implements NuPIC's k Nearest Neighbor Classifier. KNN is very useful as a basic classifier for many situations. This implementation contains many enhancements that are useful for HTM experiments. These enhancements include an optimized C++ class for sparse vectors, support for continuous online learning, support for various distance methods (including Lp-norm and raw overlap), support for performing SVD on the input vectors (very useful for large vectors), support for a fixed-size KNN, and a mechanism to store custom IDs for each vector.
Parameters: - k – (int) The number of nearest neighbors used in the classification of patterns. Must be odd.
- exact – (boolean) If true, patterns must match exactly when assigning class labels
- distanceNorm – (int) When distance method is “norm”, this specifies the p value of the Lp-norm
- distanceMethod – (string) The method used to compute distance between input patterns and prototype patterns. The possible options are (see the sketch after this parameter list):
  - norm: When distanceNorm is 2, this is the Euclidean distance; when distanceNorm is 1, this is the Manhattan distance. In general: sum(abs(x - proto) ^ distanceNorm) ^ (1 / distanceNorm). The distances are normalized such that the farthest prototype from a given input is 1.0.
  - rawOverlap: Only appropriate when inputs are binary. This computes: (width of the input) - (# bits of overlap between input and prototype).
  - pctOverlapOfInput: Only appropriate for binary inputs. This computes: 1.0 - (# bits of overlap between input and prototype) / (# ON bits in input).
  - pctOverlapOfProto: Only appropriate for binary inputs. This computes: 1.0 - (# bits of overlap between input and prototype) / (# ON bits in prototype).
  - pctOverlapOfLarger: Only appropriate for binary inputs. This computes: 1.0 - (# bits of overlap between input and prototype) / max(# ON bits in input, # ON bits in prototype).
- distThreshold – (float) A threshold on the distance between learned patterns and a new pattern proposed to be learned. The distance must be greater than this threshold in order for the new pattern to be added to the classifier’s memory.
- doBinarization – (boolean) If True, then scalar inputs will be binarized.
- binarizationThreshold – (float) If doBinarization is True, this specifies the threshold for the binarization of inputs
- useSparseMemory – (boolean) If True, classifier will use a sparse memory matrix
- sparseThreshold – (float) If useSparseMemory is True, input variables whose absolute values are less than this threshold will be stored as zero
- relativeThreshold – (boolean) Flag specifying whether to multiply sparseThreshold by max value in input
- numWinners – (int) Number of elements of the input that are stored. If 0, all elements are stored
- numSVDSamples – (int) Number of samples that must occur before an SVD (Singular Value Decomposition) transformation will be performed. If 0, the transformation will never be performed
- numSVDDims – (string) Controls dimensions kept after SVD transformation. If “adaptive”, the number is chosen automatically
- fractionOfMax – (float) If numSVDDims is “adaptive”, this controls the smallest singular value that is retained as a fraction of the largest singular value
- verbosity – (int) Console verbosity level where 0 is no output and larger integers provide increasing levels of verbosity
- maxStoredPatterns – (int) Limits the maximum number of the training patterns stored. When KNN learns in a fixed capacity mode, the unused patterns are deleted once the number of stored patterns is greater than maxStoredPatterns. A value of -1 is no limit
- replaceDuplicates – (bool) A boolean flag that determines whether, during learning, the classifier replaces duplicates that match exactly, even if distThreshold is 0. Should be True for online learning
- cellsPerCol – (int) If >= 1, input is assumed to be organized into columns, in the same manner as the temporal memory AND whenever a new prototype is stored, only the start cell (first cell) is stored in any bursting column
- minSparsity – (float) If useSparseMemory is set, only vectors with sparsity >= minSparsity will be stored during learning. A value of 0.0 implies all vectors will be stored. A value of 0.1 implies only vectors with at least 10% sparsity will be stored
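The sketch below, referenced from the distanceMethod description above, constructs a classifier with the learn() and infer() methods documented later in this section, then reproduces the pctOverlapOfInput formula by hand; the toy patterns and categories are made up for illustration.

  import numpy as np
  from nupic.algorithms.knn_classifier import KNNClassifier

  knn = KNNClassifier(k=1, distanceMethod="rawOverlap")

  # Learn two dense binary patterns (isSparse=0, the default, means dense input).
  knn.learn([1, 0, 1, 1, 0, 0], inputCategory=0)
  knn.learn([0, 0, 1, 0, 1, 1], inputCategory=1)

  # Classify a new dense pattern; infer() returns
  # (winner, inferenceResult, dist, categoryDist), as documented below.
  winner, inferenceResult, dist, categoryDist = knn.infer(np.array([1, 0, 1, 0, 0, 0]))

  # Hand-computed pctOverlapOfInput distance between the same input and the
  # first prototype, mirroring the formula above:
  #   1.0 - (# bits of overlap) / (# ON bits in input)
  x = np.array([1, 0, 1, 0, 0, 0])
  proto = np.array([1, 0, 1, 1, 0, 0])
  overlap = np.logical_and(x != 0, proto != 0).sum()
  pctOverlapOfInput = 1.0 - float(overlap) / (x != 0).sum()  # -> 0.0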
clear()
Clears the state of the KNNClassifier.
closestOtherTrainingPattern(inputPattern, cat)
Return the closest training pattern that is not of the given category "cat".
Parameters: - inputPattern – The pattern whose closest neighbor is sought
- cat – Training patterns of this category will be ignored no matter their distance to inputPattern
Returns: A dense version of the closest training pattern, or None if no such patterns exist
closestTrainingPattern(inputPattern, cat)
Returns the closest training pattern to inputPattern that belongs to category "cat".
Parameters: - inputPattern – The pattern whose closest neighbor is sought
- cat – The required category of closest neighbor
Returns: A dense version of the closest training pattern, or None if no such patterns exist
computeSVD(numSVDSamples=0, finalize=True)
Compute the singular value decomposition (SVD). The SVD is a factorization of a real or complex matrix. It factors the matrix a as u * np.diag(s) * v, where u and v are unitary and s is a 1-D array of a's singular values.
Reason for computing the SVD:
There are cases where you want to feed a lot of vectors to the KNNClassifier. However, this can be slow. You can speed up training by (1) computing the SVD of the input patterns which will give you the eigenvectors, (2) only keeping a fraction of the eigenvectors, and (3) projecting the input patterns onto the remaining eigenvectors.
Note that all input patterns are projected onto the eigenvectors in the same fashion. Keeping only the highest eigenvectors increases training performance since it reduces the dimensionality of the input.
Parameters: - numSVDSamples – (int) the number of samples to use for the SVD computation.
- finalize – (bool) whether to apply SVD to the input patterns.
Returns: (array) The singular values of the stored-pattern matrix, sorted in descending order.
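To make the projection idea above concrete, here is a self-contained numpy sketch of the same steps (factor the stored-pattern matrix, keep the leading components, project); it does not touch the classifier's internal state, and the shapes are illustrative.

  import numpy as np

  patterns = np.random.rand(100, 50)  # 100 stored patterns, 50 input dimensions

  # Factor the pattern matrix: patterns ~= u @ np.diag(s) @ v
  u, s, v = np.linalg.svd(patterns, full_matrices=False)

  # Keep only components whose singular value is at least fractionOfMax of the
  # largest singular value (compare getAdaptiveSVDDims below).
  fractionOfMax = 0.001
  numDims = int((s >= s[0] * fractionOfMax).sum())

  # Project every pattern onto the retained right singular vectors.
  projected = patterns.dot(v[:numDims].T)  # shape: (100, numDims)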
doIteration()
Utility method to increment the iteration index. Intended for models that don't learn each timestep.
finishLearning()
Used for batch scenarios. This method needs to be called between learning and inference.
getAdaptiveSVDDims(singularValues, fractionOfMax=0.001)
Compute the number of eigenvectors (singularValues) to keep.
Parameters: - singularValues – (array) singular values, sorted in descending order
- fractionOfMax – (float) smallest singular value to retain, expressed as a fraction of the largest singular value
Returns: (int) the number of dimensions to keep
getClosest(inputPattern, topKCategories=3)
Returns the index of the pattern that is closest to inputPattern, the distances of all patterns to inputPattern, and the indices of the k closest categories.
getDistances(inputPattern)
Return the distances between the input pattern and all other stored patterns.
Parameters: inputPattern – pattern to check distance with
Returns: (distances, categories) numpy arrays of the same length, where:
- distances: the distance from the input pattern to each stored pattern
- categories: category index for each element of distances
getNumPartitionIds()
Returns: the number of unique partition Ids stored.
getOverlaps(inputPattern)
Return the degree of overlap between an input pattern and each category stored in the classifier. The overlap is computed by computing:
  logical_and(inputPattern != 0, trainingPattern != 0).sum()
Parameters: inputPattern – pattern to check overlap of
Returns: (overlaps, categories) Two numpy arrays of the same length, where:
- overlaps: an integer overlap amount for each category
- categories: category index for each element of overlaps
getPartitionId(i)
Gets the partition id given an index.
Parameters: i – index of partition
Returns: the partition id associated with pattern i. Returns None if no id is associated with it.
getPartitionIdKeys()
Returns: a list containing unique (non-None) partition Ids (just the keys)
getPartitionIdList()
Returns: a list of complete partition id objects
getPattern(idx, sparseBinaryForm=False, cat=None)
Gets a training pattern either by index or category number.
Parameters: - idx – Index of the training pattern
- sparseBinaryForm – If true, returns a list of the indices of the non-zero bits in the training pattern
- cat – If not None, get the first pattern belonging to category cat. If this is specified, idx must be None.
Returns: The training pattern with specified index
getPatternIndicesWithPartitionId(partitionId)
Returns: a list of pattern indices corresponding to this partitionId. Returns an empty list if there are none.
infer(inputPattern, computeScores=True, overCategories=True, partitionId=None)
Finds the category that best matches the input pattern. Returns the winning category index as well as a distribution over all categories.
Parameters: - inputPattern – (list or array) The pattern to be classified. This must be a dense representation of the array (e.g. [0, 0, 1, 1, 0, 1]).
- computeScores – NO EFFECT
- overCategories – NO EFFECT
- partitionId – (int) If provided, all training vectors whose partitionId equals the one given are ignored. For example, this may be used to perform k-fold cross validation without repopulating the classifier. First partition all the data into k equal partitions numbered 0, 1, 2, ..., and then call learn() for each vector, passing in its partitionId. Then, during inference, by passing in the partition ID in the call to infer(), all other vectors with the same partitionId are ignored, simulating the effect of repopulating the classifier while omitting the training vectors in the same partition (see the sketch after the return description below).
Returns: 4-tuple (winner, inferenceResult, dist, categoryDist), where:
- winner: The category with the greatest number of nearest neighbors within the k nearest neighbors. If the inferenceResult contains no neighbors, the value of winner is None. This can happen, for example, in cases of exact matching, if there are no stored vectors, or if minSparsity is not met.
- inferenceResult: A list of length numCategories; each entry contains the number of neighbors within the top k neighbors that are in that category.
- dist: A list of length numPrototypes. Each entry is the distance from the unknown to that prototype. All distances are between 0.0 and 1.0.
- categoryDist: A list of length numCategories. Each entry is the distance from the unknown to the nearest prototype of that category. All distances are between 0 and 1.0.
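A sketch of the k-fold cross-validation pattern described for the partitionId parameter above; the data, fold count, and labels are invented for illustration.

  import numpy as np
  from nupic.algorithms.knn_classifier import KNNClassifier

  knn = KNNClassifier(k=1)
  patterns = (np.random.rand(20, 32) > 0.7).astype(np.float32)  # 20 toy binary patterns
  labels = np.random.randint(0, 3, size=20)                     # 3 toy categories

  # Train once, tagging each vector with its fold number (0..4).
  for i, (pattern, label) in enumerate(zip(patterns, labels)):
      knn.learn(pattern, int(label), partitionId=i % 5)

  # Evaluate fold 0: vectors stored with partitionId 0 are ignored during
  # inference, so they behave like a held-out test set without retraining.
  for i in range(0, 20, 5):
      winner, inferenceResult, dist, categoryDist = knn.infer(patterns[i], partitionId=0)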
learn(inputPattern, inputCategory, partitionId=None, isSparse=0, rowID=None)
Train the classifier to associate the specified input pattern with a particular category.
Parameters: - inputPattern – (list) The pattern to be assigned a category. If isSparse is 0, this should be a dense array (both ON and OFF bits present). Otherwise, if isSparse > 0, this should be a list of the indices of the non-zero bits in sorted order
- inputCategory – (int) The category to be associated to the training pattern
- partitionId – (int) partitionID allows you to associate an id with each input vector. It can be used to associate input patterns stored in the classifier with an external id. This can be useful for debugging or visualizing. Another use case is to ignore vectors with a specific id during inference (see description of infer() for details). There can be at most one partitionId per stored pattern (i.e. if two patterns are within distThreshold, only the first partitionId will be stored). This is an optional parameter.
- isSparse – (int) 0 if the input pattern is a dense representation. When the input pattern is a list of non-zero indices, then isSparse is the number of total bits (n). E.g. for the dense array [0, 1, 1, 0, 0, 1], isSparse should be 0. For the equivalent sparse representation [1, 2, 5] (which specifies the indices of active bits), isSparse should be 6, which is the total number of bits in the input space.
- rowID – (int) UNKNOWN
Returns: The number of patterns currently stored in the classifier
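The dense and sparse forms described under isSparse encode the same pattern; a brief sketch of the equivalence:

  from nupic.algorithms.knn_classifier import KNNClassifier

  knn = KNNClassifier(k=1)

  # Dense form: every bit is listed; isSparse=0 (the default).
  knn.learn([0, 1, 1, 0, 0, 1], inputCategory=0, isSparse=0)

  # Sparse form: only the indices of the ON bits, with isSparse giving the
  # total number of bits in the input space (6 here).
  knn.learn([1, 2, 5], inputCategory=0, isSparse=6)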
prototypeSetCategory(idToCategorize, newCategory)
Allows ids to be assigned a category and subsequently enables users to use:
- removeCategory()
- closestTrainingPattern()
- closestOtherTrainingPattern()
remapCategories(mapping)
Change the category indices.
Used by the Network Builder to keep the category indices in sync with the ImageSensor categoryInfo when the user renames or removes categories.
Parameters: mapping – List of new category indices. For example, mapping=[2,0,1] would change all vectors of category 0 to be category 2, category 1 to 0, and category 2 to 1
removeCategory(categoryToRemove)
Remove all patterns of the specified category. There are two caveats. First, this is a potentially slow operation. Second, pattern indices will shift if patterns before them are removed.
Parameters: categoryToRemove – Category label to remove
removeIds(idsToRemove)
Remove the patterns at the given row indices. There are two caveats. First, this is a potentially slow operation. Second, pattern indices will shift if patterns before them are removed.
Parameters: idsToRemove – A list of row indices to remove.
setCategoryOfVectors(vectorIndices, categoryIndices)
Change the category associated with the given vector(s).
Used by the Network Builder to move vectors between categories, to enable categories, and to invalidate vectors by setting the category to -1.
Parameters: - vectorIndices – Single index or list of indices
- categoryIndices – Single index or list of indices. Can also be a single index when vectorIndices is a list, in which case the same category will be used for all vectors