Metrics

MetricsManager

class nupic.frameworks.opf.prediction_metrics_manager.MetricsManager(metricSpecs, fieldInfo, inferenceType)

    This class handles the computation of metrics. It takes an inferenceType and assumes that it is associated with a single model.

getMetricDetails(metricLabel)

    Gets detailed info about a given metric, in addition to its value. This may include any statistics or auxiliary data that are computed for a given metric.

    metricLabel: The string label of the given metric (see metrics.MetricSpec)

    Returns: A dictionary of metric information, as returned by opf.metric.Metric.getMetric()

getMetricLabels()

    Return the list of labels for the metrics that are being calculated.

getMetrics()

    Gets the current metric values.

    Returns: A dictionary where each key is the metric name and each value is its scalar value. Same as the output of update().

update(results)

    Compute the new metric values, given the next inference/ground truth values.

    results: An opf_utils.ModelResult object that was computed during the last iteration of the model

    Returns: A dictionary where each key is the metric name and each value is its scalar value.
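
The following is a minimal sketch of driving a MetricsManager by hand; in a real experiment the OPF experiment runner constructs and updates it for you. The metric name 'aae', the field name 'consumption', and the model and record variables are assumptions for illustration.

    from nupic.frameworks.opf.metrics import MetricSpec
    from nupic.frameworks.opf.opf_utils import InferenceType
    from nupic.frameworks.opf.prediction_metrics_manager import MetricsManager

    metricSpecs = (
        MetricSpec(metric='aae', inferenceElement='prediction',
                   field='consumption', params={'window': 100}),
    )

    # 'model' is assumed to be an already-created OPF model
    manager = MetricsManager(metricSpecs,
                             fieldInfo=model.getFieldInfo(),
                             inferenceType=InferenceType.TemporalNextStep)

    # 'record' is assumed to be the next input record (a dict)
    result = model.run(record)          # an opf_utils.ModelResult
    metrics = manager.update(result)    # {metricLabel: scalarValue, ...}
    print(manager.getMetricLabels())
    print(manager.getMetrics())         # same mapping as the last update()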

MetricSpec

class nupic.frameworks.opf.metrics.MetricSpec(metric, inferenceElement, field=None, params=None)

    This class represents a single metric specification in the TaskControl block.

classmethod getInferenceTypeFromLabel(label)

    Extracts the PredictionKind (temporal vs. non-temporal) from the given metric label.

    label: A label (string) for a metric spec generated by getLabel() (below)

    Returns: An InferenceType value

getLabel(inferenceType=None)

    Helper method that generates a unique label for a MetricSpec / InferenceType pair. The label is formatted as follows:

        <predictionKind>:<metric type>:(paramName=value)*:field=<fieldname>

    For example:

        classification:aae:paramA=10.2:paramB=20:window=100:field=pounds
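
For instance, a sketch of label generation, reusing the 'aae' metric from the example above with an illustrative 'pounds' field (the exact prefix of the generated label depends on the inference element):

    from nupic.frameworks.opf.metrics import MetricSpec
    from nupic.frameworks.opf.opf_utils import InferenceType

    spec = MetricSpec(metric='aae', inferenceElement='prediction',
                      field='pounds', params={'window': 100})
    print(spec.getLabel(InferenceType.TemporalNextStep))
    # expected to look like: prediction:aae:window=100:field=pounds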

Metrics Interface

class nupic.frameworks.opf.metrics.MetricsIface(metricSpec)

    A metrics module compares a prediction Y to corresponding ground truth X and returns a single measure representing the “goodness” of the prediction. It is up to the implementation to determine how this comparison is made.

addInstance(groundTruth, prediction, record=None, result=None)

    Add one instance consisting of ground truth and a prediction.

    groundTruth: The actual measured value at the current timestep

    prediction: The value predicted by the network at the current timestep

    groundTruthEncoding: The binary encoding of the groundTruth value (as a numpy array). Right now this is only used by CLA networks.

    predictionEncoding: The binary encoding of the prediction value (as a numpy array). Right now this is only used by CLA networks.

    result: A ModelResult object (see opf_utils.py)

    Returns: The average error as computed over the metric's window size

getMetric()

    Returns: {value: <current measurement>, "stats": {<stat>: <value>, ...}}

    The metric name is defined by the MetricsIface implementation. "stats" is expected to contain further information relevant to the given metric, for example the number of timesteps represented in the current measurement. All stats are implementation-defined, and "stats" can be None.
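
To illustrate the interface, here is a minimal sketch of a custom metric that tracks a running average absolute error over all records seen; a production metric would typically also honor self.spec.params['window'].

    from nupic.frameworks.opf.metrics import MetricsIface

    class RunningAAE(MetricsIface):
        """Running average absolute error; illustrative only."""

        def __init__(self, metricSpec):
            self.spec = metricSpec
            self._totalError = 0.0
            self._steps = 0
            self.value = None

        def addInstance(self, groundTruth, prediction, record=None, result=None):
            if groundTruth is None or prediction is None:
                return self.value              # skip missing data
            self._totalError += abs(groundTruth - prediction)
            self._steps += 1
            self.value = self._totalError / self._steps
            return self.value

        def getMetric(self):
            return {'value': self.value, 'stats': {'steps': self._steps}}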

AggregateMetric

class nupic.frameworks.opf.metrics.AggregateMetric(metricSpec)

    Bases: nupic.frameworks.opf.metrics.MetricsIface

    Partial implementation of the metrics interface for metrics that accumulate an error and compute an aggregate score, potentially over some window of previous data. This is a convenience class that can serve as the base class for a wide variety of metrics.

accumulate(groundTruth, prediction, accumulatedError, historyBuffer, result)

    Updates the accumulated error given the prediction and the ground truth.

    groundTruth: Actual value that is observed for the current timestep

    prediction: Value predicted by the network for the given timestep

    accumulatedError: The total accumulated score from the previous predictions (possibly over some finite window)

    historyBuffer: A buffer of the last <self.window> ground truth values that have been observed. If historyBuffer = None, no history is being kept.

    result: A ModelResult object (see opf_utils.py), used for advanced metric calculation (e.g., MetricNegativeLogLikelihood)

    Returns: The new accumulated error. That is: self.accumulatedError = self.accumulate(groundTruth, predictions, accumulatedError)

    historyBuffer should also be updated in this method. self.spec.params["window"] indicates the maximum size of the window.

aggregate(accumulatedError, historyBuffer, steps)

    Computes the final aggregated score given the accumulated error.

    accumulatedError: The total accumulated score from the previous predictions (possibly over some finite window)

    historyBuffer: A buffer of the last <self.window> ground truth values that have been observed. If historyBuffer = None, no history is being kept.

    steps: The total number of (groundTruth, prediction) pairs that have been passed to the metric. This does not include pairs where groundTruth = SENTINEL_VALUE_FOR_MISSING_DATA.

    Returns: The new aggregate (final) error measure.
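
As a sketch of how the two hooks fit together, the following hypothetical subclass accumulates absolute errors over a window. It assumes the base class supplies a list-like historyBuffer and drives accumulate()/aggregate() as described above; note that, unlike the description above, the buffer here holds per-record errors rather than raw ground truth values.

    from nupic.frameworks.opf.metrics import AggregateMetric

    class WindowedAbsError(AggregateMetric):
        """Windowed average absolute error; illustrative only."""

        def accumulate(self, groundTruth, prediction, accumulatedError,
                       historyBuffer, result=None):
            error = abs(groundTruth - prediction)
            accumulatedError += error
            if historyBuffer is not None:
                historyBuffer.append(error)
                # Evict the oldest error once the window is full
                if len(historyBuffer) > self.spec.params['window']:
                    accumulatedError -= historyBuffer.pop(0)
            return accumulatedError

        def aggregate(self, accumulatedError, historyBuffer, steps):
            n = len(historyBuffer) if historyBuffer else steps
            return accumulatedError / max(n, 1)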

MetricPassThruPrediction

class nupic.frameworks.opf.metrics.MetricPassThruPrediction(metricSpec)

    Bases: nupic.frameworks.opf.metrics.MetricsIface

    This is not a metric, but rather a facility for passing the predictions generated by a baseline metric through to the prediction output cache produced by a model.

    For example, if you wanted to see the predictions generated by the TwoGram metric, you would specify 'PassThruPredictions' as the 'errorMetric' parameter.

    This metric class simply takes the prediction and outputs it as the aggregateMetric value.

addInstance(groundTruth, prediction, record=None, result=None)

    Compute and store the metric value.

getMetric()

    Return the metric value.
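
A sketch of wiring this up through a MetricSpec; the metric name 'two_gram', the inference element 'encodings', and the 'passThruPrediction' spelling of the pass-through metric are assumptions for illustration:

    from nupic.frameworks.opf.metrics import MetricSpec

    spec = MetricSpec(metric='two_gram', inferenceElement='encodings',
                      field='consumption',
                      params={'errorMetric': 'passThruPrediction',
                              'window': 100})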

CustomErrorMetric

class nupic.frameworks.opf.metrics.CustomErrorMetric(metricSpec)

    Bases: nupic.frameworks.opf.metrics.MetricsIface

    Custom error metric class that handles user-defined error metrics.

class CircularBuffer(length)

    Implementation of a fixed-size circular buffer with constant-time random access.

CustomErrorMetric.expValue(pred)

    Helper function to return a scalar value representing the expected value of a probability distribution.

CustomErrorMetric.mostLikely(pred)

    Helper function to return a scalar value representing the most likely outcome given a probability distribution.
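
For example, given a probability distribution expressed as a {value: probability} dict (the distribution is illustrative, and 'metric' is assumed to be a CustomErrorMetric instance):

    dist = {10.0: 0.1, 20.0: 0.7, 30.0: 0.2}

    print(metric.expValue(dist))    # 0.1*10 + 0.7*20 + 0.2*30 = 21.0
    print(metric.mostLikely(dist))  # 20.0, the highest-probability outcome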

MetricMovingMode

class nupic.frameworks.opf.metrics.MetricMovingMode(metricSpec)

    Bases: nupic.frameworks.opf.metrics.AggregateMetric

    Computes an error metric based on a moving-mode prediction.
MetricTrivial

class nupic.frameworks.opf.metrics.MetricTrivial(metricSpec)

    Bases: nupic.frameworks.opf.metrics.AggregateMetric

    Computes a metric against the ground truth N steps ago. The metric to compute is designated by the 'errorMetric' entry in the metric params.
MetricTwoGram

class nupic.frameworks.opf.metrics.MetricTwoGram(metricSpec)

    Bases: nupic.frameworks.opf.metrics.AggregateMetric

    Computes an error metric based on one-grams. The groundTruth passed into this metric is the encoded output of the field (an array of 1s and 0s).
MetricAccuracy

class nupic.frameworks.opf.metrics.MetricAccuracy(metricSpec)

    Bases: nupic.frameworks.opf.metrics.AggregateMetric

    Computes simple accuracy for an enumerated type. All inputs are treated as discrete members of a set; for example, 0.5 is only a correct response if the ground truth is exactly 0.5. Inputs can be strings, integers, or reals.
MetricAveError

class nupic.frameworks.opf.metrics.MetricAveError(metricSpec)

    Bases: nupic.frameworks.opf.metrics.AggregateMetric

    Simply the inverse of the accuracy metric. This is more consistent with scalar metrics, because they all report an error to be minimized.
MetricNegAUC

class nupic.frameworks.opf.metrics.MetricNegAUC(metricSpec)

    Bases: nupic.frameworks.opf.metrics.AggregateMetric

    Computes -1 * AUC (Area Under the Curve) of the ROC (Receiver Operating Characteristic) curve. We compute -1 * AUC because metrics are optimized to be LOWER when running hypersearch.

    For this, we assume that category 1 is the "positive" category, and we generate an ROC curve with the TPR (True Positive Rate) of category 1 on the y-axis and the FPR (False Positive Rate) on the x-axis.

accumulate(groundTruth, prediction, accumulatedError, historyBuffer, result=None)

    Accumulate history of groundTruth and "prediction" values.

    For this metric, groundTruth is the actual category and "prediction" is a dict containing one top-level item with a key of 0 (meaning this is the 0-step classification) and a value which is another dict containing the probability for each category as output by the classifier. For example, if the classifier said that category 0 had a 0.6 probability and category 1 had a 0.4 probability, the inner dict would be: {0: 0.6, 1: 0.4}
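
So a single update might look like the following sketch, where 'metric' is assumed to be a MetricNegAUC instance and the probabilities are illustrative:

    # The outer key 0 marks the 0-step classification; the inner dict
    # holds the per-category probabilities from the classifier.
    metric.addInstance(groundTruth=1, prediction={0: {0: 0.6, 1: 0.4}})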

MetricMultiStep

class nupic.frameworks.opf.metrics.MetricMultiStep(metricSpec)

    Bases: nupic.frameworks.opf.metrics.AggregateMetric

    This is an "uber" metric which is used to apply one of the other basic metrics to a specific step in a multi-step prediction.

    The specParams are expected to contain:

        'errorMetric': name of the basic metric to apply
        'steps': compare prediction['steps'] to the current ground truth

    Note that the metrics manager has already performed the time shifting for us: it passes us the prediction element from 'steps' steps ago and asks us to compare that to the current ground truth.
When multiple steps of prediction are requested, we average the results of the underlying metric for each step.
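
A sketch of a typical spec, applying an assumed 'aae' error metric to the 1-step-ahead slice of the multi-step predictions (the names follow common OPF experiment configurations):

    from nupic.frameworks.opf.metrics import MetricSpec

    spec = MetricSpec(metric='multiStep',
                      inferenceElement='multiStepBestPredictions',
                      field='consumption',
                      params={'errorMetric': 'aae', 'steps': 1,
                              'window': 1000})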