nodes using the mdp-toolkit node interface
initialization - alignment of spike waveform sets
detector nodes for capacitive artifacts in multi-channeled data
These detectors find events and event epochs in potentially multi-channeled data signals. Mostly, you will want to reset the internals of the detector after processing a chunk of data. There are different kinds of detectors; the common product of all detectors is the set of discrete events or epochs found in the data signal.
DATA_TYPE: fixed to float32 (single precision)
Bases: botmpy.nodes.spike_detection.ThresholdDetectorNode
detects artifacts by detecting zero-crossing frequencies
For a zero-mean Gaussian process, the zero-crossing rate (zcr) is independent of its moments and approaches 0.5 as the integration window size approaches infinity.
The capacitive artifacts seen in the Munk dataset have a significantly lower frequency, such that the zcr decreases to 0.1 and below for the integration window sizes relevant to our application. Detecting epochs where the zcr deviates significantly from the expectation under a coloured Gaussian noise process can thus be used to detect artifact epochs.
The zero-crossing rate (zcr) is given by the convolution of a moving-average window (configurable to use other weighting methods) with the XOR of the sign bits of X(t) and X(t+1).
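The computation described above can be sketched as follows; this is a minimal numpy illustration (the function name is hypothetical, not the botmpy API), using the moving-average weighting:

```python
import numpy as np

def zero_crossing_rate(x, window=64):
    """Local zero-crossing rate of a 1D signal.

    The XOR of the sign bits of x[t] and x[t+1] marks a crossing;
    convolving that indicator with a normalised moving-average window
    yields the local crossing rate per sample.
    """
    x = np.asarray(x, dtype=np.float32)
    crossings = (np.signbit(x[:-1]) ^ np.signbit(x[1:])).astype(np.float32)
    kernel = np.ones(window, dtype=np.float32) / window
    return np.convolve(crossings, kernel, mode='same')
```

For white Gaussian noise the result hovers around 0.5; for a slow artifact-like oscillation it drops well below 0.1, which is the deviation the detector thresholds on.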
Bases: botmpy.nodes.spike_detection.ThresholdDetectorNode
detects artifacts by identifying unwanted frequency packages in the spectrum of the signal
abstract base classes derived from MDP nodes
Bases: object
Bases: object
allows mdp.Node to reset to training state
This is a mixin class for subclasses of mdp.Node. To use it inherit from mdp.Node and put this mixin as the first superclass.
A node of this kind is an mdp.signal_node.Cumulator that can have its training phase reinitialised once a batch of cumulated data has been processed. This is useful for online algorithms that derive their parameters from the batch of data currently under consideration (e.g. stochastic thresholding).
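The mixin pattern described above can be sketched as follows; all class and attribute names here are hypothetical illustrations of the convention (inherit the mixin first so its methods win in the MRO), not the botmpy API:

```python
class ResetMixin(object):
    """Sketch of a reset mixin: put it FIRST in the superclass list so
    reset() can restore the node to its pre-training state."""

    def reset(self):
        # return the node to its untrained state
        self._is_trained = False
        self._reset()

    def _reset(self):
        raise NotImplementedError


class CumulatingNode(ResetMixin):
    """In botmpy this would read: class CumulatingNode(ResetMixin, mdp.Node)."""

    def __init__(self):
        self._is_trained = False
        self._mean = None

    def train(self, data):
        # derive a parameter from the current batch of cumulated data
        self._mean = sum(data) / float(len(data))
        self._is_trained = True

    def _reset(self):
        self._mean = None
```

After processing a batch, calling `reset()` discards the batch-derived parameters so the next chunk can be trained on from scratch.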
Bases: object
initialization - clustering of spikes in feature space
Bases: botmpy.nodes.base_nodes.ResetNode
interface for clustering algorithms
Bases: botmpy.nodes.cluster.ClusteringNode
clustering with model order selection to learn a mixture model
Assuming the data are prewhitened spikes, possibly in some condensed representation (e.g. PCA), the problem is to find the correct number of components and their corresponding means. The covariance matrix of all components is assumed to be the same, as the variation in the data is produced by an additive noise process. Further, the covariance matrix can be assumed to be the identity matrix (or a scaled version thereof, due to estimation errors; thus a spherical covariance).
To increase performance, it is assumed that all necessary pre-processing (e.g. alignment, resampling, (noise-)whitening) has already been taken care of, so as to ensure optimal clustering performance.
So we have to find the number of components and their means in a homoscedastic clustering problem. The goodness of fit is evaluated with a likelihood-based criterion that is penalised for an increasing number of model parameters, to prevent overfitting (ref: BIC). Minimising this criterion yields the most likely model.
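A minimal sketch of this model-order selection, under the stated assumptions (equal-weight, unit-variance spherical components on prewhitened data); the function names and the naive k-means refinement are illustrative choices, not the botmpy implementation:

```python
import numpy as np

def spherical_gmm_loglik(data, means):
    """Log-likelihood under an equal-weight mixture of unit-variance
    spherical Gaussians (the homoscedastic, prewhitened assumption)."""
    n, d = data.shape
    # squared distances to each mean -> shape (n, k)
    d2 = ((data[:, None, :] - means[None, :, :]) ** 2).sum(-1)
    log_comp = -0.5 * d2 - 0.5 * d * np.log(2 * np.pi) - np.log(len(means))
    m = log_comp.max(1, keepdims=True)  # stable log-sum-exp
    return float((m[:, 0] + np.log(np.exp(log_comp - m).sum(1))).sum())

def select_model_order(data, k_max=5, n_iter=20, seed=0):
    """Pick k minimising BIC = -2 log L + (k * d) * log n; only the
    k * d mean parameters are free since the covariance is fixed."""
    n, d = data.shape
    rng = np.random.RandomState(seed)
    best_bic, best_k = np.inf, None
    for k in range(1, k_max + 1):
        means = data[rng.choice(n, k, replace=False)]
        for _ in range(n_iter):  # naive k-means refinement of the means
            lbl = ((data[:, None, :] - means[None]) ** 2).sum(-1).argmin(1)
            for j in range(k):
                if np.any(lbl == j):
                    means[j] = data[lbl == j].mean(0)
        bic = -2 * spherical_gmm_loglik(data, means) + k * d * np.log(n)
        if bic < best_bic:
            best_bic, best_k = bic, k
    return best_k
```

For two well-separated unit-variance clusters, the BIC minimum is attained at k = 2: the likelihood gain of extra components does not outweigh the `k * d * log n` penalty.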
implementation of a filter bank consisting of a set of filters
Bases: Node
abstract class that handles filter instances and their outputs
All filters constituting the filter bank have to be of the same temporal extent (Tf) and process the same channel set.
There are two different index sets, abbreviated “idx” and “key”. The “idx” is the index of a filter in self.bank and thus a unique, hashable identifier, whereas a “key” is an index into a subset of the idx set. E.g., an index into list(self._idx_active_set) would be a “key”.
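A small illustration of the two index sets (the dictionary contents here are hypothetical placeholders, only the idx/key convention mirrors the text):

```python
# "idx": unique, hashable identifier into self.bank
bank = {0: 'filter_a', 1: 'filter_b', 2: 'filter_c'}
# subset of idx values, e.g. the currently active filters
idx_active_set = {0, 2}

active = sorted(idx_active_set)   # e.g. list(self._idx_active_set)
key = active.index(2)             # "key": position within the active subset
idx = active[key]                 # mapping a key back to its idx
```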
activates a filter in the filter bank
Filters are never deleted; they can be de-/reactivated, and their activation state is respected when computing the filter output of the filter bank.
No effect if idx not in self.bank.
adds a new filter to the filter bank
Parameters: xi (ndarray) – template to build the filter for
deactivates a filter in the filter bank
Filters are never deleted; they can be de-/reactivated, and their activation state is respected when computing the filter output of the filter bank.
No effect if idx not in self.bank.
filter set of active filters
number of channels
number of filters
template set of active filters
temporal filter extent [samples]
cross correlation tensor for active filters
filter classes for linear filters in the time domain
Bases: Node
linear filter in the time domain
This node applies a linear filter to the data and returns the filtered data. The derivation of the filter (f) from the pattern (xi) is specified in the implementing subclass via the ‘filter_calculation’ classmethod. The template will be averaged from a ringbuffer of observations. The covariance matrix is supplied from an external covariance estimator.
append one waveform to the xi_buffer
covariance estimator
append an iterable of waveforms to the xi_buffer
filter (multi-channeled)
filter (concatenated)
fill all of the xi_buffer with wf
ABSTRACT METHOD FOR FILTER CALCULATION
Implement this in a meaningful way in any subclass. The method should return the filter given the multi-channeled template xi, the covariance estimator ce, and the channel set cs, plus any number of optional arguments and keywords. The filter is usually of the same shape as the pattern xi.
number of channels
plots the current buffer on the passed axis handle
signal-to-noise ratio (Mahalanobis distance)
temporal extent [samples]
template (multi-channeled)
template (concatenated)
Bases: botmpy.nodes.linear_filter.FilterNode
Matched filters in the time domain optimise the signal-to-noise ratio (SNR) of the matched pattern with respect to the covariance matrix describing the noise background (deconvolution).
Bases: botmpy.nodes.linear_filter.FilterNode
Matched filters in the time domain optimise the signal-to-noise ratio (SNR) of the matched pattern with respect to the covariance matrix describing the noise background (deconvolution). Here the deconvolution output is normalised such that the response to the pattern is a peak of unit amplitude.
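The standard matched-filter construction behind both variants can be sketched as follows (a minimal single-channel illustration with a concatenated pattern, assuming the covariance matrix is given directly rather than through a covariance estimator; the function name is hypothetical):

```python
import numpy as np

def matched_filter(xi, C, normalise=True):
    """Matched filter for pattern xi under noise covariance C.

    f = C^-1 xi maximises the SNR of xi against noise with covariance C.
    Dividing by xi^T C^-1 xi scales the filter so that its response to
    the pattern itself is a peak of unit amplitude (the normalised case).
    """
    f = np.linalg.solve(C, xi)
    if normalise:
        f = f / float(np.dot(xi, f))
    return f
```

For white noise (C = I) the unnormalised filter reduces to the template itself, which makes the normalisation easy to check: the inner product of the normalised filter with its pattern is exactly 1.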
spike noise prewhitening algorithm
smoothing algorithms for multi-channeled data
Bases: botmpy.nodes.base_nodes.ResetNode
smooths the data using a Gaussian kernel of size 5 to 11
smooth the signal with a kernel of type kernel and window size window
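A minimal sketch of such per-channel Gaussian smoothing (the function name and the kernel-width heuristic are illustrative assumptions, not the botmpy implementation):

```python
import numpy as np

def smooth(data, window=5):
    """Smooth multi-channeled data (one channel per column) with a
    Gaussian kernel of the given window size (e.g. 5 to 11 samples)."""
    t = np.arange(window) - (window - 1) / 2.0
    kernel = np.exp(-0.5 * (t / (window / 5.0)) ** 2)  # assumed width heuristic
    kernel /= kernel.sum()  # normalise so constant signals pass unchanged
    out = np.empty((data.shape[0], data.shape[1]), dtype=float)
    for c in range(data.shape[1]):
        out[:, c] = np.convolve(data[:, c], kernel, mode='same')
    return out
```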
detector nodes for multi-channeled data
These detectors find features and feature epochs in multi-channeled signals. Mostly, you will want to reset the internals of the detector after processing a chunk of data, which is enabled by deriving from ResetNode. There are different kinds of detectors, distinguished by their way of discriminating features from noise.
Bases: botmpy.nodes.base_nodes.ResetNode
abstract interface for detecting feature epochs in a signal
The ThresholdDetectorNode is the abstract interface for all detectors. The input signal is assumed to be a (multi-channeled) signal, with data for one channel in each column (or one multi-channeled observation/sample per row).
The output will be a timeseries of detected features in the input signal. To find the features, the input signal is transformed by applying an operator (called the energy function from here on) that produces a representation of the input signal which should optimise the SNR of the features versus the remainder of the input signal. A threshold is then applied to this energy representation to find the feature epochs.
The output timeseries holds either the onsets of the feature epochs or the maximum of the energy function within each feature epoch, in samples.
Extra information about the events or the internals has to be saved in member variables along with a proper interface.
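The energy-plus-threshold scheme described above can be sketched as follows (a single-channel illustration with hypothetical names; the scaled-standard-deviation threshold is one plausible choice, mirroring the concrete detectors below):

```python
import numpy as np

def detect_epochs(signal, energy_fn=np.abs, n_std=4.0):
    """Transform the signal with an energy function, threshold the
    energy representation, and return (start, end) feature epochs."""
    energy = energy_fn(np.asarray(signal, dtype=float))
    above = energy > n_std * energy.std()
    # rising/falling edges of the supra-threshold mask delimit epochs
    edges = np.diff(above.astype(int))
    starts = np.where(edges == 1)[0] + 1
    ends = np.where(edges == -1)[0] + 1
    if above[0]:
        starts = np.r_[0, starts]
    if above[-1]:
        ends = np.r_[ends, above.size]
    return list(zip(starts, ends))
```

The concrete detectors below differ only in the energy function and the threshold they plug into this scheme.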
returns epochs based on self.events for the current iteration
Parameters : |
|
---|---|
Returns : |
|
yields the extracted spikes
Parameters: |
|
---|
Bases: botmpy.nodes.spike_detection.ThresholdDetectorNode
spike detector
energy: absolute of the signal
threshold: signal.std
Bases: botmpy.nodes.spike_detection.ThresholdDetectorNode
spike detector
energy: square of the signal
threshold: signal.var
Bases: botmpy.nodes.spike_detection.ThresholdDetectorNode
spike detector
energy: multiresolution teager energy operator
threshold: energy.std
Bases: botmpy.nodes.spike_detection.ThresholdDetectorNode
spike detector
energy: teager energy operator
threshold: energy.std
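The Teager energy operator used by this detector (and, with additional lags, by the multiresolution variant above) is psi[n] = x[n]^2 - x[n-1]*x[n+1]; a minimal sketch with a hypothetical function name:

```python
import numpy as np

def teager_energy(x):
    """Teager energy operator: psi[n] = x[n]^2 - x[n-1] * x[n+1].

    For a pure sinusoid A*sin(w*n) the output is exactly A^2 * sin(w)^2,
    i.e. it responds to both amplitude and frequency, which makes sharp,
    high-frequency spikes stand out against slow background activity.
    """
    x = np.asarray(x, dtype=float)
    psi = np.zeros_like(x)
    psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    return psi
```

The multiresolution version evaluates the lag-k form psi_k[n] = x[n]^2 - x[n-k]*x[n+k] for several k and combines the responses.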
Bases: botmpy.nodes.spike_detection.ThresholdDetectorNode
spike detector
energy: identity threshold: zero
Bases: botmpy.nodes.spike_detection.ThresholdDetectorNode
spike detector
energy: absolute of the signal
threshold: signal.std
implementation of spike sorting with matched filters
See:
[1] F. Franke, M. Natora, C. Boucsein, M. Munk, and K. Obermayer. An online spike detection and spike classification algorithm capable of instantaneous resolution of overlapping spikes. Journal of Computational Neuroscience, 2009.
[2] F. Franke, ... , 2012, The revolutionary BOTM Paper.
Bases: botmpy.nodes.filter_bank.FilterBankNode
abstract class that handles filter instances and their outputs
This class provides a pipeline structure to implement spike sorting algorithms that operate on a filter bank. Implementations override the self._pre_filter, self._post_filter, self._pre_sort, self._sort_chunk and self._post_sort methods with meaningful processing. After the filter step, the filter output is present and can be processed further. Input data can be partitioned into chunks of smaller size.
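The hook order of the pipeline can be sketched as follows; the hook names mirror the text, but the class itself is a hypothetical skeleton (the real base class additionally computes the filter outputs between the filter hooks):

```python
class SortingPipelineSketch(object):
    """Illustrative skeleton of the chunked filter-bank sorting pipeline."""

    def __init__(self):
        self.trace = []  # records the hook call order, for illustration

    # hooks to be overridden with meaningful processing in subclasses
    def _pre_filter(self, chunk):  self.trace.append('pre_filter')
    def _post_filter(self, chunk): self.trace.append('post_filter')
    def _pre_sort(self, chunk):    self.trace.append('pre_sort')
    def _sort_chunk(self, chunk):  self.trace.append('sort_chunk')
    def _post_sort(self, chunk):   self.trace.append('post_sort')

    def execute(self, data, chunk_size=1000):
        # partition the input into chunks and run the hooks in order
        for start in range(0, len(data), chunk_size):
            chunk = data[start:start + chunk_size]
            self._pre_filter(chunk)
            # ... filter outputs would be computed here ...
            self._post_filter(chunk)
            self._pre_sort(chunk)
            self._sort_chunk(chunk)
            self._post_sort(chunk)
```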
plot the sorting of the last data chunk
plot the waveforms of the sorting of the last data chunk
yields the spike for the u-th filter
Bases: botmpy.nodes.spike_sorting.BayesOptimalTemplateMatchingNode
Adaptive BOTM Node
Adaptivity here means, in the backward sense, that known templates and covariances are adapted to local temporal changes. In the forward sense, a parallel spike detection is used to find currently unidentified units in the data.
Bases: botmpy.nodes.spike_sorting.FilterBankSortingNode
FilterBankSortingNode derivative for the BOTM algorithm
Can use two implementations of the Bayes Optimal Template-Matching (BOTM) algorithm as presented in [2]. The first implementation uses explicitly constructed overlap channels over the extent of the complete input signal; the other uses subtractive interference cancellation (SIC) on epochs of the signal where the template discriminants are greater than the noise discriminant.
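The per-template discriminants on which both variants decide can be sketched in their standard linear-discriminant form, d_k(t) = y_k(t) - 0.5 * xi_k^T C^-1 xi_k + log P(k), where y_k is the matched-filter output for template k (a sketch after [2]; function name and argument layout are assumptions):

```python
import numpy as np

def botm_discriminants(filter_outputs, xis, C, priors):
    """BOTM discriminants for each template k at each sample.

    filter_outputs: (n, c) matched-filter outputs y_k(t), one column per template
    xis:            list of c concatenated templates xi_k
    C:              noise covariance matrix
    priors:         per-template prior probabilities P(k)
    """
    disc = np.empty_like(filter_outputs)
    for k, (xi, p) in enumerate(zip(xis, priors)):
        # energy term: 0.5 * xi_k^T C^-1 xi_k (constant bias per template)
        bias = 0.5 * float(np.dot(xi, np.linalg.solve(C, xi)))
        disc[:, k] = filter_outputs[:, k] - bias + np.log(p)
    return disc
```

Epochs where some d_k exceeds the noise discriminant are then resolved, in the SIC variant, by subtracting the winning template and re-evaluating the remaining discriminants.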
component probabilities under the model
Return type: ndarray
Returns: divergence from the means of the current filter bank [n, c]
posterior probabilities for data under the model
Return type: ndarray
Returns: matrix with per-component posterior probabilities [n, c]
alias of BayesOptimalTemplateMatchingNode