nodes Package

nodes using the mdp-toolkit node interface

alignment Module

initialization - alignment of spike waveform sets

class AlignmentNode(nchan=4, max_rep=32, max_tau=10, resample_factor=None, cut_down=True, dtype=numpy.float32, debug=False)[source]

Bases: botmpy.nodes.base_nodes.ResetNode

aligns a set of spikes on the mean waveform of the set

is_invertable()[source]
is_trainable()[source]

artifact_detector Module

detector nodes for capacitive artifacts in multi-channeled data

These detectors find events and event epochs in potentially multi-channeled signals. Usually you will want to reset the internals of the detector after processing a chunk of data. There are different kinds of detectors, but the common product of all of them is the set of discrete events or epochs found in the signal.
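
A minimal usage sketch of this chunked workflow, assuming the standard MDP call convention; `chunks` is a hypothetical iterable of [samples, channels] arrays:

    from botmpy.nodes.artifact_detector import ArtifactDetectorNode

    det = ArtifactDetectorNode(srate=32000.0)
    for chunk in chunks:                        # hypothetical chunk source
        det(chunk)                              # run detection on this chunk
        good = det.get_nonartefact_epochs()     # index set of non-artifact epochs
        det.reset()                             # clear internals before the next chunk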

DATA_TYPE: fixed for float32 (single precision)

class ArtifactDetectorNode(wsize_ms=15.0, psize_ms=(5.0, 10.0), wfunc=numpy.ones, srate=32000.0, zcr_th=0.1, mindist_ms=10.0)[source]

Bases: botmpy.nodes.spike_detection.ThresholdDetectorNode

detects artifacts by detecting zero-crossing frequencies

For a zero-mean Gaussian process the zero-crossing rate (zcr) is independent of its moments and approaches 0.5 as the integration window size approaches infinity:

s_t \sim N(0, \Sigma)

zcr_{wsize}(s_t) = \frac{1}{wsize-1} \sum_{t=1}^{wsize-1} \mathbb{I}\left\{ s_t s_{t-1} < 0 \right\}

\lim_{wsize \rightarrow \infty} zcr_{wsize}(s_t) = 0.5

The capacitive artifacts seen in the Munk dataset have a significantly lower frequency, s.t. the zcr decreases to 0.1 and below for the integration window sizes relevant to our application. Detecting epochs where the zcr deviates significantly from the expectation under a coloured Gaussian noise process can thus be used to detect artifact epochs.

The zero-crossing rate (zcr) is computed by convolving a moving average window (other weighting functions are configurable) with the XOR of the sign bits of X(t) and X(t+1).
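
A standalone sketch of this computation with numpy only; the window length and normalisation are illustrative choices:

    import numpy as np

    def zero_crossing_rate(x, wsize, wfunc=np.ones):
        # XOR of the sign bits of x[t] and x[t+1] marks the zero crossings
        crossings = np.bitwise_xor(np.signbit(x[:-1]), np.signbit(x[1:]))
        win = wfunc(wsize)
        win = win / win.sum()  # normalise so the output is a rate in [0, 1]
        return np.convolve(crossings.astype(float), win, mode='same')

    # white Gaussian noise approaches zcr = 0.5; slow artifacts fall well below
    noise = np.random.RandomState(42).randn(32000)
    print(zero_crossing_rate(noise, 480).mean())  # 480 samples = 15 ms at 32 kHz, ~0.5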

get_fragmentation()[source]

returns the artifact fragmentation

get_nonartefact_epochs()[source]

return the index set that represents the non-artifact epochs

class SpectrumArtifactDetector(wsize_ms=8.0, srate=32000.0, cutoff_hz=2000.0, nfft=512, en_func='max_normed', overlap=1, max_merge_dist=6, min_allowed_length=2, **kw)[source]

Bases: botmpy.nodes.spike_detection.ThresholdDetectorNode

detects artifacts by identifying unwanted frequency components in the spectrum of the signal

get_nonartefact_epochs()[source]

return the index set that represents the non-artifact epochs

base_nodes Module

abstract base classes derived from MDP nodes

class Node

Bases: object

class ResetNode[source]

Bases: botmpy.nodes.base_nodes.TrainingResetMixin, Node

class TrainingResetMixin[source]

Bases: object

allows mdp.Node to reset to training state

This is a mixin class for subclasses of mdp.Node. To use it, inherit from mdp.Node and list this mixin as the first superclass.

The node is an mdp.signal_node.Cumulator that can have its training phase reinitialised once a batch of cumulated data has been processed. This is useful for online algorithms that derive their parameters from the batch of data currently under consideration (e.g. stochastic thresholding).

reset()[source]

reset handler, calls the reset hook and resets to training phase
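
A sketch of the inheritance pattern described above; the node body is schematic:

    import mdp
    from botmpy.nodes.base_nodes import TrainingResetMixin

    class BatchThresholdNode(TrainingResetMixin, mdp.Node):  # mixin first in the MRO
        def _train(self, x):
            pass      # accumulate statistics over the current batch
        def _execute(self, x):
            return x  # apply parameters derived from the current batch

    # after one batch is processed, node.reset() returns the node to its
    # training phase so the next batch can be cumulated from scratch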

class PCANode

Bases: object

cluster Module

initialization - clustering of spikes in feature space

class ClusteringNode(dtype=None)[source]

Bases: botmpy.nodes.base_nodes.ResetNode

interface for clustering algorithms

is_invertable()[source]
is_trainable()[source]
class HomoscedasticClusteringNode(clus_type='kmeans', crange=[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], repeats=4, sigma_factor=4.0, max_iter=None, conv_thresh=None, alpha=None, cvtype='tied', gof_type='bic', dtype=None, debug=False)[source]

Bases: botmpy.nodes.cluster.ClusteringNode

clustering with model order selection to learn a mixture model

Assuming the data are prewhitened spikes, possibly in some condensed representation (e.g. PCA), the problem is to find the correct number of components and their corresponding means. The covariance matrix of all components is assumed to be the same, as the variation in the data is produced by an additive noise process. Further, the covariance matrix can be assumed to be the identity matrix (or a scaled version of it, due to estimation errors; thus a spherical covariance).

To increase performance, it is assumed that all pre-processing necessary for optimal clustering performance (e.g. alignment, resampling, (noise-)whitening) has already been taken care of.

So we have to find the number of components and their means in a homoscedastic clustering problem. The goodness of fit is evaluated with a likelihood-based criterion that is penalised for an increasing number of model parameters, to prevent overfitting (ref: BIC). Minimising this criterion yields the most likely model.
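
A hypothetical standalone sketch of this model order selection using scikit-learn instead of the node's internals; the tied covariance mirrors the homoscedasticity assumption:

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def select_model_order(data, crange=range(1, 16), repeats=4):
        best_bic, best_model = np.inf, None
        for k in crange:
            for seed in range(repeats):  # restarts guard against local optima
                gmm = GaussianMixture(n_components=k, covariance_type='tied',
                                      random_state=seed).fit(data)
                bic = gmm.bic(data)  # likelihood penalised by model complexity
                if bic < best_bic:
                    best_bic, best_model = bic, gmm
        return best_model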

static gauss_heat_kernel(x, delta=1.0)[source]
plot(data, views=2, show=False)[source]

plot clustering

filter_bank Module

implementation of a filter bank consisting of a set of filters

exception FilterBankError[source]

Bases: exceptions.Exception

class FilterBankNode(**kwargs)[source]

Bases: Node

abstract class that handles filter instances and their outputs

All filters constituting the filter bank have to be of the same temporal extent (Tf) and process the same channel set.

There are two different index sets. One is abbreviated “idx” and one “key”. The “idx” is the index of a filter in self.bank and thus a unique, hashable identifier, whereas a “key” is an index into a subset of the idx set. Ex.: an index into list(self._idx_active_set) would be a “key”.
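
A toy illustration of the two index sets, with stand-in values rather than the real internals:

    bank = ['f0', 'f1', 'f2', 'f3']  # stand-in for self.bank
    idx_active_set = {0, 2, 3}       # stand-in for self._idx_active_set
    active = list(idx_active_set)    # a "key" indexes into this subset
    key = 1
    idx = active[key]                # the unique, hashable index into bank
    print(bank[idx])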

activate(idx, check=False)[source]

activates a filter in the filter bank

Filters are never deleted, but can be de-/reactivated; their activation state is respected when computing the filter output of the filter bank.

No effect if idx not in self.bank.

ce

covariance estimator

create_filter(xi, check=True)[source]

adds a new filter to the filter bank

Parameters:
  • xi (ndarray) – template to build the filter for
cs

channel set

deactivate(idx, check=False)[source]

deactivates a filter in the filter bank

Filters are never deleted, but can be de-/reactivated; their activation state is respected when computing the filter output of the filter bank.

No effect if idx not in self.bank.

filter_set

filter set of active filters

get_ce()[source]
get_chan_set()[source]
get_filter_set(active=True, mc=True)[source]
get_idx_for(key)[source]
get_nc()[source]
get_nf(active=True)[source]
get_template_set(active=True, mc=True)[source]
get_tf()[source]
get_xcorrs()[source]
get_xcorrs_at(idx0, idx1=None, shift=0)[source]
is_invertible()[source]
is_trainable()[source]
nc

number of channels

nf

number of filters

plot_filter_set(ph=None, show=False)[source]

plot the filter set in a waveform plot

plot_template_set(ph=None, show=False)[source]

plot the template set in a waveform plot

plot_template_set2(show=False)[source]

plot the template set in a waveform plot

plot_xvft(ph=None, show=False)[source]

plot the Xi vs F Tensor of the filter bank

reset_history()[source]

sets the history to all zeros for all filters

reset_rates()[source]

resets the rate estimators for all filters (if applicable)

set_ce(value)[source]
set_chan_set(value)[source]
template_set

template set of active filters

tf

temporal filter extent [samples]

xcorrs

cross correlation tensor for active filters

linear_filter Module

filter classes for linear filters in the time domain

exception FilterError[source]

Bases: exceptions.Exception

class FilterNode(tf, nc, ce, chan_set=None, rb_cap=None, dtype=None)[source]

Bases: Node

linear filter in the time domain

This node applies a linear filter to the data and returns the filtered data. The derivation of the filter (f) from the pattern (xi) is specified by the implementing subclass via the ‘filter_calculation’ classmethod. The template is averaged from a ringbuffer of observations. The covariance matrix is supplied by an external covariance estimator.

append_xi_buf(wf, recalc=False)[source]

append one waveform to the xi_buffer

Parameters:
  • wf (ndarray) – waveform data [self.tf, self.nc]
  • recalc (bool) – if True, call self.calc_filter after appending
calc_filter()[source]

initiate a calculation of the filter

ce

covariance estimator

extend_xi_buf(wfs, recalc=False)[source]

append an iterable of waveforms to the xi_buffer

Parameters:
  • wfs (iterable of ndarray) – waveform data [n][self.tf, self.nc]
  • recalc (bool) – if True, call self.calc_filter after extending
f

filter (multi-channeled)

f_conc

filter (concatenated)

fill_xi_buf(wf, recalc=False)[source]

fill all of the xi_buffer with wf

Parameters:
  • wf (ndarray) – waveform data of shape (self.tf, self.nc)
  • recalc (bool) – if True, call self.calc_filter after filling

classmethod filter_calculation(xi, ce, cs, *args, **kwargs)[source]

ABSTRACT METHOD FOR FILTER CALCULATION

Implement this in a meaningful way in any subclass. The method should return the filter given the multi-channeled template xi, the covariance estimator ce and the channel set cs plus any number of optional arguments and keywords. The filter is usually the same shape as the pattern xi.
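
A sketch of a possible override computing a plain matched filter f = C^-1 xi; the estimator interface (get_icmx) is an assumption here, not a documented API:

    import numpy as np
    from botmpy.nodes.linear_filter import FilterNode

    class PlainMatchedFilterNode(FilterNode):
        @classmethod
        def filter_calculation(cls, xi, ce, cs, *args, **kwargs):
            # assumed interface: ce yields the inverse noise covariance for cs
            icmx = ce.get_icmx(chan_set=cs)
            f_conc = np.dot(icmx, xi.T.ravel())  # concatenated filter C^-1 xi
            return f_conc.reshape(xi.T.shape).T  # same [tf, nc] shape as xi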

get_ce()[source]
get_f()[source]
get_f_conc()[source]
get_nc()[source]
get_snr()[source]
get_tf()[source]
get_xi()[source]
get_xi_conc()[source]
is_invertible()[source]
is_trainable()[source]
nc

number of channels

plot_buffer_to_axis(axis=None, idx=None, limits=None)[source]

plots the current buffer on the passed axis handle

reset_history()[source]

sets the history to all zeros

set_ce(value)[source]
snr

signal to noise ratio (Mahalanobis distance)

tf

temporal extent [samples]

xi

template (multi-channeled)

xi_conc

template (concatenated)

class MatchedFilterNode(tf, nc, ce, chan_set=None, rb_cap=None, dtype=None)[source]

Bases: botmpy.nodes.linear_filter.FilterNode

matched filters in the time domain optimise the signal-to-noise ratio (SNR) of the matched pattern with respect to the covariance matrix describing the noise background (deconvolution).

classmethod filter_calculation(xi, ce, cs, *args, **kwargs)[source]
class NormalisedMatchedFilterNode(tf, nc, ce, chan_set=None, rb_cap=None, dtype=None)[source]

Bases: botmpy.nodes.linear_filter.FilterNode

matched filters in the time domain optimise the signal-to-noise ratio (SNR) of the matched pattern with respect to the covariance matrix describing the noise background (deconvolution). Here the deconvolution output is normalised s.t. the response to the pattern peaks at unit amplitude.
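
In the standard form of the normalised matched filter (not verified against this implementation), the filter is scaled so its response to its own template equals one:

    f = \frac{\Sigma^{-1} \xi}{\xi^\top \Sigma^{-1} \xi},
    \qquad \xi^\top f = 1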

classmethod filter_calculation(xi, ce, cs, *args, **kwargs)[source]

prewhiten Module

spike noise prewhitening algorithm

class PrewhiteningNode(ncov=None, dtype=numpy.float32)[source]

Bases: Node

prewhitens the data with respect to a noise covariance matrix

static is_invertible()[source]
static is_trainable()[source]
update(ncov)[source]

updates the covariance matrix and recalculates internals

Parameters:
  • ncov (ndarray) – symmetric matrix, noise covariance
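
A standalone sketch of the whitening operation itself, assuming the usual Cholesky factorisation ncov = L L^T; the node's exact factorisation is not specified here:

    import numpy as np

    def prewhiten(data, ncov):
        L = np.linalg.cholesky(ncov)         # requires a symmetric positive definite ncov
        return np.linalg.solve(L, data.T).T  # y = L^-1 x maps the covariance to identity

    rng = np.random.RandomState(0)
    ncov = np.cov(rng.randn(4, 1000))        # toy 4-channel noise covariance
    x = rng.multivariate_normal(np.zeros(4), ncov, size=256)
    print(np.cov(prewhiten(x, ncov).T).round(1))  # approximately the identity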

class PrewhiteningNode2(covest)[source]

Bases: Node

pre-whitens data with respect to a noise covariance matrix

static is_invertible()[source]
static is_trainable()[source]

smoothing Module

smoothing algorithms for multi-channeled data

class SmoothingNode(size=5, input_dim=None, dtype=None)[source]

Bases: botmpy.nodes.base_nodes.ResetNode

smooths the data using a Gaussian kernel of size 5 to 11

is_invertable()[source]
is_trainable()[source]
smooth(signal, window=5, kernel='gauss')[source]

smooth signal with kernel of type kernel and window size window

Parameters:
  • signal (ndarray) – multi-channeled signal [data, channel]
  • window (int) – window size of the smoothing filter (window < signal.shape[0])
  • kernel (str) – kernel to use, one of ‘gauss’ (least squares) or ‘box’ (moving average)
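
A standalone sketch of such channel-wise kernel smoothing; the exact kernels used by the node are not reproduced here:

    import numpy as np

    def smooth(signal, window=5, kernel='gauss'):
        if kernel == 'gauss':
            t = np.linspace(-2.0, 2.0, window)
            k = np.exp(-0.5 * t ** 2)        # truncated Gaussian window
        elif kernel == 'box':
            k = np.ones(window)              # moving average window
        else:
            raise ValueError('unknown kernel: %s' % kernel)
        k /= k.sum()
        # convolve each channel (column) with the normalised kernel
        return np.apply_along_axis(np.convolve, 0, signal, k, mode='same')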

spike_detection Module

detector nodes for multi-channeled data

These detectors find features and feature epochs on multi-channeled signals. Usually you will want to reset the internals of the detector after processing a chunk of data, which is supported by deriving from ResetNode. There are different kinds of detectors, distinguished by their way of discriminating features from noise.

exception EnergyNotCalculatedError[source]

Bases: exceptions.Exception

class ThresholdDetectorNode(input_dim=None, output_dim=None, dtype=None, energy_func=None, threshold_func=None, threshold_mode='gt', threshold_base='energy', threshold_factor=1.0, tf=47, min_dist=1, find_max=True, ch_separate=False)[source]

Bases: botmpy.nodes.base_nodes.ResetNode

abstract interface for detecting feature epochs in a signal

The ThresholdDetectorNode is the abstract interface for all detectors. The input signal is assumed to be a (multi-channeled) signal, with data for one channel in each column (or one multi-channeled observation/sample per row).

The output will be a timeseries of detected features in the input signal. To find the features, the input signal is transformed by applying an operator (called the energy function from here on) that produces a representation of the input signal which should optimise the SNR of the features versus the remainder of the input signal. A threshold is then applied to this energy representation of the input signal to find the feature epochs.

The output timeseries either holds the onsets of the feature epochs or the maximum of the energy function within the feature epoch, in samples.

Extra information about the events or the internals has to be saved in member variables along with a proper interface.
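
A standalone sketch of the energy/threshold scheme on a single channel; the squared signal as energy function and the median-based threshold are illustrative choices:

    import numpy as np

    def threshold_detect(x, factor=4.0):
        energy = x ** 2                              # toy energy function
        above = energy > factor * np.median(energy)  # toy threshold function
        # segment the suprathreshold mask at its rising/falling edges
        bounds = np.r_[0, np.flatnonzero(np.diff(above.astype(int))) + 1, above.size]
        events = [start + np.argmax(energy[start:end])
                  for start, end in zip(bounds[:-1], bounds[1:]) if above[start]]
        return np.asarray(events)                    # energy maximum per feature epoch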

events
get_epochs(cut=None, invert=False, merge=False)[source]

returns epochs based on self.events for the current iteration

Parameters:
  • cut ((int, int)) – window size of an epoch in samples (before, after) the event sample. If None, self._tf will be used.
  • invert (bool) – if True, invert the epochs, e.g. to yield noise epochs instead of spike epochs.
  • merge (bool) – if True, merge overlapping epochs.
Return type:

ndarray

Returns:

ndarray with epochs on the rows [[start, end]]

get_events()[source]
get_extracted_events(mc=False, align_at=-1, kind='min', rsf=1.0, buffer=False)[source]

yields the extracted spikes

Parameters:
  • mc (bool) – if True, return multichannel events, else return concatenated events. Default=False
  • align_at (int or float) – if a float from (0.0,1.0), determine the align_sample according to that weight. If a positive integer from (0, self.tf-1] use that sample as the align_sample. Default=0.25 * self.tf
  • kind (str) – one of “min”, “max”, “energy” or “none”. method to use for alignment, will be passed to the alignment function. Default=’min’
  • rsf (float) – resampling factor (use integer values of powers of 2)
  • buffer (bool) – if True, write to buffer regardless of current buffer state. Default=False
is_invertible()[source]
is_trainable()[source]
plot(show=False)[source]

plot detection in mcdata plot

set_events(value)[source]
class SDAbsNode(**kwargs)[source]

Bases: botmpy.nodes.spike_detection.ThresholdDetectorNode

spike detector

energy: absolute value of the signal
threshold: signal.std

class SDSqrNode(**kwargs)[source]

Bases: botmpy.nodes.spike_detection.ThresholdDetectorNode

spike detector

energy: square of the signal
threshold: signal.var

class SDMteoNode(kvalues=[1, 3, 5, 7, 9], quantile=0.98, **kwargs)[source]

Bases: botmpy.nodes.spike_detection.ThresholdDetectorNode

spike detector

energy: multiresolution Teager energy operator
threshold: energy.std

class SDKteoNode(kvalue=1, quantile=0.98, **kwargs)[source]

Bases: botmpy.nodes.spike_detection.ThresholdDetectorNode

spike detector

energy: Teager energy operator
threshold: energy.std
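
A standalone sketch of the (k-)Teager energy operator behind these two nodes; taking the pointwise maximum across resolutions is one common reduction for the multiresolution variant:

    import numpy as np

    def kteo(x, k=1):
        # psi_k[t] = x[t]^2 - x[t-k] * x[t+k]
        psi = np.zeros_like(x)
        psi[k:-k] = x[k:-k] ** 2 - x[:-2 * k] * x[2 * k:]
        return psi

    def mteo(x, kvalues=(1, 3, 5, 7, 9)):
        return np.max([kteo(x, k) for k in kvalues], axis=0)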

class SDIntraNode(**kwargs)[source]

Bases: botmpy.nodes.spike_detection.ThresholdDetectorNode

spike detector

energy: identity
threshold: zero

class SDPeakNode(**kwargs)[source]

Bases: botmpy.nodes.spike_detection.ThresholdDetectorNode

spike detector

energy: absolute value of the signal
threshold: signal.std

spike_sorting Module

implementation of spike sorting with matched filters

See:

[1] F. Franke, M. Natora, C. Boucsein, M. Munk, and K. Obermayer. An online spike detection and spike classification algorithm capable of instantaneous resolution of overlapping spikes. Journal of Computational Neuroscience, 2009.

[2] F. Franke, ..., 2012, The revolutionary BOTM Paper.

class FilterBankSortingNode(**kwargs)[source]

Bases: botmpy.nodes.filter_bank.FilterBankNode

abstract class that handles filter instances and their outputs

This class provides a pipeline structure for implementing spike sorting algorithms that operate on a filter bank. An algorithm is implemented by overriding the self._pre_filter, self._post_filter, self._pre_sort, self._sort_chunk and self._post_sort methods with meaningful processing. After the filter step the filter output is present and can be processed further. Input data can be partitioned into chunks of smaller size.

plot_sorting(ph=None, show=False)[source]

plot the sorting of the last data chunk

Parameters:
  • ph (plot handle) – plot handle to use for the plot
  • show (bool) – if True, call plt.show()
plot_sorting_waveforms(ph=None, show=False, **kwargs)[source]

plot the waveforms of the sorting of the last data chunk

Parameters:
  • ph (plot handle) – plot handle to use for the plot
  • show (bool) – if True, call plt.show()
sorting2gdf(fname)[source]

yield the gdf representing the current sorting

spikes_u(u, mc=True, exclude_overlaps=True, overlap_window=None, align_at=-1, align_kind='min', align_rsf=1.0)[source]

yields the spikes for the u-th filter

Parameters:
  • u (int) – index of the filter # CHECK THIS
  • mc (bool) – if True, return spikes multi-channeled, else return spikes concatenated Default=True
  • exclude_overlaps (bool) – if True, exclude overlap spikes
  • overlap_window (int) – if exclude_overlaps is True, this will define the overlap range, if None set overlap_window=self._tf. Default=None
class AdaptiveBayesOptimalTemplateMatchingNode(**kwargs)[source]

Bases: botmpy.nodes.spike_sorting.BayesOptimalTemplateMatchingNode

Adaptive BOTM Node

Adaptivity here means, in the backward sense, that known templates and covariances are adapted to local temporal changes. In the forward sense, a parallel spike detection is run to find currently unidentified units in the data.

det
get_det()[source]
resampled_mean_dist(spks1, spks2)[source]

Calculate the distance of the resampled means from two sets of spikes

class BayesOptimalTemplateMatchingNode(**kwargs)[source]

Bases: botmpy.nodes.spike_sorting.FilterBankSortingNode

FilterBankSortingNode derivative implementing the BOTM algorithm

Can use two implementations of the Bayes Optimal Template-Matching (BOTM) algorithm as presented in [2]. The first implementation uses explicitly constructed overlap channels for the extent of the complete input signal; the other uses subtractive interference cancellation (SIC) on epochs of the signal where the template discriminants are greater than the noise discriminant.
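
A hedged sketch of the per-template discriminant from [2], d_k(x) = f_k^T x - 0.5 * xi_k^T f_k + ln P_k, evaluated on concatenated data; the names and priors are illustrative:

    import numpy as np

    def botm_discriminants(x_conc, f, xi, log_prior):
        # f, xi: [n_units, tf * nc] concatenated filters and templates
        lin = f.dot(x_conc)                        # matched filter outputs
        bias = 0.5 * np.einsum('ij,ij->i', xi, f)  # 0.5 * xi_k^T f_k per unit
        return lin - bias + log_prior              # highest discriminant wins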

component_divergence(obs, with_noise=False, loading=False, subdim=None)[source]

component probabilities under the model

Parameters:
  • obs (ndarray) – observations to be evaluated [n, tf, nc]
  • with_noise (bool) – if True, include the noise cluster as component in the mixture. Default=False
  • loading (bool) – if True, use the loaded matrix. Default=False
  • subdim (int) – dimensionality of subspace to build the inverse over. if None ignore Default=None
Return type:

ndarray

Returns:

divergence from the means of the current filter bank [n, c]

get_noise_prior()[source]
get_spike_prior()[source]
get_spike_prior_bias()[source]
noise_prior
posterior_prob(obs, with_noise=False)[source]

posterior probabilities for data under the model

Parameters:
  • obs (ndarray) – observations to be evaluated [n, tf, nc]
  • with_noise (bool) – if True, include the noise cluster as component in the mixture. Default=False
Return type:

ndarray

Returns:

matrix with per component posterior probabilities [n, c]

set_noise_prior(value)[source]
set_spike_prior(value)[source]
set_spike_prior_bias(value)[source]
spike_prior
spike_prior_bias
BOTMNode

alias of BayesOptimalTemplateMatchingNode

ABOTMNode

alias of AdaptiveBayesOptimalTemplateMatchingNode