Spike Module
Spike
Spike mechanism implementation.
- class lava.lib.dl.slayer.spike.spike.Spike(*args, **kwargs)
Bases: Function
Spiking mechanism with autograd link.
f_s(x) = \mathcal{H}(x - \vartheta)
- Parameters:
voltage (torch tensor) – neuron voltage.
threshold (float or torch tensor) – neuron threshold.
tau_rho (float) – gradient relaxation constant.
scale_rho (float) – gradient scale factor.
graded_spike (bool) – flag to enable graded (non-binary) spikes.
voltage_last (torch tensor) – voltage at time step t = -1.
scale (int) – variable scale value.
- Returns:
spike tensor
- Return type:
torch tensor
Examples
>>> spike = Spike.apply(v, th, tau_rho, scale_rho, False, 0, 1)
- static backward(ctx, grad_spikes)
- derivative = None
- static forward(ctx, voltage, threshold, tau_rho, scale_rho, graded_spike, voltage_last, scale)
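A slightly fuller usage sketch follows. The import path matches the class path above; the tensor layout (batch × neuron × time) and the constant values are illustrative assumptions only.
>>> import torch
>>> from lava.lib.dl.slayer.spike.spike import Spike
>>> v = torch.randn(2, 4, 100, requires_grad=True)          # voltage: batch x neuron x time (layout illustrative)
>>> th, tau_rho, scale_rho = 1.0, 1.0, 1.0                   # threshold and surrogate-gradient settings
>>> s = Spike.apply(v, th, tau_rho, scale_rho, False, 0, 1)  # spike tensor, same shape as v
>>> s.sum().backward()                                       # gradients flow back to v through the custom backward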
Complex Spike (Phase Threshold)
Complex spike (Phase threshold) implementation.
- class lava.lib.dl.slayer.spike.complex.Spike(*args, **kwargs)
Bases: Function
Complex spike function with autograd link.
f_s(z) = \mathcal{H}(|z| - \vartheta)\,\delta(\arg(z))
- Parameters:
real (torch tensor) – real component of neuron response.
imag (torch tensor) – imaginary component of neuron response.
threshold (float or torch tensor) – neuron threshold.
tau_rho (float) – gradient relaxation constant.
scale_rho (float) – gradient scale factor.
graded_spike (bool) – flag to enable graded (non-binary) spikes.
imag_last (torch tensor) – imaginary response at time step t = -1.
scale (int) – variable scale value.
- Returns:
spike tensor
- Return type:
torch tensor
Examples
>>> spike = Spike.apply(re, im, th, tau_rho, scale_rho, False, 0, 1)
- static backward(ctx, grad_output)
- derivative = None
- static forward(ctx, real, imag, threshold, tau_rho, scale_rho, graded_spike, imag_last, scale)
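Below is a hedged sketch of calling the complex variant with separate real and imaginary response tensors; the import path follows the class path above, and the shapes and constants are illustrative only.
>>> import torch
>>> from lava.lib.dl.slayer.spike.complex import Spike
>>> re = torch.randn(2, 4, 100, requires_grad=True)      # real component of the response
>>> im = torch.randn(2, 4, 100, requires_grad=True)      # imaginary component of the response
>>> s = Spike.apply(re, im, 1.0, 1.0, 1.0, False, 0, 1)  # spikes where |z| crosses threshold at zero phase
>>> s.sum().backward()                                    # gradients reach both re and im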
Module contents
- class lava.lib.dl.slayer.spike.Spike(*args, **kwargs)
Bases: Function
Spiking mechanism with autograd link.
f_s(x) = \mathcal{H}(x - \vartheta)
- Parameters:
voltage (torch tensor) – neuron voltage.
threshold (float or torch tensor) – neuron threshold.
tau_rho (float) – gradient relaxation constant.
scale_rho (float) – gradient scale factor.
graded_spike (bool) – flag to enable graded (non-binary) spikes.
voltage_last (torch tensor) – voltage at time step t = -1.
scale (int) – variable scale value.
- Returns:
spike tensor
- Return type:
torch tensor
Examples
>>> spike = Spike.apply(v, th, tau_rho, scale_rho, False, 0, 1)
- static backward(ctx, grad_spikes)
- derivative = None
- static forward(ctx, voltage, threshold, tau_rho, scale_rho, graded_spike, voltage_last, scale)
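The example below is a minimal sketch of the autograd link through the package-level export: a scalar loss on the spike output back-propagates a surrogate gradient (shaped internally by tau_rho and scale_rho in backward) into the voltage tensor. Tensor shape and constants are illustrative assumptions.
>>> import torch
>>> from lava.lib.dl.slayer.spike import Spike
>>> v = torch.zeros(1, 8, 50, requires_grad=True)   # membrane voltage over 50 time steps
>>> s = Spike.apply(v, 0.5, 1.0, 1.0, False, 0, 1)  # spike tensor from the Heaviside threshold
>>> loss = s.mean()
>>> loss.backward()                                  # Spike.backward supplies the surrogate gradient
>>> grad = v.grad                                    # gradient w.r.t. voltage, same shape as v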