lava.proc.conv
lava.proc.conv.models
- class lava.proc.conv.models.AbstractPyConvModel(proc_params=None)
Bases: PyLoihiProcessModel
Abstract template implementation of PyConvModel.
- a_buf = None
- a_out = None
- clamp_precision(x)
- Return type:
ndarray
- dilation: ndarray = LavaPyType(cls=<class 'numpy.ndarray'>, d_type=<class 'numpy.int8'>, precision=8)
- groups: ndarray = LavaPyType(cls=<class 'numpy.ndarray'>, d_type=<class 'numpy.int8'>, precision=8)
- kernel_size: ndarray = LavaPyType(cls=<class 'numpy.ndarray'>, d_type=<class 'numpy.int8'>, precision=8)
- num_message_bits: ndarray = LavaPyType(cls=<class 'numpy.ndarray'>, d_type=<class 'numpy.int8'>, precision=5)
- num_weight_bits: ndarray = LavaPyType(cls=<class 'numpy.ndarray'>, d_type=<class 'numpy.int8'>, precision=5)
- padding: ndarray = LavaPyType(cls=<class 'numpy.ndarray'>, d_type=<class 'numpy.int8'>, precision=8)
- run_spk()
Function that runs in the spiking phase.
- Return type:
None
- s_in = None
- stride: ndarray = LavaPyType(cls=<class 'numpy.ndarray'>, d_type=<class 'numpy.int8'>, precision=8)
- weight = None
- weight_exp: ndarray = LavaPyType(cls=<class 'numpy.ndarray'>, d_type=<class 'numpy.int32'>, precision=8)
- class lava.proc.conv.models.PyConvModelFixed(proc_params=None)
Bases: AbstractPyConvModel
Conv with fixed-point synapse implementation.
- a_buf: ndarray = LavaPyType(cls=<class 'numpy.ndarray'>, d_type=<class 'numpy.int32'>, precision=24)
- a_out: PyOutPort = LavaPyType(cls=<class 'lava.magma.core.model.py.ports.PyOutPortVectorDense'>, d_type=<class 'numpy.int32'>, precision=24)
- clamp_precision(x)
- Return type:
ndarray
- implements_protocol: alias of LoihiProtocol
- required_resources: ty.List[ty.Type[AbstractResource]] = [<class 'lava.magma.core.resources.CPU'>]
- s_in: PyInPort = LavaPyType(cls=<class 'lava.magma.core.model.py.ports.PyInPortVectorDense'>, d_type=<class 'numpy.int32'>, precision=24)
- tags: ty.List[str] = ['fixed_pt']
- weight: ndarray = LavaPyType(cls=<class 'numpy.ndarray'>, d_type=<class 'numpy.int32'>, precision=8)
- class lava.proc.conv.models.PyConvModelFloat(proc_params=None)
Bases: AbstractPyConvModel
Conv with float synapse implementation.
- a_buf: ndarray = LavaPyType(cls=<class 'numpy.ndarray'>, d_type=<class 'float'>, precision=None)
- a_out: PyOutPort = LavaPyType(cls=<class 'lava.magma.core.model.py.ports.PyOutPortVectorDense'>, d_type=<class 'float'>, precision=None)
- implements_protocol: alias of LoihiProtocol
- required_resources: ty.List[ty.Type[AbstractResource]] = [<class 'lava.magma.core.resources.CPU'>]
- s_in: PyInPort = LavaPyType(cls=<class 'lava.magma.core.model.py.ports.PyInPortVectorDense'>, d_type=<class 'float'>, precision=24)
- tags: ty.List[str] = ['floating_pt']
- weight: ndarray = LavaPyType(cls=<class 'numpy.ndarray'>, d_type=<class 'float'>, precision=None)
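Which of the two ProcessModels above is selected at runtime is governed by its tag ('fixed_pt' or 'floating_pt'). A minimal, hedged sketch of selecting a model by tag, assuming the standard Lava run configuration and run condition classes and an already constructed Conv process instance named conv:

    from lava.magma.core.run_configs import Loihi1SimCfg
    from lava.magma.core.run_conditions import RunSteps

    # Pick the fixed-point Conv model; use select_tag='floating_pt' for the float model.
    run_cfg = Loihi1SimCfg(select_tag='fixed_pt')
    run_condition = RunSteps(num_steps=10)

    # conv.run(condition=run_condition, run_cfg=run_cfg)  # 'conv' is a Conv process instance
    # conv.stop()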
lava.proc.conv.process
- class lava.proc.conv.process.Conv(*, weight, weight_exp=0, input_shape=(1, 1, 1), padding=0, stride=1, dilation=1, groups=1, num_weight_bits=8, num_message_bits=0, name=None, log_config=None)
Bases: AbstractProcess
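A minimal instantiation sketch for the Conv process. The weight tensor layout below (NWHC kernel order, as described under conv_to_sparse) and the chosen shapes are illustrative assumptions:

    import numpy as np
    from lava.proc.conv.process import Conv

    # 8 output channels, 3x3 kernel, 2 input channels (assumed NWHC kernel order).
    weight = np.random.randint(-8, 8, size=(8, 3, 3, 2))

    conv = Conv(
        weight=weight,
        input_shape=(16, 16, 2),  # (W, H, C) of the input spikes
        padding=(1, 1),
        stride=(1, 1),
        dilation=(1, 1),
    )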
lava.proc.conv.utils
- class lava.proc.conv.utils.TensorOrder(value)
Bases: IntEnum
Defines how images are represented by tensors.
- Meaning:
N: number of images in a batch
H: height of an image
W: width of an image
C: number of channels of an image
- CHWN = 2
- HWCN = 3
- NCHW = 1
- NWHC = 4
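Because TensorOrder is an IntEnum, its members compare equal to their integer values:

    from lava.proc.conv.utils import TensorOrder

    order = TensorOrder.NWHC  # default Lava order
    assert order == 4         # IntEnum members compare equal to plain ints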
- lava.proc.conv.utils.conv(input_, weight, kernel_size, stride, padding, dilation, groups)
Convolution implementation
- Parameters:
input (3 dimensional np array) – convolution input.
weight (4 dimensional np array) – convolution kernel weight.
kernel_size (2 element tuple, list, or array) – convolution kernel size in XY/WH format.
stride (2 element tuple, list, or array) – convolution stride in XY/WH format.
padding (2 element tuple, list, or array) – convolution padding in XY/WH format.
dilation (2 element tuple, list, or array) – dilation of convolution kernel in XY/WH format.
groups (int) – number of convolution groups.
- Returns:
convolution output
- Return type:
3 dimensional np array
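A small, hedged example of calling conv directly, assuming the WHC input order and NWHC kernel order documented for this module:

    import numpy as np
    from lava.proc.conv.utils import conv

    input_ = np.arange(25, dtype=float).reshape(5, 5, 1)  # 5x5 input, 1 channel (WHC)
    weight = np.ones((1, 3, 3, 1))                        # 1 output channel, 3x3 kernel, 1 input channel

    out = conv(
        input_, weight,
        kernel_size=(3, 3),
        stride=(1, 1),
        padding=(0, 0),
        dilation=(1, 1),
        groups=1,
    )
    print(out.shape)  # (3, 3, 1): (5 + 2*0 - 1*(3 - 1) - 1) // 1 + 1 = 3 per spatial dim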
- lava.proc.conv.utils.conv_scipy(input_, weight, kernel_size, stride, padding, dilation, groups)
SciPy-based implementation of convolution.
- Parameters:
input (3 dimensional np array) – convolution input.
weight (4 dimensional np array) – convolution kernel weight.
kernel_size (2 element tuple, list, or array) – convolution kernel size in XY/WH format.
stride (2 element tuple, list, or array) – convolution stride in XY/WH format.
padding (2 element tuple, list, or array) – convolution padding in XY/WH format.
dilation (2 element tuple, list, or array) – dilation of convolution kernel in XY/WH format.
groups (int) – number of convolution groups.
- Returns:
convolution output
- Return type:
3 dimensional np array
- lava.proc.conv.utils.conv_to_sparse(input_shape, output_shape, kernel, stride, padding, dilation, group, order=TensorOrder.NWHC)
Translate convolution kernel into sparse matrix.
- Parameters:
input_shape (tuple of 3 ints) – Shape of input to the convolution.
output_shape (tuple of 3 ints) – Shape of output from the convolution.
kernel (numpy array with 4 dimensions) – Convolution kernel. The order of the kernel tensor is described by the order argument; see Notes for the supported orders.
stride (tuple of 2 ints) – Convolution stride.
padding (tuple of 2 ints) – Convolution padding.
dilation (tuple of 2 ints) – Convolution dilation.
group (int) – Convolution groups.
order (TensorOrder, optional) – The order of the convolution kernel tensor. The default is the Lava convolution order, i.e. TensorOrder.NWHC.
- Return type:
Tuple[ndarray, ndarray, ndarray]
- Returns:
np.ndarray – Destination indices of sparse matrix. It is a linear array of ints.
np.ndarray – Source indices of sparse matrix. It is a linear array of ints.
np.ndarray – Weight value at non-zero location.
- Raises:
ValueError – If tensor order is not supported.
AssertionError – if the number of output channels is not divisible by group.
AssertionError – if the number of input channels is not divisible by group.
Notes
Supported kernel orders and the corresponding operation:

Input/Output order   Kernel order   Operation type
WHC                  NWHC           Default Lava order. The operation is convolution.
CHW                  NCHW           Default PyTorch order. The operation is correlation.
HWC                  HWCN           Default TensorFlow order. The operation is correlation.
- lava.proc.conv.utils.make_tuple(value)
Create a tuple of two integers from the given input.
- Parameters:
value (int or tuple(int) or tuple(int, int)) – value to convert into a two-element tuple.
- Returns:
tuple representation of the input
- Return type:
tuple(int, int)
- Raises:
Exception – if the argument value is not 1 or 2 dimensional.
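For example, a scalar is presumably broadcast to both dimensions while a 2-tuple is passed through unchanged:

    from lava.proc.conv.utils import make_tuple

    print(make_tuple(3))       # expected (3, 3)
    print(make_tuple((2, 4)))  # expected (2, 4)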
- lava.proc.conv.utils.output_shape(input_shape, out_channels, kernel_size, stride, padding, dilation)
Calculates the output shape of the convolution operation.
- Parameters:
input_shape (3 element tuple, list, or array) – shape of input to convolution in XYZ/WHC format.
out_channels (int) – number of output channels.
kernel_size (2 element tuple, list, or array) – convolution kernel size in XY/WH format.
stride (2 element tuple, list, or array) – convolution stride in XY/WH format.
padding (2 element tuple, list, or array) – convolution padding in XY/WH format.
dilation (2 element tuple, list, or array) – dilation of convolution kernel in XY/WH format.
- Returns:
shape of convolution output in XYZ/WHC format.
- Return type:
tuple of 3 ints
- Raises:
Exception – for an invalid x convolution dimension.
Exception – for an invalid y convolution dimension.
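A short example of the output-shape arithmetic, assuming the standard convolution shape formula floor((W + 2*padding - dilation*(kernel - 1) - 1) / stride) + 1:

    from lava.proc.conv.utils import output_shape

    shape = output_shape(
        input_shape=(32, 32, 3),  # WHC input
        out_channels=16,
        kernel_size=(3, 3),
        stride=(2, 2),
        padding=(1, 1),
        dilation=(1, 1),
    )
    print(shape)  # expected (16, 16, 16): floor((32 + 2 - 2 - 1) / 2) + 1 = 16 per spatial dim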
- lava.proc.conv.utils.signed_clamp(x, bits)
Clamps the input as if it were a signed value within the precision given by bits.
- Parameters:
x (int, float, np array) – input
bits (int) – number of bits for the variable
- Returns:
clamped value
- Return type:
same type as x
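For example, with bits=8 the values are presumably clamped to the signed 8-bit range [-128, 127]:

    import numpy as np
    from lava.proc.conv.utils import signed_clamp

    x = np.array([-300, -5, 0, 100, 300])
    print(signed_clamp(x, bits=8))  # expected [-128, -5, 0, 100, 127]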