lava.lib.optimization.solvers.qp

lava.lib.optimization.solvers.qp.models

Inheritance diagram of lava.lib.optimization.solvers.qp.models

Implement the behaviors (models) of the processes defined in processes.py. For further documentation, please refer to processes.py.

class lava.lib.optimization.solvers.qp.models.PyCNeuModel(proc_params=None)

Bases: PyLoihiProcessModel

a_in: PyInPort = LavaPyType(cls=<class 'lava.magma.core.model.py.ports.PyInPortVectorDense'>, d_type=<class 'numpy.float64'>, precision=None)
implements_process

alias of ConstraintNeurons

implements_protocol

alias of LoihiProtocol

required_resources: ty.List[ty.Type[AbstractResource]] = [<class 'lava.magma.core.resources.CPU'>]
run_spk()

Function that runs in the spiking phase.

s_out: PyOutPort = LavaPyType(cls=<class 'lava.magma.core.model.py.ports.PyOutPortVectorDense'>, d_type=<class 'numpy.float64'>, precision=None)
thresholds: ndarray = LavaPyType(cls=<class 'numpy.ndarray'>, d_type=<class 'numpy.float64'>, precision=None)
class lava.lib.optimization.solvers.qp.models.PyDelNeurModel(proc_params)

Bases: PyLoihiProcessModel

implements_process

alias of DeltaNeurons

implements_protocol

alias of LoihiProtocol

required_resources: ty.List[ty.Type[AbstractResource]] = [<class 'lava.magma.core.resources.CPU'>]
run_spk()

Function that runs in the spiking phase.

s_in: PyInPort = LavaPyType(cls=<class 'lava.magma.core.model.py.ports.PyInPortVectorDense'>, d_type=<class 'numpy.float64'>, precision=None)
s_out: PyOutPort = LavaPyType(cls=<class 'lava.magma.core.model.py.ports.PyOutPortVectorDense'>, d_type=<class 'numpy.float64'>, precision=None)
theta: ndarray = LavaPyType(cls=<class 'numpy.ndarray'>, d_type=<class 'numpy.float64'>, precision=None)
theta_decay_schedule: int = LavaPyType(cls=<class 'int'>, d_type=<class 'numpy.int32'>, precision=None)
x_internal: ndarray = LavaPyType(cls=<class 'numpy.ndarray'>, d_type=<class 'numpy.float64'>, precision=None)
class lava.lib.optimization.solvers.qp.models.PyPIneurPIPGeqModel(proc_params)

Bases: PyLoihiProcessModel

a_in: PyInPort = LavaPyType(cls=<class 'lava.magma.core.model.py.ports.PyInPortVectorDense'>, d_type=<class 'numpy.float64'>, precision=None)
beta: ndarray = LavaPyType(cls=<class 'numpy.ndarray'>, d_type=<class 'numpy.float64'>, precision=None)
beta_exp: int = LavaPyType(cls=<class 'int'>, d_type=<class 'int'>, precision=None)
beta_growth_schedule: int = LavaPyType(cls=<class 'int'>, d_type=<class 'numpy.int32'>, precision=None)
beta_man: ndarray = LavaPyType(cls=<class 'numpy.ndarray'>, d_type=<class 'int'>, precision=None)
con_bias_exp: int = LavaPyType(cls=<class 'int'>, d_type=<class 'int'>, precision=None)
con_bias_man: ndarray = LavaPyType(cls=<class 'numpy.ndarray'>, d_type=<class 'int'>, precision=None)
constraint_bias: ndarray = LavaPyType(cls=<class 'numpy.ndarray'>, d_type=<class 'numpy.float64'>, precision=None)
constraint_neuron_state: ndarray = LavaPyType(cls=<class 'numpy.ndarray'>, d_type=<class 'numpy.float64'>, precision=None)
da_exp: int = LavaPyType(cls=<class 'int'>, d_type=<class 'int'>, precision=None)
growth_factor: int = LavaPyType(cls=<class 'int'>, d_type=<class 'int'>, precision=None)
growth_index: int = LavaPyType(cls=<class 'int'>, d_type=<class 'int'>, precision=None)
growth_inter: int = LavaPyType(cls=<class 'int'>, d_type=<class 'int'>, precision=None)
implements_process

alias of ProportionalIntegralNeuronsPIPGeq

implements_protocol

alias of LoihiProtocol

required_resources: ty.List[ty.Type[AbstractResource]] = [<class 'lava.magma.core.resources.CPU'>]
run_spk()

Function that runs in the spiking phase.

s_out: PyOutPort = LavaPyType(cls=<class 'lava.magma.core.model.py.ports.PyOutPortVectorDense'>, d_type=<class 'numpy.float64'>, precision=None)
class lava.lib.optimization.solvers.qp.models.PyProjGradPIPGeqModel(proc_params)

Bases: PyLoihiProcessModel

a_in_cn: PyInPort = LavaPyType(cls=<class 'lava.magma.core.model.py.ports.PyInPortVectorDense'>, d_type=<class 'numpy.float64'>, precision=None)
a_in_qc: PyInPort = LavaPyType(cls=<class 'lava.magma.core.model.py.ports.PyInPortVectorDense'>, d_type=<class 'numpy.float64'>, precision=None)
alpha: ndarray = LavaPyType(cls=<class 'numpy.ndarray'>, d_type=<class 'numpy.float64'>, precision=None)
alpha_decay_schedule: int = LavaPyType(cls=<class 'int'>, d_type=<class 'numpy.int32'>, precision=None)
alpha_exp: int = LavaPyType(cls=<class 'int'>, d_type=<class 'int'>, precision=None)
alpha_man: ndarray = LavaPyType(cls=<class 'numpy.ndarray'>, d_type=<class 'int'>, precision=None)
da_exp: int = LavaPyType(cls=<class 'int'>, d_type=<class 'int'>, precision=None)
decay_factor: int = LavaPyType(cls=<class 'int'>, d_type=<class 'int'>, precision=None)
decay_index: int = LavaPyType(cls=<class 'int'>, d_type=<class 'int'>, precision=None)
decay_inter: int = LavaPyType(cls=<class 'int'>, d_type=<class 'int'>, precision=None)
grad_bias: ndarray = LavaPyType(cls=<class 'numpy.ndarray'>, d_type=<class 'numpy.float64'>, precision=None)
grad_bias_exp: int = LavaPyType(cls=<class 'int'>, d_type=<class 'int'>, precision=None)
grad_bias_man: ndarray = LavaPyType(cls=<class 'numpy.ndarray'>, d_type=<class 'int'>, precision=None)
implements_process

alias of ProjectedGradientNeuronsPIPGeq

implements_protocol

alias of LoihiProtocol

qp_neuron_state: ndarray = LavaPyType(cls=<class 'numpy.ndarray'>, d_type=<class 'numpy.float64'>, precision=None)
required_resources: ty.List[ty.Type[AbstractResource]] = [<class 'lava.magma.core.resources.CPU'>]
run_spk()

Function that runs in the spiking phase.

s_out_cd: PyOutPort = LavaPyType(cls=<class 'lava.magma.core.model.py.ports.PyOutPortVectorDense'>, d_type=<class 'numpy.float64'>, precision=None)
s_out_qc: PyOutPort = LavaPyType(cls=<class 'lava.magma.core.model.py.ports.PyOutPortVectorDense'>, d_type=<class 'numpy.float64'>, precision=None)
class lava.lib.optimization.solvers.qp.models.PyQPDenseModel(proc_params=None)

Bases: PyLoihiProcessModel

a_out: PyOutPort = LavaPyType(cls=<class 'lava.magma.core.model.py.ports.PyOutPortVectorDense'>, d_type=<class 'float'>, precision=None)
implements_process

alias of QPDense

implements_protocol

alias of LoihiProtocol

required_resources: ty.List[ty.Type[AbstractResource]] = [<class 'lava.magma.core.resources.CPU'>]
run_spk()

Function that runs in the spiking phase.

s_in: PyInPort = LavaPyType(cls=<class 'lava.magma.core.model.py.ports.PyInPortVectorDense'>, d_type=<class 'float'>, precision=None)
weights: ndarray = LavaPyType(cls=<class 'numpy.ndarray'>, d_type=<class 'float'>, precision=None)
class lava.lib.optimization.solvers.qp.models.PySNModel(proc_params=None)

Bases: PyLoihiProcessModel

a_in_cn: PyInPort = LavaPyType(cls=<class 'lava.magma.core.model.py.ports.PyInPortVectorDense'>, d_type=<class 'numpy.float64'>, precision=None)
a_in_qc: PyInPort = LavaPyType(cls=<class 'lava.magma.core.model.py.ports.PyInPortVectorDense'>, d_type=<class 'numpy.float64'>, precision=None)
alpha: ndarray = LavaPyType(cls=<class 'numpy.ndarray'>, d_type=<class 'numpy.float64'>, precision=None)
alpha_decay_schedule: int = LavaPyType(cls=<class 'int'>, d_type=<class 'numpy.int32'>, precision=None)
beta: ndarray = LavaPyType(cls=<class 'numpy.ndarray'>, d_type=<class 'numpy.float64'>, precision=None)
beta_growth_schedule: int = LavaPyType(cls=<class 'int'>, d_type=<class 'numpy.int32'>, precision=None)
decay_counter: int = LavaPyType(cls=<class 'int'>, d_type=<class 'numpy.int32'>, precision=None)
grad_bias: ndarray = LavaPyType(cls=<class 'numpy.ndarray'>, d_type=<class 'numpy.float64'>, precision=None)
growth_counter: int = LavaPyType(cls=<class 'int'>, d_type=<class 'numpy.int32'>, precision=None)
implements_process

alias of SolutionNeurons

implements_protocol

alias of LoihiProtocol

qp_neuron_state: ndarray = LavaPyType(cls=<class 'numpy.ndarray'>, d_type=<class 'numpy.float64'>, precision=None)
required_resources: ty.List[ty.Type[AbstractResource]] = [<class 'lava.magma.core.resources.CPU'>]
run_spk()

Function that runs in the spiking phase.

s_out_cc: PyOutPort = LavaPyType(cls=<class 'lava.magma.core.model.py.ports.PyOutPortVectorDense'>, d_type=<class 'numpy.float64'>, precision=None)
s_out_qc: PyOutPort = LavaPyType(cls=<class 'lava.magma.core.model.py.ports.PyOutPortVectorDense'>, d_type=<class 'numpy.float64'>, precision=None)
class lava.lib.optimization.solvers.qp.models.PySigNeurModel(proc_params=None)

Bases: PyLoihiProcessModel

implements_process

alias of SigmaNeurons

implements_protocol

alias of LoihiProtocol

required_resources: ty.List[ty.Type[AbstractResource]] = [<class 'lava.magma.core.resources.CPU'>]
run_spk()

Function that runs in the spiking phase.

s_in: PyInPort = LavaPyType(cls=<class 'lava.magma.core.model.py.ports.PyInPortVectorDense'>, d_type=<class 'numpy.float64'>, precision=None)
s_out: PyOutPort = LavaPyType(cls=<class 'lava.magma.core.model.py.ports.PyOutPortVectorDense'>, d_type=<class 'numpy.float64'>, precision=None)
x_internal: ndarray = LavaPyType(cls=<class 'numpy.ndarray'>, d_type=<class 'numpy.float64'>, precision=None)
class lava.lib.optimization.solvers.qp.models.SubCCModel(proc)

Bases: AbstractSubProcessModel

Implement the ConstraintCheck Process behavior via sub-Processes.

constraint_bias: ndarray = LavaPyType(cls=<class 'numpy.ndarray'>, d_type=<class 'numpy.float64'>, precision=None)
constraint_matrix: ndarray = LavaPyType(cls=<class 'numpy.ndarray'>, d_type=<class 'numpy.float64'>, precision=None)
implements_process

alias of ConstraintCheck

implements_protocol

alias of LoihiProtocol

s_in: PyInPort = LavaPyType(cls=<class 'lava.magma.core.model.py.ports.PyInPortVectorDense'>, d_type=<class 'numpy.float64'>, precision=None)
s_out: PyOutPort = LavaPyType(cls=<class 'lava.magma.core.model.py.ports.PyOutPortVectorDense'>, d_type=<class 'numpy.float64'>, precision=None)
x_internal: ndarray = LavaPyType(cls=<class 'numpy.ndarray'>, d_type=<class 'numpy.float64'>, precision=None)
class lava.lib.optimization.solvers.qp.models.SubGDModel(proc)

Bases: AbstractSubProcessModel

Implement the GradientDynamics Process behavior via sub-Processes.

alpha: ndarray = LavaPyType(cls=<class 'numpy.ndarray'>, d_type=<class 'numpy.float64'>, precision=None)
alpha_decay_schedule: int = LavaPyType(cls=<class 'int'>, d_type=<class 'numpy.int32'>, precision=None)
beta: ndarray = LavaPyType(cls=<class 'numpy.ndarray'>, d_type=<class 'numpy.float64'>, precision=None)
beta_growth_schedule: int = LavaPyType(cls=<class 'int'>, d_type=<class 'numpy.int32'>, precision=None)
constraint_matrix_T: ndarray = LavaPyType(cls=<class 'numpy.ndarray'>, d_type=<class 'numpy.float64'>, precision=None)
grad_bias: ndarray = LavaPyType(cls=<class 'numpy.ndarray'>, d_type=<class 'numpy.float64'>, precision=None)
hessian: ndarray = LavaPyType(cls=<class 'numpy.ndarray'>, d_type=<class 'numpy.float64'>, precision=None)
implements_process

alias of GradientDynamics

implements_protocol

alias of LoihiProtocol

qp_neuron_state: ndarray = LavaPyType(cls=<class 'numpy.ndarray'>, d_type=<class 'numpy.float64'>, precision=None)
s_in: PyInPort = LavaPyType(cls=<class 'lava.magma.core.model.py.ports.PyInPortVectorDense'>, d_type=<class 'numpy.float64'>, precision=None)
s_out: PyOutPort = LavaPyType(cls=<class 'lava.magma.core.model.py.ports.PyOutPortVectorDense'>, d_type=<class 'numpy.float64'>, precision=None)

lava.lib.optimization.solvers.qp.processes

Inheritance diagram of lava.lib.optimization.solvers.qp.processes
class lava.lib.optimization.solvers.qp.processes.ConstraintCheck(**kwargs)

Bases: AbstractProcess

Check if linear constraints (equality/inequality) are violated for the QP.

Receives graded spikes from, and sends graded spikes to, the gradientDynamics process. Houses the constraintDirections and constraintNeurons as sub-processes.

Implements the abstract behavior: (constraint_matrix*x - constraint_bias)*(constraint_matrix*x < constraint_bias)

Initialize the ConstraintCheck Process.

Parameters
  • constraint_matrix (1-D or 2-D np.array, optional) – The value of the constraint matrix. This is 'A' in the linear constraints.

  • constraint_bias (1-D np.array, optional) – The value of the constraint bias. This is ‘k’ in the linear constraints.

  • sparse (bool, optional) – Set to True when using a sparsifying neuron model, e.g. sigma-delta.

  • x_int_init (1-D np.array, optional) – Initial value of the internal sigma neurons.
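As an illustration, the abstract behavior above can be sketched in plain NumPy. This is a hedged sketch, not the Lava implementation, and the function name is ours:

```python
import numpy as np

def constraint_check(constraint_matrix, constraint_bias, x):
    """Sketch of ConstraintCheck's abstract behavior:
    (A @ x - k) * (A @ x < k) is nonzero only for violated constraints."""
    ax = constraint_matrix @ x
    return (ax - constraint_bias) * (ax < constraint_bias)

A = np.eye(2)
k = np.array([2.0, -1.0])
x = np.array([3.0, -4.0])
constraint_check(A, k, x)  # → array([ 0., -3.]); only the second constraint is violated
```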

class lava.lib.optimization.solvers.qp.processes.ConstraintNeurons(**kwargs)

Bases: AbstractProcess

Process to check the violation of the linear constraints of the QP. A graded spike corresponding to the violated constraint is sent from the out port.

Realizes the following abstract behavior: s_out = (a_in - thresholds) * (a_in < thresholds)

Initialize the ConstraintNeurons Process.

Parameters
  • shape (int tuple, optional) – Define the shape of the thresholds vector. Defaults to (1,1).

  • thresholds (1-D np.array, optional) – Define the thresholds of the neurons in the constraint checking layer. This is usually ‘k’ in the constraints of the QP. Default value of thresholds is 0.

class lava.lib.optimization.solvers.qp.processes.DeltaNeurons(**kwargs)

Bases: AbstractProcess

Process to simulate Delta coding.

A graded spike is sent only if the difference delta for a neuron exceeds the spiking threshold, theta. Realizes the following abstract behavior: delta = np.abs(s_in - self.x_internal); s_out = delta[delta > theta]

Parameters
  • shape (int tuple, optional) – Define the shape of the thresholds vector. Defaults to (1,1).

  • x_del_init (1-D np.array, optional) – Initial value of the internal delta neurons. Should be the same as qp_neurons_init. Default value is 0.

  • theta (1-D np.array, optional) – Defines the spiking threshold for delta coding. Defaults to 1.

  • theta_decay_type (string, optional) – Defines the nature of the decay of the threshold theta. “schedule” decays it every theta_decay_schedule timesteps. “indices” halves theta at every timestep listed in theta_decay_indices.

  • theta_decay_schedule (int, optional) – The number of iterations after which one right shift operation takes place for theta. Default initialization to a very high value of 10000.

  • theta_decay_indices (list, optional) – The iteration numbers at which the value of theta is halved (right-shifted).
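The delta-coding rule above can be sketched in plain NumPy (an illustrative sketch under the documented abstract behavior; the function name is ours, and the state update of x_internal is omitted):

```python
import numpy as np

def delta_neuron_step(s_in, x_internal, theta):
    """Sketch of delta coding: a graded spike is emitted only where the
    change with respect to the stored value exceeds the threshold theta."""
    delta = np.abs(s_in - x_internal)
    s_out = np.where(delta > theta, delta, 0.0)  # graded spike, zero elsewhere
    return s_out

delta_neuron_step(np.array([1.0, 0.1]), np.zeros(2), np.array([0.5, 0.5]))
# → array([1., 0.]): only the first neuron changed enough to spike
```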

class lava.lib.optimization.solvers.qp.processes.GradientDynamics(**kwargs)

Bases: AbstractProcess

Perform gradient descent with constraint correction to converge at the solution of the QP.

Implements the abstract behavior: -alpha*(Q @ x_init + p) - beta*A_T @ graded_constraint_spike

Initialize the GradientDynamics Process.

Parameters
  • hessian (1-D or 2-D np.array, optional) – Define the Hessian matrix (‘Q’ in the cost function of the QP). Defaults to 0.

  • constraint_matrix_T (1-D or 2-D np.array, optional) – The value of the transpose of the constraint matrix. This is ‘A^T’ in the linear constraints.

  • grad_bias (1-D np.array, optional) – The bias of the gradient of the QP. This is the value ‘p’ in the QP definition.

  • qp_neurons_init (1-D np.array, optional) – Initial value of qp solution neurons

  • sparse (bool, optional) – Set to True when using a sparsifying neuron model, e.g. sigma-delta.

  • model (str, optional) – “SigDel” for sigma delta neurons and “TLIF” for Ternary LIF neurons. Defines the type of neuron to be used for sparse activity.

  • vth_lo (1-D np.array, optional) – Defines the lower threshold for TLIF spiking. Defaults to -10.

  • vth_hi (1-D np.array, optional) – Defines the upper threshold for TLIF spiking. Defaults to 10.

  • theta (1-D np.array, optional) – Defines the threshold for sigma-delta spiking. Defaults to 0.

  • alpha (1-D np.array, optional) – Define the learning rate for gradient descent. Defaults to 1.

  • beta (1-D np.array, optional) – Define the learning rate for constraint-checking. Defaults to 1.

  • theta_decay_schedule (int, optional) – The number of iterations after which one right shift operation takes place for theta. Default initialization to a very high value of 10000.

  • alpha_decay_schedule (int, optional) – The number of iterations after which one right shift operation takes place for alpha. Default initialization to a very high value of 10000.

  • beta_growth_schedule (int, optional) – The number of iterations after which one left shift operation takes place for beta. Default initialization to a very high value of 10000.
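One step of the documented gradient dynamics can be sketched in plain NumPy (a hedged sketch of the abstract behavior, not the Lava implementation; names are ours):

```python
import numpy as np

def gradient_dynamics_step(x, Q, p, A_T, graded_constraint_spike, alpha, beta):
    """One sketched update of the constraint-corrected gradient dynamics:
    dx = -alpha*(Q @ x + p) - beta*(A_T @ graded_constraint_spike)."""
    return x - alpha * (Q @ x + p) - beta * (A_T @ graded_constraint_spike)

Q = np.eye(2)
p = np.array([-1.0, -1.0])
x = gradient_dynamics_step(np.zeros(2), Q, p, np.zeros((2, 1)), np.zeros(1), 0.1, 0.1)
x  # → array([0.1, 0.1]): a plain gradient step when no constraint is violated
```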

class lava.lib.optimization.solvers.qp.processes.ProjectedGradientNeuronsPIPGeq(**kwargs)

Bases: AbstractProcess

The neurons that evolve according to the projected gradient dynamics specified in the PIPG algorithm.

Do NOT use the QPDense connection process with this solver; use the standard Lava Dense process. Initialize the ProjectedGradientNeuronsPIPGeq process. Implements the abstract behavior: qp_neuron_state -= alpha*(a_in_qc + grad_bias + a_in_cn)

Parameters
  • shape (int tuple, optional) – A tuple defining the shape of the qp neurons. Defaults to (1,).

  • da_exp (int, optional) – Exponent of base 2 used to scale the magnitude at the dendritic accumulator, if needed. The global correction exponent (min(Connectivity_exp, constraint_exp)) has to be passed in to this parameter to right-shift the dendritic accumulator. Value can only be -ve. Used for fixed point implementations. Unnecessary for floating point implementations. Default value is 0.

  • qp_neurons_init (1-D np.array, optional) – Initial value of the QP solution neurons.

  • grad_bias (1-D np.array, optional) – The bias of the gradient of the QP. This is the value ‘p’ in the QP definition.

  • grad_bias_exp (int, optional) – Shared exponent of base 2 used to scale magnitude of the grad_bias vector, if needed. Value can only be -ve. Mostly for fixed point implementations. Unnecessary for floating point implementations. Default value is 0.

  • alpha (1-D np.array, optional) – Defines the learning rate for gradient descent. Defaults to 1.

  • alpha_exp (int, optional) – Exponent of base 2 used to scale magnitude of alpha, if needed. Value can only be -ve. Mostly for fixed point implementations. Unnecessary for floating point implementations. Default value is 0.

  • lr_decay_type (string, optional) – Defines the nature of the decay of the learning rate alpha. “schedule” decays it every alpha_decay_schedule timesteps. “indices” halves the learning rate at every timestep listed in alpha_decay_indices.

  • alpha_decay_schedule (int, optional) – The number of iterations after which one right shift operation takes place for alpha. Default initialization to a very high value of 10000.

  • alpha_decay_indices (list, optional) – The timesteps at which the learning rate, alpha, halves. By default an empty list.

  • alpha_decay_params (tuple, optional) – Parameters controlling when alpha is halved (right-shifted). The tuple contains (decay_index, decay_interval, decay_factor); default values are (1, 1, 1). Note that if decay_index is set to 0, it is automatically set to 1: decaying at the 0th timestep is not allowed; change the initial value of alpha instead. The next decay index is computed with the recurrence decay_factor = decay_factor + 1; decay_index = decay_index + decay_interval*decay_factor. This mimics a hyperbolic decrease of the learning rate alpha using only right shifts at particular intervals.
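The decay recurrence above can be made concrete with a short sketch that generates the timesteps at which alpha would be right-shifted (the function name and defaults are ours, for illustration only):

```python
def alpha_decay_timesteps(decay_index=1, decay_interval=1, decay_factor=1, n_steps=20):
    """Sketch of the decay schedule described above: the timesteps at which
    alpha is halved, generated by the documented recurrence."""
    if decay_index == 0:          # zero is disallowed; reset to one as documented
        decay_index = 1
    timesteps = []
    while decay_index <= n_steps:
        timesteps.append(decay_index)
        decay_factor += 1
        decay_index += decay_interval * decay_factor
    return timesteps

alpha_decay_timesteps()  # → [1, 3, 6, 10, 15]: the gaps between halvings grow
```

Because the gaps between halvings widen over time, the effective learning rate decreases roughly hyperbolically while using only cheap right-shift operations.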

class lava.lib.optimization.solvers.qp.processes.ProportionalIntegralNeuronsPIPGeq(**kwargs)

Bases: AbstractProcess

The neurons that evolve according to the proportional-integral dynamics specified in the PIPG algorithm. Do NOT use the QPDense connection process with this solver; use the standard Lava Dense process. Implements the abstract behavior:

constraint_neuron_state += beta * (a_in - constraint_bias)
s_out = constraint_neuron_state + beta * (a_in - constraint_bias)

Initialize the ProportionalIntegralNeuronsPIPGeq process.

Parameters
  • shape (int tuple, optional) – A tuple defining the shape of the qp neurons. Defaults to (1,).

  • da_exp (int, optional) – Exponent of base 2 used to scale magnitude at dendritic accumulator, if needed. The global exponent of the constraint matrix (based on max element), A, has to be passed in to this parameter. Value can only be -ve. Used for fixed point implementations. Unnecessary for floating point implementations. Default value is 0.

  • constraint_neurons_init (1-D np.array, optional) – Initial value of constraint neurons

  • thresholds (1-D np.array, optional) – Define the thresholds of the neurons in the constraint checking layer. This is usually ‘k’ in the constraints of the QP. Default value of thresholds is 0.

  • thresholds_exp (int, optional) – Shared exponent of base 2 used to scale the magnitude of the thresholds vector, if needed. Mostly for fixed point implementations. Value can only be -ve. Unnecessary for floating point implementations. Default value is 0.

  • beta (1-D np.array, optional) – Defines the learning rate for constraint-checking. Defaults to 1.

  • beta_exp (int, optional) – Exponent of base 2 used to scale magnitude of beta, if needed. Value can only be -ve. Mostly for fixed point implementations. Unnecessary for floating point implementations. Default value is 0.

  • lr_growth_type (string, optional) – Defines the nature of the growth of the learning rate beta. “schedule” grows it every beta_growth_schedule timesteps. “indices” doubles the learning rate at every timestep listed in beta_growth_indices.

  • beta_growth_schedule (int, optional) – The number of iterations after which one left shift operation takes place for beta. Default initialization to a very high value of 10000.

  • beta_growth_indices (list, optional) – The timesteps at which the learning rate, beta, doubles. By default an empty list.

  • beta_growth_params (tuple of ints, optional) – Parameters controlling when beta is doubled (left-shifted). The tuple contains (growth_index, growth_factor); default values are (1, 1). Note that if growth_index is set to 0, it is automatically set to 1: growing at the 0th timestep is not allowed; change the initial value of beta instead. growth_interval is hard-coded to 2. The next growth index is computed with the recurrence growth_factor = growth_factor + 1; N_growth = N_growth + growth_interval*growth_factor. This mimics a linear increase of the learning rate beta using only left shifts at particular intervals.
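The proportional-integral update above can be sketched in plain NumPy (a hedged sketch of the abstract behavior under the documented update order; names are ours):

```python
import numpy as np

def pi_neuron_step(state, a_in, constraint_bias, beta):
    """Sketch of the proportional-integral dynamics. The accumulated state
    is the integral term; the instantaneous residual is the proportional term."""
    residual = beta * (a_in - constraint_bias)
    state = state + residual      # integral update
    s_out = state + residual      # output = integral + proportional
    return state, s_out

state, s_out = pi_neuron_step(np.zeros(2), np.array([1.0, 0.0]), np.zeros(2), 0.5)
s_out  # → array([1., 0.]): state becomes [0.5, 0.]; the output adds the residual again
```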

class lava.lib.optimization.solvers.qp.processes.QPDense(**kwargs)

Bases: AbstractProcess

Connections between two neurons in the QP solver. Does not buffer connections like the Dense process in Lava. Meant for the GDCC QP solver only.

Realizes the following abstract behavior: a_out = weights * s_in

Initialize the constraintDirections Process.

Parameters
  • shape (int tuple, optional) – Define the shape of the connections matrix as a tuple. Defaults to (1,1).

  • weights ((1-D or 2-D np.array), optional) – Define the weights for the dense connection process.
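The abstract behavior a_out = weights * s_in is a plain matrix-vector product, as in this NumPy sketch (illustrative only; not the Lava process itself):

```python
import numpy as np

# Sketch of QPDense's abstract behavior: an unbuffered weighted projection
# of the incoming graded spike vector.
weights = np.array([[2.0, 0.0],
                    [1.0, 1.0]])
s_in = np.array([1.0, 3.0])
a_out = weights @ s_in  # → array([2., 4.])
```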

class lava.lib.optimization.solvers.qp.processes.SigmaNeurons(**kwargs)

Bases: AbstractProcess

Process to accumulate spikes into a state variable before they are fed to another process.

Realizes the following abstract behavior: a_out = self.x_internal + s_in

Parameters
  • shape (int tuple, optional) – Define the shape of the thresholds vector. Defaults to (1,1).

  • x_sig_init (1-D np.array, optional) – Initial value of the internal sigma neurons. Should be the same as qp_neurons_init. Default value is 0.
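Sigma accumulation is the counterpart of delta coding: summing the sparse graded spikes reconstructs the signal. A minimal sketch (the function name is ours):

```python
import numpy as np

def sigma_neuron_step(x_internal, s_in):
    """Sketch of SigmaNeurons' abstract behavior: accumulate incoming
    graded spikes into the internal state; a_out equals the new state."""
    return x_internal + s_in

state = np.zeros(2)
for spike in [np.array([1.0, 0.0]), np.array([0.5, 2.0])]:
    state = sigma_neuron_step(state, spike)
state  # → array([1.5, 2. ])
```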

class lava.lib.optimization.solvers.qp.processes.SolutionNeurons(**kwargs)

Bases: AbstractProcess

The neurons that evolve according to the constraint-corrected gradient dynamics. Implements the abstract behavior: qp_neuron_state += (-alpha * (s_in_qc + grad_bias) - beta * s_in_cn)

Initialize the SolutionNeurons process.

Parameters
  • shape (int tuple, optional) – A tuple defining the shape of the qp neurons. Defaults to (1,1).

  • qp_neurons_init (1-D np.array, optional) – Initial value of the QP solution neurons.

  • grad_bias (1-D np.array, optional) – The bias of the gradient of the QP. This is the value ‘p’ in the QP definition.

  • alpha (1-D np.array, optional) – Defines the learning rate for gradient descent. Defaults to 1.

  • beta (1-D np.array, optional) – Defines the learning rate for constraint-checking. Defaults to 1.

  • alpha_decay_schedule (int, optional) – The number of iterations after which one right shift operation takes place for alpha. Default initialization to a very high value of 10000.

  • beta_growth_schedule (int, optional) – The number of iterations after which one left shift operation takes place for beta. Default initialization to a very high value of 10000.
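One update of the abstract behavior above, sketched in plain NumPy (a hedged sketch; names are ours, not the Lava implementation):

```python
import numpy as np

def solution_neuron_step(qp_state, s_in_qc, s_in_cn, grad_bias, alpha, beta):
    """One sketched update of SolutionNeurons' abstract behavior:
    state += -alpha*(s_in_qc + grad_bias) - beta*s_in_cn."""
    return qp_state + (-alpha * (s_in_qc + grad_bias) - beta * s_in_cn)

x = solution_neuron_step(np.zeros(2), np.array([1.0, 2.0]), np.zeros(2),
                         np.array([1.0, 1.0]), 0.1, 0.1)
x  # → array([-0.2, -0.3])
```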

lava.lib.optimization.solvers.qp.solver

Inheritance diagram of lava.lib.optimization.solvers.qp.solver
class lava.lib.optimization.solvers.qp.solver.QPSolver(alpha, beta, alpha_decay_schedule, beta_growth_schedule)

Bases: object

Solve the full QP by connecting two Lava Processes, GradientDynamics and ConstraintCheck.

Parameters
  • alpha (1-D np.array) – The learning rate for gradient descent

  • beta (1-D np.array) – The learning rate for constraint correction

  • alpha_decay_schedule (int, default 10000) – Number of iterations after which one right shift takes place for alpha

  • beta_growth_schedule (int, default 10000) – Number of iterations after which one left shift takes place for beta

solve(problem, iterations=400)

Solve the supplied QP problem.

Parameters
  • problem (QP) – The QP containing the matrices that set up the problem

  • iterations (int, optional) – Number of iterations for which QP has to run, by default 400

Returns

sol – Solution to the quadratic program

Return type

1-D np.array
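To see how the pieces fit together, the solver loop can be mimicked in plain NumPy by composing the documented abstract behaviors of ConstraintCheck and GradientDynamics. This is an illustrative analogue under our own assumptions (zero initial state, fixed learning rates, constraints of the form A @ x >= k); the real QPSolver runs as connected Lava processes:

```python
import numpy as np

def qp_solve_reference(Q, p, A, k, alpha, beta, iterations=400):
    """Plain-NumPy analogue of QPSolver.solve: gradient descent with
    constraint correction, composed from the abstract behaviors above."""
    x = np.zeros_like(p)
    for _ in range(iterations):
        ax = A @ x
        violation = (ax - k) * (ax < k)                         # ConstraintCheck
        x = x - alpha * (Q @ x + p) - beta * (A.T @ violation)  # GradientDynamics
    return x

# Unconstrained sanity check: minimizing 0.5*x'Qx + p'x with Q = I, p = [-1, -1]
# has the solution x = [1, 1].
Q, p = np.eye(2), np.array([-1.0, -1.0])
A, k = np.zeros((1, 2)), np.array([-1.0])   # constraint 0 >= -1, never violated
qp_solve_reference(Q, p, A, k, alpha=0.1, beta=0.1)  # ≈ array([1., 1.])
```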