network package

Submodules

cnsproject.network.connections module

Module for connections between neural populations.

class cnsproject.network.connections.AbstractConnection(pre: cnsproject.network.neural_populations.NeuralPopulation, post: cnsproject.network.neural_populations.NeuralPopulation, lr: Optional[Union[float, Sequence[float]]] = None, weight_decay: float = 0.0, **kwargs)

Bases: abc.ABC, torch.nn.modules.module.Module

Abstract class for implementing connections.

Make sure to implement the compute, update, and reset_state_variables methods in your child class.

Define the populations you want to connect as pre and post. In case of learning, also define the learning rate (lr) and the learning rule to follow. The attribute w is reserved for synaptic weights; it is not predefined or allocated, since it depends on the pattern of connectivity, so make sure to allocate it appropriately in the child class initialization. The default range of each synaptic weight is [0, 1], but it can be controlled with wmin and wmax. Synaptic strengths may decay over time rather than lasting forever; set the decay rate with the weight_decay attribute. Also, if you want to control the overall input synaptic strength to each neuron, use the norm argument to normalize the synaptic weights.

In case of learning, you have to implement the compute and update methods. The compute method calculates the activity of the post-synaptic population based on the pre-synaptic one; the update method applies the weight changes dictated by the learning rule. If you find this architecture mind-boggling, try an architecture of your own, but make sure to redefine the learning rule architecture so that it remains compatible with yours. A loose sketch of the weight handling appears after the keyword arguments below.

Parameters
  • pre (NeuralPopulation) – The pre-synaptic neural population.

  • post (NeuralPopulation) – The post-synaptic neural population.

  • lr (float or (float, float), Optional) – The learning rate for training procedure. If a tuple is given, the first value defines potentiation learning rate and the second one depicts the depression learning rate. The default is None.

  • weight_decay (float, Optional) – Define rate of decay in synaptic strength. The default is 0.0.

Keyword Arguments
  • learning_rule (LearningRule) – Define the learning rule by which the network will be trained. The default is NoOp (see learning/learning_rules.py for more details).

  • wmin (float) – The minimum possible synaptic strength. The default is 0.0.

  • wmax (float) – The maximum possible synaptic strength. The default is 1.0.

  • norm (float) – Define a normalization on input signals to a population. If None, there is no normalization. The default is None.
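
As a loose illustration of the weight bookkeeping described above (the sizes, the normalization scheme, and the per-step decay shown here are assumptions for the sketch, not prescribed by the template), a dense weight matrix could be handled as follows:

import torch

# Hypothetical sizes: 10 pre-synaptic and 2 post-synaptic neurons.
n_pre, n_post = 10, 2
wmin, wmax, norm, weight_decay = 0.0, 1.0, 5.0, 0.01

# Allocate `w` for the chosen connectivity and clamp it to [wmin, wmax].
w = torch.rand(n_pre, n_post).clamp(wmin, wmax)

# Normalization: scale so the total input strength to each post-synaptic
# neuron sums to `norm`.
w = w * norm / w.sum(dim=0, keepdim=True)

# Decay of synaptic strength, applied once per simulation step.
w = w * (1.0 - weight_decay)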

abstract compute(s: torch.Tensor) → None

Compute the post-synaptic neural population activity based on the given spikes of the pre-synaptic population.

Parameters

s (torch.Tensor) – The pre-synaptic spikes tensor.

Returns

Return type

None

abstract update(**kwargs) → None

Compute connection’s learning rule.

Keyword Arguments
  • learning (bool) – Whether learning is enabled or not. The default is True.

  • mask (torch.ByteTensor) – Define a mask to determine which weights to clamp to zero.

Returns

Return type

None

abstract reset_state_variables() → None

Reset all internal state variables.

Returns

Return type

None

training: bool
class cnsproject.network.connections.DenseConnection(pre: cnsproject.network.neural_populations.NeuralPopulation, post: cnsproject.network.neural_populations.NeuralPopulation, lr: Optional[Union[float, Sequence[float]]] = None, weight_decay: float = 0.0, **kwargs)

Bases: cnsproject.network.connections.AbstractConnection

Specify a fully-connected synapse between neural populations.

Implement the dense connection pattern following the abstract connection template.

compute(s: torch.Tensor) → None

TODO.

Implement the computation of post-synaptic population activity given the activity of the pre-synaptic population.

update(**kwargs) → None

TODO.

Update the connection weights based on the learning rule computations. You might need to call the parent method.

reset_state_variables() → None

TODO.

Reset all the state variables of the connection.
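
A minimal sketch of what compute could look like for the dense pattern, assuming weights are stored as an (n_pre, n_post) matrix; how the result reaches the post-synaptic population (e.g. as input current to its forward method) is left to the implementation:

import torch

def dense_compute(s: torch.Tensor, w: torch.Tensor) -> torch.Tensor:
    # Weighted sum of pre-synaptic spikes: the input each post-synaptic
    # neuron receives through the fully-connected weights.
    return s.float().view(-1) @ w

# Example: 10 pre-synaptic neurons fully connected to 2 post-synaptic ones.
s = torch.rand(10) < 0.3          # random spike tensor
w = torch.rand(10, 2)             # weights in the default [0, 1] range
i = dense_compute(s, w)           # shape (2,): input to the post population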

training: bool
class cnsproject.network.connections.RandomConnection(pre: cnsproject.network.neural_populations.NeuralPopulation, post: cnsproject.network.neural_populations.NeuralPopulation, lr: Optional[Union[float, Sequence[float]]] = None, weight_decay: float = 0.0, **kwargs)

Bases: cnsproject.network.connections.AbstractConnection

Specify a random synaptic connection between neural populations.

Implement the random connection pattern following the abstract connection template.

compute(s: torch.Tensor) → None

TODO.

Implement the computation of post-synaptic population activity given the activity of the pre-synaptic population.

update(**kwargs) → None

TODO.

Update the connection weights based on the learning rule computations. You might need to call the parent method.

reset_state_variables() → None

TODO.

Reset all the state variables of the connection.
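
One plausible reading of the random pattern (the wiring probability p and the mask handling are assumptions for this sketch) is a dense connection whose weights are zeroed by a fixed random mask:

import torch

n_pre, n_post, p = 10, 2, 0.3

# Draw a fixed connectivity mask once, at construction time.
mask = (torch.rand(n_pre, n_post) < p).float()
w = torch.rand(n_pre, n_post) * mask    # absent synapses stay at zero

def random_compute(s: torch.Tensor) -> torch.Tensor:
    # Identical to the dense case; the sparsity lives in the weights.
    return s.float().view(-1) @ w

After each learning update, the mask should be reapplied (w = w * mask) so that pruned synapses do not reappear.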

training: bool
class cnsproject.network.connections.ConvolutionalConnection(pre: cnsproject.network.neural_populations.NeuralPopulation, post: cnsproject.network.neural_populations.NeuralPopulation, lr: Optional[Union[float, Sequence[float]]] = None, weight_decay: float = 0.0, **kwargs)

Bases: cnsproject.network.connections.AbstractConnection

Specify a convolutional synaptic connection between neural populations.

Implement the convolutional connection pattern following the abstract connection template.

compute(s: torch.Tensor) → None

TODO.

Implement the computation of post-synaptic population activity given the activity of the pre-synaptic population.

update(**kwargs) → None

TODO.

Update the connection weights based on the learning rule computations. You might need to call the parent method.

reset_state_variables() → None

TODO.

Reset all the state variables of the connection.
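
A rough sketch of the convolutional case using torch.nn.functional.conv2d (the shapes, stride, and padding here are assumptions for illustration):

import torch
import torch.nn.functional as F

w = torch.rand(1, 1, 3, 3)               # shared kernel: (out_ch, in_ch, kH, kW)
s = torch.rand(1, 28, 28) < 0.1          # pre-synaptic spikes on a 28x28 grid

def conv_compute(s: torch.Tensor) -> torch.Tensor:
    # Treat the spikes as a (batch=1, C, H, W) image; the kernel plays the
    # role of `w` and is shared across all spatial positions.
    i = F.conv2d(s.float().unsqueeze(0), w, stride=1, padding=1)
    return i.squeeze(0)                  # (out_ch, 28, 28) with this padding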

training: bool
class cnsproject.network.connections.PoolingConnection(pre: cnsproject.network.neural_populations.NeuralPopulation, post: cnsproject.network.neural_populations.NeuralPopulation, lr: Optional[Union[float, Sequence[float]]] = None, weight_decay: float = 0.0, **kwargs)

Bases: cnsproject.network.connections.AbstractConnection

Specify a pooling synaptic connection between neural populations.

Implement the pooling connection pattern following the abstract connection template. Consider a parameter for defining the type of pooling.

Note: The pooling operation does not support learning. You might need to make some modifications to the defined structure of this class.

compute(s: torch.Tensor) → None

TODO.

Implement the computation of post-synaptic population activity given the activity of the pre-synaptic population.

update(**kwargs) → None

TODO.

Update the connection weights based on the learning rule computations. You might need to call the parent method.

Note: Be careful with this method; as noted above, pooling does not support learning, so there are no weights to update.

reset_state_variables() → None

TODO.

Reset all the state variables of the connection.
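
A minimal sketch of max pooling over spikes with torch.nn.functional.max_pool2d (the window size is an assumption; other pooling types would swap the functional call):

import torch
import torch.nn.functional as F

s = torch.rand(1, 28, 28) < 0.1          # pre-synaptic spikes

def pooling_compute(s: torch.Tensor) -> torch.Tensor:
    # A window emits a spike whenever any unit inside it spiked; there
    # are no weights, so `update` should reduce to a no-op.
    return F.max_pool2d(s.float().unsqueeze(0), kernel_size=2).squeeze(0)

out = pooling_compute(s)                 # shape (1, 14, 14)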

training: bool

cnsproject.network.monitors module

Module for monitoring objects.

class cnsproject.network.monitors.Monitor(obj: Union[cnsproject.network.neural_populations.NeuralPopulation, cnsproject.network.connections.AbstractConnection], state_variables: Iterable[str], device: Optional[str] = 'cpu')

Bases: object

Record desired state variables.

You can record variables of different SNN objects using an instance of this class. For this purpose, pass the object as obj and provide a list of the string names of the variables you intend to record as state_variables. All recordings reside on the CPU unless you change the device value to cuda.

To save the variables at each time step, call the record method at the desired step. By default, the number of time steps is set to 0, so the monitor only keeps the data of the single step at which record was called, which makes it equivalent to the object's variable itself. To record the variables over a specified duration, use set_time_steps to define the duration and time resolution. Use get with the name of the variable you want to retrieve to obtain the recorded values.

Also make sure to call reset_state_variables before starting any simulation to make the allocations.

Examples

>>> from network.neural_populations import LIFPopulation
>>> from network.monitors import Monitor
>>> neuron = LIFPopulation(shape=(1,))
Now, assume there are two variables `s` and `v` in LIFPopulation which indicate spikes
and voltages respectively.
>>> monitor = Monitor(neuron, state_variables=["s", "v"])
>>> time = 10  # time of simulation
>>> dt = 1.0  # time resolution
>>> monitor.set_time_steps(time, dt)  # record the whole simulation
>>> monitor.reset_state_variables()
>>> for t in range(time):
...     # compute input spike trace and call `neuron.forward(input_trace)`
...     monitor.record()
`monitor.record()` should be called within the simulation process. The state variables of
the given object are thereby recorded at each simulation time step and kept in the recording.
>>> s = monitor.get("s")
>>> v = monitor.get("v")
`s` and `v` hold the tensors of spikes and voltages during the simulation. Their shape would
be `(time, *neuron.shape)`.
Parameters
  • obj (NeuralPopulation or AbstractConnection) – The object whose states are to be recorded.

  • state_variables (Iterable of str) – Name of variables of interest.

  • device (str, Optional) – The device to run the monitor. The default is “cpu”.

set_time_steps(time: int, dt: float)

Set number of time steps to record.

Parameters
  • time (int) – The simulation time we intend to record. If 0, only a single time step is kept at each recording.

  • dt (float) – Simulation time resolution.

get(variable: str) → torch.Tensor

Return recording to user.

Parameters

variable (str) – The requested variable.

Returns

logs – The recording log of the requested variable.

Return type

torch.Tensor

record() → None

Append the current value of the recorded state variables to the recording.

Returns

Return type

None

reset_state_variables() → None

Reset all internal state variables.

Returns

Return type

None
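
For orientation, here is a bare-bones version of the recording logic described above; it assumes the monitored object exposes each state variable as a same-named tensor attribute, which the actual class may handle differently (e.g. with preallocated buffers):

import torch

class MinimalMonitor:
    def __init__(self, obj, state_variables, device="cpu"):
        self.obj = obj
        self.state_variables = state_variables
        self.device = device
        self.recording = {}

    def set_time_steps(self, time: int, dt: float) -> None:
        self.time_steps = int(time / dt)

    def reset_state_variables(self) -> None:
        # Start every simulation with fresh, empty buffers.
        self.recording = {var: [] for var in self.state_variables}

    def record(self) -> None:
        for var in self.state_variables:
            value = getattr(self.obj, var)
            self.recording[var].append(value.detach().clone().to(self.device))

    def get(self, variable: str) -> torch.Tensor:
        # Stack along a leading time axis: shape (time, *obj_shape).
        return torch.stack(self.recording[variable])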

cnsproject.network.network module

Module for spiking neural network construction and simulation.

class cnsproject.network.network.Network(dt: float = 1.0, learning: bool = True, reward: Optional[cnsproject.learning.rewards.AbstractReward] = None, decision: Optional[cnsproject.decision.decision.AbstractDecision] = None, **kwargs)

Bases: torch.nn.modules.module.Module

The class responsible for creating a neural network and its simulation.

Examples

>>> from network.neural_populations import InputPopulation, LIFPopulation
>>> from network.connections import DenseConnection
>>> from network.monitors import Monitor
>>> from network import Network
>>> inp = InputPopulation(shape=(10,))
>>> out = LIFPopulation(shape=(2,))
>>> synapse = DenseConnection(inp, out)
>>> net = Network(learning=False)
>>> net.add_layer(inp, "input")
>>> net.add_layer(out, "output")
>>> net.add_connection(synapse, "input", "output")
>>> out_m = Monitor(out, state_variables=["s", "v"])
>>> syn_m = Monitor(synapse, state_variables=["w"])  # `w` indicates synaptic weights
>>> net.add_monitor(out_m, "output")
>>> net.add_monitor(syn_m, "synapse")
>>> net.run(10)
Here, we create a simple network with two layers and a dense connection. We aim to monitor
the synaptic weights and the output layer's spikes and voltages. We simulate the network for
10 milliseconds.

You will need to implement the run method, which is responsible for the whole simulation procedure of a spiking neural network. Compute the number of time steps from the class's dt attribute and the method's time parameter, then iteratively call the single-step simulation procedures of the network objects.

NOTE: If you faced any errors related to importing packages, modify the __init__.py files accordingly to solve the problem.

Parameters
  • dt (float, Optional) – Specify simulation timestep. The default is 1.0.

  • learning (bool, Optional) – Whether to allow weight update and learning. The default is True.

  • reward (AbstractReward, Optional) – The class to allow reward modifications in case of reward-modulated learning. The default is None.

  • decision (AbstractDecision, Optional) – The class to enable decision making. The default is None.

add_layer(layer: cnsproject.network.neural_populations.NeuralPopulation, name: str) → None

Add a neural population to the network.

Parameters
  • layer (NeuralPopulation) – The neural population to be added.

  • name (str) – Name of the layer for further referencing.

Returns

Return type

None

add_connection(connection: cnsproject.network.connections.AbstractConnection, pre: str, post: str) → None

Add a connection between neural populations to the network. The reference name will be in the format {pre}_to_{post}.

Parameters
  • connection (AbstractConnection) – The connection to be added.

  • pre (str) – Reference name of pre-synaptic population.

  • post (str) – Reference name of post-synaptic population.

Returns

Return type

None

add_monitor(monitor: cnsproject.network.monitors.Monitor, name: str) → None

Add a monitor on a network object to the network.

Parameters
  • monitor (Monitor) – The monitor instance to be added.

  • name (str) – Name of the monitor instance for further referencing.

Returns

Return type

None

run(time: int, inputs: Dict[str, torch.Tensor] = {}, one_step: bool = False, **kwargs) → None

Simulate network for a specific time duration with the possible given input.

Input to each layer is given through the inputs parameter, a dictionary mapping population names to tensors of input values through time. The one_step parameter defines how input propagates through the network: whether it goes forward all the way to the final layer within a single time step, or passes from one layer to the next at each simulation step. You can remove it if you find it mind-boggling. A rough outline of the simulation loop appears after the argument list below.

Also, make sure to call self.reset_state_variables() before starting the simulation.

TODO.

Implement the body of this method.

Parameters
  • time (int) – Simulation time.

  • inputs (Dict[str, torch.Tensor], optional) – Mapping of input layer names to their input spike tensors. The default is {}.

  • one_step (bool, optional) – Whether to propagate the inputs all the way through the network in a single simulation step. The default is False.

Keyword Arguments
  • clamp (Dict[str, torch.Tensor]) – Mapping of layer names to boolean masks of neurons that should be clamped to spiking.

  • unclamp (Dict[str, torch.Tensor]) – Mapping of layer names to boolean masks of neurons that should be clamped to not spike.

  • masks (Dict[str, torch.Tensor]) – Mapping of connection names to boolean masks of the weights to clamp to zero.

Note: You can pass the reward and decision arguments as keyword arguments to this function.

Returns

Return type

None
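
A rough outline of the loop run is expected to implement; the layers, connections, and monitors dictionaries and the pre.s spike attribute are assumptions about the surrounding template, and the reward, decision, and one_step handling is omitted:

def run(self, time, inputs={}, one_step=False, **kwargs):
    self.reset_state_variables()
    for t in range(int(time / self.dt)):
        for name, layer in self.layers.items():
            if name in inputs:
                layer.forward(inputs[name][t])   # externally driven layer
        for connection in self.connections.values():
            # Propagate pre-synaptic spikes, then apply the learning rule.
            connection.compute(connection.pre.s)
            connection.update(learning=self.learning, **kwargs)
        for monitor in self.monitors.values():
            monitor.record()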

reset_state_variables() → None

Reset all internal state variables.

Returns

Return type

None

train(mode: bool = True) → torch.nn.Module

Set the network’s training mode.

Parameters

mode (bool, optional) – Mode of training. True turns on the training while False turns it off. The default is True.

Returns

Return type

torch.nn.Module

training: bool

cnsproject.network.neural_populations module

Module for neuronal dynamics and populations.

class cnsproject.network.neural_populations.NeuralPopulation(shape: Iterable[int], spike_trace: bool = True, additive_spike_trace: bool = True, tau_s: Union[float, torch.Tensor] = 15.0, trace_scale: Union[float, torch.Tensor] = 1.0, is_inhibitory: bool = False, learning: bool = True, **kwargs)

Bases: torch.nn.modules.module.Module

Base class for implementing neural populations.

Make sure to implement the abstract methods in your child class. Note that this template gives you neural populations that are homogeneous in terms of excitation and inhibition. You can modify this by removing is_inhibitory and adding another attribute that defines the percentage of inhibitory/excitatory neurons, or by using a boolean tensor with the same shape as the population that defines which neurons are inhibitory.

The most important attribute of each neural population is its shape, which indicates the number and/or architecture of the neurons in it. When populations are connected, each pre-synaptic population impacts the post-synaptic one whenever it spikes. A spike's effect may persist for some duration of time with a decaying magnitude. To handle this, four attributes are defined (a standalone sketch of these trace dynamics appears after the parameter list below):

  • spike_trace is a boolean indicating whether to record the spike trace in each time step.

  • additive_spike_trace indicates whether to save the accumulated traces up to the current time step.

  • tau_s is the time constant with which the spike trace decays.

  • trace_scale is responsible for the scale of each spike at the following time steps. Its value is only considered if additive_spike_trace is set to True.

Make sure to call reset_state_variables before starting the simulation to allocate and/or reset the state variables such as s (spikes tensor) and traces (trace of spikes). Also do not forget to set the time resolution (dt) for the simulation.

Each simulation step is defined in the forward method. You can use the utility methods (i.e. compute_potential, compute_spike, refractory_and_reset, and compute_decay) to break the differential equations into smaller code blocks and call them within forward. Make sure to call the forward and compute_decay methods of NeuralPopulation in the child class methods, as they provide the computation of spike traces (not necessary if you are not considering traces). The forward method can work with either current or spike traces; use whichever you wish. When populations are connected, you might need to consider how to convert pre-synaptic spikes into current, or how to change the forward block to accept spike traces as input.

There are some more points to consider:

  • Parameters of the neuron are not specified in the child classes. You have to define them as attributes of the corresponding class (i.e. in __init__) with suitable naming.

  • In case you want to run simulations on cuda, make sure to transfer the tensors to the desired device, either by defining a device attribute or by handling the issue in upstream code.

  • Almost all variables, parameters, and arguments in this file are tensors, either holding a single value or shaped like the population. No extra dimension for time is needed; the time dimension should be handled in upstream code and/or by monitor objects.

Parameters
  • shape (Iterable of int) – Define the topology of neurons in the population.

  • spike_trace (bool, Optional) – Specify whether to record spike traces. The default is True.

  • additive_spike_trace (bool, Optional) – Specify whether to record spike traces additively. The default is True.

  • tau_s (float or torch.Tensor, Optional) – Time constant of spike trace decay. The default is 15.0.

  • trace_scale (float or torch.Tensor, Optional) – The scaling factor of spike traces. The default is 1.0.

  • is_inhibitory (bool, Optional) – Whether the neurons are inhibitory or excitatory. The default is False.

  • learning (bool, Optional) – Define the training mode. The default is True.
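
The trace dynamics described above boil down to an exponential decay plus a per-spike contribution. A standalone sketch (variable names are assumptions):

import math
import torch

dt, tau_s, trace_scale = 1.0, 15.0, 1.0
decay = math.exp(-dt / tau_s)            # per-step exponential decay factor

def update_traces(traces: torch.Tensor, s: torch.Tensor,
                  additive: bool = True) -> torch.Tensor:
    traces = traces * decay              # all traces fade over time
    if additive:
        # Accumulate: each spike adds `trace_scale` on top of the decay.
        return traces + trace_scale * s.float()
    # Overwrite: a spike resets its neuron's trace to a fixed value.
    return torch.where(s, torch.ones_like(traces), traces)

traces = torch.zeros(10)
s = torch.rand(10) < 0.2                 # random spikes for one step
traces = update_traces(traces, s)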

abstract forward(traces: torch.Tensor) → None

Simulate the neural population for a single step.

Parameters

traces (torch.Tensor) – Input spike trace.

Returns

Return type

None

abstract compute_potential() → None

Compute the potential of neurons in the population.

Returns

Return type

None

abstract compute_spike() → None

Compute the spike tensor.

Returns

Return type

None

abstract refractory_and_reset() → None

Apply the refractory period and reset the neurons.

Returns

Return type

None

abstract compute_decay() → None

Set the decays.

Returns

Return type

None

reset_state_variables() → None

Reset all internal state variables.

Returns

Return type

None

train(mode: bool = True) → cnsproject.network.neural_populations.NeuralPopulation

Set the population’s training mode.

Parameters

mode (bool, optional) – Mode of training. True turns on the training while False turns it off. The default is True.

Returns

Return type

NeuralPopulation

training: bool
class cnsproject.network.neural_populations.InputPopulation(shape: Iterable[int], spike_trace: bool = True, additive_spike_trace: bool = True, tau_s: Union[float, torch.Tensor] = 10.0, trace_scale: Union[float, torch.Tensor] = 1.0, learning: bool = True, **kwargs)

Bases: cnsproject.network.neural_populations.NeuralPopulation

Neural population for user-defined spike pattern.

This class is implemented for future usage. Extend it if needed.

Parameters
  • shape (Iterable of int) – Define the topology of neurons in the population.

  • spike_trace (bool, Optional) – Specify whether to record spike traces. The default is True.

  • additive_spike_trace (bool, Optional) – Specify whether to record spike traces additively. The default is True.

  • tau_s (float or torch.Tensor, Optional) – Time constant of spike trace decay. The default is 10.0.

  • trace_scale (float or torch.Tensor, Optional) – The scaling factor of spike traces. The default is 1.0.

  • learning (bool, Optional) – Define the training mode. The default is True.

forward(traces: torch.Tensor) → None

Simulate the neural population for a single step.

Parameters

traces (torch.Tensor) – Input spike trace.

Returns

Return type

None

reset_state_variables() → None

Reset all internal state variables.

Returns

Return type

None

training: bool
class cnsproject.network.neural_populations.LIFPopulation(shape: Iterable[int], spike_trace: bool = True, additive_spike_trace: bool = True, tau_s: Union[float, torch.Tensor] = 10.0, trace_scale: Union[float, torch.Tensor] = 1.0, is_inhibitory: bool = False, learning: bool = True, **kwargs)

Bases: cnsproject.network.neural_populations.NeuralPopulation

Layer of Leaky Integrate and Fire neurons.

Implement the LIF neural dynamics (parameters of the model must be modifiable). Follow the template structure of the NeuralPopulation class for consistency.

forward(traces: torch.Tensor) → None

TODO.

  1. Make use of other methods to fill the body. This is the main method responsible for one step of neuron simulation.

  2. You might need to call the method from the parent class.

compute_potential() → None

TODO.

Implement the neural dynamics for computing the potential of LIF neurons. The method can either make changes to attributes directly or return the result for further use.

compute_spike() → None

TODO.

Implement the spike condition. The method can either make changes to attributes directly or return the result for further use.

abstract refractory_and_reset() → None

TODO.

Implement the refractory and reset conditions. The method can either make changes to attributes directly or return the computed value for further use.

abstract compute_decay() → None

TODO.

Implement the dynamics of decays. You might need to call the method from the parent class.
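
For reference, the LIF dynamics the methods above decompose can be sketched in a few lines (parameter values and names are hypothetical; the class should expose them as modifiable attributes):

import torch

dt, tau_m, r = 1.0, 10.0, 1.0
v_rest, v_reset, threshold = -70.0, -70.0, -55.0

def lif_step(v: torch.Tensor, i: torch.Tensor):
    # compute_potential:  tau_m * dv/dt = -(v - v_rest) + R * I
    v = v + dt / tau_m * (-(v - v_rest) + r * i)
    # compute_spike: fire wherever the potential crosses the threshold.
    s = v >= threshold
    # refractory_and_reset: clamp fired neurons back to the reset potential.
    v = torch.where(s, torch.full_like(v, v_reset), v)
    return v, s

v = torch.full((5,), v_rest)             # a population of 5 neurons at rest
for _ in range(20):
    v, s = lif_step(v, i=torch.full((5,), 20.0))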

training: bool
class cnsproject.network.neural_populations.ELIFPopulation(shape: Iterable[int], spike_trace: bool = True, additive_spike_trace: bool = True, tau_s: Union[float, torch.Tensor] = 10.0, trace_scale: Union[float, torch.Tensor] = 1.0, is_inhibitory: bool = False, learning: bool = True, **kwargs)

Bases: cnsproject.network.neural_populations.NeuralPopulation

Layer of Exponential Leaky Integrate and Fire neurons.

Implement the ELIF neural dynamics (parameters of the model must be modifiable). Follow the template structure of the NeuralPopulation class for consistency.

Note: You can use LIFPopulation as the parent class as well.

forward(traces: torch.Tensor) → None

TODO.

  1. Make use of other methods to fill the body. This is the main method responsible for one step of neuron simulation.

  2. You might need to call the method from the parent class.

compute_potential() → None

TODO.

Implement the neural dynamics for computing the potential of ELIF neurons. The method can either make changes to attributes directly or return the result for further use.

compute_spike() → None

TODO.

Implement the spike condition. The method can either make changes to attributes directly or return the result for further use.

abstract refractory_and_reset() → None

TODO.

Implement the refractory and reset conditions. The method can either make changes to attributes directly or return the computed value for further use.

abstract compute_decay() → None

TODO.

Implement the dynamics of decays. You might need to call the method from the parent class.
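
The ELIF potential update differs from LIF only by an exponential term near threshold. A sketch with hypothetical parameter names (delta_t for the slope sharpness, theta_rh for the rheobase threshold):

import torch

dt, tau_m, v_rest, r = 1.0, 10.0, -70.0, 1.0
delta_t, theta_rh = 2.0, -55.0

def elif_potential(v: torch.Tensor, i: torch.Tensor) -> torch.Tensor:
    # tau_m * dv/dt = -(v - v_rest)
    #                 + delta_t * exp((v - theta_rh) / delta_t) + R * I
    exp_term = delta_t * torch.exp((v - theta_rh) / delta_t)
    return v + dt / tau_m * (-(v - v_rest) + exp_term + r * i)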

training: bool
class cnsproject.network.neural_populations.AELIFPopulation(shape: Iterable[int], spike_trace: bool = True, additive_spike_trace: bool = True, tau_s: Union[float, torch.Tensor] = 10.0, trace_scale: Union[float, torch.Tensor] = 1.0, is_inhibitory: bool = False, learning: bool = True, **kwargs)

Bases: cnsproject.network.neural_populations.NeuralPopulation

Layer of Adaptive Exponential Leaky Integrate and Fire neurons.

Implement the adaptive ELIF neural dynamics (parameters of the model must be modifiable). Follow the template structure of the NeuralPopulation class for consistency.

Note: You can use ELIFPopulation as the parent class as well.

forward(traces: torch.Tensor) → None

TODO.

  1. Make use of other methods to fill the body. This is the main method responsible for one step of neuron simulation.

  2. You might need to call the method from the parent class.

compute_potential() → None

TODO.

Implement the neural dynamics for computing the potential of adaptive ELIF neurons. The method can either make changes to attributes directly or return the result for further use.

compute_spike() → None

TODO.

Implement the spike condition. The method can either make changes to attributes directly or return the result for further use.

abstract refractory_and_reset() → None

TODO.

Implement the refractory and reset conditions. The method can either make changes to attributes directly or return the computed value for further use.

abstract compute_decay() → None

TODO.

Implement the dynamics of decays. You might need to call the method from the parent class.
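
The adaptive variant couples an adaptation current to the ELIF potential, in the style of the AdEx model. A sketch with hypothetical parameter names (a, b, and tau_w are the usual adaptation parameters):

import torch

dt, tau_m, tau_w, v_rest, r = 1.0, 10.0, 100.0, -70.0, 1.0
delta_t, theta_rh, a, b = 2.0, -55.0, 4.0, 0.08

def aelif_step(v: torch.Tensor, w: torch.Tensor, s: torch.Tensor,
               i: torch.Tensor):
    exp_term = delta_t * torch.exp((v - theta_rh) / delta_t)
    # Membrane: the adaptation current w opposes the input current.
    v = v + dt / tau_m * (-(v - v_rest) + exp_term + r * (i - w))
    # Adaptation: tau_w * dw/dt = a * (v - v_rest) - w, plus a jump of b
    # whenever the neuron spikes.
    w = w + dt / tau_w * (a * (v - v_rest) - w) + b * s.float()
    return v, w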

training: bool

Module contents