learning package¶
Submodules¶
cnsproject.learning.learning_rules module¶
Module for learning rules.
class cnsproject.learning.learning_rules.LearningRule(connection: cnsproject.network.connections.AbstractConnection, lr: Optional[Union[float, Sequence[float]]] = None, weight_decay: float = 0.0, **kwargs)
Bases: abc.ABC
Abstract class for defining learning rules.
Each learning rule is applied to a synaptic connection, stored as the connection attribute. It possesses a learning rate lr and a weight decay rate weight_decay. You might need to define additional parameters/attributes in the child classes.
Implement the dynamics in the update method of the child classes. Computations for weight decay and clamping the weights have already been implemented in the parent class's update method, so do not reinvent the wheel: call it at the end of the child method.
- Parameters
connection (AbstractConnection) – The connection on which the learning rule is applied.
lr (float or sequence of float, optional) – The learning rate for the training procedure. If a tuple is given, the first value defines the potentiation learning rate and the second one the depression learning rate. The default is None.
weight_decay (float) – The rate of decay in synaptic strength. The default is 0.0.
update() → None
Abstract method for a learning rule update.
- Returns
- Return type
None
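The intended subclassing pattern can be sketched with minimal stand-ins. Everything below (the Connection attributes w, wmin, wmax and the decay/clamp details) is an assumption for illustration, not the package's real API:

```python
from abc import ABC


class Connection:
    """Stand-in for AbstractConnection; attribute names are assumptions."""
    def __init__(self, w, wmin=0.0, wmax=1.0):
        self.w, self.wmin, self.wmax = list(w), wmin, wmax


class LearningRule(ABC):
    """Simplified stand-in mirroring the documented parent behaviour."""
    def __init__(self, connection, lr=None, weight_decay=0.0, **kwargs):
        self.connection = connection
        self.lr = lr
        self.weight_decay = weight_decay

    def update(self, **kwargs) -> None:
        # Parent behaviour described above: apply weight decay, then clamp.
        c = self.connection
        c.w = [w * (1.0 - self.weight_decay) for w in c.w]
        c.w = [min(max(w, c.wmin), c.wmax) for w in c.w]


class HebbianRule(LearningRule):
    """Hypothetical child class: apply the rule-specific change first,
    then delegate decay and clamping to the parent, as the docs instruct."""
    def update(self, dw=0.0, **kwargs) -> None:
        self.connection.w = [w + self.lr * dw for w in self.connection.w]
        super().update(**kwargs)  # decay + clamp happen last
```

Calling the parent update last guarantees that decay and clamping always see the freshly updated weights.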
class cnsproject.learning.learning_rules.NoOp(connection: cnsproject.network.connections.AbstractConnection, lr: Optional[Union[float, Sequence[float]]] = None, weight_decay: float = 0.0, **kwargs)
Bases: cnsproject.learning.learning_rules.LearningRule
Learning rule with no effect.
- Parameters
connection (AbstractConnection) – The connection on which the learning rule is applied.
lr (float or sequence of float, optional) – The learning rate for the training procedure. If a tuple is given, the first value defines the potentiation learning rate and the second one the depression learning rate. The default is None.
weight_decay (float) – The rate of decay in synaptic strength. The default is 0.0.
update(**kwargs) → None
Only takes care of synaptic decay and keeping the synaptic weights within their allowed range.
- Returns
- Return type
None
class cnsproject.learning.learning_rules.STDP(connection: cnsproject.network.connections.AbstractConnection, lr: Optional[Union[float, Sequence[float]]] = None, weight_decay: float = 0.0, **kwargs)
Bases: cnsproject.learning.learning_rules.LearningRule
Spike-Time Dependent Plasticity learning rule.
Implement the dynamics of the STDP learning rule. You might need to implement different update rules based on the type of connection.
update(**kwargs) → None
TODO.
Implement the dynamics and updating rule. You might need to call the parent method.
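As one possible reading of this TODO, a pair-based STDP step with exponentially decaying spike traces might look as follows. The function name, trace variables, and constants are hypothetical and independent of the package API:

```python
import math


def stdp_step(w, pre_spike, post_spike, x_pre, x_post,
              a_plus=0.01, a_minus=0.012, tau=20.0, dt=1.0):
    """One simulation step of pair-based STDP on a single synapse."""
    # Exponentially decaying traces of recent pre/post activity.
    x_pre = x_pre * math.exp(-dt / tau) + pre_spike
    x_post = x_post * math.exp(-dt / tau) + post_spike
    # Pre-before-post potentiates; post-before-pre depresses.
    w += a_plus * x_pre * post_spike - a_minus * x_post * pre_spike
    return w, x_pre, x_post
```

With a tuple learning rate, a_plus and a_minus would come from lr[0] and lr[1], and the parent update would then apply weight decay and clamping.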
class cnsproject.learning.learning_rules.FlatSTDP(connection: cnsproject.network.connections.AbstractConnection, lr: Optional[Union[float, Sequence[float]]] = None, weight_decay: float = 0.0, **kwargs)
Bases: cnsproject.learning.learning_rules.LearningRule
Flattened Spike-Time Dependent Plasticity learning rule.
Implement the dynamics of the Flat-STDP learning rule. You might need to implement different update rules based on the type of connection.
update(**kwargs) → None
TODO.
Implement the dynamics and updating rule. You might need to call the parent method.
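A flat variant replaces the exponential traces with a fixed time window in which every spike counts equally. A hypothetical sketch (window length and amplitudes are illustrative assumptions):

```python
from collections import deque


def flat_stdp_step(w, pre_spike, post_spike, pre_hist, post_hist,
                   a_plus=0.01, a_minus=0.012):
    """One step of flat STDP; pre_hist/post_hist are bounded deques
    (maxlen = window size) of recent binary spikes."""
    pre_hist.append(pre_spike)
    post_hist.append(post_spike)
    # A fixed-size change whenever the partner fired inside the window.
    if post_spike and any(pre_hist):
        w += a_plus   # potentiation: a pre spike occurred recently
    if pre_spike and any(post_hist):
        w -= a_minus  # depression: a post spike occurred recently
    return w
```

Note that simultaneous pre/post spikes trigger both branches here; how to break that tie is a modeling choice left open by this sketch.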
class cnsproject.learning.learning_rules.RSTDP(connection: cnsproject.network.connections.AbstractConnection, lr: Optional[Union[float, Sequence[float]]] = None, weight_decay: float = 0.0, **kwargs)
Bases: cnsproject.learning.learning_rules.LearningRule
Reward-modulated Spike-Time Dependent Plasticity learning rule.
Implement the dynamics of the RSTDP learning rule. You might need to implement different update rules based on the type of connection.
update(**kwargs) → None
TODO.
Implement the dynamics and updating rule. You might need to call the parent method. Make sure to use the reward value passed as a keyword argument.
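For the reward-modulated case, a common approach is to route the STDP term into an eligibility trace and let the reward (dopamine) passed via kwargs gate the actual weight change. A hypothetical scalar sketch; all names and constants are assumptions:

```python
import math


def rstdp_step(w, pre_spike, post_spike, x_pre, x_post, elig, reward,
               a_plus=0.01, a_minus=0.012, tau=20.0, tau_e=40.0,
               lr=0.1, dt=1.0):
    """One step of reward-modulated STDP on a single synapse."""
    x_pre = x_pre * math.exp(-dt / tau) + pre_spike
    x_post = x_post * math.exp(-dt / tau) + post_spike
    # The STDP term feeds an eligibility trace instead of the weight.
    stdp = a_plus * x_pre * post_spike - a_minus * x_post * pre_spike
    elig = elig * math.exp(-dt / tau_e) + stdp
    # Reward (dopamine) gates how much eligibility becomes a weight change.
    w += lr * reward * elig
    return w, x_pre, x_post, elig
```

Because the eligibility trace decays slowly, a reward arriving a few steps after a pre-post pairing can still reinforce the synapses that caused it.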
class cnsproject.learning.learning_rules.FlatRSTDP(connection: cnsproject.network.connections.AbstractConnection, lr: Optional[Union[float, Sequence[float]]] = None, weight_decay: float = 0.0, **kwargs)
Bases: cnsproject.learning.learning_rules.LearningRule
Flattened Reward-modulated Spike-Time Dependent Plasticity learning rule.
Implement the dynamics of the Flat-RSTDP learning rule. You might need to implement different update rules based on the type of connection.
update(**kwargs) → None
TODO.
Implement the dynamics and updating rule. You might need to call the parent method. Make sure to use the reward value passed as a keyword argument.
cnsproject.learning.rewards module¶
Module for reward dynamics.
TODO.
Define your reward functions here.
class cnsproject.learning.rewards.AbstractReward
Bases: abc.ABC
Abstract class to define reward function.
Make sure to implement the abstract methods in your child class.
To implement your dopamine functionality, write a class inheriting from this abstract class. You can add attributes to your child class. The dynamics of the dopamine function (DA) are implemented in the compute method, so you will call compute in your reward-modulated learning rules to retrieve the dopamine value at the desired time step. To reset or update the attributes defined in your reward function, use the update method, and remember to call it at the right place in your learning rule computations.
abstract compute(**kwargs) → None
Compute the reward.
- Returns
It should return the computed reward value.
- Return type
None
abstract update(**kwargs) → None
Update the internal variables.
- Returns
- Return type
None
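One way to satisfy this interface is a dopamine signal that relaxes toward a baseline and is bumped by an external reward. Everything below (the class name, the baseline, the time constant, and the stand-in base class) is a hypothetical illustration, not the package's implementation:

```python
from abc import ABC, abstractmethod


class AbstractReward(ABC):
    """Minimal stand-in for the documented interface."""
    @abstractmethod
    def compute(self, **kwargs):
        ...

    @abstractmethod
    def update(self, **kwargs) -> None:
        ...


class SimpleDopamine(AbstractReward):
    """Hypothetical DA dynamics: decay toward a baseline, bumped by reward."""
    def __init__(self, baseline=0.0, tau=10.0):
        self.baseline = baseline
        self.tau = tau
        self.da = baseline

    def compute(self, **kwargs):
        # Called from a reward-modulated rule to read DA at this time step.
        return self.da

    def update(self, reward=0.0, dt=1.0, **kwargs) -> None:
        # Exponential relaxation toward the baseline, plus incoming reward.
        self.da += (self.baseline - self.da) * (dt / self.tau) + reward
```

A reward-modulated rule would call update once per step (with the current reward as a keyword argument) and read the gating value through compute.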
-
abstract