Releases: zju-bmi-lab/SPAIC
v0.6.2
Release Note:
- Support multi-card parallel computing
- Update the 'Network_load' and 'Network_save' modules to support saving and loading more complex networks
- Fix a bug where 'Monitor' recording times drifted out of sync with the simulation time in long-time-step simulations
- Add many new neuron models and algorithms to support brain-simulation applications
- Fix an issue where connections were limited to within an 'Assembly'
- STDP-class learning algorithms now support updating after simulation via optimizer.step
- 'Delay' now supports backward gradient propagation
- Add a 'forward_build' mode, which builds the network in forward order to avoid delays. In the original build mode, all connections were built first to resolve loop dependencies, so each connection carried a one-step delay.
- Support customizing which model parameters are trainable
- Use absolute paths for all module imports
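The deferred STDP update mentioned above can be sketched as follows. This is a minimal, self-contained illustration of the pattern, not SPAIC's actual API: an STDP-like rule accumulates weight changes into a buffer during the simulation loop and applies them only when `step()` is called, mirroring the `optimizer.step` usage. All class and parameter names here are hypothetical.

```python
class ToySTDP:
    """Pair-based STDP sketch that buffers updates until step() is called."""

    def __init__(self, n_pre, n_post, a_plus=0.01, a_minus=0.012):
        self.w = [[0.5] * n_post for _ in range(n_pre)]   # synaptic weights
        self.a_plus, self.a_minus = a_plus, a_minus
        self.pre_trace = [0.0] * n_pre                    # eligibility traces
        self.post_trace = [0.0] * n_post
        self.dw = [[0.0] * n_post for _ in range(n_pre)]  # buffered updates

    def record(self, pre_spikes, post_spikes, decay=0.9):
        # During simulation only traces and the dw buffer change; weights stay fixed.
        self.pre_trace = [decay * t + s for t, s in zip(self.pre_trace, pre_spikes)]
        self.post_trace = [decay * t + s for t, s in zip(self.post_trace, post_spikes)]
        for i, ps in enumerate(pre_spikes):
            for j, qs in enumerate(post_spikes):
                # Potentiate on a post spike (via pre trace), depress on a pre spike.
                self.dw[i][j] += self.a_plus * self.pre_trace[i] * qs
                self.dw[i][j] -= self.a_minus * self.post_trace[j] * ps

    def step(self):
        # Apply the buffered updates after the simulation, then clear the buffer.
        for i in range(len(self.w)):
            for j in range(len(self.w[0])):
                self.w[i][j] = min(1.0, max(0.0, self.w[i][j] + self.dw[i][j]))
                self.dw[i][j] = 0.0

stdp = ToySTDP(n_pre=2, n_post=1)
spikes = [([1, 0], [0]), ([0, 0], [1]), ([0, 1], [0])]  # neuron 0 fires before post
for pre, post in spikes:
    stdp.record(pre, post)     # simulation loop: weights untouched
w_before = stdp.w[0][0]
stdp.step()                    # deferred update, as with optimizer.step
```

After `step()`, the causally paired synapse (pre neuron 0) is potentiated and the anti-causal one (pre neuron 1) is depressed.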
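The one-step delay that 'forward_build' avoids can be seen in a toy model (illustrative only, not SPAIC code). With two chained connections A -> B -> C, a build mode in which every connection reads its source's value from the previous step needs one extra step per hop, whereas evaluating connections in forward (topological) order lets the signal traverse the whole chain within a single step.

```python
def run_buffered(steps):
    """Each connection reads its input from the previous step's buffer."""
    a, b, c = 1.0, 0.0, 0.0            # initial activations of A, B, C
    history = []
    for _ in range(steps):
        new_b = a                      # sees a's value from the previous step
        new_c = b                      # sees b's value from the previous step
        b, c = new_b, new_c            # all updates committed together
        history.append(c)
    return history

def run_forward(steps):
    """Connections are evaluated in forward order within one step."""
    a, b, c = 1.0, 0.0, 0.0
    history = []
    for _ in range(steps):
        b = a                          # evaluated first,
        c = b                          # so the value reaches c immediately
        history.append(c)
    return history

print(run_buffered(2))  # [0.0, 1.0]: the signal needs one step per connection
print(run_forward(2))   # [1.0, 1.0]: the output responds in the same step
```

The buffered scheme is what makes arbitrary loops safe to build, at the cost of one step of latency per connection; forward build removes that latency for feed-forward paths.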
v0.6.0
Release Note:
- To provide more concise code for building the network, we changed some parameter names used to initialize NeuronGroup and Connection. For NeuronGroup initialization: neuron_number -> num, neuron_model -> model. For Connection initialization: pre_assembly -> pre, post_assembly -> post.
- We have added frontend interfaces that can directly get backend values of certain network components using the get_values function, such as V = neuron1.get_values('V').
- Added Conv-related operations such as max_pooling, batchNorm2d, and Flatten as Synapse modules, which can be added to conv connections. We have also added a Pool_connection that solely performs the pooling operation.
- We have added Meta_STDP algorithms that can train the network with gradient backpropagation and STDP learning rules concurrently.
- PoissonEncoders now generate Poisson spikes on the fly rather than at the beginning of the run, to save memory.
- We now use a new Op class to contain backend operations, and added more attributes to backend Ops, such as owner, device and requires_grad.
- Fixed some bugs.
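The on-the-fly Poisson encoding change can be sketched as below. This is an illustrative pure-Python version, not SPAIC's implementation: instead of materializing the full (time x neurons) spike array at the start of the run, each call draws one timestep's spikes, so memory stays proportional to the number of neurons regardless of run length.

```python
import random

def poisson_step(rates, dt, rng):
    # Bernoulli approximation of a Poisson process: for small rate*dt,
    # each neuron spikes this step with probability rate * dt.
    return [1 if rng.random() < r * dt else 0 for r in rates]

rng = random.Random(0)
rates = [20.0, 200.0]          # target firing rates in Hz
dt = 0.001                     # 1 ms timestep
counts = [0, 0]
for _ in range(10000):         # 10 s of simulation, one timestep at a time
    spikes = poisson_step(rates, dt, rng)
    counts = [c + s for c, s in zip(counts, spikes)]
# counts[0] / 10 should be near 20 Hz and counts[1] / 10 near 200 Hz
```

Generating per step also lets the encoder react to inputs that change during the run, which a precomputed spike train cannot.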