Abstract
Spiking neural networks (SNNs) are a class of artificial neural networks (ANNs) that attempt
to model the processes inside biological neural networks, such as the (human) brain, more
accurately. They are a generalization of “conventional” or “deep” ANNs. Conventional
neural networks (be they convolutional, recurrent, or other architectures) are typically
layer-based and produce continuous outputs, which can be computed via simple matrix
multiplications interleaved with non-linear activation functions.
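To make this contrast concrete, consider the following toy example of a single such layer in C++; the dimensions, the weights, and the choice of ReLU as the activation function are illustrative assumptions, not taken from any particular network.

```cpp
#include <algorithm>
#include <cstdio>

// Toy "conventional" ANN layer: y = relu(W * x).
// All dimensions and values are made up for illustration.
int main() {
    const float W[2][3] = {{0.5f, -1.0f, 0.25f},
                           {1.5f,  0.0f, -0.5f}};
    const float x[3] = {1.0f, 2.0f, 3.0f};

    for (int i = 0; i < 2; ++i) {
        float y = 0.0f;
        for (int j = 0; j < 3; ++j)
            y += W[i][j] * x[j];       // matrix multiplication
        y = std::max(0.0f, y);         // non-linear activation (ReLU)
        std::printf("y[%d] = %f\n", i, y);
    }
}
```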
In contrast, SNNs can resemble arbitrary directed graphs. They consist of neurons,
corresponding to the graph’s vertices, which are connected via synapses, corresponding to
the graph’s edges. Both neurons and synapses have their own state, consisting of arbitrary
attributes, which can be governed by arbitrary dynamics. In addition, neurons can fire or
“spike”, in which case a signal/message must be transmitted to their neighbors via their
outgoing synapses. An SNN’s output is its firing pattern.
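The following sketch illustrates this structure with a minimal, self-contained simulation loop in C++. The leaky integrate-and-fire dynamics, the attribute names, and all constants are illustrative assumptions; they do not reflect Spice’s actual data structures or API.

```cpp
#include <cstdio>
#include <vector>

// Hypothetical, minimal SNN. Each neuron carries its own state (here: a
// leaky integrate-and-fire membrane potential); each synapse carries its
// own state (here: a weight) and is a directed edge of the graph.
struct Neuron  { float v = 0.0f; };              // membrane potential
struct Synapse { int src, dst; float weight; };  // directed edge

int main() {
    std::vector<Neuron>  neurons(4);
    std::vector<Synapse> synapses = {{0, 1, 0.6f}, {0, 2, 0.9f}, {2, 3, 1.2f}};

    const float threshold = 1.0f, leak = 0.9f, input = 0.5f;

    for (int step = 0; step < 10; ++step) {
        std::vector<bool> fired(neurons.size(), false);

        // Neuron dynamics: leak the potential, inject external input into neuron 0.
        for (size_t i = 0; i < neurons.size(); ++i) {
            neurons[i].v *= leak;
            if (i == 0) neurons[i].v += input;
            if (neurons[i].v >= threshold) {     // spike!
                fired[i] = true;
                neurons[i].v = 0.0f;             // reset
                std::printf("t=%d: neuron %zu fired\n", step, i);
            }
        }
        // Spike delivery: spiking neurons signal their neighbors
        // via their outgoing synapses.
        for (const Synapse& s : synapses)
            if (fired[s.src]) neurons[s.dst].v += s.weight;
    }
}
```

The printed sequence of spikes is the network’s output: its firing pattern.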
We can immediately see how this behavior resembles the electro-chemical processes
happening in the brain. Several advantages can be derived from this similarity. Maass proved
in 1996 that SNNs are fundamentally more powerful computationally than conventional
ANNs. In practice, SNNs still lag behind ANNs, but research around them remains active and
promising. The gap is constantly shrinking, so SNNs may one day live up to their theoretical
promise. One aspect in which SNNs have already overtaken ANNs is power-efficiency,
especially in combination with neuromorphic hardware. In fact, they are so efficient that
converting ANNs into SNNs has become a field of research in its own right.

SNNs’ novelty also bears disadvantages. Many solved problems, such as the efficient
inference and training of ANNs, have to be rethought for SNNs due to their drastically
different nature. Inference requires full-blown simulation. While training via meta-algorithms
such as Backpropagation (BP) is possible (in fact, several attempts to adapt BP to SNNs have
been made), SNNs lend themselves to a different kind of training: neuroplasticity. As the
SNN is being simulated, it constantly self-adapts, producing ever-improving outputs. Training
becomes an inherent part of the model, and the simulator becomes responsible for driving it.
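As an example of such a plasticity rule, the sketch below implements a pair-based variant of spike-timing-dependent plasticity (STDP), a well-known neuroplasticity rule. STDP is named here only for illustration; the trace formulation and all constants are assumptions, and this is not Spice’s plasticity algorithm.

```cpp
#include <cstdio>

// Illustrative pair-based STDP: each neuron keeps an exponentially decaying
// "trace" of its recent spikes. A synapse is potentiated when the
// postsynaptic neuron fires shortly after the presynaptic one, and
// depressed in the opposite case. All constants are made up.
struct StdpSynapse {
    float weight;
    void update(bool pre_spiked, bool post_spiked,
                float& pre_trace, float& post_trace) {
        const float decay = 0.9f, a_plus = 0.01f, a_minus = 0.012f;
        pre_trace  *= decay;
        post_trace *= decay;
        if (pre_spiked)  pre_trace  += 1.0f;
        if (post_spiked) post_trace += 1.0f;
        if (post_spiked) weight += a_plus  * pre_trace;   // pre-before-post: potentiate
        if (pre_spiked)  weight -= a_minus * post_trace;  // post-before-pre: depress
    }
};

int main() {
    StdpSynapse s{0.5f};
    float pre_trace = 0.0f, post_trace = 0.0f;
    // Presynaptic spike at t=0, postsynaptic spike at t=2: weight increases.
    bool pre[]  = {true,  false, false, false};
    bool post[] = {false, false, true,  false};
    for (int t = 0; t < 4; ++t) {
        s.update(pre[t], post[t], pre_trace, post_trace);
        std::printf("t=%d weight=%f\n", t, s.weight);
    }
}
```

Because such rules run as part of every simulation step, the simulator itself drives learning, which is exactly why simulation performance matters so much.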
This is why we have dedicated this research to simulation, which we see as an even more
fundamental issue than training: a fast, resource-efficient, and user-friendly simulator not
only speeds up existing simulations; it also accelerates network design (prototyping,
parameter tuning, etc.) and research into other algorithms, including training, advancing the
field as a whole. To this end, we present Spice (/spaik/), a state-of-the-art SNN simulator.
Spice is superior to existing simulators in terms of performance (speed, setup time, memory
consumption) and ease of use. It is also the first simulator to scale linearly to eight GPUs.
This is achieved by novel algorithms for spike delivery and plasticity, a novel parallelization
scheme, and a unique, modern API. We explore these algorithms and witness their evolution
over several optimization levels, from naïve “first” implementations all the way to
outperforming the state of the art.