Neuromorphic Intelligence: Brain-inspired Strategies for AI Computing Systems
Summary
TL;DR: In this presentation, Giacomo Indiveri from the University of Zurich discusses pioneering innovations in brain-inspired technology for artificial intelligence. Traditional artificial neural networks, known since the 1980s, are limited by their enormous energy and memory requirements. According to Indiveri, by exploiting neuromorphic technology, which mimics the energy-saving mechanisms of biological brains, we can achieve a substantial improvement. Neuromorphic approaches integrate hardware and computation more tightly, which could enable low-power intelligent devices suited to a wide range of industrial and consumer applications. This can help reduce the energy footprint of artificial intelligence considerably. Furthermore, continued research in neuromorphic and analog computing can open new doors to robust and flexible AI solutions that work effectively under real-world conditions.
Key takeaways
- 🧠 Neuromorphic technology mimics the brain to save energy in artificial intelligence.
- ⚙️ Brain-inspired strategies use parallel circuits to co-locate memory and computation.
- 🔋 Artificial neural networks require a lot of energy because of their data demands.
- 💽 Analog technology can reduce the cost of data transfer in AI systems.
- 📉 The energy problem in AI lies not in the computation itself but in moving data.
- 🌐 Neuromorphic research spans design across hardware and algorithmic structures.
- 🔄 Variability in circuits can be used for robust computation through averaging techniques.
- 🌍 The energy-saving potential of neuromorphic AI can lower global technology energy consumption.
- 🎓 Research in Zurich demonstrates practical applications of brain-inspired computing.
- 🚀 Neuromorphic intelligence can lead to more efficient and more powerful systems.
Timeline
- 00:00:00 - 00:05:00
The talk is about brain-inspired strategies for low-power artificial intelligence. AI has become popular, but its roots go back to the 1980s. Its success is due to improved hardware and larger datasets, yet AI is energy-hungry, and the future energy consumption of AI systems may become unsustainable.
- 00:05:00 - 00:10:00
Neuromorphic computing is a solution that combines new materials and theories. The field, founded by Carver Mead, uses CMOS circuits to emulate brain functions. It aims to improve both the understanding and the efficiency of computing through biology-inspired methods.
- 00:10:00 - 00:15:00
The main differences between neural networks in biological brains and artificial neural networks lie in how they process information. Biological systems integrate time and physics in a way that merges hardware and software, which artificial systems do not.
- 00:15:00 - 00:20:00
Strategies for building energy-efficient systems include the use of parallel analog circuits that exploit the laws of physics. The goal is to reduce energy consumption by imitating how the brain integrates memory and computation.
- 00:20:00 - 00:27:40
The project at the University of Zurich develops neural networks based on analog circuits, including robust computation despite noise and faults in the hardware. The goal is to combine analog and digital systems to solve practical problems efficiently.
Frequently asked questions
Are artificial neural networks as versatile as human brains?
No, they are still very specialized compared to animal and human brains.
How can brain-inspired strategies save energy in AI?
Efficiency improves by designing parallel circuits that carry out computation and memory tasks in the same place, as brains do.
How do analog circuits contribute to energy savings?
By computing nonlinearities directly in the analog domain and reducing the need for separate digital conversions.
What challenges do artificial neural networks face?
They are limited by energy consumption, data and memory requirements, and specialization to particular tasks.
How is the variability of analog devices handled in computation?
By averaging across populations of variable devices and over time, robust computation can be achieved, much as the brain extracts reliable patterns from noisy neural activity.
What is the purpose of neuromorphic research?
Neuromorphic research often focuses on emulating how the brain works in order to improve hardware architectures.
- 00:00:04hello
- 00:00:05this is giacomo indiveri from the
- 00:00:07university of zurich and eth zurich at
- 00:00:09the institute of neuro-informatics it's
- 00:00:11going to be a pleasure for me to give
- 00:00:12you a talk about brain inspired
- 00:00:14strategies for low-power artificial
- 00:00:16intelligence computing systems
- 00:00:19so the term artificial intelligence
- 00:00:21actually has become very very popular in
- 00:00:24recent times
- 00:00:25in fact artificial intelligence
- 00:00:27algorithms and networks go back to the
- 00:00:29late 80s
- 00:00:30and although the first successes of
- 00:00:32these networks were demonstrated in the
- 00:00:3480s only recently
- 00:00:36these algorithms and the computing
- 00:00:38systems started to outperform
- 00:00:40conventional
- 00:00:41approaches for solving problems
- 00:00:44in fact in
- 00:00:46the field of machine vision uh from 2009
- 00:00:49on
- 00:00:50in 2011 in fact the first convolutional
- 00:00:53neural network trained using back
- 00:00:55propagation achieved impressive results
- 00:00:58that made the whole field explode
- 00:01:00and the reason for the success of this
- 00:01:03approach
- 00:01:04really even though as i said it was
- 00:01:06started many many years ago only
- 00:01:09recently we started to be able to follow
- 00:01:11this
- 00:01:12success because um and achieve really
- 00:01:15impressive performance
- 00:01:16because the technologies the hardware
- 00:01:18technologies started to provide
- 00:01:21enough computing power for these
- 00:01:22networks to actually really perform well
- 00:01:25in addition there's been now
- 00:01:28the availability of large data sets that
- 00:01:30can be used to train such networks which
- 00:01:32also were not there in the 80s
- 00:01:34and finally
- 00:01:36several tricks and hacks and
- 00:01:38improvements in the algorithms have been
- 00:01:39proposed to actually
- 00:01:41make these networks very robust and very
- 00:01:44performant
- 00:01:46however they do have some problems
- 00:01:49most of these algorithms require a large
- 00:01:52amount of resources in terms of memory
- 00:01:55and energy to be trained
- 00:01:58and in fact if we
- 00:02:00do an estimate and we try to see how
- 00:02:02much energy is required by all of the
- 00:02:04computational devices that are in the
- 00:02:06world
- 00:02:06to implement such
- 00:02:09neural networks it is estimated that by
- 00:02:122025 the ict industry will consume about
- 00:02:1420 percent of the entire world's energy
- 00:02:17this is clearly a problem which is not
- 00:02:19sustainable
- 00:02:21the other
- 00:02:22reason for or one of the main reasons
- 00:02:24for these networks to be extremely power
- 00:02:27hungry is because they are
- 00:02:29requiring large amounts of data and
- 00:02:31memory resources and in particular
- 00:02:33they're required to move data from the
- 00:02:36memory to the computing and from the
- 00:02:38computing back to the memory so
- 00:02:40typically memory is used
- 00:02:42in dram chips and these dram accesses are at
- 00:02:45least a thousand five hundred times more
- 00:02:46costly than any compute operation mac
- 00:02:50operations in these cnn accelerators
- 00:02:53so it's it's really not the fact that
- 00:02:55we're doing lots of computation it's
- 00:02:57it's really the fact that we're moving
- 00:02:58bits uh back and forth
- 00:03:01that is is
- 00:03:02burning all of this energy
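To put a number on this, here is a back-of-the-envelope sketch in Python. The per-operation energies and the layer sizes are illustrative assumptions, chosen only so that the DRAM-access-to-MAC cost ratio matches the roughly 1500x figure quoted above; they are not measured values for any particular accelerator.

```python
# Rough energy split for one hypothetical CNN layer: the point is that
# off-chip data movement, not arithmetic, dominates the budget.
E_MAC_PJ = 1.0        # assumed energy per multiply-accumulate (picojoules)
E_DRAM_PJ = 1500.0    # assumed energy per DRAM word access (~1500x a MAC)

macs = 100_000_000          # hypothetical number of MACs in the layer
dram_accesses = 2_000_000   # hypothetical number of off-chip word transfers

compute_pj = macs * E_MAC_PJ
movement_pj = dram_accesses * E_DRAM_PJ
share = movement_pj / (compute_pj + movement_pj)

print(f"compute: {compute_pj / 1e6:.0f} uJ, data movement: {movement_pj / 1e6:.0f} uJ")
print(f"data movement accounts for {share:.0%} of the total energy")
```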
- 00:03:05the other problem is more fundamental
- 00:03:07it's not only related to technology it's
- 00:03:09related to the theory and these
- 00:03:10algorithms these algorithms actually as
- 00:03:13i said are actually are very very uh
- 00:03:15powerful in terms of recognizing images
- 00:03:18and solving but they are very narrow in
- 00:03:21the sense that they are very specialized
- 00:03:23to only a very specific domain
- 00:03:26these networks are programmed to perform
- 00:03:28a limited set of tasks and they operate
- 00:03:30within a predetermined and predefined
- 00:03:32range
- 00:03:33they are not nearly as general purpose
- 00:03:35as uh animal brains are so even though
- 00:03:39we do we do call it artificial
- 00:03:41intelligence it's really different from
- 00:03:43natural intelligence the type of
- 00:03:45intelligence that we see in in animals
- 00:03:47and in humans
- 00:03:49and the backbone of these artificial
- 00:03:51intelligence algorithms is the back
- 00:03:53propagation algorithm or if we're
- 00:03:56looking at time series and sequences the
- 00:03:58back propagation through time bptt
- 00:04:01algorithm
- 00:04:03this this is uh really an algorithmic
- 00:04:06limitation even though it can be used to
- 00:04:08solve very powerful problems
- 00:04:11trying to improve this bptt
- 00:04:14by making incremental changes is
- 00:04:17probably not going to lead to
- 00:04:19breakthroughs in understanding how to go
- 00:04:21from artificial intelligence to natural
- 00:04:24intelligence
- 00:04:25and the way the brain works is actually
- 00:04:27quite different from back propagation
- 00:04:29through time
- 00:04:30if you look at neuroscience and if you
- 00:04:32study real neurons and real synapses
- 00:04:35and
- 00:04:36the sort of computational principles of
- 00:04:38the brain you will realize that it's
- 00:04:40there there's a big difference so
- 00:04:43this problem has been recognized by many
- 00:04:45communities many agencies there is for
- 00:04:48example a recent paper by john shalf
- 00:04:50that shows how to go beyond these
- 00:04:52problems and try to improve performance
- 00:04:55of computation
- 00:04:56and if we look at this particular path
- 00:04:58that basically tries to put together new
- 00:05:00architectures new packaging systems with
- 00:05:02new memory devices and new theories
- 00:05:05one of the most promising approaches is
- 00:05:07the one that is here listed as
- 00:05:08neuromorphic
- 00:05:10so what is this neuromorphic this is the
- 00:05:13sort of the the bulk of this talk that i
- 00:05:16am going to show you what we can do at
- 00:05:18the university of zurich but also at the
- 00:05:21startup that comes out of the university
- 00:05:22of zurich synsense with this type of
- 00:05:25approach which is as i said taking the
- 00:05:27best of the new materials and devices
- 00:05:30new architectures and new theories and
- 00:05:32trying to go really beyond what we have
- 00:05:35today
- 00:05:37so the term neuromorphic was actually
- 00:05:40invented or coined many many years ago
- 00:05:42by carver mead in the late 80s
- 00:05:44and is now being used to describe
- 00:05:47different things there's at least three
- 00:05:49big communities that are using the term
- 00:05:51neuromorphic
- 00:05:52the original one that goes back to
- 00:05:54carver mead was referring to the design
- 00:05:57of cmos electronic circuits that were
- 00:06:00used to emulate the brain
- 00:06:03basically as a basic research attempt to
- 00:06:05try to understand how the brain works by
- 00:06:07building
- 00:06:08circuits that are equivalent so trying
- 00:06:10to really reproduce the physics
- 00:06:12and and because of that these circuits
- 00:06:14were using sub-threshold analog
- 00:06:16transistors
- 00:06:18for the neurodynamics and the
- 00:06:20computation and asynchronous digital
- 00:06:22logic for communicating spikes across
- 00:06:25chips across cores it was really
- 00:06:27fundamental research
- 00:06:29the other big community that now started
- 00:06:31to use the term neuromorphic is the
- 00:06:33community building uh
- 00:06:35practical devices for you know solving
- 00:06:37practical problems
- 00:06:39in that case these these this community
- 00:06:41is is building chips that can implement
- 00:06:44spiking neural network accelerators or
- 00:06:46simulators not emulation but but really
- 00:06:49now at this point it's it's more an
- 00:06:51exploratory approach it's being used to
- 00:06:53try to understand what can be done
- 00:06:55with this approach of using digital
- 00:06:58circuits to simulate spiking neural
- 00:07:00networks
- 00:07:01finally the last community or another
- 00:07:04large community that is started to use
- 00:07:05the term neuromorphic is the one that
- 00:07:07has been developing emerging memory
- 00:07:09technologies looking at nanoscale
- 00:07:11devices to implement long-term
- 00:07:14non-volatile memories
- 00:07:16or if you like memristive devices
- 00:07:19so this community also started using the
- 00:07:21term neuromorphic because these devices
- 00:07:23they can actually store
- 00:07:25a change in the conductance which is
- 00:07:27very similar to the way the real
- 00:07:28synapses work when that they actually
- 00:07:31change their conductance when they
- 00:07:32change their synaptic weight
- 00:07:34and
- 00:07:35this allows them to build in memory
- 00:07:37computing architectures that are also as
- 00:07:40you will see very similar to the way
- 00:07:42real biological neural networks work and
- 00:07:45it can really create high density arrays
- 00:07:47so we can actually by using analog
- 00:07:50circuits the approach of simulating
- 00:07:52spiking neural networks digitally and by
- 00:07:55using in-memory computing technologies
- 00:07:57the hope is that we create a new field
- 00:08:00which i'm calling here neuromorphic
- 00:08:01intelligence that will lead to the
- 00:08:04creation of
- 00:08:05compact intelligent brain inspired
- 00:08:08devices
- 00:08:09and really to understand how to do these
- 00:08:11brain inspired devices it's important to
- 00:08:14look at the brain to go back to carver
- 00:08:16mead's approach and really do fundamental
- 00:08:18research in studying biology and try to
- 00:08:21really get the best out of all all of
- 00:08:23these communities of the devices of the
- 00:08:25sort of the computing principles using
- 00:08:28simulations and and machine learning
- 00:08:30approaches but also of neuroscience and
- 00:08:33studying the brain
- 00:08:34and so here i'd like to just to
- 00:08:36highlight the main differences that are
- 00:08:37there between
- 00:08:39simulated artificial neural networks and
- 00:08:41really
- 00:08:42the biological neural networks those
- 00:08:44that are in the brain
- 00:08:45in simulated artificial neural networks
- 00:08:48as you probably know there is a weighted
- 00:08:50sum of inputs the inputs are all coming
- 00:08:52in a point neuron which is basically
- 00:08:54just doing the sum or the integral of
- 00:08:56the inputs and multiplying all of them
- 00:08:58by a weight so it's it's really
- 00:09:00characterized by a big uh weight
- 00:09:02multiplication or matrix multiplication
- 00:09:04operation and then there is a
- 00:09:06non-linearity either a spiking
- 00:09:08non-linearity if it's a spike neural
- 00:09:09network or a thresholding non-linearity
- 00:09:12if it's an artificial neural network
- 00:09:14in biology the neurons are also
- 00:09:17integrating all of their synaptic inputs
- 00:09:20with different weights so there is this
- 00:09:22analogy of weighted inputs but it's all
- 00:09:24happening through the physics of the
- 00:09:25devices so the the physics is playing an
- 00:09:28important role for computation
- 00:09:31the synapses are not just doing a
- 00:09:32multiplication they're actually
- 00:09:34implementing some temporal operators
- 00:09:37integrating applying non-linearities uh
- 00:09:40dividing summing it's much more
- 00:09:42complicated than just a weighted sum of
- 00:09:44inputs
- 00:09:45in addition the neuron actually has an
- 00:09:47axon and it's sending its output through
- 00:09:49the axon using also basically an all or
- 00:09:52none event a spike
- 00:09:54through time because the longer the axon
- 00:09:57the longer it will take for the for the
- 00:09:58spike to travel and reach the
- 00:10:00destination
- 00:10:02and depending on how thick the axon is
- 00:10:04how much myelination there is it will be
- 00:10:06slower or faster so also here the
- 00:10:08temporal dimension is is really
- 00:10:10important
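As a rough numerical illustration of this contrast (a sketch, not a model of any circuit discussed in the talk), the snippet below compares a static point neuron, i.e. a weighted sum followed by a nonlinearity, with a leaky integrate-and-fire neuron whose membrane time constant makes time part of the computation; all parameter values are assumptions.

```python
import numpy as np

def point_neuron(x, w, threshold=0.0):
    """Artificial point neuron: weighted sum of inputs plus a static nonlinearity."""
    return float(np.dot(w, x) > threshold)

def lif_neuron(input_current, tau=0.05, dt=0.001, v_th=1.0):
    """Leaky integrate-and-fire neuron: the state v evolves in time, and the
    output is an all-or-none spike whose timing carries information."""
    v, spike_times = 0.0, []
    for step, i_in in enumerate(input_current):
        v += dt / tau * (-v + i_in)   # leaky integration of the input current
        if v >= v_th:                 # threshold crossing -> emit a spike
            spike_times.append(step * dt)
            v = 0.0                   # reset the membrane after the spike
    return spike_times

x = np.array([0.2, -0.5, 0.9])
w = np.array([1.0, 0.3, 0.7])
print("point neuron output:", point_neuron(x, w))
print("LIF spike times (s):", lif_neuron([1.5] * 500))   # 0.5 s of constant drive
```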
- 00:10:11in summary if we really want to see the
- 00:10:13big difference is that artificial neural
- 00:10:14networks the one that are being
- 00:10:16simulated on computers and gpus are
- 00:10:18actually algorithms that simulate some
- 00:10:21basic properties of real neurons
- 00:10:24whereas biological neural networks
- 00:10:25really use time dynamics and the physics
- 00:10:28of their computing elements to run the
- 00:10:31algorithm actually in these networks the
- 00:10:34structure of the architecture is the
- 00:10:36algorithm there is no distinction
- 00:10:38between the hardware and the software
- 00:10:40everything is one and understanding how
- 00:10:43to build
- 00:10:44these types of hardware architectures
- 00:10:47wetware or hardware
- 00:10:49using cmos using memristors maybe even
- 00:10:51using alternative you know dna computing
- 00:10:54or other approaches
- 00:10:55will
- 00:10:56hopefully and probably lead to much more
- 00:10:58efficient and powerful computing systems
- 00:11:01compared to the artificial neural
- 00:11:03networks so if we want to understand how
- 00:11:05to do this we really need to do a
- 00:11:07radical paradigm shift in computing
- 00:11:10standard computing architectures are
- 00:11:12basically based on the von neumann
- 00:11:15system where you have a cpu on one side
- 00:11:18and
- 00:11:18memory uh on the other and as i said
- 00:11:22transferring data back and forth from
- 00:11:24the cpu to the memory and back is
- 00:11:27actually what's burning all the power
- 00:11:29doing the computation inside the cpu is
- 00:11:31much much uh more energy efficient and
- 00:11:34less costly than transferring the data
- 00:11:37in brains what's happening is that
- 00:11:39inside the neuron there are synapses
- 00:11:41which store
- 00:11:43the value of the weight so
- 00:11:45memory and computation are co-localized
- 00:11:48there is no transfer of data back and
- 00:11:50forth everything is happening at the
- 00:11:52synapse at the neuron and there is many
- 00:11:54distributed synapses as many distributed
- 00:11:56neurons so the memory and the
- 00:11:58computation are intertwined together in
- 00:12:00a distributed system
- 00:12:02and this is really a big difference so
- 00:12:04if we want to understand how to really
- 00:12:06save power we have to look at how the
- 00:12:08brain does it we have to use these brain
- 00:12:10inspire strategies and the main three
- 00:12:13points that i'd like you to remember is
- 00:12:15that you we have to use basically
- 00:12:17parallel arrays of processing elements
- 00:12:19that have computation and memory
- 00:12:21co-localized and this is radically
- 00:12:23different from time multiplexing a
- 00:12:26circuit here for example if we have one
- 00:12:27cpu two cpus but even 64 cpus to
- 00:12:31simulate
- 00:12:32thousands of neurons we are time
- 00:12:34multiplexing the integration of the
- 00:12:36differential equations in these 64 cpus
- 00:12:40here if we look at how to do it
- 00:12:42following this brain inspired strategies
- 00:12:44if we want to emulate a thousand neurons
- 00:12:46we really have to have a thousand
- 00:12:48different circuits that are laid out in
- 00:12:50the in the layout of the of the chip of
- 00:12:52the wafer and then run these
- 00:12:55through their physics through the
- 00:12:56physics of the circuits analog circuits
- 00:12:59digital circuits but they have to be
- 00:13:01many parallel circuits that operate in
- 00:13:03parallel with the memory and the
- 00:13:05computation co-localized that's really
- 00:13:07the trick to to save power
- 00:13:09the other is if we have analog circuits
- 00:13:11we can use the physics of the circuits
- 00:13:13to carry out the computation that really
- 00:13:14instead of abstracting away some
- 00:13:16differential equations and integrating
- 00:13:18numerically the differential equation we
- 00:13:20really use the physics of the device to
- 00:13:22carry out the computation it's much more
- 00:13:24efficient in terms of power latency time
- 00:13:27and area
- 00:13:28and finally the temporal domain is
- 00:13:31really important the temporal dynamics
- 00:13:32of the system have to be well matched to
- 00:13:34the signals that we want to process so
- 00:13:37if we want to have very low power
- 00:13:38systems and for example we want to
- 00:13:40process speech we have to have elements
- 00:13:42in our computing substrate in our brain
- 00:13:44like computer that have the same time
- 00:13:47constants speech for example phonemes
- 00:13:49have time constants of the order of 50
- 00:13:51milliseconds so we have to slow down
- 00:13:53silicon to have dynamics and time
- 00:13:55constants of the order of 50
- 00:13:57milliseconds so our our chips will be
- 00:14:00firing and going at you know hertz or
- 00:14:03maybe hundreds of hertz but definitely
- 00:14:05not that megahertz or gigahertz like our
- 00:14:07cpus or our gpus are doing
- 00:14:10and and by having parallel arrays of
- 00:14:13very slow elements we can still get very
- 00:14:15fast computation even if we have slow
- 00:14:18elements it doesn't mean that we don't
- 00:14:19have a very fast reactive system
- 00:14:21it's because they're in working in
- 00:14:23parallel and so at some point there will
- 00:14:25always be one or two of these elements
- 00:14:26that are about to fire whenever the
- 00:14:28input arrives and we can have
- 00:14:31microsecond nanosecond reaction times
- 00:14:33even though we have millisecond dynamics
- 00:14:36and this is another key trick to
- 00:14:38remember if we want to understand how to
- 00:14:40do this radical paradigm shift
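A small simulation illustrates why many slow elements in parallel can still react quickly. Assume each unit fires periodically with a slow 50 ms period (echoing the phoneme example) but with a random phase; the wait until some unit in the pool fires shrinks roughly in proportion to the pool size. The model is a toy assumption, not data from the chips described later.

```python
import numpy as np

rng = np.random.default_rng(0)
period = 0.050   # 50 ms dynamics per element, as in the phoneme example

for n in (1, 10, 100, 1000):
    # time until the next firing of each unit, given a random phase when the
    # input arrives; the population reacts as soon as the earliest unit fires
    waits = rng.uniform(0.0, period, size=(10_000, n)).min(axis=1)
    print(f"population of {n:4d} slow units: mean reaction time "
          f"{1e3 * waits.mean():7.3f} ms")
```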
- 00:14:43and at the university of zurich at eth
- 00:14:45at the institute of neuroinformatics
- 00:14:47we've been building these types of
- 00:14:48systems
- 00:14:49for many many years and we are uh now
- 00:14:53also building these systems at our new
- 00:14:55startup synsense
- 00:14:57the type of systems are shown here
- 00:14:59basically we create arrays of neurons
- 00:15:02with analog circuits
- 00:15:04these circuits as i told you are slow
- 00:15:06they have slow temporal non-linear
- 00:15:08dynamics
- 00:15:09as i told you they are massively
- 00:15:11parallel we do massively parallel
- 00:15:13operations all of the circuits work in
- 00:15:15parallel
- 00:15:16the fact that they are analog actually
- 00:15:18brings this
- 00:15:20feature that are that is basically
- 00:15:23device mismatch all the circuits are
- 00:15:25inhomogeneous they are not equal and
- 00:15:28this actually can be used as an
- 00:15:29advantage to carry out robust
- 00:15:31computation it's counter-intuitive but i
- 00:15:33will show you that it's actually an
- 00:15:35advantage to have variability in your
- 00:15:38devices and this actually is also very
- 00:15:40nice for people doing memristive
- 00:15:42devices that are typically very very
- 00:15:44variable
- 00:15:46the other features are that they are
- 00:15:48adaptive all of these circuits that we
- 00:15:50have have negative feedback loops they
- 00:15:51have learning
- 00:15:52adaptation plasticity so the learning
- 00:15:55actually helps in creating robust
- 00:15:58computation through the noisy and
- 00:16:00variable elements
- 00:16:02by construction there are many of these
- 00:16:04in working in parallel even if some of
- 00:16:05these stop working the system is fault
- 00:16:08tolerant you don't have to throw away
- 00:16:09the chip like you would with a standard
- 00:16:11processor if one transistor breaks
- 00:16:14probably performance will degrade
- 00:16:16smoothly but at least the system will be
- 00:16:17fault tolerant and because we use both
- 00:16:20the best of both worlds analog circuits
- 00:16:22for the dynamics and digital circuits
- 00:16:24for the communication we can program the
- 00:16:26routing tables and configure these
- 00:16:28networks so we have flexibility in being
- 00:16:31able to program these dynamical systems
- 00:16:34like you would program a neural network
- 00:16:35on a cpu on a computer
- 00:16:38uh of course it's it's more complex we
- 00:16:40still have to develop all the tools and
- 00:16:42and synsense and other colleagues
- 00:16:44around the world are still busy
- 00:16:46developing the tools to program these
- 00:16:47dynamical systems
- 00:16:49it's not nearly as well developed as you
- 00:16:52know having a java or a c or a python
- 00:16:55piece of code but
- 00:16:57there is very promising work going on
- 00:17:00and now the question always comes why do
- 00:17:02you do it if analog is noisy and
- 00:17:04inhomogeneous why do you go
- 00:17:07through the effort of building these
- 00:17:08analog circuits so let me just try to
- 00:17:11explain that there are several
- 00:17:12advantages if you think of having large
- 00:17:15networks in which you have many elements
- 00:17:17working in parallel for example these
- 00:17:19memristive devices in a crossbar array
- 00:17:22and you want to send data through them
- 00:17:24these memristive devices if you use the
- 00:17:26physics they use analog variables so
- 00:17:31if you just send these variables in an
- 00:17:33asynchronous mode you don't need to use
- 00:17:35a clock so you can avoid using digital
- 00:17:38clock circuitry which is actually
- 00:17:40extremely expensive in terms of area
- 00:17:42requirements in large complex chips and
- 00:17:45extremely power hungry so avoiding
- 00:17:47clocks is is something really really
- 00:17:49useful
- 00:17:51if we don't use digital if we're staying
- 00:17:52analog all the way from the input to the
- 00:17:54output we don't need to convert we don't
- 00:17:56need to convert from digital to analog
- 00:17:58to to run the physics of these
- 00:18:00memristors and we don't need to convert
- 00:18:02back from analog to digital and these
- 00:18:04adcs and these dacs are actually very
- 00:18:06expensive in terms of size and power so
- 00:18:09again if we don't use them we save in
- 00:18:11size and power
- 00:18:13if we use transistors to do for example
- 00:18:15exponentials we don't need to have a
- 00:18:17very complicated uh digital circuitry to
- 00:18:20do that so again we can use a single
- 00:18:22device that through the physics of the
- 00:18:24device can do complex nonlinear
- 00:18:26operation that saves area and power as
- 00:18:28well
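For context, the exponential referred to here is the well-known subthreshold behaviour of a MOS transistor, where the drain current grows exponentially with the gate voltage. The snippet below evaluates that textbook relation with generic assumed parameter values (I0, kappa, UT), not the values of any particular chip.

```python
import numpy as np

I0 = 1e-15      # assumed leakage scale current (A)
KAPPA = 0.7     # assumed subthreshold slope factor
UT = 0.025      # thermal voltage at room temperature (V)

def subthreshold_current(v_gs):
    """Approximate subthreshold drain current of an nFET with grounded source:
    a single device gives an exponential nonlinearity 'for free'."""
    return I0 * np.exp(KAPPA * v_gs / UT)

for v in (0.2, 0.3, 0.4):
    print(f"Vgs = {v:.1f} V -> Id ~ {subthreshold_current(v):.2e} A")
```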
- 00:18:29and finally if we have analog variables
- 00:18:31like variable voltage heights variable
- 00:18:34voltage pulses widths
- 00:18:37and and
- 00:18:38other types of variable currents we can
- 00:18:40control the properties of the
- 00:18:42of the devices that we use these
- 00:18:44memristive devices we can make them
- 00:18:48depending on how strong we we drive them
- 00:18:50we can make them volatile or
- 00:18:51non-volatile we can use their intrinsic
- 00:18:54non-linearities
- 00:18:56depending on how strongly we drive them
- 00:18:58we can even make them switch with a
- 00:18:59probability so we can use their
- 00:19:01intrinsic stochasticity to do stochastic
- 00:19:04gradient descent or to do probabilistic
- 00:19:06graphical networks to do probabilistic
- 00:19:08computation
- 00:19:10and we can also use them in their
- 00:19:12standard way of operation in their
- 00:19:15non-volatile way of operation as
- 00:19:17long-term memory elements so we don't
- 00:19:19need to shift data back and forth from
- 00:19:22peripheral memory back we can just store
- 00:19:24the value of the synapses directly in
- 00:19:27these memristive devices
- 00:19:29so if we use analog for our neurons and
- 00:19:31synapses in cmos
- 00:19:33we can then best benefit from the use of
- 00:19:36future emerging memory technologies uh
- 00:19:39reducing power consumption
- 00:19:41and uh in a very recent in the last
- 00:19:44iscas conference which was just you know
- 00:19:45a few weeks ago we did show with the pcm
- 00:19:49trace algorithm or or the pcm series
- 00:19:51experiments that we can exploit the
- 00:19:54drift of pcm devices which are these
- 00:19:56shown here in a picture from ibm
- 00:19:59to implement eligibility traces which is
- 00:20:02a very useful feature to have for
- 00:20:04reinforcement learning
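For readers unfamiliar with the term: an eligibility trace is a per-synapse memory that decays over time and is refreshed by recent activity, so that a reward arriving later can still credit the synapses that were recently active. The sketch below shows only this abstract update rule, with assumed decay and bump values; it is not a model of the PCM drift experiment itself.

```python
def update_trace(trace, active, decay=0.9, bump=1.0):
    """One time step of an eligibility trace for a single synapse:
    exponential decay, plus a bump whenever the synapse was just active."""
    return decay * trace + (bump if active else 0.0)

trace = 0.0
activity = [1, 0, 0, 1, 0, 0, 0, 0]   # hypothetical recent-activity events
for step, active in enumerate(activity):
    trace = update_trace(trace, active)
    print(f"t={step}: eligibility trace = {trace:.3f}")
```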
- 00:20:06so if if we are interested in building
- 00:20:08reinforcement learning algorithms for
- 00:20:10example for for having behaving robots
- 00:20:12that run with
- 00:20:14brains that are implemented using these
- 00:20:16chips we can actually take advantage of
- 00:20:18the properties of these pcm devices
- 00:20:21that are typically thought of as
- 00:20:23non-idealities we can use them to our
- 00:20:25advantage for computation now
- 00:20:28analog circuits are noisy i told you
- 00:20:30they are variable and
- 00:20:31inhomogeneous for example if you take one
- 00:20:33of our chips you you stimulate the
- 00:20:36neurons with the same current to all the
- 00:20:38neurons to 256 different neurons and you
- 00:20:41see how long it takes for the neuron to
- 00:20:42fire
- 00:20:43not only these neurons are slow but
- 00:20:45they're also variable the time at which
- 00:20:47they fire can greatly change depending
- 00:20:49on which circuit you're using
- 00:20:51and there is this noise which is
- 00:20:53typically a variation of 20 percent
- 00:20:55over the mean so the coefficient of
- 00:20:57variation is about 20 percent
- 00:21:00so the question is how can you do robust
- 00:21:01computation using this noisy
- 00:21:04computational substrate and the obvious
- 00:21:06answer the easiest thing that people do
- 00:21:08when they have noise is to average
- 00:21:11so we can do that we can average over
- 00:21:13space and we can average over time if we
- 00:21:16use populations of neurons not single
- 00:21:18neurons we can just take you know two
- 00:21:20three four six eight neurons and look at
- 00:21:22the average time that it took for them
- 00:21:23to spike or if they're spiking uh
- 00:21:26periodically we can look at the average
- 00:21:28firing rate
- 00:21:29and then if we integrate over long
- 00:21:31periods of time we can average over time
- 00:21:33so these these two strategies are going
- 00:21:35to be useful for uh reducing the effect
- 00:21:38of device mismatch if we do need to have
- 00:21:40precise computation
- 00:21:42and we are doing experiments in this in
- 00:21:44these very days this is actually a very
- 00:21:46recent experiment where we took these
- 00:21:48neurons and we put two of them together
- 00:21:51four of them together eight sixteen if
- 00:21:53you look at the cluster size basically
- 00:21:54that's the number of neurons that we are
- 00:21:56using for the average over space
- 00:21:59and then we are computing the um
- 00:22:02firing rate over two milliseconds five
- 00:22:03milliseconds 50 milliseconds 100 and so
- 00:22:06on and then what we do is we calculate
- 00:22:09the coefficient of variation basically
- 00:22:11how much device mismatch there is the
- 00:22:13larger the coefficient of variation the
- 00:22:15more noise the smaller the coefficient
- 00:22:17of variation the less noise so we can go
- 00:22:19from a very large coefficient of
- 00:22:21variation of say 12
- 00:22:23actually 18 as i said 20
- 00:22:26by integrating over long periods of time
- 00:22:29or by integrating over large numbers of
- 00:22:31neurons we can decrease this all the way
- 00:22:32to 0.9
- 00:22:34and you can take this coefficient of
- 00:22:35variation and you can calculate the
- 00:22:37equivalent number of bits if we were
- 00:22:39using digital circuits how many would
- 00:22:42bits would this correspond to so for by
- 00:22:45just integrating over larger number of
- 00:22:47neurons and over longer periods we can
- 00:22:49have for example a sweet spot where we
- 00:22:51have eight bit resolution just by using
- 00:22:5416 neurons
- 00:22:58and integrating for example over 50
- 00:22:59milliseconds
- 00:23:01this can be changed at runtime if we if
- 00:23:03we want to have a very fast reaction
- 00:23:05time and a coarse idea of what the
- 00:23:07result is we can have only two neurons
- 00:23:10and integrate only over two milliseconds
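The space/time averaging trade-off can be sketched numerically as below. The 20 percent mismatch figure comes from the talk, but the noise model, the assumed temporal jitter, and the rule of thumb for converting a coefficient of variation into equivalent bits (bits ~ log2(1/CV)) are illustrative assumptions, so the downward trend rather than the exact numbers is the point.

```python
import numpy as np

rng = np.random.default_rng(1)
MISMATCH_CV = 0.20       # device-to-device spread quoted in the talk
TEMPORAL_CV = 0.20       # assumed trial-to-trial noise of one short reading
READINGS_PER_MS = 0.5    # assumed independent readings per millisecond

def effective_cv(cluster_size, window_ms, trials=20_000):
    """CV of an estimate averaged over a neuron cluster and a time window."""
    n_readings = max(1, int(window_ms * READINGS_PER_MS))
    mismatch = MISMATCH_CV * rng.standard_normal((trials, cluster_size, 1))
    jitter = TEMPORAL_CV * rng.standard_normal((trials, cluster_size, n_readings))
    samples = 1.0 + mismatch + jitter           # normalized noisy readings
    estimate = samples.mean(axis=(1, 2))        # average over space and time
    return estimate.std() / estimate.mean()

for cluster in (1, 2, 4, 16):
    for window_ms in (2, 50):
        cv = effective_cv(cluster, window_ms)
        bits = max(float(np.log2(1.0 / cv)), 0.0)
        print(f"cluster={cluster:2d}, window={window_ms:3d} ms: "
              f"CV={cv:5.3f} (~{bits:.1f} equivalent bits)")
```

Larger clusters and longer windows push the CV down and the equivalent resolution up, which is the trade-off the speaker describes adjusting at runtime.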
- 00:23:13then there's many false myths when we we
- 00:23:15use spike neural networks people tell us
- 00:23:17oh but if you have to wait until you
- 00:23:19integrate enough it's going to be slow
- 00:23:22if you have to average over time it's
- 00:23:23going to take area all of these are
- 00:23:26actually false myths that can be
- 00:23:27debunked by looking at neuroscience
- 00:23:29neuroscience has been studying how the
- 00:23:31brain works the brain is extremely fast
- 00:23:34it's extremely low power we don't have
- 00:23:36to wait long periods of time to make a
- 00:23:37decision
- 00:23:39so if you use populations of neurons to
- 00:23:41average out it's been shown for example
- 00:23:43experimentally with real learners
- 00:23:45that populations of neurons have
- 00:23:47reaction times that can be even 50 times
- 00:23:49faster than single neurons
- 00:23:51so by using populations we can really
- 00:23:53speed up the computation
- 00:23:55if we use the populations of neurons we
- 00:23:57don't need them to be every neuron to be
- 00:23:59very highly accurate they can be noisy
- 00:24:01and they can be very low precision but
- 00:24:04by
- 00:24:05using populations and using sparse
- 00:24:08coding we can have very accurate
- 00:24:09representation of data and there is a
- 00:24:12lot of work for example by sophie
- 00:24:14denève who has been showing how to do
- 00:24:15this by by training populations of
- 00:24:18neurons to do that
- 00:24:19and now it should be also known in
- 00:24:21technology that if you have variability
- 00:24:24it actually helps in transferring
- 00:24:26information across multiple layers
- 00:24:28and here what i'm showing here is data
- 00:24:30from one of our chips where we use 16 of
- 00:24:32these neurons per core and we basically
- 00:24:35provide some in desired target as the
- 00:24:38input we drive a motor with a pid
- 00:24:41controller and we minimize the error
- 00:24:44it's just to show you that by using
- 00:24:45spikes we can actually have very fast
- 00:24:48reaction times in robotic platforms
- 00:24:50using these types of uh chips that you
- 00:24:52saw in the previous slides
- 00:24:55in fact we've been building many chips
- 00:24:56for many years also at synsense the
- 00:24:59colleagues are building chips and the
- 00:25:00latest one that we have built at the
- 00:25:02university as a academic prototype
- 00:25:06is um
- 00:25:08is called the dynamic neuromorphic
- 00:25:09asynchronous processor it was built
- 00:25:11using a very old
- 00:25:13technology 180 nanometer as i said
- 00:25:15because it's an academic exercise but
- 00:25:18still this has a thousand neurons it has
- 00:25:21four cores of 256 neurons each we can
- 00:25:24actually do very interesting edge
- 00:25:25computing applications just with a few
- 00:25:28hundred neurons and then of course then
- 00:25:30the idea is to use both the best of both
- 00:25:32worlds to see where we can do
- 00:25:34analog circuits to really have low power
- 00:25:37or digital circuits to have you know to
- 00:25:39verify principles and make you know
- 00:25:42practical problems solve practical
- 00:25:44problems quickly and then by combining
- 00:25:46analog and digital we can also there get
- 00:25:49the best of both worlds
- 00:25:50so to to conclude actually i would just
- 00:25:53like to show you examples of
- 00:25:54applications that we built
- 00:25:57using this
- 00:25:59this is
- 00:26:00a long list of applications but if you
- 00:26:02have
- 00:26:03the slides you can actually click on the
- 00:26:07on the references and you can get the
- 00:26:08paper the ones highlighted in red are
- 00:26:11the ones that have been done by synsense
- 00:26:13by our colleagues from synsense on
- 00:26:15ecg anomaly detection and the detection
- 00:26:18of vibration anomalies was actually done
- 00:26:19by synsense and by university of zurich
- 00:26:22in parallel independently and just go to
- 00:26:24the last slide where i basically tell
- 00:26:26you that we are now at a point where we
- 00:26:29can actually use all of the knowledge
- 00:26:31from the university about brain inspired
- 00:26:34strategies
- 00:26:35to develop this neuromorphic
- 00:26:36intelligence field transfer all of this
- 00:26:39know-how into
- 00:26:40technology and in the in with new
- 00:26:43startups that can actually use this
- 00:26:45know-how to solve practical problems
- 00:26:47and try to find you know what is the
- 00:26:48best market for this
- 00:26:50as i said industrial monitoring for
- 00:26:52example for vibrations is something but
- 00:26:54something that can also be done using
- 00:26:56both sensors and processors is is really
- 00:26:59what the synsense company has been
- 00:27:01developing and it's it's been really
- 00:27:03successful at for doing intelligent
- 00:27:05machine vision for putting both uh
- 00:27:07sensing and
- 00:27:09artificial intelligence algorithm on the
- 00:27:11same chip
- 00:27:12and having a very low power in the order
- 00:27:14of microwatts tens or hundreds of
- 00:27:16microwatt power dissipation for solving
- 00:27:19practical problems that can be useful in
- 00:27:21society and in fact even solving
- 00:27:23consumer applications
- 00:27:25so with this um sorry i went a bit over
- 00:27:27time but i would just like to thank you
- 00:27:29for your attention
- AI
- brain-inspired
- neuromorphic
- low-power
- computing
- analog circuits
- energy savings
- neural networks
- memory
- specialization