Mean field theory comes from physics, where it explains phenomena like the Ising model; in neuroscience it lets us approximate the collective behavior of large, densely connected neural networks by replacing the individual synaptic inputs with a single average (mean-field) input.

It is essential for reducing complexity, and it allows network-scale extensions of individual dynamical-system neuron models like the Hodgkin-Huxley model.


Collapsing the Hodgkin-Huxley Model Down to a Single Plane

Mean field theory lets us collapse the many synaptic input terms into a single effective field.

We don't track exactly which neuron drives which; instead we assume that each neuron feels the collective average field of the rest of the population.

If the global population firing rate is $\nu(t)$, then the input current to each neuron becomes a Gaussian random variable, thanks to the Central Limit Theorem, with mean $\mu$ proportional to network activity and standard deviation $\sigma$ proportional to network fluctuations/noise.
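A quick numerical sketch of this CLT argument, with made-up parameters (N presynaptic neurons, firing probability p per time bin, synaptic weight w are all illustrative assumptions): the summed synaptic input concentrates around a Gaussian whose mean and spread match the theory.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters: N presynaptic neurons, each firing with
# probability p in a time bin, each spike contributing weight w.
N, p, w = 10_000, 0.02, 0.1

# Total synaptic input per trial: a sum of N independent Bernoulli terms.
trials = rng.binomial(1, p, size=(5000, N)).sum(axis=1) * w

# CLT prediction: mean = N*p*w, std = sqrt(N*p*(1-p))*w
mu_theory = N * p * w                        # proportional to network activity
sigma_theory = np.sqrt(N * p * (1 - p)) * w  # proportional to fluctuations

print(trials.mean(), mu_theory)    # empirical mean ≈ 20.0
print(trials.std(), sigma_theory)  # empirical std  ≈ 1.4
```

The histogram of `trials` is visually indistinguishable from a Gaussian, which is what licenses replacing the detailed spike bookkeeping with just $(\mu, \sigma)$.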

This reduces the $N$ coupled equations to a single self-consistent equation, usually a Fokker-Planck equation for the probability density $P(V, t)$ of membrane potentials:

$$\tau \frac{\partial P}{\partial t} = \frac{\partial}{\partial V}\big[(V - \mu)P\big] + \frac{\sigma^2}{2}\frac{\partial^2 P}{\partial V^2}$$
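A minimal sketch of the diffusion approximation behind this Fokker-Planck picture, with illustrative values for $\mu$, $\sigma$, and $\tau$: simulate many independent membrane potentials driven by the same Gaussian mean-field input, $dV = (\mu - V)/\tau\, dt + \sigma\, dW$, and check that the population settles into the stationary density predicted by the Fokker-Planck equation (a Gaussian with mean $\mu$ and variance $\sigma^2 \tau / 2$).

```python
import numpy as np

rng = np.random.default_rng(1)

# Mean-field (diffusion) approximation: every membrane potential obeys
# dV = (mu - V)/tau dt + sigma dW, with mu, sigma set by population activity.
mu, sigma, tau, dt = -55.0, 2.0, 1.0, 0.01
steps, n_neurons = 20_000, 1000

V = np.full(n_neurons, mu)
for _ in range(steps):
    noise = rng.normal(0.0, np.sqrt(dt), n_neurons)  # Euler-Maruyama step
    V += (mu - V) / tau * dt + sigma * noise

# Stationary Fokker-Planck solution: Gaussian N(mu, sigma^2 * tau / 2)
print(V.mean())  # ≈ -55
print(V.var())   # ≈ 2.0
```

No spiking or reset is included here; this only illustrates the sub-threshold density that the Fokker-Planck equation tracks.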


Connections with Ising Model

The Ising model acts as a skeleton for attractor dynamics: stable spin configurations play the role of stored memory states, as in the Hopfield network.

To map from continuous dynamical systems to the Ising model, we usually coarse-grain time and state.

For discretization, we ignore the sub-threshold trajectory of the membrane potential $V$ and focus on the binary output: spiking or silent. We then map each neuron's state to an Ising spin $s_i \in \{-1, +1\}$ (say, $+1$ for spiking, $-1$ for silent).

The probability of the network being in a given spin configuration $\mathbf{s}$ is given by the Boltzmann distribution:

$$P(\mathbf{s}) = \frac{1}{Z} e^{-\beta H(\mathbf{s})}$$

where $Z$ is the partition function and $\beta$ the inverse temperature,

and our Hamiltonian (energy function) is:

$$H(\mathbf{s}) = -\frac{1}{2} \sum_{i \neq j} J_{ij} s_i s_j - \sum_i h_i s_i$$

where $J_{ij}$ are the pairwise (synaptic) couplings and $h_i$ are external fields.
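A small sketch of sampling from this Boltzmann distribution, assuming the standard pairwise Hamiltonian $H(\mathbf{s}) = -\tfrac{1}{2}\sum_{ij} J_{ij} s_i s_j$ with zero external field and a random symmetric coupling matrix (all values here are illustrative): single-spin-flip Metropolis steps drive a random spin configuration toward low-energy (attractor-like) states.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical symmetric coupling matrix J (zero diagonal) and inverse
# temperature beta; spins s_i = ±1 stand in for spiking/silent neurons.
N, beta = 50, 1.0
J = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))
J = (J + J.T) / 2
np.fill_diagonal(J, 0.0)

def energy(s):
    # H(s) = -1/2 sum_ij J_ij s_i s_j  (zero external field assumed)
    return -0.5 * s @ J @ s

# Metropolis sampling from the Boltzmann distribution P(s) ∝ exp(-beta * H(s))
s = rng.choice([-1, 1], size=N)
for _ in range(5000):
    i = rng.integers(N)
    dE = 2 * s[i] * (J[i] @ s)  # energy change from flipping spin i
    if dE <= 0 or rng.random() < np.exp(-beta * dE):
        s[i] = -s[i]

print(energy(s))  # energy of a sampled configuration
```

Raising $\beta$ (lowering temperature) makes the sampler sit ever closer to the minima of $H$, which is the sense in which the Ising skeleton supports attractor dynamics.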