the perceptron is the simplest model of a neuron: it takes a weighted sum of its inputs and fires if that sum exceeds a threshold. it is the foundation of everything that follows.
by “firing”, we simply mean that each input is multiplied by its weight, the products are summed, and a bias term is added. This sum is then passed through an Activation Function. There are many, but often it either squashes the range of possible values to a scale between 0 and 1 (the Sigmoid Function) or sets anything below 0 to 0 (ReLU).
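The two activations mentioned above are simple enough to sketch directly. A minimal version in plain Python (the function names `sigmoid` and `relu` are just the standard ones, nothing from the text beyond the definitions):

```python
import math

def sigmoid(z):
    # squashes any real number into the open interval (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

def relu(z):
    # passes positive values through unchanged, clamps negatives to 0
    return max(0.0, z)

# sigmoid is centered at 0.5 for z = 0; relu kills negative inputs
print(sigmoid(0.0))   # -> 0.5
print(relu(-3.0))     # -> 0.0
print(relu(2.5))      # -> 2.5
```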
$$z = \sum_{i} w_i x_i + b$$
$$\hat{y} = \begin{cases} 1 & z \ge 0 \\ 0 & z < 0 \end{cases}$$
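Putting the weighted sum and the step activation together gives the whole classic perceptron. A minimal sketch in plain Python; the AND-gate weights at the end are hand-picked for illustration, not learned:

```python
def perceptron(xs, ws, b):
    # z = sum_i w_i x_i + b
    z = sum(w * x for w, x in zip(ws, xs)) + b
    # step activation: fire (output 1) iff z >= 0
    return 1 if z >= 0 else 0

# hypothetical example: weights 1, 1 and bias -1.5 implement an AND gate
# over 0/1 inputs, since z >= 0 only when both inputs are on
print(perceptron([1, 1], [1.0, 1.0], -1.5))  # -> 1
print(perceptron([0, 1], [1.0, 1.0], -1.5))  # -> 0
```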
Source code for the perceptron diagram (inputs $x_1, \dots, x_n$ with weights $w_1, \dots, w_n$ and bias $b$ feeding a sum node, then the activation $\sigma(\cdot)$, producing $\hat{y}$):

```latex
% \documentclass{standalone} added so the snippet compiles on its own
\documentclass{standalone}
\usepackage{tikz}
\usetikzlibrary{arrows.meta, positioning, decorations.pathreplacing}

\begin{document}
\begin{tikzpicture}[
    node distance=1.8cm and 2.5cm,
    input/.style={circle, draw=black, thick, minimum size=0.9cm, fill=blue!8},
    sumnode/.style={circle, draw=black, thick, minimum size=1.3cm, fill=orange!12},
    actnode/.style={rectangle, draw=black, thick, minimum width=1.4cm, minimum height=0.9cm, rounded corners=3pt, fill=green!10},
    output/.style={circle, draw=black, thick, minimum size=0.9cm, fill=red!10},
    weight/.style={font=\small, midway, fill=white, inner sep=1.5pt},
    arr/.style={-{Stealth[length=2.5mm]}, thick}
]

% Input nodes
\node[input] (x1) at (0, 2.4) {$x_1$};
\node[input] (x2) at (0, 0.8) {$x_2$};
\node[input] (x3) at (0, -0.8) {$x_3$};
\node[input] (xn) at (0, -2.4) {$x_n$};

% Dots between x3 and xn
\node at (0, -1.6) {$\vdots$};

% Bias node
\node[input, fill=yellow!15] (b) at (2.5, -3.6) {$b$};

% Summation node
\node[sumnode] (sum) at (4, 0) {$\displaystyle\sum$};

% Activation function node
\node[actnode] (act) at (7, 0) {$\sigma(\cdot)$};

% Output node
\node[output] (y) at (9.5, 0) {$\hat{y}$};

% Input-to-sum edges with weights
\draw[arr] (x1) -- node[weight, above] {$w_1$} (sum);
\draw[arr] (x2) -- node[weight, above] {$w_2$} (sum);
\draw[arr] (x3) -- node[weight, below] {$w_3$} (sum);
\draw[arr] (xn) -- node[weight, below] {$w_n$} (sum);

% Bias edge
\draw[arr] (b) -- (sum);

% Sum to activation
\draw[arr] (sum) -- node[weight, above] {$z$} (act);

% Activation to output
\draw[arr] (act) -- (y);

% Labels
\node[above=0.3cm of x1, font=\small\bfseries] {Inputs};
\node[above=0.3cm of sum, font=\small\bfseries] {Sum};
\node[above=0.3cm of act, font=\small\bfseries] {Activation};
\node[above=0.3cm of y, font=\small\bfseries] {Output};

% Equation annotation
\node[below=0.6cm of act, font=\small, text=black!70] {$\hat{y} = \sigma\!\left(\sum_{i=1}^{n} w_i x_i + b\right)$};

\end{tikzpicture}
\end{document}
```