Function calling in language models refers to a model's ability to do more than generate free text: it can emit structured requests for external functions or APIs, which the surrounding program executes on its behalf. This lets the model retrieve data, perform calculations, or interact with other systems, extending its usefulness beyond plain text generation.

It is kind of Warhammer-esque, in that it is basically “talking to the machine spirit” of the Large Language Model and asking it nicely: “please return JSON shaped like this, and we will give you the information you need so you can do this.”

Looks something like this in most frameworks:

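Here is a minimal, provider-agnostic sketch of the pattern (the tool name `get_weather`, the schema layout, and the hard-coded `model_output` string are all made up for illustration; real SDKs each have their own exact format): you describe your functions as JSON schemas, the model replies with JSON naming a function and its arguments, and your code runs the function and hands the result back.

```python
import json

# A "tool" the model is allowed to call, described with a JSON schema.
# The name and schema layout here are illustrative, not any specific
# provider's exact format.
TOOLS = [
    {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    }
]

def get_weather(city: str) -> str:
    # Stand-in for a real weather API call.
    return f"It is 18°C and cloudy in {city}."

# What the model sends back when it decides to call a tool:
# just JSON naming the function and its arguments (hard-coded here).
model_output = '{"name": "get_weather", "arguments": {"city": "Berlin"}}'

call = json.loads(model_output)
if call["name"] == "get_weather":
    result = get_weather(**call["arguments"])
    print(result)  # -> this result gets fed back to the model
```

The loop then continues: the tool result goes back into the conversation, and the model either calls another tool or writes its final answer.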

Agents

This was the precursor to tools like Claude Code and related programs, which put the Large Language Model in the role of the “action agent”. It turns out that making the LLM interact with and debug code like a software engineer seems to improve its performance! Who would have guessed!
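To make that concrete, here is a minimal sketch of such an agent loop (everything here, from `call_model` to the JSON action format with `tool`/`args`/`done` keys, is an assumption for illustration rather than how any particular product works): the model proposes an action, the host executes it, the observation goes back into the history, and the loop repeats until the model says it is done.

```python
import json

def call_model(history: list[dict]) -> str:
    # Placeholder: a real implementation would send `history` to an LLM
    # and return its raw response. Hard-coded here so the sketch runs.
    return '{"done": true, "answer": "All tests pass."}'

def run_tool(name: str, args: dict) -> str:
    # Placeholder dispatch: in practice this would run a shell command,
    # edit a file, query an API, and so on.
    return f"(pretend output of {name} with {args})"

def agent_loop(task: str, max_steps: int = 10) -> str:
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        action = json.loads(call_model(history))
        if action.get("done"):
            return action["answer"]
        # Execute the requested tool and feed the observation back in.
        observation = run_tool(action["tool"], action.get("args", {}))
        history.append({"role": "tool", "content": observation})
    return "Gave up after max_steps."

print(agent_loop("Fix the failing unit test in utils.py"))
```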