
LLMs are Narrators


I find that most GenAI conversations end up rehashing the same pro/con positions. I believe most of those positions are resolved if:

  1. you accept that these are fundamentally narrative tools. They build stories, in whatever style you wish: stories of code, stories of project reports, stories of conversations.
  2. you balance this with the idea that the core of everything in our shared information economy is verification.

The reason experts get value out of these tools is that they can verify when the output is close enough to be indistinguishable from expert effort. Domain experts also (hopefully) do another level of verification: checking whether the generated content actually holds up against their mental model of the domain.

I’d predict that:

  1. LLMs are deadly in the hands of people who can't gauge the output, and will help them drive off a cliff,
  2. experts will find them useful on tasks where verifying the output has a marginal advantage over creating it from scratch,
  3. LLMs are probably better used as co-pilots than as agents.