Prompt:

Andrej Karpathy tweeted today:

```

With many puzzle pieces dropping recently, a more complete picture is emerging of LLMs not as a chatbot, but the kernel process of a new Operating System. E.g. today it orchestrates:

- Input & Output across modalities (text, audio, vision)

- Code interpreter, ability to write & run programs

- Browser / internet access

- Embeddings database for files and internal memory storage & retrieval

A lot of computing concepts carry over. Currently we have single-threaded execution running at ~10Hz (tok/s) and enjoy looking at the assembly-level execution traces stream by. Concepts from computer security carry over, with attacks, defenses and emerging vulnerabilities.

I also like the nearest neighbor analogy of "Operating System" because the industry is starting to shape up similar:

Windows, OS X, and Linux <-> GPT, PaLM, Claude, and Llama/Mistral(?:)).

An OS comes with default apps but has an app store.

Most apps can be adapted to multiple platforms.

TLDR looking at LLMs as chatbots is the same as looking at early computers as calculators. We're seeing an emergence of a whole new computing paradigm, and it is very early.

```

Here is my take on it:

```

These are some amazingly thought-provoking ideas.

Today, we write software in programming languages that are precise, deterministic, and very efficient. This software runs on top of operating systems that manage IO, compute, and memory resources in a deterministic way. This keeps us safe and allows us to build incredibly complex, yet reliable systems.

Enter AI. The easy step forward is for software to use an AI as a subroutine. Traditional software has full control, the AI is its tool.

What Karpathy is envisioning is a reversal of this paradigm. The AI is in control, and uses traditional software as its tools, its subroutines.

Will this prove to be the most productive path? Nobody knows.

And so we will pursue both paths - finding better and better ways to use LLMs as subroutines of traditional software, and empowering AIs by giving them better and better tools.

The “AI as a tool” approach will result in better, more accessible, more human-friendly machines. They will work like machines: focused on their task, at the service of humans, but also limited, ultimately, by what the human told them to do. When something isn’t working, we will know.

The “AI as kernel” approach will result in a more resilient system that will do its best to adapt to new situations, to choose what to do. A human might or might not know what it’s doing, what’s happening, etc.

What do we want? An easy answer would be that “AI as kernel” is a threat to humanity, and that we should therefore focus on “AI as a tool”.

But the reality is: if there is value in “AI as kernel”, someone will build it. It will happen inevitably.

So I think the real answer is: it depends on the use case. The kernel for habitat-construction robots on Mars should probably be an LLM. But our international banking system? Let’s keep that in traditional software.

```
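The inversion described in the take can be sketched in a few lines. Below, a hypothetical `llm()` stub stands in for any model API (its canned replies just keep the sketch runnable); both patterns are illustrative assumptions, not any particular framework:

```python
def llm(prompt: str) -> str:
    """Placeholder for a model call; canned replies keep the sketch runnable."""
    if "sentiment" in prompt:
        return "positive"
    return "search: weather in Paris"

# Path 1: "AI as a tool" -- traditional software is in control and calls
# the model as one subroutine among many. The program decides what happens
# with the result.
def classify_ticket(text: str) -> str:
    return llm(f"Return the sentiment of this ticket: {text}")

# Path 2: "AI as kernel" -- the model is in control and decides which
# traditional tool (subroutine) to invoke at each step.
def search(query: str) -> str:
    return f"results for {query!r}"

TOOLS = {"search": search}

def agent_step(goal: str) -> str:
    action = llm(f"Pick a tool for this goal: {goal}")
    name, _, arg = action.partition(": ")
    return TOOLS[name](arg)  # the model chose the subroutine, not the programmer
```

In Path 1 the control flow is fixed at programming time; in Path 2 the control flow itself is chosen by the model at run time, which is exactly the reversal the take describes.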

Rewrite this as a blog post in the style of <NO SPOILER>