A simple but useful mental model for using large language models: prompts are programs.

A program is just a set of instructions you give to a computer to perform actions and produce output. And what are prompts? Natural language instructions you pass to an LLM to produce text output, and that output has recently become shockingly coherent and useful.

As with software programs, you write different prompts to produce different outputs. You can reuse and compose prompts, and your prompts can accept parameters as inputs that modify their behavior.

Take a simple example: “Summarize the top {n} pros and cons of {subject}.”
I can “run” this prompt with different inputs to get a quick evaluation of any subject I like. Taking it further, while reading a paper I can run a prompt that extracts the key practices it describes, then feed those to the original prompt to get a list of pros and cons for each.
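To make the idea concrete, here’s a minimal sketch in Python of that prompt as a reusable, parameterized “program”. The `complete()` helper is hypothetical, a stand-in for whatever LLM API you use, and the function names are just for illustration.

```python
# Hypothetical helper: send a prompt to whatever LLM API you use and return its text.
def complete(prompt: str) -> str:
    raise NotImplementedError("wire this up to your LLM provider of choice")

def pros_and_cons(n: int, subject: str) -> str:
    # The prompt is the program; n and subject are its parameters.
    return complete(f"Summarize the top {n} pros and cons of {subject}.")

def key_practices(paper_text: str) -> list[str]:
    # A second "program" whose output can feed the first.
    response = complete(f"List the key practices described in this paper:\n{paper_text}")
    return [line.strip("- ").strip() for line in response.splitlines() if line.strip()]

# Composition: extract practices from a paper, then evaluate each one.
# evaluations = [pros_and_cons(3, practice) for practice in key_practices(paper)]
```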

An interesting, if somewhat imperfect, analogy I like is thinking of LLMs as a “calculator for words”. You put in some words, along with a prompt that defines an operation, and new words come out.

It’s a simple concept, but when you combine it with the fact that this calculator can perform a vast number of powerful functions that produce coherent knowledge and output, the implications are quite profound.

More broadly, if words represent thoughts, you can see how this process starts to resemble what we think of as intelligence and thought.

Differences from traditional programs

LLM prompts differ from traditional programs in a few important ways.

Most obviously, they’re defined in natural language, which dramatically changes the accessibility and ergonomics of writing them.

Furthermore, their APIs are much less obvious: there’s no formal documentation of how to use them or of what functions they can perform.

Traditional programming languages have strict grammars and syntax that code needs to adhere to, whereas prompts don’t. This means there’s no clear measure of a prompt’s correctness other than its output, which can be both a pro and a con. For example, you’ve probably noticed that you can make spelling mistakes or leave out words and LLMs are often still able to accurately infer your intent. On the flip side, seemingly small tweaks to a prompt can dramatically change its output in unexpected ways.

Most notably, prompts differ from traditional programs in that they produce non-deterministic results. This makes it hard to measure the correctness of their output, and it’s generally up to the software creator to define what “right” looks like, which can be contextual and subjective. Consequently, prompts can be quite fragile for production and business-critical use, where reproducibility and reliability are often required.
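As a rough illustration, reusing the hypothetical `complete()` helper from the earlier sketch: identical inputs can come back as different text, so exact-match checks don’t work and you end up asserting looser properties that you define yourself.

```python
prompt = "Summarize the top 3 pros and cons of remote work."

# Identical inputs can yield different text on each call.
outputs = {complete(prompt) for _ in range(5)}
print(len(outputs))  # often greater than 1

# So instead of exact-match checks, you assert looser, author-defined properties.
result = complete(prompt)
assert "pro" in result.lower() and "con" in result.lower()
```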

Implications of prompting as programming

I’ve listed out some interesting theoretical implications of the adoption of LLMs and prompting as a new type of programming.

The accessibility of software creation expands
As Andrej Karpathy said, “The hottest new programming language is English”. Beyond changing how programs are written at software companies, “prompts as programs” expands the category of people who can write programs to anyone with command of a spoken language. This is reminiscent of the “no-code” trend that reduced the barrier to creating apps to using simple GUIs. When you remove esoteric requirements for software creation, more software will get written by more people.

Natural language becomes a more common user interface generally
When LLMs are used to translate user input into other software actions, they enable natural language to become the primary interface. Not all tasks will be better when requested through natural language, but many could be, and LLMs have made it dramatically easier for software creators to leverage this interaction pattern. Expect talking to computers to become a lot more common.
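One common shape this takes, sketched below with the hypothetical `complete()` helper from earlier rather than any specific product’s API, is asking the model to translate a free-form request into a structured action that ordinary code can dispatch.

```python
import json

def handle_request(user_text: str) -> None:
    # Ask the model to map a free-form request onto one of a few known actions.
    instruction = (
        "Convert the user's request into JSON with two fields: "
        '"action" (one of "create_event", "send_email", "unknown") and "details".\n'
        f"Request: {user_text}"
    )
    command = json.loads(complete(instruction))

    if command["action"] == "create_event":
        print("Creating calendar event:", command["details"])
    elif command["action"] == "send_email":
        print("Sending email:", command["details"])
    else:
        print("Sorry, I didn't understand that request.")

# handle_request("Set up a 30 minute sync with Dana next Tuesday")
```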

Products built with LLMs will require managing and evaluating prompts as part of their development chain
Robust toolchains for developing and testing production-grade applications are core to the software industry. As LLMs, and therefore natural language prompts, become part of the software stack, development processes will need to evolve. If prompts are programs that define your business logic, shouldn’t they be version controlled, linted, sanitized, and “code” reviewed? More importantly, we need new ways to monitor and test (aka benchmark) these systems that can produce non-deterministic outputs. It’s still early, so expect these processes to mature significantly. A plethora of companies has already moved in to fill this space, and the LLM development stack will be an exciting one to track.
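For instance, a prompt checked into version control could get a regression test next to it, much like any other business logic. Here’s a minimal, hypothetical sketch (again reusing the `complete()` stand-in); the asserted properties are the kind of author-defined “right” discussed above, not an established benchmark.

```python
PROMPT_TEMPLATE = "Summarize the top {n} pros and cons of {subject}."

def test_pros_and_cons_prompt():
    output = complete(PROMPT_TEMPLATE.format(n=3, subject="electric cars"))
    # The output is non-deterministic, so assert properties rather than exact text.
    assert "pro" in output.lower()
    assert "con" in output.lower()
    assert len(output.split()) < 400  # keep summaries reasonably short
```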

Prompts become valuable assets, just like code
Software code is treated as valuable intellectual property, so it follows that natural language programs will be too. Like programs, prompts are just well-defined sets of instructions for computers to meet an objective. Crafting those instructions to meet your specific goals has value, and therefore so do the crafted instructions.

Product iteration loops for LLM-enabled products might be faster
It’ll generally be easier to modify natural language text that defines a program’s behavior than it will be to update feature code. In addition, these iterations can be made by non-engineers, meaning more people might contribute directly to the implementation of a product. Making those changes with sufficient confidence will be challenging though, and could require additional overhead. Either way, the product development lifecycle for these types of products is going to start to look different.

Clear communication as a skill increases in value
Effective communication has always been a valuable skill, because it enables efficient cooperation and knowledge sharing between people. Its value is now multiplied, because if you can clearly articulate what you know and what you want, you can make requests of both people and powerful computer systems to help you learn and create. We’re living in a moment in time where the benefits of communicating well are merging with the leverage of being able to code. That’s pretty exciting.