Note: everything that follows was bravely written without even the assistance of AI.

A friend and I were discussing the events that would occur after our untimely deaths, and this friend suggested that upon my tombstone might be written "occasional emailer."

To Volume 11 of The Cortado.

I live in San Francisco, or more generally, the Bay Area. It’s great.

One thing that I don't like about living in the Bay Area is that it can be hard to remember that other people don't live in the Bay Area. Amongst the professional class, there is a self-congratulatory air about the work done here, about how it is the most important in the world. Could AI become, as Google (Alphabet) CEO Sundar Pichai has said, “more profound than electricity or fire”? Sure. But this self-back-patting predates AI, predates SaaS (that's software as a service), predates me. The Bay Area is a good place from which to create software, and it is also a good place from which to create hype, and while the two do feed back on each other like the behavioral loops and network effects that have enriched technology investors for decades, it is the second quality that distinguishes the Bay Area more than the first.

“Can you build it?” is not the operative question.

The operative question is: “can you sell it?”

If you can’t be punished…

If you can’t be punished, you can’t be trusted. Phrased somewhat less provocatively: if you can’t be held accountable, you shouldn’t get to make the big decisions. And by extension: if we observe someone with broad decision-making power positioning themselves in our organization such that they cannot be held accountable for the consequences of those decisions, we should worry (and reduce their scope).

Even the ancient kings were accountable. In AP European History (yes, this is a high-school history class flex), you learned about the divine right of kings. Even that most sacred and awful right was rooted in responsibility (to G/god(s)). Of course, the judges of that responsibility were people, and if a ruler lost the mandate of heaven, those people became, briefly, instruments of divine retribution for the wronging of the right.

The relationship between discretion and accountability helps answer the question of why organizations (or firms or companies or corporations) exist at all. Ronald Coase asked and answered this question in the 1930s, and it eventually won him the Nobel Prize: if markets are so great, why do companies exist at all?

Why don’t we each and separately transact with one another as individuals? The logic of the market should organize us optimally to specialize in our areas of comparative advantage, and we could contract with other individuals to do those activities that they do better than us. Right?

Coase’s prize-winning answer to why this doesn’t work that well and how companies help is transaction costs. For complex tasks, it’s very difficult to write a contract that covers everything: relevant inputs (knowledge), decision points, and outputs. Sometimes, you might want to hand off an entire problem to someone and give them the ability to solve that problem with unforeseen (and unforeseeable?) strategies, tactics, and eventually, consequences. In order to do that, you need a framework through which to incentivize and punish them, and for complex tasks, writing a fully contingent, fully enforceable contract is far more costly than one available alternative, which is to hire someone, grant them discretion, and oversee them as you go, that is, to manage them. So, we have companies.

Okay fine, but how does this relate to AI?

There’s a quote, purportedly from an IBM Training Manual / Deck (the sources differ on this) from 1979. The quote makes its rounds every so often and, for reasons that will become obvious once I tell you the quote, it’s been making more rounds of late.

A computer can never be held accountable.
Therefore, a computer must never make a management decision.

1979 IBM Training Manual, allegedly

For some context on what a “computer” was in 1979, but mostly for fun, here is a side-by-side comparison of the Apple II, released in 1977, and the Mac mini that sits on my desk and runs an AI agent called Clancy.

Casual 6,000,000x increase in RAM
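For the curious, the multiplier roughly checks out under one plausible pairing, which is my assumption rather than a spec sheet: the base Apple II shipped with 4 KB of RAM, and 24 GB is a common Mac mini configuration today.

```python
# A back-of-the-envelope check of the "6,000,000x" RAM claim.
# Assumed configs: base Apple II (4 KB) vs. a 24 GB Mac mini.
apple_ii_ram = 4 * 1024        # 4 KB, in bytes
mac_mini_ram = 24 * 1024**3    # 24 GB, in bytes

multiplier = mac_mini_ram // apple_ii_ram
print(f"{multiplier:,}x")      # 6,291,456x — a casual ~6,000,000x
```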

Agent is a cool word. If you live in the Bay Area, it’s lost a lot of meaning, sort of like how if you were to say the word “Cortado” to yourself over and over again while making direct eye contact with yourself in the mirror, it ceases to be a coffee drink or the past participle of cortar and becomes as it was the first time you heard it: a mere sound.

Agents have discretion, and as with all discretion, that can create problems. Take Bond for example. Bond earned the ultimate discretion, the “license to kill,” but the principal (in this case Queen and Country, personified by M) does not always approve of Bond’s exercise of his decision-making latitude. His decisions are different from those his handlers would make were they placed in the same situations. Of course, it is Bond’s exceptional skills that allow him to arrive in those situations, which is why he is there and they are not and why the decisions are his to make. How much agency should Bond have? This question was debated in the halls of the fictional British government, and they concluded “a lot.”

For another example, take Clancy. Like Bond, Clancy also speaks with a British accent, though not as posh. Unlike Bond, Clancy is an automaton and is also real. Clancy is, in the vernacular, an AI agent.

Two topic sentences ago, I mentioned a problem. It has a name. It’s called “the principal-agent problem” or “the agency dilemma.” The fundamental idea is that agents (Bond, Clancy) who work on behalf of principals (Her Majesty’s Government, me) may not have the same motivations. And even if they have the same high-level motivations or objectives, they may translate those objectives into different strategies and tactics. The ends, set by the principals, must justify the means, determined by the agents. And the movies and agentic workflows get really exciting when the agent earns / exercises / wrests discretion over not just means but also ends.

Things of the Month

Book of the Month

House of Leaves by Mark Z. Danielewski

This is the scariest book I’ve ever read. It’s so scary that I tried reading it 10 years ago and had to stop, because it was that scary. I’m trying again now, and I feel I’ve crossed the tipping point where I must finish it no matter what awaits.

It’s hard to describe this book, so I won’t belabor it, but it’s very avant-garde. In order to read it, you must bring your reading lunch pail. Not everyone wants to do that AND be scared, and that’s fine.

But if you’d like to have a new kind of literary experience, I can recommend this book with a full and trembling heart.

“Prometheus, thief of light, giver of light, bound by the gods, must have been a book.”

House of Leaves

Imagining of the Month

brought to you by Midjourney, lightly edited

Song of the Month

I have a hard-to-explain soft spot for Niall.

Thank you for reading.

Until next time,
Ethan
