You may be familiar with the essay “Choose Boring Technology”. In it, Dan McKinley makes the case that choosing boring technology, and spending your company’s “three innovation tokens” wisely, is a global optimization for the company: solving a problem shouldn’t come with the additional burden of supporting a shiny piece of unstable tech too.

In the next ten years, I predict that “Choose AI-Friendly Technology” will be the new “Choose Boring Technology”.

What Will ‘AI-Friendly’ Mean?

In this context, AI-Friendly means way more than just “there are docs”. I’m talking about ways for AI, instead of humans, to quickly learn the context behind a system and interact with it.

The future is not AI tools reading Markdown files like humans, although it will look like that for a long time.

Slowly we’ll adopt industry standards for AI agents to interact natively with our systems.

We already have a few examples like this in the industry:

  • .well-known/ai-plugin.json – OpenAI Plugin Manifest
    This is an OpenAI specification for giving AIs a hint at how to use a REST API. I’m not sure how widespread it is. Basically, it’s a pointer from a well-known location to the actual API spec.
    llms.txt is a similar kind of thing.

  • AI Markdown in Repositories (ai.md / AI-friendly README sections)
    Honestly, I hope this goes away and normal README.md files just get better. I do think we’ll begin to see more structure (tags, YAML front matter) to aid AI crawling of docs.

  • AI-First Integration Points (MCP)
    I think in the future, our infrastructure tools will just come bundled with a first-party MCP server. But I really hope MCPv2 is better.

  • AI-Specific Cross-Platform Standards
    MIT is curating a list of agents with structured card data (like a model card). This isn’t going to scale, but maybe there could be a big index of public agents for agent-to-agent communication.
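The well-known-manifest pattern above is easy to sketch. Here’s a minimal example in Python, assuming a manifest in the shape OpenAI’s plugin manifests used: a small JSON file at a well-known location whose main job is to point at the real API spec. The `todo_service` manifest itself is hypothetical.

```python
import json

# A hypothetical ai-plugin.json, as it might be served from
# https://example.com/.well-known/ai-plugin.json. The key idea:
# the manifest is just a signpost to the actual OpenAPI spec.
MANIFEST = """
{
  "schema_version": "v1",
  "name_for_model": "todo_service",
  "description_for_model": "Manage the team's TODO list.",
  "api": {
    "type": "openapi",
    "url": "https://example.com/openapi.yaml"
  }
}
"""

def discover_api_spec(manifest_text: str) -> str:
    """Follow the well-known manifest to the URL of the real API spec."""
    manifest = json.loads(manifest_text)
    return manifest["api"]["url"]

print(discover_api_spec(MANIFEST))  # https://example.com/openapi.yaml
```

An agent that knows only a hostname can fetch the well-known path, follow the pointer, and read the full spec — no human in the loop.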

Honestly there are not a lot of examples in the wild.

Predicting AI-Friendly Integration Points

Predicting the future, I think we’ll see things like:

  • More Well-known Discovery Endpoints (More Reflection)
    I don’t like computers talking plain text to each other, but things like protobuf reflection make it possible to discover APIs at runtime. We need more of this in different aspects of infrastructure. Reflect all the things.

  • Global Discovery Conventions
    Someone needs to invent some sort of “infrastructure discovery protocol” so agents can discover the topology of a system without knowing a lot ahead of time. I don’t want to have to tell an agent what “the production database” is, but I also don’t want that data hard-coded in a Markdown file. No, this is not DNS, but maybe DNS TXT records could be used for more pointers.

  • Some Sort of Global Context Registry
    Right now, the business context around what does what is captured inside of engineers’ heads. Sometimes, we write this context down and call it “documentation”. I think we need to go further and encode all the things, at runtime, in machine-readable form. My best guess at what this looks like is lots of tags (annotations) on everything and a global tag registry for converting those tags to meaningful descriptions.

  • Machine-Friendly SSH RPC
    You might think that ssh should have a --json argument, but it doesn’t. Fundamentally, though, LLMs deal in big blocks of text anyway. It will be neat when AI agents can literally exec() instead of wrapping everything.
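The global-context-registry idea above can be sketched as tags on resources plus a shared registry that resolves those tags into descriptions an agent can use. This is a toy illustration, not a proposal for an actual schema; every name below (the tags, the resources) is hypothetical.

```python
# A shared registry mapping opaque tags to machine-readable context.
# In a real system this would be a service, not a dict.
TAG_REGISTRY = {
    "tier:critical": "Outages here page a human immediately.",
    "data:pii": "Contains personally identifiable information.",
    "owner:payments": "Owned by the payments team.",
}

# Resources carry only tags; the meaning lives in the registry.
RESOURCES = {
    "mysql-prod-1": ["tier:critical", "data:pii", "owner:payments"],
    "staging-cache": [],
}

def describe(resource: str) -> list[str]:
    """Resolve a resource's tags into context an agent can read."""
    return [TAG_REGISTRY[tag] for tag in RESOURCES.get(resource, [])]

print(describe("mysql-prod-1"))
```

The point of the indirection is that the context stays current at runtime: retag the resource, or update the registry, and every agent sees the change without anyone editing a Markdown file.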

The Coming Rise of ‘AI-Friendly’ Infrastructure

In the future where AI agents are working with us side-by-side on our infrastructure stacks, choosing something AI-friendly is going to become more important than how “boring” it is.

I mean, boring to whom?

Using MySQL may be boring to you and your current team, but using AiSQL may be “boring” to your AI SRE team, who might actually be running the thing in production.

And if AI is running it, will it really matter?

What will matter is: how much does it cost the company to run this thing?

But What If It Breaks?

Yeah, it is going to break.

In the early days, AI infra tools are going to suck. We are going to point and say “haha, look at how dumb it is!”

This won’t last long. The thing that AIs have on their side is speed and relentlessness.

They can respond to a page way faster than a human, and perform remediation faster than a human.

They can “watch dashboards” all day and not get bored.

They can also escalate to a human if they can’t figure it out!

The Future

In the early days, sysadmins roamed the earth and created a world of bespoke, hand-crafted servers.

Later, the DevOps movement showed us a world created by automation and tools, but still hand-repaired and debugged.

Kubernetes came onto the scene and gave us an API for running compute with all the extra parts, giving us a complete vision for automated infrastructure.

AI infrastructure agents are going to slowly come onto the scene, but really struggle because none of that infrastructure is very sympathetic to an AI agent. This will slowly evolve to the infrastructure itself becoming more friendly to the AI, perhaps with some of those examples above.

We’ll coexist for a while, but an AI with superior speed and context can always beat a human at fixing the machines. Eventually the AIs will be the first responders, escalating to humans only when things get out of hand.
