If Your Website Can't Be Grepped, It Might Not Exist
Your primary audience is increasingly not a human with a mouse. It's an agent with a parser.
For years we optimized for visual impact. Animations. Client-side rendering. Content loaded after multiple API calls. Text hidden behind interactions. Pages that look beautiful but are effectively opaque unless you execute JavaScript and simulate a user session.
That philosophy made sense in a human-first browsing era.
It makes less sense in an agent-mediated one.
If your site cannot be easily scraped, parsed, indexed, or grepped from raw HTML, you are introducing friction for the systems that are starting to mediate discovery. These systems are not patient. They are cost-sensitive, token-aware, and optimized for structured retrieval.
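To make that concrete, here is a minimal sketch, in Python, of the check an agent-side retriever effectively performs: fetch the raw HTML without executing any JavaScript and see whether the readable content is actually in the response. The URL and phrase below are placeholders, not a real pipeline.

```python
# Minimal sketch: does the content a human would read appear in the
# un-rendered HTML, with no JavaScript execution and no browser session?
import urllib.request

def content_visible_in_raw_html(url: str, phrase: str) -> bool:
    """Return True if `phrase` appears in the raw HTML served at `url`."""
    req = urllib.request.Request(url, headers={"User-Agent": "content-check/0.1"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        raw_html = resp.read().decode("utf-8", errors="replace")
    return phrase.lower() in raw_html.lower()

# A server-rendered page passes this check; a client-rendered shell that
# fills in its text through later API calls does not.
print(content_visible_in_raw_html("https://example.com", "Example Domain"))
```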
Every extra layer of obfuscation costs tokens. Every hidden block of content requires additional compute. Every dynamic render that could have been static increases processing overhead.
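A rough way to see the token cost, assuming the tiktoken tokenizer is installed and using two invented snippets that carry the same information:

```python
# Rough illustration of markup overhead measured in tokens.
# Requires the `tiktoken` package; the snippets are invented examples.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

as_divs = (
    '<div class="card"><div class="card__header"><span class="title">'
    'Pricing</span></div><div class="card__body"><span class="price">'
    '$12/month</span><span class="note">billed annually</span></div></div>'
)
as_markdown = "## Pricing\n$12/month, billed annually"

print("div soup :", len(enc.encode(as_divs)), "tokens")
print("markdown :", len(enc.encode(as_markdown)), "tokens")
# Exact counts depend on the tokenizer, but the markup-heavy version costs
# several times more tokens to convey the same meaning.
```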
There is also an environmental dimension. LLM-driven systems consume energy. If your content requires significantly more processing to extract the same meaning, you are effectively increasing the ecosystem's compute burden for aesthetic reasons.
There is a strategic risk as well. If an agent cannot efficiently understand your content, it may deprioritize referencing it. In an agent-mediated discovery layer, visibility depends on machine comprehension as much as human appeal.
Static-first. Clean semantic HTML. Structured metadata. Predictable URLs. Exposed documentation. Markdown as a distribution layer. These are not retro ideas. They are forward-compatible infrastructure.
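As a sketch of what structured metadata buys you: a JSON-LD block embedded in static HTML can be pulled out with the Python standard library alone, no browser and no JavaScript runtime. The page and schema fields below are invented for illustration.

```python
# Extract <script type="application/ld+json"> blocks from static HTML
# using only the standard library.
import json
from html.parser import HTMLParser

class JSONLDExtractor(HTMLParser):
    """Collects parsed JSON-LD objects from a page."""
    def __init__(self):
        super().__init__()
        self._in_jsonld = False
        self.items = []

    def handle_starttag(self, tag, attrs):
        if tag == "script" and ("type", "application/ld+json") in attrs:
            self._in_jsonld = True

    def handle_endtag(self, tag):
        if tag == "script":
            self._in_jsonld = False

    def handle_data(self, data):
        if self._in_jsonld and data.strip():
            self.items.append(json.loads(data))

page = """
<article>
  <h1>If Your Website Can't Be Grepped, It Might Not Exist</h1>
  <script type="application/ld+json">
    {"@type": "BlogPosting", "headline": "If Your Website Can't Be Grepped, It Might Not Exist"}
  </script>
</article>
"""

extractor = JSONLDExtractor()
extractor.feed(page)
print(extractor.items)  # prints the parsed metadata as a list of dicts
```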
I do not think human-readable design is going away. But I increasingly believe machine-readable structure is becoming table stakes.
If your site cannot be read by a terminal window, you may find it is not read at all.