Thinking

Essays and frameworks on AI-native product management, agentic workflows, context frameworks, PRDs in an AI world, token economics, and product operating systems.

The Next Competitive Edge Is Not Output, It’s Cost

4 min read

I spend much of my time thinking about structured AI workflows, token budgets, and how to build products from 0 to 1 with agent support.

Early on, the differentiator was quality.

Who could generate better copy. Better code. Better documentation. Better design artifacts. Model capability varied widely enough that output quality created separation.

That advantage appears to be compressing.

Models are improving rapidly. Tooling layers are standardizing. Structured workflows are becoming more common. The baseline level of acceptable output is rising across teams and organizations.

As capability equalizes, constraints become more visible.

Cost becomes harder to ignore.

Token cost. Compute cost. Energy cost. Latency cost. Human review cost.

The organizations that create durable advantage will likely not be the ones who simply use AI most aggressively. They will be the ones who use it intentionally.

Do you really need the largest model for this task, or will a smaller one suffice? Are you passing far more context than required? Are you regenerating artifacts that could be cached? Are you designing workflows that accumulate clarity, or ones that repeatedly pay to rediscover it?
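A minimal sketch of two of those levers, model routing and artifact caching. The price table, the complexity threshold, and the call_llm client are all placeholder assumptions, not any particular provider's API:

```python
import hashlib

# Hypothetical per-million-token prices; real numbers vary by provider.
PRICE_PER_MTOK = {"small-model": 0.15, "large-model": 5.00}

_artifact_cache: dict[str, str] = {}

def route_model(task_complexity: float) -> str:
    """Send routine tasks to the cheaper model; reserve the large one."""
    return "large-model" if task_complexity > 0.7 else "small-model"

def estimated_cost(model: str, tokens: int) -> float:
    """Rough spend estimate, useful for budgeting before a call."""
    return tokens / 1_000_000 * PRICE_PER_MTOK[model]

def generate(prompt: str, complexity: float, call_llm) -> str:
    """Cache by prompt hash so identical artifacts are never regenerated."""
    key = hashlib.sha256(prompt.encode()).hexdigest()
    if key not in _artifact_cache:
        # call_llm is a stand-in for whatever client you actually use.
        _artifact_cache[key] = call_llm(route_model(complexity), prompt)
    return _artifact_cache[key]
```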

There is a broader implication here. Compute is energy. Energy has impact. Efficiency is not only financial; it has environmental and ethical dimensions.

The competitive frontier appears to be shifting from who can generate impressive output once, to who can consistently extract the most value per unit of compute.

In an increasingly agent-driven economy, leverage may be measured less by raw output and more by cost-adjusted impact.
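One way to make that measurable, as a sketch: define cost-adjusted impact as value delivered per dollar of compute. What counts as value, and the numbers below, are assumptions you would have to pin down for your own product.

```python
def cost_adjusted_impact(value_delivered: float, tokens_used: int,
                         price_per_mtok: float) -> float:
    """Value delivered per dollar of compute spent."""
    cost = tokens_used / 1_000_000 * price_per_mtok
    return value_delivered / cost if cost else float("inf")

# Two workflows producing the same value; the leaner one scores 10x higher.
print(cost_adjusted_impact(100.0, 50_000, 5.00))   # 400.0 per dollar
print(cost_adjusted_impact(100.0, 500_000, 5.00))  # 40.0 per dollar
```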

That shift is subtle, but it may become decisive.

The Death of the Button Click

4 min read

For decades we designed interfaces to guide behavior.

Color to drive conversion. Placement to trigger clicks. Friction to increase dwell time. Entire disciplines emerged around shaping attention and influencing user flows.

That model assumes the human is directly navigating the interface.

As conversational interfaces mature, more interactions will originate in natural language. A user will say, “Find me the best option,” or “Book this,” or “Summarize their pricing model.” An agent will interpret, navigate, extract, and act.

Users may not scroll through your landing page. They may not experience your micro-interactions. They may not respond to your CTA hierarchy.

Instead, they say what they want. An intermediary translates that intent into structured actions.

This does not mean frontends disappear in three years. But it does suggest that the center of gravity may shift.

When interaction becomes conversational and mediated, some of the behavioral mechanics optimized for clicks lose leverage. The surface area shifts from persuasion to clarity.

Clear APIs. Explicit actions. Structured affordances that an agent can safely execute. Transparent pricing models. Well-defined capabilities. These become increasingly important.
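As one sketch of a structured affordance, here is an explicit action a site might expose so an agent can act without parsing a UI. The field names follow common function-calling conventions, and the action itself is hypothetical:

```python
# A single explicit action, declared rather than implied by a button.
BOOK_ACTION = {
    "name": "book_appointment",
    "description": "Book an appointment slot. Fails if the slot is taken.",
    "parameters": {
        "type": "object",
        "properties": {
            "slot_id": {"type": "string", "description": "ID from list_slots"},
            "email": {"type": "string", "description": "Confirmation address"},
        },
        "required": ["slot_id", "email"],
    },
}
```

An agent given this definition knows what the action does, what it requires, and how it can fail, with no persuasion layer in between.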

Design does not go away. It evolves.

The emerging question is not only how to design pages that convert humans, but how to design systems that agents can understand, reason about, and act within safely.

The teams that adapt well will likely not be those who simply build the flashiest UI. They will be those who make their systems understandable, composable, and interoperable with agent workflows.

We are not losing interface design. We are expanding the definition of interface.

If Your Website Can't Be Grepped, It Might Not Exist

5 min read

Your primary audience is increasingly not a human with a mouse. It's an agent with a parser.

For years we optimized for visual impact. Animations. Client-side rendering. Content loaded after multiple API calls. Text hidden behind interactions. Pages that look beautiful but are effectively opaque unless you execute JavaScript and simulate a user session.

That philosophy made sense in a human-first browsing era.

It makes less sense in an agent-mediated one.

If your site cannot be easily scraped, parsed, indexed, or grepped from raw HTML, you are introducing friction for the systems that are starting to mediate discovery. These systems are not patient. They are cost-sensitive, token-aware, and optimized for structured retrieval.

Every extra layer of obfuscation costs tokens. Every hidden block of content requires additional compute. Every dynamic render that could have been static increases processing overhead.
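A rough illustration, using a crude length-based token estimate rather than a real tokenizer: the same sentence served as clean HTML versus buried in a script payload.

```python
def approx_tokens(text: str) -> int:
    # len // 4 is a crude stand-in for a real tokenizer count.
    return len(text) // 4

clean = "<p>Plans start at $29/month. See /pricing for details.</p>"
bundled = (
    "<script>window.__DATA__="
    '{"blocks":[{"type":"text","content":'
    '"Plans start at $29/month. See /pricing for details."}]}'
    "</script>"
)
# The wrapper more than doubles the estimate for the same sentence.
print(approx_tokens(clean), approx_tokens(bundled))
```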

There is also an environmental dimension. LLM-driven systems consume energy. If your content requires significantly more processing to extract the same meaning, you are effectively increasing the ecosystem's compute burden for aesthetic reasons.

There is a strategic risk as well. If an agent cannot efficiently understand your content, it may deprioritize referencing it. In an agent-mediated discovery layer, visibility depends on machine comprehension as much as human appeal.

Static-first. Clean semantic HTML. Structured metadata. Predictable URLs. Exposed documentation. Markdown as a distribution layer. These are not retro ideas. They are forward-compatible infrastructure.
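A quick outside-in test, sketched with placeholders: fetch the raw HTML without executing any JavaScript and grep for the content that matters.

```python
import urllib.request

def is_greppable(url: str, must_contain: list[str]) -> bool:
    """True if every phrase appears in the HTML exactly as the server ships it."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    return all(phrase in html for phrase in must_contain)

# Swap in your own site and the phrases an agent would need to find.
print(is_greppable("https://example.com", ["Example Domain"]))
```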

I do not think human-readable design disappears. But I increasingly believe machine-readable structure becomes table stakes.

If your site cannot be read by a terminal window, you may find it is not read at all.
