Commentary

Something Big Is Happening — But We Have Work to Do to Go Beyond Coding

Matt Shumer's viral essay "Something Big Is Happening" has now been viewed tens of millions of times, and it deserves every one of them. His core observation is right: something genuinely significant is underway, and most people haven't grasped the magnitude yet. The release of GPT-5.3 Codex and Opus 4.6 on the same day in early February 2026 marked a visible inflection point. AI systems that can write tens of thousands of lines of working code, test their own output, iterate on what they've built, and present polished results — that is a real and remarkable development. His point about AI contributing to its own improvement is important and well-made.

Shumer is also right about the magnitude of impact this will have across knowledge work broadly. The transformation coming to most professional work is enormous. But it won't arrive the same way it arrived for coding. Software development has a set of structural advantages that let AI reach autonomous capability faster than it will in other domains. Getting to the same level of impact with the rest of knowledge work — and we will get there — is going to require additional steps, new approaches, and deliberate investment in how humans and AI systems work together.

Understanding why coding moved first isn't a reason for complacency. It's a roadmap for what we need to build next.

Why Coding Moved First

Coding occupies a uniquely favorable position for AI systems. Several characteristics converge to make software development the domain where autonomous AI capability was always going to advance fastest. Recognizing these advantages helps clarify what needs to happen to unlock the same level of transformation elsewhere.

First: Formal, Structured Language

Code is written in formal, structured languages with precise syntax and unambiguous semantics. A Python function either follows the language specification or it doesn't. There is no room for the kind of interpretive flexibility that characterizes most human communication. When an AI system generates code, the rules for what constitutes valid output are explicit and machine-readable. Compare this to writing a performance review, drafting a strategy memo, or composing an email to a difficult client. In those tasks, the "language" is natural language — rich with ambiguity, implication, cultural context, and unstated assumptions. Two perfectly competent professionals might produce radically different outputs for the same brief, and both could be excellent. This isn't a weakness of AI; it's a characteristic of the work itself that demands a different kind of collaboration between humans and AI.
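This machine-checkability is easy to demonstrate. The sketch below uses Python's standard ast module to test whether a string follows the language grammar; the helper function and its name are illustrative, not drawn from Shumer's essay:

```python
import ast

def is_valid_python(source: str) -> bool:
    """Return True if the source parses under the Python grammar."""
    try:
        ast.parse(source)
        return True
    except SyntaxError:
        return False

print(is_valid_python("def f(x): return x + 1"))  # valid: follows the spec
print(is_valid_python("def f(x) return x + 1"))   # invalid: missing colon
```

There is no comparable one-line check for whether a memo follows "the rules" of good business writing.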

Second: Extraordinary Training Data

The training data available for code is extraordinary in both quantity and quality. Platforms like GitHub host billions of lines of public code, much of it version-controlled with clear commit histories showing what changed and why. Crucially, this corpus includes not just correct code but clearly labeled incorrect code — failed builds, bug reports, rejected pull requests, and stack traces. AI systems can learn from both success and failure with unusual clarity. The equivalent doesn't yet exist for most knowledge work. There is no public repository of "good memos and bad memos" with annotations explaining why each succeeded or failed. There is no version-controlled history of how a consulting recommendation evolved through six rounds of partner review. Building richer feedback data for other domains is one of the steps that will accelerate AI's impact beyond coding.

Third: Objective, Automated Feedback

Code provides objective, automated feedback. An AI system can write code, compile it, run it, execute test suites against it, and determine with high confidence whether the output does what it was supposed to do. Code works or it doesn't. A function returns the right value or the wrong one. A build succeeds or fails. An application renders correctly or throws errors. This tight feedback loop is what enables the kind of autonomous iteration Shumer describes so vividly — the AI writing code, opening the app, clicking through it, deciding something doesn't look right, going back and fixing it. That workflow depends on the system's ability to evaluate its own output against objective criteria.
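That loop can be sketched in a few lines. Here run_tests stands in for a real test suite, and the two candidate functions stand in for an AI's buggy first draft and its revision; all names are illustrative, not from any real framework:

```python
# Toy version of the write-test-iterate loop: a test suite supplies
# an unambiguous pass/fail signal the system can act on.

def run_tests(candidate) -> bool:
    """Objective check: does the candidate add two numbers correctly?"""
    try:
        return candidate(2, 3) == 5 and candidate(-1, 1) == 0
    except Exception:
        return False

def iterate_until_pass(attempts):
    """Return the first attempt that passes, mimicking self-correction."""
    for attempt in attempts:
        if run_tests(attempt):
            return attempt
    return None

first_try = lambda a, b: a - b   # buggy first draft
revision = lambda a, b: a + b    # corrected second draft

working = iterate_until_pass([first_try, revision])
```

The point is not the toy arithmetic but the shape of the loop: the evaluation step is mechanical, so iteration needs no human in it.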

Most daily knowledge work doesn't have this built-in feedback mechanism. When an employee drafts a quarterly business review, there is no compiler that declares it correct or incorrect. When a marketing team develops a campaign strategy, there is no test suite that returns pass or fail. Quality in these domains is determined by the subjective judgment of the employee themselves, their manager, their colleagues, their clients — people whose standards vary, whose preferences shift, and whose evaluation criteria are often unarticulated until something feels wrong. The impact AI has on these tasks will be just as large, but achieving it requires us to create new ways of structuring the collaboration between human judgment and AI execution.

Fourth: Well-Defined Boundaries of Correctness

Code has well-defined boundaries of correctness at multiple levels. At the lowest level, syntax must be valid. At the next level, the program must execute without runtime errors. Above that, it must pass unit tests, integration tests, and end-to-end tests that verify specific behaviors. Each layer provides concrete, checkable criteria. Knowledge work rarely has this layered structure of verification. A legal brief might be grammatically correct, logically structured, and factually accurate — and still be strategically wrong for the client's situation. An investment memo might correctly summarize every data point and still miss the insight that matters. The gap between "technically correct" and "actually good" is narrow in coding and vast in most other professional work. Closing that gap is where the most important work lies ahead.
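Those layers can be made concrete. The hypothetical checker below walks a code string through the three gates the paragraph describes — syntax, execution, and a behavioral test — and reports which one it fails at (the expected function name `double` is an assumption of this sketch):

```python
def check_layers(source: str) -> str:
    """Check the layered criteria: syntax, then runtime, then behavior.
    Each layer is a concrete, mechanically checkable gate."""
    try:
        code = compile(source, "<candidate>", "exec")   # layer 1: syntax
    except SyntaxError:
        return "syntax error"
    namespace = {}
    try:
        exec(code, namespace)                            # layer 2: runtime
    except Exception:
        return "runtime error"
    func = namespace.get("double")
    if func is None or func(4) != 8:                     # layer 3: behavior
        return "fails tests"
    return "passes all layers"

print(check_layers("def double(x): return x * 2"))   # passes all layers
print(check_layers("def double(x): return x + 2"))   # fails tests
```

No analogous checker can report that a legal brief "passes syntax and runtime but fails strategy" — that final layer exists only as human judgment.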

Fifth: Tasks Decompose Cleanly

Coding tasks decompose cleanly. Building an application can be broken into modules, functions, components, and endpoints — each of which can be specified, built, and tested independently. This modularity is a natural fit for AI systems that work best with well-scoped, well-defined tasks. Much of knowledge work, by contrast, is entangled. A change in one section of a strategy document can invalidate the reasoning in another. The tone of a single paragraph in a client communication can reshape the entire relationship. These interdependencies are difficult to specify and even harder to test for automatically. New frameworks for decomposing complex knowledge work into AI-addressable components — while preserving the holistic judgment that ties them together — are essential to extending AI's impact.
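A minimal sketch of that modularity, with purely illustrative functions: each unit is specified and verified on its own, and the composition inherits that confidence.

```python
def parse_price(text: str) -> float:
    """Turn a string like '$19.99' into a number."""
    return float(text.lstrip("$"))

def apply_discount(price: float, percent: float) -> float:
    """Reduce a price by a percentage, rounded to cents."""
    return round(price * (1 - percent / 100), 2)

# Each piece is verified independently...
assert parse_price("$19.99") == 19.99
assert apply_discount(100.0, 25) == 75.0

# ...and the composition can then be trusted piece by piece.
assert apply_discount(parse_price("$19.99"), 10) == 17.99
```

A strategy document has no such seams: changing one "function" silently alters the spec of the others.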

Sixth: Alignment with AI Companies' Needs

There is a deep alignment between what AI companies need and what coding AI produces. As Shumer himself notes, AI labs focused on coding first because building AI requires a lot of code, and an AI that writes code can help build the next AI. This means coding AI has received disproportionate investment, attention, and iteration relative to AI for other knowledge domains. The result is that coding AI is further ahead not only because the domain is structurally more tractable, but also because far more resources have been poured into it. As investment broadens — and it is broadening — other domains will begin to close the gap.

The Path to the Same Impact Everywhere Else

The magnitude of AI's eventual impact on knowledge work will be every bit as large as Shumer suggests. But reaching that impact requires additional steps that coding didn't need.

In software development, AI reached a point where it can autonomously complete entire tasks — write, test, iterate, and deliver working software with minimal human involvement — because the domain provided built-in mechanisms for the AI to evaluate and improve its own work. In most other knowledge domains, those mechanisms don't exist natively. They have to be created. The question isn't whether AI will transform these domains, but how we build the workflows, feedback structures, and human-AI collaboration models that make it possible.

This is the core challenge that Concierge Computing is designed to address. Concierge Computing systems don't try to replicate the autonomous loop that works so well in coding. Instead, they create structured workflows where human intent guides AI execution and human evaluation shapes the outcome. The system interprets what you want to accomplish, plans and executes work, and then brings its output back for your review and direction. That review step isn't a limitation — it's the mechanism that provides the evaluative feedback that coding gets for free from compilers and test suites.

There are, of course, some areas of knowledge work that already share coding's favorable characteristics. Financial reconciliation, regulatory compliance checking, data validation, and certain categories of analysis involve clear rules and verifiable outputs. AI will reach full autonomy in these areas relatively quickly. But the larger opportunity — and the larger transformation — lies in the vast majority of knowledge work where quality depends on human judgment. Getting AI's impact in those areas to match what we've seen in coding is achievable, and the payoff will be immense. It just requires us to build deliberately rather than assuming it will happen on its own.

What This Means Right Now

Shumer's advice to start using AI seriously is exactly right. The people and organizations who figure out how to work effectively with AI will outperform those who don't, and the gap will widen quickly. This is true today, and it will be dramatically more true a year from now.

The skills that matter in this transition aren't technical. They're the ability to articulate intent clearly, to exercise judgment about quality when objective metrics don't exist, and to direct AI systems toward outcomes that reflect human values and organizational context. These are the capabilities that will distinguish effective knowledge workers in an AI-augmented world — and they're the capabilities that organizations need to start developing now.

Something big is indeed happening. Shumer is right about that, and right about the scale of what's coming. Coding moved first because it was the easiest domain for AI to master autonomously. The rest of knowledge work will follow — with the same magnitude of impact — but it will require us to take deliberate additional steps to get there. The organizations and individuals who understand this, and who start building the skills and systems to work with AI effectively, will be the ones who capture the enormous value that's on the way.