
Death of Coding

And why it might be a good thing after all

For two decades, the ability to write software has been treated as a rare and valuable skill - a gatekeeper to some of the most well-paid, intellectually stimulating work in the modern economy. Learning to code was the advice given to students, career changers, and entire nations looking to participate in the digital economy. The logic was simple: software is eating the world, and the people who write software will eat well.

That logic has not disappeared. But it is being rapidly renegotiated by AI. This essay argues that the decline of coding as a gatekeeping skill is not a loss - it is an opening. And that opening has the potential to be particularly significant for learners who have historically been locked out of the technology economy.

What is actually changing

AI tools like Claude Code can now write functional software from structured instructions. A person who can clearly describe a problem, break it into logical steps, and evaluate whether the output works can now produce software that would previously have required months of learning syntax, debugging, and language-specific knowledge.

This does not mean all coding is disappearing. Engineers working on complex systems, novel algorithms, performance-critical infrastructure, or security-sensitive code still require deep technical knowledge. What is changing is the layer below that - the vast middle territory of software that needs to be built, but does not require elite engineering to build: the college managing admissions on Excel, the clinic scheduling patients over WhatsApp, the small manufacturer with no inventory system, the NGO reconciling donations on a shared spreadsheet.

The skills that are gaining value in this new environment look something like this:

  • Software mental models - Understanding what software is, how client and server interact, what databases and APIs do, and how data flows through a system to produce an output.
  • Product thinking - Identifying real problems, understanding who has them, breaking a task into logical steps, and evaluating whether a solution actually works.
  • AI-native development - Giving structured tasks to AI agents, asking them to explore a codebase, running iterative improvement cycles, and decomposing complex work into smaller prompts.
  • System integration - Connecting services through authentication, databases, AI APIs, and automation workflows.
  • Operating software - Reading errors and logs, deploying applications, debugging with AI assistance, and maintaining basic security hygiene.

None of these are trivial. But none of them require knowing how to write a recursive function in JavaScript.

The multiplication table problem

A useful analogy comes from mathematics education. Should children learn multiplication tables if calculators exist? Most educators say yes - not because the child will spend their life doing mental arithmetic, but because understanding multiplication builds the cognitive scaffolding to use calculators intelligently, catch errors, and think numerically.

The equivalent in programming is computational thinking: understanding what a variable is, how a loop works, what happens when a system encounters an error, what it means for something to have state. These concepts can be taught through light Python programming in weeks. They do not require six months of JavaScript.
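To make that concrete, here is a minimal sketch of the kind of exercise such a course might use - a toy, hypothetical example (the function name and scenario are illustrative, not from any curriculum) that touches all four concepts at once: a variable, a loop, an error, and state.

```python
# A toy exercise: track a running balance (state) across a list of
# transactions, using a variable, a loop, and error handling.

def apply_transactions(balance, transactions):
    """Return the final balance after applying each transaction in order."""
    for amount in transactions:          # a loop: repeat one step per item
        if balance + amount < 0:
            # an error condition: the system refuses to enter an invalid state
            raise ValueError(f"cannot withdraw {-amount} from {balance}")
        balance = balance + amount       # the variable `balance` carries state forward
    return balance

print(apply_transactions(100, [20, -50, 5]))   # prints 75
```

A learner who can predict what this prints, and explain why the second call `apply_transactions(10, [-20])` raises an error, has the scaffolding the paragraph above describes - without needing months of syntax study.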

The distinction matters because curriculum time is finite. Every hour spent on JavaScript syntax is an hour not spent on systems thinking, product reasoning, or AI-native development - skills that compound more directly into real-world value for most learners. Python earns its place because it teaches computational thinking with minimal syntactic overhead. JavaScript, for most learners not aiming specifically at frontend development, is legacy importance dressed up as current importance.

The opportunity argument

The intuitive fear about AI is that it will get better at everything, squeeze humans into a shrinking set of tasks, and drive wages down. That fear is not unreasonable in the long run. But it misses something important about where we are right now.

AI has real limitations today - cost, context, trust, and the kind of on-the-ground human relationships that software adoption actually requires. In many settings, a skilled human builder who understands the local problem is genuinely cheaper and more effective than an AI-led solution. This is the window.

And the window sits on top of an enormous unmet need. India alone has tens of thousands of institutions running on spreadsheets, WhatsApp, pen and paper, and institutional memory. The problem was never that this software was hard to build. It was that the economics did not work. A custom operations system for one mid-size college was not worth six months of an engineer's time. It might be worth three weeks of someone who can direct AI agents well, understands the user's actual workflow, and can iterate fast.

AI compresses the build cost dramatically. It makes previously uneconomical software suddenly viable. And the person who captures that value is not necessarily the best coder - it is the person closest to the problem who can translate messy human workflows into working systems.

The counterarguments

The optimistic case above needs to be stress-tested. Here are the most serious objections, and an honest assessment of each.

1. The skill floor may be higher than it appears

Even with powerful AI tools, effective builders still need systems thinking, debugging ability, sound architecture decisions, and an understanding of data and security. These are not trivial. If they remain hard to acquire, particularly for learners with weak educational foundations, the new barrier may simply be a different barrier, not a lower one.

Two things are worth saying here. The purely technical skills - architecture decisions, debugging complex systems, understanding security - are being abstracted away by AI faster than most people expect. The floor on those is falling.

But the other half of the skill set - identifying real problems, understanding users, knowing whether what you built actually solves anything - still requires a human in the loop. AI can help with all of it, but only if someone is asking the right questions in the first place. And knowing what questions to ask comes from being close to the problem. These skills are also not learned in a classroom - they are learned by building real things for real people. That kind of learning is arguably more accessible to someone who grew up navigating real-world constraints than to someone who spent years in an academic environment disconnected from them.

2. Experienced engineers using AI may simply outcompete juniors

AI raises the productivity of existing engineers, which may reduce demand for junior builders rather than creating new roles for them. The market might decide that ten senior engineers with AI are more valuable than fifty junior AI builders.

This is likely true in certain product contexts - fast-moving startups, technically complex systems, frontier software. It is much less true in the long tail of institutional and local software described earlier. Senior engineers in Bangalore or Mumbai are not competing for the contract to rebuild a small college's admissions workflow in Nagpur. That market has no incumbent. The question is whether someone trained in AI-native development can get there and deliver something that works.

3. The role could commoditize before it stabilizes

If directing AI agents becomes easy, the market could be flooded with builders, driving down wages to gig-economy levels. No-code platforms have already experienced this dynamic - initial excitement, then commoditization, then a race to the bottom on price.

The honest response is: this is a real risk, and it depends heavily on whether learners develop genuine product judgment or merely surface-level prompting ability. The difference between someone who can prompt an AI to generate a form and someone who can design an end-to-end workflow that a college admin will actually use every day is enormous. The former commoditizes. The latter does not. The curriculum design question is which of those two people you are producing.

4. Structural barriers persist beyond the technical

Even if the technical barrier drops, disadvantaged learners may still face weak professional networks, limited English proficiency, and restricted access to real business problems to solve. These factors often matter as much as technical skill in securing work.

This is true and important. It means the training program cannot only teach technical skills - it must also build the surrounding infrastructure: access to real clients, mentorship, portfolio development, and commercial pathways. A learner who can build but cannot find, pitch, and close work is not economically independent. The services business model described in this essay - a studio or apprenticeship structure where learners work on real projects under guidance - is partly an answer to this problem. It embeds the commercial layer into the learning experience rather than leaving it as something learners must figure out alone afterward.

5. The cost of AI tools is not trivial

A Claude Code subscription costs $20 a month - not cheap for a learner from a disadvantaged background in India. Stack that with hosting, APIs, and other tools, and the cost of entry starts to add up. This is a real barrier, not a theoretical one.

First, constraints breed resourcefulness. Some of the most creative builders have come out of environments where they had to do more with less - and a scrappy approach to tooling is not necessarily a disadvantage. Second, AI pricing is almost certainly going the way of the internet and cloud computing - expensive and inaccessible at first, then cheap enough to take for granted. The $20 subscription of today will likely look like the cost of a SIM card in a few years. In the meantime, training programs that are serious about this population need to invest in tool access - it is as fundamental as providing a laptop.

6. Do we even need this software?

Things are working without it. Colleges are admitting students, clinics are seeing patients, businesses are running. If spreadsheets and WhatsApp were genuinely broken, someone would have fixed them already. Maybe the demand for this software is not as large as this essay assumes.

This is worth taking seriously. Technology is not a solution to every problem. A college with poor governance does not become well-governed because it has better software. Sometimes the real problem is misaligned incentives, weak leadership, or cultural inertia - and no amount of tooling fixes that.

But working and working well are different things. The question is not whether these institutions are surviving - it is whether better software would free up meaningful time, reduce errors, or open up new possibilities. There are cases where the bottleneck genuinely is operational - where the problem is not will but friction. A clinic that loses patient records because they live in a physical file somewhere. A housing society where the treasurer spends two weekends a month reconciling maintenance payments by hand. A coaching centre that cannot tell which students are falling behind until it is too late. In these cases, the people inside already know something is broken. They just never had anyone affordable enough to ask.

What we are actually training for

Strip away the technology for a moment, and the profile of the person this model is trying to produce looks like this: someone who can identify a real problem, reason about the system needed to solve it, direct tools to build that system, and take ownership of whether it works.

These are not new skills. They are the skills of a good product manager, a thoughtful consultant, and a resourceful operator. What is new is that the tools available to a person with those skills have become dramatically more powerful. A person who previously needed a team of engineers to bring an idea to life can now do it largely alone, or with a small, lean team.

The death of coding as a gatekeeper is not the death of building. It is the democratization of building.

For learners from disadvantaged backgrounds, this is a genuine shift in the economics of entry. The traditional path into the technology economy required years of formal education, strong mathematical foundations, and access to elite networks. That path is not going away - but it is no longer the only path.

The new path requires different things: practical problem-solving ability, persistence in the face of systems that break, comfort learning independently, strong enough English to read documentation and communicate with AI, and the product judgment to know whether what you built actually solves the problem. These are not equally distributed across privilege - but they are distributed differently than academic credentials. That difference matters.

A note on what comes next

There is a services business waiting to be built around this model, one that connects AI-native builders to the enormous backlog of institutional software that nobody has had the economics to commission. The unit economics work differently than traditional IT services: lower build cost, faster iteration, outcomes-based pricing. The bottleneck is not the technology. It is building the pipeline from learner to client, and the judgment layer that ensures what gets built actually works.

That is the harder problem. But it is the right problem to be working on.