Programming becomes obsolete quickly. What should we do about it?

Roman Leventov
Nov 22, 2020


Robots, automation, and AI will obliterate a lot of human jobs in the next 10–20 years. Few people argue with this prediction anymore. The idea has been discussed widely over the last few years, often using the example of autonomous driving systems and how they will replace human drivers (driving being one of the most common occupations in America today).

Most programmers think they won’t be at the forefront of this trend, because software programming is a cognitively demanding activity that only a fraction of humans are actually able to perform well. Surely robots and AI will first replace blue-collar and less intellectual white-collar jobs before climbing up the ranks and eating into the pie of software engineers?

I think this is a delusion. Programmers might be among the first to be out of their jobs. There are several reasons for this:

  • Software programming is 100% digital by design. We manipulate digital inputs to produce digital outputs, through the logic implemented in digitally stored and compiled code.
  • We have built very good and elaborate automated infrastructure for software: testing frameworks, deployment mechanisms, regression monitoring, etc. Ironically, this might make replacing manual programming with AI models or AI-generated code safer and quicker than in other industries.
  • Software is one of the most composable and leverageable things that we create. We live in the age of increasing specialisation and leverage: the team that is best in the world at something will do that thing for the whole world. This applies to software more than to most other things.

The digital nature of programming makes it suitable for reinforcement learning and/or learning from a large corpus of existing code

AI has surpassed humans at playing chess and Go, and even at core computer algorithms such as array sorting, through the sheer power of computation: massively scaled learning from auto-generated or digitally available examples. (Note: the machine-discovered sort surpassed human-written QuickSort, which is no longer the fastest algorithm anyway; the current state of the art is itself an ML-based algorithm, though “ML” in a more traditional sense than AI writing code.) It takes seconds to determine whether a model plays chess or sorts arrays better or worse than its previous incarnation. This nearly instant digital feedback fuels reinforcement learning.
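To make the “instant digital feedback” point concrete, here is a minimal sketch (my own toy illustration, not any particular system’s code) of the evaluation loop that makes array sorting amenable to massively scaled learning: any candidate program can be scored for correctness and speed in milliseconds against a trusted reference.

```python
import random
import time

def evaluate_candidate(candidate_sort, trials=100, size=1000):
    """Score a candidate sorting function on correctness and speed.

    Returns None if the candidate ever produces a wrong answer;
    otherwise the mean runtime in seconds. The whole loop finishes in
    well under a second, which is the near-instant digital feedback
    that reinforcement-learning-style search needs.
    """
    total_time = 0.0
    for _ in range(trials):
        data = [random.randint(0, 10**6) for _ in range(size)]
        expected = sorted(data)  # trusted reference implementation
        start = time.perf_counter()
        result = candidate_sort(list(data))
        total_time += time.perf_counter() - start
        if result != expected:
            return None  # incorrect candidates are rejected outright
    return total_time / trials

# A learning system would generate thousands of candidates and keep the
# fastest correct one; here we just score the built-in sort as a baseline.
print(evaluate_candidate(sorted))
```

Contrast this loop with the battery-cell example in the next paragraph, where a single evaluation takes days of manual labour.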

I think it’s easier to learn to engineer a perfect Li-ion battery cell than to become a world-class Go player, but the feedback is so slow (it takes days and a lot of manual labour to manufacture a single custom cell and then evaluate its performance) that Li-ion cell engineering can’t become a subject for reinforcement learning anytime soon. We humans need much less experience to learn complex things than present AI systems do. This gives us (so far) a unique advantage over AI in the areas of research, engineering, and problem-solving where we have little historical information to learn from and slow or non-digital feedback on our decisions.

GPT-3 learned the patterns of human language so shockingly well because it has “read” an enormous number of books and other texts written by humans. A similar thing will definitely happen with code, since so much of it is hosted online on GitHub and other code hosting services. A model that learns from all available source code will, at the very least, be able to spot inconsistencies in human-written code, i.e. bugs. DeepCode has already made a very promising advance in this direction, learning from bug-fixing PRs in other projects on GitHub and suggesting fixes in your project. Aroma is another interesting project in this area.
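To illustrate the kind of inconsistency such a model can learn to spot (a made-up toy example, not DeepCode’s actual output), consider an off-by-one bug that looks locally plausible but contradicts how the same idiom appears across millions of open-source repositories:

```python
def last_item(items):
    # Buggy: valid indices run from 0 to len(items) - 1, so this line
    # always raises IndexError. A model trained on bug-fixing commits
    # flags it because the corpus overwhelmingly uses the fixed pattern.
    return items[len(items)]

def last_item_fixed(items):
    # The fix a pattern-learning tool would suggest (idiomatic Python
    # would simply be items[-1]).
    return items[len(items) - 1]
```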

Contrast this with areas like mechanical engineering or psychotherapy. In the last 100 years, we designed and engineered more physical objects, and conducted more psychotherapy sessions, than we wrote code. But we didn’t record, digitise, and amass the examples of human practice in these areas. So I think it will be much harder to jump from GPT-3 to “an AI psychotherapist better than a human psychotherapist” than to “an AI programmer better than a human programmer”, even though psychotherapy is not “harder” than programming. (“Optimal” psychotherapy is probably a very hard task, but human psychotherapists are not even close to optimal, so, in theory, it should be relatively easy to surpass them.)

The area of physical health and disease probably lies between psychotherapy and programming when it comes to large-scale computer learning: at least some countries accumulate health data and medical treatment histories of their citizens, but probably not as much as we would wish for training a very good “AI doctor” model.

The composability and the leverage of software

Sixty years ago, programmers did all assembly programming for their computers from scratch. Then operating systems and libraries appeared. Most software systems written 40–50 years ago were self-contained and didn’t interface with anything apart from human operators (through input and output devices) and sensors.

After libraries, probably the next big class of programs designed to interface with other code was databases. Programming frameworks (along with managed languages such as Java) started to appear in large numbers about 30 years ago, I think. At about the same time, configurable systems and frameworks aimed at “automating the enterprise” appeared from vendors such as IBM, Microsoft, and SAP. I think these systems anticipated today’s trends but mostly failed because they were too self-contained (like the earlier systems), too complex, and too slow.

Fifteen years ago, we got the first cloud computing primitives (Amazon S3) and backend systems designed to do a single task well (such as map-reduce computation with Apache Hadoop). Today these tools have proliferated and matured so much that I’d say good backend and data engineering work is now more systems and reliability engineering than programming.
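To see how little “programming” is left on top of such primitives, here is a minimal sketch using boto3, the standard AWS client library for Python (the bucket name and key below are hypothetical, and AWS credentials are assumed to be configured in the environment):

```python
import boto3

s3 = boto3.client("s3")

# Durable, replicated, globally available storage in a single call:
# infrastructure that would once have been a project in its own right.
s3.put_object(
    Bucket="example-bucket",            # hypothetical bucket name
    Key="reports/2020-11.json",
    Body=b'{"status": "ok"}',
)

# Reading the object back is equally trivial.
response = s3.get_object(Bucket="example-bucket", Key="reports/2020-11.json")
print(response["Body"].read())
```

The real engineering work shifts to questions of capacity, cost, failure modes, and access control, i.e. systems and reliability engineering rather than programming.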

About 10 years ago, a new trend emerged: software systems abstracting away big, complex tasks such as load balancing and content delivery (Cloudflare), payments (Stripe), programmable communications (Twilio), website development (Netlify), data engineering (Fivetran), infrastructure and device monitoring (Datadog), etc. Read Stephen O’Grady’s “Addition by abstraction” for more details.

These products learned from the mistakes of the earlier enterprise automation systems from IBM and SAP. The new “vertical” systems integrate with as many external systems and APIs as possible, support all popular programming languages, provide good APIs, and strive to appear as simple as possible to their users (programmers) while abstracting away as much complexity as possible.
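As a sketch of what this abstraction feels like in practice (using Stripe’s Python library; the API key and amount are placeholders), accepting a card payment, a task that once meant bank integrations, PCI compliance, and fraud handling, shrinks to a few lines:

```python
import stripe

stripe.api_key = "sk_test_placeholder"  # test-mode key; placeholder value

# Behind this one call sits the whole banking, compliance, and
# fraud-prevention stack that Stripe abstracts away.
intent = stripe.PaymentIntent.create(
    amount=2000,                        # amount in cents, i.e. $20.00
    currency="usd",
    payment_method_types=["card"],
)
print(intent.status)                    # e.g. "requires_payment_method"
```

What remains for the integrating programmer is mostly deciding when to create the payment and what to do with its status, which is closer to business analysis than to programming.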

So I think these systems are here to stay and will be much more ubiquitous in the next 5–10 years, similar to how programming with cloud computing primitives, databases, and Big Data systems became ubiquitous. The new abstractions will push the programmer’s job even further, from systems and reliability engineering to pure business analysis with perhaps a sprinkle of integration programming. Some people will still need to do this, sure, but it will require an order of magnitude less programming (and fewer programmers).

The trend towards specialisation and leverage is not unique to software engineering: we have a similar situation in hardware, where only a few companies manufacture chips, and in consumer electronics, where only a handful of companies create phones or laptops for the whole world.

Software engineering is moving from the “craft” stage to the “mass factory production” stage. This is normal. But the belief that software engineers will somehow be in even higher demand than they are today once the industry completes this transition looks like denial to me. I don’t quite buy the argument that improvements in programmer efficiency will be balanced by the need for much more complex software. A higher percentage of the world’s population worked in factories in the first half of the 20th century than today; nevertheless, we produce far more factory goods per capita now, through automation, composability, and worldwide leverage. How is the programming industry fundamentally different from manufacturing in this regard?

Compare software programming with investigative journalism, which will remain craft work: going to the places where something happened, finding people to speak with, etc. There is not much opportunity for factory-like automation and leverage here. The final text or video might be generated or edited by an AI from the raw materials, but this is a relatively small part of the investigative journalist’s work anyway.

What should we do as programmers in response to this?

I haven’t thought much about this yet, but here are a few strategies that come to mind (ordered from those providing shorter-term to those providing longer-term “relief”):

Program things that are harder to replace by no-code tools and AI

I’m thinking primarily of embedded, low-level, and systems programming, and of DevOps (who do the integration work). Most other areas, from backend to frontend, mobile, and game programming, look more vulnerable to automation.

Leverage ML and AI in your programming work

This can range from becoming more of an ML engineer or a Data Scientist than a “simple” programmer, to being an early adopter of DeepCode or GPT-3-based code generation tools.

Double down on systems engineering, reliability engineering, and business analysis

I think it will take much longer (at least 10–20 years) for AI to surpass humans in systems analysis. Joscha Bach suggested (in this interview) that AI currently operates at the cognitive level of solving tasks defined by humans. Moving to the level of defining tasks (i.e. what systems engineers and business analysts do) is a qualitative leap for AI. (And the level above that is prioritising tasks, i.e. ethics.)

The bad news is that we will need fewer “task definers” than programmers, so to secure and retain such a job, you should be really good at it. Better to begin improving sooner.

Join a world-class team that creates the best library, framework, database, backend system, or a software product in the respective area

Note that these teams will increasingly be composed of ML and AI engineers rather than programmers, because they will be competing with other teams also trying to leverage ML and AI.

This may sound like an exciting opportunity, but remember that only one team, or a handful of teams, will survive in each area. And it will be very hard to join the winning team, because it will be hiring only the best people in the world. I’d approach this path only with caution and a good deal of humility: if some team hired you, are you indeed among the best in the world, or is this just a mediocre team that will go out of business in a few years?

Unfortunately, the paths above all involve increased intellectual competition. They also provide only a temporary solution: I’d cap it at about 30 years. Beyond that, I think virtually all programming, no matter the level, complexity, or novelty of the task, will be done by AI.

Move into management or business

The competition in engineering management will increase as there will be fewer software engineers to manage. Entrepreneurship looks better in the long term: see “The Fourth Economy” for more on this topic.

Surrender to AI and move into a craft or interpersonal work

I think AI’s defeat of the Go champion, and then GPT-3, signalled that we will ultimately lose the battle with AI in all intellectual and creative domains, without exception. I don’t believe in the romantic notion of the endless possibilities of the human brain. Computers will rule humans.

The good news for us is that this will probably happen after we die. But you can embrace intellectual humility anyway and prepare to transition into unassuming roles.

Farming is a popular escape for former software engineers in America. It could also be health care, child care, elderly care, cooking, etc. Or it could be craft programming (such as indie games), writing, or art.

Conclusion

Programming as a craft job is quickly becoming a thing of the past. This is good for businesses but bad for the individual programmers who will lose their jobs. Programmers should think very hard about this principal–agent conflict and how to approach it in their future careers.

Thanks for reading!

Subscribe to new Engineering Ideas on Substack.

Resources and further reading/watching

“The Bitter Lesson” by Rich Sutton

“21 Lessons for the 21st Century” by Yuval Noah Harari

“GPT-3: Is AI Deepfaking Understanding?” (interview with Joscha Bach)
