
Moore’s Law and the Future of Work: are we ready for an AI tipping point?

In the 1970s, Professor Carver Mead popularized the term "Moore's Law," named after Gordon Moore, co-founder of Fairchild Semiconductor and Intel (and former CEO of the latter). Moore's Law observes that the number of transistors on a microchip doubles about every two years. This is important because, generally speaking, the more transistors a chip has, the greater its functionality and capability.

Moore’s Law has also been applied to technology more broadly. In his 2021 manifesto, Sam Altman famously extends it to “Moore’s Law for Everything,” in which he imagines a world “where, for decades, everything – housing, education, food, clothing, etc – [becomes] half as expensive every two years.” Altman posits that this outcome will result from AI lowering the cost of goods and services, essentially by reducing the cost of human labor as work is increasingly performed by machines. He says that while people will still have jobs, those jobs won’t create as much economic value; instead, economic value will be driven by technology. People will be freed up to do more of the things they care about. Companies and land will be taxed, and the proceeds distributed via an even annual payment to all citizens over 18 (a form of the concept known as Universal Basic Income, or UBI).
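The compounding arithmetic behind both claims – transistor counts doubling every two years, or costs halving every two years – is easy to sketch. A minimal illustration (the starting figures below are arbitrary, chosen for the example, not drawn from Moore or Altman):

```python
def compound(start: float, years: int, factor: float, period_years: int = 2) -> float:
    """Apply a fixed multiplier once per period, e.g. doubling or halving every two years."""
    return start * factor ** (years / period_years)

# Moore's Law: transistor counts double roughly every two years.
# From 1,000 transistors, 20 years is 10 doublings: 1,000 * 2**10.
print(compound(1_000, 20, 2))    # 1024000.0

# "Moore's Law for Everything": costs halve every two years.
# A $100 good after 10 years is 5 halvings: 100 * 0.5**5.
print(compound(100, 10, 0.5))    # 3.125
```

The same exponential curve underlies both stories; only the direction of the multiplier differs.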

This blended capitalist and socialist utopia is intriguing. Universal Basic Income is an increasingly popular theme in Silicon Valley, where it has become associated with AI because of AI’s predicted destabilizing impact on human labor.

The concept of UBI is often lauded by academics and technophiles, although it’s unproven beyond some small trials. And its name is a misnomer: where the income is funded nationally, it is a national rather than a truly universal benefit. I’m skeptical about how the AI-corporate-funded UBI model could be applied to developing countries, which, generally speaking, have less capital to invest in AI development and so are likely to see lower economic returns from the very technology funding the UBI.

We would need to consider whether UBI could be self-funded by other means in these nations. While the universal reduction of labor value ought to be a great equalizer, in reality its effects will be unequally distributed, reinforcing existing economic divisions between nations. Developing nations with an export-based growth model could lose any trade advantage of lower labor costs, reducing foreign investment in their economies. Developing nations that invest less in wealth-generating technologies (and therefore produce less of them) and receive less foreign investment would thus face greater hurdles in a self-funded UBI model. Far from the national social stability Altman predicts, international social unrest could increase.

For UBI to be a real solution in a global and borderless world, we need to come together as a collective to consider universal solutions (or national solutions that are feasible across all nations). Currently, we haven’t even managed to act together to regulate existing global issues that are much simpler than UBI, like privacy and data regulation. So, we have a lot of work to do. 

Putting the specific concept of UBI to one side, at first glance Altman’s utopia appears antithetical to the current mainstream narrative that AI is more likely to halve your workload than halve your value. One might be forgiven for arguing that a doubling in the efficiency of technology every two years corresponds with a doubling in the efficiency of workers using that technology. While the productivity paradox, which I elaborate on below, suggests this isn’t a universal law, it does seem to match the general AI trend that some of us are starting to experience firsthand – such as developers using GitHub Copilot – or hearing about in social media echo chambers.

On deeper consideration, both perspectives could be correct, just not at the same time. To the extent that humans are performing tasks that AI cannot, the blended human-and-AI skillset is powerful and will supercharge human productivity. This can be viewed as AI in its infancy (Phase 1). However, Altman’s manifesto is predicated on a scenario where the set of tasks humans can perform but AI cannot becomes infinitesimal. Couple that with technology becoming more productive at the same price point, and eventually a human’s economic value pales against that of its metallic contender. This can be viewed as AI in its maturity (Phase 2). At its extreme, in Phase 2 the price of human labor falls toward zero. Phase 1 is here and guaranteed; Phase 2 is speculative.

Let’s consider what each phase means for our careers. In Phase 1, while generative AI is still a companion tool that can make us more efficient, we should wrap our arms around this opportunity and adopt it. Effective AI use will be key to partaking in the cascading benefits to humans of Moore’s Law, while we still can. I say this while acknowledging the so-called ‘productivity paradox’, which warns that improvements in technological efficiency don’t necessarily translate into improvements in human productivity. There have been periods in modern history, particularly in the US, when developed-world productivity growth slowed even as the productivity of information technology improved. For example, Erik Brynjolfsson wrote about how this paradox played out during the 1970s and 1980s.

While it’s yet to be seen how this paradox will apply in the coming years as generative AI tools take off, my belief is that these emerging AI tools will correlate with overall productivity during Phase 1, at least for those pockets of society that adopt them. Also, the paradox doesn’t necessarily apply on an individual scale. So, regardless of any macro trends, by adopting the right tools individuals will be able to supercharge their own productivity in any time period.

What is certain is that in Phase 1 professional skills will become outdated faster than ever before, with most professions and roles either changing systemically or becoming redundant to some degree. In either case, professional development and transition is more urgent than ever before.

Merely showing up to perform the same role ad infinitum won’t be enough. In order to maintain our relative performance and value, we must continuously evolve. This requires motivation, and motivation requires satisfaction and genuine interest in the work we do.  For this reason, it’s increasingly important we find work that is satisfying to us.  

Even if we could take Phase 2 for granted (we can’t), no one can predict when it will happen. What I do know is that immediately preceding it there will be a tipping point. To adopt Malcolm Gladwell’s definition: this is the moment of critical mass, the boiling point that leads to significant change. The change is that AI goes from materially fostering human economic value to materially reducing it overall.

Here, I do not intend to comment on the inherent value or worth of humans, I merely observe our comparative value with AI within the labor force. And if that relative value reduction is as material as Altman predicts, we need to seriously consider alternative models of commerce. At this point, instead of work being a necessity to meet our basic needs, it would become something we could choose to undertake in order to give us meaning and allow us to express our creativity. Universal Basic Income is one way to ensure basic human needs can be met through a flat payment made to all citizens. 

This post-work world spurred by AI represents the type of Outside Context Problem that Iain M. Banks wrote about in the 1990s; the kind of problem "most civilizations would encounter just once, and which they tended to encounter rather in the same way a sentence encountered a full stop". By definition it’s something we can’t fully predict until it arrives, but then it will change life as we know it.

Let’s not try to predict the unpredictable. Instead, let’s make some informed assumptions one step at a time and adjust as we go. My base assumption, which we’re already getting a glimpse of, is that professional adaptability and continuous development are becoming increasingly critical due to the rate and nature of technological change. This needs to be prioritized whether we’re staying in the same job or changing profession entirely. It includes fast adoption for first-mover advantage but goes far beyond this. It also goes beyond traditional tick-box professional development mandated by governing associations. I’m talking about each of us learning about our technological future, new ways of thinking, and developing relevant transferable skills to set us up for the change ahead.

A related assumption is that professional satisfaction will become more important, both because of my base assumption and in the event that we eventually move to Phase 2 of our AI future. Here I advocate a paradigm shift away from choosing a traditional safe career path toward picking a satisfying one. The pace of evolution is jeopardizing traditional career safety and security (in a good way). We don’t know what a traditional or safe career looks like for students leaving school at this point. Aside from that, to maintain the momentum my base assumption requires, doing a job that genuinely interests us will be a great asset. And, finally, the constant change offers more opportunity to calibrate our career path than ever before; this means more opportunity to figure out what we like and chase it.

My final and more general (read: cautious) assumption is that eventually AI will probably lead to an Outside Context Problem that radically changes the way we organize ourselves fiscally and socially. It will be a global, not a local, issue that may well involve the reduced value of human labor. Debating the details of a world decades down the track is compelling but becomes a distraction when it turns into proving who is right. We won’t know who is right until the change is upon us. The real value in this debate lies in informing the development and testing of creative and sensible alternative global economic models that we can have ready if and when they are needed.
