A New Soft Technology

Something momentous happened around the year 2000: a major new soft technology came of age. After written language and money, software is only the third major soft technology to appear in human civilization. Fifteen years into the age of software, we are still struggling to understand exactly what has happened. Marc Andreessen’s now-familiar line, “software is eating the world,” hints at the significance, but we are only just beginning to figure out how to think about the world in which we find ourselves.

Only a handful of general-purpose technologies[1] – electricity, steam power, precision clocks, written language, token currencies, iron metallurgy and agriculture among them – have impacted our world in the sort of deeply transformative way that deserves the description eating. And only two of these, written language and money, were soft technologies: seemingly ephemeral, but capable of being embodied in a variety of specific physical forms. Software has the same relationship to any specific sort of computing hardware as money does to coins or credit cards, or as writing does to clay tablets and paper books.

But only since about 2000 has software acquired the sort of unbridled power, independent of hardware specifics, that it possesses today. For the first half century of modern computing after World War II, hardware was the driving force. The industrial world mostly consumed software to meet existing needs, such as tracking inventory and payroll, rather than being consumed by it. Serious technologists largely focused on solving the clear and present problems of the industrial age rather than exploring the possibilities of computing proper.

Sometime around the dot-com crash of 2000, though, the nature of software, and its relationship with hardware, underwent a shift. It was a shift marked by accelerating growth in the software economy and a peaking in the relative prominence of hardware.[2] The shift happened within the information technology industry first, and then began to spread across the rest of the economy.

But the economic numbers only hint at[3] the profundity of the resulting societal impact. As a simple example, a 14-year-old today (too young to show up in labor statistics) can learn programming, contribute significantly to open-source projects, and become a talented professional-grade programmer before age 18. This is breaking smart: an economic actor – in this case a young individual wielding software leverage – using early mastery of an emerging technology to exert disproportionate influence on the emerging future.

Only a tiny fraction of this enormously valuable activity – the cost of a laptop and an Internet connection – would show up in standard economic metrics. Based on visible economic impact alone, the effects of such activity might even show up as a negative, in the form of technology-driven deflation. But the hidden economic significance of such an invisible story is at least comparable to that of an 18-year-old paying $100,000 over four years to acquire a traditional college degree. In the most dramatic cases, it can be as high as the value of an entire industry. The music industry is an example: a product created by a teenager, Shawn Fanning’s Napster, triggered a cascade of innovation whose primary visible impact has been the vertiginous decline of big record labels, but whose hidden impact includes an explosion in independent music production and rapid growth in the live-music sector.[4]

Software eating the world is a story of the seen and the unseen: small, measurable effects that seem underwhelming or even negative, and large, invisible and positive effects that are easy to miss, unless you know where to look.[5]

Today, the significance of the unseen story is beginning to be widely appreciated. But as recently as fifteen years ago, when the main act was getting underway, even veteran technologists were being blindsided by the subtlety of the transition to software-first computing.

Perhaps the subtlest element had to do with Moore’s Law, Intel co-founder Gordon Moore’s famous observation, first made in 1965 and later refined, that the number of transistors that can be packed onto a silicon chip doubles roughly every two years. By 2000, even as semiconductor manufacturing firms began running into the fundamental limits of Moore’s Law, chip designers and device manufacturers began to figure out how to use it to drive down the cost and power consumption of processors rather than to drive up raw performance. The results were dramatic: low-cost, low-power mobile devices, such as smartphones, began to proliferate, vastly expanding the range of what we think of as computers. Coupled with reliable, cheap cloud computing infrastructure and mobile broadband, the result was a radical increase in technological potential. Computing could, and did, become vastly more accessible, to many more people in every country on the planet, at radically lower cost and expertise levels.
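
To get a feel for what that doubling law compounds into, here is a minimal sketch in Python; the 1971 baseline of roughly 2,300 transistors (the Intel 4004) and the two-year doubling period are illustrative assumptions added here, not figures from this essay:

```python
# Rough illustration of a "doubles roughly every two years" law.
# The baseline (~2,300 transistors, Intel 4004, 1971) and the doubling
# period are illustrative assumptions, not figures from the essay.

def projected_transistors(year, base_year=1971, base_count=2_300,
                          doubling_period_years=2):
    """Project a transistor count under a simple exponential doubling law."""
    doublings = (year - base_year) / doubling_period_years
    return base_count * 2 ** doublings

for year in (1971, 1985, 2000, 2015):
    print(f"{year}: ~{projected_transistors(year):,.0f} transistors")
```

Run as written, the projection climbs from thousands of transistors in the early 1970s to tens of millions around 2000 and into the billions by the 2010s: the kind of compounding that made cheap, low-power processors possible.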

One result of this increased potential was that technologists began to grope towards a collective vision commonly called the Internet of Things. It is a vision based on the prospect of processors becoming so cheap, miniaturized and low-powered that they can be embedded, along with power sources, sensors and actuators, in just about anything, from cars and light bulbs to clothing and pills. Estimates of the economic potential of the Internet of Things – of putting a chip and software into every physical item on Earth – vary from $2.7 trillion to over $14 trillion: comparable to the entire GDP of the United States today.[6]

By 2010, it had become clear that given connectivity to nearly limitless cloud computing power and advances in battery technologies, programming was no longer something only a trained engineer could do to a box connected to a screen and a keyboard. It was something even a teenager could do, to almost anything.

The rise of ridesharing illustrates the process particularly well.

Only a few years ago, services like Uber and Lyft seemed like minor enhancements to the process of procuring and paying for cab rides. Slowly, it became obvious that ridesharing was eliminating the role of human dispatchers and lowering the level of expertise required of drivers. As data accumulated through GPS tracking and ratings mechanisms, it further became clear that trust and safety could increasingly be underwritten by data instead of brand promises and regulation. This made it possible to dramatically expand driver supply and lower ride costs by drawing on underutilized vehicles already on the roads.

As the ridesharing sector took root and grew in city after city, second-order effects began to kick in. The increased convenience enabled many more urban dwellers to adopt carless lifestyles. Increasing supply lowered costs and increased accessibility for people previously limited to inconvenient public transportation. And as the idea of the carless lifestyle began to spread, urban planners began to realize that century-old trends like suburbanization, driven in part by car ownership, could no longer be taken for granted.

The ridesharing future we are seeing emerge now is even more dramatic: the higher utilization of cars leads to lower demand for cars, and frees up resources for other kinds of consumption. Individual lifestyle costs are being lowered and insurance models are being reimagined. The future of road networks must now be reconsidered in light of greener and more efficient use of both vehicles and roads.

Meanwhile, the emerging software infrastructure created by ridesharing is starting to have a cascading impact on businesses, such as delivery services, that rely on urban transportation and logistics systems. And finally, by proving many key component technologies, the ridesharing industry is paving the way for the next major development: driverless cars.

These developments herald a major change in our relationship to cars.

To traditionalists, particularly in the United States, the car is a motif for an entire way of life, and the smartphone just an accessory. To early adopters who have integrated ridesharing deeply into their lives, the smartphone is the lifestyle motif, and the car is the accessory. To generations of Americans, owning a car represented freedom. To the next generation, not owning a car will represent freedom.

And this dramatic reversal in our relationships to two important technologies – cars and smartphones – is being catalyzed by what was initially dismissed as “yet another trivial app.”

Similar impact patterns are unfolding in sector after sector. Prominent early examples include the publishing, education, cable television, aviation, postal mail and hotel sectors. The impact is more than economic. Every aspect of the global industrial social order is being transformed by the impact of software.

This has happened before of course: money and written language both transformed the world in similarly profound ways. Software, however, is more flexible and powerful than either.

Writing is very flexible: we can write with a finger on sand or with an electron beam on a pinhead. Money is even more flexible: anything from cigarettes in a prison to pepper and salt in the ancient world to modern fiat currencies can work. But software can increasingly go wherever writing and money can go, and beyond. Software can also eat both, and take them to places they cannot go on their own.

Partly as a consequence of how rarely soft, world-eating technologies erupt into human life, we have been systematically underestimating the magnitude of the forces being unleashed by software. While it might seem like software is constantly in the news, what we have already seen is dwarfed by what still remains unseen.

The effects of this widespread underestimation are dramatic. The opportunities presented by software are expanding, and the risks of being caught on the wrong side of the transformation are dramatically increasing. Those who have correctly calibrated the impact of software are winning. Those who have miscalibrated it are losing.

And the winners are not winning by small margins, or only temporarily. Software-fueled victories in the past decade have tended to be overwhelming and irreversible faits accompli. And this appears to be true at all levels, from individuals to businesses to nations. Even totalitarian dictatorships seem unable to resist the transformation indefinitely.

So to understand how software is eating the world, we have to ask why we have been systematically underestimating its impact, and how we can recalibrate our expectations for the future.


[1] Economists use the term general-purpose technologies to talk about those with broad impact across sectors. There is, however, no clear consensus on which technologies should make the list.

[2] By a rough estimate, between 1977 and 2012 the direct contribution of computing hardware to the United States GDP increased 14% (from 1.4% in 1977 to 1.6% in 2012), while the direct contribution of software increased 150% (from 0.2% in 1977 to 0.5% in 2012). Computing hardware peaked in 2000 (at 2.2% of GDP) and has steadily declined since (source: a16z research staff).

[3] See for instance Silicon Valley Doesn’t Believe Productivity is Down (Wall Street Journal, July 16, 2015) and GDP: A Brief but Affectionate History, by Diane Coyle, reviewed in Arnold Kling, GDP and Measuring the Intangible, American Enterprise Institute, February 2014.

[4] Why We Shouldn’t Worry About The (Alleged) Decline Of The Music Industry, Forbes, January 2012.

[5] The idea of seen/unseen effects as an overarching distinction in economic evolution can be traced to an influential 1850 essay by Frédéric Bastiat, That Which is Seen and That Which is Unseen.

[6] Three independent estimates, all for the year 2020, help us calibrate the potential. Gartner estimates $1.9 trillion in value-add by 2020. Cisco estimates a value somewhere between $14 trillion and $19 trillion. IDC estimates a value around $8.9 trillion (source: a16z research staff).