1 A Bitter Taste of Dystopia
The 2016 presidential campaign made everybody angry. Liberal Bernie Sanders supporters were angry at allegedly racist Republicans and a political system they perceived as being for sale, with Hillary Clinton a big beneficiary. Conservative Donald Trump supporters were furious at the decay and decline of America, and at how politicians on both sides of the aisle had abandoned them and left a trail of broken promises. Hillary Clinton supporters fumed at how the mainstream media had failed to hold Trump accountable for lewd behavior verging on sexual assault, and worse.
The same rage against the system showed up in Britain, where a majority of citizens, primarily those living outside prosperous London, voted to take the United Kingdom out of the European Union. In Germany, a right-wing party espousing a virulent brand of xenophobia gained critical seats in the Bundestag. And around the world in prosperous countries, anger simmered, stoked by a sense of loss and by raging income inequality. In the United States, real incomes have been falling for decades. Yet in the shining towers of finance and on the kombucha-decked tech campuses of glittering growth engines such as Google and Apple, the gilded class of technology employees and Wall Street types continues to enjoy tremendous economic gains.
The roots of the rage are, in my opinion, traceable to the feelings of powerlessness that have been building since the incursion into our lives of the microprocessor and the computer. At first, we greeted computers with a sense of wonder. Simple things such as spreadsheets, word processors, and arcade-quality video games could be run on tiny boxes in our living rooms!
The technology wove deeper into our lives. E-mail replaced paper mail. Generations of Americans will never write a full letter by hand. Social networks rekindled lost connections and spread good tidings. Discussions flourished. Maps went from the glovebox to the smartphone, and then replaced our own sense of navigation with computer-generated GPS turn-by-turn guidance so prescient that neither I nor most of my friends can remember the last time we printed out directions to a party or a restaurant.
As the new electronics systems grew smarter, they steadily began to replace many human activities. Mind-numbing phone menu trees replaced customer-service reps. In factories, robots marched steadily inward, thinning the ranks of unskilled and semi-skilled human workers even as efficiency soared and prices of the goods produced plummeted. This happened not only in the United States but also in China and other cheap-labor locales; a robot costs the same in Shanghai or Stuttgart or Chicago.
And, around the time when computers first arrived, we began to experience a stubborn stagnation. Wages for the middle class remained depressed. The optimism of the baby-boomer era gave way to pessimism as the industrial heartlands hollowed out. Even the inevitable economic cycles seemed less forgiving. In the 1990s and early 2000s, the United States began to experience so-called jobless recoveries: frustrating episodes in which, though economic growth registered a strong bump, job and wage growth remained weak in comparison with historical norms.
In the United States, a creeping fear grew with each generation that the promise of a life better than their parents' would go unfulfilled. Meanwhile, computers and systems started advancing at an exponential pace, getting faster, smaller, and cheaper. Algorithms began to replace even lawyers, and we began to fear that the computer was going to come for our jobs, someday, somehow: just wait.
As income inequality grew, the vast majority of the benefits of economic growth in wages and wealth went to the top 5 percent of the world's society. The top 1 percent reaped the biggest rewards, far out of proportion to their numbers.
None of this is to say that Americans are materially worse off than they were forty years ago. Today, we own more cars, our houses are larger, our food is fancier and cheaper. A supercomputer, the iPhone or the latest Android model, fits in our back pocket. But human beings tune out these sorts of absolute gains and focus on changes in relative position. With that focus, a dystopian worldview is logical and perhaps inevitable. The ghost in the machine becomes a handful of culprits. Politicians fail us because they cannot turn back the clock to better times (which, in real terms, were actually poorer, more dangerous, and shorter-lived). The banks and other big businesses treat humans as pawns.
So it is the soulless technology that is taking away our jobs and our dignity. But we as individuals can help control and influence it. The public outcry[2] and e-mail deluge directed at the U.S. Congress over the Stop Online Piracy Act[3] and the Protect IP Act[4] are examples. Those laws sought to make it harder to share music and movies online. A campaign mounted by millions of ordinary citizens to deluge Washington, D.C., with e-mails and phone calls overnight flipped politicians from pro to con, overcoming the many millions of dollars in lobbying spent by the entertainment industry.
Frustration taken too far in the other direction, however, can bring out our worst Luddite impulses. The protesters flinging feces at the Google-buses in downtown San Francisco gave voice to the sense that rich techies are taking over the City by the Bay; but the protest was based on scant logic. The private buses were taking cars off the roads, reducing pollution, minimizing traffic, and fighting global warming. Could flinging feces at a Google-bus turn back the clock and reduce housing prices to affordable levels?
The 2016 presidential campaign was the national equivalent of the Google-bus protests. The supporters of Donald Trump, largely white and older, wanted to turn back the clock to a pre-smartphone era when they could be confident that their lives would be more stable and their incomes steadily rising. The Bernie Sanders supporters, more liberal but also mostly white (albeit with great age diversity), wanted to turn back the clock to an era when the people, not the big corporations, controlled the government. We have seen violent protests in Paris and elsewhere against Uber drivers. What sorts of protests will we see when the Uber cars no longer have drivers and the rage is directed only at the machine itself?
So easily could the focus of our discontent turn to the technology and systems that hold the promise to take us to a life of unimaginable comfort and freedom. At the same time, as I discussed in the introduction, the very technology that holds this promise could also contribute to our demise. Artificial Intelligence, or A.I., is both the most important breakthrough in modern computing and the most dangerous technology ever created by man. Remarkably, humanity has time and again successfully navigated such difficult passages from one era to the next. The transitions have not come without struggle, conflict, and missteps, but in general they were successful once people accepted the future and sought to control it, or at least to make better-informed decisions about it.
This is the challenge we have ahead: to involve the public in making informed choices so that we can create the best possible future, and to find ways to handle the social upheaval and disruption that inevitably will follow.
2 Welcome to Moore's World
Parked on the tarmac of Heathrow Airport, in London, is a sleek airliner that aviation buffs love. The Concorde was the first passenger airliner capable of flying at supersonic speed. Investment bankers and powerful businessmen raved about the nearly magical experience of going from New York to London in less than three hours. The Concorde was and, ironically, remains the future of aviation.
Unfortunately, all the Concordes are grounded. Airlines found the service too expensive to run and unprofitable to maintain. The sonic boom angered communities. The plane was exotic and beautiful but finicky. Perhaps most important of all, it was too expensive for the majority, and there was no obvious way to make its benefits available more broadly. This is part of the genius of Elon Musk as he develops Tesla: that his luxury company is rapidly moving downstream to become a mass-market player. Clearly, though, in the case of the Concorde, the conditions necessary for a futuristic disruption were not in place. They still are not, although some people are trying, including Musk himself, with his Hyperloop transportation project.
Another anecdote from London: in 1990, a car service called Addison Lee launched to take a chunk out of the stagnant taxi market. The service allowed users to send an SMS message to call for a cab, and a software-driven, computerized dispatch system ensured that drivers would pick up the fare seeker anywhere in the city within minutes.[5] This is, of course, the business model of Uber. But Addison Lee is available only in London; its management has never sought to expand to new cities.
Addison Lee was most recently sold to private-equity firm Carlyle Group for an estimated £300 million.[6] In late 2016, Uber was valued at around $70 billion,[7] and there were predictions it would soon be worth $100 billion, two or three hundred times the worth of Addison Lee. That's because each of us can use the same Uber application in hundreds of cities around the world to order a cab that will be paid for by the same credit card, and we have a reasonable guarantee that the service will be of high quality. From day one, Uber had global ambition. Addison Lee had the same idea but never pursued the global market.
This ambition of Uber's extends well beyond cars. Uber's employees have already considered the implications of their platform and view Uber not as a car-hailing application but as a marketplace that brings buyers and sellers together. You can see signs of their testing the marketplace all the time, ranging from comical marketing ploys such as using Uber to order an ice-cream truck or a mariachi band, to the really interesting, such as "Ubering" a nurse to offer everyone in the office a flu vaccine. Uber's CEO, Travis Kalanick, openly claims that his service will replace car ownership entirely once self-driving car fleets enter the mainstream.[8] What will happen to the humans who drive for Uber today remains an open question.
So what makes conditions ripe for a leap into the future in any specific economic segment or type of service? There are variations across the spectrum, but a few conditions tend to presage such leaps. First, there must be widespread dissatisfaction, either latent or overt, with the status quo. Many of us loathe the taxi industry (even if we often love individual drivers), and most of us hate large parts of the experience of driving a car in and around a city. No one is totally satisfied with the education system. Few of us, though we may love our doctors, believe that the medical system is doing its job properly, and scary stats about deaths caused by medical errors (now understood to be the third-leading cause of death in the United States) bear out this view. None of us likes our electric utility or our cell-phone provider or our cable-broadband company in the way we love Apple or enjoy Ben & Jerry's ice cream. Behind all of these unpopular institutions and sectors lies a frustrating combination of onerous regulations, quasi-monopolistic franchises (often government sanctioned) or ownership of scarce resources (radio spectrum, medallions, permits, etc.), and politically powerful special interests.
That dissatisfaction is the systemic requisite. Then there are the technology requisites. All of the big (and, dare I say, disruptive) changes we now face can trace their onset and inevitability to Moore's Law. This is the oft-quoted maxim that the number of transistors that can be packed onto a chip doubles roughly every eighteen months to two years. Moore's Law explains why the iPhone or Android phone you hold in your hand is considerably faster than supercomputers were decades ago and orders of magnitude faster than the computers NASA employed in sending a man to the moon during the Apollo missions.
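To get a feel for how quickly that compounding adds up, here is a minimal sketch in Python. The numbers are assumptions for illustration only: a clean two-year doubling period and a starting budget of about 2,300 transistors, roughly the count of an early-1970s microprocessor. Real chips do not track the curve this neatly.

```python
# Illustrative arithmetic only: assumes a fixed two-year doubling period
# and a starting count of 2,300 transistors (circa 1971).

def transistor_estimate(start_count, start_year, year, doubling_period_years=2.0):
    """Project a transistor count forward under a fixed doubling period."""
    doublings = (year - start_year) / doubling_period_years
    return start_count * 2 ** doublings

for year in (1971, 1981, 1991, 2001, 2011, 2021):
    print(year, f"{transistor_estimate(2300, 1971, year):,.0f}")
```

Fifty years of doubling every two years is twenty-five doublings, a factor of roughly 33 million, which is why a pocket phone can outrun a room-sized supercomputer of an earlier era.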
Disruption of societies and human lives by new technologies is an old story. Agriculture, gunpowder, steel, the car, the steam engine, the internal-combustion engine, and manned flight all forced wholesale shifts in the ways in which humans live, eat, make money, or fight each other for control of resources. This time, though, Moore's Law is driving the pace of change and innovation to increase exponentially.
Across the spectrum of key areas we are discussing (health, transport, energy, food, security and privacy, work, and government), the rapid decrease in the cost of computers is poised to drive amazing changes in every field that is exposed to technology; that is, in every field. The same trend applies to the cost of the already cheap sensors that are becoming the backbone both of the web of connected devices called the Internet of Things (I.o.T.) and of a new network that bridges the physical and virtual worlds. More and more aspects of our world are incorporating the triad of software, data connectivity, and handheld computing (the so-called technology triad) that enables disruptive technological change.
Another effect of this shift will be that any discrete analog task that can be converted into a networked digital one will be converted, including many tasks that we have long assumed a robot or a computer would never be able to tackle. Robots will seem human-like and will do human-like things.
A good proportion of experts in artificial intelligence believe that such a degree of intelligent behavior in machines is several decades away. Others often point to a book by the most sanguine of all the technologists, the noted inventor Ray Kurzweil, who, in How to Create a Mind: The Secret of Human Thought Revealed, posits: "[F]undamental measures of information technology follow predictable and exponential trajectories."[9] He calls this hypothesis the "law of accelerating returns."[10] We've discussed the best-recognized of these trajectories, Moore's Law. But we are less familiar with the other critical exponential growth curve to emerge in our lifetime: the volume of digital information available on the Internet and, now, through the Internet of Things. Kurzweil measures this curve in "bits per second transmitted on the Internet." By his measure (and that of others, such as Cisco Systems), the amount of information buzzing over the Internet is doubling roughly every 1.25 years.[11] As humans, we can't keep track of all this information or even know where to start. We are now creating more information content in a single day than we created in decades or even centuries in the pre-digital era.
The key corollary that everyone needs to understand is that as any technology becomes addressable by information technology (i.e., computers), it becomes subject to the law of accelerating returns. For example, now that the human genome has been translated into bits that computers process, genomics becomes de facto an information technology, and the law of accelerating returns applies. When the team headed by biochemist and geneticist J. Craig Venter announced that it had effectively decoded 1 percent of the human genome, many doubters decried the slow progress. Kurzweil declared that Venter's team was actually halfway there, because, on an exponential curve, the time required to get from 0.01 percent to 1 percent is equal to the time required to get from 1 percent to 100 percent.
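The arithmetic behind the "halfway there" claim is worth spelling out; this is a sketch of the reasoning, not a quotation from Kurzweil. Each leg of the journey is the same hundredfold multiplication, so under a fixed doubling period each leg takes the same number of doublings:

\[
\frac{1\%}{0.01\%} \;=\; \frac{100\%}{1\%} \;=\; 100 \;\approx\; 2^{6.64}.
\]

About six and two-thirds doublings take you from 0.01 percent to 1 percent, and the same number of doublings, hence the same amount of time, takes you from 1 percent to 100 percent.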
Applying this law to real-world problems and tasks is often far more straightforward than it would seem. Many people said that a computer would never beat the world's best chess grandmaster. Kurzweil calculated that a computer would need to evaluate all possible combinations of the 100,000 possible board layouts in a game, rapidly and repeatedly, in a matter of seconds. Once that threshold was crossed, a computer would beat a human. Kurzweil mapped the threshold to Moore's Law and bet that the curves would cross in 1998, more or less. He was right.
To be clear, a leap in artificial intelligence that would make computers smarter than humans in so-called general intelligence is a task far different from and more complicated than a deterministic exercise such as beating a human at chess. So how long it will be until computers leap to superhuman intelligence remains uncertain.
There is little doubt, though, about the newly accelerating shifts in technology. The industrial revolution unfolded over nearly one hundred years. The rise of the personal computer spanned forty-five years and still has not attained full penetration on a global scale. Smartphones are approaching full penetration in half that period. (For what it's worth, I note that tablet computers attained widespread usage in the developed world even faster than smartphones had.)
Already the general consensus among researchers, NGOs, economists, and business leaders holds that smartphones have changed the world for everyone.
It's easy to see why they all agree. In the late 1980s, a cell phone of any kind, let alone a smartphone, remained a tremendous luxury. Today, poor farmers in Africa and India consider the smartphone a common tool by which to check market prices and communicate with buyers and shippers. This has introduced rich sources of information into their lives. Aside from calling distant relatives as they could on their earlier cell phones, they can now receive medical advice from distant doctors, check prices in neighboring villages before choosing a market, and send money to a friend. In Kenya, the M-Pesa network, using mobile phones, has effectively leapfrogged legacy banking systems and created a nearly frictionless transaction-and-payment system for millions of people formerly unable to participate in the economy except through barter.[12]
The prices of smartphones, following the curve of Moore's Law downward, have fallen so much that they are nearly ubiquitous in vibrant but still impoverished African capitals such as Lagos. Peter Diamandis observed, in his book Abundance: The Future Is Better Than You Think, that these devices provide Masai warriors in the bush with access to more information than the president of the United States had access to about two decades ago.[13] And we are early in this trend. Within five years, the prices of smartphones and tablet computers as powerful as the iPhones and iPads we use in the United States in 2017 will fall to less than $30, putting into the hands of all but the poorest of the poor the power of a connected supercomputer. By 2023, those smartphones will have more computing power than our own brains.* (That wasn't a typo: at the rate at which computers are advancing, the iPhone 11 or 12 will have greater computing power than our brains do.)
* This is not to say that smartphones will replace our brains. Semiconductors and existing software have thus far failed to pass a Turing Test (by tricking a human into thinking that a computer is a person), let alone provide broad-based capabilities that we expect all humans to master in language, logic, navigation, and simple problem solving. A robot can drive a car quite effectively, but thus far robots have failed to tackle tasks that would seem far simpler, such as folding a basket of laundry. The comprehension of the ever-changing jumble of surfaces that this task entails is something that the human brain does without even trying.
The acceleration in computation feeds on itself, ad infinitum. The availability of cheaper, faster chips makes faster computation available at a lower price, which enables better research tools and production technologies. And those, in turn, accelerate the process of computer development. But now Moore's Law applies, as we have described above, not just to smartphones and PCs but to everything. Change has always been the norm and the one constant; but we have never experienced change like this, at such a pace, or on so many fronts: in energy sources' move to renewables; in health care's move to digital health records and designer drugs; in banking, in which a distributed ledger technology called the blockchain threatens to render financial systems' opaque procedures obsolete.*
It is noteworthy that, Moore's Law having turned fifty, we are reaching the limits of how far a transistor can be shrunk. After all, nothing can be smaller than an atom. But Intel and IBM have both said that they can adhere to the Moore's Law targets for another five to ten years. So the silicon-based computer chips in our laptops will surely match the power of a human brain in the early 2020s, but Moore's Law may fizzle out after that.
What happens after Moore's Law? As Ray Kurzweil explains, Moore's Law isn't the be-all and end-all of computing; the advances will continue regardless of what Intel and IBM can do with silicon. Moore's Law describes only the most recent of five computing paradigms: electromechanical, relay, vacuum tube, discrete transistor, and integrated circuit. In Kurzweil's telling, technology has been advancing exponentially since the advent of evolution on Earth, and computing power has been rising exponentially: from the mechanical calculating devices used in the 1890 U.S. Census, via the machines that cracked the Nazi Enigma code, the vacuum-tube computer that CBS used to predict President Eisenhower's election, and the transistor-based machines used in the first space launches, to the more recent integrated-circuit-based personal computer.
* The blockchain is an almost incorruptible digital ledger that can be used to record practically anything that can be digitized: birth and death certificates, marriage licenses, deeds and titles of ownership, educational degrees, medical records, contracts, and votes. Bitcoin is one of its many implementations.
With exponentially advancing technologies, things move very slowly at first and then advance dramatically. Each new technology advances along an S-curve: an exponential beginning, followed by a flattening out as the technology reaches its limits. As one technology's curve ends, the next paradigm takes over. That is what has been happening, and it is why there will be new computing paradigms after Moore's Law.
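As an illustration only, here is a minimal sketch of an S-curve in Python. It uses the standard logistic function as a stand-in; the ceiling, growth rate, and midpoint are arbitrary assumptions, not parameters taken from any particular technology.

```python
import math

def s_curve(t, ceiling=1.0, growth_rate=1.0, midpoint=0.0):
    """Logistic S-curve: near-exponential growth at first,
    flattening out as the value approaches the ceiling."""
    return ceiling / (1.0 + math.exp(-growth_rate * (t - midpoint)))

# Early values grow roughly exponentially; later values saturate.
for t in range(-6, 7, 2):
    print(t, round(s_curve(t), 4))
```

When one curve saturates, the next paradigm starts a fresh curve of its own, which is how a long run of stacked S-curves can look like a single unbroken exponential.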
Already, there are significant advances on the horizon, such as the graphics processing unit (GPU), which uses parallel computing to create massive increases in performance, not only for graphics but also for neural networks, whose design loosely mirrors the architecture of the human brain. There are 3-D chips in development that can pack circuits in layers. IBM and the Defense Advanced Research Projects Agency are developing cognitive-computing chips. New materials, such as gallium arsenide, carbon nanotubes, and graphene, are showing huge promise as replacements for silicon. And then there is the most interesting, and scary, technology of all: quantum computing.
Instead of encoding information as either a zero or a one, as today's computers do, quantum computers will use quantum bits, or qubits, whose states encode an entire range of possibilities by capitalizing on the quantum phenomena of superposition and entanglement. Computations that would take today's computers thousands of years, a quantum computer could perform in minutes.
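For readers who want the notation, here is a minimal sketch using the standard quantum-computing formalism; nothing below is specific to any machine discussed in this book. A single qubit's state is a weighted blend of 0 and 1,

\[
|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1,
\]

and a register of n entangled qubits occupies a joint state carrying up to 2^n such weights at once,

\[
|\Psi\rangle = \sum_{x \in \{0,1\}^n} c_x\,|x\rangle,
\]

which is the "entire range of possibilities" that a well-designed quantum algorithm can exploit.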
So the computer processors that fuel the technologies that are changing our lives are getting ever faster, smaller, and cheaper. There may be some temporary slowdowns as they first proceed along new S-curves, but the march of technology will continue. These technology advances already make me feel as if I am on a roller coaster. I feel the ups and downs as excitement and disappointment. Often, I am filled with fear. Yet the ride has only just started; the best, and the worst, is ahead.
Are we truly ready for this? And, more important, how can we better shape and control the forces of that world in ways that give us more agency and choice?
3 How Change Will Affect Us Personally and Why Our Choices Matter
Imagine a future in which we are able to live healthy, productive lives though jobs no longer exist. We have comfortable homes, in which we can "print" all of the food we need as well as our electronics and household amenities. When we need to go anywhere, we click on a phone application, and a driverless car shows up to take us to our destination. I am talking about an era of almost unlimited energy, food, education, and health care in which we have all of the material things we need.
Another way of looking at this is as a future of massive unemployment, in which the jobs of doctors, lawyers, waiters, accountants, construction workers, and practically every other kind of worker you can think of are done by machines. Instead of having the freedom to drive anywhere you want, you are dependent on robots to take you where you want to go. Gone are the thrill of driving and the satisfaction of working for a living.
Some of us will see these potential changes as a positive, and others will be terrified. Regardless, this is a glimpse into the near future.
In this future there will be many new risks. Privacy will be a thing of the past, as it is already becoming, because the driverless cars will keep track of everywhere we go and everything we do, just as our smartphones already do. Our entire lives, every waking instant, will be recorded in databases. We will read about an international crisis breaking out because a politician is killed or seriously injured when remote hackers hijack a car, a plane, a helicopter, or a medical device. Schools as we know them will no longer exist, because we'll have digital tutors in our homes. Someone you know, maybe you, will experience biometric theft: of DNA or fingerprints or voiceprint or even gait. Man and machine will begin to merge into a single entity, and we will no longer be able to draw a line between the two.
But there is also a much brighter side to this future. You will be a thousand times better informed about your own medical condition than your doctor is today, and all of that knowledge will come from your smartphone. You will live far longer than you expect to right now, because advanced medical treatments will stave off many debilitating diseases. You will pay practically nothing for electricity. You will use a 3-D printer to build your house or a replacement kidney.
Your grandchildren will have an astoundingly good education delivered by an avatar-and children all over the world, in every country, will have an equally good education. There will be no more poverty. We will have plenty of clean water for everyone. We will no longer fight over oil. We won't have any more traffic lights, because the robo-cars won't need them! And no more parking tickets, of course. Best of all, you will have far more time to do what you want to: art, music, writing, sports, cooking, classes of all sorts, and just daydreaming.
Early disruptions arising from computing power and the Internet provided faster tools for doing what we had been doing, so we took advantage of spreadsheets, word processing, e-mail, and mobile phones. But substantial advances in health, education, transportation, and work have remained elusive. And, though the Internet gave us access to a lot of information, it has done nothing to augment our intelligence. Kayak.com, for example, allows me to search for flights on complex routes, but no product exists today that can tell me the best route for my particular needs and preferences. For that, I still need to go to a thriving anachronism: a very smart human travel agent. (Yes, they still exist, and they are doing quite well, thanks to high-paying high-end travelers who want highly personalized service and truly expert guidance.)
Now, things are moving ever faster; the amount of time it takes for a new technology to achieve mass adoption is shrinking. It took about two decades to go from operation of the first AM radio station to widespread radio use in America. The video recorder also took about two decades to achieve widespread use. The same goes for the home PC. But the Internet has accelerated adoption.
Consider YouTube, which created a paradigm shift as significant as the VHS recorder and radio; and video search, which shifted the focus from text and static graphics to videos and more-dynamic content. These spawned an entire new economy catering to video on the Web-from content-delivery networks to advertising networks to a new production-studio system catering to YouTube production. They also created a new profession: YouTube stars, some of whom make millions a year and go on tour.
Founded in 2005, YouTube gained mass adoption in eighteen months. Such stunning acceleration of technological innovations has broad repercussions for many other technological realms susceptible to digitization.
The key links missing from previous innovations were the raw computational speed necessary to deliver more-intelligent insights, and ways to effortlessly link the software and the hardware to other parts of our lives. In short, fast computation was scarce and expensive; wireless connectivity was limited and rare; and hardware was a luxury item. That has all changed in the twenty-first century.
In the mid-2000s, broadband Internet began to change the way in which we viewed the world of data and communications. Voice communication rapidly became a commodity service. In the early days of the Internet, we paid for a slow dial-up connection and viewed it as a service with which e-mail access was bundled. Today, broadband connectivity is commonplace in much of the United States.
Even this no longer seems enough. Today, with 4G/LTE connections running circles around the wireless links of five years ago, we find 3G connectivity over a wireless network painful. Even 4G/LTE can seem maddeningly slow, especially in population-dense areas, and we jealously eye communities such as Kansas City, Missouri, where Google Fiber has brought affordable greased-lightning connections to thousands of homes.
Nearly ubiquitous data connectivity, at relatively high speeds, is around the corner. Wi-Fi is available almost everywhere in the materially comfortable world, and it will become significantly faster. New projects such as Google's Project Loon could overlay even faster networks via globe-girdling constellations of high-altitude balloons plying the jet stream and carrying wireless networking gear. These projects promise to bridge the connectivity gap for the billions of people, in Africa and Latin America and Asia, who still don't have broadband.
The triad of data connectivity, cheap handheld computers, and powerful software will enable further innovation in everything else that can be connected or digitized, and that will change the way we live our lives, at least to the extent that we accept those innovations.
Every major change in the technologies underlying our lifestyles, from gunpowder to steel to the internal combustion engine to the rise of electricity, has required a leap of faith and a major break from the past. Imagine the fear of traveling long distances astride a rickety vehicle that burned an explosively flammable liquid and rode upon black rubber tubes filled with air. What could possibly go wrong! Yet people quickly overcame their fear of cars and focused instead on how to improve safety and reliability.
I don't watch TV any more, because all of the shows I want to watch are on YouTube (or Netflix). Any topic I want to learn about appears in a video or a web page. Cutting the cord of my cable subscription was a difficult choice for me because I worried about missing out on the news, but I have found even better sources of information via the super-fast Internet access that I have. I made that leap of faith and haven't looked back.
Yes, the leaps of faith required to embrace the forthcoming technologies, whether warmly or even haltingly, are psychologically imposing. It won't be as easy a decision as canceling a cable subscription. Let Google drive your children to school? Trust a surgical robot to perform cancer surgery or supply a crucial diagnosis? Allow a doctor to permanently alter your own DNA in order to cure a disease? Let a computer educate your children and teach them the piano? Have a robot help your elderly parent into a slippery bathtub? Those are the kinds of decisions that will be commonplace within one or two decades. Everything is happening increasingly faster as technology accelerates and the world moves from analog to digital, from wet-ware (read: your brain) to software, from natural to super-biological and super-natural. This is the really, really big shift; and it is upon us far sooner than we might have expected it to be. Each of us, as individuals, must prepare to ask ourselves which changes we will accept, influence, speed up, slow down, or outright stop. One thing is certain: you will have less time to make up your mind than you've ever had before, and we know that change can be extremely disruptive if it is unexpected or uninvited.
Adapting to change will not be easy; sometimes it will seem traumatic. But my hope is that your thinking about the future will make you less a victim of circumstance and more a voting citizen in the messy democracy that guides technological change; that you will meet it as a chooser and a navigator, rather than as a passenger.
Why Our Individual Choices Matter
Why should you have to worry about the fate of humankind when other people should be doing so, including our government and business leaders? The reason is that they are not worrying about the fate of humankind, at least not enough, and not in the right ways.
Technology companies are trying to build standards for the Internet of Things; scientists are attempting to develop ethical standards for human-genome editing; and policy makers at the FAA are developing regulations for drones. They are all narrowly focused. Very few people are looking at the big picture, because the big picture is messy and defies simple models. With so many technologies all advancing exponentially at the same time, it is very hard to see the forest for the trees; as you will find as you read this book, each technology on its own can be overwhelming.
I don't fault our policy makers and politicians, by the way. Laws are, after all, codified ethics. And ethics is a consensus that a society develops, often over centuries. On many of these very new issues, we as a society have not yet developed any sort of consensus.
Meanwhile, the shortfalls in our legal, governance, and ethical frameworks are growing as technology keeps advancing. When is a drone operator a peeping tom and not just an accidental intruder? When does a scientist researching human DNA cross from therapeutics to eugenics? We need the experts to inform our policy makers. But ultimately the onus is on us to tell our policy makers what the laws should be and what we consider ethical in this new, exponentially changing world.
4 If Change Is Always the Answer, What Are the Questions?
Technology Seeks Society's Forgiveness, Not Permission
A key difference between today's transformations and past ones is that technological evolution has become much faster than the ability of existing regulatory, legal, and political frameworks to assimilate and respond to it. To rephrase an earlier point, it's a Moore's Law world; we just live in it.
Disruptive technology isn't entirely new. Back in the days of the robber barons, the ruthless capitalists of nineteenth-century America built railroads without seeking political permission. And, more recently, in the personal-computer revolution, company employees brought their own computers to work without telling their I.T. departments. What is new is the degree of regulatory and systemic disruption that the savviest companies in this technology revolution are causing by taking advantage of the technology triad of data connectivity, cheap handheld computers, and powerful software to grab customers and build momentum before anyone can tell them to stop what they are doing.
In 2010, Uber had no market share in providing rides to members of the U.S. Congress and their staffs. By 2014, despite the service's continuing illegality in many of these political leaders' constituencies, Uber's market share among Congress was a stunning 60 percent.[14] Talk about regulatory capture. Companies such as Uber, Airbnb, and Skype play a bottom-up game to make it nearly impossible for entrenched legacy interests and players to dislodge or outlaw newer ways of doing things.
In fact, most of the smartphone-based healthcare applications and attachments that are on the market today are, in some manner, circumventing the U.S. Food and Drug Administration's cumbersome approval process. As long as an application and sensor are sold as a patient's reference tool rather than for a doctor's use, they don't need approval. But these applications and attachments increasingly are replacing real medical opinions and tests.
Innovators' path to market isn't entirely obstacle-free. The FDA was able to quickly and easily ban the upstart company 23andMe from selling its home genetics test kits to the public, though it later partly revised its decision.[15] Uber has been fighting regulatory battles in Germany and elsewhere, largely at the behest of the taxi industry.[16] But the services these two companies provide now seem nearly inevitable, thanks to the huge public support they have won through the tremendous benefits they offer in their specific realms.
Ingeniously, companies have used the skills they gained by generating exponential user growth to initiate grassroots political campaigns that even entrenched political actors have trouble resisting. In Washington, D.C., when the City Council sought to ban Uber, the company asked its users to speak up. Almost immediately, tens of thousands of phone calls and e-mails clogged switchboards and servers, giving a clear message to the politicos that banning Uber might have a severe political cost.
What these companies did was to educate and mobilize their users to tell their political leaders what they wanted. And that is how the process is supposed to work.
"That is how it must be, because law is, at its best and most legitimate-in the words of Gandhi-'codified ethics,'" says Preeta Bansal, a former general counsel in the White House.[17] Laws and standards of ethics are guidelines accepted by members of a society, and these require the development of a social consensus.
Take the development of copyright laws, which followed the creation of the printing press.[18] When first introduced in the 1400s, the printing press was disruptive to political and religious elites because it allowed knowledge to spread and experiments to be shared. It helped spur the decline of the Holy Roman Empire, through the spread of Protestant writings; the rise of nationalism and nation-states, due to rising cultural self-awareness; and eventually the Renaissance. Debates about the ownership of ideas raged for about three hundred years before the first statutes were enacted by Great Britain.
Similarly, the steam engine, the mass production of steel, and the building of railroads in the eighteenth and nineteenth centuries led to the development of intangible property rights and contract law. These were based on cases involving property rights over track, tort liability for harm to cattle and employees, and eminent domain (the power of the state to forcibly acquire land for public use).
Our laws and ethical practices have evolved over centuries. Today, technology is on an exponential curve and is touching practically everyone, everywhere. Changes of a magnitude that once took centuries now happen in decades, sometimes in years. Not long ago, Facebook was a dorm-room dating site, mobile phones were for the ultra-rich, drones were multimillion-dollar war machines, and supercomputers were for secret government research. Today, hobbyists can build drones, and poor villagers in India access Facebook accounts on smartphones that have more computing power than supercomputers of yesteryear.
This is why you need to step in. It is the power of the collective, the coming together of great minds, that will help our lawmakers develop sensible policies for directing change. There are many ways of framing the problems and solutions. I am going to suggest three questions that you can ask to help you judge the technologies that are going to change our lives.
THREE QUESTIONS TO ASK
When I was teaching an innovation workshop at Tecnológico de Monterrey in Chihuahua, Mexico, last year, I asked the attendees whether they thought that it was moral to allow doctors to alter the DNA of children to make them faster runners or improve their memory. The class unanimously told me no. Then I asked whether it would be OK for doctors to alter the DNA of a terminally ill child to eliminate the disease. The vast majority of the class said that this would be a good thing to do. In fact, both cases were the same in act, even if different in intent.
I taught this lesson to underscore that advanced technology invariably has the potential both for uses we support and for uses we find morally reprehensible. The challenge is figuring out whether the potential for good outweighs the potential for bad, and whether the benefit is worth the risks. Much thought and discussion with friends and experts I trust led me to formulate a lens or filter through which to view these newer technologies when assessing their value to society and mankind.
This boils down to three questions relating to equality, risks, and autonomy:
1. Does the technology have the potential to benefit everyone equally?
2. What are the risks and the rewards?
3. Does the technology more strongly promote autonomy or dependence?
This thought exercise certainly does not cover all aspects that should be considered in weighing the benefits and risks of new technologies. But, as drivers in a car that's driverless (as all of our cars soon will be), if we are to rise above the data overload and see clearly, we need to limit and simplify the amount of information we consider in making our decisions and shaping our perceptions.
Why these three questions? To start with, note the anger of the electorates of countries such as the United States, Britain, and Germany, as I discussed earlier. And then look ahead at the jobless future that technology is creating. If the needs and wants of every human being are met, we can deal with the social and psychological issues of joblessness. This won't be easy, by any means, but at least people won't be acting out of dire need and desperation. We can build a society with new values, perhaps one in which social gratification comes from teaching and helping others and from accomplishment in fields such as music and the arts.
And then there are the risks of technologies. As in the question I asked my students at Tecnológico de Monterrey, eliminating debilitating hereditary diseases is a no-brainer; most of us will agree that this would be a constructive use of gene-editing technology. But what about enhancing humans to provide them with higher intelligence, better looks, and greater strength? Why stop at one enhancement, when you can, for the same cost, do multiple upgrades? We won't know where to draw the line and will exponentially increase the risks. The technology is, after all, new, and we don't know its side effects and long-term consequences. What if we mess up and create monsters, or edit out the imperfections that make us human?
And then there is the question of autonomy. We really don't want our technologies to become like recreational drugs that we grow dependent on. We want greater autonomy: the freedom to live our lives the way we wish to and to fulfill our potential.
These three questions are tightly interlinked. There is no black and white; it is all shades of gray. We must all understand the issues and have our say.
In the following chapters, we will apply these questions to relevant, developing, or already popular technologies as case studies.
Are you ready?