To Control Dangerous New Technologies, Congress Needs Help
By Peter Montague
[An edited version of this essay appeared on Truthout.org Oct. 1, 2020.]
Ever since humans learned to manage fire, technology has been a double-edged sword. It has made the modern world possible, yet every new technology has created its own new problems. In fact, “Each new technology creates more problems than it solves,” says Kevin Kelly, former editor of Wired magazine, in his book, What Technology Wants. [pg. 192]
Writing, printing, electricity, the gasoline engine, artificial fertilizer, air conditioning, the transistor, the internet — these and thousands of other innovations have created vast wealth and greatly expanded the possibilities for human well-being. On the other hand, technology has also created the climate emergency, the H-bomb, and a world so filled with plastic that you are probably breathing plastic microparticles as you read this, with unknown consequences for your health. As technologies become more powerful, and therefore possibly more dangerous, the question becomes how to get the most benefits with the least harm. In recent decades, this question has taken on new urgency.
Until 1945, humans didn’t really have the capacity to eradicate themselves from the planet or to end civilization. Now they do. Nuclear weapons, nanotechnology, synthetic biology, and artificial intelligence all contain the possibility of ending human life on Earth, or at least pushing us back into the stone age. They also offer great benefits, if we can control the hazards.
Nanotechnology is the manipulation of matter measuring less than 100 nanometers (one hundred billionths of a meter) in size, far smaller than the eye can see. A human hair is about 80,000 nanometers wide. Nanotechnology builds new things by assembling them atom by atom, molecule by molecule. For better or worse, many consumer products now contain nano-size particles. In his 1986 book on the glowing promise of nanotechnology, Eric Drexler saw the future in nano-scale robots, which he labeled “assemblers.” These “nanobots” would be able to assemble individual atoms under software control, allowing the efficient creation of anything and everything. A nano-manufacturing household appliance would allow everyone to grab software off the internet to guide the creation of anything they wanted or needed. We’re not there yet, but a lot of people are working on it. The downside, Drexler said, would be that nanobot assemblers might replicate themselves quickly and cover the planet with “grey goo,” snuffing out humans. Drexler is optimistic the worst can be avoided, but others are not so sure.
Synthetic biology is the manipulation of DNA molecules to create new forms of life that have never existed on Earth before. These new life-forms would be designed and constructed in laboratories for specific purposes: to create fuel oil, for example, or to manufacture a drug to cure a particular disease. In 2010 a U.S. Presidential Bioethics Commission recommended against government regulation of synthetic biology, opting instead for self-regulation by the practitioners of synthetic biology. Synthetic biology, nanotechnology and artificial intelligence are now being combined to assemble new life-forms under software control. What could possibly go wrong?
Artificial intelligence (AI) is the creation of machines that can approach the human capacity to think. The holy grail is “artificial general intelligence” (AGI) — a machine that has the same mental (and perhaps emotional) abilities as a human. So-called “narrow AIs” already exist — for example, machines that can beat all humans at strategy games like chess and Go. They’re good at one task, but they do not have general intelligence.
Thousands of the world’s smartest people, working with hundreds of billions of invested dollars, are working feverishly to produce the first AGI. Among AGI researchers, there is widespread agreement that controlling an AGI is a very difficult problem, and that failure to control such a machine could be fatal for humanity if the machine were given a poorly-specified goal. For example, it might be told to design the smartest-possible machine. To do that, it might decide it needed far more computing power, so it might use the internet to tap into an automated nanotechnology laboratory, take control of nano-manufacturing equipment, and start building more computers — and, in so doing, fairly quickly cover large portions of the Earth with computers and the energy apparatus to power them. This may sound far-fetched, but many very smart and knowledgeable people are convinced it’s a real problem. To stop it, we could just unplug the machine — or could we? A machine with artificial general intelligence would quickly — perhaps very quickly — become far more intelligent than the smartest human. Even if a superintelligent AGI were contained in a self-standing computer not linked to the internet, it might turn out to be extremely clever at convincing its human minders to let it loose. Once loose, it would quickly become a formidable presence on the planet.
Each new technology creates a new set of problems, and each new problem in turn requires one or more solutions, each of which is likely to create additional problems. As this trend continues, society becomes more complex. Increasing complexity in turn requires more complicated rules to define acceptable behavior. For example, now that school lecturers are using laser pointers to highlight their PowerPoint slides, anyone can buy a laser pointer and aim it at an airplane flying overhead — just for fun — so government has been forced to make it illegal to shine your laser pointer at airplanes, and these rules have to be enforced, which creates new requirements and duties for government.
In sum, technical innovation creates new problems, requiring new solutions, which create more complexity, which in turn requires government to grow. Therefore, for anyone whose political ideology is based on a commitment to small government and so-called “free” markets, technological advance poses a conundrum. Without innovation, capitalism as we know it will stagnate. So far, there is no such thing as “steady-state capitalism” — it’s either growing or it’s collapsing into recession or depression. To grow, markets need (and feed) continuous technical innovation, which creates new problems, which require solutions, which increase both technical and social complexity, which requires more capable government.
Periodically, Republicans rebel against this trend, intentionally shrinking government, but in so doing, they necessarily move society back toward a more damaged physical world, and a more dog-eat-dog social world.
Furthermore, when the powerful shrink government, society loses the ability to anticipate and understand problems that only governments can solve. Government is the only entity that can impartially assess and control new technologies. The market is not going to solve the lasers-pointed-at-airplanes problem. Wall Street is not going to fix the climate emergency or control the proliferation of nuclear weapons or require that plastics be made biodegradable so they do not accumulate in the environment. Corporate managers may personally desire to be “socially responsible” but so long as they have shareholders expecting a hefty return on investment, they feel relentless pressure to give short shrift to workers, community and the natural world. They feel compelled to innovate.
No one (except perhaps Ted Kaczynski) is proposing that we cease technical innovation. Therefore, the question is, how can technology be managed to improve the chances that the new inventions will help us more than they harm us? And which technical innovations are so dangerous that they could wipe out the human species or permanently destroy civilization?
For eons, humans created and solved problems by trial and error. That is how we learned. However, with technologies that can extirpate humanity, trial-and-error learning is no longer an option. We don’t know what the effects of nuclear war might be — and we can’t run the “nuclear winter” experiment to find out if clouds of dust will blot out the sun, shut down agriculture, and starve all of humanity to death. Trial and error won’t do. We have to prevent the experiment.
The answer is a human invention called “technology assessment.” Between 1972 and 1995 Congress supported its own in-house think tank called the Office of Technology Assessment, or OTA. Congress defunded OTA in 1995, thus losing much of its capacity to foresee and forestall environmental and social harm that could be avoided or mitigated by sensible government policies and actions. During its brief existence, OTA produced 750 high-quality reports on a wide range of problems that Congress was trying to solve.
Congressmen Mark Takano (D-Calif.) and Sean Casten (D-Ill.) have proposed legislation to restore OTA, co-sponsored by 50 Democrats and one Republican. The enabling legislation that created OTA in 1972 was never repealed; Republicans in 1995 simply defunded OTA. OTA was never expensive; at its peak it had about 140 employees and an annual budget just under $35 million. Today, faced with the rapid proliferation of technologies more powerful and more dangerous than any previously known, restoring OTA would seem like a smart investment.