Edited by Kate Canute
Technology gets a lot of flak from those who fear the prevalence of computers in our daily lives or who don’t understand its many benefits. Some fear that Artificial Intelligence, or “AI”, might eventually lead to humanity’s destruction. Yet technology has also helped us overcome many problems. Our technological tools extend the range of what we can achieve. Technology is the externalization of human imagination, and it has helped humans evolve rapidly over the last several thousand years.
There have been three waves of evolutionary acceleration fueled by the development of new, more advanced technology. The first wave happened a few thousand years ago, when humans transitioned from hunting and gathering to agriculture, propelled by the ability to create basic tools such as plows, hammers, and spears. The second wave, fueled by the industrial revolution a few hundred years ago, brought assembly lines, standardization, and organized workflows. During the third wave, which started a few decades ago, humans invented information-processing systems such as computers and the Internet, which are rapidly changing humanity’s relationship with the tools it invents.
Technology has also made a large impact on biological and sociological evolution. The creation of stone tools allowed human jaws to become smaller. When we discovered that fire could cook our food, the energy and time savings of cooked meals freed up the cognitive resources necessary for the emergence of culture, religion, and the arts. Technology is a scaffolding that extends our thoughts, our reach, and our vision out into the world around us. In many ways, our very identity is the product of feedback loops between us and our tools. These tools become extended appendages that, though not part of our biological tissue, are a critical part of our selves. They are part of our extended phenotype: like the termite’s mound, they are who we are.
But one question continues to haunt us: should technology be tasked with making moral and ethical decisions?
Let’s look at self-driving cars from the perspective of law and regulation. Regulation is often seen as an inhibitor of advancement: while technology brings to mind new and bigger markets, innovative ventures, and rapid growth, regulation brings to mind government intrusion, bureaucracy, and impediments to discovery.
The newly emerging self-driving car industry is one place where technology and regulation lock horns. Self-driving cars can help ensure our safety: roughly 90% of vehicular deaths are at least partially due to human error, such as texting while driving or driving under the influence of alcohol or drugs. By removing human drivers from our roads, automated cars can make a positive impact.
In a perfect scenario, automated cars would run in sync with one another. They would respond perfectly to the rules of the road, and no lives would be lost. But is this realistic? We will always have unpredictable weather, mechanical malfunctions, and errors made when servicing or programming cars. Even with driverless cars, there will be times when a crash is inevitable. The question is, how should our driverless cars deal with a dangerous or irregular scenario? What moral compass will be encoded in our cars’ accident algorithms? The classic trolley problem of Philosophy 101 fame opens a whole new set of problems that an algorithm will need to deal with: the ethical dilemma of placing a value on human lives.
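The dilemma can be made concrete with a toy sketch. Everything here is a hypothetical illustration, not any manufacturer’s actual logic: the maneuver names, the predicted numbers, and above all the cost model itself are invented for demonstration. The point is that some engineer must choose a number like `injury_weight` in advance, which is exactly the pre-assigned moral weighting the essay describes.

```python
# Toy "accident algorithm": pick the maneuver with the lowest expected cost.
# All names, numbers, and the cost model are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    expected_injuries: float  # predicted number of injuries for this choice
    property_damage: float    # predicted property damage in dollars

def choose_maneuver(options, injury_weight=1_000_000.0):
    """Return the maneuver minimizing total expected cost.

    injury_weight converts predicted injuries into dollars -- the
    'numerical weight on human values' someone must decide beforehand.
    """
    return min(
        options,
        key=lambda m: m.expected_injuries * injury_weight + m.property_damage,
    )

options = [
    Maneuver("brake hard",  expected_injuries=0.10, property_damage=5_000),
    Maneuver("swerve left", expected_injuries=0.02, property_damage=30_000),
    Maneuver("stay course", expected_injuries=0.50, property_damage=0),
]

print(choose_maneuver(options).name)  # swerve left
```

Note that changing the single parameter changes the “moral” outcome: with a low `injury_weight`, the same function would choose to stay the course and accept the injuries. That sensitivity is the heart of the regulatory problem.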
There are plenty of other things to consider when discussing self-driving cars. What about collateral damage or financial harm? What about liability or vehicular strength? Would cars that could withstand greater damage be allowed to take greater risks? These are all fascinating yet difficult-to-answer questions. Driverless cars are creating a brand-new infrastructure that necessitates the analysis and clear determination of our ethics. By removing human instinct from the equation, we are asked to pre-determine actions and assign numerical weights to our most personal values.
Self-driving cars will need new rules, and around the world, countries are already writing new laws to regulate their use. The UK has ruled that anyone who uses a self-driving car must be able to take over driving the vehicle in case anything goes haywire. Similarly, California’s proposed rules would require all self-driving cars to be equipped with steering wheels and to have a licensed driver present. Google has objected to California’s proposal, as it doesn’t want to add steering wheels or pedals to its vehicles. Japan wants to have a grid system in place for use during the 2020 Tokyo Olympics, but it hasn’t yet agreed on the necessary regulatory framework.
Governments won’t be alone in having to adjust to this emerging technology – insurance companies will as well. Many analysts are predicting a “Napster moment” in the insurance industry – just as many people heralded Napster as the beginning of the end for music sales, some analysts believe self-driving cars will lead to reduced car ownership, and therefore fewer people purchasing car insurance.
Privacy will also present issues for self-driving cars. The very nature of a driverless vehicle requires the machine to constantly watch what is happening around it, and this will extend to watching the people riding inside it. Advocacy groups such as Consumer Watchdog are concerned about what kind of information self-driving cars will collect on their passengers, including where they travel, where they live, and with whom they are traveling. A group of 12 car manufacturers signed a pledge promising to be open about what information they would collect from customers and to limit how long they keep it. But in response to California’s self-driving car legislation, Google successfully lobbied to remove all proposed privacy protections from user contracts. Google’s self-driving car project presents one more way the company will collect our private information.
There is also some concern that self-driving cars will be too safe. Self-driving cars are designed to obey all traffic laws, but that does not mean they will always get it right. In Mountain View, California, a self-driving car was pulled over for driving 24 mph in a 35 mph zone. There have also been a couple of instances where cars have overreacted to irregular situations. In one case, the car swerved violently to avoid a car that was parked a little too far from the curb. Another time, a self-driving car moved quickly to the right in anticipation of a crash because a car in the opposite lane was going slightly over the speed limit. Sharing the road with these “model drivers” will probably take humans some getting used to.
The Obama administration set aside almost $4 billion to develop self-driving cars over the next 10 years, so it may not be long before you can relax and let your car do all the work for you. The future holds many exciting technologies, such as designer antibiotics, ingestible robots, and photonics in space; however, the regulatory framework will have to evolve to accommodate the challenges that arise with each advance. The impact of regulation on technology will depend on the types of regulation chosen: different methods, such as tradable allowances, disclosure and transparency requirements, taxation, performance standards, and technical requirements, can have different effects on technological progress. Since regulatory systems always seem to be catching up to technology, they will have to adapt to the ever-increasing pace of technological progress.
About the author:
I’m an entrepreneur based in Mumbai. I have co-founded two startups since November 2014: one in media production and the other in micro-finance. I recently completed HBX Disruptive Strategy, and I’m currently pursuing Economics of Emerging Economies and Judgement & Decision Making at Harvard University.