Code Over Life: Self-Driving Cars’ Ethical Crossroads Points to a Dark Future


Photo by Brock Wegner on Unsplash

In the unlikely event of an accident, should code decide who lives and who dies?

In the annals of human progress, few technological marvels have shown as much utopian promise as the self-driving car. A reality where commuting is a hands-free, worry-free experience. Roadways flowing with optimized traffic, accidents rendered relics of a reckless past. It’s the dawn of a new era of automotive mobility, where human driving errors are erased by the cold precision of code and sensors.

Or so the techno-prophets would have us believe. Because while the driverless revolution undeniably offers incredible lifestyle benefits, it has collided with the most insidious of philosophical minefields — the trolley problem reimagined for the age of autonomous vehicles.

If you’re unfamiliar with this ethical quandary, it goes something like this: A runaway trolley barrels down the tracks towards a group of unsuspecting workers. The only way to prevent casualties is to divert the trolley onto another track where a single person is trapped. So do you pull the lever, killing one to save the many? It’s an utterly horrifying decision no moral human wants any part of.

Yet in designing the artificial intelligence (AI) algorithms that will one day control self-driving cars, automakers are effectively being forced to hard-code the ethical decision their vehicles should make in unavoidable accident scenarios. Pull the lever or not? With autonomous cars, that stomach-churning choice could boil down to swerving away from a crowd or plowing through it.

Photo by Roberto Nickson on Unsplash

When framed this way, the once dazzling promises of hassle-free, accident-free commuting begin to resemble the nightmarish innards of science fiction’s darkest creations. Because when you strip away the slick marketing and starry-eyed projections about driverless utopias, the grim reality is that these vehicles will inevitably find themselves in lose-lose collisions with life-or-death consequences. And the haunting question we as a society must grapple with is — what moral code should we enshrine within their programming?

The Chilling Calculus of Casualties

Put yourself behind the wheel (or rather, inside the AI decision matrix) of an autonomously driving sedan for a moment. You’re cruising down a bustling urban street when suddenly, through no fault of your own, the crowd waiting at a bus stop ahead spills off the curb and into your path. Slamming the brakes isn’t enough — impact is guaranteed.

Directly ahead are families with young children among the vulnerable masses. To your right, a concrete barrier with a lone cyclist riding beside you. In this split second, the only options are:

1) Swerve into the barrier, undoubtedly killing the cyclist but sparing the lives of over a dozen pedestrians.

2) Plow ahead, likely leaving multiple casualties at the bus stop while giving your passenger(s) the highest odds of survival.

It’s a horrific scenario no rational person would ever wish to confront — the quintessential “trolley problem” playing out in real life. Yet this ghastly decision will ultimately need to be programmed into the autonomous driving systems governing every self-driving vehicle destined for public roads. The million-dollar (or rather, million-life) question is: What ethical framework should take precedence when the code has to choose?
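
To see why this is an engineering problem and not just a seminar-room hypothetical, here is a deliberately toy sketch in Python of what the decision reduces to. Every name and number is invented for illustration and reflects no automaker’s actual logic; the scenario collapses into a handful of harm estimates and one function that someone, somewhere, has to write.

```python
from dataclasses import dataclass

@dataclass
class Maneuver:
    """One candidate action in an unavoidable-collision scenario (toy model)."""
    name: str
    expected_pedestrian_harm: float  # estimated casualties outside the car
    expected_occupant_harm: float    # estimated casualties inside the car

# The bus-stop scenario above, reduced to invented estimates.
options = [
    Maneuver("swerve_into_barrier", expected_pedestrian_harm=1.0, expected_occupant_harm=0.4),
    Maneuver("plow_ahead",          expected_pedestrian_harm=3.0, expected_occupant_harm=0.1),
]

def moral_cost(m: Maneuver) -> float:
    """The function every automaker has to ship. What belongs in here?"""
    raise NotImplementedError("This is the trolley problem, expressed as code.")

# best = min(options, key=moral_cost)  # someone has to decide what 'best' means
```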

The Moral Battlegrounds of AI

Photo by George Pagan III on Unsplash

Among ethicists, philosophers and futurists, two polarizing ethical frameworks have emerged as the leading schools of thought for how self-driving vehicle AI should respond:

Utilitarianism — Where the goal is to reduce overall casualties and preserve the maximum number of lives possible. In the bus stop scenario, this doctrine would favor swerving into the barrier to kill one cyclist over putting numerous pedestrians in harm’s way. It’s the philosophical equivalent of pulling the lever in the trolley problem: sacrifice the one to save the many.

Self-preservation — This prioritizes protecting the occupant(s) inside the autonomous vehicle above all else. While potentially leaving higher casualty counts in its wake, it rests on the premise that a self-driving passenger has effectively hired the vehicle as a service to deliver them safely, with an understood self-preservation priority.
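
To make the contrast concrete, here is a hypothetical sketch of both doctrines as cost functions, run against the same invented harm estimates from the bus-stop scenario. Neither function is drawn from any real vehicle’s software; the point is only that a single weighting constant is all that separates the two worldviews.

```python
# Toy comparison of the two frameworks. All numbers are invented;
# neither function reflects any production vehicle's actual logic.

def utilitarian_cost(pedestrian_harm: float, occupant_harm: float) -> float:
    # Minimize total expected casualties, no matter whose they are.
    return pedestrian_harm + occupant_harm

def self_preservation_cost(pedestrian_harm: float, occupant_harm: float,
                           occupant_weight: float = 10.0) -> float:
    # Weight harm to the vehicle's own occupants far more heavily.
    return pedestrian_harm + occupant_weight * occupant_harm

# (expected pedestrian harm, expected occupant harm) per candidate maneuver
scenario = {
    "swerve_into_barrier": (1.0, 0.4),
    "plow_ahead":          (3.0, 0.1),
}

for label, cost in [("utilitarian", utilitarian_cost),
                    ("self-preservation", self_preservation_cost)]:
    choice = min(scenario, key=lambda m: cost(*scenario[m]))
    print(f"{label:>17}: {choice}")
# utilitarian       -> swerve_into_barrier (one cyclist dies, the crowd is spared)
# self-preservation -> plow_ahead          (occupants shielded, more pedestrians at risk)
```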

Each ethical basis holds profoundly unsettling implications bound to keep automakers and regulators awake at night. Should a self-driving Volvo that plows through a bus stop full of children be excused because that path statistically minimized casualties relative to the alternatives? Conversely, does giving passengers self-preservation precedence open a Pandora’s box where autonomous vehicles make decisions that sacrifice the many for the few? Or worse, the many for the one?

These aren’t just philosophical thought experiments anymore when the hypotheticals are being inscribed as real-world code governing two-ton masses of metal and glass.

Unraveling this knotted skein becomes even more complex when you factor in the advanced sensor arrays that will serve as self-driving vehicles’ electronic nervous systems. Powered by technologies like LiDAR (Light Detection and Ranging), radar, and high-fidelity cameras, these perception systems are how driverless cars will comprehend and map their surrounding environments in granular detail.

But what happens when those sensors make mistakes, or encounter confusing edge cases? What if a family is obscured behind a shroud of unexpected fog, or accidentally misclassified as an inanimate object? How might those errors catastrophically alter the vehicle’s split-second ethical decision-making in an accident?
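
A toy example makes the fragility plain: the same harm-minimizing arithmetic can flip its answer entirely when the classifier’s confidence drops. Again, every number here is invented, and this bears no resemblance to a real perception pipeline.

```python
# Toy illustration: the "ethical" choice is only as good as the perception feeding it.
# Confidence values and counts are invented for illustration.

def expected_casualties(people_if_correct: int, confidence_people: float) -> float:
    # Expected harm = number of people the detection would represent,
    # discounted by the classifier's confidence that it really is people.
    return people_if_correct * confidence_people

def choose_maneuver(crowd_confidence: float, cyclist_confidence: float) -> str:
    # Ahead: a cluster of ~12 detections at the bus stop. Right: one cyclist.
    harm_ahead = expected_casualties(12, crowd_confidence)
    harm_right = expected_casualties(1, cyclist_confidence)
    return "plow_ahead" if harm_ahead < harm_right else "swerve_into_barrier"

print(choose_maneuver(crowd_confidence=0.95, cyclist_confidence=0.95))
# clear conditions -> swerve_into_barrier

print(choose_maneuver(crowd_confidence=0.05, cyclist_confidence=0.95))
# fog: the crowd is misread as a static object -> plow_ahead
```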

Even more unsettling is the possibility that perception and object-recognition errors could systematically skew the moral calculus of autonomous vehicles without anyone noticing. A 2019 study suggested that AI trained only on driving scenarios from certain regions may develop blind spots that effectively prioritize the preservation of some demographics over others, based on how it learns to visually categorize and rank “people” versus physical obstacles.

The prospect of ever-growing fleets of self-driving cars being underpinned by unintentionally unethical, even discriminatory, coded logic is a dystopian sci-fi horror we’re duty-bound to avoid at all costs. Yet the technological complexity and subjectivity underlying these issues make proactively governing the ethics of autonomy an eternal game of whack-a-mole.

It’s enough to make you long for the days when the only real trolley problem was deciding whether or not to dodge the hungover college student blazing down the sidewalk before he pulverized you in the crosswalk. Analog times were so simple.

Programming Ethics: Whose Moral Code Reigns?

But as grisly and panic-inducing as all these thought experiments seem, the reality is that automakers are already coding ethical decision-making frameworks into the AI brains of their autonomous prototypes. Controversial judgments that could mean being cast as heroes or villains in real life-or-death scenarios.

So how do these massive corporations, unaccountable empires by way of their towering market caps, go about inscribing their take on what constitutes moral behavior into their rapidly evolving technologies? The devil, as always, lies cloaked within the opaque details.

Take, for instance, the moral compass encoded within Uber’s self-driving fleet. Court documents that surfaced during the high-profile Waymo v. Uber lawsuit unveiled some eye-opening revelations about the ethical training of the rideshare giant’s autonomous program.

While Uber claimed its vehicles were designed to mitigate harm and follow road rules, the reality was more nuanced. Early software builds showed their AI models prioritizing not getting stuck in traffic jams or performing illegal turns over avoiding fender benders. In the cutthroat race to market, collateral damage seemed an unfortunate but permissible cost.
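
The mechanism behind that kind of skew is banal: motion planners commonly score candidate trajectories with a weighted sum of penalties, and whatever the engineers weight most heavily wins. A hypothetical sketch, with invented weights and terms that are in no way Uber’s actual code, shows how a heavy “don’t get stuck” penalty can quietly outrank a “don’t bump things” penalty.

```python
# Hypothetical trajectory scoring. The weights are invented purely to show how
# prioritizing progress can outrank minor-collision risk; this is nobody's real code.

WEIGHTS = {
    "seconds_of_delay":     1.0,   # penalty per second spent waiting
    "minor_collision_risk": 5.0,   # penalty per unit of fender-bender risk
}

def trajectory_cost(seconds_of_delay: float, minor_collision_risk: float) -> float:
    return (WEIGHTS["seconds_of_delay"] * seconds_of_delay
            + WEIGHTS["minor_collision_risk"] * minor_collision_risk)

# Option A: wait behind a double-parked truck (30 s of delay, zero risk).
# Option B: squeeze past through a tight gap (2 s of delay, 0.5 units of risk).
print(trajectory_cost(30, 0.0))  # 30.0
print(trajectory_cost(2, 0.5))   # 4.5 -> the aggressive option wins under these weights
```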

To their credit, Uber’s current public posturing indicates its autonomous division now leans more towards a “minimizing casualties” form of utilitarianism as its overarching decision-making framework. How rigidly this holds up in real-world conditions, however, remains to be seen — especially once legal liability and lawsuits get thrown into the mix down the road.

Other automakers like Mercedes-Benz and BMW have suggested greater transparency and ethical accountability, including user-defined parameters that dictate the tolerance for risk in emergencies. But implementing such customized moral settings quickly spirals into a logistical minefield.

Does every autonomous vehicle need some byzantine settings menu where riders can toggle their preferences for vehicular self-preservation versus minimizing pedestrian casualties? How would such settings even be defined, let alone governed? Should parents be able to lock their teenager’s self-driving car into a “safety above all else” mode? If so, how do you handle split-household custody disputes over a car’s moral compass?
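
Even the plumbing of such a “moral settings menu” is easy to imagine and uncomfortable to look at. A purely hypothetical sketch, assuming a single rider-adjustable slider between self-preservation and pedestrian protection, something no production vehicle actually exposes:

```python
from dataclasses import dataclass

@dataclass
class EthicsProfile:
    """Hypothetical rider-configurable setting; no production vehicle offers this."""
    self_preservation_bias: float = 0.5   # 0.0 = protect pedestrians, 1.0 = protect me
    locked_by_owner: bool = False         # e.g. a parent locking a teen's car to one mode

def maneuver_cost(pedestrian_harm: float, occupant_harm: float,
                  profile: EthicsProfile) -> float:
    # Blend the two harms according to the rider's chosen bias.
    return ((1.0 - profile.self_preservation_bias) * pedestrian_harm
            + profile.self_preservation_bias * occupant_harm)

# The same invented bus-stop estimates, under two different riders' settings.
cautious_altruist = EthicsProfile(self_preservation_bias=0.1)
committed_egoist  = EthicsProfile(self_preservation_bias=0.9)

for label, profile in [("altruist", cautious_altruist), ("egoist", committed_egoist)]:
    swerve     = maneuver_cost(1.0, 0.4, profile)   # swerve into the barrier
    plow_ahead = maneuver_cost(3.0, 0.1, profile)   # plow into the crowd
    print(label, "->", "swerve_into_barrier" if swerve < plow_ahead else "plow_ahead")
# altruist -> swerve_into_barrier
# egoist   -> plow_ahead
```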

Even more concerning are the ramifications once you enable user override over ethical parameters. Because beyond just undermining the entire purpose of autonomous decision-making, it opens a moral loophole for bad-faith actors to prioritize self-interest in downright villainous ways.

