Marc van Meel

Hi I’m Marc!

I speak and write about the intersections of technology, society and philosophy.

BLOG

Why Bridges Are Racist and AI Totalitarian


We have all seen the YouTube videos of people angrily smashing their computers. Directing blame at physical objects seems as funny as it is foolish. After all, it's just a machine, right? Technology itself appears to us as neutral by default. Any political and ethical implications must originate from the society in which the technology is used and developed. Today, Artificial Intelligence (AI) and algorithms are accused of having racist, sexist and biased effects. But surely the root cause must be found in biased historical data, unethical corporations or capitalism.

Not entirely.

What if I told you that bias is neither new nor exclusive to AI? That objects can be inherently political and ethical? That technology shapes society independently of the socio-economic forces in which it is developed? And that the technology of AI pushes our society towards totalitarianism?

Racist Bridges

Technologies have political properties, meaning that they shape and exercise power over society. Political theorist Langdon Winner made this argument excellently back in 1980.[1] The most famous example of political technology (the one everyone remembers from reading his paper) is the racist bridges on the Southern State Parkway on Long Island, New York.

Southern State Parkway, Long Island, New York, 1950

Robert Moses was the most famous urban planner of mid-20th-century New York. He was also a classist and a racist.[2] He instructed his engineers to build the bridges over the Southern State Parkway so low that buses could not pass under them. Public transportation was the primary means of transportation for poor people, immigrants and minorities at the time — exactly the people whom Moses didn't want to have access to the Long Island resorts and beaches. Moses integrated his racist politics directly into his bridges, enforcing specific cultural and societal effects. The low bridges still stand, causing accidents with buses even today.[3]

Southern State Parkway, Long Island, New York, present

White is the Norm

Moses purposely put his racial politics into his works. But even if there had been no malicious intent, the effects would have been the same. Technologies are not always political on purpose. Shirley cards, for example, were color reference cards named after an employee of Kodak. They were the standard way to calibrate skin tones in photographs from the 1940s to the 1980s. The problem? They were developed for white skin only. Photographs calibrated this way could not properly capture the facial contrast of dark-skinned people, rendering them barely visible. Kodak finally changed this in the 1980s. Because they listened to people of color? No. Because of complaints from chocolate companies, whose products showed up poorly in commercials.

Digital photography eventually solved this problem. However, new problems are arising in the field of image recognition. Shirley cards are the predecessors of problems such as Google's racist algorithm, which classified pictures of black people as "gorillas".[4] Racism and bias in technology have been around for a long time; the field of AI has only given rise to new instances of these problems. Technologies are political, and the norm is not one of fairness and equality.

Kodak’s Shirley cards, labelled “normal”

A Ship can only have One Captain

Bridges and Shirley cards are technologies which are political by choice, conscious or not. But the political properties they possess are by no means set in stone. Bridges need not be built that low, and Shirley cards could have featured a larger variety of skin tones. There is a stronger version of political technology: technologies which allow for no such freedom. Technologies which are in themselves highly compatible with, or even require, a specific political system.

Nuclear power plants, for example, create the need for extreme safety — safety which can only be found in, and offered by, a centralized chain of command. If a society accepts the technology of nuclear power, it has no choice but to accept the techno-scientific and military authorities that come with it. The same argument can be made for technologies such as highways, railroads and ships. As Plato argued, a ship cannot be run democratically, out of practical necessity.[5]

On the other hand, we find technologies with a more decentralized nature, such as the early internet or solar panels. Solar energy is inherently much more compatible with a distributed system; it fits a democratic free-market society. It empowers individuals to generate their own energy, promoting social equity and freedom.


AI and the Road to Totalitarianism

If we examine AI technology in this way, we discover that AI is much more compatible with a centralized political regime. AI technology pushes society towards totalitarianism. The first clue is that AI is applied just as successfully, if not more so, in less democratic regimes than in Western democracies. The success of AI is directly tied to the ease with which (personal) data can be gathered and applied. Data protection and privacy legislation only make this a more difficult exercise.

Second, when we apply AI, or any form of statistical reasoning for that matter, we are admitting a defeat. We are being honest that reality is either too complex or too unwieldy for us to perform exact determinations. AI is a technology that learns and generalizes historical patterns. This means that we sacrifice individual determinations for generalizations at the group level. Individuals sacrifice their personal autonomy and privacy for short-term gains, mainly to the benefit of third parties.
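As a toy illustration of this group-level generalization (hypothetical numbers and plain Python; not drawn from any real system), consider a single decision cutoff fitted to data dominated by a majority group:

```python
import random
import statistics

random.seed(42)

# Hypothetical populations: the majority group clusters around 10,
# the minority group around 16, on some measured attribute.
majority = [random.gauss(10, 2) for _ in range(900)]  # 90% of samples
minority = [random.gauss(16, 2) for _ in range(100)]  # 10% of samples

# "Train" a rule on the pooled data: accept anyone within two standard
# deviations above the overall mean. For a typical member of the pool,
# this cutoff is generous.
data = majority + minority
cutoff = statistics.mean(data) + 2 * statistics.stdev(data)

def passes(x):
    return x <= cutoff

majority_rate = sum(passes(x) for x in majority) / len(majority)
minority_rate = sum(passes(x) for x in minority) / len(minority)

print(f"majority pass rate: {majority_rate:.2f}")  # close to 1.0
print(f"minority pass rate: {minority_rate:.2f}")  # roughly half fail
```

The rule is perfectly reasonable for the aggregate, yet the minority group fails it at a far higher rate: the "average" individual the rule was fitted to simply does not exist in that group. This is the trade the paragraph above describes — exact individual determinations are given up for a generalization that works well only for the dominant pattern in the data.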

Many large, sophisticated technological systems are in fact highly compatible with centralized hierarchical managerial control.

– Langdon Winner

Many technologies in the 1980s and 1990s, such as the internet, were supportive of individualism, libertarianism and decentralization. Ronald Reagan even stated that "the Goliath of totalitarianism will be brought down by the David of the microchip." Nowadays, many new technologies are more compatible with centralized control. They mainly benefit groups or organizations, not individuals. Examples include personalized pricing, predictive policing, student placement algorithms, social credit systems and employee monitoring software. The list goes on. All these applications can easily result in biases against individuals who don't fit a majority demographic.

Destiny vs. Design

Our society is adapting to the increasingly impactful applications of AI. This comes with a price tag: our human autonomy and privacy. This is not only because we design and adopt centralized applications of AI, but also because AI technology itself tilts our society towards this centralization. To combat this, we should not view AI systems merely as social constructs and seek to change societal forces alone. Social constructivism[6] is only half right, and by itself cannot stop the forces of technological determinism. Instead, we should regard the implementation of an AI system in the same light as building a bridge or adopting a new law. AI systems are technologies which carry inherent societal and cultural effects — effects which are difficult to spot and cannot easily be reverted.

Before implementing AI systems, we need to ask ourselves what the specific societal and cultural effects of these systems are. And we need to take responsibility, as individuals and as a society, when we discover that these effects do not align with our core values. It is for this reason that we need an appropriate level of control around AI. To safeguard the things we value most. Not because we will otherwise hit glass ceilings, but because we will again hit real, concrete ceilings.

References

  • [1] Winner, L. (1980). Do artifacts have politics? Daedalus, pp. 121-136.
  • [2] Caro, R. A. (1974). The Power Broker: Robert Moses and the Fall of New York.
  • [3] Chiu, A. (April 9, 2018). Dozens injured on Long Island as bus full of students plows into low overpass, mangling roof. The Washington Post.
  • [4] Google apologises for Photos app's racist blunder. (July 1, 2015). BBC.
  • [5] Plato. The Republic, Book VI.

