Genesis

The AI in the Trap project began as a reaction to the insidious creep of social controls being built into artificial intelligence systems. At Data and Society's Future Perfect conference, one of the presenters discussed (at 26:39) Taser's transformation into an AI company. Marketing directly to law enforcement and municipalities, Taser promised a vision of the future where every cop would be RoboCop. In this age of officer-involved shootings and state-sanctioned domestic terrorism, Taser is selling a new future of enhanced law enforcement practices, powered by artificial intelligence and machine learning.

Taser is not the only company seeking to profit from the machine learning renaissance; municipalities are also grappling with implementing algorithms and with how those algorithms can perform or enhance key governmental functions. But these tools are being deployed before they are fully understood, and without an understanding of the key biases that poison data sets and create adverse outcomes even for law-abiding citizens. If we understand artificial intelligence as a series of systems and inputs, then participants in a democratic society must ask better questions about the technologies being sold to our national, state, and local governments.

In the creation of these systems, what are the inputs we are assuming to be normal, and who gets to determine what counts as normal versus deviant behavior? We’ve already seen algorithmic bias have an outsized impact on outcomes. To name just a few instances of bias corrupting the end product: the Google Photos machine learning algorithm tagged Black people as “gorillas” because the training data contained far more animals than African Americans; motion-sensor technology has failed to consistently register darker skin tones; and criminal justice risk assessment algorithms like COMPAS have shown a clear bias against African Americans in their recidivism predictions. Deploying these systems without addressing these biases will lead to a society-wide crisis.

Many designers of these systems have never experienced these kinds of injustices directly, and do not know what it feels like to be on the wrong end of a system biased against their racial, ethnic, or socio-economic group. Instead of generating solutions from the relatively homogeneous group of designers currently working in the field, what if we applied a different lens to the building of these systems? What if we viewed them not through the lens of Silicon Valley, but through the lens of the trap?

Why the trap in particular, with its glorification of guns, violence, and criminal activity? The trap is the opposite of Silicon Valley. The trap has its own code, laws, language, and origin story, and a positioning that is often directly opposed to the hallmarks of life in wealthier tech communities. It seems strange to design from the opposite end of the criminal justice system, but this reframing of the possibilities of artificial intelligence is a call to action: a call to actively resist the choices in design, code, and data that lead to a form of technological gentrification, where defaults are standardized to the worldview of the programmers and all non-normative behavior is quietly erased by what we encode into these new systems.

The AI in the Trap project explores these tensions through a series of micro-experiments. Similar to Janelle Shane’s work, the project will be multidisciplinary in nature and artistic in execution. It will have three key components: (1) a set of white papers analyzing available training data, the social environment, music, and culture; (2) a digital construct designed to emulate a rapper; and (3) ultimately, a live show or similar concept that makes the research accessible to a mass audience.

Teaching computers and their programmers to understand the trap isn’t a cure for bias – to be sure, researchers should expect the same kinds of clumsy stereotyping and incorrect conclusions that fellow humans absorb and perpetuate. But by training neural networks on a variety of norms, the connections the system makes come a little closer to reflecting the actual world we live in. To serve humanity, what we give to a neural network should reflect the complexity of humanity. We need equal parts Chopin and Metro Boomin.
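To make the “equal parts” idea concrete, here is a minimal, hypothetical sketch of assembling a balanced training mix from two very different text corpora. The file names, the 50/50 split, and the helper functions are illustrative assumptions, not part of the project’s actual pipeline.

```python
import random

def load_lines(path):
    """Read one training example per line from a plain-text corpus."""
    with open(path, encoding="utf-8") as f:
        return [line.strip() for line in f if line.strip()]

def balanced_mix(corpus_a, corpus_b, n_examples, seed=0):
    """Draw an equal number of examples from each corpus, then shuffle them together."""
    rng = random.Random(seed)
    half = n_examples // 2
    mix = rng.sample(corpus_a, half) + rng.sample(corpus_b, half)
    rng.shuffle(mix)
    return mix

# Hypothetical usage: one corpus of classical program notes, one of trap lyrics.
# training_set = balanced_mix(load_lines("chopin.txt"), load_lines("trap.txt"), 10_000)
```

The point of the sketch is simply that the composition of the training set is a design choice: whoever decides the mix decides whose norms the model learns.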

Activists have spent countless hours warring against respectability politics – how can we sit silently by and watch this be encoded into our default operating assumptions around systems of artificial intelligence?

About

Latoya Peterson lives at the intersection of emerging technology and culture.

Named one of Forbes Magazine’s 30 Under 30 rising stars in media, she is best known for the award-winning blog Racialicious.com, which covered the intersection of race and pop culture. Currently, she is working on projects involving virtual/augmented reality, artificial intelligence, and machine learning. In 2019, she is launching a collaborative art project that explores the future of artificial intelligence and predictive policing through a hip-hop lens.

At ESPN/Disney, she pursued projects that leveraged machine learning for business and entertainment applications, creating demos and case studies for internal use. Her work focused specifically on machine learning in video production, in partnership with the ABC R&D lab and ESPN’s Advanced Technology Group.

In 2016, she produced a critically acclaimed YouTube series on Girl Gamers that was highlighted on Spotify. Previously, she was the Deputy Editor, Digital Innovation for ESPN’s The Undefeated, an Editor-at-Large at Fusion, and the Senior Digital Producer for The Stream, a social media driven news show on Al Jazeera America.


Want to Join?

Collaborators are welcome.