What Senator Cory Booker gets wrong about Black people and AI

Recently, Senator Cory Booker sat down with tech insiders to discuss equity, discrimination, and Blackness as they pertain to AI. Booker emphasized the importance of including Black people in the impending AI revolution, not just in America but globally. “Having African Americans as a part of it, as the center of it [and] at the table for it, is really important,” he said. Yet Booker’s call for better inclusion of Black communities in AI fundamentally – and dangerously – misunderstands the reality: Black people are already very much “the center of it.” When it comes to surveillance, incarceration, and education, we are the focus.

AI requires data collection, and data collection requires surveillance. Since data is money, surveillance is too – which is why nearly every company is now building surveillance into its products. To be clear, the unregulated data collection free-for-all we are currently experiencing affects everyone, but those living within vulnerable communities bear the brunt of its harms.

In the US, people use Ring security and doorbell systems outside of their homes and AI-powered vacuums, refrigerators, and washing machines inside. New cars equipped with smart technology can integrate with other AI products. This web of connected smart objects gives companies, advertisers, and governments access to our driving habits, our locations, the layouts of our homes, and the conversations being had within them. Smartphones, street lights, and personal security systems are all generating information that is sold to other companies, employers, landlords, and government agencies. When corporations like Amazon, Meta, and Google are collecting and selling our daily communications, contacts, and activities, we all run the risk of that information one day being used against us in ways we can’t necessarily predict. Because America has a history of forcing the identity of “criminal” onto Black people, new surveillance technologies give law enforcement a green light to scrutinize every move Black people make, under the guise of preventing crime before it happens.

AI is just our collective thoughts, information, and images reflected back at us through the algorithms that control it. It is the good, the bad, and the ugly our world has to offer. Bias is “the ugly” baked into current algorithmic systems, particularly within criminal justice.

Algorithmic systems now determine whether a suspect should be denied bail, which rehab facility or case management program a prisoner will go to after release, which prisoners are denied parole, and which inmates are deemed at high risk of recidivism. Facial recognition software that is supposed to identify suspects is being used on prisoners and their visitors. Visitors are routinely denied access to their incarcerated loved ones based on unreliable facial recognition technology, and it has been repeatedly shown that facial recognition performs poorly on darker skin. Since Black people are dramatically overrepresented in the US prison population, a large share of those within that system are being treated inequitably.

Similar bias exists within education technology, which is harvesting student data while offering a below-average education. The Chan-Zuckerberg Initiative (CZI) admits to selling student data to more than 19 other companies. In fact, the more vulnerable the community, the more technology you will find in its schools, negatively affecting student achievement. The tech industry’s apparent goal is fully staffed affluent schools while children on the margins are taught via chatbot. AI technologies for the classroom are part of a much bigger AI surveillance apparatus that cities, states, and the federal government are making use of, further blurring the lines as to what’s considered student data. Recently, a Florida county sheriff was reported to be purchasing student data from his county’s school superintendent to create lists of “probable” criminals among predominantly Black schoolchildren, in an attempt to arrest kids and prevent crimes before they happen.

Our public school system is graduating 18-year-olds who can barely read or comprehend basic math and science. The school I worked for, whose personalized learning platform was created by Mark Zuckerberg’s CZI, has a 99% graduation rate alongside a 38% literacy rate, 16% math proficiency, and 28% science proficiency. Corporations have usurped control of K-12 education through charter schools and ed-tech products. New Orleans is an all-charter district, and in California, a state with seven charter-only districts, approval for new charter schools must be given to companies serving low-achieving student populations in which 50% or more are eligible for free and reduced lunch. This directly targets Black and immigrant students for mediocre education, because they are the demographic most likely to qualify for free or reduced lunch.

Just as new medications must be tested and approved to avoid public harm, so too should algorithms and AI systems be tested and approved by a federal regulatory body. I know Sen. Booker means well in suggesting more visibility for Black people within this arena. But simply adding Black people into training data will not fix our problem; bias must be removed from data sets. Expanding visibility within algorithms that simultaneously label us as drug dealers, gang members, and criminals will only intensify surveillance, under-education, and incarceration. Algorithmic bias is the fault of program developers and the companies they work for, who disincentivize ethics, privacy, and safety. Furthermore, to discuss AI harms only with those companies’ CEOs, leaving out the people on the ground fueling the engine, repeats the mistakes of our past, when we believed the research of oil and tobacco executives over the lived experiences of those harmed by their products.

© 2022 VISIBLE Magazine. All Rights Reserved. Branding by Studio Foray.