The Future of AI: How AI Took Over Our Lives in the 2010s

There’s no turning back…

A.I. is the future. Image: Unsplash

DEC 9, 2019

Bots are a lot like humans: Some are cute. Some are ugly. Some are harmless. Some are menacing. Some are friendly. Some are annoying … and a little racist. Bots serve their creators and society as helpers, spies, educators, servants, lab technicians, and artists. Sometimes, they save lives. Occasionally, they destroy them.

In the 2010s, automation got better, cheaper, and way less avoidable. It’s still mysterious, but no longer foreign; the most Extremely Online among us interact with dozens of AIs throughout the day. That means driving directions are more reliable, instant translations are almost good enough, and everyone gets to be an adequate portrait photographer, all powered by artificial intelligence. On the other hand, each of us now sees a personalized version of the world that is curated by an AI to maximize engagement with the platform. And by now, everyone from fruit pickers to hedge fund managers has suffered through headlines about being replaced.

So here’s how we changed our bots this decade, how they changed us, and where our strange relationship is going as we enter the 2020s…

A man experiences a VR construct created by a machine learning algorithm. Image: Unsplash

Humans and tech have always coexisted and coevolved, but this decade brought us closer together—and closer to the future—than ever. These days, you don’t have to be an engineer to participate in AI projects; in fact, you have no choice but to help, because you’re constantly offering up your digital behavior to train AIs that are faster and more accurate than we are.

We Invited Them In

This decade, artificial intelligence went from being employed chiefly as an academic subject or science fiction trope to an unobtrusive (though occasionally malicious) everyday companion. AIs have been around in some form since the 1950s or the 1980s, depending on your definition. AltaVista launched one of the first full-text search engines in 1995, but it wasn’t until 2010 that Google quietly introduced personalized search results for all users and all searches. What was once background chatter from eager engineers has become an inescapable part of daily life.

One function after another has been turned over to AI jurisdiction, with huge variations in efficacy and consumer response. The prevailing profit model for most consumer-facing applications, like social media platforms and mapping tools, has users trade their personal data for minor convenience upgrades. Those upgrades are delivered through a combination of technical power, data access, and rapid worker disenfranchisement, as increasingly complex service jobs are consolidated, automated away, or handed to AI workers.

The Harvard social scientist Shoshana Zuboff captured the economic impact of these technologies with the term “surveillance capitalism.” This new economic system, she wrote, “unilaterally claims human experience as free raw material for translation into behavioral data,” then profits by placing informed bets on predicted human behavior.

We Put Them in Charge

We’re already using machine learning to make subjective decisions, even ones that have life-altering consequences. Medical applications are among the least controversial uses of artificial intelligence; by the end of the decade, AIs were locating stranded victims of Hurricane Maria, controlling the German power grid, and killing civilians in Pakistan.

The sheer scope of these AI-controlled decision systems is why automation has the potential to transform society on a structural level. In 2012, techno-sociologist Zeynep Tufekci pointed out the presence on the Obama reelection campaign of “an unprecedented number of data analysts and social scientists,” bringing the traditional “confluence of marketing and politics” into a new age.

Intelligence that relies on data from an unjust world suffers from the principle of “garbage in, garbage out,” futurist Cory Doctorow observed in a recent blog post. Diverse perspectives on the design team would help, Doctorow wrote, but when it comes to certain technology, there might be no safe way to deploy:

“Given that a major application for facial recognition is totalitarian surveillance and control, maybe we should be thinking about limiting facial recognition altogether, rather than ensuring that it is equally good at destroying the lives of women and brown people.”
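The “garbage in, garbage out” principle can be made concrete with a toy sketch. In this hypothetical example (all names and numbers are invented for illustration), a trivial model trained on biased historical loan decisions simply learns the bias as if it were signal:

```python
# Toy illustration of "garbage in, garbage out": a model trained on skewed
# historical decisions reproduces the skew in its predictions.
# All data below is invented for illustration.
from collections import Counter

def train_majority_classifier(labeled_examples):
    """'Learn' by memorizing the most common label for each feature value."""
    by_feature = {}
    for feature, label in labeled_examples:
        by_feature.setdefault(feature, []).append(label)
    return {feature: Counter(labels).most_common(1)[0][0]
            for feature, labels in by_feature.items()}

# Historical loan decisions, biased against group "B" regardless of merit.
history = ([("A", "approve")] * 90 + [("A", "deny")] * 10
           + [("B", "approve")] * 20 + [("B", "deny")] * 80)

model = train_majority_classifier(history)
print(model)  # {'A': 'approve', 'B': 'deny'} — the historical bias is now "policy"
```

Real systems are vastly more complex, but the failure mode is the same: unless the training data is corrected, the model launders past discrimination into seemingly objective output.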

It doesn’t help that data collection for image-based AI has so far taken advantage of the most vulnerable populations first. The Facial Recognition Verification Testing program is the industry standard for testing the accuracy of facial recognition tech; performing well on it is all but mandatory for new FR startups seeking funding.

But the datasets of human faces that the program uses are sourced, according to a report from March, from images of U.S. visa applicants, arrested people who have since died, and children exploited in child pornography. The report found that the majority of data subjects were people who had been arrested on suspicion of criminal activity. None of the millions of faces in the program’s datasets belonged to people who had consented to this use of their data.

We Tried to Control Them

State-level efforts to regulate AI finally emerged this decade, with some success. The European Union’s General Data Protection Regulation (GDPR), enforceable from 2018, limits the legal uses of valuable AI training datasets by defining the rights of the “data subject” (read: us); the GDPR also pushes back against the “black box” model of machine learning by requiring transparency and accountability in how data are stored and used. At the end of the decade, Google showed the class how not to regulate when it built an external AI ethics panel, then scrapped it a week later, feigning shock at the negative reception.

Even attempted regulation is a good sign. It means we’re seeing AI for what it is: not a new life form that competes with us for resources, but a formidable weapon. Technological tools are most dangerous in the hands of malicious actors who already hold significant power; you can always hire more programmers. During the long campaign for the 2016 U.S. presidential election, the Putin-backed IRA Twitter botnet campaigns—essentially, teams of semi-supervised bot accounts that deliberately spread disinformation and learn from real propaganda—infiltrated the very mechanics of American democracy.

Next Up: The Second Bot Decade

Keeping up with AI capabilities as they grow will be a massive undertaking. Things could still get much, much worse before they get better; authoritarian governments around the world tend to use technology to consolidate power and resist regulation.

Tech capabilities have long since outpaced traditional lawmaking, but one hint of what the next decade might hold comes from AIs themselves, which are beginning to be deployed as weapons against the very kind of disinformation other AIs help create and spread.

There now exists, for example, a neural net devoted explicitly to the task of identifying machine-generated disinformation. The neural net’s name is Grover, and it’s really good at the job.