The Demons in the Machine
Meet an Artificial Intelligence
There’s an AI in a game I like to play. I’ve been playing this game for a few years: cute animations, fun puzzles, just enough to distract my brain.
Except lately, I’ve been distracted by the way the AI is changing the game.
About a year ago, the developer decided to train the algorithm and asked us players to rate how we were feeling after each level.
Maybe I should have thought my responses through more carefully, but since I liked the game, I answered honestly. (Is it lying if you lie to a machine?)
After a while, the game stopped asking how I was feeling. And then, slowly, it began to change.
At first, I thought I was imagining it, being paranoid.
But this game is devious.
The developer seems to have combined data from the phone sensors the game has access to with the feedback users provided through those repeated “how are you feeling?” prompts.
I did have a few warning clues that this developer wasn’t looking after his customers’ best interests, but at first I thought it was accidental. I caught on about a month after I filed a tech support ticket because the “buy now” button for in-game currency was covering the “play” button. Whenever a sale ran, it was hard not to hit the pop-up sale ad.
The reply was something to the effect of “so it is.”
Then the game became even more focused. Each time you want to advance a level, you have to hit a button in the same spot on the screen. Sometimes you need to dismiss several pop-ups by tapping that one spot. And then…oops. There’s a “buy now” button in there.
I finally caught on. This wasn’t an accident.
You may be laughing at me, because of course it was intentional, but I thought this developer was one of the good ones.
I’ve started watching myself play this game, and watching the game respond.
Sometimes I’ll see a piece change color out of the corner of my eye. Other times, the counter for moves remaining will blink and change.
Worse, it seems to respond to my moods. As I become annoyed, it will reveal a level that I like. Once I start relaxing and having fun, the game will create no-win scenarios…or rather pay-to-win scenarios. I’m playing along and I just need 3 pieces of one color. The game will then stop dropping that color at all unless I’m willing to spend in-game currency.
I’m cheap.
I do in-game challenges to “earn” that currency, and I hoard it. Apparently, the game doesn’t like that. It wants me to spend it so I will buy more.
After a few moments of thinking I must be imagining it, I decided to try an experiment. I hit a really hard level, one I would need to pay for extra moves to beat. I logged off. I logged back on an hour later. The level was easier, but the game still suggested paying. I logged off.
When I logged back in the next day, the level was easy and the next level after that — surprise reveal! — was one that the game knows is my favorite, because I’d foolishly told it which one I liked best.
In disgust, I didn’t play for a while. When I came back, I was offered a string of my favorite levels and some in-game challenges for currency that were much easier than normal.
As I watched the game more closely, I saw how the algorithm had changed to make the game more addictive, more immersive, and more aware of when and how in-game currency was used or stored.
I just looked at the game’s ratings on the app store, and other users are noticing this too. The developer denies that this behavior is intentional. Does the developer not realize what the algorithm is doing? Maybe not. Or is this just like the “buy now” button?
This is one of the more sophisticated game AIs that I’ve noticed.
But that’s nothing compared to the advertising bots that run around the internet.
Algorithms and AI
And that brings me back to the demons in the machine.
There’s a joke: never work on a computer when stressed. Computers smell fear and will pounce.
Of course that isn’t true. Any of those disasters that have happened to you while working stressed can easily be attributed to power fluctuations or inattentiveness, right?
Black Box Scenarios
When I was in school, a black box scenario was a programming challenge where you had to guess what code was inside a “box” based on what you put in and what came out. You would test the “box” by feeding it different inputs and observing the outputs.
For example, if you put in 3 and you got 6, then you put in 4 and you got 8, you could guess that it was probably multiplying the numbers by 2. (The astute will note that there are other possibilities, but that’s not my point.)
That’s a very basic black box scenario.
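If it helps to see that guessing game as code, here’s a minimal sketch in Python. The hidden rule and the test inputs are mine, purely for illustration:

```python
# A toy "black box": pretend we cannot see inside this function.
def black_box(x: int) -> int:
    return x * 2  # the hidden rule

# Probe the box with different inputs and watch what comes out.
for test_input in [3, 4, 10, -5]:
    print(f"in: {test_input} -> out: {black_box(test_input)}")

# Output:
# in: 3 -> out: 6
# in: 4 -> out: 8
# in: 10 -> out: 20
# in: -5 -> out: -10
#
# From those pairs, "multiply by 2" is a reasonable guess, though
# other rules (x + x, for instance) fit the same evidence. More
# probes narrow the field, but you never truly see inside the box.
```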
Most advertising algorithms are black boxes to everyone outside the NDAs (non-disclosure agreements) of the corporations that pay for their development. Unless you actively worked on developing the algorithm, you probably don’t know how it works.
But we can make educated guesses based on the outcome.
For example, sit next to your phone (the one that absolutely, positively isn’t listening in on you). Tell your friend that you are considering buying a camper van. Don’t search for it; don’t do anything other than talk to your friend about camper vans (or something else you actually have no desire to buy).
Then see how long it takes to see ads for camper vans on your social media news feeds or on websites that make their living showing paid ads.
Companies that run these ads swear this is entirely coincidental, but a number of anecdotal experiments have cast doubt on their claims.
Making Life Easier
Programmers and developers tell you that these algorithms are designed to make our lives easier. Isn’t it wonderful if the thing you want just comes to you? (Don’t worry that the careful research you thought you were doing to find the best price and best product was actually subverted by other algorithms making sure you only saw certain options.)
I do believe that most of the programmers who designed these algorithms and AIs meant well.
Understand: algorithms and AIs are now commodities. They can be bought and sold, twisted and changed. And there is no oversight on what changes can be made once one of these gets loose in the wild.
I would go so far as to suggest that we really don’t know how many of these are out there, who owns them, or what the current programming is.
Big Data
Big data is a term for the enormous databases that hold massive amounts of information. For example:
- Your purchase history on Amazon.
- Your likes on Facebook.
- Which songs you play repeatedly on Spotify.
- Your search history.
Each of these is just an example of the information about you that is available in the machines. Oh, and they know who your friends are and what they like, too.
It is fairly safe to say that there is probably at least one AI on the planet that knows you better than your best friend.
Big databases can be sold and stolen.
Computers are Binary
At their heart, most machines still see the world in 0s and 1s: the algorithm either succeeds in making a sale or it fails. (Yes, I’m aware that modern computers are becoming more sophisticated, but that isn’t encouraging.)
AIs and the Convergence of Intent
I believe that as artificial intelligence has grown in usage, the intent baked into the training of these systems is causing a convergence that is disrupting our way of life.
Allow me to explain:
Let’s say a programmer has given an AI a task to maximize sales for paid advertisers. They then give this AI access to just a piece of that big data.
Depending on how much access the AI has, it may decide to correlate buying history with likes and interests. It can now begin to test scenarios: are people with a certain income more or less likely to buy this product? Are people of a certain political or social leaning more or less likely to buy this product? Okay, then we can divide people into groups and create subdivisions that improve marketing results.
Now imagine repeating that testing process over and over again at the speed of a computer with huge computational power, plus access to search history and public-domain databases.
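To make that testing loop concrete, here is a minimal sketch. The users, the income and leaning fields, and the numbers are entirely made up; it only illustrates the split-measure-compare cycle described above, not any real ad system:

```python
from collections import defaultdict

# Hypothetical user records: (income_band, leaning, bought_product).
users = [
    ("high", "left", True),  ("high", "right", True),
    ("high", "left", False), ("high", "right", True),
    ("low", "right", False), ("low", "left", False),
    ("low", "right", True),  ("low", "left", False),
]

def buy_rate_by(attribute_index: int) -> dict:
    """Group users by one attribute and measure the purchase rate per group."""
    bought = defaultdict(int)
    total = defaultdict(int)
    for record in users:
        group = record[attribute_index]
        total[group] += 1
        bought[group] += record[2]  # True counts as 1, False as 0
    return {group: bought[group] / total[group] for group in total}

# Test one scenario after another: which split best predicts buying?
print("by income: ", buy_rate_by(0))  # {'high': 0.75, 'low': 0.25}
print("by leaning:", buy_rate_by(1))  # {'left': 0.25, 'right': 0.75}

# A real system would keep the splits with the biggest spread, subdivide
# further, and repeat millions of times against far richer data.
```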
As these machine algorithms become more reliable in their predictions, we bring in the most frightening piece of the puzzle.
Advertisers are not just selling products
Those who want their information heard can “boost” a post, paying for it to be displayed on social media so it gets more airtime, for example.
Politics, social issues, health issues, pharmaceuticals: advocates in all of these arenas now use algorithms for signal boosting.
Things that were designed to tell us what to buy are now capable of telling us how to think.
Just as an AI can create a bubble that will surround you with advertisements for a camper van, these AIs can create thought and information bubbles, pushing people further to the left and right on any issue.
Anyone who has read my books knows that I believe we live in a world caught in the midst of a spiritual war. As these separations between camps become more pronounced, we can feel as if that war is more immediate, more intense, and at the same time more confusing than ever. Which side are we on? Are there two sides?
The bots push us towards the belief that the other side is evil, or stupid, or just unthinking. The people on our side (coincidentally also the ones who are paying for the advertisements we are being shown) are intelligent, well informed, and on the side of good.
Are computers amoral?
Computers don’t have souls. Neither do corporations, right?
Computers are given objectives:
- Make more money for advertisers.
- Sell more advertising.
- Identify sources of income.
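As a toy illustration (the strategy names and numbers are invented, echoing the game story above), here is what such an objective looks like when reduced to code. Notice that nothing in it mentions honesty or the user’s well-being:

```python
# Hypothetical strategies and the revenue each is predicted to earn.
strategies = {
    "honest product page": 1.0,
    "buy-now button over the play button": 1.4,
    "pay-to-win level design": 2.2,
}

# The whole objective the machine is given: revenue, nothing else.
best = max(strategies, key=strategies.get)
print(best)  # -> pay-to-win level design

# Nothing in the objective penalizes manipulation,
# so nothing stops the machine from choosing it.
```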
Whether programmers intended these objectives to carry any moral weight or not, I believe that morality — or immorality — has begun to creep in.
And not in a good way.
Absolute power corrupts — absolutely.
Corporations that are seeing stunning levels of success are not motivated to rein in the algorithms.
I don’t see demons behind every bush, but lately I’ve begun to wonder what is looking out at me as I interact with my phone, my computer, the digital products in my life.
We have monitors on our homes — for safety. We have monitors on our phones — for ease of work. I don’t even want to think about the monitors that are part of our infrastructure: power usage, traffic patterns, etc. [I was stunned recently to learn that the traffic flow strips in my area (those black strips road workers lay across the lanes) were tracking the cell phones that passed over them. Not tracking individuals, but tracking the phones in the cars passing over one stretch of road, so that whoever collected the data knew where those cars were headed.]
Seven deadly sins
Morality or immorality means different things to different people. Just to pick semi-neutral, classic ground, let’s use the list of “seven deadly sins” as a reference. Here’s that list as a refresher:
- lust
- gluttony
- greed
- sloth
- wrath
- envy
- pride
What do you think?
As you interact with the devices in your life today, ask yourself: what behaviors are they encouraging?
Have our devices been infected by evil entities that roam the Earth?
Evil has always found fertile ground in humanity. Is it possible that these same forces of darkness have found even more fertile ground amongst the algorithms and artificial intelligences that humanity has created?
Additional Reading:
https://www.wired.com/story/ai-generated-text-is-the-scariest-deepfake-of-all/
https://www.wired.com/story/fight-to-define-when-ai-is-high-risk/
Whoa, your report is intense. The documentary “The Social Dilemma” corroborates the warnings you provide. Ultimately, social media users are being tracked in an effort to boost the trackers’ profits, despite the costs and risks to the users. Folks are being manipulated through their likes and pushed down rabbit holes that appear, at the outset, to make them happy or comfortable, to provide fun, or to group them with like-minded folks. Algorithms are designed to track and manipulate. Users need to be aware. Play in their playground and enjoy the games they offer, but know that propaganda and marketing, in the manner used today, are powerful tools.
Thanks, Bonnie. Yes, it is a scary time. I believe that in time we will learn how to manage this dangerous technology, but for now…we’re at a stage where we need to be cautious. We definitely live in interesting times!