A few years ago Erika invited a horse trainer to teach her how to train our horses to be more cooperative. I witnessed some of his training sessions and learned enough to try using the same techniques on our horses afterwards. I won’t bore you with all the details, but much of what we learned is directly relevant to Facebook’s relationship with its users.
Our trainer pointed out that the goal of training is to get the horse to do things that benefit us, despite the fact that the horse has no natural inclination to do these things because they would be of no benefit to a horse in its ancestral environment. He pointed out how we can use the horse’s natural inclinations to teach it to do things that we want it to do. For example, we learned how to position ourselves in a circular arena in such a way that the horse would continuously trot around the inside edge of the arena to avoid having us in a location that made it nervous, and then reposition ourselves in such a way that the horse would stop or fluidly reverse course and trot in the opposite direction. Developing this trotting behavior benefitted us because Philip would be competing in horse events requiring sustained arena trotting. It just so happens that the exercise also benefitted the horse.
When it comes to Facebook, and by extension much more of the online world, we users are in the position of the horse. Most of the websites that most of us visit are designed at least partly to get us to do things that benefit the owners of the website, some of which we would otherwise not be inclined to do. Shoshana Zuboff’s book, The Age of Surveillance Capitalism, is again the most thorough examination of how the web came to work this way. You can find brief introductions to her work at many places on the web, including here.
But the critical issue with Facebook — and most other websites or platforms that offer you internet services for “free” — is not just that the company is trying to manipulate you, but that the manipulation works only if you are kept ignorant of what is happening to you. We are all very familiar with some of the manipulative methods used by older forms of media, such as pictures of rugged cowboys smoking cigarettes while riding their horses on the range or shapely young women in bikinis draped over cars. It doesn’t take a genius to figure out the implicit message being sent. These forms of influence may be crass and tasteless, but most of us don’t regard them as a threat because we can tell we are being manipulated, even if in the end we do what the advertisers want. Likewise, few of us are likely to be horrified by news sites loaded with articles with headlines like, “Newly-released Video May Show Trump Propositioning Mrs. Putin,” because we have been seeing that kind of clickbait in grocery store check-out lines for decades.
What if you are no more aware of how you are being manipulated by Facebook than my horse was when I got it to continue trotting in a circle? What if cues you are receiving in the design, layout, graphics, workflow, and interactions with Facebook and its other users are meant to provoke behaviors that you would never in your wildest dreams imagine could be stimulated by those cues? Well, there is no “what if.” It happens online all the time. For comparison, consider the use of color and other atmospheric modifications in restaurants to stimulate hunger in the patrons so they will order more food. The difference with online platforms like Facebook is that they are also collecting volumes of data on your behavior and using that data to customize your online environment, in the expectation that you will be motivated to do things like buy items advertised on the site, click on links that take you to places on the web favored by advertisers, or do things in your real life that will increase your exposure to products favored by advertisers.
Google had discovered the power of using data culled from users of its websites to manipulate their behavior before Facebook even existed. I say this at least partly to stop you from speculating that I have some personal grudge against Facebook, but also because lessening exposure to Facebook doesn’t solve the problem. It happens across the web now. And it is fundamentally immoral and should be illegal.
If that strikes you as an extreme reaction, maybe you need to consider in more detail how this works. Let’s say that you are a consultant with Cambridge Analytica, to take just one known example, and using data obtained from Facebook you determine that there is a class of potential voters with the following online behavioral characteristics:
1.) when perusing their timeline, they are almost guaranteed to like or comment on any post from friends that is less than 2 days old, and almost guaranteed not to view any posts older than that;
2.) when commenting, they are almost guaranteed to write no more than 2 sentences and to use an emoji or accompanying image over 60% of the time;
3.) they tend to browse their “People You May Know” list at least once per week; and
4.) when browsing that list, they are almost guaranteed to a.) send a friend request to at least one person in the list with whom they already have 2 or more mutual friends and b.) have a list large enough to span at least 3 pages and browse through at least the first 2 pages.
Furthermore, you learn that this class of users is 63% more likely to get to a polling place and vote for the candidate Cambridge Analytica wants to promote if, for at least two months before the election, they are shown a set of advertisements and links to online content promoting anti-bacterial lotions and sprays or female country-music artists, placed on their Facebook timelines between 5:00-6:30 AM or 9:45-11:00 PM. These promotions must also occur in combination with other online content attacking Nancy Pelosi, the Federal Reserve Board, or the Walt Disney Company, placed in such a way that the targeted users are likely to click on at least 75% of the content and links provided, based on their color preferences, mouse-moving habits, and instant reactions to specific topics and themes.
Even this example is too generic. For you personally, they also know that you have a persistent habit of over-clicking (for example, clicking a link more times than is necessary to follow it), are more likely to click on links highlighted in dark purple than in other dark colors, and spend on average less than 2 minutes away from Facebook after clicking a link before returning and performing further actions on Facebook. They have already learned, from cross-sectional studies of people exhibiting this combination of behaviors, how to tailor the presentation of the content mentioned in the previous paragraph to increase the odds of your voting for their desired candidate. All they need now is sufficient resources to make enough changes to the appearance of your timeline to get the results they desire. After all, this is what they are paying Facebook for, and without that money Facebook wouldn’t exist.
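To make the mechanics concrete, here is a minimal, entirely hypothetical sketch of what a behavioral-segmentation rule of this kind might look like as code. Every field name, threshold, and time window below is invented for illustration, loosely echoing the made-up numbers in the last two paragraphs; it is not anyone’s actual targeting code, only the general shape such logic takes.

```python
# Hypothetical illustration only: a toy behavioral-segmentation rule.
# All field names, thresholds, and content themes are invented.
from dataclasses import dataclass


@dataclass
class UserBehavior:
    recent_post_engagement_rate: float  # share of friends' posts under 2 days old that get a like/comment
    old_post_view_rate: float           # share of posts older than 2 days that get viewed at all
    emoji_comment_rate: float           # share of comments that include an emoji or image
    pymk_visits_per_week: float         # visits per week to the "People You May Know" list
    mutual_friend_requests: int         # friend requests sent to people with 2+ mutual friends
    pymk_pages_browsed: int             # pages of the suggestion list typically browsed


def in_target_segment(u: UserBehavior) -> bool:
    """Return True if the user matches the (invented) behavioral profile."""
    return (
        u.recent_post_engagement_rate > 0.90
        and u.old_post_view_rate < 0.10
        and u.emoji_comment_rate > 0.60
        and u.pymk_visits_per_week >= 1
        and u.mutual_friend_requests >= 1
        and u.pymk_pages_browsed >= 2
    )


def promotion_schedule(u: UserBehavior) -> list[tuple[str, str]]:
    """For a matching user, pair each early-morning or late-evening time window
    with a content theme to place on their timeline; otherwise do nothing."""
    if not in_target_segment(u):
        return []
    windows = ["05:00-06:30", "21:45-23:00"]
    themes = [
        "anti-bacterial lotions and sprays",
        "female country-music artists",
        "attack piece on a designated target",
    ]
    return [(w, t) for w in windows for t in themes]
```

The point is not the particular numbers. It is that once your behavior has been reduced to a handful of measurable signals, deciding who gets shown what, and when, takes only a few unremarkable lines of code.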
In my case, at least, I have nowhere near the level of self-awareness necessary to recognize that these types of subtle cues could be manipulating my behavior. In fact, it is entirely possible that at least some groups with access to Facebook data were attempting to manipulate me into getting off Facebook! Certainly they wouldn’t tell Facebook that this is what they were using the data they paid for to do, but I can imagine certain right-wing groups with that data considering me a lost cause, wanting to limit the exposure of my ideas on the platform, and placing content on my timeline to chase me off it. No doubt Facebook has ways to catch and limit this kind of customer behavior; they are, after all, still in business.
Regardless, why should I expose myself to an environment where I am in effect a lab rat? Where I participate in a game in which not only do I not know the goal or the rules but I don’t even know what game is being played? If you have a good answer to this question you are welcome to post a comment. It had better be a very good answer if you want it to change my mind about Facebook.