The last two or three years have shown how powerful social media has become in our society. We’ve all been shocked to see how quickly and deeply Twitter, Facebook, and similar tools have worked their way into our individual lives and our culture at large. From a neuroscience perspective, however, it’s not surprising at all.
Our brains crave novelty. Discovering something new and interesting triggers a happy little reward circuit, and social media is a never-ending stream of interesting things. Our brains are also wired to make social information one of our highest priorities, so having a never-ending stream of new and interesting social content at our fingertips is naturally extremely engaging.
The surprise then isn’t that social media has had a large influence on us, but rather that we didn’t see it coming. (Actually, some did: I heard David Thornburg say this almost twenty years ago: “People keep saying we’re in the Information Age. We’re not. We’ve passed through the Information Age into the Communication Age. We need to understand the difference.”) What also shouldn’t be surprising is that there are people, businesses, and organizations that have become masterful at using neuroscience to create social media that is very powerfully manipulative.
We’ve all seen “clickbait” titles that prey on our curiosity, like You won’t believe what happened next! While annoying, these are mainly harmless tricks in a media world competing for your attention. In the last few years, however, we’ve seen more and more items like Senator Belfry Advocates Eating Live Puppies, where the goal is to trigger a fearful and/or outraged response – and widespread sharing.
If you’re seeking to manipulate people, outrage and fear are very helpful tools. When these are strong enough, they trigger your emotional system to a level that significantly impedes your ability to think. You go into a reactive mode rather than a reflective one, and you feel like you just have to do something. People can be motivated to say things and act in ways they normally wouldn’t. (We’ve all had the experience of being in an argument and saying or doing something that we not only regretted, but that afterwards didn’t even make sense to us.)
If you are a regular user of Facebook or Twitter, think about how many of the messages you see that are framed to be upsetting. Many political messages in particular are built around outrage or fear. Even more troubling, many of the groups or individuals who create this content have figured out that it simply doesn’t matter if it’s true or not – people will click on it, read it, and share it regardless. A visit to www.snopes.com to read the Hot 50 stories will show how prevalent false stories are. (As of the time I’m writing this, only nine of the “Hot 50” articles are actually true.) We have a situation where there is a strong financial return for creating upsetting stories that feed into people’s pre-existing biases, and it’s a lot easier and cheaper to make up stories than to research real ones.
And the situation will only get worse. This posting was inspired by a tweet from Michelle Zimmerman (@mrzphd) about the subject of a presentation by writer Zeynep Tufekci (@zeynep):
Every time you use Machine Learning, you don’t know exactly what it’s doing. YouTube has found out that finding more extreme content keeps people engaged, like conspiracy, extreme politics.
Our social media and news feeds are increasingly managed by artificial intelligence and machine learning systems, which constantly monitor what gets and holds attention best. They combine the big data of the whole population of users with our own individual patterns to fine-tune exactly what should keep us most engaged. This AI doesn’t care about the nature of the content, whether it’s true, or whether it’s good or bad for you or society at large. It only cares about your attention. And AI will get more powerful at doing this every single day.
I want to be clear that I’m not saying social media and AI are bad. After all, this is a social media post inspired by other social media posts, and AI is an exciting topic for students and responsible for some amazingly positive things. (Michelle Zimmerman, mentioned above, has written a book for ISTE on the topic – http://iste.org/TeachAI.) What I am saying, however, is that part of digital literacy for our students has to be learning how the manipulative power of social media can be used against them, and how they can use that knowledge both to protect themselves and to avoid becoming manipulators.
What we should do is stop teaching social-emotional learning and digital literacy as separate topics. For many (if not most) of our students, their social-emotional lives and their smartphones are inseparable. Many SEL programs and digital literacy programs already touch on this, but I think the last few years have demonstrated that it needs to be emphasized much more strongly. When we teach students about social-emotional learning, part of their learning needs to be the application of this knowledge to their use of digital media. If they understand how social media sources use their emotions to manipulate them, they can be better equipped to resist it. Likewise, they can hopefully use this knowledge to make informed, more positive decisions about what they post and share themselves, and be part of the solution rather than part of the problem.
Our students will spend their entire lives in a world where social media and machine learning are ubiquitous. We need to help them gain control over these technologies, so it's not the other way around.