Summary: Learn how we make associations between what we scan and your account, and why posts are associated with you and not someone else.
Do you know the game Six Degrees of Kevin Bacon, also known as “Bacon’s Law”? It’s based on the “six degrees of separation” concept, which asserts that any two people on Earth are six or fewer acquaintance links apart. The Kevin Bacon reference turns it into a fun game where you use movies to try to find the shortest path between a random actor and Kevin Bacon. It’s quite amusing.
So why am I bringing this up? It’s a good analogy for explaining associations. We get many questions at Social Sentinel about how we make associations between what we scan and your account, and why posts are associated with you and not someone else. By definition, an association is how the author of a social media post containing one or more particular terms is connected to a client; but when you put this in the context of scanning around 1 billion data points per day, it requires a little more explanation. We can dig into it technically ad nauseam because it’s what we do and love, or we can play Six Degrees of Kevin Bacon.
In this exercise, imagine your school is The Kevin Bacon University (probably not Animal House, but whatever speaks to you; go nuts, it’s your scenario). And since Twitter is one of the most voluminous forms of communication on social media, we’ll use that platform in our example. (Twitter produces approximately 500 million posts per day, or about 6,000 posts per second.) Let’s use, say, @baconuniversity as the handle in our game.
First, there are the basics of association that apply to every social media platform we scan. They include geography, demographic information, school details like address, mascot, building names, etc. But these alone aren’t enough to make an association. We need to connect the dots between your account information and the tons of social media posts we scan every day to make the most relevant associations possible. Remember: we aren’t trying to serve up what’s being said about your school in general; we are finding that needle in the digital haystack that could be a potential threat or cry for help associated with your school.
First, meet “Jimmy,” a freshman heading off to KBU, ready to study music and acting. To be sure he doesn’t miss a thing happening on campus, he follows @baconuniversity, the school’s public Twitter handle, and goes all in on retweets, likes, and tagging everything about KBU. None of those posts are delivered to you as an Alert or Discussion, because nothing in them contains language that would escalate a post via the Social Sentinel model; there’s no threat or cry for help in anything he’s posting.

Fast forward two years. Jimmy’s now a junior, and things have been tough. He’s struggled with academics and relationships, and he’s feeling very alone. The girlfriend he met freshman year, whom he also follows on Twitter, has left him. The posts he once liked and contributed to @baconuniversity he now finds to be a trigger for his depression and frustration.
One day, Jimmy discovers his former girlfriend is dating his best college friend, and he tweets: “I hate this place and everyone here. To everyone who treated me badly - you’ll regret it.” Now, an association is made because:
1) Threatening language was used that matches words and phrases in our Library
2) The Tweet was public
3) It was linked to the University because Jimmy is a follower of the school’s Twitter handle.
The result? An alert is delivered to your team, providing an opportunity to reach out and possibly intervene.
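For the technically curious, the three conditions above can be sketched as a simple check. This is a hypothetical toy model: the names (`SCHOOL_HANDLE`, `THREAT_LIBRARY`, the `Post` fields) and the two-phrase stand-in Library are made up for illustration, and this is not how our scanning actually works under the hood.

```python
# A toy sketch of the three-part association check described above.
# SCHOOL_HANDLE, THREAT_LIBRARY, and the Post fields are illustrative
# assumptions, not the actual Social Sentinel implementation.
from dataclasses import dataclass, field

SCHOOL_HANDLE = "@baconuniversity"                        # the client's public handle
THREAT_LIBRARY = {"you'll regret it", "hate this place"}  # tiny stand-in for the Library

@dataclass
class Post:
    author: str
    text: str
    is_public: bool
    follows: set = field(default_factory=set)  # handles the author follows

def matches_library(text: str) -> bool:
    """Does the post contain any phrase from the (toy) Library?"""
    lowered = text.lower()
    return any(phrase in lowered for phrase in THREAT_LIBRARY)

def should_alert(post: Post) -> bool:
    """All three association conditions from the article must hold."""
    return (
        matches_library(post.text)         # 1) language matches the Library
        and post.is_public                 # 2) the post is public
        and SCHOOL_HANDLE in post.follows  # 3) author follows the school's handle
    )

jimmy = Post(
    author="jimmy",
    text="I hate this place and everyone here. "
         "To everyone who treated me badly - you'll regret it.",
    is_public=True,
    follows={SCHOOL_HANDLE},
)
print(should_alert(jimmy))  # True -> an alert is delivered
```

Flip any one of the three conditions and the toy check returns False, which mirrors the all-or-nothing nature of an association.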
Associations can also be made via hashtags. These can include popular hashtags already being used in your school community or newly generated ones, and your account manager can add them to your account for association. To protect privacy, free speech, and the right to assemble, there are limitations on which hashtags can be applied. Generally speaking, hashtag associations broaden the scope of scanned content to deliver yet another layer of threat awareness.
For instance, if someone uses the hashtag #kbulife and tweets “Stay away from Adams Hall today unless you want to go up in flames #kbulife”, Social Sentinel would make an association between the hashtag, your account, threatening language, and the tweet being public.
Hashtags are practically limitless, so to apply the ones that make the most sense for your account, we think locally and in the vernacular.
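The hashtag path can be sketched the same way. Again, this is a hypothetical illustration: the hashtag set, the threat phrases, and the helper names are invented for the example, not pulled from real account configuration.

```python
# A toy sketch of hashtag-based association. ACCOUNT_HASHTAGS and
# THREAT_PHRASES are illustrative assumptions, not real configuration.
import re

ACCOUNT_HASHTAGS = {"#kbulife"}   # curated with your account manager
THREAT_PHRASES = {"go up in flames", "stay away from"}

def extract_hashtags(text: str) -> set:
    """Pull every #hashtag out of the post text."""
    return {tag.lower() for tag in re.findall(r"#\w+", text)}

def hashtag_association(text: str, is_public: bool) -> bool:
    """Public post + an account hashtag + threatening language = association."""
    lowered = text.lower()
    has_account_tag = bool(extract_hashtags(text) & ACCOUNT_HASHTAGS)
    has_threat = any(phrase in lowered for phrase in THREAT_PHRASES)
    return is_public and has_account_tag and has_threat

tweet = "Stay away from Adams Hall today unless you want to go up in flames #kbulife"
print(hashtag_association(tweet, is_public=True))  # True
```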
Now, what about the rare instances where it seems like an alert SHOULD have been delivered, but wasn’t? Let’s explore a scenario:
Say our friend “Kevin” posts on Twitter, “I am literally going to buy a gun & shoot everyone if we lose one more game.” The post isn’t flagged as an alert, even though Kevin goes to the school that uses Social Sentinel. Here’s why:
- Kevin does NOT follow any school Twitter handles
- In his post, Kevin doesn’t identify the school through tagging, keywords, or hashtags
- Geolocation settings are NOT turned on (note: less than 5% of all Twitter users turn on their location settings)
In this scenario, we would not deliver an alert because we have no association method. It’s a fine line between a solid association and no association at all. However, if another person who followed the school’s handle retweeted this post and tagged the school, it would be flagged. We proudly maintain privacy boundaries at all costs, which means we will not and do not follow or surveil specific users. As always, if you have questions about what is in scope for scanning, please ask.
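Kevin’s scenario can be sketched as a search for any association method at all. As before, every name and field here is an illustrative assumption made up for this post, not our real system: threatening language alone, with no link back to the school, yields an empty list and therefore no alert.

```python
# A toy sketch of why Kevin's post produces no alert: threatening language
# alone is not enough without at least one association method. All names
# and fields here are illustrative assumptions.
from dataclasses import dataclass, field

SCHOOL_HANDLE = "@baconuniversity"
SCHOOL_TERMS = {"kbu", "kevin bacon university"}   # tags/keywords for the school
ACCOUNT_HASHTAGS = {"#kbulife"}

@dataclass
class Post:
    text: str
    follows: set = field(default_factory=set)  # handles the author follows
    geo_enabled: bool = False                  # <5% of Twitter users turn this on

def association_methods(post: Post) -> list:
    """Return whichever association methods (if any) link the post to the school."""
    lowered = post.text.lower()
    methods = []
    if SCHOOL_HANDLE in post.follows:
        methods.append("follows school handle")
    if any(term in lowered for term in SCHOOL_TERMS):
        methods.append("school tag/keyword")
    if any(tag in lowered for tag in ACCOUNT_HASHTAGS):
        methods.append("account hashtag")
    if post.geo_enabled:
        methods.append("geolocation")
    return methods

kevin = Post(text="I am literally going to buy a gun & shoot everyone if we lose one more game.")
print(association_methods(kevin))  # [] -> no association, so no alert
```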
It’s our job to keep your account continually updated, so we have a current list of associations in your profile. Can’t remember which associations you have? Your account management team is here to review every handle, hashtag, and keyword in your account.
One last reminder about our friend Jimmy: from his very first day at KBU he followed @baconuniversity and went all in on retweets, likes, and tagging everything about campus, yet none of those everyday posts were ever delivered to you as an Alert or Discussion. Why? Because there was no language that would escalate them via the Social Sentinel model. No threat, no cry for help, no alert.