By now, you’re likely to have heard of the great ‘Twitter purge’ that has taken place in the last few months
Over the course of May, June and July, the social networking site is believed to have suspended an incredible 70 million fake and inactive accounts – sending its stock price plummeting by 20% and wiping £3.8bn off its market value.
Of course, this is bad news for anyone with stakes in the website, but as the company announced its latest move was in the long term interest of the platform, it could well prove to be a huge step forward in the avoidance of online scams.
The problem doesn’t only exist on Twitter, but all over the internet: dating sites, social media, any platform which facilitates communication between relative strangers is prone to scammers. So far in 2018, surveys estimate that a total of $64,597,734 has been lost to online scams – and there’s still plenty of time left for that number to keep rising.
Within that total, ‘Dating & Romance’ sites are the second-largest victims online – having lost $13,058,384 since the beginning of the year. It’s clearly a huge problem in the industry, and for anyone tasked with ensuring the safety of a vast user base, action is needed now more than ever. 75% of businesses are calling for advanced authentication and security measures that have little or no impact on the digital customer experience – but what is the solution?
Across the internet, companies are stuck trying to manually tackle questionable accounts which prevent the growth of their business and hinder the experience of genuine users. It’s time consuming, costly, and, quite frankly, ineffective.
The traditional problem that we’ve had with fraud detection is that the software on offer just isn’t sophisticated enough to tackle the growing number of threats. Getting around a filtering system is simply too easy. Take the word London as an example; if the software blocks it, the likes of L0nd0n, london and L-o-n-d-o-n can still get through with no trouble. While these filters will already be built into your email and websites – or wherever scammers might look to infiltrate – they pose few genuine obstacles to fraudulent activity.
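To see why such filters are so easy to evade, here is a minimal sketch in Python. The blocklist, the substitution table and both functions are illustrative assumptions, not any real filtering product: a naive word match misses `L0nd0n`, while even basic normalisation catches it.

```python
import re

BLOCKED = {"london"}  # illustrative one-word blocklist

def naive_filter(text):
    # Naive approach: exact (case-insensitive) word match against the blocklist.
    return any(word.lower() in BLOCKED for word in text.split())

def normalised_filter(text):
    # Hardened sketch: strip separators and map common digit-for-letter
    # substitutions before comparing against the blocklist.
    subs = str.maketrans({"0": "o", "1": "i", "3": "e", "5": "s"})
    cleaned = re.sub(r"[^a-z0-9]", "", text.lower()).translate(subs)
    return any(word in cleaned for word in BLOCKED)

print(naive_filter("Meet me in L0nd0n"))       # False - the evasion slips through
print(normalised_filter("Meet me in L0nd0n"))  # True  - caught after normalisation
print(normalised_filter("L-o-n-d-o-n"))        # True  - separators stripped first
```

Even this hardened version only raises the bar slightly; scammers simply move on to substitutions the table doesn’t cover, which is why rule-based filtering alone keeps losing the arms race.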
Unreliable Community Screening
Community screening is another option that’s been explored (unsuccessfully) in fighting scams. Applied mostly to email, it has users ‘flag’ certain messages they’ve received, which then blocks that mail from all other mailboxes. In theory, the screening should be a good solution, but what’s often found is that legitimate mail finds its way into the spam folder – and it’s easier to delete a few unwanted pieces of clutter than to trawl through hundreds of scam messages in search of one or two genuine emails.
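The mechanism – and its weakness – can be sketched as a simple flag counter. The threshold value and sender names here are hypothetical; the point is that once enough users flag a sender, everything from that sender is blocked for everyone, legitimate or not.

```python
from collections import Counter

FLAG_THRESHOLD = 5  # hypothetical cut-off: flags needed before a global block

flags = Counter()
blocked = set()

def flag_message(sender):
    # One user flags a message; once enough users agree, the sender is
    # blocked for every mailbox - including any legitimate mail they send.
    flags[sender] += 1
    if flags[sender] >= FLAG_THRESHOLD:
        blocked.add(sender)

# Five annoyed recipients flag a perfectly legitimate newsletter:
for _ in range(5):
    flag_message("newsletter@example.com")

print("newsletter@example.com" in blocked)  # True - a false positive blocks real mail too
```

There is no notion of who is doing the flagging or why, which is exactly how genuine senders end up in everyone’s spam folder.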
Challenge Response Verification
You’ve undoubtedly seen these before; a tedious message asking you to ‘prove you’re human’, before setting you a task of clicking on four pictures of trees to ensure you are a genuine user.
This method is somewhat effective, but any scammer that’s switched on will realise that once they’re beyond that point, they can do more or less whatever they please when it comes to scamming.
In short, manual software isn’t good enough…
The Answer Is Automation
Automated, real-time detection of fake scam accounts can play a huge role in removing scammers from websites, allowing the opportunity to save both time and money whilst improving a brand’s reputation.
Platforms today can help produce anti-fraud solutions that help businesses moderate scammers automatically – freeing up their business to grow and flourish in a space filled by genuine users. These platforms can offer solutions for businesses to tackle online fraud and help increase security, eradicate questionable accounts and ultimately prevent scammers.
Machine Learning & Real-Time Detection
The use of machine learning technology significantly enhances the automatic detection of fraudulent activity. It differentiates between custom scammer signals specific to your platform and scammer trends emerging on a global basis, meaning that scammers are caught out and removed from a platform faster than ever before.
Working alongside machine learning is real-time detection. An API can now respond with a score that allows your system to detect fraud immediately and remove these users automatically – even without moderators. These two features working in tandem mean that your website is constantly kept up to date and, more importantly, kept free of unwanted accounts.
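The score-then-act loop can be sketched in a few lines. Everything here is an assumption for illustration – the threshold, the profile fields and the stubbed scoring call stand in for whichever vendor API and removal hook your platform actually uses:

```python
SCORE_THRESHOLD = 80  # assumed: risk scores at or above this trigger removal

def moderate(profile, score_api, remove_account):
    # score_api stands in for the vendor's real-time scoring endpoint;
    # remove_account is your platform's own removal hook. No human
    # moderator is in the loop - the score alone drives the action.
    score = score_api(profile)
    if score >= SCORE_THRESHOLD:
        remove_account(profile["id"])
        return "removed"
    return "kept"

# Usage with a stubbed scoring call (a real call would be an HTTP request):
removed = []
result = moderate({"id": 42, "bio": "..."},
                  score_api=lambda p: 95,      # stub: API reports high risk
                  remove_account=removed.append)
print(result, removed)  # removed [42]
```

The design point is that the scoring service only returns a number; the decision to remove, quarantine or escalate stays in your own code, so the threshold can be tuned to your appetite for false positives.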
Keeping Scammers Away
It’s all well and good being able to remove fraudulent accounts, but keeping them out is another matter. Once automatic detection has taken place, access to a blacklist of scammer profiles and network data can help ensure that anyone previously detected fails to make their way back into your vast database of users – even if they sign up to your site as a completely new account.
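A minimal sketch of that signup-time check, assuming a blacklist of known scammer emails and network addresses (all values here are illustrative placeholders):

```python
# Illustrative blacklists - real systems share these across many sites.
blacklisted_emails = {"scammer@example.com"}
blacklisted_ips = {"203.0.113.7"}

def allow_signup(email, ip):
    # Reject the signup if either the email or the originating IP has
    # previously been tied to a detected scammer.
    if email.lower() in blacklisted_emails:
        return False
    if ip in blacklisted_ips:
        return False
    return True

print(allow_signup("scammer@example.com", "198.51.100.1"))   # False - known profile
print(allow_signup("new.user@example.com", "203.0.113.7"))   # False - known network
print(allow_signup("new.user@example.com", "198.51.100.1"))  # True  - clean signup
```

The network check is what catches the “completely new” account: a scammer can mint a fresh email in seconds, but recycling infrastructure that has already been flagged is much harder to avoid.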
If that isn’t enough, image recognition software can also play a huge role in keeping scam accounts out. It can immediately cross reference photos against millions of blacklisted images worldwide – greatly enhancing the quest for authenticity.
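One common way to do that cross-referencing is a perceptual hash, which makes near-identical images (re-encoded, resized, lightly edited) hash alike. The toy “average hash” below works on a tiny greyscale grid purely for illustration – production systems use proper libraries over real image files:

```python
# Sketch of photo cross-referencing via a perceptual "average hash".
def average_hash(pixels):
    # pixels: a small greyscale grid (list of rows of 0-255 values).
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    # Each bit records whether that pixel is brighter than the mean,
    # so minor edits barely change the hash.
    return "".join("1" if p > mean else "0" for p in flat)

blacklisted_hashes = {average_hash([[200, 200], [10, 10]])}

def photo_is_blacklisted(pixels, max_differing_bits=1):
    h = average_hash(pixels)
    # Allow a small Hamming distance so near-duplicates still match.
    return any(sum(a != b for a, b in zip(h, bad)) <= max_differing_bits
               for bad in blacklisted_hashes)

print(photo_is_blacklisted([[190, 210], [5, 20]]))   # True  - near-duplicate caught
print(photo_is_blacklisted([[10, 200], [200, 10]]))  # False - genuinely different
```

Exact byte-level hashes would miss a photo that’s been re-saved or cropped; the fuzzy match is what lets one blacklisted image block all of its minor variants.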
But what if that isn’t enough? What if these accounts find a way past the machine learning software and real-time detection? What if they don’t appear on any blacklist and are using completely new images?
Text-pattern analysis can ensure that they won’t last long. At this point, we’re all too familiar with the way that scam accounts address new users online – that robotic, often poorly typed introduction – you know the drill.
Text-pattern analysis is able to detect this so-called ‘scammer grammar’ by comparing it against the grammar a typical local user would write; as soon as any of this formulaic language is spotted, they’re shown the exit door.
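At its simplest, this amounts to scoring a message against known stock phrases. The patterns below are invented for illustration – real systems learn their signals from labelled data rather than a hand-written list:

```python
import re

# Illustrative 'scammer grammar' patterns - assumptions, not real rules.
SCAMMER_PATTERNS = [
    r"\bdear\b.*\b(friend|beloved)\b",
    r"\bam\b.*\blooking for\b.*\b(serious|honest)\b.*\b(man|woman)\b",
    r"\bcontact me\b.*\b(private|personal)\b.*\bemail\b",
]

def scammer_grammar_score(message):
    # Count how many known stock phrases the message matches.
    text = message.lower()
    return sum(bool(re.search(p, text)) for p in SCAMMER_PATTERNS)

msg = "Dear friend, am looking for serious man, contact me on my private email"
print(scammer_grammar_score(msg))  # 3 - flagged on every pattern
print(scammer_grammar_score("Fancy grabbing a coffee on Saturday?"))  # 0
```

A score above some tuned threshold would feed into the same automated removal pipeline as the other signals, rather than acting as a hard block on its own.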
When you strip everything back, it’s simple. Twitter seems to be the first to have taken the plunge in culling its questionable accounts, and while in the short term this may look like a significant step back, the experience for genuine users – with no scams or fraud damaging their enjoyment of the site – will be a major boost to the network in the long run.
Stopping scammers may seem a long and arduous task, but automation turns it into a breeze.
About the Author
Nick Tsinonis is CEO at Scamalytics and an IT entrepreneur who set up South Africa’s Friends Reunited in 2003. He then founded a popular dating site called yesnomayB.com, which used machine learning to match users. Since 2008 Nick, together with co-founders, has set up Recsys.com and then Scamalytics.com to solve specific problems for dating and social networking sites using machine learning as a service. Scamalytics was designed to automatically detect scammers and fake profiles. Nick was instrumental in the design and commercialisation of these products.