Twitter VIP Account Hack Highlights The Danger Of Insider Threats
Most companies put a lot of effort into securing their network perimeters against remote attacks, but they don’t pay the same level of attention to threats that might originate inside their organisations. The attack earlier this week that resulted in the hijacking of Twitter accounts belonging to high-profile individuals and brands is a perfect example of the impact a malicious or duped insider, combined with poor privileged-access monitoring, could have on a business.
What happened in the Twitter hack?
On Wednesday, the Twitter accounts of business leaders, artists, politicians, and popular brands posted messages that instructed users to send bitcoins to an address as part of a cryptocurrency scam. Impacted accounts included those of Elon Musk, Bill Gates, Jeff Bezos, Barack Obama, Joe Biden, Kanye West, Kim Kardashian, Mike Bloomberg, Uber, Apple and even Twitter’s own official support account.
Attackers often impersonate celebrities on Twitter to post similar scam messages, but those campaigns are usually done with fake accounts with few followers. In this case, the rogue messages were posted from verified accounts, which have a checkmark next to their name and whose real identity has been verified by Twitter. This gave more credibility to the scam and allowed it to reach hundreds of millions of users instantly. It is estimated that the attackers earned around $120,000 as a result.
Twitter responded by temporarily suspending the ability of all verified accounts to post new messages and immediately launched an investigation. How could attackers gain access to so many accounts at once? They did it by compromising one or more Twitter employees who had access to an internal tool used to manage user accounts.
Some screenshots of the tool were posted on Twitter, but the company deleted them citing violations of its terms of service. The tool seems to allow Twitter employees to perform several privileged actions such as suspending accounts, blacklisting tweets, and even changing the email addresses associated with accounts, a feature the attackers abused to take over the accounts.
Motherboard cited two of the attackers who claimed they bribed a Twitter employee for access to the control panel. Twitter, however, said the compromise was the result of “a coordinated social engineering attack by people who successfully targeted some of our employees with access to internal systems and tools.”
“We believe the attackers targeted approximately 130 accounts in some way as part of the incident,” the company said via its support account. “For a small subset of these accounts, the attackers were able to gain control of the accounts and then send Tweets from those accounts.”
Since few details are available from the official investigation, it’s hard to say what exactly Twitter means by “social engineering.” The company could be using the term loosely to refer to anything from a phishing attack that resulted in the theft of employee credentials to attackers successfully bribing an employee. Both of these scenarios fall in the insider threat category but are different attack vectors — unwitting insider vs malicious insider — and require somewhat different preventive measures.
The term ‘unwitting insider’ generally refers to an employee who unintentionally provides access to an attacker due to a lapse in judgment or a lack of training. Examples include:
– an employee opening the door to a restricted area to help someone carrying a large package without actually checking if they have an access card or company ID,
– plugging a USB stick they found on the floor in the lobby or that was mailed to them into their work computer to check what’s on it,
– transferring money to a third-party after receiving a spoofed email from their manager without getting confirmation through a phone call,
– or clicking on a link in an email and inputting their username and password on a phishing site.
If this is the scenario with Twitter, a few questions arise:
- Was Twitter enforcing two-factor authentication (2FA) for their employees or did the attackers manage to bypass it, too?
- Was the support panel openly accessible from the internet, or did it require a VPN connection into Twitter’s secure network?
- If it was accessible from the internet, were there other checks in place such as checking the geographical location of the employee’s device or whether that device was used to access the tool in the past?
2FA implementations that rely on one-time use codes delivered via text messages or that are generated by mobile applications can be bypassed. Those codes can be phished, too, just like passwords, and some open-source frameworks automate this.
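To see why such codes are phishable, it helps to remember that a time-based one-time password is just a short number derived from a shared secret and the current time. The minimal RFC 6238 sketch below (standard-library Python, not any specific vendor's implementation) shows this: nothing stops a victim from typing the number into a phishing page, which can relay it to the real site within its validity window.

```python
import hashlib
import hmac
import struct
import time

def totp(secret, at=None, digits=6, step=30):
    """RFC 6238 time-based one-time password (HOTP over a time counter)."""
    counter = int((at if at is not None else time.time()) // step)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    # HOTP dynamic truncation (RFC 4226): pick 4 bytes based on the last nibble.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The code is valid for an entire 30-second window, so a phishing page
# that relays it to the real login form in real time defeats this 2FA.
print(totp(b"shared-secret-provisioned-at-enrolment"))
```

By contrast, a hardware security key signs a challenge bound to the site's origin, so there is no relayable number for a phishing page to capture.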
Organisations should opt for hardware authentication tokens (USB security keys) that plug into computers and communicate with websites directly through the browser to supply the authentication codes. If providing such tokens to all employees is too expensive, companies should at least consider making them mandatory for highly privileged accounts, such as administrators and managers. Twitter supports all three types of 2FA on its website, but it’s not clear which option, if any, it enforced for its employees.
“When it comes to the phishing aspect of it, [organisations] should make sure that the credentials used to log into their admin panels are secured by multi-factor authentication. Hopefully with some sort of tokenised hardware authentication; maybe something like a YubiKey for instance,” Rachel Tobac, CEO of security consultancy SocialProof Security and winner of social engineering contests in the past, tells CSO. “That would be really hard for me to attack as a social engineer because I’d have to do the attacking in person, which I tend to avoid. That’s why Google recommends using something like Google Titan or YubiKey for things like Gmail and your bank account if possible because those things cannot be phished.”
Social engineering prevention training is also critical, according to Tobac, and should be augmented with technical controls, such as limiting privileged access, 2FA, and insider threat detection software.
In the current global environment, with many employees forced to work from home due to the COVID-19 pandemic, relying on 2FA alone to secure remote access is not enough. Many workers perform their jobs from personal computers on unsecured home networks, so they are much more exposed to malware, attacks that exploit vulnerabilities in outdated software, and phishing campaigns that could result in a full system compromise. To address such risks, companies can look into web-based secure access gateways that follow zero-trust networking concepts and ensure users are granted access based not only on their location, but also on the identity of their devices and their security state.
These products collect and analyse a variety of information through the user’s browser or through a lightweight agent, including OS and software versions, running processes, basic malware scan results, and more, before granting access. However, to be truly effective, they also need to be backed by access policies that follow the least-privilege principles and role-based access controls.
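A zero-trust access decision of the kind described above amounts to a conjunction of identity and device-posture checks. The sketch below is illustrative; the field names and checks are assumptions, not any gateway product's actual logic.

```python
# Illustrative zero-trust access check: identity and device posture must
# all pass before access is granted (a simplified sketch, not any
# specific product's policy engine).
def grant_access(user, device):
    checks = [
        user["mfa_verified"],          # identity: second factor completed
        device["managed"],             # known corporate device
        device["os_patched"],          # OS and software up to date
        device["malware_scan_clean"],  # basic malware scan passed
    ]
    return all(checks)

user = {"mfa_verified": True}
personal_laptop = {"managed": False, "os_patched": True,
                   "malware_scan_clean": True}
print(grant_access(user, personal_laptop))  # prints False: unmanaged device
```

In a real deployment each check would be backed by telemetry from the browser or a lightweight agent, and the policy would also consult role-based access rules.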
Every employee should only have the minimum privileges inside an application or system that are required to perform their job. This type of control is also essential to protect against malicious insiders who might decide to abuse their access because they’re disgruntled, have taken a bribe, they were promised better employment by a competitor or whatever other reason.
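The least-privilege idea can be expressed as a simple role-to-permission mapping. The roles and actions below are hypothetical illustrations, not Twitter's actual permission model.

```python
# Minimal role-based access-control sketch (hypothetical roles and
# actions, chosen only to illustrate least privilege).
ROLE_PERMISSIONS = {
    "support_tier1": {"view_account", "reset_password"},
    "support_tier2": {"view_account", "reset_password", "suspend_account"},
    "admin":         {"view_account", "reset_password", "suspend_account",
                      "change_email"},
}

def is_allowed(role, action):
    return action in ROLE_PERMISSIONS.get(role, set())

# A first-line support agent cannot change an account's email address,
# so stealing that agent's credentials yields far less power:
assert not is_allowed("support_tier1", "change_email")
assert is_allowed("admin", "change_email")
```

The point is that the blast radius of a compromised or bribed account is bounded by its role, which is exactly what an overly permissive admin tool gives up.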
Since we don’t know the role of the Twitter employee whose account was misused, it’s hard to say whether their level of access was appropriate or whether the administration tool was overly permissive to begin with. The Twitter account of US President Donald Trump was not affected in this attack, reportedly because it has unique controls in place after it was temporarily deactivated in November 2017 by a Twitter support technician on his last day of work.
This suggests that for individual high-profile accounts, Twitter can and does enforce additional controls or levels of approval. The company could, for example, implement a system where actions like manually changing an account’s email address require a second person’s approval, or where an attempt to modify 130 verified accounts in a short amount of time automatically raises an alert.
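A rate-based alert of that kind takes only a few lines; the threshold and window below are illustrative assumptions, not values any real system is known to use.

```python
from collections import deque
import time

class BulkChangeMonitor:
    """Raise an alert when too many sensitive changes happen in a short
    window (illustrative thresholds, not a real product's defaults)."""

    def __init__(self, max_changes=5, window_seconds=600):
        self.max_changes = max_changes
        self.window = window_seconds
        self.events = deque()

    def record_change(self, now=None):
        now = now if now is not None else time.time()
        self.events.append(now)
        # Drop events that have aged out of the sliding window.
        while self.events and now - self.events[0] > self.window:
            self.events.popleft()
        return len(self.events) > self.max_changes  # True means: alert

monitor = BulkChangeMonitor(max_changes=5, window_seconds=600)
# Seven email changes within 60 seconds; the last two cross the threshold.
alerts = [monitor.record_change(now=t) for t in range(0, 70, 10)]
```

Mass-changing 130 accounts in an afternoon would trip such a check almost immediately, which is the scenario the quote below addresses.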
“We have also been taking aggressive steps to secure our systems while our investigations are ongoing,” Twitter said. “We’re still in the process of assessing longer-term steps that we may take and will share more details as soon as we can.”
Tackling malicious insiders
Dealing with malicious insiders is a complex problem where things like multi-factor authentication and device verification will not help because a disgruntled employee will have no problem overcoming such restrictions. Tackling this problem requires a layered approach that combines prevention capabilities with detection and response.
“You can’t rely on a single solution,” Mark Harris, Senior Research Director at Gartner in the UK, tells CSO. “You have to look at multiple different approaches, and it very much depends on what the internal system is and what the application is. You have to look at it not just once but have an adaptive response which means continuously monitoring what the threats are, what systems you’re using, and adapt your controls as a result.”
“Organisations need to consider in their threat modelling what happens when an administrator is compromised. And what controls they can put in place to detect that or to prevent it from having a big impact on their systems,” David Kennedy, the founder of security consulting firm TrustedSec and creator of the open-source Social-Engineer Toolkit (SET) says.
“Insider threats are really complex to fix and really complex to monitor against because you have someone who’s supposed to be working on behalf of your organisation and now they’ve gone rogue, so all of those [scenarios] need to be built into your overall threat model program, and what you’re looking at and how you adjust your controls based off that: role-based access controls and potentially a second level of verification – acting on a large number of accounts or specific individual accounts could require an approval process,” Kennedy says.
Establishing a baseline for workflows and employee behaviour and then using security analytics and user behaviour analytics to detect anomalies and deviations from that baseline can also be a good approach that complements preventive controls. This is one of the areas of security where the much-hyped machine learning shows some promise. However, there are many technologies in this market segment, and according to Harris, they can sometimes generate a lot of false positives, which means more work for security teams to investigate.
“Organisations should ideally move higher on the security maturity curve by switching from an approach where they buy and implement specific products or technologies to tackle particular types of threats, towards a security operations approach where they’re proactively searching for threats inside an organisation,” says Dan Panesar, director for UK and Ireland at Securonix. “What I mean by being proactive is using security analytics and user behaviour analytics to spot anomalies, and by anomalies, I mean behaviour that’s out of the normal.”
In the case of Twitter, this could have been used to learn what normal looks like: How often are email changes being performed on accounts? How often are support technicians accessing VIP accounts? All that information could have been correlated so that in the event of multiple accounts having email administration changes made in quick succession, it would look suspicious.
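As a toy illustration of such a baseline, a simple statistical check flags a day whose count of email changes sits far outside historical norms. The numbers below are invented for the example; real user-behaviour analytics model far richer features than a single daily count.

```python
import statistics

def is_anomalous(history, observed, threshold=3.0):
    """Flag an observation more than `threshold` standard deviations
    above the historical mean (a deliberately simple baseline)."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0  # avoid division by zero
    return (observed - mean) / stdev > threshold

# Hypothetical daily counts of email-address changes by support staff:
baseline = [2, 3, 1, 2, 4, 2, 3, 2, 1, 3]
print(is_anomalous(baseline, 130))  # prints True: 130 in a day stands out
print(is_anomalous(baseline, 3))    # prints False: within normal variation
```

Even this crude check shows why modifying scores of verified accounts in quick succession should look suspicious against any learned baseline.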
“That is where definitely machine learning and the security analytics piece really starts to build a very significant investment case compared to what can happen from a reputational, financial impact and regulatory perspective,” Panesar says.
Another important aspect, according to Harris, is logging all changes inside applications and systems. Not only because it provides an evidence trail if something happens, but also because potential malicious insiders can be discouraged if they know everything is tracked and audited. “There are a whole bunch of technologies and approaches that are the right thing,” Harris says. “But to me, it’s a case of not relying on one single technology and continuously looking at other controls that you need to put in place, as well as auditing, but not sitting still.”
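One common way to make such an audit trail tamper-evident is to hash-chain the entries, so that altering an earlier record breaks every hash that follows. The sketch below is illustrative, not any particular logging product.

```python
import hashlib
import json
import time

class AuditLog:
    """Append-only, hash-chained audit trail (illustrative sketch):
    each entry embeds the hash of the previous one, so tampering with
    an earlier record invalidates the rest of the chain."""

    def __init__(self):
        self.entries = []
        self.last_hash = "0" * 64  # genesis value for the first entry

    def record(self, actor, action, target):
        entry = {"ts": time.time(), "actor": actor, "action": action,
                 "target": target, "prev": self.last_hash}
        self.last_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        entry["hash"] = self.last_hash
        self.entries.append(entry)
        return entry

log = AuditLog()
log.record("support_agent_42", "change_email", "@example_account")
```

Beyond forensics, the knowledge that every privileged action lands in such a log is itself a deterrent for a would-be malicious insider.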