Far too many breaches occur due to the all-too-effective practice of social engineering. Just look at the recent breach at SANS, an organization renowned for its security training. It goes to show that any organization can find itself on the receiving end of a successful social engineering attack. Because social engineering targets people rather than technical assets, no organization is impenetrable.
Too often, social engineering attacks go unreported because they can be an extremely personal issue for those affected. Security teams may unintentionally shame recipients, and as a result, employees may be reluctant to admit they were compromised for fear of retribution. Being socially engineered can carry a stigma and have significant psychological ramifications. If an employee who has been socially engineered comes forward and is chastised for it, they will be even less willing to report future attempts. Traditional security organizations don't typically consider this aspect of social engineering when building a security program. Consequently, dealing with the human element, including how to ease the anxieties of employees who have been targeted, usually gets left out.
Organizations that do have some social engineering protections in place usually stop at step one: adding social engineering to the scope of a security assessment. They then rely on that benchmark annually or semi-regularly, in some cases without considering training and education programs, security controls, processes for identifying, containing, and eradicating a threat, and many other factors.
In this piece, we’ll review several things organizations can get wrong about social engineering and ways that they can reduce their risk of being compromised by it.
HOW ATTACKERS MANIPULATE AN ORGANIZATION’S EMPLOYEES
Security compromises can occur through a variety of means; targeting people is one of the more successful routes to compromise, as attackers often view an organization's people as its "weakest links." Most organizations invest heavily in technical controls and fail to think about their "soft" targets: their employees and contractors. Humans are subject to manipulation, want to be "team players" for their employer, and often genuinely wish to help others. Attackers exploit this behavior with bait, such as urging employees to share private details for a supposed work initiative or to click links that promise to help their team or company. In 2020, almost a quarter (22%) of all attacks included a social engineering component, with 40% of malware installed via malicious email links and 20% via email attachments (according to the 2020 Verizon DBIR report).
Additionally, financially motivated social engineering is on the rise: as organized criminal groups become more technically capable and proficient, their techniques and methodology advance as well. A timely example is some of the ransomware groups that have emerged in recent years. These groups often run their operations like nation-states or mature cybersecurity firms, using techniques reminiscent of APTs. Recently, a Tesla employee found himself the subject not just of social engineering (the attacker had previously attempted to get to know him), but of a $500,000 bribe in return for installing ransomware on Tesla's network. Had the attack succeeded, the attacker could have made millions from it. The Tesla example may in fact point to a new pattern in social engineering.
Because of the prevalence and evolution of social engineering attacks, organizations must address social engineering in their incident response plans and do their best to protect their employees from exploitation.
MOST ORGANIZATIONS ARE REACTIVE, NOT PROACTIVE
In my 15-plus years of security experience, I've seen that most organizations focus only on being reactive rather than adopting a proactive approach. This isn't an unreasonable mindset; being reactive is a natural response to how security is treated right now in our industry. There's a race to disclose new vulnerabilities, and organizations are left scrambling to remediate those risks. The downside, however, is that the lack of thorough planning leaves organizations unnecessarily at risk. After detecting a potential compromise, they will engage their incident response functions to contain and eradicate any attacker in their network, but they will often neglect to conduct an after-action review (AAR) of what happened, why it happened, and how to prevent it in the future.
Many organizations also overlook building robust social engineering assessment programs to proactively test their email, network, and endpoint controls as well as their employees. This can be due to a number of reasons, such as organizational maturity, the security team's level of experience, or a lack of knowledge of how attackers operate. One solution is to build comprehensive, full-spectrum security strategies that account for both people and technology; security depends on both, not simply one or the other.
Educating employees on social engineering techniques such as phishing, vishing, pretexting, and other variants is important, but the supporting processes that define when and how an employee should report are just as key. You want to remove all fear from an employee's mind about reporting a suspected social engineering attempt. Additionally, ensure that your strategy doesn't penalize users who have fallen for a social engineering attack, but educates them instead.
SOCIAL ENGINEERING DEFENSE BEST PRACTICES
Every organization will have its own definition of an acceptable level of risk. That risk level then becomes the foundation for security decisions and investments. Beyond employee training and education, organizations will want to focus on getting the basics right so that layers of controls make them more resilient even if their users are socially engineered.
Some examples include the following:
- Ensure your organization doesn't expose itself via open mail relays. Open mail relays are SMTP servers that allow third parties to relay email through them. They make email spoofing easier because unauthenticated senders can push mail through your infrastructure, and they make phishing harder to defend against because those messages appear legitimate to internal users. By implementing strict user authentication and IP authorization at the gateway, you can take this opportunity away from the attacker (a minimal relay probe is sketched after this list).
- Implement email filtering via security controls. Some email security controls can strip external attachments and links, which helps prevent users from opening malicious attachments or clicking links that lead to drive-by downloads. They can also label external emails with a designator such as [EXTERNAL] in the subject line and/or body, or add a colored warning banner across the message. This sort of filtering lowers the chance that an attacker can pretext a victim by posing as an internal user (the labeling logic is sketched after this list).
- Invest in social engineering security controls for email. Security controls such as Cofense PhishMe provide an email client plug-in called PhishMe Reporter. This plug-in allows an end user to submit a suspected phishing email for analysis and enables an organization's security operations center (SOC) to rapidly delete all occurrences of the offending email from user mailboxes to prevent further spread. Other security controls have similar capabilities and should be reviewed to see what works best for your organization.
- Understand your adversary. Knowing how attackers operate, and educating your defenders on those tactics, will pay off if someone does fall for a social engineering attack and your team has to monitor the network and identify the exfiltration of data.
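To make the open-relay item above concrete, here is a minimal sketch in Python of the kind of probe a tester might run against a mail server your organization owns and is authorized to test. The hostname and addresses are placeholders, not real infrastructure, and any such test should be coordinated with your mail team.

```python
# Minimal sketch: probe an SMTP server for open-relay behavior.
# Only run this against servers you own and are authorized to test.
import smtplib

def looks_like_open_relay(host: str, port: int = 25) -> bool:
    """Return True if the server accepts mail between two external domains
    without authentication -- a strong hint that relaying is unrestricted."""
    try:
        with smtplib.SMTP(host, port, timeout=10) as smtp:
            smtp.ehlo()
            # Neither address belongs to the server's own domain, and we never
            # authenticate; a properly locked-down relay should refuse this.
            smtp.sendmail(
                "probe@external-one.test",
                ["probe@external-two.test"],
                "Subject: relay probe\r\n\r\nAuthorized relay test.",
            )
            return True   # message accepted: likely an open relay
    except smtplib.SMTPRecipientsRefused:
        return False      # relaying refused: behaving as expected
    except (smtplib.SMTPException, OSError):
        return False      # connection or protocol failure: treat as not open

if __name__ == "__main__":
    # "mail.example.com" is a placeholder hostname.
    print(looks_like_open_relay("mail.example.com"))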
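And here is a rough sketch of the [EXTERNAL] labeling logic described in the email filtering item. In practice this lives in your mail gateway or secure email gateway policy rather than in custom code; the internal domain names below are assumptions for illustration only.

```python
# Sketch of external-sender labeling: prepend an [EXTERNAL] designator when
# the sender's domain is not one of ours. Domains are placeholders.
from email.message import EmailMessage
from email.utils import parseaddr

INTERNAL_DOMAINS = {"example.com", "corp.example.com"}  # assumption

def tag_if_external(msg: EmailMessage) -> EmailMessage:
    """Add an [EXTERNAL] prefix to the subject of mail from outside domains."""
    _, sender = parseaddr(msg.get("From", ""))
    domain = sender.rsplit("@", 1)[-1].lower() if "@" in sender else ""
    if domain not in INTERNAL_DOMAINS:
        subject = msg.get("Subject", "")
        if not subject.startswith("[EXTERNAL]"):
            del msg["Subject"]                      # drop any existing header
            msg["Subject"] = f"[EXTERNAL] {subject}".strip()
    return msg

# Example:
msg = EmailMessage()
msg["From"] = "attacker@evil.test"
msg["Subject"] = "Urgent: wire transfer"
print(tag_if_external(msg)["Subject"])   # [EXTERNAL] Urgent: wire transfer
```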
More advanced examples based on the maturity of an organization’s defensive posture are:
- Revisit rules and privileges. Remove privileged and administrative accounts where they are not absolutely needed and leverage a just-in-time secrets management system; if an end user is successfully phished, this reduces the access rights an attacker starts with when establishing a foothold.
- Institute a credential check-out process for privileged and administrative accounts. The process should require two-party approval with justification review and should automatically expire credential access after a set period of time. Ideally, define an organizational standard for baseline activities, with options to request extensions for privileged actions, and automate those extensions where possible (a sketch of this workflow follows the list).
- Establish user baselines with user and entity behavior analytics (UEBA). This gives you an early alert system if your endpoint controls fail, allowing you to detect an attack based on deviations from baseline usage and access patterns (see the sketch after this list).
- Use machine learning to enrich your data. As you generate baselines of activity for users and entities, you can enrich that data with intelligence and apply machine learning through Security Orchestration, Automation, and Response (SOAR) technologies. Instead of relying on a human analyst to review every potential incident, SOAR solutions automate repeatable, mundane tasks, freeing analysts to focus on more complicated security issues and investigations. SOAR technologies bring scalability and speed to organizations that struggle to identify and respond to threats manually.
- Implement a no-fault social engineering testing program. A no-fault program lets you test employees with phishing and other social engineering techniques without penalizing them. Ensure end-user profiles are created with known access rights to various assets and data; knowing what could be exposed if an end user is compromised informs which controls you put in place and where. Remember, not all controls are equal for every user. Some users may require unique controls based on their business processes and technical aptitude, while others may not need to be exposed to critically sensitive information or processes.
- Finally, shore up your incident response plans. Use a combination of controlled exercises and tabletop exercises to identify the most at-risk people, processes, technology, and control gaps before they're exploited.
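As a rough illustration of the credential check-out item above, the sketch below models two-party approval with a justification and automatic expiry. A real deployment would rely on a secrets-management or privileged access management product; the class, account, and ticket names here are purely illustrative assumptions.

```python
# Sketch of a credential check-out record: two distinct approvers review the
# justification, and access expires automatically after a baseline window.
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone
from typing import Optional, Set

@dataclass
class CredentialCheckout:
    account: str
    requester: str
    justification: str
    approvers: Set[str] = field(default_factory=set)
    granted_at: Optional[datetime] = None
    ttl: timedelta = timedelta(hours=1)   # assumed baseline window; extensions handled separately

    def approve(self, approver: str) -> None:
        """Record an approval; access is granted once two distinct approvers sign off."""
        if approver == self.requester:
            raise ValueError("requester cannot approve their own check-out")
        self.approvers.add(approver)
        if len(self.approvers) >= 2 and self.granted_at is None:
            self.granted_at = datetime.now(timezone.utc)

    def is_active(self) -> bool:
        """Access is valid only after approval and before the TTL lapses."""
        return (
            self.granted_at is not None
            and datetime.now(timezone.utc) < self.granted_at + self.ttl
        )

# Example: two approvers sign off, then access auto-expires after one hour.
checkout = CredentialCheckout("svc-db-admin", "alice", "Emergency schema fix, ticket OPS-1234")
checkout.approve("bob")
checkout.approve("carol")
print(checkout.is_active())   # True until the window lapses
```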
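And to illustrate the baseline idea behind UEBA, here is a minimal sketch that learns a per-user norm for a single metric (daily data egress, as an assumed example) and flags sharp deviations. Production UEBA platforms model many more signals with far more sophisticated statistics; the z-score threshold here is an arbitrary assumption.

```python
# Sketch of baseline-and-deviation alerting: learn a user's normal daily
# egress, then flag days that deviate by more than a set number of sigmas.
from statistics import mean, stdev

def build_baseline(history: list) -> tuple:
    """Mean and standard deviation of a user's historical daily egress (MB)."""
    return mean(history), stdev(history)

def is_anomalous(today: float, baseline: tuple, z_threshold: float = 3.0) -> bool:
    """Flag a day whose egress deviates from the baseline beyond the threshold."""
    mu, sigma = baseline
    if sigma == 0:
        return today != mu
    return abs(today - mu) / sigma > z_threshold

# Example: a user who normally moves ~200 MB/day suddenly uploads 5 GB.
history_mb = [180.0, 210.0, 195.0, 220.0, 205.0, 190.0, 215.0]
print(is_anomalous(5000.0, build_baseline(history_mb)))   # True -> raise an alert for review
```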
Even if you do all of the above, there is still a very real chance, if a somewhat reduced one, that your organization will be compromised via social engineering. Should that happen, leverage your incident response plan to find out exactly what transpired and determine what to do going forward to avoid similar issues. And avoid shaming your employees. As recent events have clearly illustrated, nearly any organization's staff can find themselves the target of social engineering attacks.