ASG Series: The urgent need to shift-left security testing

Andy Morrison, Solutions Architect for ASG at Expleo Group, discusses security testing in the second blog of the ASG Series.

Security testing is a complicated and expensive – but very necessary – stage of the software delivery cycle. With the growth of cloud-hosted applications and unprecedented home working due to the pandemic, has automated security testing in a DevOps pipeline matured enough to mitigate the real risks of cyberattack? The recent ransomware attack on the Colonial Pipeline in the US suggests not.


Historically, security testing has been a complex, largely manual and deeply technical discipline within software testing. It is typically scheduled towards the end of a delivery cycle, often once an application is deemed close to production readiness.

More and more organisations are adopting a DevOps approach to their development practice and moving to microservice architectures. We are therefore seeing more aggressive delivery times and smaller modules of work being delivered.

However, this increase in the velocity and variety of service releases brings an ever-increasing attack surface for malicious actors. It also leaves impossibly small time windows for executing comprehensive manual security checks. How do we mitigate these risks while still maintaining an aggressive release velocity?


Defence in depth

The big players in the cloud space (Amazon AWS, Microsoft Azure, Google Cloud) are doing their bit to help. On the deployment side of the equation, they provide multiple layers of protection, or "defence in depth". This is useful for building out a cloud-based infrastructure centred on a least-privilege paradigm, where access to resources must be explicitly granted. Other example protections include authentication and authorisation (AuthN/AuthZ) controls and the encryption of data at rest using server-side encryption (SSE) routines such as AES-256.
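
As a simple illustration of one of these layers, the sketch below (not from the original project; the bucket name is hypothetical and AWS credentials are assumed to be configured) uses the boto3 SDK to make AES-256 server-side encryption the default on an S3 bucket, so that everything written to it is encrypted at rest:

    import boto3

    BUCKET = "bookstore-subscriber-data"  # hypothetical bucket name

    s3 = boto3.client("s3")

    # Make AES-256 server-side encryption (SSE-S3) the bucket default, so every
    # object written to the bucket is encrypted at rest.
    s3.put_bucket_encryption(
        Bucket=BUCKET,
        ServerSideEncryptionConfiguration={
            "Rules": [
                {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
            ]
        },
    )

    # Individual uploads can also request encryption explicitly.
    s3.put_object(
        Bucket=BUCKET,
        Key="subscribers.csv",
        Body=b"...",
        ServerSideEncryption="AES256",
    )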

This least-privilege paradigm means that applications, whether their architectures are large or small, must communicate across cloud resources using keys, access control lists (ACLs) and similar mechanisms. The larger the architecture, the larger the attack surface becomes.


Unfortunately, the biggest risk of compromise to any application or company is the people who run it. Keys can accidentally be committed into source code repositories. Secure coding principles may not be clearly defined or adhered to. Time pressure on releases can cause many issues to go undetected in a manual security scan. Human fallibility is the recurring factor.

Easy access

Consider a web-based book-selling application with a subscription service that requires users to provide credit card details to join. The database is securely deployed into a private subnet, with an access control list allowing access only from certain source network addresses, as well as AuthN/AuthZ controls for access and privilege protection. It would be natural to assume that these layers of cloud protection mean the database is nice and safe and cannot be breached externally, right?

Alas, that is not the case. The users of the service must enter their card details on the website. We can therefore deduce, with a high degree of confidence, that the web application is on the database's ACL. Already we have a publicly facing access route into the "safe and secure" database, provided a method of access can be found. The scary truth is that there are multiple ways to attack this database, including SQL injection and cross-site scripting (XSS). To an adept cybercriminal, it's an open door.
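
To make that concrete, here is a minimal sketch (not from the article; the table, columns and payload are purely illustrative) of how naively concatenating user input into SQL opens the door, and how a parameterised query closes it:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE subscribers (email TEXT, card_number TEXT)")
    conn.execute("INSERT INTO subscribers VALUES ('alice@example.com', '4111111111111111')")

    def find_subscriber_vulnerable(email):
        # User input is spliced straight into the statement, so a payload such as
        # "' OR '1'='1" turns the WHERE clause into a tautology and returns every row.
        query = f"SELECT email, card_number FROM subscribers WHERE email = '{email}'"
        return conn.execute(query).fetchall()

    def find_subscriber_safe(email):
        # The driver binds the value separately from the SQL text, so the same
        # payload matches nothing.
        return conn.execute(
            "SELECT email, card_number FROM subscribers WHERE email = ?", (email,)
        ).fetchall()

    payload = "' OR '1'='1"
    print(find_subscriber_vulnerable(payload))  # leaks every stored card record
    print(find_subscriber_safe(payload))        # returns an empty list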

Bear in mind this is only one small part of what could be a much larger website with many, many attack vectors. How, then, can we expect a manual security scan in a short time frame to adequately analyse, test and expose all of them?

Vulnerability scanners

The good news is that the security community is really beginning to step up in the tooling space. There are currently multiple tools on the market, both commercial off-the-shelf and open source, that come under the heading of vulnerability scanners. These tools promise to bring automation to the practice of scanning applications, networks, compute resources etc. for potential vulnerabilities.

Recently, I was engaged on a project for a web-based application where my client wanted to reduce the number of issues caught in their regular manual pen-testing cycle. Catching these issues earlier not only allows them to be addressed faster, but also reduces the associated costs. The further "right" in the cycle a defect is detected, the more expensive it becomes to address.


In short, how was this achieved? After a few days spent researching the tools landscape, I shortlisted several tools to implement into the development CircleCI pipeline, covering dynamic application security testing (DAST), static application security testing (SAST) and interactive passive scanning of the web application:

  • GitHub's Dependabot was implemented to flag outdated or vulnerable third-party open-source libraries and automatically raise pull requests to address them.
  • Netsparker Enterprise was easily implemented into the pipeline using the CircleCI orb provided by Netsparker. It automatically invokes a DAST scan once the application has been deployed to the staging environment, executing active attack scans against the web application such as XSS fuzzing and API fuzzing.
  • The great folks over at OWASP provide their Zed Attack Proxy (ZAP) utility in a regularly updated Docker container covering all of the latest known CWEs that can be scanned for. Leveraging the automated tests already written in Selenium, it was incredibly straightforward to proxy all of the web application traffic through ZAP and generate a passive security report as well-formatted HTML for each page the automation visited (a sketch of this setup follows the list).
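
As a minimal sketch of that proxy setup (not the project's actual code; the staging URL is hypothetical, and ZAP is assumed to already be running as a daemon proxy on localhost:8080 from its Docker image), the existing Selenium journeys only need their browser pointed at the proxy:

    from selenium import webdriver
    from selenium.webdriver.chrome.options import Options

    ZAP_PROXY = "localhost:8080"                 # assumed address of the ZAP daemon
    TARGET_URL = "https://staging.example.com"   # hypothetical staging URL

    options = Options()
    # Route all browser traffic through ZAP so it can passively scan every
    # request and response the tests generate.
    options.add_argument(f"--proxy-server=http://{ZAP_PROXY}")
    # ZAP re-signs TLS traffic with its own certificate; either trust the ZAP
    # root CA or, for a throwaway test browser, ignore certificate errors.
    options.add_argument("--ignore-certificate-errors")

    driver = webdriver.Chrome(options=options)
    try:
        driver.get(TARGET_URL)
        # ... existing automated journeys run unchanged here ...
    finally:
        driver.quit()

Once the journeys finish, ZAP's passive findings are exported as the per-page HTML report described above.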

With each of these tools implemented into the CI pipeline, the scans run every time the code base changes. This gives the client confidence that vulnerabilities are scanned for and raised in the bug-tracking system using the power of automation inside a DevOps pipeline.

As a final word, this does not remove the need for a dedicated security testing team. Automated scanning and a dedicated security team play different roles. With automation quickly identifying the obvious vulnerabilities, the security team can spend its time assessing the risk to the business: exploiting those vulnerabilities, crafting much more complex attacks, and keeping up to date with current best practices and any new issues found in the wild that do not yet have automated checks in place.

Ready to help

At Expleo we firmly believe in the value of automation in protecting systems and data. Visit Expleo’s Process Automation page for more information on our capabilities in this space.

Andy Morrison, Solutions Architect of Advanced Solutions Group at Expleo Group.
