The demands of today’s tech-savvy customer have placed huge emphasis on software development and user experience as a barometer for success.
DevOps adoption has grown rapidly as a result, with many businesses looking at routes to either introduce or accelerate DevOps workflows within their IT organisations.
‘Tool chains’ are an integral part of any DevOps programme, helping automate the development, delivery, and management of software applications and deliver better products to both customers and business units, more efficiently and effectively. The collaborative nature of these development and production environments, however, makes them difficult to protect, particularly the privileged accounts and secrets associated with them.
Navigating this risk and securing key tools and infrastructure is therefore critical if organisations are to achieve successful DevOps outcomes and progress on their digital transformation journeys. To do so, there are five key measures they must consider to prioritise protection of DevOps tools and processes:
Set and enforce policies for the selection and configuration of tools
Any security conversation should always begin with a full inventory of the DevOps tools being used by dev teams. After all, it’s impossible to defend environments if you don’t know they exist. This process can be cumbersome, but it is especially important for open source tools, which often enter the environment without a formal procurement step.
Once these tools are accounted for, security teams should undertake an evaluation to identify any existing security deficiencies and address them promptly. This could involve making sure, for example, that tools are not being used in an insecure default configuration and that they are kept up to date.
As part of this evaluation process, security teams should also find a way to get a seat at the table. That means collaborating with the group within the business that is responsible for tool selection and configuration, or working closely with IT procurement to select the best tools for the organisation, so that enterprise security standards are established at the outset.
Lock down access to DevOps tools
Attackers only need to exploit one vulnerability to carry out their mission, so it’s important to take a holistic approach to addressing security requirements and potential vulnerabilities. This starts with securing the secrets and credentials associated with DevOps and cloud management tools in an encrypted vault protected with multi-factor authentication (MFA).
Once complete, access privileges should be reviewed so that users are only granted “just in time” access. In other words, provide high-level access only when it’s needed to perform certain tasks, and ensure that this temporary usage is closely monitored.
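The time-boxed model can be illustrated with a few lines of plain Python. This is a sketch only, independent of any particular PAM product (which would broker, monitor and audit such elevations in practice); the user and role names are made up:

```python
from datetime import datetime, timedelta, timezone

def grant_jit(user, role, minutes=30):
    # Record a time-boxed elevation with an explicit expiry.
    return {
        "user": user,
        "role": role,
        "expires": datetime.now(timezone.utc) + timedelta(minutes=minutes),
    }

def is_active(grant, now=None):
    # A grant is honoured only inside its window; once the window
    # passes, access lapses without needing a revocation step.
    now = now or datetime.now(timezone.utc)
    return now < grant["expires"]

elevation = grant_jit("alice", "prod-deployer", minutes=15)
print(is_active(elevation))  # True immediately after granting
```

The key property is that expiry is the default: forgetting to revoke access no longer leaves a standing privilege behind.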
Access to high-risk commands within DevOps tools should also be limited. For instance, Docker users often run a Docker container with the --privileged flag, which provides the container with direct access to host elements. Where possible, security teams should mandate that users are not able to run containers with this flag, and if it’s a “must,” severely limit user access and monitor and record all activity involving the --privileged flag.
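Auditing for this flag takes only a few lines of scripting. The sketch below assumes the JSON printed by `docker inspect` (whose `HostConfig.Privileged` field records the flag) has been captured to a string; the container names are illustrative:

```python
import json

def find_privileged(inspect_output):
    # `inspect_output` is the JSON emitted by
    # `docker inspect $(docker ps -q)`: a list of container objects.
    containers = json.loads(inspect_output)
    return [
        c.get("Name", "<unknown>").lstrip("/")
        for c in containers
        if c.get("HostConfig", {}).get("Privileged", False)
    ]

# A minimal, hand-written inspect payload for demonstration:
sample = json.dumps([
    {"Name": "/build-agent", "HostConfig": {"Privileged": True}},
    {"Name": "/web", "HostConfig": {"Privileged": False}},
])
print(find_privileged(sample))  # ['build-agent']
```

Run on a schedule, a check like this turns a policy statement into a detectable violation.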
Once you have addressed access, it’s also advisable to adopt other cyber hygiene best practices, such as setting up access controls that segregate DevOps pipelines, preventing attackers who gain access to one from moving to another. Teams should also ensure that credentials and secrets are not shared between DevOps tool accounts and Windows sysadmin accounts, and remove all unnecessary accounts with access to DevOps tools, including those of developers who have changed roles or no longer require access to these tools.
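The last point, removing stale accounts, reduces to a set difference between the accounts a tool knows about and the people (plus sanctioned service accounts) who should still have access. A minimal sketch with made-up account names:

```python
def stale_accounts(tool_accounts, active_staff, service_accounts):
    # Accounts with tool access that map to no active employee
    # and no sanctioned service account are candidates for removal.
    return tool_accounts - active_staff - service_accounts

jenkins_users = {"alice", "bob", "carol", "svc-deploy"}
hr_active = {"alice", "carol"}          # e.g. from an HR/directory export
sanctioned = {"svc-deploy"}             # approved non-human accounts
print(stale_accounts(jenkins_users, hr_active, sanctioned))  # {'bob'}
```

Feeding this from a directory export and each tool’s user list makes the review repeatable rather than a one-off clean-up.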
Reduce the concentration of privilege
Enforcing the principle of least privilege should be a prerequisite for every company, limiting each user’s access to DevOps tools to the minimum necessary for their role. On its own, however, it is less effective unless security teams also configure DevOps tools to require dual authorisation for certain critical functions. For example, they should require that a second person review and approve a change to a Puppet manifest file before it goes live.
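This “four-eyes” rule can be expressed as a simple gate: a change proceeds only with approval from at least one person other than its author. A hypothetical sketch (the names and the one-reviewer threshold are assumptions, not taken from any particular tool):

```python
def can_apply(author, approvals, min_reviewers=1):
    # Self-approval does not count: discard the author before
    # checking whether enough distinct reviewers have signed off.
    return len(set(approvals) - {author}) >= min_reviewers

# A Puppet manifest change authored by alice:
print(can_apply("alice", ["alice"]))         # False: self-approval only
print(can_apply("alice", ["alice", "bob"]))  # True: bob is a second pair of eyes
```

Most source control and CI platforms can enforce an equivalent rule natively (e.g. protected branches with required reviews), which is preferable to rolling your own.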
Additionally, teams should ensure separation of duties for build automation tools such as Jenkins, which often retain permission to perform all duties without restriction, from building and testing to packaging. In the case of Jenkins, this can be addressed by implementing multiple Jenkins nodes, each dedicated to a single function (build, test, or package) for each application. Each node then has a unique identity and a limited set of privileges, minimising the impact of a potential compromise.
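Where node assignments are exported as configuration, the one-function-per-node rule can be checked automatically. The sketch below assumes a mapping of node names to label sets, with build/test/package as the recognised functions (an assumed convention for illustration, not a Jenkins standard):

```python
ALLOWED_FUNCTIONS = {"build", "test", "package"}

def nodes_violating_separation(node_labels):
    # Flag any node whose labels cover zero or more than one
    # pipeline function: each node should do exactly one job.
    problems = []
    for node, labels in node_labels.items():
        functions = labels & ALLOWED_FUNCTIONS
        if len(functions) != 1:
            problems.append(node)
    return problems

fleet = {
    "node-build-app1": {"build"},
    "node-test-app1": {"test"},
    "node-everything": {"build", "test", "package"},  # violates separation
}
print(nodes_violating_separation(fleet))  # ['node-everything']
```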
Ensure code repositories do not expose secrets
Code repositories such as GitHub have become infamous in recent years due to IT teams erroneously leaving code in publicly accessible locations. Security teams should therefore develop risk-based policies for developers that secure the use of such repositories. It’s worth noting, however, that beyond credentials, code may contain details about the organisation’s internal network that could be useful to attackers. Ideally firms should therefore use an on-premises rather than a cloud-based code repository, if it’s possible to do so without adversely affecting workflow.
If this approach is applied, then the next step is to scan the environment to make sure that any on-premises code repositories are inaccessible from outside the network. If cloud-based repositories are used however, then security teams should ensure they are configured to be private.
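For GitHub, a periodic check can walk the organisation’s repository list (the `/orgs/{org}/repos` REST endpoint reports a `private` flag per repository) and flag anything that isn’t private. A sketch against a captured API response; the organisation and repository names are made up:

```python
import json

def nonprivate_repos(api_response):
    # `api_response` is the JSON body returned by GitHub's
    # /orgs/{org}/repos endpoint: a list of repository objects.
    return [
        r["full_name"]
        for r in json.loads(api_response)
        if not r.get("private", False)
    ]

captured = json.dumps([
    {"full_name": "acme/app", "private": False},
    {"full_name": "acme/infra", "private": True},
])
print(nonprivate_repos(captured))  # ['acme/app']
```

An equivalent check can be written against any other hosting provider’s API; the point is that “repositories must be private” becomes something a script can verify daily.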
Above all, every organisation should make it their policy that code is automatically scanned to ensure it does not contain secrets before it can be checked in to any repository.
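A minimal version of such a check can be built from regular expressions, though production scanners such as gitleaks or truffleHog ship far larger and better-tuned rule sets. The patterns below are illustrative only:

```python
import re

# Illustrative rules only; real scanners use hundreds of patterns
# plus entropy checks to catch generic high-randomness strings.
SECRET_PATTERNS = {
    "AWS access key ID": re.compile(r"AKIA[0-9A-Z]{16}"),
    "Private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "Hard-coded password": re.compile(r"password\s*=\s*['\"][^'\"]+['\"]", re.IGNORECASE),
}

def scan_text(text):
    # Return (line_number, rule_name) for every suspected secret.
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                hits.append((lineno, name))
    return hits

snippet = 'db_password = "hunter2"\nregion = "eu-west-1"\n'
print(scan_text(snippet))  # [(1, 'Hard-coded password')]
```

Wired into a pre-commit hook or a CI gate, a scan like this blocks the check-in before the secret ever reaches the repository’s history, which is far cheaper than rotating a credential after exposure.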
Protect and monitor infrastructure
Cyber attackers seek the path of least resistance, and for many organisations, this remains their employees. Well-crafted phishing emails can often do the trick, so IT teams should make sure that all workstations and servers undergo regular patching, vulnerability scanning and security monitoring.
Beyond individual machines, it’s also important to monitor your cloud infrastructure for signs of unusual credential usage or configuration changes (such as making private data stores public). Teams should also ensure that VM and container images used in development and production environments come from a sanctioned source and are kept up to date.
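The sanctioned-source check can be automated by extracting the registry from each image reference. Docker’s parsing rule is that the first path component names a registry only if it contains a dot or a colon, or is “localhost”; otherwise the reference points at Docker Hub. The approved registry name below is a placeholder:

```python
SANCTIONED_REGISTRIES = {"registry.internal.example.com"}  # hypothetical internal registry

def registry_of(image_ref):
    # Docker's rule: the first path component is a registry only if
    # it contains '.', ':', or is 'localhost'; otherwise Docker Hub.
    first, _, rest = image_ref.partition("/")
    if rest and ("." in first or ":" in first or first == "localhost"):
        return first
    return "docker.io"

def from_sanctioned_source(image_ref):
    return registry_of(image_ref) in SANCTIONED_REGISTRIES

print(from_sanctioned_source("registry.internal.example.com/app:1.2"))  # True
print(from_sanctioned_source("ubuntu:22.04"))  # False: resolves to Docker Hub
```

Applied to manifests or deployment specs in CI, this turns the sanctioned-source policy into an automated gate rather than a manual review item.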
To ensure security remains “baked in” to countless rounds of automatic rebuilds, security teams should also work with their DevOps counterparts to automate the configuration of VMs and containers so that, when a new machine or container is spun up, it is automatically configured securely and given appropriate controls – without requiring human involvement.
The benefits of DevOps are plain and clear for all to see – hence the rapid adoption that we have witnessed in recent years. Adopting a DevSecOps approach, using the measures outlined above, is critical to ensuring application and infrastructure security from the outset of any software development activity.
About the Author
Josh is a DevOps security lead at CyberArk, where he leads efforts in the UK and Northern Europe. He’s a recovering cloud infrastructure architect with more than a decade of experience architecting, implementing and managing automated infrastructures, and he has also led operations for several infrastructure providers and data centres.
Featured image: ©spainter_vfx