The Hidden Risks of Docker Images: Unmaintained Software Components

Container images that are unmaintained or unpatched could be ticking time bombs in your infrastructure.

supply chain security · security · docker
2024-08-03
Thomas Kooi

As an early adopter of Docker, I’ve witnessed its amazing growth from the early days. I started out with Docker Swarm, and in 2018 I even wrote several blog posts about Docker Swarm and how to use logspout to collect logs from Swarm containers. In those posts, I explained how to deploy a global container on every node in a Swarm mode cluster to forward logs to a remote Logstash endpoint.

By the end of 2018, I had fully transitioned to Kubernetes, leaving Docker Swarm Mode behind. Along with this shift, I adopted different log collection solutions, starting with Fluentd and eventually moving to Promtail and other tools. I promptly forgot about the whole logspout solution.

Recently, while browsing through my GitHub and Docker Hub accounts, I was surprised to discover that a container image I had published on Docker Hub was still being actively pulled by others, despite having been unmaintained for about five years. This realization highlighted a common issue: software, particularly Docker images, often continues to be used long after updates and maintenance have ceased.

The Risks of Unmaintained Images

Using maintained container images is critical for the security and stability of your applications. Maintained images receive regular updates and patches that address vulnerabilities and improve functionality. Always check how recent the latest release is to make sure you’re not relying on outdated software, and ensure that all dependencies and software components, including container images, are maintained by a party you trust.

Additionally, investigate the activity of the maintainer on GitHub—active maintainers are more likely to respond to issues and push necessary updates. By prioritizing maintained container images, you can mitigate risks and ensure your applications run smoothly and securely.

It’s pretty common nowadays for people to automate their dependency update processes (yay!), only to have those processes fail because the dependencies they rely on are no longer maintained. This can pose significant risks, as unmaintained images do not receive new patches and may become vulnerable over time. If an image is no longer maintained, it’s essential to either fork it or switch to a different solution (or to a maintained fork from another maintainer).

Vulnerability scanning

A slightly more technical detour: going back to the logspout image that prompted this blog post, I checked whether it has any known vulnerabilities. For this I used the open-source tool Trivy to scan the container image. This highlighted a number of issues:

❯ trivy i thojkooi/logspout --report summary
2024-06-15T01:04:11+02:00       INFO    Vulnerability scanning is enabled
2024-06-15T01:04:11+02:00       INFO    Secret scanning is enabled
2024-06-15T01:04:11+02:00       INFO    If your scanning is slow, please try '--scanners vuln' to disable secret scanning
2024-06-15T01:04:11+02:00       INFO    Please see also https://aquasecurity.github.io/trivy/v0.52/docs/scanner/secret/#recommendation for faster secret detection
2024-06-15T01:04:12+02:00       INFO    Detected OS     family="alpine" version="3.7.0"
2024-06-15T01:04:12+02:00       INFO    [alpine] Detecting vulnerabilities...   os_version="3.7" repository="3.7" pkg_num=12
2024-06-15T01:04:12+02:00       INFO    Number of language-specific files       num=0
2024-06-15T01:04:12+02:00       WARN    This OS version is no longer supported by the distribution      family="alpine" version="3.7.0"
2024-06-15T01:04:12+02:00       WARN    The vulnerability detection may be insufficient because security updates are not provided

thojkooi/logspout (alpine 3.7.0)

Total: 6 (UNKNOWN: 0, LOW: 0, MEDIUM: 2, HIGH: 2, CRITICAL: 2)
...

At first I was surprised by the relatively low number of vulnerabilities reported. Upon further reading, however, I found that this image is so old that Trivy no longer properly reports CVEs within it, because the OS family has reached End Of Life (EOL). Running the scan again with the additional flag --exit-on-eol 1 makes Trivy fail on any image that contains an EOL operating system. This is an important addition should you use Trivy in automation tooling or CI/CD pipelines.
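For example, if you run Trivy in a pipeline, the invocation could look roughly like this (a sketch; the image reference and severity threshold are just examples):

# Fail the scan when HIGH or CRITICAL vulnerabilities are found,
# or when the image ships an end-of-life operating system
trivy image \
  --severity HIGH,CRITICAL \
  --exit-code 1 \
  --exit-on-eol 1 \
  thojkooi/logspout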

Now, not all of the CVEs (probably the majority of them, in fact) are an actual issue or affect the software running inside the image. The point is that because the image is so outdated, issues may become undetectable. The fact that Trivy warns about vulnerability detection possibly being insufficient underscores the risk of unmaintained and outdated container images: vulnerabilities may become invisible because they are simply no longer reported for long-unsupported software.

This highlights the fact that software should be kept up to date, and that running EOL or unmaintained software is a ticking time bomb waiting to blow up some part of your infrastructure.

You should do periodic assessments

Engineering teams should be aware of the software components deployed within their environments, including all container images used. They should also periodically check each container and component to verify that it is running up-to-date software.

One good way to do this is by ensuring you have a Software Bill of Materials (SBOM) for every component that runs in your environments. Combine this with tooling that lets you query across those components, their versions, their vulnerabilities, and so on. Tools like the SBOM operator or Trivy Operator can help you do this for your Kubernetes clusters.
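As a minimal sketch of the SBOM part, Trivy itself can generate a CycloneDX SBOM for an image and later scan that SBOM instead of the image (the image and file names here are just examples):

# Generate a CycloneDX SBOM for a container image
trivy image --format cyclonedx --output logspout-sbom.json thojkooi/logspout

# Scan the stored SBOM for known vulnerabilities
trivy sbom logspout-sbom.json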

Automated dependency patching will help you stay in control, given the ever-increasing speed at which software is released.

I’m a huge fan of automating dependency and container image updates through tooling such as Renovate. If you have not yet set up an automated dependency update process, it’s definitely worth the effort to do so.
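If you want to try it out before wiring it into your Git hosting platform, the self-hosted Renovate container image can be pointed at a single repository, roughly along these lines (a sketch; the token and repository name are placeholders, and your platform settings may differ):

# Run self-hosted Renovate once against a single GitHub repository
docker run --rm \
  -e RENOVATE_TOKEN="${GITHUB_TOKEN}" \
  renovate/renovate \
  your-org/your-repo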

A low-effort action you can also take is to compile a list of all container images used within your environment(s) and verify whether they have been built or released recently. Components that haven’t been patched or built for over a year are suspect and would benefit from further investigation. After all, software may be at its latest version; that does not mean it is still safe to run.
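For a Kubernetes cluster, a quick way to compile such a list is a one-liner like the following (a sketch, assuming kubectl access to the cluster):

# List every unique container image currently running in the cluster
kubectl get pods --all-namespaces \
  -o jsonpath="{.items[*].spec.containers[*].image}" \
  | tr -s ' ' '\n' | sort | uniq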

Use those tools and others to generate periodic reports, so you have insight into where outdated software or vulnerabilities may exist within your systems. A good solution I found is to ship your SBOMs to Dependency Track.

SBOM Insights with Dependency Track

Dependency-Track is an intelligent Component Analysis platform that allows organizations to identify and reduce risk in the software supply chain.

dependencytrack.org

With Dependency Track, combined with the SBOM operator, you gain real-time insight into your software components. The tool offers an easy way to mark vulnerabilities as not applicable, something that is essential to avoid vulnerability burnout. Dependency Track is a project by the OWASP Foundation.

[Screenshot: auditing vulnerabilities in Dependency Track]
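If you are not running the SBOM operator, shipping an SBOM by hand is also straightforward. Here is a rough sketch using the Dependency-Track REST API (the host name, API key, and project values are placeholders):

# Upload a CycloneDX SBOM to Dependency-Track, creating the project if needed
curl -X POST "https://dtrack.example.com/api/v1/bom" \
  -H "X-Api-Key: ${DTRACK_API_KEY}" \
  -F "autoCreate=true" \
  -F "projectName=logspout" \
  -F "projectVersion=latest" \
  -F "bom=@logspout-sbom.json"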

Being both a consumer and producer

As someone who is both a consumer and a producer of open-source container images, I found myself wondering about this Docker image that I published years ago. I’m not using it anymore, and clearly not maintaining it, because I simply no longer have a need for it. I found I had a couple of questions:

  • Should I be obligated to continue supporting it?
  • How do you encourage users to stop using your container image if you’ve moved on from it?
  • What are the potential effects of simply deleting your Docker Hub repository for that image? Could it lead to unexpected downtime or broken systems for some unsuspecting DevOps team?

These are all good questions to ask, as being a responsible part of the open-source ecosystem also means thinking about the people who use your software (components).

I think, in the end, sunsetting a piece of software is part of its lifecycle. Nearly all systems and components will eventually be replaced by something new. Once it’s time for software to be phased out, maintainers should make it clear, in all the places where the software can be found, that it should no longer be used. Often this comes down to updating the project’s README and archiving the Git repository.

Additionally, it’s the responsibility of the people or teams running or depending on software to ensure they are using maintained software. In the previous sections we discussed some of the risks and how to stay in control in that regard.

Conclusion

While Docker images can be incredibly useful, it’s crucial to ensure they are actively maintained. Should you find yourself using software that is critical for your organisation but no longer maintained, please consider switching to an alternative or contributing upstream to help everyone out.

Use the right tools to stay in control of the software components running in your infrastructure. Tools and platforms such as Trivy and Dependency Track are examples of solutions that help you do this.

