The Tech Sales Newsletter #81: How do you like them containers?

This week, we will explore the importance of containers in the context of deploying applications and the tech sales opportunities at Docker. This article will also serve as an introduction to DevOps as a practice.

The goal is not to provide a detailed overview for practitioners or an in-depth analysis of the Docker/container ecosystem, as that would require multiple articles. At a high level, the key takeaway is to understand how developers work today with enterprise-level applications and how the cloud infrastructure software industry has evolved as an extension of those workflows. I’ll use containers as a starting point for examining this workflow.

The key takeaway

For tech sales: If you want to sell for a cloud infrastructure software company, you need at least a rudimentary understanding of how companies build, test, and deploy applications into production. The deeper your understanding, the sooner you’ll recognize potential opportunities to pivot toward. Docker is a prime example of one of these “insider” opportunities.

For investors: Virtualization is one of the most direct ways to monetize opportunities in cloud infrastructure software, as it directly correlates with the compute resources required. Several legacy players were once considered the “default” choice for most organizations. However, as containers have quickly become the industry standard for leading companies to test and deploy applications in production, the ecosystem surrounding this virtualization approach presents an intriguing investment opportunity beyond simply buying stock in the hyperscalers.

The DevOps way

Imagine a world where product owners, Development, QA, IT Operations, and Infosec work together, not only to help each other, but also to ensure that the overall organization succeeds. By working toward a common goal, they enable the fast flow of planned work into production (e.g., performing tens, hundreds, or even thousands of code deploys per day), while achieving world-class stability, reliability, availability, and security.

In this world, cross-functional teams rigorously test their hypotheses of which features will most delight users and advance the organizational goals. They care not just about implementing user features, but also about actively ensuring their work flows smoothly and frequently through the entire value stream without causing chaos and disruption to IT Operations or any other internal or external customer. Simultaneously, QA, IT Operations, and Infosec are always working on ways to reduce friction for the team, creating the work systems that enable developers to be more productive and get better outcomes.

By adding the expertise of QA, IT Operations, and Infosec into delivery teams and automated self-service tools and platforms, teams are able to use that expertise in their daily work without being dependent on other teams. This enables organizations to create a safe system of work, where small teams are able to quickly and independently develop, test, and deploy code and value quickly, safely, securely, and reliably to customers. This allows organizations to maximize developer productivity, enable organizational learning, create high employee satisfaction, and win in the marketplace. These are the outcomes that result from DevOps.

For most of us, this is not the world we live in. More often than not, the system we work in is broken, resulting in extremely poor outcomes that fall well short of our true potential. In our world, Development and IT Operations are adversaries; testing and Infosec activities happen only at the end of a project, too late to correct any problems found; and almost any critical activity requires too much manual effort and too many handoffs, leaving us always waiting.

Not only does this contribute to extremely long lead times to get anything done, but the quality of our work, especially production deployments, is also problematic and chaotic, resulting in negative impacts to our customers and our business. As a result, we fall far short of our goals, and the whole organization is dissatisfied with the performance of IT, resulting in budget reductions and frustrated, unhappy employees who feel powerless to change the process and its outcomes.

The solution? We need to change how we work; DevOps shows us the best way forward.

Kim, Gene; Humble, Jez; Debois, Patrick; Willis, John; Forsgren, Nicole. The DevOps Handbook: How to Create World-Class Agility, Reliability, & Security in Technology Organizations

DevOps is a movement in software development that gained traction around 2009, when the first DevOpsDays conference was organized in Ghent, Belgium. As outlined above, the core premise is that those who write code and those who deploy and maintain applications should collaborate as a single team. Over the past 16 years, there has been ongoing and intense debate about the “right way” to do DevOps and whether DevOps ever truly existed.

The ironic part, of course, is that an argument can be made that the entire cloud infrastructure software industry, which emerged around the same time, has been driven by DevOps principles—prioritizing stability, reliability, availability, and security. So whether DevOps is just a meme or not is irrelevant when entire software niches have been built around the need for automation and better synergy in the development process.

Source: devops.com

Across the whole of this process, container management is one of the standout parts of the stack that is closely aligned with the “DevOps way”:

As seen in the enterprise data warehouse example above, one of the major contributing causes of chaotic, disruptive, and sometimes even catastrophic software releases is that the first time we ever get to see how our application behaves in a production-like environment with realistic load and production data sets is during the release.* In many cases, development teams may have requested test environments in the early stages of the project.

However, when there are long lead times required for Operations to deliver test environments, teams may not receive them soon enough to perform adequate testing. Worse, test environments are often misconfigured or are so different from our production environments that we still end up with large production problems despite having performed pre-deployment testing.

In this step, we want developers to run production-like environments on their own workstations, created on demand and self-serviced. By doing this, developers can run and test their code in production-like environments as part of their daily work, providing early and constant feedback on the quality of their work.

Instead of merely documenting the specifications of the production environment in a document or on a wiki page, we create a common build mechanism that creates all of our environments, such as for development, test, and production. By doing this, anyone can get production-like environments in minutes, without opening up a ticket, let alone having to wait weeks. To do this requires defining and automating the creation of our known, good environments, which are stable, secure, and in a risk-reduced state, embodying the collective knowledge of the organization.

All our requirements are embedded, not in documents or as knowledge in someone’s head, but codified in our automated environment build process. Instead of Operations manually building and configuring the environment, we can use automation for any or all of the following:

• copying a virtualized environment (e.g., a VMware image, running a Vagrant script, booting an Amazon Machine Image file in EC2)

• building an automated environment creation process that starts from “bare metal” (e.g., PXE install from a baseline image)

• using “infrastructure as code” configuration management tools (e.g., Puppet, Chef, Ansible, Salt, CFEngine, etc.)

• using automated operating system configuration tools (e.g., Solaris Jumpstart, Red Hat Kickstart, Debian preseed)

• assembling an environment from a set of virtual images or containers (e.g., Docker, Kubernetes)

• spinning up a new environment in a public cloud (e.g., Amazon Web Services, Google App Engine, Microsoft Azure), private cloud (for example, using a stack based on Kubernetes), or other PaaS (platform as a service, such as OpenStack or Cloud Foundry, etc.)

Because we’ve carefully defined all aspects of the environment ahead of time, we are not only able to create new environments quickly but also ensure that these environments will be stable, reliable, consistent, and secure. This benefits everyone. Operations benefits from this capability to create new environments quickly, because automation of the environment creation process enforces consistency and reduces tedious, error-prone manual work.

Furthermore, Development benefits by being able to reproduce all the necessary parts of the production environment to build, run, and test their code on their workstations. By doing this, we enable developers to find and fix many problems, even at the earliest stages of the project, as opposed to during integration testing or, worse, in production. By providing developers an environment they fully control, we enable them to quickly reproduce, diagnose, and fix defects, safely isolated from production services and other shared resources. They can also experiment with changes to the environments, as well as to the infrastructure code that creates it (e.g., configuration management scripts), further creating shared knowledge between Development and Operations.

Kim, Gene; Humble, Jez; Debois, Patrick; Willis, John; Forsgren, Nicole. The DevOps Handbook: How to Create World-Class Agility, Reliability, & Security in Technology Organizations
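
To make the quoted idea tangible, here is a minimal sketch of “environments on demand” using the Docker SDK for Python. It is illustrative rather than anything prescribed by the book or by Docker: it assumes a Dockerfile in the current directory, the docker Python package, and a running Docker daemon, and the image, network, and service names are made up.

    import docker

    client = docker.from_env()  # talks to the local Docker daemon

    def spin_up_production_like_env():
        """Build the app image and start it next to its dependencies, on demand."""
        # The environment definition lives in the repo (a Dockerfile), not on a wiki.
        image, _ = client.images.build(path=".", tag="myapp:dev", rm=True)

        # A user-defined network so the containers can find each other by name.
        client.networks.create("myapp-dev-net", driver="bridge")

        # Supporting services pinned to the same versions production runs.
        client.containers.run(
            "postgres:16",                                   # illustrative version
            name="myapp-dev-db",
            detach=True,
            network="myapp-dev-net",
            environment={"POSTGRES_PASSWORD": "dev-only"},
        )

        # The application itself, wired to the database by name.
        client.containers.run(
            image.id,
            name="myapp-dev-app",
            detach=True,
            network="myapp-dev-net",
            ports={"8080/tcp": 8080},
            environment={"DATABASE_HOST": "myapp-dev-db"},
        )

    if __name__ == "__main__":
        spin_up_production_like_env()
        print("Production-like environment is up; no ticket, no waiting.")

The point is not this particular script but that the entire environment definition is executable and lives in version control, so development, test, and production environments can all be produced from the same source.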

Source: Lambdatest

Another way to think through DevOps as applied to tools is via what’s known as the CI/CD pipeline—Continuous Integration and Continuous Delivery. If the goal is to maintain consistency in performance and control over the process of creating, testing, and deploying code into production, then following a structured workflow (with the appropriate tools to support it) becomes essential:

Source: Lambdatest
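
As a rough illustration of how containers slot into the CI half of that pipeline, the steps a CI server performs on every commit can be expressed directly against the container engine: build an image, run the tests inside it, and publish only if they pass. A hedged sketch in Python using the Docker SDK; the registry, repository name, and test command are hypothetical.

    import docker

    client = docker.from_env()

    def ci_pipeline(commit_sha: str) -> None:
        """Build, test, and publish one commit as a container image."""
        repo = "registry.example.com/myapp"   # hypothetical registry and repository
        tag = commit_sha[:7]

        # 1. Build: the Dockerfile is the single, shared build definition.
        image, _ = client.images.build(path=".", tag=f"{repo}:{tag}", rm=True)

        # 2. Test: run the suite inside the freshly built image, so the tests see
        #    exactly what production will see. A non-zero exit raises
        #    docker.errors.ContainerError and fails the pipeline.
        logs = client.containers.run(image.id, "pytest -q", remove=True)
        print(logs.decode())

        # 3. Publish: only an image that passed its tests becomes a deployable artifact.
        for line in client.images.push(repo, tag=tag, stream=True, decode=True):
            if "error" in line:
                raise RuntimeError(f"push failed: {line['error']}")

    if __name__ == "__main__":
        ci_pipeline("abc1234def")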

Now if we expand this a bit further, containers play a key role across all parts of the value stream. Here is our AI fren Claude explaining it:

During Development: Developers use container managers to create consistent development environments, ensuring that everyone on the team works with identical configurations. They can spin up containers locally that mirror the production environment, reducing "it works on my machine" issues.

In the Build Phase: Container managers help create reproducible build processes through containerized build environments. Build artifacts can be packaged into container images, which become the deployable units for your application.

Throughout Testing: Test environments can be quickly provisioned using containers, allowing for parallel testing and isolation between test runs. Container managers help orchestrate complex test scenarios involving multiple interconnected services.

In Deployment: Container orchestrators like Kubernetes handle rolling updates, ensuring zero-downtime deployments. They manage the rollout of new container versions while gradually retiring old ones, with built-in capabilities for rollback if issues arise.
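
For the deployment step specifically, a rolling update is a declarative change: you tell the orchestrator which image version you want, and it progressively replaces old containers with new ones. A minimal sketch with the official Kubernetes Python client; the deployment name, namespace, and image are assumptions for illustration, not part of any setup described above.

    from kubernetes import client, config

    config.load_kube_config()          # uses your local kubeconfig
    apps = client.AppsV1Api()

    # Point the existing "myapp" Deployment at a new image version. Kubernetes
    # then performs the rolling update: new pods come up, pass their readiness
    # probes, and only then are the old pods retired.
    patch = {
        "spec": {
            "template": {
                "spec": {
                    "containers": [{"name": "myapp", "image": "acme/myapp:1.1"}]
                }
            }
        }
    }
    apps.patch_namespaced_deployment(name="myapp", namespace="default", body=patch)

    # If the new version misbehaves, `kubectl rollout undo deployment/myapp`
    # rolls back to the previous revision.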

During Runtime Operations: Container managers handle crucial operational aspects like:

  • Auto-scaling containers based on demand

  • Load balancing traffic across container instances

  • Self-healing by automatically replacing failed containers

  • Managing network connectivity between containers

  • Handling storage persistence

For Monitoring and Maintenance: Container managers provide deep insights into application health through:

  • Container-level metrics and logging

  • Resource usage monitoring

  • Health checks and readiness probes

  • Debug capabilities like container exec and log streaming
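
An orchestrator handles all of the above declaratively and across many hosts, but the mechanics are easy to picture. Here is a deliberately naive Python sketch of the self-healing and health-check ideas from the list above, run against a single Docker host; the label, port, and health endpoint are invented for illustration.

    import time
    import docker

    client = docker.from_env()
    SELECTOR = {"label": "app=myapp"}   # illustrative label on our containers

    def reconcile_once() -> None:
        """One pass of a naive 'keep my app containers healthy' loop."""
        for c in client.containers.list(all=True, filters=SELECTOR):
            c.reload()  # refresh status from the daemon

            # Self-healing: anything that has died gets restarted.
            if c.status == "exited":
                print(f"{c.name} exited, restarting")
                c.restart()
                continue

            # Readiness probe: the simplest possible in-container health check.
            exit_code, _ = c.exec_run("wget -q -O /dev/null http://localhost:8080/healthz")
            if exit_code != 0:
                print(f"{c.name} failed its health check, restarting")
                c.restart()
                continue

            # Container-level metrics: a one-shot memory snapshot.
            stats = c.stats(stream=False)
            print(c.name, "memory bytes:", stats["memory_stats"].get("usage"))

    if __name__ == "__main__":
        while True:
            reconcile_once()
            time.sleep(10)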

In more recent years, building images and deploying them at scale in containers has become the default way of deploying applications in larger organizations. If we pick up one of the stories from “The DevOps Handbook”:

While at one of the largest hotel companies, Dwayne Holmes, then Senior Director of DevSecOps and Enterprise Platforms, and his team containerized all of the company’s revenue-generating systems, which collectively support over $30 billion in annual revenue. Originally, Dwayne came from the financial sector.

He was struggling to find more things to automate to increase productivity. At a local Ruby on Rails meetup, he stumbled onto containers. For Dwayne, containers were a clear solution for accelerating business value and increasing productivity. Containers satisfied three key needs: abstraction of infrastructure (the dial-tone principle: you pick up the phone and it works without needing to know how it works), specialization (Operations could create containers that developers could use over and over again), and automation (containers can be built over and over again and everything will just work).

With his love of containers now fully embedded, Dwayne took a chance, leaving his comfortable position to become a contractor for one of the largest hotel companies, which was ready to go all in on containers. He started with a small, cross-functional team made up of three developers and three infrastructure professionals, and their goal was evolution over revolution: to totally change the way the enterprise worked. There were lots of learnings along the way, as Dwayne outlines in his 2020 DevOps Enterprise Summit presentation, but ultimately the project was successful.

For Dwayne and the hotel company, containers are the way. They’re cloud portable. They’re scalable. Health checks are built in. The team can test for latency versus CPU, and certs are no longer in the application or managed by developers. Additionally, they can now focus on circuit breaking, APM is built in, they operate on a zero-trust model, and images are very small thanks to good container hygiene and sidecars being used to enhance everything.

During his time at the hotel company, Dwayne and his team supported over three thousand developers across multiple service providers. In 2016, microservices and containers were running in production. In 2017 $1 billion was processed in containers, 90% of new applications were in containers, and they had Kubernetes running in production. In 2018, they were one of the top five largest production Kubernetes clusters by revenue. And by 2020, they performed thousands of builds and deployments per day and were running Kubernetes in five cloud providers.

Kim, Gene; Humble, Jez; Debois, Patrick; Willis, John; Forsgren, Nicole. The DevOps Handbook: How to Create World-Class Agility, Reliability, & Security in Technology Organizations

The bottom line is that just writing good code isn’t enough to create applications that matter; you also need to test, secure, and automate how that code is delivered and deployed at scale. Creating images of the application and deploying them in containers is arguably the best approach we have today.

Containers help manage every aspect of the compute challenge involved in running an application (i.e., you define the optimal configuration to run exactly what’s needed) while isolating the deployment from other applications. To put it in very simple terms: if you want ice cubes, the best way to ensure consistency is to pour water into an ice cube tray and freeze it. Similarly, containers provide a controlled, repeatable environment for deploying applications.
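
In container terms, the “ice cube tray” is just configuration: you declare how much of the machine the application gets and what it is allowed to see, and the engine enforces that isolation. A small illustrative sketch with the Docker SDK for Python; the image name and limits are arbitrary choices, not recommendations.

    import docker

    client = docker.from_env()

    # Every instance of the application runs in the same declared envelope,
    # isolated from whatever else happens to be on the host.
    container = client.containers.run(
        "myapp:1.0",              # hypothetical image
        detach=True,
        mem_limit="256m",         # hard memory ceiling
        nano_cpus=500_000_000,    # half a CPU core
        read_only=True,           # immutable root filesystem
        environment={"ENV": "prod"},
    )
    print(container.short_id, "started with a fixed, repeatable resource envelope")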

Docker is coming for the money

While Kubernetes has become the default for orchestration, Docker remains the most popular open-source tool for building, testing, and packaging applications into containers. It has gained significant traction in this space, with some estimates suggesting at least 80% adoption among developers who use containers.

This widespread adoption has created an opportunity to monetize developer mindshare. Since 2014, Docker has experimented with various business models, but in recent years, it has pivoted toward offering a developer-centric ecosystem of products and features.

Source: Docker

Key Products

Docker Desktop (the biggest product, with at least a 60% share of the business)

The flagship product, Docker Desktop, is a GUI-based tool that lets developers build, test, and run containerized applications on macOS, Windows, and now Linux. Docker Desktop’s licensing model is central to the company’s revenue, as enterprise users (companies with more than 250 employees or over US $10M in annual revenue) must purchase a paid subscription.

Docker Hub (significant portion of the rest of the business)

This is Docker’s cloud-based registry service for storing, managing, and sharing container images. While Docker Hub offers a free tier (especially for public repositories), teams needing private repositories and additional features must opt for a paid plan.
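
To connect this back to the daily workflow: Desktop (or the engine) is where images get built and run, and Hub is the registry where they are shared and pulled from. One common pattern is a team curating “approved” base images in its own namespace; here is a tiny illustrative sketch, assuming you are already authenticated and using a made-up repository name.

    import docker

    client = docker.from_env()

    # Pull a public base image from Docker Hub...
    base = client.images.pull("python", tag="3.12-slim")

    # ...retag it under the team's namespace...
    base.tag("acme/approved-python", tag="3.12-slim")

    # ...and push it back so every developer and CI job uses the same vetted image.
    for line in client.images.push("acme/approved-python", tag="3.12-slim",
                                   stream=True, decode=True):
        if "error" in line:
            raise RuntimeError(line["error"])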

Docker Scout & Docker Build Cloud (new offerings, likely less than 10% of the business)

Docker Scout helps teams scan container images for vulnerabilities and manage software supply chain risks.

Docker Build Cloud is designed to accelerate image builds by leveraging cloud infrastructure.

As with other open-source products, Docker’s “why pay” pitch is a bit shaky. Here is the set of questions they use to qualify the opportunity against free alternatives:

Does the alternative have a long-term product roadmap and a commitment to product support?

Is the alternative growing and will it keep adding new features to make developers more productive?

Does it offer an integrated solution reducing the need for separate tools and ensuring tool interoperability?

Does it offer an engineered integrated suite of developer tools that go well beyond containerization?

Does it offer enterprise-grade commercial support?

Does it have advanced security features?

Does it have product lifecycle management for security updates and patches?

Does it support large-scale deployment options?

Does it allow organizations to adapt to varying workloads and demands?

Does it have a robust developer community and partner ecosystem?

Does it help organizations maintain compliance with industry standards for security, audits, and compliance?

Does the alternative enable engineering organizations to scale to large development teams, standardize tooling across the company, and implement security policies?

Does it support the business needs of organizations of all sizes, including large enterprises?

So how do they go to market? 2019 was a pivotal year for Docker, as the company sold off its Enterprise business—primarily focused on orchestration—to shift its focus toward a seat-based ARR model directly targeting developers.

By the end of 2022, Docker had more than 1 million seats under subscription, generating approximately $135 million in revenue. Today, projections estimate revenue closer to $200 million, especially following the appointment of a new CRO with a long tenure at MongoDB. This strategic hire introduced a playbook-driven sales approach, leading to some predictable responses:

Source: RepVue

While the situation has stabilized somewhat lately, there are some glaring issues being flagged for both sales:

Source: RepVue

And engineering:

Source: TeamBlind

The reality is that there’s a clear ambition for a major pivot. As of this week, Docker’s new CEO has joined, bringing what is arguably the right pedigree for a highly technical organization looking to expand its presence in large-scale enterprise deployments.

Source: LinkedIn

The most obvious and expected move right now is for Docker to enter the container-as-a-service (CaaS) market. This wouldn’t be direct competition with the hyperscalers but rather an attempt to expand beyond a subscription-based, seat-driven model into new lines of business that scale with compute-based consumption.

This shift is crucial because a seat-based model has inherent scaling limitations, especially in an environment where developer headcount growth is stagnating. However, this doesn’t necessarily mean Docker will abandon its developer-first approach. A core strength of the company remains its commitment to delivering the best possible user experience. This requires the sales team to develop a deeper understanding of the developer workflow and effectively evangelize Docker’s expanding set of features, particularly as the company moves toward offering end-to-end development lifecycle functionality.

Another key shift involves further automation and AI-driven features, particularly through Docker AI and co-pilot-style functionality. This aligns with the broader industry transition toward platform engineering, where organizations build Internal Developer Platforms (IDPs): self-service environments that abstract infrastructure complexity and provide a unified developer interface.

This evolution presents a significant opportunity: Docker is already a widely used and well-liked tool among developers, making it a strong candidate for adoption within IDPs. The more use cases Docker supports across the development pipeline, the more likely developers are to consolidate parts of their workflow into a single tool. Additionally, AI-driven automation makes it more compelling for teams to adopt a broader set of Docker products. The introduction of Scout, Docker’s application security product, is a good example of this expansion (despite the crowded AppSec market).

That said, it’s unlikely Docker will attempt to re-enter the orchestration space. Docker Swarm went to Mirantis as part of the 2019 sale of the Enterprise business, and Mirantis has since announced that the product will receive no further development.

Source: SentinelOne article on containers

The Kubernetes ecosystem deserves its own deep dive, which I’ll cover at a later stage—especially since multiple large players within it present their own tech sales opportunities, even though the Cloud Native Computing Foundation (CNCF) that oversees Kubernetes is not one itself.

Returning to Docker, the company’s success will ultimately depend on the sales execution of its GTM team. The more the product extends its scope, and the better the reps are at evangelizing Docker to developers and platform engineering teams, the higher the potential growth.

Source: Docker SKO

A lot of this will ride on Don Johnson as the new CEO and his ability to rally everybody behind a single vision. This is a big year ahead for Docker, one that could swing either up or down.

The Deal Director

Cloud Infrastructure Software • Enterprise AI • Cybersecurity

https://x.com/thedealdirector