
Codefresh Case Study


The Client

Codefresh is a continuous integration and delivery platform built for Docker from the ground up. Their SaaS product provides a complete solution for delivering microservices at scale — from building Docker containers, through provisioning of dynamic environments, to unit and integration testing, and eventually one-click deployments.

Region
Israel
Industry
Development Tools
Main Technologies
Google Cloud, Kubernetes, Helm, Chef
Services
Kubernetes Adoption
Problem

The Challenge

The company operates in a highly dynamic market where development tools and best practices change rapidly. To keep pace, Codefresh must continually shorten the time to market of new features and experimental ideas.

Before working with Leonid, Codefresh's team used a combination of Chef cookbooks to deploy and maintain their ever-growing number of microservices running on Google Cloud.

Although this approach allowed the team to automate the release cycle of their applications, it also presented a few challenges:

  • Adding a new microservice to the deployment automation was complicated and usually couldn't be completed by the development team on its own.
  • Since most of the configuration was codified within Chef cookbooks, there was no visibility into the up-to-date configuration deployed on each running environment and across the different services.
  • Scaling individual services was challenging, since services were allocated in groups to specific instance types.

Codefresh's team leaders identified early on that switching their deployment strategy from Chef to Kubernetes could solve all of these problems and improve development velocity.

The team decided to move to Kubernetes and contacted Leonid to help translate the Chef-based deployment to Kubernetes, migrate the existing production environments, and train the team members so they could operate the system themselves.

It was hard for us to translate our Chef-based world to Kubernetes’ best practices. Leonid didn’t leave any loose ends. The migration to Kubernetes was carefully planned and executed without surprises or downtime.

Oleg Verhovsky, CTO

What we’ve done

The Solution

Codefresh’s release cycle consisted of three separate stages: local development, release to a shared staging environment, and eventual deployment to production. The team uses Codefresh’s product internally to automate all the steps of the release process.

Because it was important to avoid any downtime during the migration to Kubernetes, Leonid started the project by migrating the staging environment first while leaving the local development practices and production deployment as-is.

After the Kubernetes-based staging environment was completely operational, the development team met for a half-day training session. The goal was to help team members gain theoretical and practical operational knowledge of Kubernetes before they started working with Kubernetes in production.

The migration to the new Kubernetes-based production environment was completed within a few hours, with zero downtime and no complaints from Codefresh customers.

The team also quickly realized that Kubernetes could help them speed up the testing process by eliminating bottlenecks in the staging environment.

Within a few months, additional automation was added to create dynamic environments for each branch. Instead of a single staging environment that an individual developer could block for hours, every team member could test their code in isolation without blocking anyone else on the team.
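A common way to implement per-branch environments like these (a hypothetical sketch, not necessarily Codefresh's actual implementation) is to derive a dedicated Kubernetes namespace from each Git branch name. Since namespace names must be valid DNS-1123 labels, the branch name has to be sanitized first:

```python
import re

def branch_namespace(branch: str, prefix: str = "dev") -> str:
    """Derive a valid Kubernetes namespace name from a Git branch name.

    Namespace names must be DNS-1123 labels: lowercase alphanumerics
    and '-', at most 63 characters, starting and ending with an
    alphanumeric character.
    """
    # Lowercase and collapse every run of disallowed characters into '-'
    slug = re.sub(r"[^a-z0-9]+", "-", branch.lower())
    name = f"{prefix}-{slug}".strip("-")
    # Enforce the 63-character label limit without a trailing '-'
    return name[:63].rstrip("-")

# Each branch maps to its own isolated namespace, e.g.:
# branch_namespace("feature/JIRA-123_new-UI") -> "dev-feature-jira-123-new-ui"
```

Deleting the namespace once the branch is merged then tears down the entire dynamic environment in a single step.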

With Leonid's help, we were able to successfully implement multiple complex, inter-team dependent projects. Leonid knows how to translate the requirements of a complex project into a practical work plan and see it through to completion. He asks the right questions to tackle risks in advance and regularly presents multiple options alongside his technical recommendations.

Oleg Verhovsky, CTO

Results

The Outcome

Within three months of the migration to Kubernetes, the number of microservices deployed and maintained by the development team had doubled.

The training and experience the team gained from working with Leonid and continuously deploying new components on top of Kubernetes eventually helped them build a tighter integration between the product and the Kubernetes platform.

The initial migration to Kubernetes, together with Leonid's subsequent work with DevOps team members on packaging the application components using Helm, also allowed Codefresh to release their SaaS product as an on-prem installation to meet the needs of enterprise customers.

Packaging the application components with Helm allowed Codefresh to onboard beta testers for the Enterprise tier within a month of the Helm project's inception, and provided a quick and painless way to adjust the on-prem installation based on customer feedback.

Kubernetes is now an integral part of Codefresh's ongoing operations and of the company's product vision going forward.

With Leonid's assistance we were able to migrate our applications to Kubernetes, which has helped us tremendously. It standardized our deployment processes and allowed us to release more frequently, and it helped our team quickly develop the skills required to continue integrating our product with the Kubernetes platform.

Oleg Verhovsky
CTO
