March 14, 2019

A Case Study in Serverless and Kubernetes

By Ryan Jones, CEO and Founder of Serverless Guru


Serverless can be a bit of an undertaking to incorporate into your application. How big an undertaking depends on your in-house talent or external consultants, what the application you're building needs to do, and whether you already have an application. If you're starting from scratch, it becomes much easier to determine whether Serverless meets your needs. If you're coming from an existing application, Serverless may still meet your needs, but you first have to identify whether your application is best classified as microservices or a monolith.

A microservice can be thought of as a single block of functionality, whereas a monolith is a tower of blocks with interweaving dependencies. The monolith is trickier and more time-consuming to transition to Serverless, but it can be done with a few different strategies, depending on how much change your company is comfortable taking on. The easiest strategy is to build new features using fully managed (Serverless) services. An example could be as simple as a new REST API which the monolith consumes.

That REST API could be backed by a cloud function written in Node.js, while your monolith APIs remain in Java Spring Boot. That cloud function could hook into your existing database or use a completely different, fully managed database. This lets you dip a toe in without over-committing or spending time re-architecting. The other strategy is not to extend the monolith at all, but to rebuild the entire application using as many fully managed services as possible.

Rebuilding the entire application is a heavy lift and definitely not the first place to start. It also depends heavily on in-house talent or external consultants. And the deeper you go creating workarounds for existing systems, the more complex the integrations become. In some cases, though, rebuilding the application is a necessary change.

Last year, I was helping a client transition from mainframes to Serverless. It was a leap that required re-architecting how the pieces worked together; in that case, the choice was between starting from scratch and extending. Microservices, by contrast, have an easier time transitioning to Serverless. The move still requires some internal adjustments, similar to moving from virtual machines to Serverless, because much of the overhead of managing containers or servers is abstracted away.

The main difference is that you have already broken your application apart into small blocks. That's important because cloud functions serve a small, sometimes singular, function. Moving your microservices to Serverless skips the step of having to identify which parts of the application can be broken apart. That saves quite a bit of time.

The final scenario is starting from scratch. This is a scenario where you're building a new application and can introduce completely new technologies and services that may not correlate to your company’s legacy applications.

I love starting from scratch when working with clients because it allows much more flexibility to showcase how Serverless or fully managed services can rapidly accelerate development.

With that background out of the way, let's jump into the story of a company that started out as the polar opposite: one that made almost no use of fully managed services.

Case Study: Frankfort’s Kubernetes on Serverless

Frankfort is a billion-dollar company that spends millions of dollars per year on servers. With more than 500 employees, they are by no means small. They had developed a strong product and spent tens of millions developing and maintaining it and its supporting services. However, they had built this product before Serverless, or even containers, were in common use in the software industry.

Problem

With the arrival of new tools and solutions such as Kubernetes and Serverless, the existing infrastructure, which worked for the most part, started to show its holes when held up to the light. The clock was ticking and money was leaking. As the organization began to inspect where costs were flowing, Frankfort had a decision to make. The millions of dollars in server costs and the millions more in development and maintenance were becoming too much of a burden on the business. Reducing the total spend on development and maintenance was of the utmost importance.

Frankfort wanted to kill, reduce, and optimize: kill anything that shouldn't be on, reduce anything that was over-provisioned, and optimize whatever was left to further lower costs and overhead and accelerate development. As a first step, Frankfort began to turn servers off. Second, they started scheduling servers to run only during certain periods. Finally, they took whatever remained in their production applications and began optimizing.
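The "reduce" step above, scheduling servers to run only during certain periods, boils down to a simple decision rule that a scheduled cloud function can apply. The sketch below shows one way that rule might look; the `office-hours:9-18` tag format is a hypothetical convention, not Frankfort's actual scheme, and the real function would pass its decisions to the cloud provider's stop/start APIs:

```javascript
// Illustrative decision rule for the "reduce" step: given a server's
// (hypothetical) schedule tag and the current UTC hour, decide
// whether the server should be running right now.
function shouldBeRunning(scheduleTag, hourUtc) {
  // Untagged servers are assumed to be production and stay on.
  if (!scheduleTag) return true;

  // Expected tag format: "office-hours:<startHour>-<endHour>".
  const match = /^office-hours:(\d{1,2})-(\d{1,2})$/.exec(scheduleTag);
  if (!match) return true; // unparseable tag: fail safe, leave it on

  const start = Number(match[1]);
  const end = Number(match[2]);
  return hourUtc >= start && hourUtc < end;
}
```

A scheduled trigger (a cron-style rule) could run this hourly over every tagged development server, stopping those for which it returns false. Failing "safe" on untagged or malformed tags matters: the cost of leaving a dev box on overnight is small compared to stopping a production server by mistake.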

Solution

The optimization was gradual and required heavy lifting, as the application was tightly coupled. Frankfort's main application was a monolith that had to be carefully chipped away piece by piece while ensuring minimal downtime for end users, since Frankfort had to adhere to strong SLAs (Service Level Agreements). Frankfort did not have the in-house expertise to move in this new direction, so they hired outside consultants to fill the gaps where needed.

Results

Frankfort did not start to see large gains until the legacy transition became an organization-wide effort. By investing time in killing, reducing, and optimizing, Frankfort cut its server spend from millions a year to a third of that total. The focus on fully managed services and automation allowed the development and operations teams to level up in ways that accelerated both releases and incident response. In the wake of these efforts, the Frankfort operations team started to fly under a new flag: that of the SRE, or Site Reliability Engineer.

The development teams at Frankfort began to evolve naturally into a service-ownership model, in which they were given more control to build and deploy new features by leveraging the tools created by the SRE team along with other fully managed services. Frankfort is a prime example of the impact Serverless can have on an entire organization: they re-architected existing monolithic applications, extended existing monolithic applications, and built entirely new applications with a Serverless-first perspective.
