Serverless Computing: The Future Architecture for IT Leaders
Serverless architecture is shaping up as the future for IT leaders. Serverless computing has come under the scrutiny of CIOs thanks to recent advances in cloud computing and the numerous advantages it can offer.
Major players in the technology sector are shifting towards cloud platforms, where the cloud provider provisions and manages the core computing resources. Like any novelty, it arrives with great promise, but also with some risks.
We spoke about Serverless with Vicenç García-Altés, Technical Coach at Voxel Group, who has extensive experience in software development in both academic and business settings. Vicenç leads "The Serverless Course", which will be held on October 24th and 25th in Barcelona, hosted by Runroom.
Hello Vicenç, we're delighted to have you here. Serverless is on everyone's lips and generating a lot of enthusiasm, but like any disruptive newcomer in the technology landscape, it also raises some fears and doubts.
Let's start with the basics: What is Serverless?
Serverless is a new computing model in which we delegate a lot of work to our cloud provider. In its simplest form, we focus only on writing our business logic as a small function. The platform is responsible for invoking that function when an event we have configured occurs (such as an API call, a new file in storage, or a new message in a queue), and it also handles more advanced tasks like auto-scaling. Our function may, in turn, call other managed services. We are not responsible for the servers, operating systems, and so on; the cloud provider takes care of all that on our behalf.
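To make the model concrete, here is a minimal sketch of what such a function can look like on AWS Lambda with Node.js. The HTTP trigger, the event shape, and the "order" rule are illustrative assumptions, not something from the interview:

```js
// Minimal AWS Lambda handler (Node.js), assuming an API Gateway HTTP trigger
// configured on the platform side. The platform invokes it on each request;
// we only write the business logic.
exports.handler = async (event) => {
  const order = JSON.parse(event.body || '{}');

  // Hypothetical business rule: reject orders without items.
  if (!Array.isArray(order.items) || order.items.length === 0) {
    return { statusCode: 400, body: JSON.stringify({ error: 'Order has no items' }) };
  }

  // From here the function could call other managed services
  // (a queue, a database, object storage) as needed.
  return {
    statusCode: 201,
    body: JSON.stringify({ accepted: true, itemCount: order.items.length }),
  };
};
```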
How and when did you discover the existence (and importance) of this architecture?
I was working for a large supermarket chain in the UK. One of the teams decided to set up a large Kubernetes cluster to host every team's applications. After more than eight months, they had nothing up and running. Our team, on the other hand, decided to explore ways to deliver value as quickly as possible, and one of the things we explored was AWS Lambda. In less than three months, we had our first project in production. The last I heard, they now have around 200 Lambdas in production.
What are the advantages of Serverless computing?
Well, the list is quite long! We can separate them into advantages for developers and advantages for the business. For developers, the main advantage is that they can let go of many concerns: by using functions, they no longer need to worry about things like patching servers and operating systems, auto-scaling, capacity planning, and more.
From a business perspective, since it's a technology that allows faster development, it improves time to market and the ease of experimentation and innovation. It also has a significant impact on costs, but I think we'll talk about that later.
Can we say that a Serverless architecture has a smaller environmental impact? Could it be seen as a solution for large clusters of servers that require constant maintenance and operation?
Yes, certainly. What's interesting here is what Ben Kehoe calls the Serverless mindset. Ben envisions Serverless as a ladder with endless steps to climb. Moving from on-premise virtual machines to cloud-based virtual machines is one step on that ladder. Using a PaaS platform is another step, and so on indefinitely. For every technological decision we make (the queuing system, the storage system, where we host the API, etc.), we should consider on which rung of the ladder we stand and whether it's worth climbing a few more.
In this case, this would translate into clear benefits for the companies that are adopting it. Is that correct?
Well, this largely depends on the context. If you were to tell the folks at Stack Overflow that they would be better off without their on-premise servers, they would probably disagree. But as a general rule, I believe it's true. Let's let cloud providers do what they do much better than us and focus on trying to add value with our code.
What is the most critical aspect of getting rid of servers?
There is a certain loss of control that can make us a bit nervous. If the cloud provider has an issue with the service we are using, we won't be able to fix it. In general, this shouldn't be a problem because the cloud provider probably has much higher SLAs than we do. As for Serverless, there are things that, rather than becoming more complicated, need to be done differently, such as observability, chaos engineering, etc. In the end, it's a new way of programming our applications that brings some new challenges.
In what type of projects can we use Serverless?
Perhaps the question should be: In what projects can we not use Serverless? Basically, there are three categories of projects where it doesn't fit as well.
The first is projects where low and consistent latency is crucial because cold starts can be a disadvantage. The second is projects with high and consistent throughput because the costs of running the solution can be significantly higher. The third category includes projects where, for some reason, a permanent connection to the server is needed.
If I, as a CIO or head of the IT department, want to start experimenting, where should I begin? What simple problem can I test to start seeing the benefits of Serverless?
There are many ways, but a fairly typical one is to migrate the cron jobs you have in your system to Lambda functions. That's how I started, and I think it's a very good way to begin.
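To illustrate that first step, here is a minimal sketch of a cron job turned into a scheduled function, assuming AWS Lambda with Node.js and the Serverless framework; the cleanup task and the schedule are hypothetical:

```js
// Hypothetical nightly task that used to live in a server's crontab.
// With the Serverless framework you would attach a schedule event to it,
// for example in serverless.yml:
//   functions:
//     nightlyCleanup:
//       handler: cleanup.handler
//       events:
//         - schedule: cron(0 3 * * ? *)   # every day at 03:00 UTC
exports.handler = async () => {
  // The body is just the job itself; there is no server, crontab, or OS to maintain.
  const removed = await removeExpiredSessions();
  console.log(`Nightly cleanup finished, removed ${removed} expired sessions`);
};

// Placeholder standing in for whatever the original cron job actually did.
async function removeExpiredSessions() {
  return 0;
}
```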
Let's talk about Serverless lock-in. Vendor lock-in seems to be becoming a major concern for companies, because migrating to another provider's platform appears to involve significant effort and cost, along with the fear of no longer having their information on their own servers and the resulting loss of control over security. Do you think this concern is justified?
Vendor lock-in is something that is often talked about, yes. Yan Cui has a good article on this. There are many types of lock-in. When you choose a technology, you are essentially locking yourself into it. If you use Active Directory as an identity server, for example, you are locking yourself into that.
As always, it's good to think in terms of costs, and here there are two to take into account. The first is the migration cost: how much will it cost me to switch providers if I ever need to? The other is the opportunity cost: how much will trying to make everything generic enough slow me down?
The drawback of building something that works on every cloud provider is that you can only use the lowest common denominator between them, which is often not very large, since not all providers implement the same services with the same features.
Finally, the big question to ask is, "When will I need to change providers?" Personally, I don't know of any cases. However, this doesn't mean that you can't (and shouldn't) use good architectural practices to separate your business logic from the plumbing of your platform.
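One common way to apply that kind of separation, sketched here under my own assumptions rather than as a pattern Vicenç prescribes, is to keep the Lambda handler as a thin adapter and put the business rules in plain code that knows nothing about the cloud provider:

```js
// Plain business logic: no AWS types or SDK calls, so it can be unit-tested
// locally and moved between platforms. The discount rule is hypothetical.
function applyDiscount(order) {
  const total = order.items.reduce((sum, item) => sum + item.price, 0);
  const discount = total > 100 ? 0.1 : 0;
  return { ...order, total, discount, payable: total * (1 - discount) };
}

// Provider-specific plumbing: a thin handler that only translates the Lambda
// event into plain input and the result back into an HTTP response.
exports.handler = async (event) => {
  const order = JSON.parse(event.body || '{}');
  return { statusCode: 200, body: JSON.stringify(applyDiscount(order)) };
};

// Exported separately so tests (or a different platform adapter) can call the
// business logic without going through the Lambda event format.
exports.applyDiscount = applyDiscount;
```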
In terms of costs, Serverless is initially more expensive. Do you believe it can be more cost-effective in the long run?
Well, I don't think it's initially more expensive; quite the opposite, actually. For the vast majority of products, it will be cheaper. But discussing costs is a complex matter with significant implications. I'll try to summarize it as best as I can. The cost of a solution can be broken down into the cost of running the solution (what the cloud provider charges you), the cost of developing and maintaining that solution (personnel cost), and the opportunity cost.
We've already mentioned that developing a Serverless application can be faster than other solutions because you have to worry about fewer things, which means a lower opportunity cost.
The fact that the platform does so much for us means we need fewer people to develop an application; in particular, you'll likely see a significant reduction in the need for DevOps profiles. This lowers the personnel cost, which in 90% of cases is the largest cost in a project.
Finally, there's the cost of running the solution. AWS Lambda, for example, offers a generous free tier that will reduce your costs.
But besides the cost reduction, what's really interesting is the payment model, which is pay-as-you-go. We only pay for what we use, and when we're not using the service, we don't pay a cent. This has many financial implications, which can be summarized as turning what used to be fixed expenses into variable expenses.
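As a back-of-the-envelope illustration of what that payment model means (the traffic figures below are invented and the unit prices are approximate published AWS Lambda rates for x86; always check current pricing):

```js
// Rough Lambda cost estimate for a hypothetical workload. All numbers are
// illustrative; ~$0.20 per million requests and ~$0.0000167 per GB-second
// are approximate public prices, and the free tier is ignored for simplicity.
const requestsPerMonth = 3_000_000;   // hypothetical traffic
const avgDurationSeconds = 0.2;       // 200 ms per invocation
const memoryGB = 0.5;                 // 512 MB configured

const requestCost = (requestsPerMonth / 1_000_000) * 0.20;
const gbSeconds = requestsPerMonth * avgDurationSeconds * memoryGB;
const computeCost = gbSeconds * 0.0000166667;

console.log(`~$${(requestCost + computeCost).toFixed(2)} per month`); // ≈ $5.60
// If traffic drops to zero, the bill drops to zero: a variable cost, not a fixed one.
```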
Let's talk about another common concern for IT departments: performance. What's your experience?
As we've seen earlier, if you require very low and consistent latency, Serverless may not be the best choice. But if, as in the vast majority of services, that's not the case, there's no problem. The platform's performance is very good and continuously improving. Evidence of this is that large companies like DAZN and iRobot use Serverless in many of their client-facing systems.
Why did you see the need to organize The Serverless Course? What can we learn in this course?
The course grew out of the need to condense what I've learned over these years of studying and building with this technology, so that students can adapt to it quickly. The idea is to give them the knowledge and tools so that, the day after the course, they can start a project with this technology and be fully productive.
What we can learn can be summarized as follows:
- How to develop, test, and deploy a Serverless application using AWS Lambda, the Serverless framework, and NodeJS.
- Strategies for testing Serverless applications (see the sketch after this list).
- Logging and monitoring.
- Basic security.
- Continuous integration.
- Environment management.
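As a taste of the testing point above, here is a minimal sketch using Node's built-in test runner (`node --test`); the function under test is the hypothetical `applyDiscount` from the lock-in example earlier, not actual course material:

```js
// test/discount.test.js — unit test of plain business logic. No Lambda runtime
// or cloud resources are needed because the logic is kept out of the handler.
const test = require('node:test');
const assert = require('node:assert');

// Hypothetical module under test (see the earlier applyDiscount sketch).
const { applyDiscount } = require('../discount');

test('orders over 100 get a 10% discount', () => {
  const priced = applyDiscount({ items: [{ price: 80 }, { price: 40 }] });
  assert.strictEqual(priced.total, 120);
  assert.strictEqual(priced.discount, 0.1);
  assert.strictEqual(priced.payable, 108);
});
```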
Thank you very much, Vicenç, for your time and for addressing our questions about this complex topic!
We hope this interview has been helpful in understanding the main advantages and risks of Serverless Computing.
If you want to delve deeper into this topic, The Serverless Course is for you.
We look forward to seeing you at Runroom!