How is serverless different from container-based services? We answer this question and more in this episode of Coding Over Cocktails with Adnan Rahic.
Serverless computing is growing in popularity and is heavily promoted by public cloud providers. The much-touted benefit of serverless computing is that it allows developers to focus on their code while the public cloud provider manages the environment and infrastructure that runs it. But how is serverless different from container-based services? What are the best use cases for serverless? What about the challenges? And where can this architecture go in the future? We answer these questions and more in this episode of Coding Over Cocktails.

Kevin Montalbo: Joining us all the way from Australia is TORO Cloud's CEO and founder, David Brown. Hi, David!

David Brown: How have you been, Kevin? I'm very well. You?

KM: I'm great. And our guest for today is a Developer Evangelist at Sematext.com, a SaaS company based in Brooklyn, New York. He is also a passionate teacher, helping people embrace software development and healthy DevOps practices in various venues since 2017. He is the author of "Node.js Monitoring: The Complete Guide" and has several published articles, programming tutorials, and courses under his belt on websites such as freeCodeCamp, HackerNoon, Medium, and Dev.to. He's here with us today to share his expertise on serverless computing. Joining us for a round of cocktails is Adnan Rahic. Hey Adnan! Great to have you on the podcast.

Adnan Rahic: Hey, good to be here!

KM: All right, so let's dive right in. In our previous podcasts, we have often discussed Kubernetes and container-based approaches to microservices. Can you briefly explain to us how serverless is different from container-based services?

AR: Yeah, for sure. When you think about it, with containers you get a package where your code runs: you basically package your code into an executable, and then you run this on an infrastructure, right? They're quite logically called containers because of this.
But with serverless, you don't really get that. With serverless, you deploy your code directly to the cloud provider, and the cloud provider handles everything from there. You don't really care about the dependencies. You don't really care about the runtime or anything like that. You just let the cloud provider handle all of that for you. With containers, you have to package all of those things inside the container. You have to figure out, "Okay, I need to package the dependencies. I need to manage all of that. I need to make sure it's all running correctly."

The serverless approach makes things easy in one sense, but it can also be very complex in another, because if you overdo it, it gets really hard to manage all of that complexity. At the same time, it can reduce complexity. If you have a huge Kubernetes cluster, for example, or a huge monolith, and you have things like cron jobs or email services, things that aren't really related to the core functionality of your cluster or your product, you can cut those pieces out into serverless functions that are basically isolated. So if you know how to use it correctly, or if you have a very good sense of how to get the best out of it, then it makes sense. But it's not a silver bullet. As with anything, you have to figure out the best use-case and then use it for what it's intended for, if that makes any sense.

DB: Yeah, good stuff. We'd like to get to the use-cases and some of the challenges and complexities you mentioned in a minute. But before we get onto that: serverless is often mentioned in reference to Functions-as-a-Service. Serverless is broader than that, right? It encompasses more than just Functions-as-a-Service.

AR: Oh yeah, definitely. Basically, anything that doesn't require you to manage a server can be considered serverless, right? But only Functions-as-a-Service?
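To make the "just deploy your code" model concrete, here is a minimal sketch of a function handler in the shape of AWS Lambda's Node.js programming model (the event fields shown are illustrative). Note what is absent: no Dockerfile, no web server, no runtime packaging. The provider supplies Node.js and invokes the exported function once per event.

```typescript
// Minimal FaaS-style handler sketch (AWS Lambda Node.js handler shape).
// The provider manages the runtime; we only supply this function.
interface ApiEvent {
  queryStringParameters?: Record<string, string>;
}

interface ApiResult {
  statusCode: number;
  body: string;
}

export const handler = async (event: ApiEvent): Promise<ApiResult> => {
  // Read an optional query parameter from the triggering HTTP event.
  const name = event.queryStringParameters?.name ?? "world";
  return {
    statusCode: 200,
    body: JSON.stringify({ message: `Hello, ${name}!` }),
  };
};
```

With containers, everything around this function (base image, dependencies, process supervision) would be your responsibility; here it is the provider's.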
That's a subset, if you can call it that. If you think about services like AWS Lambda or Azure Functions, those are all FaaS: Functions-as-a-Service. You have a service where you can deploy your code and hook it up to an event trigger; something triggers that code, and it runs. Something happens, and you get a return value, which is basically what you want. And that's just one subset of serverless.

If you think about it, if you're running a super simple static website on S3, that's serverless as well. Are you managing a server? No. You have S3, you put your files in there, you hook it up to a domain, and it's serverless, right? So the definition is quite vague. It's also very loose, in that if you're running a website on Netlify and hooking up an API to some Lambda functions, or using services like [inaudible], or you're just running it yourself on AWS Lambda and S3, all of those things could be considered serverless. Because, I mean, have you ever touched an EC2 instance? Not really, no, right? So it could still be considered that way. I know a lot of people that are hardcore purists. They're going to say, "Oh, this is so weird." Maybe yes, maybe no. In the end, whatever floats your boat.

The point of serverless is to make it simple, to make it easy for people who don't need to manage infrastructure. Hypothetically, if I'm a startup founder, I don't really want to care about managing containers and instances, running the infra, hooking all of these things up, and getting a really large bill for it. If I'm making a ton of money and I need to employ tons of people to run it all so I don't have downtime, then sure, that's the next logical step.

DB: Well, there are managed services for your containers as well.
So, managed Kubernetes and, as you say, managed virtual servers through EC2, or managed container-based services. There's plenty of opportunity for managed infrastructure and containers, and I guess that starts leading us down the path. One thing we should clarify: sometimes when we're talking about best use-cases or complexities or challenges, we're actually talking about Functions-as-a-Service, that subset of serverless. So let's talk about some of the best use cases for serverless then. You said it depends on the use-case whether you use serverless or microservices and container-based technologies. So let's run through some of the differentiations between serverless and microservices based on containers.

AR: Yeah, for sure. To keep it simple: anything that requires a persistent database connection, or requires many database connections, especially to relational databases like Postgres or MySQL? Just don't. Just skip FaaS altogether. Unless, if I get really technical, you have a proxy API that sits in front of your database; then it's fine. But that adds another layer of complexity that you often don't really want, unless that's a trade-off you're okay with. The problem with functions is that one function is basically one API. That one API needs a connection to the database, and if you're scaling out, then you have thousands of functions with thousands of connections to the database. That's just an accident waiting to happen. That's running with scissors, right? You don't want to do that. It's an unnecessary load on the database, unnecessary connections, multiple points of failure, multiple points of breach. So you just don't really want to do that, right?
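The connection-exhaustion problem described here can be sketched as follows. The `connectToDb` client below is a hypothetical stand-in for a real driver such as `pg`; the counter just makes the connection cost visible.

```typescript
// Sketch of why per-invocation DB connections are dangerous with FaaS.
// `connectToDb` is a hypothetical stand-in for a real database driver.
let openConnections = 0;

async function connectToDb() {
  openConnections += 1; // each call occupies one connection slot on the DB
  return { query: async (_sql: string) => "ok" };
}

// Anti-pattern: a fresh connection inside the handler. 1000 concurrent
// invocations mean 1000 simultaneous connections against the database.
async function naiveHandler(): Promise<string> {
  const db = await connectToDb();
  return db.query("SELECT 1");
}

// Mitigation: connect once in module scope, so warm invocations of the
// same function instance reuse a single connection (or put a proxy layer
// such as RDS Proxy in front of the database, as Adnan suggests).
const sharedDb = connectToDb();

async function pooledHandler(): Promise<string> {
  const db = await sharedDb;
  return db.query("SELECT 1");
}
```

Even the module-scope trick only helps per warm instance; under a real burst, each new function instance still opens its own connection, which is why a proxy in front of a relational database is the more robust fix.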
Unless you're using a database that's offered as a service as well, one that hooks into that FaaS ecosystem. AWS has DynamoDB, which works fine. Azure has DocumentDB, or whatever it's called now. Any database service that can hook into it is fine. But you get vendor lock-in there, so if you ever want to move away, it's going to be painful for basically everything that goes with that. So I reckon: if you need persistent database connections, figure something else out.

For anything else, you can think of it as sidecars. If you have cron jobs, you don't really need to run those in your core infra. If you have a core Kubernetes cluster that handles your main APIs or your main database handling, you don't need to run those cron jobs there. You can just fire a Lambda, right? Or if you have email services, or any type of service or API that you can extract from your core product? Great. That's one less thing to think about, and it's less load on your entire system. For those things, it's absolutely great.

One example: I built an email service that gets triggered through a Lambda function and a few other AWS services. When somebody types something into a form, I get emailed the response to that question, and then I can email that person back through any email client. But that's not running anywhere. It's not running on a server. It's not taking up any space, or any mental capacity of mine to keep it running. It's just there, in a function, in my AWS account. Things like that are absolutely amazing because they take away all the stress of having to manage it. Unless it's databases. You don't want to go into that one at all.

DB: What about managing it at scale, though?
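The cron-job extraction Adnan describes might look like the sketch below. In AWS this would be wired to a scheduled EventBridge rule; the event shape and the email helper here are illustrative stand-ins, not a real email API.

```typescript
// Sketch of a cron job extracted from a monolith into a standalone
// function, triggered by a scheduler (e.g. an EventBridge rule).
interface ScheduledEvent {
  time: string; // ISO timestamp of the scheduled trigger
}

// Hypothetical stand-in for a real email API call (e.g. SES or SendGrid).
async function sendDigestEmail(recipient: string): Promise<string> {
  return `digest sent to ${recipient}`;
}

export async function cronHandler(event: ScheduledEvent) {
  // Nothing runs (or bills) between invocations; the scheduler fires
  // this on its cron expression, and the function exits when done.
  const result = await sendDigestEmail("owner@example.com");
  return { ranAt: event.time, result };
}
```

The point is that the job no longer lives inside the core cluster: there is no idle process between runs and nothing for the main system to supervise.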
So, I get the cron job thing, or infrequently run services or functions, where you don't necessarily want a service sitting there idle most of the time if it's only going to run that function every five minutes or every hour. Serverless makes a perfect use-case for that. But what about at scale? Does serverless still make sense when you're running hundreds of thousands of transactions per second?

AR: Yeah, it can, just because it can scale so effortlessly. Think about the use-case: if you have a function on AWS and you get 1,000 concurrent connections in the same millisecond to that one API, it's going to scale horizontally to 1,000 functions right away. So you're not going to get the typical latency you would get on a standard API running on a server.
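The horizontal scale-out Adnan describes works because each invocation is stateless and independent, so the platform can run as many copies in parallel as the burst requires. The sketch below simulates a 1,000-event burst against such a handler locally; it illustrates the independence of invocations, not the platform itself.

```typescript
// Sketch: stateless handlers scale out because events are independent.
// Each event could be handled by a separate function instance.
async function apiHandler(event: { id: number }): Promise<{ statusCode: number; id: number }> {
  // No shared state, no shared server: the result depends only on the event.
  return { statusCode: 200, id: event.id };
}

// Simulate a burst of `size` concurrent events hitting the same API.
async function simulateBurst(size: number): Promise<number> {
  const events = Array.from({ length: size }, (_, id) => ({ id }));
  const results = await Promise.all(events.map(apiHandler));
  return results.filter((r) => r.statusCode === 200).length;
}
```

In production there are caveats the simulation hides, notably cold starts for the new instances and per-account concurrency limits, but the basic model is exactly this: one event, one isolated execution.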