Ivan Čuljak
Cloud Solution Architect @ Celeste Maze
There’s a lot of hype around serverless and the idea that the only thing you need to do is write some business logic.
There are a lot of cases where this is true, especially if you’re using serverless for background jobs without much concern about when they will finish processing.
Unfortunately, there are still a lot of cases where things aren’t so peachy, where you still need to worry about latency, performance, and scaling.
We’ll start by discussing the granularity of your functions and their “distribution” among the multiple function apps/deployments you might have, as well as the options for connecting them and the latencies involved.
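To make that concrete, here’s a minimal sketch of the two usual ways to connect function apps: a synchronous HTTP hop, which keeps the caller waiting through network latency (and possibly a cold start), versus dropping a message on a queue, which takes the hop off the hot path. It uses the Azure Functions v4 Node.js programming model; the app names and the queue are hypothetical.

```typescript
import { app, HttpRequest, HttpResponseInit, InvocationContext, output } from "@azure/functions";

// Hypothetical queue, using the default storage connection setting.
const orderQueue = output.storageQueue({
  queueName: "orders",
  connection: "AzureWebJobsStorage",
});

app.http("placeOrder", {
  methods: ["POST"],
  extraOutputs: [orderQueue],
  handler: async (req: HttpRequest, ctx: InvocationContext): Promise<HttpResponseInit> => {
    const order = await req.json();

    // Option A: synchronous HTTP hop to a second (hypothetical) function app.
    // You pay the network latency, and possibly its cold start, on every call.
    const priced = await fetch("https://pricing-app.azurewebsites.net/api/price", {
      method: "POST",
      body: JSON.stringify(order),
    });

    // Option B: hand off via a queue. Latency moves off the hot path,
    // at the cost of eventual consistency.
    ctx.extraOutputs.set(orderQueue, order);

    return { status: 202, body: await priced.text() };
  },
});
```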
Naturally, we’ll also look at the impact of huge libraries on cold starts and general performance.
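As a taste of that problem, one common mitigation is deferring a heavy dependency until the first request that actually needs it, so it isn’t parsed during every cold start. A rough sketch, with “heavy-sdk” standing in for any large package:

```typescript
import { app, HttpRequest, HttpResponseInit } from "@azure/functions";

let heavyClient: unknown;

app.http("report", {
  methods: ["GET"],
  handler: async (req: HttpRequest): Promise<HttpResponseInit> => {
    // Load the library on first use instead of at module load time,
    // keeping it out of the cold-start path for requests that don't need it.
    if (!heavyClient) {
      const { HeavyClient } = await import("heavy-sdk"); // hypothetical package
      heavyClient = new HeavyClient();
    }
    return { status: 200, body: "ready" };
  },
});
```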
Monitoring your code has always been important, but it becomes crucial once you start using functions, especially when your workflow jumps between multiple serverless instances, a few VM Scale Sets in the cloud, and a handful of machines on-prem.
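For illustration, here’s roughly what wiring the Application Insights Node.js SDK into such a mixed workflow can look like. It assumes the APPLICATIONINSIGHTS_CONNECTION_STRING app setting is configured; the event and machine names are made up.

```typescript
import * as appInsights from "applicationinsights";

appInsights
  .setup() // reads the connection string from the environment
  .setAutoCollectDependencies(true) // outgoing HTTP calls become dependency telemetry
  .setAutoCollectPerformance(true)
  .start();

const telemetry = appInsights.defaultClient;

// Custom events for the hops the auto-collector can't see,
// e.g. work handed off to an on-prem machine.
telemetry.trackEvent({
  name: "HandedOffToOnPrem",
  properties: { machine: "onprem-worker-01" }, // hypothetical machine name
});
```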
Cold starts are the arch-enemy of performant serverless functions, but there are ways to tame them. We’ll discuss and demo preheating solutions, DIY hybrid deployments, and a new hybrid solution on Azure.
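The simplest preheating trick is a timer-triggered function that pings the endpoints you want kept warm. A minimal sketch (v4 Node.js model; the URL is hypothetical):

```typescript
import { app, InvocationContext, Timer } from "@azure/functions";

app.timer("keepWarm", {
  schedule: "0 */5 * * * *", // NCRONTAB: every five minutes
  handler: async (timer: Timer, ctx: InvocationContext): Promise<void> => {
    // Hitting the endpoint keeps an instance loaded so real callers
    // don't pay the cold-start penalty.
    const res = await fetch("https://orders-app.azurewebsites.net/api/health");
    ctx.log(`Warm-up ping returned ${res.status}`);
  },
});
```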
The “only” remaining problem is scaling. Although the headlines say you can get the resources you need, when you need them, without ever thinking about the underlying infrastructure, that’s not really true.
When we’re confronted with scaling limitations, the “simplest” thing we can do is multi-deploy our function apps, which brings other concerns: provisioning all those function apps, connecting them to CI/CD, and making sure you can load-balance between them.
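As a rough illustration of that last point, a client-side round-robin over several deployments might look like this. The URLs are hypothetical, and in practice you’d more likely put Azure Traffic Manager or Front Door in front instead:

```typescript
const deployments = [
  "https://orders-westeurope.azurewebsites.net",
  "https://orders-northeurope.azurewebsites.net",
  "https://orders-eastus.azurewebsites.net",
];

let next = 0;

// Round-robin across deployments, falling through to the next one on failure.
async function callAny(path: string, init?: RequestInit): Promise<Response> {
  for (let attempt = 0; attempt < deployments.length; attempt++) {
    const base = deployments[next];
    next = (next + 1) % deployments.length;
    try {
      const res = await fetch(base + path, init);
      if (res.ok) return res;
    } catch {
      // Deployment unreachable; try the next one.
    }
  }
  throw new Error("All deployments failed");
}
```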
In the end, a problem ignored by most is versioning. In most cases there is no problem, but once you multi-deploy your functions and deploy while there is load on your system, you can cause a lot of problems if you’re not careful.
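One defensive pattern we’ll touch on is stamping every message with the schema version it was produced under, so a newer deployment can still process payloads that were already in flight during the rollout. A hypothetical sketch (the message shapes are made up):

```typescript
interface OrderMessageV1 { version: 1; orderId: string; total: number; }
interface OrderMessageV2 { version: 2; orderId: string; totalCents: number; }
type OrderMessage = OrderMessageV1 | OrderMessageV2;

function handleOrder(msg: OrderMessage): number {
  // Branch on the stamped version instead of assuming the latest shape.
  switch (msg.version) {
    case 1:
      return Math.round(msg.total * 100); // upgrade old payloads on the fly
    case 2:
      return msg.totalCents;
  }
}
```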
Come, join, and enjoy a mixture of experiences from the Azure trenches and useful demos.