The internet contains a zillion articles on serverless computing, and a fair share of those focus on hosting websites this way. Now that the hype around serverless is winding down, let’s have a look at what this hosting paradigm can do for you, and where its pitfalls lie.
Let’s get one thing out of the way first. No professional IT infrastructure runs without some kind of server. Serverless computing simply means that you outsource the entire management of the servers to a third party, like a cloud provider. These people still run servers, and your application’s code still gets hosted on them. These cloud folks just happen to be a lot better at the server-hosting game than most of us could ever be from our lonely basements.
The value promised by “serverless” lies in the managed aspects: limitless scaling, global availability and robust uptime. And truth be told: serverless computing in Amazon’s public cloud lives up to these promises. Other providers may do just as well, but I can’t judge those from personal experience. AWS, Azure and GCE simply operate at a scale that only the very largest enterprises could hope to match in their own IT Ops departments, and even those would struggle to formulate a positive business case for the effort.
Should you join the dark side?
So where’s the catch? Because at face value, serverless computing seems too good to be true. The answer, as always, is: it depends. Ask yourself:
- Are you starting a software project from scratch?
- Is it difficult to estimate required processing capacity beforehand?
- Are you looking to engage in a long-term relationship with a single hosting provider?
If the answer to any of the above is ‘no’, then you should really reconsider the serverless promise. Here’s why.
Dealing with a legacy code base
It’s impossible to simply throw an existing application into the serverless cloud and have it work. Business logic in the serverless world gets chopped up into tiny microservices, each of which gets its own small bit of the cloud to run in. This deployment pattern is completely different from anything that came before it. In fact, it’s so radically different that decomposing and refactoring an existing application effectively amounts to a full rewrite. If you’re prepared to deal with that, then by all means go serverless.
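To make that decomposition concrete: each of those tiny microservices typically ends up as a function the cloud runs on demand. Here’s a minimal sketch of what one unit of business logic looks like as an AWS Lambda handler in Python (the greeting logic and event fields are invented for illustration; the handler signature and response shape follow the API Gateway proxy convention):

```python
import json

def lambda_handler(event, context):
    """One self-contained unit of business logic. The cloud provider
    invokes this function on demand; no server process of ours runs
    between requests."""
    body = json.loads(event.get("body") or "{}")
    name = body.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

The catch for a legacy code base is exactly this shape: existing monolithic logic doesn’t come pre-sliced into hundreds of small handlers like this one, so getting there means refactoring on the scale of a rewrite.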
Anticipate resource use
Elastic resource requirements are one of the great drivers that push applications to the cloud. And rightfully so. But should you go completely serverless in one big operation? Yes, if you can. And usually you only can when you’re (re)building something completely from scratch.
If you’re not starting greenfield, you still have strong options. Just don’t expect miracles from “serverless” in that case. Have a good look at the more traditional VM-based hosting options such as AWS EC2. Instances can be grouped into fleets behind a load balancer, with auto-scaling spinning up more instances as application load demands and terminating them again when the storm settles, to keep costs down. While this method isn’t as instantaneous as true serverless constructs, it’s still much better than what on-premises hosting can do. And by tweaking the application and its accompanying VM images, you’ll be able to make auto-scaling quite responsive indeed.
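The scaling rule behind such a fleet is simple to reason about. This plain-Python sketch mimics the kind of target-tracking policy an auto-scaling group applies (the target and the instance bounds are made-up numbers, and this is illustrative logic, not an AWS API call): grow the fleet when average CPU runs above a target, shrink it when load drops, within fixed bounds:

```python
import math

def desired_instance_count(current, avg_cpu, target_cpu=60.0,
                           min_instances=2, max_instances=10):
    """Illustrative target-tracking logic: size the fleet so that
    average CPU utilisation moves back toward the target."""
    if avg_cpu <= 0:
        return min_instances
    # Proportional rule: if CPU is double the target, double the fleet.
    desired = math.ceil(current * avg_cpu / target_cpu)
    return max(min_instances, min(max_instances, desired))
```

For example, a fleet of 4 instances averaging 120% of target CPU would be doubled to 8, while the same fleet idling at half the target shrinks back toward the configured minimum. The responsiveness caveat from above lives in what this sketch leaves out: each added instance is a full VM that still has to boot.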
Beware the vendor lock-in
While all of this fluffy, free scaffolding is incredibly convenient and an awesome enabler for a quick time-to-market, it also locks you entirely into this one specific cloud provider’s platform for the entire life cycle of your application.
The relatively simple way around that is to modularize wisely. Many websites have, for instance, calculation-intensive functions that consume a lot of a web server’s CPU power. This forces their owners to spend big on load-balanced server farms.
Take insurance websites, for example: they often have extremely complex services running behind the scenes to calculate premiums for potential customers. Apart from this domain-specific complexity, they generally run fairly simple, even bland websites (from a hosting perspective, at least).
By dividing your application into generic, conventional workloads (such as CMS hosting) on the one hand and specialized, processing-intensive services on the other, you can use serverless where it shines, and avoid total vendor lock-in by steering clear of it where it doesn’t immediately make sense.
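Following the insurance example, only the calculation-heavy part might become a serverless function, while the CMS serving the surrounding pages stays on conventional hosting. A sketch of such a function, in the same Lambda-handler shape as before (the coverage tiers, rates and the premium formula are entirely invented; a real insurer’s model is exactly the complex piece that earns its own scalable deployment):

```python
import json

# Hypothetical base monthly rates per coverage tier -- illustration only.
BASE_RATES = {"basic": 20.0, "standard": 35.0, "premium": 60.0}

def quote_handler(event, context):
    """Serverless entry point for the calculation-intensive part only;
    the rest of the site never touches this deployment."""
    body = json.loads(event.get("body") or "{}")
    tier = body.get("tier", "basic")
    age = int(body.get("age", 30))
    rate = BASE_RATES.get(tier, BASE_RATES["basic"])
    # Toy risk adjustment: +1% per year of age above 25.
    monthly = round(rate * (1 + max(0, age - 25) * 0.01), 2)
    return {
        "statusCode": 200,
        "body": json.dumps({"monthly_premium": monthly}),
    }
```

Swapping this one function out for another provider’s equivalent is a contained job; migrating a whole serverless-everything estate is not. That asymmetry is the whole point of modularizing this way.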