Recently, at Semantive we have started to use serverless principles quite extensively in our projects, and re:Invent proved to be a perfect place to learn something new and see how others deal with problems related to this technology. Serverless, along with deep learning, was one of the most prominent topics highlighted during sessions and keynotes. I want to share my thoughts and comments on what I found particularly interesting and inspiring.

“What we are actually seeing is that large enterprises are the ones that are really embracing serverless technology. The whole notion of only having to build business logic and not have to think about anything else truly drives the evolution of serverless” – Werner Vogels, CTO of AWS, during his keynote at re:Invent 2018.

It all started in 2014, when AWS released Lambda functions. Since then, we have been witnessing a real “serverless revolution”. The rapid growth of new services and the constant extensions to the current offering are the best proof that this technology and its ecosystem will stay with us for a long time, and we can expect its further expansion. Today, the serverless offering on the AWS platform goes way beyond Lambda functions and includes components for computing, storage, messaging, service orchestration and more.

The answer to why serverless is becoming increasingly popular is simple – you no longer have to manage or provision infrastructure and, most importantly, you only pay for the services you use. In addition to being highly available, serverless solutions are also scalable out of the box. These benefits are so apparent that, despite being a relatively new technology, serverless is being adopted by businesses such as Netflix, Fender and Coca-Cola – to name just a few.

Following the serverless path

During re:Invent, I had the opportunity to take part in a number of serverless-related sessions that demonstrated both best practices for serverless architectures and real, successful customer stories. I found two of them particularly enlightening and inspiring.

“Serverless stream processing pipeline best practices”, led by Roy Ben-Alta and Eyal Levi, was a great example of how choosing serverless solutions can make a real difference in your business. Eyal Levi from Intel Pharma presented the detailed architecture of a patient monitoring and analytics service, as well as their two-year journey to embracing serverless architecture in one of their stream processing pipelines. Despite using state-of-the-art technologies such as Akka, Hadoop and Kafka, and despite having a production-ready system, they decided to replace it with a serverless solution. Why change something that works? The key factor driving that decision was that they spent a lot of time on production maintenance and on providing scalability instead of focusing on new features and capabilities. A truly impressive benefit of choosing serverless architecture was the massive cost reduction of 75% compared to the previous solution, and that figure, as Levi said, covered only the decrease in hardware and infrastructure expenses.

“A serverless journey: AWS Lambda under the hood”, presented by Holly Mesrobian and Marc Brooker, was the most detailed and the “lowest-level” session I took part in during re:Invent. As the title says, Holly and Marc set us out on an in-depth journey through Lambda’s execution model, starting at the hardware and ending at the code level – a lot of useful knowledge that can’t easily be found in the documentation, plus some sneak peeks at new features such as Firecracker. Despite being one of the most popular AWS services, Lambda has some limitations that everyone should consider before going fully serverless with it on the AWS platform. These problems are gradually being addressed by AWS; early in 2019, we should expect improved VPC support for Lambda functions. Today, Lambda functions placed in a custom VPC suffer from long cold starts that can take up to a couple of seconds. Since placing Lambdas inside a VPC gives little extra security benefit, the simplest solution to this problem is to avoid using a VPC for Lambdas whenever possible. This might be the perfect way out for the majority of use cases, but it is only applicable if we do not have to communicate with services such as RDS or Elasticsearch placed in private subnets of custom VPCs. Although Marc Brooker and Holly Mesrobian didn’t reveal what order of magnitude of cold start and latency improvement we can expect, I’m really excited that this issue is being addressed!
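To make the trade-off concrete, here is a minimal boto3 sketch showing where the VPC attachment lives in a Lambda function’s configuration – the function names, subnet and security group IDs are purely hypothetical. As of 2018, adding a VpcConfig block is what triggers the extra network setup behind the long cold starts, while passing empty lists keeps the function outside any VPC.

```python
# A minimal sketch, assuming AWS credentials are already configured for boto3.
# Function names and resource IDs below are hypothetical.
import boto3

lambda_client = boto3.client("lambda")

# Attach the function to a VPC only when it must reach private resources
# (e.g. an RDS instance or Elasticsearch cluster in a private subnet).
# As of 2018, this is the configuration that pays the cold-start penalty.
lambda_client.update_function_configuration(
    FunctionName="stream-enrichment-handler",          # hypothetical name
    VpcConfig={
        "SubnetIds": ["subnet-0a1b2c3d4e5f67890"],      # hypothetical private subnet
        "SecurityGroupIds": ["sg-0a1b2c3d4e5f67890"],   # hypothetical security group
    },
)

# For functions that only call public AWS APIs, detach them from the VPC
# (empty lists) and avoid the cold-start penalty altogether.
lambda_client.update_function_configuration(
    FunctionName="public-api-handler",                  # hypothetical name
    VpcConfig={"SubnetIds": [], "SecurityGroupIds": []},
)
```

Whatever deployment tool you use (SAM, the Serverless Framework, Terraform), the VPC attachment is an explicit, per-function choice, which makes it easy to reserve it for the handful of functions that genuinely need private connectivity.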

Should we all go serverless?

The benefits of using serverless are clearly visible, and some companies decide not only to adopt selected serverless principles but to go fully serverless. Does it mean that all services should follow serverless principles? The short answer is no. Serverless is undoubtedly an innovative and constantly evolving technology that has taken the cloud computing world by storm, but it is not a one-size-fits-all solution.

Written by Anna Stępień, Big Data Engineer at Semantive.
