The Top 8 Challenges You Will Face On the Path To Serverless Implementation (And Suggested Mitigation)
Serverless looks like a microservice architecture's best friend. As you read about it, you might find yourself dreaming of all the time and flexibility it will give you. Maybe you're thinking about the elastic, near-infinite scalability that comes with serverless, or the high level of reliability backed by cloud vendors' SLAs. You're looking forward to the agility the solution provides to give you a competitive advantage, and the performance gains over your on-prem solutions. Even the scaffolding providers offer for the observability of your services appeals to you.
There are tremendous benefits to be had when combining serverless solutions with microservices. But to make serverless work for you, you also need to be aware of its pitfalls - those "challenges" that can prevent you from fully realizing everything serverless can bring to your architecture. Taking the time to examine them will ensure you can navigate around them or, at worst, make accommodations for the technology's shortcomings.
The 8 Challenges of Serverless Computing
The Dreaded "Cold Start"
It's easy to disassociate a serverless function from the underlying infrastructure. That's part of the architecture's appeal. In reality, though, your code is still sitting on a server somewhere, and if your function isn't called often, it will be deallocated.
Large or small, your function is going to take time to load back into memory for use, slowing down performance. This is the cold start problem, and the bigger the function is, the bigger the impact will be on autoscaling and execution times.
To address the issue, use atomic functions - small, compartmentalized, discrete chunks of code. Atomic functions break complex logic into pieces that can load and run faster.
Some vendors also offer features to mitigate cold starts. You can opt to pre-warm functions by making regular calls to them, or reserve compute capacity behind the scenes to host the serverless function. This improves startup and execution speed but will also have an impact on costs.
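The pre-warming approach above can be sketched in pure Python. This is a hedged, minimal sketch, not a vendor's actual mechanism: it assumes an AWS Lambda-style handler signature and a hypothetical scheduled event shaped like `{"warmup": true}` that a cron rule sends every few minutes to keep an instance resident.

```python
import json

def handler(event, context=None):
    """Entry point for a serverless function (AWS Lambda-style signature)."""
    # A scheduled rule can send a synthetic event like {"warmup": true}
    # every few minutes. Returning early keeps this instance warm without
    # executing the real business logic on every ping.
    if isinstance(event, dict) and event.get("warmup"):
        return {"statusCode": 200, "body": "warmed"}

    # ... real business logic would run here ...
    return {"statusCode": 200, "body": json.dumps({"processed": True})}
```

The early return matters: warm-up pings should be as cheap as possible, so they must not touch databases or downstream APIs.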
Security
Security must be taken into consideration with a serverless architecture. It's critical that customer-facing apps maintain a tight security posture. Access to these services should follow the principle of least privilege - granting exactly as much access as is needed and nothing more.
Most vendors and providers have built-in access control mechanisms. Some of these, such as AWS AppSync, AWS API Gateway, and Azure API Management, provide public-facing endpoints that can be secured simply and in numerous ways. These services also include default throttling rates and authorization features, enhancing security and helping to mitigate DDoS and API abuse attacks.
Governance and Standardization
Without standards and governance, your serverless environment can turn into the wild west. Integrating services published by different developers will be problematic. Costs can skyrocket due to a lack of provisioning oversight, especially in a multi-cloud environment. Orchestrating regular maintenance, updates, and versioning requires a concerted effort, exacerbated by a manual release process.
Establishing standard automation and continuous integration and deployment (CI/CD) processes can alleviate these challenges. Standardization and governance will help you fully realize and successfully manage deployment and versioning of your serverless capabilities.
Observability
At the beginning of this post, we mentioned that observability is a benefit of a serverless architecture. That doesn't mean that, with current offerings, it's easy or automatic. In truth, it can be quite frustrating.
In his excellent blog series on the subject, Yan Cui discusses the many challenges and reasons why observability is problematic. The goal of observability is to be able to review the most relevant information at the right time.
Serverless inherently decouples you from the underlying infrastructure, but it also gives you less control over, and access to, background messaging. The good news is that observability is becoming less of an issue as new toolsets and features are being produced. The challenge is shifting, however, to more of a coordination one, as teams must weigh tools and providers against their needs. Big Compass keeps current on these features and can help you understand how to begin your evaluation and the path forward.
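One practical observability habit that works across providers is emitting structured, correlated logs. The sketch below is illustrative only and assumes nothing about any vendor's API: each function prints one JSON line per event, carrying a correlation ID forwarded with downstream calls, so a log aggregator can stitch one request's path across many small functions.

```python
import json
import time
import uuid

def log_event(service, message, correlation_id=None, **fields):
    """Emit (and return) one structured JSON log line.

    A shared correlation ID, passed along with each downstream call,
    lets a log aggregator reconstruct a single request's journey
    across many small functions.
    """
    record = {
        "ts": time.time(),
        "service": service,
        "message": message,
        # Generate a new ID at the edge; propagate it everywhere else.
        "correlation_id": correlation_id or str(uuid.uuid4()),
        **fields,
    }
    line = json.dumps(record)
    print(line)  # stdout is typically captured by the platform's log stream
    return line
```

Because the output is machine-parseable JSON rather than free text, it can be piped to whatever aggregation tool the team eventually standardizes on.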
Managing Parallel Processing
Another double-edged sword with serverless is its ability to scale on demand. While that's great for handling a spike in activity, it can cause problems for downstream systems. Architects need to change their mindset from one application processing large batches to many microservices quickly processing smaller workloads.
Throttling parallel processing using queues or throttling the microservice itself is one way to approach the problem of flooding downstream systems.
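To make the throttling idea concrete, here is a minimal, self-contained sketch of a sliding-window rate limiter that caps calls to a downstream system. In practice this limit would usually live in shared infrastructure (a queue with limited consumer concurrency, or a provider-side concurrency cap); the class name and structure here are illustrative assumptions.

```python
import time
from collections import deque

class DownstreamThrottle:
    """Allow at most `max_calls` downstream requests per `period` seconds.

    A function instance consults the throttle before calling a fragile
    downstream system; denied calls can be retried or re-queued.
    """
    def __init__(self, max_calls, period):
        self.max_calls = max_calls
        self.period = period
        self.calls = deque()  # timestamps of recent permitted calls

    def acquire(self):
        """Return True if a call is allowed right now, else False."""
        now = time.monotonic()
        # Drop timestamps that have aged out of the sliding window.
        while self.calls and now - self.calls[0] >= self.period:
            self.calls.popleft()
        if len(self.calls) < self.max_calls:
            self.calls.append(now)
            return True
        return False
```

A queue-based design achieves the same effect at the architecture level: producers fan out freely, while a fixed number of consumers drain the queue at a rate the downstream system can absorb.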
Planning for ordered delivery is another difficult problem. Since so many parallel functions can run at the same time, this challenge requires that you implement a "stage" that can order events appropriately. Take into account, though, that passing your messages through an ordering stage will likely slow your transmission speed. Returning to serial instead of parallel processing can also address the issue.
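The "stage" described above can be sketched as a small re-sequencing buffer. This is a simplified illustration, assuming events carry a monotonically increasing sequence number (managed message brokers such as FIFO queues offer this behavior as a built-in feature):

```python
import heapq

class OrderingStage:
    """Buffer out-of-order events and release them in sequence order.

    Events that arrive ahead of a gap are held back until the missing
    sequence numbers arrive - restoring order at the cost of latency.
    """
    def __init__(self, first_seq=0):
        self.next_seq = first_seq
        self.pending = []  # min-heap of (seq, payload)

    def accept(self, seq, payload):
        """Accept one event; return the list of events now safe to emit."""
        heapq.heappush(self.pending, (seq, payload))
        ready = []
        # Release a contiguous run starting at the next expected number.
        while self.pending and self.pending[0][0] == self.next_seq:
            ready.append(heapq.heappop(self.pending)[1])
            self.next_seq += 1
        return ready
```

Note how an early-arriving event (sequence 1) is held until its predecessor (sequence 0) shows up - that holding time is exactly the latency cost mentioned above.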
Vendor Lock-In
Vendor lock-in remains a concern. Third-party APIs, a lack of customizable operational tools, implementation drawbacks, and architectural complexity must all still be taken into consideration.
Third-party APIs introduce issues around vendor control, multitenancy problems, security concerns, and vendor lock-in for the services. Compliance may require upgrades, which could lead to losses in functionality, unexpected limits or constraints, and changes in costs.
Distributed debugging tools can encourage developers' dependence on the cloud provider. Piping logs to other third-party offerings, like New Relic or Splunk, can lessen the ties to the cloud provider, but increase the reliance on those third-party offerings.
Implementation drawbacks vary by cloud provider. For example, frameworks at AWS and Azure allow you to deploy a logical application with multiple serverless functions, but that's not the case for all providers. That cascades many other issues, including problems with versioning and rollbacks.
Architectural complexity in serverless is the result of the time and effort required to size the functions appropriately. Without spending time designing right-sized functions, an application or service may need to call a multitude of functions, essentially warping it into a mini-monolith.
The Right Tool at the Right Time
Serverless isn't a one-size-fits-all solution. It's crucial to ensure that the development under way actually suits a serverless architecture; otherwise, the choice to use serverless and the effort to make it work are wasted.
Successful implementations orchestrate individual microservices together, favoring use cases where asynchronous or decoupled processing applies. Common scenarios that marry well with serverless architecture are transactional processing; event streaming; APIs or websites that require automated scaling; multimedia transformation and processing; and SaaS platforms.
Duplicate Events
Because serverless microservices scale out so readily, it's easy to miss duplicates caused by the underlying architecture of the cloud provider, or duplicates produced by your own system. This can lead to severe consequences for clients.
Therefore, duplicate detection is always necessary with serverless microservices. Luckily, there are many tried and true methods for preventing duplicates. Maintaining a temporary ledger is one such method. Here you can log everything that has run through the system and check the ledger for events that have already occurred.
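The temporary-ledger method can be sketched as follows. This is a minimal in-memory illustration; in production the ledger would live in a shared store with item expiry (a key-value table with a TTL, for example), since separate function instances don't share memory. The class and method names are assumptions for the sketch.

```python
import time

class DedupLedger:
    """Temporary ledger of processed event IDs with a TTL.

    An event is processed only the first time its ID is seen within
    the TTL window; repeats within the window are skipped.
    """
    def __init__(self, ttl_seconds=300.0):
        self.ttl = ttl_seconds
        self.seen = {}  # event_id -> timestamp when first processed

    def should_process(self, event_id):
        """Record the ID; return True only on first sight within the TTL."""
        now = time.monotonic()
        # Purge expired entries so the ledger stays "temporary".
        self.seen = {eid: ts for eid, ts in self.seen.items()
                     if now - ts < self.ttl}
        if event_id in self.seen:
            return False
        self.seen[event_id] = now
        return True
```

For this to work, every event needs a stable, unique ID assigned at the source - which is itself a design decision worth standardizing early.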
Some providers offer de-duplication solutions out-of-the-box. Since duplicates can be produced from various systems or within the provider itself, you'll need to be prepared to handle both scenarios. If you need help or clarification on de-duplication processes or functionality, Big Compass is happy to help and use our prior experience to assist with this nuanced challenge.
This list represents just a handful of the common serverless challenges you need to be aware of, as well as a few of the mitigation options you can leverage to get the most out of a serverless strategy. We strongly suggest that you design for each of these when considering a serverless solution.
Big Compass has experience working with multiple implementations across different clients and cloud providers, using each one of these strategies to mitigate common serverless pitfalls. Reach out to the experts at Big Compass to help plan, design, and implement your serverless solution.