
Serverless Cloud Integrations — The Path Forward

Aaron Lieberman:

I'm the Cloud Practice Manager here at Big Compass, and I come from an integration background, originally at webMethods and MuleSoft. I'm now bringing that integration experience over to the cloud in AWS and driving integration as a competitive advantage within our cloud practice here at Big Compass.

Aaron:

So, for purposes of the presentation, I'm actually going to go off camera here while we're presenting. But what we're here to talk about today is serverless. As Tim said, we're here to talk about the benefits of serverless and counterbalance that with the serverless blind spots that are so common. Of course, we're getting into the use cases for serverless too, because it's always important to take a step back and just determine that you're not fitting a square peg into a round hole, and that you're using serverless for the right use cases and only the right use cases.

Aaron:

You also want to quickly migrate to the cloud. This is what so many businesses are after today. Real quick, we're also going to talk about what's happening in the industry and why we're even talking about serverless in the first place. What's happening in the industry is that 84% of on-premise servers are over-provisioned, and this could be iPaaS solutions or any legacy system that's on-premise. 84%, or almost all of those, are over-provisioned. Due to the nature of on-premise servers, you're almost forced to over-provision them. In fact, it could be a smart move to over-provision your servers, because you want that wiggle room to be able to scale up if needed.

Aaron:

What's more, it's very difficult to right-size those servers. You don't want to come up against the point where a bottleneck is created due to the capacity of your on-premise servers. So you can run up against scaling bottlenecks and overpriced resources, all of which increases the total cost of ownership to maintain that system. This is exactly why so many organizations are looking to move to the cloud, where they don't have to be locked into physical capacity in their data center, and where they can set up their foundation on a platform that lets them innovate quickly, scale effortlessly, and drive their competitive advantage.

Aaron:

So, why serverless specifically? We're here to talk about serverless, and it's just one slice of how businesses can use integration as a competitive advantage, like Tim just talked about. Serverless is hot right now. It excites a lot of people, including myself. I'm very passionate about it. But outside of just being exciting, it helps power integration as a competitive advantage because you can lower costs and increase reliability. If you get your deployment cycle right, you can also produce flexible, scalable solutions quicker than ever before, even though your solutions might be more complex. That allows you to innovate quickly and create integrations at a pace faster than ever before.

Aaron:

You can also stay ahead of technological trends. What I mean by this is that the workloads running through your integration platforms are becoming more and more diverse. Traffic is only increasing, more systems are talking to each other these days than ever before, and it's all becoming more unpredictable than ever before. Opening yourself up to the serverless cloud also opens you up to the rapid innovation of AWS specifically. You don't even know how you're going to use that in the future, but hooking into AWS allows you to take advantage of the continuing innovation that AWS constantly produces.

Aaron:

So, serverless is relatively new. I want to give you the common benefits and pitfalls of serverless so you can understand fully the value that serverless can provide. And like any technology, of course it has its downsides, so we're going to talk about those, those common pitfalls and blind spots as well, and the mitigation techniques so you can most effectively implement serverless.

Aaron:

The benefits of serverless are vast, and I can talk about six of the main benefits here. This is where you can derive a ton of value using a serverless architecture for your integration, so I want to help you understand the real-world benefits. The first benefit of serverless is scalability. This is what many people think about. By default in AWS, you can spin up to a thousand concurrent Lambdas. Serverless doesn't just scale out, it also scales in. This is a major benefit for workloads that you don't need to keep always on. It's an extremely common integration scenario where you might have processes scaling up, but then scaling back down, and maybe sitting dormant overnight.

Aaron:

High availability is a second benefit. The wonderful thing about serverless services is that they're highly available out of the box. For integration platforms, that means SLAs as close to 100% as you can get, which you can then pass on to the clients and customers consuming your system.

Aaron:

Just be careful of Virtual Private Clouds (VPCs) here, especially with AWS Lambdas, where VPCs can actually work against you for high availability. You generally want to stay away from them. It's one of the common pitfalls, actually.

Aaron:

Security is the third benefit. Security of serverless services in AWS is controlled primarily by Identity and Access Management (IAM). So, if your IAM roles are solid, your serverless services will be inherently secure. Of course, if you are exposing your serverless services externally through APIs or other means, you have to think about those scenarios. But only in those scenarios; otherwise, they are inherently secure.

Aaron:

The first three benefits really allow an organization to focus on development rather than worrying about implementing the right architecture for scaling, high availability, and security. You still have to think about those things, but only for those specific scenarios, which helps you speed up your service development life cycle.

Aaron:

The fourth benefit is skillsets. Like I said earlier, serverless is hot and it excites a lot of people. So if you're looking for serverless talent, including here at Big Compass, a lot of people will be excited to work on those projects. Coding in serverless is also relatively language agnostic. We prefer Node.js and Java here at Big Compass, but there are many other languages you can use for serverless. Many serverless services are actually codeless, which allows you to stand them up very quickly.

Aaron:

Cost is a fifth benefit. Big Compass has helped our customers save over 90% in ongoing costs by moving their integration platform to the serverless cloud in AWS. This is huge for businesses. So first, it's very easy and cheap to run serverless services, and you're adopting a pay-as-you-go model, for the most part. What's more, in AWS, Lambda executions are free for the first one million requests per month, which is awesome. Huge benefit: one million free requests per month for your common serverless services. Second, scaling down is a huge win. When services are off, you are typically not being charged for them, because you're on that pay-as-you-go model.
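
To make the pay-as-you-go math concrete, here is a rough back-of-the-envelope estimator. The pricing constants and free-tier figures below are illustrative assumptions based on typical Lambda pricing, not numbers quoted in the talk; check current AWS pricing before relying on them.

```python
# Back-of-the-envelope Lambda cost estimate for a pay-as-you-go workload.
# Assumed (illustrative) pricing: $0.20 per 1M requests after the first
# 1M free, and $0.0000166667 per GB-second after the first 400,000 free.
REQUEST_PRICE_PER_MILLION = 0.20
FREE_REQUESTS = 1_000_000
PRICE_PER_GB_SECOND = 0.0000166667
FREE_GB_SECONDS = 400_000

def monthly_lambda_cost(requests: int, avg_ms: float, memory_mb: int) -> float:
    """Estimate one month's bill for a single Lambda-backed integration."""
    request_cost = max(requests - FREE_REQUESTS, 0) / 1_000_000 * REQUEST_PRICE_PER_MILLION
    gb_seconds = requests * (avg_ms / 1000.0) * (memory_mb / 1024.0)
    compute_cost = max(gb_seconds - FREE_GB_SECONDS, 0) * PRICE_PER_GB_SECOND
    return round(request_cost + compute_cost, 2)

# A sporadic workload entirely inside the free tier costs nothing:
print(monthly_lambda_cost(900_000, avg_ms=200, memory_mb=512))  # 0.0
```

The arithmetic makes the talk's point: a workload that is dormant most of the time can genuinely cost nothing, while an always-on server bills around the clock.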

Aaron:

Then, finally, all of these roll up into the benefit of innovation. These benefits spur innovation. Serverless allows you to focus on feature development rather than the minutiae of network security, DMZs, virtualization, and all the other things that you need to think about with your on-premise deployments.

Aaron:

So, you simply develop within a great service development life cycle and DevOps framework. You check in code, you deploy, and you stay focused just on that piece: great feature development. That allows you to build out a richer feature set quicker. You also don't have to worry, as you do on some common iPaaS platforms, about not having a connector to fit the need. It allows you the flexibility and the freedom to build what you want, when you want.

Aaron:

So, to balance that now, let's talk about the serverless blind spots. So, the benefits of serverless can really catapult a business into the stratosphere, but we need to counterbalance that because we're not just going to raise the flag of serverless and tell you that it's good in every single situation.

Aaron:

Most of these pitfalls can actually be mitigated with a mindset shift; I want to preface this slide by saying that. A mindset shift in that there's no infrastructure to manage, concurrency in parallel processing is absolutely king when it comes to serverless, distributed microservices need to play well together, and it's a pay-as-you-go model. So if you shift your mindset into thinking in terms of distributed microservices running in parallel, you'll be much better set up to mitigate these common blind spots.

Aaron:

So, the first common blind spot is cold starts. Cold starts can occur on AWS Lambdas, and they're a really dreaded problem to have, especially when you need a rapid response from your Lambda or API Gateway. Cold starts, just to define them, are a result of a serverless microservice running on AWS's infrastructure: when it's not used for too long a period of time, that particular microservice needs to be loaded back into memory before it can actually run the function. So, the larger the function and the more complicated the VPC networking, as I mentioned on the previous slide, the worse the cold start can be.

Aaron:

So, keep your functions small, and as a best practice, don't put your Lambdas in a VPC unless absolutely necessary. There are also pre-warming techniques that you can employ to mitigate this blind spot.
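
As a sketch of the pre-warming idea, assuming a scheduled rule (EventBridge, for example) sends a synthetic event: the handler short-circuits on the ping so the container stays resident without running business logic. The `warmup` key is a made-up convention for this example, not an AWS field.

```python
import json

def handler(event, context=None):
    # Short-circuit synthetic keep-warm pings before any real work.
    if isinstance(event, dict) and event.get("warmup"):
        return {"statusCode": 200, "body": "warm"}
    # Real integration work: parse the incoming payload and respond.
    payload = json.loads(event["body"])
    return {"statusCode": 200, "body": json.dumps({"received": payload})}
```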

Aaron:

Governance is the second blind spot. Everyone loves microservices, and we love to create them for the variety of benefits they give us. But as you create more serverless microservices, the governance over them can quickly explode if you're not ready for it. I've seen many thousands of AWS Lambdas in a single AWS account, and you can imagine it can quickly get out of hand without the right governing system in place.

Aaron:

So planning is absolutely crucial here. You're going to want to think about dashboards you can create and monitoring services you can use to watch all those different microservices, and create great DevOps processes and automate using CI/CD pipelines. Because if you truly need a thousand Lambdas out there, you're going to want to automate those deployments, or it's going to get unmanageable very quickly. Just practice good serverless hygiene. If you're using something like CloudFormation to stand up your serverless services, it's not only easy to stand them up, it's easy to tear them down as well. So practice good hygiene there and tear down any serverless services you're not using, because they can get out of hand quickly, like I said.

Tim:

Yeah. Aaron, I want to just zoom in on that a little bit there. To put some numbers to this, we were involved in a project that easily had a thousand Lambdas spun up. They were able to get to market really quick, but the amount of tech debt that was created was easily in the order of 20 to 40% of the whole project. They had to invest that much more money to go back and refactor the Lambdas to get them back into a governable state.

Tim:

If they didn't do that, they were left with updating individual jars and individual Lambdas, the runtime costs were expensive, and the cost to refactor their Lambdas was expensive. So, just getting one foot in front of the other from the beginning is critical here.

Aaron:

Couldn't agree more. Yep. The third blind spot is visibility. Again, the mindset shift of using many parallel processing functions and distributed microservices is a game changer for mitigating this blind spot. You have to plan to actually see into your serverless services, especially when you have thousands of them out there. The minute you have an error, you need to find it quickly and determine in what function it actually occurred, and at what point in that function.

Aaron:

So, no integration platform of course is complete without good supportability, and mitigating this blind spot is really about a well-designed, externalized, centralized logging and monitoring framework. This could be within AWS; think tools like X-Ray, or you could even log to a database like RDS or DynamoDB, for example. Or it could be externalized to Splunk or other external systems.
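
One minimal sketch of that centralized-logging mindset: emit every log line as a single JSON object carrying the function name and a correlation id, so whatever aggregates your logs (CloudWatch, Splunk, a database) can trace one message across many microservices. The field names here are our own convention, not a standard.

```python
import json
import time

def log_event(function_name, correlation_id, level, message):
    """Emit one structured log line; Lambda stdout lands in CloudWatch Logs."""
    line = json.dumps({
        "ts": time.time(),
        "fn": function_name,
        "correlationId": correlation_id,
        "level": level,
        "message": message,
    })
    print(line)
    return line
```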

Aaron:

The fourth blind spot is parallel processing. Again, the mindset shift toward parallel processing functions is absolutely critical here. I can't stress it enough. Scaling on demand is a double-edged sword when it comes to integration. On one hand, you want to be able to process really, really quickly. But on the other hand, out-of-order delivery could easily become an issue now, and flooding a downstream system could easily become an issue now. These are two very common integration problems that can become nontrivial challenges if you don't think about them.

Aaron:

So, throttling back a Ferrari is sometimes what you actually need to do. You might need to put a V6 engine in that Ferrari rather than letting it go at full speed. Happens all the time. You can use a queue to preserve order of delivery, and you can throttle back your AWS Lambda or put a queue in front of your Lambdas in order to mitigate this blind spot.
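
The "V6 in a Ferrari" idea can be sketched as a simple token bucket: downstream calls are only permitted at a fixed refill rate, no matter how fast Lambdas scale out. In practice you would more likely cap Lambda concurrency or put an SQS queue in front, but the throttling logic looks like this (the clock is injectable purely to make the sketch testable):

```python
import time

class TokenBucket:
    """Allow at most `rate` downstream calls per second, bursting up to `capacity`."""

    def __init__(self, rate, capacity, now=time.monotonic):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.now = now
        self.last = now()

    def allow(self):
        # Refill tokens based on elapsed time, then spend one if available.
        t = self.now()
        self.tokens = min(self.capacity, self.tokens + (t - self.last) * self.rate)
        self.last = t
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```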

Aaron:

Duplicate detection is number five. This blind spot bites almost everyone. It's a silent killer for businesses, because it happens rarely enough that it can slip through QA testing very, very easily. An upstream system integrating with your system can produce a duplicate, or the serverless service itself, like SQS in AWS, can produce a duplicate message due to its underlying distributed architecture.

Aaron:

So, maintaining some sort of ledger of events that go through your system is the mitigation technique here, and making sure that that ledger is easily accessible and quick, because especially as you onboard more traffic into your system, you need to make sure that your duplication detection system is very, very quick and can scale.
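
A minimal sketch of such a ledger, using an in-memory dict purely for illustration; in production you would back this with something fast and scalable, such as DynamoDB with a conditional write. The TTL keeps the ledger from growing without bound, and the clock is injectable only so the sketch can be tested:

```python
import time

class DedupLedger:
    """Remember message ids for `ttl_seconds` and reject repeats in that window."""

    def __init__(self, ttl_seconds, now=time.monotonic):
        self.ttl = ttl_seconds
        self.now = now
        self.seen = {}  # message id -> time first processed

    def first_time(self, message_id):
        t = self.now()
        # Evict expired entries so the ledger stays small and fast.
        self.seen = {k: v for k, v in self.seen.items() if t - v < self.ttl}
        if message_id in self.seen:
            return False
        self.seen[message_id] = t
        return True
```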

Aaron:

Then, you want to use the right tool for the job. There are times when you'll be tempted to fit a square peg into a round hole, but try to avoid that here. Serverless is great because you can code for virtually any use case, but there are use cases where combining serverless with the right tool for the job 100% makes sense, and we're going to introduce a hybrid approach here. I'll talk about that more later. We strongly believe in hybrid solutions, where you can combine the strengths of, say, an iPaaS platform like MuleSoft or Dell Boomi with the strengths of serverless services like AWS Lambda and SQS. So, we'll talk about some good use cases and anti-use cases in a minute.

Aaron:

You might be thinking at this point, before we talk about use cases, "Is serverless right for me?" Don't worry too much quite yet. Number one, serverless can be your primary approach, but augmentation is to be expected, whether it's on your path to migrating to the serverless cloud, or once you're in the serverless cloud and combining it with an iPaaS platform, for example. Organizations rarely get to the final goal of being 100% in the serverless cloud, so don't worry too much about getting there.

Aaron:

So, I want to pause really quick, I know we've talked about benefits and pitfalls, and see if there's any questions out there, David.

David:

Yeah, Aaron, it's a great time. Just from a time check standpoint, we're about 20 minutes in right now. I did have a question that came in, and I think you may have covered it in the blind spots, but maybe circling back on it might help. The question is this: with the serverless blind spot of parallel processing leading to sometimes throttling back, and duplicates leaking through the system, is it true that the system becomes more delicate at higher volumes? Can you address that real quick?

Aaron:

Yeah, that's a really good question. So, number one, I would say it's not entirely true, and it's not entirely true because if you plan at those higher volumes to handle those blind spots, you'll be okay. Number one, serverless services are meant to scale. They're meant for large amounts of traffic to run through the serverless services. That's exactly what they're meant for. When you're implementing things like duplicate detection, you just need to make sure in your design that your duplicate detection can scale with your traffic.

Aaron:

So, it's not a more delicate solution. It's that you might see more duplicates. But if your duplication detection system, for example, is put in place to be able to scale elegantly and handle those duplicate events and even log them out in an elegant way, you'll be set up to be able to scale and handle large amounts of traffic very nicely.

David:

Right. I hope that answers the question. I'll encourage everyone else to enter your questions in the Q&A and I'll pick those up and we'll interject here in a second. Aaron, I'm looking forward to going through some of these use cases, so I'll turn this back over to you.

Aaron:

All right, thanks. So let's talk about those use cases now. One of the primary use cases for serverless, which I see absolutely all the time, is when your system gets a huge burst of messages or requests due to events occurring at a specific point in the day. It happens all the time in integration systems. Your current servers might be beefed up and over-provisioned to be able to handle these large bursts today, and this can result in slowdowns and bottlenecks, or large costs due to that high-capacity server.

Aaron:

What's more, you're paying a lot to be able to handle these large bursts even though your system is sitting idle the majority of the time. It might only be busy maybe one hour of the day, but you're still paying for that server. With serverless, you can invoke your processing layer on incoming messages and scale out to thousands of concurrently processing Lambdas. This allows you to process faster, handle the bursts with ease, and best of all, use your assets more efficiently, because you might only have these Lambdas awake for one hour of the day, and that ultimately helps you reduce your cost.

Aaron:

The second use case is sporadic workloads. In this use case, low-to-medium-volume business processes create sporadic workloads on your integration system. One example is needing to reconcile an invoice when a customer makes a request on demand. It might happen maybe 10, 20, maybe even 50 times a day, but mostly your process is off. Today, you probably have a server, again, that's always on so you can handle this request on demand. But with serverless, you're taking advantage of being able to scale out and scale in.

Aaron:

So, the beauty of auto-scaling is not just your ability to scale out, but it's also your ability to scale in too, which helps you save that cost. With serverless, you can invoke your processing layer again at any time. Other than that, it's completely dormant and for the most part you're not paying for those services that are completely dormant.

Aaron:

Use case number three is extending the life of a legacy system. We at Big Compass are seeing this more and more in our integration projects. Organizations want to move to the cloud, but they also want to extend the life of their current system in the interim. A legacy system may not have internet connectivity because it's in the DMZ, or it may have very strict firewall rules that involve a lot of red tape to get through. That ultimately prevents rapid innovation, and the legacy system might even be pushed to its limits. Many times, what I see work really well is putting an API gateway in front of the legacy system to handle the control, security, reliability, and customer-facing endpoints, which abstracts your legacy system.

Aaron:

So, the public serverless API can now be more flexible and open up the way in which you expose your legacy system. It can also hold requests if you're worried about flooding your legacy system: it acts as that control point, and you can pass SLAs on to your customers and throttle their requests to your legacy system.

Aaron:

Again, opening yourself up to the serverless cloud opens you up to the rapid innovation of AWS and you don't even know how you're going to use that in the future, but hooking your legacy system into AWS allows you to take advantage of the innovation opportunities.

Aaron:

Finally, this is sort of a super use case: APIs. This brings a few of the use cases together. It's a great culmination of many of them, because the use cases for APIs these days are almost limitless. You might have a customer-facing API that gets order information and processes orders when clients make requests. The more clients you onboard, the more diverse the traffic on this API, and therefore the more you need to prepare for the reliability and scalability of that API. Luckily, Amazon API Gateway can handle 10,000 requests per second out of the box. So, that's going to allow you to scale very, very nicely with just a single API.

Aaron:

Behind that, of course, API Gateway can invoke thousands of concurrently processing Lambdas to get and process the orders that sit behind your API. So, ultimately, instead of having to scale your servers up or buy additional hardware or licenses, you can use a pay-as-you-go model with a reliable, scalable API Gateway on AWS. This allows you to absorb future workload. We know that API traffic is increasing, so you can prepare and feel comfortable absorbing the future workload that you know is coming to your APIs.
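
A minimal sketch of the order-lookup function that might sit behind such an API. The event shape follows API Gateway's Lambda proxy integration; the in-memory `ORDERS` dict is a stand-in for whatever datastore actually backs the API:

```python
import json

ORDERS = {"1001": {"status": "shipped"}}  # stand-in for a real datastore

def get_order(event, context=None):
    """Handle GET /orders/{orderId} via API Gateway's proxy integration."""
    order_id = (event.get("pathParameters") or {}).get("orderId")
    order = ORDERS.get(order_id)
    if order is None:
        return {"statusCode": 404, "body": json.dumps({"error": "not found"})}
    return {"statusCode": 200, "body": json.dumps(order)}
```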

Aaron:

So, let's talk about the opposite now. When is serverless not the best fit? One great example is messaging outside of AWS. Say you need to continuously poll an external queue, like a JMS queue, for messages: you need to maintain an open connection to that external JMS queue to grab any new messages coming through so you can process them. This might not be the best use case for serverless, because essentially what you would be creating is a serverless solution that is always on, and that goes against the premise of serverless. With serverless, you want to wake up on demand, go dormant, and repeat that cycle.

Aaron:

So, it's not necessarily a bad solution if you were to do that; it's just that another tool might be better for the job, say an iPaaS solution, because it can maintain an open connection to the third-party system rather than putting extra logic and complexity into your AWS Lambda to code for an always-on solution.

Aaron:

The last case where you might want to rethink serverless and use another tool is massive data enrichment and transformation. A great example of this is receiving a large manifest of the entire dump of records from a client's system. I've seen this in many integration platforms. The file comes in over FTP. It might be five gigabytes in size, while the maximum Lambda memory is only three gigabytes. So, you could develop a more complex architecture to split the file before processing it in AWS Lambda, but even if you get there, there's still no out-of-the-box transformation utility offered by AWS Lambda, so you'd have to code specifically for that transformation and come up with an elegant way to handle it.
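
To illustrate the extra architecture a pure-Lambda approach forces on you here, a sketch of the splitting step: break the manifest into chunks small enough for one function invocation to hold in memory. The chunk size is an arbitrary example value:

```python
def split_records(lines, max_records_per_chunk):
    """Yield lists of records no larger than max_records_per_chunk each."""
    chunk = []
    for line in lines:
        chunk.append(line)
        if len(chunk) == max_records_per_chunk:
            yield chunk
            chunk = []
    if chunk:  # emit the final, possibly short, chunk
        yield chunk
```

Each chunk would then be fanned out to its own Lambda invocation, and you would still have to write the transformation logic yourself, which is exactly the complexity an enrichment-focused tool spares you.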

Aaron:

So, again, it's not a bad solution; it's just that another tool might be better for the job, because some tools are specifically made to handle, transform, and enrich large files.

Aaron:

So, hopefully I've got you pretty excited at this point, and it is exciting. Serverless is exciting. You might be thinking about all these things: the reliability and performance that serverless can give you, its inherently secure nature, the operational excellence that you can achieve, and the costs that you can save. We want to put you on the right migration path now. If you're going to take up serverless as an integration model, we want to put you on the right path to getting there.

Aaron:

So, let's talk about finding your path to getting you to the serverless cloud. I want to be able to enable you to think about the best approach to get you there. Knowing the use cases and benefits and pitfalls at this point allows you to think about how you can most effectively get your organization to the serverless cloud. And it's really not the technology that's going to let you down, it's a very robust, exciting technology that allows you to innovate quickly, as I said. It's your plan and design and usage of the tool that can succeed or fail, and I believe that's the same case with any technology.

Aaron:

So, there's typically a few paths that businesses can take on their way to the serverless cloud. Number one would be a custom cloud strategy. A custom cloud strategy is one where integrations are custom built using services provided by AWS. This strategy gives you the most capability and flexibility but also requires the most amount of initial investment. So you just have to balance those and keep that in mind.

Aaron:

A hybrid approach, which I've alluded to earlier in the presentation, is one great approach to enablement. We're really big fans of this hybrid approach here at Big Compass. In a hybrid approach, you're combining the strengths of your iPaaS or your legacy system with serverless. This can be great because you can dissect your current system for the best use cases, and only those best use cases, to actually move to serverless. It allows you to take a more unbiased approach rather than trying to fit a square peg into a round hole.

Aaron:

Of course, there's always the option of waiting and revisiting. So, sometimes doing nothing is the right answer, and in cases where you might have conflicting drivers in terms of business stakeholders, sometimes doing nothing and waiting is the right answer.

Aaron:

Hybrid is so important that I just want to talk about it a little bit more. Really, no matter your path, you're going to see some form of hybrid along the way, so plan for hybrid because it's a given. You're hybrid if your identity provider is separate from your API management. You're hybrid if you combine MuleSoft with AWS, even though you're all deployed to the cloud. You're hybrid if you're using a transformation tool that's different from your routing logic. You're even hybrid if you're sending an email once a week to collect payments from a customer.

Aaron:

So, regardless of your approach, at some point, whether you're migrating to the serverless cloud or your final solution will be being in a serverless cloud combined with something like an iPaaS solution, you're going to run across hybrid and we fully support this model here at Big Compass.

Aaron:

What hybrid allows you to do is adopt the quickest and best way to get to the serverless cloud, and the quickest way to do that is really decomposing your existing system and looking for the best candidates to move. That is why the planning phase is absolutely critical if you're going to go down this path. Your planning is really based on two dimensions: the components of your actual system, and the processes running through it. Based on those two dimensions, you're more than likely going to find some great serverless candidates that you can move off of your iPaaS or legacy system and onto the serverless cloud. Your bridge to the serverless cloud is iterative success through this hybrid approach. I strongly believe that.

Aaron:

This is exactly what Big Compass can help you with. We would bring in our integration knowledge, combined with our cloud knowledge to be able to help you identify great candidates that can work with the serverless cloud.

Aaron:

So, how can we accomplish this, and what does it look like in practice? It's not some magic trick or marketing scheme. It's based on experience, and we want you to be successful if you choose to take this path, because it has been a tried-and-true methodology for us, one we would recommend and use ourselves.

Aaron:

So, we have a few phases here. Number one, we would inventory those processes. This can be half the battle. I've seen many times, working with organizations, that processes just aren't inventoried to the state they should be. Inventorying those processes and understanding your current system, like I said, can be half the battle, so getting through this step is absolutely crucial in your planning phase. Once you get through it, you really have a good understanding of your system.

Aaron:

Step two would be a checkpoint. You can use this checkpoint to really take a step back and look at all of the processes that you've inventoried. Look at your system and ask yourself, "Do I have good candidates to move to serverless?" You want to take an unbiased approach here because if you don't have good candidates to move to serverless, maybe you're already using the right solution for the job, for your integration platform today. But typically what we're finding is, at a bare minimum, at least 20% of your processes and system components would benefit from moving to the serverless cloud.

Aaron:

So, after this checkpoint, what you're going to want to do with that inventory is prioritize the serverless candidates. You can prioritize based on a few different dimensions: the complexity and risk of those processes, upcoming timelines that you need to meet, or what business stakeholders want to push over to the serverless cloud.

Aaron:

Once you're done prioritizing, you get into a typical service development life cycle, where you design for the serverless cloud working with your existing system. You have to think about how you'll adopt the serverless cloud, and about whether you need customer communication or not. You're getting into migration as well as the architecture of what your serverless cloud implementation will look like.

Aaron:

So, at a certain point here, you can use prior knowledge to an extent and use those SMEs that you've worked with on your legacy or IPaaS implementation. But sooner rather than later, you're going to want to involve AWS experts like we have here at Big Compass and you want to ensure that your service architecture is sound before moving into that implementation phase.

Aaron:

Then, you get to your implementation phase. Again, people are excited to work on serverless, but you're going to want to involve people with the know-how so you can mitigate those blind spots that we talked about and they don't bite you down the road.

Aaron:

Finally, you can deploy and run your workload, and that puts you in the serverless cloud. You have serverless cloud adoption at that point, even if it's only one process. Ultimately, you can take this on yourself, and we want you to be successful because we're excited about the serverless cloud. You can also call us to take you through the planning, design, and implementation of your serverless cloud adoption.

Aaron:

Above all, you want to adopt the serverless cloud quickly, and you want to achieve the maximum ROI that you possibly can. Failing fast is a great approach for this: it allows your business to test the waters with minimal risk. At Big Compass, we even have a serverless accelerator that can get you up and running with serverless services in AWS more quickly than you typically would, and it helps you mitigate those common blind spots that we talked about. We can come in with integration expertise and mitigation techniques for all of those pitfalls.

Aaron:

So, from all of that, we can get you operational in the cloud in 30 days or less. That might seem like a bold statement, but it's very possible. There are processes in almost every organization, on every integration platform, where it's an absolute no-brainer to move to the serverless cloud. Maybe you've battled with a process for a long time, or it's causing a bottleneck in your system and creating pain points for you right now. These are great candidates to move to the serverless cloud, to prove your serverless approach, and to do it quickly.

Aaron:

So, overall, we believe that, at a bare minimum, 20% of your portfolio can benefit from serverless cloud adoption. Sometimes it might be more like 80%. So we would encourage you to look through your portfolio and find those candidates, and we can help you with that if your organization doesn't have the people or the expertise.

Aaron:

That's really the bottom line: this is an iterative approach. With it, you can adopt the serverless cloud quickly and at minimal risk. It allows you time to get used to supporting the serverless cloud in AWS, and it even gives you a rollback option if serverless is not for you, which a lot of people have liked. Maybe serverless doesn't seem supportable once you actually get there. You can get there in 30 days or less and then make a go or no-go decision, which can be really valuable to some businesses.

David:

Hey, Aaron, this is David. I had a question on the timing thing that came in and I wanted to ask it because I think it's appropriate right here. The question is can you talk about a timeline of a real use case you have implemented with serverless? You talked about a quick 30 day here, but the question is more of what's a real use case? Maybe one of the ones that you've talked about earlier that you can give us a real life perspective on.

Aaron:

Yeah. Great question. 30 days or less might seem far-fetched, right? But in its most simple form, what we've implemented is a simple transformation service deployed to AWS's cloud using SQS and AWS Lambda. This service was stood up in just a couple of days, number one, because it's just that: it's simple. It was straightforward.

Aaron:

It's just like any other project that you're working on: the simplest processes allow you to deliver features very quickly. In this case, it was a simple transformation that we had to achieve, and we had it deployed, all the way from development through testing, within just a couple of days. But of course that can expand very quickly; the more components and complexity you add, the more you're getting into the realm of months.
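To make the pattern concrete, here is a minimal sketch of what an SQS-triggered transformation Lambda can look like. This is an illustration only; the payload fields and the name-mapping transformation are hypothetical, not the actual project Aaron describes:

```python
import json


def handler(event, context):
    """Transform each SQS record and return the results.

    SQS-triggered Lambdas receive messages under event["Records"],
    with the message payload in each record's "body" field.
    Assumes JSON payloads like {"first": "...", "last": "..."}.
    """
    transformed = []
    for record in event.get("Records", []):
        payload = json.loads(record["body"])
        # The "transformation" here is a simple field mapping.
        transformed.append({
            "fullName": f"{payload['first']} {payload['last']}",
            "source": "sqs",
        })
    return {"statusCode": 200, "body": json.dumps(transformed)}
```

A service this small is exactly the kind of candidate that can go from development through testing to deployment in days rather than months.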

Aaron:

So, the largest serverless implementation that we've done, on an entire integration platform, took about six months. So you can see the wide range there: your simplest serverless candidates are very feasible to get operational in the cloud in 30 days or less, but your most complex efforts, migrating entire systems, are of course going to take much longer.

David:

Great. Thank you for the answer. We're about 40 minutes in, Aaron, so I'll give it back over to you.

Aaron:

Okay. All right, so that is adopting the serverless cloud quickly. Of course, serverless is something, like I said, that we're very passionate about here, and I'm personally very passionate about it. It allows us to bring innovation to integration and allows businesses to be highly competitive in their respective industries.

Aaron:

So, next steps would be don't hesitate to reach out. You can reach out to me specifically, Aaron@BigCompass.com, for an assessment of processes to migrate to the cloud, and you can also contact me to begin your first POC in 30 days or less, like I said.

Aaron:

We don't have to engage in any one particular way. It could be us coaching you through this process, and we're happy to do that. It could be that you need to strategize before making the leap to the serverless cloud. We can do that too or anything in between from assessment to design to development.

Aaron:

So, again, we're here, and as Tim said in the beginning, we're happy to make those connections with our community and like-minded folks.

David:

Aaron, I've had another question-

Aaron:

Questions?

David:

... came in. Now that we're to this point, I'll go ahead and ask it. The question is in regards to IPaaS versus serverless, and I'll read the question. It says if a business is considering a build versus buy of getting an IPaaS solution or a serverless solution in place, what would be your recommendation on how to approach that decision? Not which one you would necessarily recommend but how would you approach the decision of build versus buy?

Aaron:

I'm glad this one came up, because we had actually discussed this, and I even created a backup slide for it that I didn't include in the presentation. It's not a beautiful slide, but I think it gets the point across. When you're considering build versus buy, or IPaaS versus serverless, it always depends on your particular scenario and your particular use case. But in general, you want to break it down based on these factors.

Aaron:

So, licensing cost might be a big factor for you. Licensing costs for an IPaaS platform can be significant, a greater ongoing cost than serverless. On the other hand, your implementation cost for serverless is going to be greater than your initial implementation cost for an IPaaS platform. The operational cost, the ongoing cost, of serverless is going to be pretty minimal, depending on your workload, versus the ongoing licensing and support costs of an IPaaS platform.

Aaron:

Innovation is really ramped up with serverless because it is so flexible, as we talked about. Then there's vendor support. Vendor support with the IPaaS platforms is really second to none. You do get great vendor support from the MuleSofts, the Dell Boomis, and so on. Whereas with serverless, you're writing custom code, and you're probably not going to get support from AWS for that custom code. So you're turning more toward experts, forums, or blogs for support.

Aaron:

Then skill level: you don't necessarily need as much ramp-up time with IPaaS platforms as you do with serverless. And finally, scalability: you might be reaching bottlenecks and need to sink more money into an IPaaS platform, or renegotiate licenses, to be able to scale, whereas with serverless, it's out of the box and automatic. So, those are the seven factors that we've come up with that I would put into that build versus buy decision.
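One lightweight way to operationalize these seven factors is a weighted scoring matrix. This is only a sketch: the 1-to-5 scores and the equal weights below are illustrative placeholders that each organization would replace with its own assessment, not a recommendation from the presentation:

```python
# Hypothetical build-vs-buy scoring sketch. Scores run 1-5
# (higher is better for you); all values here are placeholders.
FACTORS = ["licensing_cost", "implementation_cost", "operational_cost",
           "innovation", "vendor_support", "skill_ramp_up", "scalability"]


def weighted_score(scores: dict, weights: dict) -> float:
    """Sum each factor's score times its weight; compare options by total."""
    return sum(weights[f] * scores[f] for f in FACTORS)


# Equal weights to start; tune these to reflect what matters to your business.
weights = {f: 1.0 for f in FACTORS}

serverless = {"licensing_cost": 5, "implementation_cost": 2,
              "operational_cost": 4, "innovation": 5, "vendor_support": 2,
              "skill_ramp_up": 2, "scalability": 5}
ipaas = {"licensing_cost": 2, "implementation_cost": 4,
         "operational_cost": 3, "innovation": 3, "vendor_support": 5,
         "skill_ramp_up": 4, "scalability": 3}

print("serverless:", weighted_score(serverless, weights))
print("ipaas:", weighted_score(ipaas, weights))
```

Raising the weight on vendor_support tilts the result toward IPaaS; raising it on scalability or innovation tilts it toward serverless, which matches the trade-offs Aaron walks through above.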

David:

That's interesting. When you look at things like innovation and scalability, do you think that's more customer-facing than an IPaaS solution might be? Because a lot of the things that you spoke about are related to cost. Are there particular industries or types of solutions that might lend themselves to being more serverless-friendly than IPaaS?

Aaron:

Sure. Yeah. Specifically, you mentioned innovation. Innovation, I would actually say, is more internally focused. It can definitely be externally focused too, but if the culture of your organization is innovation-driven and technology-driven, serverless is probably going to be a great approach for you, and a byproduct is that your customers will benefit as well. You can release better features more quickly using a serverless platform once you're up and running, so your customers are only going to benefit from that.

Aaron:

Okay. I'd like to thank everyone. I really thank you all for attending. The recording will be posted as well as the slides. Don't hesitate to reach out to us, like I said, talk about serverless innovations and dive deeper. Thank you, everyone.
