Log Transport Strategies for MuleSoft Deployments
Using logging to optimize and troubleshoot your MuleSoft-enabled integrations can be complicated. Without visibility into how messages move through the ecosystem, it's hard to track down where failures occur or why. You need to see under the hood to understand where problems are coming from.
That's why application logging is essential for optimizing, monitoring, and troubleshooting your integration solutions. But despite how helpful logs can be, they can be challenging to read and interpret, and they require strategies to transport them from the application to your log management tool. That's why logging is frequently one of the most neglected tasks in a project.
Thankfully, there is a lot of information available to help. And there are several tools - notably Splunk and ELK - that can parse your logs and make them useful. But for these tools to work with your implementation of the Anypoint Platform, they need access to the logs produced and held by the Mule application.
There are several strategies to make your Anypoint logs accessible to your logging management tools. Choosing the right one depends on your deployment model. This article identifies four approaches commonly used by MuleSoft customers to manage the movement of log data.
Strategy One: Anypoint Platform On-Prem with On-Prem or Cloud Log Management
If your deployment of MuleSoft is on-prem, this is the preferred option, regardless of whether your log management tool is also on-prem or in the cloud. It does, however, depend on your access to the logs: if the application team can't reach the log files, this strategy may not be workable.
This approach requires that you configure the Splunk Universal Forwarder or Elastic's Filebeat to monitor the application and system logs. Log formatting is accomplished with SplunkCimLogEvent or LogstashLayout.
- The logging overhead is decoupled from the application
- Logging files are persistent, adding robustness to your logging strategy
- Logging is continuous, offering both historical data for analysis and a view of the current situation
- Because of security concerns, the application team may not have access to the log files
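As a rough sketch, a file-monitoring setup for this strategy might look like the following Filebeat configuration. The log paths, multiline pattern, and Logstash host are placeholder assumptions for your environment, not a vetted config:

```yaml
filebeat.inputs:
  - type: log
    paths:
      - /opt/mule/logs/mule_ee.log        # runtime/system log
      - /opt/mule/logs/mule-app-*.log     # per-application logs
    multiline:
      pattern: '^\d{4}-\d{2}-\d{2}'       # new events start with a timestamp
      negate: true
      match: after                        # fold stack traces into the event
output.logstash:
  hosts: ["logstash.internal.example:5044"]
```

Because the forwarder runs as its own process, the Mule application itself needs no changes, which is what decouples the logging overhead from the application.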
Strategy Two: Anypoint Platform On-Prem with Limited Log Access
If you’re using an on-prem deployment of MuleSoft’s Anypoint Platform, but don’t have access to Anypoint’s logs, there is still hope. The Log4j2 framework can facilitate transportation of the logs for use with your log management tools.
This method relies on two primary components - loggers, which define which packages to log, and appenders, which define where log events are delivered.
To do this, you’ll need to customize your Log4j2 configuration so it pushes the application logs to your log management tool using appenders such as SocketAppender or HttpAppender. Of course, transporting logs via HTTP or TCP/IP requires that those transport mechanisms be up and available to move the data. An outage or transport issue could delay your log receipt.
- The DevOps team does not have to configure the log transfers; the application team owns the process
- Transportation is embedded within the application
- Developers will need to spend additional time designing and developing a solution to optimize how the logs are transported
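A minimal sketch of such a Log4j2 configuration, streaming events over TCP to a Logstash input, could look like this. The host, port, and layout are placeholder assumptions; your logger packages and levels will differ:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<Configuration>
  <Appenders>
    <!-- Socket appender streams each event over TCP to a Logstash input -->
    <Socket name="Logstash" host="logstash.internal.example" port="5000"
            protocol="TCP">
      <!-- JSON keeps events machine-parseable on the receiving side -->
      <JsonLayout compact="true" eventEol="true"/>
    </Socket>
  </Appenders>
  <Loggers>
    <AsyncRoot level="INFO">
      <AppenderRef ref="Logstash"/>
    </AsyncRoot>
  </Loggers>
</Configuration>
```

Because the appender ships with Log4j2 itself, the application team can own this configuration without involving the DevOps team.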
Strategy Three: MuleSoft CloudHub with Cloud-based Log Management
When both your MuleSoft deployment and your log management tools are in the cloud, your options are limited, especially if you’re looking for log management that can inform your performance management and troubleshooting solutions. That’s because CloudHub - the cloud runtime implementation of Anypoint - has limitations.
CloudHub only allows a 100 MB log size with a 30-day rolling limit. Go beyond that size or length of time, and your logs will be truncated, losing any historical log data related to your integrations. And if you don’t have a Titanium license with MuleSoft, you can’t search logs under Anypoint Monitoring.
To get around this limitation, you’ll need to disable CloudHub logging. You can then create a custom log appender leveraging tools like GelfAppender, SocketAppender, or SplunkHttp. Then add the Log4j2CloudhubLogAppender alongside your custom appender. This will send your logs from CloudHub to your log management tool.
- This approach gets around the limitations caused by CloudHub
- The logs are lost if the application is deleted
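A trimmed sketch of such a CloudHub log4j2.xml appears below. The CloudHub appender's full attribute set is omitted here (check MuleSoft's documented appender definition for the complete list), the SplunkHttp appender requires the splunk-library-javalogging dependency, and the HEC URL, token property, and index name are placeholders:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<Configuration packages="com.splunk.logging,com.mulesoft.ch.logging.appender">
  <Appenders>
    <!-- Retains log visibility in Runtime Manager's console -->
    <Log4J2CloudhubLogAppender name="CLOUDHUB"
        addressProvider="com.mulesoft.ch.logging.DefaultAggregatorAddressProvider"
        applicationContext="com.mulesoft.ch.logging.DefaultApplicationContext"/>
    <!-- Custom appender forwarding events to a Splunk HTTP Event Collector -->
    <SplunkHttp name="SPLUNK"
        url="https://splunk.example.com:8088"
        token="${sys:splunk.token}"
        index="mulesoft">
      <PatternLayout pattern="%d [%t] %-5p %c - %m%n"/>
    </SplunkHttp>
  </Appenders>
  <Loggers>
    <AsyncRoot level="INFO">
      <AppenderRef ref="CLOUDHUB"/>
      <AppenderRef ref="SPLUNK"/>
    </AsyncRoot>
  </Loggers>
</Configuration>
```

Routing events to both appenders means Splunk holds the durable history while Runtime Manager keeps its live console view.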
Strategy Four: MuleSoft CloudHub with On-Prem Log Management
Using a cloud deployment of MuleSoft with on-prem log management presents its own set of concerns. Many companies disallow inbound connections into on-prem data centers for security reasons. This hobbles your ability to send logs directly into Splunk or ELK.
If that’s the case, you can still get the logs from CloudHub and bring them into your log management tools; it just takes a few extra steps. You’ll need to create a scheduler or batch process to download the logs, then push them into the data center for ingestion by ELK or Splunk.
The following utility can be used to aggregate the logs: https://github.com/mulesoft-catalyst/cloudhub-log-aggregator.
- Allows logs to get to the on-prem log management tools
- Requires the use of another script - a scheduler or batch process - to transport the logs, increasing fail points
- Adds multiple steps to the process
- CloudHub API call limits set in your SLAs may cause throttling
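The download-and-push step above can be sketched in a few lines of Python. The endpoint path and headers are assumptions modeled on the CloudHub v2 REST API and should be verified against your Anypoint Platform version; the app name, deployment ID, token, environment ID, and file path are all placeholders:

```python
import json
import urllib.request

ANYPOINT_BASE = "https://anypoint.mulesoft.com"


def deployment_logs_url(app_name: str, deployment_id: str) -> str:
    """Build the (assumed) CloudHub endpoint for a deployment's logs."""
    return (f"{ANYPOINT_BASE}/cloudhub/api/v2/applications/"
            f"{app_name}/deployments/{deployment_id}/logs")


def fetch_logs(app_name: str, deployment_id: str, token: str, env_id: str):
    """Download one deployment's log entries as parsed JSON."""
    req = urllib.request.Request(
        deployment_logs_url(app_name, deployment_id),
        headers={
            "Authorization": f"Bearer {token}",  # Anypoint access token
            "X-ANYPNT-ENV-ID": env_id,           # target environment
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


def write_ndjson(entries, path: str) -> None:
    """Append entries as newline-delimited JSON to a file that an on-prem
    forwarder (Filebeat / Splunk Universal Forwarder) already watches."""
    with open(path, "a", encoding="utf-8") as out:
        for entry in entries:
            out.write(json.dumps(entry) + "\n")
```

Run from cron or a CI scheduler inside the data center, this pulls logs outbound from CloudHub and drops them where an existing forwarder can ingest them, so no inbound connection into the data center is ever required.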
Log management is crucial in identifying, mitigating, and rectifying issues with your integrations. Although sometimes complicated, managing and parsing logs can be accomplished no matter which MuleSoft implementation - cloud or on-prem - is the best fit for your company. Implementing the best approach for log data movement is an essential factor in your logging success.
If you have questions about how to set up and effectively use your Anypoint Platform logs to improve your integration performance, Big Compass can help. Contact us with your questions about logs, log management, and MuleSoft’s Anypoint Platform.