by Haim Koschitzky, XpoLog CEO
XpoLog 6 is coming soon. In this series of posts I am covering the new primary features and enhancements. This post will dive into our new visualization gadgets and the ideas guiding us in our long term visualization development road map.
Even though we see many log data analysis deployments, users still face many challenges around IT log data visualization, analysis, and insights.
Although it may seem obvious, before investing expensive effort and resources into analyzing data, it is crucial to define your expectations and requirements. While in the past merely collecting all log data and making it available for search was good enough, this is no longer the case.
In order to ask the right questions, determine the most important use cases your log data serves and what role you want it to play in your ongoing work: monitoring system availability, software quality, continuous deployment, application performance, and business insights; troubleshooting; analyzing security incidents; compliance auditing; and so on.
There are specific use cases across the application life cycle: architects, developers, testers, DevOps, APM, operations, and production support all have specific use cases and requirements. Giving the right answer to the right question makes a big impact and will drive smart actions.
Once the requirements and expectations are well defined, add data to XpoLog. When doing so, organize data in Apps logical structures and AppTags, as was discussed in my previous post, XpoLog 6 Virtual Applications Structures, AppTags, and IT Visualization Strategies. Create an App that will contain a collection of dashboards; we recommend creating a dashboard per topic or use case, and providing each one with a meaningful name (“performance”, “errors”, “user audit”). Now follow the steps of creating search queries, or use out of the box gadgets for analytics.
With XpoLog 6 you will find example Apps that you will be able to use as examples of best use cases for log analysis data visualization.
In the new version we added more than 20 new gadgets, including 3D graphs, as we witnessed a growing demand for better visualization tools. Once you have created search queries to analyze data and generate proper result sets, you will need to select the visualization gadget that best fits these result sets and visualizes them most effectively.
Let’s look at a result set that aggregated and computed the average memory consumption and total memory usage of two application servers. Take a look at the figure below. In gadget 1 you can see total memory consumption over 24 hours, aggregated at 1-hour intervals. This gadget tells the story of both servers combined. Gadgets 2 and 3 represent the same data, but for each of the individual servers. Once we split the data by server, we discover that each server had a very different memory consumption pattern.
An hourly aggregation for memory is far from accurate; memory changes at a much faster rate. On the upper row of gadgets we see the totals for both servers (gadget 4) and two additional gadgets, 5 and 6, representing each server at 1-minute intervals.
We were looking to monitor our application server memory consumption to avoid spikes that might crash one of our clusters. Choosing the right visualization tools, and in this case, intervals, makes a big difference.
Optimize your dashboards and visualization gadgets by verifying they deliver the insights you’re after at the right resolution. In the example above, analyzing memory for the entire cluster did not provide a clear picture of memory consumption, but grouping by server and then reducing the time interval to minutes made it clear which server spiked.
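The effect described above is easy to reproduce outside any particular tool. The sketch below (plain Python, not XpoLog query syntax; the server names and numbers are invented for illustration) shows how a cluster-wide hourly average dilutes a one-minute spike on a single server, while a per-server, per-minute view exposes it immediately:

```python
# Illustrative sketch: why per-server, fine-grained aggregation reveals
# spikes that a cluster-wide hourly average hides.
from statistics import mean

# One hour of per-minute memory readings (MB) for two hypothetical servers.
server_a = [400] * 60          # steady baseline
server_b = [400] * 60
server_b[30] = 1900            # one-minute spike on server B

# Cluster-wide hourly average: the spike is averaged away.
cluster_hourly_avg = mean(a + b for a, b in zip(server_a, server_b))
print(round(cluster_hourly_avg))   # 825 MB, barely above the 800 MB baseline

# Per-server, per-minute view: the spike stands out at once.
print(max(server_a))               # 400
print(max(server_b))               # 1900
```

The hourly average moves only a few percent, while the per-minute maximum for server B shows nearly five times its baseline, which is exactly the kind of event that can crash a cluster node.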
Once your Apps and Dashboards provide clear views and visualization, it becomes easy to identify problems, trends, and insights into your IT infrastructure and applications. Now you will be able to monitor or view the dashboards live. Leverage that visibility and you will be able to take actions that make your applications more agile, secure, and optimized for the business.
Then return to the first step. This is an ongoing process. Data changes every day: the content of logs and other data types is updated by IT, developers, and vendors constantly. To stay ahead, keep asking questions and never stop looking for the answers.
We will publish a more comprehensive use case on how to create, optimize, and use the new Apps module. In my next post I will present our new Operations and DevOps screens with more visual examples.