Aren’t we told, “The best things in life are free”?
When tasked with picking up our “4 Cs of Quality Monitoring Tools” series, I knew the next topic to cover was “Cost,” and the aforementioned quote immediately popped into my head. While researching the quote for additional context, I was surprised to learn the words are attributed to Coco Chanel and that there’s a second sentence: “… The second best things are very, very expensive.”
These nuggets of useless trivia might lead you to believe two things: A) I’ve forfeited my “geek” credibility because I’m quoting a maven from the fashion world. B) Chanel was prophetic in her observation and missed her calling in life by NOT pursuing a career in Information Technology. Perhaps that second part is a little overblown … unless you consider a network and systems management application one of the “best things in life”.
Balancing the TCO Equation
There are a couple of dimensions to software tool costs: the direct monetary costs, and the indirect, resource-related costs like time and computing. Are you getting the proper value for your IT spend? There are some incredibly complex and feature-rich monitoring tools available in the marketplace. IBM Tivoli Netcool comes to mind. It’s … well … cool. This package does a lot of very sophisticated things in the IT monitoring realm. It also comes with a “cool” price tag for customization, training, and all the trimmings. Of course, there’s the other end of the spectrum as well. Open source software is available to do everything you need in an enterprise-monitoring context. However, “free isn’t free.” Nagios, RANCID, and Cacti may not “cost” anything to buy, but if your staff isn’t well versed in the F/OSS world, good luck getting them to pop their heads up from man pages and wikis to do the job you hired them for in the first place.
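To make the direct-versus-indirect trade-off concrete, here is a back-of-the-envelope comparison. All figures are hypothetical placeholders, not real vendor pricing or salary data; the point is only that staff time belongs in the equation alongside license fees.

```python
def three_year_tco(license_cost, annual_support, staff_hours_per_year, hourly_rate):
    """Direct costs (license + support) plus indirect costs (staff time) over 3 years."""
    direct = license_cost + 3 * annual_support
    indirect = 3 * staff_hours_per_year * hourly_rate
    return direct + indirect

# Hypothetical commercial tool: big license, modest care and feeding.
commercial = three_year_tco(license_cost=100_000, annual_support=20_000,
                            staff_hours_per_year=200, hourly_rate=75)

# Hypothetical open source stack: free to buy, heavy on staff time.
open_source = three_year_tco(license_cost=0, annual_support=0,
                             staff_hours_per_year=1_200, hourly_rate=75)

print(f"Commercial:  ${commercial:,}")   # $205,000
print(f"Open source: ${open_source:,}")  # $270,000
```

With these made-up numbers, the “free” option costs more over three years. Swap in your own figures; the lesson is that the comparison is only honest when both columns are filled in.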
Now that we’ve clarified what we’re talking about with regard to costs, it’s important to ask why cost is such a critical factor. Let’s file this one under “D” for “Duh!” Generally speaking, the IT function for an organization is a cost center, not a direct driver of company revenue. You don’t need an MBA to know that reducing operational costs is a good thing.
The big trick with IT software tool costs is walking the line between spending a ton of cash on a tool that does everything you could possibly want and spending much less while sacrificing features and/or support. So how do we start? Perhaps the best way is to view your IT monitoring requirements through the lens of pain points. In other words, what scenarios in your environment cause the biggest headaches for you and your team? In my travels I’ve seen and experienced three things that cause the biggest problems.
Make the Alerts Stop!
By far the biggest problem I see in the field is alert deluge. The monitoring system is doing what it’s designed to do: alert you when things go bad. The problem is there typically isn’t any context for those alerts, whether that’s re-notification on an existing issue or numerous alerts for different variables that should be correlated with one another. The obvious consequence of too many alerts is that things inevitably fall through the cracks. The laws of probability tell us that eventually you’ll miss something that was genuinely important. Make sure you seek out a tool that at least has the capability to reduce the number of false positive alerts hitting your inbox, smartphone, or ticketing system.
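The re-notification half of the problem boils down to simple deduplication. Here’s a minimal sketch of that idea, not the implementation of any particular product: alerts with the same (host, metric) key are suppressed if a notification already went out within a configurable window. The names and the 15-minute window are illustrative assumptions.

```python
import time

SUPPRESS_SECONDS = 15 * 60   # illustrative re-notification window
_last_sent = {}              # (host, metric) -> timestamp of last notification

def should_notify(host, metric, now=None):
    """Return True if this alert should page someone, False if it's a repeat
    of an alert we already sent within the suppression window."""
    now = time.time() if now is None else now
    key = (host, metric)
    last = _last_sent.get(key)
    if last is not None and now - last < SUPPRESS_SECONDS:
        return False  # duplicate within the window: stay quiet
    _last_sent[key] = now
    return True
```

The first alert for a given host/metric pair gets through; the flood of repeats over the next fifteen minutes doesn’t. Real tools layer smarter correlation on top of this, but even this much keeps an inbox usable.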
Less is More
The next problem I see in a lot of environments, which goes directly to the idea of cost savings, is multiple tools in place doing the same things. There certainly are situations where multiple tools are necessary. Perhaps your DBAs use a tool that gives in-depth visibility into MS-SQL databases where your regular monitoring system doesn’t. On the other hand, maybe the network engineers are using one general-purpose NMS and the systems folks another, but there’s a 90% overlap in functionality. If I’m a CIO and I see that scenario, my first question is “Why?” You should aim for tools with a feature set broad enough that you have the option to consolidate software packages and thus save on your IT spend.
Make the Machines Do your Dirty Work
The last pain point is most prevalent in large IT infrastructures, but it’s potentially a problem everywhere: maintaining your NMS once it’s deployed. Technologies are getting more and more dynamic. Virtualization with vMotion-like features, SD-WAN, and public cloud computing are just the most prevalent examples, but many others exist. As these dynamic technologies take hold in corporate IT organizations, monitoring them gets more and more difficult *if* it has to be done by hand. You should be looking for tools with plenty of automation and auto-scaling features built in.
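At its core, that automation is a reconciliation loop: compare what the monitoring system thinks exists against what a live inventory source (a cloud API, a hypervisor, a CMDB) says exists, then add and retire monitors accordingly. The sketch below is a generic illustration with made-up host names, not the API of any real NMS.

```python
def sync_monitors(monitored, discovered):
    """Given the set of hosts currently monitored and the set just
    discovered in the live inventory, return (to_add, to_remove)."""
    to_add = discovered - monitored
    to_remove = monitored - discovered
    return to_add, to_remove

monitored = {"web-01", "web-02", "db-01"}
discovered = {"web-01", "web-03", "db-01"}  # web-02 retired, web-03 spun up

to_add, to_remove = sync_monitors(monitored, discovered)
print(to_add)     # {'web-03'}
print(to_remove)  # {'web-02'}
```

A tool with real auto-discovery runs a loop like this for you on a schedule, so a VM that vMotions or scales out this afternoon is being monitored by tonight without anyone touching a config file.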
IT software tool costs are one of those things that impact people at every level of an organization. In the upper echelons of a company, it’s the actual dollars and cents that get the spotlight. Down in the IT trenches, however, those costs translate into tools that can make an engineer’s job easy, hard, or somewhere in between. That’s why “Cost” is one of the four things to consider in a quality monitoring tool.