Preface

Within the working world of technology, there are hundreds of thousands of different applications, all (usually) logging in different formats. As Splunk experts, our job is to make all of those logs speak human, which often seems an impossible task. With supported third-party applications, log formatting is sometimes out of our control. Take, for instance, Cisco, Juniper, or any other leading manufacturer.

These devices submit structured data, specific to the manufacturer. There are also applications that we have more influence over, which are usually custom applications built for a specific purpose by the development staff of your organization. These are usually referred to as 'proprietary', 'in-house', or 'home-grown' applications, all of which mean the same thing.

The logs I am referencing belong to proprietary in-house (a.k.a. home-grown) applications that are often part of the middleware, and they usually control some of the most mission-critical services an organization can provide.

Proprietary applications can be written in anything, but logging is usually left up to the developers for troubleshooting, and up until now the process of manually scraping log files to troubleshoot quality-assurance issues and system outages has been very specialized. By that I mean that, usually, the developers are the only people who truly understand what those log messages mean.

That being said, developers often write their logs in a way that only they can understand, because ultimately they will be the ones troubleshooting and fixing the code when something severe breaks.

As an IT community, we haven't really started looking at the way we log things; instead, we have tried to limit the confusion to the developers, and then have them help the other SMEs who provide operational support understand what is actually happening.

This method has been successful, but time-consuming, and the true value of any SME lies in reducing a system's MTTR and increasing its uptime. With any system, more transactions processed means a larger scale of deployment, and beyond about 20 machines, troubleshooting with a manual process becomes increasingly complex and time-consuming.

The goal of this book is to give you some techniques for building that bridge in your organization. We will assume you have a basic understanding of what Splunk does, so that we can provide a few tools to make your day-to-day life with Splunk easier, without getting bogged down in the vast array of SDKs, their matching languages, and APIs. These tools range from the intermediate to the expert level. My hope is that at least one person can take at least one concept from this book and use it to make their life easier.

What this book covers

Chapter 1, Application Logging, discusses where application data comes from, how that data gets into Splunk, and how Splunk reacts to it. You will develop applications, or scripts, and also learn how to adjust Splunk to handle some non-standardized logging. Splunk is only as turnkey as the data you put into it. This means that if you have a 20-year-old application that logs unstructured data in debug mode only, your Splunk instance will not be turnkey. With a system such as Splunk, we can quote some data science experts in saying "garbage in, garbage out".

Chapter 2, Data Inputs, moves on to the kinds of data inputs Splunk uses to bring data in. We see how to enable the input methods that Splunk has developed, and you will get a brief introduction to each of them.
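For a small taste of what a data input looks like, here is a minimal sketch of an inputs.conf file-monitor stanza; the path, sourcetype, and index names are hypothetical, not taken from the book:

    # inputs.conf -- hypothetical file-monitor input
    [monitor:///var/log/myapp/app.log]
    # Tag the events so props.conf rules can find them later
    sourcetype = myapp:log
    index = main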

Chapter 3, Data Scrubbing, discusses how to format all incoming data into a Splunk-friendly format, pre-indexing, in order to ease search querying and knowledge management going forward.
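To make that concrete, here is a minimal sketch of pre-index scrubbing in props.conf; the sourcetype name and patterns are hypothetical illustrations, not taken from the book:

    # props.conf -- hypothetical pre-index scrubbing rules
    [myapp:log]
    # Mask anything that looks like a 16-digit card number before it is indexed
    SEDCMD-mask_card = s/\d{16}/XXXX-MASKED/g
    # Break events on the ISO timestamp that starts each line
    LINE_BREAKER = ([\r\n]+)\d{4}-\d{2}-\d{2}
    SHOULD_LINEMERGE = false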

Chapter 4, Knowledge Management, explains some techniques for managing the data coming into your Splunk indexers, some basics of how to leverage knowledge objects to enhance search performance, and the pros and cons of pre- and post-indexing field extraction.
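As a preview of that pre- versus post-extraction trade-off, here is a minimal sketch of a search-time (post-indexing) field extraction in props.conf; the sourcetype, field name, and pattern are hypothetical:

    # props.conf -- hypothetical search-time field extraction
    [myapp:log]
    # Extracted only when searched, so it is cheap to change later;
    # an index-time extraction (via TRANSFORMS-) is faster to search
    # but is baked into the index and costly to undo.
    EXTRACT-user = user=(?<user>\S+)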

Chapter 5, Alerting, discusses the growing importance of Splunk alerting and the different levels at which it can be done. In the current corporate environment, intelligent alerting and alert 'noise' reduction are becoming more important due to machine sprawl, both horizontal and vertical. Later, we will discuss how to create intelligent alerts and manage them effectively, as well as some methods of 'self-healing' that I've used in the past, along with the successes and consequences of those methods, in order to assist in setting expectations.
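For a flavor of what an alert with built-in noise reduction looks like, here is a minimal sketch of a savedsearches.conf stanza; the search, threshold, and addresses are hypothetical:

    # savedsearches.conf -- hypothetical scheduled alert with throttling
    [Errors spiking on myapp]
    search = index=main sourcetype=myapp:log log_level=ERROR | stats count
    enableSched = 1
    cron_schedule = */15 * * * *
    dispatch.earliest_time = -15m
    dispatch.latest_time = now
    alert_type = number of events
    alert_comparator = greater than
    alert_threshold = 100
    # Throttle to cut alert 'noise': suppress re-firing for an hour
    alert.suppress = 1
    alert.suppress.period = 1h
    actions = email
    action.email.to = oncall@example.com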

Chapter 6, Searching and Reporting, talks about the anatomy of a search and then some key techniques that help in real-world scenarios. Many people understand search syntax; however, using it effectively (a.k.a. becoming a search ninja) is something much more elusive and continuous. We will also look at real-world use cases to get the point across, such as merging two datasets at search time and making the result sets of two searches match each other in time.
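As a preview of those two use cases, here is a minimal SPL sketch that merges two datasets at search time and aligns both result sets on the same 5-minute buckets; the index, sourcetype, and field names are hypothetical:

    index=web sourcetype=access_combined
    | bin _time span=5m
    | stats count AS web_hits BY _time
    | append
        [ search index=app sourcetype=myapp:log log_level=ERROR
          | bin _time span=5m
          | stats count AS app_errors BY _time ]
    | stats values(web_hits) AS web_hits values(app_errors) AS app_errors BY _time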

Chapter 7, Form-Based Dashboards, discusses how to create form-based dashboards, leveraging $foo$ variables as selectors to appropriately pass information to another search or another dashboard. We also see how to create an effective drill-down effect.
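Here is a minimal Simple XML sketch of that token-passing pattern; the token, index, and field names are hypothetical:

    <form>
      <label>Errors by Host</label>
      <fieldset>
        <!-- The dropdown sets the $host_tok$ token consumed by the panel search -->
        <input type="dropdown" token="host_tok">
          <label>Host</label>
          <search>
            <query>index=app | stats count BY host</query>
          </search>
          <fieldForLabel>host</fieldForLabel>
          <fieldForValue>host</fieldForValue>
        </input>
      </fieldset>
      <row>
        <panel>
          <table>
            <search>
              <query>index=app host=$host_tok$ log_level=ERROR | stats count BY source</query>
            </search>
          </table>
        </panel>
      </row>
    </form>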

Chapter 8, Search Optimization, shows how to optimize searches to increase performance, which ultimately affects how quickly dashboards load their results. We do that by adjusting search queries and by leveraging summary indexes, the KV Store, accelerated searches, and data models, to name a few.
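One flavor of such an optimization is replacing a raw-event search with a tstats query against an accelerated data model; the data model name below is an assumption for illustration. A raw-event search:

    index=web sourcetype=access_combined | stats count BY status

The same answer from the accelerated data model, usually much faster because it reads pre-summarized data instead of raw events:

    | tstats count FROM datamodel=Web BY Web.status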

Chapter 9, App Creation and Consolidation, discusses how to take a series of apps from Splunkbase, as well as any user-created dashboards, and put them into a single Splunk app for ease of use. We also talk about how to adjust the navigation XML to ease user navigation of such an app.
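For reference, an app's navigation lives in its data/ui/nav/default.xml; here is a minimal sketch, with hypothetical view names, of grouping consolidated dashboards under one menu:

    <nav search_view="search">
      <view name="search" default="true" />
      <!-- Group the consolidated dashboards under a single menu -->
      <collection label="Operations Dashboards">
        <view name="errors_by_host" />
        <view name="response_times" />
      </collection>
    </nav>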

Chapter 10, Advanced Data Routing, discusses something that is becoming more commonplace in the enterprise. As many people use big data platforms like Splunk to move data around their networks, things such as firewalls, data-stream loss, and sourcetype renaming by environment can become administratively expensive.
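As a taste of that last point, here is a minimal props.conf/transforms.conf sketch that renames the sourcetype for events from production hosts at parse time; the stanza names and host pattern are hypothetical:

    # props.conf
    [myapp:log]
    TRANSFORMS-route_by_env = rename_prod_sourcetype

    # transforms.conf
    [rename_prod_sourcetype]
    # Match events whose host begins with 'prod-' and rewrite their sourcetype
    SOURCE_KEY = MetaData:Host
    REGEX = ^host::prod-
    DEST_KEY = MetaData:Sourcetype
    FORMAT = sourcetype::myapp:log:prod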
