Creating buckets across time periods

Just as we were able to create buckets on different string values, the following query creates buckets on different values of time, sliced into one-day intervals:

GET /bigginsight/_search?size=0       1
{
  "aggs": {
    "counts_over_time": {
      "date_histogram": {             2
        "field": "time",
        "interval": "1d"              3
      }
    }
  }
}

The key points from the preceding code are explained as follows:

  • We have specified size=0 as a request parameter, instead of specifying it in the request body.
  • We are using the date_histogram aggregation.
  • We want to slice the data by day; that's why we specify the interval for slicing the data as 1d (for one day). Intervals can take values like 1d (one day), 1h (one hour), 4h (four hours), 30m (30 minutes), and so on. This gives tremendous flexibility when specifying dynamic criteria. 
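Under the hood, date_histogram assigns each document to a bucket by rounding its timestamp down to the start of the enclosing interval. A minimal sketch of that rounding in Python (the function name bucket_key is ours, not part of Elasticsearch; this assumes a fixed-length interval such as 1d with no time-zone offset):

```python
from datetime import datetime, timezone

ONE_DAY_MS = 24 * 60 * 60 * 1000  # the "1d" interval expressed in milliseconds

def bucket_key(timestamp_ms, interval_ms=ONE_DAY_MS):
    """Round an epoch-millisecond timestamp down to the start of its
    interval, mirroring how date_histogram groups documents (UTC)."""
    return timestamp_ms - (timestamp_ms % interval_ms)

# A timestamp partway through September 23, 2017 UTC falls into the
# bucket that starts at midnight of that day.
ts = int(datetime(2017, 9, 23, 14, 30, tzinfo=timezone.utc).timestamp() * 1000)
print(bucket_key(ts))  # 1506124800000
```

Every document whose timestamp lands in the same interval produces the same rounded key, so counting documents per key yields the per-day doc_count values.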

The response to the request should look like the following:

{
  ...,
  "aggregations": {
    "counts_over_time": {
      "buckets": [
        {
          "key_as_string": "2017-09-23T00:00:00.000Z",
          "key": 1506124800000,
          "doc_count": 62493
        },
        {
          "key_as_string": "2017-09-24T00:00:00.000Z",
          "key": 1506211200000,
          "doc_count": 5312
        },
        {
          "key_as_string": "2017-09-25T00:00:00.000Z",
          "key": 1506297600000,
          "doc_count": 175030
        }
      ]
    }
  }
}

As you can see, the simulated data in our index spans only a three-day period. The returned buckets contain keys in two forms: key and key_as_string. The key field is expressed in milliseconds since the epoch (January 1, 1970), and key_as_string is the beginning of the time interval in UTC. In our case, we have chosen an interval of one day. The first bucket, with the key 2017-09-23T00:00:00.000Z, holds the documents with timestamps from midnight on September 23, 2017 UTC up to (but not including) midnight on September 24, 2017 UTC.
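The two key forms carry the same information, so one can be derived from the other. A quick sketch that formats an epoch-millisecond key the way Elasticsearch reports key_as_string (the helper name key_as_string is ours, used only for illustration):

```python
from datetime import datetime, timezone

def key_as_string(key_ms):
    """Format an epoch-millisecond bucket key as an ISO 8601 UTC string
    with millisecond precision, matching Elasticsearch's key_as_string."""
    dt = datetime.fromtimestamp(key_ms / 1000, tz=timezone.utc)
    return dt.strftime("%Y-%m-%dT%H:%M:%S.") + f"{dt.microsecond // 1000:03d}Z"

print(key_as_string(1506124800000))  # 2017-09-23T00:00:00.000Z
```

Running this on the keys from the response above reproduces the corresponding key_as_string values, confirming that key is simply the interval's start time in epoch milliseconds.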
