---
title: Aggregations
---
An aggregation is responsible for providing an analysis of a larger dataset to make it more manageable. While it would be possible to plot millions of samples in a single graph, it is simply not practical. Aggregations give you the means to further aggregate the samples stored in the database.
An aggregation can be defined either in JSON or HQL. For each aggregation, both forms are shown below.
The size of an aggregation determines how frequently samples occur in the resulting aggregation. So a size of two minutes would cause the aggregation to output a series that has a sample every two minutes.
The extent of an aggregation determines how far back in time a single sample will load data. So an extent of one hour would cause each sample to be the result of aggregating the last hour of data.
Combining size and extent, we now have a flexible system for describing how to build a dataset suitable for plotting.
The following graphic represents which data will be sampled to generate the sample at point 2. The blue bar is the extent, and the red bar is the size.
The next point we'll sample for is 3. This applies the same principle as above.
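As a concrete sketch of the templates below, the following defines a two-minute average in both forms. The unit spelling ("minutes") and the duration syntax (2m) are assumptions inferred from the placeholders, and the extent is presumed to default to the size when it is not given explicitly.
{"type": "average", "sampling": {"unit": "minutes", "value": 2}}
average(size=2m)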
Resource Identifiers are not currently included in the Suggest index, and thus do not appear in suggestions. Resource Identifiers can still be used in aggregations by adding them free-form.
{"type": "average", "sampling": {"unit": <unit>, "value": <number>}}
average(size=<duration>)
The average aggregation takes all samples in a given extent, and calculates the average value over them.
{"type": "count", "sampling": {"unit": <unit>, "value": <number>}}
count(size=<duration>)
The count aggregation calculates the number of samples in a given extent.
{"type": "delta"}
delta()
The delta aggregation calculates the change between samples in a given extent.
{"type": "deltaPerSecond"}
deltaPerSecond()
The delta per second aggregation calculates the change per second between samples in a given extent.
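As an illustrative example with hypothetical values: given samples of 10, 13 and 19 recorded 30 seconds apart, delta produces the changes 3 and 6, while deltaPerSecond produces 0.1 and 0.2, since each change is divided by the 30 seconds elapsed between the samples.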
{"type": "max", "sampling": {"unit": <unit>, "value": <number>}}
max(size=<duration>)
The max aggregation picks the largest numerical value seen in the given extent.
{"type": "min", "sampling": {"unit": <unit>, "value": <number>}}
min(size=<duration>)
The min aggregation picks the smallest numerical value seen in the given extent.
{"type": "notNegative"}
notNegative()
The not negative aggregation filters out negative values from all samples in a given extent.
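For example, if the samples in an extent are 5, -2 and 7, notNegative discards -2 and keeps 5 and 7. A common pattern, though not shown above, is to chain it after delta so that negative changes caused by a resetting counter are dropped.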
{"type": "stddev", "sampling": {"unit": <unit>, "value": <number>}}
stddev(size=<duration>)
The standard deviation aggregation calculates the standard deviation of all samples in a given extent.
{"type": "sum", "sampling": {"unit": <unit>, "value": <number>}}
sum(size=<duration>)
The sum aggregation sums the values of all points in a given extent.
{"type": "sum2", "sampling": {"unit": <unit>, "value": <number>}}
sum2(size=<duration>)
The sum squared aggregation sums the squared values of all points in a given extent.
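For example, if the extent contains the values 2 and 3, sum produces 5 while sum2 produces 2² + 3² = 13.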
{"type": "topk", "k": <number>}
{"type": "bottomk", "k": <number>}
{"type": "abovek", "k": <number>}
{"type": "belowk", "k": <number>}
topk(<number>)
bottomk(<number>)
abovek(<number>)
belowk(<number>)
These are a set of filtering aggregations. A filtering aggregation reduces the number of result groups according to some criteria.
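As a sketch, the following keeps only the result groups selected by topk with k set to 5. The value is arbitrary, and the interpretation of k (a group count for topk and bottomk, a value threshold for abovek and belowk) is an assumption based on the names rather than something stated above.
{"type": "topk", "k": 5}
topk(5)
In practice, a filtering aggregation would typically be combined with one of the sampling aggregations above, for example average(size=2m) | topk(5), assuming HQL supports chaining aggregations with the pipe operator.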