Storage

data.h2.db
The data that is displayed under the transaction response time tab and on the JVM gauge screen is collected continuously and stored at 1 minute intervals. This setting defines how long to retain these 1 minute aggregates. (This setting also applies to the 5 second gauge data.)
Response time and JVM gauge data is rolled up at 5 minute intervals. This setting defines how long to retain these 5 minute aggregates.
Response time and JVM gauge data is rolled up again at 30 minute intervals. This setting defines how long to retain these 30 minute aggregates.
Response time and JVM gauge data is rolled up again at 4 hour intervals. This setting defines how long to retain these 4 hour aggregates.
This setting defines how long to retain the trace points and trace header data that is displayed under the transaction traces tab.
This setting defines how long to retain the full query text that is displayed under the transaction queries tab. The truncated query text and query stats are retained according to the *.capped.db settings below.
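Taken together, the settings above form the retention policy for data.h2.db: one expiration per rollup level plus separate expirations for trace headers and full query text. A minimal sketch of how such a policy could be modeled in Java (the field names and default values are illustrative, not the actual configuration keys):

    import java.util.List;

    // Illustrative container for the data.h2.db retention settings described above.
    // All names and values are hypothetical examples, not the real configuration keys.
    public final class H2RetentionSettings {
        // how long to keep the 1 minute, 5 minute, 30 minute and 4 hour rollups, in hours
        private final List<Integer> rollupExpirationHours = List.of(48, 168, 720, 2160);
        // how long to keep trace points and trace headers (transaction traces tab), in hours
        private final int traceExpirationHours = 168;
        // how long to keep full query text (transaction queries tab), in hours;
        // truncated query text and query stats follow the *.capped.db settings instead
        private final int fullQueryTextExpirationHours = 168;

        public int expirationHoursForRollupLevel(int level) {
            return rollupExpirationHours.get(level);
        }
    }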
*.capped.db
The data that is displayed under the transaction queries, service calls and continuous profiling tabs is collected continuously and stored at 1 minute intervals. This setting defines how long to retain these 1 minute aggregates.
Query, service call and continuous profiling data is rolled up at 5 minute intervals. This setting defines the size of the capped data file used to store these 5 minute aggregates.
Query, service call and continuous profiling data is rolled up again at 30 minute intervals. This setting defines the size of the capped data file used to store these 30 minute aggregates.
Query, service call and continuous profiling data is rolled up again at 4 hour intervals. This setting defines the size of the capped data file used to store these 4 hour aggregates.
This setting defines the size of the capped data file used to store the trace detail data (trace entries, trace query stats and trace profiles).
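A capped data file keeps its total size fixed rather than growing over time. The sketch below assumes that, once the configured size is reached, the oldest data is reclaimed to make room for new writes; that is an assumption about typical capped-file behavior, not a description of the actual on-disk format:

    import java.util.ArrayDeque;
    import java.util.Deque;

    // Minimal sketch of "capped" storage semantics, assuming the oldest entries are
    // evicted once the configured size limit is reached (illustrative only).
    public final class CappedStore {
        private final long maxBytes;
        private long currentBytes;
        private final Deque<byte[]> blocks = new ArrayDeque<>();

        public CappedStore(long maxBytes) {
            this.maxBytes = maxBytes;
        }

        public void append(byte[] block) {
            blocks.addLast(block);
            currentBytes += block.length;
            // reclaim the oldest blocks until the store fits its size cap again
            while (currentBytes > maxBytes && !blocks.isEmpty()) {
                currentBytes -= blocks.removeFirst().length;
            }
        }
    }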
Response time and JVM gauge data
The data that is displayed under the transaction response time tab and on the JVM gauge screen is collected continuously by the agents and sent to the central collector (and stored) at 1 minute intervals. This setting defines how long to retain these 1 minute aggregates. (This setting also applies to the 5 second gauge data.)
Response time and JVM gauge data is rolled up at 5 minute intervals. This setting defines how long to retain these 5 minute aggregates.
Response time and JVM gauge data is rolled up again at 30 minute intervals. This setting defines how long to retain these 30 minute aggregates.
Response time and JVM gauge data is rolled up again at 4 hour intervals. This setting defines how long to retain these 4 hour aggregates.
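The same 1 minute, 5 minute, 30 minute, 4 hour cascade applies to the query, service call and profile data described below. As a rough illustration of what a rollup step does, here is a minimal sketch that groups 1 minute aggregates into 5 minute buckets (types and field names are illustrative; real aggregates carry counts, histograms and other detail rather than a single value):

    import java.util.Map;
    import java.util.TreeMap;

    // Illustrative rollup step: truncate each capture time to a 5 minute bucket,
    // then combine the 1 minute values that fall into the same bucket.
    public final class RollupSketch {
        public static Map<Long, Double> rollUpTo5Minutes(Map<Long, Double> oneMinuteAggregates) {
            long fiveMinutesMillis = 5 * 60 * 1000L;
            Map<Long, Double> fiveMinuteAggregates = new TreeMap<>();
            for (Map.Entry<Long, Double> entry : oneMinuteAggregates.entrySet()) {
                long bucket = (entry.getKey() / fiveMinutesMillis) * fiveMinutesMillis;
                fiveMinuteAggregates.merge(bucket, entry.getValue(), Double::sum);
            }
            return fiveMinuteAggregates;
        }
    }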
Query and service call data
The data that is displayed under the transaction queries and service calls tabs is collected continuously by the agents and sent to the central collector (and stored) at 1 minute intervals. This setting defines how long to retain these 1 minute aggregates.
Query and service call data is rolled up at 5 minute intervals. This setting defines how long to retain these 5 minute aggregates.
Query and service call data is rolled up again at 30 minute intervals. This setting defines how long to retain these 30 minute aggregates.
Query and service call data is rolled up again at 4 hour intervals. This setting defines how long to retain these 4 hour aggregates.
Profile data
The data that is displayed under the transaction continuous profiling tab is collected continuously by the agents and sent to the central collector (and stored) at 1 minute intervals. This setting defines how long to retain these 1 minute aggregates.
Profile data is rolled up at 5 minute intervals. This setting defines how long to retain these 5 minute aggregates.
Profile data is rolled up again at 30 minute intervals. This setting defines how long to retain these 30 minute aggregates.
Profile data is rolled up again at 4 hour intervals. This setting defines how long to retain these 4 hour aggregates.
Trace data
This setting defines how long to retain trace data. This includes individual traces and error message data.
Updated expiration settings only apply to data collected from this point forward. Existing data remains tagged with the expiration settings that were in effect when it was captured, since expiration is implemented using Cassandra TTL.
Cassandra TWCS window sizes are calculated as the table's TTL divided by 24, so that each table has approximately 24 windows.
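For example, with a 720 hour (30 day) expiration, every row written while that setting is in effect carries a 720 hour TTL, and the TWCS compaction window works out to 720 / 24 = 30 hours. A minimal sketch of that arithmetic (class and method names are illustrative):

    import java.util.concurrent.TimeUnit;

    // Illustrative arithmetic for the two statements above: each write carries the
    // TTL in effect at write time, and the TWCS window is the table's TTL divided by 24.
    public final class ExpirationMath {
        public static long ttlSeconds(int expirationHours) {
            // TTL applied to rows written while this expiration setting is in effect
            return TimeUnit.HOURS.toSeconds(expirationHours);
        }

        public static int compactionWindowHours(int expirationHours) {
            // aim for roughly 24 TWCS windows per table
            return Math.max(1, expirationHours / 24);
        }
    }

    // e.g. a 720 hour (30 day) expiration -> TTL of 2,592,000 seconds, ~30 hour windows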
H2 data file size: {{h2DataFileSize | gtBytes}}
Table name                  Total bytes                           Row count
{{analyzedH2Table.name}}    {{analyzedH2Table.bytes | number}}    {{analyzedH2Table.rows | number}}
By transaction type
Transaction type                                Traces captured                                 Traces captured because they were flagged as errors
{{analyzedTraceOverallCount.transactionType}}   {{analyzedTraceOverallCount.count | number}}    {{analyzedTraceOverallCount.errorCount | number}}
By transaction name
Transaction type                         Transaction name                         Traces captured                          Traces captured because they were flagged as errors
{{analyzedTraceCount.transactionType}}   {{analyzedTraceCount.transactionName}}   {{analyzedTraceCount.count | number}}    {{analyzedTraceCount.errorCount | number}}