Sample Size and Aggregation Mode
For complete information on the sample sizes and aggregation modes, see the Sample Size and Aggregation Mode page.
The sample size determines how many rows the query returns and how they are spaced in time.
- Natural - A Natural query will look up the logging rate for the queried Tags (when possible), and return results spaced apart at that rate. This means that the number of returned rows will vary with the date range.
- On Change/Raw - An On Change query returns points as they were logged, and can be thought of as a "raw" query mode, so the results may not be evenly spaced. A row is returned for every change to any queried column, even when timestamps are identical.
- Fixed - A Fixed query will return the given number of rows. Where data is sparse, interpolated values are added; where data is dense, the Aggregation Mode comes into play.
- Interval - Similar to Fixed, but with the spacing based on time rather than the number of requested results.
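The difference between Fixed and Interval spacing can be sketched as follows. This is an illustrative helper, not the historian's actual implementation: `fixed_timestamps` and `interval_timestamps` are hypothetical names used only to show how the row count and spacing are derived.

```python
from datetime import datetime, timedelta

def fixed_timestamps(start, end, return_size):
    """Fixed mode: exactly return_size rows, evenly spaced across the range."""
    step = (end - start) / return_size
    return [start + step * i for i in range(return_size)]

def interval_timestamps(start, end, interval):
    """Interval mode: one row per interval; the row count varies with the range."""
    stamps = []
    t = start
    while t <= end:
        stamps.append(t)
        t += interval
    return stamps

start = datetime(2024, 1, 1, 0, 0)
end = datetime(2024, 1, 1, 1, 0)
print(len(fixed_timestamps(start, end, 10)))                        # 10
print(len(interval_timestamps(start, end, timedelta(minutes=15))))  # 5
```

With a Fixed size of 10, a one-hour range and a one-day range both return 10 rows; with a 15-minute Interval, the one-hour range returns 5 rows and a longer range returns proportionally more.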
The aggregation mode dictates what happens when multiple raw values must be processed into a single value. Here is a description of the most commonly used options.
- Min/Max - Both the minimum and maximum values will be returned for each timestamp.
- Time-weighted Average - The values are averaged together, weighted for the amount of time they cover in the interval.
- Closest Value - The value closest to the ending time of the interval will be returned.
- Simple Average - The values are summed together and divided by the number of values, regardless of how long each value was held.
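The distinction between the two averaging modes can be sketched as below. The helper names and the sample data are illustrative, not the historian's internal code; the point is that a short-lived spike dominates a simple average but is weighted down by its duration in a time-weighted average.

```python
def simple_average(values):
    """Sum of values divided by their count; ignores how long each value held."""
    return sum(values) / len(values)

def time_weighted_average(segments):
    """Average of (value, duration) pairs, weighted by time covered in the interval."""
    total_time = sum(d for _, d in segments)
    return sum(v * d for v, d in segments) / total_time

# A tag held 10.0 for 50 seconds, then spiked to 100.0 for 10 seconds.
segments = [(10.0, 50.0), (100.0, 10.0)]
print(simple_average([v for v, _ in segments]))  # 55.0 -- spike dominates
print(time_weighted_average(segments))           # 25.0 -- spike weighted by its 10s
```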
The return format dictates how the query results are structured. The options are "wide" (default), in which each Tag has its own column, and "tall", in which the Tags are returned vertically in a "path, value, quality, timestamp" schema.
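The relationship between the two shapes can be sketched as a pivot. The sample data, column names other than the documented "path, value, quality, timestamp" schema, and the helper itself are hypothetical; the quality code 192 is an assumption based on the classic "good" data quality value.

```python
# Two wide rows: one timestamp column plus one column per Tag (sample data).
wide_rows = [
    {"t_stamp": 100, "Tank/Level": 4.2, "Tank/Temp": 71.5},
    {"t_stamp": 200, "Tank/Level": 4.4, "Tank/Temp": 71.9},
]

def wide_to_tall(rows, quality=192):  # 192 assumed as the "good" quality code
    """Pivot wide rows into the tall path/value/quality/timestamp schema."""
    tall = []
    for row in rows:
        for path, value in row.items():
            if path == "t_stamp":
                continue
            tall.append({"path": path, "value": value,
                         "quality": quality, "timestamp": row["t_stamp"]})
    return tall

print(len(wide_to_tall(wide_rows)))  # 4: one tall row per Tag per timestamp
```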
These options affect the query results in more subtle ways.
- Ignore Bad Quality - Only data with "good" quality will be loaded from the data source.
- Prevent Interpolation - Requests that values not be interpolated when a row would normally require it, and instructs the system not to write result rows that would contain only interpolated values. In other words, if the raw data does not provide any new values for a certain window, that window is omitted from the result dataset.
- Avoid Scan Class Validation - "Scan class validation" is the mechanism by which the system determines when the gateway was not running, and returns bad quality data for those periods of time. Enabling this option skips consulting the scan class records, which can improve performance, and prevents bad quality rows from being written as a result of this check.
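The effect of the Ignore Bad Quality option can be sketched as a simple filter over raw rows before any aggregation. The data and helper are hypothetical, and the quality code 192 is an assumption based on the classic "good" data quality value.

```python
GOOD = 192  # assumed "good" data quality code

# Sample raw rows; the middle row has bad quality (e.g. the gateway was down).
raw = [
    {"value": 4.2, "quality": GOOD},
    {"value": None, "quality": 0},
    {"value": 4.5, "quality": GOOD},
]

def ignore_bad_quality(rows):
    """Keep only good-quality rows, mirroring the Ignore Bad Quality flag."""
    return [r for r in rows if r["quality"] == GOOD]

print(len(ignore_bad_quality(raw)))  # 2
```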
Tag Historian information is often easiest to work with in the Easy Chart component, which handles all of these options automatically.