Block Groups are useful in a number of situations where you need to deal with a lot of data efficiently.
Mirroring/Synchronizing array values to DB - Arrays are often best stored vertically, which makes them perfect for Block Groups. Pattern mode makes configuration a breeze by allowing you to specify the array as a pattern and set the bounds.
Recipe management - Like Standard Groups, but used when set points are better stored vertically than horizontally.
Vertical history tables - Group data points by data type (int, float, string), create a copy of the item that stores item path, and then use the insert changed rows option to create your own vertically storing historical tables. Create additional copies of the block item that refer to quality and timestamp in order to get further information about the data point.
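The vertical history idea above can be sketched with a small table. This is an illustrative example only, not Ignition code: the table and column names (`float_history`, `item_path`, `t_stamp`) are hypothetical, and the quality value 192 is the conventional OPC "good" code.

```python
import sqlite3

# Hypothetical schema: a vertical history table like one a Block Group with
# "insert changed rows" could populate. Column names are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE float_history (
        float_history_ndx INTEGER PRIMARY KEY,
        item_path TEXT,   -- which Tag the value came from
        value REAL,       -- the data point itself
        quality INTEGER,  -- OPC quality code (192 = good)
        t_stamp TEXT      -- when the group recorded it
    )
""")

# Each changed Tag becomes its own row, rather than its own column.
rows = [
    ("[default]Motor1/Amps", 4.2, 192, "2024-01-01 08:00:00"),
    ("[default]Motor2/Amps", 3.9, 192, "2024-01-01 08:00:00"),
]
conn.executemany(
    "INSERT INTO float_history (item_path, value, quality, t_stamp)"
    " VALUES (?, ?, ?, ?)",
    rows,
)

for row in conn.execute("SELECT item_path, value FROM float_history"):
    print(row)
```

Because every data point carries its own item path, quality, and timestamp, new Tags can be added without altering the table schema — the main advantage of vertical storage.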
Due to their nature, Block Groups store records in a different format than the other groups. Consider how other Transaction Groups work. A single execution of a standard or historical group would store a row that looked like the following:
We could take the Tags from the above example, and place them under a single block item like so:
Note that each Tag is nested under the block item, and the block item is targeting the "Tags" column under Target name. A single execution of this group stores the records in our table as follows:
Each additional block item would store records as a separate column.
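The column-versus-row layout can be sketched in a few lines. This is an illustrative data-shape demo, not Ignition code; the item names are hypothetical.

```python
# Each block item maps to a column; each Tag inside a block item maps to a
# row within that column. Two block items -> two columns, three Tags -> three rows.
block_items = {
    "Tags":      [10, 20, 30],  # Tag1, Tag2, Tag3 under the "Tags" column
    "Setpoints": [12, 22, 32],  # a second block item stores a second column
}

# One execution inserts one record per Tag position.
columns = list(block_items)
rows = list(zip(*block_items.values()))
print(columns)  # ['Tags', 'Setpoints']
print(rows)     # [(10, 12), (20, 22), (30, 32)]
```

Contrast this with a Standard Group, which would store the same six values as six columns in a single row.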
Row ID and Block ID
Using the same Tag example from above, if we kept inserting new rows at every execution, our table would start to look like the following:
This isn't ideal, since the table doesn't have a great way to show which value came from which Tag. To help with this, Block Groups have optional row_id and block_id columns that can be enabled (see the "Store row id" and "Store block id" settings under Group Settings). If we enable both the Block ID and Row ID, our table would look like the following:
Block ID represents a single execution of the group, meaning rows with the same block_id value were inserted together. Rows with a block_id value of 1 (colored green) are part of the same execution, and rows with a block_id value of 2 (colored blue) are a separate execution.
Row ID is an index representing which item in the block item the row corresponds to. In our example, Tag1 is the first or top item in the block item (row index 0), Tag2 is next (row index 1), and Tag3 is last (row index 2). Now we know that any value in the table with a row_id of 0 came from Tag1.
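The row_id/block_id bookkeeping described above can be simulated in a few lines. This is a sketch of the idea, not Ignition internals; the in-memory `table` list stands in for the database table.

```python
# Simulate how row_id and block_id disambiguate vertically stored values.
table = []
next_block_id = 0

def execute_group(tag_values):
    """Insert one block: one record per Tag, all sharing a single block_id."""
    global next_block_id
    next_block_id += 1  # each block id is 1 greater than the previous one
    for row_id, value in enumerate(tag_values):  # row_id starts at 0
        table.append({"row_id": row_id, "block_id": next_block_id, "value": value})

execute_group([1.0, 2.0, 3.0])  # Tag1, Tag2, Tag3 -> block_id 1
execute_group([1.5, 2.5, 3.5])  # same Tags again  -> block_id 2

# Every record with row_id 0 came from Tag1, regardless of which block it is in.
tag1_history = [r["value"] for r in table if r["row_id"] == 0]
print(tag1_history)  # [1.0, 1.5]
```

Querying by row_id recovers one Tag's history; querying by block_id recovers one execution's snapshot.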
Beyond the differences in the data, namely that the Block Group works with multiple rows instead of just 1, this group type shares many similarities with the Standard Group.
The unique settings are:
Automatically create table - If the target table does not exist, or does not have all of the required columns, it will be created/modified on group startup. If not selected and the table doesn't match, an error will be generated on startup.
Automatically create rows - If the target rows do not exist, they will be created on group execution. If not selected and the rows don't match, no records will be updated.
Use custom index column - If selected, you may enter any column name to hold the index. If unselected, the table index will be named <table name>_ndx.
Store timestamp to - Specifies whether or not to store a timestamp with the record, and the target column. The timestamp will be generated by the group during execution. For groups that update rows, the timestamp will only be written if any of the values in the group are also written.
Store quality code to - If selected, stores an aggregate quality for the row to the specified column. The aggregate quality is the combined quality of all of the items that write to that row. For more information about quality values, see Data Quality.
Store row id - Each row will be assigned a numeric id, starting at 0. If selected, this id will also be stored with the data.
Store block id - If selected, an incremental block id will be stored along with the data. This number will be 1 greater than the previous block id in the table.
Delete records older than - If selected, records in the target table will be deleted after they reach the specified age. This setting is useful for preventing tables from growing in an unbounded manner, which can cause disk space and performance problems over time.
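The age-based pruning that "Delete records older than" performs amounts to a timestamp-bounded DELETE. The exact statement Ignition issues is internal; the sketch below shows the equivalent idea with hypothetical table and column names.

```python
import sqlite3
from datetime import datetime, timedelta

# Build a small history table with records of various ages.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE history (ndx INTEGER PRIMARY KEY, t_stamp TEXT, value REAL)")
now = datetime(2024, 1, 10)
conn.executemany(
    "INSERT INTO history (t_stamp, value) VALUES (?, ?)",
    [((now - timedelta(days=d)).isoformat(" "), float(d)) for d in (1, 5, 30, 90)],
)

# Delete anything older than 7 days, keeping the table bounded.
cutoff = (now - timedelta(days=7)).isoformat(" ")
conn.execute("DELETE FROM history WHERE t_stamp < ?", (cutoff,))
remaining = conn.execute("SELECT COUNT(*) FROM history").fetchone()[0]
print(remaining)  # 2
```

Without a bound like this, an insert-mode group running on a fast timer can grow a table by millions of rows per month.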
This section details how the group interacts with the table on each execution, that is, whenever the Timer or Schedule is active and the Trigger conditions are met. These settings are not available for the Historical Group type. The group can insert a new row, or update the first, last, or a custom record.
Insert New Block - If selected, each row of the block will be inserted when the group executes, even if the data has not changed.
Insert changed rows - This option will only insert the rows that have new data when the group executes. This is particularly useful for recording history for many data points on an "on change" basis, provided there is a unique id column defined. The "store row id" feature is useful for this, as well as the ability to reference the item path in an item's value property.
Update / Select - This option will either update or select from matching rows based on the option selected below it. The Update Mode property above determines whether an update (OPC to DB), select (DB to OPC), or both (Bi-directional) are used when the group executes.
First - Use the first row in the table. It is not recommended to use this option unless the order of the data in the table is guaranteed.
Last - Use the last row in the table. This is commonly used when another group (or another program) is inserting new rows for us, and we always want to update the most recent record.
Custom - Like Standard Groups, this setting allows you to target a specific section of the table, using SQL WHERE clause syntax, with the ability to bind to dynamic item values. Unlike Standard Groups, however, the WHERE clause specified should result in enough rows to cover the block. Excess rows will not be written to, but fewer rows will result in a group warning indicating that some data could not be written.
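The row-matching behavior of a Custom update can be sketched as follows. This is an illustrative simulation, not Ignition code: `block_update` and the in-memory `table` are hypothetical stand-ins, and the predicate plays the role of the WHERE clause.

```python
# Simulate a Custom update: the WHERE clause should match at least as many
# rows as the block has Tags. Extra matched rows are left alone; too few
# produce a warning (mirroring the group warning described above).
def block_update(rows, predicate, new_values):
    matched = [r for r in rows if predicate(r)]
    if len(matched) < len(new_values):
        print("warning: some data could not be written")
    for row, value in zip(matched, new_values):  # excess rows are ignored
        row["value"] = value

table = [
    {"recipe": "A", "value": 0},
    {"recipe": "A", "value": 0},
    {"recipe": "B", "value": 0},
]

# Equivalent of WHERE recipe = 'A' with a three-Tag block: only two rows match,
# so the third Tag's value has nowhere to go.
block_update(table, lambda r: r["recipe"] == "A", [10, 20, 30])
print(table)
```

Sizing the WHERE clause to the block (for example, by pre-creating one row per Tag) avoids the short-block warning entirely.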