
The Ignition Platform offers a wide range of services that modules can build on, instead of implementing themselves. The services presented in this chapter are provided through the GatewayContext given to each module.

Databases


Database access is at the core of Ignition, and is used by many parts of the system. The platform manages the definition of connections, and provides enhanced classes that make it easy to accomplish many database tasks with minimal work. When necessary, full access is also available to JDBC connections. Database connection pooling is handled through the Apache DBCP system, so module writers do not need to worry about the efficiency of opening connections (though on the opposite side, it's crucial that connections are properly closed). Additional features, such as automatic connection failover, are also handled by the platform.

Creating a Connection and Executing Basic Queries

All database operations are handled through the DatasourceManager provided by the GatewayContext. The DatasourceManager allows you to get a list of all of the defined datasources, and then to open a new connection on them. The returned connection is an SRConnection, which is a subclass of the standard JDBC Connection object that provides a variety of time saving convenience functions. Remember, though, that even when using the convenience functions the connection must still be closed.

Example:

SRConnection con = null;
int maxVal;
try {
	con = context.getDatasourceManager().getConnection("myDatasource");
	con.runPrepUpdate("INSERT INTO example (col) VALUES (?)", 5);
	maxVal = (Integer) con.runScalarQuery("SELECT max(col) FROM example");
} finally {
	if (con != null) {
		con.close();
	}
}

Note that this example does not handle potential errors thrown during query execution, but it does illustrate the best practice of closing the connection in a finally block.

The SRConnection class also provides the following useful functions:

  • getCurrentDatabaseTime() - Shortcut to query the current time.
  • getParentDatasource() - Provides access to the datasource object that created the connection, which can provide state information and access to other important classes, such as the database translator.
  • runPrep*() - Several functions that send values to the database through prepared statements. Prepared statements, such as the one used in the example above, are generally preferred to text queries as they are safer and less prone to errors.

Executing Complex Transactions

The SRConnection extends from the standard JDBC Connection object and can be used in the same way. This means it is possible to run multi-statement transactions with rollback support, and to use batching for high-performance data insertion. For more information, consult any JDBC guide.
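For example, a batched, transactional insert might look like the following sketch, which reuses the "myDatasource" connection and "example" table from the examples above (standard JDBC calls; broader error handling omitted for brevity):

```java
// Assumed imports: java.sql.PreparedStatement, java.sql.SQLException,
// plus the Ignition SRConnection class shown earlier in this section.
SRConnection con = null;
try {
    con = context.getDatasourceManager().getConnection("myDatasource");
    con.setAutoCommit(false);  // begin a multi-statement transaction
    PreparedStatement stmt = con.prepareStatement("INSERT INTO example (col) VALUES (?)");
    try {
        for (int i = 0; i < 1000; i++) {
            stmt.setInt(1, i);
            stmt.addBatch();   // queue rows instead of one round-trip per insert
        }
        stmt.executeBatch();
        con.commit();          // make all of the inserts visible atomically
    } catch (SQLException e) {
        con.rollback();        // undo everything if any statement failed
        throw e;
    } finally {
        stmt.close();
    }
} finally {
    if (con != null) {
        con.close();
    }
}
```

Batching like this avoids a network round-trip per row, which matters for high-volume inserts; the surrounding transaction ensures the table never holds a partial batch.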

Verifying Table Structure

When creating database-centric modules, it is very common to expect a table to exist, or to need to create a table. Ignition provides a helper class called DBTableSchema that can help with this task. The class is instantiated with the table name and database translator to use (provided by the datasource).

Columns are then defined, and finally the state is checked against the given connection. Missing columns can be added, or the table created if necessary. For example, the following is a common way to define and check a table, creating it if required:

 

SRConnection con = null;
DBTableSchema table;
try {
    con = context.getDatasourceManager().getConnection("myDatasource");
    table = new DBTableSchema("example", con.getParentDatasource().getTranslator());
    table.addRequiredColumn("id", DataType.Int4, EnumSet.of(ColumnProperty.AutoIncrement, ColumnProperty.PrimaryKey));
    table.addRequiredColumn("col", DataType.Int8, null);
    table.verifyAndUpdate(con);
} finally {
    if (con != null) {
        con.close();
    }
}


Execution Scheduling

Performing some task on a timer or in a different thread is an extremely frequent requirement of gateway-based modules, and the Ignition platform makes this easy by offering the ExecutionManager. In addition to managing time-based execution, the execution manager can also execute a task once, or allow the task to schedule itself, all while providing status and troubleshooting information through the gateway webpage. Private execution managers can be created in order to allocate threads for a specific task, though for most uses, the general execution manager provided by the gateway context should suffice.

Registering Executable Tasks

Anything that implements Java's Runnable interface can be registered to execute with the execution manager. Tasks can either be executed once with the executeOnce() functions, or can be registered to run repeatedly with the various register*() functions. Recurring tasks must be registered with an owner and a name. Both are free-form strings, and are used together to identify a unique unit of execution so that it can be modified and unregistered later.

After a task is registered, it can be modified later by simply registering again with the same name. To stop the task, call unRegister(). Some functions in the execution manager return ScheduledFuture objects, which can be used to cancel execution before it happens.
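Putting those pieces together, registration might look like the sketch below. The owner/name strings and pollDevice() are hypothetical, and the exact register*() signatures should be confirmed against the ExecutionManager interface:

```java
// Assumes an Ignition GatewayContext "context" and a module logger "log".
ExecutionManager exec = context.getExecutionManager();

// Run once, as soon as a thread is available.
exec.executeOnce(() -> log.info("One-shot task ran."));

// Run repeatedly, every 5000 ms.
exec.register("mymodule", "poll-task", () -> pollDevice(), 5000);

// Registering again with the same owner/name modifies the existing task...
exec.register("mymodule", "poll-task", () -> pollDevice(), 1000);

// ...and unRegister stops it, e.g. on module shutdown.
exec.unRegister("mymodule", "poll-task");
```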

SelfSchedulingRunnable Tasks

Most tasks are registered at a fixed rate and rarely change. In some cases, though, a task may need to change its own rate frequently, and re-registering each time is inefficient. In these cases, instead of supplying a Runnable, you can implement SelfSchedulingRunnable. After every execution, the SelfSchedulingRunnable provides the delay to wait before the next execution. When it is registered, it is provided with a SchedulingController that can be used to re-schedule the task at any time. For example, a self-scheduling task might normally run every 30 seconds, returning 30000 from its getNextExecDelayMillis() function. When a special event occurred, the task could call SchedulingController.requestReschedule() and then return 500, running every 500 ms until the event was over.
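A minimal sketch of such a task follows. The method names are taken from the SelfSchedulingRunnable and SchedulingController interfaces described above, but the exact signatures (in particular requestReschedule) should be confirmed against the SDK:

```java
// Sketch: runs every 30 seconds normally, every 500 ms during a "special event".
public class AdaptivePoller implements SelfSchedulingRunnable {
    private volatile SchedulingController controller;
    private volatile boolean specialEvent = false;

    @Override
    public void setController(SchedulingController controller) {
        // Called by the execution manager when this task is registered.
        this.controller = controller;
    }

    @Override
    public long getNextExecDelayMillis() {
        // Consulted after every execution to determine the next delay.
        return specialEvent ? 500 : 30000;
    }

    @Override
    public void run() {
        // ...do the actual work here...
    }

    /** Hypothetical hook, called from elsewhere in the module. */
    public void setSpecialEvent(boolean active) {
        specialEvent = active;
        if (controller != null) {
            controller.requestReschedule(this);  // apply the new delay immediately
        }
    }
}
```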

Fixed Delay vs. Fixed Rate

Executable tasks are almost always registered with fixed delays, meaning that the spacing between executions is calculated from the end of one execution to the start of the next. If a task is scheduled to run every second, but takes 30 seconds to execute, there will still be a 1 second wait between each event. Some functions in the execution manager allow the opposite of this: execution at a fixed rate. In this case, the next execution is calculated from the start of the previous event. If an event takes longer than the scheduled delay, the next event will occur as soon as possible after the first completes. It's worth noting that events cannot "back up". That is, if a task is scheduled at 1 second, but the first execution takes 5 seconds, it will not run 5 times immediately to make up the missed time. Instead, it will run once, and will then start to follow the schedule again.
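The same distinction exists in the standard library's ScheduledExecutorService, which can be used to observe the behavior outside of Ignition. This self-contained demo schedules a slow task (~150 ms) at a 100 ms period both ways:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class FixedRateVsFixedDelay {

    /** Runs a slow task both ways for durationMillis; returns {fixedDelayRuns, fixedRateRuns}. */
    public static int[] compare(long durationMillis) throws InterruptedException {
        ScheduledExecutorService exec = Executors.newScheduledThreadPool(2);
        AtomicInteger delayRuns = new AtomicInteger();
        AtomicInteger rateRuns = new AtomicInteger();

        // Fixed delay: next run starts 100 ms after the previous run ENDS (~250 ms period).
        exec.scheduleWithFixedDelay(() -> { sleep(150); delayRuns.incrementAndGet(); },
                0, 100, TimeUnit.MILLISECONDS);
        // Fixed rate: overdue runs start immediately after the previous one, back-to-back,
        // but missed runs are never "made up" with a burst (~150 ms effective period here).
        exec.scheduleAtFixedRate(() -> { sleep(150); rateRuns.incrementAndGet(); },
                0, 100, TimeUnit.MILLISECONDS);

        Thread.sleep(durationMillis);
        exec.shutdownNow();
        return new int[]{delayRuns.get(), rateRuns.get()};
    }

    private static void sleep(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }

    public static void main(String[] args) throws InterruptedException {
        int[] counts = compare(1000);
        System.out.println("fixed delay runs: " + counts[0] + ", fixed rate runs: " + counts[1]);
    }
}
```

Over one second the fixed-rate task completes more runs than the fixed-delay one, but never more than back-to-back execution of the slow task allows.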

Creating Private Execution Managers

In situations where the tasks being registered might take a long time to execute, and several of them may run at once, it is usually better to create a private execution manager. The private managers work the same as the shared manager, but do not share their threads. That way, if tasks take a long time to execute, other parts of the system won't be held up. A private execution manager can be created by calling GatewayContext.createExecutionManager(). When creating a manager, you must give it a name, and decide how many threads it will have access to. It is important to choose wisely, as too many threads will waste system resources, but too few might lead to thread starvation, where a task is waiting to execute, but no threads are available to service it.
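For example, a module might dedicate a small pool to its own slow tasks like this (the manager name and thread count are arbitrary, doSlowWork() is hypothetical, and the availability of shutdown() for releasing the threads should be confirmed against the interface):

```java
// Three dedicated threads for this module's potentially long-running work.
ExecutionManager privateExec = context.createExecutionManager("MyModule", 3);
privateExec.register("mymodule", "slow-task", () -> doSlowWork(), 10000);

// On module shutdown, stop the manager so its threads are released.
privateExec.shutdown();
```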

Auditing

The Audit system provides a mechanism for tracking, or "auditing", events that occur in the system. The events are almost always associated with a particular user, in order to build a record of "who did what when". Audit events are reported through an AuditProfile, set on a per-project level. Any module that wishes to track user actions can report audit events to the profile specified for the current project.

Reporting Events

Adding events to the audit system is as simple as generating an AuditRecord and giving it to an AuditProfile. Instead of implementing the AuditRecord interface yourself, you'll almost certainly want to use the DefaultAuditRecord class. With the project id or project name, the AuditProfile can be retrieved through the AuditManager provided by the GatewayContext.

Querying Events

Modules can also access the history of audit events by using the query() function on the AuditProfile. This function allows you to filter on any combination of parameters in the AuditRecord.

OPC

OPC is an industrial communication standard that is widely used to provide access to nearly every type of device. OPC works on a client/server basis, with the server talking the device's language and translating data for the OPC client. While Ignition contains a built-in OPC server, this section describes the client capabilities. There are several versions of the OPC specification, but the Ignition OPC system provides a single abstracted layer that hides the differences between them.

The OPCManager, accessible through the gateway context, provides the ability to browse, read, write, and subscribe to OPC data. Connections to OPC servers are handled by the platform and defined in the gateway; apart from the server name included in addresses, modules using the system do not need to be aware of the different servers.

Identifying Addresses - The ServerNodeId

Each data point (tag) in OPC is identified by its server name and its item path. The newer OPC-UA specification makes room for more complex identifiers, such as GUIDs, and namespace-based organization. The ServerNodeId object allows for both schemes. A ServerNodeId can be obtained through browsing, by implementing the interface yourself, or by using the BasicServerNodeId implementation. NodeId, the core address of a tag, can either be instantiated directly, or parsed from a properly formed string with NodeId.parseNodeId().

Reading and Writing Values

It is possible to read and write values to tags at any time using the corresponding functions on the OPCManager. Both functions take a list of inputs, and return a corresponding list of outputs, guaranteed to be the same length. The returned objects will indicate the success of the operation, and provide values, if the operation was a read. It is not necessary to make separate calls for different servers, as the OPCManager will handle separating out the values for you.
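A read might be sketched as follows. The server name and item paths are purely illustrative, the BasicServerNodeId constructor arguments are assumed, and the exact read(...) signature should be confirmed against the OPCManager interface:

```java
// Assumed imports: java.util.*; the Ignition OPC classes named in this section.
List<ServerNodeId> nodes = new ArrayList<>();
nodes.add(new BasicServerNodeId("Ignition OPC-UA Server",
        NodeId.parseNodeId("ns=1;s=[Device]Tag1")));
nodes.add(new BasicServerNodeId("Ignition OPC-UA Server",
        NodeId.parseNodeId("ns=1;s=[Device]Tag2")));

// One result per request, in the same order, even across different servers.
List<QualifiedValue> results = context.getOPCManager().read(nodes);
for (int i = 0; i < nodes.size(); i++) {
    QualifiedValue qv = results.get(i);
    if (qv.getQuality().isGood()) {
        log.info(nodes.get(i) + " = " + qv.getValue());
    }
}
```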

If you wish to obtain the value of an OPC item on a regular basis, a more efficient mechanism than calling Read is available: the subscription system. By subscribing to an address, you can specify a rate at which you want to be notified of changes. Notifications only occur when the value has actually changed. This mechanism is more efficient for the OPC server because it is able to optimize the set of tags that it needs to read.

Managing Subscriptions

OPC tags are subscribed through the definition of a SubscribableNode, and a named subscription. Each SubscribableNode defines its address and subscription name, and provides callback functions to be notified of changes to value and quality for the tag. The BasicNodeSubscriptionDefinition class can be used to quickly define subscribed nodes. 

Subscribing to OPC tags is a two step process:

  1. Define the subscription: use OPCManager.setSubscriptionRate() to define a subscription with the given name and rate.
  2. Create and subscribe SubscribableNodes: Implement the required interfaces, and use the subscription name defined above where necessary. Register them through OPCManager.subscribe()


Once the subscription is running, tags can be added and removed at will with the subscribe and unsubscribe functions. A subscription remains defined until OPCManager.cancelSubscription() is called; alternatively, if OPCManager.enableAutoCancel() has been called for it, the subscription is cancelled automatically once all of its nodes have been unsubscribed.
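Put together, the two steps might look like the sketch below. The subscription name, rate, server name, and the BasicNodeSubscriptionDefinition constructor arguments are assumptions; check the actual classes for details:

```java
String subName = "mymodule-subscription";
OPCManager opc = context.getOPCManager();

// Step 1: define a subscription with the given name and a 1000 ms rate.
opc.setSubscriptionRate(subName, 1000);

// Step 2: define a node against that subscription and register it.
ServerNodeId address = new BasicServerNodeId("Ignition OPC-UA Server",
        NodeId.parseNodeId("ns=1;s=[Device]Tag1"));
SubscribableNode node = new BasicNodeSubscriptionDefinition(subName, address);
opc.subscribe(Arrays.asList(node));

// Later: remove the node, and cancel the subscription when done.
opc.unsubscribe(Arrays.asList(node));
opc.cancelSubscription(subName);
```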

Understanding OPC Values and Qualities

Each OPC value has a value property, but also a quality property. The quality defines how trustworthy the value is, and communicates additional information about the tag. For example, if a quality is DataQuality.Not_Found, the requested path is not available or correct, and so the value has no meaning. As you can see from the OPCManager interface, the subscribe function does not return errors. Instead, the status is communicated through the SubscribableNode.setQuality() function.

SQLTags


SQLTags is the realtime tag system in Ignition. Tags can be driven by OPC, Expressions, SQL Queries, or static values, and provide features such as scaling, alerting, and type conversion.

Understanding Tag Providers

When working with SQLTags, it is important to understand the architecture of how tag providers work together, and the different types of providers that exist. All tags exist inside of a Tag Provider, but depending on the type of provider, they may actually be driven by a different, remote provider.

There are currently two main types of providers that illustrate this: the internal provider and the database provider. The internal provider is conceptually very simple: it is local to the system it runs on, and it drives its own tags. The database provider, however, comes in two forms: the standard form and the driving form. The standard database provider cannot drive tags; it can only observe a database and make the tags in that database visible to the system. The database tags must be driven by a different entity, such as the driving provider on a different machine, or a FactorySQL installation.

Addressing SQLTags

All tags are addressed via the TagPath object. A tag path consists of several components: the tag provider, the folder path, the tag name, and optionally, the tag property. For example:

[MyProvider]Path/To/Tag.Quality

The provider component can be left off if you are addressing a tag from inside a project and that tag belongs to the default provider for the project.
The easiest way to generate TagPath objects is to use the TagPathParser class, though it is possible to construct them manually using BasicTagPath, or by implementing the interface yourself.

Example:

TagPath myPath = TagPathParser.parse("[MyProvider]Path/To/Tag");

Subscribing to Tags

From any context, you can subscribe to SQLTags via the TagSubscriptionManager. The TagChangeListener provided to the subscribe function can specify a specific tag property to listen for, or can listen for all tag changes. When listening for all changes, it will also be notified when the tag configuration changes.

Notes:

  • The listener specifies a TagProp that it is interested in. If it returns null, it will receive all events.
  • When the event is fired, a null TagProp on the change event indicates that the configuration of the tag has changed.
  • Folders can be subscribed and will be notified with a null TagProp when sub-tags are added or removed.
  • In the client scope, the tag provider portion of the TagPath can be left off to indicate that the project's default provider should be used. However, remember that this is only valid for scopes under a project; on the gateway, for example, there is no notion of a "default provider".
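The notes above can be sketched as a subscription to a single tag's value. The listener method names follow the TagChangeListener interface as described here, but should be confirmed against the actual API:

```java
TagPath path = TagPathParser.parse("[MyProvider]Path/To/Tag");

TagChangeListener listener = new TagChangeListener() {
    @Override
    public TagProp getTagProperty() {
        // Listen only for value changes; return null to receive all events.
        return TagProp.Value;
    }

    @Override
    public void tagChanged(TagChangeEvent event) {
        // A null TagProp on the event would indicate a configuration change.
        log.info("Tag changed: " + event.getTag().getValue());
    }
};

context.getTagSubscriptionManager().subscribe(path, listener);
// ...and on shutdown:
context.getTagSubscriptionManager().unsubscribe(path, listener);
```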

Reading and Writing Tags

To read from or write to a tag, you must use the TagManager provided by the context that you are in, as the write procedure differs based on scope. In general, for writing, you will provide a value for a specific path, and you'll receive back a quality indicating whether the write was successful (Good quality) or not. For reading, you'll provide a path, and receive a value. The value's quality can be consulted to see whether the read was successful or not. In both cases, the results map 1-to-1 with the inputs, with the proper number of results guaranteed.

Tag Reading vs. Subscribing

It is generally more efficient to subscribe to tags when possible, for a number of reasons. First, when subscribing, data is provided to you asynchronously on-change, allowing you to easily avoid blocking threads for any period of time (as might happen in a synchronous read). Secondly, the system is able to optimize/coalesce subscriptions into the minimum amount of work necessary. Reads require the system to execute the operation, usually going all the way down to the device, regardless of whether other parts of the system are interested in the same address.

On the other hand, manually reading data allows you to retrieve values on-demand, and ensures that you have the latest values available.

Alarming

There are generally two main components to alarming: the definition, execution, and state of alarms, and then the notification of alarm events. The first set of tasks is managed by the AlarmManager provided by the GatewayContext, while notification is handled by the separate Alarm Notification Module, which provides its own API.

AlarmManager

This system handles the evaluation and state of alarms. Using this, modules can listen for alarm events, query status and history, and even register new alarms.

Listening for events

Alarms can be monitored by registering an AlarmListener through GatewayContext.getAlarmManager().addListener(...). The addListener(...) method takes a QualifiedPath, and will deliver events for anything at or below the specified path. This makes it easy to subscribe to everything in the system, to everything below a specific tag provider (only the "provider" component specified), to a specific tag, or to a specific alarm under a tag.
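For example, a listener on everything below one tag provider might look like the sketch below. The string form of the QualifiedPath and the exact listener callback names are assumptions; confirm both against the QualifiedPath and AlarmListener interfaces:

```java
// Listen for all alarm events below the tag provider "MyProvider".
QualifiedPath providerPath = QualifiedPath.parse("prov:MyProvider");

context.getAlarmManager().addListener(providerPath, new AlarmListener() {
    @Override
    public void onActive(AlarmEvent event) { log.info("Active: " + event); }

    @Override
    public void onClear(AlarmEvent event) { log.info("Clear: " + event); }

    @Override
    public void onAcknowledged(AlarmEvent event) { log.info("Acked: " + event); }
});
```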

Extended Configuration Properties

Alarms are defined using Properties, and AlarmEvents implement the PropertySet interface allowing code to query what properties are defined or included in the event. Normally, users configure predefined properties on alarms through the Ignition designer. However, modules have the opportunity to register additional "well known" properties that they provide. To do this, you simply define your properties using the AlarmProperty interface (or preferably, extending from BasicAlarmProperty, so that you don't need to worry about making your implementation class available to the designer scope as well), and register them through AlarmManager.registerExtendedConfigProperties(...). Now, they will display along with the standard properties in the designer, can be set by the user, will be stored in the journal, and can be queried from the status system or retrieved from an alarm event.

Querying Status and History


The status and history of alarms can be obtained through the queryStatus(...) and queryJournal(...) functions, respectively. Both use an AlarmFilter to specify events to return, and both result in an AlarmQueryResult.

Working with AlarmFilter

The alarm system provides a great deal of flexibility in querying events, and the AlarmFilter class is used to define what the search parameters are. A filter consists of one or more conditions, which operate on different fields of the alarm. Only an event that passes all defined conditions will be returned. The alarm filter can be defined by hand, by creating a new instance and adding conditions for the static fields defined on the class through AlarmFilter.and(...). However, it is considerably easier and generally advised to use the AlarmFilterBuilder helper class, unless you need to define your own type of conditions.

For example, to create a filter that returns all active alarms with priority greater than "Low":

AlarmFilter filter = new AlarmFilterBuilder().isState(AlarmState.ActiveUnacked, AlarmState.ActiveAcked).priority_gt(AlarmPriority.Low).build(); 

In addition to conditions, the AlarmFilter also has statically defined flags that affect how queries behave. For example, AlarmFilter.FLAG_INCLUDE_DATA specifies that the associated data of an event should be included in the query. These are applied by using the AlarmFilterBuilder (includeData(), for example), or by modifying the Flags object returned by AlarmFilter.getFlags().

Working with AlarmQueryResult

Fundamentally, AlarmQueryResult is simply a list of AlarmEvents. However, there are two additional functions that can be useful: getDataset(), which returns the events as a dataset that can be used with Ignition dataset functions, and getAssociatedData(), which returns the associated data of an event as a dataset. All of the information returned by these two functions can be obtained directly on the alarm events, but these functions are useful when datasets are required.

Creating New Alarms

Most alarms in Ignition are defined on tags. However, it is possible for modules to generate their own alarms. All alarm evaluation is handled by the AlarmManager: you simply give it the definition of an alarm through AlarmManager.registerAlarm(...), and it provides you with an AlarmEvaluator that you update from time to time with the current value.

Defining Alarms

An alarm configuration is defined by the AlarmConfiguration interface, which holds multiple AlarmDefinitions. This allows you to define multiple alarms for a particular "source". An AlarmDefinition contains properties that define the alarm, both static and bound. It is recommended that you use the BasicAlarmConfiguration and BasicAlarmDefinition classes instead of implementing the interfaces yourself.
Most of the basic alarm properties are defined statically in CommonAlarmProperties. Properties specific to the setpoint/mode are in AlarmModeProperties.
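Putting the pieces together, registering a simple setpoint alarm might look like the sketch below. The specific property names (AlarmModeProperties.Mode, AlarmModeProperties.SetpointA), the AlarmMode value, and the registerAlarm(...)/updateValue(...) signatures are assumptions to be checked against the interfaces described above; only release() is documented here:

```java
// Define one alarm intended to go active above a setpoint of 100.
AlarmDefinition def = new BasicAlarmDefinition();
def.set(CommonAlarmProperties.Name, "HighTemp");
def.set(AlarmModeProperties.Mode, AlarmMode.AboveSetpoint);   // assumed enum value
def.set(AlarmModeProperties.SetpointA, 100.0);

AlarmConfiguration config = new BasicAlarmConfiguration(Arrays.asList(def));

// Register against a source path (hypothetical), then feed it values over time.
AlarmEvaluator evaluator = context.getAlarmManager()
        .registerAlarm(QualifiedPath.parse("prov:mymodule:/source:temperature"), config);
evaluator.updateValue(new BasicQualifiedValue(105.0));  // assumed update method

// When the source is destroyed, unregister the alarms:
evaluator.release();
```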


Important: Once you are done using the alarm, or the source is going to be destroyed, you should call AlarmEvaluator.release() to unregister the alarms.