This guide gives an overview of job scheduling in Intershop Order Management (IOM). It covers the concepts and components of IOM job scheduling as well as the possibilities to add custom jobs.
This guide is mainly intended for developers and gives architectural insights into IOM.
Wording | Description
---|---
JDBC | Java Database Connectivity
RAM | Random-Access Memory
The IOM uses the following two types of jobs:
Local jobs are jobs that perform local tasks (e.g., clearing the Java cache). These jobs have to run on every application server (frontend/backend).
The IOM runs on a JEE 7-compliant application server, so the EJB Timer service can be used to invoke methods periodically. No registration or configuration is needed.
Job Class | Job Description
---|---
bakery.logic.bean.caching.CheckCacheStatusJob | Checks and processes a requested cache clear
Example: The method `execute()` will be invoked every 10 seconds.

```java
import javax.ejb.EJB;
import javax.ejb.Schedule;
import javax.ejb.Singleton;

@Singleton
public class TimerService {

    @EJB
    CustomizedService customService;

    // Runs every 10 seconds; persistent=false means missed timer events
    // are not re-fired after a server restart
    @Schedule(second = "*/10", minute = "*", hour = "*", persistent = false)
    public void execute() {
        customService.doWork();
    }
}
```
Clustered jobs are jobs that perform cluster-wide/system-wide tasks (e.g., jobs of the control artifact).
It must be ensured that such jobs do not run in parallel.
This is guaranteed because all IOM cluster jobs belong to the backend server, which can only have one live instance.
The default configuration uses a local RAM store to manage the jobs.
An alternative configuration using a JDBC store is commented out; it should not be required as long as all cluster jobs are located on the single backend instance.
Time-Sync for clustered Jobs
Quartz's clustering features require that the involved hosts use some form of time-sync service (daemon) that runs very regularly, because the clocks must be within a second of each other.
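As an illustration outside of IOM itself, such a time-sync service is typically an NTP daemon. A minimal chrony configuration (file path and pool name are assumptions, not IOM defaults) might look like this sketch:

```
# /etc/chrony.conf (hypothetical example)
# Keep the host clock synchronized against a public NTP pool
pool pool.ntp.org iburst
# Allow stepping the clock on large offsets during the first updates after startup
makestep 1.0 3
```

Any equivalent NTP daemon (e.g., ntpd or systemd-timesyncd) serves the same purpose.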
```
#============================================================================
# Configure JobStore
#============================================================================
org.quartz.jobStore.misfireThreshold = 60000

# RAM JobStore
org.quartz.jobStore.class = org.quartz.simpl.RAMJobStore

# JDBC store: example to use the database for the quartz JobStore
# org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreTX
# org.quartz.jobStore.driverDelegateClass = org.quartz.impl.jdbcjobstore.PostgreSQLDelegate
# org.quartz.jobStore.useProperties = true
# org.quartz.jobStore.dataSource = PostgresDS
# org.quartz.jobStore.tablePrefix = system.qrtz223_
# org.quartz.jobStore.isClustered = true
# org.quartz.jobStore.clusterCheckinInterval = 20000

# Configure Datasources
# org.quartz.dataSource.PostgresDS.jndiURL=java:/OmsDB
```
For further details please see Configuration Reference - Configure Clustering with JDBC-JobStore and the configuration of the job store in OMS_ETC/quartz-cluster.properties.
All clustered jobs with their descriptions can be found in OMS_ETC/quartz-jobs-cluster.xml.
Further jobs/triggers can be added to the existing scheduler by adding further files to the `org.quartz.plugin.jobInitializer.fileNames` list of the XMLSchedulingDataProcessorPlugin in OMS_ETC/quartz-cluster.properties (Cookbook - Initializing Job Data With Scheduler Initialization).
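A file added to this list uses Quartz's standard job-scheduling-data XML format. The following is a sketch of what such a file could contain; the job class `com.example.oms.MyCustomJob` and all names/groups are hypothetical placeholders, not IOM classes:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Sketch of a custom clustered job file; class and names are hypothetical -->
<job-scheduling-data xmlns="http://www.quartz-scheduler.org/xml/JobSchedulingData"
                     version="2.0">
    <schedule>
        <job>
            <name>MyCustomJob</name>
            <group>CUSTOM</group>
            <job-class>com.example.oms.MyCustomJob</job-class>
        </job>
        <trigger>
            <cron>
                <name>MyCustomJobTrigger</name>
                <group>CUSTOM</group>
                <job-name>MyCustomJob</job-name>
                <job-group>CUSTOM</job-group>
                <!-- Run every 15 minutes -->
                <cron-expression>0 0/15 * * * ?</cron-expression>
            </cron>
        </trigger>
    </schedule>
</job-scheduling-data>
```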
Note
Within the properties file, the variable `${is.oms.dir.etc}` can be used to reference further configuration files. Its value is replaced at startup by a SchedulerBean with the value of the system property `is.oms.dir.etc`.
```
#============================================================================
# Configure Job / Trigger Loading
#============================================================================
...
# ${is.oms.dir.etc} could be used in the path in order to dynamically
# reference the folder where installation-specific properties are located
org.quartz.plugin.jobInitializer.fileNames = ${is.oms.dir.etc}/quartz-jobs-cluster.xml,${is.oms.dir.etc}/my-custom-clustered-quartz-jobs.xml
...
```
Another possibility is to customize the existing job file OMS_ETC/quartz-jobs-cluster.xml.