IOM projects cannot have individual project layouts if they are to run in the Intershop Commerce Platform environment, as this environment cannot be customized individually for each project. Instead, IOM projects must use a predefined project layout to support the generic installation of projects in the Intershop Commerce Platform environment.
IOM is provided in the form of Docker images. IOM projects have to add custom applications and configurations to these images. The customized images then have to be put into the Intershop Commerce Platform environment for execution. To be able to manage these images, Docker v.19 is required.
The images required as base for projects are available at:
Note
Adapt the tag (version number) if you use a newer version of IOM. For a full list of available versions see Overview - IOM Public Release Notes.
docker.intershop.de is a private Docker registry. Private Docker registries require authentication and sufficient rights to pull images from them.
caas2docker is a small package consisting of a shell script and configuration, which helps to create customized IOM project images.
This tool is provided as a Maven artifact, with the following properties:
Note
Adapt the version if you use a newer version of IOM. For a full list of available versions see Overview - IOM Public Release Notes.
The current version of the standard IOM project structure was introduced along with the shift from Ansible4IOM/CaaS4Ansible4IOM to Kubernetes. The main reason for the new version was a different distribution of content to the two artifacts, configuration and customization. This change was driven by the requirements of Kubernetes, where the SQL configuration is put into an init container and all remaining parts are placed into the app container. In order to have a 1:1 relation between the two Maven artifacts of standard IOM projects and the two different containers, mail templates and XSL templates were moved from the configuration artifact to the customization artifact.
The current version of the IOM standard project structure requires IOM 3.0 or newer. This IOM version uses a single server type only: cluster.
If your project contains any server-type-specific files (e.g. etc/base/project.<server-type>.properties), you have to ensure that these files reference the server type cluster only.
Nevertheless, caas2docker can also be used with projects following the older version of the IOM standard project structure, if you migrate the configuration files to the server type cluster. In this case there will be no 1:1 relation between the configuration and customization artifacts and the corresponding Docker images. The app Docker image will become a mixture of both project artifacts, but this will not be reflected by the versioning of the app Docker image: it is tagged with the version of the customization artifact only.
When working with the IOM Development Environment, caas2docker provides the required project images. caas2docker is able to access and use the Maven artifacts as defined by the IOM standard project structure. It is also able to access the expanded directory structure of these artifacts directly.
In a development context this might be helpful, as less time will be required until the IOM Development Environment is restarted with an updated image.
IOM standard projects have a standardized project layout in order to be installed automatically in the Intershop Commerce Platform. In general, IOM standard projects consist of two parts which both have to be provided as Docker images. These parts are the config image and the app image.
Project owners have to provide information about these two Docker images within the values file to be used by IOM Helm charts. Additionally, the project owners can define which environment type to use, allowing them to apply different configurations for different types of environment (e.g. production, integration, etc.). For more information, see Guide - Operate Intershop Order Management 3.X.
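The following box is a hedged sketch of what the project-specific part of such a values file could look like. Apart from caas.envName, which is described later in this document, the parameter names, registry, and tag values are illustrative assumptions only; see Guide - Operate Intershop Order Management 3.X for the authoritative parameter names.

# illustrative sketch only - parameter names besides caas.envName are assumptions
image:                                    # app image, built from the customization artifact
  repository: myregistry.example.com/iom-project-app
  tag: "1.0.0.0"
config:
  image:                                  # config image, built from the configuration artifact
    repository: myregistry.example.com/iom-project-config
    tag: "1.1.0.0"
caas:
  envName: integration                    # environment type, selects environment-specific configuration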
There is a 1:1 relation between Maven artifacts and Docker images.
The content of the configuration artifact will become part of the config image and the content of the customization artifact will become part of the app image. The version numbers of both artifacts are reflected by identical version numbers of the according Docker images.
The project-specific Docker images are both based on IOM Docker images provided by Intershop. The documentation of caas2docker will provide more information.
The configuration artifact has to be a .tgz file holding the directory for the SQL configuration.
More information about every aspect of the configuration artifact can be found within the sections below.
sql-config/
The customization artifact has to be a .tgz file holding directories for:
More information about every aspect of the customization artifact can be found within the sections below.
templates/
xslt/
etc/
customization/
test-data/
Configuration and customization artifacts have to be provided by a single Maven repository. Both artifacts have to use identical group-IDs. The following box shows an example of a Maven repository providing both IOM standard project artifacts.
com/
└── intershop/
    └── oms/
        ├── test-config/
        │   ├── maven-metadata.xml
        │   └── 1.1.0.0-SNAPSHOT/
        │       └── config/
        │           ├── test-config-1.1.0.0-20180704.070939-18.tgz
        │           ├── test-config-1.1.0.0-20180704.070939-18.pom
        │           └── maven-metadata.xml
        └── test-customization/
            ├── maven-metadata.xml
            └── 1.0.0.0-SNAPSHOT/
                └── config/
                    ├── test-customization-1.0.0.0-20180703.080132-18.tgz
                    ├── test-customization-1.0.0.0-20180703.080132-18.pom
                    └── maven-metadata.xml
SQL configuration has to be part of the configuration artifact of a project.
SQL configuration of projects is always a mixture of configuration of IOM standard features and project-specific database artifacts.
SQL scripts are used to configure standard IOM features and to create and maintain project-specific database objects.
The database objects and data required by the IOM core product and their possible modifications are provided by the standard IOM product in the form of an initial dump and SQL scripts. These core SQL scripts are always performed prior to the project-specific scripts, during project setup or version upgrades.
Table definitions and database functions provided for configuration of IOM can be considered as APIs that may change between core IOM Versions. The project-specific SQL scripts may rely on these APIs and must therefore be kept up to date during IOM version upgrades.
When working with different IOM versions, it may be necessary to also have different versions (branches) of the project scripts.
Database initialization and configuration of project-specific customizations must be implemented with a set of SQL scripts organized in a given directory structure (see below). These scripts will always be performed after the scripts of the IOM core product.
Note
The project-specific database objects are not created during the DB initialization of the product. These object definitions will also not be migrated by the IOM standard product. The project has to take care of the migration itself.
The directory structure of the sql-config directory does not contain any information about project versions. For this reason, it is recommended to have a dedicated sql-config directory for each project version and to avoid migrations over more than one version at a time. This rule can be mitigated by careful script logic, e.g. scripts that check the current database state before applying a change and can therefore be executed safely more than once.
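The following box is a minimal sketch of such defensive script logic. The table and column names are purely hypothetical and only serve as an illustration; the scripts of a real project have to work on its own database objects.

-- hypothetical project table, shown for illustration only
CREATE TABLE IF NOT EXISTS custom_order_export (
    id           BIGINT PRIMARY KEY,
    order_ref    TEXT NOT NULL,
    export_state INTEGER NOT NULL DEFAULT 0
);

-- a later version adds a column; guarded, so the script also works on databases
-- that were already migrated before
ALTER TABLE custom_order_export ADD COLUMN IF NOT EXISTS exported_at TIMESTAMP;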
The sql-config directory is split into three main sub-directories.
sql-config/
├── dbinit/
│   ├── 001_....sql
│   └── <N>_*.sql
├── dbmigrate/
│   ├── 001_....sql
│   └── <N>_*.sql
└── config/
    ├── base/
    │   ├── 001_....sql
    │   ├── 003_....sql
    │   └── <N>_*.sql
    └── env/
        ├── <env-name 1>/
        │   ├── 002_....sql
        │   └── <N>_*.sql
        ├── <env-name 2>/
        │   └── ...
        └── ...
dbinit
dbmigrate
The two folders dbinit and dbmigrate are meant to organize the scripts according to their content. Both of them are always executed, so you can also decide to use only one of them.
config
The contents of dbinit, dbmigrate, and config are executed one after another, in this order.
All SQL scripts must have a numerical prefix. This prefix defines the order of execution within each of the three main folders, starting with the lowest number.
Within config, only the numerical prefix is used to determine the execution order. The location within the sub folders base or env/<env-name> does not matter.
Each script will be processed by a separate call to the PostgreSQL client psql. In case of an error, the container (IOM init container) will fail and Kubernetes will restart the pod until the init container succeeds or the timeout is passed. Only the changes performed by the current script will be rolled back; modifications done by the previous scripts remain.
Most IOM Java Enums (or IOM-specific "extensible Enums") exist as database tables, too, and many SQL scripts need to refer to some of them. There is a process during the application startup that takes care of the synchronization.
This causes a chicken-and-egg dilemma, as the SQL scripts must run prior to the application start during setup or upgrade processes. To solve it, such new Enum values have to be added by script as well, either in the folder dbinit or dbmigrate, to register them within the database prior to the application start and make them available for subsequent SQL scripts.
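The following box sketches how such an Enum value could be registered in dbinit or dbmigrate. The table and column names are placeholders and have to be replaced by the actual names of the respective Enum table in the IOM data model.

-- register a new custom Enum value so that subsequent SQL scripts can refer to it
-- ("CustomOrderStateDefDO" and its columns are placeholder names)
INSERT INTO "CustomOrderStateDefDO" (id, "name", "description")
VALUES (10001, 'CUSTOM_EXPORTED', 'Order was exported to the external partner')
ON CONFLICT (id) DO NOTHING;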
IOM Development Environment (devenv-4-iom) provides a process which is able to execute sql-scripts from a directory matching the structure described above. For testing of single aspects of SQL configuration it is also possible to execute single SQL scripts.
During runtime, the SQL configuration to be executed is selected by the Helm parameter caas.envName, see Guide - Operate Intershop Order Management 3.0 | Parameters.
Mail templates have to be part of the customization artifact of IOM standard projects.
Mails can be customized by projects by adding new mail templates, mail images, etc. or by overwriting existing ones.
templates/
├── mails_customers/
│   └── ...
└── mails_operations/
    └── ...
Mail templates belonging to a project have to be placed into the directory templates within the project package.
All sub-directories and files located in templates will be copied to the directory $OMS_VAR/templates within the IOM project image. Therefore, you have to use the same directory structure as in $OMS_VAR/templates. For more information see sections Default E-mail templates and custom templates in Reference - IOM Customer E-mails. Also see Concept - IOM Customer E-mails.
There is no support for environment specific mail templates.
When using the IOM Development Environment (devenv-4-iom), a script is provided, which is able to roll out custom mail templates into the development environment.
XSL templates have to be part of the customization artifact of a project.
Documents can be customized by projects by adding new XSL templates or by overwriting existing ones.
xslt/
├── configuration/
│   └── ...
├── shop_default/
│   └── ...
├── utils/
│   └── ...
└── ...
XSL templates belonging to a project have to be placed into the directory xslt within the project package.
All sub-directories and files located in xslt will be copied to the directory $OMS_VAR/xslt of the IOM project image. Therefore, you have to use the same directory structure as in $OMS_VAR/xslt. For more information see Reference - IOM Customer E-mails 3.0.
There is no support for environment-specific XSL templates.
When using IOM Development Environment (devenv-4-iom), a script is provided, which rolls out custom XSL templates into a running IOM Development system.
Custom deployment artifacts have to be part of the customization artifact of a project.
Projects requiring beans (e.g. decision beans) have to provide them as custom deployment artifacts.
customization/
├── deployment.cluster.properties
└── artifacts/
    ├── <custom deployment artifact 1>
    └── ...
The root directory holding custom deployment artifacts is named customization. All files located in customization/artifacts, the custom artifacts, will be copied to directory $OMS_VAR/customization of the installation to be customized.
One file can be placed directly in the root of customization: deployment.cluster.properties. This file has three purposes:
To mark a deployment artifact to run only on the application server that runs singleton applications, prefix the artifact name in deployment.cluster.properties with ##exec-backend-apps-placeholder##.
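The following box is a hedged sketch of a deployment.cluster.properties file, assuming that it simply lists the custom artifact file names, one per line. The artifact names are illustrative only.

# custom artifacts located in customization/artifacts (names are illustrative)
custom-decision-beans-1.0.0.jar
# this artifact is only deployed on the application server running the singleton applications
##exec-backend-apps-placeholder##custom-singleton-jobs-1.0.0.jar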
Info
IOM Development Environment (devenv-4-iom) provides different processes for handling custom deployment artifacts:
The following box shows the default logging configuration of IOM. This configuration helps to understand the main concepts of logging in IOM:
#-------------------------------------------------------------------------------
# configure predefined log-handlers
#-------------------------------------------------------------------------------
/subsystem=logging/console-handler=CONSOLE: named-formatter="JSON", level="${env.OMS_LOGLEVEL_CONSOLE}"
/subsystem=logging/console-handler=IOM: named-formatter="JSON", level="${env.OMS_LOGLEVEL_IOM}"
/subsystem=logging/console-handler=HIBERNATE: named-formatter="JSON", level="${env.OMS_LOGLEVEL_HIBERNATE}"
/subsystem=logging/console-handler=QUARTZ: named-formatter="JSON", level="${env.OMS_LOGLEVEL_QUARTZ}"
/subsystem=logging/console-handler=CUSTOMIZATION: named-formatter="JSON", level="${env.OMS_LOGLEVEL_CUSTOMIZATION}"

#-------------------------------------------------------------------------------
# assign java-packages to log-handlers
#-------------------------------------------------------------------------------
/subsystem=logging/logger=bakery: handlers=[IOM], use-parent-handlers="false", level="ALL"
/subsystem=logging/logger=com.intershop.oms: handlers=[IOM], use-parent-handlers="false", level="ALL"
/subsystem=logging/logger=com.theberlinbakery: handlers=[IOM], use-parent-handlers="false", level="ALL"
/subsystem=logging/logger=org.jboss.ejb3.invocation: handlers=[IOM], use-parent-handlers="false", level="ALL"
/subsystem=logging/logger=org.hibernate: handlers=[HIBERNATE], use-parent-handlers="false", level="ALL"
/subsystem=logging/logger=org.quartz: handlers=[QUARTZ], use-parent-handlers="false", level="ALL"

- The CONSOLE handler has no explicit assignments of Java packages. It is assigned to the root loggers, which do not need assignments. Instead, this log handler also handles all unassigned Java packages.
- The CUSTOMIZATION handler, in difference to CONSOLE, will not log any messages as long as no Java packages are assigned. The assignment of Java packages has to be done in the project configuration and is described in more detail below.
The simplest logging configuration for projects can be realized by assigning all Java packages of the customization artifact to the CUSTOMIZATION log-handler. When doing so, the log level of the customization artifact can be controlled at runtime, simply by setting the Docker environment variable OMS_LOGLEVEL_CUSTOMIZATION.
The assignment of the Java packages belonging to the customization artifact is realized by adding additional entries to project.cluster.properties.
The following box shows a configuration example. You can apply these settings to your own project. To do so, replace the Java package names with the names used in your project.
/subsystem=logging/logger=com.my_company.iom_customization: handlers=[CUSTOMIZATION], use-parent-handlers="false", level="ALL"
If you want to use different log levels for different Java packages of your customization artifact, you have to use different logger configurations for the packages you want to log with different levels. The Docker environment variable OMS_LOGLEVEL_CUSTOMIZATION defines the lowest log level to be logged. In combination with the configuration shown below, the logging system will show the following behavior:
OMS_LOGLEVEL_CUSTOMIZATION | pkg1 FATAL | pkg1 ERROR | pkg1 WARN | pkg1 INFO | pkg1 DEBUG | pkg1 TRACE | pkg2 FATAL | pkg2 ERROR | pkg2 WARN | pkg2 INFO | pkg2 DEBUG | pkg2 TRACE |
---|---|---|---|---|---|---|---|---|---|---|---|---|
FATAL | x | | | | | | x | | | | | |
ERROR | x | x | | | | | x | x | | | | |
WARN | x | x | x | | | | x | x | x | | | |
INFO | x | x | x | x | | | x | x | x | | | |
DEBUG | x | x | x | x | x | | x | x | x | | | |
TRACE | x | x | x | x | x | x | x | x | x | | | |
/subsystem=logging/logger=com.my_company.iom_customization.pkg1: handlers=[CUSTOMIZATION], use-parent-handlers="false", level="ALL" /subsystem=logging/logger=com.my_company.iom_customization.pkg2: handlers=[CUSTOMIZATION], use-parent-handlers="false", level="WARN"
Custom properties have to be part of the customization artifact of the project.
cluster.properties contains properties that can be controlled by the IOM project. The complete list of properties can be found in Reference - IOM Properties 3.0.
There are other configurations that are mainly used to control the behavior of the Wildfly application server. These are located in the system.std.cluster.properties file of the IOM product. The project might bring an according project.cluster.properties file, which is applied in addition to the system.std.cluster.properties file. This enables projects to overwrite settings made in system.std.cluster.properties or to add new settings. It is impossible to overwrite any property that is defined in cluster.properties.
quartz-jobs-cluster.xml is deprecated since IOM 3.0, use quartz-jobs-custom.xml instead.
It is possible to overwrite quartz-jobs-cluster.xml. This can be done globally or on certain environments only. Environment-specific quartz-jobs-cluster.xml will always overwrite a quartz-jobs-cluster.xml defined in base.
It is possible to delete standard IOM jobs, to overwrite standard IOM jobs, and to add custom jobs by defining them in quartz-jobs-custom.xml. IOM's standard jobs are defined in quartz-jobs-cluster.xml. During runtime of IOM, both configuration files are loaded, first quartz-jobs-cluster.xml and then quartz-jobs-custom.xml. This makes it possible to delete and overwrite IOM standard jobs. Furthermore, it allows defining custom jobs as well.
quartz-jobs-custom.xml can be defined globally or for certain environments only.
Use the following template to modify or add job configuration of your project:
<?xml version="1.0" encoding="utf-8"?>
<job-scheduling-data xmlns="http://www.quartz-scheduler.org/xml/JobSchedulingData"
                     xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                     xsi:schemaLocation="http://www.quartz-scheduler.org/xml/JobSchedulingData http://www.quartz-scheduler.org/xml/job_scheduling_data_2_0.xsd"
                     version="2.0">
    <!--
        All job information are held in RAM only, they are not persisted anywhere.
        Additionally, this file will never change during runtime, since it is part of the
        Docker image (product- or project-image). Hence, all pre-processing-commands and
        processing-directives have very limited impact. When the IOM application server
        starts, there cannot be any jobs, that could be deleted or overwritten.

        But, this file is loaded after quartz-jobs-cluster.xml, which defines the standard
        IOM jobs. This makes it possible to
        * Delete standard jobs, by using the according pre-processing-command
          (see http://www.quartz-scheduler.org/xml/job_scheduling_data_2_0.xsd),
        * Overwrite standard jobs, by redefining them here,
        * Or simply add own jobs. In this case, it is a good idea, to assign them to
          group CUSTOM.
    -->
    <pre-processing-commands>
        <!-- clear all jobs and trigger of group CUSTOM -->
        <delete-jobs-in-group>CUSTOM</delete-jobs-in-group>
        <delete-triggers-in-group>CUSTOM</delete-triggers-in-group>
    </pre-processing-commands>
    <processing-directives>
        <!-- enable overwriting of existing jobs -->
        <overwrite-existing-data>true</overwrite-existing-data>
        <ignore-duplicates>false</ignore-duplicates>
    </processing-directives>
    <schedule>
        <!-- add custom jobs and triggers here -->
    </schedule>
</job-scheduling-data>
quartz-jobs-custom.xml supports the template variable ${platform.version}, which is automatically replaced by the current version of IOM. This mechanism should help you to lower maintenance efforts when referencing resources of the IOM platform.
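The following box sketches a custom job and trigger that could be placed inside the <schedule> element of the template above. The job class, the cron expression, and the job-data-map entry are illustrative assumptions; only the usage of group CUSTOM and of ${platform.version} follows the conventions described in this section.

<job>
    <name>CustomCleanupJob</name>
    <group>CUSTOM</group>
    <description>Hypothetical project-specific cleanup task</description>
    <job-class>com.my_company.iom_customization.jobs.CleanupJob</job-class>
    <durability>true</durability>
    <recover>false</recover>
    <job-data-map>
        <entry>
            <!-- illustrative entry, demonstrating the ${platform.version} template variable -->
            <key>platformVersion</key>
            <value>${platform.version}</value>
        </entry>
    </job-data-map>
</job>
<trigger>
    <cron>
        <name>CustomCleanupJobTrigger</name>
        <group>CUSTOM</group>
        <job-name>CustomCleanupJob</job-name>
        <job-group>CUSTOM</job-group>
        <cron-expression>0 30 3 * * ?</cron-expression>
    </cron>
</trigger>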
All the standard Quartz jobs defined by IOM only trigger tasks defined by the application Control, which is a singleton application running on one IOM application server only. However, the Quartz sub-system itself is rolled out and activated on all IOM application servers. Hence, using the Quartz sub-system does not restrict the execution of jobs to a single IOM application server. See Guide - Intershop Order Management - Technical Overview | Quartz Jobs.
Some configuration settings cannot be changed by project.cluster.properties, since more sophisticated CLI code is required. In this case, projects can place Wildfly CLI code directly into the initSystem.project.cluster.cli file within the custom properties directory structure.
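The following box is a minimal sketch of what an initSystem.project.cluster.cli fragment could look like, using standard Wildfly CLI syntax. The socket binding name, host, and port are illustrative assumptions.

# illustrative example: add an outbound socket binding towards an external partner system
/socket-binding-group=standard-sockets/remote-destination-outbound-socket-binding=partner-backend:add(host="partner.example.com", port=8443)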
Intershop does not recommend this kind of project customization/configuration, since it will make upgrades much more difficult. You cannot rely on the sub-systems that are bundled with the current version of Wildfly: when upgrading IOM, the Wildfly version might be upgraded as well, and with it the sub-systems bundled with Wildfly might change, too.
For more information, please see https://docs.wildfly.org/17/Admin_Guide.html.
The root directory holding custom properties is named etc, reflecting the name of the according configuration directory of the IOM product. Within etc, the well-known directory structure for environment-specific settings is used. Files within the base directory are applied on all installations, whereas files in the env directory are applied only if the environment matches.
Note
etc/
├── base/
│   ├── cluster.properties
│   ├── quartz-jobs-cluster.xml (DEPRECATED)
│   ├── quartz-jobs-custom.xml
│   ├── initSystem.project.cluster.cli
│   └── project.cluster.properties
└── env/
    ├── <env-name 1>/
    │   ├── quartz-jobs-cluster.xml (DEPRECATED)
    │   └── quartz-jobs-custom.xml
    ├── <env-name 2>/
    │   ├── quartz-jobs-cluster.xml (DEPRECATED)
    │   └── quartz-jobs-custom.xml
    └── ...
IOM Development Environment (devenv-4-iom) provides a process to execute CLI scripts.
During runtime, the configuration is selected by the Helm parameter caas.envName, see Guide - Operate Intershop Order Management 3.0 | Parameters.
Project files have to be part of the customization artifact of the project.
Project-files provides a generic directory structure to add files and directories required by projects. The content of project-files will be copied recursively to $OMS_VAR/project-files.
A typical usage example for project-files are public keys, which are required to automate file transfers to external partners. Public key files have to be referenced within the file system by the SQL configuration. Since the structure of IOM is not fixed, the SQL configuration has to be able to determine their position within the file system in a flexible way. For example, if the keys are all placed within the sub-directory public-keys located in project-files, the files will be copied to $OMS_VAR/project-files/public-keys. The SQL configuration has to use the according system property to create a valid reference to a key file: "${is.oms.dir.var}/project-files/public-keys/<file 1>".
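The following box sketches how an SQL script of the configuration artifact could reference such a key file. The table, columns, and file name are purely illustrative; only the path expression based on ${is.oms.dir.var} follows the pattern described above.

-- hypothetical table holding file-transfer configuration (illustrative names)
UPDATE "CustomFileTransferConfigDO"
SET "publicKeyPath" = '${is.oms.dir.var}/project-files/public-keys/partner-a.pub'
WHERE "partnerName" = 'partner-a';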
Directory Structure
Project-files does not support environment-specific differences. The complete directory structure is copied recursively on all environments to $OMS_VAR/project-files.
project-files/
├── <custom dir>/
│   ├── <custom dir>/
│   │   ├── <custom file>
│   │   └── <custom file>
│   └── <custom dir>/
│       ├── <custom dir>/
│       │   ├── <custom file>
│       │   ├── <custom file>
│       │   └── ...
│       └── ...
└── ...
Project files are available in the IOM Development Environment (devenv-4-iom) if they are part of the custom IOM image, see Guide - IOM Development Environment.
Test data have to be part of the customization artifact of the project.
Projects may require test data to be automatically loaded into the Intershop Commerce Platform, e.g. test systems require this behavior. Therefore, it is possible to manage test data in IOM standard projects, too. Test data might be loaded for specific environments (e.g. test systems) only or on systems of any environment. Again, the familiar directory structure is used to distinguish between these options.
The root directory holding the test data is named test-data. Within test-data the well known directory structure for environment-specific settings is used. Files within the base directory are copied on all installations, whereas files in the env directory are copied only if the environment matches.
test-data/
├── base/
│   ├── <import file 1>
│   └── ...
└── env/
    ├── <env-name 1>/
    │   ├── <import file 1>
    │   └── ...
    ├── <env-name 2>/
    │   └── ...
    └── ...
Test data are loaded into the IOM Development Environment (devenv-4-iom) if they are part of the custom IOM image and the according config variables are set (CAAS_ENV_NAME has to be set to an environment containing any test data and CAAS_IMPORT_TEST_DATA has to be set to true).
During runtime, the environment is selected by the Helm parameter caas.envName, see Guide - Operate Intershop Order Management 3.0 | Parameters.