Info
Prior to Intershop version 7.7, the information provided in this document was part of the Administration and Configuration Guide that can be found in the Knowledge Base.
Note
All relevant setup options must be configured in advance via dedicated deployment script files, before actually executing the deployment. Be aware that if you modify the Intershop Commerce Management configuration after it is deployed, the next deployment will overwrite all changes with the settings specified for your deployment.
Concept | Description |
---|---|
Node Manager | The Node Manager is a standalone Java program that is used to control all server processes in an Intershop Commerce Management instance. The Node Manager starts, stops and (in case of abnormal process termination) restarts application server processes. In addition, it provides the communication interface for Cluster Management, which is used for remote control purposes. |
Application Server | The Apache Jakarta Tomcat application server provides the operating environment for all Intershop Commerce Management applications. It includes (at least) the JSP and servlet engine and HTTP(S) connectivity. The application server comes out of the box with Intershop Commerce Management and is installed with every Intershop Application Server. |
Cluster Management | An Intershop Commerce Management application that allows the system administrator to control the application server instances running in the cluster, as well as the applications on top of them. |
You should be familiar with the main concepts of the Intershop Commerce Management infrastructure. Refer to Overview - Infrastructure, Scaling and Performance.
The Node Manager and Cluster Management interact when starting and managing the Tomcat application server processes, and when checking the cluster state. The figure below illustrates the interaction using a distributed installation as an example.
For the communication and remote management to work, the Cluster Management instances and the Node Managers must share the event messaging settings and the user database.
The following table lists the required settings in tcm.properties that the Cluster Management instance and the Node Manager use for cluster-wide management.
Property | Description |
---|---|
intershop.tcm.event.messengerClass | Specifies the messenger class to be used. By default, the multicast-based messenger is used. |
intershop.tcm.event.multicastAddress | Defines the group address used for multicast messaging. |
intershop.tcm.event.multicastPort | Defines the event distribution port for multicast messaging. |
intershop.tcm.event.multicastListenerThreads | Defines the number of handler threads to process incoming events. The default value is 5. |
intershop.tcm.registration.registrationTime | Defines the interval (in seconds) after which the Cluster Management instance or the Node Manager sends out a heartbeat packet to re-register with all other Cluster Management instances. The default value is 10. |
intershop.tcm.registration.expirationTime | Defines the interval (in seconds) after which Cluster Management unregisters a Node Manager or another Cluster Management instance if no heartbeat packets were received. The default value is 50. |
intershop.tcm.jmx.protocol | Defines the protocol used to transport JMX control commands to other Cluster Management instances and Node Manager instances. Currently, only HTTP is supported. |
intershop.tcm.password.digest | Defines the algorithm used for Cluster Management user password encryption. The default value is MD5. |
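As an illustration, a shared tcm.properties that keeps the default multicast-based messaging could look like the following sketch (the multicast address and port are example values; adjust them to your network):
# intershop.tcm.event.messengerClass is left unset here to keep the default messenger
intershop.tcm.event.multicastAddress=239.192.10.1
intershop.tcm.event.multicastPort=10040
intershop.tcm.event.multicastListenerThreads=5
intershop.tcm.registration.registrationTime=10
intershop.tcm.registration.expirationTime=50
intershop.tcm.jmx.protocol=http
intershop.tcm.password.digest=MD5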
If you intend to use other messaging systems than the default multicast, you must enable and adjust the corresponding intershop.tcm.event properties.
In larger installations, it is typically necessary to separate the event messaging traffic (multicast is the default) between Cluster Management and Node Managers from other traffic. This can be done by binding multicast messaging to a dedicated network adapter.
If there are multiple network adapters installed on a machine, you can specify the IP address of the network interface to use for multicast messaging as the value of the property intershop.tcm.event.multicastInterface.
Note
This machine-specific configuration is defined in a local configuration file tcm.properties, located in <IS.INSTANCE.SHARE>/system/tcm/config/local/<IS.AS.HOSTNAME>/<IS.INSTANCE.ID>.
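For example, a machine-specific tcm.properties could bind multicast messaging to a dedicated network adapter as follows (the IP address of the dedicated interface is an example value):
intershop.tcm.event.multicastInterface=192.168.10.21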
In addition to the shared and local cluster-wide configuration properties, each Node Manager reads its local configuration file, which defines the server processes to be started and the process control options.
The local Node Manager configuration is defined in the nodemanager.properties file (in <IS.INSTANCE.LOCAL>/engine/nodemanager/config/).
The location of the nodemanager.properties file can be set as a JVM property via the command line argument -Dnodemanager.config.dir=<path>.
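For example, to point the Node Manager explicitly to the default configuration directory:
-Dnodemanager.config.dir=<IS.INSTANCE.LOCAL>/engine/nodemanager/config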
As one Node Manager controls all server processes in an Intershop Commerce Management instance, these settings apply to all servers. The following table lists the Node Manager properties in the nodemanager.properties file.
Property | Description |
---|---|
network.protocol | Defines the protocol to use for communication (default HTTP). |
network.interface | Specifies the IP of the network interface to use; by default, the primary IP is set. |
network.port | Sets the port for receiving requests from the Cluster Management instance (default: 10050). If not set, the Node Manager starts without communication functionality. |
config.update.timeout | Defines the interval (in seconds) between two configuration look-ups to enable configuration changes at runtime. If not specified or set to 0, the Node Manager reads its configuration only once at startup. |
The table below lists the properties that can be set for the processes controlled by the Node Manager.
Property | Description |
---|---|
process.allowedExitCodes | Specifies exit codes of the Node Manager sub-processes that are not treated as failures. If a sub-process exits with one of these exit codes, the Node Manager will not restart it. |
process.list | Specifies the names of the sub-processes that are controlled by the current Node Manager instance (comma separated list), e.g., appserver0,appserver1. |
process.<process_name>.command | Specifies the command line string to start the sub-process. This string can include command line arguments to be passed to the sub-process. |
process.<process_name>.autostart | A Boolean value indicating whether the Node Manager should start the specified sub-process at startup. The default value is true. If set to false, only the Cluster Management instance can start this sub-process. |
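Taken together, a minimal nodemanager.properties for an instance with two application servers might look like the following sketch (the port, exit codes, and start commands are examples only; in a real installation the start commands are generated by the deployment):
network.port=10050
config.update.timeout=60
process.list=appserver0,appserver1
process.appserver0.command=<IS.INSTANCE.LOCAL>/bin/tomcat.sh appserver0
process.appserver0.autostart=true
process.appserver1.command=<IS.INSTANCE.LOCAL>/bin/tomcat.sh appserver1
process.appserver1.autostart=false
process.allowedExitCodes=0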
To pass additional properties to the application server process, either edit the startup script (this would apply to all server processes) or enter the required arguments in the command shell when starting a single instance. Additional properties include specific memory allocation, additional classpaths, etc.
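For example, to increase the memory allocation for all server processes, additional JVM arguments could be appended to the JAVA_OPTS variable in the startup script (a sketch only; the heap sizes are example values, and the variable follows the tomcat.bat convention shown at the end of this document):
set JAVA_OPTS=%JAVA_OPTS% -Xms512m -Xmx2048m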
As the command line call to start the Node Manager is very complex, a command line script is used for convenience reasons (nodemanager.sh on Unix platforms, nodemanager.bat on Windows, located in <IS.INSTANCE.LOCAL>/bin/).
The startup script performs the following tasks:
Intershop recommends not changing the Node Manager startup scripts.
Once running, the Node Manager starts the Intershop Commerce Management server processes, as defined in the nodemanager.properties file.
Intershop recommends using the nodemanager.sh|bat script only for development or debugging purposes.
The configuration for each Tomcat instance that runs a specific Intershop Commerce Management instance is saved in the directory <IS.INSTANCE.LOCAL>/engine/tomcat/servers/appserver<ID>/conf.
The default configuration, which is used, for example, when cloning a Tomcat instance, is stored in <IS.INSTANCE.LOCAL>/engine/tomcat/servers/_default_/conf.
The Tomcat application server receives shutdown requests via a dedicated shutdown port. The default port number for instance ES1 is 10051. However, the shutdown port can be freely defined.
The string which is used internally to request a shutdown is configured in the server.xml file. By default, the string is set to SHUTDOWN, as shown below:
<Server port="10051" shutdown="SHUTDOWN">
For security reasons, it is strongly recommended to change the default shutdown request string in the server.xml file. Otherwise, any local network user can shut down the application server instance by simply sending the string SHUTDOWN to the respective shutdown port.
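For example, the entry could be changed to an arbitrary, hard-to-guess string (the value shown here is just an illustration):
<Server port="10051" shutdown="x8GhZ3kQ9w">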
Apache Jakarta Tomcat provides numerous options for configuring its network connection. These properties are set in the Tomcat configuration file server.xml as attributes of the Connector element. The Connector component represents Tomcat's connection interface for serving HTTP(S) requests.
For detailed information on how to configure the Apache Jakarta Tomcat, refer to the Tomcat documentation.
For the purpose of serving Intershop Commerce Management, the following options are relevant and may require customizing:
The TCP port number on which the Tomcat Connector will create a server socket and await incoming connections is defined by the attribute port. With Intershop Commerce Management, the default application server ports are 10052 for HTTP and 10053 for HTTPS. The port numbers have to be modified when cloning an application server.
In the context of Intershop Commerce Management, the TCP port numbers discussed above are used for communication with Cluster Management. Note that the Web Adapter routes all requests to the application server's internal JSP and servlet engine, using the port specified by the property intershop.servletEngine.connector.port.
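The following sketch shows how the corresponding Connector definitions in server.xml could look for appserver0, using the default ports (the attribute set is a minimal example; the keystore attributes are explained below):
<Connector port="10052" protocol="HTTP/1.1" />
<Connector port="10053" protocol="HTTP/1.1" SSLEnabled="true" scheme="https" secure="true" keystoreFile="<IS.INSTANCE.SHARE>/system/tcm/config/keystore" keystorePass="intershop" />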
The following table lists the recommended application server port scheme.
Intershop Commerce Management Instance | AS Instance | Port | Description |
---|---|---|---|
ES1 | | 10050 | Node Manager port |
ES1 | appserver0 | 10051 | Tomcat shutdown port |
ES1 | appserver0 | 10052 | Tomcat HTTP port |
ES1 | appserver0 | 10053 | Tomcat HTTPS port |
ES1 | appserver0 | 10054 | Intershop Commerce Management port |
ES1 | appserver1 | 10061 | Tomcat shutdown port |
ES1 | appserver1 | 10062 | Tomcat HTTP port |
ES1 | appserver1 | 10063 | Tomcat HTTPS port |
ES1 | appserver1 | 10064 | Intershop Commerce Management port |
ES2 | | 10100 | Node Manager port |
ES2 | appserver0 | 10101 | Tomcat shutdown port |
ES2 | appserver0 | 10102 | Tomcat HTTP port |
ES2 | appserver0 | 10103 | Tomcat HTTPS port |
ES2 | appserver0 | 10104 | Intershop Commerce Management port |
ES2 | appserver1 | 10111 | Tomcat shutdown port |
ES2 | appserver1 | 10112 | Tomcat HTTP port |
ES2 | appserver1 | 10113 | Tomcat HTTPS port |
ES2 | appserver1 | 10114 | Intershop Commerce Management port |
The location of the security key for the HTTPS Connector is specified in the attribute keystoreFile. This attribute gives the absolute path and file name of the keystore file; the default is:
keystoreFile="<IS.INSTANCE.SHARE>/system/tcm/config/keystore"
To edit the key file, you need a password. This password is defined by the attribute keystorePass, with intershop as the default value.
Intershop Commerce Management provides a demo, yet fully functional, security key. To prevent warnings about certificate/IP mismatches, Intershop recommends creating your own key file for each server.
To edit the existing keystore file, e.g., to create new certificates, use the keytool program, located in <JAVA_HOME>/bin. For information on managing keystores using keytool, refer to the JDK documentation.
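For example, a new self-signed key pair could be added to the keystore as follows (alias, validity, and distinguished name are example values; adjust the keystore path and password to your installation):
keytool -genkeypair -alias appserver0 -keyalg RSA -keysize 2048 -validity 365 -dname "CN=appserver.example.com, O=Example" -keystore <IS.INSTANCE.SHARE>/system/tcm/config/keystore -storepass intershop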
The keystore settings play a role for communication with Cluster Management. They are not relevant to securing Commerce Management or your storefront applications.
The Tomcat log files tcc_access_log are stored separately for every server instance in the Tomcat log directory <IS.INSTANCE.LOCAL>/engine/tomcat/servers/appserver<ID>/logs. For information on the log, consult the official documentation of the Apache Jakarta Tomcat.
In complex deployments involving multiple Intershop Commerce Management clusters, it may be desirable to manage Tomcat server processes from different Intershop Commerce Management clusters in a single Cluster Management instance. For example, consider a data replication scenario with a source system and a target system. Being part of different Intershop Commerce Management clusters, the source and the target system access separate Intershop Shared Files instances.
In a standard setup, the Cluster Management configuration files are also separate, as they reside in the respective Intershop Shared Files area of cluster 1 and cluster 2. Using the Intershop Commerce Management environment variable IS_TCM_SHARE, defined in the intershop.properties file below <IS.INSTANCE.HOME>, both clusters can be forced to use the same set of configuration files. As a consequence, the Tomcat server instances can be managed from the same Cluster Management instance.
For different clusters to use the same Cluster Management configuration files, the IS_TCM_SHARE variable in the intershop.properties file of each instance has to point to the same location.
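For example, the intershop.properties file of both clusters could contain the same entry (the path is an example for a location accessible by both clusters):
IS_TCM_SHARE=/intershop/shared/tcm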
You can use any Tomcat server instance to connect to the Cluster Management instance, using the instance's Tomcat HTTP port (e.g., port 10052). In larger deployments, consider using a dedicated Tomcat server instance which serves administrative purposes only and does not run the Intershop Commerce Management application.
Info
This section replaces the outdated Knowledge Base article with the ID 23K026 and the title Using the OnOutOfMemoryError JVM Option.
The OnOutOfMemoryError option helps you deal with OutOfMemory situations of your Java Virtual Machine by defining an action to be executed when an OOM error occurs.
Simply adjust your tomcat.bat / tomcat.sh in <ESERVER_HOME>/bin by adding the following line to the JVM options section:
set JAVA_OPTS=%JAVA_OPTS% -XX:OnOutOfMemoryError="<YOUR_ACTION>"
Example:
set JAVA_OPTS=%JAVA_OPTS% -XX:OnOutOfMemoryError="C:/WINDOWS/system32/taskkill /F /PID %%p"
This example would kill the application server process in case of an OOM error in a Windows environment. For more information about this and other JVM options, refer to Java HotSpot VM Options.
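On Unix systems (tomcat.sh), the corresponding action would typically terminate the process with kill, where %p is replaced by the JVM with the process ID:
-XX:OnOutOfMemoryError="kill -9 %p"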