This document approaches the logging system of Intershop Commerce Management from the configuration perspective. It is addressed to administrators or DevOps who configure and maintain Intershop Commerce Management instances.
This document does not describe the logging framework. For details about the framework implementation, refer to Concept - Logging.
Info
Prior to Intershop version 7.7, the information provided in this document was part of the Administration and Configuration Guide, which can be found in the Knowledge Base.
Note
All relevant setup options must be configured in advance via the dedicated deployment script files, before actually executing the deployment. Be aware that if you modify the Intershop Commerce Management configuration after it has been deployed, the next deployment will override all manual changes with the settings specified for your deployment.
The Intershop Commerce Management logging framework is used to log application events. Intershop Commerce Management sends logging messages via SLF4J, which uses logback as the underlying logging system. The combination of SLF4J and logback allows Intershop Commerce Management to produce more detailed log information and to integrate third-party APIs (log4j, Jakarta Commons Logging, java.util.logging, etc.) into its logging framework.
The application log files are located in the directory <IS_SHARE>/system/log/. The available options are controlled via logback configuration files logback-*.xml and the Intershop Commerce Management-specific logging configuration files.
The main configurable components of the logging framework include:
Concept | Description
---|---
Logger | Generally, loggers are the central framework objects that provide the methods for creating and sending log events. However, "logger" is also used to name the log categories (see below).
Category | Loggers are categorized by their name, based on a hierarchical naming rule, and are usually named according to their corresponding classes. For instance, a logger named com.intershop.beehive.core is a descendant of the logger named com.intershop.beehive. Categories are used to filter the log output. As the logger name corresponds to the class name, it indicates the code location that produces a log message.
Appender | Appenders are responsible for sending the log message to a target, i.e., an output destination. A logging request is forwarded to all appenders specified for a logger as well as to any appenders higher in the hierarchy, i.e., appenders are inherited additively from the logger hierarchy. To limit this appender inheritance for a certain category, explicitly set the additivity flag of that logger to false.
Level | Levels are used to hierarchically categorize log messages by severity. Supported levels include TRACE, DEBUG, INFO, WARN and ERROR. The root category should always be assigned an explicit level.
Filter | Adding filters to appenders allows for selecting log events according to various criteria.
Turbo Filter | As opposed to filters assigned to appenders, turbo filters apply globally. They pre-select the output before the actual logging event is created. Intershop Commerce Management generates a TurboThresholdFilter automatically, based on the lowest level defined for any appender.
Layout | Layouts format the log output and return a string. In the initial configuration, Intershop Commerce Management uses PatternLayout, which allows for customizing the output string based on a configurable conversion pattern.
Mapped Diagnostic Context (MDC) | MDCs can be seen as additional logging context definitions that enrich the logging events and, consequently, allow for further filtering options. Intershop Commerce Management provides a configurable mechanism to enrich the MDC. These customizations are configured in <IS_SHARE>/system/config/cluster/logging.properties; several MDC enhancements are available by default.
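To illustrate how these concepts interact, the following minimal logback configuration sketch combines an appender with a level filter and a pattern layout (including an MDC value via %X{...}), a logger category with disabled additivity, and the root logger. All names, paths, levels and the MDC key in this sketch are illustrative assumptions and not part of the shipped Intershop configuration.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Illustrative sketch only; appender, category, path and MDC key names are not Intershop defaults. -->
<configuration>
  <appender name="EXAMPLE_FILE" class="ch.qos.logback.core.FileAppender">
    <file>/tmp/example.log</file>
    <!-- Filter: let only events with level WARN or higher pass this appender -->
    <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
      <level>WARN</level>
    </filter>
    <!-- Layout: conversion pattern; %X{requestid} prints a (hypothetical) MDC value -->
    <encoder>
      <pattern>[%date{yyyy-MM-dd HH:mm:ss.SSS z}] [%thread] %-5level %logger{36} [%X{requestid}] - %msg%n</pattern>
    </encoder>
  </appender>

  <!-- Category: logger named after a package; additivity="false" stops the event
       from also reaching the appenders attached to parent loggers -->
  <logger name="com.example.shop.checkout" level="DEBUG" additivity="false">
    <appender-ref ref="EXAMPLE_FILE"/>
  </logger>

  <!-- Root logger: handles everything not caught by a more specific category -->
  <root level="INFO">
    <appender-ref ref="EXAMPLE_FILE"/>
  </root>
</configuration>
```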
The logging framework options are controlled via a global logback configuration file and cartridge-specific logback configuration files, as well as a global Intershop Commerce Management logging configuration file and server-specific Intershop Commerce Management logging configuration files.
Upon application server startup, Intershop Commerce Management dynamically creates an internal main logback configuration file. This file sets some basic properties and defines a list of <include> elements, which pull in the logback-*.xml files described below.
The internal logback configuration could look, for example, like this:
<?xml version="1.0" encoding="UTF-8" ?>
<configuration>
    <include file=".../<cartridge_name>/release/logback/logback-bc_auditing.xml"/>
    <include file=".../<cartridge_name>/release/logback/logback-bc_pricing.xml"/>
</configuration>
The central logback configuration file <IS_SHARE>/system/config/cluster/logback-main.xml controls the cluster-wide appenders and loggers. Any cartridge-specific configuration is passed via logback-<cartridgename>.xml files located in <CARTRIDGE_DIRECTORY>/<CARTRIDGE_NAME>/release/logback. In addition, Intershop Commerce Management provides a dedicated DBinit log configuration file (<IS_SHARE>/system/config/cluster/logback-dbinit.xml), which is added to the dynamically generated configuration if the application server is started in dbinit mode.
Basically, the appender definitions in the logback-*.xml files specify the output destination of each appender (such as a log file and its rolling policy), its layout pattern and its level filters.
In addition, the logback-main.xml file defines the cluster-wide default appenders as well as the logger (category) assignments, including the root logger.
For information about creating project-specific appenders or customizing existing ones, refer to the logback documentation.
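For orientation only, a cartridge-specific fragment such as logback-<cartridgename>.xml could define a project-specific rolling file appender roughly as follows. This sketch assumes that included fragments use logback's <included> root element; the appender name, category, paths and rollover settings are illustrative and not Intershop defaults.

```xml
<!-- Sketch of a cartridge-specific logback fragment; all names and paths are illustrative. -->
<included>
  <appender name="MY_CARTRIDGE_LOG" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>/var/log/icm/my_cartridge.log</file>
    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
      <!-- roll over daily, keep 30 days of history -->
      <fileNamePattern>/var/log/icm/my_cartridge.%d{yyyy-MM-dd}.log</fileNamePattern>
      <maxHistory>30</maxHistory>
    </rollingPolicy>
    <encoder>
      <pattern>[%date{yyyy-MM-dd HH:mm:ss.SSS z}] [%thread] %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>

  <logger name="com.example.mycartridge" level="INFO">
    <appender-ref ref="MY_CARTRIDGE_LOG"/>
  </logger>
</included>
```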
The System Management application provides an interface to upload and manage additional logback configuration files, which allow for changing the logging details at server run time, e.g., for maintenance reasons. For details, see the System Management online help.
The file <IS_SHARE>/system/config/cluster/logging.properties controls the global, i.e., cluster-wide Intershop Commerce Management-specific logging settings. Intershop Commerce Management-specific logging settings can be defined locally, i.e., on application server level, using <IS_HOME>/config/appserver#.properties.
Basically, the global logging.properties file defines:

- MDC enrichment for certain object types used in the current Intershop Commerce Management instance, using Java expressions (a hypothetical filled-in example is sketched after this list):

    intershop.logging.mdc.types=<type>
    intershop.logging.mdc.<type>.class=<fully_qualified_class_name>
    intershop.logging.mdc.<type>.attr.<attr1_ID>=<Java_expression>
    intershop.logging.mdc.<type>.attr.<attr2_ID>=<Java_expression>

- default categories, encoding and pattern for any dynamically created appenders, used as a fallback if not defined explicitly, for instance:

    intershop.logging.dynamictarget.categories=root
    intershop.logging.dynamicfiletarget.encoding=
    intershop.logging.dynamicfiletarget.pattern=[%date{yyyy-MM-dd HH:mm:ss.SSS z}] [%thread] %-5level %logger{36} - %msg%n

- default JDK logging adapter settings, namely:

    intershop.logging.javaloggingadapter.enable=true
    intershop.logging.javaloggingadapter.exclusive=true

- level filters and category assignments for existing appenders, as changed via the System Management application

- appender settings that control specific appender behavior, for example intershop.logging.appender.buffer.flushInterval, which defines the interval at which appenders with <immediateFlush>false</immediateFlush> (as set, for example, in logback-main.xml) are flushed

- the level of logback status messages (ERROR, WARN, INFO) that are passed to System.err and automatically appended to the application server log files, as well as the number of status messages stored with each application server, for example:

    intershop.logging.engine.statuslogging.level=WARN
    intershop.logging.engine.statusbuffer.size=500
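To make the MDC property skeleton from the first list item more concrete, the following logging.properties fragment sketches a hypothetical MDC type for a fictitious object. The type name, class and attribute expressions are pure assumptions for illustration (including the exact expression syntax expected by the framework) and are not shipped defaults.

```properties
# Hypothetical example only: registers an MDC type "basket" (not a shipped default).
# Each attribute value is a Java expression evaluated against the configured object type;
# the expression syntax shown here is an assumption for illustration.
intershop.logging.mdc.types=basket
intershop.logging.mdc.basket.class=com.example.shop.basket.Basket
intershop.logging.mdc.basket.attr.basketid=getUUID()
intershop.logging.mdc.basket.attr.basketowner=getUserID()
```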
The following table lists the columns of the log output as provided by the default error and warn appenders defined in logback-main.xml. This information is expected, for instance, by the Intershop Commerce Insight (ICI).
Column | Description
---|---
1 | Date, time and time zone (in square brackets), e.g., [2008-04-16 12:34:55.501 CEST+0200] |
2 | Log level |
3 | Host name or IP address |
4 | Intershop Commerce Management instance, e.g., ES1 |
5 | Application server ID, e.g., appserver0 |
6 | [Site name] |
7 | [Request application URL identifier] |
8 | Log category, i.e., the class name |
9 | [Marker], set by the log message author |
10 | Request type, e.g., storefront, job, back office, etc. |
11 | [Session ID] |
12 | [Request ID] |
13 | Java thread name (in double quotation marks) |
14 | Log message |
Hence, a default log output could look as follows:
[2012-07-16 12:34:55.501 CEST+0200] WARN 127.0.0.1 ES1 appserver0 [] [] com.intershop.adapter.saferpay.AcSaferpayCartridge [] [] [] [] "main" cartridge property: 'intershop.cartridges.ac_saferpay.sac.installation.path' is *not* found!
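For orientation, a conversion pattern along the following lines could produce a similar column structure. This is only a sketch and not the actual pattern shipped in logback-main.xml; in particular, the %X{...} MDC key names and the instance/server placeholders (ES1, appserver0) are hypothetical.

```xml
<!-- Sketch only, not the pattern shipped in logback-main.xml.
     ${HOSTNAME} is provided by logback; the %X{...} MDC keys and the
     instance/server placeholders (ES1, appserver0) are hypothetical. -->
<encoder>
  <pattern>[%date{yyyy-MM-dd HH:mm:ss.SSS z}] %-5level ${HOSTNAME} ES1 appserver0 [%X{site}] [%X{application}] %logger [%marker] %X{requesttype} [%X{sessionid}] [%X{requestid}] "%thread" %msg%n</pattern>
</encoder>
```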
To support PA-DSS compliance for Intershop Commerce Management-based e-commerce applications, Intershop Commerce Management features a simple logging system accessible to users in Commerce Management. It records payment-relevant Commerce Management user operations in the dedicated log file audits-PCI-<IS.AS.HOSTNAME>-<IS.INSTANCE.ID>-appserver<ID>.log, located in <IS_SHARE>/system/log. Each message states the result of the performed operation (success or error), as well as any additional data, if available.
The Commerce Management auditing log is enabled by default; the standard logging configuration file logback-main.xml includes a dedicated appender definition and the corresponding category assignments.
Note
This section replaces the outdated article with the ID 23989Y and the title Problem in Rolling Log Files.
Sometimes systems run into problems with rolling log files.
With the out-of-the-box logging configuration this issue does not occur. However, when custom logging appenders are configured that accidentally or intentionally log into the same file, multiple handles to the log file are open. As a result, the file cannot be renamed and rolling fails.
Note
In general, each log file should have only one defined appender.
You will find messages like these in the appserver.log (example of more than one appender writing into a rolling job log):
[2013-01-21 17:40:00.204 CET]: 17:40:00,204 |-WARN in c.q.l.co.rolling.helper.RenameUtil - Failed to rename file [d:\eserver9\share\system\log\<job>-appserver0.log] to [d:\eserver9\share\system\log\<job>-appserver0.log3655490412870351.tmp].
[2013-01-21 17:40:00.204 CET]: 17:40:00,204 |-WARN in c.q.l.co.rolling.helper.RenameUtil - Attempting to rename by copying.
[2013-01-21 17:40:00.204 CET]: 17:40:00,204 |-WARN in c.q.l.co.rolling.helper.RenameUtil - Could not delete d:/eserver9/share/system/log/<job>-appserver0.log
This issue can be solved by reviewing all logging configurations in the system and identifying the appenders that log into the same file.
Reconfigure them to write into different files and restart the system.
If different appenders are intended to write into the same file (possibly even from different servers), Logback provides some support for this, although with some restrictions; see Logback Project | Chapter 4: Appenders.
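If writing to the same file from several appenders (or JVMs) really is required, logback's prudent mode can serialize file access. The following is a minimal sketch with an illustrative appender name and file path; note that prudent mode costs performance and, for rolling file appenders, is supported only with restrictions described in the logback manual.

```xml
<!-- Sketch only: prudent mode lets several FileAppender instances (even in
     different JVMs) write safely to the same file, at the cost of performance.
     The appender name and file path are illustrative. -->
<appender name="SHARED_FILE" class="ch.qos.logback.core.FileAppender">
  <file>/var/log/icm/shared-example.log</file>
  <prudent>true</prudent>
  <encoder>
    <pattern>[%date{yyyy-MM-dd HH:mm:ss.SSS z}] [%thread] %-5level %logger{36} - %msg%n</pattern>
  </encoder>
</appender>
```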
The directory for files specifically uploaded to one application server is: [IS_HOME]/config/appserver0/loggingextension
Note
Up to ICM 7.4.5, the directory for files specifically uploaded to one application server was: [IS_SHARE]/system/config/servers/<ip>/<Instance>/<appservername>/loggingextension (for example, on Windows: D:\eserver9\share\system\config\servers\10.0.56.111\ES9\appserver0\loggingextension)