The purpose of this document is to give you an idea of what the logging framework in Intershop Commerce Management can do.
Logging in general means recording events for documentation purposes and for analyzing processes, especially for finding the cause of an issue if something has gone wrong.
Within a software system, logging generally works as follows: log statements in the code produce log events, which are assigned to log categories and passed to appenders that write them to a log target, such as a file or a database.
Since Enfinity Suite 6.3, the logging system is based on SLF4J and Logback.
This glossary describes the terms used here (terms in monospaced style directly correspond to class/interface names):
| Term | Description |
| --- | --- |
| Log Statement | A line of code that produces a log event. |
| Log Event | An atomic set of data to be recorded. |
| Appender | A class that actually writes a log event to a log target such as a file or a database, sends an e-mail, or forwards it to a monitoring application. |
| Log Category | Also often called logger. In most logging systems, each log event is directly assigned to a category. The set of all categories forms a tree, which for most use cases makes it easier to select log messages for filtering and for routing to the appenders. |
| Logger | Two meanings (sometimes used interchangeably): 1. a class/object that provides methods for submitting log events, and 2. a log category. |
| SMC | System Management Console (or System Management as of IS 7.5 or later). |
It is important to use the correct log level, otherwise the message may be ignored.
| Level | Target User | Content / Reason |
| --- | --- | --- |
| Trace | developer | Very detailed information, content of variables. |
| Debug | developer | Summary information, high-level content of variables (like external IDs). |
| Info | administrator | A main business goal was reached; the description states which business goal was reached, including the identification of the business object. This is a signal to the administrator that the application is working well. Additionally, developers can see milestones of the business process when analyzing failures. |
| Warn | administrator | Something went wrong because of missing resources or timeouts to external systems. The message should contain enough information for the administrator to identify the environment issue. |
| Error | developer | Something went wrong; the stack trace needs to be analyzed to avoid these kinds of problems in the future. |
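To illustrate, here is a minimal sketch of how these levels might be applied in application code (the `OrderExport` class, its methods, and the messages are hypothetical):

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class OrderExport {

    private static final Logger logger = LoggerFactory.getLogger(OrderExport.class);

    public void export(String orderId) {
        // Debug: summary information with an external ID, aimed at developers.
        logger.debug("Starting export of order {}.", orderId);
        try {
            writeOrder(orderId);
            // Info: a main business goal was reached, identified by the business object.
            logger.info("Order {} exported successfully.", orderId);
        } catch (java.io.IOException e) {
            // Warn: an environment issue (missing resource, timeout) the administrator can act on.
            logger.warn("Could not write export file for order {}: {}", orderId, e.getMessage());
        } catch (RuntimeException e) {
            // Error: unexpected failure; the stack trace needs to be analyzed by a developer.
            logger.error("Unexpected failure while exporting order " + orderId, e);
        }
    }

    private void writeOrder(String orderId) throws java.io.IOException {
        // Placeholder for the actual export logic.
    }
}
```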
A good explanation of the levels, and many other logging-related hints, can be found at javacodegeeks.
Note
Business user information (e.g., about an import file that contains incorrect content) must be provided to the business user, usually via the UI or specific log files for import/export. Business user information must not end up in the debug, warn, or error log files.
SLF4J (Simple Logging Facade for Java) provides an API for sending log messages. It is independent of the actual implementation of the engine that processes those log messages. Furthermore, the SLF4J project provides bridge libraries for all commonly used logging APIs, such as Log4J, Apache Commons Logging, and so on, so that their messages are passed to the desired implementation.
Logback provides a native implementation for SLF4J and is intended as a successor to the popular Log4J project.
The following sections only give a brief overview of the basic concepts. For detailed information, please refer to the latest official documentation of these two projects: the SLF4J documentation and the Logback documentation.
```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class Wombat {

    final Logger logger = LoggerFactory.getLogger(Wombat.class);
    private Integer current;

    public void setTemperature(Integer temperature) {
        final Integer old = current;
        current = temperature;
        logger.debug("Temperature set to {}. Old temperature was {}.", current, old);
        if (current.intValue() > 50) {
            logger.info("Temperature has risen above 50 degrees.");
        }
    }
}
```
This example shows the basic way of submitting log events via SLF4J:

- A `Logger` object is retrieved from the `LoggerFactory`, typically with a class name (retrieved from the class object).
- The `Logger` object provides methods for submitting events, where the method name is one of the five log levels (error, warn, info, debug, and trace).

The Mapped Diagnostic Context (MDC) is a way to enrich logging events with contextual runtime information that is not in the scope of the code containing the log statements, such as the current session, request, or user.
It is basically a map of key/value pairs of type `String`, held as an `InheritableThreadLocal`, which means that child threads automatically get all values from the parent thread as they existed at thread creation time.
Values of the MDC can be used to filter logging events, to enrich the log messages, or to route the logging events to different targets.
```java
package org.slf4j;

public class MDC {
    public static void put(String key, String value);
    public static String get(String key);
    public static void remove(String key);
    public static void clear();
}
```
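A short usage sketch, assuming a pooled worker thread and a hypothetical `sessionId` key: values are typically put into the MDC at the start of a unit of work and removed in a `finally` block so they do not leak into unrelated work on the same thread.

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

public class RequestWorker {

    private static final Logger logger = LoggerFactory.getLogger(RequestWorker.class);

    public void handle(String sessionId) {
        // Enrich all log events submitted by this thread with the session ID.
        MDC.put("sessionId", sessionId);
        try {
            logger.info("Handling request."); // carries sessionId in its MDC
        } finally {
            // Always clean up, otherwise the value leaks to the next task
            // executed on this (possibly pooled) thread.
            MDC.remove("sessionId");
        }
    }
}
```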
Logback is optimized for performance, which in practice means filtering out and dropping logging events as early as possible, ideally before an actual `LoggingEvent` object is even created.
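One practical consequence is SLF4J's parameterized logging: the message is only formatted if the event passes the level check. A small sketch (class and method names are hypothetical):

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class EarlyFilteringExample {

    private static final Logger logger = LoggerFactory.getLogger(EarlyFilteringExample.class);

    public void process(int itemCount, long durationMillis) {
        // Cheap when debug is disabled: the message is never formatted
        // and no string concatenation takes place.
        logger.debug("Processed {} items in {} ms.", itemCount, durationMillis);

        // For genuinely expensive arguments, guard the statement explicitly
        // so the argument is not even computed.
        if (logger.isDebugEnabled()) {
            logger.debug("State dump: {}", buildExpensiveStateDump());
        }
    }

    private String buildExpensiveStateDump() {
        return "...";
    }
}
```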
Note

The configuration of the logging system for the server is customized to provide log configuration enhancements for implementation partners. This means Logback is configured programmatically instead of relying on the basic configuration mechanisms built into Logback.
Before the logging configuration is initialized, especially during server startup and class loading, it is not possible to use SLF4J, so `System.out.println` is the only way to log messages from the application. These messages are unformatted, non-encoded output. Please reduce or avoid this type of output to `System.out` or `System.err`.
Note
By default, these log messages are written directly to standard out.
The logging framework of the Intershop solution also provides some additional features that are not part of Logback; these are described in the following sections.

The SLF4J and Logback JAR files are integrated via dependencies. The log configuration is combined from different sources: logback-main.xml and the log configurations from the cartridges.
Note
By default, all log messages are written to standard out, using the JSON log format.
The Intershop solution provides a utility class com.intershop.beehive.core.capi.log.MDC
that allows objects of any type to be placed in the SLF4J-MDC, which actually only supports string values. This is achieved by putting multiple attributes of the passed object into the MDC in addition to the original object itself. Which attributes of a given type are added can be configured.
For example, the following configuration registers two attributes for objects of type `java.util.List`:

```properties
intershop.logging.mdc.types = List
intershop.logging.mdc.List.class = java.util.List
intershop.logging.mdc.List.attr.first = Value.isEmpty() ? null : Value.get(0)
intershop.logging.mdc.List.attr.last = Value.isEmpty() ? null : Value.get(Value.size() - 1)
```
Putting a nested list into the MDC:

```java
// MDC here is the Intershop utility class com.intershop.beehive.core.capi.log.MDC.
List<Integer> a = Arrays.asList(new Integer[] {1, 2, 3});
List<Object> b = Arrays.asList(new Object[] {"a", a});
MDC.put("xyz", b);
```
then results in the following MDC entries:

```text
xyz = [a, [1, 2, 3]]
xyz.first = a
xyz.last = [1, 2, 3]
xyz.last.first = 1
xyz.last.last = 3
```
The issue with Java util logging is that its configuration is global for the JVM. This may not work well together with a servlet container running multiple applications.
There are two properties that control how Java logging configuration is affected:
```properties
# Should logging messages from Java logging be passed to the Intershop logging system (true / false)?
intershop.logging.javaloggingadapter.enable=true

# Should a possibly existing Java logging configuration be overwritten by a new one so that the only
# receiver of all those messages is the Intershop logging system?
intershop.logging.javaloggingadapter.exclusive=true
```
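With the adapter enabled, messages submitted via java.util.logging are passed on to the Intershop logging system. A minimal sketch (the class and logger name are hypothetical):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class LegacyComponent {

    // A java.util.logging logger, e.g., as used by a third-party library.
    private static final Logger julLogger = Logger.getLogger("com.example.legacy");

    public void run() {
        // With intershop.logging.javaloggingadapter.enable=true, this message
        // is forwarded to the Intershop logging system instead of being handled
        // only by the plain java.util.logging handlers.
        julLogger.log(Level.WARNING, "Connection pool is running low.");
    }
}
```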
```properties
#
# Dynamic file appender
#
# e.g., used when creating an impex log
#
intershop.logging.dynamictarget.categories=root
intershop.logging.dynamicfiletarget.encoding=
intershop.logging.dynamicfiletarget.pattern=[%date{yyyy-MM-dd HH:mm:ss.SSS z}] [%thread] %-5level %logger{36} - %msg%n

#
# Auto flush of buffered appenders.
#
# The flush interval time has to be given in seconds, default is 30 seconds.
# A value less or equal to zero disables this feature.
#
intershop.logging.appender.buffer.flushInterval=30

#
# Logging of messages from the logging engine
#
# Print messages to console with a level higher or equal to the configured level (ERROR, WARN, INFO).
# Size of the buffer for the last messages kept in memory.
#
intershop.logging.engine.statuslogging.level=WARN
intershop.logging.engine.statusbuffer.size=500

#
# Enrichment of stack traces with class packaging data
#
intershop.logging.engine.stacktrace.packaginginfo.enable=false
```
Limitation

The auto flush feature is only supported for appenders that inherit from `ch.qos.logback.core.OutputStreamAppender` and use the `ch.qos.logback.core.encoder.LayoutWrappingEncoder`.
The Logging module within the SMC provides access to the logging configuration at runtime.
Changes made here are stored in the logging.properties file or the <ServerName>.properties file (depending on whether the change was made for a certain server or for the whole cluster).
```properties
#
# Values that overwrite the configuration of the native logback configuration files
# Entries can be added and changed via the SMC
#
# intershop.logging.appender.<Appender Name>.level = [THRESHOLD,FIXED],[ERROR, WARN, INFO, DEBUG, TRACE]
# intershop.logging.appender.<Appender Name>.categories = <category1> <category2> ...
#
intershop.logging.additionalfiles.cluster=SQL_Pipeline_Appender.xml
intershop.logging.appender.SQL_Pipeline@SQL_Pipeline_Appender.xml@c.level=FIXED,DEBUG
intershop.logging.appender.SQL_Pipeline@SQL_Pipeline_Appender.xml@c.categories=com.intershop.beehive.orm.internal.query.SQLManagerImpl
```
Possible changes to appenders via the SMC are:
Level Filter
The changes displayed in the SMC are based only on the first filter assigned to the appender, and the SMC only displays values if this filter is a `LevelFilter` or a `ThresholdFilter`. When a change is made via the SMC, either the first filter of the appender's filter chain is changed, or an additional filter is added. In all other cases, the SMC shows "Level filter = None".
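The THRESHOLD and FIXED kinds appear to correspond to Logback's standard `ThresholdFilter` and `LevelFilter`; assuming that mapping, a minimal programmatic sketch of the difference:

```java
import ch.qos.logback.classic.Level;
import ch.qos.logback.classic.filter.LevelFilter;
import ch.qos.logback.classic.filter.ThresholdFilter;
import ch.qos.logback.core.spi.FilterReply;

public class FilterKinds {

    public static void main(String[] args) {
        // THRESHOLD: accepts all events with the configured level or higher.
        ThresholdFilter threshold = new ThresholdFilter();
        threshold.setLevel("WARN");
        threshold.start();

        // FIXED: accepts events of exactly one level and denies all others.
        LevelFilter fixed = new LevelFilter();
        fixed.setLevel(Level.DEBUG);
        fixed.setOnMatch(FilterReply.ACCEPT);
        fixed.setOnMismatch(FilterReply.DENY);
        fixed.start();
    }
}
```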
Assigned Categories
Here you can view and modify the logging categories to which the appender is assigned and from which it therefore receives logging events. Additionally, checkboxes can be used to show only categories that are known to the logging system at that time, i.e., categories that are either mentioned in some configuration or used by some code (to send an event).
A log process can be used to add additional temporary filters or appenders to the configuration of the logging system. This is useful, for example, to create a new log file for a particular business process, or to apply filters to test cases that are intended to trigger and test the correctness of error log messages.
To enable business-task-specific filtering of log messages, a key/value pair is introduced within the MDC, namely the process type as the key and the process ID as the value.
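As a rough illustration of the idea (this is a hedged sketch, not the actual Intershop implementation; the key and value are placeholders), a Logback filter could select events by such an MDC entry like this:

```java
import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.filter.Filter;
import ch.qos.logback.core.spi.FilterReply;

// Hypothetical filter: accepts only events whose MDC carries the given
// process type/ID pair, denies everything else.
public class ProcessLogFilter extends Filter<ILoggingEvent> {

    private final String processType;
    private final String processId;

    public ProcessLogFilter(String processType, String processId) {
        this.processType = processType;
        this.processId = processId;
    }

    @Override
    public FilterReply decide(ILoggingEvent event) {
        // The MDC key is the process type, its value the process ID.
        String value = event.getMDCPropertyMap().get(processType);
        return processId.equals(value) ? FilterReply.ACCEPT : FilterReply.DENY;
    }
}
```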
Note
Only one log process per unique process type can be active within a thread at a time.
Currently, three different types exist:

| Appender Class | Creating Pipelet | Remarks |
| --- | --- | --- |
| `com.intershop.beehive.core.capi.log.ProcessSingleFileAppender` | `CreateProcessFileLogAppender` | Uses the properties `intershop.logging.dynamicfiletarget.encoding` and `intershop.logging.dynamicfiletarget.pattern`. |
| `com.intershop.beehive.core.capi.log.ProcessSingleXMLFileAppender` | `CreateProcessFileLogAppender` | The written XML log files can be read via `com.intershop.beehive.core.capi.log.ProcessSingleXMLFileAppender` or the pipelet `ReadXMLLogFile`. |
| `com.intershop.beehive.core.capi.log.ProcessTemplateBasedAppender` | `CreateProcessTemplateBasedLogAppender` | |
If such a process appender is not explicitly assigned to specific log categories, the categories configured by the property `intershop.logging.dynamictarget.categories` are used by default.
Note
Process appenders are not visible in the appender lists in the SMC. However, the logging status view may provide messages that reflect the creation and/or activity of such appenders.