ICM 7.10 now supports MS SQL Server as an additional database type besides Oracle.
DBMigrate for Microsoft SQL Server is supported starting with 7.10.2 GA only. This implies that customers who want to migrate must first migrate to 7.10 on Oracle before they can begin migrating to Microsoft SQL Server.
For more information on how to configure MS SQL Edition and Intershop 7 to be used as a development database, see Guide - Setup Microsoft SQL Server as Intershop Development Database.
For more information on how to configure SQL Server and Intershop 7 for a production environment, see Guide - Setup Microsoft SQL Server as Intershop Production Database.
For details on deployment, see Guide - 7.10 Database Deployment Configuration.
Intershop recommends using the data created by DBInit for development purposes. If only DBMigrate has been used so far, the DDL preparers in DBMigrate may need to be converted to DBInit preparers so that the initial table structures and indexes can be created.
As of 7.10 GA, DBInit does not clear the database contents by default. To do this, an additional parameter must be added to the DBInit call:
```
cd %IS_HOME%\bin
dbinit.bat --clean-db=<FLAG>
```
The only available values for the cleanup are `yes` and `no`.
It is also possible to perform only the DDL database preparation steps, which create the tables, indexes, stored procedures, and functions:
dbinit.bat --ddl=true
More details can be found in Cookbook - DBInit.
If a custom preparer is to be run only for a specific database, the additional `sqldialect` parameter can be used. The preparer is only executed if the database system matches the `sqldialect`. Currently, the dialects `Oracle` and `Microsoft` are available. In the following example, the script is only executed if the system is running on an Oracle database system:
```properties
pre.Class1 = com.intershop.beehive.core.dbmigrate.preparer.database.ExecuteSQLScriptPreparer \
    resources/bc_rating_orm/dbmigrate/scripts/RecreateRatingCommentIndexes.sql \
    sqldialect=Oracle
```
There is no difference compared to running DBMigrate on Oracle.
The SQL syntax of Oracle Database and Microsoft SQL Server differs slightly. However, it is possible to provide both SQL dialects in the same file. For more details, see Reference - Query File Syntax and Cookbook - Mastering Database Independence.
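As a simple, hedged illustration of such dialect differences (not of the query file syntax itself, which is described in the referenced documents), the following statements against a made-up `products` table fetch the same result in both dialects:

```sql
-- Oracle dialect: NVL, SYSDATE, and ROWNUM for row limiting
SELECT NVL(name, 'unknown') AS name, SYSDATE AS queried_at
  FROM products
 WHERE ROWNUM <= 10;

-- Microsoft SQL Server dialect: ISNULL, GETDATE(), and TOP for row limiting
SELECT TOP 10 ISNULL(name, 'unknown') AS name, GETDATE() AS queried_at
  FROM products;
```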
The syntax for stored procedures in Microsoft SQL Server also differs from that in Oracle Database. See SQL Server Create Procedure for further details.
For Microsoft SQL Server, the stored procedures are located in /staticfiles/cartridge/lib/resources/{cartridge}/dbinit/scripts/microsoft. Existing Oracle stored procedures must be converted to the Microsoft SQL Server syntax.
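As a rough, hedged sketch of the syntactic differences, the following hypothetical procedure (the procedure and table names are made up) counts the rows of a table in both dialects:

```sql
-- Oracle (PL/SQL) version of the hypothetical procedure
CREATE OR REPLACE PROCEDURE sp_count_products(p_count OUT NUMBER)
IS
BEGIN
    SELECT COUNT(*) INTO p_count FROM products;
END;
/

-- Microsoft SQL Server (T-SQL) version of the same hypothetical procedure
CREATE OR ALTER PROCEDURE sp_count_products
    @p_count INT OUTPUT
AS
BEGIN
    SELECT @p_count = COUNT(*) FROM products;
END;
```

Note the different parameter syntax (OUT vs. OUTPUT, @-prefixed variables) and the missing INTO clause in T-SQL; real procedures usually require more extensive rework.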
Sometimes it is necessary to run specific code for different databases. To determine which database is currently in use, you can use the `com.intershop.beehive.core.capi.cartridge.SQLDialect` enum:
```java
...
import com.intershop.beehive.core.capi.cartridge.SQLDialect;
...
if (SQLDialect.ORACLE.isActive())
{
    // Oracle specifics
}
else if (SQLDialect.MSSQL.isActive())
{
    // Microsoft SQL Server specifics
}
else
{
    throw new IllegalStateException("Unknown SQL dialect " + SQLDialect.getCurrentSQLDialect());
}
...
```
To migrate the data from Oracle to Microsoft SQL Server or Azure SQL Managed Instance, use the Microsoft SQL Server Migration Assistant (SSMA) for Oracle tool.
Run DBInit with the DDL-only option (`dbinit -classic --ddl=true`): This is necessary because the assistant does not create the necessary table structure and indexes.

Provide the login parameters for the databases again.
Note
After analyzing both databases, a summary with failed prerequisites is displayed. Please note that warnings for tables which already contain data can be ignored.
Check the synonyms with:

```sql
select name, base_object_name from sys.synonyms;
```

If the `base_object_name` values look like `[icmdbXYZ].[dbo].[CLASSIFICATIONCOUNT$1]`, i.e. have the database and schema qualifiers, you need to remove them, e.g. for `[CLASSIFICATIONCOUNT$1]`:

```sql
DROP SYNONYM CLASSIFICATIONCOUNT;
CREATE SYNONYM [CLASSIFICATIONCOUNT] FOR [CLASSIFICATIONCOUNT$1];
```
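A minimal sketch of a script that rewrites all qualified synonyms of the current database in this way (assuming the unqualified base objects are resolvable in the current schema) could look like this:

```sql
-- Sketch: recreate every synonym whose base object still carries
-- database/schema qualifiers so that only the object name remains.
DECLARE @name nvarchar(128), @base nvarchar(128), @sql nvarchar(max);

DECLARE syn_cursor CURSOR FOR
    SELECT name, PARSENAME(base_object_name, 1)
    FROM sys.synonyms
    WHERE base_object_name LIKE '%.%';  -- only qualified base objects

OPEN syn_cursor;
FETCH NEXT FROM syn_cursor INTO @name, @base;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = CONCAT('DROP SYNONYM ', QUOTENAME(@name),
                      '; CREATE SYNONYM ', QUOTENAME(@name),
                      ' FOR ', QUOTENAME(@base), ';');
    PRINT @sql;
    EXEC sp_executesql @sql;
    FETCH NEXT FROM syn_cursor INTO @name, @base;
END
CLOSE syn_cursor;
DEALLOCATE syn_cursor;
```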
When performing the steps described in the previous chapter, keep in mind that some of these steps can take a considerable amount of time.
The actual execution time required depends on the following parameters:
The following times were measured as an example to give an idea of the expected duration (based on the inSPIRED-b2x demo data):
| Step | Duration |
|---|---|
| 7e) Convert Schema | 61 minutes |
| 8) Migrate Data | 9 minutes |
| 12+13) Synchronize Synonyms with Database | 0 minutes |
| 14+15) Synchronize Sequences with Database | 0 minutes |
| 16) Execute script | 0 minutes |
Microsoft suggests configuring one to two connections per core.
Some customers want to reduce the downtime of their system. A two-step migration can reduce the downtime significantly: in the first step, all "non-transactional" data is transferred (such as products, content, configurations, ...); in the second step, all "transactional" data is transferred (such as users, baskets, orders, ...).
Intershop provides a list of tables that is suitable for most customers.
Note
In addition, some businesses require changes to non-transactional data on short notice. Therefore, it is important to inform the business users about the transfer period. It may also require additional effort to apply such modifications a second time on the new platform.
See the attachment for each step (step_1_nontransactional.txt, step_2_transactional.txt).
Updating XML and text indexes takes a lot of time. To speed up the migration, they can be disabled beforehand and rebuilt after the migration.
The following sample script disables all non-primary-key indexes on tables that have at least one XML index:
```sql
DECLARE @indexes TABLE (rowid INT IDENTITY(1,1), indexname nvarchar(128), tablename nvarchar(128));
DECLARE @i INT = 1, @count INT, @indexname nvarchar(128), @tablename nvarchar(128), @sql NVARCHAR(MAX);

INSERT INTO @indexes
SELECT name AS indexname, OBJECT_NAME(object_id) AS tablename
FROM sys.indexes
WHERE is_primary_key = 0
  AND object_id IN (SELECT object_id FROM sys.indexes WHERE type_desc = 'XML')
ORDER BY OBJECT_NAME(object_id), name

SELECT @count = COUNT(1) FROM @indexes

WHILE @i <= @count
BEGIN
    SELECT @indexname = indexname, @tablename = tablename FROM @indexes WHERE rowid = @i
    SET @sql = CONCAT('ALTER INDEX ', @indexname, ' ON ', @tablename, ' DISABLE')
    PRINT @sql
    EXEC sp_executesql @sql
    SET @i = @i + 1
END
```
The following script shows how to rebuild the indexes:
```sql
DECLARE @indexname nvarchar(128), @tablename nvarchar(128), @sql NVARCHAR(MAX);

SELECT TOP 1 @indexname = name, @tablename = OBJECT_NAME(object_id)
FROM sys.indexes
WHERE is_disabled = 1 AND OBJECT_SCHEMA_NAME(object_id) = SCHEMA_NAME()

WHILE @indexname IS NOT NULL
BEGIN
    SET @sql = CONCAT('ALTER INDEX ', @indexname, ' ON ', @tablename, ' REBUILD')
    PRINT @sql
    EXEC sp_executesql @sql

    SET @indexname = NULL
    SET @tablename = NULL

    SELECT TOP 1 @indexname = name, @tablename = OBJECT_NAME(object_id)
    FROM sys.indexes
    WHERE is_disabled = 1 AND OBJECT_SCHEMA_NAME(object_id) = SCHEMA_NAME()
END
```
A search query on an XML column, such as the XML product attributes, is much slower than in Oracle. Either use direct custom attributes or, as the recommended solution, add these attributes to the Solr search index.
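For illustration, a query of this kind might look as follows; the table and column names are purely hypothetical and only meant to show the pattern that tends to be slow on SQL Server:

```sql
-- Hypothetical example: filtering products by an attribute stored in an XML column.
-- Such XQuery-based predicates cannot use regular indexes efficiently,
-- which is why direct attributes or a Solr index are preferable.
SELECT productid
FROM productattributes
WHERE attributexml.exist('/attributes/attribute[@name="color"]') = 1;
```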
Check the following document for replication with SQL Server: Guide - Replication with Microsoft SQL Server/Azure SQL Managed Instance.
The dev_query cartridge provides several database-related tools that help developers.
For a comprehensive guide, see Cookbook - Developer Tools dev_query.
To check the correctness of the queries, a new test has been added.
See Recipe: Create Test Case for a Particular Query for its usage.
All queries on the server can be tested for correct syntax. To do so, execute tests.unit.com.intershop.dev.query.tests.QuerySyntaxCheckTest, which can be found in the dev_query cartridge.
It also checks the queries for suspicious elements that would not run on SQL Server.
There is a prepared launch configuration to start the test. The following attributes are required:
```
// Microsoft SQL Server configuration
-DCONNECTION_URL=jdbc:sqlserver://<host>:<port>;databaseName=<database name>

// Oracle configuration
# using sid          : jdbc:oracle:thin:@<HOST>:<PORT>:<SID>
# using service Name : jdbc:oracle:thin:@//<HOST>:<PORT>/<SERVICE_NAME>

-DDB_USER_NAME=<db user name>
-DDB_USER_PASSWORD=<db password>
```
Note
This test does not check if the query is semantically correct.
Several tools have been added to help developers analyze the queries sent to the database system.
These tools are located in the dev_query cartridge.
It is possible to record all queries sent to a database system, see Recipe: Record Query Call Log Files.
The recorded queries can be replayed against another database system to verify that the syntax and semantics remain the same or to measure the performance of the database system.
This allows you to check for differences between databases.