Can you migrate from Derby to MSSQL for the content store? When we originally did the install, Derby was the only option. In the past few versions, support for SQL Server and Oracle has been added. Your documentation says not to use Derby for large sites; ours is already at 500 pages and growing, so we need a future-proof solution.
We have not developed any migration tools to date; however, this is something we are looking at for the first half of 2013. We’ll be eager to work with you on this topic.
I would also like to be able to do this. Our site is over 600 pages, and we need to look at migrating as well.
Any news, updates, or a timeline for the completion of this?
Your timing here is uncanny. We have just today begun testing a new process to migrate content from Derby to MySQL. I cannot say definitively at this point how long the testing process will take, but I can assure you that progress is being made. I will update this thread once we have a better idea as to when this process might be made available to our customers.
Our documentation to migrate Percussion from Derby to MySQL is now available. Note that this is a high risk process, and should only be performed by someone with a solid familiarity with database technologies:
Hi Nathaniel. Thanks for providing us with this migration option; however, due to limitations within our system, we are only able to use MSSQL. A while ago we were told a migration option for MSSQL would be in the works; is this still true? We desperately need a solution other than Derby, ASAP.
We have no reason to believe that this same process will not work for MS SQL Server databases as well; it's just that no one on our end has had a chance to test it yet. If you are indeed looking to migrate ASAP, as long as you back everything up first, I certainly wouldn't stop you from attempting this process with a SQL Server destination database. I'm sure other customers would be very curious to know what your results are. The one main note is that you will need to add MSSQL drivers to OpenDBCopy's "lib" folder and update the "SQLDrivers.xml" file with the driver information. Their doc outlines how to set this up: http://opendbcopy.sourceforge.net/use…
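Before kicking off a long migration, it may be worth confirming that the driver jar you dropped into the "lib" folder is actually visible on the classpath. A minimal standalone check might look like the following (my own sketch, not part of openDBcopy; "net.sourceforge.jtds.jdbc.Driver" is the standard jTDS driver class name):

```java
// Quick sanity check: is the jTDS driver jar on the classpath?
// Run with the same -classpath that start.bat passes to openDBcopy.
public class DriverCheck {
    public static void main(String[] args) {
        try {
            // The jTDS driver registers itself when its class is loaded
            Class.forName("net.sourceforge.jtds.jdbc.Driver");
            System.out.println("jTDS driver found on classpath");
        } catch (ClassNotFoundException e) {
            System.out.println("jTDS driver NOT found on classpath");
        }
    }
}
```

If this prints the "NOT found" line, openDBcopy will not be able to open the destination connection either, regardless of what SQLDrivers.xml says.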
Yesterday I attempted the migration from Derby to MSSQL. I will break this down into multiple posts for the sake of room and time.
The first thing I should note is that I had a difficult time getting openDBcopy to run on my 32-bit Windows machine. The opendbcopy launch command that runs by default (via the start.bat file) is:
"java -Xms1024m -Xmx3072m -classpath ";lib\opendbcopy.jar;lib\log4j-1.2.8.jar;lib\xerces.jar;lib\jdom.jar;lib\ostermillerutils_1_03_00.jar;;" opendbcopy.controller.MainController"
This command designates 3GB of heap to the opendbcopy application. Unfortunately, even though a 32-bit machine can address up to 4GB of RAM, a single 32-bit Windows process is given far less usable address space than that, so a 32-bit JVM cannot reserve a contiguous 3GB heap. I was receiving this error when trying to open the application:
"Error occurred during initialization of VM
Could not reserve enough space for object heap
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit."
To get around this error, I used the following command to run opendbcopy with less memory:
"java -Xms512m -Xmx1350m -classpath ";lib\opendbcopy.jar;lib\log4j-1.2.8.jar;lib\xerces.jar;lib\jdom.jar;lib\ostermillerutils_1_03_00.jar;;" opendbcopy.controller.MainController"
The application runs fine on my 32-bit Windows 7 machine when opened with this command.
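If you want to confirm how much heap the JVM actually granted you before trusting a particular -Xmx value, a tiny check like this (my own sketch, run with the same flags you put in start.bat) will tell you:

```java
// Prints the maximum heap the running JVM was able to reserve, so you
// can verify that the -Xmx value in start.bat fits a 32-bit process.
public class HeapCheck {
    public static void main(String[] args) {
        long maxBytes = Runtime.getRuntime().maxMemory();
        System.out.println("Max heap available: " + (maxBytes / (1024 * 1024)) + " MB");
    }
}
```

Running it as "java -Xmx1350m HeapCheck" should report something close to 1350 MB; if the JVM refuses to start at all, the requested heap is too large for the process.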
I will get into the next issue with my next post.
The second piece of information everyone should know is what driver they will likely need to use for the “Destination Database Connection” setting in openDBcopy.
For Java applications like openDBcopy that need to communicate with Microsoft SQL Server, you need a JDBC driver for SQL Server. In openDBcopy, the driver name is "jTDS MSSQL Server JDBC Driver (http://jtds.sf.net)"
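For anyone filling in the destination connection fields, the jTDS connection URL has the form "jdbc:jtds:sqlserver://<host>:<port>/<database>". A small helper sketch (the host, port, and database name below are placeholders, not anything from my install) shows the shape:

```java
// Builds a jTDS connection URL for the "Destination Database
// Connection" screen in openDBcopy. Values here are placeholders.
public class JtdsUrl {
    static String url(String host, int port, String db) {
        // jTDS URL form: jdbc:jtds:sqlserver://<host>:<port>/<database>
        return "jdbc:jtds:sqlserver://" + host + ":" + port + "/" + db;
    }

    public static void main(String[] args) {
        // 1433 is SQL Server's default port
        System.out.println(url("dbserver.example.com", 1433, "percussion"));
        // prints: jdbc:jtds:sqlserver://dbserver.example.com:1433/percussion
    }
}
```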
Next, under "Table Mapping" I noticed a difference in some tables between an older Derby install and an MSSQL install. The following tables do not seem to exist in MSSQL:
CONTENTSTATUS_UPG (for updates?)
CT_PERCRSSASSET_BAK (for backups?)
PSX_CONTENTLIST_EXPANDER_PARAM_BAK (for backups?)
PSX_CONTENTLIST_GEN_PARAM_BAK (for backups?)
RXEDITION_UPG (for updates?)
TRANSITIONS_BAK (for backups?)
The only one I was concerned about was the “CT_PAGE_PAGE_CATEGORIES_TREE_SET” table.
I will post one more time with information about where the migration failed.
The next step is where the conversion failed completely. I am hoping you can help me with it. I believe it may be a field data-type mapping issue, but I have not had much time to explore it. A few hours into the data migration, I received the following error:
opendbcopy unable to convert between org.apache.derby.client.am.Clob
Unfortunately, I lost the log data when I reopened openDBcopy and the log was overwritten, but the above error and the table name are all the log recorded. The error occurred on the "CT_PAGE" table, which has a number of "ntext" data fields. My theory is that openDBcopy is unable to put "Clob" type data into an "ntext" field. If you have some insight into this, please let me know.
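One fallback I'm considering, if openDBcopy really can't map a Derby CLOB onto an "ntext" column, is to copy those columns myself: read each CLOB out as a plain String and re-insert it with PreparedStatement.setString(). This sketch shows the CLOB-to-String step only (it uses SerialClob from the JDK as a stand-in for a value fetched from Derby; I have not tested this against real Percussion data):

```java
import java.io.IOException;
import java.io.Reader;
import java.sql.Clob;
import java.sql.SQLException;
import javax.sql.rowset.serial.SerialClob;

// Reads a java.sql.Clob fully into a String so it can be re-inserted
// into an MSSQL "ntext" column as ordinary character data.
public class ClobToString {
    static String read(Clob clob) throws SQLException, IOException {
        StringBuilder sb = new StringBuilder();
        try (Reader r = clob.getCharacterStream()) {
            char[] buf = new char[8192];
            int n;
            while ((n = r.read(buf)) != -1) {
                sb.append(buf, 0, n);  // accumulate the CLOB contents
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        // SerialClob stands in for a CLOB fetched from a Derby CT_PAGE row
        Clob demo = new SerialClob("<p>page body</p>".toCharArray());
        System.out.println(read(demo));
        // prints: <p>page body</p>
    }
}
```

This would be slow for a big content store, but for a handful of failing tables it may be enough to get past the conversion error.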
Chris, thanks, great note. I had maxed out the script's memory parameters on my local box when I was testing this, due to out-of-memory errors I was encountering, but I hadn't considered the process memory limitations of 32-bit machines. I will update that script with params that will work for both 32- and 64-bit. Thanks again!
There is not much insight I can give into this until I have a chance to block off some time and run through this for MSSQL (at which point I’ll update the doc for both database types). I’m not sure what would cause that error. I can say that I am unfamiliar with this “CT_PAGE_PAGE_CATEGORIES_TREE_SET” table (it doesn’t exist in an out-of-box install). Is there any chance that someone on your end may have created this for some custom web application? At any rate, I wouldn’t worry about leaving that data behind.