
TOP FIVE THINGS TO ENSURE IN AN ECM UPGRADE/MIGRATION

Aakash Sharma
Lead Engineer Documentum, ECMP, HCL Technologies India
Introduction

Today's content-storage landscape has been shaped by a confluence of evolving realities. On the one hand, content storage has become both more reliable and drastically less expensive; at the same time, government- and industry-mandated regulatory compliance initiatives have made content storage and management a stringent requirement for companies of all sizes.

Additionally, companies have to stay on top of new and emerging technologies, or they risk becoming virtually obsolete in their competitive field. As such, they are continually looking toward the latest content-storage solutions and migrating or upgrading from their legacy environments.

With content management now more important than ever, and with the sheer volume of data now being stored by companies worldwide, data migration has become one of the hottest market segments of the information technology (IT) industry.

Recently, there has been a lot of discussion about content migration: in the blogosphere, on message boards, and within my own company. With content management giants like EMC and IBM releasing new versions of their content management software, there is certainly plenty of incentive for everyone to upgrade or migrate to the new versions. Or, perhaps, everyone is virtualizing their infrastructure and needs to move Docbases and servers to VMware. Whatever the reason, everyone seems to be talking about it, including me.

In this white paper I present a quick survey of the things we are doing in our current project with our client Syngenta. As a large portion of my work currently involves content migration, the diversity of the projects and the degree to which content migration was required started me thinking about how each project should approach its migration needs. Some projects required the simplest of approaches, while others required “a big hammer.” The “big hammer” approach would, of course, address the needs of a simple migration, but at too great a cost. The converse was not true: the simple approach would not meet the needs of the complex migrations. In an effort to best address the needs of each project (i.e., cost, effort, time), I drew upon personal experiences and those of colleagues to put together this brief overview of the top five things to ensure in a migration or upgrade of an ECM platform.

1. Estimate the time and effort right

Any migration is difficult and must be planned carefully…


This involves determining how much time will be required to complete the migration from the old system to the new one. All the hard work you have put into developing the migration utilities and approaches can come to nothing if you get the time estimates wrong. This is also the topic your client will be most curious about and most willing to discuss (all of a sudden, the technical talks on migration that earned a yawning ‘OK’ become engaging the moment you put forward the discussion on the time required for the migration).

Figuring this out can be a difficult task. Obviously, to some extent you can measure how long it takes to migrate a small batch of documents and then extrapolate based on the total number of documents to be moved from the old system to the new one, but depending solely on that would not be entirely correct. The following rules may help (a rough extrapolation sketch follows the list):

• Take into account not only the number of documents to be migrated but also their content size. The number of documents may be small, but if the documents are very large they will take longer to migrate.

• Make sure the count and size estimates include all versions and renditions of a document. For example, there may be TIFF renditions of a document that also have to be migrated across, and TIFFs are generally large.

• Include in your plan sufficient time for backups, restoration of backups, and other activities that may be prerequisites before you kick off the migration or follow-up tasks after the migration activities are complete.

• Include some time for problems that may be encountered during the migration and have to be resolved within the migration window itself. Examples of such problems are halts during migration due to metadata inconsistencies, where you have to make manual corrections and then proceed with the migration.

• Keep a record of the time taken when you perform the migration in the offshore and pre-production environments. This data helps in estimating the time for the main migration in the production environment.

• Take the network setup and speed into consideration. If the network speed is good and the destination system is on the same network from which the migration activities are carried out, there is less chance of network-traffic-related issues and the data transfer will also be faster.

• Lastly, keep some buffer time for any unforeseen problems that may affect the migration.
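
To make the extrapolation concrete, here is a minimal Python sketch of the kind of back-of-the-envelope estimate described above. The throughput figures, buffer factor and function name are illustrative assumptions, not measurements or utilities from any particular project.

    # Rough migration-time extrapolation from a timed sample batch.
    # All inputs are illustrative; replace them with figures measured in
    # your own offshore/pre-production trial runs.

    def estimate_migration_hours(sample_docs, sample_gb, sample_hours,
                                 total_docs, total_gb, buffer_factor=1.25):
        """Extrapolate the total migration time from a timed sample batch.

        The estimate blends a per-document rate and a per-gigabyte rate and
        takes the larger of the two, so that either many small files or a
        few very large renditions can dominate the schedule.
        """
        hours_by_count = sample_hours * (total_docs / sample_docs)
        hours_by_size = sample_hours * (total_gb / sample_gb)
        raw_estimate = max(hours_by_count, hours_by_size)
        return raw_estimate * buffer_factor  # buffer for halts, retries, fixes

    if __name__ == "__main__":
        # Example: a 5,000-document / 40 GB sample batch took 6 hours.
        est = estimate_migration_hours(sample_docs=5000, sample_gb=40,
                                       sample_hours=6,
                                       total_docs=250000, total_gb=3200)
        print(f"Estimated migration window: {est:.0f} hours (incl. buffer)")

Remember that backups, restores and the other buffers listed above still have to be added on top of such an extrapolation.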

2. ‘Freeze’ the source system at the right time

Before content is extracted from the old system, the users who are currently working on documents need to complete their tasks and check all such documents back into the system. This point is normally referred to as the ‘freeze point’.

It can be difficult to manage the migration of content while new content is being created or imported by users in the existing system.

For this reason, a ‘freeze’ is normally instituted, preventing any further changes by users to the content present in the system. This gives a little breathing room for the migration to be completed without having to deal with ongoing updates.

Determining when to set the ‘freeze point’ is critical. The extracted content cannot be migrated to the production system straight away; it is too big a risk to migrate all the content directly into production (a customer may get nightmares at the mere thought of it). Based on this argument, it seems reasonable to perform the upgrade/migration first on a test or pre-production system. But that brings us back to the initial question: when do we set the freeze point? Does it mean we freeze the old system at the very moment we start the upgrade/migration on the pre-production/test system? That would mean the old system remains frozen until the content has been migrated to pre-production, tested, all issues and bugs have been fixed, and the final migration to production is over. But what would the users of the old system do during this time? The lag between the freeze point on the old system and the final system being ready with migrated content may extend to a month. Since no company wants to keep its users on a ‘paid holiday’, the notion of setting the freeze point at the time of the test/pre-production migration can be dismissed.

So we can say that it is best to set the freeze point only when you are ready for the production migration. Remember, the longer the freeze, the harder it will be to restart and re-engage the content authors and owners.

As discussed, there are time lags between the production migration and the migrations onto the pre-production and test systems. To overcome this we followed a ‘delta migration’ approach. Under this approach, the moment you are ready to migrate onto the test or pre-production system you take an extract of documents from the repository. This does not, however, mean freezing the old system: the end users can continue to work in the normal manner without knowing what is going on behind the scenes (obviously for their own good!).

Once all tests on the pre-production system have been completed, all issues have been resolved, and the green signal has been given for the production migration, we set the freeze point on the old system: no further modifications to the old content we are moving to the new system. But what do we gain from this? Would it not take the same amount of time to take the dump again and load it into the production system? Here we can reuse the old dumps taken for pre-production. You may object that many documents could have been changed, and new documents added, between the pre-production and production dates. That is correct, and it is precisely what the delta migration approach targets. During the delta migration we extract only those documents that have been modified or newly created between the date of the pre-production extract and the date of the production migration (i.e. the current date). In fact, the freeze point can be delayed until we have migrated the old dumps used for pre-production into the production system; normally these account for almost 90% of the content to be migrated. We then set the freeze point on the old system, extract the remaining 10% of the data, and load it into production. This should be a weekend activity: users can get back to their normal work on Monday if the freeze point was set the previous Friday. A sketch of how such a delta extract can be selected follows below.
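
A minimal sketch of how such a delta extract can be selected is given below. The attributes r_modify_date and r_creation_date are standard Documentum system attributes; the run_dql helper, the cutoff date and the exact DATE() format are assumptions for illustration, so adapt the statement to the DQL syntax of your Content Server version.

    # Illustrative sketch of selecting the "delta": documents modified or
    # created after the pre-production extract date. run_dql() is a
    # hypothetical placeholder for however your environment executes DQL
    # (an idql script, a DFC program, or the migration tool itself).

    from datetime import date

    def build_delta_dql(cutoff: date, doc_type: str = "dm_document") -> str:
        """Return a DQL statement selecting documents changed since the cutoff."""
        cutoff_str = cutoff.strftime("%m/%d/%Y")
        return (
            f"SELECT r_object_id, object_name, r_modify_date "
            f"FROM {doc_type} (ALL) "
            f"WHERE r_modify_date > DATE('{cutoff_str}', 'mm/dd/yyyy') "
            f"OR r_creation_date > DATE('{cutoff_str}', 'mm/dd/yyyy')"
        )

    if __name__ == "__main__":
        dql = build_delta_dql(date(2009, 3, 6))  # example pre-production extract date
        print(dql)
        # ids = run_dql(dql)  # hypothetical: run against the source Docbase
        # The returned ids then feed the extract/dump step for the delta load.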

3. Deciding on the migration tool

Depending upon the complexity of the migration, it is prudent to ask whether or not you need a specialized migration tool. There are plenty of migration tools available in the market, such as Buldoser for Documentum, and there are companies that specialize in building such tools. The challenge is how to make the migration quick, easy and complete, with minimal monetary investment in software.
Moreover, before looking for external migration tools, one should also explore the option of using the native migration utilities that a content management system may offer by default (free of cost, of course). In the case of Documentum this is the Dump and Load utility, and in the case of an Oracle database the Export and Import feature. Normally such utilities are suitable for smaller migrations; for example, when only a few thousand documents are being migrated into a new system without any change in business rules, it is more appropriate to use such a utility than to invest in a migration tool. Be aware of the shortcomings of such native tools before attempting a migration with them. Some tools are listed in the table below:

Tool           Vendor            URL
Open Migrate   TSG               www.tsgrp.com
Buldoser       Crown Partners    www.crownpartners.com
Tadbits        Theodore Watson   www.tadbits.com

4. Buckle your shoes – Preparation before migration

• Run consistency checker or clean-up jobs – Run the consistency checker jobs to identify inconsistencies in the current system that may affect the migration or upgrade, and resolve any issues identified by the Consistency Checker. There is no sense in migrating dirty or unnecessary data, or in coping with referential integrity issues during the migration.

• Maintain migration details – Record all the steps required before starting the migration of a batch in the form of a checklist. This checklist helps ensure that all members of the migration team follow the required steps; it also ensures you have documented all the necessary account names and passwords and estimated the needed disk space. During the migration, maintain sheets for the loads recording details such as start time, end time and batch size (a simple logging sketch follows this list). Such a checklist may feel a little dated, but it is still relevant and helpful.

• Connectivity and disk space – Verify that the connectivity among the servers and other devices involved in the migration/upgrade is adequate (i.e. bandwidth is sufficient to move large amounts of data) and that enough free disk space is available for the temporary storage needed during the migration.
• Verify downtime – Verify that your planned downtime for the migration is adequate and well advertised to the users. You may want to do some testing by copying large files across the network to gauge transfer rates. You should also plan your migration for a time when it will impact your customers the least. This usually means at night or over the weekend, but make contingency plans in case it takes longer and overlaps expected operational hours.

• Practice or trial runs – Practice the migration in an offshore or VMware environment before attempting the real thing. Practice often identifies unknown and unexpected situations or problems. It is advisable to set up an environment identical to the source environment offshore. This dummy environment may not hold the same volume of content, but it should simulate the real environment to a reasonable extent.
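
As referenced in the ‘Maintain migration details’ point above, here is a minimal Python sketch of a load log that records start time, end time and batch size for each load. The file name and column layout are illustrative choices, not a prescribed format.

    # Minimal load-log sketch: append one row per migration batch so that
    # offshore/pre-production timings can later feed the production estimate.

    import csv
    from datetime import datetime
    from pathlib import Path

    LOG_FILE = Path("migration_load_log.csv")
    FIELDS = ["batch_id", "doc_count", "size_gb", "start_time", "end_time", "notes"]

    def log_batch(batch_id, doc_count, size_gb, start, end, notes=""):
        """Append one completed batch to the shared migration log."""
        new_file = not LOG_FILE.exists()
        with LOG_FILE.open("a", newline="") as fh:
            writer = csv.DictWriter(fh, fieldnames=FIELDS)
            if new_file:
                writer.writeheader()
            writer.writerow({
                "batch_id": batch_id,
                "doc_count": doc_count,
                "size_gb": size_gb,
                "start_time": start.isoformat(timespec="seconds"),
                "end_time": end.isoformat(timespec="seconds"),
                "notes": notes,
            })

    if __name__ == "__main__":
        log_batch("BATCH-017", 4800, 38.5,
                  datetime(2009, 3, 6, 21, 0), datetime(2009, 3, 7, 2, 40),
                  notes="two metadata fixes applied mid-run")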

5. Build Migration/Upgrade Strategy

Every migration/upgrade is different and thus needs to be addressed accordingly. It is pivotal to build a migration strategy to overcome the challenges that the migration may present. Below are some scenarios and a strategy that may be appropriate for handling each:

Case     Change in Hardware   Change in Operating System Platform   Change in Object Model
Case 1   Y                    Y
Case 2   Y                    Y                                     Y

Case 1
The first case is very simple: the Content Server, the database server and the content files all sit on a homogeneous hardware platform. We need to move the Content Server to a more modern, powerful and robust platform. There is no need to change operating systems, hardware architecture, or Content Server versions. All we really want to do is clone the existing Docbase to the new hardware platform quickly and easily.

Challenge
The challenge in this scenario is how to make this migration quick, easy and complete. We also want to do it with minimal monetary investment in software tools.

Approach
In this scenario our prime objective is to ensure that the system functions smoothly, in the same way as before the transition to the new hardware platform. The first approach to this scenario should be to look for a native tool that could meet this challenge.

In the case of Documentum we can opt for Dump and Load operations. However, before using any such tool, make sure to review the latest releases of the Content Server and the capabilities of the tool itself. Tools like Dump and Load can miss several very important object types, such as workflow instances, object types without any instances, and registered tables. Be aware of the shortcomings of such default tools before attempting a migration with them.

Make sure to run the Consistency Checker and other jobs to know the state of the system after the migration, and resolve any issues it identifies. Compare the state of the system through audit or other reports in the new environment, and perform some basic operations on the migrated content to identify any inconsistencies in the new system. This comparison should be a good indicator of the thoroughness of your migration and the state of the new system; a simple count-comparison sketch follows below.
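
One simple way to quantify that comparison is to line up per-type object counts in the source and target repositories. The sketch below is illustrative: the type list is an assumption, and run_count_query stands in for whatever mechanism (an idql script, a DFC program, or a reporting tool) you use to execute a DQL count in each environment.

    # Illustrative post-migration check: compare per-type object counts
    # between the source and target repositories.
    # run_count_query(repo, dql) is a hypothetical helper that executes the
    # DQL statement against the named repository and returns the integer result.

    TYPES_TO_CHECK = ["dm_document", "dm_folder"]  # illustrative list; extend as needed

    def compare_counts(run_count_query, source_repo, target_repo):
        """Report per-type count differences between two repositories."""
        mismatches = {}
        for type_name in TYPES_TO_CHECK:
            dql = f"SELECT count(*) FROM {type_name}"  # add (ALL) to include all versions
            src = run_count_query(source_repo, dql)
            tgt = run_count_query(target_repo, dql)
            print(f"{type_name:<14} source={src:>10}  target={tgt:>10}")
            if src != tgt:
                mismatches[type_name] = (src, tgt)
        return mismatches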

The bottom line with this approach is that it is simple and utilizes built-in capabilities and tools. However, there are some potentially serious drawbacks to such an approach that you should be aware of before attempting it.

Alternate Approach
Alternatively, if the migration described by this scenario is really just a virtualization of your existing Content Server, then a simpler approach could be to use VMware's Converter utility. The Converter utility creates an exact copy of a physical system as a VMware image.

Case 2
The second scenario to consider is one that contains a little bit of everything. We must deal with a change in hardware and, in addition, the content of the source Docbase must be split into two different document types based upon a business rule, the object name will become a combination of other metadata values, and the title must be trimmed.
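
To make the transformation concrete, here is a minimal Python sketch of the kind of per-document mapping rule this scenario implies. The target type names, the attribute names beyond the standard object_name and title, and the ‘invoice’ business rule are purely hypothetical; in practice such rules are usually configured in the migration tool rather than hand-coded.

    # Hypothetical per-document mapping rule for Case 2: split documents into
    # two target doctypes by a business rule, derive object_name from other
    # metadata, and trim the title. All names and rules here are illustrative.

    MAX_TITLE_LEN = 80  # assumed title limit in the target system

    def map_document(doc: dict) -> dict:
        """Transform one source document's metadata into its target form."""
        target = dict(doc)  # start from a copy of the source attributes

        # Business rule (hypothetical): route finance invoices to their own doctype.
        if doc.get("department") == "Finance" and "invoice" in doc.get("keywords", []):
            target["r_object_type"] = "acme_invoice"
        else:
            target["r_object_type"] = "acme_document"

        # object_name becomes a combination of other metadata values.
        target["object_name"] = f"{doc['department']}_{doc['doc_number']}"

        # Trim the title to the target system's limit.
        target["title"] = doc.get("title", "")[:MAX_TITLE_LEN]
        return target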

Challenge
The challenges in this scenario are to identify, segregate and move content
based upon specific business rules, and apply metadata conversion and
mapping rules as the content is migrated from one repository to another. The
hardware, platform, OS and databases are also different between the source
and target environments.

Approach

The complexity of this migration demands a custom or third-party tool. Although custom-written migration tools are always an option (and fun to write!), their cost and limited applicability from one migration to the next are usually prohibitive, so they will not be considered here. Instead, this approach focuses on the use of third-party tools. A number of third-party tools are available to accomplish this kind of migration; in fact, in my current project we address this case by using the Buldoser tool from Crown Partners.

Summary

Beyond the cases discussed above, other cases can be constructed that are more complex and demanding, but those are out of scope for this paper. The essence is to analyze the situation and decide on the most suitable option by which the ETL operation can be supported. Moreover, ECM upgrade/migration is a specialized job, and you will find that you get better at analyzing and predicting things with each migration. I hope the five do's and don'ts of migration/upgrade that I have tried to discuss here prove useful. I wish you a smooth upgrade or migration, whichever you may undertake.

References and Resources

http://www.f5.com/news-press-events/news/2008/20080228.html

http://developer.emc.com/developer/edn_redirect_secure.htm?redirectURL=http://developer.emc.com/developer/Articles/SevenDCTMJobs.pdf
