IBM Information Management Software Portfolio Overview

Posted by Frank Fillmore on April 5, 2012 under DB2 for i, DB2 for Linux Unix Windows, DB2 for VSE&VM, DB2 for z/OS, DB2 Migrations, IBM Smart Analytics System, Informix, InfoSphere, Netezza, Optim, Q-Replication.

The recording of the April 5, 2012 webinar “IBM 2012 Information Management Product Strategy and Portfolio Overview” is located here.

The presentation materials: DB2 101 – IBM Information Management Software Portfolio Overview

Successful Oracle to DB2 Migration at a Top 50 Financial Services Firm

Posted by Frank Fillmore on July 25, 2011 under DB2 for Linux Unix Windows, DB2 Migrations, Information on Demand Conference, InfoSphere, Oracle, Q-Replication.

Whew!  The last four months have been a whirlwind.  A colleague, Jim Herrmann, and I have been working since March on data migration from Oracle to DB2 for a worldwide Top 50 integrated financial services firm.  A team of specialists from IBM and The Fillmore Group – including Joe Geller, Teresa Wan, Rebecca Bond, and John Schatko – assisted the customer with all aspects of application performance testing and tuning, code remediation, and data migration.  On Sunday, July 17 the production cutover occurred – and it was a huge success.  Props especially to IBMers Jeff Richardson, Jay Lennox, and Christian Zentgraf for providing technology leadership.

One of the keys to the migration of the data was the need to limit the site outage to a window no longer than 12 hours, from midnight to around noon on Sunday.  For an earlier migration, smaller in terms of data volume, the customer had successfully used the IBM Data Movement Tool (IDMT).  Given the metrics of the latest migration, the IDMT approach alone would have required an outage of up to 5 days – unacceptable for an online banking system on which millions of customers rely to pay bills and check balances.

The Fillmore Group proposed using InfoSphere Replication Server (Q Replication) to copy updates to the live OLTP data from Oracle to DB2 while the batch IDMT was running asynchronously to move the data from Oracle to DB2 tables.  By using IDMT and Q Replication in combination, we were able to reduce the outage to a much more manageable 12 hours.

We followed a template The Fillmore Group had used in 2010 at Constant Contact for their no-outage upgrade from DB2 for Linux, Unix, Windows v8.2 to DB2 v9.5.  Dan Berry and Sankar Padhi delivered a detailed technical overview of that project at the IBM Information On Demand conference last year: Zero Downtime Upgrades Using Q-REP at Constant Contact.  The Oracle migration was incrementally more complex, but not all that difficult.

Q Replication Dashboard v10.1 Fixpack 1 Now Available

Posted by Frank Fillmore on June 16, 2011 under InfoSphere, Q-Replication.

On June 9, 2011 IBM released Fixpack 1 of the Q Replication Dashboard v10.1.  Cool new features include an Alert Manager that sends e-mail messages when replication thresholds are breached (e.g. end-to-end latency exceeds 30 seconds).  After some tire-kicking, I will let you know if this is a viable substitute for the asnmon monitoring feature of Q Replication.  The dashboard has also been integrated with Tivoli Enterprise Portal.

Full details and the free download are at

Washington Area DB2 Users Group, June 15th

Posted by Kim May on June 7, 2011 under Baltimore Washington DB2 Users Group, DB2 Education, DB2 for Linux Unix Windows, DB2 Migrations, InfoSphere, International DB2 Users Group (IDUG), Optim, Uncategorized.

The Mid-Atlantic IBM Information Management technical support team has announced the date and location of the first Washington Area DB2 Users Group meeting.  This group is being coordinated by IBM’s Warren Heising and Prem Rangwani to provide education and networking to local Information Management users.  Please contact Warren Heising at IBM if you would like to attend.  Details below.


A Workaround for LOADDONE in Q Replication

Posted by Frank Fillmore on May 25, 2011 under InfoSphere, Q-Replication.

Here is another technique to use only when standard practices won’t work.  Let’s say you are performing a manual load in Q Replication (either InfoSphere Replication Server or DB2 Homogeneous Replication Feature).  Ordinarily you would INSERT a LOADDONE into the IBMQREP_SIGNAL table.  The INSERT into the Q Replication Control Table is logged and Q Capture will see it.

BUT, what if Q Capture is way behind?  Various operational problems could cause this.  The manual load has actually completed, but the WebSphere MQ spill queues are growing because Q Apply doesn’t know that.  It might take hours for Q Capture to get to the point in the log where the LOADDONE is posted.

Here are the steps to bypass LOADDONE and initiate the draining of the spill queues by Q Apply:

  1. Stop Q Capture.
  2. Wait for the Receive queue to empty.  The reason for this is that Q Capture sets an indicator in the message placed on the Send queue specifying that a subscription is still being manually loaded.
  3. Stop Q Apply.
  4. For each subscription where the LOAD has completed: change the STATE value in the IBMQREP_TARGETS table from ‘E’ to ‘F’ and change the STATE value in the IBMQREP_SUBS table from ‘L’ to ‘A’.
  5. Start Q Apply.
  6. Start Q Capture.
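
In SQL terms, step 4 amounts to a pair of UPDATEs against the control tables – one at the target, one at the source.  A minimal sketch, assuming the default ASN control-table schema and a hypothetical subscription named MY_TAB_0001:

```sql
-- Sketch only: ASN schema and subscription name MY_TAB_0001 are illustrative.
-- At the target server: mark the external load as finished ('E' -> 'F').
UPDATE ASN.IBMQREP_TARGETS
   SET STATE = 'F'
 WHERE SUBNAME = 'MY_TAB_0001'
   AND STATE = 'E';

-- At the source server: mark the subscription active ('L' -> 'A').
UPDATE ASN.IBMQREP_SUBS
   SET STATE = 'A'
 WHERE SUBNAME = 'MY_TAB_0001'
   AND STATE = 'L';
```

Run one such pair for every subscription whose manual load has completed, before restarting Q Apply and Q Capture.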

Thanks to Christian Zentgraf and Anupama Mahajan of IBM and my TFG colleague, Jim Herrmann, for developing this procedure.

Upcoming Training Classes

Posted by Kim May on April 21, 2011 under Authorized Training Partner, DB2 Connect, DB2 Education, DB2 for Linux Unix Windows, DB2 for z/OS, InfoSphere.

The Fillmore Group’s Frank Fillmore has been delivering training classes on IBM’s behalf for so many years that the team here at TFG has the drill down cold:  get the IBM enrollment reports once or twice a week and figure out which classes are likely to run, advertise the ones that look promising, then cross our fingers and hope the enrollment is sufficient to be able to notify the students that class is ON!!

From there it is off to the store to stock up on sodas and treats for Donna, then a weekend in the classroom doing the lab set-up for Frank (sorry, Frank!).  When everything is set and the students arrive, Frank goes into delivery mode and somehow stays focused in spite of lab malfunctions, power outages, and jetlagged students.

Over the past two years we’ve seen class enrollment going up, just a bit.  I hope that as you read this you will give some thought to the benefits of IBM Information Management training.  We work really hard to deliver a great experience to everyone who comes to Towson for a class.  We have Information Server (DX447), DataStage (DX444) and DB2 Connect (CF602) classes coming up.

DB2 101 at the Iron Bridge Wine Company

Posted by Kim May on April 8, 2011 under DB2 Education, DB2 for Linux Unix Windows, DB2 for z/OS, DB2 Gold Consultants, IBM DB2 Services, InfoSphere, Netezza, Optim, Oracle.

The Fillmore Group and IBM are hosting a second session of “DB2 101” – our Information Management portfolio overview lunch and learn – on Friday, April 15th, from 11am – 2pm, at the fabulous Iron Bridge Wine Company in Columbia, Maryland, mid-way between Baltimore and Washington.  We felt local database users would benefit from a better understanding of the IBM data management strategy, so we developed this informative presentation, which will be delivered by DB2 Gold Consultant Frank Fillmore along with Netezza’s Mickey James.  To join us, register here.

How to Manually Activate a Q Replication Subscription on the Target

Posted by Frank Fillmore on March 23, 2011 under InfoSphere, Q-Replication.

It’s rare that I begin a how-to blog post with “DON’T DO IT!”, but I’ll make an exception this time.  Read what follows, but don’t do it unless extreme circumstances warrant.

A colleague, Jim Herrmann, and I have been working on a Q Replication implementation at a large financial institution.  Through a procedural error, we cold started a Q Replication environment, then deleted the schema messages from the WebSphere MQ receive queue that Q Replication had sent from source to target to activate subscriptions.  Oops…  We had begun a performance test, so there were almost 700k messages added to the receive queue.  When we started Q Apply, it complained about missing messages (the schema messages we had deleted).  We could have deleted all of the messages and executed another cold start, but that would have required that we re-run the performance test.  Due to scheduling constraints, rerunning the test wasn’t an option.  We could have activated the subscriptions by re-issuing the CAPSTART command into the IBMQREP_SIGNAL table, but Q Apply wouldn’t see those messages until after the 700k messages were processed.  A classic chicken-and-egg problem.  What to do?

With Q Capture and Q Apply down we manually updated the IBMQREP_TARGETS table.  First, we UPDATEd the STATE column to ‘A’ for each subscription.  Then, for each subscription we matched the SUB_ID in the target to the source.  For example, for SUBNAME MY_TAB_0001, we updated the SUB_ID in IBMQREP_TARGETS to the value found in IBMQREP_SUBS.
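
For each subscription, the change looked roughly like this sketch (the ASN schema, the subscription name, and the literal SUB_ID value are illustrative; the real SUB_ID comes from querying IBMQREP_SUBS at the source):

```sql
-- Sketch only: schema, SUBNAME, and SUB_ID value are placeholders.
-- The SUB_ID must match the value Q Capture recorded in
-- IBMQREP_SUBS.SUB_ID at the source for the same subscription.
UPDATE ASN.IBMQREP_TARGETS
   SET STATE  = 'A',
       SUB_ID = 12345
 WHERE SUBNAME = 'MY_TAB_0001';
```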

Next we had to modify the IBMQREP_TRG_COLS table.  This update relied on the fact that we had a QA environment that almost matched the performance environment.  Jim drafted some nifty SQL to create the necessary DML code.
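
We can’t reproduce Jim’s exact SQL here, but the technique – SQL that generates SQL – might look something like this sketch run against the QA control tables (the QA_ASN schema and the columns copied are assumptions for illustration):

```sql
-- Sketch only: generate UPDATE statements for the performance environment
-- from a QA copy of IBMQREP_TRG_COLS.  Schema names and the column list
-- are illustrative assumptions, not the SQL actually used.
SELECT 'UPDATE ASN.IBMQREP_TRG_COLS' ||
       ' SET SOURCE_COLNAME = ''' || SOURCE_COLNAME || '''' ||
       ' WHERE SUBNAME = ''' || SUBNAME || '''' ||
       ' AND TARGET_COLNAME = ''' || TARGET_COLNAME || ''';'
  FROM QA_ASN.IBMQREP_TRG_COLS;
```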


Then we ran the resulting UPDATE statements against the target Q Replication Control Table database.

Finally, Q Apply relies on WebSphere MQ for assured delivery of messages – including the schema messages we had inadvertently obliterated.  The following message was posted in both the Q Replication log and the IBMQREP_APPLYTRACE table.

ASN7551E  “Q Apply” : “ASN” : “BR00000” : The Q Apply program detected a gap in message numbers on receive queue “SERVER1.CUSTDB.RECEIVE.DATAQ” (replication queue map “DB2S_TO_DB2T”).  It read message ID “515245504D875D2C00000000000000000000000000000019”, but expected to find message ID “515245504D875D2C00000000000000000000000000000001”.  The Q Apply program cannot process any messages until it finds the expected message.

We had to tell Q Apply to ignore the missing messages.  To do this, we added a message to the IBMQREP_DONEMSG table.
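
The IBMQREP_DONEMSG table records message IDs that Q Apply has already processed, so inserting the “expected” message ID there lets Q Apply skip past the gap.  A hedged sketch, assuming the default ASN schema and using the receive queue and message ID reported in the ASN7551E message above:

```sql
-- Sketch only: tell Q Apply the missing schema message is already done.
-- The queue name and hex message ID come from the ASN7551E message;
-- the ASN schema is an assumption.
INSERT INTO ASN.IBMQREP_DONEMSG (RECVQ, MQMSGID)
VALUES ('SERVER1.CUSTDB.RECEIVE.DATAQ',
        x'515245504D875D2C00000000000000000000000000000001');
```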


Then we started Q Apply.

All of the usual caveats are relevant:

  • Don’t try this at home.
  • Professional driver, closed course.
  • Your mileage may vary.

This was a work-around in a tight spot.  Not business as usual.

Feeding a Netezza Warehouse

Posted by Frank Fillmore on March 9, 2011 under InfoSphere, Netezza, Q-Replication.

By now you’ve probably heard that IBM has acquired Netezza.  So how do I get my data into the darn thing?

We’ve worked a lot with IBM change data capture technologies: SQL Replication, Q Replication, and InfoSphere Change Data Capture (ICDC).  We also work with InfoSphere Warehouse, IBM’s internally developed data warehousing platform.  Usually we use one of the former to feed the latter.  As changes occur in a transactional database, the deltas are shipped near-real-time to the target data warehouse.  There might be some transformation, cleansing, or other manipulation (that’s the “T” in “ETL”) for which we would use DataStage, but a (surprisingly large) number of customers just copy data to the reporting and analysis platform with little massaging.

Enter Netezza.  The usual “drip-feed” (i.e. as soon as it changes in the source, send it to and update the target) methods listed above don’t play as well.  Netezza is optimized for bulk data loading.  One-row-at-a-time INSERT/UPDATE/DELETE propagation by a replication technology isn’t efficient.  The solution is to create intermediate mini-bulk delimited files that can be ingested, say, every five minutes.
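
To make the mini-bulk idea concrete, here is a small Python sketch – the class name, file layout, and five-minute interval are illustrative, not any IBM tool’s API – that buffers change records and flushes them as pipe-delimited files a bulk loader such as nzload could ingest:

```python
import os
import time


class MiniBatchWriter:
    """Buffer change-data rows and flush them as delimited files.

    Illustrative sketch: accumulates deltas in memory and writes a new
    delimited file every `interval` seconds (or on an explicit flush).
    """

    def __init__(self, out_dir, interval=300, delimiter="|"):
        self.out_dir = out_dir
        self.interval = interval          # seconds between flushes
        self.delimiter = delimiter
        self.buffer = []
        self.last_flush = time.time()
        self.batch_num = 0

    def add(self, row):
        # row: a sequence of column values for one INSERT/UPDATE/DELETE delta
        self.buffer.append(self.delimiter.join(str(v) for v in row))
        if time.time() - self.last_flush >= self.interval:
            self.flush()

    def flush(self):
        # Write buffered deltas to a new file for the bulk loader; return
        # its path, or None if there was nothing to write.
        if not self.buffer:
            return None
        self.batch_num += 1
        path = os.path.join(self.out_dir, "batch_%06d.del" % self.batch_num)
        with open(path, "w") as f:
            f.write("\n".join(self.buffer) + "\n")
        self.buffer = []
        self.last_flush = time.time()
        return path
```

Every interval the accumulated deltas land in a fresh `.del` file, which a scheduled job could then hand to the Netezza bulk loader.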

There are a couple of ways to create these files.  One is to use InfoSphere Data Event Publisher (EP).  Think of EP as Q Replication without the Apply component that posts the deltas to a target database.  EP can publish XML or delimited files.  Voilà!  Our Netezza problem is solved.  If you need complex business rules applied to transform the data, use ICDC or Q Replication to feed DataStage and have it create the delimited files.

Q Replication Packaging Revealed!

Posted by Frank Fillmore on March 9, 2011 under DB2 for Linux Unix Windows, DB2 for z/OS, InfoSphere, Q-Replication.

Led Zeppelin had a concert film, song, and album with the title “The Song Remains the Same”.  Even though it comes in different packaging, Q Replication functionality remains consistent. 

Data interoperability – what IBM calls Information Integration – is a core competency of The Fillmore Group.  We deliver consulting services and IBM authorized training for software products like Information Server (think DataStage), InfoSphere Change Data Capture (née DataMirror), InfoSphere Federation Server, and more.

Right now we’re working a lot with DB2 Homogeneous Replication Feature and its sibling, InfoSphere Replication Server.  BTW – a moment’s digression – what is the difference between these two?  Both feature what is commonly known as Q Replication: IBM’s high-volume, low-latency, assured-delivery data replication solution.  The differences come down to licensing and capabilities.  When you install DB2 Enterprise Server Edition (DB2 ESE), all of the code for the DB2 Homogeneous Replication Feature is there.  To activate it, you must purchase and apply the appropriate license key.  This gives you the ability to create DB2 for LUW to DB2 for LUW replication topologies.  Of course DB2 for z/OS can be both a source and target for Q Replication, but the licensing to include DB2 for z/OS is beyond the scope of this post.  DB2 Advanced Enterprise Server Edition (DB2 AESE) includes a limited-use license for the DB2 Homogeneous Replication Feature.  To learn more about the feature-rich DB2 AESE bundle, join us for a free webinar on Wednesday, March 16.  By contrast, InfoSphere Replication Server can replicate from and to heterogeneous sources and targets.  At one very large integrated financial services firm, Q Replication is being used to:

  • migrate Oracle databases to DB2 for AIX without taking an outage
  • populate a warm standby subsystem for DB2 for z/OS
  • integrate DB2 for z/OS data into an Oracle application
  • create a DB2 for AIX reporting database with near-real-time DB2 transactional data

Clear?  Good.  If not, drop me a note to talk about your specific replication use case.