Keeping it in the Family: Batch Movement of Data Between DB2 Databases and/or Subsystems

Posted by Frank Fillmore on May 15, 2013 under DB2 for Linux Unix Windows, DB2 for z/OS.

A few weeks ago a customer was confronted with a common challenge.  They had to move terabytes of data – billions of rows – from DB2 for LUW to DB2 for z/OS.  I suggested a “cursor-based load” (aka the “cross-loader”).  This DB2 for z/OS DBA team is top-notch, with centuries of collective experience, but I was mostly met with blank stares.  So here’s the brief refresher (or introduction) I gave them on the cross-loader.

The common method for moving data from DB2 for LUW tables to DB2 for z/OS is a serial process where the data is

  1. extracted from DB2 for LUW
  2. transported to the z/OS platform
  3. loaded into DB2 for z/OS

This can be done as three discrete steps in a variety of ways.  The problem with three discrete steps is the I/O overhead of landing the data on disk three times: to the local extract file, to the FTPed copy on z/OS, and into DB2.  In addition, the steps are serial (e.g. the extract must complete before the FTP can begin).
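As a rough sketch, the three discrete steps might be scripted like this (the database, host, credential, and dataset names are all hypothetical placeholders):

```shell
# 1. Extract from DB2 for LUW into a local IXF file
db2 connect to SAMPLE
db2 "EXPORT TO dept.ixf OF IXF SELECT * FROM PAOLOR7.DEPT"

# 2. Transport the extract file to z/OS (binary mode for IXF)
ftp -n zoshost.example.com <<'EOF'
user ZOSUSER secret
binary
put dept.ixf 'ZOSUSER.DEPT.IXF'
quit
EOF

# 3. Submit a batch job that runs the DB2 for z/OS LOAD utility
#    against ZOSUSER.DEPT.IXF (JCL not shown)
```

Each step writes the full volume of data to disk before the next step can start – exactly the overhead the cross-loader avoids.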

 An alternative is using a cursor-based LOAD (aka the “cross-loader”).  The cross-loader has been available in DB2 for z/OS since v7.

 Steps to enable the cross-loader:

  1. Set up DRDA definitions in the DB2 Communications Database (CDB) in the DB2 for z/OS subsystem that will be running the loads (i.e. the “target”).
  2. Create DB2 for z/OS aliases (three-part names) for the tables to be loaded from the DB2 for LUW source.
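Concretely, the DRDA setup boils down to a handful of rows in the CDB catalog tables plus an alias.  A minimal sketch – the location name, link name, host, port, and credentials below are all placeholder values:

```sql
-- Define the remote DB2 for LUW server to the z/OS subsystem
INSERT INTO SYSIBM.LOCATIONS (LOCATION, LINKNAME, PORT)
  VALUES ('LUWPROD', 'LUWLINK', '50000');
INSERT INTO SYSIBM.IPNAMES (LINKNAME, SECURITY_OUT, USERNAMES, IPADDR)
  VALUES ('LUWLINK', 'P', 'O', 'luwhost.example.com');
-- Outbound ID translation: the z/OS auth ID and the credentials
-- to be used at the LUW server
INSERT INTO SYSIBM.USERNAMES (TYPE, AUTHID, LINKNAME, NEWAUTHID, PASSWORD)
  VALUES ('O', 'ZOSUSER', 'LUWLINK', 'DB2USER', 'secret');

-- Let the remote table be referenced by a two-part name
CREATE ALIAS PAOLOR7.DEPT FOR LUWPROD.PAOLOR7.DEPT;
```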

In the example below the table PAOLOR7.DEPT is an alias that resolves to the remote DB2 for LUW table.  The LOAD utility jobs can be scheduled to minimize the impact on production workloads.
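A minimal cross-loader control statement might look like the following – the column list follows the sample DEPT table, and the target table name MYSCHEMA.DEPT is a placeholder:

```sql
EXEC SQL
  DECLARE C1 CURSOR FOR
  SELECT DEPTNO, DEPTNAME, MGRNO, ADMRDEPT
    FROM PAOLOR7.DEPT
ENDEXEC
LOAD DATA INCURSOR(C1)
  REPLACE INTO TABLE MYSCHEMA.DEPT
```

The LOAD utility opens the cursor, pulls the rows across DRDA, and loads them directly into the target table – no intermediate extract file, no FTP.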

This is relatively easy to set up and test in a development environment, and it is the fastest, simplest way to move the data to z/OS.  It also uses the fewest machine resources.


References for cursor-based load

DB2 10 for z/OS Information Center “Loading data by using the cross-loader function”

Redbook “Moving Data Across the DB2 Family”


IBM Champions: Four (and counting…) @ The Fillmore Group

Posted by Frank Fillmore on May 15, 2013 under DB2 Gold Consultants, IBM Champion.

Congratulations to longtime colleague, Joe Geller, for being named one of the newest IBM Champions.  Joe currently is deployed at a worldwide Top 50 integrated financial services institution.  Joe is the go-to guy for DB2 for LUW performance tuning for an online banking application.

That brings to four – including Joe – the number of IBM Champions affiliated with The Fillmore Group:

  • TFG Vice President of Business Development Kim May
  • DB2 Gold Consultant and raconteur David Beulke
  • and yours truly

Learn more about the IBM Champion program.


IBM #BigData Announcements – What You Should Know: the Details on BLU, IBM Unlimited License Agreement #IULA and more

Posted by Frank Fillmore on May 13, 2013 under Big Data, BigInsights, BLU Acceleration, Hadoop, IBM Champion, InfoSphere, InfoSphere Streams, Netezza.

IBM Champion Kim May of The Fillmore Group and Warren Heising of IBM deliver the details on “IBM Big Data Announcements – What You Should Know”.  A replay of the May 9th, 2013 webinar can be found here.  The presentation materials can be accessed here: MA VUG Warren Heising Big Data Announcements 5 9 13.

Can the Data Warehouse and #BigData Coexist?

Posted by Frank Fillmore on May 6, 2013 under Big Data, BigInsights, Hadoop, IBM Champion, IBM Pure Systems, InfoSphere Streams.

I had the opportunity to chat with Jeff Kelly, Big Data Analyst, at Wikibon.  You can read about our discussion on the Silicon Angle blog or view the interview here.

IBM Authorized Training is Evaporating Before Our Eyes

Posted by Frank Fillmore on April 21, 2013 under Authorized Training Partner, Big Data, BigInsights, DB2 Connect, DB2 Education, IBM Mid Market Customers, International DB2 Users Group (IDUG).

The Fillmore Group’s (TFG) blog ordinarily is focused on technical topics, but when a potentially disruptive business issue impacts our information management software clients I need to give folks a heads-up.  For 20 years TFG has partnered with IBM to deliver IBM-branded technical training to our mutual customers.  The nature of the business relationship between TFG and IBM that enabled us to deliver that education has changed several times over the past two decades and is changing again in 2013.  Unfortunately the new relationship has yet to be defined while the existing terms and conditions are set to expire at the end of June – in just over two months.

So why does any of this matter to you?  Well if you plan to attend an IBM Information Management training class in the second half of 2013, check out the schedule of courses.  Spoiler alert: there isn’t one.  In years past IBM has requested our public training schedule for the two dozen or so courses we offer at least four months before a new half will begin (e.g. early September of 2012 for classes that will be delivered beginning January 2013).  Between now and the end of June all existing Authorized Training Partners have been asked to arrange new relationships with aggregators engaged by IBM.  What are the new terms and conditions?  No idea.  But the only thing scarier than hearing “I’m from the government and I’m here to help you” is “You’re gonna love this new IBM program” – especially when the program’s details haven’t been publicly shared.

The tuition for most IBM courses runs between $2,000 and $3,000.  Add an additional $1,500 for travel and lodging expenses.  Most IBM training attendees must travel far from home to attend instructor-led classroom training (unlike Oracle and Microsoft customers who can usually find classes at a local training provider – say, a community college – near their homes and businesses).  Sleep in your own bed and don’t miss your kids’ soccer games.  But that’s a rant for another day.

You probably need to get prior budgetary approval for a nearly $5k expenditure and plan the schedule around project deadlines and vacations.  Right now you can’t plan for any classes beyond the end of June – remember, they don’t exist – and there’s no telling when or *if* you will be able to make those plans.  I say *if* because TFG offers the only public schedule in the US for classes on Information Server and the hot, hot, hot BigData technologies of BigInsights and InfoSphere Streams.  If you expect to deploy these technologies in 2013, you might be reading the manual or engaging scarce (and expensive) consulting resources.

So, if this matters to you what can you do?

  • Post a comment on this blog entry.
  • Send an e-mail to your IBM representative and ask “What the heck is going on?”
  • Drop me a Tweet @ffillmorejr and tell me the impact this is having on you and your team: #IMtraining.

IBM Data Strategy 2013: #BigData #BLU and much, much more…

Posted by Frank Fillmore on April 16, 2013 under Big Data, BigInsights, BLU Acceleration, DB2 for Linux Unix Windows, DB2 for z/OS, DB2 Migrations, Hadoop, IBM DB2 Services, IBM Information Management Software Sales, IBM Mid Market Customers, IBM Pure Systems, IBM Smart Analytics System, IDAA, InfoSphere Streams, Netezza, Optim, Oracle, pureScale, Q-Replication.

Thank you to all of the customers, colleagues, and IBMers who joined The Fillmore Group on April 11, 2013 at Cinghiale in Baltimore, USA to discuss IBM’s Data Strategy – which we call “DB2 101”.  As it happens, many DB2 customers don’t realize the strength and depth of the IBM Information Management portfolio.  And many non-DB2 customers don’t know about the governance and management tooling available in InfoSphere and Optim (e.g. Guardium) which IBM provides for Oracle, SQL Server, Teradata, and other-vendor database servers.

Prominently featured were the highlights of IBM’s April 3rd announcement held at Almaden Lab which my colleague, Kim May, and I attended.

Also of note is the bundling of functionality in DB2 Advanced Enterprise Server Edition (AESE).

  1. pureScale – shared-disk clustering
  2. Data Partitioning Feature (DPF) – shared-nothing hash partitioning
  3. BLU Acceleration – the columnar, in-memory compression technology in DB2 for LUW v10.5

The presentation is available at DB2 101 – IBM Information Management Software Portfolio Overview 2013-04.  The audio-video recorded delivery can be viewed here.

Sheryl Larsen is a DB2 for z/OS Evangelist – and an IBMer

Posted by Frank Fillmore on April 16, 2013 under DB2 for z/OS, DB2 Gold Consultants, Information on Demand Conference, International DB2 Users Group (IDUG).

Friend, colleague, and DB2 Gold Consultant Sheryl Larsen is joining IBM on April 22, 2013 and will become the “new Roger Miller” according to an IBM executive.  Best wishes to Sheryl in her new gig!


Separated at birth?

IBM InfoSphere Data Replication (IIDR)

Posted by Frank Fillmore on April 10, 2013 under DB2 for i, DB2 for Linux Unix Windows, DB2 for z/OS, IBM Information Management Software Sales, Information on Demand Conference, InfoSphere, Q-Replication.

IBM has bundled its replication technology into a single package: IBM InfoSphere Data Replication (IIDR).  IIDR combines three components:

  • SQL Replication (heritage DataPropagator-Relational)
    • Easy to set up
    • Staging tables
  • InfoSphere Replication Server (IRS – heritage Q Replication)
    • High volume, low latency
    • Native Oracle and DB2 sources and targets
    • WebSphere MQ transport layer
  • InfoSphere Change Data Capture (ICDC – heritage DataMirror)
    • Broadest set of heterogeneous sources and targets
    • TCP/IP transport layer

There are a few important take-aways.

  1. IBM substituted an implementation decision for a buying decision.  For quite a while SQL Replication was bundled free with DB2 for LUW.  ICDC and IRS were separately purchasable technologies with a lot of functional overlap.  In 2010 my colleague, Kim May, delivered an IBM Information on Demand (IOD) presentation distinguishing between the three.  Now you purchase the IIDR bundle and determine which technology is best suited for a particular use case.  As with most IBM software on distributed platforms, the cost is based on Processor Value Units (PVUs).
  2. New feature/functionality will be built into IIDR rather than the former heritage ICDC and IRS packaging.
  3. There will be a convergence of the technologies over time.  Many of the prospective changes are still IBM Confidential, but IBM is looking to consolidate components where it makes sense to do so.  There is a long-term roadmap that I hope IBM will be sharing shortly.
  4. Upgrading and migration paths are a work-in-progress.  If you currently own ICDC or IRS and want to move up to IIDR, contact The Fillmore Group for pricing and implementation assistance.

Flipboard #BigData Magazine @

Posted by Frank Fillmore on March 28, 2013 under Big Data.

Flipboard is an online content and social network aggregation app for Apple iPad, iPhone and Google Android devices.  It’s a smart, interactive Reader’s Digest for the Internet age.  Flipboard v2.0, just released, allows you to be DeWitt (or Lila) Wallace and create your own “magazine” of Tweets, Facebook postings, Instagrams, and content from providers like TechCrunch and the Wall Street Journal.  Think of it as a personal content “playlist” around a particular theme.

My BigData magazine can be found here.  Become a reader and peruse the content I have collected from around the web.  I’ll be adding new content every day or so.

Career Advice for a Friend #BigData

Posted by Frank Fillmore on March 25, 2013 under Big Data, IBM Information Management Software Sales.

My friend and (épée) fencing teammate, Alan, e-mailed me the following question: “I’m in the middle of a Master’s degree program (Information Systems).  I’m about half-way through, and I’m stuck on picking a concentration.  Specifically, I’m stuck between Data Warehousing and Business Intelligence (DW & BI) and Information Security (InfoSec).”

Here is my reply:

First, it’s difficult to predict future demand, but InfoSec is a growing, evolving field.  We work with IBM appliances called Guardium (which provides trusted user auditing) and Identity Insight (“who is who; who knows who”).

The data warehouse space is, I believe, being overtaken by “Big Data”.  The classic DW development cycle from data modeling and star schema design to Extract/Transform/Load (ETL) to dashboards and Key Performance Indicators (KPIs) is time-consuming and expensive.  It might be replaced in many instances by Hadoop (MapReduce).

There is, of course, room and requirements for both development models, but in 2013 I would want to build a career around InfoSec or BigData rather than the maturing Data Warehouse paradigm.