Training is happening in 2010

Posted by Kim May on May 14, 2010 under Authorized Training Partner, DB2 Education, DB2 for Linux Unix Windows, Q-Replication.

While there are many indications the “Great Recession” is over, the budget lag and fear of one last dip seem to be constraining IT project funding – EXCEPT – of all places, training.  While this seems perfectly logical (when there are fewer hands to do the work, making sure those hands are skilled is critical), historically training budgets are the first to be cut when there’s a downturn.  Maybe organizations are wising up this time around…here at The Fillmore Group we are receiving more training requests than at any time in the past five years.

Q Replication Dashboard – v9.7.1

Posted by Frank Fillmore on February 8, 2010 under InfoSphere, Q-Replication.

Be sure to upgrade to the latest version of the Q Replication Dashboard, v9.7.1.  It provides more flexibility – including monitoring of Oracle Q Replication sources.  Check the link for all of the details.

DRDA Performance for Q Replication ASNTDIFF Utility on DB2 for z/OS

Posted by Frank Fillmore on February 8, 2010 under DB2 for z/OS, InfoSphere, Q-Replication, SQL Tuning.

As you know, I work with IBM’s Q Replication technology – a lot.  Q Replication functionality is delivered in InfoSphere Replication Server.  The challenges are amplified when working on DB2 for z/OS with *really* large tables.  One financial institution at which I am working has tables with over 1 billion rows and hundreds of partitions.  Of course, DB2 for z/OS can manage tables of that size, but what about the tooling?

Q Replication comes with a utility called ASNTDIFF.  ASNTDIFF compares checksums of the rows in the source and target tables being replicated to validate that there are no discrepancies.  Challenge #1 is that when replicating between DB2 for z/OS subsystems, ASNTDIFF runs under UNIX System Services (USS), which provides the Unix APIs in z/OS.  There are considerations for USS applications that will form the basis for another post.

Challenge #2 is that ASNTDIFF retrieves the rows from the remote system (Application Server or AS) using a three-part-name query across a Distributed Relational Database Architecture (DRDA) connection.  For example:

SELECT * FROM <location>.<schema>.<tablename>

where location is found in the DRDA Communications Database (CDB) portion of the DB2 for z/OS Application Requestor (AR) catalog tables.  You typically run the ASNTDIFF utility on the replication target DB2 for z/OS server.  That’s because the CDB has probably already been configured to support cursor-based loading of the target tables.  Why is this a challenge?  Well, a three-part-name query across a DRDA pipe against a 1 billion row table ran for about 18 hours.  Ouch!

So the basic problem is: how can I make three-part-name queries running across a DRDA connection between DB2 for z/OS subsystems run faster?  I asked a couple of IBMers, and Jim Pickel pointed me in the direction of exploiting OPTIMIZE FOR n ROWS.  There’s a good explanation of this in “Limiting the number of DRDA network transmissions”.  Right now the ASNTDIFF utility adds OPTIMIZE FOR 1000 ROWS and FOR READ ONLY to every ASNTDIFF query.  We’re experimenting with the recommendations in the “Limiting…” document to see if we need to override this hardcoded parameter.  I’ll keep you posted as to our findings.
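To make the tuning concrete, here is a sketch of the kind of query ASNTDIFF generates, with the two clauses mentioned above.  The location, schema, and table names are placeholders I made up, not names from the customer environment:

```sql
-- Hypothetical three-part-name query of the shape ASNTDIFF issues.
-- OPTIMIZE FOR 1000 ROWS encourages DB2 to block-fetch large groups of
-- rows per DRDA network transmission instead of one row at a time;
-- FOR READ ONLY makes the cursor unambiguously read-only, which also
-- enables block fetch.
SELECT *
  FROM REMOTELOC.MYSCHEMA.BIGTABLE
  OPTIMIZE FOR 1000 ROWS
  FOR READ ONLY;
```

The value in the OPTIMIZE FOR clause is the knob the “Limiting…” document discusses: larger values amortize the network round trips, which is what matters most on a billion-row table.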

Q Replication Now Captures Oracle Log Changes Natively

Posted by Frank Fillmore on September 23, 2009 under InfoSphere, Oracle, Q-Replication.

Most of you are aware that DB2 for LUW delivers PL/SQL support in v9.7.  That is, you can run Oracle applications against DB2 for LUW databases with little or no modification.  What you might not have heard is that IBM’s high-availability, low-latency replication product – Q Replication – will now natively capture changes in Oracle logs.  That is, you can replicate your Oracle Financials to an InfoSphere Warehouse.  Q Replication is a feature of IBM’s InfoSphere Replication Server, so you might not have noticed the new functionality with all of the name changes.

Two links describe the enhancement.  David Tolleson and Ed Lynch of IBM have both provided good information on the specifications and details.

New Q Replication Dashboard-TFG and IBM Webcast

Posted by Kim May on August 20, 2009 under Data Studio, DB2 Gold Consultants, Q-Replication, TFG Blog.

Just announced and (I think) just released!  Frank will deliver a webcast on the latest version of the Q Replication Dashboard.  He installed it last night at a customer site and is already deciding which new features he likes best.  Stay tuned – I am sure his prep for the webcast will fuel some blog entries.  Here’s the invitation to the webcast:

The Fillmore Group and leading experts from IBM will deliver a 45-minute webcast to introduce the Q Replication Dashboard and improvements included in the new, redesigned Dashboard scheduled for release this month (or maybe yesterday).

The Q Replication Dashboard is the free monitoring tool available from IBM that replaces the Java Q Replication Dashboard and the Data Studio Administration Console (DSAC) with a thin-client, browser-based interface. The Dashboard provides real-time Q Replication monitoring. In the webcast, presenter and DB2 Gold Consultant Frank C. Fillmore, Jr. will explain how the new, redesigned Dashboard’s expanded features and improved customization will enhance your organization’s Q Replication monitoring capabilities. Frank will demonstrate and explain the new Dashboard’s:
• Improved performance for replication environments with large numbers of subscriptions and queues
• Greater customization and filtering capabilities
• Ability to access more data
• Better access to historical data
• Simplified security
• Improved visual appeal
• Smaller footprint (512 MB requirement) for both the client and server

Two sessions of the webcast will be delivered on September 15th, one at 9:00am EDT and another at 7:00pm EDT.

To register: E-mail kim.may@thefillmoregroup.com and include your name, company name, telephone number and the session (9:00 am or 7:00 pm) you would like to attend.

Data Studio Administration Console (DSAC) for Q Replication Updates

Posted by Frank Fillmore on July 17, 2009 under Data Studio, Optim, Q-Replication.

If you are using Data Studio Administration Console (DSAC) to monitor a Q Replication environment, there are a few important things you should know.

  1. DSAC v1.2 Fixpack 1 is available for *free* download.  You can access both the base code and the Fixpack at the Blogroll link in the lower right hand corner of this page under “Q Replication Tools”.  There are a couple of installation “gotchas” that you need to know about, which I covered in an earlier post.
  2. What is old is new again.  The original Q Replication Dashboard was a Java-based heavy client.  That was replaced by the current DSAC implementation: a browser-based thin client with a lightweight web server embedded.  Q Replication monitoring is now moving away from the “Data Studio” umbrella and will be called Q Replication Dashboard again.  My guess is that the new implementation will retain a lot of the “look and feel” of DSAC, so there’s no risk in starting to use DSAC today for Q Replication monitoring.  Watch the blog for details as they become available.
  3. In a follow-up to Kim’s What’s in a Name post from a while back, IBMer Anjul Bhambhri made an off-hand remark at the June meeting of the Baltimore/Washington DB2 Users Group that clarified an important distinction in IBM’s product naming conventions.  Products named “Data Studio” are available for *free*; these are typically tools which provide basic functionality (e.g. DSAC).  Products named “Optim” are for-fee products offering more integration and/or more robust functionality.  “Optim” is a name that derives from IBM’s acquisition of Princeton Softech.  Most folks associate “Optim” with data archiving.  Now “Optim” will be the brandlet that covers IBM’s cross-database tooling.  Clear?  I hope so.  At least until the next change.

If you’re not using DSAC for Q Replication real-time monitoring, how come?

Q-Replication Administration Console Webcast

Posted by Kim May on June 26, 2009 under Data Studio, Q-Replication.

In conjunction with the Data Studio team, Frank Fillmore just tentatively set the date for a webcast reviewing the *free* Data Studio Administration Console (DSAC) features for monitoring and supporting Q-Replication, as well as previewing the new features in the upcoming V9.7 release due out late this summer.

The date: July 30th. We will do some marketing and also advertise the webcast by way of IDUG. If you are interested in attending, check back in mid-July when I should have the date and time confirmed.

New TFG Services Offerings – Data Studio Administration Console

Posted by Kim May on February 26, 2009 under Data Studio, Q-Replication, TFG Blog.

I just emailed our website designer and asked her to revise the consulting page so that I can add some services offerings to the site. Three of these offerings I developed while working with IBM’s Ron Reuben, who is trying to create an ecosystem of services to surround and better enable Data Studio. The first one we’ve put together, based in part on our experience with and appreciation of Q-Replication, is a Data Studio Administration Console “Quickstart” to install DSAC, set up monitoring for Q-Rep, and mentor onsite staff. This is MY favorite, as DSAC is currently available from IBM for **FREE**. Here goes, text-style. If you want to really experience the full impact of the marketing slick, check it out on the website in color next week!

The Fillmore Group Services for
Data Studio Administration Console Quickstart

Highlights
• The Data Studio Administration Console (DSAC) is a free download.
• The DSAC provides real-time monitoring for Q-Replication.
• Thin client, browser-based interface replaces the Java Q-Replication dashboard.
• Easy to learn, deploy, and use.

Q Replication Automatic Load and the Things That Can Break

Posted by Frank Fillmore on January 22, 2009 under DB2 for z/OS, Q-Replication.

Choosing a particular parameter value when creating a Q Replication subscription can sometimes set off a complicated chain reaction.  An example is the “Automatic Load” option.  When you create a new Q Replication subscription, tables in the source DB2 environment frequently are already populated with data.  One way to get the existing table data from the source to the target is to use the Automatic Load feature.

In this particular case study (from a real customer environment), we are replicating from one DB2 for z/OS subsystem to another using unidirectional replication.  The source DB2 subsystem is DB2S and the target is DB2T and they are on separate System z servers.

When Q Capture is started with a new subscription defined as having an Automatic Load or when a CAPSTART command is issued to the SIGNAL table for such a subscription, two things happen:

1. Q Apply on DB2T invokes the DSNUTILS DB2 stored procedure on DB2T running under Workload Manager (WLM).

2. DSNUTILS initiates a DB2 cursor-based load using three-part names, with DB2T acting as the Application Requestor (AR) across a Distributed Relational Database Architecture (DRDA) connection to the source tables on DB2S acting as the Application Server (AS).

If everything is set up correctly, existing table data from DB2S is loaded into the corresponding table in DB2T.  Unfortunately, things are not always set up correctly.  All of the components referenced here and their installation/configuration are well documented elsewhere, so I won’t repeat all of that here.  What I want to provide is a simple checklist pointing out the things that might be preventing an Automatic Load from working.

1. DSNUTILS is not set up or not set up correctly on DB2T. (http://publib.boulder.ibm.com/infocenter/db2luw/v8/index.jsp?topic=/com.ibm.db2.udb.doc/wareh/t0005992.htm)

2. In order to run DSNUTILS in DB2T, the WLM NUMTCB parameter must be set. (http://publib.boulder.ibm.com/infocenter/db2luw/v8/index.jsp?topic=/com.ibm.db2.ii.doc/admin/cqrld003.htm)

3. The DRDA communications database (CDB) in DB2 for z/OS is not configured correctly.  Check out the examples on pages 98 and 99 in “WebSphere Information Integrator Q Replication: Fast Track Implementation Scenarios” (SG24-6487).  This manual can be found at http://www.redbooks.ibm.com

4. The userid that DSNUTILS passes to DB2S doesn’t have the authority to SELECT from the source table (SQLCODE -551).  When a Q Replication Automatic Load is attempted, a z/OS dataset log file will be generated.  Look in the log file for errors and correct them.  Depending on how you configured Q Replication on System z, the Q Apply log itself might be a z/OS dataset or a UNIX System Services (USS) file.

5. The userid passed to DB2T will need to have sufficient authority to INSERT rows into the target tables by virtue of the cursor based load.  Look in the Automatic Load and/or Q Apply logs.

6. If there is Data Manipulation Language (DML) INSERT/UPDATE/DELETE activity on the DB2S source tables, Q Apply will spawn spill queues while waiting for the Automatic Load to complete.  Look for queues with high-level qualifiers of IBMQREP.SPILL.MODELQ.
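Items 4 and 5 usually come down to missing GRANTs.  As a sketch only – the userid, schema, and table names below are placeholders, not names from the customer environment:

```sql
-- On DB2S (source): let the load userid read the source table,
-- curing the SQLCODE -551 in checklist item 4.
-- QLOADID, SRCSCHEMA, SRCTABLE, TGTSCHEMA, TGTTABLE are hypothetical.
GRANT SELECT ON SRCSCHEMA.SRCTABLE TO QLOADID;

-- On DB2T (target): let the same userid populate the target table
-- via the cursor-based load (checklist item 5).
GRANT INSERT ON TGTSCHEMA.TGTTABLE TO QLOADID;
```

After granting, re-drive the load (e.g. another CAPSTART signal) and check the Automatic Load and Q Apply logs again.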

ASNCLP Scripting for Replication

Posted by Frank Fillmore on October 7, 2008 under Q-Replication.

If you have worked with IBM’s SQL Replication (formerly DataPropagator-Relational) and Q Replication, you are familiar with the Replication Center, a Java-based fat client that is part of the Control Center.  Replication Center is good for folks who are learning IBM’s replication software suite or who have a single task that needs to be performed quickly.
On the other hand, managing dozens or hundreds of subscriptions is impractical through the Replication Center.  It makes more sense to use the replication scripting language called ASNCLP (ASN Command Line Processor).  It is documented in ASNCLP Program Reference for Replication and Event Publishing (SC19-1018).
To invoke ASNCLP, first install the DB2 Data Server Client (available for free at http://www-01.ibm.com/support/docview.wss?rs=71&uid=swg21288110) or one of the DB2 engines such as Enterprise Server Edition (which bundles the client functionality).  If you already use Replication Center, you have what you need to use ASNCLP.
Then type ASNCLP at any Windows DOS prompt for an interactive ASNCLP session.  Alternatively, code a script using the ASNCLP language and run that file as follows:

ASNCLP -f queue_map.qrp

In order to run the following script, you must configure access to the source and target DB2 databases – in this case D01G and D03G – which both happen to be DB2 for z/OS subsystems.

queue_map.qrp
# Setting the environment.
# The SET OUTPUT command creates two SQL scripts: qcapmap.sql, which adds
# definitions for the queue map to the Q Capture control tables, and
# qappmap.sql, which adds definitions for the queue map to the Q Apply
# control tables.
ASNCLP SESSION SET TO Q REPLICATION;
SET LOG "rqmap.err";
SET SERVER CAPTURE TO DB D01G ID V110079 PASSWORD "xxxxxxxx";
SET CAPTURE SCHEMA SOURCE ASN;
SET SERVER TARGET TO DB D03G ID V110079 PASSWORD "xxxxxxxx";
SET APPLY SCHEMA ASN;
SET OUTPUT CAPTURE SCRIPT "qcapmap.sql" TARGET SCRIPT "qappmap.sql";
SET RUN SCRIPT LATER;

# Creating a replication queue map.
# This command generates SQL to create a replication queue map,
# SAMPLE_ASN1_TO_TARGET_ASN1. It specifies a remote administration
# queue and receive queue at the Q Apply server, and a send queue at
# the Q Capture server. The command also sets the number of agent threads
# for the Q Apply program to 4 (a quarter of the default 16), and specifies
# heartbeat messages be sent every 5 seconds.

CREATE REPLQMAP SAMPLE_ASN1_TO_TARGET_ASN1
USING ADMINQ "ABC.ALL.ASN1.DB2.ADMIN"
RECVQ "ABC.ALL.ASN1.DB2.DATA"
SENDQ "ABC.ALL.ASN1.DB2.DATA"
NUM APPLY AGENTS 4
ERROR ACTION S
HEARTBEAT INTERVAL 5;

# Ending the ASNCLP session.
QUIT;

The resulting files, qappmap.sql and qcapmap.sql, are run when attached to the Q Apply DB2 server (“target”) and the Q Capture DB2 server (“source”), respectively.  These are standard SQL statements and can be run from anywhere (Command Editor on Windows, SPUFI on ISPF/PDF, etc.).