TFG 2009 Updated Capabilities Statement

Posted by Kim May on May 12, 2009 under DB2 for Linux Unix Windows, DB2 for VSE&VM, DB2 for z/OS.

The Fillmore Group 2009 Capabilities Statement

All those name changes – grrrrr!

I am attaching our updated 2009 capabilities statement. I updated some of the names of the skills and areas where we deliver DB2 services. And of course I am now proving to the world that I can actually attach files to my blog entries. Ta da!

Recognition is Sweet!

Posted by Kim May on January 27, 2009 under DB2 for VSE&VM, DB2 Migrations, WAVV.

For an organization that has been doing essentially the same thing for over twenty years, it's always wonderful to be recognized for our experience. Frank cringes when I tell customers our consultants come in two sizes – bald and gray – but the truth is that it takes a long time to gather the skills that make a consulting team experienced and comfortable directing others.

So rather than judging our consultants by their biological age, we sometimes measure their DB2 Age by the version where they started their careers – yes, sort of like dog years. I can honestly write that we can start a discussion at pre-DB2 SQL/DS, IBM's first relational database, later renamed DB2 Server for VSE&VM.

Over the years we’ve worked with a number of DB2 Server for VSE&VM customers and have watched all but a handful move onward and upward to newer versions of DB2 and yes, sadly, SQL Server and Oracle.  We have also watched many of these customers suffer through poorly planned migrations.  Frank and I have tried for years to persuade IBM to create a formal migration roadmap for customers with little success – until now!

Starting this week IBM has a page on ibm.com endorsing and recommending The Fillmore Group to organizations ready to modernize. The phone hasn't started ringing yet, but I am hoping the remaining DB2 Server for VSE&VM customers find us before they migrate so we can make the process as straightforward as possible. Knowledge is power! Check it out at the link below:

http://www-01.ibm.com/software/data/db2/migration/upgrade-vse.html?wm=7100001f4734&cm_sp=ZZ999-_-SWB00-_-4734

db2cli.ini Mysteries Revealed!

Posted by Frank Fillmore on January 16, 2009 under DB2 for Linux Unix Windows, DB2 for VSE&VM, DB2 for z/OS.

After a Winter Solstice break, I’m back to blogging.

In recent days, I have been working with customers who have distributed applications that access a host DB2 system: one is DB2 for z/OS and the other is DB2 for VSE.  In each case, the customer is using a tool – Visual Basic and a Hyperion three-tier (browser, mid-tier server, and host database) reporting tool respectively – that exploits the ODBC API.  While ODBC has been around for decades (I used it to access SQL/DS data on VM with MS Excel at The World Bank in 1991), some review is in order.

Open Database Connectivity (ODBC) is a "lingua franca" or common language used primarily by Windows applications to access data stored in a variety of databases, data stores, and file systems. The implementation of the ODBC API meant that each desktop productivity tool or programming language didn't need an individual interface for each of the possible back-end data stores that customers might want to access. For example, over the years I have used WordPerfect, Excel, Access, DataDirect Explorer, Hyperion, PowerBuilder, Visual Basic, FoxPro, and Clipper (among many others) to interface with data stores as disparate as DB2, Oracle, SQL Server, Sybase, and Access. ODBC sat between the application or tool and the data store, transforming a request into a common ODBC call and then delivering to the database a request it understood – in most cases an SQL Data Manipulation Language (DML) statement such as SELECT or UPDATE. One final note: IBM's Call Level Interface (CLI) is a superset of the ODBC API.
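To make the API concrete, here is a minimal C sketch of an ODBC/CLI program that connects to a cataloged database alias and fetches a few rows. MY_DB, userid, and password are placeholders (they match the db2cli.ini example later in this post), and error handling is pared down to keep it short.

/*
 * Minimal ODBC/CLI sketch: connect to a cataloged alias and run a
 * read-only SELECT.  MY_DB, userid, and password are placeholders.
 */
#include <stdio.h>
#include <sql.h>
#include <sqlext.h>

int main(void)
{
    SQLHENV  env  = SQL_NULL_HENV;
    SQLHDBC  dbc  = SQL_NULL_HDBC;
    SQLHSTMT stmt = SQL_NULL_HSTMT;
    SQLCHAR  name[129];
    SQLLEN   ind;

    /* Allocate an environment handle and ask for ODBC 3.x behavior. */
    SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &env);
    SQLSetEnvAttr(env, SQL_ATTR_ODBC_VERSION, (SQLPOINTER)SQL_OV_ODBC3, 0);

    /* Connect to the data source (the cataloged database alias). */
    SQLAllocHandle(SQL_HANDLE_DBC, env, &dbc);
    if (!SQL_SUCCEEDED(SQLConnect(dbc, (SQLCHAR *)"MY_DB", SQL_NTS,
                                  (SQLCHAR *)"userid", SQL_NTS,
                                  (SQLCHAR *)"password", SQL_NTS))) {
        fprintf(stderr, "connect to MY_DB failed\n");
        return 1;
    }

    /* Execute a SELECT; the driver delivers to the server a DML request
       it understands, exactly as described above. */
    SQLAllocHandle(SQL_HANDLE_STMT, dbc, &stmt);
    SQLExecDirect(stmt,
        (SQLCHAR *)"SELECT NAME FROM SYSIBM.SYSTABLES FETCH FIRST 5 ROWS ONLY",
        SQL_NTS);

    /* Bind the single output column and fetch the rows. */
    SQLBindCol(stmt, 1, SQL_C_CHAR, name, sizeof(name), &ind);
    while (SQL_SUCCEEDED(SQLFetch(stmt)))
        printf("%s\n", (char *)name);

    /* Tidy up: statement, connection, environment. */
    SQLFreeHandle(SQL_HANDLE_STMT, stmt);
    SQLDisconnect(dbc);
    SQLFreeHandle(SQL_HANDLE_DBC, dbc);
    SQLFreeHandle(SQL_HANDLE_ENV, env);
    return 0;
}

Compile and link it against the CLI/ODBC driver library supplied with your DB2 client. The catalog table queried here exists on DB2 for z/OS and DB2 for Linux, Unix, and Windows; DB2 Server for VSE&VM names its catalog tables differently, as noted further down.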

The reason for this lengthy introduction is to establish a foundation to talk about how you can manage and manipulate the way an ODBC tool or application interacts with DB2 as a back-end database. In order to access DB2 on a host (z/OS, System i, z/VSE, z/VM) or distributed (Linux, Unix, Windows) platform, there needs to be a DB2 client. The client can be DB2 Connect to access a host database, a version of DB2 distributed database (e.g. Enterprise Server Edition), or the new consolidated Data Server Client. The client might be located on a mid-tier platform along with a web server. End users would only need a browser to connect to the web server; the mid-tier application would issue calls to the database. Or each end user can have an appropriate client installed on their desktop.

Wherever the client is located in your topology, there will likely be a directory called SQLLIB and in it a file called db2cli.ini. The db2cli.ini file contains a series of parameters that govern the ODBC interface to your database. When configuring the client database connection(s), you need to specify that they will accept ODBC API statements. This can be done when manually configuring the database connection via the GUI wizard. Alternatively, you can issue the following command from the DB2 Command Window:
db2 catalog system odbc data source <database alias>
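
For context, here is a hypothetical end-to-end cataloging sequence from the DB2 Command Window; the node name, host name, port, and database name are placeholders, and a host database reached through DB2 Connect would typically need a DCS directory entry as well:

db2 catalog tcpip node MYNODE remote myhost.example.com server 50000
db2 catalog database MYDB as MY_DB at node MYNODE authentication server
db2 catalog system odbc data source MY_DB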

So what does the db2cli.ini file look like and what are the important parameters?

Let’s say we have cataloged a connection to a remote DB2 database using the alias MY_DB.  We specified that this is an ODBC database by either of the two methods mentioned above.  Here is the db2cli.ini entry.  You will have one entry for each database connection that you have cataloged.

[MY_DB]
UID=userid
PWD=password
SysSchema=SYSIBM
SchemaList="'SYSIBM','PAYROLL','FINANCE'"
DeferredPrepare=1
ConnectTimeout=10
ReceiveTimeout=120
LockTimeout=60
AppendForFetchOnly=1
AutoCommit=1
ConnectType=1
CursorHold=0
TxnIsolation=1

So what do we have here?  Overall I have configured an ODBC read-only reporting tool to access DB2.  Note: the explanations below are based on the values chosen for each parameter in the example.

You might or might not want to embed the userid and password in a clear-text file. Some shops do this to minimize authentication and privilege maintenance by specifying a single userid and password to be used by everyone for database connections.

SysSchema gives the creator or owner name of the DB2 system catalog (metadata) tables on the target server. Typically this is SYSIBM, although on DB2 Server for VSE&VM it's SYSTEM. SchemaList is a list of the creators/owners of the tables that will be accessed by this application. By limiting the number of schemas, you reduce the time it takes to populate the table names in GUI point-and-shoot SQL generation tools like MS Query. Note that each schema name is enclosed in single quotes (') and the entire string is enclosed in double quotes (").
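To see why a short SchemaList matters, consider the sort of catalog query a point-and-shoot tool issues to build its table list. It is roughly something like the statement below; the exact catalog tables vary by platform, and the schema names are simply the ones from the example entry:

SELECT CREATOR, NAME
  FROM SYSIBM.SYSTABLES
 WHERE CREATOR IN ('SYSIBM', 'PAYROLL', 'FINANCE')
 ORDER BY CREATOR, NAME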

DeferredPrepare reduces network traffic by sending the dynamic SQL prepare along with the execute request. The three adjacent Timeout parameters will cause the request to fail if a connection to the server database is not made within 10 seconds, data is not returned from the server within 120 seconds, or a lock wait exceeds 60 seconds, respectively. Obviously these thresholds are only examples and will vary from shop to shop. The benefit to setting them is that in most cases the default timeouts are indefinite. When users get tired of waiting, they cancel the application or reboot. That means resources will continue to be consumed on the DB2 server even after the end user has given up. In addition, by setting and tuning these parameters, the database administrator can focus on the reasons user requests are lengthy (e.g. timeouts due to I/O waits vs. locking).
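If you would rather not edit db2cli.ini by hand, the same keywords can be set and verified from the DB2 Command Window. A sketch using the timeout values above, with MY_DB as the section name:

db2 update cli cfg for section MY_DB using ConnectTimeout 10
db2 update cli cfg for section MY_DB using ReceiveTimeout 120 LockTimeout 60
db2 get cli cfg for section MY_DB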

AppendForFetchOnly means that FOR FETCH ONLY will be added to the end of all SQL SELECT statements. These unambiguous read-only selects – remember, this is a read-only reporting database – might help with blocking from the server to the application (more on this in a future post). AutoCommit makes each individual SQL statement sent to the server its own logical unit of work; no ROLLBACK WORK will be possible. ConnectType is set for Remote Unit of Work, meaning that only a single database will be accessed within a logical unit of work. CursorHold is set to destroy cursors after each commit (in this case after each SQL request, because AutoCommit is also specified). TxnIsolation will use the uncommitted read ("dirty" read) isolation level to avoid locking.
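Putting those settings together, a statement the reporting tool generates ends up behaving roughly like the hand-written query below (PAYROLL.EMPLOYEE is a hypothetical table from the SchemaList above; the isolation level itself is set on the connection rather than appended to the statement text):

-- read-only, no held cursor, committed immediately, uncommitted read
SELECT EMPNO, LASTNAME
  FROM PAYROLL.EMPLOYEE
  FOR FETCH ONLY
  WITH UR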

The goal here is to optimize SQL SELECT statements and to minimize DB2 server resource consumption as much as possible. Your application profile might be very different and require different settings for these parameters. For example, if your application has complex update logic and many SQL statements comprise a single logical unit of work, you won't want to specify AutoCommit=1.
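As a purely hypothetical contrast, an order-entry style application that wraps several statements in one logical unit of work might use an entry more like this (ORDER_DB is an invented alias and the values are illustrative):

[ORDER_DB]
UID=userid
PWD=password
AutoCommit=0
CursorHold=1
ConnectType=1
TxnIsolation=2
DeferredPrepare=1
LockTimeout=60

Here the application issues its own COMMIT or ROLLBACK, cursors survive commits, and queries read only committed data (cursor stability) instead of taking dirty reads.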

A full description of all of these parameters, and many more, can be found in the DB2 Information Center (http://publib.boulder.ibm.com/infocenter/db2luw/v9/index.jsp).