Monday, September 27, 2010

Debugging CICS COBOL programs using MicroFocus COBOL Animator (for Open Systems)

Animator is a MicroFocus tool for debugging your COBOL programs. It intercepts the execution of an application program, showing execution of your code line by line. Application screens are displayed in a different window, as and when required. This process is referred to as animating or animation of a COBOL program.

For more information on Animator, see the Micro Focus documentation.

TXSeries supports cross-session debugging with Micro Focus Server Express COBOL. Cross-session debugging lets you run Animator in a different terminal window from the one in which the program being debugged is running.
This page describes how to configure Animator to animate CICS COBOL application programs.

Animating a CICS COBOL program

1. Create the required TD and PD entries so that TXSeries can find and execute your COBOL application program.
2. Run CADB and set up the program for debugging. (Let's call this terminal session TERM1.) (See screenshot)

In the above screen, the following fields are required.

1. COBANIMSRV Id - Any string (max. 40 characters) containing only letters or numbers

2. PROGRAM : The name of the program to be debugged/animated. Note: this must be the same name as that of the PD entry.

Press enter to accept the debugging values in CADB.

3. In a new terminal session (let's call it TERM2), set the following variable:

export COBANIMSRV=<value of "COBANIMSRV Id" from the CADB screen>

4. Now, in TERM2, run the command


The screen should be blank, waiting for execution of a COBOL program.

5. Go back to TERM1, and run the transaction, which invokes the COBOL program.

E.g. cicslterm -r TXREG1 -t "transaction_name"

6. Now, in TERM2, you should see your COBOL code in Animator. (See screenshot)

7. You can now perform the various debugging tasks that Animator provides.

E.g. To execute the program, just press 'g' (Go). You will see each line of code highlighted as the program executes.
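Putting the steps above together, the two-terminal flow looks like this in outline. The region name TXREG1 and the Id MYDEBUG are example names, and the Animator cross-session command itself is omitted here, as above:

```shell
# TERM1: run CADB from a CICS terminal session and enter, for example,
#   COBANIMSRV Id : MYDEBUG   (any string of letters/digits, max. 40)
#   PROGRAM       : name of the PD entry to debug
# then press Enter to accept the values.

# TERM2: point the cross-session debugger at the same Id
export COBANIMSRV=MYDEBUG
# start the Animator cross-session session here; the window stays
# blank until a matching COBOL program begins executing.

# TERM1: run the transaction that invokes the COBOL program;
# TERM2 then shows the source in Animator (press 'g' to Go).
cicslterm -r TXREG1 -t "transaction_name"
```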

Remote Task Information (or Transaction life mapping) feature in TXSeries : What is it?

Well, if you have been using TXSeries, one of the more difficult problems you will have come across is debugging transaction error scenarios that span multiple regions. Correlating critical data for a transaction or task across all the regions involved is essential when debugging such scenarios. In most situations, a TXSeries deployment is a multi-region setup whose regions are connected using intersystem communication facilities (DPL, transaction routing and so on). So, in a typical setup, three TXSeries regions are connected to each other, and workload is distributed across them using the LINK API.
The remote task information (also called transaction life mapping) feature in TXSeries 7.1 helps gather information about the corresponding task in the immediate front-end region and in the originating region. For example, in the above figure, if you are looking at the CPMI transaction in region “C”, you can get task information about CPMI running in region “B” and about task T1 running in region “A”. Critical information such as process ID, task ID, program name, terminal ID (the originating terminal) and task start time is generated. Today, this feature logs data in an extrapartition TDQ under the /var/cics_regions/<region_name>/data directory.

That is what the feature is; now let's look at a few scenarios where it comes in handy…
Making sense of the monitoring data…
Usually, when a user reports that transaction response times are long (i.e. slow transactions), one of the immediate requests is to generate monitoring information. This information comes with various parameters such as transaction start time, elapsed time and so on, which is quite handy for figuring out where the delay is caused. However, for transactions spanning multiple regions, this monitoring information alone usually does not help. If the problem occurs within the span of a single region it may still be enough, but when the monitoring information shows no apparent issue, that is when correlation across the various regions becomes important.

The data generated by the remote task information feature helps the user understand the behavior of the transaction before it landed in the current region. For example, using data such as the PID, TID and transaction/task start time, it is easy to work out where the transaction spent its time. If the response time within a given region is fine, the correlation lets us see whether the delay happens, for example, over the network.

Sometimes, when all the transactions in the back-end region run under the CPMI name, the remote task information comes in handy to differentiate them while relating monitoring data across regions. Typically, the PID and task ID of each of those transactions can be mapped using the remote task information data.
Task/Transaction hanging in the backend region…
On many occasions, we see transactions hanging in the back-end region because of some issue with the corresponding front-end transaction. For example, an application server process on the front end may have been terminated (possibly due to a timeout and force purge), or it may itself be hanging for some reason. To relate the hanging back-end transaction to the front-end transaction, the data generated by remote task information can be used to identify the triggering PID and transaction ID. This correlation can help isolate the problem to a particular region or process. One such scenario is a back-end transaction hanging because the front-end process went away after a force purge triggered by a client timeout; simply tweaking the timeout value can solve this. Even when the problem is more complex, correlating the data helps isolate hung processes and map the overall life of the transaction, giving a better understanding of the scenario.

There are more cases in which this feature can be handy. For example, one of the pieces of data you can obtain is the terminal ID from which the transaction was initiated. This can help identify the branches or end clients that are initiating the transactions.

Remote task information is not enabled by default in your TXSeries 7.1 region. The parameters “EnableTaskInfo”, “SendTaskInfo” and “DumpRemoteInfo” need to be set to “yes” in the region stanza.
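For example, assuming the standard cicsupdate syntax against the region definition (RD) stanza (the region name MYREG is a placeholder), the three attributes could be set like this:

```shell
# Turn on remote task information for region MYREG
# (attribute names are from the text; the cicsupdate/RD usage
# shown here is an assumption - check your TXSeries documentation)
cicsupdate -c rd -r MYREG EnableTaskInfo=yes SendTaskInfo=yes DumpRemoteInfo=yes
```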

Overall, a very handy feature, especially for troubleshooting. It is currently supported with the CICS_TCP and PPC_TCP protocols, and only across regions running TXSeries 7.1; it is not supported with prior versions of TXSeries. Also, since remote task information generates a lot of data, you may want to enable it only when needed, to avoid any degradation in performance.
The TXSeries library would help you find more information on the usage of this feature.

Do comment on the feature and let me know if any of you readers have any suggestions…

Friday, September 24, 2010

TXSeries Region restart taking longer than expected???


             If a TXSeries CICS region is stopped with the -f option, or a transaction using an SFS server file terminates abnormally and the region is then started with StartType=cold, one possible reason for the region taking longer than expected to start is open file descriptors, if the region is configured with an SFS server. Even when the region is stopped forcefully, the OFDs related to that region sometimes remain open.

             To check the open file descriptors for a specific region, run:
sfsadmin list ofds -server /.:/cics/sfs/<sfs server Name> |grep <region Name>

             A file can become unavailable if it is not closed properly. For instance, if a client of an SFS server terminates abnormally, files that were exclusively opened by that client may remain open, preventing access by other users. While restarting, the region waits on each such file, so the more open file descriptors there are, the longer the region start takes.

             You can terminate the OFDs using the following command:

sfsadmin terminate ofd -server /.:/cics/sfs/<sfs server Name> <ofd number>
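When many OFDs are left open, the two commands above can be combined into a small cleanup loop. This is a sketch: the server and region names are examples, and it assumes the OFD number is the first column of the listing output, so verify the column position against your sfsadmin output before using it:

```shell
SERVER=/.:/cics/sfs/SFSSRV   # example SFS server name
REGION=MYREG                 # example region name

# Terminate every OFD held by the region
# (first-column OFD number is an assumption - check your listing format)
for ofd in $(sfsadmin list ofds -server "$SERVER" | grep "$REGION" | awk '{print $1}'); do
    sfsadmin terminate ofd -server "$SERVER" "$ofd"
done
```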

             For more information on OFDs, see the TXSeries SFS documentation.

Thursday, September 16, 2010

Resolving hostname issue in UNIX Platforms

A very common stumbling block when configuring a region on HPIA and other UNIX machines is the hostname issue. There are many instances where your configuration looks up the hostname and gets a "-s", or a blank, or you simply miss checking it. The hostname represents the DNS name of the machine on the network, and it is used in multiple places, for instance when setting the default SFS server name.

The following console log is usually seen when no hostname is set and the region fails to start:

ERZ010144I/0375 09/06/10 14:35:11.313616000 ISCPTF02 22554/0001 : Application server 102 started
ERZ044012E/0023 09/06/10 14:35:11.317421000 ISCPTF02 22553/0001 : Unable to convert TCP/IP host name '' into a network adapter address for LD entry 'TCP'. Error number was 2.
ERZ044009E/0010 09/06/10 14:35:11.319621000 ISCPTF02 22553/0001 : CICS listener 'TCP' process 'cicsip' start was unsuccessful
1 22552 10/09/06-14:35:11.458937 6c1c041c T ppc_tcpSockInit: gethostbyname error: -s not found
1 22552 10/09/06-14:35:11.459940 0000000c T /cics/FSB/encsrc/src/ppc/tcp/ppc_tcpSock.c 869

ERZ010040I/0055 09/06/10 14:35:11.460257000 ISCPTF02 22552/0001 : CICS control process 'cicsas' terminated
1 22554 10/09/06-14:35:11.464153 6c1c041c T ppc_tcpSockInit: gethostbyname error: -s not found
1 22554 10/09/06-14:35:11.464635 0000000c T /cics/FSB/encsrc/src/ppc/tcp/ppc_tcpSock.c 869
ERZ010040I/0055 09/06/10 14:35:11.464837000 ISCPTF02 22554/0001 : CICS control process 'cicsas' terminated

Steps to resolve the issue:

1. Go to sam (type "sam") on the command line for HPIA; similarly, on AIX, type smitty.
2. Select Networks and Communications, and then go to Hosts.
3. It will show you the suggested names and the current names. Modify as required and save the configuration. Your hostname should now be displayed.
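Before reaching for sam or smitty, a quick command-line check can confirm whether the hostname is actually the problem:

```shell
# The hostname must be non-empty and should resolve; checking
# /etc/hosts is a simple first pass (DNS may also be involved).
hostname
grep -w "$(hostname)" /etc/hosts || echo "hostname not found in /etc/hosts"
```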

Steps to configure a TXSeries region with a particular language

IBM TXSeries for Multiplatforms supports multiple locales. Details of the supported locales can be found in the Information Center.

Here are the steps to configure a TXSeries region with a particular language (locale):

1) Create a new region
cicscp -v create region japreg

2) Set up the region environment to work with the language.
Suppose we are configuring a region with the Japanese language; we need to export the following environment variables in the region's environment file.
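The exact variables are platform-dependent. Purely as an illustration, a Japanese region on AIX might have lines like these in its environment file; the file path /var/cics_regions/japreg/environment and the Ja_JP locale name are assumptions, based on the console message shown later in this post:

```shell
# Illustrative locale settings for a Japanese region on AIX;
# adjust the locale name to what 'locale -a' reports on your system
LANG=Ja_JP
LC_ALL=Ja_JP
```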

3) Copy the Japanese map files into the region's map directory.
cp $CICSPATH/msg/Ja_JP /var/cics_regions/japreg/maps/prime

Copying the Japanese map files into the region's prime folder enables the region to pick up the Japanese map files when running CICS-supplied transactions such as CSTD, CEBR, etc.

4)Start the region
cicscp -v start region japreg StartType=cold

To check whether the region is properly configured with the Japanese language, we can look for the ‘ERZ010135I’ message in the console file of the region:

ERZ010135I/0362 09/15/10 23:21:12.642808969 test2 46208/0001 : CICS region 'japreg' is being started with locale categories Ja_JP Ja_JP Ja_JP Ja_JP Ja_JP Ja_JP
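To spot that message quickly, you can grep the region's console file; the exact console file name varies, so the console* wildcard used here is an assumption:

```shell
# Look for the locale start-up message in the region's console file(s)
grep ERZ010135I /var/cics_regions/japreg/console*
```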

Boost your application performance using SET NEWCOPY

Caching a program saves reloading costs, thereby improving performance.
For a Micro Focus Server Express or Net Express COBOL program, the SET PROGRAM NEWCOPY, SET PROGRAM COPY(NEWCOPY), SET PROGRAM PHASEIN and SET PROGRAM COPY(PHASEIN) commands remove every program previously loaded by the application server, so that a fresh copy of each such program is used after one of these commands is run.
Here we will see how SET NEWCOPY works on Windows for both .cbmfnt and .gnt files.

1. First create a region: "cicscp -v create region test123" (say the region name is test123).
2. Take a simple "hello world" program (P1) in COBOL.
Write another program (P2) containing EXEC CICS SET PROGRAM(P1) NEWCOPY.

3. Add TD and PD entries for the first program:
cicsadd -c td -r test123 QWER ProgName=P1, which adds the transaction definition entry.
cicsadd -c pd -r test123 P1 PathName=hello, where hello is the executable.
Similarly, add TD and PD entries for the second program.

4. Compile both programs using "cicstcl -lCOBOL" on the .ccp sources to generate the .cbmfnt files, where the .ccp files are your COBOL programs.
Cold start the region using "cicscp -v start region test123 StartType=cold".
5. Run cicsteld, run the transaction of the first program P1, and observe the output.
Modify this program and recompile it.
6. Now run the transaction of the second program P2 once, then run P1 again. Observe the output; it should reflect the changes made to the first program.

The same scenario can be run using the following:

Steps to generate .gnt file on windows:
SET COBCPY=C:\opt\cics\include

Wednesday, September 15, 2010

Four Easy steps to Configure TXSeries Region with DB2

If you are going to use TXSeries with DB2 but are not sure how to start systematically, no worries... just follow the simple steps below and it's done.

1)Configure the region-

First of all, create a region (say test) with "cicscp -v create region test", and set the CICSREGION environment variable to the name of the region.

Add the following entries for the created region:

cicsadd -c td -r $CICSREGION UXA1 ProgName="UXA1C" to add the transaction definition entry

cicsadd -c pd -r $CICSREGION UXA1C PathName="uxa1" to add the program definition entry

2) Build the programs.

Compile your C program and put it in your region's bin directory. I am assuming here that your program's executable name is "uxa1", the same as used in the PD entry addition command above.

3) Enable the XA connection.

The XA standards define the interfaces between the transaction manager, the application program and the resource manager to achieve two-phase commit in a DTP (distributed transaction processing) environment. Hence you need to define the XA connection to the region, as described below, using the cicsadd command, because it is through the XAD connection that your program talks to DB2. Here I am using the default database cicstest; you can use a different database too.

cicsadd -c xad -r $CICSREGION XADdef SwitchLoadFile=cicsxadb2 XAOpen="cicstest,username,password"

Here the XAD entry uses the switch load file cicsxadb2, which is provided with TXSeries, so you don't need to build it.

The XAOpen string contains the crucial information used to make the connection: "cicstest" is your database name, and the username and password are the same as those set up during database installation.

4) Now just cold start the region with "cicscp -v start region test StartType=cold", and you are ready to run your transaction.

Monday, September 6, 2010

Getting around starting woes

If you're doing a fresh installation of TXSeries on a machine, i.e. the first time TXSeries is being installed on that machine, you might hit a couple of issues.

NLSPATH or LANG isn't set

For example, when you try to start/create a region using the command :
"cicscp -v start region XYZ StartType=cold"

You are likely to see this message :
ERZ057001E/00xx: Cannot access message catalog for message ERZ038038I
Please check if NLSPATH or LANG variable is set

This happens because the LANG environment variable is set to "C" by default; since TXSeries comes with multiple language support, the user has to set this variable.
We also need to verify that NLSPATH includes the directory (catalog) containing the CICS messages. This is usually found under the installation's msg directory: /usr/lpp/cicssm/msg (or /opt/lpp/cicssm/msg for non-AIX).
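As a sketch, the two variables can be set like this. The locale name and catalog path are assumptions, so substitute the msg directory of your own installation; %L/%N is the standard NLSPATH substitution for locale and catalog name:

```shell
# Point message-catalog lookup at the install's msg directory
# (path and locale are examples; use the values on your system)
export LANG=en_US
export NLSPATH=/usr/lpp/cicssm/msg/%L/%N:$NLSPATH
```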


Another issue you may hit when trying to start a region/SFS for the first time is SFS_NO_SUCH_FILE_SYSTEM, where the SFS server fails to start even though it exists.
In this case, we need to stop the SFS server and cold start it.

Stop : cicscp -v stop sfs_server sfsServerName
Cold Start : cicscp -v start sfs_server sfsServerName StartType=cold

(Warning : Cold starting an SFS server will erase users' files and TDQs. Never cold start an SFS server in the middle of a production environment.)

More on ERZ038255E can be found in the TXSeries documentation.

Thursday, September 2, 2010

TXSeries..CICS.. What is this all about..

Well, there is a lot of TXSeries and CICS talk here. If you are like me (i.e. like I was a few years back), you probably don't know anything about TXSeries. And perhaps you know CICS only as some kind of mainframe-specific skill.

I'll try to demystify some of these CICS/TXSeries talks to begin with in this blog.

All right.. so to start with let's figure out what CICS is..

CICS is an acronym for Customer Information Control System. A mouthful, eh?
Well, I know that doesn't tell you anything about what CICS is all about.

Ok..ok.. let me try explaining this more simply. CICS is software originally designed to run on IBM mainframe systems that helps customers develop and deploy their business applications.

What does that mean? In modern parlance, you would call CICS middleware. In a nutshell, it runs business applications for you. We'll see how that is done and what makes CICS so successful and mysterious.

So, we said CICS helps you run your applications. Let's see what is required to run applications. What do applications normally do? Three things...
  1. Take some kind of inputs from a user or give some output to a user
  2. Store or retrieve some kind of data based on some logic
  3. Correlate the inputs in some way with the stored data
If I make this a bit generic, it would translate into
  1. Ability to interact with users through user interfaces (presentation)
  2. Ability to handle data
  3. Ability to write business logic to handle the data and inputs.
The key reason for people using CICS (and why CICS is still successful after 40 years.. about that later) for managing their applications is that CICS provides all these essential services required by applications.

To begin with, on the mainframes, data used to be (and in many cases still is) stored in what are called datasets, or files in a VSAM file system. CICS gives you APIs to access those files and to read, write and modify the data in them. It is much like the SQL APIs you use today with RDBMSs. The CICS data management APIs get you the data the way you want, without your having to know where the data is or how it is stored.

Similarly, CICS gives you a set of APIs and capability for managing presentation interfaces. If you have seen old mainframe applications, you would have noticed those infamous green screens. Many of them are provided by CICS. Of course, modern day CICS applications seldom use those kind of presentation interfaces. But if you want a bare minimum application up and running all that you need is only CICS and it can give you a character based presentation layer.

Likewise, CICS also provides a number of APIs to manage business logic. You can call different programs from within applications, pass data between them, and so on.

In addition, CICS supports multiple programming languages, like COBOL, C, PL/I and Java, in which you can write your applications.

So, in essence, CICS gives you a basic infrastructure facility to write and host your applications without worrying about the details underneath, like how the data is stored, where and how the user interface is presented, or where a called program resides. It makes application programming much more transparent and allows application programmers to concentrate on what they want to do, i.e. writing business applications.

Yes, that's what makes CICS unique and interesting. That's why it is here for the past 40 years and still going strong.

Hopefully that gives you an idea of what CICS is and what it does... Will come back with more sometime later.