DFSORT is your go-to utility for sorting, merging, and transforming data. ICETOOL takes DFSORT even further: it is a versatile, multi-purpose, high-level batch front-end that uses DFSORT to perform complex data processing and reporting tasks on one or more data sets in a single job step.
Key benefits of ICETOOL:
- Runs multiple operations (copy, sort, merge, count, select) in a single job step.
- Uses simple, high-level operator statements instead of separate DFSORT steps.
- Provides built-in record statistics and reporting.

Typical use cases:
- Copying a data set to one or more outputs in a single pass.
- Counting records and writing formatted counts.
- Sorting or merging data and splitting it into subsets.
A standard ICETOOL job usually includes:
PGM=ICETOOL in the EXEC statement.
TOOLIN – contains ICETOOL control statements.
Input (IN, IN1, IN2) and output (OUT1, OUT2, RPT) DDs.
TOOLMSG and DFSMSG – for messages and diagnostics.

//ICETOOL JOB (ACCT),'ICETOOL DEMO',CLASS=A,MSGCLASS=X
//STEP1 EXEC PGM=ICETOOL
//TOOLMSG DD SYSOUT=*
//DFSMSG DD SYSOUT=*
//IN DD DSN=YOUR.INPUT.FILE,DISP=SHR
//OUT1 DD DSN=YOUR.OUTPUT.FILE1,DISP=SHR
//RPT DD SYSOUT=*
//TOOLIN DD *
* ICETOOL control statements go here
/*
The COPY operator is the simplest ICETOOL utility. It’s equivalent to a DFSORT OPTION COPY, but easier to combine with other operations.
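For comparison, the same copy could be run as a plain DFSORT step; this is a sketch, and the data set names are placeholders:

```jcl
//STEP1   EXEC PGM=SORT
//SYSOUT  DD SYSOUT=*
//SORTIN  DD DSN=YOUR.INPUT.FILE,DISP=SHR
//SORTOUT DD DSN=YOUR.OUTPUT.FILE,DISP=SHR
//SYSIN   DD *
  OPTION COPY
/*
```

The ICETOOL version avoids the fixed SORTIN/SORTOUT DD names and lets you chain further operators in the same step.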
//STEP1 EXEC PGM=ICETOOL
//TOOLMSG DD SYSOUT=*
//DFSMSG DD SYSOUT=*
//IN DD DSN=YOUR.INPUT.FILE,DISP=SHR
//OUT1 DD DSN=YOUR.OUTPUT.FILE,DISP=SHR
//TOOLIN DD *
COPY FROM(IN) TO(OUT1)
/*
Here, ICETOOL copies all records from IN to OUT1. Later, you can add more operations (such as COUNT or DISPLAY) in the same TOOLIN. To copy the same input to two outputs, add a second output DD and name both data sets in TO:
//OUT1 DD DSN=YOUR.OUTPUT.FILE1,DISP=SHR
//OUT2 DD DSN=YOUR.OUTPUT.FILE2,DISP=SHR
//TOOLIN DD *
COPY FROM(IN) TO(OUT1,OUT2)
/*
This is useful when you need to archive data in more than one place with a single read of the input file.
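Because several operators can share one TOOLIN, a single step can both copy and report; a minimal sketch using the DD names from the examples above:

```jcl
//TOOLIN DD *
  COPY FROM(IN) TO(OUT1,OUT2)
  COUNT FROM(IN)
/*
```

ICETOOL runs the operators in order, calling DFSORT once per operator, and reports each operator's outcome in TOOLMSG.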
COPY FROM(MASTER) TO(PRINT,TAPE,DISK)
One call to DFSORT, one pass over the input data set, and allows the output data sets to be specified in any order. The COPY operator copies all records from the MASTER data set to the PRINT (SYSOUT), TAPE, and DISK data sets, using OUTFIL processing.
COPY FROM(MASTER) TO(DISK,TAPE,PRINT) SERIAL
Three calls to DFSORT, three passes over the input data set, and imposes the restriction that the SYSOUT data set must not be the first TO data set. The COPY operator copies all records from the MASTER data set to the DISK data set and then copies the resulting DISK data set to the TAPE and PRINT (SYSOUT) data sets. Because the first TO data set is processed three times (written, read, read), placing the DISK data set first is more efficient than placing the TAPE data set first. PRINT must not be the first in the TO list because a SYSOUT data set cannot be read.
COPY FROM(IN) TO(DEPT1) USING(DPT1)
COPY FROM(IN) TO(DEPT2) USING(DPT2)
COPY FROM(IN) TO(DEPT3) USING(DPT3)
Three calls to DFSORT and three passes over the input data set:
- The first COPY operator copies the records from the IN data set that contain D01 in positions 5-7 to the DEPT1 data set.
- The second COPY operator copies the records from the IN data set that contain D02 in positions 5-7 to the DEPT2 data set.
- The third COPY operator copies the records from the IN data set that contain D03 in positions 5-7 to the DEPT3 data set.
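Each USING(xxxx) operand points at an xxxxCNTL DD holding the matching DFSORT statements. A sketch of what DPT1CNTL might contain (DPT2CNTL and DPT3CNTL would test D02 and D03 the same way):

```jcl
//DPT1CNTL DD *
  INCLUDE COND=(5,3,CH,EQ,C'D01')
/*
```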
COPY FROM(IN) USING(ALL3)
It uses OUTFIL statements instead of TO operands, so requires only one call to DFSORT and one pass over the input data set.
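A sketch of the ALL3CNTL data set this implies, with one OUTFIL statement per department (mirroring the SORT version shown later in this article):

```jcl
//ALL3CNTL DD *
  OUTFIL FNAMES=DEPT1,INCLUDE=(5,3,CH,EQ,C'D01')
  OUTFIL FNAMES=DEPT2,INCLUDE=(5,3,CH,EQ,C'D02')
  OUTFIL FNAMES=DEPT3,INCLUDE=(5,3,CH,EQ,C'D03')
/*
```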
COPY FROM(VSAMIN) TO(VSAMOUT) VSAMTYPE(V)
The COPY operator copies all records from the VSAMIN data set to the VSAMOUT data set. The VSAM records are treated as variable-length.
COUNT is one of the most useful ICETOOL operators for quick record statistics. It can print the total number of records and even write the count to a data set with formatted text.

//STEP1 EXEC PGM=ICETOOL
//TOOLMSG DD SYSOUT=*
//DFSMSG DD SYSOUT=*
//IN DD DSN=ORDERS.FILE,DISP=SHR
//TOOLIN DD *
COUNT FROM(IN)
/*
You’ll see a message in TOOLMSG similar to:
ICE202I 00001234 RECORDS FOUND
//COUNTMSG DD SYSOUT=*
//TOOLIN DD *
COUNT FROM(IN) WRITE(COUNTMSG) -
TEXT('TOTAL ORDERS PROCESSED: ') EDCOUNT(A1,U10)
/*

The edited record count is appended after the TEXT string in the COUNTMSG output record.

COUNT FROM(IN1)
It prints a message containing the count of records in the IN1 data set.
COUNT FROM(IN2) USING(CTL1)
It prints a message containing the count of records included from the IN2 data set.
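Here CTL1CNTL holds the DFSORT statements that decide which IN2 records are "included"; for example (the condition shown is a made-up illustration):

```jcl
//CTL1CNTL DD *
  INCLUDE COND=(5,3,CH,EQ,C'D01')
/*
```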
COUNT FROM(INPUT1) EMPTY
Sets RC=12 if INPUT1 is empty (that is, INPUT1 has no records), or sets RC=0 if INPUT1 is not empty (that is, INPUT1 has at least one record).
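A common pattern is to let the EMPTY return code gate a later step through its JCL COND parameter; a sketch, where the step and program names are assumptions:

```jcl
//CHKSTEP  EXEC PGM=ICETOOL
//TOOLMSG  DD SYSOUT=*
//DFSMSG   DD SYSOUT=*
//INPUT1   DD DSN=YOUR.INPUT.FILE,DISP=SHR
//TOOLIN   DD *
  COUNT FROM(INPUT1) EMPTY
/*
//* Bypass PROCSTEP when CHKSTEP set RC=12 (INPUT1 was empty)
//PROCSTEP EXEC PGM=YOURPGM,COND=(12,EQ,CHKSTEP)
```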
COUNT FROM(INPUT2) HIGHER(50000) RC4 USING(CTL2)
Sets RC=4 if more than 50000 records are included from INPUT2, or sets RC=0 if 50000 or fewer records are included from INPUT2.
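Here CTL2CNTL supplies the DFSORT statements that define which INPUT2 records count toward the 50000 threshold; a hypothetical sketch:

```jcl
//CTL2CNTL DD *
  INCLUDE COND=(1,5,CH,EQ,C'ERROR')
/*
```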
COUNT FROM(IN2) WRITE(CT2) TEXT('Count is ') -
EDCOUNT(A1,U10) WIDTH(80)
Prints a message containing the count of records in the IN2 data set. Writes an 80-byte record with the specified string and an edited count to the CT2 data set. If IN2 contains 3286721 records, the 80-byte output record in CT2 would look like this:
Count is 3,286,721
COUNT FROM(IN3) WRITE(CT3) DIGITS(6) SUB(2)
Subtracts 2 from the count of records in the IN3 data set. Prints a message containing the modified count. Writes a 6-byte record with the modified count to the CT3 data set. If IN3 contains 8125 records, the 6-byte output record in CT3 would look like this: 008123.
DATASORT FROM(INPUT) TO(OUTPUT) HEADER TRAILER USING(CTL1)
This example illustrates how you can sort the data records between a header record (first record) and a trailer record (last record). The CTL1CNTL data set contains:
SORT FIELDS=(16,13,CH,A)
DATASORT FROM(IN) TO(OUT) HEADER(2) TRAILER(3) USING(CTL2)
This example illustrates how you can sort the data records between header records (first records) and trailer records (last records), and modify just the data records or the header, data and trailer records. The CTL2CNTL data set contains:
INREC IFTHEN=(WHEN=(24,2,CH,EQ,C'23'),
  OVERLAY=(30:C'Old'))
SORT FIELDS=(1,14,CH,A)
OUTFIL FNAMES=OUT,
  IFTHEN=(WHEN=(24,2,CH,EQ,C'23'),
  OVERLAY=(35:C'First'))

//TOOLIN DD *
MERGE FROM(IN01,IN02,IN03,IN04,IN05) TO(OUTPUT) USING(MERG)
//MERGCNTL DD *
OPTION EQUALS
MERGE FIELDS=(21,4,CH,A)
/*
This example merges 5 input files to an output file. EQUALS is used to ensure that records that collate identically are output in the order specified in the FROM operand.
For example, if IN01, IN03 and IN05 all have records with a key of 'AAAA' in positions 21-24, the output will contain the 'AAAA' record from IN01, the 'AAAA' record from IN03 and the 'AAAA' record from IN05, in that order.
//TOOLIN DD *
MERGE FROM(INPUT1,INPUT2,INPUT3,INPUT4) -
FROM(INPUT5,INPUT6,INPUT7) VSAMTYPE(F) USING(MRG1)
//MRG1CNTL DD *
MERGE FIELDS=(52,8,UFF,D)
OUTFIL FNAMES=OUT1,INCLUDE=(15,3,SS,EQ,C'D21,D33')
OUTFIL FNAMES=OUT2,SAVE
/*
This example merges 7 input files to 2 output files. It uses two OUTFIL statements to create the two output files; each output file will have a different subset of the merged records. VSAMTYPE(F) tells DFSORT the record type is F (only needed for VSAM input files).
RANGE FROM(DATA1) ON(VLEN) HIGHER(10)
RANGE FROM(DATA2) ON(31,18,ZD) LOWER(+123456789012345678)
RANGE FROM(DATA3) ON(29001,4,FI) -
HIGHER(-10000) LOWER(27)
RANGE FROM(DATA2) ON(45,3,PD) EQUAL(-999)
RANGE FROM(DATA3) ON(40,1,BI) NOTEQUAL(199)

The first RANGE operator prints a message containing the count of binary values from positions 1-2 of the DATA1 data set that are higher than 10.
The second RANGE operator prints a message containing the count of zoned decimal values from positions 31-48 of the DATA2 data set that are lower than 123456789012345678.
The third RANGE operator prints a message containing the count of fixed-point values from positions 29001-29004 of the DATA3 data set that are higher than -10000 but lower than 27.
The fourth RANGE operator prints a message containing the count of packed decimal values from positions 45-47 of the DATA2 data set that are equal to -999.
The fifth RANGE operator prints a message containing the count of binary values from position 40 of the DATA3 data set that are not equal to 199. This RANGE operator could be used to count the number of records that do not have 'G' in position 40, because 199 (X'C7') is the EBCDIC code for 'G'. Alternatively, the COUNT operator could be used with OMIT COND=(40,1,CH,EQ,C'G').
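The COUNT alternative mentioned above would look like this (CTL9 is an arbitrary name chosen for this sketch):

```jcl
//TOOLIN DD *
  COUNT FROM(DATA3) USING(CTL9)
/*
//CTL9CNTL DD *
  OMIT COND=(40,1,CH,EQ,C'G')
/*
```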
RANGE focuses on numeric ranges and is perfect for threshold‑based reporting.
Record layout:
20–25: Amount (BI – binary integer)
//TOOLIN DD *
RANGE FROM(IN) -
ON(20,6,BI) HIGHER(999) LOWER(5001)
/*

The message in TOOLMSG shows how many records have an Amount between 1000 and 5000 (that is, higher than 999 and lower than 5001); RANGE reports its count in TOOLMSG rather than writing to an output DD. You can use different limits or multiple RANGE statements to segment data.
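To segment the data into bands, you could stack several RANGE statements in one TOOLIN; a sketch using the same Amount field:

```jcl
//TOOLIN DD *
  RANGE FROM(IN) ON(20,6,BI) LOWER(1000)
  RANGE FROM(IN) ON(20,6,BI) HIGHER(999) LOWER(5001)
  RANGE FROM(IN) ON(20,6,BI) HIGHER(5000)
/*
```

Each RANGE prints its own count in TOOLMSG: values below 1000, between 1000 and 5000, and above 5000.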
RESIZE FROM(IN1) TO(OUT1) TOLEN(40)
The IN1 data set has RECFM=FB and LRECL=10 with these 10-byte records:
Bird Bluejay 4 Charlie Rodent Rat 2 Sara
The OUT1 data set has RECFM=FB and LRECL=40 with these 40-byte records:
Bird Bluejay 4 Charlie Rodent Rat 2 Sara
RESIZE FROM(OLD) TO(NEW) TOLEN(15) USING(CTL1)
This example illustrates how you can create larger records from a subset of smaller sorted records. The CTL1CNTL data set contains:
OMIT COND=(2,4,ZD,EQ,0)
SORT FIELDS=(1,1,CH,A)
The OLD data set has RECFM=FB and LRECL=5 with these 5-byte records:
C0005
B0000
A0008
I1234
F0053
D0123
H0001
G0000
E0022
The NEW data set will have RECFM=FB and LRECL=15 with these 15-byte records:
A0008C0005D0123
E0022F0053H0001
I1234
Note that before the records were resized, the two records with 0 in positions 2-5 were omitted, and the remaining records were sorted as directed by the DFSORT control statements in CTL1CNTL. The last output record was padded with blanks on the right to 15 bytes.
RESIZE FROM(IN3) TO(OUT3) TOLEN(3) USING(CTL2)
This example illustrates how you can break up large records into multiple smaller records. The CTL2CNTL data set contains:
OUTFIL FNAMES=OUT3,OMIT=(1,3,CH,EQ,C' '),OVERLAY=(10:X)
The IN3 data set has RECFM=FB and LRECL=18 with these 18-byte records:
000111222333444555
666777888999
Each 18-byte IN3 record is broken up into 3-byte records, which are then padded on the right with blanks to 10 bytes. TOLEN(3) indicates that the resized records will have a length of 3 bytes. OVERLAY=(10:X) expands each resized record to 10 bytes in OUT3. OMIT=(1,3,CH,EQ,C' ') removes any resized records that are completely blank (that is, the two blank resized records resulting from the blanks in positions 13-18 of the second input record).
The OUT3 data set will have RECFM=FB and LRECL=10 with these 10-byte records:
000
111
222
333
444
555
666
777
888
999
SELECT FROM(INPUT) TO(DUPS) ON(11,8,CH) ON(30,44,CH) ALLDUPS
Sorts the INPUT data set to the DUPS data set, selecting only the records from INPUT with characters in positions 11-18 and characters in positions 30-73 that occur more than once (that is, only records with duplicate ON field values).
SELECT FROM(INPUT) TO(ONLYONE) ON(23,3,FS) NODUPS
Sorts the INPUT data set to the ONLYONE data set, selecting only the records from INPUT with floating sign values in positions 23-25 that occur just once (that is, only records with no duplicate ON field values).
SELECT FROM(FAILURES) TO(CHECKOUT) ON(28,8,CH) ON(1,5,CH) - HIGHER(3)
Sorts the FAILURES data set to the CHECKOUT data set, selecting only the records from FAILURES with characters in positions 28-35 and characters in positions 1-5 that occur more than three times (that is, only records with four or more duplicate ON field values).
SELECT FROM(BOOKS) TO(PUBLISHR) ON(29,10,CH) FIRST
Sorts the BOOKS data set to the PUBLISHR data set, selecting only the records from BOOKS with characters in positions 29-38 that occur only once and the first record of those with characters in positions 29-38 that occur more than once (that is, one record for each unique ON field value).
SELECT FROM(BOOKS) TO(PUBLISHR) ON(29,10,CH) FIRST - DISCARD(SAVEREST)
This example creates the same PUBLISHR data set as Example 4. In addition, it creates a SAVEREST data set that contains all of the records not written to the PUBLISHR data set.
SELECT FROM(MASTPULL) TO(MATCH) ON(5,8,CH) FIRSTDUP
This example shows how you can use a list of account numbers in a “pull” data set to only select records with those account numbers from a “master” data set. The MASTPULL DD would have the “master” data set and “pull” data set concatenated together (in that order).
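The concatenation described above would look like this in JCL (the data set names are placeholders):

```jcl
//MASTPULL DD DSN=YOUR.MASTER.FILE,DISP=SHR
//         DD DSN=YOUR.PULL.FILE,DISP=SHR
```

With FIRSTDUP, a master record is written to MATCH only when a pull record with the same account number in positions 5-12 follows it, so only master records whose keys appear in the pull data set are selected.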
SELECT FROM(INPUT) TO(ONLYONE) ON(23,3,FS) NODUPS USING(CTL1)
This example shows how you can use USING(xxxx) to supply an OUTFIL statement to modify the TO data set. SELECT chooses the same output records as the earlier NODUPS example, but an OUTFIL statement is used to further modify those records for output to the ONLYONE data set.
The CTL1CNTL data set contains:
OUTFIL FNAMES=ONLYONE,REMOVECC,
  INCLUDE=(23,3,FS,LT,100),
  OUTREC=(1:1,7,8:C'|',11:11,7,19:C'|',23:23,3,FS,M11,
  27:C'|',30:30,15),
  TRAILER1=(/,'TOTAL= ',TOT=(23,3,FS,M11,LENGTH=6))
and the ONLYONE data set might look as follows:
DFSRT2 | EISSLER | 005 | DOC.EXAMPLES
DFSRT1 | PACKER  | 008 | ICETOOL.SMF.RUN
USR002 | EISSLER | 012 | DOC.EXAMPLES
SYS003 | YAEGER  | 032 | ICETOOL.TEST.CA

TOTAL= 000057
SORT FROM(MASTER) TO(PRINT,TAPE,DISK) USING(ABCD)
One call to DFSORT, one pass over the input data set, and allows the output data sets to be specified in any order. The SORT operator sorts all records from the MASTER data set to the PRINT (SYSOUT), TAPE, and DISK data sets, using the SORT statement in the ABCDCNTL data set and OUTFIL processing.
SORT FROM(MASTER) TO(DISK,TAPE,PRINT) USING(ABCD) SERIAL
Three calls to DFSORT, three passes over the input data set, and imposes the restriction that the SYSOUT data set must not be the first TO data set. The SORT operator sorts all records from the MASTER data set to the DISK data set, using the SORT statement in the ABCDCNTL data set, and then copies the resulting DISK data set to the TAPE and PRINT (SYSOUT) data sets. Because the first TO data set is processed three times (written, read, read), placing the DISK data set first is more efficient than placing the TAPE data set first. PRINT must not be the first in the TO list because a SYSOUT data set cannot be read.
SORT FROM(IN) TO(DEPT1) USING(DPT1)
SORT FROM(IN) TO(DEPT2) USING(DPT2)
SORT FROM(IN) TO(DEPT3) USING(DPT3)
Requires three calls to DFSORT and three passes over the input data set.
SORT FROM(IN) USING(ALL3)
It uses OUTFIL statements instead of TO operands, so requires only one call to DFSORT and one pass over the input data set.
This example shows two different methods for creating sorted subsets of an input data set. Assume that the DPT1CNTL data set contains:
SORT FIELDS=(51,2,BI,A,18,5,CH,A,58,4,BI,A)
INCLUDE COND=(5,3,CH,EQ,C'D01')
The DPT2CNTL data set contains:
SORT FIELDS=(51,2,BI,A,18,5,CH,A,58,4,BI,A)
INCLUDE COND=(5,3,CH,EQ,C'D02')
The DPT3CNTL data set contains:
SORT FIELDS=(51,2,BI,A,18,5,CH,A,58,4,BI,A)
INCLUDE COND=(5,3,CH,EQ,C'D03')
The ALL3CNTL data set contains:
SORT FIELDS=(51,2,BI,A,18,5,CH,A,58,4,BI,A)
OUTFIL FNAMES=DEPT1,INCLUDE=(5,3,CH,EQ,C'D01')
OUTFIL FNAMES=DEPT2,INCLUDE=(5,3,CH,EQ,C'D02')
OUTFIL FNAMES=DEPT3,INCLUDE=(5,3,CH,EQ,C'D03')
Here is an example of using one SORT operator for a simple JOINKEYS application.
//SRTJK EXEC PGM=ICETOOL
//TOOLMSG DD SYSOUT=*
//DFSMSG DD SYSOUT=*
//JNA DD DSN=MY.INPUTA,DISP=SHR
//JNB DD DSN=MY.INPUTB,DISP=SHR
//OUT DD SYSOUT=*
//TOOLIN DD *
* SORT operator with JOINKEYS application.
SORT JKFROM TO(OUT) USING(CTL1)
/*
//CTL1CNTL DD *
* JOINKEYS application control statements for SORT operator.
JOINKEYS F1=JNA,FIELDS=(5,4,A)
JOINKEYS F2=JNB,FIELDS=(11,4,A),SORTED
REFORMAT FIELDS=(F1:1,20,F2:5,15)
* Main task control statements for SORT operator.
OPTION EQUALS
SORT FIELDS=(1,3,ZD,A)
/*
ICETOOL utilities are indispensable for Mainframe professionals, providing robust capabilities for data processing and manipulation. By mastering ICETOOL operations, mainframe developers can streamline complex data tasks, enhance efficiency, and unlock the full potential of their data processing workflows. The examples provided serve as a starting point for exploring the diverse functionalities of ICETOOL in Mainframe environments.