USEFUL UNIX COMMAND LINE LOOPY STUFF:

Loops are not just for shell scripts! One can type (or paste) some simple but
very useful multi-line loops directly into the UNIX command line.

NOTES:

For CareManager, the various "drmtool" scripts are placed under the /u/qa/drm/
directory. For CareLink, they are placed under the /u/carelink/qa/drm/ directory.

Many of the example loops shown below require that certain new environment
variables (e.g., $dan) are "taught" to your current login shell. This can be
done by dotting the cldirs script into your current login shell. The how-to
is below.

Log into CareLink and set up some needed variables in your current UNIX
session (be sure the cldirs script is present in the path shown below):

    . /u/carelink/qa/drm/cldirs    # Note and use the leading "dot space"

This will establish the variable "$dan" and many others that are very useful
for navigating CareLink. Using these variables saves one a lot of typing (the
example immediately below transports one into the
/u/carelink/qa/config/xlate/perl/ directory):

    cd $qwap

ALERT: Many of the loops below - all for CareLink - REQUIRE the variables set
by the cldirs command above.

One can always find the latest drmtools.tar on mickey under /home3/dmartin8/
or the individual tools within it under /home3/dmartin8/drm/

===============================================================================
# The loop below is useful when an old queue file may have been
# compressed. It is conditionally uncompressed and then
# the $dan/grepque.ksh script is called to seek the
# desired string within all the queue files in both the
# train and qa environments on CareLink for a single date.
# Value of $dan is "/u/carelink/qa/drm" on the CareLink server.
# If train & qa are shadowed, the loop below is a real timesaver.
# Change the values for que and seek as needed:

que="05122004"; seek="CMSP2.DANIELLE"
for it in train qa
do
    dir="/u/carelink/${it}/queue"
    cd $dir
    for q in *
    do
        if [[ -d $q ]]
        then
            cd $q
            [[ -f "${que}.Z" ]] && uncompress ${que}.Z
            cd ..
        fi
    done
    mvqs="$dan/grepq_${it}_${que}.txt"
    $dan/grepque.ksh "$seek" $que
    mv /tmp/grepque.txt $mvqs
    cd $dan
done

NOTE: The script shadmsg.ksh incorporates the loop above and adds many other
bells & whistles, making it a better choice.

===============================================================================
# The below searches all the output files from the
# see_anyq.ksh script for a given date, seeking a specific
# string. This will only work if see_anyq.ksh has been
# run with an answer of N to the summary-only option.
# Change the values for que and seek as needed:

que="05122004"; seek="CMSP2.DANIELLE"
for it in see_anyq-*${que}.txt
do
    echo "\nFILE: \"$it\" SEEKING: \"$seek\" :" | tee -a ${seek}_${que}.txt
    sed -n "/$seek/p" $it | tee -a ${seek}_${que}.txt
    sleep 5
done

===============================================================================
# This loop quickly reveals the content of the .CmPlacerNum file in
# all three environments and the last date and time each was updated.

dqst=`date +%m%d%Y`; ofi="$dan/PlacNum_${dqst}.txt"
echo "\nAs of: `date`" | tee -a $ofi
for dir in live train qa
do
    fil="/u/carelink/${dir}/config/xlate/perl/.CmPlacerNum"
    ls -l $fil | tee -a $ofi
    cat $fil | tee -a $ofi
    echo
done

===============================================================================
# Compare queue directories among environments.
# You will have to disregard entries like ./ and ../ and config.template

outf="/tmp/qchk.txt"; echo "As of: `date`" >$outf
for it in live train qa
do
    echo "\nQueues for the CL $it environment are:" >>$outf
    ls -l /u/carelink/${it}/queue | sort | tee -a $outf
done
echo "\n Results are also captured within $outf" | tee -a $outf

===============================================================================
# The lines below check the permissions and ownership for every directory in
# $PATH, and the output will be duplicated into $ofi in the same order as the
# search order in $PATH. Any nonexistent directories in $PATH will be duly noted.

dqst=`date +%Y%m%d`; ofi="/tmp/PATH_${dqst}.txt"
echo "\nAs of: `date`\n" | tee $ofi
echo "${PATH}\n" | tee -a $ofi |
awk -F":" '{for (i = 1; i <= NF; ++i) printf("%s\n", $i)}' |
while read dir
do
    ls -ld $dir 2>&1 | tee -a $ofi
done
ls -l $ofi

ALERT: The value in $PATH is changed when programs like "live" and "clladm"
are run. To get the complete picture, the loop above should be repeated for
all three environments (live, train, and qa).

===============================================================================
# The loop below assumes that you wish to seek three different "CL"
# numbers in all the log and dbg files for a given date (they were
# previously named *0429.Z under the ./archive/ directory and all had
# already been uncompressed). Sure, sed has to pass through each *0429
# file three different times, but no matter because sed is very fast.
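If one pass per file is preferred, the three sed passes can also be collapsed
into a single egrep (grep -E) alternation. A minimal sketch, assuming the same
hypothetical CL numbers and /tmp output file as the loop that follows (you
lose the per-string banner lines, getting one combined banner per file):

```shell
# Single pass per *0429 file: the ERE alternation matches lines
# containing any of the three CL numbers at once.
for lg in *0429
do
    echo "\nFile: \"$lg\" - Strings: CL7635857|CL7635858|CL7635859:" >>/tmp/lgdb0429.txt
    grep -E 'CL7635857|CL7635858|CL7635859' $lg >>/tmp/lgdb0429.txt
done
```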
for lg in *0429
do
    for cl in CL7635857 CL7635858 CL7635859
    do
        echo "\nFile: \"$lg\" - String: \"$cl\":" >>/tmp/lgdb0429.txt
        sed -n "/$cl/p" $lg >>/tmp/lgdb0429.txt
    done
done

===============================================================================
# The compound UNIX command below will always reveal how many instances of
# ncl_sub reside on a given server, including which instance will be snagged
# first within the current $PATH if the user does not explicitly qualify the
# one to be run (ncl_sub and ./ncl_sub will likely run different versions).
# The cksum utility reveals which clones are exact copies of which others -
# but unfortunately there is no sure way to determine which iteration is the
# most current version.

date; which ncl_sub
find /u/carelink /u/ccuser /usr -name ncl_sub -print 2>/dev/null |
while read sub
do
    echo
    ls -l $sub
    cksum $sub
done

===============================================================================
# Below are two equally effective ways to employ sed to process a small
# list of CL numbers (four in the examples below) to copy out all the
# HL7 messages having any of the CL numbers from a queue file into a new file:

for cl in CL7635857 CL7635858 CL7635859 CL7635967
do
    sed -n "/$cl/p" 06022004 >>/tmp/cl_hl7.txt
done

sed -n '
/CL7140675/p
/CL7140678/p
/CL7140685/p
/CL7140696/p' 06022004 >/tmp/cl_hl7.txt

# Only the four messages within 06022004 (out of perhaps many thousands)
# that contain one of the CL numbers will be copied into the cl_hl7.txt file.

===============================================================================
# The below runs sed against a list of queue directories to seek out
# specific "CL" numbers (or any other desired strings) in both train
# and qa. All messages containing the CL numbers will be copied out.
qfi=`date '+%m%d%Y'`    # always today's queue file name (e.g., 07102003)
dan='/tmp'              # change to where you want the output to go
for e in train qa
do
    # sub different interface queues below if needed
    for q in his_in adt_in
    do
        cd /u/carelink/${e}/queue/${q}
        # sub your own CL numbers below:
        for t in CL729484 CL729485 CL729486
        do
            ofi="$dan/${t}_${e}_${q}_hl7"
            sed -n "/$t/p" $qfi >$ofi
            ls -l $ofi ; sleep 3
        done
    done
done

===============================================================================
# The loop below quickly runs the perl_audit script in all three environments:

dqst=`date +%m%d%Y`
for e in live train qa
do
    perl_audit $e >paudit_${e}_${dqst}.txt
done

===============================================================================
# The loop below takes the output from the loop above and creates
# the data files for all three environments as required by the
# chkplcall.ksh script. NOTE: Be sure to process these data
# files with the vi commands outlined within the chkplcall.txt file.

for e in live train qa
do
    cp paudit_${e}*.txt perlsubs_${e}.txt
done

===============================================================================
# This example shows how a loop and pipes can be combined to process
# the output of a script multiple times in all three environments:

for e in live train qa
do
    perl_audit $e | sed '8,$s/^........................................//' |
        sed -n '/=======/!p' | sort -u >perlsubs_${e}.txt
done

# The loop above uses sed to modify the output of the perl_audit script,
# using the same edits as shown for vi within the chkplcall.txt file.

===============================================================================
# The task is to mass-edit all of the router files so that they properly
# reference the local perl file (province.pl). A further requirement is
# that each router file will first be backed up before being edited.
# The first two lines set up variables used within the loop, including
# appropriate values for the "find-string" and "replace-string" vars.

cd /u/carelink/qa/config; dst=`date +%Y%m%d`; lpl='province.pl'
fst='#_PERL_SCRIPT_FILES_'; rst="#_PERL_SCRIPT_FILES_ $lpl"
for rut in router.*.cfg
do
    nf="${rut}_$dst"
    mv $rut $nf
    sed "s/$fst/$rst/g" $nf >$rut
done

# The above can be adapted to multiple uses by changing the var values.
# Notice the use of single-quoting for fst and double-quoting for rst and nf.
# The sed command also is double-quoted due to the need for variable expansion.

===============================================================================
EOF