Wednesday, August 29, 2012

PowerShell : Code to FTP files from Windows to a Unix server


Recently I had to write a script on Windows to interact with SQL Server and FTP for one of our client requirements.

I considered Windows batch scripting and VBScript, but PowerShell turned out to be the complete solution for me.

I am amazed by the enormous feature set of PowerShell and its ability to build and test code at the command prompt, just like Unix.
It's the best alternative for shell scripting geeks on the Windows platform.

A beginner may find PowerShell a bit complicated at first, but as you keep digging you will want to learn more and more.

Henceforth I'll be sharing useful PowerShell code that I come across.

Here is the code I wrote to transfer files over FTP from Windows to a Unix server.

# FTP FILES TO UNIX SERVER

foreach ($FILE1 in $(Get-ChildItem C:\Niraj\PowerShell\ps\abc))
{
    # Build the remote FTP URL and the local path for each file
    $ftpRemote = "ftp://UnixServerName/pathAfterUserHome/$FILE1"
    $ftpLocal  = $FILE1.FullName
    $username  = "NjUnixUsr"
    $password  = "pass1234"
    $client = New-Object System.Net.WebClient
    $credentials = New-Object System.Net.NetworkCredential -ArgumentList $username, $password
    $client.Credentials = $credentials
    $ErrorActionPreference = "SilentlyContinue"
    $client.UploadFile($ftpRemote, $ftpLocal)
    if ($? -eq $false)
    {  "$FILE1 transfer failed with error $($error[0])" }
    else
    {  "$FILE1 ftp successful" }
}

OR


# The next 3 lines create an empty file with no content in it (not even a single space or blank line).
"" > c:\nj\output\test123.txt
[array]$txt1 = Get-Content c:\nj\output\test123.txt
$txt1[1..($txt1.Count)] > c:\nj\output\test123.txt

# The next lines write every file name under c:\nj\images into test123.txt, one name per line.
$files2 = Get-ChildItem c:\nj\images
foreach ($FILE2 in $files2)
{
    "" + $FILE2 >> c:\nj\output\test123.txt
}

# Here, process each file listed in test123.txt in a loop for the FTP transfer
foreach ($FILE1 in $(Get-Content c:\nj\output\test123.txt))
{
    # Build the remote FTP URL and the local path for each file
    $ftpRemote = "ftp://UnixServerName/pathAfterUserHome/$FILE1"
    $ftpLocal  = "C:\nj\images\$FILE1"
    $username  = "NjUnixUsr"
    $password  = "pass1234"
    $client = New-Object System.Net.WebClient
    $credentials = New-Object System.Net.NetworkCredential -ArgumentList $username, $password
    $client.Credentials = $credentials
    $ErrorActionPreference = "SilentlyContinue"
    $client.UploadFile($ftpRemote, $ftpLocal)

    # Error check
    if ($? -eq $false)
    {
        "$FILE1 transfer failed with error $($error[0])"
    }
    else
    {
        "$FILE1 ftp successful"
    }
}

Monday, August 27, 2012

PSOFT : Tools delivered Import/Export dms scripts

# Version : PT8.5x

cd $PS_HOME/scripts

To import & export only PeopleTools tables (i.e. no application data)

1.  mvprdimp.dms 
2.  mvprdexp.dms

To import & export all User Profiles

1. userimport.dms
2. userexport.dms

To import & export PeopleTools security tables

1. securityimport.dms
2. securityexport.dms
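
For reference, on Unix these scripts are typically run through Data Mover in bootstrap mode. Below is a rough psdmtx command-line sketch; the flag names and the SYSADM access ID are from memory and are assumptions, so verify them against PeopleBooks for your tools release before relying on it.

# Sketch only : run the PeopleTools-only export via psdmtx in bootstrap mode
# (-CT database type, -CD database name, -CO/-CP access id & password, -FP script path are assumed flags - verify before use)
psdmtx -CT ORACLE -CD $DB_NAME -CO SYSADM -CP <accessid_password> -FP mvprdexp.dms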

Wednesday, August 22, 2012

UNIX : File compression commands


Compression commands available on AIX Unix:

  1. compress/uncompress
  2. gzip/gunzip
  3. bzip2/bunzip2


We use the tar command to bundle files/directories into a single archive so that the above compression commands can be applied to it.

Examples of Tar :

# To create tape archive of all files inside dir DIR123 in DIR123.tar
tar -cvf DIR123.tar DIR123/*

# To display the names of the files in the out.tar disk archive file in the current directory
tar -vtf out.tar

# To expand the compressed tar archive file, fil.tar.Z, pass the file to the tar command, and extract all files from the expanded tar archive file
zcat fil.tar.Z | tar -xvf -

# To extract all content of tar file
tar -xvf abc.tar

# To extract file named Niraj.txt from tar
tar -xvf abc.tar Niraj.txt

Examples of compress & Uncompress :

# To compress a file
compress file1

# To uncompress a file
uncompress file1.Z

# To compress files greater than 100mb
find . -size +104857600c | xargs compress

# To compress only files in current directory which are not compressed
for var1 in `ls -ltr | grep ^- | awk -F" " '{ print $9 }' | grep -v Z$`
do
compress $var1
echo "file $var1 compressed"
done


( gzip & bzip2 are almost identical in flag usage & syntax. They differ in their compression ability: bzip2 generally achieves better compression than gzip, at the cost of speed. )
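
A quick way to see the difference on your own data is to compress a copy of the same file with each tool and compare sizes (the file names here are just examples):

# Compare gzip vs bzip2 compression on the same file
cp ABC.txt ABC_copy1.txt ; cp ABC.txt ABC_copy2.txt
gzip ABC_copy1.txt
bzip2 ABC_copy2.txt
ls -l ABC_copy1.txt.gz ABC_copy2.txt.bz2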

Examples of gzip & gunzip

# List the files contained inside a tarred-gzipped archive into outputfile, keeping the archive intact
gzip -cd gzippedfile.tar.gz | tar tvf - >> outputfile

# gzip normal text file
gzip ABC.txt

# gzip the contents inside a directory
tar cvf - filelist* 2>> tar_error_log | gzip -c >> filelist123.tar.gz

# recover the gzipped contents (extracts into the current directory; the verbose listing is appended to filelistdir)
gzip -cd filelist123.tar.gz | tar xvf - >> filelistdir

# gunzip the gzipped file
gunzip ABC.txt.gz
OR
gzip -d ABC.txt.gz

Examples of bzip2 & bunzip2

# To compress all files under documents_folder (bzip2 itself has no recursive flag, so use find)
find documents_folder -type f | xargs bzip2

# List the files contained inside a tarred-bzipped archive into outputfile, keeping the archive intact
bzip2 -cd bzippedfile.tar.bz2 | tar tvf - >> outputfile

# bzip2 normal text file
bzip2 ABC.txt

# bzip2 the contents inside a directory
tar cvf - filelist* 2>> tar_error_log | bzip2 -c >> filelist123.tar.bz2

# recover the bzipped contents (extracts into the current directory; the verbose listing is appended to filelistdir)
bzip2 -cd filelist123.tar.bz2 | tar xvf - >> filelistdir

# bunzip2 the bzipped file
bunzip2 ABC.txt.bz2
OR
bzip2 -d ABC.txt.bz2

PSOFT : Blank Pages in PIA after Upgrading to PT 8.5x

ISSUE :
Pages are not loaded when clicking content references (CREFs) under folders in the left-hand menu or the PT 8.5x top navigation; i.e. after logging in, the PeopleSoft left-hand menu appears, but clicking any CREF inside a folder displays a blank page.
We faced this issue when we took our projects from 8.48.19 to our newly upgraded 8.51.06.

CAUSE : 
From PT 8.50 onward, the "Usage Type" value of the "DEFAULT TEMPLATE" was changed from "Frame Template" to "Inline frame template".

SOLUTION :
To resolve the issue, do the following:

Navigate to Portal Administration -> Navigation -> Structure and Content.
Select Portal Objects -> Templates.
Edit the "8.50 default template".
Change "Usage Type" from "Frame Template" to "Inline frame template".
Save.
Reboot the web and app servers for the change to take effect.

If you can't check/change this setting via PIA, issue the following query:
select * from PSPRSMDEFN
WHERE PORTAL_OBJNAME = 'DEFAULT_TEMPLATE' AND PORTAL_REFTYPE = 'C'
AND PORTAL_PRNTOBJNAME = 'PORTAL_TEMPLATES' AND PORTAL_CREF_USGT = 'FRMT'
AND PORTAL_URI_SEG1 = 'WEBLIB_PT_NAV'
AND PORTAL_URI_SEG2 = 'ISCRIPT1'
AND PORTAL_URI_SEG3 = 'FieldFormula';

If the PORTAL_CREF_USGT field value is "FRMT", then the following sql statement needs to be run to change the PORTAL_CREF_USGT value to "IFRM".

UPDATE PSPRSMDEFN SET PORTAL_CREF_USGT = 'IFRM'
WHERE PORTAL_NAME = 'XXXX_SITE_PORTAL_NAME' AND PORTAL_OBJNAME = 'DEFAULT_TEMPLATE' AND PORTAL_REFTYPE = 'C'
AND PORTAL_PRNTOBJNAME = 'PORTAL_TEMPLATES' AND PORTAL_CREF_USGT = 'FRMT'
AND PORTAL_URI_SEG1 = 'WEBLIB_PT_NAV'
AND PORTAL_URI_SEG2 = 'ISCRIPT1'
AND PORTAL_URI_SEG3 = 'FieldFormula';

Where 'XXXX_SITE_PORTAL_NAME' is the PORTAL_NAME value from the select statement.

After making the SQL update, bring down the app server and web server, clear both the app server and web server cache, and restart them (a command sketch follows below).
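
On the app server side this can be done from the psadmin command line; the web server paths below are assumptions for a typical WebLogic PIA install on PT 8.5x, so adjust the domain and site names for your environment.

# App server : stop, purge cache, restart (FSDEV is an example domain name)
cd $PS_HOME/appserv
psadmin -c shutdown -d FSDEV
psadmin -c purge -d FSDEV
psadmin -c boot -d FSDEV

# Web server : stop PIA, clear the portal cache, restart PIA
# (paths and the <site_name> placeholder are assumptions - check your own PIA install directory)
cd $PS_HOME/webserv/peoplesoft/bin
./stopPIA.sh
rm -rf ../applications/peoplesoft/PORTAL.war/<site_name>/cache/*
./startPIA.sh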

Thursday, August 16, 2012

UNIX : Sed command & useful examples


SYNTAX  :   sed [options] '{command}' [filename]

sed is a very useful stream editor that accepts a series of commands and executes them on a file (or set of files) non-interactively.

How sed Works
The sed utility works by sequentially reading a file, line by line, into memory. It then performs all actions specified for the line and places the line back in memory to dump to the terminal with the requested changes made.
After all actions have taken place to this one line, it reads the next line of the file and repeats the process until it is finished with the file.
As mentioned, the default output is to display the contents of each line on the screen.

Two important factors
1. The output can be redirected to another file to save the changes;
2.  The original file, by default, is left unchanged.
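
So a typical edit-in-place pattern on AIX (whose sed has no -i option) is to write to a temporary file and then move it back over the original; the file names here are just examples:

# Make the change into a temp file, then replace the original with it
sed 's/dba/admin/g' sample_one > sample_one.new && mv sample_one.new sample_one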

A few basic & advanced examples of sed :

1) Substitute command 's/{old value}/{new value}/'

i] Single change (replaces dba with admin)
echo "peoplesoft dba" | sed 's/dba/admin/'

ii] Multiple changes
type 1 :
$ echo I’ll go to gym after cricket match  | sed -e 's/gym/office/' -e 's/after/before/'
type 2:
$ echo I’ll go to gym after cricket match | sed 's/gym/office/; s/after/before/'
type 3:
$ echo I’ll go to gym after cricket match | sed '
> s/gym/office/
> s/after/before/'

iii] For global changes throughout file through all lines just append g at the end of command 's/after/before/g'

iv] replace each tab with a space (the pattern needs a literal tab character; type it with Ctrl-V then Tab, shown here as <TAB>)
sed 's/<TAB>/ /g'

2) Replacement with particular patterns

i] this replaces 1 with 2, but only in lines containing the string "wow"
$ sed '/wow/ s/1/2/' sample_one

ii] replaces 1 with 2 in lines containing "abc", and 1 with 3 in lines containing "xyz"
$ sed '
> /abc/ s/1/2/
> /xyz/ s/1/3/' sample_one

iii] using script file as sed options
$ cat sedlist
/abc/ s/1/2/
/xyz/ s/1/3/

and use that file as
$ sed -f sedlist sample_one

iv] substitute "html" with "xml" only in the fifth and sixth line
$ sed '5,6 s/html/xml/' sample_one

v] suppress the automatic printing of every line with -n
sed -n -f sedlist sample_one

vi] display 2nd to 6th lines
sed -n '2,6p' sample_one

3) deleting lines with syntax-> '{what to find} d'

i] To remove lines containing "man"
sed '/man/ d' sample_one

ii] delete first 3 lines
sed '1,3 d' sample_one

iii] delete the line if "sta" were the first three characters of the line
sed '/^sta/ d' sample_one

iv] delete the line only if "pky" were the last three characters of the line
sed '/pky$/ d' sample_one

v] delete all blank lines from a file
sed '/^$/ d' {filename}

vi] delete from the first line through to the first blank line
sed '1,/^$/ d' {filename}

vii] combined command 
sed '/man/ s/1/2/; /sta/ s/1/3/; /^$/ d' sample_one

4) Huge files sized in GBs can't be opened in the vi editor. Here is a trick to pull the desired log context out of such a file.
i.    grep -in "JavaException" hugefile
      will return the JavaException matching line along with its line number.
ii.   Now reduce the file length by extracting a few lines before and after the JavaException line number, as in
      sed -n 'no1,no2p' hugefile > newtempfile      ( no1 & no2 are the starting & ending line numbers )
      Now you can open newtempfile and troubleshoot the error backward & forward (see the combined sketch below).
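
Putting i and ii together (the search pattern and the 50-line offsets are just examples, and it assumes the match sits at least 50 lines into the file):

# Find the first JavaException and pull 50 lines either side of it into a smaller file
lineno=`grep -in "JavaException" hugefile | head -1 | awk -F":" '{ print $1 }'`
sed -n "`expr $lineno - 50`,`expr $lineno + 50`p" hugefile > newtempfile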

5) To comment lines from 4th to 22nd in a script and uncomment lines from 26th to 34th
sed -e '4,22 s/^/#/' -e '26,34 s/^#//' abc.sh > newfile

6) You want to read a script but don't want the unnecessary comments
sed -e '/^#/d' nirajScript.ksh | more

7) To display only the client statuses for an App Server, removing headers & any blank lines from the output.
psadmin -c cstatus -d FSDEV 2> /dev/null | sed -e '1,3 d' -e '/^$/ d' -e '$ d'

Use of the Next (N) command in sed
The Next command is used for multiline pattern space manipulation;
it allows you to juggle text that you wish to control over multiple lines.
Where the next (n) command outputs the contents of the pattern space and then reads the next line,
the Next (N) command reads the next line and appends it to the pattern space,
separating the first line from the second with a "\n" character.

8) To delete two consecutive empty lines in a file
sed '/^$/{
N
/^\n$/D
}' datafile123

9) To replace a path in a file from /home/psoft/output/ to /var/spool/report/
As / is the default delimiter of the s command, it has to be escaped with '\', or we can use another delimiter such as @.
Type 1:
sed 's@/home/psoft/output/@/var/spool/report/@g' datafile123
Type 2:
sed 's/\/home\/psoft\/output\//\/var\/spool\/report\//g' datafile123

10)sed & Usage: Substitute /home/psoft/ to /home/psoft/log
sed 's@/home/psoft@&/log@g' file123.txt

11)sed & usage : Match the whole line 
sed 's@^.*$@<<<&>>>@g' path.txt

In the above example the regexp "^.*$" matches the whole line. The replacement part <<<&>>> writes the whole line with <<< and >>> at the beginning and end of the line respectively.

Regular expressions behave slightly differently in sed, awk, grep and the shell.
So here is how they can be used in sed:

Character   Description
^           Matches the beginning of the line
$           Matches the end of the line
.           Matches any single character
*           Matches zero or more occurrences of the previous character
[ ]         Matches any one of the characters listed inside the [ ]


Examples:

Reg Expression   Description
/./              Matches any line that contains at least one character
/../             Matches any line that contains at least two characters
/^#/             Matches any line that begins with a '#'
/^$/             Matches all blank lines
/}$/             Matches any line that ends with '}' (no trailing spaces)
/} *$/           Matches any line ending with '}' followed by zero or more spaces
/[abc]/          Matches any line that contains a lowercase 'a', 'b', or 'c'
/^[abc]/         Matches any line that begins with an 'a', 'b', or 'c'
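
For instance, tying the table back to sed (the script name is just the example used earlier in this post):

# Print only the comment lines of a script (lines beginning with '#')
sed -n '/^#/p' nirajScript.ksh

# Delete lines that end with '}' optionally followed by spaces
sed '/} *$/ d' nirajScript.ksh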



ORA : How to release a lock on a table


ORA-00054 : "resource busy and acquire with NOWAIT specified or timeout expired"

Many times we come across this ORA error while trying to drop, truncate or alter a table, or create or rebuild an index.
We then need to analyze who holds a lock on the table being dropped/truncated, or on the table whose index is being created/altered, and whether that lock is valid.

Then carry out the steps below to get rid of the lock by killing the session responsible for it.
1. Connect to sqlplus using sys or system
2. select object_id from dba_objects where object_name='<tablename>'; 
3. select * from v$locked_object where object_id=<Object id found in point no.2>; 
Please Note the "oracle username" and "session_id". 
4. Or it can be done by querying  v$access 
select sid from v$access where owner='<table owner>' and object='<table name>'; 
Note session id number or "sid". 
5. select sid, serial#, command, taddr from v$session where sid=<session id number>; 
6. Now you have found the user and what they are doing.
The validity of this session needs to be investigated.
Many times it may be a leftover job or a hanging query. If it is determined that this session needs to be terminated,
go to step 7; otherwise wait until the user has completed the action. To find out what they are doing, look at the command number in the COMMAND column.
7. To terminate the session: 
alter system kill session '<session_id>,<serial_no>';
8. The session should now be killed and the lock should release.
Rechecking "v$locked_object" will confirm this. If the lock does not release immediately, a rollback may be occurring. To check this, go to step 9; otherwise dropping the table should work now.
9. To check for rollback: 
select used_ublk from v$transaction where ADDR=<value from TADDR in v$session>; 
If there is a value there, this is the number of undo blocks used by the transaction. Wait one minute and again run the same query ("select used_ublk from v$transaction where ADDR=<value from TADDR in v$session>;").
Note the value. If it is decreasing, a rollback is occurring, and based on the difference between these values you can guesstimate the time required to complete the rollback. For example, if the first query returns a value of
80000 and the second returns 70000, it took 1 minute to roll back 10000 blocks. At that rate, the remaining rollback would take about 7 more minutes.
10. In some cases, you might need to kill the session at the OS level as well. Some 3rd party database monitoring software and certain unexplained situations will cause the lock to remain despite not showing up in 
"v$locked_object" after killing the session. 

Example :
-----Query to find object id for table------
select object_id from dba_objects where object_name='PSOPRDEFN';
37127
-----get sessions locking objects------
select * from v$locked_object where object_id=37127
XIDUSN    XIDSLOT     XIDSQN  OBJECT_ID SESSION_ID ORACLE_USERNAME  OS_USER_NAME
---------- ---------- ---------- ---------- ---------- ------------------------------ ------------------------------
PROCESS                  LOCKED_MODE
------------------------ -----------
        10          7    1665191      37127       1077 SYSADM                         sys_niraj
7608:2460                          3
         0          0          0      37127       1084 SYSADM                         psoft
1233114                            3

------get sessions-------
select sid, serial#, command, taddr from v$session where sid in (1077,1084);
    SID    SERIAL#    COMMAND TADDR
---------- ---------- ---------- ----------------
      1077       1401          6 0700000096C61080
      1084       1328          6 0700000096CB9E40

---kill locking sessions as below---
---alter system kill session 'sessionid,serial';

SQL> alter system kill session '1077,1401';
System altered.
SQL> alter system kill session '1084,1328';
System altered.

Wednesday, August 15, 2012

PSOFT : Brief on PeopleSoft Internet Architecture

The entire process flow to access a PeopleSoft application page can be explained in 13 steps.
1.  Link interpreted as URL address by Web browser, which includes the name of a servlet on the Web server.
2.  Servlet running in the servlet engine interprets request and comes up with a list of objects that are required to build the page.
3.  Request for all required objects sent to application server in the form of a Jolt message.
4.  Tuxedo receives the Jolt message, and converts it into a service request, which it routes to the appropriate PeopleSoft server process.
5.  PeopleSoft process converts service request into SQL statement(s).
6.  The version of the object required by the request is compared between the database and the physical cache. If the cache is current, the object is served from there; otherwise SQL statement(s) are sent to the database to fetch it. After the operation, the cache is updated along with its version number.
7.  Data requested is supplied by the database.
8.  Tuxedo acknowledges the receipt of data and closes connection with the PeopleSoft process.
9.  PeopleSoft process constructs HTML code out of object data.
10. Data forwarded by Tuxedo through Jolt to the requesting Java servlet.
11. Servlet forwards the HTML page requested by the browser.
12. When all objects are in place, the HTML page is forwarded to the web server.
13. Browser displays the page.

The default screen that comes up when you enter the URL resides on the web server and is called signin.html. It points to various JavaScripts and iScripts. By default it picks the language from your database installation, generally English. If you select any other language, you click it on signin.html and it updates the JavaScripts accordingly.

When you log in to PIA, it first checks the parameter in the app server config file to see whether the "validate signon with database" option is enabled.
If yes, it sends your userid/password as a connection string to the database and validates them. If no, the connect ID and password are fetched from the config file and an internal connection is made. After a successful login, the OPRID is authenticated in the database: it checks lastupddatetime, acctlock, encrypted, symbolicid etc. from the PSOPRDEFN table. It then logs in using the access ID fetched from PSACCESSPRFL corresponding to the symbolic ID fetched from PSOPRDEFN for your OPRID, and makes a persistent connection to the database. It checks versions in the PSVERSION table to ensure all data is in sync and there are no cache errors.

Then it loads the iScripts 'Lframe', 'Rframe' etc. from the WEBLIB_NAV_MAIN weblibrary, which is assigned to you through a role. This is a mandatory weblibrary for loading the PIA. In addition, you need WEBLIB_PORTAL, WEBLIB_TIMEOUT, WEBLIB_PT_NAV etc. to move across PIA and perform basic operations, but these are optional. The HTML is generated from Application Packages and iScripts; the page you see after you log in comes from the PT_BRANDING application package. Your profile homepage appears after it authenticates you in the database and makes several database updates: in PSACCESSLOG it records the host/IP address you log in from (see the query sketch below), and if other audits are set it updates the audit tables and executes the triggers associated with them.
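
As a side note, you can look at those sign-on entries yourself. A minimal sqlplus check, assuming the standard PSACCESSLOG columns (OPRID, LOGIPADDRESS, LOGINDTTM, LOGOUTDTTM) and environment variables holding your database credentials:

# List recent PIA sign-on entries (column names assumed from a standard PT 8.5x PSACCESSLOG)
sqlplus ${DB_LOGIN}/${DB_PASSWD}@${DB_NAME} <<EOF
set pagesize 100 linesize 120
select OPRID, LOGIPADDRESS, LOGINDTTM, LOGOUTDTTM
from PSACCESSLOG
order by LOGINDTTM desc;
EXIT;
EOF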
In the meantime, the application server sets a cookie indicating that your OPRID is authenticated.
http://server:port/servlet_name/SiteName/PortalName/NodeName/content_type/content_id?content_parm is the URL format.
It checks which PORTAL and NODE you are trying to log in to. Examples of portals are customer, employee etc.; nodes are ERP, HRMS, CRM etc. It queries the PSPRSMDEFN table to validate the data. It resolves all the independent entries such as content_type (e.g. 'c' for component, 'h' for homepage, 'q' for query etc.), content_id (the component and page name) and finally the market you use, which in our case is GBL (global).

According to your roles in the PSROLEUSER table, it queries all the related security tables to find out what implicit permissions you have. It queries PSROLECLASS to tie each role to its permission lists, and PSAUTHITEM to gather all the tools, menu, query and other permissions. It checks the checksum against the registry structure for the PORTAL name you enter in the URL (EMPLOYEE by default) and loads the page according to its visualisation. It also checks your personalisations, favorites and worklists before loading the initial page. After loading everything, it checks PSVERSION again to sync the cache.

Now, if you are logging in through LDAP, the process differs a little. First it checks the directory connectivity and then proceeds to the userid/password authentication. It then executes Signon PeopleCode before rendering the user's profile homepage.

Now, there are separate component interfaces you need access to in order to see items such as the password change link on your profile homepage.

When you click on a menu item appearing in your profile, it first checks the PSPNLBTNDATA table to verify its existence, then checks PSPRSMDEFN and some other registry allocation tables to check its integrity, and then checks your privileges to access that menu through a complex query joining the PSROLEUSER, PSROLECLASS and PSAUTHITEM tables. Depending on which mode you have access to the component in, it opens it.
If it has a search record, the search page opens according to your permission (display, update/display, correct history). If you update any page, the changes are held temporarily in the component buffer and are pushed to the database after you click the Save button. On clicking Save, all the code written behind the save events is executed, including all validations in SavePreChange, SavePostChange and the other events.
A few important definitions:

ConnectID - e.g. 'people'. It is used only for the initial connection, to check connectivity. Even if you make a three-tier connection, the application server makes a two-tier connection with the database by checking with the connect ID first.

AccessID - e.g. 'sysadm'. It is used internally by the system to perform all selects, updates and every other SQL operation against the database, according to your access privileges. It looks as if the OPRID performs all the operations, but the database does not actually recognize any OPRID for SQL selects/updates; the SQL generated for our activity is executed as the access ID. This was not the case before PeopleTools 8, where all OPRIDs were database user IDs and performed the SQL operations themselves.

OPRID - the user ID. It is used for login, auditing and all activity, and to isolate the user at the application level.

SymbolicID - It is used to uniquely identify an access ID in the PSACCESSPRFL table. We can have more than one access ID in that table, so to use them we need more than one symbolic ID. Through the symbolic ID, an OPRID is mapped to its access ID. [Remember: only the access ID executes in the database, not the OPRID.] Certain access IDs are revoked certain features in the database, so those activities cannot be performed by an OPRID that is assigned that specific access ID.

TUXEDO - Transactions for Unix, Extended for Distributed Operations. In PeopleSoft we only use the ATMI (Application to Transaction Monitor Interface) feature of TUXEDO. We use it for messaging, administering and monitoring transactions. It is written in C++, so it can execute any native code; all the internal transaction work (semaphore creation, context switching, spawning, validation and parameterising) happens in these languages.

JOLT - The PIA works in Java, but the internal system works in C++. To bridge the two, JOLT works as a web interface extension for TUXEDO; it is short for Java Online Transactions. It takes the output from Tuxedo, converts it for the Java side and renders it on the web server. The HTML generation happens there according to the code generated by the application server, and it is served to the web browser by Java servlets.

Note : I can't recall where I got this from, but it felt worth sharing here.

Tuesday, August 14, 2012

PSOFT : Unix script to monitor number of active client connections & CPU%


Need for the script : This script tracks the number of active client connections to a PeopleSoft domain that are in BUSY or BUSY/W state. A large number of connections may slow down the performance of your PS environment if the hardware resources are not that strong.
That was the case with one of our servers, where I had to implement a solution to restrict the number of active busy client connections.
The script also alerts when CPU% goes beyond a specified threshold.


#######################################################################################
#!/bin/ksh
#This script monitors number of active client connections in App Server & unix server CPU Utilization
#Author : Niraj Patil
#Platform : AIX 6.1 , PplTools 8.48.19
########################################################################################


DTIME=`date +%Y%m%d%H%M`
LOGFLE=/home/NjServer/myScripts/ConnectionMonitor${DTIME}.log
## Running .profile for Cron
. $HOME/.profile
echo "Beginning script........." > $LOGFLE
EMAIL=`cat /home/NjServer/myScripts/mail`
domname=`echo $USER | tr 'a-z' 'A-Z'` ; export domname
bzcnt=0
cd $PS_HOME/appserv

for bz in `psadmin -c cstatus -d $domname 2> /dev/null | sed -e '1,3 d' -e '/^$/ d' -e '$ d' | awk -F" " '{ print $5 }'`
do
if [ "$bz" = "BUSY" -o "$bz" = "BUSY/W" ]; then
  let bzcnt=$bzcnt+1
fi

#Mail if the no. of busy client connections is >= 20
if [ $bzcnt -ge 20 ]; then
  psadmin -c cstatus -d $domname 2> /dev/null | mail -s "Alert:Maximum no of client connections in $domname App Server on `hostname` at `date`" $EMAIL
  echo "Alert:Maximum no of client connections in $domname App Server on `hostname` at `date`" >> $LOGFLE
  break
fi
done
cd -

#Monitor CPU% : sum the %CPU column over all processes (summed as an integer so the ksh test below works)
percnt=`ps aux | sed '1 d' | awk '{ sum += $3 } END { printf("%d\n", sum) }'`
#Mail alert if cpu% is above 85%
if [ $percnt -ge 85 ]; then
ps aux | head -20 | mail -s "`hostname`:CPU_Utilisation reached to ${percnt}% at `date`" $EMAIL
echo "`hostname`:CPU_Utilisation reached to ${percnt}% at `date`" >> $LOGFLE
fi

PSOFT : Hung DBX processes consuming maximum CPU% on AIX


Description : I came across an issue where CPU% spiked because of active dbx processes, attached to PSAPPSRV processes, consuming a high percentage of CPU on our AIX 6.1 server.

dbx on AIX is a process debugger, which doesn't serve any useful purpose from a PeopleSoft performance perspective, so we always used to kill the dbx processes manually to bring the CPU% back down.
(On AIX, check the man pages for details on dbx.)


Take a look at this Oracle document ID 1275845.1 for reference.

E-AS: dbx Process Hangs on AIX and can use Significant System CPU resources
Cause :

When an exception occurs for a PeopleSoft process, typically PSAPPSRV, a script, psprocinfo, is run to collect process specific information needed to help identify what might have caused the exception. When psprocinfo runs it will use dbx to collect stack and library specific data from the process. It has been seen recently, that the dbx process will start but may not terminate. In these instances the dbx process has also been seen to use a significant amount of CPU resources which can impact overall usability of the AIX box.
Solution
That dbx hangs and consumes CPU is not a problem PSoft development can resolve directly. In PT 8.51, and later Tools versions, the psprocinfo script was rewritten to capture the data we need using different AIX utilities such that it does not need to use dbx.
This dbx issue is not common. Under most instances dbx does not get stuck in a pointer loop, hence currently the psprocinfo script still uses it if it's available.
Note:
If encountering this issue you can avoid it by renaming the dbx process. When renamed, the psprocinfo script will not be able to execute it hence the overall performance impact on the AIX server is negated. Psprocinfo will collect other available data on the process, will log a message indicating dbx could not be run and will then terminate normally.
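
If you prefer the rename route described above, it is a one-liner; run it as root, and note that the dbx location is looked up rather than assumed:

# Locate and rename dbx so psprocinfo can no longer invoke it (run as root)
DBXPATH=`which dbx`
mv $DBXPATH ${DBXPATH}.disabled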


Rather than renaming dbx, I wrote a script and scheduled it to run every 30 minutes to report & kill any dbx process running on the server.

_________________________________________________________________________________
#!/usr/bin/ksh
# This script checks for any dbx process running & kills it.
# Usage : MonitorKill_DBEX.ksh 
# ( I intentionally avoided having 'dbx' in the script name so it doesn't match the running dbx processes it greps for )
# Niraj Patil

export maillist="nirajdpatil1986@gmail.com groupemail@abc.com"
LOGFILE1=/home/NjServer/logs/MonitorKill_DBEX.log ; export LOGFILE1
echo "\n ----------- MonitorKill_DBEX executed at `date`---------" >> $LOGFILE1
ps -ef | grep -i dbx | grep -i $USER | grep -v grep > /tmp/dbex_processes.log
if [ `ps -ef | grep -i dbx | grep -i $USER |  grep -v grep | awk -F" " '{ print $2 }' | wc -l` -ge 1 ]; then
for PRCKLL in `ps -ef | grep -i dbx | grep -i $USER |  grep -v grep | awk -F" " '{ print $2 }'`
do
echo "Killed dbx process :\n `ps -ef | grep $PRCKLL | grep -v grep`" | tee -a $LOGFILE1
kill -9 $PRCKLL
done
mailx -s "DBX processes spawned by $USER killed on `hostname` at `date`" $maillist < /tmp/dbex_processes.log
else
echo "No dbx process was found for $USER on `hostname`" | tee -a $LOGFILE1
fi
_________________________________________________________________________________


The script looks a bit dirty with multiple greps, no?
I know I'm not a great Unix developer, but the script serves the purpose well :)
I just tried to keep it as simple as possible to understand.

PSOFT : Unix Script to monitor PS IB messages



#!/bin/ksh
#This simple script sends out alerts on encountering IB Publication contracts, Subscription contracts & Publication Handler messages in Error/Retry/Timeout status
# Platform : AIX 6.1, Oracle 11g, Peopletools 8.51.06
#Usage : FS_Monitor_IB_MSG.ksh
#Author : Niraj Patil

###Check for Subscription contract messages in Error/Retry/Timeout###
SRCDIR1=/home/NjServer/src ; export SRCDIR1
LOGFILE1=/home/NjServer/logs/FS_Monitor_IB.log ; export LOGFILE1
MAILLIST1="nirajdpatil1986@gmail.com groupemail@abc.com"
echo "\n --------------------`date`---------------------------- \n" >> $LOGFILE1
SUBCON=`sqlplus ${DB_LOGIN}/${DB_PASSWD}@${DB_NAME} <<EOF > /tmp/subcon.log
set feedback off
set heading off
SET SERVEROUTPUT ON SIZE 100000
Declare
SubCon_MSG VARCHAR2(200);
Begin
select count(*) into SubCon_MSG from psapmsgsubcon
where statusstring in ('ERROR','RETRY','TIMEOUT') or
SUBCONSTATUS in (0,5,6);
dbms_output.put_line(Chr(10)||'PSADMIN ' || SubCon_MSG);
End;
/
SET SERVEROUTPUT OFF
EXIT;
EOF`

###Check for Publication contract messages in Error/Retry/Timeout###
PUBCON=`sqlplus ${DB_LOGIN}/${DB_PASSWD}@${DB_NAME} <<EOF > /tmp/pubcon.log
set feedback off
set heading off
SET SERVEROUTPUT ON SIZE 100000
Declare
PubCon_MSG VARCHAR2(200);
Begin
select count(*) into PubCon_MSG from PSAPMSGPUBCON
where STATUSSTRING in ('ERROR','RETRY','TIMEOUT') or
PUBCONSTATUS in (0,5,6);
dbms_output.put_line(Chr(10)||'PSADMIN ' || PubCon_MSG);
End;
/
SET SERVEROUTPUT OFF
EXIT;
EOF`

###Check for Publication Handler messages in Error/Retry/Timeout###
PUBHDR=`sqlplus ${DB_LOGIN}/${DB_PASSWD}@${DB_NAME} <<EOF > /tmp/pubhdr.log
set feedback off
set heading off
SET SERVEROUTPUT ON SIZE 100000
Declare
PubHdr_MSG VARCHAR2(200);
Begin
select count(*) into PubHdr_MSG from PSAPMSGPUBHDR
where STATUSSTRING in ('ERROR','RETRY','TIMEOUT') or
PUBSTATUS in (0,5,6);
dbms_output.put_line(Chr(10)||'PSADMIN ' || PubHdr_MSG);
End;
/
SET SERVEROUTPUT OFF
EXIT;
EOF`

SC_Count=`grep "PSADMIN" /tmp/subcon.log | awk -F" " '{ print $2 }'`

PC_Count=`grep "PSADMIN" /tmp/pubcon.log | awk -F" " '{ print $2 }'`
PH_Count=`grep "PSADMIN" /tmp/pubhdr.log | awk -F" " '{ print $2 }'`

# Mail detailed Subscription contracts Message record if there are any in Error/Retry/Timeout
if [ $SC_Count -ne 0 ]
then
sqlplus ${DB_LOGIN}/${DB_PASSWD}@${DB_NAME} <<EOF > /tmp/subcondata.log
set pagesize 1000
set linesize 130
set feedback off
select * from psapmsgsubcon
where statusstring in ('ERROR','RETRY','TIMEOUT') or
SUBCONSTATUS in (0,5,6);
EXIT;
EOF
cat /tmp/subcondata.log | grep -vi SQL | grep -vi Oracle | grep -vi "Partitioning" | grep -vi connected | grep -vi ^$ > /tmp/SUBCONDATA
mailx -s "Subscription contracts messages in Error/Retry/Timeout in $DB_NAME at `date`" $MAILLIST1 < /tmp/SUBCONDATA
echo "Subscription contracts messages in Error/Retry/Timeout in $DB_NAME at `date`" >> $LOGFILE1
cat /tmp/SUBCONDATA >> $LOGFILE1
else
echo "No Subscription Contract message in Error for $DB_NAME at `date`" >> $LOGFILE1
fi

#Mail detailed Publication contracts Message record if there are any in Error/Retry/Timeout
if [ $PC_Count -ne 0 ]
then
sqlplus ${DB_LOGIN}/${DB_PASSWD}@${DB_NAME} <<EOF > /tmp/pubcondata.log
set pagesize 1000
set linesize 130
set feedback off
select * from PSAPMSGPUBCON
where STATUSSTRING in ('ERROR','RETRY','TIMEOUT') or
PUBCONSTATUS in (0,5,6);
EXIT;
EOF
cat /tmp/pubcondata.log | grep -vi SQL | grep -vi Oracle | grep -vi "Partitioning" | grep -vi connected | grep -vi ^$ > /tmp/PUBCONDATA
mailx -s "Publication contracts messages in Error/Retry/Timeout on $DB_NAME at `date`" $MAILLIST1 < /tmp/PUBCONDATA
echo "Publication contracts messages in Error/Retry/Timeout on $DB_NAME at `date`" >> $LOGFILE1
cat /tmp/PUBCONDATA >> $LOGFILE1
else
echo "No Publication Contract message in Error $DB_NAME at `date`" >> $LOGFILE1
fi

#Mail detailed Publication Handler Message record if there are any in Error/Retry/Timeout
if [ $PH_Count -ne 0 ]
then
sqlplus ${DB_LOGIN}/${DB_PASSWD}@${DB_NAME} <<EOF > /tmp/pubhdrdata.log
set pagesize 1000
set linesize 130
set feedback off
select * from PSAPMSGPUBHDR
where STATUSSTRING in ('ERROR','RETRY','TIMEOUT') or
PUBSTATUS in (0,5,6);
EXIT;
EOF
cat /tmp/pubhdrdata.log | grep -vi SQL | grep -vi Oracle | grep -vi "Partitioning" | grep -vi connected | grep -vi ^$ > /tmp/PUBHDRDATA
mailx -s "Publication contracts messages in Error/Retry/Timeout on $DB_NAME at `date`" $MAILLIST1 < /tmp/PUBHDRDATA
echo "Publication contracts messages in Error/Retry/Timeout on $DB_NAME at `date`" >> $LOGFILE1
cat /tmp/PUBHDRDATA >> $LOGFILE1
else
echo "No Publication Handler message in Error/Retry/Timeout in $DB_NAME at `date`" >> $LOGFILE1
fi