
Wednesday, August 19, 2015

Creating a folder path alias in Unix/Linux !!!


In Unix/Linux, repeatedly navigating long folder paths is a painful, time-consuming task when you have a large number of servers to manage and administer.

In such situations, creating folder path aliases is the best way to define abbreviated shortcuts and change to a folder quickly without typing out the whole path.

This short article explains how to create a folder alias for a particular path and change directory easily without typing the whole path.

Below are the steps required to create folder aliases:

1) List the long folder paths you navigate frequently
2) Choose an abbreviated name for each long folder path
3) Update the .bashrc and .profile files
4) Load the .profile file for the particular OS user
5) Use the abbreviation to cd to the long folder path

Step 1: 

For example, I have listed below the most-used long folder paths in this implementation:

/u01/app/oracle/admin/domains/aserver/prd_domain
/u01/app/oracle/admin/domains/mserver/prd_domain
/u01/app/oracle/product/fmw
/u01/app/oracle/admin/domains/cluster
/u01/app/oracle/admin/domains/aserver/domain/servers/AdminServer/logs
/u01/app/oracle/admin/mserver/domains/servers
/u01/app/oracle/admin/cluster/logs

Step 2: 

Abbreviated value -> Long folder path

aserver -> /u01/app/oracle/admin/domains/aserver/prd_domain
mserver -> /u01/app/oracle/admin/domains/mserver/prd_domain
fmw     -> /u01/app/oracle/product/fmw
cluster -> /u01/app/oracle/admin/domains/cluster
alog    -> /u01/app/oracle/admin/domains/aserver/domain/servers/AdminServer/logs
mlogs   -> /u01/app/oracle/admin/mserver/domains/servers
logs    -> /u01/app/oracle/admin/cluster/logs

Step 3:

Folder aliases can be defined either in the .profile or the .bashrc file of a particular OS user. In the test environment where I implemented this, I created the folder aliases inside the .bashrc file located in that OS user's home folder.

So, for example, if the OS user name is 'oracle', the .bashrc file should exist at /home/oracle/. If the .bashrc file does not exist, the admin can create it using the vi editor.

Add the below entries to the .bashrc file:

alias aserver='cd /u01/app/oracle/admin/domains/aserver/prd_domain'
alias mserver='cd /u01/app/oracle/admin/domains/mserver/prd_domain'
alias fmw='cd /u01/app/oracle/product/fmw'
alias cluster='cd /u01/app/oracle/admin/domains/cluster'
alias alog='cd /u01/app/oracle/admin/domains/aserver/domain/servers/AdminServer/logs'
alias mlogs='cd /u01/app/oracle/admin/mserver/domains/servers'
alias logs='cd /u01/app/oracle/admin/cluster/logs'

Step 4: 

Now, update the .profile file for the same user so that it loads .bashrc:
e.g.
source ~/.bashrc
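
A slightly more defensive form (a common shell convention, not a requirement) sources the file only if it exists, so logins don't fail on hosts without a .bashrc:

# Load .bashrc only when present
if [ -f ~/.bashrc ]; then
    . ~/.bashrc
fi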

Step 5: 

Now load the .profile file for the 'oracle' user.
Use PuTTY to log in as the oracle user; after a successful login you will land in the 'oracle' user's home directory. Run the below command:

source ~/.profile

Once you run the above command, the oracle user's profile is loaded, which sets all the folder path aliases.

Now we can use the abbreviated value to go straight to a particular folder.
For example, to access the path '/u01/app/oracle/product/fmw', just type the abbreviation 'fmw', as in the session sketch below.
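
A hypothetical session (the host name and prompt are illustrative):

[oracle@soahost ~]$ fmw
[oracle@soahost fmw]$ pwd
/u01/app/oracle/product/fmw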



Friday, August 14, 2015

Automation of Node Manager startup script !!!

Node Manager is one of the key components of WLS administration. It has to be set up separately while installing the OFMW binaries, and it lets an administrator centrally manage all the WLS servers in a domain from the WLS Admin console.

Node Manager also helps auto-restart a particular WLS managed server in case the server crashes due to some issue.

However, all of the above functionality works only when the Node Manager process is running on the particular vServer or physical server.

By default, we have to start the Node Manager process manually using the startNodeManager.sh shell script. The starting procedure can differ slightly from implementation to implementation, depending on how it has been set up.

Since starting Node Manager is a manual process, we sometimes forget it and end up wondering why the WLS Admin console cannot communicate with all the Managed servers.

Often the infrastructure team performs database and OS patching, during which physical servers or vServers need to be restarted; the Node Manager process dies with the restart and has to be started manually again.

If we maintain a large number of vServers, manually starting the Node Manager process takes quite a while, which can be avoided completely by setting up auto-startup.

In the post below I will explain how we can automate the starting of the Node Manager process. There are two steps, as stated below:

1) Write a shell script that acts as a Node Manager service, supporting start/stop/status/restart operations (see the sketch below).
2) Once the service script is in place, register it with the OS so that it starts automatically, e.g. at boot time.
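
A minimal sketch of such a wrapper for step 1 is below. The WL_HOME path, log location, and service name are placeholders to adjust per environment; it assumes the Node Manager JVM runs the default weblogic.NodeManager class, which pgrep uses to find the process.

#!/bin/sh
# chkconfig: 345 85 15
# description: Node Manager service wrapper around startNodeManager.sh
WL_HOME=/u01/app/oracle/product/fmw/wlserver_10.3
NM_SCRIPT=$WL_HOME/server/bin/startNodeManager.sh
NM_LOG=/tmp/nodemanager.out

nm_pid() {
    # The Node Manager JVM runs the weblogic.NodeManager class
    pgrep -f "weblogic.NodeManager"
}

case "$1" in
start)
    if [ -n "$(nm_pid)" ]; then
        echo "Node Manager already running (pid $(nm_pid))"
    else
        nohup "$NM_SCRIPT" > "$NM_LOG" 2>&1 &
        echo "Node Manager starting; log: $NM_LOG"
    fi
    ;;
stop)
    PID=$(nm_pid)
    if [ -n "$PID" ]; then
        kill $PID && echo "Node Manager stopped (pid $PID)"
    else
        echo "Node Manager is not running"
    fi
    ;;
status)
    if [ -n "$(nm_pid)" ]; then
        echo "Node Manager running (pid $(nm_pid))"
    else
        echo "Node Manager is not running"
    fi
    ;;
restart)
    "$0" stop
    sleep 5
    "$0" start
    ;;
*)
    echo "Usage: $0 {start|stop|status|restart}"
    exit 1
    ;;
esac

For step 2, on RHEL-style systems the script can be copied to /etc/init.d/nodemgr, made executable, and registered with 'chkconfig --add nodemgr' followed by 'chkconfig nodemgr on' so it starts automatically after a server restart.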


Thursday, August 6, 2015

SOA Suite Components Data Purging !!!


This post describes how to manage database growth for the Oracle Fusion Middleware SOA Suite backend dehydration store (the 'SOAINFRA' schema), which grows a lot over time if not monitored regularly and will cause major performance problems, disk-space exhaustion, increased latency, and so on.

It also increases the complexity of maintaining the SOA database instance.

To address this challenge, several methods for managing database growth are provided, such as:

1. Running the purge script
2. Running the parallel purge script
3. Dropping partitions, if partitioning is implemented
4. Purging data from the EM console, B2B console, and OSB console

In this article, I will focus only on how to run the Oracle out-of-the-box purge scripts to clean up the SOAINFRA schema, which contains tables for various SOA Suite components such as Mediator, Workflow, BPEL/SCA, and Decision Service.

Note: B2B component records will not be purged by the script below. If B2B has been implemented, use this link to learn how to purge B2B metadata and B2B instance data. It's very simple.


Fortunately, Oracle SOA Suite 11.1.1.7 and earlier versions provide out-of-the-box purge scripts. These scripts come along with the product installation and can be located at:
MW_HOME/SOA_HOME/rcu/integration/soainfra/sql/soa_purge




Pre-Implementation Verification


1) You must take a backup of the SOA DB instance where the purge script will run, so that if something goes wrong we can restore the database easily.

2) Run the below SQL query to check the current tablespace usage for the SOA DB instance, so that later you can compare whether the tablespace size has been reduced:

select tablespace_name,
       bytes/1024/1024/1024    space_gb,
       maxbytes/1024/1024/1024 max_gb,
       autoextensible
  from dba_data_files;
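
If you prefer one row per tablespace for the before/after comparison, a summed variant over the same dictionary view (a sketch; adjust as needed) is:

select tablespace_name,
       sum(bytes)/1024/1024/1024    allocated_gb,
       sum(maxbytes)/1024/1024/1024 max_gb
  from dba_data_files
 group by tablespace_name
 order by tablespace_name;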



3) Get the row counts of all the tables in the SOAINFRA schema using the command below; after purging, run the same SQL again to compare how many rows have been purged. Before running the SQL we need to run one DB package to refresh the table row statistics.

Step1:
Execute the below SQL to refresh the table row statistics (the schema name here is from my test environment; substitute your own RCU prefix):
EXECUTE DBMS_STATS.GATHER_SCHEMA_STATS('AIOSIT_SOAINFRA')



Step2:
Run the below SQL to generate the row count of each table:
select table_name, num_rows
  from dba_tables
 where owner = 'AIOSIT_SOAINFRA'
   and num_rows is not null
   and num_rows > 0
 order by num_rows desc;
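
To make the before/after comparison mechanical, one option (a sketch; it assumes your user can create tables and query DBA_TABLES) is to snapshot the counts into a work table and diff it after the purge:

-- Snapshot the pre-purge counts (schema owner as used above)
create table soa_rowcounts_before as
  select table_name, num_rows
    from dba_tables
   where owner = 'AIOSIT_SOAINFRA'
     and num_rows > 0;

-- After purging and re-gathering stats, join this table against
-- dba_tables again to see the per-table difference.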



Implementation Steps

Follow the steps below to run the purge scripts:

Step1:
Open putty or command prompt

Step2:
Change directory to $SOA_HOME/rcu/integration/soainfra/sql/soa_purge

Step3:
In SQL*Plus, 'CONNECT SYS AS SYSDBA' and run the below commands:
GRANT EXECUTE ON DBMS_LOCK TO DEV_SOAINFRA;
GRANT CREATE JOB TO DEV_SOAINFRA;

Note: 'DEV' is the prefix for a particular environment, specified while running the RCU, so change this value as per your environment.

Step4:
Exit from sysdba session

Step5:
Now connect to the SOAINFRA schema:
e.g.
sqlplus DEV_SOAINFRA/Welcome1@SOA11DEV

Note: SOA11DEV should be defined in the 'tnsnames.ora' file, typically located at:
/opt/oracle/11.2.0.3/client/network/admin/tnsnames.ora
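
A typical entry looks like the below; the host and service name are placeholders for your environment:

SOA11DEV =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = SOA11DEV))
  )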

Step6:
Load the purge scripts by running the main purge script in the MW_HOME/SOA_ORACLE_HOME/rcu/integration/soainfra/sql/soa_purge directory.
e.g. @soa_purge_scripts.sql

This should create the procedures, functions, views, types, and packages under the DEV_SOAINFRA schema that are required by the purge script.
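
A quick sanity check against the standard USER_OBJECTS view confirms that the loaded objects compiled cleanly; the query should return no rows:

select object_name, object_type
  from user_objects
 where status = 'INVALID';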




Step7:
If you want to execute the scripts in debug mode, run common/debug_on.sql



Step8:
If you want to spool the PL/SQL program's output to a log file, set serveroutput on and spool to a file. This helps you understand which tables are being purged and which records are eligible for purging.
SQL> SET SERVEROUTPUT ON;
SQL> spool '/tmp/spool.log'



Step9:
Run the below code segment to perform the 'Looped purge' option.
Note: To become familiar with all the parameters used in the SQL code below, see section 10.3.1 'Looped Purge Script' at http://docs.oracle.com/cd/E28280_01/admin.1111/e10226/soaadmin_partition.htm#CJHJGGCD

DECLARE
  MIN_CREATION_DATE timestamp;
  MAX_CREATION_DATE timestamp;
  batch_size        integer;
  max_runtime       integer;
  retention_period  timestamp;
BEGIN
  MIN_CREATION_DATE := to_timestamp('2010-01-01', 'YYYY-MM-DD');
  MAX_CREATION_DATE := to_timestamp('2010-01-31', 'YYYY-MM-DD');
  max_runtime       := 60;
  retention_period  := to_timestamp('2010-01-31', 'YYYY-MM-DD');
  batch_size        := 10000;
  soa.delete_instances(
    min_creation_date           => MIN_CREATION_DATE,
    max_creation_date           => MAX_CREATION_DATE,
    batch_size                  => batch_size,
    max_runtime                 => max_runtime,
    retention_period            => retention_period,
    purge_partitioned_component => false);
END;
/




The date and size values have to be changed according to each environment's requirements. Please refer to the table below for a detailed understanding of these parameters.

min_creation_date: Beginning creation date for the composite instances.

max_creation_date: Ending creation date for the composite instances.

batch_size: Batch size used to loop the purge. The default value is 20000.

max_runtime: Expiration time, in minutes, at which the purge script exits the loop. The default value is 60.

retention_period: Used only by the BPEL process service engine; this functionality is not extended to other components. This parameter checks for and deletes records in the cube_instance table. The value must be greater than or equal to max_creation_date; the default value is null. Specify a retention period if you want to retain the composite instances based on the modify_date of the BPEL instances (cube_instance).

In this example, the modify_date of the BPEL instance table, which can differ from the composite created_date, is used as a second level of filtering:
min_creation_date = 1st June 2011
max_creation_date = 30th June 2011
retention_period = 1st July 2011
This deletes all composite instances whose composite creation_time is between 1st June 2011 and 30th June 2011 and whose cube_instance modify_date is less than 1st July 2011.


Step10:
Stop the logging
SQL> spool off

Step11:
Verify the generated log file for purging status



Reference:

http://docs.oracle.com/cd/E28280_01/admin.1111/e10226/soaadmin_partition.htm
http://myexperienceswithsoa.blogspot.com.au/2011/10/using-out-of-box-purge-scripts-in.html
http://oraclefusiontechnologies.blogspot.com.au/2014/03/purging-instances-using-sql-scripts.html


OSB Reporting Data Purging


The reporting module of Oracle Service Bus is used to display reporting data for various business transactions. In OSB we use the 'Report' action to log messages to the database. OSB maintains two backend tables to store this data: WLI_QS_REPORT_DATA and WLI_QS_REPORT_ATTRIBUTE. Reporting data also grows over time and consumes a lot of disk space. Oracle Service Bus provides out-of-the-box capability to purge this data in two ways:

1) Purge OSB Reporting Data based on Date Range
2) Purge All OSB Reporting Data

Apart from the above out-of-the-box options, we have a few more options, as stated below:

3) Write a custom PL/SQL procedure or statement to purge this data on a regular basis.
4) If an outage is approved, use the out-of-the-box scripts reporting_runtime_drop.sql and reporting_runtime.sql (located at $OSB_HOME/dbscripts/oracle) to drop and re-create the tables.

In the example below I will show how to purge based on a given date range using the out-of-the-box purging facility.


Out-of-Box Purging Option


Step1:
Log in to the WLS console, go to "Deployments", and verify the below module. It must be active; if it is not, activate it.





Step2:
Log in to the OSB console. Go to Operations >> Reporting >> Message Report and click "Purge Messages".




Step3:
Specify date range and time.

Note: The date format is M/D/YY and must be followed strictly. E.g., if the month is February we can't give '02'; we must give '2'. Similarly, for 1st February we should give 2/1, not 02/01. The year is two digits only, so 1st February 2015 would be 2/1/15.





SQL to Purge the OSB Report Data


Run the below SQL to purge OSB reporting data older than 60 days (adjust the retention window to suit):

delete from WLI_QS_REPORT_DATA where MSG_GUID in (select MSG_GUID from WLI_QS_REPORT_ATTRIBUTE where trunc(DB_TIMESTAMP) < sysdate-60);
delete from WLI_QS_REPORT_ATTRIBUTE where trunc(DB_TIMESTAMP) < sysdate-60;
commit;
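
To run this on a schedule, a simple cron wrapper around sqlplus is one option. The below is a sketch only: the credentials, TNS alias, script path, and 60-day window are placeholders, and storing the password in an Oracle wallet is preferable to putting it inline.

#!/bin/sh
# purge_osb_reports.sh: delete OSB report rows older than 60 days
sqlplus -s DEV_SOAINFRA/Welcome1@SOA11DEV <<'EOF'
delete from WLI_QS_REPORT_DATA where MSG_GUID in
  (select MSG_GUID from WLI_QS_REPORT_ATTRIBUTE where trunc(DB_TIMESTAMP) < sysdate-60);
delete from WLI_QS_REPORT_ATTRIBUTE where trunc(DB_TIMESTAMP) < sysdate-60;
commit;
exit
EOF

A crontab entry such as the below would then run it every Sunday at 2 a.m.:

0 2 * * 0 /home/oracle/scripts/purge_osb_reports.sh >> /tmp/purge_osb.log 2>&1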


Oracle Notes Reference:-

  1. Automated Methods Of Purging The Reporting Tables In Oracle Service Bus (Doc ID 1080146.1)
  2. OSB Best Practice For Clearing OSB Report DB (Doc ID 1479520.1)

Custom Table Data Purging


Apart from the above SOA component data purging, there could be a requirement to purge data from custom tables created for a custom application (if any exist).

If that's the case, the two PL/SQL approaches below could be useful for purging that data as well.

Data Purging using Truncate


In the below code we delete data from a particular table, 'UTILITY_LOG', based on a date clause, using the truncate option. It performs the below tasks:

a) Count the rows in UTILITY_LOG and print the count
b) Create a UTILITY_LOG_TMP table from the UTILITY_LOG table definition, keeping only the rows that need to be retained per the search criteria
c) Truncate the original UTILITY_LOG table
d) Copy the data back from UTILITY_LOG_TMP to UTILITY_LOG
e) Drop the UTILITY_LOG_TMP table

Note that CREATE TABLE AS SELECT copies only the data, not indexes, constraints, or grants, and each DDL statement commits implicitly, so this approach suits simple log tables.

set serveroutput on;
DECLARE
  CNT NUMBER(10) := 0;
BEGIN
  DBMS_OUTPUT.PUT_LINE('UTILITY_LOG Purging Started');
  select count(1) into CNT from UTILITY_LOG;
  DBMS_OUTPUT.PUT_LINE('Total Rows in UTILITY_LOG table :' || CNT);
  DBMS_OUTPUT.PUT_LINE('Creating and Inserting UTILITY_LOG Temporary Table Based on Search Criteria');
  EXECUTE IMMEDIATE 'create table UTILITY_LOG_TMP as select * from UTILITY_LOG where trunc(CREATED_TIME_STAMP) > sysdate-180';
  DBMS_OUTPUT.PUT_LINE('Truncating UTILITY_LOG Table');
  EXECUTE IMMEDIATE 'truncate table UTILITY_LOG';
  DBMS_OUTPUT.PUT_LINE('Inserting rows into UTILITY_LOG from UTILITY_LOG_TMP Temporary Table');
  EXECUTE IMMEDIATE 'insert into UTILITY_LOG select * from UTILITY_LOG_TMP';
  DBMS_OUTPUT.PUT_LINE('Dropping UTILITY_LOG_TMP Temporary Table');
  EXECUTE IMMEDIATE 'drop table UTILITY_LOG_TMP';
  DBMS_OUTPUT.PUT_LINE('UTILITY_LOG Purging has been completed');
  commit;
END;
/

Data Purging using simple ‘Delete’ clause


In this example, it’s simply use the standard delete clause to remove the rows from UTILITY_LOG table.

set serveroutput on;
DECLARE
  CNT  NUMBER(10) := 0;
  CNT1 NUMBER(10) := 0;
BEGIN
  DBMS_OUTPUT.PUT_LINE('UTILITY_LOG Purging Started');
  select count(1) into CNT from UTILITY_LOG;
  DBMS_OUTPUT.PUT_LINE('Total Rows in UTILITY_LOG table :' || CNT);
  EXECUTE IMMEDIATE 'delete from UTILITY_LOG where trunc(CREATED_TIME_STAMP) < sysdate-90';
  DBMS_OUTPUT.PUT_LINE('Rows have been deleted from UTILITY_LOG');
  select count(1) into CNT1 from UTILITY_LOG;
  DBMS_OUTPUT.PUT_LINE('Total Rows remaining in UTILITY_LOG table :' || CNT1);
  DBMS_OUTPUT.PUT_LINE('UTILITY_LOG Purging has been completed');
  commit;
END;
/


B2B Messages Purging


To see which B2B tables are holding data, and how much, run the below row-count query against the SOAINFRA schema (the 'AIOPRD' prefix is from my environment; substitute your own):

select table_name, num_rows
  from dba_tables
 where owner = 'AIOPRD_SOAINFRA'
   and table_name like '%B2B_%'
   and num_rows is not null
   and num_rows > 0
 order by num_rows desc;

As noted at the top of this post, the B2B purge itself is done separately via the B2B console.





Wednesday, August 5, 2015

Relation between WebLogic Data Source DB Session Configuration and DB Instance Sessions and Processes in RAC Infrastructure !!!

In this post I will summarize the outcome of a POC I did as part of an incident raised for a DB connection failure error.

Scenario -


The SOA servers were not able to communicate with the DB properly and business transactions were failing intermittently. When I looked into the SOA WLS server logs, they showed the below error message:

“ORA-12516, TNS: listener could not find available handler with matching protocol stack”





Later, I tried testing the DB connection using the sqlplus utility and got the same message there:


Even the SQL Developer tool was throwing the same error:



When I contacted the DBA, he told me that even he was not able to log in to the database, because the configured DB session and process limits had been exhausted on the DB vServer.

After learning this, I stopped a couple of WebLogic servers; since we have a clustered environment, shutting down a couple of WLS servers does not impact business transactions.

After shutting down the WLS servers, the DB session and process counts went down, and the DBA was able to resume regular administration on those nodes.

However, the above situation opened up a few questions for me, as stated below:

1) What session and process counts have been configured on each DB node?

2) How many DB sessions are being created by the various WebLogic data sources?

3) What is the relation between the WLS data source Initial Capacity, Maximum Capacity, and Minimum Capacity settings and the database session count?

4) What would be an optimized connection configuration for the WLS data sources?

5) Does a WLS data source's session count get split equally between both RAC nodes, or does each RAC node get that many sessions?


Current WLS DS Configuration snap -



Solution:

To explore the answers to all the above questions, I ran a POC and engaged the DBA to understand the impact it was having on the DB session count, so that we could conclude what was going on.

In my test environment I had 14 WLS servers running. I shut down all of them except 'AdminServer' to reduce the session count in the DB instance.

The below data sources were deployed on the WLS Admin server, and 30 was the number configured for the initial, minimum, and maximum capacity; all three configuration parameters had the same value.

DataSourceName      | Schema User Name | Max/Min/Initial Connections | Deployed Targets
mds-owsm            | XXXPRE_MDS       | 30                          | AdminServer
mds-soa             | XXXPRE_MDS       | 30                          | AdminServer
wlsbjmsrpDataSource | XXXPRE_SOAINFRA  | 30                          | AdminServer
OPSSDataSource      | XXXPRE_OPSS      | 30                          | AdminServer

Looking at the DB, we captured the below session stats:

select username, inst_id, count(1)
  from gv$session
 group by username, inst_id
 order by username, inst_id;



Session Distribution Explanation -

1) The XXXPRE_MDS schema gets sessions from two WLS data sources, mds-owsm and mds-soa, each configured with 30 sessions for Initial, Min, and Max (all equal, per the Oracle recommendation). As a result, the XXXPRE_MDS schema user had 30 sessions in each DB instance node: 15 per data source per node. This means WLS distributed each data source's sessions equally across both RAC nodes.

2) The XXXPRE_OPSS schema sessions were distributed 14 and 16 across DB instances 1 and 2 respectively. So WLS does not always distribute sessions exactly equally, but it does create the total number of sessions configured in the WLS console: the count in the WLS configuration matches the combined session count across both DB instance nodes.

3) The XXXPRE_SOAINFRA schema sessions were similarly distributed 17 and 16 (33 in total) instead of the expected 30; the 3 extra sessions were created by external applications, e.g. SQL Developer.



Conclusion:

Based on the above stats, I conclude that if a WLS data source is configured with 30 connections and we use a SCAN listener to connect to a database running on RAC infrastructure, the 30 sessions get distributed across both DB nodes. So the rule of thumb for sizing the session and process counts in the DB is:

Sessions per DB instance = WLS DS connection configuration * no. of WLS servers the DS is deployed to / no. of DB instances in the RAC

E.g.:

Sessions per DB instance = 30 (WLS config) * 3 OSB WLS servers / 2 DB instances in the RAC = 45 DB sessions required per instance for one WLS data source.

Do the same calculation for all the available WebLogic data sources and configure the DB session count accordingly.

The best way to work out the number of sessions the database needs is to build a spreadsheet listing all application data source names, their initial/min/max connection configuration, and their target servers. From that you can total how many DB connections your applications need, and the DBA can set the session and process counts in each database instance accordingly (see the sketch after the sample table below).

Sample excel template to capture WLS session count to drive DB session count –

DataSourceName: XXXSOADataStore
Schema User Name: SOAINFRA
Database Name: SOA11DB
Max/Min/Initial Config Connections: 50
Deployed Targets: OSB_Cluster
No. of WLS Servers: 3
Total DB Connections Required: 150
Comment: Initial, Min, and Max all have the same value
Sessions Required per DB Instance: 150 divided by the no. of DB instances in the RAC
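
If you keep the same inventory in a CSV file, a small awk sketch like the below (the file name and column layout are hypothetical) totals the required sessions per database:

# ds_inventory.csv layout (assumed): datasource,schema,db,max_capacity,wls_servers
# e.g. XXXSOADataStore,SOAINFRA,SOA11DB,50,3
awk -F, '/^#/ { next } { total[$3] += $4 * $5 }
    END { for (db in total) printf "%s: %d sessions across the RAC\n", db, total[db] }' ds_inventory.csv

Divide each total by the number of DB instances in the RAC to get the per-instance session requirement.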

I hope this study gives a meaningful way of calculating the WLS data source session configuration and, from it, increasing or decreasing the session and process counts at the DB instance level.