
Sunday, June 28, 2015

Avoid OFMW Application Outage by Implementing Redundant FMW Binaries!!!

In any enterprise where SOA middleware applications are implemented, getting an outage window for SOA application patching is a big nightmare. A SOA application outage affects the whole business: many projects are deployed on the SOA platform, so if SOA is down, the whole business is down. This problem becomes more severe over time as more and more applications keep migrating to the SOA infrastructure.

Oracle has a solution to this problem. For maximum availability, Oracle recommends using redundant binary installations on shared storage.

However, the Enterprise Deployment Guide (EDG) gives only the recommendation; it does not provide detailed steps on how to achieve it.


Moreover, I have often seen implementations where this recommendation is overlooked, as it requires more effort during design, implementation and maintenance.

In this post we will go through detailed steps showing how the redundant binary concept can be used to avoid SOA application outages.

Below are some of the design principles we need to adopt.

  1. Install two identical Oracle homes for your Oracle Fusion Middleware software on two different shared volumes.
  2. If separate volumes are not available on shared storage, Oracle recommends simulating separate volumes using different directories within the same volume.
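As an illustration of the second principle, simulating separate volumes with two directories on a single share could look like the sketch below (the share and path names are illustrative, not from a real environment):

```
# One shared volume, two simulated "volumes" as separate directories:
192.168.1.1:/export/fmw_share/vol1  ->  mounted on soaserver1 as /u01/app/oracle/product/fmw
192.168.1.1:/export/fmw_share/vol2  ->  mounted on soaserver2 as /u01/app/oracle/product/fmw
```

Each server still sees the same local path, but patches applied under vol1 never touch the binaries under vol2.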

I will explain the above guidelines using the example below -

Let’s assume we have a domain (test_prd_domain) which has 2 SCA servers (WLS_SCA1, WLS_SCA2), 2 OSB servers (WLS_OSB1, WLS_OSB2) and 1 AdminServer. These WLS servers are deployed on two different physical or virtual servers, as stated in the table below.


ZFS Share Name                    Size   Mounted Folder Name           vServer Name (WLS Server Names)
192.168.1.1:/export/fmw_share0    20G    /u01/app/oracle/product/fmw   soaserver1.domain.com.au (AdminServer, WLS_SCA1, WLS_OSB1)
192.168.1.1:/export/fmw_share1    20G    /u01/app/oracle/product/fmw   soaserver2.domain.com.au (WLS_SCA2, WLS_OSB2)



Steps to implement the above design -

Step 1:

Create two NFS shares of 20 GB or more (as per your requirement) on your NFS or ZFS device; these will contain the Oracle Fusion Middleware binaries, e.g.

192.168.1.1:/export/fmw_share0 (20 GB)
192.168.1.1:/export/fmw_share1 (20 GB)

Note: Instructions on how to create an NFS share are out of scope for this article.

Step 2:

Update the /etc/fstab file on both virtual or physical servers, e.g.

soaserver1.domain.com.au
192.168.1.1:/export/fmw_share0 /u01/app/oracle/product/fmw nfs rw,bg,hard,nointr,rsize=1048576,wsize=1048576,tcp,nfsvers=3,timeo=600 0 0

soaserver2.domain.com.au
192.168.1.1:/export/fmw_share1 /u01/app/oracle/product/fmw nfs rw,bg,hard,nointr,rsize=1048576,wsize=1048576,tcp,nfsvers=3,timeo=600 0 0
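The NFS mount options used above are worth understanding. This breakdown reflects standard Linux NFS mount semantics; tune the values for your own environment:

```
rw             # mount read-write
bg             # retry the mount in the background if the first attempt times out
hard           # keep retrying NFS requests indefinitely rather than returning errors
nointr         # do not allow signals to interrupt pending NFS operations
rsize/wsize    # read/write buffer sizes in bytes (1 MB here)
tcp            # use TCP as the transport protocol
nfsvers=3      # use NFS protocol version 3
timeo=600      # retransmit timeout of 60 seconds (value is in tenths of a second)
```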


Note: In the above configuration the mount path ‘/u01/app/oracle/product/fmw’ must be the same on both virtual or physical servers.

Step 3: 

Create the folder path ‘/u01/app/oracle/product/fmw’ on both virtual or physical servers, e.g.

mkdir -p /u01/app/oracle/product/fmw

Step 4:

Mount the ‘/u01/app/oracle/product/fmw’ directory on both virtual or physical servers using one of the commands below -

mount -a
or
mount /u01/app/oracle/product/fmw

The -a flag mounts all filesystems (of the given types) mentioned in fstab.

Step 5:

Install the Oracle Fusion Middleware binaries on one of the servers (soaserver1.domain.com.au) at the path /u01/app/oracle/product/fmw.

Step 6:

Now copy the content of the ‘/u01/app/oracle/product/fmw’ directory from one server to the other using the scp command:

scp -rp /u01/app/oracle/product/fmw/* oracle@soaserver2.domain.com.au:/u01/app/oracle/product/fmw

-r = copy recursively
-p = preserve modification times, access times, and modes of the original files


Step 7:

Run the ‘df -h’ command on both virtual or physical servers to confirm which share backs the ‘/u01/app/oracle/product/fmw’ folder. Also check the used and free sizes of the filesystem.

The two servers should show different NFS or ZFS share locations.

Step 8:

For additional verification, run the command below against the ‘fmw’ location on both servers. The results should match on both virtual or physical servers.

find /u01/app/oracle/product/fmw -maxdepth 1 -type d -print0 | xargs -0 -I {} sh -c 'echo -e $(find {} | wc -l) {}' | sort -n
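To see why matching entry counts indicate a faithful copy, here is a self-contained sketch of the same counting technique using two throwaway directories instead of the real FMW path (all paths below are temporary examples, not real Oracle home contents):

```shell
# Create a small source tree standing in for the FMW home.
src=$(mktemp -d); dst=$(mktemp -d)
mkdir -p "$src/oracle_common/bin" "$src/soa/bin"
touch "$src/oracle_common/bin/a" "$src/soa/bin/b"

# Stand-in for the scp step: copy the tree, preserving modes and times.
cp -rp "$src/." "$dst/"

# Compare total entry counts, as in Step 8.
a=$(find "$src" | wc -l)
b=$(find "$dst" | wc -l)
[ "$a" -eq "$b" ] && echo "counts match: $a entries each"   # → counts match: 7 entries each

rm -rf "$src" "$dst"
```

If any file failed to copy, the counts diverge and the comparison immediately flags the problem.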



Step 9:

Create a domain and start all WLS server instances.

The WLS server instances (AdminServer, WLS_SCA1, WLS_OSB1) running on soaserver1.domain.com.au will use the Oracle Fusion Middleware binaries from the 192.168.1.1:/export/fmw_share0 share.


The WLS server instances (WLS_SCA2, WLS_OSB2) running on soaserver2.domain.com.au will use the Oracle Fusion Middleware binaries from the 192.168.1.1:/export/fmw_share1 share.


Pros:

  1. Having redundant binaries for the fmw folder helps a lot during the patching process: it avoids an outage of the SOA application and gives major relief to the business. The business gains more confidence in the SOA platform.
  2. If one of the NFS shares becomes corrupted or unavailable, only half of the server capacity is affected; the remaining half of the WLS servers keep running from the other NFS share. For additional protection, it is recommended to mirror the disks for these volumes or shares.

Cons:
  1. A little extra overhead for the SOA administrator during patching: the administrator has to apply each patch to two different FMW NFS share locations.


I hope this article helps you implement OFMW redundant binaries in your SOA environment so that you can enjoy freedom from SOA application outages.

Thanks for reading. Have a good day :) 

Friday, June 19, 2015

Moving WebLogic TLOG from file persistent store to database persistent store !!!

This article will show how to change the TLOG persistent store from the file system to a database.

The persistent store provides a built-in, high-performance storage solution for WebLogic Server subsystems and services that require persistence. The persistent store supports persistence to a file-based store or to a JDBC-accessible store in a database.

One of the most used and important subcomponents of WebLogic is the JTA Transaction Log (TLOG), which by default uses file storage.

It contains information about committed transactions coordinated by the server that may not have been completed.

There are a few pros and cons of keeping the TLOG in the database, as stated below -

Pros – 


1) JDBC stores may make it easier to handle failure recovery since the JDBC interface can access the database from any machine on the same network. With the file store, the disk must be shared or migrated.

2) Leverages replication and HA characteristics of the underlying database.

3) Simplifies disaster recovery by allowing the easy synchronization of the state of the database and TLOGs.

4) Improves Transaction Recovery service migration, as the transaction logs do not need to be migrated (copied) to a new location.

Cons –


1) File stores generally offer better throughput than a JDBC store.

2) File stores generate no network traffic; whereas, JDBC stores will generate network traffic if the database is on a different machine from WebLogic Server.

Additional points to consider before you decide to move the TLOG -


1) Only one JDBC TLOG store can be configured per WebLogic Server, and multiple WebLogic Servers cannot share a JDBC TLOG store.

2) You cannot use a data source that is configured to use an XA JDBC driver or is configured to support global transactions. Use a non-XA data source.

3) The database used to store the TLOG information must be available at server startup. If the database is not available, the WebLogic Server instance will fail to boot.

4) Only the JTA sub-system can use the JDBC TLOG store to persist information about committed transactions coordinated by the server that may not have been completed. No other systems can access the JDBC TLOG store.

5) If the TLOG store is changed from one store type to another, or from one location to another, the change takes effect only after a reboot, and pending transactions in the old store are not copied to the new store. You must ensure there are no pending transactions before changing the TLOG store type or location.

6) If the JDBC TLOG store becomes unavailable, the JTA health state transitions to FAILED and any global transactions will fail. In turn, the server life-cycle changes to FAILED. The JTA Transaction Recovery System then attempts to recover from transient runtime errors if possible and resolves any in-doubt transactions.

7) If the database used to store the TLOG is corrupted and cannot be restored, then all pending transaction information is lost.

8) If database tables or rows used by the JDBC TLOG store are locked for some reason, the database administrator must resolve these locks manually. Otherwise, the JTA subsystem is blocked and remains suspended until the locks are released, or until it encounters an exception due to the lock. The JTA subsystem will remain unable to operate correctly until the locks are released or the value of MaxRetrySecondsBeforeTLOGFail is exceeded.

In order to change the existing TLOG configuration we need to perform three steps:


1) Create a new JDBC Generic/Multi/GridLink data source. In this example I will use a GridLink data source, which is the recommended option.

Note: You cannot specify a JDBC data source that is configured to support global (XA) transactions. Therefore, the specified JDBC data source must use a non-XA JDBC driver. In addition, you cannot enable Logging Last Resource or Emulate Two-Phase Commit in the data source. This limitation does not remove the XA capabilities of layered subsystems that use JDBC stores. For example, WebLogic JMS is fully XA-capable regardless of whether it uses a file store or any JDBC store.

2) Change the TLOG configuration for each WLS server to use the newly created data source.

3) Verify the change.

Create a GridLink data source


Change the TLOG configuration for each WLS server


Select a WLS instance >> go to Configuration >> Services >>

1) Change the ‘Transaction Log Store’ type from the file system to JDBC
2) Select the new data source created in the previous step
3) Leave the default prefix name as it is
4) Repeat these steps for each WLS instance except the Admin server




Verify the changes 


Follow below steps to verify the change –

Check the backend tables -

Once all WLS instances are restarted, WebLogic creates the backend tables inside the DB instance used by the newly created data source.


Check the .out file of each WLS instance, e.g. the WLS_OSB1.out file



It should contain messages similar to the ones below:

<Jun 11, 2015 5:53:47 PM EST> <Notice> <Store> <BEA-280067> <JDBC store "WLS_OSB1JTA_JDBCTLOGStore" did not find a database table at "TLOG_WLS_OSB1_WLStore", so it created one using the commands in file "/weblogic/store/io/jdbc/ddl/oracle.ddl".>

<Jun 11, 2015 5:53:47 PM EST> <Info> <Store> <BEA-280071> <JDBC store "WLS_OSB1JTA_JDBCTLOGStore" opened table "TLOG_WLS_OSB1_WLStore" and loaded 0 records. For additional JDBC store information, use the diagnostics framework while the JDBC store is open.>
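Rather than scrolling through the whole .out file, you can grep for the store message IDs (BEA-280067 = table created, BEA-280071 = table opened). The sketch below runs that grep against a throwaway synthetic log file, since the real .out path depends on your domain layout:

```shell
# Create a throwaway file containing sample store messages (contents made up for illustration).
log=$(mktemp)
cat > "$log" <<'EOF'
<Notice> <Store> <BEA-280067> <JDBC store "WLS_OSB1JTA_JDBCTLOGStore" did not find a database table, so it created one.>
<Info> <Store> <BEA-280071> <JDBC store "WLS_OSB1JTA_JDBCTLOGStore" opened table "TLOG_WLS_OSB1_WLStore" and loaded 0 records.>
EOF

# Count the JDBC TLOG store messages; on a real server, point this at the instance's .out file.
grep -cE 'BEA-2800(67|71)' "$log"   # → 2

rm -f "$log"
```

A count of zero after a restart suggests the instance is still using its file store.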

Check config.xml file 


The entry for each WLS instance’s transaction log must have been updated inside config.xml, as given below.
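For illustration, the updated server entry in config.xml looks roughly like the fragment below. The element names are reproduced from memory and the data source name ‘TLOG_DS’ is hypothetical, so verify against your own config.xml:

```xml
<server>
  <name>WLS_OSB1</name>
  <!-- added when the TLOG store is switched from file to JDBC -->
  <transaction-log-jdbc-store>
    <data-source>TLOG_DS</data-source>
    <prefix-name>TLOG_WLS_OSB1_</prefix-name>
    <enabled>true</enabled>
  </transaction-log-jdbc-store>
</server>
```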


Reference -

http://docs.oracle.com/cd/E21764_01/web.1111/e13701/store.htm#CNFGD222