Wednesday, October 29, 2008

Spring SOA

I haven't had time to write a blog entry in the past 3 to 4 months. I have been working on my dream: building my own company (http://www.springsoa.com/), learning taxes, hiring employees/contractors, and bidding for projects. Not everything has gone smoothly, but I am quite happy with what I have accomplished in the last couple of months, especially while still working on my own projects. Hopefully things will keep improving; I also plan to add a reusable component library shopping cart for Fusion Middleware (FMW), and maybe FMW training sessions.

Friday, June 27, 2008

Refreshing Connection Pool via MBeans

You can easily refresh a connection pool via Enterprise Manager. If you look at the MBean Browser (for more information, see my earlier blog on JMX, http://chintanblog.blogspot.com/2007/12/it-is-all-about-jmx-i-worked-on-jmx.html; it is a very comprehensive post), you can search for your connection pool; it provides a management interface to refresh the connection pool as well. I believe EM internally uses the MBean interface.

Anyway, here is a programmatic way of accessing the connection pool MBean and refreshing it using the MBean client API.

To get the MBean name, you can look at the MBean Browser as displayed here. You can also see that it supports testConnection, refreshConnectionPool, and other methods.


You can see the MBean name and method names in the MBean Browser; you can also invoke them from here if you want.


Here is the code to execute this method using Java API:


I got the MBean name and method name from the System MBean Browser. This code works great and does invoke the refreshConnectionPool method.
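For reference, a minimal sketch of such an MBean client using the standard JMX remote API. The ObjectName pattern, service URL, and credentials below are assumptions; take the exact MBean name from the System MBean Browser on your own server.

```java
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class PoolRefreshClient {

    // Builds the ObjectName for a named connection pool. The domain and key
    // properties here are assumptions -- copy the exact name shown in the
    // System MBean Browser for your own pool.
    public static ObjectName poolName(String pool) throws Exception {
        return new ObjectName("oc4j:j2eeType=JDBCConnectionPool,name=" + pool);
    }

    // Connects to the server's MBean server and invokes refreshConnectionPool.
    // The service URL and credentials are placeholders.
    public static void refresh(String serviceUrl, String user, String pwd,
                               String pool) throws Exception {
        JMXServiceURL url = new JMXServiceURL(serviceUrl);
        java.util.Map<String, Object> env = new java.util.HashMap<String, Object>();
        env.put(JMXConnector.CREDENTIALS, new String[] { user, pwd });
        JMXConnector connector = JMXConnectorFactory.connect(url, env);
        try {
            MBeanServerConnection mbsc = connector.getMBeanServerConnection();
            // refreshConnectionPool takes no arguments, per the MBean Browser
            mbsc.invoke(poolName(pool), "refreshConnectionPool",
                        new Object[0], new String[0]);
        } finally {
            connector.close();
        }
    }
}
```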


Testing the connection pool refresh

Replicating a scenario where the connection pool gets broken, and stays broken until my refresh routine is called, was quite tricky. I tried several different scenarios; here is the one that worked as expected.

Here is the use case and steps I came up with:

1) Start database and create connection pool and data source
2) Test the connection pool using Mbean browser "testConnection" method
Expected Result: it should work without any issue.
3) Stop database and Start the database
4) Test the connection pool using Mbean browser "testConnection" method
Expected Result: it should not work, and should show error (at least for a while) because of stale connection
5) Stop database and start the database
6) Run refresh Mbean Client routine
7) Test the connection pool using Mbean browser "testConnection" method
Expected Result: it should work without any issue.

Execution of the use case:

Step 1) Start database and create connection pool and data source:
I created a connection pool called TestPool, which points to the Test schema.

Step 2) Test the connection pool using Mbean browser "testConnection" method:
As shown below, it was successful.


Step 3) Stop database and Start the database
Step 4) Test the connection pool using Mbean browser "testConnection" method
As expected, the testConnection method failed, with a different error message on each of the four invocations. After these four failures, the connection was successful.

Exception occurred testing connection. Exception: java.sql.SQLException: No more data to read from socket.
Exception occurred testing connection. Exception: java.sql.SQLException: OALL8 is in inconsistent state.
Exception occurred testing connection. Exception: java.sql.SQLException: Io exception: Software caused connection abort: socket write error.
Exception occurred testing connection. Exception: java.sql.SQLException: Closed Connection.







Step 5) Stop database and start the database

Step 6) Run refresh Mbean Client routine

Step 7) Test the connection pool using Mbean browser "testConnection" method

This time it just works fine...


Code for MBean Client can be downloaded from here.


Thursday, June 26, 2008

DB Adapter Tricks


1. Dealing with Special Characters in Table/Column name

Recently I saw an issue on the forum: the database does allow creating a table or column with special characters in the name (e.g. $, #, etc.), but based on the W3C XML Schema standard (http://www.w3.org/TR/xmlschema-1/#cElement_Declarations), not all special characters are allowed in XSD element/attribute names; $ is certainly not allowed. The Oracle Database adapter creates a one-to-one mapping between database column/table names and element/attribute names.

Sample table: I created a table named "mytable" with the following script:

create table mytable (
id int,
company varchar2(100),
my$comments varchar2(100),
processed varchar2(100),
processed_time date
);

a) Custom SQL: no TopLink mapping is used; in this case the conversion from $ to _ is done automatically. That is, when the DB adapter creates the XSD file, it names the attribute my_comments. That's cool, no work.

b) Insert/Select operation: this uses the TopLink mapping, which is basically the <<SERVICENAME>>_table.xsd file and the <<SERVICENAME>>_toplink_mappings.xml file. We can change the _table.xsd file and convert my$comments to my_comments as shown below:

Only these changes are required to make things work. The same approach works for table names and for other special characters.
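For reference, the _table.xsd edit is just a rename of the generated element. A sketch based on the sample table above; the attribute list is illustrative, not a verbatim copy of the generated schema:

```xml
<!-- Before (generated from column MY$COMMENTS; $ is illegal in an XSD name):
     <xs:element name="my$comments" type="xs:string" minOccurs="0" nillable="true"/> -->

<!-- After: a legal element name the adapter can expose -->
<xs:element name="my_comments" type="xs:string" minOccurs="0" nillable="true"/>
```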






2. Dealing with Dynamic Queries - Query By Example

For simple use cases, you can always use a custom query with multiple parameters, e.g. "select * from TABLENAME where COL1 = ?1 and COL2 = ?2". It works perfectly fine, and the input parameters show up in the XSD and in the WSDL contract.

Sometimes a richer query is required, usually one with a dynamic where clause. For example:

sometimes you need: select * from tablename where col1 = ?1
sometimes you need: select * from tablename where col1 = ?1 and col2 = ?2

Here you can see the where clause being built dynamically. This is supported by the Oracle DB adapter and is called Query By Example. It was very well supported in 10.1.2, but somehow the option was dropped from JDeveloper; the good news is that it is still supported at runtime, and you can make small changes to a SELECT operation to convert it into Query By Example.

What it does: it takes an object as input (e.g. Employee) and returns a collection of Employees as output. The trick is that the where clause is created from the Employee attributes you provide. So if you pass a null Employee object, it runs a "select * from Employee" query. If you provide an Employee input with the company name "Oracle", it returns all employees matching the company name Oracle.

How to implement it:

I created a DynamicQueries BPEL process and a DBSelect partner link which does a simple SELECT operation (not custom SQL) with no parameters, on a table named Test. Now we need to modify two files:

DBSelect_table.xsd: create an element called Test.


DBSelect.wsdl: specify the keyword IsQueryByExample="true" in your JCA operation as shown below:
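A sketch of what the JCA operation ends up looking like; only the IsQueryByExample="true" attribute is the point here, and the other attribute names and values are illustrative:

```xml
<jca:operation
    InteractionSpec="oracle.tip.adapter.db.DBReadInteractionSpec"
    DescriptorName="DBSelect.Test"
    QueryName="DBSelectSelect"
    IsQueryByExample="true" />
```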

That's all it needs. You need to create an input variable of element "Test", assign it to the partner link input variable, and deploy the process.





3. Inserting multiple records in Database in single transaction

Well, I know this is supported out of the box and you need to do nothing special to achieve it. If you create a database adapter with the INSERT operation in your BPEL process, it is exposed with a collection of objects as input. You can use XSLT to populate that collection, and all records will be inserted in one atomic transaction.


A working example for all three can be found here.



Thursday, June 19, 2008

File Size from File Adapter

Well, reading/writing the file name and directory is well documented in the File Adapter guide and in blogs. Just to recap: in order to read/write the file name and directory, you are supposed to create a header variable of element InboundFileHeaderType (it is different for an outbound header variable). The WSDL file is created when you create the FileRead or FileWrite operation.



After you create the variable, you just need to specify it as the header variable on the adapter.




That's all that is required; during file polling, fileName and directoryName will already be populated.


Wait a minute: if you look at Variable_FileHeader at runtime in an Assign activity, you can see that 5 elements are populated rather than 2. E.g.:


It means this header variable is populated with 5 elements, while the WSDL file exposes only two! How do we get the remaining three, or let's say SIZE, which is the most interesting one to all of us?


Two ways:

1) Hack the WSDL file and add size as one of the elements, as shown below. (Note: the fileAdapterInboundHeader.wsdl file is created with the read-only attribute set, so you have to change the attribute prior to modification.)


After doing this, the size element is populated and you can assign it to any other variable you like.

2) This is the non-intrusive way. Since the header variable (an XML element) already contains size, it is just a matter of extracting it, so why not convert the header variable to a string and do a string search? E.g., I used the following Assign activity to get the size and put it into a string variable called Variable_File_Size. You can see that I am converting the XML element to a string and then searching for the text between <size> and </size>.
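The same between-tags search can be sketched in plain Java. This is an illustration of the string-search trick, not the adapter's API; the class and method names are made up:

```java
public class HeaderSizeExtractor {

    // Extracts the number between <size> and </size> in a header XML string.
    // Returns -1 when the element is absent. A real XPath/DOM lookup would be
    // more robust; this mirrors the plain string search used in the Assign.
    public static long extractSize(String headerXml) {
        int start = headerXml.indexOf("<size>");
        int end = headerXml.indexOf("</size>");
        if (start < 0 || end < 0 || end < start) {
            return -1;
        }
        String value = headerXml.substring(start + "<size>".length(), end).trim();
        return Long.parseLong(value);
    }
}
```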




Things didn't end here, as somebody asked me how to know how many elements/attributes are really supported. I couldn't find this documented anywhere, so it was back to JAD; I found the file "oracle/tip/adapter/file/FileAgent.java", which had the following lines of code:


Based on the code, it looks like the variables are just inserted on the fly: it is not JAXB; the header is simply treated as a DOM element. Of course, this also showed me what I would need to do to insert my own custom header in the File Adapter.


Here is the entire code used for sample.

Wednesday, June 18, 2008

Business Rules WebDav Repository

I used to configure WebDav on Oracle Database for Oracle Business Rules; it was pretty hard to configure and very unstable. I recently found out that I can configure WebDav on Oracle's Apache instead.

Usually, to install WebDav on a vanilla Apache server, you need to install a couple of DLL files and load those modules during Apache startup, but for Oracle's Apache all the configuration is done out of the box; the only thing we need to specify is the location and type of the repository.


Configuration file:


%soasuite%/Apache/oradav/conf/moddav.conf


To enable default WebDav repository:


<Location /dav_public>
DAV on
</Location>

Once you specify DAV on, http://host:port/dav_public (which is located under %soasuite%/Apache/Apache/htdocs/dav_public) is ready for use as a WebDav repository.


To create new WebDav repository:

1) Add following entry in %soasuite%/Apache/oradav/conf/moddav.conf file

<Location /my_webdav_repository>
DAV on
</Location>

2) Create a directory called %soasuite%/Apache/Apache/htdocs/my_webdav_repository

3) Restart the Apache server; that is all that is required to configure a custom repository.


To password-protect the WebDav repository:

1) Create an authentication file with the desired users; I created:

%soasuite%/Apache/Apache/bin/htdigest -c %soasuite%/Apache/oradav/conf/webdav.access webdav-authentication oc4jadmin
%soasuite%/Apache/Apache/bin/htdigest %soasuite%/Apache/oradav/conf/webdav.access webdav-authentication ruleauthor

2) Change %soasuite%/Apache/oradav/conf/moddav.conf to provide the authentication mode:

<Location /my_webdav_repository>
DAV on
AuthType Digest
AuthName "webdav-authentication"
AuthDigestFile %soasuite%/Apache/oradav/conf/webdav.access
Require valid-user
</Location>

3) Restart the server; now my_webdav_repository will be accessible only after username/password authentication.


Tuesday, June 17, 2008

Flex Field Mapping Migrator

Nothing special; just putting together some BPEL client APIs to create a useful utility for BPEL Human Workflow migration.

Flex field mappings are very useful for creating custom views/queues and reports. Flex field mappings in the worklist are persisted in the Worklist schema with an MD5-encoded GUID. It is not advisable to promote them from one server to another via database scripts. The BPEL workflow client API supports extracting, creating, and deleting such mappings for the worklist application.

Here is the utility I created using the worklist client API. I have released it as a JAR file, and I wrote a wrapper BAT file to execute the command.

As mentioned earlier, it supports three COMMANDNAMEs: export, import, and clean. Export generates a workflowmappings.xml file in the current directory; import takes a workflowmappings.xml file and imports all the payload mappings to the target server. My import program is not very intelligent: it cleans all the mappings on the target server and recreates them according to workflowmappings.xml; it is not a smart update.

It can be downloaded from here.

Tuesday, May 27, 2008

Can you create a domain with upper case name?

I realized that you cannot create a domain with an upper-case name, and if you do, you have to log in to BPELConsole in a very weird way. I saw the Metalink note, and this issue has been around for more than a year. I looked into the exploded version of BPELConsole; it took me a couple of minutes to fix, and here is my hack:

Either get the modified JSP files from here and drop them into %soasuite%/j2ee/oc4j_soa/applications/orabpel/console, or follow the steps described here:

1. %soasuite%/j2ee/oc4j_soa/applications/orabpel/console/index.jsp

Search for the line with IBPELDomainHandle ch = l.lookupDomain() and comment out 2 lines. It is around line number 15 or 16.

Now if you log in to the BPEL console, you can see the domain picker JSP file. I thought it would just work fine, but I had to modify the domainPicker JSP file as well to get things done.

2. %soasuite%/j2ee/oc4j_soa/applications/orabpel/console/domainPicker.jsp

a) Search for the string “alreadyAuthed.add( aDomainId )” (it will be around line number 39) and add the following line of code:

b) Change the JavaScript at the end of the page:

c) Finally, change the submit button, and that’s all.

No need to bounce the server to see the magic! It feels like hacking is my business and consulting is a byproduct.

Rule Engine Tricks

While working with Oracle Business Rules I found some tricks:

1) Enable Logging

Provide the following properties in the decisionservice.desc file in your BPEL process:

<properties>
<property name="watchRules">true</property>
<property name="watchActivations">true</property>
<property name="watchFacts">true</property>
<property name="watchCompilations">true</property>
</properties>
You have to put this right inside <ruleEngineProvider>, right after the <repository> tag. If you have worked with the Rule Engine SDK, you may realize that this does not really provide all the debugging information you can see via ruleSession.executeFunction("watchFacts"). Well, be happy with what it provides; it is good enough. If you want more information, look at my article on the Custom Decision Service.

2) Additional Libraries

If your BPEL process has a decision service and your rule calls an external Java program, you will need to include those JAR files in your path. The directory is decisionservices\DecisionService\war\WEB-INF\lib under your BPEL root directory.

3) RuleSet invoking other RuleSets

Hmm, this was a cool one. If you have a rule set that calls another rule set via the pushRuleSet function, it works just fine in Rule Author, but the decision service interface doesn't let you execute it; it throws an exception like "Rule Set xyz is undefined". Again, if you are familiar with how the Rule SDK works, you can guess why: the decision service developer is probably loading only one rule set, and that is what causes the issue.

Resolution: rule functions behave differently. If you execute a rule function, and that function calls multiple rule sets, it just works. When I created my custom decision service, I did the same for both rule sets and rule functions. Therefore, with the out-of-the-box decision service, we have to create a function interface in order to work with multiple rule sets.

4) Deploying to home container

As shown in the diagram below, if you specify oc4j_soa while deploying the decision service, it works just fine. If you specify home when creating the JDeveloper connection, deploying the decision service throws the following exception. Therefore, we have to use oc4j_soa when making the JDeveloper AS/Integration server connection.

[deployDecisionServices]
[deployDecisionServices] 08/05/14 18:20:26 Notification ==>application : rules_default_TestBPEL1_1_0_DecisionService is in failed state

[deployDecisionServices]

[deployDecisionServices] Exception in thread "ConfigurableThreadImpl::" java.lang.NoClassDefFoundError: oracle/classloader/util/LocalizedText
[deployDecisionServices] at com.evermind.server.rmi.RMICall.EXCEPTION_ORIGINATES_FROM_THE_REMOTE_SERVER(RMICall.java:109)
[deployDecisionServices] at com.evermind.server.rmi.RMICall.throwRecordedException(RMICall.java:125)
[deployDecisionServices] at com.evermind.server.rmi.RMIClientConnection.obtainRemoteMethodResponse(RMIClientConnection.java:517)
[deployDecisionServices] at com.evermind.server.rmi.RMIClientConnection.invokeMethod(RMIClientConnection.java:461)
[deployDecisionServices] at com.evermind.server.rmi.RemoteInvocationHandler.invoke(RemoteInvocationHandler.java:63)
[deployDecisionServices] at com.evermind.server.rmi.RecoverableRemoteInvocationHandler.invoke(RecoverableRemoteInvocationHandler.java:28)
[deployDecisionServices] at com.evermind.server.ejb.StatefulSessionRemoteInvocationHandler.invoke(StatefulSessionRemoteInvocationHandler.java:31)
[deployDecisionServices] at __Proxy2.getEvents(Unknown Source)
[deployDecisionServices] at oracle.oc4j.admin.jmx.client.MBeanServerEjbRemoteSynchronizer.getEvents(MBeanServerEjbRemoteSynchronizer.java:530)
[deployDecisionServices] at oracle.oc4j.admin.jmx.client.CoreRemoteMBeanServer.getEvents(CoreRemoteMBeanServer.java:319)
[deployDecisionServices] at oracle.oc4j.admin.jmx.client.EventManager.run(EventManager.java:217)
[deployDecisionServices] at oracle.oc4j.admin.jmx.client.ThreadPool$ConfigurableThreadImpl.run(ThreadPool.java:303)

How to start Enterprise Manager

If you shut down Enterprise Manager from the console or in any other way, here is the trick to start it again:

soasuite/j2ee/home/config/default-web-site.xml
<application name="ascontrol" path="../../home/applications/ascontrol.ear" parent="system" start="true" />
<application name="javasso" path="../../home/applications/javasso.ear" parent="default" start="true" /> (not sure if it is required all the time)

soasuite/j2ee/home/config/server.xml
<web-app application="ascontrol" name="ascontrol" load-on-startup="true" root="/em" ohs-routing="true" />

Bounce the home container using following command

opmnctl restartproc process-type=home

Friday, May 23, 2008

Custom Decision Service

I am not a very big fan of the Decision Service which comes out of the box with Oracle BPEL. Here are a couple of issues I found:

- It creates a whole bunch of JAXB classes in my BPEL process, both .java and .class files (I have to keep both of them; the .class files are not generated during compilation of the BPEL process!).

- I can NOT change a created decision service; it is not like a DB adapter that I can click on and reconfigure. That makes my life extremely difficult when I have to accommodate changes from a client.

- I cannot call the service from ESB or another WS consumer; the exposed decision service takes millions of parameters that are not relevant to a web service consumer (e.g. bpelInstanceId: why in the hell does the rule engine need to know the BPEL instance id?).

- There is a decision service for each partner link! We have 20 BPEL processes making almost 30 different types of calls; that ends up as 30 decision services, which makes them impossible to manage.

- It is way too complicated to implement WSIF with the decision service.

I love the idea of BAM, where all Active Data Cache operations are exposed as one service. I had the very naive and romantic idea of implementing my own custom decision service that could talk to any type of rule set or rule function and pool rule connections, all via one web service interface. It can take any type of parameter, and using Java reflection I can convert it to the right type; that's all I need. My client bought the idea, and I implemented a very nice, generic custom decision service.
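The reflection part of the idea can be sketched as a toy generic invoker. The class names here (GenericInvoker, Doubler) are illustrative, not the real service classes; the point is converting a raw argument to the declared parameter type and dispatching by name:

```java
import java.lang.reflect.Method;

public class GenericInvoker {

    // Finds a one-argument method by name, converts the raw string argument
    // to the declared parameter type, and invokes it. This is the trick that
    // lets one generic web service front many rule sets/functions without
    // generating per-partner-link JAXB classes.
    public static Object invoke(Object target, String methodName, String rawArg)
            throws Exception {
        for (Method m : target.getClass().getMethods()) {
            if (m.getName().equals(methodName) && m.getParameterTypes().length == 1) {
                Class<?> type = m.getParameterTypes()[0];
                Object arg = rawArg;
                if (type == int.class || type == Integer.class) {
                    arg = Integer.valueOf(rawArg);
                } else if (type == double.class || type == Double.class) {
                    arg = Double.valueOf(rawArg);
                }
                return m.invoke(target, arg);
            }
        }
        throw new NoSuchMethodException(methodName);
    }

    // A sample target, standing in for a rule-function facade.
    public static class Doubler {
        public int twice(int n) { return 2 * n; }
    }
}
```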

1. Implemented POJO

- I implemented a small Java POJO service which takes configuration such as: RuleRepositoryType (WebDav or file), file repository connection information, WebDav repository connection information, the various debug flags supported by the Rule Engine, dictionary name, rule function or rule set, and input and output types.

2. Implemented EJB 3.0 and exposed as Web service via annotations

- I created an EJB 3.0 session facade on top of the POJO so that it is container-managed, can easily be exposed as a web service via annotations, and can host the connection pool.

3. Implemented Connection Pooling in EJB

- I implemented a custom connection pool for the rule repository and the rule dictionary; a new rule session is initiated each time. I also noticed that pooling/caching the rule repository alone was not good enough, so I had to cache the rule dictionary as well.

4. WSIF Interface via annotations

- I just followed http://chintanblog.blogspot.com/2007/12/although-wsif-seems-to-be-hack-to-plug.html to create WSIF on top of my EJB 3.0 session facade.

5. Performance Metrics

Test case: a WebDav repository and a rule function interface which executes more than 10 rule sets and creates a whole bunch of external connections. The rule session itself usually takes about 2 seconds to complete. I used 10 threads and a loop of 50 with a 1-second delay in each (so the total number of instances is 500).

Default decision service over SOAP: I created a sample BPEL process and a decision service with a Decide activity (which comes with the JAXB .java and .class files, jar, ear, war, and so on...). Here were the results:

Average execution time: 3.1 seconds

Number of failures: 15

Custom decision service over SOAP: to be fair, I didn't use the WSIF interface; I called my custom decision service over the SOAP protocol, just like a normal partner link.

Average execution time: 3.5 seconds

Number of failures: 10

Notes:

- The reason behind the slowness is that I am doing more reflection, which is a bit expensive.

- Because of the EJB and container-managed infrastructure, I get fewer failures.

- The out-of-the-box decision service caches the rule connection, whereas the custom decision service manages an entire pool that is shared across the enterprise. I believe that if I tested with 20 different BPEL processes with different call patterns, I would be far ahead in comparison with the default decision service.

I am more than willing to get your feedback on this. I am not sure I did the right thing, but I believe I reduced (or rather completely eliminated) the overhead of managing the far more complex default decision service.

Thursday, May 22, 2008

ESB Display Older Instances

I used to have my own interceptor logging everything I needed into the database, so I never had to look at the ESB console.

Issue 1 :

Recently, while working in the ESB console, I realized that it displays only one day's worth of instances! Looking into the database, I saw that all the instances were there.

Yeah, of course the ESB client API was causing the issue. As mentioned in my earlier blogs, to debug ESB console issues the best thing is to use obtunnel to intercept all your requests and then a simple wingrep/grep to search for what you are interested in.

The reason behind the one-day limit was easy to find:

File Name: %soasuite%/j2ee/oc4j_soa/applications/esb-dt/esb_console/esb/model/model.InstancesSearch.js

Where: search for the keyword "InstanceSearchModel.prototype.init", around line number 150 in 10.1.3.1

Content:

Original:

InstanceSearchModel.prototype.init = function(serviceConfigROTID)
{
this._trackingSC = new TrackingSearchCriterion(serviceConfigROTID);
this._serviceEntityInfo = null; // ESBEntityInfo
this._status = "Any";
this._flowID = "";
this._timePeriod = 1;
this._timeUnit = "days";
this._timeZoneString = InstanceSearchModel.getTimezoneString(new Date(), false);
}

Basically, InstanceSearchModel creates a REST web service request to the ESB server, and timePeriod is a configurable parameter of this object. Based on my observation, the other JavaScript files using the InstanceSearchModel object do not pass this parameter, so the default of 1 day is used. I changed the default to 20 days, and it showed 20 days' worth of instances...

Modified:

InstanceSearchModel.prototype.init = function(serviceConfigROTID)
{
this._trackingSC = new TrackingSearchCriterion(serviceConfigROTID);
this._serviceEntityInfo = null; // ESBEntityInfo
this._status = "Any";
this._flowID = "";
this._timePeriod = 20;
this._timeUnit = "days";
this._timeZoneString = InstanceSearchModel.getTimezoneString(new Date(), false);
}

Issue 2 :

ESB has a limitation of returning only 100 results. It was an amazing experience to rip open the entire oraesb.jar and go through the hierarchy of Java files:

soasuite/j2ee/oc4j_soa/applications/esb-dt/esb_console/WEB-INF/web.xml
oraesb/oracle/tip/esb/configuration/servlet/CommandServlet.jad
oraesb/oracle/tip/esb/configuration/servlet/command/GetInstancesCommand.jad (xmlInstanceManager.getInstances)
oraesb/oracle/tip/esb/console/XMLInstanceManager.jad
oraesb/oracle/tip/esb/console/XMLInstanceManagerImpl.jad
oracle/tip/esb/monitor/manager/ActivityMessageStore.jad
oraesb/oracle/tip/esb/monitor/manager/database/DBActivityMessageStore.jad (InstanceListXMLBuilder)
oraesb/oracle/tip/esb/monitor/manager/database/InstanceListXMLBuilder.jad
oraesb/oracle/tip/esb/monitor/manager/database/FilterParser.jad (String s = RepositoryFactory.getRepository("RUNTIME").getESBParameter("MaxInstanceCount"))

Bingo! I finally found the file which reads the parameter from the ESB parameter table; now it was a piece of cake!

I added insert into esb_parameter (PARAM_NAME, PARAM_VALUE) values ('MaxInstanceCount', '1000'); in the ORAESB schema, bounced the server, and it started working like magic!

Tuesday, May 13, 2008

MQ Adapter Anatomy

Recently I had a chance to work with the MQ Adapter; it was a pretty nice experience, and I tried pretty much all the different settings supported by the MQ Adapter. Here is what I would like to share.

If you religiously follow the documentation (http://download.oracle.com/docs/cd/B32110_01/web.1013/b28956.pdf), there is not much value I can add, but you might still find some useful, consolidated information here.

1) Different Settings

Now let's take a look at each parameter:
  • Scheme: Three types of scheme

    1. dynamic: use up to maxConnections; if all are in use, hand out a new connection anyway, ignoring the maxConnections limit

    2. fixed: use up to maxConnections; if all are in use, throw an exception for new requests

    3. fixed_wait: use up to maxConnections; if all are in use, wait a fixed amount of time for a newly requested connection

    • Best practice is to use fixed_wait with a higher timeout so that the number of connections stays capped

  • minConnections/maxConnections/initial-capacity: decide the number of connections, and the number of initial connections created during startup.

  • waitTimeout: the timeout used by the fixed_wait scheme for new requests.

  • inactivity-timeout-check: When to check for expired or inactive connections. Supported values are "never","periodic","piggyback" (when a new connection is fetched), and "all" (periodically and when a new connection is fetched).

    • We kept it “all”, as we don’t want to keep connections idle longer than a certain time period.

  • inactivity-timeout: The desired connection timeout, in seconds, as a positive integer, or 0 for connections to never expire. Negative values are disallowed.

    • We have kept it at 60 seconds.
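Putting the parameters above together, the pool settings live in the adapter's connector-factory entry in oc4j-ra.xml. A hedged sketch; the location and the exact property names should be verified against your own oc4j-ra.xml and the adapter guide:

```xml
<connector-factory location="eis/MQ/MyQueueManager" connector-name="MQSeries Adapter">
  <!-- queue manager connection details go here as config-property entries -->
  <connection-pooling use="private">
    <property name="scheme" value="fixed_wait"/>
    <property name="minConnections" value="1"/>
    <property name="maxConnections" value="10"/>
    <property name="initial-capacity" value="1"/>
    <property name="waitTimeout" value="300"/>
    <property name="inactivity-timeout-check" value="all"/>
    <property name="inactivity-timeout" value="60"/>
  </connection-pooling>
</connector-factory>
```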

2) "Garbage Collector" - clean up routine

Basically, three parameters configure the release of idle connections held by the OC4J connection pool:

  • inactivity-timeout-check: has to be "all" or "piggyback", all is desired if you want to periodically run clean up routine.

  • inactivity-timeout: The desired connection timeout, in seconds, as a positive integer, or 0 for connections to never expire. Negative values are disallowed.

  • server.xml (taskmanager-granularity): when inactivity-timeout-check is configured to be periodic, this value specifies how frequently the periodic recycling routine runs.

Here is sample server.xml entry:
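As a sketch, the relevant piece of server.xml is the taskmanager-granularity attribute on the root element; its placement and the millisecond unit are assumptions, so verify against the OC4J configuration reference:

```xml
<!-- run the task manager (and thus the periodic connection cleanup) every 60 s;
     the value is assumed to be in milliseconds -->
<application-server taskmanager-granularity="60000">
  <!-- existing content unchanged -->
</application-server>
```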

3) Different Level of Caching and Related Errors

Actually, even after settings 1 and 2, connections were still not getting cleaned up, and we could see on the MQ server side (using Tivoli) that connections were not getting closed. We found out from Oracle product managers that the adapter itself manages a cache of MQ (or any other type of) connections.

We need to provide the cacheConnections=false property in the BPEL partner link or the ESB endpoint properties. After that, ESB and BPEL release connections nicely, and the cleanup routine described in No. 2 works great and cleans up all idle connections.

4) Inbound and Outbound MQ adapter

Another difference I found was between inbound and outbound adapters. If you have an MQ outbound adapter with cacheConnections=false, it releases the connection right after completing the unit of work, and the connection becomes available to the OC4J connection pool. But an MQ inbound adapter with cacheConnections=false still holds the connection, probably because of its continuous polling.

I have not tried changing the polling frequency to see if the connection gets released.

Friday, May 9, 2008

Big News - Resigned Oracle

I joined on Nov 29, 2005, and today I completed my last day at Oracle. I feel pretty sad and, at the same time, very excited to face new challenges by myself. Now I am becoming an independent consultant in the Oracle SOA space, without any expert's support.

I may be putting my career at risk, but as we all know: the biggest risk in life is not to take one!

Wednesday, April 9, 2008

ESB (and BPEL) purge instances

There are multiple ways to purge ESB instances.

Database Scripts
For 10.1.3.3.1 SOA Suite, ESB purge scripts are provided as part of the installation (%soasuite%/integration/esb/sql/other):
- purge_by_date.sql
- purge_by_instance_id.sql
- purge_by_id.sql


ESB Client API

Database scripts are always good, but best practice is to rely on the API provided by the product. As with the BPEL client API, whatever you can do in the ESB console you can do via the ESB client API. I used %soasuite%/bpel/bin/obtunnel.bat to check what XML-based payload it uses. For cleaning ESB instances, here is the sample code:

purgeInstance.consoleClient = purgeInstance.getESBClient();
String purge = "<instanceManage enable='true' userPurgeTimePeriod='14400000'/>";
HashMap<String, String> requestProps = new HashMap<String, String>();
requestProps.put("root", purge);
purgeInstance.consoleClient.perform("UpdateTrackingConfig", requestProps);

Here the time period is in milliseconds, and all instances older than that timestamp will be purged. If we need to purge all instances, we change userPurgeTimePeriod='0'.


Custom ANT Task
I saw the custom ANT tasks for managing BPEL domains, purging instances, etc. It is an amazing job done by Ramkumar Menon (http://blogs.oracle.com/rammenon/2007/11/26#a74). I used his framework and added some tasks for managing ESB.

It is really easy to use these tasks; e.g., for managing a BPEL domain you can specify the following in build.xml:

<BPELServerAdmin providerurl="${bpelProviderURL}" username="${bpelUsername}" password="${bpelPassword}">
<manageDomain domain="${bpelDomainName}">
<purgeInstances select="all" fromdatetime="${bpelPurgeInstanceFromDateTime}" todatetime="${bpelPurgeInstanceToDateTime}"/>
<undeployProcesses select="BPELProcess.*" type="all" revision="all"/>
</manageDomain>
</BPELServerAdmin>

This will purge instances based on timestamp and undeploy the processes.

For ESB, this is what I added:

<ESBServerAdmin username="${esbUsername}" password="${esbPassword}" hostname="${esbHostName}" port="${esbPort}">
<purgeESBInstances todatetime="${esbPurgeInstanceToDateTime}"/>
</ESBServerAdmin>

You can purge ESB instances up to a timestamp; a from/to range is not supported. It could probably be enhanced if there is a requirement.

Here is the link for downloading the Ant tasks, with a readme file, a sample build.xml and a sample build.properties file.



Tuesday, April 8, 2008

Performance with ESB and BPEL UDDI runtime lookups

Well, nothing new here; as long as you are using SOA Suite 10.1.3.3.1 you have both of them out of the box. Just to recap:

BPEL run-time look-ups:


1) You have to define/publish the WSDL in the service registry. If you publish the WSDL through the user interface, it creates a unique sequence id as the registry key, which you can change using the registry user interface. You can also use the registry API to publish the WSDL of the service.

2) Define registryServiceKey as a property of the partnerlink


You can either edit bpel.xml or edit the partnerlink to achieve this:




3) Define registry URLs in domain.xml


The URL for uddiLocation should be http://host:port/registry/uddi/inquiry.


ESB run-time look-ups:


1) As with BPEL, you have to define/publish the WSDL in the service registry (through the user interface or the registry API).


2) Define registryServiceKey as an endpoint property for the service


3) Define the registry URLs in the integration/esb/config/esb_config.ini file



I have tried both ESB and BPEL run-time look-ups and they work great, although I saw some really bad performance issues:

After enabling run-time look-ups, ESB or BPEL calling any service via UDDI was taking longer than 1 minute (it was a couple of milliseconds earlier). Upon analysis we found out that the actual invocation was not the issue; preparing the call (getting the WSDL, parsing it and storing it in a specific cache) was taking the longest time.

We also saw that if the load on BPEL and ESB is not very high there are not many performance issues, but as soon as the load increases (with the number of threads) things get really slow. We did look at the service registry logs and its response time; the service registry was extremely fast.

Here is the reason we found behind the slowness: ESB and BPEL both cache WSDLs in a specific format. With run-time look-ups, BPEL and ESB get the WSDL from the service registry (which is pretty fast), but analyzing that WSDL and storing it in a structure the BPEL and ESB run-time engines can understand takes a long time.
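The behavior can be pictured with a tiny sketch (illustrative only, not the actual engine code, and all names are made up): the first lookup for a service key pays the expensive parse/translate cost, while later lookups are served from the cache. That is why low-load tests look fine while a burst of fresh lookups under load does not:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative model of the engine-side WSDL cache: the first lookup per
// service key pays the expensive parse/translate step, later lookups do not.
public class WsdlCache {
    private final Map<String, Object> cache = new ConcurrentHashMap<>();
    private int parseCount = 0; // how many times the expensive step ran

    Object lookup(String serviceKey) {
        return cache.computeIfAbsent(serviceKey, this::parseWsdl);
    }

    // Stand-in for fetching the WSDL from the registry and translating it
    // into the engine's internal structure (the slow part described above)
    private Object parseWsdl(String serviceKey) {
        parseCount++;
        return "parsed:" + serviceKey;
    }

    int getParseCount() { return parseCount; }
}
```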

In conclusion, the whole BPEL, ESB, OSR run-time look-up setup looked like an ideal approach but had some performance drawbacks. All that glitters is not gold :)

Monday, March 31, 2008

ESB read-only Console

I know this functionality is supported out of the box, but there are some special touches added by me, and I thought I would share them with everybody.

To create an ESB read-only user account:

- Create a user in EM with the ascontrol_monitor role; that is all that is required to create a read-only account for the ESB console.

Now if I log in to the ESB console using the esbreadonly user, everything works fine. There are some gotchas here:

- You can delete any service using the esbreadonly user!

- Sometimes you want to enhance the functionality, e.g. not all read-only users should have access to routing rules, etc.

I published an article on the BPEL Read-only Console; it was fairly easy because the BPEL read-only console is all JSP, and I used a Servlet filter to achieve that functionality. The ESB console, however, is all JavaScript underneath. From the directory names I can see it is the Picasa framework; I am not sure how that works, but debugging JS takes a hell of a lot of time unless you have a good editor.

Disable Delete button for Readonly Users:

The best way to debug JavaScript is to use wingrep and find the pattern you are looking for. For disabling the DELETE button, here is the trick:

- open %soasuite%/j2ee/oc4j_soa/applications/esb-dt/esb_console/esb/commands/controller.ServiceNavigator.js

- Search for the following two entries (they should be around line 399 or 400 in the file):

var a=ActionController[gActionDeleteService];
a.enable();

- Add the following lines of code below the a.enable(); line:

/* cshah changes : added code to disable delete button */
if(!isAdminUser()) {
a.disable();
}

That's all; the DELETE button will be disabled for non-admin users.

Enhance the ESB console functionality, e.g. disable tabs such as Routing Rules

- open %soasuite%/j2ee/oc4j_soa/applications/esb-dt/esb_console/esb/commands/controller.ESBController.js

- Search for the following two entries (they should be around lines 614 and 615 in the file):

tabmodel.enableTab(TabModel.TAB_RTNGRULES_IDX);
tabmodel.enableTab(TabModel.TAB_TACKINGFLDS_IDX);

- Add the following lines of code right below tabmodel.enableTab(TabModel.TAB_TACKINGFLDS_IDX);

/* cshah -- disable routing rules if the user is not admin */
if(!isAdminUser()) {
tabmodel.hideTab(TabModel.TAB_RTNGRULES_IDX);
}

This will remove the Routing Rules tab for read-only (non-admin) users.

Thursday, January 10, 2008

OWSM Offline Agents

Per the architecture, OWSM agents intercept the request, call the policy manager to retrieve the policies, and then execute those policies where the agent is installed. This introduces a lot of traffic on the policy manager. Policies are stored in the database, and the policy manager is the single point of contact for them.

However, we can break this dependency with an offline agent installation: we can export all the policies from the database as an XML file and provide that file to the OWSM agent at installation time. In this case, the OWSM agent will look into the file and will not contact the policy manager. We can update this file and the policies at run time without bouncing the server.

Here are steps I followed:

Step 1: I created a client agent using the OWSM ccore user interface and defined the policies as I would for any other client agent.

Step 2: I exported those policies to an XML file using the "save" button:

Step 3: Now it is time to install this agent with the policy file already exported. Client agent installation is explained pretty well at http://www.oracle.com/technology/obe/fusion_middleware/owsm/index.html, but here I will just provide the information required to make it offline.

I provided the following agent.properties and then used "wsmadmin installAgent" to install the client agent.

agent.componentType=OC4JClientInterceptor
agent.containerType=OC4J
agent.containerVersion=10.1.3
client.home=d:/soasuite/j2ee/oc4j_soa
agent.component.id=C0003007
agent.policymanager.enabled=false
agent.policySet.file=D:/temp/owsm/OfflineAgentPolicySet.xml

Here I had to provide the policy file and disable the policy manager.
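The decision those two properties drive can be sketched like this (an illustration only; the class and method are hypothetical, but the property names are the ones from agent.properties above):

```java
import java.util.Properties;

// Hypothetical sketch of the choice an offline agent makes at startup:
// with agent.policymanager.enabled=false, policies come from the exported
// file and the policy manager is never contacted.
public class PolicySource {

    static String policySource(Properties agentProps) {
        boolean pmEnabled = Boolean.parseBoolean(
            agentProps.getProperty("agent.policymanager.enabled", "true"));
        if (!pmEnabled) {
            // offline mode: read the exported policy set file
            return "file:" + agentProps.getProperty("agent.policySet.file");
        }
        return "policymanager";
    }

    public static void main(String[] args) {
        Properties p = new Properties();
        p.setProperty("agent.policymanager.enabled", "false");
        p.setProperty("agent.policySet.file", "D:/temp/owsm/OfflineAgentPolicySet.xml");
        System.out.println(policySource(p)); // file:D:/temp/owsm/OfflineAgentPolicySet.xml
    }
}
```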

Step 4: Final testing: After this, I created two BPEL processes, Calle and Caller, to test my client agent, and it worked simply great. Upon changing the file name my client agent didn't work, and upon updating the file the client agent picked up the new changes. It worked!

You can download agent.properties, policy file and test code from here.

OWSM Client Agents

It took me a while to configure all the client agents, so here is some information I would like to share. OWSM supports 3 types of client agents (http://download.oracle.com/docs/cd/E10291_01/doc.1013/e10298.pdf, page 6-6):

- J2EE Client Agent: Configured when a Servlet/EJB makes a call to a web service and the web service clients are maintained by the container

- J2SE Client Agent: Configured when stand-alone Java is making a call to a web service.

- WSIF Client Agent: Leverages the WSIF framework used by ESB and BPEL. Configured when ESB/BPEL makes an outbound call to a web service.

The client agent intercepts the outbound requests and executes the policies defined by the policy manager.

WSIF Client Agent: It is very well explained at Oracle OBE: http://www.oracle.com/technology/obe/fusion_middleware/owsm/index.html

J2SE Client Agent: Here are the steps I followed to create J2SE client agent.

Step 1: I created a BPEL web service called HelloBPELAgentTestingCalle, deployed that process to BPEL PM and secured it using a server agent which requires WSSE authentication information.

Step 2: I copied the WSDL link and created a J2EE web service proxy from JDeveloper.

It creates pretty much all the code required to invoke the web service, although I had to add a couple of lines of code to test the way I wanted:

I tried to create a fake security token from the wizard (and also per the documentation): I created HelloBPELAgentTestingCalleBinding_Stub.xml and put that file in src and also under the src/../proxy/runtime directory. The content of this file is very well explained in the WSIF client agent section.

I then put all the libraries on the classpath as mentioned in the document: client.home/owsm/lib/extlib, ORACLE_HOME/owsm/lib/cfluent-log4j.jar, the JDBC jar file, ORACLE_HOME/jlib/orai18n.jar and ORACLE_HOME/jlib/ojmisc.jar.

Now if I execute the client from JDeveloper, I can see that it executes the client agent before the call goes to the real service.

J2EE client agent:

I tried to create a J2EE client by creating the web service proxy mentioned earlier and then wrapping it in a servlet or JSF page.

I found out that I would have to follow http://download.oracle.com/docs/cd/B31017_01/web.1013/b28974/j2eewsclient.htm to create the web service client and then follow http://download.oracle.com/docs/cd/E10291_01/doc.1013/e10298.pdf to configure my war or jar file. I believe that's too much work, and per my personal experience I would avoid recommending such a complicated approach to a client.

I went for a slightly easier solution: although my client agent will not be managed by the container, I get the same functionality. I used my J2SE client with the _stub.xml file and all the other good stuff, created a servlet wrapper around it and deployed it to the application server; things work just fine.

In a nutshell, you can use the J2SE client approach and still deploy to the container. It will serve the purpose but won't be managed by the container.

You can download the code for the WSIF, J2SE and J2EE client agents over here.

Wednesday, January 9, 2008

OWSM Field Level Encryption

I had a chance to work on OWSM agents and different types of encryption. Here I would like to present the solution for OWSM field-level encryption. I already had an encryption/decryption example working for the full payload, so here I will just provide tricks on how to configure OWSM for field-level encryption.

It is basically doing XPATH encryption, as described in http://download.oracle.com/docs/cd/E10291_01/doc.1013/e10299/policy_steps.htm#sthref612.

Step 1.

I created a BPEL process called DemoOWSMFieldLevelEncryption which pretty much returns the input string. Here is how it looks:

I have an input payload with an SSN number in it, which I am interested in encrypting:

Please note down the namespace; it is used when we encrypt the message.
Step 2:

Just for testing purposes I registered that service in the OWSM gateway, and started creating policies for the service as shown below:

If we look at the XML encrypt step in more detail:

Here I am using an existing utility to create the JKS files. The interesting things to note down are:

Encrypted Content: XPATH

Encrypt XPATH: /soap:Envelope/soap:Body/ns1:DemoFieldLevelEncryptionProcessRequest/ns1:SSN
Encrypt namespaces: soap=http://schemas.xmlsoap.org/soap/envelope/,ns1=http://xmlns.oracle.com/DemoFieldLevelEncryption

As you can see, I am using the soap and ns1 namespaces in my XPATH, so I have to define them in the namespaces section as comma-separated values.
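To check that such an XPath plus namespace mapping actually selects the field, here is a small self-contained sketch using the JDK's XPath API (the SOAP message below is a hand-written stand-in for the real payload; only the XPath and the two namespace URIs come from the configuration above):

```java
import java.io.ByteArrayInputStream;
import java.util.Iterator;
import javax.xml.namespace.NamespaceContext;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;

public class FieldXPathDemo {

    // Returns the text selected by the Encrypt XPATH from a sample payload
    static String extractSsn() throws Exception {
        String soapMsg =
            "<soap:Envelope xmlns:soap='http://schemas.xmlsoap.org/soap/envelope/'>"
          + "<soap:Body>"
          + "<ns1:DemoFieldLevelEncryptionProcessRequest"
          + " xmlns:ns1='http://xmlns.oracle.com/DemoFieldLevelEncryption'>"
          + "<ns1:SSN>123-45-6789</ns1:SSN>"
          + "</ns1:DemoFieldLevelEncryptionProcessRequest>"
          + "</soap:Body></soap:Envelope>";

        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        dbf.setNamespaceAware(true); // without this the prefixed XPath matches nothing
        Document doc = dbf.newDocumentBuilder()
                          .parse(new ByteArrayInputStream(soapMsg.getBytes("UTF-8")));

        XPath xpath = XPathFactory.newInstance().newXPath();
        // Same prefix-to-URI mapping as the comma-separated "Encrypt namespaces" value
        xpath.setNamespaceContext(new NamespaceContext() {
            public String getNamespaceURI(String prefix) {
                if ("soap".equals(prefix)) return "http://schemas.xmlsoap.org/soap/envelope/";
                if ("ns1".equals(prefix))  return "http://xmlns.oracle.com/DemoFieldLevelEncryption";
                return null;
            }
            public String getPrefix(String uri) { return null; }
            public Iterator<String> getPrefixes(String uri) { return null; }
        });

        return xpath.evaluate(
            "/soap:Envelope/soap:Body/ns1:DemoFieldLevelEncryptionProcessRequest/ns1:SSN",
            doc);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(extractSsn()); // the field OWSM would encrypt
    }
}
```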

If we look into XML decrypt, it remains pretty much the same: whether you encrypt the body/header/envelope or an XPath does not change the XML decryption part.

I created a LOG step before and after each policy step as part of best practices.

Step 3:

Now it's time for testing. I used the OWSM test page to test my registered service and used the Execution Logs to check whether messages were getting encrypted and then decrypted back to the original content. Here is what I saw:

First log: SSN is encrypted

Second Log: SSN is decrypted back to the original value

It is indeed encrypting and decrypting the field-level values. The source code can be downloaded here.