Tuesday, September 30, 2014

The ‘Other’ Wi-Fi in the room


I recently had the opportunity to troubleshoot a Wi-Fi issue on an iPhone. A home Wi-Fi network that the phone had previously detected and joined would no longer connect. Navigating to Settings->Wi-Fi and tapping on the specific network did not help, and power cycling the home router did not help either. Then the thought of playing around with the ‘Other’ option in the settings came to mind: navigate to Settings->Wi-Fi->Other on the phone, enter the Wi-Fi network name, and select the appropriate security type for the network. In this case it was WPA, so I selected that as the security option. I was then prompted for the Wi-Fi password, which I duly entered, and lo and behold, the phone was able to reconnect to the Wi-Fi network.

Wednesday, December 21, 2011

Circumvent a mutating trigger using INSTEAD OF trigger

Here is a new take on the classic mutating trigger issue in Oracle. Imagine a fictitious scenario where we have a requirement to compute a cumulative sum on a column whenever a row is inserted into a table. Typically, we would write a trigger that looks up the table and uses the sum of the column to derive the cumulative sum for the row being inserted. The trouble with this approach is that the moment we refer to the triggering table within the trigger itself, the mutating table error (ORA-04091) is waiting to happen.

So, how can we beat this problem? One way is a clever use of an INSTEAD OF trigger rather than a conventional trigger. If, instead of allowing the insert to be performed directly on the table, the insert is performed on a view of the table, that gives us the opening we need to solve this problem!

Note: This is not the only mechanism for dealing with mutating trigger issues. Well-established techniques such as using an autonomous transaction, or maintaining package-level variables together with a statement-level trigger, are equally effective. Choosing the right mechanism depends on the exact use case.

Let us see how to solve this problem with an example:

--Creating a sample table to test the no mutation theory

create table no_mutation_t(running_num number, cum_sum number)
/
--Create a view on the table
create view no_mutation_v as select running_num from no_mutation_t
/
--Create a dummy sample data table that drives inserts on no_mutation_t table
create table no_mutation_sample_data_t(running_num number)
/

-- Insert rows in dummy sample data table

insert into no_mutation_sample_data_t values(10)
/
insert into no_mutation_sample_data_t values(20)
/

--Now, we are ready to write the instead of trigger.

create or replace trigger no_mutation_trg instead of insert on no_mutation_v
DECLARE
  l_cum_sum NUMBER;
 
BEGIN
    -- Summing the trigger table is safe here: the trigger fires on the view,
    -- so no_mutation_t itself is not mutating. nvl covers the empty-table case.
    SELECT nvl(sum(running_num),0) INTO l_cum_sum
    FROM no_mutation_t;

    INSERT INTO no_mutation_t VALUES(:NEW.running_num, :NEW.running_num + l_cum_sum);
END;
/

-- The proof of the pudding is in the eating: test the trigger with an insert on the view rather than the table
insert into no_mutation_v select * from no_mutation_sample_data_t
/


The insertion happens smoothly with no mutating trigger errors. A select on the no_mutation_t table shows each running_num alongside its cumulative sum.
For completeness' sake, you can write a conventional trigger on the table with the same trigger body and test by inserting rows into the table itself rather than the view. Doing so results in a mutating table error for the direct table insertion.
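Outside the database, the logic the trigger implements is just a running total seeded from the rows already stored. A minimal Python sketch of that logic (illustrative only; not the PL/SQL itself):

```python
# Simulates the INSTEAD OF trigger: each "insert" computes its cum_sum
# as the new value plus the sum of all running_num values already stored.
def insert_with_cum_sum(table, running_num):
    cum_sum = running_num + sum(row[0] for row in table)  # analog of nvl(sum(...), 0)
    table.append((running_num, cum_sum))

table = []  # rows of (running_num, cum_sum), mirroring no_mutation_t
for value in (10, 20):  # same sample data as no_mutation_sample_data_t
    insert_with_cum_sum(table, value)

print(table)  # [(10, 10), (20, 30)]
```

The key point mirrored here is that the aggregate is computed against the already-committed rows, which is exactly what the INSTEAD OF trigger can safely do because it fires on the view.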

Wednesday, January 19, 2011

Explore AIA 3.0 artifacts without installing AIA!

If there is ever a need to take a closer look at the AIA 3.0 Foundation Pack artifacts without going through the rigorous install process of AIA, here is how to do it:

1. Download the AIA 3.0 installer zip from edelivery.oracle.com
2. Unzip the contents of the file
3. Navigate to Disk1/stage/Components/oracle.aia.top/11.1.1.2.0/1/DataFiles
4. There should be a jar file by the name filegroup1.jar
5. Unjar the contents of the filegroup1.jar file
6. Now, there should be a folder created with the name filegroup1
7. Navigate to filegroup1/AIAMetaData/AIAComponents

The AIAComponents folder is where all the artifacts such as EBOs, EBMs, and EBSs are placed. The AIAMetaData folder also has seed versions of the DVM and XREF files.
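Because a jar file is just a zip archive, steps 2 through 6 can be scripted. A Python sketch (the installer file name is hypothetical; the Disk1 path is from the post), with a small self-contained demo archive so the snippet runs anywhere:

```python
import os
import zipfile

def extract(archive_path, dest_dir):
    """A jar is just a zip archive, so zipfile can unpack it like any zip."""
    with zipfile.ZipFile(archive_path) as archive:
        archive.extractall(dest_dir)

# Against the real installer this would look like (installer name is a placeholder):
#   extract("aia_30_installer.zip", ".")
#   extract("Disk1/stage/Components/oracle.aia.top/11.1.1.2.0/1/DataFiles/filegroup1.jar",
#           "filegroup1")
# after which filegroup1/AIAMetaData/AIAComponents holds the EBO/EBM/EBS artifacts.

# Self-contained demo: build a tiny stand-in jar and extract it.
with zipfile.ZipFile("demo.jar", "w") as jar:
    jar.writestr("AIAMetaData/AIAComponents/readme.txt", "EBO/EBM/EBS live here")
extract("demo.jar", "demo_out")
print(os.path.exists("demo_out/AIAMetaData/AIAComponents/readme.txt"))  # True
```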

Friday, July 09, 2010

SSLy Me!

Here is a tip on deploying SOA composites from JDeveloper 11g to a SOA server that is SSL enabled with a self-signed certificate.

Deploying a SOA composite from JDeveloper 11g would fail with the following exception if the SOA server is SSL enabled with a self-signed certificate:

javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: No trusted certificate found

The same exception will also be raised when we use the B2B adapter wizard in a composite to pick a B2B document.

The following configuration in JDeveloper 11g beats this error:


  • Get a local copy of the keystore JKS file that holds the self-signed certificate
  • Navigate to Tools->Preferences->Credentials in JDeveloper 11g
  • Specify the keystore jks file including the path in the server keystore text-box
  • Specify the keystore password in the server keystore password text-box
  • Specify the private key in the server private key password text-box
This takes care of the exception, and the composites can then be successfully deployed directly from the JDeveloper 11g environment.

Sample JDeveloper screenshot that shows the Credentials configuration page:

[Screenshot: Tools->Preferences->Credentials page with the keystore, keystore password, and private key password fields]
It turns out that even after we beat this error, the JDK compiler used by JDeveloper also needs to be able to recognize this certificate when we compile/make a composite.

The following steps would take care of adding the certificate to the JDK certificate store:

  • Export the certificate using keytool utility from the JKS file into a certificate file
  • Import the certificate using keytool utility into cacerts file used by the JDeveloper JDK
  • The cacerts file would be in the following JDeveloper product path: \oracle\Middleware\JDeveloper\[jdk_directory]\jre\lib\security
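The keytool steps above might look like the following sketch. The alias, keystore file names, and passwords are placeholders that will differ per environment ("changeit" is the stock cacerts password unless it has been changed); the commands are printed for inspection rather than executed:

```python
# Placeholder names: adjust the alias, keystore files, and passwords per setup.
JKS = "server_keystore.jks"   # local copy of the JKS holding the self-signed cert
ALIAS = "selfsigned"          # alias under which the certificate was stored
CERT = "server.cer"           # exported certificate file
CACERTS = r"\oracle\Middleware\JDeveloper\[jdk_directory]\jre\lib\security\cacerts"

# Step 1: export the self-signed certificate from the JKS keystore
export_cmd = ["keytool", "-exportcert", "-alias", ALIAS,
              "-keystore", JKS, "-storepass", "keystore_password",
              "-file", CERT]

# Step 2: import it into the cacerts trust store of the JDeveloper JDK
import_cmd = ["keytool", "-importcert", "-noprompt", "-alias", ALIAS,
              "-file", CERT, "-keystore", CACERTS,
              "-storepass", "changeit"]

for cmd in (export_cmd, import_cmd):
    # Once verified, run each with subprocess.run(cmd, check=True)
    print(" ".join(cmd))
```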

Friday, July 02, 2010

'Long' story short

Recently, I came upon a requirement to do a substring on a LONG column from a select statement in Oracle. The table in question, fnd_documents_long_text, is part of Oracle eBusiness Suite.

What started out as a trivial exercise turned out to be anything but! I was baffled to note that no built-in functions can be used on a LONG column, at least when referencing it from a select statement. It then dawned on me that we would have no option but to come up with a fast and furious PL/SQL custom solution to address this requirement :-)

The objective is to execute a statement like the following:

select substr(long_text,1,255) from fnd_documents_long_text

Running this statement gives us an error, exposing the limitations of the LONG datatype! In this case it is not a self-imposed limitation, as this table comes as part of the eBusiness Suite install!


Here is an outline of one such solution for this issue:

  • Create a user-defined type with a structure similar to fnd_documents_long_text, except that the long_text column is declared as CLOB in the user-defined type
  • Create a table type from the user-defined type
  • Write a pipelined function that processes rows from fnd_documents_long_text, converts the long_text column to a CLOB, and returns the CLOB version of the column as part of the row
  • Create a view that selects rows from the pipelined function using the TABLE() function
  • Use the view instead of fnd_documents_long_text
  • Use dbms_lob.substr function on the long_text column from the view!
The sample code for the solution can be downloaded from here: long_story_short.sql
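Conceptually, the pipelined function streams converted rows out one at a time. Here is that shape sketched in Python, with a generator standing in for the pipelined function and ordinary strings standing in for LONG/CLOB values (the data is hypothetical, for illustration only):

```python
# Rows of fnd_documents_long_text; in the database the LONG-to-CLOB conversion
# happens inside the pipelined function, here a plain str plays the CLOB.
source_rows = [
    (1, "first document body " * 20),
    (2, "second document body " * 20),
]

def long_to_clob_pipeline(rows):
    """Generator analog of the pipelined function: emit converted rows lazily."""
    for doc_id, long_text in rows:
        yield doc_id, str(long_text)  # stand-in for the LONG -> CLOB conversion

# The view over TABLE(pipelined_fn) corresponds to iterating the generator;
# dbms_lob.substr(long_text, 255, 1) corresponds to slicing the string.
view = [(doc_id, clob[:255]) for doc_id, clob in long_to_clob_pipeline(source_rows)]
print([len(text) for _, text in view])  # -> [255, 255]
```

The generator mirrors the key property of a pipelined function: rows are produced and consumed one at a time, so the full result set is never materialized before the substring is applied.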


Once the view is in place, the select statement can now be run like the following:

select dbms_lob.substr(long_text,255,1) from fnd_documents_long_text_v

That's it! A little fist pump and we can start using this solution!

I certainly hope that Oracle comes out with an upgrade solution for handling LONG datatype issues. It would be great to see an upgrade that converts these LONG columns to CLOB columns.

Friday, June 11, 2010

Merry with XQuery


This post explores the advantages of using XQuery within a BPEL process to examine the contents of a payload. We will cover the following use cases:
  • count the occurrences of unique values of a specific repeating node in a payload
  • search for the existence of a node value in a payload
The components that make up an XQuery solution are:
  • an XQuery file
  • an invocation of the processXQuery() function within a BPEL process
The XQuery file is part of the SOA project and can easily be created from within the JDeveloper 11g environment: choose File->New->XML->XQuery file from the menu, and a blank file with the .xq extension is created.

For our count node use case, the following can be used as the contents of the XQuery file; let us name the file count.xq.

[count.xq listing: applies count() over distinct-values() of the repeating node]

The namespace mentioned in the XQuery file must match the namespace of the element that we are going to pass as input to this XQuery file.

Now that we have the XQuery file, let us take a look at how it can be used from a BPEL process.

We will assume that an element with the following value will be passed to the XQuery file, and that this is stored in a BPEL variable named collectionvar. We are going to count the unique values of the node named Id.

Sample contents of the variable collectionvar:

[sample listing: an XML payload containing a repeating Id node, with some values duplicated]
Next, an Assign activity in BPEL can be used to invoke the processXQuery() function.

ora:processXQuery("count.xq", bpws:getVariableData('collectionvar'))

The result of this function can be assigned to another simple type integer variable in the assign activity.

Similar logic can be applied to search for the existence of a specific node value. We will assume that a BPEL variable named collectionvar holds both the search-string node and the payload that is the target of our search.

We are going to assume that the variable collectionvar has the following sample data:

[sample listing: a search-string node alongside a targetelement node holding the payload]
The targetelement node is going to be the payload for this example. If there is a namespace associated with the variable, then that namespace needs to be declared in the XQuery file and used in the XQuery search.

Here is the XQuery code; let us name this file search.xq.

[search.xq listing: tests whether the search value exists among the payload's node values, returning true or false]
The processing takes place with the following invocation:

ora:processXQuery("search.xq",bpws:getVariableData('collectionvar'))

Again, the result of this function should be assigned to another BPEL variable via an Assign activity to capture the result of the XQuery computation. The result will be either true or false.

Now it's time to make merry with XQuery!

Monday, December 21, 2009

Opening the gates to SOA gateway heaven

With the release of the Integrated SOA Gateway in R12, we can invoke a web service as a subscription to a business event. One of the pitfalls we might encounter in the subscription definition is the lack of privilege to set up the web service invocation (yes, we might get this error even when trying to set it up as SYSADMIN).

When you select the action type in the subscription definition as 'Invoke a Web Service', you might get the dreaded 'insufficient privileges' error message. The reason is that the FND user account through which the subscription is being set up does not have privileges on the following set of functions:

  • FND_WF_WS_CONSUME_WSDL
  • FND_WF_WS_SELECT_OPERATION
  • FND_WF_WS_SELECT_SERVICE
  • FND_WF_WS_SELECT_SERVICE_PORT
  • FND_WF_WS_SUBSCRIPTION


We can assign these privileges to the FND user (SYSADMIN or any other appropriate user) through the following steps:

1) Create a menu in eBusiness Suite with these functions as components of the menu. It is important to note that the menu type should be set to 'Permission Set'
2) Switch to User Management responsibility in eBusiness Suite to create a user defined role and assign the newly created permission set to this role
3) Assign this new role to the FND user account

This should take care of it. Our gateway to SOA integration heaven would then be completely open to us :-)