Friday, October 24, 2014

I should play more lotto

Just my luck: Friday afternoon and I hit an error in the Oracle VM.
OVMRU_001020E does not show on Google or inside Oracle Support ;-)

Am I really the first to hit that problem? Seems to be an issue with the network cards in my virtual machine. I guess I will do some more reading before I can continue installing my virtual machines.

Update:
Solved it. In the VM's view I went to the Network Ports, selected additional ports, added them to the machine, and off we went.

Wednesday, October 22, 2014

A call to flushChanges on the current MDSSession does not specify the correct transaction key

Interesting issue in a SOA/BPM today.
Redeployment of the application (SOA) and restart went through without an issue.

However, the stress test failed almost immediately. The AdminServer.out showed the error "A call to flushChanges on the current MDSSession does not specify the correct transaction key". Even more astonishing was the error TNS-12516 when trying to connect to the database.

After a little bit of searching we found that the database alert log said it could not extend the SOA_INFRA tablespace.

We extended the tablespace and everything went back to normal.
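For the record, the fix on the database side boils down to giving the tablespace room to grow. A sketch of what a DBA would run - the datafile paths and sizes here are made up; only the tablespace name SOA_INFRA comes from our alert log:

```sql
-- Either let the existing datafile autoextend ...
ALTER DATABASE DATAFILE '/u01/oradata/SOADB/soa_infra01.dbf'
  AUTOEXTEND ON NEXT 100M MAXSIZE 8G;

-- ... or add a second datafile to the tablespace
ALTER TABLESPACE SOA_INFRA
  ADD DATAFILE '/u01/oradata/SOADB/soa_infra02.dbf' SIZE 1G;
```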

Sunday, October 05, 2014

OVMAPI_4010e during server discovery

I bounced into the problem of getting the dreaded OVMAPI_4010e error when I was discovering the Oracle VM server in my network. Metalink (MyOracleSupport for the younger ones) told me that this was typically an issue with the password of the OVM Agent.

One obscure internet blog, however, told me that this was due to a version mismatch between the OVM Manager and the OVM Server. Now - it beats me how the connection information and credentials are passed differently when a new version arrives, but eventually I placed the same version of the OVM Manager as the OVM Server on my VirtualBox image and it works.

OVM's strange architecture

Obviously I do not run a major Oracle-based OVM data center. So for real-life large enterprises this is not an issue.

However, imagine the following: you have a machine on which you want to run Oracle VM Server.
Installation of the OVM is straightforward and pretty easy.

Now you end up with a piece of equipment that does exactly NOTHING.

Nothing? Yup. Well you can start it and you get a Linux prompt, and that is pretty much it.

In comes the Oracle VM Manager. It dances, it talks, it does the wash and it walks the dog - and it can be yours for 99.99$ ;-)

Seriously, in order to get anything useful done with the OVM Server you need the Manager. Now there is a small design flaw: setting up the OVM Manager requires an extra machine. This can be a VirtualBox image, another machine in your network or even a guest on your Oracle VM Server.

Now the last option seems appealing. Normally in such an environment I would like to have the controlling software on the box that I control (with the option to move it elsewhere when I grow and buy a second box).

Unfortunately, before you can place an image on the OVM Server you need to configure it - meaning you must install the OVM Manager on a different platform.

Seems awkward. How about an installation option that would create the Dom0 on the (first) OVM Server and bring an OVM Manager with it? It is not there. Pity!

So I downloaded a VirtualBox template with the OVM Manager.

This did not work. The internet told me that this was due to a mismatch of the passwords of the OVM Agent. Changed the password - no success.

Now I am in the process of creating my own VirtualBox image with OEL6 and installing OVM Manager 3.3.1.

Let's see how this works out.

UPDATE: It seems to work now (see also next post).

Saturday, October 04, 2014

A cable - a kingdom for a cable. Can you already see something?

Ok - when you do things the first time you make mistakes.

This was the first server I had ever built from scratch. I had fiddled around with some equipment before, but buying a number of components and assembling them was new to me.

Now the motherboard I bought has no onboard video, as the folks who typically buy these extreme motherboards bring their own video cards to render the monsters they are about to slay.
With me things were not so bright. Although I had also bought a nice 24" screen, I was still seeing nothing - basically because there was nothing on the server to plug it into.
Well - another trip to the store to buy a simple video card.

I did, however, initially buy a wireless card for the machine, as I intend to place it in my study room. Historically this is where my DSL connection ended, so there was always a router with a cable present.

To my surprise I could not install the Oracle VM Server without a network. Obviously this makes sense when you are using this product in a data center which does not use WiFi a lot but all shades and colors of cable.

I investigated the issue and found that the machine was perfectly aware of its network sockets (two of them), and the WiFi card was detected as well. The issue at hand was that the Oracle VM Server kernel was not equipped with the ath9k module, which provides the Linux wireless support for these kinds of cards.

Briefly I thought about baking my own Oracle VM kernel. I have done so in the past, so I knew what it would take. I did some checks and found out that I would need to install a gazillion pieces of software to be able to build a new kernel.

I quickly abandoned the idea and settled on running an Ubuntu system on the Oracle VM and using it as a stepping stone to connect to the WiFi.

Still not there. See the next post.

My endeavour with Oracle VM

So - I bought a decently sized system to have my own private cloud on it. The main motivation was the fact that currently it has become pretty difficult to run some Oracle software on your laptop.

Although a number of people do this you quickly realize that they just have a simple database installation on it or a plain WebLogic Server.

While this is satisfactory for a number of purposes it doesn't come near the things that I encounter in my day to day life. My customers typically have HA or even MAA environments, using RAC and some of the newest Fusion Middleware components.

When you simply want to re-enact an environment of that size your laptop will not be sufficient.

So - this journey is about the setup and "cloudy" ideas I have.

So what I have now is a system that offers me a good CPU, 64 GB of memory (probably the most expensive part nowadays) and 3 TB of disk plus a 512 GB SSD. All in all it cost me around 1,800 euros. And as it was bought by my company, the Dutch minister of finance is (hopefully) happily supporting me with this.

Let's see where this ends up.

Friday, May 16, 2014

JDBC and CMAN and a new location for the database

I had an interesting issue this week.

The setup is as follows:

WLS with WebCenter. The datasource points to a database for the metadata. This database sits in the database zone. As we are not allowed to connect directly to the database, there is a CMAN in between.
After the setup of the CMAN and the database we had a JDBC connection with the CMAN as our endpoint.

So far so good.

Now the IT department had decided (a long time ago) to move a number of databases from one platform to another. As this meant a change in the IP and name of the database, we expected that the only configuration change needed would be in the CMAN.

Database moved, CMAN config adapted and voila: nothing works!

First test was to use the console and test the JDBC connection. The error was:

ORA-12514 TNS:listener does not currently know of service requested in connect descriptor.


This in itself was strange, as from the WLS/WebCenter perspective nothing had changed. The endpoint was still the CMAN, and the SID did not change after the move.
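For context: ORA-12514 means the listener (in our case the CMAN) accepted the network connection but did not recognize the service name in the connect descriptor. A JDBC URL through a CMAN typically looks like the sketch below; host, port and service name are made-up placeholders, and you would use (SID=...) instead of (SERVICE_NAME=...) when connecting by SID:

```
jdbc:oracle:thin:@(DESCRIPTION=
  (ADDRESS=(PROTOCOL=TCP)(HOST=cman-host.example.com)(PORT=1521))
  (CONNECT_DATA=(SERVICE_NAME=wcmeta.example.com)))
```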

Second test was to see if the connection from the WLS machine to the CMAN was still OK. As there was no Oracle client on the system I just used a telnet connection to the assigned CMAN port. Perfect result: the connection was open, so something was listening on the machine on the given port. As I am positive that this machine only harbors a CMAN I was happy.
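As an aside, when not even a telnet client is installed, the same quick reachability check is a few lines of scripting. A Python sketch (host and port are placeholders); like the telnet test, it only proves that something is listening, not that it is a CMAN:

```python
import socket

def port_is_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds.

    Equivalent to a quick 'telnet host port' check: success means
    something is listening there, nothing more.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Usage would be as simple as `port_is_open("cman-host.example.com", 1521)`.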

Third check - going to the CMAN folks to see if the connection from the CMAN to the DB was still OK.
It worked like a charm. The config looked perfect too.

Now this became a kind of a puzzle.

I went back to the WLS Admin console and restarted the JDBC connection. Still the same error.

I tried to change some settings in the JDBC configuration, but to no avail.

Just for the fun of it I created a new JDBC connection pointing to the same CMAN endpoint with absolutely the same settings as before. Assigned the new JDBC datasource to a managed server (actually the cluster) and hit the test button. Now everything was the same as before but the name. It worked. Hooray!!!

As we had a number of datasources with some of them having names that are used inside the applications it seemed like a tedious idea to drop the datasources and recreate them. I did this for one cluster and it worked but I was a little bit too lazy to do this for all of them.

Then I tried something else. I went to a managed server of another domain and restarted it.
When it came back - which surprised me with a "broken" datasource - I went to the datasource and tested it again. Immediately the test came back without an error.

Trying another one - same result.
So obviously I restarted all managed servers (and the Admin Servers, as they need database access as well).
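Restarting every managed server by hand gets tedious in a larger domain. A WLST sketch along the lines below can do it in one sweep - note that the admin URL and credentials are placeholders, and this is an untested outline, not a production script:

```python
# Run with: wlst.sh restart_managed_servers.py
connect('weblogic', '<password>', 't3://adminhost:7001')
domainRuntime()

# Bounce every running managed server; skip the AdminServer itself
for srt in cmo.getServerLifeCycleRuntimes():
    name = srt.getName()
    if name != 'AdminServer' and srt.getState() == 'RUNNING':
        shutdown(name, 'Server', force='true', block='true')
        start(name, 'Server', block='true')

disconnect()
```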

I wanted to let you know this, as such a situation can impact your MAA environment when you change the endpoint of a database behind a CMAN.

 

Friday, December 07, 2012

Cloud Control agent does not deal well with redeployments

In a lot of environments the development process calls for a daily rebuild of the system. This is typically followed by a (re)deployment of the SCAs (SOA composite applications) to the Fusion Middleware environment.

In one of my customers' environments I had to deal with the issue that each morning the SOA SCAs were depicted in Cloud Control as down. A quick check on the server showed them up and running. The same happened in the Enterprise Manager of the server. So the redeployment worked as it should.

So why were the SCAs noted as down in Cloud Control when the EM showed them as up? This has to do with the way the Cloud Control agent works. The interactive way is described in the following manual: http://docs.oracle.com/cd/E24628_01/install.121/e24215/fmw_discovery.htm

Now, when an agent has discovered an SCA it does not automatically update the lifecycle status of that deployment. A redeployment adds a new label to an SCA in a partition, but the agent seems to be unaware of such a change in the partition and clings to the formerly known label.

Obviously it is disturbing when you look into your Cloud Control dashboard and you find that a (large) number of targets are down.

The Admin Guide shows how to manually update the collection.

In http://docs.oracle.com/cd/E24628_01/install.121/e24215/fmw_discovery.htm#BABCEADC the process of looking for new or modified targets is described. That certainly works if you have only a small environment (say one or two SOA Suite installations) with a low frequency of redeployments. If you redeploy each night you will need a better approach.

The problem with automating this approach is that it is very well hidden and (to my knowledge) not documented.

What you need to do is go to the domain that contains your deployments. The summary screen shows the status.

[Screenshot: domain summary page after a manual refresh]

Look at the Date/Timestamp on the page (red circle in the screenshot). If you click on it, the following pop-up box shows:

[Screenshot: refresh pop-up box]

When you check the box next to “Enable Automatic Refresh” you end up with a job that will be editable in the job list. You cannot create such a job in the job section, but you can edit its frequency. The best frequency is right after the nightly deployment run.

Tuesday, August 07, 2012

SOA Suite partitioning

For a customer I am busy developing a SOA Suite purging strategy. As we expect the system to have a high number of instances, we will have to keep the dehydration store and the MDS clean. The latter is also essential as the end customer is under some budget constraints, which means that the database (and especially the size of its disks) will not be allowed to grow limitless.

Now - one of the important features of the database when it comes to space management is partitioning. Partitioning a table gives you more management capabilities and tools to keep your DB size manageable.

There are a number of pointers in the documentation that describe how database partitioning can be used for the SOA Suite - with one essential part missing: how to set it up.
There is a remark that the creation of partitions is a task for a skilled DBA (true) and a second remark stating that the admin guide will not describe it.

So when you want to use partitioning for your SOA Suite MDS you are on your own.
The RCU won't help you, the admin guide is vague, and Google seems to point to SOA Suite partitioning in a different sense (the old BPEL domains).
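To give an idea of the direction I am heading: the classic pattern is to range-partition the big instance tables on their creation date, so that purging a period becomes a cheap metadata operation instead of a massive DELETE. A simplified, hypothetical example - the real SOA_INFRA tables and their foreign keys are considerably more involved:

```sql
-- Hypothetical instance table, range-partitioned per month
CREATE TABLE demo_instances (
  id           NUMBER PRIMARY KEY,
  created_date DATE   NOT NULL,
  payload      VARCHAR2(4000)
)
PARTITION BY RANGE (created_date) (
  PARTITION p_2012_07 VALUES LESS THAN (TO_DATE('2012-08-01', 'YYYY-MM-DD')),
  PARTITION p_2012_08 VALUES LESS THAN (TO_DATE('2012-09-01', 'YYYY-MM-DD'))
);

-- Purging July: drop the partition, keeping the global PK index usable
ALTER TABLE demo_instances DROP PARTITION p_2012_07 UPDATE GLOBAL INDEXES;
```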

There is only one thing that can help: an Oracle ACE willing to make it work, document it and spread the word.
So stay tuned as I will post my findings here in the next couple of days/weeks.