Tuesday, 21 December 2010

Bending the ITIM custom adapter model

If you have ever needed to create custom RMI-based Tivoli Identity Manager (ITIM) adapters using Tivoli Directory Integrator (TDI), you will know that there are certain constraints that must be adhered to. In addition to data-related constraints such as naming conventions and mandatory attributes ($dn, objectclass), there are several less obvious architectural and communications constraints, such as:
  • The target assembly line must be completely self contained - it cannot call out to child assembly lines
  • For a reconciliation assembly line (full and supporting), every iteration must return one and only one object
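To make the second constraint concrete, here is a minimal Python analogy (not TDI API; the attribute values are illustrative): ITIM drives the reconciliation feed like an iterator and expects exactly one complete object back per cycle, so any multi-record fetch has to be flattened into single entries first.

```python
# Hypothetical illustration of the "one and only one object per
# iteration" contract. Each yielded dict stands in for one TDI work
# entry with the mandatory attributes ($dn, objectclass) populated.

def recon_feed(fetched_entries):
    """Yield exactly one reconciled object per cycle."""
    for entry in fetched_entries:
        yield {
            "$dn": f"uid={entry['uid']},ou=people",
            "objectclass": "account",
            **entry,
        }

feed = recon_feed([{"uid": "alice"}, {"uid": "bob"}])
first = next(feed)   # one discrete object per iteration
```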
Whilst recently working on an adapter for a customer, I found that these two constraints came together to turn an otherwise straightforward and logical data synchronisation process into something much more convoluted. After some experimentation I managed to come up with a solution that satisfied the customer requirement, and it all worked well in practice. Since that time, some colleagues of mine working on similar tasks have come up against the same constraints, so I thought that a breakdown of my approach might prove useful.

The inspiration for my approach came from a blog post by TDI guru Eddie Hartman on the topic of TDI Connector loops, which can be found here. Using a MemoryQueueConnector and a couple of Connector loops, I created an assembly line with the following structure.

Feed:
 MemoryQueueConnector (iterator mode)
  - beforeInit: set a flag so that the queue is not created during initialisation
  - onError: capture the initial 'queue not created' error, create the queue, set a control flag and re-init the MemoryQueueConnector

Flow:
 if(firstCycle)
 {
  //remove any control flags
  ConnectorLoop_Outer (iterator mode) {
   //read and set some work attributes based on the current iteration
  
   ConnectorLoop_Inner (iterator mode) {
    //Map retrieved attributes to queue attributes
    //Write out work data to memory queue
    //Purge work object
   }
  }
  //pop one entry from the queue
  //one discrete entry from the queue is returned to ITIM
 }
 else
 {
  //remove any control flags
  //one discrete entry from the queue is returned to ITIM
 }
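The flow above can be sketched in generic Python (this is a stand-in for the TDI assembly line, not TDI API; the `fetch_all` callable represents the nested Connector loops): on the first cycle all backend data is fetched and pushed into an in-memory queue, and every cycle, first or otherwise, ends by popping exactly one discrete entry for ITIM.

```python
from collections import deque

class QueueFeed:
    """Illustrative sketch of the first-cycle-fill / pop-one-per-cycle pattern."""

    def __init__(self, fetch_all):
        self.fetch_all = fetch_all   # stands in for the outer/inner Connector loops
        self.queue = None            # created lazily, mirroring the onError hook

    def next_entry(self):
        if self.queue is None:
            # firstCycle branch: run the loops and push everything into the queue
            self.queue = deque(self.fetch_all())
        # both branches end the same way: one discrete entry is returned
        return self.queue.popleft() if self.queue else None

feed = QueueFeed(lambda: [{"uid": "alice"}, {"uid": "bob"}])
entries = []
while (e := feed.next_entry()) is not None:
    entries.append(e)
```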


At a high level, the approach above is composed of the following steps:
  • Create a Memory Queue, purposely get it to error during (the first) initialisation, tag it and reinit the Queue
  • 1st cycle - fetch all required data from target systems
  • 1st cycle - Push all fetched data into the memory queue
  • 1st cycle - Pop one entry from the queue and return data to ITIM
  • All other cycles - return data to ITIM as normal

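The first step's error-then-reinit dance can be modelled like this (a generic sketch with hypothetical class and flag names, not TDI API): initialisation is deliberately allowed to fail because the queue does not yet exist, the error handler then creates the queue, tags the connector with a control flag, and re-initialises it successfully.

```python
class LazyQueueConnector:
    """Illustrative model of the deliberate-error-then-reinit pattern."""

    def __init__(self):
        self.queue = None
        self.flags = set()

    def initialize(self):
        if self.queue is None:
            # the expected first-init failure
            raise RuntimeError("queue not created")

    def on_error(self, err):
        # onError hook: create the queue, set a control flag, re-init
        self.queue = []
        self.flags.add("queueInitialised")
        self.initialize()  # re-init now succeeds

conn = LazyQueueConnector()
try:
    conn.initialize()
except RuntimeError as err:
    conn.on_error(err)
```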
As I am using a MemoryQueue, I of course needed to be careful about the amount of data that gets pumped into the queue during the first cycle. For better memory management, and perhaps data integrity, the TDI System Store could be used to write the data into a database and then 'page' the queue data back out again.
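Since the System Store is RDBMS-backed, that paging idea can be sketched generically with `sqlite3` standing in for the store (this is not the System Store API, and the table and class names are mine): staged entries live in a database table rather than in process memory, and are read back out one at a time.

```python
import json
import sqlite3

class DbBackedQueue:
    """Illustrative database-backed staging queue (sqlite3 stand-in)."""

    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS staged "
            "(id INTEGER PRIMARY KEY AUTOINCREMENT, entry TEXT)"
        )

    def push(self, entry):
        # serialise the work entry and stage it in the database
        self.db.execute("INSERT INTO staged (entry) VALUES (?)",
                        (json.dumps(entry),))

    def pop(self):
        # page one entry back out in insertion order, then delete it
        row = self.db.execute(
            "SELECT id, entry FROM staged ORDER BY id LIMIT 1").fetchone()
        if row is None:
            return None
        self.db.execute("DELETE FROM staged WHERE id = ?", (row[0],))
        return json.loads(row[1])

q = DbBackedQueue()
q.push({"uid": "alice"})
q.push({"uid": "bob"})
```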

Overall I have found this approach easy to debug, as all the data is grabbed in one pass and then read out one item at a time in a clear and concise manner. If I were expecting 500 data items (for example) from my backend system(s), I can simply check the resultant queue size against the object count in ITIM to ensure that everything matches up.

I hope this has been useful. If anyone would like to dig into any of the deeper details, please get in contact and I'd be happy to help out where I can.

1 comment:

  1. Nice solution, Steve! And you don't have to go as far as the System Store to handle bigger data sets. Just open the Advanced twistie on the Memory Queue Connector and voilà! you have persistence settings, including watermark and %free memory threshold, plus System Store parameters like db and table name. If you leave db blank it goes to the standard SysStore db.

    I believe I have a Metamerge pen with your name on it :)

    -Eddie

    ReplyDelete