Thursday, April 23, 2015

OIM's New Flat File Connector

In this post, I will talk about OIM's new flat file connector, which was released some time back.
After doing some testing with it, these are my findings.


  1. This connector is the way to go for flat file trusted/target reconciliation and should be used instead of the GTC, as it does away with the GTC wizard.
  2. It relies on a schema file to define the structure of the data: the primary key, date fields, multivalued data, and so on.
  3. You can manually edit the schema file (.properties) to declare fields and their properties (see the schema sketch after this list).
  4. The data file sits separately on the server, with the first line containing just the column headers, followed by the data records (see the sample data file after this list).
  5. You can load/reconcile users, accounts, roles, entitlements, connected/disconnected resource data, etc. using this connector.
  6. You can create custom parsers for data in formats other than CSV (see the parser sketch after this list).
  7. It also supports reconciling complex multivalued data.
  8. The delimiter, comment, and multivalued data indicator characters are configurable.
  9. If a data value contains spaces, it should be double quoted (the text qualifier, " by default, is a configurable option).
  10. The flat file header (the column names in the first line) can contain spaces; it need not be double quoted, as it is not data.
  11. It supports preprocess and postprocess handlers, which are handy for performing jobs on the flat file directory: zipping and unzipping files, encrypting and decrypting complete file dumps or specific fields within them, virus scanning, or any other task limited only by your implementation (see the postprocess sketch after this list).
  12. Reconciliation of deleted records is supported.
  13. The connector comes with an OOTB Groovy script that makes it easy to generate trusted/target or disconnected-application-specific metadata (OIM artifacts).
  14. Most of the configuration can be changed at runtime by modifying the lookups, the schema file, and the data file. Say you add a new column: update the lookups, then the schema file, and then add the data to the data file to get it working.
  15. By default, archiving moves the processed file to the specified folder and zips it with a timestamp value.
  16. In case of failure, the file gets moved to a failed folder.
  17. Transformation and data validation are also handled.
  18. The connector can run on the OIM machine or on a machine where a Connector Server is running, which gives you additional flexibility in where all the processing happens.
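
To make points 2, 3, and 14 concrete, here is a minimal sketch of what the schema file could look like. I am writing the property names from memory, so treat them as illustrative and verify them against the connector documentation:

    # Schema.properties -- illustrative sketch only; verify the exact
    # property names against the connector documentation.
    fieldNames=AccountId,First Name,Last Name,Email,Joining Date,Groups
    uidAttribute=AccountId
    nameAttribute=AccountId
    dateAttributes=Joining Date
    dateFormat=yyyy/MM/dd
    multiValuedAttributes=Groups

Adding a new column at runtime (point 14) then amounts to appending it here, updating the corresponding lookups, and supplying the values in the data file.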
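
And a matching sample data file, assuming a comma delimiter, " as the text qualifier, and # as the multivalued separator (all configurable, per point 8). Note that the header can contain spaces without quoting (point 10), while a data value with spaces is double quoted (point 9):

    AccountId,First Name,Last Name,Email,Joining Date,Groups
    jdoe,John,Doe,jdoe@example.com,2015/01/15,Admins#Operators
    asmith,Anna,"Smith Jones",asmith@example.com,2014/11/02,Users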
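
For point 6, the connector defines its own parser extension point, which I won't reproduce from memory; the sketch below only shows the kind of parsing logic a custom fixed-width parser might wrap. All class and method names here are my own illustration, not the connector's API:

    import java.io.BufferedReader;
    import java.io.File;
    import java.io.FileReader;
    import java.util.ArrayList;
    import java.util.LinkedHashMap;
    import java.util.List;
    import java.util.Map;

    // Illustrative fixed-width parsing logic. The actual connector parser
    // interface and signature should be taken from the connector javadocs.
    public class FixedWidthParser {

        // Field name -> column width, in record order.
        private final Map<String, Integer> layout = new LinkedHashMap<>();

        public FixedWidthParser() {
            layout.put("AccountId", 10);
            layout.put("FirstName", 20);
            layout.put("LastName", 20);
        }

        // Parse each line of the file into a field-name -> value map.
        public List<Map<String, String>> parse(File dataFile) throws Exception {
            List<Map<String, String>> records = new ArrayList<>();
            try (BufferedReader reader = new BufferedReader(new FileReader(dataFile))) {
                String line;
                while ((line = reader.readLine()) != null) {
                    Map<String, String> record = new LinkedHashMap<>();
                    int pos = 0;
                    for (Map.Entry<String, Integer> field : layout.entrySet()) {
                        int end = Math.min(pos + field.getValue(), line.length());
                        record.put(field.getKey(), line.substring(pos, end).trim());
                        pos = end;
                    }
                    records.add(record);
                }
            }
            return records;
        }
    }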
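
For point 11, how a handler gets registered is again connector-specific, but the body of a postprocess task that zips a processed file with a timestamp (the same behavior archiving gives you in point 15) is plain java.util.zip. The class below is my own illustration, not a connector API:

    import java.io.FileInputStream;
    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.util.zip.ZipEntry;
    import java.util.zip.ZipOutputStream;

    // Illustrative postprocess step: zip a processed flat file with a
    // timestamp suffix. Registering this as a postprocess handler is
    // connector-specific; consult the connector documentation.
    public class ZipProcessedFile {

        public static Path zipWithTimestamp(Path source) throws IOException {
            Path target = Paths.get(source + "_" + System.currentTimeMillis() + ".zip");
            try (ZipOutputStream zos = new ZipOutputStream(new FileOutputStream(target.toFile()));
                 FileInputStream fis = new FileInputStream(source.toFile())) {
                zos.putNextEntry(new ZipEntry(source.getFileName().toString()));
                byte[] buffer = new byte[8192];
                int read;
                while ((read = fis.read(buffer)) != -1) {
                    zos.write(buffer, 0, read);
                }
                zos.closeEntry();
            }
            return target;
        }
    }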