The importance of the Agnostic/Non-branded Informational Architecture – Part II


What should be the main drivers of an agnostic/non-branded Informational Architecture? Rather than just listing and detailing the drivers, my focus is to open a discussion about them.

1 – Non-disruptive/Minimal disruption

Even among companies where the informational architecture is not formalized, it will be very hard to find a mid-size or large company with zero informational initiatives/investments. So the starting point must always be to be as little disruptive as possible, and the less technology-specific the informational architecture is conceived to be, the greater the chances of mitigating disruption.

At the same time, the Informational Architecture must focus on TCO reduction, and as the IT world moves in the SaaS (Software as a Service) and PaaS (Platform as a Service) direction, a SaaS/PaaS-based Information Architecture tends to be extremely disruptive to implement, considering the importance of historical data to it. It is important, then, to recognize that an architecture is not economically consistent if it only reduces TCO when it is SaaS/PaaS-centered.

But it is not only regarding infrastructure that disruption should be avoided. Remember, we are in the eye of the hurricane, with users having almost everything available at their fingertips at home while having to wait 5-15 minutes for a report to run. In such a scenario, buying and implementing a CSV/Excel-based SaaS is extremely tempting, and almost all Data Viz tool vendors offer one, but for that you don’t need an informational architecture. So your architecture can allow for UI changes, but it should not be based on such changes; it cannot be a Petit Gâteau, beautiful, delicious, and melting at the first touch of the spoon. So, if it is not possible to avoid disruption, keep it to a minimum.

2 – Multi-source enabled

It may look a little obvious that an informational architecture must be ready for any source system, but nowadays this is not enough: the Big Data era brought non-system-produced data, or unstructured and semi-structured data, to use the experts’ terms, to the table.

And here goes my first golden piece of advice for your design: you don’t have to define one solution for all sources. Although your logical model will need to be physically deployed, it can comprise more than one physical system landscape solution. Obviously, each solution will have its technicalities and specific governance peculiarities, but the different solutions only need to be complementary.

Any advice can lead to a bad decision if taken unwisely, and this one is no exception. So, let me be crystal clear here: I am not recommending having a Frankenstein in well-polished, shining armor. Identify your sources, classify them, choose one solution for the non-system-produced data and another for the system-produced data, and voilà, your Data Lake is conceived (see the sketch below).
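As a rough illustration of that classification step, the sketch below routes each source to one of two landing solutions based on whether its data is system-produced. The source names and landing-zone labels are hypothetical; the point is simply that many sources map to only two solutions.

```python
# Hypothetical sketch: routing sources to one of two Data Lake solutions,
# based on whether the data is system-produced (structured) or not.

SOURCES = {
    "erp_orders":     "structured",       # system-produced
    "crm_contacts":   "structured",       # system-produced
    "support_emails": "unstructured",     # non-system-produced
    "sensor_streams": "semi-structured",  # non-system-produced
}

# One solution per class of data, not one per source (illustrative names).
LANDING_ZONES = {
    "structured":      "relational_staging",  # e.g., an RDBMS-based landing area
    "semi-structured": "object_store_raw",    # e.g., a file/object store
    "unstructured":    "object_store_raw",
}

def landing_zone(source: str) -> str:
    """Return the landing solution for a source, by data classification."""
    return LANDING_ZONES[SOURCES[source]]

for src in SOURCES:
    print(f"{src} -> {landing_zone(src)}")
```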

3 – Multi-granularity capable

Since your Informational Architecture must necessarily support solutions for all audiences, this awful and confusing DW/BI concept shall haunt you in all technical meetings. Basically, a high level of granularity means having the information more subdivided, in smaller parts, while a low level of granularity means having the information in more aggregated/summarized form (the sketch below illustrates the difference).
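As a minimal illustration, assuming a toy sales dataset, the same information can be served at order-line granularity for the consumers who need detail and aggregated per month for dashboards:

```python
import pandas as pd

# Toy data at the highest granularity: one row per order line (hypothetical).
lines = pd.DataFrame({
    "order_id": [1, 1, 2, 3],
    "month":    ["2018-01", "2018-01", "2018-01", "2018-02"],
    "product":  ["A", "B", "A", "A"],
    "amount":   [100.0, 40.0, 70.0, 90.0],
})

# Low granularity: the same information aggregated per month.
monthly = lines.groupby("month", as_index=False)["amount"].sum()

print(lines)    # fine-grained view, for analysts who need the detail
print(monthly)  # summarized view, for executive dashboards
```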

The best approach is for your Informational Architecture to be able to provide all levels of granularity to the different types of consumers. Here, I have to make clear that I’m not talking about the rawness of the data; that has been left behind in the Data Lake. This is about providing information, and for the whole corporation, not only “the techies” or “the geeks” of each department.

So your Informational Architecture should have room for harmonized/cleansed data that can be consumed directly or be part of a Data Vault, Data Mart, or EDW implementation.

4 – No obligatory data persistence

As data footprint reduction should always be in the Information Architect’s mindset, and as in-memory technology and the SOA approach are more and more a reality for IT, it is extremely important that your architecture is not centered on data replication or data persistence, but on data consumption and a cold-warm-hot data classification policy (roughly sketched below).
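A cold-warm-hot policy can be as simple as classifying datasets by how recently they were consumed. The thresholds below are assumptions for illustration; a real policy would be tuned to your consumption patterns and storage costs.

```python
from datetime import date, timedelta
from typing import Optional

# Hypothetical thresholds: adjust to your own consumption patterns.
HOT_DAYS = 30    # accessed in the last month -> keep hot (in-memory/fast)
WARM_DAYS = 365  # accessed in the last year  -> keep warm (standard storage)

def temperature(last_accessed: date, today: Optional[date] = None) -> str:
    """Classify a dataset by how recently it was consumed."""
    today = today or date.today()
    age_days = (today - last_accessed).days
    if age_days <= HOT_DAYS:
        return "hot"
    if age_days <= WARM_DAYS:
        return "warm"
    return "cold"  # archive, or drop persistence and rebuild on demand

print(temperature(date.today() - timedelta(days=10)))   # hot
print(temperature(date.today() - timedelta(days=400)))  # cold
```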

So, for example, if data from an ERP is SOA-exposed and no complex treatment is required (let’s say you only need some rank, percentage, and threshold-based analysis), it is most likely you can provide this without any physical data repository, as the sketch below suggests.
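A minimal sketch of that idea, assuming a hypothetical SOA endpoint exposing sales per region: everything is fetched and computed in memory, and nothing is persisted.

```python
import requests

# Hypothetical endpoint and payload shape, for illustration only.
resp = requests.get("https://erp.example.com/api/sales/by-region")
resp.raise_for_status()
rows = resp.json()  # e.g., [{"region": "South", "sales": 120.0}, ...]

total = sum(r["sales"] for r in rows)
ranked = sorted(rows, key=lambda r: r["sales"], reverse=True)

THRESHOLD = 0.25  # assumed rule: flag regions above 25% of total sales
for rank, r in enumerate(ranked, start=1):
    share = r["sales"] / total
    flag = "ALERT" if share > THRESHOLD else "ok"
    print(f"{rank}. {r['region']}: {share:.1%} {flag}")
```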

5 – Safe, simple and flexible governance

While end-users pray for independence from IT in their reporting activities, IT must answer those prayers by forwarding some governance accountability to the end-users.

While the profiles and rules must be created in a four-hands job, the day-to-day governance of the Data Viz tools shall be the power users’ responsibility.

So the power user assumes a huge role in this model, becoming the owner not only of the information but of the informational process end-to-end. Of course, I am not saying the power user will become an IT professional responsible for everything (ETL maps, jobs, source connections, etc.), but as the owner, they will be the one who knows the critical path of the information flows. Although this accountability is not much different from the one they already hold over that automated Excel spreadsheet with 19 sheets and 47 macros, now the information constitutes a process in itself.

6 – Allow different tools/solutions for the same layer

I assume that by now you have understood that a proper Informational Architecture will have a multi-layer approach, and of course that the purpose of an agnostic model is not to be attached to any technology.

On the other hand, as I stated loud and clear in item 2, being agnostic does not mean having dozens of technology providers. You can choose the best-of-breed for each layer’s purposes, but you can also choose a central binding solution and use the best-of-breed approach for spot cases. And here goes the second golden piece of advice: this decision must be made by looking at the expertise level available in ETL tools, as well as how stable your ETL operation is.

7 – Enabled to use any end-user viz tool

For me, visualization tools are meant to be chosen like your tablet, your watch, or your beer: it is a matter of end-user preference. But this can only come close to the reality of the corporate world under certain circumstances.

First of all, data life-cycle management must be mature, and power users and data scientists/analysts must be expert enough in the tools not to require IT support.

Second, if this is to be the approach, users must be aware that IT will only give support for infrastructure issues, such as ensuring the required drivers are properly installed and not corrupted.

They must also be aware that if they want to use the web capabilities these tools offer, they will need to opt for some SaaS in the cloud and own some level of its management; otherwise, they will need to stick with the client version’s capabilities.

Alternatively, the corporation adopts one central tool with IT providing support, but allows client versions with IT support similar to the one provided for MS-Office tools.

But here it is important not to let the TCO reduction slip off the radar, and under no circumstances should the design start from solving the issues of visualization tools adopted without a proper architecture; those issues must be only one more input.

In the end, it is the visualization tool that will represent your Informational Architecture, so the end-user must like it and trust it. And here goes the last golden piece of advice: stress-test the visualization tools’ connectors and consider complementary connector suppliers; many times a third-party connector is better than the one offered for free with the tool. A rough timing harness like the one below can help with that stress test.
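A minimal sketch of such a harness, assuming you wrap each connector in a callable that runs a representative query; the two connector functions are placeholders to be wired to real connectors.

```python
import time
import statistics

def stress(fetch, runs=20):
    """Time repeated fetches through a connector callable (a rough probe,
    not a full load test)."""
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        fetch()  # run a representative query and consume the rows
        timings.append(time.perf_counter() - start)
    return statistics.median(timings), max(timings)

# Placeholder callables: wire these to the tool's native connector and to
# a candidate third-party connector.
def native_connector_fetch(): ...
def third_party_connector_fetch(): ...

for name, fetch in [("native", native_connector_fetch),
                    ("3rd-party", third_party_connector_fetch)]:
    median, worst = stress(fetch)
    print(f"{name}: median {median:.3f}s, worst {worst:.3f}s")
```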
