Data doesn’t lie, but in some circumstances it can misrepresent the truth.

 

At ITeM Group, we see this happening all too frequently, which keeps us busy but costs you time and productivity. In what scenario could data possibly lose integrity and credibility? Let’s look at a typical business that relies on spreadsheets, created and then controlled by a single person, to store and organise data.

 

But what happens to that process if the delegated staff member leaves the business and that IP walks out the door with them? This break in procedural continuity is one of the reasons data is often cut and pasted from several disparate sources, sometimes becoming corrupted or lost along the way.

 

There are so many bad things that can happen to data (and they often do) when it is drawn from different sources and left unfiltered once it has been recorded and reported.

 

That’s a polite way of describing data dumping. At any given time, data can be pulled from several sources and then used to create:

 

  • Financial reporting, whose imperatives dictate frequency, extreme accuracy and timeliness.
  • Run-time reporting, the ‘how is everything going’ temperature check that is frequently taken in manufacturing, logistics and construction.
  • Business unit reporting, where data is consolidated from a number of brands and fed upwards to a parent company.
  • Cross-functional reporting, where data in two departments can have different definitions, such as sales revenue versus cost.

 

Even when the capturing and reporting of data is seamless, sloppy rules and definitions lead to different interpretations, causing an erosion of trust in the reported numbers.

 

It’s not a case of simply letting ‘the facts’ or the data sets speak for themselves.

 

Sometimes they can’t even speak. When different software platforms try to interact, their inherent ‘differences’ mean they are unable to talk to each other, so the data flow simply stops. When ITeM Group is engaged to resolve this blockage, its consultants employ the company’s proprietary ‘Master Data Management’ (MDM) protocols.

 

MDM maps data between software platforms. It allows users to securely and correctly manage their data outside the core application or platform. Imagine a ‘bucket’ where all your data is stored; the contents are then poured out to the different software applications that need them, and the flow is seamless and unobstructed.
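
To make the ‘bucket’ idea concrete, here is a minimal sketch in Python of one master record being mapped out to the field names each downstream application expects. The application names, field names and mapping table are invented for illustration only; they are not ITeM Group’s actual MDM protocols.

    # A single canonical record: the trusted version of the data in the 'bucket'.
    master_customer = {
        "customer_id": "C-1042",
        "legal_name": "Acme Holdings Pty Ltd",
        "abn": "12 345 678 901",
        "billing_email": "accounts@acme.example",
    }

    # Each target application declares how canonical fields map to its own schema.
    # (Hypothetical applications and field names, for illustration.)
    FIELD_MAPPINGS = {
        "finance_app": {"customer_id": "DebtorCode", "legal_name": "DebtorName", "abn": "TaxNumber"},
        "crm_app": {"customer_id": "AccountId", "legal_name": "AccountName", "billing_email": "PrimaryEmail"},
    }

    def pour(record, target):
        """Translate the canonical record into the shape the target application expects."""
        mapping = FIELD_MAPPINGS[target]
        return {target_field: record[source_field] for source_field, target_field in mapping.items()}

    print(pour(master_customer, "finance_app"))
    # {'DebtorCode': 'C-1042', 'DebtorName': 'Acme Holdings Pty Ltd', 'TaxNumber': '12 345 678 901'}
    print(pour(master_customer, "crm_app"))

Because every application draws from the same canonical record, a change made in the ‘bucket’ flows consistently to every destination, rather than being re-keyed in each system.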

 

The embedding of MDM ensures that the data being displayed can be trusted, building confidence in those making critical business decisions. It also gives data context, because data without context is just meaningless numbers.

 

ITeM Group can take data from any source and display it in easy-to-use charts, graphs or reports, most commonly in a business intelligence platform such as Microsoft Power BI.

 

These are typically interactive, so that users can dig deeper into the data on a self-serve basis (with security and user profiles in place). Reporting can be static and automated (for example, regular PDFs emailed to specific users) or interactive and on-demand (so users can keep their finger on the pulse).
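
As a simple illustration of the static, automated path, the sketch below uses the widely available pandas and matplotlib Python libraries to aggregate a small dataset and save it as a PDF chart that a scheduler could email to specific users. The data, column names and file name are invented; the interactive path would instead point a BI tool such as Power BI at the same underlying model.

    import pandas as pd
    import matplotlib.pyplot as plt

    # Invented sample data standing in for a real source system.
    sales = pd.DataFrame({
        "month": ["Jan", "Feb", "Mar", "Apr"],
        "revenue": [120_000, 135_000, 128_000, 142_000],
    })

    # Aggregate / shape the data, then render and save a static chart.
    summary = sales.set_index("month")["revenue"]
    fig, ax = plt.subplots()
    summary.plot(kind="bar", ax=ax, title="Monthly revenue")
    ax.set_ylabel("Revenue (AUD)")
    fig.savefig("monthly_revenue.pdf")  # hand this file to the email scheduler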

 

ITeM Group’s approach is to build the data models (which drive the reporting output) outside of the BI platform itself, meaning the client can change BI platforms (for example, from Yellowfin to Power BI) quite easily.
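
One way to picture this decoupling is sketched below in Python: the business logic lives in ordinary code outside the BI tool, and the result is published to a neutral format (here a CSV file) that Yellowfin, Power BI or any other platform can connect to. The table names, columns and join logic are illustrative assumptions, not ITeM Group’s actual build.

    import pandas as pd

    def build_sales_model(orders, customers):
        """Apply the agreed business definitions once, centrally, outside any BI tool."""
        model = orders.merge(customers, on="customer_id", how="left")
        model["net_revenue"] = model["gross_amount"] - model["discount"]
        return model[["customer_id", "customer_name", "order_date", "net_revenue"]]

    # Invented sample tables standing in for the real source systems.
    orders = pd.DataFrame({
        "customer_id": [1, 2],
        "order_date": ["2024-05-01", "2024-05-03"],
        "gross_amount": [500.0, 800.0],
        "discount": [50.0, 0.0],
    })
    customers = pd.DataFrame({"customer_id": [1, 2], "customer_name": ["Acme", "Globex"]})

    # Publish to a neutral file; the BI front end can be swapped without touching the model.
    build_sales_model(orders, customers).to_csv("sales_model.csv", index=False)

Because the model, not the dashboard, owns the definitions, swapping the visualisation layer becomes a re-pointing exercise rather than a rebuild.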

 

The alternative model, which ITeM Group does not recommend, traps the user inside a particular platform. ITeM Group instead builds its models for flexibility, so that internal IT teams can build out the dashboards themselves if they have the skills, creating a stronger culture of autonomy.

 

ITeM Group doesn’t push a specific platform, but we do have strong expertise with Power BI, which is cementing its reputation as the market leader because its licences are competitively priced and easy to manage within the Microsoft framework.

 

Connect with Michael Woodruff to learn more: michael.woodruff@itemgroup.com.au