
Why Turning Raw Data into Good Information is Important

In this digital age, we are surrounded by data. It’s everywhere, collected by all sorts of systems and stored in all sorts of places. However, it is easy to overlook a fundamental principle when planning work – ensure you have good information, not just a whole heap of data.

Quinticon helps customers with all forms of initiatives, and an important focus is finding the right information to use when planning and executing projects. Using the example of a server migration initiative, here are some considerations we find key to transforming the initial seed of data into genuinely useful information.

Keep a record of your data sources: typically, the data required to plan and execute a server migration will come from a variety of sources such as a CMDB, extracts from tools, operational reports, scanners and design documentation. It is vital to keep track of where the data came from and any transformations required to make it usable for your purposes; this helps you determine the best inputs, see overlaps and identify gaps.
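One lightweight way to keep that record is alongside the data itself. The sketch below, in Python, shows a minimal source register; the source names, dates and transformation notes are illustrative, not from any real migration:

```python
from dataclasses import dataclass, field

@dataclass
class DataSource:
    """One entry in a simple source register."""
    name: str                 # e.g. "CMDB export" (hypothetical)
    origin: str               # where/how the extract was obtained
    extracted_on: str         # date of the extract
    transformations: list = field(default_factory=list)  # steps applied to make it usable

# Record each source and every transformation applied to it
cmdb = DataSource("CMDB export", "ServiceNow weekly report", "2024-05-01")
cmdb.transformations.append("dropped decommissioned servers")
cmdb.transformations.append("lower-cased hostnames")
```

Even a register this simple makes it easy to answer, months later, which extract a figure came from and what was done to it.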

Understand the format of your data: having lots of different sources means that sometimes the data may not line up as easily as you would like. Some servers may have multiple roles or multiple names that need to be accommodated; different extracts won’t always list data fields in the same order; spelling changes/typos can make it hard to match everything. Not understanding what you see can lead to misalignment of key grouping metrics or duplication of entries.
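Two of the format problems above, columns appearing in a different order and hostnames written inconsistently, can be handled by reading fields by name rather than position and matching on a canonical key. A minimal Python sketch, with invented hostnames and columns:

```python
import csv
import io

def load_servers(csv_text):
    """Read an extract by column *name*, so field-order differences don't matter."""
    return list(csv.DictReader(io.StringIO(csv_text)))

def match_key(hostname):
    """Canonical key for matching: lower-case, strip whitespace and domain suffix."""
    return hostname.strip().lower().split(".")[0]

# Two hypothetical extracts: different column order, different casing
extract_a = "Hostname,Role\nWEB01.corp.local,web\ndb01,database\n"
extract_b = "Role,Hostname\nweb,web01\n"

a = {match_key(r["Hostname"]): r for r in load_servers(extract_a)}
b = {match_key(r["Hostname"]): r for r in load_servers(extract_b)}
overlap = a.keys() & b.keys()   # servers present in both extracts
```

Without the canonical key, "WEB01.corp.local" and "web01" would count as two servers and duplicate a migration entry.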

Watch out for changing naming conventions: systems and applications may have many names across the different data sources. The vendor's name may have changed over time, the application could have been sold to a different vendor, or the application was first known by something other than its current name. Names can also vary across multiple environments like Development, Test, System Integration Testing, User Acceptance Testing etc., including potentially multiple environments of each. Paying attention here can help avoid duplicated activity, or even misalignment in migrated systems such as backups.
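One common way to tame this is an alias map that resolves every known historical or per-environment name to a single canonical one. A small Python sketch; the application and vendor names here are entirely made up:

```python
# Hypothetical alias map: every known historical/environment name
# resolves to one canonical application name
ALIASES = {
    "finapp": "FinanceSuite",            # original product name
    "acme finance": "FinanceSuite",      # name before the vendor was acquired
    "financesuite-uat": "FinanceSuite",  # UAT environment instance
    "financesuite-dev2": "FinanceSuite", # second Development environment
}

def canonical_app(name):
    """Resolve a raw name to its canonical form, or return it unchanged for review."""
    key = name.strip().lower()
    return ALIASES.get(key, name)
```

Anything the map does not recognise comes back unchanged, which makes unresolved names easy to spot and add to the map rather than silently becoming a second entry in the plan.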

Ensure repeatability of extraction and collection: it is common to have to draw from a data source multiple times; even after testing phases, refreshes and updates will need to be applied to your planning. Clearly documenting the process for extracting the data from each source, and any post-processing work, is vital. Save the queries where possible, especially where other teams provide you with the data, to avoid the columns coming out in a different order on the next refresh. Have a system to keep track of all the different information you collect and how it fits together.
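Saving the query and checking each refresh against it can be as simple as the Python sketch below; the query text and column names are placeholders for whatever your CMDB actually exposes:

```python
# Saved alongside the plan so the extract can be re-run identically each refresh
CMDB_QUERY = """
SELECT hostname, os, environment, owner
FROM servers
WHERE status = 'active'
ORDER BY hostname
"""

# The column order the rest of the planning tooling expects
EXPECTED_COLUMNS = ["hostname", "os", "environment", "owner"]

def validate_extract(header_row):
    """Fail fast if a refreshed extract comes back with different columns."""
    if header_row != EXPECTED_COLUMNS:
        raise ValueError(f"Unexpected columns: {header_row}")
    return True
```

Failing loudly at load time is far cheaper than discovering mid-migration that a refreshed extract quietly swapped two columns.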

Normalise the data: an important step in creating an efficient and reliable migration plan is to normalise the data. Give thought to the best approach for normalising the data and making it meaningful, whether through spreadsheet formulas or code written in Perl, PowerShell or Python etc. – go the extra step after sourcing data to get it into a usable form.
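As a taste of what that code route can look like, here is a minimal Python sketch that normalises the many ways an operating system might be recorded across sources into one canonical value; the patterns and names are illustrative only:

```python
import re

# Hypothetical mapping from the many raw forms an OS name takes to one canonical value
OS_PATTERNS = [
    (re.compile(r"win(dows)?\s*(server)?\s*2019", re.I), "Windows Server 2019"),
    (re.compile(r"rhel\s*8|red\s*hat.*8", re.I), "RHEL 8"),
]

def normalise_os(raw):
    """Return the canonical OS name, or the raw value unchanged for manual review."""
    for pattern, canonical in OS_PATTERNS:
        if pattern.search(raw):
            return canonical
    return raw
```

The same pattern-plus-fallback shape works for site codes, owner names or environment labels; whatever falls through untouched is your worklist of values still needing a rule.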

Validated data made into reliable information facilitates successful initiatives

Hopefully this helps you think through some of the areas that deserve attention before you jump into your next complex project; thorough discovery, assessment and planning are key elements of the Quinticon approach.
