Data Quality Index Consultation - sub-section Overall Questions
Instructions for submitting your feedback
- Feel free to share your overall feedback on the Data Quality Index Consultation through the comment box below.
- Consider the guiding questions as outlined.
Guiding questions:
- If you could use only five words, how would you describe good quality IATI data?
- Do you have suggestions on ways of using the Data Quality Index to incentivise better quality publishing and ensure increased attention by publishers on the transparency of aid data?
- Have we missed any important measures in the DQI? If so, what is your additional proposal?
- Should an overall summary index measure be added? If so, what weight should be given to each measure? Or should each measure be left as a separate assessment?
Data successfully used by others.
Publish the DQI in newsletters and other IATI publications.
Have we missed any important measures in the DQI? If so, what is your additional proposal?
Regarding the availability of data, it is not enough that data have been published. Equally important is to assess that the published data are valid: e.g. only existing codes are used and references are only made to existing identifiers.
Should an overall summary index measure be added? If so, what weight should be given to each measure? Or should each measure be left as a separate assessment?
Yes, with emphasis on publishing valid, usable data. Weight should be given to elements relevant to network transparency and to completeness of data, taking into account the role a publisher has in the network.
Data useful to, and used by, others and yourself.
- Make the DQI a dashboard.
- Offer recognition for degrees of quality in various categories.
- Allow publishers to use this recognition in their communications in an agreed way, with the IATI logo.
The length of trails and the last mile matter in terms of quality. Assessing the effort put into helping the next peer publishers 'down the trail' could be a missing measure. How many recipient organisations publish in IATI?
Should an overall summary index measure be added? If so, what weight should be given to each measure? Or should each measure be left as a separate assessment?
Suggest calculating an average of the measure rankings, without putting too much emphasis on it; the individual scores per measure are most important.
Hi - my experience has been that what counts as 'good quality data' varies so much based on what you are trying to use it for that you probably need a different index/measure for each use case. E.g. if I want to know 'how much aid went to Mali in 2020', then there are many fields I am not bothered about, but comprehensiveness, use of the org_ids needed to avoid double counting, not deleting completed activities, etc. are very important. That is completely different for other use cases, which would prioritise other fields. I think we need publishers to say 'I seek to meet use case X' so we can say: OK, then look at index Y, which specifically targets that use case.
A measure I would consider adding:
Great to see the validator will be used. Is there somewhere in the SSOT etc. that explains what a 'warning' vs an 'error' is, and why, so that publishers are able to understand the feedback? Otherwise these terms seem rather arbitrary.
Would agree with several of Matt's suggestions, especially regarding the basic data failure checks, and also the complexity of defining data quality - it really does depend on the use case one has in mind. It's almost like we need a big interactive dashboard to cover them adequately:
- press "Use Case A", a sub-set of measures adds up to provide a publisher's score in meeting it
- press "Use Case B", another sub-set (which includes the same mandatory elements but also some different measures) adds up and provides the relevant score (which may be completely different!);
- press "Humanitarian" or "Grand Bargain Use Case", the Grand Bargain elements add up to calculate the score.
Granted, it would be hard to come up with a single, overall score for each publisher, but it would provide more useful guidance to data users.
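The per-use-case scoring idea above could be sketched roughly as follows. This is only an illustration of the mechanism, not a proposed implementation: the use-case names, measure names and scores are all invented, and each use case simply averages its own subset of measures (sharing some mandatory ones).

```python
# Hypothetical sketch: each use case selects its own subset of measures,
# and a publisher's score is computed only over that subset.
# All names and numbers below are invented for illustration.

USE_CASES = {
    "Use Case A": {"identification", "dates", "sectors"},
    "Humanitarian": {"identification", "dates", "humanitarian_scope"},
}

def use_case_score(measure_scores, use_case):
    """Average a publisher's scores over the measures relevant to one use case."""
    relevant = USE_CASES[use_case]
    picked = [measure_scores[m] for m in relevant if m in measure_scores]
    return sum(picked) / len(picked)

publisher = {"identification": 100, "dates": 50, "sectors": 80, "humanitarian_scope": 20}
print(round(use_case_score(publisher, "Use Case A")))    # 77
print(round(use_case_score(publisher, "Humanitarian")))  # 57
```

The same publisher gets a very different score per use case, which is exactly the point made above.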
Hello, everyone. Here is the ILO's feedback:
If you could use only five words, how would you describe good quality IATI data?
1. comprehensive
2. truthful/reliable
3. understandable
4. accessible
5. relevant
Have we missed any important measures in the DQI? If so, what is your additional proposal?
We do not have any additional proposals
Should an overall summary index measure be added? If so, what weight should be given to each measure? Or should each measure be left as a separate assessment?
It would be good to have an overall summary index. The weight for each measure should be discussed separately. In our view, the coverage, thematic focus of activities, participating organisation information and timeliness of updates should be prioritized, together with the financial information.
If you could use only five words, how would you describe good quality IATI data?
Accurate, Lean, Relevant, Purpose-driven
Do you have suggestions on ways of using the Data Quality Index to incentivise better quality publishing and ensure increased attention by publishers on the transparency of aid data?
The statistics board could be made more attractive by adding a bit of gamification.
E.g. using the DQI to rank the top 5 contributors of the month, awarding badges or rewards, etc.
Have we missed any important measures in the DQI? If so, what is your additional proposal?
Right now the IATI Validator validates the schema but not the data itself. I read about the additional validation checks performed, but if the quality of data is to be optimised, it is really important to validate that all the data entered comes from the expected options. For example: a sector code that does not exist in the referenced DAC sector codelist lowers the quality, so validating that information would help.
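The kind of codelist check described here can be sketched as follows. The codelist subset and the activity records are illustrative assumptions, not real IATI data or Validator behaviour:

```python
# Hypothetical sketch of a codelist check: flag any activity whose sector
# code does not appear in the DAC sector codelist. The codes and activity
# records below are invented for illustration.

DAC_SECTOR_CODES = {"11110", "12220", "15110", "31120", "72010"}  # tiny illustrative subset

def invalid_sector_codes(activities):
    """Return (activity id, code) pairs whose sector code is not in the codelist."""
    return [
        (a["iati_identifier"], a["sector_code"])
        for a in activities
        if a["sector_code"] not in DAC_SECTOR_CODES
    ]

activities = [
    {"iati_identifier": "XM-EX-1", "sector_code": "11110"},
    {"iati_identifier": "XM-EX-2", "sector_code": "99999"},  # not a DAC code
]

print(invalid_sector_codes(activities))  # [('XM-EX-2', '99999')]
```

Both files in this example would pass a pure schema check; only the content-level check catches the second one.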
Comprehensive, timely, frequent and qualitative
No; rather than adding measures, we should refine the way we measure the existing KPIs.
Yes, an overall summary would be good, with focus on the comprehensiveness (mandatory and recommended fields), timeliness, frequency and valid financials
Please find below the feedback from the Commission (FPI, DG NEAR, ECHO and INTPA).
If you could use only five words, how would you describe good quality IATI data?
Do you have suggestions on ways of using the Data Quality Index to incentivise better quality publishing and ensure increased attention by publishers on the transparency of aid data?
Have we missed any important measures in the DQI? If so, what is your additional proposal?
Should an overall summary index measure be added? If so, what weight should be given to each measure? Or should each measure be left as a separate assessment?
I will be the voice of dissension because I don't believe scores should be given to any of this. It often provides a false sense of quality. I've seen data from high-scoring publishers that is not quality data (made-up publisher references), but they get full credit because the field is not blank. The statistics don't capture the gender policy marker in spite of the importance many data users and donors put on that item. Until there is a way to look at the data with the nuances mentioned above (type of publisher, GB/humanitarian publishers, etc.), weights and scores will never be accurate.
Whether data is usable, comprehensive, truthful/reliable and accessible can't be measured by the current scoring system. Matt's suggestion on capturing where data fails is more important than the current scoring mechanism.
Comprehensive, accurate, detailed, readable and standardised.
We'd be very happy to share our experiences of driving improvements in quality/availability of data.
Human readability of text data fields and whether these make sense and meet the definitions outlined in the Standard.
Review of the quality of documents, including: checking that the correct documents are included under the corresponding codes, checking document dates, relevance to activities, and whether they contain the relevant information for the document type.
This depends on what IATI would intend to do with the measure and what the agreed approach is to question 2 (on incentives and attention). For the Aid Transparency Index we produce an Index score with weightings for the 35 indicators that make up the total score. We base the weighting of the indicators on feedback from stakeholders regarding what information is most important for them. This includes carrying out surveys with civil society, researchers, technical experts and other data users.
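A weighted summary index of the kind described here can be sketched as a simple weighted average of per-measure scores. The measure names, weights and scores below are invented for illustration; in practice, as noted above, the weights would come from stakeholder feedback:

```python
# Illustrative sketch of an overall summary index: each measure gets a
# 0-100 score and a stakeholder-derived weight, and the overall index is
# the weighted average. All names and numbers are hypothetical.

def summary_index(scores, weights):
    """Weighted average of per-measure scores; weights need not sum to 1."""
    total_weight = sum(weights[m] for m in scores)
    return sum(scores[m] * weights[m] for m in scores) / total_weight

weights = {"timeliness": 0.3, "comprehensiveness": 0.5, "validity": 0.2}
scores = {"timeliness": 80, "comprehensiveness": 60, "validity": 90}

print(round(summary_index(scores, weights), 1))  # 72.0
```

Changing the weights changes the ranking of publishers, which is why the weighting question matters as much as the measures themselves.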
Data that can be successfully used by (many) others, not just a single application.
Rich data: covering many elements, not only financials but also narratives, documents and results.
Publishers could be automatically notified of their DQI 'score'. Currently, many publishers are not even aware when they have published a file that doesn't validate or cannot be retrieved.
Have we missed any important measures in the DQI? If so, what is your additional proposal?
Matt made several good suggestions
Should an overall summary index measure be added? If so, what weight should be given to each measure? Or should each measure be left as a separate assessment?