As a J40 team member, I want to know how our initial score compares to other industry indicators/scores.
Dependency:
- Running this tool with actual data requires a current version of the CEJST score to exist.
Definition of done:
- We've defined which indices/data sets we will initially use for comparison against the current version of the CEJST score
- We've loaded data from those comparison indices/data sets
- We have a script or Notebook that creates a report comparing the current version of the CEJST to various indices/data sets
Demo of Done:
- Prepare a report for internal use that presents the results of comparing the current version of the CEJST score to the various indices/data sets, for sharing with stakeholders.
Not doing:
- Not trying to plan or build cloud infrastructure, i.e. it runs locally.
Tickets to be created:
Below this line are details that will be moved to other tickets shortly.
Description
As we develop a score, we want to understand how our score deviates from, or stays consistent with, other scores used by communities, researchers, and policymakers.
Solution
Initially we want to spike on what's available and possible for an automated comparison tool. We want to get to a place where we have a card in the backlog addressing the following:
It would be ideal to have an open source script that we can run to compare a given census block group score calculated by our tool with the score for that group from another tool or scoring system. This script would have documentation so that open source community members can reuse it, and we could write updates about this on our website. Ideally, we would incorporate this into the tool somehow so that users can also see how our score compares to other scores they may be using. That would be a future user story based on further user research.
Ideally, the tool would generate an automated report each time it's run, so that we can continue to use this each time we generate an updated score.
The report can compare two binary scores, such as "the top 25% of SVI communities" compared to "the top 25% of our custom score", and generate a list of census block groups that are in the former but not the latter, as well as the reverse (a list of CBGs that are in the latter but not the former).
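A minimal sketch of this binary comparison, assuming each score is available as a table keyed by a CBG identifier (the column names `GEOID`, `svi_top25`, and `cejst_top25` are hypothetical placeholders, not actual field names from either tool):

```python
import pandas as pd

# Hypothetical inputs: one row per census block group, with a boolean flag
# marking whether the CBG is in the top 25% of that score.
svi = pd.DataFrame({"GEOID": ["A", "B", "C"], "svi_top25": [True, True, False]})
cejst = pd.DataFrame({"GEOID": ["A", "B", "C"], "cejst_top25": [False, True, True]})

# Outer-join on the CBG ID so CBGs missing from one source are kept;
# treat a missing flag as "not in the top 25%".
merged = svi.merge(cejst, on="GEOID", how="outer").fillna(False)

# CBGs flagged by SVI but not by our score, and vice versa.
in_svi_only = merged.loc[merged.svi_top25 & ~merged.cejst_top25, "GEOID"].tolist()
in_cejst_only = merged.loc[~merged.svi_top25 & merged.cejst_top25, "GEOID"].tolist()
```

The same two lists could be written out as CSVs to form the body of the automated report.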
The report can also show CBGs where the difference between two scores is the highest. (E.g., a CBG that's much higher on SVI than it is on the current custom score, and vice versa.)
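For continuous scores, the largest-difference view could be sketched as below, assuming both scores are expressed as 0-100 percentiles per CBG (again, the column names are illustrative placeholders):

```python
import pandas as pd

# Hypothetical percentile columns (0-100) for each score, keyed by CBG.
df = pd.DataFrame({
    "GEOID": ["A", "B", "C", "D"],
    "svi_percentile": [90.0, 50.0, 10.0, 70.0],
    "cejst_percentile": [20.0, 52.0, 75.0, 65.0],
})

# Signed difference: positive means higher on SVI, negative means higher
# on the custom score.
df["diff"] = df["svi_percentile"] - df["cejst_percentile"]

# Order CBGs by the magnitude of the gap, in either direction.
largest_gaps = df.reindex(df["diff"].abs().sort_values(ascending=False).index)
```

Keeping the sign of the difference lets the report split the result into "much higher on SVI" and "much higher on the custom score" sections.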
This would require the output of other tools to be consumable data, e.g., as CSV.
Describe alternatives you've considered
- Eyeball other scores: This would not be a rigorous or quantitative approach to understanding how our score differs from others.
Links to user research or other resources
Tasks
- Identify which other scores we wish to compare ours to: CalEnviroScreen, R/ECAP, Maryland score
- Identify which tools have consumable data that we can use to compare our output with
Definition of "Done"
- We understand what it will take to write a reusable tool to compare our score against the identified scores with multiple census block groups across the country
- We've created a new card for the comparison tool requirements