An R script to pull recent values of all eVars, events, and props and generate a standalone HTML file for reviewing the results, as well as a Google Sheet with the underlying data.
Have an option/way to run the process for a year ago, six months ago, and the last 60 days, and track whether the number/percentage of variables that have data in them has changed. Basically, an "is your implementation getting worse" measure.
This is something we can just try manually for a while (snapshotting the summary) to see if it looks like it might occasionally turn up a worthwhile result.
This is what we were trying to do with anomaly detection.
Conceptually, though, if we could just say, "Flag events that have more than 5 consecutive days that are 0 or 'very low' (relative to the normal run rate)," THAT would be helpful.
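The "more than 5 consecutive days" check above could be a simple run-length test. A minimal sketch, assuming a hypothetical daily data frame with `date` and `count` columns and treating "very low" as below 20% of the median daily count (both names and the 20% threshold are assumptions, not anything decided):

```r
# Flag any run of more than `min_run` consecutive days where the daily count
# is 0 or below `threshold_pct` of the median daily run rate.
flag_low_runs <- function(daily, threshold_pct = 0.2, min_run = 5) {
  run_rate <- median(daily$count)
  is_low <- daily$count == 0 | daily$count < threshold_pct * run_rate
  runs <- rle(is_low)                      # collapse into runs of TRUE/FALSE
  any(runs$values & runs$lengths > min_run)
}
```

`rle()` makes the consecutive-days requirement explicit, which is exactly what plain anomaly detection was failing to express.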
The idea would be that we could have some level of "benchmark" that we could use to tell organizations how they stack up against other organizations (possibly "before" and "after" an audit is performed and implemented). (Julie H. idea)
This is an idea to, essentially, do a combined-across-all-variables rollup. It could be based just on the "% with data," but it would be a single %-based score (or a 0-to-100 or 0-to-10 scale derived from that %) so that there is ONE number to point to as an overall assessment.
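The rollup could be as simple as averaging the per-variable "% with data" values and rescaling. A sketch, where `pct_with_data` is a hypothetical numeric vector (one entry per eVar/prop/event, each between 0 and 1):

```r
# Collapse per-variable "% with data" into one overall score.
# scale_max = 100 gives a 0-to-100 score; scale_max = 10 gives 0-to-10.
overall_score <- function(pct_with_data, scale_max = 100) {
  round(mean(pct_with_data) * scale_max)
}
```

A straight mean weights every variable equally; whether some variables should count more is an open question for the benchmark idea above.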
This was a feature of the "old script." Not sure if it's available in the v2 API. We could use RSiteCatalyst for this…but then we would be back to needing web services access to use it. :-(
Have the ability to pass in a regex (it could be in the Description or in a separate file) to check whether the actual values showing up in AA match the regex. (Cory W.)
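The regex check itself would just be a vectorized match over the pulled values. A sketch, where `values` and `pattern` are hypothetical inputs (the pattern might come from the variable's Description field or a separate lookup file, per the note above):

```r
# Check values returned from AA against a per-variable regex; report the
# match rate and the specific values that fail, for inclusion in the HTML output.
regex_check <- function(values, pattern) {
  matched <- grepl(pattern, values)
  list(pct_matching = mean(matched),
       nonmatching  = values[!matched])
}
```

Returning the non-matching values (not just a pass/fail) makes the result actionable in the review file.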
The idea would be that a user could go to a site, authenticate, select a report suite, and then click “go.” Presumably, this would ultimately then email them the HTML file (or a link to the web page for their report) and a link to the Google Sheets.
This would be beyond Tim’s capabilities to pull off.
There sometimes seem to be issues with Google Sheets.
In those cases, ideally, there would be a graceful fail: the HTML file would still get generated, and the underlying data would all get written out to an .rda file (or something similar) so that a separate script could read that file in and just write it out to Google Sheets.
There is the ability to switch a flag (use_local_files: change from FALSE to TRUE) to populate the Google Sheet using locally stored data rather than re-querying. This is only a partial fix; it just speeds up subsequent runs when the only issue was a hang-up while writing to Google Sheets.
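The graceful-fail and use_local_files ideas above could be combined: cache the pulled data locally, then wrap the Sheets write in tryCatch() so a Sheets failure doesn't kill the run. A sketch under loose assumptions: `pull_aa_data()` and `write_to_sheet()` are hypothetical stand-ins (a real version might use googlesheets4), and this uses .rds rather than .rda since a single object is being saved:

```r
use_local_files <- FALSE  # flip to TRUE to reuse cached data instead of re-querying

get_data <- function(cache_file = "aa_data.rds") {
  if (use_local_files && file.exists(cache_file)) {
    readRDS(cache_file)            # reuse the local cache
  } else {
    dat <- pull_aa_data()          # hypothetical API-pulling function
    saveRDS(dat, cache_file)       # cache so a failed Sheets write isn't fatal
    dat
  }
}

safe_sheet_write <- function(dat) {
  tryCatch(
    write_to_sheet(dat),           # hypothetical Google Sheets writer
    error = function(e) {
      message("Sheets write failed; data remains cached locally: ",
              conditionMessage(e))
    }
  )
}
```

With this shape, the HTML generation can proceed regardless, and a separate script (or a re-run with use_local_files = TRUE) can retry just the Sheets step.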