Comments (12)
I suggest as a first step, rather than requiring a full-on database (with the extra overhead of installation and maintenance), that VizAlerts write to a structured CSV file (or files); a .tds could then point to those files.
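A minimal sketch of what that structured CSV log could look like, using only the standard library. The column set here is purely illustrative; the real schema would come out of the project's own design discussion.

```python
import csv
import os
from datetime import datetime, timezone

# Hypothetical column set for a structured VizAlerts log; illustrative only.
FIELDS = ["timestamp", "view_name", "recipient", "action", "status", "error"]

def append_alert_row(path, view_name, recipient, action, status, error=""):
    """Append one structured row; write a header only when creating the file."""
    new_file = not os.path.exists(path)
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "view_name": view_name,
            "recipient": recipient,
            "action": action,
            "status": status,
            "error": error,
        })
```

Because the file always carries a header row and consistent columns, a .tds pointing at it would pick up stable field names even as the log grows.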
Yep, I agree--that sounds like the best plan in the near term.
Not that this isn't a good idea, but one could just push the logs into something like Logstash (which is what we're planning on doing, simply because it's where all our operational logs get sent) and easily knock off half of your list. With slight modifications to make the underlying logger a bit more "descriptive", a simple solution like this could go a long way with little to no change to the current codebase.
You could then take it a step further and use a web data connector to connect to the underlying Elasticsearch instance, and instead of Kibana charts and graphs, get the graphical goodness of Tableau.
This is the approach I've been thinking about using anyway.
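Making the logger more "descriptive" could be as simple as emitting one JSON object per line, which Logstash can ingest without a custom grok pattern. A sketch using Python's standard logging module; the field names are assumptions, not VizAlerts' actual log schema:

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Render each record as one JSON object per line (easy for Logstash to ingest)."""
    def format(self, record):
        payload = {
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        }
        # Merge any structured fields passed via the `extra=` argument.
        for key in ("view_name", "recipient", "status"):
            if hasattr(record, key):
                payload[key] = getattr(record, key)
        return json.dumps(payload)

def make_logger(stream):
    """Build a logger whose output stream receives JSON lines."""
    logger = logging.getLogger("vizalerts.demo")
    logger.setLevel(logging.INFO)
    handler = logging.StreamHandler(stream)
    handler.setFormatter(JsonFormatter())
    logger.handlers = [handler]
    return logger
```

With this shape, a Logstash `json` codec (or filter) maps each field straight into Elasticsearch, so the "query the logs" half of the list falls out for free.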
Interesting idea... I've got two questions:
- What's the effort/cost required for installation, configuration, and operation of Logstash? I ask because so far we've tried to be sensitive to which components are required; for example, some users don't have internet access, so they can't do a pip install to get the required Python libraries (never mind the optional ones).
- What's the security model? The challenge here is that in some organizations, bits of info like who gets emails, and info about their contents, are as proprietary as the actual data.
Jonathan
Our organization is built entirely on top of AWS; this includes Tableau Server, Elasticsearch/Logstash, etc.
We use Logstash for all of our logs, as it helps us consolidate logs across all of the EC2 instances we run. AWS has standalone AMIs with the whole "ELK" stack pre-configured, which you can install on any EC2 instance that fits your budget/performance needs, but it also offers an Elasticsearch Service that scales and has its own pricing model ($0.018 per hour at its lowest tier).
Security is handled at the infrastructure (AWS/EC2) level, but can also be controlled on an app-by-app basis, so long as it's supported; there are several options here when it comes to Logstash/Elasticsearch.
My thinking is along these lines: if you coupled in a database layer, it could become a "required component". As it stands, the only enhancement VizAlerts would need is making the logs a bit more robust and informative, which would then allow the log files to be queried.
I guess my suggestion was more an idea of how to accomplish this now with what you've got. That said, I'd wholeheartedly support adding a backing data layer, as it could go beyond something that's queried for reporting/administrative purposes, even as far as automating/setting up advanced alerts, etc.
Yes, that's another part that is appealing to the whole operational logging / near-real-time querying feature--easier prevention of duplicate alerts being sent. Right now there is no way for your alerts to know if they've already been sent to their recipients. If you had access to data describing who'd been sent what alert, when, you could structure your alerts to be more robust and not so reliant on relative dates. It'd also make them more fault-tolerant, since, in the event that your alert fails after the N retries it was afforded, it won't be attempted again unless the data says it should be. If you had the data, you could loosen up the primary trigger requirements, then prevent dupes over multiple "tests" by checking to see if you'd already sent that alert out to the recipient.
That all gets more complicated when you consider the dynamic nature of recipient lists and consolidated emails, but still, pretty cool!
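The dedupe check described above could be sketched like this: identify each alert instance by its alert name, recipient, and a hash of its content, and skip anything the sent-alert data already covers. All names here are illustrative, not part of VizAlerts:

```python
import hashlib

def alert_key(alert_name, recipient, body):
    """Identify an alert instance by alert name, recipient, and a hash of its content."""
    digest = hashlib.sha256(body.encode("utf-8")).hexdigest()
    return (alert_name, recipient, digest)

def filter_unsent(pending, sent_keys):
    """Drop alert instances that the sent-alert data says were already delivered."""
    to_send = []
    for alert_name, recipient, body in pending:
        if alert_key(alert_name, recipient, body) not in sent_keys:
            to_send.append((alert_name, recipient, body))
    return to_send
```

Hashing the content (rather than comparing timestamps) is what lets the trigger criteria stay loose: running the same test twice produces the same key, so the second pass sends nothing new.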
All the above considered, we can fairly easily output a separate, structured log that rolls over less frequently than the standard log, intended for reporting purposes. At the end of a VizAlerts cycle, the output from all threads (this must all be thread-safe) can be appended to the master ops log. Expose the data with row-level security via a published datasource on Server, and you'd be good to go on nearly all fronts.
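The thread-safety requirement could be met by having workers buffer rows into a shared collector and flushing to the master ops log in one locked step at the end of the cycle. A sketch under that assumption (not the actual VizAlerts threading code):

```python
import threading

class OpsLogCollector:
    """Collect rows from worker threads and flush them to the master log under a lock."""
    def __init__(self):
        self._lock = threading.Lock()
        self._rows = []

    def add_rows(self, rows):
        """Called from any worker thread; the lock serializes the appends."""
        with self._lock:
            self._rows.extend(rows)

    def flush(self, write_fn):
        """Hand all buffered rows to write_fn once, at the end of a cycle."""
        with self._lock:
            rows, self._rows = self._rows, []
        write_fn(rows)
```

Writing once per cycle, rather than having each thread open the file, also keeps the master log append-only and avoids interleaved partial lines.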
Restructured the code for more modularity (pushed to the twbconfig branch), and while I was at it, improved our ability to implement more structured output requested in this Issue. Each VizAlert is now an instance of a VizAlert class, which can store any validation and errors it encountered at various stages. This feature will still take more work to implement, but it's one step closer.
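To illustrate the idea of a per-alert object accumulating errors by stage (this is a hypothetical shape, not the actual class from the twbconfig branch):

```python
class VizAlert:
    """Illustrative only; the real VizAlert class lives in the twbconfig branch."""
    def __init__(self, view_name):
        self.view_name = view_name
        self.errors = []  # (stage, message) pairs collected as processing proceeds

    def record_error(self, stage, message):
        """Store an error along with the stage (e.g. validation, download, send)."""
        self.errors.append((stage, message))

    @property
    def is_valid(self):
        return not self.errors
```

Because the errors carry their stage, a structured log row could later say not just that an alert failed, but where in the pipeline it failed.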
Both Twilio and Mandrill allow for some reporting based on what SMS / emails you've asked them to send, so if either of those services is used, there is at least some operational data available on what VizAlerts is doing (when an alert actually fires, anyway).
Attaching a nearly-complete suggestion on what the structured output could look like.
VizAlertsStructuredOutput.xlsx
Good calls!
- Added clarity on the input value column for content refs. We'll just include the raw text of it, I think, unless you see an issue with doing so.
- I hadn't considered that, but it's a great idea. Consolidated emails don't have a specific line, however, or even a clear sequence from the original trigger data, because of the unique-ifying and re-sorting we do before processing them. So it might need to be NULL in those instances.
For Subject, how about a field called "Output Name"? This can be the Subject of an email, or the Filename of a content reference (as defined by the |filename param, or if not specified, the raw filename). This would be empty for SMS.
Since we're also trying to tackle multithreading within each alert, I was considering pre-logging each instance of an email to be sent as its own logged line, even if we know it won't be sent because its recipients or content reference(s) failed to process. That way, no matter what happens, we know the full set of emails that were supposed to be sent, and which were or weren't sent successfully.
- I have planned to add in the ScheduledTriggerViews config info, but not the global config info from vizalerts.yaml. I was thinking it would simply be a lot of duplicative logging if we added the global config stuff, because it rarely changes. Though interestingly, it would be useful for performance monitoring if you switched to a different Tableau Server or SMTP server; as an IT person, that stuff is super helpful. I'll commit to a firm "maybe" here. :)
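The pre-logging idea above could be sketched as: every planned email gets a row up front, which is later resolved to sent or failed, so the log always shows the full intended set. The status values and class are hypothetical:

```python
# Hypothetical status values; not an actual VizAlerts vocabulary.
PLANNED, SENT, FAILED = "planned", "sent", "failed"

class EmailLog:
    """Pre-log every planned email, then record the outcome against the same entry."""
    def __init__(self):
        self.entries = {}

    def plan(self, email_id, recipient):
        """Log the email the moment we know it should exist, before any send attempt."""
        self.entries[email_id] = {"recipient": recipient, "status": PLANNED, "error": None}

    def resolve(self, email_id, ok, error=None):
        """Update the pre-logged entry with the actual outcome."""
        entry = self.entries[email_id]
        entry["status"] = SENT if ok else FAILED
        entry["error"] = error

    def unsent(self):
        """Everything that was supposed to go out but didn't."""
        return [k for k, v in self.entries.items() if v["status"] != SENT]
```

Since planned-but-unresolved entries stay visible, even a hard crash mid-cycle leaves a record of which emails were never attempted.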
New version, with all the ScheduledTriggerViews fields (though I didn't mock up example values for those...too much work).
VizAlertsStructuredOutput.xlsx