silverstripe-archive / deploynaut
A web based tool for performing deployments
License: BSD 3-Clause "New" or "Revised" License
Currently you need to copy the file over manually, or re-create it from scratch.
I suspect this also means we have orphaned files in there that aren't managed properly, which will need a clean-up script or similar.
For example, project viewers without the technical knowledge to check out git want to find out which tag is currently deployed on which environment. The tag should be shown anywhere we mention a SHA (maybe in the overlay?)
This could be after 6pm, early in the morning, or even at lunchtime, just to keep an environment always up to date.
The problem is that a deployment always does a DB backup snapshot in case it needs to restore. If a site is being deployed for the first time, there is nothing to backup.
A simple fix would be to skip snapshots if there is no "currently deployed" sha on the target environment.
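The guard could be as simple as checking the target environment's current SHA before snapshotting. A minimal sketch in Ruby (Deploynaut itself is PHP; the method and argument names here are illustrative, not its actual API):

```ruby
# Sketch: skip the pre-deploy DB snapshot on first-time deploys. current_sha is
# the SHA currently deployed on the target environment, or nil when nothing has
# been deployed yet (illustrative names, not Deploynaut's actual API).
def snapshot_before_deploy?(current_sha)
  !current_sha.nil? && !current_sha.empty?
end
```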
Right now Deploynaut has some support for using puppet to configure it. This is great, but the implementation is a bit hacky; IMO it's done this way more for historical than solid architectural reasons. It leaves Deploynaut with two quite different execution paths in its configurations, increasing the chance of bugs in one set-up or the other.
It would be better if there was a CLI tool that injected a desired configuration into Deploynaut. It may be that this desired configuration is sourced from a set of files & folders, or from a larger data structure provided in a single file (or stdin). Deploynaut should be responsible for deciding what has changed and updating the database accordingly.
Rather than focusing solely on the Pipeline and Capistrano config files, it would be better if the tool was able to update any subset of the DNProject and DNEnvironment fields. Alongside this, it would be appropriate to have a more granular way of indicating which files have been puppet-supplied, and which should remain editable in the Deploynaut admin.
It's really confusing given it shows a progress indicator on a step, which indicates (at least to me) that it's still going on that step. But when refreshing the browser, it's on the next step already, or might even have completed.
This is particularly confusing if a prod deployment fails: in this case, it'll "hang" at the current step, and if you refresh, there's no indication that anything has happened - you're just back on the main deployment screen without any indicators. There's no list of failed prod deployments either, so the only way I can see to get any information about what happened is the failure notification email.
If a deployment fails, the maintenance page isn't reverted. After a failed deployment, the maintenance page should be taken down again so that the site is back as it was before.
Since we're not solely focusing on Capistrano for deployment backends, there's a bunch of capistrano-specific code in DNEnvironment.php that really should be pushed to the backend.
In particular:
Related to this is that https://github.com/ss23/deploynaut-aws shouldn't need to create an AWSEnvironment subclass of DNEnvironment simply to add an extra parameter.
Some thoughts as to how this could be done:
DeploymentBackend::getParamMetadata() returns a set of information about the configuration fields that this backend requires: name, title, field type. This could potentially be done by returning a FieldList. I wouldn't give the backend full control over manipulating the entire DNEnvironment FieldList: instead DNEnvironment can inject those fields into the right place in its form.
DeploymentBackend::setParams() gets called by the environment, passing a map of fieldname => value.
DNEnvironment.BackendParams should store a JSON-backed set of all the relevant parameters, updated on the setParams() call rather than in onBeforeWrite().
The allow_web_editing setting could do with a bit of a re-think. Right now it applies to the cap config file and pipeline config, which is a bit arbitrary. That said, it would probably be best to leave it as-is and pick it up as part of a wider look at how puppet could be better used to configure Deploynaut.
Not high priority because developer workflows generally avoid this, but it would be wonderful to have it built in.
This release was rolled back, but it shows as "Finished" in the release overview. The deployment log also shows as successful; presumably another step in the pipeline failed? It's confusing to lack that indication; the overview should show the SHA that has been rolled back to.
cc @chillu
To make things more readable, I think we should show only the first 8 characters of a SHA in the lists of commits, with the full SHA shown in a tooltip.
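A minimal sketch of the truncation (Ruby for illustration; the frontend would do this in PHP/templates):

```ruby
# Sketch: show only the first 8 characters of a SHA in commit lists, keeping
# the full SHA available for a tooltip.
FULL_SHA = "9fceb02d0ae598e95dc970b74767f19372d61af8"

def short_sha(sha, length = 8)
  sha[0, length]
end

puts short_sha(FULL_SHA)  # prints "9fceb02d"
```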
This is because in some configurations webserver_user may not be defined. Parts of data.rb do not properly respect the optionality of this variable and need to be rewritten.
Workaround for the time being: put the following into your <env>.rb:
set :webserver_user, "www-data"
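Longer term, data.rb should fall back to a default rather than assume the variable is set. A runnable sketch of that fallback logic (Capistrano's variable store is modelled here as a plain hash so the example runs standalone; in Capistrano 2 itself this would be fetch(:webserver_user, "www-data")):

```ruby
# Sketch: resolve webserver_user with a default instead of failing when the
# variable is undefined. The variable store is modelled as a hash for the
# purposes of this example.
def webserver_user(variables)
  variables.fetch(:webserver_user, "www-data")
end

puts webserver_user({})                           # prints "www-data"
puts webserver_user({ webserver_user: "deploy" }) # prints "deploy"
```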
If you are using Pipelines, and in the frontend go to Projects > Project > Environment, you see two lists:
These two lists can be very long. You may not see the Deploy history header half way down the page. Also, from experience people expect the one and only list on this page to be a Deploy history.
The layout of this page needs a redesign.
Scenario: Default smoke tests in CWP check for a 200 HTTP code on prod, but I want to secure the prod env with basic auth since the site isn't live yet (and we plan to use the env for demos, security testing, etc). Currently this requires me to ask CWP staff to modify the YAML config.
It would be good if that's a self-service capability. Allowing full access to the YAML config is probably a bit much, but we could provide a simple placeholder edit interface which is then inserted into the YAML alongside the deploynaut frontend (outside of CMS).
We could use the post migration tasks for this, but only if they complete before the site is available again.
We don't want to run Solr_Reindex in that way though, because it could take hours to run.
The "delete" action in snapshots relies on submitting a form, which takes up to 30 seconds to process and reload a large number of snapshots (50+). This is exactly the situation where you usually need to start deleting old snapshots, since quotas are exceeded. So if you want to delete 20 snapshots to free up space, that can easily take 10 minutes of sitting there watching the browser spin. You can work with multiple tabs I guess, but that's hardly ideal.
We need to allow faster deleting of snapshots. Either by submitting the form via ajax, or (preferred) by a new batch edit mode with checkboxes, and a dropdown of the action that needs to be performed.
Work has been started in #71 but it needs to be completed for this to count as fixed.
From error logs:
[Wed Oct 08 11:47:53 2014] [error] [client 172.23.52.10] PHP Warning: htmlentities(): Invalid multibyte sequence in argument in /sites/deploynaut/www/framework/dev/Backtrace.php on line 183
[Wed Oct 08 11:47:53 2014] [error] [client 172.23.52.10] PHP Warning: htmlentities(): Invalid multibyte sequence in argument in /sites/deploynaut/www/framework/dev/Backtrace.php on line 183
[Wed Oct 08 11:47:53 2014] [error] [client 172.23.52.10] PHP Catchable fatal error: Method DNBranch::__toString() must return a string value in /sites/deploynaut/www/framework/dev/Backtrace.php on line 148
A small group of users (i.e. those with a separate permission setting) should have the ability to lock/unlock environments for deployment.
It would be good if the locking could be set for a scheduled time (e.g. "please lock between 1pm and 3pm"). Also it would be good if the locking could be set up as a repeating weekly schedule.
Optionally, people without the permission could click "request deployment", and the person that locked the environment could approve on a case-by-case basis.
Use cases:
Even though it runs as a different user, there is an issue when the snapshot feature has been used before and the manifest has changed since then: the user will likely get a fatal error, causing the feature to not work at all.
We should do a flush to prevent this.
#64 should be fixed before this.
This isn't major because it's never hit, but it should probably be fixed nonetheless.
This allows testers to know which state an environment is in. So basically, ensure that "Who can view this environment?" works when all other permissions aren't selected. I would've assumed that's already the case, but according to SilverStripe ops (passed on by @micmania1), it's not a supported feature.
[2014-11-06 18:48:29] Running command: cap -f '/var/www/deploy/releases/20141029031715/assets/Capfile' -vv foo:Custom AWS deploy:check ROLES=web -s 'history_path'='/var/www/deploy/logs/deploynaut'
[2014-11-06 18:48:30] the task `foo:Custom' does not exist
We should be quoting and escaping these arguments correctly.
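For illustration, here is how the argument list could be joined with each element escaped, using Ruby's stdlib Shellwords (a sketch; Deploynaut's actual command assembly is in PHP, where escapeshellarg() plays the same role, and the path is shortened for the example):

```ruby
require "shellwords"

# Sketch: build the cap invocation from an argument array and escape each
# element, so a task name containing spaces stays a single shell word instead
# of splitting into several (as happened with "foo:Custom AWS" above).
task = "foo:Custom AWS"
args = ["cap", "-f", "/var/www/deploy/Capfile", "-vv", task, "deploy:check"]
command = args.shelljoin

puts command
```

Splitting the result back with Shellwords.split(command) recovers the original argument array, which is a quick way to verify the quoting.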
The maintenance screens aren't necessary unless you're taking db snapshots prior to release (which isn't used in all configurations), and they are irritating for people used to no-downtime deployments.
Introduce an option CapistranoDeploymentBackend::setUsesMaintenancePage() that defaults to true but can be set to false. If false, then enableMaintenance() and disableMaintenance() will be no-ops.
Once this setting is in place, people can use the injector configuration to create an allowed_backend that has no maintenance page.
Currently, users often hit API limits with their composer installs.
We should either implement a way to avoid hitting this, or at least document how to avoid it (the fix may not live in deploynaut itself).
As a deployer of a locked-down environment, I want to be able to download a .tar.gz of a release package, so that I can build a rigorous, automated deployment process even when deploynaut can't connect to the production server.
Acceptance criteria:
Note that with the new package generation work by @sminnee this is a lot easier
Hi yalls,
My project has 12 pages of snapshots (one is created at each deployment automatically by pipelines), and the page takes a good minute to load.
Most likely this is an issue with the pagination not lazy-loading the data from the database.
Thanks,
Igor
Right now a repository that has many branches creates quite a lengthy display, which isn't ideal.
Here are a few things we could do:
The maintenance page is only necessary if there is a db schema update to run. Right now, the maintenance page is activated before the release package creation is even started.
The irony is that the maintenance page feature increases the apparent downtime on the site.
I think that the maintenance page activation should be shifted to be just before the www/ symlink is changed. If no db-schema update is needed, this will keep its presence pretty short. It may also fix #76.
This will probably necessitate reorganising things so that capistrano triggers this itself.
In some cases initial requests to a site may timeout (due to manifest generation, caching, etc). It would be prudent to allow smoketesting to be given a number of retry attempts prior to failing.
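A sketch of what retrying could look like (the URL, attempt count, and wait are illustrative configuration, not Deploynaut's actual smoketest implementation):

```ruby
require "net/http"
require "uri"

# Sketch: give the smoketest a few attempts before declaring failure, so slow
# first requests (manifest generation, cold caches) don't fail the deployment.
def smoketest_ok?(url, attempts: 3, wait: 5)
  attempts.times do |i|
    begin
      return true if Net::HTTP.get_response(URI(url)).code == "200"
    rescue StandardError
      # connection refused or timed out; fall through and retry
    end
    sleep wait unless i == attempts - 1
  end
  false
end
```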
If the code that cleans up old releases fails, then it will assume that the release has failed and revert the whole thing. Although release processes sometimes raise "is that a bug or a feature?" discussions, I don't think the current behaviour provides any benefit.
The best response would probably be to flag an error notification to the sysadmin to sort out the failed release cleanup, but to let the deployment go through.
PS: the most common reason the cleanup fails is when the www-data user creates files in the webroot that can't be deleted by the deployer user.
It would probably make sense to change this at the same time as #77, to create the following flow.
Build the release package (cap deploy:prepare)
Put up the maintenance page only if needed, and switch the www/ symlink (cap deploy:activate)
Remove old releases, reporting failures without reverting the deployment (cap deploy:cleanup)
The cap actions are just examples; I'm not sure if specific actions already exist.
Right now, this can probably all be changed within CapistranoDeploymentBackend. However, as a further refactoring it might make sense to allow the swapping out of the various steps.
Presently DeploynautLogFile uses a custom format to write to log-files (e.g. the deployment log)
If the format changes, any calling logic will need to be altered as well.
Suggest we create something from SS_Log and our own implementation of Zend_Log_Formatter_Interface (see SS_LogErrorFileFormatter as an example), although I haven't investigated standard ways of reading the data back out.
Another suggestion is http://www.php-fig.org/psr/psr-3/ PSR-style logging; it's an up-and-coming format, and using it might make us compatible with logs-as-a-service providers.
http://www.sitepoint.com/logging-with-psr-3-to-improve-reusability/
Currently we automatically treat master as a special branch. Instead, we should let developers pick the default branch for their project and have that be the 'special' one.
The rollback step button is green, implying you should click it to begin the next step. That's silly, you shouldn't.
It should be more clear that this is a "Revert, cancel, it all went wrong!" button.
It should also be clear that at this point, the deployment is considered complete and working.
I cannot see an easy way to implement this.
We need this to happen only after the git fetch is completed; if we trigger it before, it would be near-useless.
As far as I know, there is no SilverStripe code that runs after a Fetch job where this could be placed.
Inside the Fetch job itself, it doesn't spin up SilverStripe.
The only thing I can think of is if we, inside the Fetch job, trigger a php cli-script.php ./handleStuff/doStuff style... hack.
What are general thoughts?
We currently have hardcoded dev/build, flush etc jobs in the deploy.rb file, but this could be made modular so custom functionality can be included without having to modify the deploynaut code.
One potential way is to change the Capfile.template so that it includes <base dir>/*/ruby/*.rb, loading any deploynaut module's rb files. That means we could have a deploynaut-solr module which includes a task configure in the solr namespace, which could then be triggered in deploy environment files, e.g. after "deploy:migrate", "solr:configure"
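The discovery part of that Capfile change can be sketched standalone (the path layout follows the glob in the issue; the deploynaut-solr name is just the example module):

```ruby
# Sketch: discover any deploynaut module's ruby files under
# <base dir>/*/ruby/*.rb so the Capfile can load them. Sorted so the load
# order is deterministic across machines.
def deploynaut_module_files(base_dir)
  Dir.glob(File.join(base_dir, "*", "ruby", "*.rb")).sort
end

# In Capfile.template this would be followed by something like:
#   deploynaut_module_files(base_dir).each { |f| load f }
```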
The message is correct, however this is a UI issue where a non-privileged user can only see snapshots for the environments they are allowed to access.
E.g. Joe Bloe can only see UAT, and there is one snapshot totaling 100MB, however deploynaut shows "You have exceeded the total quota of 1024MB. You will need to delete old snapshots in order to create new ones".
An improvement would be to simply change the message to "The snapshot quota of 1024MB has been exceeded (100MB visible). You will need to delete old snapshots in order to create new ones."
Thanks,
Igor
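The improved message proposed above could be assembled like this (a sketch; the numbers and wording follow the example in the issue):

```ruby
# Sketch: include the visible total in the quota message so non-privileged
# users understand why the quota appears exceeded even though the snapshots
# they can see are well under it.
def quota_message(quota_mb, visible_mb)
  "The snapshot quota of #{quota_mb}MB has been exceeded (#{visible_mb}MB visible). " \
    "You will need to delete old snapshots in order to create new ones."
end

puts quota_message(1024, 100)
```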
If a repository is deleted or removed between deployments, then composer install will fail and the site will be stuck in maintenance mode with no way to recover it.
The prod list only shows successful releases when pipeline is enabled, which means failed releases just disappear, and I can't see any way to view logs on WHY a release failed after the fact. The emails sent out on failure are similarly unhelpful (no reason for failure, no link to logs).
Sam would suggest that, where pipelines are in use, we don't list the DNDeployment records; it's unnecessary and confusing. Perhaps the results of the relevant DNDeployment could be listed when you open the details of a single pipeline?
After fixing this, it would be worth looking at the comment history of #101 to confirm, as Sam assumes that this fix will have also addressed that.
cc @chillu
Instead, we should run it once and return something like json_encode($databaseConfig);
Because the data values are stored in the database, there are desync issues that can happen, causing there to be a wildly different value for this than what is actually on disk.
We should probably have a sync task, like file sync, but for now, checking disk may provide more accurate values.
This might be a bit of CWP specific confusion around the roles of "Deployment Manager" vs. "Instance Manager" etc, but in general it would be good to know who you're waiting on for a deployment, and which channel they have received a notification on.
Currently the deployment backend is in danger of becoming a god object. The handling of deployment is quite different from the db/assets.
Presumably this would involve two interfaces:
It's unclear as to whether pairs of these would be stitched together in config YML, or whether there would be two dropdowns in the environment configuration:
Deployment Backend [_______|v]
Data Transfer Backend [_______|v]
While it's not likely there would be sensitive data in there, better safe than sorry.