silverstripe-archive / deploynaut

A web based tool for performing deployments

License: BSD 3-Clause "New" or "Revised" License

Shell 1.61% PHP 83.48% CSS 1.99% JavaScript 1.80% Ruby 3.61% Scheme 7.51%

deploynaut's People

Contributors

chillu, dhensby, halkyon, igor-silverstripe, igornadj, jakedaleweb, kinglozzer, kmayo-ss, madmatt, mateusz, micmania1, spekulatius, ss23, wilr


deploynaut's Issues

Refactor: better way of using puppet to configure projects & environments

Right now Deploynaut has some support for using puppet to configure it. This is great, but the implementation is a bit hacky; IMO it's done this way more for historical than solid architectural reasons. It leaves Deploynaut with two quite different execution paths in its configurations, increasing the chance of bugs in one set-up or the other.

It would be better if there was a CLI tool that injected a desired configuration into Deploynaut. It may be that this desired configuration is sourced from a set of files & folders, or from a larger data structure provided in a single file (or stdin). Deploynaut should be responsible for deciding what has changed and updating the database accordingly.

Rather than focusing solely on the Pipeline and Capistrano config files, it would be better if the tool was able to update any subset of the DNProject and DNEnvironment fields. Alongside this, it would be appropriate to have a more granular way of indicating which files have been puppet-supplied, and which should remain editable in the Deploynaut admin.
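To make the idea concrete, such a CLI could read a desired configuration and compute only the fields that differ from the current database state; a minimal sketch (the field names and JSON shape are purely illustrative, not Deploynaut's actual schema):

```ruby
require 'json'

# Hypothetical sketch: given the current DB state and a desired
# configuration, return only the fields that changed. Deploynaut would
# then update just those fields rather than rewriting everything.
def changed_fields(current, desired)
  desired.reject { |key, value| current[key] == value }
end

current = { "Name" => "myproject", "DiskQuotaMB" => 1024 }
desired = JSON.parse('{"Name": "myproject", "DiskQuotaMB": 2048}')
changed_fields(current, desired)  # => {"DiskQuotaMB" => 2048}
```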

Deploy pipeline doesn't auto refresh

It's really confusing: a step shows a progress indicator, which suggests (at least to me) that it's still working on that step. But when you refresh the browser, it's on the next step already, or might even have completed.
This is particularly confusing if a prod deployment fails: in this case, it'll "hang" at the current step, and if you refresh, there's no indication that anything has happened - you're just back on the main deployment screen without any indicators. There's no list of failed prod deployments either, so the only way I can see to get any information about what happened is the failure notification email.

If deployment fails, maintenance page remains

If a deployment fails, the maintenance page isn't reverted. It should be taken down again after a failed deployment, so that the site is back as it was before the deployment started.

Refactor: push Capistrano-specific code from DNEnvironment to CapistranoDeploymentBackend

Since we're not solely focusing on Capistrano for deployment backends, there's a bunch of capistrano-specific code in DNEnvironment.php that really should be pushed to the backend.

In particular:

  • The CMS fields for showing the config filename and editing the config content
  • The code for generating the config file as needed based on the config content TextArea
  • The config options for disabling web editing of config content (for situations where config is puppet managed)

Related to this is that https://github.com/ss23/deploynaut-aws shouldn't need to create an AWSEnvironment subclass of DNEnvironment simply to add an extra parameter.

Some thoughts as to how this could be done:

  • DeploymentBackend::getParamMetadata() returns a set of information about the configuration fields that this backend requires: name, title, field type. This could potentially be done by returning a FieldList. I wouldn't give the backend full control over manipulating the entire DNEnvironment FieldList: instead DNEnvironment can inject those fields into the right place in its form.
  • DeploymentBackend::setParams() gets called by the environment, passing a map of fieldname => value.
  • DNEnvironment.BackendParams should store a JSON-backed set of all the relevant parameters.
  • The generation of the configuration would presumably be shifted to the setParams() call, rather than onBeforeWrite().
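To illustrate the JSON-backed storage idea (the parameter names here are made up), the backend's parameter map would round-trip through a single text field on the environment:

```ruby
require 'json'

# Illustrative only: the environment stores all backend-specific
# parameters as one JSON blob rather than as individual DB columns.
params = { "ConfigFilename" => "myapp.rb", "Region" => "ap-southeast-2" }
stored = JSON.generate(params)   # what DNEnvironment.BackendParams would hold
restored = JSON.parse(stored)    # what setParams() would receive back
```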

The allow_web_editing setting could do with a bit of a re-think. Right now it applies to the cap config file and the pipeline config, which is a bit arbitrary. That said, it would probably be best to leave it as-is and pick it up as part of a wider look at how puppet can be better used to configure Deploynaut.

BUG data.rb fails with undefined variable "webserver_user"

This is because in some configurations webserver_user may not be defined. Parts of data.rb do not properly respect the optionality of this variable and it needs to be rewritten.

Workaround for the time being: put the following into your <env>.rb

set :webserver_user, "www-data"
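A more defensive fix would be for data.rb itself to treat the variable as optional; a minimal sketch, assuming Capistrano 2's fetch-with-default API:

```ruby
# Sketch only: read the variable with a default so data.rb no longer
# assumes :webserver_user is always set.
webserver_user = fetch(:webserver_user, nil)
unless webserver_user.nil?
  # ... only the parts that genuinely need the user run here
end
```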

"Can deploy" list and "Deploy history" list on the same page, one after the other, is confusing

If you are using Pipelines, and in the front go to Projects > Project > Environment, you see two lists:

  • Successful xxxxx:uat releases
  • Deploy history

These two lists can be very long, so you may not see the "Deploy history" header halfway down the page. Also, from experience, people expect the one and only list on this page to be the deploy history.

The layout of this page needs a redesign.

Allow modification of pipeline config by non-privileged users

Scenario: Default smoke tests in CWP check for a 200 HTTP code on prod, but I want to secure the prod env with basic auth since the site isn't live yet (and we plan to use the env for demos, security testing, etc). Currently this requires me to ask CWP staff to modify the YAML config.

It would be good if that's a self-service capability. Allowing full access to the YAML config is probably a bit much, but we could provide a simple placeholder edit interface which is then inserted into the YAML alongside the deploynaut frontend (outside of CMS).

Batch delete snapshots (or do it in background)

The "delete" action in snapshots relies on submitting a form, which takes up to 30 seconds to process and reload a large number of snapshots (50+). This is exactly the situation in which you usually need to start deleting old snapshots, since quotas are exceeded. So if you want to delete 20 snapshots to free up space, that can easily take 10 minutes of sitting there watching the browser spin. You can work with multiple tabs, I guess, but that's hardly ideal.

We need to allow faster deletion of snapshots: either by submitting the form via ajax, or (preferred) via a new batch edit mode with checkboxes and a dropdown for the action to be performed.

Deploynaut breaks on project page sometimes

From error logs:

[Wed Oct 08 11:47:53 2014] [error] [client 172.23.52.10] PHP Warning: htmlentities(): Invalid multibyte sequence in argument in /sites/deploynaut/www/framework/dev/Backtrace.php on line 183
[Wed Oct 08 11:47:53 2014] [error] [client 172.23.52.10] PHP Warning: htmlentities(): Invalid multibyte sequence in argument in /sites/deploynaut/www/framework/dev/Backtrace.php on line 183
[Wed Oct 08 11:47:53 2014] [error] [client 172.23.52.10] PHP Catchable fatal error: Method DNBranch::__toString() must return a string value in /sites/deploynaut/www/framework/dev/Backtrace.php on line 148

Add ability to "lock deployments"

A small group of users (i.e. those with a separate permission setting) should have the ability to lock/unlock environments for deployment.

It would be good if the locking could be set for a scheduled time (e.g. "please lock between 1pm and 3pm"). Also it would be good if the locking could be set up as a repeating weekly schedule.

Optionally, people without the permission could click "request deployment", and the person that locked the environment could approve on a case-by-case basis.

Use cases:

  • A client needs to lock their UAT environment because they're using it for a big demo and don't want it broken
  • Prevent Friday deployments without approval

Allow readonly access to deployment list

This allows testers to know which state an environment is in. So basically, ensure that "Who can view this environment?" works when no other permissions are selected. I would've assumed that's already the case, but according to SilverStripe ops (passed on by @micmania1), it's not a supported feature.

Environment names are not quoted correctly for Capistrano

[2014-11-06 18:48:29] Running command: cap -f '/var/www/deploy/releases/20141029031715/assets/Capfile' -vv foo:Custom AWS deploy:check ROLES=web -s 'history_path'='/var/www/deploy/logs/deploynaut'
[2014-11-06 18:48:30] the task `foo:Custom' does not exist

We should be quoting and escaping these correctly.
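A minimal sketch of the fix using Ruby's standard Shellwords library (assuming the command line is assembled in Ruby; where it is built in PHP, escapeshellarg() plays the same role):

```ruby
require 'shellwords'

# Escape each argument individually so an environment name containing
# spaces stays a single shell word instead of splitting into several.
environment = "Custom AWS"
task = "foo:#{environment} deploy:check"
Shellwords.escape(task)  # => foo:Custom\ AWS\ deploy:check
```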

Option to disable maintenance screens

The maintenance screens aren't necessary unless you're taking db snapshots prior to release (which isn't done in all configurations), and they are irritating for people used to zero-downtime deployments.

Introduce an option, CapistranoDeploymentBackend::setUsesMaintenancePage(), that defaults to true but can be set to false. If false, enableMaintenance() and disableMaintenance() become no-ops.

Once this setting is in place, people can use the injector configuration to create an allowed_backend that has no maintenance page.

.tar.gz downloads for releases (or at least tags)

As a deployer in a locked-down environment, I want to be able to download a .tar.gz of a release package so that I can build a rigorous, automated deployment process even when deploynaut can't connect to the production server.

Acceptance criteria:

  • All release lists (main list, deploy history lists) have "download" links for at least tagged releases.
  • The results of the download links are cached so that they don't need to be regenerated each time
  • The download links are password protected by my usual deploynaut log-in
  • Optionally, the generation of the file for the download-link is also used when deploying that release

Note that with the new package generation work by @sminnee this is a lot easier

Performance issue with Snapshots list

Hi yalls,

My project has 12 pages of snapshots (one is created at each deployment automatically by pipelines), and the page takes a good minute to load.

Most likely an issue with not lazy loading the data from the database in the pagination.
Thanks,
Igor

Lengthy display on a repo with many branches

Right now a repository that has many branches creates quite a lengthy display, which isn't ideal.

Here are a few things we could do:

  • A better sort order for the branches. For example, it might be best to show the branch with the most recent commit first.
  • Turn off the special handling of the master branch or make it optional. Not everyone uses master as their primary branch.
  • Perhaps limit the number of branches, paginate it, or use infinite scroll?

Maintenance page should be shown only when it's necessary

The maintenance page is only necessary if there is a db schema update to run. Right now, the maintenance page is activated before the release package creation is even started.

The irony is that the maintenance page feature increases the apparent downtime on the site.

I think that the maintenance page activation should be shifted to be just before the www/ symlink is changed. If no db-schema update is needed, this will keep its presence pretty short. It may also fix #76.

This will probably necessitate reorganising things so that capistrano triggers this itself.

Feature: Enable smoketesting retries

In some cases initial requests to a site may timeout (due to manifest generation, caching, etc). It would be prudent to allow smoketesting to be given a number of retry attempts prior to failing.
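A minimal retry sketch (the method name and signature are made up for illustration, not an existing Deploynaut API):

```ruby
# Retry a block up to `attempts` times, sleeping `delay` seconds
# between tries; re-raise the last error once attempts are exhausted.
def with_retries(attempts: 3, delay: 0)
  tries = 0
  begin
    tries += 1
    yield
  rescue StandardError
    if tries < attempts
      sleep delay
      retry
    end
    raise
  end
end

calls = 0
with_retries(attempts: 3) do
  calls += 1
  raise "timeout" if calls < 3  # the first two requests time out
  :ok                           # succeeds on the third attempt
end
```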

A failure to clean up old releases shouldn't cause a rollback

If the code that cleans up old releases fails, the whole release is assumed to have failed and is reverted. Although release processes sometimes raise "is that a bug or a feature?" discussions, I don't think the current behaviour provides any benefit.

The best response would probably be to flag an error notification to the sysadmin to sort out the failed release cleanup, but to let the deployment go through.

PS: the most common reason the cleanup fails is when the www-data user creates files in the webroot that can't be deleted by the deployer user.

It would probably make sense to change this at the same time as #77, to create the following flow.

  • Deploy prep (package the code & upload it to the server, e.g. cap deploy:prepare)
    • Error message if failed
  • enable-maintenance (current behaviour)
  • Deploy core (snapshot db if needed, update the www symlink, and call dev/build, e.g. cap deploy:activate)
    • Rollback if failed
  • disable-maintenance (current behaviour)
  • Deploy cleanup (clean-up old releases, e.g. cap deploy:cleanup)
    • Error message if failed

The cap actions are just examples; I'm not sure if specific actions already exist.

Right now, this can probably be all changed within CapistranoDeploymentBackend. However, as a further refactoring it might make sense to allow the swapping out of the various steps.
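In Capistrano 2 terms, the flow could be wired up with before/after hooks; a sketch using the example task names from above (none of these tasks are confirmed to exist):

```ruby
# Illustrative hook ordering only; task names are the examples above.
after  "deploy:prepare",      "maintenance:enable"
after  "deploy:activate",     "maintenance:disable"
after  "maintenance:disable", "deploy:cleanup"  # a failure here should notify, not roll back
```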

Improve logging format

Presently DeploynautLogFile uses a custom format to write to log-files (e.g. the deployment log)

If the format changes, any calling logic will need to be altered as well.
I suggest we build on SS_Log and create our own implementation of Zend_Log_Formatter_Interface (see SS_LogErrorFileFormatter as an example), although I haven't investigated standard ways of reading the data back out.

Another suggestion is PSR-3 style logging (http://www.php-fig.org/psr/psr-3/); it's an up-and-coming standard, and using it might make us compatible with logs-as-a-service providers.
See also http://www.sitepoint.com/logging-with-psr-3-to-improve-reusability/

User selectable "default branch"

Currently we automatically treat master as a special branch. Instead, we should let developers pick the default branch for their project and have that be the 'special' one.

Pipelines UI is confusing, especially the final "Rollback" step

The rollback step button is green, implying you should click it to begin the next step. That's silly, you shouldn't.
It should be clearer that this is a "revert, cancel, it all went wrong!" button.

It should also be clear that at this point, the deployment is considered complete and working.

Missing an extension point for when a Fetch updates git successfully

I cannot see an easy way to implement this.

We need this to happen only after the git fetch is completed; if we trigger it before, it would be near-useless.
As far as I know, there is no SilverStripe code that runs after a Fetch job where this could be placed, and the Fetch job itself doesn't spin up SilverStripe.

The only thing I can think of is if we, inside the Fetch job, trigger a php cli-script.php ./handleStuff/doStuff style... hack.

What are general thoughts?

Figure out way of including modular cap tasks

We currently have hardcoded dev/build, flush etc jobs in the deploy.rb file, but this could be made modular so custom functionality can be included without having to modify the deploynaut code.

One potential way is to change the Capfile.template so that it includes <base dir>/*/ruby/*.rb, so that it loads any deploynaut module rb files. That means we could have a deploynaut-solr module which includes a task configure in the solr namespace, which could then be triggered in deploy environment files, e.g. after "deploy:migrate", "solr:configure"
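The Capfile.template change could be sketched like this (the directory layout is an assumption based on the glob above):

```ruby
# Collect ruby/*.rb files from every sibling deploynaut module so the
# generated Capfile can load them; sorted for a deterministic order.
def module_ruby_files(base_dir)
  Dir.glob(File.join(base_dir, "*", "ruby", "*.rb")).sort
end

# In Capfile.template, something like:
#   module_ruby_files(File.expand_path("../..", __FILE__)).each { |f| load f }
```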

User sees "snapshot quota exceeded" but totals don't match up

The message is correct, however this is a UI issue where a non-privileged user can only see snapshots for the environments they are allowed to access.

E.g. Joe Bloe can only see UAT, and there is one snapshot totaling 100MB, however deploynaut shows "You have exceeded the total quota of 1024MB. You will need to delete old snapshots in order to create new ones".

An improvement would be to simply change the message to "The snapshot quota of 1024MB has been exceeded (100MB visible). You will need to delete old snapshots in order to create new ones."

Thanks,
Igor

BUG Rollback shouldn't use composer

If a repository is deleted or removed between deployments, then composer install will fail and the site will be stuck in maintenance mode with no way to recover it.

If pipelines are enabled, the "deploy history" should list pipeline objects, not DNDeployments

The prod list only shows successful releases when pipeline is enabled. Which means failed releases just disappear, and I can't see any way to view logs on WHY it failed after the fact. The emails being sent out on failure are similarly unhelpful (no reason for failure or link to logs).

Sam would suggest that, where pipelines are in use, we don't list the DNDeployment records—it's unnecessary and confusing. Perhaps the results of the relevant DNDeployment can be listed when you open the details of a single pipeline?

After fixing this, it would be worth looking at the comment history of #101 to confirm, as Sam assumes that this fix will have also addressed that.

cc @chillu

Refactor: split db/assets management into separate backend from deployment

Currently the deployment backend is in danger of becoming a god object. The handling of deployment is quite different from the db/assets.

Presumably this would involve two interfaces:

  • DeploymentBackend
  • DataTransferBackend

It's unclear as to whether pairs of these would be stitched together in config YML, or whether there would be two dropdowns in the environment configuration:

 Deployment Backend    [_______|v]
 Data Transfer Backend [_______|v]
