python / psf-salt
PSF infrastructure configuration
License: MIT License
It expires on July 28, 2021, which makes me think the renewal failed.
cc @ewdurbin
One possible avenue to publish any changes in salt-server-list.rst back to our python/psf-salt repository is to:
Use a git.latest state with the force_reset parameter set to True, so the local clone always stays up to date with the remote repository. A file.managed state would then ensure that salt-server-list.rst is present in the local copy of the repository and that its contents are up to date with its source file. After cloning with git.latest and tracking the file with file.managed, you could use git.push to push any changes to the remote repository. It's important to note that the user parameter must have the necessary permissions to access the repository; we could possibly create an automated Salt user with limited permissions for the purpose of running these commands and states.
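The states described above might be sketched roughly as follows. State IDs, paths, and the automation user are assumptions for illustration, not taken from the repo:

```yaml
# Hypothetical sketch of the git.latest + file.managed avenue.
psf-salt-checkout:
  git.latest:
    - name: git@github.com:python/psf-salt.git
    - target: /srv/psf-salt
    - force_reset: True        # always stay in sync with the remote
    - user: salt-automation    # a limited-permission automation user

server-list-doc:
  file.managed:
    - name: /srv/psf-salt/docs/salt-server-list.rst
    - source: salt://docs/salt-server-list.rst
    - require:
      - git: psf-salt-checkout
```

Pushing the result back would still need a git.push call (e.g. via the git execution module) with credentials for that automation user.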
Another avenue may be to establish a GitHub workflow that runs on a scheduled cron interval, pulls changes from salt using salt-cp, and creates a pull request using the create-pull-request action.
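A workflow along those lines might look like this sketch; the runner label, file paths, and action versions are assumptions, not taken from the repo:

```yaml
# Hypothetical scheduled workflow: copy the generated file into the
# checkout, then open a PR if it changed.
name: sync-server-list
on:
  schedule:
    - cron: "0 6 * * *"    # daily at 06:00 UTC
jobs:
  sync:
    runs-on: self-hosted    # needs access to the salt master
    steps:
      - uses: actions/checkout@v4
      - name: Copy salt-server-list.rst out of salt
        run: sudo cp /srv/salt-server-list.rst docs/salt-server-list.rst
      - name: Open a pull request with any changes
        uses: peter-evans/create-pull-request@v6
        with:
          title: Update salt-server-list.rst
```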
Because my bootstrap script was rejected in python/pythondotorg#905, I looked here to see whether I could quickly set up python.org or a Mailman server on my local machine.
I was not able to find any information on how to do that. So, how do I use the recommended Salt + Ansible setup to get Mailman and python.org running for development on my local (Windows) machine?
I noticed this yesterday - running migrations during a deploy (highstate) is conditional on the code having changed. Which makes a lot of sense, given that the highstate runs many, many times a day. However, if the migration fails, it fails that highstate run, but subsequent runs don't try to run the migration again (because the code was updated in the previous run, so in the later run it's not changing), and so subsequent runs appear to succeed even though things are actually not in the proper state anymore.
I'm not sure what the best fix is though. We could hack up the way we run migrations so we run them when the code has changed or the previous migration failed (keeping track of that somehow), but that's pretty kludgey. And anyway, unless someone has fixed something manually, migrations aren't suddenly going to start working without a code change.
Or we could just bite the bullet and remove the condition, so Django checks whether any migrations need to run on each highstate. Maybe we should also consider whether deploys should run in a frequent periodic highstate... but at least this way, if something was wrong each highstate would fail until it was fixed.
What we'd want ideally would be for all the changes in a deploy to happen in a transaction (somehow), so if anything fails, no changes take effect. The previous system with Chef was set up that way with regard to the source code, but not the database or the virtualenv, so things could still get out of sync when there was a failure. And I don't think anyone has a great solution for that.
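If we do bite the bullet and drop the condition, a sketch of an unconditional migration state could look like the following; the state ID, paths, and user are illustrative, not the repo's actual ones:

```yaml
# Hypothetical sketch: run migrations on every highstate and let Django
# no-op when there is nothing to apply. A previously failed migration then
# keeps failing the highstate until someone fixes it.
pydotorg-migrate:
  cmd.run:
    - name: /srv/pydotorg/env/bin/python manage.py migrate --noinput
    - cwd: /srv/pydotorg/pythondotorg
    - runas: pydotorg
```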
While working on upgrading our salt configurations for ubuntu 20.04, I noticed these outputs in the logs from sshd:
Jul 12 12:58:40 salt-master.vagrant.psf.io sshd[1220613]: rexec line 65: Deprecated option UseLogin
Jul 12 12:58:40 salt-master.vagrant.psf.io sshd[1220613]: rexec line 66: Deprecated option UsePrivilegeSeparation
Jul 12 12:58:40 salt-master.vagrant.psf.io sshd[1220613]: rexec line 80: Deprecated option RhostsRSAAuthentication
Jul 12 12:58:40 salt-master.vagrant.psf.io sshd[1220613]: Connection from 172.17.0.1 port 60496 on 172.17.0.2 port 22 rdomain ""
Jul 12 12:58:40 salt-master.vagrant.psf.io sshd[1220613]: reprocess config line 80: Deprecated option RhostsRSAAuthentication
I'm not sure what the best approach is, opening this issue to decide how to address this in our configuration.
A small change should be made to infra.psf.io's "Migrating to a new host" guide: step 7 of "Shutdown and reclaim hostname" should include salt-minion in the command.
The step should look like:
user@new-host:~$ sudo salt-call service.restart salt-minion
It would be great to have server fingerprints documented somewhere, so newcomers like me don't blindly ssh to a machine over an untrusted network. If it's already done, I missed it and it should probably be documented in the server list page.
Adding fingerprint columns to the server list looks cumbersome; maybe distributing an ssh_known_hosts file would be easier, if we're not going full DNSSEC plus SSHFP RRs?
The ssh_known_hosts file can be easily generated via curl -s https://raw.githubusercontent.com/python/psf-salt/master/docs/list.rst | grep '|' | cut -d'|' -f2 | sed 1d | ssh-keyscan -f -
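If we went the ssh_known_hosts route, a minimal Salt sketch could distribute the generated file globally; the state ID and source path are assumptions:

```yaml
# Hypothetical sketch: ship a pre-generated ssh_known_hosts to the global
# OpenSSH client configuration so all users get pinned host keys.
global-known-hosts:
  file.managed:
    - name: /etc/ssh/ssh_known_hosts
    - source: salt://ssh/config/ssh_known_hosts
    - mode: '0644'
```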
mail.python.org does have an IPv6 address (and an IPv4 one):
$ dig A +short mail.python.org
188.166.95.178
$ dig AAAA +short mail.python.org
2a03:b0c0:2:d0::71:1
But it does not accept emails on its IPv6 interface:
$ netcat -4 mail.python.org 25
220-mail.python.org ESMTP Postfix
$ netcat -6 mail.python.org 25
Our hourly logrotate cron job is not working on Ubuntu 20.04
psf-salt/salt/cdn-logs/init.sls
Lines 17 to 19 in d6c01b1
This is due to logrotate preferring the systemd timer (see the first if clause below):
ee@cdn-logs:/var/log/fastly$ ls -alhtr /etc/cron.hourly/logrotate
lrwxrwxrwx 1 root root 25 Jan 3 15:38 /etc/cron.hourly/logrotate -> /etc/cron.daily/logrotate
ee@cdn-logs:/var/log/fastly$ cat /etc/cron.daily/logrotate
#!/bin/sh
# skip in favour of systemd timer
if [ -d /run/systemd/system ]; then
exit 0
fi
# this cronjob persists removals (but not purges)
if [ ! -x /usr/sbin/logrotate ]; then
exit 0
fi
/usr/sbin/logrotate /etc/logrotate.conf
EXITVALUE=$?
if [ $EXITVALUE != 0 ]; then
/usr/bin/logger -t logrotate "ALERT exited abnormally with [$EXITVALUE]"
fi
exit $EXITVALUE
We should update our configuration to change the systemd timer from daily to hourly on this host for logrotate.
Timer is currently configured as follows by default on installation of logrotate:
ee@cdn-logs:/var/log/fastly$ cat /etc/systemd/system/timers.target.wants/logrotate.timer
[Unit]
Description=Daily rotation of log files
Documentation=man:logrotate(8) man:logrotate.conf(5)
[Timer]
OnCalendar=daily
AccuracySec=12h
Persistent=true
[Install]
WantedBy=timers.target
I'm not positive what the best way to go about editing the timer is, but it may be as "simple" as overriding the file with salt and calling systemctl daemon-reload as we do for services.
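A drop-in override may be gentler than replacing the unit file outright; a hedged sketch (state IDs are illustrative) could look like:

```yaml
# Hypothetical sketch: an hourly drop-in for logrotate.timer, with a
# daemon-reload triggered only when the override changes.
logrotate-timer-override:
  file.managed:
    - name: /etc/systemd/system/logrotate.timer.d/override.conf
    - makedirs: True
    - contents: |
        [Timer]
        OnCalendar=
        OnCalendar=hourly
        AccuracySec=5m

logrotate-timer-reload:
  cmd.run:
    - name: systemctl daemon-reload
    - onchanges:
      - file: logrotate-timer-override
```

The empty OnCalendar= line clears the packaged daily schedule before the hourly one is added.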
I see latexmk on docs.iad1.psf.io, but I don't see it in salt.
Maybe latexmk is a dependency of a package documented in salt, but I don't see which one. Maybe running aptitude why latexmk will tell us (but I do not have the rights to do so).
Hello,
Over at the peps repo we noticed that recent changes to the PEPs aren't getting updated to python.org (see bug python/peps#216).
Seems like it's due to python3 not being available there.
Can we please have python3 installed where the peps are being served?
Thanks.
The Sphinx-generated Python doc pages on docs.python.org each have an embedded JS version picker that is included in the daily docs rebuild. The version numbers are updated with new releases. But it seems that the CDN cache max age is currently set to 604800 seconds (a week). It would be better if the cache expired more frequently, say daily, so that the docs were more up to date. There's a lot of confusion at the moment due to the Python 3.5.0 release doc updates (for example, http://bugs.python.org/issue25113).
On #python-infra, @dstufft suggested changing "Surrogate-control: max-age=604800" here:
https://github.com/python/psf-salt/blob/master/salt/docs/config/nginx.docs-backend.conf#L15
and possibly also setting stale-while-revalidate:
"like Surrogate-Control: max-age=NNN, stale-while-revalidate=NNN
that'll cause Fastly to serve a "stale" (i.e. older than the max-age) document to someone while it goes in the background and refetches the latest version of that page
so people still get almost all docs served from Fastly's cache, but Fastly's cache stays more current"
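Applied to the nginx backend config, that suggestion might look like this one-line sketch; the 86400-second values are an assumption, not a decided policy:

```nginx
# Hypothetical values: revalidate daily, and allow serving a stale copy
# for up to a day while Fastly refetches in the background.
add_header Surrogate-Control "max-age=86400, stale-while-revalidate=86400";
```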
This issue is in reference to PR #331:
With our salt-master provisioned for upgrade to Ubuntu 22.04, apt-key is deprecated on 22.04 under the pkgrepo.managed module. The recommended approach is to set aptkey: False on the package repo state and set signed-by in the repo name.
Salt does some fancy repo key management magic: it fetches the GPG key from the package repo's key_url and creates the file at the location named by the signed-by parameter. When Salt places the keys in the designated location, the file is assigned appropriate permissions (644), and the _apt user is able to read it. For other packages that needed this configuration change, like Datadog, it looks something like this:
-rw-r--r-- 1 root root 4538 Jan 12 13:52 datadoghq.gpg
However, when the GPG key file gets created by Salt for the PostgreSQL package, the permissions are not set appropriately: it only gets 640, leaving the _apt user unable to read the file.
-rw-r----- 1 root root 3494 Jan 12 13:52 postgresql.gpg
To reproduce the deprecation error associated with this refactor:
laptop:psf-salt user$ vagrant up salt-master
laptop:psf-salt user$ vagrant ssh salt-master
sudo apt update
The expected postgres deprecation error:
W: http://apt.postgresql.org/pub/repos/apt/dists/jammy-pgdg/InRelease: Key is stored in legacy trusted.gpg keyring (/etc/apt/trusted.gpg), see the DEPRECATION section in apt-key(8) for details.
To reproduce the _apt user permissions bug that comes with refactoring pkgrepo.managed:
laptop:psf-salt user$ vim ./salt/postgresql/base/init.sls
In the pkgrepo.managed state, configure aptkey: False and set signed-by in the repo name as [signed-by=/etc/apt/keyrings/postgresql.gpg arch={{ grains["osarch"] }}]
laptop:psf-salt user$ vagrant destroy -f
laptop:psf-salt user$ vagrant up salt-master
The expected error looks like this:
salt-master: ID: postgresql-repo
salt-master: Function: pkgrepo.managed
salt-master: Name: deb [signed-by=/etc/apt/keyrings/postgresql.gpg arch=arm64] http://apt.postgresql.org/pub/repos/apt jammy-pgdg main
salt-master: Result: False
salt-master: Comment: Failed to configure repo 'deb [signed-by=/etc/apt/keyrings/postgresql.gpg arch=arm64] http://apt.postgresql.org/pub/repos/apt jammy-pgdg main':
W: http://ports.ubuntu.com/ubuntu-ports/dists/jammy/InRelease: The key(s) in the keyring /etc/apt/keyrings/postgresql.gpg are ignored as the file is not readable by user '_apt' executing apt-key.
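Until the underlying cause is found, one workaround sketch is to pin the key file's mode after pkgrepo.managed runs; the state ID and requisite name here are assumptions:

```yaml
# Hypothetical sketch: force world-readable permissions on the key that
# pkgrepo.managed wrote, so the _apt user can read it.
postgresql-repo-key-perms:
  file.managed:
    - name: /etc/apt/keyrings/postgresql.gpg
    - mode: '0644'
    - replace: False       # only fix permissions, keep the key contents
    - require:
      - pkgrepo: postgresql-repo
```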
Here https://raw.githubusercontent.com/python/psf-salt/master/docs/list.rst there's an evote.python.org machine documented, but it does not resolve to an IP:
; <<>> DiG 9.11.5-1-Debian <<>> evote.python.org
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 31990
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;evote.python.org. IN A
;; Query time: 1 msec
;; SERVER: 10.0.0.2#53(10.0.0.2)
;; WHEN: Wed Nov 07 23:29:59 CET 2018
;; MSG SIZE rcvd: 45
Has it been partially removed in 378249e?
Couple of things:
Just pushed a few things live, one of which had a migration. The migration was not run on staging or prod; I had to do it manually.
The migration worked flawlessly on staging, but on prod it complained about the peps app not having migrations. I had to comment out peps from INSTALLED_APPS to get the site working again. I'm guessing this is left over .pyc files or perhaps pep-related ghost migrations in the prod db?
Requirements for master (staging) and release (prod) both have Django==1.5.12; however, requirements.txt on both boxes still has 1.5.11. Cron delay issue maybe?
When running state.highstate on any of our hosts, the various repositories we add via the pkgrepo.managed state show as "Changed" on every run.
Perhaps we are incorrectly using this state or should move to another one. Ideally, state.highstate reports no changes when run repeatedly.
Steps to reproduce:
In a fresh clone of psf-salt, run
vagrant up salt-master
Hang occurs at:
==> salt-master: [INFO ] Executing command 'consul-template -config /etc/consul-template.d -once' in directory '/root'
/var/log/consul.log on salt-master looks like:
Sep 25 04:14:11 salt-master consul[6272]: serf: EventMemberJoin: salt-master 192.168.50.2
Sep 25 04:14:11 salt-master consul[6305]: serf: EventMemberJoin: salt-master 192.168.50.2
Sep 25 04:14:11 salt-master consul[6305]: serf: Failed to re-join any previously known node
Sep 25 04:14:11 salt-master consul[6305]: agent: failed to sync remote state: No known Consul servers
Sep 25 04:14:35 salt-master consul[6305]: http: Request /v1/health/service/graphite?dc=vagrant&passing=1&wait=60000ms, error: No known Consul servers
Sep 25 04:14:40 salt-master consul[6305]: agent: failed to sync remote state: No known Consul servers
Sep 25 04:14:40 salt-master consul[6305]: http: Request /v1/health/service/graphite?dc=vagrant&passing=1&wait=60000ms, error: No known Consul servers
Sep 25 04:15:00 salt-master consul[6305]: message repeated 4 times: [ http: Request /v1/health/service/graphite?dc=vagrant&passing=1&wait=60000ms, error: No known Consul servers]
...
After Ctrl+C (twice) to escape the 'vagrant up' command, vagrant up consul will complete, but with a failure in the salt state 'consul-template':
==> consul: ID: consul-template [76/1951]
==> consul: Function: cmd.wait
==> consul: Name: consul-template -config /etc/consul-template.d -once
==> consul: Result: False
==> consul: Comment: Command "consul-template -config /etc/consul-template.d -once" run
==> consul: Started: 04:22:14.255200
==> consul: Duration: 18.716 ms
==> consul: Changes:
==> consul: ----------
==> consul: pid:
==> consul: 4294
==> consul: retcode:
==> consul: 15
==> consul: stderr:
==> consul: 2015/09/25 04:22:14 [ERR] (runner) error running command: exit status 1
==> consul: Consul Template returned errors:
==> consul: 1 error(s) occurred:
==> consul:
==> consul: * exit status 1
==> consul: stdout:
==> consul:
==> consul: Summary
==> consul: -------------
==> consul: Succeeded: 46 (changed=35)
==> consul: Failed: 1
==> consul: -------------
==> consul: Total states run: 47
Attempting to up another VM (such as 'speed-web') after vagrant up consul behaves similarly to upping 'consul' (vagrant up completes, but 'consul-template' fails). Attempting to up speed-web without first upping consul results in the same hang that salt-master experiences.
At some time the infrastructure for bugs.python.org was moved to a different server.
It is very likely that it was moved off a server from Upfront Systems, as still written in the overview; this would need to be corrected. The new server should be added to the list of servers (and hopefully with SSH pubkey fingerprints, as recommended by #104 :) )
See this comment: python/psf-infra-meta#4 (comment)
And a current ping shows that this is not psf.upfronthosting.co.za anymore:
date -u ; ping -c 1 bugs.python.org
Mi 6. Mär 08:57:38 UTC 2019
PING bugs.python.org (188.166.48.69) 56(84) bytes of data.
64 bytes from bugs.ams1.psf.io (188.166.48.69): icmp_seq=1 ttl=56 time=16.7 ms
Are the directories in /srv/docsbuild/ created manually? If so, I'd like to permit build_docs.py to create them automatically.
Also, why are they 0770 for docsbuild:docsbuild? I'd like to know which branch is checked out in /srv/docsbuild/python37, but due to the mode I can't.
My initial problem was: as there is no 3.7 branch in cpython, build_docs.py is failing to build /3.7/, which is causing a build error I'm trying to fix.
#290 updated the firewall for the cdn-logs host with the current ranges of Fastly IP addresses fetched via the API at https://developer.fastly.com/reference/api/utils/public-ip-list/
We should add a mechanism to automatically keep this list up to date so we don't cause logs to stop flowing again. Whatever range was being used to send syslog streams changed sometime in May, and the change was missed, causing Fastly to stop reporting any logs at all.
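The public-ip-list endpoint returns JSON with addresses and ipv6_addresses arrays. As a minimal sketch of what a periodic job could feed into the firewall pillar (the function name and wiring are hypothetical):

```python
import json


def fastly_ranges(payload: str) -> list[str]:
    """Return the combined IPv4 and IPv6 CIDR ranges from a
    public-ip-list API response body."""
    data = json.loads(payload)
    return sorted(data.get("addresses", []) + data.get("ipv6_addresses", []))


# Example with a fabricated two-entry response:
sample = '{"addresses": ["151.101.0.0/16"], "ipv6_addresses": ["2a04:4e40::/32"]}'
print(fastly_ranges(sample))
```

A cron job or Salt engine could fetch the endpoint, diff the result against the current pillar, and alert (or open a PR) on changes.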
@NatanBagrov ,
In order to expedite the troubleshooting process, could you please provide the complete code to reproduce the issue reported here?
Originally posted by @tilakrayal in tensorflow/tensorflow#55323 (comment)
It's currently set up on the buildbot host outside of Salt, running as the buildbot
user. It's listening on port 6667 (no SSL) and connected to irc.libera.chat 6697 (SSL) with the admin user set up using the same username and password as the Libera.Chat account. I followed the guides at https://wiki.znc.in/ZNC and https://wiki.znc.in/Sasl#Example to get set up. Here's roughly the sequence of events:
$ sudo apt install znc irssi # irssi used to finish setup
$ sudo -u buildbot znc --makeconf
[ .. ] Checking for list of available modules...
[ >> ] ok
[ ** ]
[ ** ] -- Global settings --
[ ** ]
[ ?? ] Listen on port (1025 to 65534): 6667
[ !! ] WARNING: Some web browsers reject port 6667. If you intend to
[ !! ] use ZNC's web interface, you might want to use another port.
[ ?? ] Proceed with port 6667 anyway? (yes/no) [yes]: yes
[ ?? ] Listen using SSL (yes/no) [yes]: no
[ ?? ] Listen using both IPv4 and IPv6 (yes/no) [yes]: no
[ .. ] Verifying the listener...
[ ** ] Enabled global modules [webadmin]
[ ** ]
[ ** ] -- Admin user settings --
[ ** ]
[ ?? ] Username (alphanumeric): cpython-buildbot
[ ?? ] Enter password:
[ ?? ] Confirm password:
[ ?? ] Nick [cpython-buildbot]:
[ ?? ] Alternate nick [cpython-buildbot_]:
[ ?? ] Ident [cpython-buildbot]:
[ ?? ] Real name [Got ZNC?]: CPython Buildbot
[ ?? ] Bind host (optional):
[ ** ] Enabled user modules [chansaver, controlpanel]
[ ** ]
[ ?? ] Set up a network? (yes/no) [yes]:
[ ** ]
[ ** ] -- Network settings --
[ ** ]
[ ?? ] Name [freenode]: liberachat
[ ?? ] Server host (host only): irc.libera.chat
[ ?? ] Server uses SSL? (yes/no) [no]: yes
[ ?? ] Server port (1 to 65535) [6697]:
[ ?? ] Server password (probably empty):
[ ?? ] Initial channels: #python-dev-notifs
...
$ irssi
/connect localhost 6667
/quote PASS cpython-buildbot <password>
/query *status
loadmod sasl
/query *sasl
mechanism external plain
set cpython-buildbot <password>
At this point, ZNC was able to connect to Libera.Chat, and buildbot was able to connect to ZNC by setting irc_host: localhost in settings.yaml. ZNC configuration can be found at /srv/buildbot/.znc/configs/znc.conf and /srv/buildbot/.znc/users/cpython-buildbot/networks/liberachat/moddata/sasl/.registry.
Things can probably be set up rather better than what I threw together, but this hopefully at least documents what I did :)
/cc @ewdurbin
A simple change: in the /salt/base environment, codespeed.sls needs configuration that adds the nginx user to the codespeed group, for proper automated provisioning of static files for Codespeed.
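A minimal sketch of that configuration (the state ID is illustrative):

```yaml
# Hypothetical sketch: add nginx to the codespeed group without touching
# its other group memberships.
nginx-codespeed-group:
  user.present:
    - name: nginx
    - groups:
      - codespeed
    - remove_groups: False
```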
See discussion at python/pythondotorg#1140. Looks like we need to have a newer version of TeX Live on the docs server. Not sure how best to make that happen.
As documented in https://docs.fastly.com/en/guides/wildcard-purges, it would be nice to be able to purge a whole version of the docs at once from the docsbuild scripts when updating a symlink, like PURGE /3/ and PURGE /fr/3/ and so on, instead of doing it file by file.
Currently we all have a umask of 007, so if we create a directory the docsbuild-scripts won't be able to touch it.
@ewdurbin, would it be a good idea to set it to 002 (so a directory we create would be created as 775) and to set the S_ISGID (2000) bit on /srv/docs.python.org (and every directory in the hierarchy), so anything created under it is still owned by the docs group?
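As a sketch of the proposal above, with mode values as suggested and a hypothetical state ID:

```yaml
# Hypothetical sketch: group-owned directories with the setgid bit
# (2775) so new entries keep the docs group; file mode 0664 matches a
# umask of 002.
docs-root-perms:
  file.directory:
    - name: /srv/docs.python.org
    - group: docs
    - dir_mode: '2775'
    - file_mode: '0664'
    - recurse:
      - group
      - mode
```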
Is there a reason I'm not thinking of that gzip is turned off?
https://github.com/python/psf-salt/blob/master/salt/pydotorg/config/pydotorg.nginx.jinja#L264
This hurts site performance quite a bit; I was wondering if it was there by mistake.
psf-salt frank]$ curl -I -H 'Accept-Encoding: gzip,deflate' https://www.python.org/
HTTP/1.1 200 OK
Date: Mon, 12 Jan 2015 17:27:34 GMT
Server: nginx
Content-Type: text/html; charset=utf-8
X-Frame-Options: SAMEORIGIN
Content-Length: 46032
Accept-Ranges: bytes
Via: 1.1 varnish
Age: 2865
X-Served-By: cache-dfw1834-DFW
X-Cache: HIT
X-Cache-Hits: 33
Vary: Cookie
Public-Key-Pins: max-age=600; includeSubDomains; pin-sha256="WoiWRyIOVNa9ihaBciRSC7XHjliYS9VwUGOIud4PB18="; pin-sha256="5C8kvU039KouVrl52D0eZSGf4Onjo4Khs8tmyTlV3nU="; pin-sha256="5C8kvU039KouVrl52D0eZSGf4Onjo4Khs8tmyTlV3nU="; pin-sha256="lCppFqbkrlJ3EcVFAkeip0+44VaoJUymbnOaEUk7tEU="; pin-sha256="TUDnr0MEoJ3of7+YliBMBVFB4/gJsv5zO7IxD9+YoWI="; pin-sha256="x4QzPSC810K5/cMjb05Qm4k3Bw5zBn4lTdO/nEW/Td4=";
Strict-Transport-Security: max-age=63072000; includeSubDomains
gzip should take, say, the front-page content down from 45K to 9K, which is a big win on mobile; results will be even better for things like the site CSS and JS.
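If gzip is re-enabled, a conservative nginx sketch could be the following; directive values here are assumptions, not a tuned configuration:

```nginx
# Hypothetical sketch: compress text assets, skip tiny responses, and
# emit Vary: Accept-Encoding for caches.
gzip on;
gzip_comp_level 5;
gzip_min_length 256;
gzip_vary on;
gzip_types text/css application/javascript application/json image/svg+xml;
```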
From the README, I gather that we won't currently keep backups going back more than 14 days.
It might be useful to maintain some backups longer, say one set of backups (media & database) from each month.
Can we get a Sentry or Rollbar account setup for python.org? Would be nice to get alerts/exceptions for when mistakes and/or outages occur.
I'm having build issues on docs.iad1.psf.io while building French PDFs, but local builds are working. I spotted that xelatex and babel are quite old on the docs server:
For babel (in the texlive-latex-base package) I have 2018/05/02 3.20 The Babel package locally, while docs.iad1 has 2013/12/03 3.9h The Babel package.
For XeTex:
If it's easy, maybe upgrading them would help.
[WARNING ] The function "module.run" is using its deprecated version and will expire in version "Phosphorus".
Some of our states use module.run, which apparently is using a deprecated syntax.
We should determine what the call should look like and update all invocations in our states to silence this warning.
To reproduce, vagrant up the salt-master and then vagrant up a node that uses the module.run function, such as cdn-logs.
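For reference, the new-style module.run syntax keys the state on the function name instead of passing it as the name parameter. A hedged sketch, with an illustrative module and arguments rather than one of our actual states:

```yaml
# Hypothetical sketch of the new module.run syntax. Opting in also
# requires `use_superseded: [module.run]` in the minion configuration.
restart-rsyslog:
  module.run:
    - service.restart:
      - rsyslog
```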
Provisioning the Salt master through Vagrant produces the following error, which seems to be caused by a reference (https://github.com/python/psf-salt/blob/master/salt/_states/consul.py#L5) to a non-existent Pillar value (https://github.com/python/psf-salt/blob/master/pillar/dev/consul.sls or https://github.com/python/psf-salt/blob/master/pillar/prod/consul.sls):
salt-master: [ERROR ] An exception occurred in this state: Traceback (most recent call last):
salt-master: File "/usr/lib/python3/dist-packages/salt/state.py", line 1919, in call
salt-master: **cdata['kwargs'])
salt-master: File "/usr/lib/python3/dist-packages/salt/loader.py", line 1918, in wrapper
salt-master: return f(*args, **kwargs)
salt-master: File "/var/cache/salt/minion/extmods/states/consul.py", line 5, in external_service
salt-master: token = __pillar__['consul']['acl']['tokens']['default']
salt-master: KeyError: 'default'
The docsbuild venv, from https://github.com/python/psf-salt/blob/master/salt/docs/init.sls#L38, runs Python 2.7.
It's only used to run Sphinx (not the docsbuild script itself), and I always (by "mistake") built it using "python3 -m venv" when building locally and it always worked perfectly, so the upgrade looks safe.
However, as I almost never use Salt, I'm still unsure whether Salt is able to upgrade this cleanly; according to the docs, a - python: python3 may be enough, if someone knows...
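For what it's worth, a hedged sketch of what the upgraded state might look like; the path and user are assumptions:

```yaml
# Hypothetical sketch: rebuild the docsbuild venv on Python 3 via the
# virtualenv.managed state's python parameter.
docsbuild-venv:
  virtualenv.managed:
    - name: /srv/docsbuild/venv
    - python: python3
    - user: docsbuild
```

Note that switching the interpreter of an existing venv in place is unlikely to work cleanly; it would probably need to be removed and recreated.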
While working on the docsbuild-scripts I spotted this in /var/log/syslog, which does not look right:
/var/log/syslog.1:Jul 25 00:07:01 docs sSMTP[24028]: Unable to locate mailhub
/var/log/syslog.1:Jul 25 00:07:01 docs sSMTP[24028]: Cannot open mailhub:25
/var/log/syslog.1:Jul 25 00:07:01 docs CRON[24025]: (docsbuild) MAIL (mailed 1 byte of output; but got status 0x0001, #012)
Context: the docsbuild script (started by cron) was failing. I used an f-string, remembering that you upgraded Python to 3.6 on the server, but the cron is still running 3.4 (which is not a problem in itself, since the cron then makes subsequent calls, to blurb and sphinx-doc, using the 3.6 venv).
Hi,
There are discussions about making #python-dev channel less noisy. Previously, bots were talking too much, and people were not able to follow a discussion.
@zware already moved Buildbot notifications to a new #python-dev-notifs
channel.
It seems like the irker bots are controlled by the PSF: c94d99c#diff-109d7aa6f365b412c9454a6fc7cbbddc
It seems like the IRC channel is set there:
Line 7 in 8c29e89
Hi guys,
There is some considerable work being done on reworking the navigation on python.org. It'll likely be a multi-week iterative process, so I'd rather not use the existing staging for this in case there are other hot issues to be worked on in between.
Would it be possible to set up staging2 or something on the same hardware as staging? Just a separate venv, DB, and branch to use?
Roles we have enabled in prod:
We should start by ensuring that each comes up clean in dev! Let's put a ✅ in roles we've confirmed!
The build_docs.py script has changed a lot:
So starting it hourly would keep fast-changing docs fresher, while using less CPU than the old "let's rebuild everything every day" approach.
Technically, starting it every minute would work too, but it would flood the logs, and it would be started hundreds of times in a row just to git fetch and do nothing (if nothing changed).
I feel like hourly is a good compromise.
Can we have:
https://github.com/python/docsbuild-scripts#the-github-hook-server
on docs.nyc1.psf.io? I thought we had an issue open about it but I can't find it :D
https://github.com/python/psf-salt/blame/master/salt/docs/config/nginx.docs-backend.conf#L86
It should redirect not only /lib/ but /lib/*.
(can't assign myself, but I'll try it).
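A sketch of a broader match; the redirect target here is illustrative, since the real one lives in nginx.docs-backend.conf:

```nginx
# Hypothetical sketch: capture everything under /lib/ and preserve the
# remainder of the path in the redirect target.
location ~ ^/lib/(.*)$ {
    return 301 /3/library/$1;
}
```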
Dead Man's Snitch currently gets an auto-generated key/check for each server. We need to either add a step to our infra.psf.io migration guide that pauses the service during the migration steps, or automate provisioning and clean-up when migrating to a new server.
Note: take a closer look at multi-user accounts for Dead Man's Snitch.
Can I get a Python.org Sentry account setup for me? Thanks!
pypa/get-pip#50 and similar issues come up when pypa/get-pip adds a new version. Optimally, the configuration for specific versions, such as symlinking and purging, should be automated.
Travis is now recommending removing the sudo tag:
"If you currently specify sudo: false in your .travis.yml, we recommend removing that configuration"
Hey Everyone,
We need to set the Fastly API Key so the app code can automatically PURGE when Pages and Downloads (and likely eventually other aspects of the site) are edited.
So ultimately pydotorg.settings.server needs to contain:
FASTLY_API_KEY = ''
This will likely need to be setup in Consul, so I wasn't sure exactly how to go about putting together a pull request for it.
There are some instances where we are calling latest when pinning the Salt version for our Ubuntu upgrades. For example, we do this in our /salt/base environment for salt.sls.
We should go through and make sure that we are calling the right version for these pins:
3006 for jammy
3004 for focal
2018 for bionic
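One way to keep the pins in a single place is a codename-to-version map in Jinja; a hedged sketch, with the state ID and map placement as assumptions:

```yaml
# Hypothetical sketch: pin the Salt package per Ubuntu codename from one map.
{% set salt_versions = {
    'jammy': '3006',
    'focal': '3004',
    'bionic': '2018',
} %}

salt-minion:
  pkg.installed:
    - name: salt-minion
    - version: {{ salt_versions[grains['oscodename']] }}*
```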
The psfmember.org certificate is about to expire (in 11 days from now), and it's not the first time it has gotten that close. What could we do to improve the situation?
psf-salt/pillar/base/users.sls
Lines 263 to 270 in 31d9219
According to this, I should be able to connect, but it is rejecting my key for some reason. Maybe the configs aren't synced yet, though this change is pretty old (more than 6 months).
$ ssh -i ~/.ssh/id_rsa_gh [email protected]
[email protected]: Permission denied (publickey).
$ cat ~/.ssh/id_rsa_gh.pub
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDI+7nVxFPxT0A5mrLq59YFziAYv2KP9nsjQiKP4139FCYur3CbBkgBD2MSPk5kNy39X+b3xZrZFV72rspY968iFyLXBTfAEoLNAyKC8OHJ9irV2ToWLuWOoek30HrbSYGnuzRFfHNW/8wNuiB2GGZ3hwCAcWaLoGjQuaW47AulkRHWpDzQpJO6zXTo2r5OelLGu2z2zFdtOLvIEnG8FiSKLgMt1UJk7Y1JXQF0+cOOeS/NZHu5efa1Lxpch2qumKJhD4gh1sv9+K+VX70TtZ14uuqo8454b7n3kKeC/RQu6hdXUUcjCywLniNurgTT84OcJluQNHHb0sapOz76oOqr4rF3osFcLav3BbrvdKNH+2TU9feC1BTGwoeQih1jYH3cLJDAvbs+EVkTEjhgfsw88REYJ8srvzLOmeGHNgwscYe2r+q2WB0d4C/Vsud8wea5uSKFn37RkNFcb5zzkEYuyC3rdfgVvFWKVwcKsXtWTDV22M85GRo2F9chXfc4QU0= isidentical@desktop
When calling state.highstate on any given node, the following log lines appear during execution:
[ERROR ] Invalid compound target: for results:
[ERROR ] Invalid compound target: for results:
We should determine what these are and how to resolve them.
Running sudo salt-call -l debug state.highstate after vagrant ssh-ing into a given node locally should produce output that helps us track this down.