fausecteam / ctf-gameserver
FAUST Gameserver for attack-defense CTFs
Home Page: https://ctf-gameserver.org
License: ISC License
FAUST CTF's current way to send emails to all teams is rather hacky.
In my opinion, a nice solution would be to silently add all addresses to an (of course moderated) mailing list. Mailman 3 has an API which could probably enable that; however, it's still said to be somewhat immature.
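Mailman 3's core REST API has a members resource that should allow subscribing addresses silently. A minimal sketch, assuming the default localhost:8001 endpoint and an invented list ID (authentication is omitted; the pre_* flags are what skip verification, confirmation and moderator approval):

```python
import urllib.parse
import urllib.request

# Assumption: Mailman 3 core REST API on its default port; list ID is invented.
API_URL = "http://localhost:8001/3.1/members"

def build_subscribe_request(address, list_id="announcements.lists.example.org"):
    """Build a POST request that silently subscribes an address to a list.

    The pre_* flags skip verification, confirmation and moderator approval,
    which is what "silently adding all addresses" would need.
    """
    data = urllib.parse.urlencode({
        "list_id": list_id,
        "subscriber": address,
        "pre_verified": "True",
        "pre_confirmed": "True",
        "pre_approved": "True",
    }).encode()
    return urllib.request.Request(API_URL, data=data, method="POST")
```

Sending the request (with HTTP Basic auth for the REST API user) would then be one `urllib.request.urlopen()` call per team address.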
It would be nice to have Grafana (or similar) dashboards for what happened during a CTF; flag submissions primarily come to mind.
I see three implementation options:
@phi1010 also had some ideas in this direction.
There could be a way to manually mark the checker for a service as defective. That would be shown on the scoreboard and maybe be reflected in the scoring.
Fingerprinting: at the very least, make sure flag placement goes first in the tick, so that time correlation between flag placement and other checker actions can't be abused for fingerprinting.
There is a "bonus" field in the "scoring_flag" table, which somehow gets incorporated into scoring but (hopefully) is never actually used. I don't know what the semantics of that are supposed to be, so we should remove it.
On the other hand, one might actually want to award bonus points to a team. However, these should not be tied to a particular flag.
It will probably be easier to implement this after #28.
VPN configs could be distributed through a login-protected area of the website instead of/in addition to email.
That would require an option to provide downloads for individual teams. In order to be usable, such an option would also have to support uploading the files for many teams at once.
The database and table where Checkers can store state is commonly referred to as "cache". This is not accurate and led to confusion for @rudis, since it really contains non-cached state.
All components could provide relevant metrics for monitoring in the Prometheus format.
That e.g. includes error rates per checker, per box.
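As a sketch of what such metrics could look like, here is a hypothetical helper rendering per-service, per-team error counters in the Prometheus text exposition format (metric and label names are made up):

```python
def render_prometheus(metrics):
    """Render (name, labels, value) triples in Prometheus text exposition format."""
    lines = []
    for name, labels, value in metrics:
        label_str = ",".join(f'{key}="{val}"' for key, val in sorted(labels.items()))
        lines.append(f"{name}{{{label_str}}} {value}")
    return "\n".join(lines) + "\n"

# Example: hypothetical error counts per checker and per box
sample = [
    ("checker_errors_total", {"service": "foo", "team": "23"}, 3),
    ("checker_timeouts_total", {"service": "foo", "team": "23"}, 1),
]
```

In practice, the official prometheus_client library would handle the exposition format and the HTTP endpoint for us; this only illustrates the output each component would serve.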
Checkers should only run against active teams.
We had a problem with this in the web "Service History" during FAUST CTF 2019, which was hotfixed through 6d9c823.
We currently return the raw floating-point scores in the CTFTime scoreboard format ("scoreboard.json"). However, CTFTime doesn't like that.
I had success with rounding everything to the nearest integer. The spec says (at least for the overall "score"; it may also be true for task "points"):
"Decimal up to 4 digits after delimiter"
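A minimal sketch of the rounding (the "standings"/"pos"/"team"/"score" field names follow my reading of the CTFTime feed format and should be double-checked against the spec):

```python
def ctftime_standings(raw_scores):
    """raw_scores: list of (team_name, float_score) pairs, sorted best-first.

    Rounds the floating-point scores to the nearest integer, which CTFTime
    accepted in my tests.
    """
    standings = []
    for pos, (team, score) in enumerate(raw_scores, start=1):
        standings.append({"pos": pos, "team": team, "score": round(score)})
    return {"standings": standings}
```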
In some cases, a score of 0 does not cause the score value ("0") to get rendered on the web scoreboard. This seems to be some combination of having no total points for a given service/category, where the total of 0 also prevents the individual score components from getting rendered.
The current implementation of flag IDs through their own script (ctf-flagid) is a hack because of lack of time. Ideally, this should be part of the web component.
The ctf-flagid script does not work anymore due to recent changes (IPv6, net numbers etc.).
There could be an automated way to alert on "first blood" events, i.e. the first successfully submitted flag for a service.
Maybe, this could also be handled through #31.
Current state of Python and Debian packaging is a major pain point: Files end up in the wrong package (e.g. "python3-ctf-gameserver" instead of "ctf-gameserver-checker") and getting static files added to packages is a game of luck. I have seriously wasted at least a working day with this for FAUST CTF 2018 and haven't even completely debugged the issues yet.
The problems include, but are not limited to, the fact that the Debian packaging strives to be nice and policy-compliant, which complicates stuff like adding a custom Bootstrap build. Since from my point of view, it's not worthwhile to get the project into official Debian repositories anyway, I'd just take a pragmatic approach and include more dependencies like jQuery and Bootstrap in our packages.
In addition to #32 and #33 we might want to support launching Docker containers for checker scripts. This would make testing and deployment of checker dependencies easier.
One container per service, team and tick is probably too expensive, but one per service and team (and checker instance) could work. However, we would need some (enhanced or new) protocol to communicate with the containers.
The order of HTTP headers appears to be deterministic, which could theoretically facilitate checker fingerprinting. We might be able to add randomization by monkey-patching requests from the checker runner, or similar.
According to @maltek, this depends on the Python version and the order wasn't deterministic in previous versions.
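A rough sketch of the idea (this only shuffles the header order; actually injecting the shuffled order into requests would require monkey-patching its request preparation, which is left out here):

```python
import random

def shuffled_header_items(headers):
    """Return the header (name, value) pairs in a random order.

    With http.client, the shuffled pairs could then be sent one by one via
    putheader(); for requests, something similar would have to be patched in.
    """
    items = list(headers.items())
    random.shuffle(items)
    return items
```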
Doing scoring through a Materialized View is nice from a DB point of view, however it causes some problems in practice (e.g. around scores() in "calculations.py"). I want something more maintainable, probably more Python instead of SQL.
Error messages for the command line components could be more verbose when an error occurs during start-up.
Error conditions I have observed so far (definitely an incomplete list) are mostly related to the database connection, e.g. connection errors, missing permissions, or not handling the case that no service objects have been created.
Somebody had the idea to provide a web dashboard with all information on a service and its checkers to service authors.
Exact features are to be determined.
Gameserver fingerprinting is made easier by the fact that checks are performed at a relatively static point in the tick, every n minutes.
This should be more random; however, there are some pitfalls:
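Pitfalls aside, the basic idea could be as simple as launching each check at a random offset into the tick while leaving enough headroom for it to finish (a sketch; function and parameter names are made up):

```python
import random

def check_start_offset(tick_duration, max_check_duration, rng=random):
    """Seconds to wait after tick start before launching a check.

    The upper bound keeps the check inside the current tick even in the
    worst case, so randomization does not break tick semantics.
    """
    return rng.uniform(0, tick_duration - max_check_duration)
```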
All code has originally been written for interpreter and library versions included in Debian Jessie.
We ran FAUST CTF 2018 on Debian Stretch and that mostly worked alright, but there still has to be a systematic check for changes in libraries and I guess some "requirements.txt" files will have to be updated. This is especially the case for Django.
Unfortunately, we don't have any test cases (see #35).
The unit files for the controller, submission and checker components launch their services as user "nobody".
It is of course reasonable to run them unprivileged, but running everything as "nobody" is problematic on its own. This especially subverts our privilege separation between the checker master and the checker scripts.
Other than creating separate accounts, a possible solution would be using systemd's DynamicUser feature.
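A sketch of what that could look like in a unit file (DynamicUser and StateDirectory are real systemd directives; the concrete paths and names are assumptions):

```ini
[Service]
# Allocate a transient, unprivileged UID/GID per service at start-up:
DynamicUser=yes
# Persistent state survives despite the transient user (directory name is an assumption):
StateDirectory=ctf-checkermaster
ExecStart=/usr/bin/ctf-checkermaster
```

This would give each component its own unprivileged identity without us having to manage system accounts, restoring the privilege separation between checker master and checker scripts.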
Important announcements during the CTF could be shown above the scoreboard, which a lot of people are looking at all the time.
In addition to its name, each service should have a slug stored in the database. That would make it easier to identify the files for its checker in the file system.
For FAUST CTF, we already have a slug in each service's metadata file, to identify its associated files.
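Deriving the slug automatically from the service name would be one option; a hypothetical helper (in the web component, Django's slugify would do the same job):

```python
import re

def make_slug(name):
    """Turn a service name into a filesystem-friendly identifier (sketch)."""
    slug = re.sub(r"[^a-z0-9]+", "-", name.lower())
    return slug.strip("-")
```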
I'm not sure if data from the "GameControl", especially regarding timing, is currently processed by all components in an optimal way. Questions include:
At the moment, we assume that actual start and end times for ticks are equal to the theoretical ones as computed from start time of the CTF and tick duration. Additionally, the current tick is stored in the "scoring_gamecontrol" table.
For improved error handling and better insights, it would be beneficial to have an extra table which stores the actual start and end time for each tick. All fields storing ticks, which are just integer values at the moment, should then be converted to Foreign Keys.
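As a sketch of the proposed schema change (table and column names are made up, and SQLite stands in for the real PostgreSQL database):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Extra table storing the actual start and end time of every tick:
conn.execute("""
    CREATE TABLE tick (
        number     INTEGER PRIMARY KEY,
        start_time TEXT NOT NULL,
        end_time   TEXT
    )
""")
# Tick columns elsewhere become foreign keys instead of bare integers:
conn.execute("""
    CREATE TABLE flag (
        id   INTEGER PRIMARY KEY,
        tick INTEGER NOT NULL REFERENCES tick (number)
    )
""")
conn.execute("INSERT INTO tick VALUES (0, '2019-06-01T10:00:00Z', '2019-06-01T10:03:07Z')")
conn.execute("INSERT INTO flag (tick) VALUES (0)")
```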
Since the first FAUST CTF, we have wanted to provide a status page for the teams' VPN connections (including ping checks) on the website before the CTF.
Similar to #13, it could probably reuse a good deal of checker infrastructure, however there are some technical challenges with adding special checkers which run during different times than the others.
More verbosely express that verifying your email address by clicking the confirmation link is a strict requirement for participating in the competition.
Writing checker scripts is currently heavily Python-specific, though there's no reason why it has to be like that. I'd wish for more language options.
Languages that come to my mind:
Currently, asset files like sponsor images or network maps either have to be uploaded to the server manually or be included in the Debian package.
Neither option is nice; ideally, you could just upload them through the admin interface and have a nice way to include them on Flatpages.
We could eliminate the major performance issues from FAUST CTF 2015 by calculating scores asynchronously, but rendering the scoreboard HTML is still problematic: A page load can take up to several seconds if there is no cached version of the current scoreboard.
The HTML should probably also get rendered asynchronously from requests; however, this is hard to do nicely with Django. Another option would be to only serve JSON and do the rendering client-side through JavaScript, though I don't like requiring JS just for the scoreboard.
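A third option is keeping server-side rendering, but doing it periodically outside the request cycle and writing the result to a static file the web server delivers directly. A minimal sketch (template and field names are invented):

```python
import html

def render_scoreboard(standings):
    """Render a scoreboard table to static HTML, to be written to a file
    by a periodic job instead of inside the request/response cycle."""
    rows = []
    for pos, entry in enumerate(standings, start=1):
        team = html.escape(entry["team"])  # team names are user-controlled
        rows.append(f"<tr><td>{pos}</td><td>{team}</td><td>{entry['score']}</td></tr>")
    return "<table>\n" + "\n".join(rows) + "\n</table>"
```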
The checker script runner (ctf-checkerslave, though I'm not happy with the name) could be redesigned into a Python library instead of importing the checker scripts as modules. Checker scripts would then be executables importing the library, instead of the other way round.
This would be the nicer design in my opinion and eliminate some configuration overhead for import-ability of checker modules. It would also provide a nice template for implementations in other languages (see #32).
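A sketch of the inverted design (class and function names are assumptions, not an existing API): the library provides a base class and an entry point, and each executable checker script calls the entry point itself:

```python
class BaseChecker:
    """Hypothetical base class provided by the checker library."""

    def __init__(self, ip, team):
        self.ip = ip
        self.team = team

    def place_flag(self, tick):
        raise NotImplementedError

    def check_flag(self, tick):
        raise NotImplementedError

def run_check(checker_class, ip, team, tick):
    """Entry point called by the checker script itself, replacing the old
    model where the runner imported the checker as a module."""
    checker = checker_class(ip, team)
    return {
        "place_flag": checker.place_flag(tick),
        "check_flag": checker.check_flag(tick),
    }
```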
There should be a log which just contains the results (no error messages or similar) of all checker runs and their time.
Are you planning on publishing the 2019 services just like the 2017/2018 services?
Quotes from our internal issue tracker (translated from German):
"Document on the website what exactly you have to send and what you then get back as a response. If in doubt, a link to the GitHub repo is enough."
"As a participant, it simply helps with the adrenaline level if you can prepare that in advance."
"Link to http://ctf-gameserver.org/flags.html#submission and I'll make sure there's something sensible there."
When editing a team after initial registration, the password (and confirmation) field may be empty. In that case, the existing password should be kept. However, the password gets set to an empty one instead and teams can't log in afterwards.
I first noticed this at FAUST CTF 2018, but from looking at the code, I'd say it's always been this way.
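The fix boils down to treating an empty password field as "keep the current password". A sketch in plain Python (the real logic lives in a Django form, and the hasher here is just an insecure stand-in):

```python
import hashlib

def hash_password(password):
    """Insecure stand-in for the real password hasher, for illustration only."""
    return hashlib.sha256(password.encode()).hexdigest()

def updated_password(existing_hash, submitted_password):
    """Keep the stored hash when the edit form leaves the password field empty."""
    if not submitted_password:
        return existing_hash
    return hash_password(submitted_password)
```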
Deleting a team (pressing "Delete team" on the editing page) fails with:
django.db.utils.ProgrammingError: permission denied for relation scoring_scoreboard
The same or a similar problem also occurs when deleting the object through the Django admin interface. There probably are some database constraints or permissions kicking in here.
At the moment, ctf-testrunner and ctf-checkerslave take an integer (probably the service's database ID) as their --service argument. ctf-checkermaster takes a string (the service's name), but describes it as "database id of the service".
This should be made more consistent. Probably, the service slug should be used once #16 is ready.
We had some security issues with uploaded team images at FAUST CTF 2018, where Cross Site Scripting was possible. There should be validation of the uploaded file extensions and maybe also file contents.
Besides that, we should add notes to the documentation about setting Content-Security-Policy and maybe using a different domain for uploads.
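Validation could check both the extension and the file's leading magic bytes; a minimal sketch with a deliberately small allow-list of formats:

```python
# Magic bytes of a few common image formats:
IMAGE_SIGNATURES = {
    ".png": b"\x89PNG\r\n\x1a\n",
    ".jpg": b"\xff\xd8\xff",
    ".gif": b"GIF8",
}

def looks_like_image(filename, data):
    """Accept an upload only if extension and leading bytes agree on a format."""
    for extension, magic in IMAGE_SIGNATURES.items():
        if filename.lower().endswith(extension) and data.startswith(magic):
            return True
    return False
```

This would have rejected the XSS payloads from 2018, since SVG (and anything else scriptable) is simply not on the allow-list.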
We need support for continuously testing checker scripts before the actual CTF. From the Gameserver's point of view, this should be pretty similar to hosting an actual CTF, with some differences including:
Issues to be checked when checker runtime exceeds certain bounds (>1m), expected behavior in brackets:
After the CTF, you want to keep the website around but not have an (at some point outdated) Django instance running. Instead, it should just be served as static HTML.
I currently use a wget command like this for that purpose:
wget --recursive --domains <year>.faustctf.net --page-requisites https://<year>.faustctf.net/
This works, but is hacky and does not include files that are not linked to, such as the scoreboard JSON.
Ideally, such functionality could be implemented through Django as a special custom command in manage.py or so.
I know this is a far stretch and writing test cases is a lot of work. But currently our only real test is running a FAUST CTF, and that's not a satisfying situation.
For example, stuff like #34 is complete guesswork.
So maybe, just maybe, we could get test cases for all components at some point?
Remove cancelled but not yet cleaned tasks from open_tasks in checkermaster