pereztr5 / cyboard

Scoring engine for cyber defense competitions

License: BSD 3-Clause "New" or "Revised" License
The web API doesn't follow any consistent conventions. For instance, JSON API endpoints are mixed in with HTML template-serving endpoints. It deserves to be cleaned up. I personally like this guide for its naming suggestions (as well as several other points it hits home on).
This would mean updating server/routes.go, and Ctrl+F'ing the front end to keep any AJAX requests in sync with these updates.
There should be a way via the web GUI for the challenge creators to add/delete/update their flags. Configuring this via the mongo CLI/shell is error-prone and dumb.
cyboard needs a way to manage users from the web app as well, but that is a separate issue. This is simply my current idea, subject to change:
There is likely a wonderful JS editor for tabular data in this manner that we could leverage, but I have not researched that yet.
Edit: Additionally, challenge creators want live statistics on which of their flags have been submitted, by what teams, and other info. This should be provided on the proposed page or some sibling dashboard that is only for administrators.
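To make the idea concrete, here is a minimal in-memory sketch of the add/update/delete operations the admin page would call. The Flag fields are a guess; the real schema lives in the mongo collection and may differ:

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// Flag is a guess at the minimal fields a challenge flag needs.
type Flag struct {
	Name   string
	Value  string
	Points int
}

// FlagStore is an in-memory stand-in for the mongo collection, exposing
// the operations the proposed admin page would call.
type FlagStore struct {
	mu    sync.Mutex
	flags map[string]Flag
}

func NewFlagStore() *FlagStore {
	return &FlagStore{flags: make(map[string]Flag)}
}

func (s *FlagStore) Add(f Flag) error {
	s.mu.Lock()
	defer s.mu.Unlock()
	if _, ok := s.flags[f.Name]; ok {
		return errors.New("flag already exists: " + f.Name)
	}
	s.flags[f.Name] = f
	return nil
}

func (s *FlagStore) Update(f Flag) error {
	s.mu.Lock()
	defer s.mu.Unlock()
	if _, ok := s.flags[f.Name]; !ok {
		return errors.New("no such flag: " + f.Name)
	}
	s.flags[f.Name] = f
	return nil
}

func (s *FlagStore) Delete(name string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	delete(s.flags, name)
}

func (s *FlagStore) Get(name string) (Flag, bool) {
	s.mu.Lock()
	defer s.mu.Unlock()
	f, ok := s.flags[name]
	return f, ok
}

func main() {
	s := NewFlagStore()
	s.Add(Flag{Name: "web-1", Value: "flag{example}", Points: 100})
	f, _ := s.Get("web-1")
	fmt.Println(f.Points)
}
```

Each method maps naturally onto one JSON endpoint, which the tabular JS editor would talk to.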
With all the different components a competition can span, it is useful to let the competitors and challenge makers know where they fall in the grand scheme.
I've seen it done for Highcharts graphs, where the user can click to drill down into finer detail on some data point (score, in our case).
The bars themselves could be broken up into the main categories, CTF and Infrastructure, though that may ultimately be too messy.
We can use the table below the scoreboard to show each of these different groups of points as newly added columns.
Admins should not have to drop to CLI for everything related to service checks for the infrastructure portion of an event.
They should at least be able to view the configured checks, args, and point values. Any logging that could be shown from the last few check runs is a plus.
Going beyond viewing current configuration, being able to update it is the next priority:
scorengine.challenges collection for display purposes. This is only a rough idea as is.
Due to the number of errors that can occur throughout this process, we should be careful with the implementation.
Because good people deserve appreciation in the best way possible: Imaginary internet points!
If only we could give out a BRAND NEW CAAAAAAR
The web interface should allow admin
& blackteam
members to hand out points to teams that show respect to others, support the staff, and generally make the event a better experience.
All we would need is a simple button that pulls up a modal, with a combo box for the team name, a number input for the bonus point value, and a text area to leave a note about why the team received these points.
Bonus points: If the component accepts negative numbers, this feature doubles as a way to dock scores of less-than-excellent contestants.
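The server side of that modal is small. A sketch, with hypothetical names (BonusGrant, ApplyBonus are not existing cyboard code), where negative points cover the docking case:

```go
package main

import "fmt"

// BonusGrant records one hand-out of points; a negative Points value
// doubles as the score-docking case mentioned above.
type BonusGrant struct {
	Team   string
	Points int
	Note   string
}

// ApplyBonus adjusts a team's score, returning an error for unknown
// teams, which the modal's submit handler would surface.
func ApplyBonus(scores map[string]int, g BonusGrant) error {
	if _, ok := scores[g.Team]; !ok {
		return fmt.Errorf("unknown team %q", g.Team)
	}
	scores[g.Team] += g.Points
	return nil
}

func main() {
	scores := map[string]int{"team1": 400}
	ApplyBonus(scores, BonusGrant{Team: "team1", Points: 50, Note: "helped staff reset a switch"})
	fmt.Println(scores["team1"]) // 450
}
```

Persisting the Note alongside the grant gives an audit trail for why any score moved.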
A great deal of our maintenance could be cut down if the application performed hot reloading of the service check config, checks.toml. This way, when the user updates the checks that should run, the server does not need to be restarted; it simply becomes aware of the new state it should operate under.
The config lib we use, viper, supports this easily. Just be sure to avoid nasty race conditions by preventing the vars the config populates from being reloaded in the middle of, e.g., a round of service checks.
I don't believe this is possible for our web server options, config.toml, at least not in a way that does not interrupt service. Even if done programmatically, the server would still have to restart itself to do something like bind to a different interface or port for hosting.
Where are the docs?
Competitions are likely to have a break for a meal, or between days. During that period, checking should not occur.
Currently, we have to manually stop and then resume the server for these times.
Instead, a list of breaks should be an available config option, with a pair of stop & resume dates.
As our main event grows larger, performance is becoming a bigger concern.
There appears to be room for improvement around the WebSocket handlers. A new caching strategy would go a long way to help us out here; the current method scales linearly.
I would love to see benchmarks backing up any work done on this. There are tools [1] to help out with this.
[1]: thor
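One caching shape worth benchmarking: encode the scoreboard once per update and fan the same byte slice out to every connection, so the per-client cost is a copy rather than a re-encode. This is only a sketch of the idea, using channels as stand-ins for the real websocket connections:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// broadcast marshals the scoreboard once and sends the identical byte
// slice to every client channel, instead of re-encoding per connection.
func broadcast(clients []chan []byte, scoreboard interface{}) error {
	payload, err := json.Marshal(scoreboard)
	if err != nil {
		return err
	}
	for _, c := range clients {
		select {
		case c <- payload:
		default: // drop rather than block the scoring loop on a slow client
		}
	}
	return nil
}

func main() {
	c := make(chan []byte, 1)
	broadcast([]chan []byte{c}, map[string]int{"team1": 500})
	fmt.Println(string(<-c))
}
```

A thor run before and after a change like this would show whether encoding is actually the bottleneck.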
Points are awarded for each successful check of a service. The amount awarded is currently configured per interval, e.g.:
[checks.2]
check_name = "web"
points = [ 10, 0, 0 ]
# Where '10' is on success, and then 0 on 'partial', and 0 on 'failure'
This format completely fails at allowing admins to easily say and enact "the web service should be worth 500 pts over the competition". While there is a use for the current naive format of increments, we have had trouble in the past with easily calculating the correct score that should be given over the whole competition.
Unfortunately, calculating the discrete individual amount of awarded points is not as simple as taking the desired total over the competition divided by the interval rate, because certain checks are not enabled at the start of the competition. Additionally, #12 would cause this to be even more effort to get correct.
A dirty, but workable, solution would be to peek into the results table to get the earliest posted result for each check, and use that timestamp to programmatically determine when a check was enabled. Then you could get the proportion to award per discrete check attempt.
The web client's JavaScript has 'wss' hard-coded in, which is, most importantly, amateur, and also breaks development without an SSL cert.
The biggest standout is that, currently, no logging is saved for any sort of analysis afterwards. This is an easy thing to change.
Additionally, it may be worth it for monitoring purposes to move from Go's built-in log pkg to logrus.
Since last year, there have been a handful of changes to the main website for CNY Hackathon, including the new logo (the one with the fire, and the USB symbol that evokes the feeling of a murder weapon from Clue). To avoid clashing with the design of the hackathon homepage, our CSS should be updated to mirror it.
The main website's design is now very minimal, opts for only 2 tones of blue for links / buttons, and pure black text on white background. From some scrounging, it seems to be the "Frank" WordPress theme, which is a no-frills wp theme. More detail here.
The logo, as you can see, has that red-orange, along with many more tones of white to gray to black. I'd like to try out a theme that has those colors, if only because it gives us more to work with.
Strictly speaking, the Scoring website should simply move away from the current dark theme, to a light theme. That would be enough to reduce the shock when contestants likely begin the competition by clicking a link on the CNY Hackathon homepage that leads to the Scoring website.
This can be time-based on the first challenge submitted. The reason for this is that teams will sit on flags and wait until the very end.
Service monitor scripts should be able to use the team's name and IP together. This would be for something like an SSH banner grab, where the content must contain the team's name. At the moment, only the team's IP is available to the service scripts.
Right now, arguments to scripts are configured as a raw string, where the literal token "IP" is replaced by the IP addr of the team being evaluated. Example:
[[checks]]
# ...
args = "-I IP -t 5"
While this has worked, it feels flimsy. A golang text/template format string could do the job of substituting {{ .Name }} and {{ .IP }}. That is just one solution, though.
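For the text/template route, the substitution is only a few lines. A sketch, with RenderArgs and the Team fields as hypothetical names:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// Team carries the fields an args template can reference; only IP is
// available today, and Name is the addition this issue asks for.
type Team struct {
	Name string
	IP   string
}

// RenderArgs substitutes the team into a text/template-style args
// string, replacing the raw "IP" token scheme shown above.
func RenderArgs(argsTmpl string, t Team) (string, error) {
	tmpl, err := template.New("args").Parse(argsTmpl)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, t); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	out, _ := RenderArgs("-I {{ .IP }} -b {{ .Name }} -t 5", Team{Name: "team1", IP: "10.0.1.5"})
	fmt.Println(out) // -I 10.0.1.5 -b team1 -t 5
}
```

Parsing templates once at config load (rather than per check) would also surface bad args strings early.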
Savvy contestants who know their way around the IP address block can lock down a service such that only the Scoring Engine can reach through the firewall, leaving red team with no options. While it's great that the participant is demonstrating defensive networking skills, they would not be able to do this in a production IT shop with real customers. To this effect, we would like service checks to be performed over multiple networks. Blue teams should be black-listing bad actors, not white-listing when their services are supposed to be public.
The initial idea I had for this, would have been to permit the configuration of other VMs as worker nodes to the Service Checker. The master would schedule nodes to run a check against each team, and report back the results.
Adding whole other machines distributes the work load, but this also brings all the complexity of distributed systems into the mix (implementing fail over, networking, etc).
You can do this in a crude way right now, with no application changes, by deploying multiple Service Checkers and coordinating toggles of which checks are run on each VM. This could be done using something like SaltStack or even pssh to orchestrate changes to the checks.toml file.
Continued research led me to "Linux Network Namespaces", which would allow the service checks to be run on different networks without updates to the check scripts themselves. I've found several posts detailing this feature or using comparable methods, such as cgroups:
My idea is to add a config file option which provides a set of network namespaces. One would be randomly selected before each check is run. Then, the script would be wrapped in an ip netns exec <ns> ... call, using the same ns for all teams.
This solution would be restricted to Linux environments. To support BSD, jails would be the tech to look at, but let's stick to Linux for now.
If there are any other ideas to achieve this, I'm all ears. There are definitely trade-offs to weigh.