tfns / ctfnote
CTFNote is a collaborative tool aiming to help CTF teams organise their work.
License: GNU General Public License v3.0
I've been trying to set up CTFNote on my server, but every time I try to open a pad for any task, it simply opens an iframe of the index page.
The only change I made was switching the port from 80 to 8080.
When upgrading the TFNS production database, I had to do a few additional tasks:
ALTER USER ctfnote WITH PASSWORD 'ctfnote';
This needs to be documented before public announcement
From #68
Reload incoming CTFs when changed
This is still a bug in the new CTFNote: the view is not updated when a new CTF is added. This is probably related to this issue.
@JJ-8's fix is JJ-8@1505ef7
For example, slugifying '15LXnCDXlNeq16fXpNeUCg==' (base64-encoded Unicode) results in '' and cannot be used as a URL.
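The failure mode can be sketched with a typical slugifier (illustrative code, not CTFNote's actual implementation): a title made only of non-Latin characters collapses to the empty string.

```javascript
// Illustrative slugifier (hypothetical, not CTFNote's actual code): keep only
// [a-z0-9], collapse everything else into hyphens, trim hyphens at the ends.
function slugify(s) {
  return s
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-")
    .replace(/^-+|-+$/g, "");
}

// A Latin title slugs fine, but a purely non-Latin title (here, the Hebrew
// text behind the base64 string from the issue) collapses to "".
const title = Buffer.from("15LXnCDXlNeq16fXpNeUCg==", "base64").toString("utf8");
console.log(slugify("Hello World")); // "hello-world"
console.log(slugify(title));         // ""
```

A guard that falls back to, say, the task id when the slug comes out empty would avoid generating unusable URLs.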
Edit: sorry, I opened this issue in the wrong repo :^)
My team and many other teams I know of use Discord servers to communicate. If this tool could automatically:
I think that'd be really useful.
I'm not quite sure how it'd be implemented, but it'd be a cool feature nonetheless.
The page listing all challenges makes it very difficult to get a good overview and find a specific challenge. It would be nice if the challenges were grouped by category, or at least if there were an option to group them by category, to make them easier to find.
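Grouping could be as simple as bucketing the flat task list by category before rendering; a minimal sketch (illustrative names, not CTFNote's actual code):

```javascript
// Bucket tasks by category so the view can render one section per category.
function groupByCategory(tasks) {
  const groups = new Map();
  for (const task of tasks) {
    const key = (task.category || "uncategorized").toLowerCase();
    if (!groups.has(key)) groups.set(key, []);
    groups.get(key).push(task);
  }
  return groups;
}

const grouped = groupByCategory([
  { title: "a", category: "web" },
  { title: "b", category: "web" },
  { title: "c", category: "pwn" },
]);
console.log(grouped.get("web").length); // 2
```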
When CTFNote is configured with a domain (for Hedgedoc's CSP), new tasks store the wrong URL.
This happens if padUrl starts with something other than /: CTFNote strips the first char and replaces it with the PAD_SHOW_URL (e.g. /ttp://127.0.0.1/pad/foobar).
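The faulty logic can be reproduced with a short sketch (names are illustrative, not the actual implementation):

```javascript
// Sketch of the bug: the code assumes padUrl starts with "/" and blindly
// swaps the first character for PAD_SHOW_URL.
const PAD_SHOW_URL = "/"; // illustrative value
function displayedPadUrl(padUrl) {
  return PAD_SHOW_URL + padUrl.slice(1);
}

console.log(displayedPadUrl("/pad/foobar"));                 // "/pad/foobar" — ok
console.log(displayedPadUrl("http://127.0.0.1/pad/foobar")); // "/ttp://127.0.0.1/pad/foobar" — mangled
```

A fix would be to only strip the first character when it actually is /.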
When deploying the dev versions of the third-party containers (hedgedoc and database), the following CSP errors are encountered:
The issue can be quickly fixed by adding the CMD_CSP_ENABLE=false env var to the docker-compose.dev.yml configuration file.
I think we don't need the CSP in the dev version.
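A minimal sketch of the override, assuming the Hedgedoc service is named hedgedoc in docker-compose.dev.yml:

```yaml
# docker-compose.dev.yml (dev only): disable Hedgedoc's CSP
services:
  hedgedoc:
    environment:
      - CMD_CSP_ENABLE=false
```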
When importing a relatively large CTFd JSON file (~55kb), the API consumes a lot of CPU and RAM (it managed to exhaust 4GB of RAM and hang the API thread).
The tasks do get imported, but it only got through ~15 tasks before running out of memory.
The JSON I used (obtained from the picoCTF 2019 API):
https://pastebin.com/YCUMh1qM
Logs from the API container:
https://pastebin.com/VipdguRH
Hello there. I just wanted to test this interesting project on my rpi4, and it fails while building.
UPDATE: node-pre-gyp http 403 status code downloading tarball https://mapbox-node-binary.s3.amazonaws.com/sqlite3/v5.0.0/napi-v3-linux-arm64.tar.gz
So because sqlite3 is not available it obviously fails, and there is no Python executable either, which is rather odd.
Info [email protected]: The platform "linux" is incompatible with this module.
info "[email protected]" is an optional dependency and failed compatibility check. Excluding it from installation.
[3/4] Linking dependencies...
[4/4] Building fresh packages...
info Visit https://yarnpkg.com/en/docs/cli/install for documentation about this command.
error /usr/src/app/node_modules/sqlite3: Command failed.
Exit code: 1
Command: node-pre-gyp install --fallback-to-build
Arguments:
Directory: /usr/src/app/node_modules/sqlite3
Output:
node-pre-gyp info it worked if it ends with ok
node-pre-gyp info using [email protected]
node-pre-gyp info using [email protected] | linux | arm64
node-pre-gyp WARN Using request for node-pre-gyp https download
node-pre-gyp info check checked for "/usr/src/app/node_modules/sqlite3/lib/binding/napi-v3-linux-arm64/node_sqlite3.node" (not found)
node-pre-gyp http GET https://mapbox-node-binary.s3.amazonaws.com/sqlite3/v5.0.0/napi-v3-linux-arm64.tar.gz
node-pre-gyp http 403 https://mapbox-node-binary.s3.amazonaws.com/sqlite3/v5.0.0/napi-v3-linux-arm64.tar.gz
node-pre-gyp WARN Tried to download(403): https://mapbox-node-binary.s3.amazonaws.com/sqlite3/v5.0.0/napi-v3-linux-arm64.tar.gz
node-pre-gyp WARN Pre-built binaries not found for [email protected] and [email protected] (node-v83 ABI, musl) (falling back to source compile with node-gyp)
node-pre-gyp http 403 status code downloading tarball https://mapbox-node-binary.s3.amazonaws.com/sqlite3/v5.0.0/napi-v3-linux-arm64.tar.gz
gyp info it worked if it ends with ok
gyp info using [email protected]
gyp info using [email protected] | linux | arm64
gyp info ok
gyp info it worked if it ends with ok
gyp info using [email protected]
gyp info using [email protected] | linux | arm64
gyp ERR! configure error
gyp ERR! stack Error: Can't find Python executable "python", you can set the PYTHON env variable.
gyp ERR! stack at PythonFinder.failNoPython (/usr/src/app/node_modules/node-gyp/lib/configure.js:484:19)
gyp ERR! stack at PythonFinder.<anonymous> (/usr/src/app/node_modules/node-gyp/lib/configure.js:406:16)
gyp ERR! stack at F (/usr/src/app/node_modules/which/which.js:68:16)
gyp ERR! stack at E (/usr/src/app/node_modules/which/which.js:80:29)
gyp ERR! stack at /usr/src/app/node_modules/which/which.js:89:16
gyp ERR! stack at /usr/src/app/node_modules/isexe/index.js:42:5
gyp ERR! stack at /usr/src/app/node_modules/isexe/mode.js:8:5
gyp ERR! stack at FSReqCallback.oncomplete (fs.js:176:21)
gyp ERR! System Linux 5.4.0-1015-raspi
gyp ERR! command "/usr/local/bin/node" "/usr/src/app/node_modules/node-gyp/bin/node-gyp.js" "configure" "--fallback-to-build" "--module=/usr/src/app/node_modules/sqlite3/lib/binding/napi-v3-linux-arm64/node_sqlite3.node" "--module_name=node_sqlite3" "--module_path=/usr/src/app/node_modules/sqlite3/lib/binding/napi-v3-linux-arm64" "--napi_version=6" "--node_abi_napi=napi" "--napi_build_version=3" "--node_napi_label=napi-v3"
gyp ERR! cwd /usr/src/app/node_modules/sqlite3
gyp ERR! node -v v14.5.0
gyp ERR! node-gyp -v v3.8.0
gyp ERR! not ok
node-pre-gyp ERR! build error
node-pre-gyp ERR! stack Error: Failed to execute '/usr/local/bin/node /usr/src/app/node_modules/node-gyp/bin/node-gyp.js configure --fallback-to-build --module=/usr/src/app/node_modules/sqlite3/lib/binding/napi-v3-linux-arm64/node_sqlite3.node --module_name=node_sqlite3 --module_path=/usr/src/app/node_modules/sqlite3/lib/binding/napi-v3-linux-arm64 --napi_version=6 --node_abi_napi=napi --napi_build_version=3 --node_napi_label=napi-v3' (1)
node-pre-gyp ERR! stack at ChildProcess.<anonymous> (/usr/src/app/node_modules/sqlite3/node_modules/node-pre-gyp/lib/util/compile.js:83:29)
node-pre-gyp ERR! stack at ChildProcess.emit (events.js:314:20)
node-pre-gyp ERR! stack at maybeClose (internal/child_process.js:1051:16)
node-pre-gyp ERR! stack at Process.ChildProcess._handle.onexit (internal/child_process.js:287:5)
node-pre-gyp ERR! System Linux 5.4.0-1015-raspi
node-pre-gyp ERR! command "/usr/local/bin/node" "/usr/src/app/node_modules/sqlite3/node_modules/.bin/node-pre-gyp" "install" "--fallback-to-build"
node-pre-gyp ERR! cwd /usr/src/app/node_modules/sqlite3
node-pre-gyp ERR! node -v v14.5.0
node-pre-gyp ERR! node-pre-gyp -v v0.11.0
node-pre-gyp ERR! not ok
Failed to execute '/usr/local/bin/node /usr/src/app/node_modules/node-gyp/bin/node-gyp.js configure --fallback-to-build --module=/usr/src/app/node_modules/sqlite3/lib/binding/napi-v3-linux-arm64/node_sqlite3.node --module_name=node_sqlite3 --module_path=/usr/src/app/node_modules/sqlite3/lib/binding/napi-v3-linux-arm64 --napi_version=6 --node_abi_napi=napi --napi_build_version=3 --node_napi_label=napi-v3' (1)
ERROR: Service 'api' failed to build: The command '/bin/sh -c yarn install' returned a non-zero code: 1
I can't docker-compose CTFNote on my VPS; here are the logs and files:
root@PubBuntu:~/CTFNote# docker-compose up -d --build
Building db
Step 1/2 : FROM postgres:14
---> e94a3bb61224
Step 2/2 : COPY create-multiple-postgresql-databases.sh /docker-entrypoint-initdb.d/create-multiple-postgresql-databases.sh
---> Using cache
---> aee0bfef7d1a
Successfully built aee0bfef7d1a
Successfully tagged ctfnote_db:latest
Building api
Step 1/27 : ARG NODE_ENV="production"
Step 2/27 : FROM node:16-alpine as builder
---> 710c8aa630d5
Step 3/27 : ARG NODE_ENV
---> Using cache
---> f9713ada2f4e
Step 4/27 : COPY package.json yarn.lock tsconfig.json start.sh /app/
---> Using cache
---> 9ce0ca6840a8
Step 5/27 : WORKDIR /app/
---> Using cache
---> 775f55525e2b
Step 6/27 : RUN yarn install --frozen-lockfile --production=false --no-progress
---> Using cache
---> 0c67b81de709
Step 7/27 : COPY src/ /app/src/
---> Using cache
---> e7c4d56244e5
Step 8/27 : COPY migrations/ /app/migrations/
---> Using cache
---> 932106bc3d9f
Step 9/27 : RUN yarn run build
---> Using cache
---> 0625bc5270aa
Step 10/27 : FROM node:16-alpine as clean
---> 710c8aa630d5
Step 11/27 : ARG NODE_ENV
---> Using cache
---> f9713ada2f4e
Step 12/27 : COPY --from=builder /app/package.json /app/yarn.lock /app/start.sh /app/
---> Using cache
---> 8eb318c725b2
Step 13/27 : COPY --from=builder /app/dist/ /app/dist/
---> Using cache
---> 07d67540af01
Step 14/27 : COPY --from=builder /app/migrations/ /app/migrations/
---> Using cache
---> 3787a522e2ae
Step 15/27 : FROM node:16-alpine
---> 710c8aa630d5
Step 16/27 : ARG NODE_ENV
---> Using cache
---> f9713ada2f4e
Step 17/27 : EXPOSE 3000
---> Using cache
---> c689b056f292
Step 18/27 : WORKDIR /app/
---> Using cache
---> b7612c97836e
Step 19/27 : COPY --from=clean /app/ /app/
---> Using cache
---> 42b27d846142
Step 20/27 : RUN yarn install --frozen-lockfile --production=true --no-progress
---> Using cache
---> 3fd1deac3937
Step 21/27 : RUN chmod -R 0555 .
---> Using cache
---> ad1b5e21749b
Step 22/27 : RUN mkdir /app/uploads
---> Using cache
---> ba8ed7170503
Step 23/27 : RUN chown node /app/uploads
---> Using cache
---> b97814ba9fca
Step 24/27 : ENV GRAPHILE_TURBO=1
---> Using cache
---> 5dc158df4aef
Step 25/27 : ENV NODE_ENV=$NODE_ENV
---> Using cache
---> f2b49be54b6b
Step 26/27 : USER node
---> Using cache
---> ebf8942720c9
Step 27/27 : CMD ./start.sh db 5432 yarn start
---> Using cache
---> 58672dd09a4c
Successfully built 58672dd09a4c
Successfully tagged ctfnote_api:latest
Building front
Step 1/12 : FROM node:16-alpine AS build-deps
---> 710c8aa630d5
Step 2/12 : RUN mkdir -p /usr/src/app
---> Using cache
---> 24cd8b5f5324
Step 3/12 : COPY package.json quasar.conf.js .eslintrc.js .eslintignore tsconfig.json .postcssrc.js yarn.lock babel.config.js quasar.extensions.json /usr/src/app/
---> Using cache
---> eeb3817307d4
Step 4/12 : RUN cd /usr/src/app && yarn install
---> Using cache
---> a7817dccba14
Step 5/12 : COPY src /usr/src/app/src
---> Using cache
---> 8f6724d17868
Step 6/12 : COPY public /usr/src/app/public
---> Using cache
---> 4562c8cd9267
Step 7/12 : WORKDIR /usr/src/app
---> Using cache
---> 235dd15200e9
Step 8/12 : RUN yarn build
---> Running in f05795e5e70d
yarn run v1.22.15
$ quasar build
Build mode........ spa
Pkg quasar........ v2.3.3
Pkg @quasar/app... v3.2.3
Pkg webpack....... v5
Debugging......... no
Publishing........ no
Configured browser support (>= 89.37% of global marketshare):
· Chrome for Android >= 96
· Firefox for Android >= 94
· Android >= 96
· Chrome >= 87
· Edge >= 93
· Firefox >= 85
· iOS >= 11.3-11.4
· Opera >= 77
· Safari >= 12.1
App • Running "@quasar/apollo" Quasar App Extension...
App • Running "@quasar/qmarkdown" Quasar App Extension...
App • Extension(@quasar/apollo): Extending quasar.conf...
App • Extension(@quasar/qmarkdown): Extending quasar.conf...
App • Extension(@quasar/qmarkdown): Chaining "UI" Webpack config
App Extension (qmarkdown) Info: 'Adding markdown loader to chainWebpack'
App Extension (qmarkdown) Info: 'Adding .vmd (vue+markdown) loader to chainWebpack'
App • Chaining "UI" Webpack config
App • Cleaned build artifact: "/usr/src/app/dist/spa"
/usr/src/app/node_modules/fork-ts-checker-webpack-plugin/lib/rpc/rpc-ipc/RpcIpcMessagePort.js:19
closedError = new RpcIpcMessagePortClosedError_1.RpcIpcMessagePortClosedError(code
^
[ RpcIpcMessagePortClosedError: Process 38 exited [SIGKILL].
- RpcIpcMessagePort.js:19
[app]/[fork-ts-checker-webpack-plugin]/lib/rpc/rpc-ipc/RpcIpcMessagePort.js: 19:23
- Generator.next
- RpcIpcMessagePort.js:8
[app]/[fork-ts-checker-webpack-plugin]/lib/rpc/rpc-ipc/RpcIpcMessagePort.js: 8:71
- new Promise
- RpcIpcMessagePort.js:4 __awaiter
[app]/[fork-ts-checker-webpack-plugin]/lib/rpc/rpc-ipc/RpcIpcMessagePort.js: 4:12
- RpcIpcMessagePort.js:18 ChildProcess.handleExit
[app]/[fork-ts-checker-webpack-plugin]/lib/rpc/rpc-ipc/RpcIpcMessagePort.js: 18:42
- node:events:390 ChildProcess.emit
node:events:390:28
- child_process:290 Process.ChildProcess._handle.onexit
node:internal/child_process:290:12
] {
code: null,
signal: 'SIGKILL'
}
error Command failed with exit code 1.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
ERROR: Service 'front' failed to build: The command '/bin/sh -c yarn build' returned a non-zero code: 1
.env:
# Pad
# Secure: we're using HTTPS
CMD_PROTOCOL_USESSL=true
# Domain: you need to define this if you wish to enable any options
CMD_DOMAIN=ctf.[REDACTED]:8080
# Content-Security Policy
# CMD_CSP_ENABLE=true
# Upload on filesystem (instead of imgur)
CMD_IMAGE_UPLOAD_TYPE=filesystem
Note: I got a few of these messages when building:
warning " > @apollo/[email protected]" has unmet peer dependency "graphql@^14.0.0 || ^15.0.0 || ^16.0.0".
warning "@apollo/client > @graphql-typed-document-node/[email protected]" has unmet peer dependency "graphql@^0.8.0 || ^0.9.0 || ^0.10.0 || ^0.11.0 || ^0.12.0 || ^0.13.0 || ^14.0.0 || ^15.0.0 || ^16.0.0".
warning "@apollo/client > [email protected]" has unmet peer dependency "graphql@^0.9.0 || ^0.10.0 || ^0.11.0 || ^0.12.0 || ^0.13.0 || ^14.0.0 || ^15.0.0 || ^16.0.0".
warning "@quasar/quasar-app-extension-apollo > @vue/[email protected]" has unmet peer dependency "vue@^2.6.0 || ^3.1.0".
API uses node:15-buster
Front uses node:16-alpine
since 3f72460
Ideally, we want both containers to be the same so we save bandwidth.
I prefer node:16-alpine
but API requires /bin/bash
for wait-for-it.sh
(lots of bashisms).
Using ash
instead of bash
does not work.
Do we move to a Debian-based container? Do we install bash on the API?
Is it possible to host the docker container through Github pages?
Hi! I am the owner of the CTFNote organisation, a project almost identical to this one.
As my project has been abandoned (loss of motivation and energy for the project), it has been archived, and I am willing to give away the org.
Since this project is pretty big, splitting the individual components (frontend, backend, docs?) into individual repos could potentially be a good idea. As such, I'm proposing that this repo is moved over to my org, and ownership of the org is transferred over to you.
Alternatively, as the project's main branch hasn't been touched in four months, if continuing to maintain the project isn't wanted or desired by you, I could possibly take over the repo and maintain it. There is also the option of me forking the project and moving it over myself.
Hello!
Amazing work with the new main branch! Really cool to see the Vue 3 / Quasar 2 / TypeScript migration and great to see that there is activity again!
While you were working on the graphql branch, I continued development on my own fork to get new features merged faster without disturbing your work on the new version. Since it is merged to master, I am willing to fork again and put my own changes on top of it. But my question is: should I create PRs for the changes or not? I really like contributing to the upstream repo, but only if my changes will actually be merged (otherwise it only takes time to create branches and PRs).
It is about the following changes:
I am curious if you like these changes, if I should contribute them to upstream and if the changes will be merged in a reasonable amount of time.
Commit: 22bd652
The wording when deleting a task is wrong.
It should not say that deleting a task will delete all the tasks.
API container runs as root.
This does not follow best practices.
It would be nice to have a way to search CTFs and tasks by title or description.
When we recently used CTFNote, for some reason we ended up with the same category spelled in different ways. This was probably user error, but it would be nice if some of the standard categories had aliases that mapped to each other, for example:
so if you accidentally input a challenge as "pwnable", it will automatically be mapped to "pwn". Of course there might be other categories, so the category name would still be free text, but these few aliases would cover the vast majority of cases.
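A hedged sketch of such an alias table (the mappings below are examples, not an agreed-upon list):

```javascript
// Example category-alias table: common alternative spellings normalize to a
// canonical name; anything unknown stays free text.
const CATEGORY_ALIASES = {
  pwnable: "pwn",
  pwnables: "pwn",
  reversing: "reverse",
  rev: "reverse",
  cryptography: "crypto",
};

function normalizeCategory(name) {
  const key = name.trim().toLowerCase();
  return CATEGORY_ALIASES[key] || key; // unknown categories stay free text
}

console.log(normalizeCategory("Pwnable")); // "pwn"
console.log(normalizeCategory("OSINT"));   // "osint" (free text preserved)
```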
Installing this on my Rpi Zero. Getting the following error.
`pi@raspberrypi:~/CTFNote $ sudo docker-compose up -d
Building api
Step 1/10 : FROM node:14.5.0-alpine3.10
---> 26cac906db96
Step 2/10 : RUN mkdir -p /usr/src/app && apk add openssh-client g++ make python
---> Running in 76de4a468273
ERROR: Service 'api' failed to build: The command '/bin/sh -c mkdir -p /usr/src/app && apk add openssh-client g++ make python' returned a non-zero code: 139`
Uploading a logo when creating (not importing) a CTF converts it to a data: URI.
The resulting URL is too big for the database.
We either need to write an upload script or upload the logos to the Internet.
It would be nice to be able to add one or more tags to a task. Say you have a task in the Web category but want to tag it as "XSS", so during a CTF you can search for past tasks you've done with that particular tag.
Using the import function of CTFNote will not work if the uploaded data contains tasks that have already been imported.
This is typically encountered when importing challenges once and importing an updated list of challenges after new tasks are released.
Steps to reproduce:
1. Import a dump containing foo: foo is created.
2. Import a dump containing foo and bar: bar is not imported because foo already exists.

Logs
This happened on commit a0f94e1 (current master)
db_1 | 2021-03-20 17:46:16.487 UTC [231] ERROR: duplicate key value violates unique constraint "UQ_7e28b6f481038cc247d976df847"
db_1 | 2021-03-20 17:46:16.487 UTC [231] DETAIL: Key ("ctfId", slug)=(1, welcome) already exists.
db_1 | 2021-03-20 17:46:16.487 UTC [231] STATEMENT: UPDATE "task" SET "ctfId" = $2 WHERE "id" = $1
db_1 | 2021-03-20 17:46:16.490 UTC [231] ERROR: current transaction is aborted, commands ignored until end of transaction block
db_1 | 2021-03-20 17:46:16.490 UTC [231] STATEMENT: UPDATE "task" SET "ctfId" = $2 WHERE "id" = $1
db_1 | 2021-03-20 17:46:16.493 UTC [231] ERROR: current transaction is aborted, commands ignored until end of transaction block
db_1 | 2021-03-20 17:46:16.493 UTC [231] STATEMENT: UPDATE "task" SET "ctfId" = $2 WHERE "id" = $1
db_1 | 2021-03-20 17:46:16.495 UTC [231] ERROR: current transaction is aborted, commands ignored until end of transaction block
db_1 | 2021-03-20 17:46:16.495 UTC [231] STATEMENT: UPDATE "task" SET "ctfId" = $2 WHERE "id" = $1
db_1 | 2021-03-20 17:46:16.498 UTC [231] ERROR: current transaction is aborted, commands ignored until end of transaction block
db_1 | 2021-03-20 17:46:16.498 UTC [231] STATEMENT: UPDATE "task" SET "ctfId" = $2 WHERE "id" = $1
db_1 | 2021-03-20 17:46:16.499 UTC [231] ERROR: current transaction is aborted, commands ignored until end of transaction block
db_1 | 2021-03-20 17:46:16.499 UTC [231] STATEMENT: UPDATE "task" SET "ctfId" = $2 WHERE "id" = $1
db_1 | 2021-03-20 17:46:16.502 UTC [231] ERROR: current transaction is aborted, commands ignored until end of transaction block
db_1 | 2021-03-20 17:46:16.502 UTC [231] STATEMENT: UPDATE "task" SET "ctfId" = $2 WHERE "id" = $1
db_1 | 2021-03-20 17:46:16.504 UTC [231] ERROR: current transaction is aborted, commands ignored until end of transaction block
db_1 | 2021-03-20 17:46:16.504 UTC [231] STATEMENT: UPDATE "task" SET "ctfId" = $2 WHERE "id" = $1
db_1 | 2021-03-20 17:46:16.506 UTC [231] ERROR: current transaction is aborted, commands ignored until end of transaction block
db_1 | 2021-03-20 17:46:16.506 UTC [231] STATEMENT: UPDATE "task" SET "ctfId" = $2 WHERE "id" = $1
db_1 | 2021-03-20 17:46:16.508 UTC [231] ERROR: current transaction is aborted, commands ignored until end of transaction block
db_1 | 2021-03-20 17:46:16.508 UTC [231] STATEMENT: UPDATE "task" SET "ctfId" = $2 WHERE "id" = $1
db_1 | 2021-03-20 17:46:16.511 UTC [231] ERROR: current transaction is aborted, commands ignored until end of transaction block
db_1 | 2021-03-20 17:46:16.511 UTC [231] STATEMENT: UPDATE "task" SET "ctfId" = $2 WHERE "id" = $1
db_1 | 2021-03-20 17:46:16.513 UTC [231] ERROR: current transaction is aborted, commands ignored until end of transaction block
db_1 | 2021-03-20 17:46:16.513 UTC [231] STATEMENT: UPDATE "task" SET "ctfId" = $2 WHERE "id" = $1
db_1 | 2021-03-20 17:46:16.515 UTC [231] ERROR: current transaction is aborted, commands ignored until end of transaction block
db_1 | 2021-03-20 17:46:16.515 UTC [231] STATEMENT: UPDATE "task" SET "ctfId" = $2 WHERE "id" = $1
api_1 | (node:1) UnhandledPromiseRejectionWarning: QueryFailedError: duplicate key value violates unique constraint "UQ_7e28b6f481038cc247d976df847"
api_1 | at new QueryFailedError (/usr/src/app/node_modules/typeorm/error/QueryFailedError.js:11:28)
api_1 | at Query.callback (/usr/src/app/node_modules/typeorm/driver/postgres/PostgresQueryRunner.js:235:38)
api_1 | at Query.handleError (/usr/src/app/node_modules/pg/lib/query.js:128:19)
api_1 | at Client._handleErrorMessage (/usr/src/app/node_modules/pg/lib/client.js:335:17)
api_1 | at Connection.emit (events.js:314:20)
api_1 | at /usr/src/app/node_modules/pg/lib/connection.js:115:12
api_1 | at Parser.parse (/usr/src/app/node_modules/pg-protocol/dist/parser.js:40:17)
api_1 | at Socket.<anonymous> (/usr/src/app/node_modules/pg-protocol/dist/index.js:10:42)
api_1 | at Socket.emit (events.js:314:20)
api_1 | at addChunk (_stream_readable.js:304:12)
api_1 | (node:1) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag `--unhandled-rejections=strict` (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 4)
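One way out, sketched here under assumed data shapes (tasks identified by slug within a CTF), is to filter already-imported tasks before inserting, rather than letting the unique constraint abort the whole transaction:

```javascript
// Skip incoming tasks whose slug already exists for this CTF, so re-importing
// an updated CTFd dump only inserts the new tasks.
function filterNewTasks(existingTasks, incomingTasks) {
  const seen = new Set(existingTasks.map((t) => t.slug));
  return incomingTasks.filter((t) => !seen.has(t.slug));
}

const fresh = filterNewTasks(
  [{ slug: "welcome" }],
  [{ slug: "welcome" }, { slug: "heap-pivot" }]
);
console.log(fresh.map((t) => t.slug)); // ["heap-pivot"]
```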
It would be nice to have an endpoint that exports the CTF list as ICS.
Ideally, we would like a link with a unique token (/ctf.ics?key=4314c35c-9d38-40ef-aa18-8e1093fabbfc); otherwise it'll leak the CTF list to outsiders.
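A minimal serializer sketch (field names are assumed from the CTF model; the endpoint and its token check are hypothetical):

```javascript
// Serialize CTFs into a bare-bones ICS calendar. A real endpoint would also
// verify the ?key= token before responding.
function toICS(ctfs) {
  // iCalendar UTC timestamps: 20210420T000000Z
  const fmt = (d) => d.toISOString().replace(/[-:]/g, "").replace(/\.\d{3}/, "");
  const events = ctfs.flatMap((c) => [
    "BEGIN:VEVENT",
    `SUMMARY:${c.title}`,
    `DTSTART:${fmt(c.startTime)}`,
    `DTEND:${fmt(c.endTime)}`,
    "END:VEVENT",
  ]);
  return ["BEGIN:VCALENDAR", "VERSION:2.0", ...events, "END:VCALENDAR"].join("\r\n");
}

const ics = toICS([
  {
    title: "ExampleCTF",
    startTime: new Date("2021-04-20T00:00:00Z"),
    endTime: new Date("2021-04-22T06:00:00Z"),
  },
]);
console.log(ics.includes("DTSTART:20210420T000000Z")); // true
```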
We encountered issues when trying to import challenges. The situation was something like this:
I know this isn't super useful debug info. I'm in the middle of a CTF but afterwards I'll try to recreate it and see if I find any error messages or logs.
The frontend is currently using normal JS, which makes development much harder (compared to when using TS).
I could add TS support to the frontend and submit a PR if that is wanted
Hi,
Thanks for the awesome project!
I have tried both the main and the dev branches, and it looks like the session cookie is not cleared after pressing the Logout button.
Maybe it's something specific to my env.
Can you confirm that it works for you?
Cheers,
PY
In the mobile environment, the UI of the top bar looks broken because its content exceeds the width that can be displayed on one line.
There are five suggestions for resolving this.
CTFNote contains a vulnerable dependency (markdown-it-katex):
# yarn audit
┌───────────────┬──────────────────────────────────────────────────────────────┐
│ high │ Cross-Site Scripting │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Package │ markdown-it-katex │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Patched in │ No patch available │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Dependency of │ vue-markdown │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ Path │ vue-markdown > markdown-it-katex │
├───────────────┼──────────────────────────────────────────────────────────────┤
│ More info │ https://www.npmjs.com/advisories/1466 │
└───────────────┴──────────────────────────────────────────────────────────────┘
Despite what was originally thought, we are vulnerable to it.
We parse markdown in CTF descriptions and credentials.
Credentials are not a real problem (only admins/captains can edit them), but the CTF description means somebody could drop the payload in the description of a CTF on CTFTime.
Thanks to @tastelessctf I could test that this is exploitable (you do not need kyprizel's confirmation to edit a CTF's description)
Here is a screenshot of the vulnerability (104Kb) exploited to print an alert box.
The package is unmaintained. It has to be replaced.
Creating a new CTF from scratch does not work on the graphql branch.
CTFNote complains about CtfInput.secretsId
not being provided.
Commit: a6420a3
Request:
{
"operationName": "createCtf",
"variables": {
"title": "foo",
"ctfUrl": "https://example.org/",
"startTime": "2021-04-20T00:00:00+08:00",
"endTime": "2021-04-22T06:00:00+08:00",
"description": "test ctf"
},
"query": "mutation createCtf($title: String!, $weight: Float, $ctfUrl: String, $ctftimeUrl: String, $logoUrl: String, $startTime: Datetime, $endTime: Datetime, $description: String) {\n createCtf(\n input: {ctf: {title: $title, weight: $weight, ctfUrl: $ctfUrl, ctftimeUrl: $ctftimeUrl, logoUrl: $logoUrl, startTime: $startTime, endTime: $endTime, description: $description}}\n ) {\n ctf {\n ...CtfInfo\n __typename\n }\n __typename\n }\n}\n\nfragment CtfInfo on Ctf {\n nodeId\n id\n granted\n ctfUrl\n ctftimeUrl\n description\n endTime\n logoUrl\n startTime\n weight\n title\n __typename\n}\n"
}
Response:
{
"errors": [
{
"message": "Field \"CtfInput.secretsId\" of required type \"Int!\" was not provided.",
"locations": [
{
"line": 3,
"column": 18
}
]
}
]
}
Accessing a CTF requires clicking on its name on the graphql branch.
This is not obvious for a new user; the main branch has an OPEN CTF button.
commit: a6420a3
It would feel more natural to be able to click on the whole card.
A button like the current version (or any other indicator) is fine too
Commit: 22bd652
It is possible to import a task that was already imported before.
CTFNote used to filter tasks that were already imported, so importing a whole CTFd output would not duplicate the tasks.
CTFNote/front/src/apollo/index.ts
Line 11 in 0137694
If the description contains very long lines, the table becomes scrollable and all the important buttons end up out of view...
We can solve this by wrapping the text, but I don't know if this is preferred since it may result in very tall rows. Another way of solving this is to add an expansion button for the description, just like this example: https://quasar.dev/vue-components/table#expanding-rows
Or maybe cut the text at some point, and when the user clicks it, a popup displays the full description?
Originally posted by @JJ-8 in #110 (review)
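The cut-at-some-point option could be as small as a truncation helper (illustrative code, not an existing CTFNote function):

```javascript
// Cut long descriptions for the table cell; the full text can then be shown
// in a dialog or an expanded row on click.
function truncate(text, max = 80) {
  return text.length <= max ? text : text.slice(0, max - 1) + "…";
}

console.log(truncate("short description"));    // unchanged
console.log(truncate("x".repeat(200)).length); // 80
```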
Hi,
Thanks for the awesome project!
The Calendar section is not showing the CTFs I've imported:
There are no errors in the web browser console, and I can't find anything wrong in the nginx logs.
How do I diagnose the problem?
I can see that you are about to merge new functionality into the Calendar (#116), so my other question is: how do I update my instance once you merge the changes?
EDIT:
Just tested on a local instance without nginx and it works:
So the issue could (should) be in the reverse proxy.
Cheers,
PY
It would be nice to have a docker-compose.with-ssl.yml that provides CTFNote plus a reverse proxy with SSL (Let's Encrypt).
It's easy to do with traefik or caddy.
We just need to find an easy way for users to provide the domain name and registration email without messing with the default config.
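A sketch of what such a file could look like with Caddy (service names, env vars, and the front port are assumptions, not the project's actual config):

```yaml
# Hypothetical docker-compose.with-ssl.yml fragment: Caddy terminates TLS and
# obtains Let's Encrypt certificates automatically for the configured domain.
services:
  proxy:
    image: caddy:2
    ports:
      - "80:80"
      - "443:443"
    environment:
      - DOMAIN=${DOMAIN}
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
```

The Caddyfile would then contain something like `{$DOMAIN} { reverse_proxy front:8080 }`, with the domain and ACME email supplied via .env.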
Trying to upload an image on a hedgedoc pad fails.
Logs from the container:
codimd_1 | (node:51) UnhandledPromiseRejectionWarning: TypeError [ERR_INVALID_URL]: Invalid URL: /pad/uploads/
codimd_1 | at onParseError (internal/url.js:257:9)
codimd_1 | at new URL (internal/url.js:333:5)
codimd_1 | at new URL (internal/url.js:330:22)
codimd_1 | at Object.exports.uploadImage (/hedgedoc/lib/web/imageRouter/filesystem.js:28:19)
codimd_1 | at /hedgedoc/lib/web/imageRouter/index.js:63:22
codimd_1 | (node:51) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag `--unhandled-rejections=strict` (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 1)
codimd_1 | (node:51) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
codimd_1 | 2021-02-23T15:04:19.493Z info: 172.20.0.5 - - [23/Feb/2021:15:04:19 +0000] "POST /uploadimage HTTP/1.1" - - "http://127.0.0.1/pad/0r7z3QpWRf2groqjPIphGA" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.182 Safari/537.36"
Reverting to the codimd image is a temporary fix, but does anyone have ideas on how to actually fix it?
Hello,
I'm trying to run this project by cloning the latest version and starting the containers with docker-compose.
While I can create a new CTF, I can't manually create any new tasks under it using the create button.
Am I doing something wrong, or is this a bug?
Thanks
Table display uses <pre>, which renders line breaks faithfully.
No other display does.
We should be consistent: either drop <pre> from the table display, or use <pre> everywhere.
Using <pre> looks better, but we might encounter issues when tasks have multiple lines (see comments in #93).
Front uses a dependency that pulls outdated and vulnerable packages (#143)
5 vulnerabilities are still present in front.
This is caused by eslint-plugin-graphql being deprecated and pulling outdated dependencies.
The official recommendation is to migrate to graphql-eslint. (I will make an issue for that)
The instructions to set up TLS are incomplete and broken.
If you turn CSP on, the pad does not work anymore.
We need to:
- add CMD_PROTOCOL_USESSL to .env and docker-compose.yml
- set X-Forwarded-Proto from the front

In my use case I'm not able to use port 443 for HTTPS, so I'm forced to have a URL like <domain>:<port>. In theory this shouldn't be a problem, but it looks like Hedgedoc doesn't handle a port in the CMD_DOMAIN variable well and ends up generating wrong URLs for some features (in my case the URLs looked like <domain>:<port>:<port>).
This problem could be solved by:
- setting CMD_PORT
- setting the CMD_DOMAIN value (without the port!)
- using the CMD_URL_ADDPORT option to make Hedgedoc add the port to the URL

The thing is that the Nginx used with the front docker image has port 3000 for Hedgedoc hard-coded, so those changes will cause a failure when trying to open a task, for example.
A solution could be to parameterize the Hedgedoc port in the Nginx configuration to take the CMD_PORT value; envsubst can be used to generate the Nginx configuration dynamically before nginx starts.
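The envsubst idea could look like this (template name, path, and location block are illustrative, not the project's actual nginx config):

```nginx
# default.conf.template — rendered at container start with, e.g.:
#   envsubst '$CMD_PORT' < default.conf.template > /etc/nginx/conf.d/default.conf
location /pad/ {
    proxy_pass http://hedgedoc:${CMD_PORT}/;
}
```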
It is not uncommon to have challenges in multiple categories.
For example: reverse + crypto, web + pwn, etc.
The current system allows for a single category.
Filtering by category is awkward
We came across @ISnackable's blog post describing how to run CTFNote on a Heroku instance.
We should consider adding a section to the README for these kinds of tutorials.
Are there more resources like this?
Whenever I start ctfnote_api I get this error
A serious error occurred when building the initial schema. Exiting because `retryOnInitFail` is not set. Error details:
Error: A type naming conflict has occurred - two entities have tried to define the same type 'UsersOrderBy'.
The first entity was:
Adding connection "orderBy" argument for table "ctfnote"."Users". You can rename the table's GraphQL type via a 'Smart Comment':
COMMENT ON TABLE "ctfnote"."Users" IS E'@name newNameHere';
The second entity was:
Adding connection "orderBy" argument for view "ctfnote"."users". You can rename the table's GraphQL type via a 'Smart Comment':
COMMENT ON VIEW "ctfnote"."users" IS E'@name newNameHere';
at Object.addType (/app/node_modules/graphile-build/node8plus/makeNewBuild.js:253:17)
at Object.newWithHooks (/app/node_modules/graphile-build/node8plus/makeNewBuild.js:730:14)
at /app/node_modules/graphile-build-pg/node8plus/plugins/PgConnectionArgOrderBy.js:34:7
at Array.forEach (<anonymous>)
at /app/node_modules/graphile-build-pg/node8plus/plugins/PgConnectionArgOrderBy.js:27:38
at SchemaBuilder.applyHooks (/app/node_modules/graphile-build/node8plus/SchemaBuilder.js:264:20)
at SchemaBuilder.createBuild (/app/node_modules/graphile-build/node8plus/SchemaBuilder.js:325:10)
at SchemaBuilder.buildSchema (/app/node_modules/graphile-build/node8plus/SchemaBuilder.js:333:26)
at Object.exports.createPostGraphileSchema (/app/node_modules/postgraphile-core/node8plus/index.js:226:28)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
error Command failed with exit code 34.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
This also happens on a clean build of CTFNote; I've tried versions 2.0.0, 2.0.1, and the current master.