
lachlan2k / phatcrack

106 stars · 3 watchers · 8 forks · 10.34 MB

Modern web-based distributed hashcracking solution, built on hashcat

License: MIT License

Go 55.06% Shell 0.69% JavaScript 0.17% HTML 0.07% Vue 30.93% CSS 0.08% TypeScript 12.99%
distributed-computing golang hashcat hashcracking pentesting security-tools vue gpu-computing hacking infosec

phatcrack's People

Contributors

lachlan2k


phatcrack's Issues

UI

Big issue to track UI TODO list. Will add/check off things as I go.

  • Wordlist page
    • View running attacks
    • Websockets for live updates
    • Modal wizard for creating new attacks
    • Ability to view logs from running jobs
  • Admin pages
    • User creation
    • Agent creation
    • Wordlist/rulefile management
  • Misc
    • Logout button
    • Password change

Custom Charsets

The backend work is already done; the frontend still needs support for custom charsets on masks.

Total hashrate erroneous when one job fails

Problem:
The total hashrate for any given attack is taken from the last known hashrate of each agent/job. In this case, one job failed (exit code 255, with little else of note) and the hashrate for the failed job never returned to 0.

Job Failed Error

cuEventDestroy(): unspecified launch failure

cuEventDestroy(): unspecified launch failure

cuStreamDestroy(): unspecified launch failure

cuModuleUnload(): unspecified launch failure

cuModuleUnload(): unspecified launch failure


> Error: exit status 255

The wordlist attack shows the following:
(screenshot)

Whilst the totals are shown below:
(screenshot)
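A minimal sketch of one possible fix, using hypothetical stand-ins for the real job model: exclude non-running jobs when summing last-known hashrates, so a job that exited with an error no longer contributes its stale rate.

```go
package main

import "fmt"

// JobStatus and Job are hypothetical stand-ins for phatcrack's real job model.
type JobStatus int

const (
	JobRunning JobStatus = iota
	JobFailed
)

type Job struct {
	Status   JobStatus
	Hashrate int64 // last reported hashrate, H/s
}

// totalHashrate sums only jobs that are still running, so a job that
// exited with an error (e.g. exit status 255) no longer contributes
// its stale last-known rate to the attack total.
func totalHashrate(jobs []Job) int64 {
	var total int64
	for _, j := range jobs {
		if j.Status == JobRunning {
			total += j.Hashrate
		}
	}
	return total
}

func main() {
	jobs := []Job{
		{Status: JobRunning, Hashrate: 1000},
		{Status: JobFailed, Hashrate: 500}, // failed job: should not count
	}
	fmt.Println(totalHashrate(jobs)) // 1000
}
```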

Phatcrack API fails to start on first run

When using the provided docker compose, phatcrack as a whole fails to start properly on first run (the main frontend returns a 502 because the API failed to initialise).

This is likely because postgres takes some time to initialise the first time. The API could retry the connection on failure.

Wordlist & Rulefile Handling

The application should:

  • Allow users to upload wordlists/rulefiles. Large wordlists should be required to be stored in gzip format, and users should supply the total number of lines in the wordlist.
  • Automatically sync wordlists/rulefiles to agents.
  • If a wordlist hasn't yet synced to all agents, it should be greyed out and inaccessible.

Questions to ponder:

  • Should wordfiles/rulefiles have access control on them?
    • People might create custom wordlists for a specific job, so uploading those needs to be low-friction; but we don't want to pollute things with 50 random files lying around.
    • People will then need to delete files they upload, but obviously we don't want them deleting wordlists other people are using.
    • Perhaps admins should be able to "lock" files so they cannot be deleted?

Access controls for wordlists can be added after the fact, so they aren't needed for the MVP, but they're good to bear in mind.

Migrating from 0.1.x to 0.2.x

0.2.x contains a bug fix that requires a manual database migration.

To do this, connect to the phatcrack database instance:

docker exec -it -u postgres phatcrack-db-1 psql -U phatcrack

Then complete this change:

BEGIN;

-- Create temporary table with unique rows
CREATE TEMP TABLE unique_temp_potfile AS
SELECT DISTINCT ON (hash, plaintext_hex, hash_type) *
FROM potfile_entries
ORDER BY hash, plaintext_hex, hash_type, id;

-- Delete original rows
DELETE FROM potfile_entries;

-- Copy the rows back
INSERT INTO potfile_entries SELECT * FROM unique_temp_potfile;

DROP TABLE unique_temp_potfile;

COMMIT;

An issue has been identified that stops postgres from building an index for the potfile; it will be resolved soon. It may cause the following errors to be emitted:

2024/03/25 14:11:09 /app/api/internal/db/db.go:87 ERROR: index row size 2784 exceeds btree version 4 maximum 2704 for index "idx_uniq" (SQLSTATE 54000)
[28.572ms] [rows:0] CREATE UNIQUE INDEX IF NOT EXISTS "idx_uniq" ON "potfile_entries" ("hash","plaintext_hex","hash_type")

Randomly generate password on user creation

Howdy chief,
In the "create user" dialog, it'd be quite neat if a random password was generated automatically, so all you had to do was hit "create".

(screenshot)

This could also be coupled with a checkbox to "force password change on login".
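Generating the suggested password server-side could be as simple as this sketch (generatePassword and the alphabet are illustrative choices, using crypto/rand rather than math/rand so the result is not predictable):

```go
package main

import (
	"crypto/rand"
	"fmt"
	"math/big"
)

// generatePassword returns a random password drawn from a fixed alphabet
// using crypto/rand, suitable for pre-filling the "create user" dialog.
func generatePassword(length int) (string, error) {
	const alphabet = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"
	out := make([]byte, length)
	for i := range out {
		// Pick each character with a uniform, cryptographically secure draw.
		n, err := rand.Int(rand.Reader, big.NewInt(int64(len(alphabet))))
		if err != nil {
			return "", err
		}
		out[i] = alphabet[n.Int64()]
	}
	return string(out), nil
}

func main() {
	pw, err := generatePassword(16)
	fmt.Println(len(pw), err) // 16 <nil>
}
```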

Missing FROM clause in SQL statement

Howdy just getting this error multiple times:

2023/10/25 03:09:12 /app/api/internal/db/job.go:349 ERROR: missing FROM-clause entry for table "attack" (SQLSTATE 42P01)
{"URI":"/api/v1/project/<uuid_omitted>","content_length":"","error":"ERROR: missing FROM-clause entry for table \"attack\" (SQLSTATE 42P01)","error_id":"1cd92282-3ec4-460c-9410-4960937a04bd","latency_ms":2,"level":"error","method":"DELETE" <omitted>

Error Handling is Unclear on the Frontend

Howdy,

When entering in a hashlist with a name that is 4 characters or below, e.g "flop" an error is generated:

{
    "message": "Key: 'HashlistCreateRequestDTO.Name' Error:Field validation for 'Name' failed on the 'min' tag"
}

This is fine; however, it isn't clearly communicated on the frontend:

(screenshot)

This makes it a little hard to diagnose what's wrong.

Version: v0.1.0

Thanks!

Attack Sharding

Attacks like wordlist attacks, mask attacks, etc. should all be "sharded" into multiple jobs for distribution, if possible.

When an attack is started, phatcrack should look at the number of available agents and carve up the job appropriately, based on the attack mode. There are parameters (e.g. --skip and --limit) to tell hashcat to skip the first n entries in a wordlist, which would be appropriate for wordlist-based attacks. We can probably find clever ways of carving up mask attacks too.
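Carving a keyspace into per-agent --skip/--limit pairs could be sketched like this (shard and shardKeyspace are hypothetical names, not phatcrack's actual scheduler):

```go
package main

import "fmt"

// shard describes one agent's slice of the keyspace, expressed with
// hashcat's --skip/--limit semantics.
type shard struct {
	Skip  int64
	Limit int64
}

// shardKeyspace carves a keyspace of `total` candidates into `agents`
// near-equal slices; the first `total % agents` shards take one extra
// candidate, so every candidate is covered exactly once with no overlap.
func shardKeyspace(total int64, agents int) []shard {
	shards := make([]shard, 0, agents)
	base := total / int64(agents)
	rem := total % int64(agents)
	var skip int64
	for i := 0; i < agents; i++ {
		limit := base
		if int64(i) < rem {
			limit++
		}
		shards = append(shards, shard{Skip: skip, Limit: limit})
		skip += limit
	}
	return shards
}

func main() {
	// 10 candidates across 3 agents: slices of 4, 3 and 3.
	fmt.Println(shardKeyspace(10, 3)) // [{0 4} {4 3} {7 3}]
}
```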

Hashlist handling broken when usernames contain spaces

For NetNTLMv2 hashes in particular (where the username forms part of the seed), it is important to be able to handle the parsing of hashes where the usernames contain spaces.

(For other types some preprocessing can overcome this issue).

Creating such a hashlist fails with error 400 (type 5600).

Bug Identified

I have one major problem... This project is way too epic!!!!!!

Improve Admin User Management UI

Implement the following:

  • Edit user roles
  • Create accounts with locked passwords (i.e. for SSO)
  • Allow emails as usernames
  • Remove passwords from user accounts

Job Checkpoint/Restore and Pause/Resume

Hashcat status outputs contain a restore point. If a job dies, or is manually paused, we should save this restore point and be able to start the job again later.

There is a bit of bizarre arithmetic in combining it with the existing --skip/--limit calculations, but it shouldn't be too hard.
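The arithmetic might reduce to something like this sketch (resumeParams is a hypothetical helper; the real calculation depends on how phatcrack stores the original shard and how hashcat reports the restore point):

```go
package main

import "fmt"

// resumeParams recomputes --skip/--limit for a job being restarted.
// restore is how many candidates of this job's slice were already
// exhausted when it died or was paused: the resumed job starts that
// much further into the keyspace and has that much less left to do.
func resumeParams(origSkip, origLimit, restore int64) (skip, limit int64) {
	if restore > origLimit {
		restore = origLimit // job had actually finished; nothing left
	}
	return origSkip + restore, origLimit - restore
}

func main() {
	skip, limit := resumeParams(1000, 500, 200)
	fmt.Println(skip, limit) // 1200 300
}
```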

MFA

  • Implement MFA with WebAuthn/U2F
  • Add configuration option to require users to configure MFA on first login

Error after upgrade to v0.2.1

Howdy, after upgrading and running the project (with db migrations) I get the following log line emitted on startup.

It doesn't seem to cause any issues that anyone has mentioned (and I've started a job and it worked), so I'm not sure if this is something to fix:

2024/03/25 14:11:09 /app/api/internal/db/db.go:87 ERROR: index row size 2784 exceeds btree version 4 maximum 2704 for index "idx_uniq" (SQLSTATE 54000)
[28.572ms] [rows:0] CREATE UNIQUE INDEX IF NOT EXISTS "idx_uniq" ON "potfile_entries" ("hash","plaintext_hex","hash_type")

Panic in-memory session handler

Not sure what triggered this, but unfortunately a concurrent map read/write triggered a panic.

{"hashlist_id":"5ef65b91-d745-4fe3-b330-d68d574b2135","level":"warning","log_type":"audit","msg":"New attack created","project_id":"3e8e5d56-ecd5-4f10-9540-846bd6aeeeb0","project_name":"tests","remote_ip":"192.168.11.2","time":"2023-10-10T03:28:56Z","user_id":"54a814e6-ac0e-4734-8764-c313b268edbe","user_username":"redacted"}
fatal error: concurrent map read and map write

goroutine 12173 [running]:
github.com/lachlan2k/phatcrack/api/internal/auth.(*InMemorySessionHandler).getEntry(0xc000e1ddd0, {0x10f31b0, 0xc000504140})
	/app/api/internal/auth/session_inmemory.go:193 +0xa5
github.com/lachlan2k/phatcrack/api/internal/auth.(*InMemorySessionHandler).CreateMiddleware.func1.1({0x10f31b0, 0xc000504140})
	/app/api/internal/auth/session_inmemory.go:44 +0x73
github.com/lachlan2k/phatcrack/api/internal/auth.CreateHeaderAuthMiddleware.func1.1({0x10f31b0, 0xc000504140})
	/app/api/internal/auth/header_auth_middleware.go:27 +0x1b3
github.com/labstack/echo/v4.(*Echo).add.func1({0x10f31b0, 0xc000504140})
	/go/pkg/mod/github.com/labstack/echo/[email protected]/echo.go:582 +0x4b
github.com/lachlan2k/phatcrack/api/internal/webserver.Listen.Recover.RecoverWithConfig.func5.1({0x10f31b0, 0xc000504140})
	/go/pkg/mod/github.com/labstack/echo/[email protected]/middleware/recover.go:131 +0x119
github.com/labstack/echo/v4/middleware.RequestLoggerConfig.ToMiddleware.func1.1({0x10f31b0, 0xc000504140})
	/go/pkg/mod/github.com/labstack/echo/[email protected]/middleware/request_logger.go:259 +0x16b
github.com/labstack/echo/v4.(*Echo).ServeHTTP(0xc000e606c0, {0x10e6590?, 0xc0003aa1c0}, 0xc000ab4600)
	/go/pkg/mod/github.com/labstack/echo/[email protected]/echo.go:669 +0x399
net/http.serverHandler.ServeHTTP({0xc000b58c90?}, {0x10e6590?, 0xc0003aa1c0?}, 0x6?)
	/usr/local/go/src/net/http/server.go:2938 +0x8e
net/http.(*conn).serve(0xc0003d4e10, {0x10e83c8, 0xc000c18b10})
	/usr/local/go/src/net/http/server.go:2009 +0x5f4
created by net/http.(*Server).Serve in goroutine 1
	/usr/local/go/src/net/http/server.go:3086 +0x5cb

goroutine 1 [IO wait, 1 minutes]:
internal/poll.runtime_pollWait(0x7fac3c12eae8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:343 +0x85
internal/poll.(*pollDesc).wait(0xc00093ea00?, 0x4?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc00093ea00)
	/usr/local/go/src/internal/poll/fd_unix.go:611 +0x2ac
net.(*netFD).accept(0xc00093ea00)
	/usr/local/go/src/net/fd_unix.go:172 +0x29
net.(*TCPListener).accept(0xc000e0ecc0)
	/usr/local/go/src/net/tcpsock_posix.go:152 +0x1e
net.(*TCPListener).AcceptTCP(0xc000e0ecc0)
	/usr/local/go/src/net/tcpsock.go:302 +0x30
github.com/labstack/echo/v4.tcpKeepAliveListener.Accept({0x445fc0?})
	/go/pkg/mod/github.com/labstack/echo/[email protected]/echo.go:989 +0x17
net/http.(*Server).Serve(0xc000e4c0f0, {0x10e63e0, 0xc000df83b0})
	/usr/local/go/src/net/http/server.go:3056 +0x364
github.com/labstack/echo/v4.(*Echo).Start(0xc000e606c0, {0xc000f0e198, 0x5})
	/go/pkg/mod/github.com/labstack/echo/[email protected]/echo.go:686 +0xd2
github.com/lachlan2k/phatcrack/api/internal/webserver.Listen({0xc000014015, 0x4})
	/app/api/internal/webserver/webserver.go:77 +0x98d
main.main()
	/app/api/main.go:80 +0x43f

goroutine 9 [select, 125 minutes]:
database/sql.(*DB).connectionOpener(0xc0005fb040, {0x10e8400, 0xc00085fc70})
	/usr/local/go/src/database/sql/sql.go:1218 +0x87
created by database/sql.OpenDB in goroutine 1
	/usr/local/go/src/database/sql/sql.go:791 +0x165

goroutine 14 [sleep]:
time.Sleep(0xdf8475800)
	/usr/local/go/src/runtime/time.go:195 +0x125
github.com/lachlan2k/phatcrack/api/internal/auth.(*InMemorySessionHandler).janitor(0x0?)
	/app/api/internal/auth/session_inmemory.go:233 +0x27
created by github.com/lachlan2k/phatcrack/api/internal/auth.(*InMemorySessionHandler).CreateMiddleware in goroutine 1
	/app/api/internal/auth/session_inmemory.go:35 +0xae

goroutine 13 [select]:
github.com/lachlan2k/phatcrack/api/internal/fleet.stateReconciliationTask()
	/app/api/internal/fleet/state_reconcilition.go:371 +0x65
created by github.com/lachlan2k/phatcrack/api/internal/fleet.Setup in goroutine 1
	/app/api/internal/fleet/state_reconcilition.go:431 +0x1b1

goroutine 529 [IO wait]:
internal/poll.runtime_pollWait(0x7fac3c12e610, 0x72)
	/usr/local/go/src/runtime/netpoll.go:343 +0x85
internal/poll.(*pollDesc).wait(0xc000039000?, 0xc0001bd000?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc000039000, {0xc0001bd000, 0x1000, 0x1000})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
net.(*netFD).Read(0xc000039000, {0xc0001bd000?, 0x0?, 0xcc26b8?})
	/usr/local/go/src/net/fd_posix.go:55 +0x25
net.(*conn).Read(0xc000306068, {0xc0001bd000?, 0xc000f73610?, 0xc00019af40?})
	/usr/local/go/src/net/net.go:179 +0x45
bufio.(*Reader).fill(0xc000322de0)
	/usr/local/go/src/bufio/bufio.go:113 +0x103
bufio.(*Reader).Peek(0xc000322de0, 0x2)
	/usr/local/go/src/bufio/bufio.go:151 +0x53
github.com/gorilla/websocket.(*Conn).read(0xc00039ec60, 0xc00019b050?)
	/go/pkg/mod/github.com/gorilla/[email protected]/conn.go:371 +0x26
github.com/gorilla/websocket.(*Conn).advanceFrame(0xc00039ec60)
	/go/pkg/mod/github.com/gorilla/[email protected]/conn.go:809 +0x6d
github.com/gorilla/websocket.(*Conn).NextReader(0xc00039ec60)
	/go/pkg/mod/github.com/gorilla/[email protected]/conn.go:1009 +0xb0
github.com/gorilla/websocket.(*Conn).ReadJSON(0xcb3cda?, {0xb4e680, 0xc0013522a0})
	/go/pkg/mod/github.com/gorilla/[email protected]/json.go:50 +0x25
github.com/lachlan2k/phatcrack/api/internal/fleet.(*AgentConnection).Handle(0xc0003d3098)
	/app/api/internal/fleet/agent.go:208 +0x225
github.com/lachlan2k/phatcrack/api/internal/controllers.handleAgentWs({0x10f31b0, 0xc0003b4e60})
	/app/api/internal/controllers/agent_handler.go:86 +0x3a7
github.com/labstack/echo/v4.(*Echo).add.func1({0x10f31b0, 0xc0003b4e60})
	/go/pkg/mod/github.com/labstack/echo/[email protected]/echo.go:582 +0x4b
github.com/lachlan2k/phatcrack/api/internal/webserver.Listen.Recover.RecoverWithConfig.func5.1({0x10f31b0, 0xc0003b4e60})
	/go/pkg/mod/github.com/labstack/echo/[email protected]/middleware/recover.go:131 +0x119
github.com/labstack/echo/v4/middleware.RequestLoggerConfig.ToMiddleware.func1.1({0x10f31b0, 0xc0003b4e60})
	/go/pkg/mod/github.com/labstack/echo/[email protected]/middleware/request_logger.go:259 +0x16b
github.com/labstack/echo/v4.(*Echo).ServeHTTP(0xc000e606c0, {0x10e6590?, 0xc0003400e0}, 0xc000260a00)
	/go/pkg/mod/github.com/labstack/echo/[email protected]/echo.go:669 +0x399
net/http.serverHandler.ServeHTTP({0xc0003829c0?}, {0x10e6590?, 0xc0003400e0?}, 0x6?)
	/usr/local/go/src/net/http/server.go:2938 +0x8e
net/http.(*conn).serve(0xc000e1f200, {0x10e83c8, 0xc000c18b10})
	/usr/local/go/src/net/http/server.go:2009 +0x5f4
created by net/http.(*Server).Serve in goroutine 1
	/usr/local/go/src/net/http/server.go:3086 +0x5cb

goroutine 66 [IO wait]:
internal/poll.runtime_pollWait(0x7fac3c12e8f8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:343 +0x85
internal/poll.(*pollDesc).wait(0xc000b9c000?, 0xc000208000?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc000b9c000, {0xc000208000, 0x1000, 0x1000})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
net.(*netFD).Read(0xc000b9c000, {0xc000208000?, 0x1de000?, 0xcc26b8?})
	/usr/local/go/src/net/fd_posix.go:55 +0x25
net.(*conn).Read(0xc0001303e8, {0xc000208000?, 0xc000b65220?, 0xc0001a6f40?})
	/usr/local/go/src/net/net.go:179 +0x45
bufio.(*Reader).fill(0xc0001143c0)
	/usr/local/go/src/bufio/bufio.go:113 +0x103
bufio.(*Reader).Peek(0xc0001143c0, 0x2)
	/usr/local/go/src/bufio/bufio.go:151 +0x53
github.com/gorilla/websocket.(*Conn).read(0xc000228000, 0xc0001a7050?)
	/go/pkg/mod/github.com/gorilla/[email protected]/conn.go:371 +0x26
github.com/gorilla/websocket.(*Conn).advanceFrame(0xc000228000)
	/go/pkg/mod/github.com/gorilla/[email protected]/conn.go:809 +0x6d
github.com/gorilla/websocket.(*Conn).NextReader(0xc000228000)
	/go/pkg/mod/github.com/gorilla/[email protected]/conn.go:1009 +0xb0
github.com/gorilla/websocket.(*Conn).ReadJSON(0xcb3cda?, {0xb4e680, 0xc000e0e300})
	/go/pkg/mod/github.com/gorilla/[email protected]/json.go:50 +0x25
github.com/lachlan2k/phatcrack/api/internal/fleet.(*AgentConnection).Handle(0xc0000124f8)
	/app/api/internal/fleet/agent.go:208 +0x225
github.com/lachlan2k/phatcrack/api/internal/controllers.handleAgentWs({0x10f31b0, 0xc000504000})
	/app/api/internal/controllers/agent_handler.go:86 +0x3a7
github.com/labstack/echo/v4.(*Echo).add.func1({0x10f31b0, 0xc000504000})
	/go/pkg/mod/github.com/labstack/echo/[email protected]/echo.go:582 +0x4b
github.com/lachlan2k/phatcrack/api/internal/webserver.Listen.Recover.RecoverWithConfig.func5.1({0x10f31b0, 0xc000504000})
	/go/pkg/mod/github.com/labstack/echo/[email protected]/middleware/recover.go:131 +0x119
github.com/labstack/echo/v4/middleware.RequestLoggerConfig.ToMiddleware.func1.1({0x10f31b0, 0xc000504000})
	/go/pkg/mod/github.com/labstack/echo/[email protected]/middleware/request_logger.go:259 +0x16b
github.com/labstack/echo/v4.(*Echo).ServeHTTP(0xc000e606c0, {0x10e6590?, 0xc000214000}, 0xc00020a000)
	/go/pkg/mod/github.com/labstack/echo/[email protected]/echo.go:669 +0x399
net/http.serverHandler.ServeHTTP({0xc000dc0090?}, {0x10e6590?, 0xc000214000?}, 0x6?)
	/usr/local/go/src/net/http/server.go:2938 +0x8e
net/http.(*conn).serve(0xc000c34120, {0x10e83c8, 0xc000c18b10})
	/usr/local/go/src/net/http/server.go:2009 +0x5f4
created by net/http.(*Server).Serve in goroutine 1
	/usr/local/go/src/net/http/server.go:3086 +0x5cb

goroutine 893 [IO wait]:
internal/poll.runtime_pollWait(0x7fac3c12e138, 0x72)
	/usr/local/go/src/runtime/netpoll.go:343 +0x85
internal/poll.(*pollDesc).wait(0xc000b9db00?, 0xc0001d5000?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc000b9db00, {0xc0001d5000, 0x1000, 0x1000})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
net.(*netFD).Read(0xc000b9db00, {0xc0001d5000?, 0x639c00?, 0x9?})
	/usr/local/go/src/net/fd_posix.go:55 +0x25
net.(*conn).Read(0xc000ba61f8, {0xc0001d5000?, 0xc000a25958?, 0xc000286f40?})
	/usr/local/go/src/net/net.go:179 +0x45
bufio.(*Reader).fill(0xc000a06ae0)
	/usr/local/go/src/bufio/bufio.go:113 +0x103
bufio.(*Reader).Peek(0xc000a06ae0, 0x2)
	/usr/local/go/src/bufio/bufio.go:151 +0x53
github.com/gorilla/websocket.(*Conn).read(0xc0008a1ce0, 0xc000287050?)
	/go/pkg/mod/github.com/gorilla/[email protected]/conn.go:371 +0x26
github.com/gorilla/websocket.(*Conn).advanceFrame(0xc0008a1ce0)
	/go/pkg/mod/github.com/gorilla/[email protected]/conn.go:809 +0x6d
github.com/gorilla/websocket.(*Conn).NextReader(0xc0008a1ce0)
	/go/pkg/mod/github.com/gorilla/[email protected]/conn.go:1009 +0xb0
github.com/gorilla/websocket.(*Conn).ReadJSON(0xcb3cda?, {0xb4e680, 0xc000b6b6c0})
	/go/pkg/mod/github.com/gorilla/[email protected]/json.go:50 +0x25
github.com/lachlan2k/phatcrack/api/internal/fleet.(*AgentConnection).Handle(0xc0001deb40)
	/app/api/internal/fleet/agent.go:208 +0x225
github.com/lachlan2k/phatcrack/api/internal/controllers.handleAgentWs({0x10f31b0, 0xc00080a0a0})
	/app/api/internal/controllers/agent_handler.go:86 +0x3a7
github.com/labstack/echo/v4.(*Echo).add.func1({0x10f31b0, 0xc00080a0a0})
	/go/pkg/mod/github.com/labstack/echo/[email protected]/echo.go:582 +0x4b
github.com/lachlan2k/phatcrack/api/internal/webserver.Listen.Recover.RecoverWithConfig.func5.1({0x10f31b0, 0xc00080a0a0})
	/go/pkg/mod/github.com/labstack/echo/[email protected]/middleware/recover.go:131 +0x119
github.com/labstack/echo/v4/middleware.RequestLoggerConfig.ToMiddleware.func1.1({0x10f31b0, 0xc00080a0a0})
	/go/pkg/mod/github.com/labstack/echo/[email protected]/middleware/request_logger.go:259 +0x16b
github.com/labstack/echo/v4.(*Echo).ServeHTTP(0xc000e606c0, {0x10e6590?, 0xc0003aa700}, 0xc00042f700)
	/go/pkg/mod/github.com/labstack/echo/[email protected]/echo.go:669 +0x399
net/http.serverHandler.ServeHTTP({0xc000dc1950?}, {0x10e6590?, 0xc0003aa700?}, 0x6?)
	/usr/local/go/src/net/http/server.go:2938 +0x8e
net/http.(*conn).serve(0xc000a9aab0, {0x10e83c8, 0xc000c18b10})
	/usr/local/go/src/net/http/server.go:2009 +0x5f4
created by net/http.(*Server).Serve in goroutine 1
	/usr/local/go/src/net/http/server.go:3086 +0x5cb

goroutine 3564 [IO wait, 1 minutes]:
internal/poll.runtime_pollWait(0x7fac3477dfd8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:343 +0x85
internal/poll.(*pollDesc).wait(0xc000a00200?, 0xc00040d000?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc000a00200, {0xc00040d000, 0x1000, 0x1000})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
net.(*netFD).Read(0xc000a00200, {0xc00040d000?, 0x4daba5?, 0x0?})
	/usr/local/go/src/net/fd_posix.go:55 +0x25
net.(*conn).Read(0xc000df85c0, {0xc00040d000?, 0x0?, 0xc000b7c038?})
	/usr/local/go/src/net/net.go:179 +0x45
net/http.(*connReader).Read(0xc000b7c030, {0xc00040d000, 0x1000, 0x1000})
	/usr/local/go/src/net/http/server.go:791 +0x14b
bufio.(*Reader).fill(0xc000a88000)
	/usr/local/go/src/bufio/bufio.go:113 +0x103
bufio.(*Reader).Peek(0xc000a88000, 0x4)
	/usr/local/go/src/bufio/bufio.go:151 +0x53
net/http.(*conn).serve(0xc000b95cb0, {0x10e83c8, 0xc000c18b10})
	/usr/local/go/src/net/http/server.go:2044 +0x75c
created by net/http.(*Server).Serve in goroutine 1
	/usr/local/go/src/net/http/server.go:3086 +0x5cb

goroutine 18248 [runnable]:
internal/poll.ignoringEINTRIO(...)
	/usr/local/go/src/internal/poll/fd_unix.go:737
internal/poll.(*FD).Read(0xc00013a100, {0xc0000c4461, 0x1, 0x1})
	/usr/local/go/src/internal/poll/fd_unix.go:160 +0x2e5
net.(*netFD).Read(0xc00013a100, {0xc0000c4461?, 0xc000a3cf40?, 0x469630?})
	/usr/local/go/src/net/fd_posix.go:55 +0x25
net.(*conn).Read(0xc000306008, {0xc0000c4461?, 0x10e8101?, 0xc000a1e6e0?})
	/usr/local/go/src/net/net.go:179 +0x45
net/http.(*connReader).backgroundRead(0xc0000c4450)
	/usr/local/go/src/net/http/server.go:683 +0x37
created by net/http.(*connReader).startBackgroundRead in goroutine 17780
	/usr/local/go/src/net/http/server.go:679 +0xba

goroutine 17779 [runnable]:
gorm.io/gorm.SoftDeleteQueryClause.ModifyStatement({{{0x0, 0x0}, 0x0}, 0xc000ba94a0}, 0xc00097f340)
	/go/pkg/mod/gorm.io/[email protected]/soft_delete.go:92 +0x273
gorm.io/gorm.(*Statement).AddClause(0xc00097f340, {0x10e6770, 0xc000b6b8e0})
	/go/pkg/mod/gorm.io/[email protected]/statement.go:266 +0x42
gorm.io/gorm/callbacks.BuildQuerySQL(0xc000a65260)
	/go/pkg/mod/gorm.io/[email protected]/callbacks/query.go:36 +0x1cf5
gorm.io/gorm/callbacks.Query(0xc000a65260)
	/go/pkg/mod/gorm.io/[email protected]/callbacks/query.go:17 +0x36
gorm.io/gorm.(*processor).Execute(0xc00085f2c0, 0xc000960420?)
	/go/pkg/mod/gorm.io/[email protected]/callbacks.go:130 +0x375
gorm.io/gorm.(*DB).First(0xc001228450?, {0xbd0dc0?, 0xc0005fb2b0}, {0xc000b2f620, 0x2, 0x2})
	/go/pkg/mod/gorm.io/[email protected]/finisher_api.go:129 +0x1b2
github.com/lachlan2k/phatcrack/api/internal/db.GetUserByID({0xc001228450, 0x24})
	/app/api/internal/db/user.go:64 +0xb9
github.com/lachlan2k/phatcrack/api/internal/auth.UserAndSessFromReq({0x10f31b0, 0xc0000e2d20})
	/app/api/internal/auth/session.go:61 +0x194
github.com/lachlan2k/phatcrack/api/internal/controllers.HookAuthEndpoints.handleRefresh.func10({0x10f31b0, 0xc0000e2d20})
	/app/api/internal/controllers/auth.go:273 +0x65
github.com/lachlan2k/phatcrack/api/internal/webserver.Listen.EnforceAuthMiddleware.func2.1({0x10f31b0, 0xc0000e2d20})
	/app/api/internal/auth/middleware.go:65 +0x102
github.com/lachlan2k/phatcrack/api/internal/auth.(*InMemorySessionHandler).CreateMiddleware.func1.1({0x10f31b0, 0xc0000e2d20})
	/app/api/internal/auth/session_inmemory.go:52 +0x110
github.com/lachlan2k/phatcrack/api/internal/auth.CreateHeaderAuthMiddleware.func1.1({0x10f31b0, 0xc0000e2d20})
	/app/api/internal/auth/header_auth_middleware.go:27 +0x1b3
github.com/labstack/echo/v4.(*Echo).add.func1({0x10f31b0, 0xc0000e2d20})
	/go/pkg/mod/github.com/labstack/echo/[email protected]/echo.go:582 +0x4b
github.com/lachlan2k/phatcrack/api/internal/webserver.Listen.Recover.RecoverWithConfig.func5.1({0x10f31b0, 0xc0000e2d20})
	/go/pkg/mod/github.com/labstack/echo/[email protected]/middleware/recover.go:131 +0x119
github.com/labstack/echo/v4/middleware.RequestLoggerConfig.ToMiddleware.func1.1({0x10f31b0, 0xc0000e2d20})
	/go/pkg/mod/github.com/labstack/echo/[email protected]/middleware/request_logger.go:259 +0x16b
github.com/labstack/echo/v4.(*Echo).ServeHTTP(0xc000e606c0, {0x10e6590?, 0xc0004ecb60}, 0xc00042ff00)
	/go/pkg/mod/github.com/labstack/echo/[email protected]/echo.go:669 +0x399
net/http.serverHandler.ServeHTTP({0xc000e1c510?}, {0x10e6590?, 0xc0004ecb60?}, 0x6?)
	/usr/local/go/src/net/http/server.go:2938 +0x8e
net/http.(*conn).serve(0xc00017e000, {0x10e83c8, 0xc000c18b10})
	/usr/local/go/src/net/http/server.go:2009 +0x5f4
created by net/http.(*Server).Serve in goroutine 1
	/usr/local/go/src/net/http/server.go:3086 +0x5cb

goroutine 17780 [runnable]:
context.WithDeadlineCause({0x10e81d0, 0x16385a0}, {0x1608f60?, 0xf4240?, 0x1608f60?}, {0x0?, 0x0})
	/usr/local/go/src/context/context.go:633 +0x1d7
context.WithDeadline(...)
	/usr/local/go/src/context/context.go:607
context.WithTimeout({0x10e81d0, 0x16385a0}, 0x199?)
	/usr/local/go/src/context/context.go:685 +0x4d
github.com/jackc/pgx/v5/pgconn.(*PgConn).CheckConn(0xc1415484f0878141?)
	/go/pkg/mod/github.com/jackc/pgx/[email protected]/pgconn/pgconn.go:1694 +0x45
github.com/jackc/pgx/v5/stdlib.(*Conn).ResetSession(0xc000bc2420, {0x10e81d0, 0x16385a0})
	/go/pkg/mod/github.com/jackc/pgx/[email protected]/stdlib/sql.go:473 +0x8b
database/sql.(*driverConn).resetSession(0xc000101400?, {0x10e81d0, 0x16385a0})
	/usr/local/go/src/database/sql/sql.go:553 +0xe3
database/sql.(*DB).conn(0xc0005fb040, {0x10e81d0, 0x16385a0}, 0x1)
	/usr/local/go/src/database/sql/sql.go:1313 +0x1e5
database/sql.(*DB).query(0x7fac3c0d1730?, {0x10e81d0, 0x16385a0}, {0xc000b26f50, 0x62}, {0xc00129dc80, 0x1, 0x8}, 0xdd?)
	/usr/local/go/src/database/sql/sql.go:1721 +0x57
database/sql.(*DB).QueryContext.func1(0xd8?)
	/usr/local/go/src/database/sql/sql.go:1704 +0x4f
database/sql.(*DB).retry(0x10?, 0xc0002b4f80)
	/usr/local/go/src/database/sql/sql.go:1538 +0x42
database/sql.(*DB).QueryContext(0xc000e1d890?, {0x10e81d0?, 0x16385a0?}, {0xc000b26f50?, 0x0?}, {0xc00129dc80?, 0x40fe65?, 0x0?})
	/usr/local/go/src/database/sql/sql.go:1703 +0xc5
gorm.io/gorm/callbacks.Query(0xc000e1d890)
	/go/pkg/mod/gorm.io/[email protected]/callbacks/query.go:20 +0xb2
gorm.io/gorm.(*processor).Execute(0xc00085f2c0, 0xc000960420?)
	/go/pkg/mod/gorm.io/[email protected]/callbacks.go:130 +0x375
gorm.io/gorm.(*DB).First(0xc001228450?, {0xbd0dc0?, 0xc000e256c0}, {0xc000964760, 0x2, 0x2})
	/go/pkg/mod/gorm.io/[email protected]/finisher_api.go:129 +0x1b2
github.com/lachlan2k/phatcrack/api/internal/db.GetUserByID({0xc001228450, 0x24})
	/app/api/internal/db/user.go:64 +0xb9
github.com/lachlan2k/phatcrack/api/internal/auth.UserAndSessFromReq({0x10f31b0, 0xc0009563c0})
	/app/api/internal/auth/session.go:61 +0x194
github.com/lachlan2k/phatcrack/api/internal/webserver.Listen.EnforceMFAMiddleware.func3.1({0x10f31b0, 0xc0009563c0})
	/app/api/internal/auth/middleware.go:14 +0x38
github.com/lachlan2k/phatcrack/api/internal/webserver.Listen.EnforceAuthMiddleware.func2.1({0x10f31b0, 0xc0009563c0})
	/app/api/internal/auth/middleware.go:65 +0x102
github.com/lachlan2k/phatcrack/api/internal/auth.(*InMemorySessionHandler).CreateMiddleware.func1.1({0x10f31b0, 0xc0009563c0})
	/app/api/internal/auth/session_inmemory.go:52 +0x110
github.com/lachlan2k/phatcrack/api/internal/auth.CreateHeaderAuthMiddleware.func1.1({0x10f31b0, 0xc0009563c0})
	/app/api/internal/auth/header_auth_middleware.go:27 +0x1b3
github.com/labstack/echo/v4.(*Echo).add.func1({0x10f31b0, 0xc0009563c0})
	/go/pkg/mod/github.com/labstack/echo/[email protected]/echo.go:582 +0x4b
github.com/lachlan2k/phatcrack/api/internal/webserver.Listen.Recover.RecoverWithConfig.func5.1({0x10f31b0, 0xc0009563c0})
	/go/pkg/mod/github.com/labstack/echo/[email protected]/middleware/recover.go:131 +0x119
github.com/labstack/echo/v4/middleware.RequestLoggerConfig.ToMiddleware.func1.1({0x10f31b0, 0xc0009563c0})
	/go/pkg/mod/github.com/labstack/echo/[email protected]/middleware/request_logger.go:259 +0x16b
github.com/labstack/echo/v4.(*Echo).ServeHTTP(0xc000e606c0, {0x10e6590?, 0xc000340460}, 0xc00020ac00)
	/go/pkg/mod/github.com/labstack/echo/[email protected]/echo.go:669 +0x399
net/http.serverHandler.ServeHTTP({0xc0000c4450?}, {0x10e6590?, 0xc000340460?}, 0x6?)
	/usr/local/go/src/net/http/server.go:2938 +0x8e
net/http.(*conn).serve(0xc00017e1b0, {0x10e83c8, 0xc000c18b10})
	/usr/local/go/src/net/http/server.go:2009 +0x5f4
created by net/http.(*Server).Serve in goroutine 1
	/usr/local/go/src/net/http/server.go:3086 +0x5cb

goroutine 18292 [runnable]:
net/http.(*connReader).startBackgroundRead.func2()
	/usr/local/go/src/net/http/server.go:679
runtime.goexit()
	/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1
created by net/http.(*connReader).startBackgroundRead in goroutine 17779
	/usr/local/go/src/net/http/server.go:679 +0xba

goroutine 11223 [IO wait, 1 minutes]:
internal/poll.runtime_pollWait(0x7fac3c12df48, 0x72)
	/usr/local/go/src/runtime/netpoll.go:343 +0x85
internal/poll.(*pollDesc).wait(0xc000e00280?, 0xc0001bc000?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc000e00280, {0xc0001bc000, 0x1000, 0x1000})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
net.(*netFD).Read(0xc000e00280, {0xc0001bc000?, 0x4daba5?, 0x0?})
	/usr/local/go/src/net/fd_posix.go:55 +0x25
net.(*conn).Read(0xc000df8028, {0xc0001bc000?, 0x0?, 0xc000b7c308?})
	/usr/local/go/src/net/net.go:179 +0x45
net/http.(*connReader).Read(0xc000b7c300, {0xc0001bc000, 0x1000, 0x1000})
	/usr/local/go/src/net/http/server.go:791 +0x14b
bufio.(*Reader).fill(0xc0001d6120)
	/usr/local/go/src/bufio/bufio.go:113 +0x103
bufio.(*Reader).Peek(0xc0001d6120, 0x4)
	/usr/local/go/src/bufio/bufio.go:151 +0x53
net/http.(*conn).serve(0xc000e1e000, {0x10e83c8, 0xc000c18b10})
	/usr/local/go/src/net/http/server.go:2044 +0x75c
created by net/http.(*Server).Serve in goroutine 1
	/usr/local/go/src/net/http/server.go:3086 +0x5cb

goroutine 18278 [runnable]:
net/http.(*connReader).startBackgroundRead.func2()
	/usr/local/go/src/net/http/server.go:679
runtime.goexit()
	/usr/local/go/src/runtime/asm_amd64.s:1650 +0x1
created by net/http.(*connReader).startBackgroundRead in goroutine 12173
	/usr/local/go/src/net/http/server.go:679 +0xba

Audit Logging

It is vital we log every action taken by every user.

One easy starting point could be to log all PUT/POST/DELETE/etc. requests, with the time of the request, the user who made it, and any key information.

Agents should also report back to the API whenever they run a hashcat session, including hashcat's argv.

Agent crash on missing mask

Howdy, got a bug where an agent will crash if a mask attack is specified but no mask is given:

2024/04/30 17:54:04 Received: DeleteFileRequest
2024/05/01 15:50:53 Received: JobStart
2024/05/01 15:50:53 unrecoverable handler error: error when handling message: using mask attack (3), but no mask was given
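A fix could validate the job before it ever reaches the agent, instead of letting the agent's handler die. A sketch (validateJobParams and its field names are illustrative, not phatcrack's actual DTO):

```go
package main

import (
	"errors"
	"fmt"
)

// Hashcat attack mode 3 is a mask attack.
const attackModeMask = 3

// validateJobParams rejects a mask-attack job that has no mask, so the
// API returns a clean error rather than the agent crashing on JobStart.
func validateJobParams(attackMode int, mask string) error {
	if attackMode == attackModeMask && mask == "" {
		return errors.New("mask attack (3) selected but no mask was given")
	}
	return nil
}

func main() {
	fmt.Println(validateJobParams(attackModeMask, ""))     // rejected
	fmt.Println(validateJobParams(attackModeMask, "?a?a")) // <nil>
}
```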

Bug inserting hashes (v0.1.4)

Howdy, getting this issue when a user tries to start a job.

It's a little unclear how or why this bug is occurring, sorry.

code=500, message=Something went wrong creating attack job (error id 3f6080ca-b681-4fb1-8f70-25efc143eaac), internal=ERROR: insert or update on table \"jobs\" violates foreign key constraint \"fk_attacks_jobs\" (SQLSTATE 23503)
2023/12/04 21:27:10 /app/api/internal/db/job.go:365 ERROR: insert or update on table "jobs" violates foreign key constraint "fk_attacks_jobs" (SQLSTATE 23503)
[5.206ms] [rows:0] INSERT INTO "jobs" ("created_at","updated_at","deleted_at","hashlist_version","attack_id","hashcat_params","target_hashes","hash_type","assigned_agent_id") VALUES ('2023-12-04 21:27:10.819','2023-12-04 21:27:10.819',NULL,1,'07ba7758-54b6-466c-817f-61caf6897c9c','{"attack_mode":0,"hash_type":13100,"mask":"","mask_increment":false,"mask_increment_min":0,"mask_increment_max":0,"mask_sharded_charset":"","mask_custom_charsets":[],"wordlist_filenames":["1b82c71a-49b7-4f6e-8cd4-115788202fbc"],"rules_filenames":["5b5bfbe6-e25c-42d5-9ecb-47ae80832de8"],"additional_args":[],"optimized_kernels":true,"slow_candidates":false,"skip":0,"limit":3004579076}',

Hashes omitted obv

Fast Path for Keyspace calculation

Add a "fast path" for keyspace calculation of simple cases. Such as a single wordlist, or wordlist with basic rules.

The problem is, whilst for most hashes you can simply calculate wordlist_line_count * rule_count, this isn't true of all hashes, because of how hashcat arranges its inner vs outer loops. So, I'll need to do a bunch of testing to identify which hash types are "normal" or not.

Failure Condition Handling

Currently, there is:

  • Rudimentary auto-reconnect in the agent
  • Agent health checks before assigning jobs

However, the following problems currently exist:

  • If a disconnect briefly happens and then reconnects, the agent will try to send messages to the OLD websocket handle, and those won't get delivered.
  • There is no handling of the condition when an agent fully dies and jobs die with it.

We need to avoid a split-brain problem, so I suggest something like the following:

  • If an agent disconnects for more than, say, 5 minutes, then BOTH the agent and API should consider it "dead". The agent should stop all of its currently running jobs, and the API should re-assign these to another node.
  • If an agent has disconnected for shorter than 5 minutes, it should be considered "unhealthy", and won't have any new jobs assigned, but not "dead", so we give it a chance to recover from its hiccup.
  • Any disconnects shorter than 5 minutes should be considered "fine", but generate some warnings if possible. If the agent is trying to send messages whilst disconnected, it should internally buffer those, and send them once the websocket re-connects.
  • If the agent is intentionally killed (due to a reboot or whatever), a graceful shutdown should happen instead. The agent should say "lol bye" to the server, which should immediately consider the agent unhealthy/dead.

Nice to haves:

  • Session resumption/restore points? We need to investigate how hashcat does it internally with .restore files, given that cracking order is somewhat non-deterministic. Can we replicate this? So if a job dies 12 hours in, we can resume it on another node from the last known point, instead of starting from scratch

Job Templates

Users should be able to create job templates, i.e. "this wordlist + these rules" or "this mask attack".

Even better if they can be done in a batch. For example, if I can click a handful of buttons, chuck in some NTLM hashes, and I'm off to the races cracking with 3 different techniques, that would be amazing.

This can likely be mostly frontend-driven. API endpoints can be used to serve up preconfigured hashcat_params, which the frontend can then apply to the wizard, etc.
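As a sketch, a template endpoint could serve a preconfigured hashcat_params object that the wizard applies verbatim. The field names below mirror those visible in the job-insertion logs elsewhere in this tracker; the template wrapper and placeholder IDs are assumptions:

```json
{
  "name": "Wordlist + rules (example template)",
  "hashcat_params": {
    "attack_mode": 0,
    "wordlist_filenames": ["<wordlist-file-id>"],
    "rules_filenames": ["<rules-file-id>"],
    "optimized_kernels": true,
    "slow_candidates": false,
    "additional_args": []
  }
}
```

A batch would then just be an ordered list of such templates that the frontend feeds through the wizard one after another.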

Agent failure on Out of Memory if hashcat process is already dead

Howdy.

Just ran into a condition that causes the agent to die entirely when the graphics card runs out of memory. I believe this is happening because the agent tries to clean up hashcat when hashcat is already gone, leading to a "double close" that triggers an error which kills the agent here:

func (sess *HashcatSession) Kill() error {

Is currently:

func (sess *HashcatSession) Kill() error {
	if sess.proc == nil || sess.proc.Process == nil {
		return nil
	}
	return sess.proc.Process.Kill()
}

Should be:

func (sess *HashcatSession) Kill() error {
	if sess.proc == nil || sess.proc.Process == nil {
		return nil
	}

	// Ignore "process already finished" so a hashcat that has already
	// died (e.g. from OOM) doesn't take the agent down with it.
	if err := sess.proc.Process.Kill(); err != nil && !errors.Is(err, os.ErrProcessDone) {
		return err
	}

	return nil
}

Log for completeness:

Jan 19 14:21:48 node phatcrack-agent[3816646]: 2024/01/19 14:21:48 Running hashcat command: "/opt/crypt/phatcrack-agent/hashcat/hashcat.bin --quiet --session sess-omitted>
Jan 19 14:21:48 node phatcrack-agent[3816646]: 2024/01/19 14:21:48 read stderr: "cuCtxCreate(): out of memory"
Jan 19 14:21:48 node phatcrack-agent[3816646]: 2024/01/19 14:21:48 read stderr: ""
Jan 19 14:21:48 node phatcrack-agent[3816646]: 2024/01/19 14:21:48 read stderr: "cuCtxCreate(): out of memory"
Jan 19 14:21:48 node phatcrack-agent[3816646]: 2024/01/19 14:21:48 read stderr: ""
Jan 19 14:21:49 node phatcrack-agent[3816646]: 2024/01/19 14:21:49 read stderr: "cuCtxCreate(): out of memory"
Jan 19 14:21:49 node phatcrack-agent[3816646]: 2024/01/19 14:21:49 read stderr: ""
Jan 19 14:21:49 node phatcrack-agent[3816646]: 2024/01/19 14:21:49 read stderr: "cuCtxCreate(): out of memory"
Jan 19 14:21:49 node phatcrack-agent[3816646]: 2024/01/19 14:21:49 read stderr: ""
Jan 19 14:21:49 node phatcrack-agent[3816646]: 2024/01/19 14:21:49 read stderr: "clCreateContext(): CL_OUT_OF_HOST_MEMORY"
Jan 19 14:21:49 node phatcrack-agent[3816646]: 2024/01/19 14:21:49 read stderr: ""
Jan 19 14:21:49 node phatcrack-agent[3816646]: 2024/01/19 14:21:49 read stderr: "clCreateContext(): CL_OUT_OF_HOST_MEMORY"
Jan 19 14:21:49 node phatcrack-agent[3816646]: 2024/01/19 14:21:49 read stderr: ""
Jan 19 14:21:49 node phatcrack-agent[3816646]: 2024/01/19 14:21:49 read stderr: "clCreateContext(): CL_OUT_OF_HOST_MEMORY"
Jan 19 14:21:49 node phatcrack-agent[3816646]: 2024/01/19 14:21:49 read stderr: ""
Jan 19 14:21:50 node phatcrack-agent[3816646]: 2024/01/19 14:21:50 read stderr: "clCreateContext(): CL_OUT_OF_HOST_MEMORY"
Jan 19 14:21:50 node phatcrack-agent[3816646]: 2024/01/19 14:21:50 read stderr: ""
Jan 19 14:21:50 node phatcrack-agent[3816646]: 2024/01/19 14:21:50 read stderr: "No devices found/left."
Jan 19 14:21:50 node phatcrack-agent[3816646]: 2024/01/19 14:21:50 read stderr: ""
Jan 19 14:21:50 node phatcrack-agent[3816646]: 2024/01/19 14:21:50 Received: JobKill
Jan 19 14:21:50 node phatcrack-agent[3816646]: 2024/01/19 14:21:50 unrecoverable handler error: error when handling message: os: process already finished
Jan 19 14:21:50 node systemd[1]: phatcrack-agent.service: Main process exited, code=exited, status=1/FAILURE
Jan 19 14:21:51 node systemd[1]: phatcrack-agent.service: Failed with result 'exit-code'.

Unable to delete project or task

2024/04/04 21:52:05 /app/api/internal/db/project.go:263 SLOW SQL >= 200ms
[202.220ms] [rows:1] SELECT * FROM "hashlists" WHERE id = '<id>' AND "hashlists"."deleted_at" IS NULL ORDER BY "hashlists"."id" LIMIT 1
{"authenticated_username":"admin","level":"warning","log_type":"audit","msg":"Session started","remote_ip":"192.168.12.2","time":"2024-04-04T21:56:37Z"}

2024/04/04 21:57:04 /app/api/internal/db/project.go:263 SLOW SQL >= 200ms
[201.862ms] [rows:1] SELECT * FROM "hashlists" WHERE id = '<id>' AND "hashlists"."deleted_at" IS NULL ORDER BY "hashlists"."id" LIMIT 1

2024/04/04 21:57:17 /app/api/internal/db/job.go:333 ERROR: column attacks.hashlists_id does not exist (SQLSTATE 42703)
[9.408ms] [rows:0] SELECT "jobs"."id","jobs"."created_at","jobs"."updated_at","jobs"."deleted_at","jobs"."hashlist_version","jobs"."attack_id","jobs"."hashcat_params","jobs"."target_hashes","jobs"."hash_type","jobs"."assigned_agent_id" FROM "jobs" join attacks on attacks.id = jobs.attack_id WHERE attacks.hashlists_id = '<id>' AND "jobs"."deleted_at" IS NULL
{"URI":"/api/v1/hashlist/5a0099fe-ccb2-4152-bf85-a62c18d7e703","content_length":"","error":"ERROR: column attacks.hashlists_id does not exist (SQLSTATE 42703)","error_id":"f3e5050c-c893-4f20-944a-1805a42d957f","latency_ms":154,"level":"error","method":"DELETE","msg":"request error f3e5050c-c893-4f20-944a-1805a42d957f","remote_ip":"192.168.12.2","response_size":0,"status":500,"time":"2024-04-04T21:57:17Z","user_agent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36","user_id":"92580446-7aca-4f1f-95ef-8f160dbc9956","user_username":"admin"}

2024/04/04 21:57:19 /app/api/internal/db/job.go:333 ERROR: column attacks.hashlists_id does not exist (SQLSTATE 42703)
[0.337ms] [rows:0] SELECT "jobs"."id","jobs"."created_at","jobs"."updated_at","jobs"."deleted_at","jobs"."hashlist_version","jobs"."attack_id","jobs"."hashcat_params","jobs"."target_hashes","jobs"."hash_type","jobs"."assigned_agent_id" FROM "jobs" join attacks on attacks.id = jobs.attack_id WHERE attacks.hashlists_id = '<id>' AND "jobs"."deleted_at" IS NULL
{"URI":"/api/v1/hashlist/5a0099fe-ccb2-4152-bf85-a62c18d7e703","content_length":"","error":"ERROR: column attacks.hashlists_id does not exist (SQLSTATE 42703)","error_id":"0c7fcec8-afeb-42f6-9393-5227ce3d9514","latency_ms":132,"level":"error","method":"DELETE","msg":"request error 0c7fcec8-afeb-42f6-9393-5227ce3d9514","remote_ip":"192.168.12.2","response_size":0,"status":500,"time":"2024-04-04T21:57:19Z","user_agent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36","user_id":"92580446-7aca-4f1f-95ef-8f160dbc9956","user_username":"admin"}

2024/04/04 21:57:22 /app/api/internal/db/job.go:333 ERROR: column attacks.hashlists_id does not exist (SQLSTATE 42703)
[0.408ms] [rows:0] SELECT "jobs"."id","jobs"."created_at","jobs"."updated_at","jobs"."deleted_at","jobs"."hashlist_version","jobs"."attack_id","jobs"."hashcat_params","jobs"."target_hashes","jobs"."hash_type","jobs"."assigned_agent_id" FROM "jobs" join attacks on attacks.id = jobs.attack_id WHERE attacks.hashlists_id = '<id>' AND "jobs"."deleted_at" IS NULL
{"URI":"/api/v1/hashlist/5a0099fe-ccb2-4152-bf85-a62c18d7e703","content_length":"","error":"ERROR: column attacks.hashlists_id does not exist (SQLSTATE 42703)","error_id":"07c0f8d6-3c6c-446a-93a6-197849548966","latency_ms":156,"level":"error","method":"DELETE","msg":"request error 07c0f8d6-3c6c-446a-93a6-197849548966","remote_ip":"192.168.12.2","response_size":0,"status":500,"time":"2024-04-04T21:57:22Z","user_agent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36","user_id":"92580446-7aca-4f1f-95ef-8f160dbc9956","user_username":"admin"}

2024/04/04 21:57:46 /app/api/internal/db/job.go:348 SLOW SQL >= 200ms
[349.441ms] [rows:33] SELECT "jobs"."id","jobs"."created_at","jobs"."updated_at","jobs"."deleted_at","jobs"."hashlist_version","jobs"."attack_id","jobs"."hashcat_params","jobs"."target_hashes","jobs"."hash_type","jobs"."assigned_agent_id" FROM "jobs" join attacks on attacks.id = jobs.attack_id join hashlists on hashlists.id = attacks.hashlist_id WHERE hashlists.project_id = 'e373e6ca-89c2-442e-a6a3-38449a078de9' AND "jobs"."deleted_at" IS NULL

2024/04/04 21:57:46 /app/api/internal/db/db.go:30 ERROR: update or delete on table "projects" violates foreign key constraint "fk_projects_project_share" on table "project_shares" (SQLSTATE 23503)
[0.785ms] [rows:0] DELETE FROM "projects" WHERE "projects"."id" = 'e373e6ca-89c2-442e-a6a3-38449a078de9'
{"URI":"/api/v1/project/e373e6ca-89c2-442e-a6a3-38449a078de9","content_length":"","error":"ERROR: update or delete on table \"projects\" violates foreign key constraint \"fk_projects_project_share\" on table \"project_shares\" (SQLSTATE 23503)","error_id":"d0391bb3-bf4c-4edb-b397-0f6ddf408f48","latency_ms":350,"level":"error","method":"DELETE","msg":"request error d0391bb3-bf4c-4edb-b397-0f6ddf408f48","remote_ip":"192.168.12.2","response_size":0,"status":500,"time":"2024-04-04T21:57:46Z","user_agent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36","user_id":"92580446-7aca-4f1f-95ef-8f160dbc9956","user_username":"admin"}

Don't hang "start attack"

Currently, the endpoint to start an attack waits for keyspace calculation to complete. This can take several minutes and be a bit confusing, despite the new UI information.

The endpoint should return in under 500 ms, and only capture a keyspace calculation error if the calculation fails quickly.

This might make error logging a bit difficult, however, so we might need to record errors on the attack status itself.

Review API Endpoints

Now that the majority of API endpoints have been written, I want to ensure that each one is written properly, and there hasn't been a drift in code quality or applied practices.

For every API endpoint, ensure:

  • Access controls are being applied correctly
  • Proper validation is being done in apitypes for request bodies
  • Database queries are relatively sane, and performance isn't garbage because of them
  • Consistent code quality

Large hashlist support

It seems that Phatcrack refuses to save large hashlists (> 10000 hashes), giving an error 500, "Failed to create hashlist".

Would it be possible to add file upload support or otherwise support larger hashlists please?
