complement's Issues

Make it work on linux

Linux doesn't have host.docker.internal by default; you need additional CLI args when you run the network/container, so we need to add them on startup when running under Linux.

TestMediaWithoutFileName test does not allow changes in content type between media upload and download

The tests:

  • Can download without a file name locally
  • Can download without a file name over federation

that are a part of TestMediaWithoutFileName both have a check for the Content-Type header value that was provided during initial upload of the test media. They ensure this value does not change between uploading and downloading media and its associated metadata.

Synapse will however augment the content type of text files in order to specify UTF-8 as their encoding. The reasoning behind this is at matrix-org/synapse#7043.

While the spec doesn't explicitly state that the Content-Type response header when downloading a file needs to exactly match what was provided during upload, it does state that it should be the content type of the file, and so homeservers should at least try to get it right.

The original sytest this was taken from, Can download without a file name locally, did not check the Content-Type header. I personally think it is valuable to do so, but we may want to make an exception for Synapse's modification, as it does seem to be beneficial and not against the spec, however vague the spec might be.
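One way to make such an exception without dropping the check entirely would be to compare only the media type and ignore parameters such as charset, so Synapse's "text/plain; charset=utf-8" would still match an upload of "text/plain". A sketch using the standard library's mime package (the function name is illustrative):

```go
package main

import (
	"fmt"
	"mime"
)

// sameMediaType reports whether two Content-Type values share the same
// media type, ignoring parameters such as charset.
func sameMediaType(uploaded, downloaded string) bool {
	u, _, err := mime.ParseMediaType(uploaded)
	if err != nil {
		return false
	}
	d, _, err := mime.ParseMediaType(downloaded)
	if err != nil {
		return false
	}
	return u == d
}

func main() {
	fmt.Println(sameMediaType("text/plain", "text/plain; charset=utf-8")) // matches
	fmt.Println(sameMediaType("text/plain", "application/json"))          // does not
}
```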

Tail end of server logs appearing above the rest of the logs

Logs: buildkite.log

At line 1974 in the log file you can see that logs seem to be cut off when there are quite a lot of them.

Interestingly, it actually looks like a chunk of logs is spurted out before Complement actually prints them on line 2518.

And if you look at the end of the actual logs... that lines up with the beginning of the spurted out logs.

So somehow the tail end of the server logs are being printed out above the rest of them...

I initially wondered if this was a bug with piping the logs to less and viewing them there. But the same happens even without less involved.

Rename 'CI' environment variable to 'COMPLEMENT_IN_DOCKER' or similar

This grew out of a discussion here, where it was found that the CI environment variable is used by Complement to know whether it is running inside of a docker container already (and thus needs to talk to the host's docker socket) or is running on the host directly. It's assumed that if Complement is running inside of docker, CI=true.

@richvdh proposed to rename this variable to COMPLEMENT_IN_DOCKER (or something similar) instead, as it's more descriptive of what the variable actually specifies. Plus people won't be tempted to add CI-specific things outside of what's necessary (and thus make CI code diverge further from production code) 😇.

Make `Sync*` functions validate sync response

Currently, functions such as SyncUntil do little to validate the JSON they're actually getting. I think an easy win would be a generic validation function which walks the entire sync response and validates it before handing it back to the caller.
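A minimal sketch of what such a validator could start with, using only the standard library: check for a next_batch token and reject unknown keys in the rooms section. The required keys and structure here are a simplification of the real sync schema, for illustration only.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// validateSyncResponse performs a shallow validation of a sync
// response body before it is handed back to the caller.
func validateSyncResponse(raw []byte) error {
	var body struct {
		NextBatch string                     `json:"next_batch"`
		Rooms     map[string]json.RawMessage `json:"rooms"`
	}
	if err := json.Unmarshal(raw, &body); err != nil {
		return err
	}
	if body.NextBatch == "" {
		return fmt.Errorf("sync response missing next_batch")
	}
	for key := range body.Rooms {
		switch key {
		case "join", "invite", "leave", "knock":
			// known membership sections
		default:
			return fmt.Errorf("unexpected rooms key %q", key)
		}
	}
	return nil
}

func main() {
	fmt.Println(validateSyncResponse([]byte(`{"next_batch":"s1","rooms":{"join":{}}}`)))
	fmt.Println(validateSyncResponse([]byte(`{"rooms":{}}`)))
}
```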

sytest-coverage: Coverage script is broken after cs-api revamping

Describe the bug

Coverage script can no longer be run because of the cs-api directory added.

To Reproduce
Steps to reproduce the behavior:

  • Run go run ./cmd/sytest-coverage

Expected behavior

Sytest coverage should be output

Screenshots

panic: ./tests/csapi:1:1: illegal UTF-8 encoding (and 1 more errors)

goroutine 1 [running]:
main.main()
	/Volumes/T7/code/complement/cmd/sytest-coverage/main.go:56 +0xf45
exit status 2

Complement pipeline will currently use the latest Synapse release when building on branch `master`

When running Complement tests against Synapse in CI, the current Complement buildkite pipeline code will check to see if a similarly-named branch exists on Synapse's repo.

As is often the case, if the name of the branch is master, we'll end up looking for a branch named master on Synapse's side which then grabs the latest release of Synapse, rather than Synapse's develop branch.

This can cause CI to fail on Complement master when the latest Synapse release does not yet support tests intended to run against Synapse develop. We saw this recently with TestKnocking, which then magically fixed itself after Synapse 1.37.0 was released today.

The fix is to insert some bash to say "if $BUILDKITE_BRANCH is master, use Synapse develop".
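A hypothetical sketch of that bash (the SYNAPSE_BRANCH variable name is assumed; BUILDKITE_BRANCH is set by Buildkite, defaulted here only so the snippet runs standalone):

```shell
# Map Complement's master branch to Synapse's develop branch when
# picking which Synapse ref to test against, instead of letting the
# branch lookup fall back to the latest Synapse release.
BUILDKITE_BRANCH="${BUILDKITE_BRANCH:-master}"
if [ "$BUILDKITE_BRANCH" = "master" ]; then
  SYNAPSE_BRANCH="develop"
else
  SYNAPSE_BRANCH="$BUILDKITE_BRANCH"
fi
echo "Testing against Synapse branch: $SYNAPSE_BRANCH"
```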

Could Complement be extended to load and run out-of-repository tests?

The context here is that we 'need' some way to run integration tests for non-Matrix-standard features, such as those implemented by custom Synapse modules or maybe even Synapse-specific behaviours (if/where there is good reason).

Without knowing too much, Complement seems basically perfect for this use case, but it really wouldn't make sense to add these kinds of tests straight to Complement's repository.

It would be nice to have out-of-repository test suites (for example: alongside a Synapse module, say https://github.com/matrix-org/synapse-user-restrictions/, but you can also imagine weirder customer-specific modules).
Is Complement suitable for being extended this way? What would it take?

I'm not familiar with Go, so I can't easily suggest the right approach (I can speculate, but I'll refrain from doing so for now).

Consider using a single file-naming scheme and test grouping methodology

Currently, while browsing the test repository, I find it difficult to navigate to tests I expect to exist (or not to exist), as the file naming scheme (and test grouping) seems to be all over the place.

I think it'll be worth it (for organisational purposes) to organise files and scope tests so that someone new to the repository can find their way around and discover related tests more easily.

Some current complaints and notes: some files inherit sytest's vague naming scheme (apidoc, federation, etc.), some simply document and test MSCs, and some restrict themselves to very narrow scopes (account_change_password_pushers_test.go).

Build tag configuration duplicated in many places

Currently the build tags get duplicated in a handful of places:

Note that the build tags are homeserver specific (and might vary based on the version of the homeserver).

It would be great if complement was able to know which build tags to use somehow by inspecting a file or the homeserver capabilities. This would help avoid the various configs above getting out of sync.

Additionally we should probably:

  • Remove the buildkite CI for complement.
  • Remove the unused pipeline for Synapse.

Support E2EE rooms

And actually decrypting them. We support olm as a dependency now so it should be possible.

Synapse tests run against `master` branch instead of `develop` by default

=== RUN   TestRestrictedRoomsRemoteJoinLocalUser
    msc3083_test.go:191: Deploy times: 6.570039ms blueprints, 3.985522787s containers
    client.go:370: POST hs2/_matrix/client/r0/createRoom => 200 OK (226.771354ms)
    client.go:370: POST hs2/_matrix/client/r0/createRoom => 400 Bad Request (5.686334ms)
    msc3083_test.go:204: CSAPI.MustDo POST http://172.17.0.1:49305/_matrix/client/r0/createRoom returned HTTP 400

https://github.com/matrix-org/complement/runs/3180791920#step:7:1029

Support host mounts in containers

Developers often want to run new tests against a local version of their HS. This is cumbersome at present as you have to rebuild a docker image before you can run the test. Instead, we can expose a directory in the container which will be volume-mounted from a new env var, e.g.:

COMPLEMENT_MOUNT=/Users/me/my/local/dendrite
COMPLEMENT_BASE_IMAGE=dendrite-which-reads-mounted-dir:latest
go test ./...
  • Should we just expose the source location, or also the destination location, e.g. COMPLEMENT_MOUNT=/Users/me/my/local/dendrite:/data?
  • We should standardise the mounted directories (as /ca is done for COMPLEMENT_CA=1) as we will inevitably expand this in the future (e.g. appservices need registration YAML files in the container).

We can either be prescriptive and say that we map to a static directory layout:

/complement
    |__ /appservice
    |__ /ca
    |__ /mount

or we can make this configurable e.g:

COMPLEMENT_MOUNT=/Users/me/my/local/dendrite:/mnt
COMPLEMENT_CA=/Users/me/my/ca:/certs
COMPLEMENT_AS=/Users/me/my/appservices:/as

This feels wrong though as it then ties env vars to a specific docker image which isn't what we're trying to do here (we want them to be interchangeable), so let's be prescriptive.
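Even with the prescriptive layout, the env var could still optionally accept a ":destination" suffix. A sketch of how such a value might be parsed, assuming /complement/mount as the prescriptive default (the default path and function name are illustrative):

```go
package main

import (
	"fmt"
	"strings"
)

// parseMount splits a COMPLEMENT_MOUNT-style value into source and
// destination, defaulting the destination to the prescriptive layout
// when only a source is given.
func parseMount(v string) (src, dst string) {
	parts := strings.SplitN(v, ":", 2)
	src = parts[0]
	dst = "/complement/mount"
	if len(parts) == 2 {
		dst = parts[1]
	}
	return src, dst
}

func main() {
	fmt.Println(parseMount("/Users/me/my/local/dendrite"))
	fmt.Println(parseMount("/Users/me/my/local/dendrite:/mnt"))
}
```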

Consider removing COMPLEMENT_CA and having it always on

Before too many more homeserver dockerfiles are written, what do people think about removing the COMPLEMENT_CA environment variable and making its behaviour the default? /ca would just be mounted in all docker containers, and homeserver dockerfiles can choose whether or not to use the certificate and key in /ca.

Existing dockerfiles, such as the Synapse Monolith one will need to be updated, but I've been planning to adjust that one to use Complement's CA anyhow.

Homerunner fails with `failed to ContainerCommit`

A recent change seems to have broken Homerunner. If I install from master and try to run:

curl -XPOST -d '{"base_image_uri":"complement-dendrite", "blueprint_name":"federation_one_to_one_room"}' http://localhost:54321/create

I get this error:

2021/08/04 17:38:49 .federation_one_to_one_room.hs1 : failed to ContainerCommit: invalid reference format
2021/08/04 17:38:49 .federation_one_to_one_room.hs2 : failed to ContainerCommit: invalid reference format
2021/08/04 17:38:50 could not construct blueprint: .federation_one_to_one_room.hs1 : failed to ContainerCommit: invalid reference format
2021/08/04 17:38:50 could not construct blueprint: .federation_one_to_one_room.hs2 : failed to ContainerCommit: invalid reference format

Support HS upgrade tests

Dendrite has https://github.com/matrix-org/dendrite/tree/master/cmd/dendrite-upgrade-tests which will automatically pull and run different versions of dendrite with the same database and ensure db migrations work correctly. It can be run for example by doing ./dendrite-upgrade-tests --from 0.1.0 --to 0.3.1. This will:

  • Fetch semver looking tags from github (or HEAD).
  • Sort them.
  • Pull archives for each release.
  • Execute a Dockerfile with the build context set to the root directory of the release.
  • Run the lowest semver, make some users and rooms, send a few messages, tear down the server.
  • Keep the database as a docker volume.
  • Repeat with the next lowest semver up until --to.
  • Test that the last HS run has the original users/rooms/messages from all previous runs.
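The "sort them" step above is the subtle one: semver tags must be ordered numerically per component, not lexically (lexically, "v0.10.0" sorts before "v0.3.1"). A self-contained sketch of that step:

```go
package main

import (
	"fmt"
	"sort"
	"strconv"
	"strings"
)

// sortTags orders semver-looking tags like v0.3.1 numerically by
// major, minor, patch components.
func sortTags(tags []string) {
	num := func(tag string) [3]int {
		var v [3]int
		parts := strings.SplitN(strings.TrimPrefix(tag, "v"), ".", 3)
		for i := 0; i < len(parts) && i < 3; i++ {
			v[i], _ = strconv.Atoi(parts[i])
		}
		return v
	}
	sort.Slice(tags, func(i, j int) bool {
		a, b := num(tags[i]), num(tags[j])
		for k := 0; k < 3; k++ {
			if a[k] != b[k] {
				return a[k] < b[k]
			}
		}
		return false
	})
}

func main() {
	tags := []string{"v0.10.0", "v0.1.0", "v0.3.1"}
	sortTags(tags)
	fmt.Println(tags) // numeric order, not lexical
}
```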

It looks like Synapse has some bisectability baked into the Complement Synapse image:

ARG SYNAPSE_VERSION=latest

If Complement standardised the format of this (e.g thou shalt accept the build arg HS_VERSION=$semver) then we could, in theory, make https://github.com/matrix-org/dendrite/tree/master/cmd/dendrite-upgrade-tests work for any HS (modulo working out how to do persistence, as currently Dendrite uses postgres and that assumption is baked into the upgrade testing infra: this could just be stating that the directory /data is persisted across runs and you need to just dump your DB there).

This arguably is and isn't feature creep. It is feature creep because it has nothing to do with the integration tests in /tests. It isn't feature creep because it is still ultimately testing HSes, just the upgrade paths.

Thoughts @richvdh (on the basis that you added SYNAPSE_VERSION)?

Think about adding `sync_next_batch` equivalent variable to `CSAPI`

While looking through sytests, there seems to be a concept of storing/updating the next_batch of previous syncs on user objects (CSAPI objects in Complement).

This could be quite helpful for "checkpointing" sync to a particular moment in the test fairly easily.

However, I suspect that adding this would be a massive breaking internal change, as every test in Complement has basically acted with since: null up until this point.

Either that, or add an additional bool to every Sync* function which signifies whether it should continue from the last-stored next_batch variable.
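A sketch of the second option: the client remembers the next_batch from its last sync and hands it back as a since token only when asked. The type and field names here are illustrative stand-ins, not Complement's actual CSAPI:

```go
package main

import "fmt"

// csapi stands in for Complement's CSAPI client object.
type csapi struct {
	lastNextBatch string
}

// recordSync stores the next_batch token from a completed sync.
func (c *csapi) recordSync(nextBatch string) {
	c.lastNextBatch = nextBatch
}

// sinceToken returns the stored token for incremental syncs, or ""
// (i.e. since=null, today's behaviour) for a full sync.
func (c *csapi) sinceToken(incremental bool) string {
	if incremental {
		return c.lastNextBatch
	}
	return ""
}

func main() {
	c := &csapi{}
	c.recordSync("s72594_4483_1934")
	fmt.Println(c.sinceToken(true))
	fmt.Println(c.sinceToken(false) == "")
}
```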

Add more docs for how the federation requirements work for images

Some implementations care quite a lot about what hostname they are expected to serve, and presumably Complement spins up multiple images to create a local federation for some tests. How is the image meant to know which hostname it is running as?

The real world example of this is my matrix-media-repo project: it needs to know which domain it is running for so it can accurately bind all the routes, media IDs, etc.

Synapse will not pass TestKeysQueryWithDeviceIDAsObjectFails due to backwards-compatibility handling

TestKeysQueryWithDeviceIDAsObjectFails checks that non-spec compliant behaviour does not occur. This behaviour was specifically being exhibited by Element iOS (issue). Synapse will likely maintain backwards compatibility here for the long line of old Element iOS clients, and thus Synapse is unlikely to pass this test any time soon.

We'd like Synapse to pass a well-defined subset of tests in full, so that Complement doesn't soft-fail in our CI. Perhaps one solution is to add a build tag for backwards-compatibility checks such as this one, which Synapse can disable?

Helper function format

We are sprouting helper functions which are missing:

  • t as the first arg
  • t.Helper() called in the function itself.

E.g:

func FailJoinRoom(c *client.CSAPI, t *testing.T, roomIDOrAlias string, serverName string)

The latter means that if the helper function fails the test, the reported line number is that of the helper function (which is unhelpful) instead of the caller's. E.g. msc3083_test.go:25: CSAPI.MustDoWithStatusRaw POST http://172.17.0.1:49215/_matrix/client/r0/join/%21axRzfYYvbjlmsBAGdx:hs1?access_token=syt_Ym9i_gWbzZkqCcutSaHXBQTbq_0Ohh5T&server_name=hs1 returned HTTP 200, expected 403 which maps to

c.MustDoWithStatusRaw(

Related: #110

Parallelise tests by default

Tests are currently only run sequentially, unless t.Parallel() is called in a subtest. This will not scale as we increase the number of tests. The go test framework will parallelise tests in different packages by default, but the environments are completely isolated from each other so test suites cannot co-ordinate who is making what blueprint, or when to clean up docker containers. It would be good to figure out a way of doing this such that we have the ability to have at least some parallelisation by default to make it not scale with the number of tests quite so badly.

We may be able to do the package solution, provided we have some co-ordination mechanism for cleanup, even if it's just a naive "okay, all packages get their own namespace to work with". From a sheer complexity PoV this may actually be the best solution as it's easier to reason about (in that you don't need to care about who set up what, at the cost of making O(n) blueprints where n is the number of packages).

Then the problem is how to slice and dice the existing tests. I tried this briefly, and we cannot split based entirely on feature, due to how the build tags work. If a directory has test files which are all build tagged up (e.g MSCs) then go test complains that there is nothing testable there (when the tag is not specified). We could add a dummy test file, but that is icky given we then expect new MSCs to add new directories, causing the writer of the test to know about this.

We could split based on protocol (CS API, SS API, ST API) but that reduces the amount of helper function sharing that can be done, which is typically per-feature (think registration, knocking). Currently all non-test functions share a global namespace as they are all in the same package, which is a bit gross in terms of having a mess of global functions lying around.

Thoughts welcome.

Autogen per-test docs for discoverability

While working on importing sytests, I found myself ctrl-shift-f-ing on keywords to see what tests already existed and whether a sytest would fit a given file; I think this shows poor test discoverability.

When I was documenting some sytests (and their behaviour), I came upon something like the following format:

sytest:
  - name: "Non-present room members cannot ban others"
    type:
      - power_level
      - ban
    desc:
      When a user does not exist in a room, but has powerlevels there, it should not be able to ban users.
    variants:
      - user has never entered the room
      - user has left the room
      - user has been kicked while ban is sent out
      - user server has been ACL-d
  - name: "New federated private chats get full presence information (SYN-115)"
    # ...

I think it would be useful to programmatically document (and maybe link) these tests, so that a quick glance shows which ones exist, what each does, what variants of behaviour it also tests, and in which "areas" of Matrix testing it sits.


For a final variant of the above schema, I think adding a path (something like tests/csapi/rooms_state_test.go:TestRoomCreationReportsEventsToMyself) would help auto-identify the corresponding function.

Complement is not using a Content-Type of application/json for federation request responses

When running the test Outbound federation can query profile data, Synapse attempts to query for profile information, however the handler returns JSON information with a Content-Type of text/plain; charset=utf-8:

2020-11-04 16:53:11,545 - synapse.http.matrixfederationclient - 190 - WARNING - GET-1 - {GET-O-1} [host.docker.internal] Error reading response GET matrix://host.docker.internal/_matrix/federation/v1/query/profile?user_id=%40user%3Ahost.docker.internal&field=displayname: Failed to send request: RuntimeError: Content-Type not application/json: was 'text/plain; charset=utf-8'

The federation api spec states that responses should use a Content-Type of application/json unless otherwise stated.

Adding w.Header().Set("Content-Type", "application/json") to the request handler function fixes this test. I'm not sure whether we want to manually add this everywhere, or whether we can make it the default for federation responses.

TestJoinViaRoomIDAndServerName assumes the homeserver will extract domain names from room IDs

// join the room by room ID alone - the server will need to extract the domain for this to work.
alice := deployment.Client(t, "hs1", "@alice:hs1")
alice.JoinRoom(t, serverRoom.RoomID)

Room IDs are meant to be treated as opaque strings, and thus homeservers are not supposed to extract domain names from them. Having a test that relies on this functionality does not seem right.

(Yes, Dendrite will currently extract domain names from room IDs as a last resort).

Allow "user-defined" list of skipped tests

Currently, through the use of runtime.SkipIf, tests (from within that test) are skipped depending on which known runtime/server it is testing against.

This is fine considering that Synapse and Dendrite are closely developed alongside Complement; however, there's a problem if server X comes into play.

Server X's developers are not part of Element or Matrix.org, and developed their own homeserver off on the side, engaging in the community and open spirit of matrix itself.

However, they are still in the middle of development on that homeserver, many features and endpoints not finalized, yet they still want to use Complement.

The current option would be to PR against Complement's codebase, and register their own server.

Then, every time they add a feature, or Complement adds another incompatible test, they would have to stop the world and wait until the SkipIf addition or removal PR has landed.

Even then, if they run CI against :latest and Complement adds new tests, old CI runs cannot be comprehensively retried unless the exact image hash is known and still available.


These points may be doable, however, they have a few breaking points;

  • What if the developers (for whatever reason) refuse to use GitHub, or simply do not find it feasible to contribute in public?
  • What if their homeserver project is semi-secretive, and its name shouldn't be known?
  • What about their development speed, if it has to constantly await complement's approval?

All of these issues could sap or otherwise burn those developers' motivation or patience to work on Matrix, and would raise the threshold of motivation and effort needed to develop Matrix homeservers, while I think these issues are completely avoidable.

Synapse and Dendrite don't have to deal with some of these questions, because their development is pretty closely tied to Complement's developers and internal development channels. But I hope the above illustrates the difficulty (and de facto "centralisation") that the current setup imposes on a homeserver that wants to start and keep using Complement.

In this scenario, the homeserver awaits complement's development.


Let me propose the following instead;

Complement should try to read a complement.yml in the current directory when it starts; this file can give a blacklist or whitelist of test items that the run wishes to (exclusively) include or exclude.

(This list can also be compressed in JSON in an environment variable, such as COMPLEMENT_TASKLIST)

Complement then selects or deselects the tests to be run; this could possibly be done by a boilerplate checkSkip() invocation at the start of every test.

In this scenario, the homeserver developers could submit whitelist additions/blacklist removals to incrementally support more and more tests for their homeserver, with the definition file for such existing on their development branch, while complement only has to read that at runtime. Complement awaits the homeserver's development.
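A sketch of the selection logic such a checkSkip() could consult. The field and function names are assumptions; in practice the list would be parsed from complement.yml or a COMPLEMENT_TASKLIST-style variable:

```go
package main

import "fmt"

// taskList is a hypothetical parsed form of complement.yml.
type taskList struct {
	Whitelist []string
	Blacklist []string
}

// shouldSkip: a blacklisted test is always skipped, and a non-empty
// whitelist skips everything not on it.
func shouldSkip(list taskList, testName string) bool {
	for _, b := range list.Blacklist {
		if b == testName {
			return true
		}
	}
	if len(list.Whitelist) == 0 {
		return false
	}
	for _, w := range list.Whitelist {
		if w == testName {
			return false
		}
	}
	return true
}

func main() {
	list := taskList{Whitelist: []string{"TestRegistration"}}
	fmt.Println(shouldSkip(list, "TestRegistration"))
	fmt.Println(shouldSkip(list, "TestKnocking"))
}
```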


Additionally, some more features;

Tags could be introduced, something akin to tag:v1.1, which enables every test which is geared towards matrix v1.1 compatibility.

Globs for test names could be used, blacklist: ["test:federation*"], which would exclude all tests starting with "federation" (though, test naming depends a bit on #241 resolution, so this is likely not accurate, nor possibly useful)

Empty 'access_token' parameter passed to /_matrix/client/r0/register

Complement calls /_matrix/client/r0/register with an empty access_token query parameter, which causes Synapse to fail the request. It is valid to call /register with an access token if you're an application service registering a user in your namespace.

=== CONT  TestRegistration/parallel/POST_/register_returns_the_same_device_id_as_that_in_the_request
    client.go:196: Making POST request to http://localhost:32857/_matrix/client/r0/register?access_token=
    client.go:196: Request body: {"auth":{"type":"m.login.dummy"},"username":"user-device","password":"sUp3rs3kr1t","device_id":"my_device_id"}
=== CONT  TestRegistration
    client.go:262: POST /_matrix/client/r0/register => 401 Unauthorized (1.981009ms)
=== CONT  TestRegistration/parallel/POST_/register_returns_the_same_device_id_as_that_in_the_request
    client.go:196: HTTP/1.1 401 Unauthorized
        Transfer-Encoding: chunked
        Access-Control-Allow-Headers: Origin, X-Requested-With, Content-Type, Accept, Authorization
        Access-Control-Allow-Methods: GET, HEAD, POST, PUT, DELETE, OPTIONS
        Access-Control-Allow-Origin: *
        Cache-Control: no-cache, no-store, must-revalidate
        Content-Type: application/json
        Date: Wed, 04 Nov 2020 16:53:03 GMT
        Server: Synapse/1.22.1
        
        55
        {"errcode":"M_UNKNOWN_TOKEN","error":"Unrecognised access token","soft_logout":false}
        0
        
    apidoc_register_test.go:92: CSAPI.MustDo POST http://localhost:32857/_matrix/client/r0/register?access_token= returned HTTP 401

The question of whether Synapse should just ignore an empty access_token query parameter is a good one. The spec doesn't really say one way or the other, so it's probably not good to fail a homeserver for this. Though I'm not sure where an empty access_token would even be valid.

Just wanted to get your thoughts on whether you think this should be solved Synapse-side or Complement-side.
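If this were solved Complement-side, the fix would be to only attach the parameter when a token is actually present. A sketch (the function name is illustrative):

```go
package main

import (
	"fmt"
	"net/url"
)

// registerURL builds the /register URL, attaching access_token only
// when the token is non-empty, so an empty parameter never reaches
// the homeserver's application-service token handling.
func registerURL(base, token string) string {
	u := base + "/_matrix/client/r0/register"
	if token == "" {
		return u
	}
	return u + "?" + url.Values{"access_token": {token}}.Encode()
}

func main() {
	fmt.Println(registerURL("http://localhost:32857", ""))
	fmt.Println(registerURL("http://localhost:32857", "syt_abc"))
}
```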

Add documentation on build tags/homeserver blacklists

People may be confused on how exactly to run optional tests, or how they should implement white/blacklists for their homeservers.

I don't think we should document every build tag that's placed in, but just a quick summary of where and why we normally use them would be good.

Replace COMPLEMENT_VERSION_CHECK_ITERATIONS

It's really not clear what this variable is supposed to do or what it controls. Ultimately it controls how long we're willing to wait for the HS to deploy when using docker run, so perhaps rename it to COMPLEMENT_SPAWN_HS_TIMEOUT_SECS. Given we sleep a bit between checks, we can deprecate COMPLEMENT_VERSION_CHECK_ITERATIONS and automatically convert it to a time in the new format.

BlueprintFederationOneToOne does not work on homeservers that do not extract the domain from room IDs

The blueprint for creating a one-to-one room between two federated homeservers (BlueprintFederationOneToOne) fails on homeservers that do not extract domain names from room IDs.

This is because an instruction will be generated on the second homeserver asking to join the room without providing any server_name query parameters.

I've created a PR to allow instructions to carry query parameters: #37

However this does not solve the full problem, as we still need a way to encode into the blueprint that a room was originally created on hs1, ideally without extracting the domain from the room ID in complement. Perhaps we could add a OriginServerName key to Room objects when creating a room in blueprints, which other homeservers can then retrieve via the room's Ref?

Error handling in constructHomeserver could be unified

Considering this code (with 2 places where an err != nil can happen):

if err != nil {
	log.Printf("%s : failed to deployBaseImage: %s\n", contextStr, err)
	containerID := ""
	if dep != nil {
		containerID = dep.ContainerID
	}
	printLogs(d.Docker, containerID, contextStr)
	return result{
		err: err,
	}
}
d.log("%s : deployed base image to %s (%s)\n", contextStr, dep.BaseURL, dep.ContainerID)
err = runner.Run(hs, dep.BaseURL)
if err != nil {
	d.log("%s : failed to run instructions: %s\n", contextStr, err)
}

If the deploy fails (first err != nil), it gets logged to standard output via log.Printf (and printLogs is called explicitly).
If the runner fails, it gets logged to the Builder (and printLogs is called by the parent).

I think these two error-logging paths should be unified: what do you think is the best approach?

I would recommend having some logging to the standard output, since it can impact the rest of the test.

For instance adding an err param to printLogs, which will get printed at the end of the server log if not nil, called inside construct.

I would be glad to make a PR for this.


Background info: I am trying to understand how #39 can be fixed, and I discovered that the blueprint has an error (but only after spending a couple of hours and finally adding COMPLEMENT_DEBUG=1 :)

2020/11/17 22:57:42 federation_one_to_one_room.hs2 : failed to run instructions: federation_one_to_one_room.hs2 : request http://localhost:32902/_matrix/client/r0/join/%21IiiKfQGoJGEqTPCTXN:hs1?access_token=MDAxMWxvY2F0aW9uIGhzMgowMDEzaWRlbnRpZmllciBrZXkKMDAxMGNpZCBnZW4gPSAxCjAwMWJjaWQgdXNlcl9pZCA9IEBib2I6aHMyCjAwMTZjaWQgdHlwZSA9IGFjY2VzcwowMDIxY2lkIG5vbmNlID0gbjNjbkNJeUs7bmxhMFBubAowMDJmc2lnbmF0dXJlIGjnOvc29ANDh6mwUukBi_1NocFirh6_gE1ZoogwQotxCg returned HTTP 404 Not Found : {"errcode":"M_UNKNOWN","error":"No known servers"}

Account snapshot fails on malformed Matrix ID

Describe the bug

Account snapshot fails when it encounters an invalid Matrix ID (@signalbot in this case), where the :homeserver.tld suffix is somehow missing

To Reproduce

  1. Remove the :homeserver.tld bit in a read receipt event in sync_snapshot.json
  2. Run the account-snapshot script again

Expected behavior
The event should probably be dropped

Desktop (please complete the following information):

  • OS: macOS 11.4
  • Version: latest

Additional context

2021/06/28 14:02:03 Invalid matrix identifier: @signalbot
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x40 pc=0x1291501]

goroutine 1 [running]:
github.com/matrix-org/complement/cmd/account-snapshot/internal.ConvertToBlueprint(0x0, 0x13213ad, 0x3, 0xc000c9c0c0, 0xc, 0xc000711770)
        /path/to/complement/cmd/account-snapshot/internal/blueprint.go:24 +0x41
main.main()
        /path/to/complement/cmd/account-snapshot/main.go:80 +0x4cf
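A sketch of the suggested guard: check that an ID looks like @localpart:domain before converting it, and drop the event otherwise instead of dereferencing a nil result. The validation rule here is a simplification of the full Matrix identifier grammar:

```go
package main

import (
	"fmt"
	"strings"
)

// validUserID reports whether an ID has the @localpart:domain shape,
// so events with malformed senders can be dropped rather than panic.
func validUserID(id string) bool {
	if !strings.HasPrefix(id, "@") {
		return false
	}
	colon := strings.IndexByte(id, ':')
	return colon > 1 && colon < len(id)-1
}

func main() {
	fmt.Println(validUserID("@signalbot"))           // missing :homeserver.tld suffix
	fmt.Println(validUserID("@alice:homeserver.tld")) // well-formed
}
```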

Test log output is no longer streamed as the test runs since CSAPI tests moved to `tests/csapi` (multiple packages in tests)

Before #171 added the csapi package, I was able to run Complement tests and see the log output as they ran. But nowadays, all of the test log output is buffered and only printed when all of the tests finish.

This is caused by Go deciding not to stream log output when there are multiple packages present (we have tests and csapi in the ./tests/... directory).

Related:


I liked the streaming behaviour before for seeing test progress, but it's also important when I try to hook up Element to the homeservers and rooms from Complement: I need to see the logged room IDs etc. to join and inspect, https://github.com/matrix-org/complement/blob/master/ONBOARDING.md#how-do-i-hook-up-a-matrix-client-like-element-to-the-homeservers-spun-up-by-complement-after-a-test-runs

As a workaround, I can get the log streaming behaviour I want by being very selective about the files to run: go test -v -run TestImportHistoricalMessages ./tests/main_test.go ./tests/msc2716_test.go, but this is a manual process, adjusting our generic Synapse test runner every time.

Potential solutions

  1. Move all tests to the tests package.
  2. Just wait for upstream Go changes, although there is a possibility this won't change and just the documentation will be updated with this caveat
  3. Run each test package in its own process, to keep parallelism but still get streamed output
  4. Can we run tests in parallel without separate packages? #171
