matrix-org / complement
Matrix compliance test suite
License: Apache License 2.0
Linux doesn't have host.docker.internal by default; you need additional CLI args when you run the network/container, so we need to do that on startup if running under Linux.
So authors can do stuff like DENDRITE_TRACE_INTERNAL=1
So things like /transactions, /users, /rooms, etc.
The tests Can download without a file name locally and Can download without a file name over federation, which are part of TestMediaWithoutFileName, both check the Content-Type header value that was provided during the initial upload of the test media. They ensure this value does not change between uploading and downloading the media and its associated metadata.
Synapse will, however, augment the content type of text files in order to specify UTF-8 as their encoding. The reasoning behind this is at matrix-org/synapse#7043.
While the spec doesn't explicitly state that the Content-Type response header when downloading a file needs to exactly match what was provided during upload, it does state that it should be the content type of the file, so homeservers should at least try to get it right.
The original sytest this was taken from, Can download without a file name locally, did not check Content-Type. I personally think it is valuable to do so, but we may want to make an exception for this modification; it does seem to be beneficial and not against the spec, however vague the spec might be.
Into something that CI will understand and find useful.
Logs: buildkite.log
At line 1974 in the log file you can see that logs appear to be cut off when there are quite a lot of them.
Interestingly, it actually looks like a chunk of logs is spurted out before Complement actually prints them on line 2518.
And if you look at the end of the actual logs, that lines up with the beginning of the spurted-out logs.
So somehow the tail end of the server logs is being printed out above the rest of them.
I initially wondered if this was a bug with piping the logs to less and viewing them there. But the same happens even without less involved.
This grew out of a discussion here, where it was found that the CI environment variable is used by Complement to know whether it is already running inside a docker container (and thus needs to talk to the host's docker socket) or is running on the host directly. It's assumed that if Complement is running inside docker, CI=true.
@richvdh proposed to rename this variable to COMPLEMENT_IN_DOCKER (or something similar) instead, as it's more descriptive of what the variable actually specifies. Plus, people won't be tempted to add CI-specific things outside of what's necessary (and thus make CI code split further from production code).
Currently, functions such as SyncUntil do little to validate the JSON they're actually getting. I think it could be an easy win to develop a generic validation function which walks the entire sync response and validates it before handing it back to the caller.
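A minimal sketch of what such a generic validator could look like, using only the standard library (the function name and the set of required keys are illustrative; a real validator would also walk rooms.join/invite/leave and check each event):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// validateSyncResponse parses the body and checks the top-level shape
// every sync response must have before handing it back to the caller.
func validateSyncResponse(body []byte) error {
	var res map[string]json.RawMessage
	if err := json.Unmarshal(body, &res); err != nil {
		return fmt.Errorf("sync response is not a JSON object: %w", err)
	}
	if _, ok := res["next_batch"]; !ok {
		return fmt.Errorf("sync response missing next_batch")
	}
	if raw, ok := res["rooms"]; ok {
		var rooms map[string]json.RawMessage
		if err := json.Unmarshal(raw, &rooms); err != nil {
			return fmt.Errorf("rooms is not a JSON object: %w", err)
		}
	}
	return nil
}

func main() {
	fmt.Println(validateSyncResponse([]byte(`{"next_batch":"s1","rooms":{}}`))) // <nil>
}
```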
E.g. via host.docker.internal
Describe the bug
The coverage script can no longer be run because of the newly added csapi directory.
To Reproduce
Steps to reproduce the behavior:
go run ./cmd/sytest-coverage
Expected behavior
Sytest coverage should be output
Screenshots
panic: ./tests/csapi:1:1: illegal UTF-8 encoding (and 1 more errors)
goroutine 1 [running]:
main.main()
/Volumes/T7/code/complement/cmd/sytest-coverage/main.go:56 +0xf45
exit status 2
When running Complement tests against Synapse in CI, the current Complement buildkite pipeline code will check to see if a similarly-named branch exists on Synapse's repo.
As is often the case, if the name of the branch is master, we'll end up looking for a branch named master on Synapse's side, which then grabs the latest release of Synapse rather than Synapse's develop branch.
This can cause CI to fail on Complement master while the latest release of Synapse does not support tests that are intended to be run against Synapse develop. We saw this recently with TestKnocking, which then magically fixed itself after Synapse 1.37.0 was released today.
The fix is to insert some bash to say "if $BUILDKITE_BRANCH is master, use Synapse develop".
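The branch mapping described above could be sketched roughly like this (the function name is illustrative; the actual pipeline code will differ):

```shell
#!/bin/sh
# Map the Complement branch under test to the Synapse branch to fetch:
# Complement "master" should test against Synapse "develop" rather than
# a same-named branch (which would resolve to the latest release).
synapse_branch_for() {
  branch="$1"
  if [ "$branch" = "master" ]; then
    echo "develop"
  else
    echo "$branch"
  fi
}
```

Usage in the pipeline would then be something like `SYNAPSE_BRANCH=$(synapse_branch_for "$BUILDKITE_BRANCH")`.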
Otherwise calls to t.Fatalf will not print the failing event, as the test is immediately terminated (though deferred functions will still execute).
I just had the following test flake on me: https://buildkite.com/matrix-dot-org/synapse/builds/17313#b4af426b-92c0-49cd-8f82-445f3c46c6f3
I haven't investigated yet to see whether it's a Synapse issue or a Complement one.
CC @clokep
We have an ApplicationService struct, but the YAML is currently hard-coded; we should add fields for things like regexps.
xref #68 (comment)
The context here is that we 'need' some way to run integration tests for non-Matrix-standard features, such as those implemented by custom Synapse modules or maybe even Synapse-specific behaviours (if/where there is good reason).
Without knowing too much, Complement seems basically perfect for this use case, but it really wouldn't make sense to add these kinds of tests straight to Complement's repository.
It would be nice to have out-of-repository test suites (for example: alongside a Synapse module, say https://github.com/matrix-org/synapse-user-restrictions/, but you can also imagine weirder customer-specific modules).
Is Complement suitable for being extended this way? What would it take?
I'm not familiar with Go, so I can't easily suggest the right approach (I can speculate, but I'll refrain from doing so for now).
Currently, while browsing the test repository, I find it difficult to navigate to tests I expect to exist (and/or not to exist), as the file naming scheme (and test grouping) seems to be all over the place.
I think it'll be worth it (for organisational purposes) to organise files and scope tests such that someone new coming into the repository can find their way around and discover related tests more easily.
Some current complaints and notes: some files seem to inherit sytest's vague naming scheme (apidoc, federation, etc.), some simply document and test MSCs, and some restrict themselves to very narrow scopes (account_change_password_pushers_test.go).
Initially I thought that this function was generic enough to just match key and throw that variable into the check function, but it turns out that's not the case.
I suggest renaming it to SyncUntilArray, and adding a generic function SyncUntil which does the expected behaviour described above.
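A sketch of the shape the proposed generic SyncUntil could take (the signature is hypothetical; Complement's real version would loop, re-syncing until the predicate returns true or a timeout fires, whereas this evaluates a single response):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// syncUntil hands the whole decoded sync response to a caller-supplied
// predicate and reports whether it matched.
func syncUntil(body []byte, check func(map[string]interface{}) bool) (bool, error) {
	var res map[string]interface{}
	if err := json.Unmarshal(body, &res); err != nil {
		return false, err
	}
	return check(res), nil
}

func main() {
	body := []byte(`{"next_batch":"s2"}`)
	ok, _ := syncUntil(body, func(res map[string]interface{}) bool {
		return res["next_batch"] == "s2"
	})
	fmt.Println(ok) // true
}
```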
Currently the build tags get duplicated in a handful of places:
complement.sh script
Note that the build tags are homeserver-specific (and might vary based on the version of the homeserver).
It would be great if complement was able to know which build tags to use somehow by inspecting a file or the homeserver capabilities. This would help avoid the various configs above getting out of sync.
Additionally we should probably:
And actually decrypting them. We support olm as a dependency now, so it should be possible.
=== RUN TestRestrictedRoomsRemoteJoinLocalUser
msc3083_test.go:191: Deploy times: 6.570039ms blueprints, 3.985522787s containers
client.go:370: POST hs2/_matrix/client/r0/createRoom => 200 OK (226.771354ms)
client.go:370: POST hs2/_matrix/client/r0/createRoom => 400 Bad Request (5.686334ms)
msc3083_test.go:204: CSAPI.MustDo POST http://172.17.0.1:49305/_matrix/client/r0/createRoom returned HTTP 400
https://github.com/matrix-org/complement/runs/3180791920#step:7:1029
Developers often want to run new tests against a local version of their HS. This is cumbersome at present, as you have to rebuild a docker image before you can run the test. Instead, we can expose a directory on the container which will be volume-mounted from a new env var, e.g.:
COMPLEMENT_MOUNT=/Users/me/my/local/dendrite
COMPLEMENT_BASE_IMAGE=dendrite-which-reads-mounted-dir:latest
go test ./...
COMPLEMENT_MOUNT=/Users/me/my/local/dendrite:/data
(like /ca is done for COMPLEMENT_CA=1), as we will inevitably expand this in the future (e.g. appservices need registration YAML files in the container). We can either be prescriptive and say that we map to a static directory layout:
/complement
|__ /appservice
|__ /ca
|__ /mount
or we can make this configurable e.g:
COMPLEMENT_MOUNT=/Users/me/my/local/dendrite:/mnt
COMPLEMENT_CA=/Users/me/my/ca:/certs
COMPLEMENT_AS=/Users/me/my/appservices:/as
This feels wrong though, as it then ties env vars to a specific docker image, which isn't what we're trying to do here (we want them to be interchangeable), so let's be prescriptive.
Before too many more homeserver dockerfiles are written, what do people think about removing the COMPLEMENT_CA environment variable and making its behaviour the default? /ca will just be mounted in all docker containers, and homeserver dockerfiles can choose whether or not to use the certificate and key in /ca.
Existing dockerfiles, such as the Synapse monolith one, will need to be updated, but I've been planning to adjust that one to use Complement's CA anyhow.
A recent change seems to have broken Homerunner. If I install from master and try to run:
curl -XPOST -d '{"base_image_uri":"complement-dendrite", "blueprint_name":"federation_one_to_one_room"}' http://localhost:54321/create
I get this error:
2021/08/04 17:38:49 .federation_one_to_one_room.hs1 : failed to ContainerCommit: invalid reference format
2021/08/04 17:38:49 .federation_one_to_one_room.hs2 : failed to ContainerCommit: invalid reference format
2021/08/04 17:38:50 could not construct blueprint: .federation_one_to_one_room.hs1 : failed to ContainerCommit: invalid reference format
2021/08/04 17:38:50 could not construct blueprint: .federation_one_to_one_room.hs2 : failed to ContainerCommit: invalid reference format
Dendrite has https://github.com/matrix-org/dendrite/tree/master/cmd/dendrite-upgrade-tests which will automatically pull and run different versions of Dendrite with the same database and ensure db migrations work correctly. It can be run, for example, with ./dendrite-upgrade-tests --from 0.1.0 --to 0.3.1. This will:
run each version in turn, from --from through to the current checkout (HEAD) and on to --to, against the same database.
It looks like Synapse has some bisectability baked into the Complement Synapse image:
complement/dockerfiles/Synapse.Dockerfile
Line 14 in 76db9e5
If Complement standardised the format of this (e.g. thou shalt accept the build arg HS_VERSION=$semver) then we could, in theory, make https://github.com/matrix-org/dendrite/tree/master/cmd/dendrite-upgrade-tests work for any HS (modulo working out how to do persistence; currently Dendrite uses postgres, and that assumption is baked into the upgrade testing infra: this could just be stating that the directory /data is persisted across runs and you need to dump your DB there).
This arguably is and isn't feature creep. It is feature creep because it has nothing to do with the integration tests in /tests. It isn't feature creep because it is still ultimately testing HSes, just the upgrade paths.
Thoughts @richvdh (on the basis that you added SYNAPSE_VERSION)?
In addition to Sytest, for now.
Hopefully we'll start getting a trickle of PRs, so it'll be a good idea to get CI working on it.
While looking through sytests, there seems to be a concept of storing/updating the next_batch of previous syncs with user objects (CSAPI objects in Complement).
This could be quite helpful for "checkpointing" sync to a particular moment in the test fairly easily.
However, I suspect that adding this would be a massive breaking internal change, as every test in Complement has basically acted with since: null up until this point.
Either that, or add an additional bool to every Sync* function which signifies whether it should try to continue from the last-stored next_batch variable.
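A rough sketch of the checkpointing idea (the struct stands in for Complement's CSAPI client; the field and method names are hypothetical):

```go
package main

import "fmt"

// CSAPI is a stand-in for Complement's client struct, extended with the
// proposed last-stored next_batch checkpoint.
type CSAPI struct {
	UserID    string
	nextBatch string // updated after every successful sync
}

// syncSince returns the token the next sync should use: the stored
// checkpoint if useCheckpoint is set, else "" (i.e. since: null).
func (c *CSAPI) syncSince(useCheckpoint bool) string {
	if useCheckpoint {
		return c.nextBatch
	}
	return ""
}

// recordNextBatch is what every Sync* function would call on success.
func (c *CSAPI) recordNextBatch(token string) { c.nextBatch = token }

func main() {
	c := &CSAPI{UserID: "@alice:hs1"}
	c.recordNextBatch("s72_1234")
	fmt.Println(c.syncSince(true)) // s72_1234
}
```

The bool parameter keeps existing since: null behaviour as the default, which avoids the breaking change mentioned above.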
Some implementations care quite a lot about what hostname they are expected to be, and presumably complement spins up multiple images to create a local federation for some tests. How is the image meant to know what hostname it is running for?
The real world example of this is my matrix-media-repo project: it needs to know which domain it is running for so it can accurately bind all the routes, media IDs, etc.
As per the title: msc2716_test.go does use filters for its functionality, but sync_filter_test.go doesn't test filters by itself, nor different types or configurations.
TestKeysQueryWithDeviceIDAsObjectFails checks that non-spec-compliant behaviour does not occur. This behaviour was specifically being exhibited by Element iOS (issue). Synapse will likely maintain backwards compatibility here for the long line of old Element iOS clients, and thus Synapse is unlikely to pass this test any time soon.
We'd like to get Synapse passing all of a subset of tests in order to have Complement not soft-fail in our CI. Perhaps one solution is to add a build tag for backwards-compatibility checks such as this one, which Synapse can disable?
We are sprouting helper functions which are missing: t as the first arg, and t.Helper() called in the function itself. E.g:
func FailJoinRoom(c *client.CSAPI, t *testing.T, roomIDOrAlias string, serverName string)
The latter means that if the helper function fails the test, the line number of the helper function is reported (which is unhelpful) instead of the caller's. E.g. msc3083_test.go:25: CSAPI.MustDoWithStatusRaw POST http://172.17.0.1:49215/_matrix/client/r0/join/%21axRzfYYvbjlmsBAGdx:hs1?access_token=syt_Ym9i_gWbzZkqCcutSaHXBQTbq_0Ohh5T&server_name=hs1 returned HTTP 200, expected 403
which maps to
complement/tests/msc3083_test.go
Line 25 in 288b922
Related: #110
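A sketch of the convention being asked for (the helper name is hypothetical; the interface stands in for the small slice of *testing.T the pattern needs, so it can run outside `go test`):

```go
package main

import "fmt"

// testLike captures just the bit of *testing.T the convention needs;
// *testing.T satisfies it, and a stub can stand in for demonstration.
type testLike interface {
	Helper()
}

// failJoinRoom follows the convention under discussion: t comes first,
// and t.Helper() is called immediately so a failure is reported at the
// caller's line, not the helper's.
func failJoinRoom(t testLike, c interface{}, roomIDOrAlias, serverName string) {
	t.Helper()
	// ... attempt the join via c and assert it is rejected ...
}

type stubT struct{ helperCalled bool }

func (s *stubT) Helper() { s.helperCalled = true }

func main() {
	s := &stubT{}
	failJoinRoom(s, nil, "#room:hs1", "hs1")
	fmt.Println(s.helperCalled) // true
}
```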
Tests are currently only run sequentially, unless t.Parallel() is called in a subtest. This will not scale as we increase the number of tests. The go test framework will parallelise tests in different packages by default, but the environments are completely isolated from each other, so test suites cannot co-ordinate who is making what blueprint, or when to clean up docker containers. It would be good to figure out a way of doing this such that we have at least some parallelisation by default, so things don't scale with the number of tests quite so badly.
We may be able to do the package solution, provided we have some co-ordination mechanism for cleanup, even if it's just a naive "okay, all packages get their own namespace to work with". From a sheer complexity PoV this may actually be the best solution, as it's easier to reason about (in that you don't need to care about who set up what, at the cost of making O(n) blueprints, where n is the number of packages). Then the problem is how to slice and dice the existing tests. I tried this briefly, and we cannot split based entirely on feature, due to how the build tags work. If a directory has test files which are all build-tagged up (e.g. MSCs), then go test complains that there is nothing testable there (when the tag is not specified). We could add a dummy test file, but that is icky, given we then expect new MSCs to add new directories, requiring the writer of the test to know about this. We could split based on protocol (CS API, SS API, ST API), but that reduces the amount of helper function sharing that can be done, which is typically per-feature (think registration, knocking). Currently all non-test functions share a global namespace, as they are all in the same package, which is a bit gross in terms of having a mess of global functions lying around.
Thoughts welcome.
While working on importing sytests, I found myself ctrl-shift-f-ing on keywords to see what tests already existed and whether a sytest would fit a given file; I think this demonstrates some poor test discoverability.
However, when I was documenting some sytests (and their behaviour), I came upon something like the following format:
sytest:
- name: "Non-present room members cannot ban others"
type:
- power_level
- ban
desc:
When a user does not exist in a room, but has powerlevels there, it should not be able to ban users.
variants:
- user has never entered the room
- user has left the room
- user has been kicked while ban is sent out
- user server has been ACL-d
- name: "New federated private chats get full presence information (SYN-115)"
# ...
I think it would be useful to programmatically document (and maybe link) these tests, so that a quick glance can show which ones exist, what each does, what variants of behaviour it's also testing, and in which "areas" of Matrix testing it sits.
As a final variant of the above schema, I think adding a path (something like tests/csapi/rooms_state_test.go:TestRoomCreationReportsEventsToMyself) would help auto-identify the corresponding function.
When running the test Outbound federation can query profile data, Synapse attempts to query for profile information; however, the handler returns JSON with a Content-Type of text/plain; charset=utf-8:
2020-11-04 16:53:11,545 - synapse.http.matrixfederationclient - 190 - WARNING - GET-1 - {GET-O-1} [host.docker.internal] Error reading response GET matrix://host.docker.internal/_matrix/federation/v1/query/profile?user_id=%40user%3Ahost.docker.internal&field=displayname: Failed to send request: RuntimeError: Content-Type not application/json: was 'text/plain; charset=utf-8'
The federation API spec states that responses should use a Content-Type of application/json unless otherwise stated.
Adding w.Header().Set("Content-Type", "application/json") to the request handler function fixes this test. I'm not sure whether we want to manually add this everywhere, or whether we can make it the default for federation responses.
Rather than just per-file like we have currently.
complement/tests/federation_room_join_test.go
Lines 56 to 58 in 8cbf875
Room IDs are meant to be treated as opaque strings, and thus homeservers are not supposed to extract domain names from them. Having a test that relies on this functionality does not seem right.
(Yes, Dendrite will currently extract domain names from room IDs as a last resort).
Currently, through the use of runtime.SkipIf, tests are skipped (from within the test) depending on which known runtime/server they are being run against.
This is fine considering that Synapse and Dendrite are closely developed alongside Complement; however, there's a problem if server X comes into play.
Server X's developers are not part of Element or Matrix.org; they developed their own homeserver off on the side, engaging in the community and open spirit of Matrix itself.
However, they are still in the middle of developing that homeserver, with many features and endpoints not finalised, yet they still want to use Complement.
The current option would be to PR against Complement's codebase and register their own server.
Then, every time they add a feature, or Complement adds another incompatible test, they would have to stop the world and wait until the skipIf addition or removal PR has landed.
Even then, if they CI against :latest and Complement adds new tests, old CI runs cannot be comprehensively retried unless the exact image hash is known and still available.
These points may be workable; however, they have a few breaking points.
All of these issues could sap or otherwise burn those developers' motivation or patience to work on Matrix, and would raise the threshold of motivation and effort required to develop Matrix homeservers, while I think these issues are completely avoidable.
Synapse and Dendrite don't have to deal with some of these questions, because their development is closely tied to Complement's developers and internal development channels. But I hope that some of the above illustrates the difficulty (and de facto "centralisation") that the current setup requires for a homeserver to start and keep using Complement.
In this scenario, the homeserver awaits Complement's development.
Let me propose the following instead;
Complement should try to read a complement.yml in the current directory when it starts to run; this file can give a blacklist or whitelist of test items that this run wishes to (exclusively) include or exclude.
(This list could also be compressed into JSON in an environment variable, such as COMPLEMENT_TASKLIST.)
Complement then selects or deselects the tests to be run; this could possibly be done via a boilerplate checkSkip() invocation at the start of every test.
In this scenario, the homeserver developers could submit whitelist additions/blacklist removals to incrementally support more and more tests for their homeserver, with the definition file living on their development branch, while Complement only has to read it at runtime. Complement awaits the homeserver's development.
Additionally, some more features;
Tags could be introduced, something akin to tag:v1.1, which enables every test geared towards Matrix v1.1 compatibility.
Globs for test names could be used, e.g. blacklist: ["test:federation*"], which would exclude all tests starting with "federation" (though test naming depends a bit on the resolution of #241, so this is likely not accurate, nor possibly useful).
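A small sketch of the proposed checkSkip() glob matching (a real implementation would read complement.yml or COMPLEMENT_TASKLIST at startup; the pattern syntax here is Go's path.Match, which may differ from whatever glob dialect is ultimately chosen):

```go
package main

import (
	"fmt"
	"path"
)

// checkSkip reports whether a test name matches any blacklist glob and
// should therefore be skipped for this run.
func checkSkip(testName string, blacklist []string) bool {
	for _, pattern := range blacklist {
		if ok, _ := path.Match(pattern, testName); ok {
			return true
		}
	}
	return false
}

func main() {
	blacklist := []string{"test:federation*"}
	fmt.Println(checkSkip("test:federation_send_join", blacklist)) // true
	fmt.Println(checkSkip("test:csapi_register", blacklist))       // false
}
```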
Complement calls /_matrix/client/r0/register with an empty access_token query parameter, which causes Synapse to fail the request. It is valid to call /register with an access token if you're an application service registering a user in your namespace.
=== CONT TestRegistration/parallel/POST_/register_returns_the_same_device_id_as_that_in_the_request
client.go:196: Making POST request to http://localhost:32857/_matrix/client/r0/register?access_token=
client.go:196: Request body: {"auth":{"type":"m.login.dummy"},"username":"user-device","password":"sUp3rs3kr1t","device_id":"my_device_id"}
=== CONT TestRegistration
client.go:262: POST /_matrix/client/r0/register => 401 Unauthorized (1.981009ms)
=== CONT TestRegistration/parallel/POST_/register_returns_the_same_device_id_as_that_in_the_request
client.go:196: HTTP/1.1 401 Unauthorized
Transfer-Encoding: chunked
Access-Control-Allow-Headers: Origin, X-Requested-With, Content-Type, Accept, Authorization
Access-Control-Allow-Methods: GET, HEAD, POST, PUT, DELETE, OPTIONS
Access-Control-Allow-Origin: *
Cache-Control: no-cache, no-store, must-revalidate
Content-Type: application/json
Date: Wed, 04 Nov 2020 16:53:03 GMT
Server: Synapse/1.22.1
55
{"errcode":"M_UNKNOWN_TOKEN","error":"Unrecognised access token","soft_logout":false}
0
apidoc_register_test.go:92: CSAPI.MustDo POST http://localhost:32857/_matrix/client/r0/register?access_token= returned HTTP 401
The question of whether Synapse should just ignore an empty access_token query parameter is a good one. The spec doesn't really say one way or the other, so it's probably not good to fail a homeserver over this. Though I'm not sure where using an empty access_token would even be valid either.
Just wanted to get your thoughts on whether you think this should be solved Synapse-side or Complement-side.
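If this were solved Complement-side, the fix would be to only attach the parameter when a token is actually present. A sketch (the function name is hypothetical, not Complement's actual client code):

```go
package main

import (
	"fmt"
	"net/url"
)

// buildRegisterURL attaches the access_token query parameter only when
// it is non-empty, so ordinary registration never sends "?access_token=".
func buildRegisterURL(base, accessToken string) string {
	u, _ := url.Parse(base + "/_matrix/client/r0/register")
	q := u.Query()
	if accessToken != "" {
		q.Set("access_token", accessToken)
	}
	u.RawQuery = q.Encode()
	return u.String()
}

func main() {
	fmt.Println(buildRegisterURL("http://localhost:32857", ""))
	fmt.Println(buildRegisterURL("http://localhost:32857", "as_token"))
}
```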
People may be confused on how exactly to run optional tests, or how they should implement white/blacklists for their homeservers.
I don't think we should document every build tag that's placed in, but just a quick summary of where and why we normally use them would be good.
Currently blocking #221
It's really not clear what this is supposed to do and what it controls. It ultimately controls how long we're willing to wait for the HS to deploy when using docker run, so perhaps rename it to COMPLEMENT_SPAWN_HS_TIMEOUT_SECS. Given we sleep a bit between checks, we can deprecate COMPLEMENT_VERSION_CHECK_ITERATIONS and automatically convert it to a time in the new format.
The blueprint for creating a one-to-one room between two federated homeservers (BlueprintFederationOneToOne) fails on homeservers that do not extract domain names from room IDs.
This is because an instruction will be generated on the second homeserver asking it to join the room without providing any server_name query parameters.
I've created a PR to allow instructions to carry query parameters: #37
However, this does not solve the full problem, as we still need a way to encode into the blueprint that a room was originally created on hs1, ideally without extracting the domain from the room ID in Complement. Perhaps we could add an OriginServerName key to Room objects when creating a room in blueprints, which other homeservers can then retrieve via the room's Ref?
Considering this code (with two places where an err != nil can happen):
complement/internal/docker/builder.go
Lines 269 to 284 in 9acaf3c
If the deploy fails (the first err != nil), it gets logged to the standard output via log.Printf (and printLogs is called explicitly).
If the runner fails, it gets logged to the Builder (and printLogs is called by the parent).
I think these two error-logging paths should be unified: what do you think is the best approach?
I would recommend having some logging to the standard output, since it can impact the rest of the test.
For instance, adding an err param to printLogs, which gets printed at the end of the server log if not nil, called inside construct.
I would be glad to make a PR for this.
Background info: I am trying to understand how #39 can be fixed, and I discovered that the blueprint has an error (but only after spending a couple of hours and finally adding COMPLEMENT_DEBUG=1 :)
2020/11/17 22:57:42 federation_one_to_one_room.hs2 : failed to run instructions: federation_one_to_one_room.hs2 : request http://localhost:32902/_matrix/client/r0/join/%21IiiKfQGoJGEqTPCTXN:hs1?access_token=MDAxMWxvY2F0aW9uIGhzMgowMDEzaWRlbnRpZmllciBrZXkKMDAxMGNpZCBnZW4gPSAxCjAwMWJjaWQgdXNlcl9pZCA9IEBib2I6aHMyCjAwMTZjaWQgdHlwZSA9IGFjY2VzcwowMDIxY2lkIG5vbmNlID0gbjNjbkNJeUs7bmxhMFBubAowMDJmc2lnbmF0dXJlIGjnOvc29ANDh6mwUukBi_1NocFirh6_gE1ZoogwQotxCg returned HTTP 404 Not Found : {"errcode":"M_UNKNOWN","error":"No known servers"}
Describe the bug
The account snapshot fails when it encounters an invalid Matrix ID: @signalbot in this case, where the suffix :homeserver.tld is somehow missing.
To Reproduce
Remove the :homeserver.tld bit in a read receipt event in sync_snapshot.json, then run the account-snapshot script again.
Expected behavior
The event should probably be dropped
Additional context
2021/06/28 14:02:03 Invalid matrix identifier: @signalbot
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x40 pc=0x1291501]
goroutine 1 [running]:
github.com/matrix-org/complement/cmd/account-snapshot/internal.ConvertToBlueprint(0x0, 0x13213ad, 0x3, 0xc000c9c0c0, 0xc, 0xc000711770)
/path/to/complement/cmd/account-snapshot/internal/blueprint.go:24 +0x41
main.main()
/path/to/complement/cmd/account-snapshot/main.go:80 +0x4cf
Before #171, when the csapi package was added, I was able to run Complement tests and see the log output as the tests ran. But nowadays, all of the test log output is buffered and only printed when all of the tests finish.
This is caused by Go deciding not to stream log output when there are multiple packages present (we have tests and csapi in the ./tests/... directory):
Related:
I liked the streaming behavior before, to see the test progress, but it's also important when I try to hook up Element to the homeserver and rooms from Complement, so I can see the log output of room IDs etc. to join and inspect: https://github.com/matrix-org/complement/blob/master/ONBOARDING.md#how-do-i-hook-up-a-matrix-client-like-element-to-the-homeservers-spun-up-by-complement-after-a-test-runs
As a workaround, I can get the log streaming behavior I want by being very selective about the files to run: go test -v -run TestImportHistoricalMessages ./tests/main_test.go ./tests/msc2716_test.go, but this is a manual process, adjusting our generic Synapse test runner every time.
tests package.