opendevstack / ods-core
The core of OpenDevStack - infrastructure setup based on Atlassian tools, Jenkins, Nexus, SonarQube and shared images
License: Apache License 2.0
While I was installing the environment following the instructions provided, it failed because some packages were missing that the Ansible playbooks didn't install, and I was forced to install them manually:
The setup of blob store and repositories should be handled by a groovy script in nexus3.
The script can be uploaded and executed through the rest api.
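As a sketch of how that could look (the Script API path, credentials and Groovy content below are assumptions to be adjusted for the actual instance), the script can be uploaded and executed with curl:

```shell
# Assumed values - adjust NEXUS_URL, credentials and the Groovy content.
NEXUS_URL="${NEXUS_URL:-http://nexus-cd.192.168.99.100.nip.io}"
SCRIPT_NAME="create-blobstore"

# Groovy snippet using the Nexus 3 scripting API to create a file blob store.
GROOVY='blobStore.createFileBlobStore("ods", "ods")'
ESCAPED=$(printf '%s' "$GROOVY" | sed 's/"/\\"/g')
PAYLOAD=$(printf '{"name": "%s", "type": "groovy", "content": "%s"}' "$SCRIPT_NAME" "$ESCAPED")

# Upload the script, then execute it (echoed here as a dry run):
echo curl -u admin:admin123 -H "Content-Type: application/json" -d "$PAYLOAD" "$NEXUS_URL/service/rest/v1/script"
echo curl -X POST -u admin:admin123 -H "Content-Type: text/plain" "$NEXUS_URL/service/rest/v1/script/$SCRIPT_NAME/run"
```

The same pattern would extend to repository creation, with one uploaded script per concern.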
Logs from jenkins master:
Terminated Kubernetes instance for agent tost-cd/jenkins-slave-vm5x7-65pk8
| Disconnected computer jenkins-slave-vm5x7-65pk8
| Mar 06, 2019 7:25:59 AM org.csanchez.jenkins.plugins.kubernetes.KubernetesSlave deleteSlavePod
| INFO: Terminated Kubernetes instance for agent tost-cd/jenkins-slave-vm5x7-65pk8
| Mar 06, 2019 7:25:59 AM org.csanchez.jenkins.plugins.kubernetes.KubernetesSlave _terminate
| INFO: Disconnected computer jenkins-slave-vm5x7-65pk8
| Mar 06, 2019 7:25:59 AM org.jenkinsci.plugins.workflow.job.WorkflowRun finish
| INFO: tost-cd/tost-cd-be-docker-plain-test-master #10 completed: SUCCESS
| Mar 06, 2019 7:25:59 AM hudson.model.listeners.RunListener report
| WARNING: RunListener failed
| java.lang.NullPointerException
| at io.fabric8.jenkins.openshiftsync.BuildSyncRunListener.upsertBuild(BuildSyncRunListener.java:336)
| at io.fabric8.jenkins.openshiftsync.BuildSyncRunListener.pollRun(BuildSyncRunListener.java:226)
| at io.fabric8.jenkins.openshiftsync.BuildSyncRunListener.onFinalized(BuildSyncRunListener.java:193)
| at hudson.model.listeners.RunListener.fireFinalized(RunListener.java:257)
| at hudson.model.Run.onEndBuilding(Run.java:1998)
| at org.jenkinsci.plugins.workflow.job.WorkflowRun.finish(WorkflowRun.java:611)
| at org.jenkinsci.plugins.workflow.job.WorkflowRun.access$900(WorkflowRun.java:132)
| at org.jenkinsci.plugins.workflow.job.WorkflowRun$GraphL.onNewHead(WorkflowRun.java:993)
| at org.jenkinsci.plugins.workflow.cps.CpsFlowExecution.notifyListeners(CpsFlowExecution.java:1440)
| at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$3.run(CpsThreadGroup.java:417)
| at org.jenkinsci.plugins.workflow.cps.CpsVmExecutorService$1.run(CpsVmExecutorService.java:35)
| at hudson.remoting.SingleLaneExecutorService$1.run(SingleLaneExecutorService.java:131)
| at jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28)
| at jenkins.security.ImpersonatingExecutorService$1.run(ImpersonatingExecutorService.java:59)
| at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
| at java.util.concurrent.FutureTask.run(FutureTask.java:266)
| at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
| at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
| at java.lang.Thread.run(Thread.java:748)
https://github.com/openshift/jenkins-sync-plugin/blob/master/src/main/java/io/fabric8/jenkins/openshiftsync/BuildSyncRunListener.java#L336
nodes == null :(
Currently, every component creates two Jenkins pipelines, dev and test, as OCP resources. In addition, every repo has two webhooks (actually four, but two of them are pointless), which trigger both pipelines for every commit. Inside the pipeline, Jenkins checks whether the pipeline is responsible for the commit (test=master, dev=*) and then continues.
This has several drawbacks:
- The Jenkinsfile is always grabbed from master instead of from the current branch, so changes to the Jenkinsfile cannot be tested inside a branch
- The dev pipeline is responsible for all sorts of branches, so its build status is all over the place
- Within the dev pipeline, we cannot run builds for different branches in parallel
To improve this situation, we have built a "webhook proxy" internally. This proxy provides one endpoint accepting webhooks from BitBucket and forwards them to the corresponding Jenkins pipeline (which is determined based on the branch name). If there is no corresponding pipeline yet, it will be created on the fly. Once a branch is deleted or a pull request is declined/merged, the corresponding Jenkins pipeline is deleted automatically. The webhook proxy is a Go application - a single, no-dependency binary, produced from one file using just the standard library.
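For illustration, the branch-to-pipeline mapping the proxy performs could be sketched in shell like this (the project name and the oc commands shown are placeholders; the actual proxy is the Go binary described above and talks to the OCP API directly):

```shell
# Derive a pipeline name from repo + branch, flattening slashes to dashes.
pipeline_name() {
  printf '%s-%s\n' "$1" "$(printf '%s' "$2" | tr '/' '-')"
}

name=$(pipeline_name be-main feature/foo)
echo "$name"   # prints "be-main-feature-foo"

# Create the pipeline BuildConfig on the fly if missing, then trigger it
# (echoed as dry-run commands here):
echo "oc -n tost-cd get bc $name || oc -n tost-cd create -f pipeline.yml"
echo "oc -n tost-cd start-build $name"
```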
This works very reliably so far (tested in two initiatives for about a month) and solves all the pain points mentioned above.
To include it in OpenDevStack, the following has to be changed:
- The proxy currently runs in a central cd OCP namespace. This works well, but it requires that the user inside the pod can create/delete resources in all other namespaces. This should not be the case; we would rather have a proxy running in each foo-cd project, so that the user only needs permissions to create/delete resources within its own project.
- The image is built via oc start-build webhook-proxy --from-dir . --follow --namespace cd, which copies a binary in the current working dir into the Docker image. While this works well, it requires Go to be installed on the local system, which is not something we would want to impose on an ODS user. We either need to create a Jenkins Go slave to build it there, or we would need to build the binary within the OCP build (which is probably the easier option, but increases the size of the Docker image - currently that is based on a barebones alpine image).
- Remove the dev and test pipelines from the OCP templates (they are not needed, the master pipeline will be auto-created on the first push)
FYI @tjaeschke @rattermeyer @clemensutschig @stitakis @gerardcl
As we want to have full traceability, it should not be allowed (at least not by default) to have commits that are not connected to a Jira ticket.
This means that on the main branches (develop or master etc.), only merge commits which reference one or more Jira tickets should be allowed. Most likely this can be done through a BitBucket hook, and/or the shared library ...
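A minimal sketch of such a check (the regex and the hook mechanics are assumptions - BitBucket hook plugins differ in how they expose the commit message):

```shell
# Return success only if the commit message references a Jira ticket key.
has_jira_key() {
  printf '%s' "$1" | grep -Eq '[A-Z][A-Z0-9]+-[0-9]+'
}

has_jira_key "Merge pull request #12: PSP-101 add login" && echo "allowed"
has_jira_key "Merge branch 'hotfix'" || echo "rejected - no Jira ticket referenced"
```

A pre-receive hook would run this per merge commit and reject the push on failure.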
We need such a step/functionality so that we do not run into errors like this in Jenkins:
javax.net.ssl.SSLPeerUnverifiedException: peer not authenticated
at sun.security.ssl.SSLSessionImpl.getPeerCertificates(SSLSessionImpl.java:440)
at org.apache.commons.httpclient.protocol.SSLProtocolSocketFactory.verifyHostName(SSLProtocolSocketFactory.java:257)
at org.apache.commons.httpclient.protocol.SSLProtocolSocketFactory.createSocket(SSLProtocolSocketFactory.java:115)
at org.apache.commons.httpclient.protocol.SSLProtocolSocketFactory.createSocket(SSLProtocolSocketFactory.java:156)
at org.apache.commons.httpclient.HttpConnection.open(HttpConnection.java:714)
at org.apache.commons.httpclient.HttpMethodDirector.executeWithRetry(HttpMethodDirector.java:394)
at org.apache.commons.httpclient.HttpMethodDirector.executeMethod(HttpMethodDirector.java:178)
at org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:404)
at org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:330)
at com.cloudbees.jenkins.plugins.bitbucket.server.client.BitbucketServerAPIClient.getRequest(BitbucketServerAPIClient.java:493)
Caused: java.io.IOException: Communication error for url: /rest/api/1.0/projects/PSP/repos/psp-be-user/branches?start=0
at com.cloudbees.jenkins.plugins.bitbucket.server.client.BitbucketServerAPIClient.getRequest(BitbucketServerAPIClient.java:522)
at com.cloudbees.jenkins.plugins.bitbucket.server.client.BitbucketServerAPIClient.getBranches(BitbucketServerAPIClient.java:363)
Caused: java.io.IOException: I/O error when accessing URL: /rest/api/1.0/projects/PSP/repos/psp-be-user/branches?start=0
at com.cloudbees.jenkins.plugins.bitbucket.server.client.BitbucketServerAPIClient.getBranches(BitbucketServerAPIClient.java:384)
at com.cloudbees.jenkins.plugins.bitbucket.BitbucketSCMSource.retrieve(BitbucketSCMSource.java:782)
at jenkins.scm.api.SCMSource.fetch(SCMSource.java:564)
at org.jenkinsci.plugins.workflow.multibranch.SCMBinder.create(SCMBinder.java:95)
at org.jenkinsci.plugins.workflow.job.WorkflowRun.run(WorkflowRun.java:263)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Finished: FAILURE
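One way to provide such a step (host name, truststore path and password below are JDK defaults and examples, not verified against the actual setup) is to import the Bitbucket server certificate into the truststore of the JVM running Jenkins:

```shell
# Example host and truststore location - adjust for the actual environment.
BITBUCKET_HOST="bitbucket.example.com"
TRUSTSTORE="\$JAVA_HOME/jre/lib/security/cacerts"   # default JRE truststore

# Fetch the server certificate, then import it (echoed here as a dry run):
FETCH="openssl s_client -connect $BITBUCKET_HOST:443 -servername $BITBUCKET_HOST </dev/null | openssl x509 -outform PEM > bitbucket.crt"
IMPORT="keytool -importcert -noprompt -alias bitbucket -file bitbucket.crt -keystore $TRUSTSTORE -storepass changeit"
echo "$FETCH"
echo "$IMPORT"
```

The import would have to happen before Jenkins starts, e.g. in the image build or an init step.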
A colleague reported this evening issues starting the vagrant box:
Installing additional modules ...
VirtualBox Guest Additions: Building the VirtualBox Guest Additions kernel modules.
This system is currently not set up to build kernel modules.
Please install the Linux kernel "header" files matching the current kernel
for adding new hardware support to the system.
The distribution packages containing the headers are probably:
kernel-devel kernel-devel-3.10.0-514.26.2.el7.x86_64
VirtualBox Guest Additions: Running kernel modules will not be replaced until the system is restarted
VirtualBox Guest Additions: Starting.
VirtualBox Guest Additions: modprobe vboxsf failed
An error occurred during installation of VirtualBox Guest Additions 5.2.10. Some functionality may not work as intended.
In most cases it is OK that the "Window System drivers" installation failed.
Redirecting to /bin/systemctl start vboxadd.service
Redirecting to /bin/systemctl start vboxadd-service.service
Job for vboxadd-service.service failed because the control process exited with error code. See "systemctl status vboxadd-service.service" and "journalctl -xe" for details.
Unmounting Virtualbox Guest Additions ISO from: /mnt
==> atlcon: Checking for guest additions in VM...
atlcon: The guest additions on this VM do not match the installed version of
atlcon: VirtualBox! In most cases this is fine, but in rare cases it can
atlcon: prevent things such as shared folders from working properly. If you see
atlcon: shared folder errors, please make sure the guest additions within the
atlcon: virtual machine match the version of VirtualBox you have installed on
atlcon: your host and reload your VM.
atlcon:
atlcon: Guest Additions Version: 5.1.26
atlcon: VirtualBox Version: 5.2
==> atlcon: Setting hostname...
==> atlcon: Configuring and enabling network interfaces...
atlcon: SSH address: 127.0.0.1:2201
atlcon: SSH username: vagrant
atlcon: SSH auth method: private key
==> atlcon: Mounting shared folders...
atlcon: /vagrant => C:/Users/TGR/workspace/ods-core/infrastructure-setup
Vagrant was unable to mount VirtualBox shared folders. This is usually
because the filesystem "vboxsf" is not available. This filesystem is
made available via the VirtualBox Guest Additions and kernel module.
Please verify that these guest additions are properly installed in the
guest. This is not a bug in Vagrant and is usually caused by a faulty
Vagrant box. For context, the command attempted was:
mount -t vboxsf -o uid=1000,gid=1000 vagrant /vagrant
The error output from the command was:
/sbin/mount.vboxsf: mounting failed with the error: No such device
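A plausible fix, based on the hint in the log above (package name taken from the log; commands are echoed as a dry run since they must be executed inside the guest):

```shell
# Kernel version as reported in the error message above.
KERNEL_VERSION="3.10.0-514.26.2.el7.x86_64"

# Install the matching headers plus build tools, then rebuild Guest Additions:
echo "yum install -y gcc make kernel-devel-$KERNEL_VERSION"
echo "/sbin/rcvboxadd setup"
```

Alternatively, using a box whose preinstalled Guest Additions match the host VirtualBox version avoids the rebuild entirely.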
Operators are a new way of running more complex applications. For us, this could mean that things like SonarQube or Nexus could run as an operator. More info on operators is available here: https://coreos.com/operators/.
I googled and found at least one operator for SonarQube: https://github.com/wkulhanek/sonarqube-operator.
@clemensutschig @rattermeyer @tjaeschke @stitakis I guess we need to talk in general how operators affect us and what our strategy is going forward ...
Refactor the base preparation scripts. At the moment the aws cli is installed in the base preparation, which is unnecessary, if you don't use AWS.
From 3.0.6 to 3.11.43
Most customers do not have Atlassian Crowd. A lot already have keycloak.
Roadmap for Atlassian Crowd is unclear. We therefore should support alternative Identity and Access Management solutions. Keycloak is a very popular one at the moment.
To support Keycloak, we have to support Login to Keycloak from
Right now, Jenkins pipelines can only be protected by branch name, defaulting to master,develop,production,staging,release. We should allow users to protect all branches (*) and branches with a certain prefix (indicated by a trailing slash, e.g. release/, feature/).
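The proposed matching rules could look roughly like this (an illustrative sketch, not the actual shared-library code):

```shell
# Return success if the branch matches the protection pattern:
# "*" protects everything, a trailing slash means prefix match,
# anything else is an exact branch name.
branch_protected() {
  branch="$1"; pattern="$2"
  case "$pattern" in
    '*') return 0 ;;
    */)
      case "$branch" in
        "$pattern"*) return 0 ;;
      esac ;;
    *) [ "$branch" = "$pattern" ] && return 0 ;;
  esac
  return 1
}

branch_protected "release/1.0" "release/" && echo "protected"
branch_protected "feature/x" "master" || echo "not protected"
```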
Bitbucket and Crowd support OpenJDK in their current versions; the Jira and Confluence roles use the binary installer, which is shipped with a JRE. Therefore we should be able to remove the Oracle Java role.
Initial commit missed cleanup script for Nexus tasks
Currently, we are one major or multiple minor versions behind releases:
Tool | Default Version | Available |
---|---|---|
Bitbucket | 4.14.3 | 5.12 |
Confluence | 6.1.3 | 6.10.1 |
Jira | 7.3.6 | 7.10.2 |
Or generally speaking, we should define against which version we define certain customizations, e.g. the Confluence blueprint key in the provisioning app.
strategy:
  type: Docker
  dockerStrategy:
    from:
      kind: ImageStreamTag
      name: jenkins:2
      namespace: openshift
this seems pretty odd ?! no?
Installed the latest master / production combo - with tailor update ...
Time | Type | Reason | Message | Count |
---|---|---|---|---|
4:21:52 PM | Normal | Pulled | Container image "registry.access.redhat.com/rhscl/postgresql-95-rhel7@sha256:f5bd38675cdac60b72836f114c88809f9eec9e8c9672f20f339bb90789ba9aa1" already present on machine | 61 times in the last 6 minutes |
4:17:16 PM | Warning | Failed | Error: configmaps "sonarqube" not found | 8 times in the last |
When there is an update to ODS on Github, consumers of ODS should receive those updates or at least be notified about them ... what could we use to do this?
When the Jenkins container starts, it tries to copy the fixed jar into a location that does not exist yet. Only after Jenkins is restarted does it work correctly.
See #86 for addressed problem.
https://serverfault.com/questions/168826/how-to-install-gpg-keys-from-behind-a-firewall
It is possible that one repo contains more than one Jenkinsfile. Now that we have a webhook proxy, it is relatively trivial to allow users to specify the location of each Jenkinsfile, and create one pipeline for each. This would help to deal e.g. with multi-module projects.
The implementation would roughly look like this:
Allow adding a query param to the webhook, e.g. ?dirs=foo,bar. In that case, the webhook proxy would create two pipelines, one pointing to foo/Jenkinsfile and the other pointing to bar/Jenkinsfile. Then it would trigger both pipelines.
Further, each pipeline could check whether it really needs to run. It would do that by checking whether something in the folder changed compared to the last build of that folder. First, it would get the latest built commit sha via oc -n foo-dev get -o template bc/be-main --template={{.spec.output.to.name}}, and then it would check for changes via git diff --exit-code <sha1>..<sha2> <folder>.
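The per-folder change check can be demonstrated in isolation (throwaway repo below; in the real flow, the sha would come from the BuildConfig as described above):

```shell
# Build a tiny repo with two folders and change only one of them.
demo=$(mktemp -d) && cd "$demo"
git init -q .
git config user.email demo@example.com && git config user.name demo
mkdir foo bar
echo 1 > foo/f && echo 1 > bar/b
git add . && git commit -qm "initial"
sha=$(git rev-parse HEAD)            # stands in for the last built commit

echo 2 > bar/b
git add . && git commit -qm "change bar only"

# The pipeline for "foo" can skip its build, as nothing under foo/ changed:
if git diff --quiet "$sha"..HEAD -- foo; then
  echo "foo unchanged, skipping build"
fi
```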
@tjaeschke @rattermeyer @stitakis @clemensutschig Thoughts on this?
It would be helpful to scan commits in BitBucket for strings such as URLs and passwords to give users peace of mind before they push this code to GitHub to contribute to OpenDevStack.
Possible solutions:
In order to provide consistency, the config for Ansible and any other infrastructure setup scripts should be in the ods-configuration-sample repo.
Suppose I want to use ODS as-is - today I need to clone all repos to use Tailor for installation. I think it would be wise to move all OCP config into ods-configuration-sample, so we have one place to clone and install.
Thoughts?
Create required antora component and modules, so documentation can be sourced from this repository.
Related to opendevstack/opendevstack.github.io#91
e.g. https://github.com/opendevstack/ods-core/blob/master/jenkins/ocp-config/bc.yml#L72
We should use the same description as we have in
https://github.com/opendevstack/ods-configuration-sample/blob/master/ods-core/jenkins/ocp-config/bc.env.sample
to make this really consistent.
There is a problem if Nexus uses a self-signed certificate. If, for example, Gradle is used to build an application, Nexus can't be accessed. The build breaks with a certificate validation exception.
A problem occurred configuring root project 'prov-cd-prov-app-dev'.
> Could not resolve all artifacts for configuration ':classpath'.
> Could not resolve org.springframework.boot:spring-boot-gradle-plugin:1.5.9.RELEASE.
Required by:
project :
> Could not resolve org.springframework.boot:spring-boot-gradle-plugin:1.5.9.RELEASE.
> Could not get resource 'https://nexus-cd.192.168.99.100.nip.io/repository/jcenter/org/springframework/boot/spring-boot-gradle-plugin/1.5.9.RELEASE/spring-boot-gradle-plugin-1.5.9.RELEASE.pom'.
> Could not GET 'https://nexus-cd.192.168.99.100.nip.io/repository/jcenter/org/springframework/boot/spring-boot-gradle-plugin/1.5.9.RELEASE/spring-boot-gradle-plugin-1.5.9.RELEASE.pom'.
> sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
> Could not resolve org.springframework.boot:spring-boot-gradle-plugin:1.5.9.RELEASE.
> Could not get resource 'https://nexus-cd.192.168.99.100.nip.io/repository/maven-public/org/springframework/boot/spring-boot-gradle-plugin/1.5.9.RELEASE/spring-boot-gradle-plugin-1.5.9.RELEASE.pom'.
> Could not GET 'https://nexus-cd.192.168.99.100.nip.io/repository/maven-public/org/springframework/boot/spring-boot-gradle-plugin/1.5.9.RELEASE/spring-boot-gradle-plugin-1.5.9.RELEASE.pom'.
> sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
> Could not resolve org.springframework.boot:spring-boot-gradle-plugin:1.5.9.RELEASE.
> Could not get resource 'https://nexus-cd.192.168.99.100.nip.io/repository/atlassian_public/org/springframework/boot/spring-boot-gradle-plugin/1.5.9.RELEASE/spring-boot-gradle-plugin-1.5.9.RELEASE.pom'.
> Could not GET 'https://nexus-cd.192.168.99.100.nip.io/repository/atlassian_public/org/springframework/boot/spring-boot-gradle-plugin/1.5.9.RELEASE/spring-boot-gradle-plugin-1.5.9.RELEASE.pom'.
> sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
> Could not resolve org.sonarsource.scanner.gradle:sonarqube-gradle-plugin:2.6.2.
Required by:
project :
> Could not resolve org.sonarsource.scanner.gradle:sonarqube-gradle-plugin:2.6.2.
> Could not get resource 'https://nexus-cd.192.168.99.100.nip.io/repository/jcenter/org/sonarsource/scanner/gradle/sonarqube-gradle-plugin/2.6.2/sonarqube-gradle-plugin-2.6.2.pom'.
> Could not GET 'https://nexus-cd.192.168.99.100.nip.io/repository/jcenter/org/sonarsource/scanner/gradle/sonarqube-gradle-plugin/2.6.2/sonarqube-gradle-plugin-2.6.2.pom'.
> sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
> Could not resolve org.sonarsource.scanner.gradle:sonarqube-gradle-plugin:2.6.2.
> Could not get resource 'https://nexus-cd.192.168.99.100.nip.io/repository/maven-public/org/sonarsource/scanner/gradle/sonarqube-gradle-plugin/2.6.2/sonarqube-gradle-plugin-2.6.2.pom'.
> Could not GET 'https://nexus-cd.192.168.99.100.nip.io/repository/maven-public/org/sonarsource/scanner/gradle/sonarqube-gradle-plugin/2.6.2/sonarqube-gradle-plugin-2.6.2.pom'.
> sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
> Could not resolve org.sonarsource.scanner.gradle:sonarqube-gradle-plugin:2.6.2.
> Could not get resource 'https://nexus-cd.192.168.99.100.nip.io/repository/atlassian_public/org/sonarsource/scanner/gradle/sonarqube-gradle-plugin/2.6.2/sonarqube-gradle-plugin-2.6.2.pom'.
> Could not GET 'https://nexus-cd.192.168.99.100.nip.io/repository/atlassian_public/org/sonarsource/scanner/gradle/sonarqube-gradle-plugin/2.6.2/sonarqube-gradle-plugin-2.6.2.pom'.
> sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
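Until a proper certificate is in place, one possible workaround is to point the JVM running Gradle at a truststore that contains the self-signed Nexus certificate (the paths below are placeholders):

```shell
# Write the JVM system properties into the project's gradle.properties.
conf=$(mktemp -d)
cat > "$conf/gradle.properties" <<'EOF'
# Truststore that contains the self-signed Nexus certificate
systemProp.javax.net.ssl.trustStore=/path/to/truststore.jks
systemProp.javax.net.ssl.trustStorePassword=changeit
EOF
cat "$conf/gradle.properties"
```

Importing the certificate into the default JVM truststore of the slave image would achieve the same without per-project configuration.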
I always get the error:
Cloning "https://github.com/opendevstack/ods-core.git " ...
error: fatal: Couldn't find remote ref production
Unexpected end of command stream
Environment:
I have checked this:
[root@jenkins-slave-base-3-build home]# git clone https://github.com/opendevstack/ods-core.git
Cloning into 'ods-core'...
remote: Enumerating objects: 37, done.
remote: Counting objects: 100% (37/37), done.
remote: Compressing objects: 100% (23/23), done.
Receiving objects: 52% (516/988), 3.38 MiB | 155.00 KiB/s
Get the LTS (Long-term Support): SonarQube 6.7.x (that's why I guess)
When the branch is named e.g. bugfix/FOO-529-bar-6-baz, the webhook proxy sets the pipeline name suffix to -6 instead of -529.
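The proxy presumably grabs the last dash-number in the branch name; matching the Jira key explicitly avoids that. An illustrative fix in shell (the actual proxy is Go, so this only demonstrates the logic):

```shell
branch="bugfix/FOO-529-bar-6-baz"
# Match the Jira key (PROJECT-NUMBER) explicitly, then take its number part.
ticket=$(printf '%s' "$branch" | grep -oE '[A-Z][A-Z0-9]*-[0-9]+' | head -n 1 | cut -d- -f2)
echo "$ticket"   # prints "529", not "6"
```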
The webhook proxy should take HTTP GET params on the request (e.g. ?paramA=B&paramB=C) and transform them into BC pipeline ENV params.
Since the last local installation on May 23rd, Atlassian has changed the download URL locations.
The location in the Ansible roles has to point to
https://product-downloads.atlassian.com/software//downloads/
The storage class and provisioner have to be configurable via Tailor for Minishift or environments outside AWS.
https://github.com/opendevstack/ods-core/blob/master/sonarqube/ocp-config/sonarqube.yml
Allow users to make use of Tailor in the Jenkins pipeline.
Rundeck doesn't work with the current jaas-crowd connector. The connector has to be modified.
Right now we add the CNES report jar in the base slave, but we should provide a wrapper script located in the $PATH so that it is easily invokable by consumers (e.g. Jenkins pipelines).
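A minimal sketch of such a wrapper (the jar location is an assumption - it would be wherever the base slave places the CNES report jar):

```shell
# Generate the wrapper, e.g. as /usr/local/bin/cnes-report on the slave image.
bindir=$(mktemp -d)
cat > "$bindir/cnes-report" <<'EOF'
#!/bin/sh
# Thin wrapper so consumers can call "cnes-report" without knowing the jar path.
exec java -jar /usr/local/cnes/cnes-report.jar "$@"
EOF
chmod +x "$bindir/cnes-report"
ls -l "$bindir/cnes-report"
```

Pipelines could then simply call cnes-report with the usual arguments instead of hardcoding the jar path.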
Line 24 in 5a6c8dc
E.g. protecting branches etc.
It looks like it is not working ... emails should be sent out when the build fails.
Steps To Replicate:
Going to OC and starting the pipeline there works
@michaelsauter I gave this some thought and we should add the groovy script
The OpenShift documentation states that the prometheus instance that is used for monitoring the cluster itself should not be used for application monitoring:
Users interested in leveraging Prometheus for application monitoring on OpenShift should consider using OLM to easily deploy a Prometheus Operator and setup new Prometheus instances to monitor and alert on their applications.
Additionally, the OpenShift docs also mention the EFK stack, a modified version of the ELK stack:
As an OpenShift Container Platform cluster administrator, you can deploy the EFK stack to aggregate logs for a range of OpenShift Container Platform services.
Does OpenDevStack include such a Prometheus instance for application monitoring and/or an EFK stack?
Needed for the release manager. @metmajer
That way Jenkins etc. is completely abstracted and one can use the webhook proxy for integration.
Add a bash script which removes the .sample suffix and copies ods-configuration-sample into the configuration directory. Additionally, this should run a git diff to show whether there are changes between the current master and the locally cloned files.
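A sketch of that script (the file layout below is invented for the demo; the real repo layout may differ):

```shell
# Copy every *.sample file from the sample repo, stripping the suffix.
copy_samples() {
  src="$1"; dst="$2"
  find "$src" -name '*.sample' | while read -r f; do
    target="$dst/${f#$src/}"
    target="${target%.sample}"
    mkdir -p "$(dirname "$target")"
    cp "$f" "$target"
  done
}

# Demo with an invented layout:
work=$(mktemp -d) && cd "$work"
mkdir -p ods-configuration-sample/jenkins
echo "JENKINS_URL=example" > ods-configuration-sample/jenkins/bc.env.sample
copy_samples ods-configuration-sample ods-configuration
cat ods-configuration/jenkins/bc.env
# A "git diff" against the sample repo could then surface upstream changes.
```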
Instead of using a special oc container, use the Jenkins slave base image, so we have only one set of container definitions to maintain.
They should not need to be customized and are better located in ods-core.
| dpkg-genchanges -b >../nginx_1.10.3-0ubuntu0.16.04.3_amd64.changes
| dpkg-genchanges: binary-only upload (no source code included)
| dpkg-source --after-build nginx-1.10.3
| dpkg-buildpackage: binary-only upload (no source included)
| dpkg: error processing archive /tmp/nginx-custom/nginx-common_1.10.3-0ubuntu0.16.04.2_all.deb (--install):
|  cannot access archive: No such file or directory
| Errors were encountered while processing:
| /tmp/nginx-custom/nginx-common_1.10.3-0ubuntu0.16.04.2_all.deb
| Removing intermediate container 965fcfda0cd0