radeonopencompute / ROCm_Documentation
Legacy ROCm Software Platform Documentation
Home Page: http://rocm.docs.amd.com
The GitHub edit links on Read the Docs are missing the tree path segment. As an example, the link on https://rocmdocs.amd.com/en/latest/ points to https://github.com/RadeonOpenCompute/ROCm_Documentation//master/index.rst, which resolves as not found. The correct link would be https://github.com/RadeonOpenCompute/ROCm_Documentation/tree/master/index.rst or https://github.com/RadeonOpenCompute/ROCm_Documentation/blob/master/index.rst
Dear @Rmalavally, I saw that you are somewhat alone in contributing to this gigantic repository.
Is it parked? Will documentation be split back to their respective repositories?
Best regards
I was reading https://rocmdocs.amd.com/en/latest/ROCm_API_References/HIP_API/Stream-Management.html and noticed that it uses the CUDA API cudaStreamAddCallback instead of hipStreamAddCallback. I'm pretty sure this isn't the only instance, either.
All pages on https://rocm-documentation.readthedocs.io/en/latest/index.html are limited to approximately 800 pixels in width and there's no apparent way to make the text wider (I can zoom in but that just makes the font larger, the amount of information per line stays the same).
This is a problem for tables, because, for example, most tables in the Vega ISA page are too wide to fit into 800 pixels.
Hello,
~$ docker pull rocm/pytorch:rocm2.0
Error response from daemon: manifest for rocm/pytorch:rocm2.0 not found
And when we look at:
https://hub.docker.com/r/rocm/pytorch/tags
there is no such thing as a rocm2.0 tag.
Please upload the new image?
Regards,
Several documents contain hipLaunchKernel instead of hipLaunchKernelGGL, and several kernel examples show a hipLaunchParm first argument. Should be easy to find those by grepping.
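A sketch of the grep the reporter suggests, run here against a small sample file for illustration (in the real repo you would point it at the docs checkout; the sample file name and kernel names are made up):

```shell
# Create a sample file with the deprecated and current spellings.
cat > sample.rst <<'EOF'
hipLaunchKernel(myKernel, dim3(grid), dim3(block), 0, 0, args);
hipLaunchKernelGGL(myKernel, dim3(grid), dim3(block), 0, 0, args);
__global__ void kernel(hipLaunchParm lp, float *out);
EOF

# Match hipLaunchKernel (but not hipLaunchKernelGGL) and hipLaunchParm.
grep -nE 'hipLaunchKernel([^G]|$)|hipLaunchParm' sample.rst
```

The `[^G]` alternation keeps the modern hipLaunchKernelGGL calls out of the match, so only lines that still need fixing are reported.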
CodeXL is now being deprecated while this documentation still suggests it. What is the alternative? And, this documentation should be modified to suggest the alternative instead. If an alternative is unavailable, maybe better to put a deprecation notice.
The branch name appears to be incorrect. repo fails with the following error:
Downloading Repo source from https://gerrit.googlesource.com/git-repo
remote: Finding sources: 100% (7/7)
remote: Total 7 (delta 0), reused 7 (delta 0)
Downloading manifest from https://github.com/RadeonOpenCompute/ROCm.git
fatal: couldn't find remote ref refs/heads/roc-3.7
manifests:
fatal: couldn't find remote ref refs/heads/roc-3.7
There's no roc-3.7 branch in the ROCm repo. There's a roc-3.7.x branch, which works. There's also a rocm-3.7.0 tag (repo does not seem to like it, though). It would be great to update the docs with the branch reference intended to be used with the ROCm 3.7 release.
The documentation says to uninstall with amdgpu-uninstall, but it should be amdgpu-install --uninstall.
https://github.com/RadeonOpenCompute/ROCm_Documentation/blob/8bb172b33e56d01deddb4f4c13f1d1f9add0db33/Installation_Guide/Installation_new.rst#rocm-stack-uninstallation
How do I properly install and configure the "rocm/tensorflow" Docker image on Windows 10, and integrate Visual Studio Community and Anaconda with it?
The canonical documentation site (and I say "canonical" because there are scraps of conflicting ROCm information all over the internet) is presumably this one. However, nowhere is it made clear in this documentation, and it should be in BIG RED WRITING, that on the most widely deployed desktop version of Linux, by far, ROCm doesn't install.
No point in having installation documentation which merrily tells you what needs to happen to get ROCm up and running, when there's an obvious issue that hasn't been resolved. This Kernel 5.8 problem, which is the default kernel of the current Ubuntu LTS, should not be buried as if it doesn't exist.
Mentioning in brackets, as you do, "(Ubuntu 20.04.1 (5.4 and 5.6-oem) and 18.04.5 (Kernel 5.4))" is insufficient, because most users will not even see this, nor do many users even care what kernel they're on, if they even know. They just expect a virgin 20.04 + ROCm 4 installation to work. If it doesn't, please say so boldly; and indeed, if it doesn't work on Linux's "baseline" distro, then it shouldn't even have been released. This is not some fancy cutting-edge Fedora "testing" distro. This is Ubuntu LTS.
Platform: Ubuntu 20.04.01 LTS
Problem
I tried to install tensorflow as described in the official tutorial. ROCm seems to be installed properly, but I hit a problem following the steps.
After the command sudo apt install rocm-libs miopen-hip cxlactivitylogger rccl I get E: Unable to locate package cxlactivitylogger.
What may be the reason, or how can I fix it?
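A plausible workaround, assuming the package was simply dropped from the repository when CodeXL was deprecated, is to install the remaining packages without it. The trimmed package list below is an assumption based on the error message, and the command is printed rather than executed since it needs root:

```shell
# cxlactivitylogger shipped with CodeXL, which is deprecated; assuming it
# was dropped from the repo, install the rest of the stack without it.
pkgs="rocm-libs miopen-hip rccl"
echo "sudo apt install $pkgs"
```

Run the printed command with sudo on the affected machine.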
There are some issues in your example code, which make learning more difficult :(
OpenCL Programming Guide:
Some lines of code end up stuck inside comments because of formatting.
Example code 1.
Block 1:
the example kernel code is not quoted as a string
platform and device are not defined
Block 2:
the example kernel code is not quoted as a string
device is defined, but platform still is not
context is not defined (it is swallowed by a comment)
clCreateBuffer( context, CL_MEM_WRITE_ONLY, // this line/command is cut off
/ 6. is not a valid comment :(
size_t global_work_size = NWITEMS; // should be on its own line
cl_uint *ptr; // should be on its own line
The docs for hipEventQuery currently state:
This function will return #hipErrorNotReady if all commands in the appropriate stream (specified to hipEventRecord()) have completed. If that work has not completed, or if hipEventRecord() was not called on the event, then #hipSuccess is returned.
That looks incorrect: if all the work has completed, why return hipErrorNotReady? Shouldn't it be the other way around?
Installation_Guide.rst is now 1280 lines in the source.
This is ridiculous in my opinion.
200 would be acceptable, but even less would be better.
The building sections (including the repo tool, git repo links, versions, etc.) can be moved to their own document. That will shave maybe 300 lines.
Some other sections also don't belong in the Installation Guide:
Machine Learning and High Performance Computing Software Stack for AMD GPU
Software Stack for AMD GPU
ROCm Platform Packages
List of ROCm Packages for Supported Operating Systems
These sections can be moved to their own documents or eradicated. I would even say that List of ROCm Packages for Supported Operating Systems should be removed completely; it is already redundant with other information in the document, and outdated.
Platform: Ubuntu 18.04
When I try to add a user to the render group using sudo usermod -a -G render piotr, I get: usermod: group 'render' does not exist.
I have successfully installed the rocm package; however, the render group didn't appear.
Is there anything that can be done about it?
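A small check along these lines can confirm the state; the groupadd/usermod commands in the hint are the standard fix, printed rather than run since they need root:

```shell
# Report whether the render group exists; if not, print the usual fix.
if getent group render > /dev/null; then
    echo "render group exists"
else
    echo 'render group missing; fix: sudo groupadd render && sudo usermod -a -G render $USER'
fi
```

After adding yourself to the group, log out and back in for the membership to take effect.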
How can Mac applications harness the power of AMD GPUs for OpenCV image-processing library functions like template matching, ORB, SIFT, etc.?
For the following set of commands in step 6 of the Ubuntu instructions in the Installation Guide:
echo 'ADD_EXTRA_GROUPS=1'
sudo tee -a /etc/adduser.conf
echo 'EXTRA_GROUPS=video'
sudo tee -a /etc/adduser.conf
Each pair should be linked by a pipe; otherwise, the output of echo gets printed directly to stdout, and tee, lacking a piped input, sits there listening on stdin.
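The corrected commands would look like the following. To keep the sketch runnable without root, it writes to a scratch file; the real target is /etc/adduser.conf via sudo tee -a:

```shell
# Each echo is piped into tee, so tee reads from the pipe instead of
# blocking on stdin. Using a scratch file here; the real commands target
# /etc/adduser.conf and need sudo.
conf=./adduser.conf.demo
echo 'ADD_EXTRA_GROUPS=1' | tee -a "$conf"
echo 'EXTRA_GROUPS=video' | tee -a "$conf"
```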
I tried to follow the instructions on the page https://rocmdocs.amd.com/en/latest/Installation_Guide/Installation-Guide.html# to compile and install the basic ROCm 3.7 components manually; the kernel driver has already been installed and loaded successfully.
HIP ROCm-CompilerSupport rocminfo ROCR-Runtime ROCm-Device-Libs ROCT-Thunk-Interface ROCm-OpenCL-Runtime ROCclr llvm-project
When I ran the HIP/samples/0_Intro/square example, I got the following output:
LoadLib(libhsa-amd-aqlprofile64.so) failed: libhsa-amd-aqlprofile64.so: cannot open shared object file: No such file or directory
/work/HIP/rocclr/hip_code_object.cpp:92: guarantee(false && "hipErrorNoBinaryForGpu: Coudn't find binary for current devices!")
Aborted (core dumped)
Any advice for these errors?
Which project/repository will generate libhsa-amd-aqlprofile64.so?
How do I fix hipErrorNoBinaryForGpu?
The outputs of rocminfo and hipconfig are in the attachments.
hipconfig.txt
rocminfo.txt
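A quick diagnostic along these lines can tell whether the dynamic linker can see the missing library at all. The hint text is an assumption; the library has historically shipped outside the base ROCm runtime packages, so checking /opt/rocm/lib and LD_LIBRARY_PATH is a reasonable first step:

```shell
# Check whether the linker cache knows about the aqlprofile library.
lib=libhsa-amd-aqlprofile64.so
if ldconfig -p 2>/dev/null | grep -q "$lib"; then
    echo "$lib found in linker cache"
else
    echo "$lib not found; check /opt/rocm/lib and LD_LIBRARY_PATH, or install the aqlprofile package for your ROCm version"
fi
```

Note that the LoadLib message is only a warning in many setups; the hipErrorNoBinaryForGpu abort is a separate problem, usually meaning the HIP code was not compiled for your GPU's architecture.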
I have been experiencing a number of problems with virsh/virt-install when trying to set this up, notably not being able to stop the VM, or even gracefully shut down/restart my computer.
I have not had these issues with plain QEMU.
One thing to note generally is that it seems much simpler to use virtualization with more than one GPU, so that the main host GPU can remain as such and any additional ones can be bound to vfio-pci.
My launch command looks like this:
qemu-system-x86_64 -m 10048 \
-net nic,model=virtio \
-net user,hostfwd=tcp::8022-:22 \
-cpu host -smp 4 \
-enable-kvm \
-nographic \
-device virtio-rng-pci \
-drive file="img.qcow2",media=disk,snapshot=off,if=virtio \
-device vfio-pci,host=0c:00.0,x-vga=off -device vfio-pci,host=0c:00.1
I needed to run a modified version of Ubuntu 20.04 to support my hardware (mbp 16,1), namely I used this project which I believe does not change anything relevant to ROCm. However I get an error:
dpkg: error processing package rocm-dkms (--configure):
dependency problems - leaving unconfigured
ERROR (dkms apport): kernel package linux-headers-5.7.19-mbp is not supported
Possibly because of the way allowed kernels are specified. Maybe switching from whitelisting to blacklisting, or checking for the actual ROCm dependencies, would be more appropriate? I do not think needing to modify the kernel to support new hardware is uncommon.
Thank you so much; I really appreciate it!
Could you please let me know how to enable HCC Profile Mode with all information, such as kernel commands, memory copy commands, and barrier commands.
Please find the text files attached. HCC profile mode does not have profile records like Resource=GPU and Resource=DATA; there are gaps between the records, and barrier and memory commands are missing.
ROCm 3.3 displays only Resource=GPU info.
@perhaad;@mikeseven;@nitishjohn;@sklam;@guansong;@TermoSINteZ;@nhaustov;@kasaurov;@NEELMCW;
@ukidaveyash15;@Kirpich30000;@bjelich;@arodrigx7;@tingxingdong;@dsquaredx2;@amdgerritcr;@wkwchau;
@BlackDogWriter;@ascollard
We are going to enforce two-factor authentication in the https://github.com/RadeonOpenCompute/ organization on 8 April 2022. Since we identified you as an outside collaborator for this organization, you need to enable two-factor authentication on your GitHub account, or else you will be removed from the organization after the enforcement. Please skip if already done.
To set up two-factor authentication, please go through the steps in the link below:
Please reach out to "[email protected]" for queries
$ sudo apt install rocm-dkms
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:
The following packages have unmet dependencies:
rocm-dkms : Depends: rocm-dev but it is not going to be installed
Depends: rock-dkms but it is not installable
E: Unable to correct problems, you have held broken packages.
$ sudo apt install rocm-dev
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:
The following packages have unmet dependencies:
openmp-extras : Depends: libstdc++-5-dev but it is not installable or
libstdc++-7-dev but it is not installable
Depends: libgcc-5-dev but it is not installable or
libgcc-7-dev but it is not installable
rocm-gdb : Depends: libpython3.8 but it is not installable
rocm-llvm : Depends: libstdc++-5-dev but it is not installable or
libstdc++-7-dev but it is not installable
Depends: libgcc-5-dev but it is not installable or
libgcc-7-dev but it is not installable
Recommends: gcc-multilib but it is not going to be installed
Recommends: g++-multilib but it is not going to be installed
E: Unable to correct problems, you have held broken packages.
rock-dkms is not in the rocm repo, and the gcc-5 and gcc-7 packages are not in the Ubuntu repo anymore.
Is there a place where I can find more detailed documentation of the ROCr API? Here on GitHub the page just lists the names of the functions (no prototypes or semantics).
Hey guys, here's my tuning for the RX Vega 56; hopefully it can give you some help with the documentation.
My GPU model is an XFX Vega 56.
Thanks for rocm-smi that I can get the information I need.
Here's the default clock table:
$ rocm-smi --showclkvolt
GPU[0] : OD_SCLK:
GPU[0] : 0: 852Mhz 800mV
GPU[0] : 1: 991Mhz 900mV
GPU[0] : 2: 1084Mhz 950mV
GPU[0] : 3: 1138Mhz 1000mV
GPU[0] : 4: 1200Mhz 1050mV
GPU[0] : 5: 1401Mhz 1100mV
GPU[0] : 6: 1536Mhz 1150mV
GPU[0] : 7: 1630Mhz 1200mV
GPU[0] : OD_MCLK:
GPU[0] : 0: 167Mhz 800mV
GPU[0] : 1: 500Mhz 800mV
GPU[0] : 2: 700Mhz 900mV
GPU[0] : 3: 800Mhz 950mV
GPU[0] : OD_RANGE:
GPU[0] : SCLK: 852MHz 2400MHz
GPU[0] : MCLK: 167MHz 1500MHz
GPU[0] : VDDC: 800mV 1200mV
The voltages are stable, but if I want more performance, I would undervolt the card. Since 220 W is the TDP limit, there is no need to push the GPU clock higher: clock speed is not the limiting factor of the overall performance, the power limit is. We even need to underclock the GPU a little bit to leave more power headroom for the HBM2, as I have found that overclocking the HBM2 can greatly boost our computational workload.
However, it's not always a good idea to just simply bump up the power limit number.
Remember, if you are not familiar with the Voltage Regulator and PCB setup of your graphics card, DO NOT change the poweroverdrive! If you set the powercap too high, permanent damage will possibly occur. Certainly, we don't want that stuff to happen. So I left the PwrCap untouched as 220w.
After several black screens and bouts of flickering, I finally got a stable set of clock/voltage combinations, shown below:
GPU[0] : OD_SCLK:
GPU[0] : 0: 852Mhz 800mV
GPU[0] : 1: 991Mhz 800mV
GPU[0] : 2: 1138Mhz 850mV
GPU[0] : 3: 1269Mhz 850mV
GPU[0] : 4: 1312Mhz 870mV
GPU[0] : 5: 1474Mhz 900mV
GPU[0] : 6: 1537Mhz 930mV
GPU[0] : 7: 1592Mhz 1000mV
GPU[0] : OD_MCLK:
GPU[0] : 0: 167Mhz 800mV
GPU[0] : 1: 500Mhz 800mV
GPU[0] : 2: 800Mhz 950mV
GPU[0] : 3: 945Mhz 1100mV
GPU[0] : OD_RANGE:
GPU[0] : SCLK: 852MHz 2400MHz
GPU[0] : MCLK: 167MHz 1500MHz
GPU[0] : VDDC: 800mV 1200mV
This is a stable clock for my card; it works quite well. But stable clocks and voltages differ between cards, even ones using the same GPU.
And this is called the silicon lottery. Silicon chips are manufactured in an unbelievably complicated process, and some chips can run at higher clocks with lower voltage while others can't. You may find stable clock settings on one card and fail with them on another; it's often the case.
The ROCm documentation for MXNet (https://rocmdocs.amd.com/en/latest/Deep_learning/MXNet.html) should use "Apache MXNet". Can you please update this documentation? Thanks.
Please see https://www.apache.org/foundation/marks/faq/ for guidelines.
Hi,
ROCm is such an exciting project, with teething issues of course, but I'd rather use this than NVidia's proprietary software.
I've been looking at buying an appropriate card and was using the hardware list at https://rocm.github.io/hardware.html. The document provided a list of supported cards and caveats for each, and it is referenced by the README.md under Supported GPUs:
For a more detailed list of hardware support, please see the following documentation.
However, in the past week or so the page redirects to https://rocm-documentation.readthedocs.io/en/latest/.
I've searched the docs page, but I cannot find the detailed list of supported cards. Where has the list of supported cards gone?
I'm happy for this issue to be moved to the ROCm repo if needed.
Thanks!
https://rocm.github.io/ROCmInstall.html#Ubuntu
Copy-pasting the instructions (wget ...) didn't work (probably due to HTML rendering).
Ubuntu 20.04 + Radeon RX Vega 10 Graphics.
/opt/rocm/bin/rocminfo fails with an error:
ROCk module is loaded
Unable to open /dev/kfd read-write: Bad address
cfl is member of render group
hsa api call failure at: /src/rocminfo/rocminfo.cc:1142
Call returned HSA_STATUS_ERROR_OUT_OF_RESOURCES: The runtime failed to allocate the necessary resources. This error may also occur when the core runtime library needs to spawn threads or create internal OS-specific events.
How can I fix it?
Memory management functions documentation doesn't exist: https://rocm-documentation.readthedocs.io/en/latest/ROCm_API_References/HIP_API/Memory-Management.html#hipmalloc
HIP-FAQ.rst has a feature comparison with CUDA, up to "CUDA 8.0: TBD".
It should be updated to cover the features from CUDA 8.0, 9.x, and 10.x. For example
These instructions don't match modern security practices. apt-key is not used anymore. Instead, the key (which can be exported with gpg --export --export-options=export-minimal CA8BB4727A47B4D09B4EE8969386B48A1A693C5C > rocm.gpg) is downloaded and put into a directory within the local filesystem (OTHER THAN /etc/apt/trusted.gpg.d; any other dir, like /usr/share/keyrings/ or /etc/apt/trusted.gpg.d-3rdp/) and referenced from the sources entry via the signed-by= option. Read https://wiki.debian.org/DebianRepository/UseThirdParty for more info.
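A minimal sketch of the signed-by layout, using scratch paths and a placeholder key file so it runs without root or network; a real setup downloads AMD's key and writes under /usr/share/keyrings and /etc/apt/sources.list.d (the repo URL and suite follow the ROCm install docs, but treat them as assumptions for your release):

```shell
# Scratch directories standing in for /usr/share/keyrings and
# /etc/apt/sources.list.d; the key content is a placeholder.
mkdir -p ./keyrings ./sources.list.d
printf 'placeholder binary keyring\n' > ./keyrings/rocm.gpg

# The repo entry references the keyring via signed-by=, per the Debian
# third-party repository guidelines.
echo 'deb [arch=amd64 signed-by=/usr/share/keyrings/rocm.gpg] https://repo.radeon.com/rocm/apt/debian/ ubuntu main' \
    > ./sources.list.d/rocm.list
cat ./sources.list.d/rocm.list
```

With this layout, the key only authenticates the repository it is attached to, instead of every configured APT source.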
Following the documentation for Debian at https://rocm.github.io/ROCmInstall.html, using the rock-dkms package and not the upstream kernel, I ended up with /dev/kfd owned by group render instead of group video. This resulted in rocminfo failing with a confusing error as a normal user, but working as root, so I tracked it down to a permission issue.
The documentation does suggest installing a udev rule for kfd, but in a separate section for the upstream driver rather than rock-dkms. I am running Debian testing and kernel package 4.19.0-5-amd64. Overall the installation process was incredibly smooth, but this hiccup could have thrown a less experienced Linux user for a loop. Maybe the docs could be updated with an FAQ for resolving this issue, by adding the user to the render group or setting up a udev rule, or maybe rock-dkms should install the udev rule to prevent this from happening? Not sure which kernel / user space combos use the render group by default.
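For reference, the udev rule in question looks roughly like this. The GROUP value is an assumption (the docs use video; on distros that own /dev/kfd via render, you would use that instead), and the sketch writes to a scratch file since the real path /etc/udev/rules.d/70-kfd.rules needs root:

```shell
# kfd udev rule as suggested in the upstream-driver section of the docs;
# GROUP may need to be "render" depending on the distro.
rule='KERNEL=="kfd", GROUP="video", MODE="0660"'
echo "$rule" > ./70-kfd.rules
cat ./70-kfd.rules
```

After installing the real rule, reload udev (udevadm control --reload && udevadm trigger) or reboot for it to apply.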
Hi there @gstoner,
Thank you so much for offering an alternative to the AMD GPUPRO drivers and for maintaining all related ROCm repos. Appreciate it.
While going through the instructions at https://rocm.github.io I noticed some inconsistencies and unclear items. I also found this page (http://rocm-documentation.readthedocs.io) which I assume is connected to this repo.
Here's what I'd like to do: Go through the documentation, fix content and create a pull request. I personally like how the documentation is structured at http://rocm-documentation.readthedocs.io compared to https://rocm.github.io.
What's the best way to help with that? Is new content in this repo (https://github.com/RadeonOpenCompute/ROCm_Documentation) automatically pushed to https://rocm.github.io?
Thanks,
Andre
The web page at readthedocs is just a bunch of errors.
Example:
Runtime Notification
Warning
doxygenenum: Cannot find enum “hsa_status_t” in doxygen xml output for project “ReadTheDocs-Breathe” from directory: xml/
Warning
doxygenfunction: Cannot find function “hsa_status_string” in doxygen xml output for project “ReadTheDocs-Breathe” from directory: xml/
Hi folks - I upgraded to Bionic, after checking the ROCm docs first to ensure it was supported on 18.04 ("bionic"). However, after re-enabling my apt-source for ROCm, which now points to "bionic", it errors out saying there's no Release file.
Sure enough, when I open the repository in my browser I see only Xenial.
Should I use Xenial for both distro versions? Or, is the Bionic repository a work-in-progress?
Currently, the documentation says:
Note that all current Nvidia devices return 32 for this variable, and all current AMD devices return 64.
(https://github.com/RadeonOpenCompute/ROCm_Documentation/blob/d54ddbd43dcc434211c55451445093e4c6a5bb07/Programming_Guides/Kernel_language.rst#warpsize)
However, this is not the case. On gfx1032 (AMD RDNA 2), the warpSize value in the kernel is (correctly) 32, so "all current AMD devices return 64" is not true.
The changes in 369de85 no longer list the W6800 GPU as being supported. Is this correct?
In which cases can wavefronts be dependent? Isn't every work-item supposed to be independent?
If wavefronts can be dependent, what does this paragraph mean by saying "independent operations from different wavefronts can be selected to be assigned to a single vector unit to be executed in parallel every cycle"?
Thanks,
Hello ROCm developers-
I've been trying to learn about support for PyTorch and Caffe2 on AMD GPUs, and there is precious little accessible news and documentation. Some blogs led me to have a look at ROCm, but the combination of seeing little mention of ROCm in the machine learning literature and blogosphere, and of seeing a 2014 copyright on the ROCm ReadTheDocs pages (particularly the ROCm deep learning page), initially led me to suspect ROCm support of deep learning packages was abandoned. However, I checked the GitHub page for the docs, and learned there that there have been updates to the deep learning docs as recently as a few weeks ago. Glad to see that!
I suggest that the documentation be slightly revised to indicate recent/current activity in the project. As a minimum, update the copyright line in the template. Perhaps also consider putting "latest release" info prominently on the documentation landing page (with a date), or a release date somewhere prominent on the "Current Release Notes" page.
Thanks for all your work on ROCm. I'm likely to be getting a Mac with an AMD GPU soon, and I plan on giving ROCm a try when it arrives.
Hello, I have 4 AMD 480
01:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Ellesmere [Radeon RX 470/480/570/570X/580/580X/590] (rev e7)
05:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Ellesmere [Radeon RX 470/480/570/570X/580/580X/590] (rev e7)
08:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Ellesmere [Radeon RX 470/480/570/570X/580/580X/590] (rev e7)
09:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Ellesmere [Radeon RX 470/480/570/570X/580/580X/590] (rev e7)
On Ubuntu 20.04, fully updated, I followed the install instructions.
But clinfo does not detect any cards:
/opt/rocm/opencl/bin/clinfo
Number of platforms: 1
Platform Profile: FULL_PROFILE
Platform Version: OpenCL 2.0 AMD-APP (3212.0)
Platform Name: AMD Accelerated Parallel Processing
Platform Vendor: Advanced Micro Devices, Inc.
Platform Extensions: cl_khr_icd cl_amd_event_callback
Platform Name: AMD Accelerated Parallel Processing
Number of devices: 0
No OpenCL applications work, no devices are detected. Can you help me diagnose this problem ?
I was able to successfully compile LightGBM from source and build it for my Vega FE. I love it and I think you should add it to the repo! Also if you do take a look please let me know if you can help me run in multi-gpu config; I have 2.
According to the ROCm GitHub page, HCC is deprecated and will be discontinued within 2019. Please add a warning to the documentation, especially to the Programming Guides.
That section appears to serve as the basis for choosing a language, and the way it is written now suggests that HCC should be used when one wants to start a new project specifically targeting AMD GPU hardware.
Is it possible to use ROCm as a CPU-only OpenCL installation for Travis CI builds, to replace the way that AMDAPPSDK is often used? If so, can this please be addressed with an example in the documentation?
Due to recent reorganizations of the AMD site, several Linux OpenCL Open Source Travis builds are broken. For example:
The current ROCm documentation (e.g. ./InstallGuide.rst) seems to suggest that both a compatible CPU and GPU are needed (emphasis mine):
To use ROCm on your system you need the following: ROCm Capable CPU and GPU
This suggests that ROCm is not suitable for use with Travis as a CPU-only OpenCL implementation. If this is indeed the case, I think it would help to emphasize the requirements more, and make it clear that CPU-only operation is not possible.
Works for me with target_link_libraries(<your_target> roc::rocthrust), not with bare rocthrust.
@sriharikarnam could you help update the HIP markdown documents?
Like the following hip_bugs documents, which have been out of sync for a while:
https://github.com/RadeonOpenCompute/ROCm_Documentation/blob/master/Programming_Guides/HIP-bug.rst
https://github.com/ROCm-Developer-Tools/HIP/blob/master/docs/markdown/hip_bugs.md