
[CVPRW 2022] MANIQA: Multi-dimension Attention Network for No-Reference Image Quality Assessment

License: Apache License 2.0

Languages: Python 100.00%
Topics: nr-iqa swintransformer vision-transformer cvpr2022 pytorch-implementation deep-learning csiq kadid-10k live pipal

maniqa's Introduction

MANIQA: Multi-dimension Attention Network for No-Reference Image Quality Assessment

Sidi Yang*, Tianhe Wu*, Shuwei Shi, Shanshan Lao, Yuan Gong, Mingdeng Cao, Jiahao Wang and Yujiu Yang

Tsinghua University Intelligent Interaction Group

🚀 🚀 🚀 Updates:

  • something more...
  • Mar. 11, 2023: The checkpoint of the model trained on the Koniq10k dataset has been released.
  • Mar. 10, 2023: We release the checkpoint trained on the large kadid10k dataset and add a script for predicting the quality score of a single image.
  • April 11, 2022: We release the MANIQA source code and the PIPAL22 checkpoint.


This repository is the official PyTorch implementation of MANIQA: Multi-dimension Attention Network for No-Reference Image Quality Assessment. 🔥🔥🔥 We won first place in the NTIRE 2022 Perceptual Image Quality Assessment Challenge, Track 2: No-Reference.

Example 1 (reference image and four distorted versions; images omitted):

              Distortion 1    Distortion 2    Distortion 3    Distortion 4
MOS (GT)      1539.1452 (1)   1371.4593 (2)   1223.4258 (3)   1179.6223 (4)
Ours (MANIQA) 0.743674 (1)    0.625845 (2)    0.504243 (3)    0.423222 (4)

Example 2 (reference image and four distorted versions; images omitted):

              Distortion 1    Distortion 2    Distortion 3    Distortion 4
MOS (GT)      4.33 (1)        2.27 (2)        1.33 (3)        1.1 (4)
Ours (MANIQA) 0.8141 (1)      0.2615 (2)      0.0871 (3)      0.0490 (4)

In-the-wild examples (images omitted), model scores: 0.3398, 0.2612, 0.3078, 0.3716, 0.3581

No-Reference Image Quality Assessment (NR-IQA) aims to assess the perceptual quality of images in accordance with human subjective perception. Unfortunately, existing NR-IQA methods fall far short of predicting accurate quality scores on images with GAN-based distortions. To this end, we propose the Multi-dimension Attention Network for no-reference Image Quality Assessment (MANIQA) to improve performance on GAN-based distortions. We first extract features via a ViT; then, to strengthen global and local interactions, we propose the Transposed Attention Block (TAB) and the Scale Swin Transformer Block (SSTB). These two modules apply attention mechanisms across the channel and spatial dimensions, respectively. In this multi-dimensional manner, the modules cooperatively increase the interaction among different regions of the image, both globally and locally. Finally, a dual-branch structure for patch-weighted quality prediction computes the final score from each patch's score weighted by its predicted importance. Experimental results demonstrate that MANIQA outperforms state-of-the-art methods on four standard datasets (LIVE, TID2013, CSIQ, and KADID-10K) by a large margin. Moreover, our method ranked first in the final testing phase of the NTIRE 2022 Perceptual Image Quality Assessment Challenge, Track 2: No-Reference.
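
For intuition, the patch-weighted prediction can be summarized as a weighted average of per-patch scores; the sketch below is an illustrative simplification rather than the repository's exact code (tensor shapes and the epsilon guard are assumptions):

    # Illustrative sketch of patch-weighted quality prediction (not the repo's exact code):
    # the scoring branch outputs a per-patch score s_i, the weighting branch a per-patch
    # weight w_i, and the final image score is the weighted average of the patch scores.
    import torch

    def patch_weighted_score(scores: torch.Tensor, weights: torch.Tensor) -> torch.Tensor:
        """scores, weights: (batch, num_patches) tensors produced by the two branches."""
        eps = 1e-8  # guard against an all-zero weight map
        return (scores * weights).sum(dim=1) / (weights.sum(dim=1) + eps)

    # Toy example: 2 images, 4 patches each.
    scores = torch.rand(2, 4)                   # scoring branch output (after ReLU, so >= 0)
    weights = torch.sigmoid(torch.randn(2, 4))  # weighting branch output (sigmoid, in (0, 1))
    print(patch_weighted_score(scores, weights))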


Network Architecture

[Figure: overall architecture of MANIQA]

Dataset

The PIPAL22 dataset is used in the NTIRE 2022 competition, and we also test our model on PIPAL21.
We additionally conducted experiments on the LIVE, CSIQ, TID2013 and KADID-10K datasets.

Attention:

  • Put the MOS label files and the data Python files into the data folder.
  • The validation dataset comes from NTIRE 2021. If you want to reproduce the results on the validation or test set of the NTIRE 2022 NR-IQA competition, register for the competition and upload submission.zip following the instructions on the website.

Checkpoints

Go to the release page and download the pretrained model checkpoints, ignoring the attached source files (the Koniq-10k tag has the latest source files).

Training Set / Testing Set / Checkpoints of MANIQA:

  • Training: PIPAL2022 dataset (200 reference images, 23,200 distorted images, MOS scores for each distorted image). Testing: [Validation] PIPAL2022 dataset (1,650 distorted images). Checkpoint: download (SRCC 0.686, PLCC 0.707).
  • Training: KADID-10K dataset (81 reference images, 10,125 distorted images), 8,000 distorted images for training. Testing: 2,125 distorted images from KADID-10K. Checkpoint: download (SRCC 0.939, PLCC 0.939).
  • Training: KONIQ-10K dataset (in-the-wild database of 10,073 quality-scored images), 8,058 images for training. Testing: 2,015 images from KONIQ-10K. Checkpoint: download (SRCC 0.930, PLCC 0.946).

Usage

Training MANIQA model

  • Modify "dataset_name" in config
  • Modify train dataset path: "train_dis_path"
  • Modify validation dataset path: "val_dis_path"
python train_maniqa.py
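
As a reference, here is a minimal sketch of those config fields, shown as a plain dict with placeholder paths (the exact key set and values may differ across repo versions; train_maniqa.py itself is authoritative):

    # Sketch only: the Config fields to edit in train_maniqa.py, shown as a plain dict.
    # All paths are placeholders for your local dataset and label files.
    config = {
        "dataset_name": "pipal",                            # dataset loader to use (check the repo's data folder for exact names)
        "train_dis_path": "/path/to/PIPAL/Train_Distort/",  # training distorted images
        "val_dis_path": "/path/to/PIPAL/Val_Distort/",      # validation distorted images
        "train_txt_file_name": "./data/pipal21_train.txt",  # MOS labels for training
        "val_txt_file_name": "./data/pipal21_val.txt",      # MOS labels for validation
    }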

Predicting one image quality score

  • Modify the path of image "image_path"
  • Modify the path of checkpoint "ckpt_path"
python predict_one_image.py 
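
If you want to script this step yourself, below is a minimal sketch of the flow, assuming the released full-model checkpoint (which torch.load returns directly) and illustrative preprocessing; predict_one_image.py remains the authoritative pipeline:

    # Hedged sketch of scoring a single image. Run it from the repository root so the
    # pickled model classes resolve; crop size and normalization here are illustrative.
    import torch
    from PIL import Image
    from torchvision import transforms

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    net = torch.load("ckpt_valid.ckpt", map_location=device)  # released full-model checkpoint
    net.eval()

    preprocess = transforms.Compose([
        transforms.CenterCrop(224),                                       # model expects 224x224 input
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),  # illustrative normalization
    ])

    img = Image.open("/path/to/image.png").convert("RGB")
    x = preprocess(img).unsqueeze(0).to(device)

    with torch.no_grad():
        score = net(x)        # higher predicted score corresponds to higher perceptual quality
    print(float(score))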

Inference for PIPAL22 validation and testing

Generating the output file:

  • Modify the path of dataset "test_dis_path"
  • Modify the trained model path "model_path"
python inference.py
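
A trimmed sketch of those two Config fields in inference.py, with placeholder values (the full Config block ships in the script):

    # Sketch only: point inference.py at your data and trained weights.
    config = {
        "test_dis_path": "/path/to/distorted_images/",         # folder of images to score
        "model_path": "./output/models/model_maniqa/epoch1",   # trained MANIQA checkpoint
    }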

Results

[Figure: experimental results from the paper]

Environments

  • Platform: PyTorch 1.8.0
  • Language: Python 3.7.9
  • Ubuntu 18.04.6 LTS (GNU/Linux 5.4.0-104-generic x86_64)
  • CUDA Version 11.2
  • GPU: NVIDIA GeForce RTX 3090 with 24GB memory

Requirements

Python requirements can be installed with:

pip install -r requirements.txt

Citation

@inproceedings{yang2022maniqa,
  title={MANIQA: Multi-dimension Attention Network for No-Reference Image Quality Assessment},
  author={Yang, Sidi and Wu, Tianhe and Shi, Shuwei and Lao, Shanshan and Gong, Yuan and Cao, Mingdeng and Wang, Jiahao and Yang, Yujiu},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={1191--1200},
  year={2022}
}

Acknowledgment

Our code partially borrows from anse3832 and timm. Thanks also to SwinIR for its README.md, on which ours is modeled.

Related Work

NTIRE2021 IQA Full-Reference Competition

[CVPRW 2021] Region-Adaptive Deformable Network for Image Quality Assessment (4th place in FR track)

paper code

NTIRE2022 IQA Full-Reference Competition

[CVPRW 2022] Attentions Help CNNs See Better: Attention-based Hybrid Image Quality Assessment Network (1st place in the FR track)

paper code

maniqa's People

Contributors: shuweis, stephen0808, tianhewu


maniqa's Issues

about config.num_avg_val

Hello! I have a question about num_avg_val.
In train_maniqa, num_avg_val is set to 5: five fixed crops of size 224×224 are taken from the 288×288 image, a score is estimated for each, and the average is taken. In inference, it is set to 1.
When you submitted to NTIRE 2022, did you generate output.txt with num_avg_val set to 1? Is this related to the last paragraph of Section 4.2 in the paper?
Looking forward to your answer.

Issue about the checkpoint

Hi there, as the provided checkpoint contains the whole model rather than just the weight parameters, I tried to save the weight parameters myself to pytorch_model.pt and use them as follows:

model = MANIQA()
model.load_state_dict(torch.load('pytorch_model.pt'))

However, I found the output result is different from your provided model:

net = torch.load('ckpt_valid.ckpt')

I have checked the parameter sizes, and the two models seem to be the same.

Is there any difference between the forward pass defined in the provided model Python files and the one baked into the provided checkpoint?
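
A hedged sketch of that conversion (the import path follows the repository layout; the MANIQA constructor must receive the same hyperparameters as the checkpoint, and the model must be in eval mode, or outputs will differ):

    # Sketch, not a verified recipe: turn the released full-model checkpoint into a plain
    # state_dict and load it into a freshly built MANIQA. Run from the repository root.
    import torch
    from models.maniqa import MANIQA

    full_model = torch.load("ckpt_valid.ckpt", map_location="cpu")  # whole pickled model
    torch.save(full_model.state_dict(), "pytorch_model.pt")         # weights only

    model = MANIQA()              # constructor args must match those used for the checkpoint
    model.load_state_dict(torch.load("pytorch_model.pt", map_location="cpu"))
    model.eval()                  # eval mode matters: dropout behaves differently in train mode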

some question for dataparallel

When I try to train this model on two GPUs, an error occurs at line 117 of models/maniqa.py, x = torch.cat((x6, x7, x8, x9), dim=2): some tensors are on cuda:0 and some on cuda:1. How should this problem be solved? I have not found a suitable solution online.

KonIQ pretrained model hyperparameters

Hello authors,

Thanks for open sourcing this repository!
I had one query regarding the pre-trained model shared for the KonIQ dataset. In the paper you mention the following:

[screenshot of the quoted experimental-setup paragraph from the paper]

My understanding is that, following previous IQA works, you split the dataset into an 8:2 ratio five times using five different seeds, and at test time you took 20 crops of size 224x224 per image and reported the averaged results.

But can you explain the following two points:

  1. What do you mean by "the final score is generated by predicting the mean score of these 20 images and all results are averaged by 10 times split"? As far as I understood, only 5 splits were created, right?

  2. The checkpoint you have provided for KonIQ gives the best results on the validation split created by one of the seed values, right? (Please correct me if my understanding is wrong.) If so, can you please share the hyperparameters of that particular model? Or are the reported metrics from an ensemble of models?

Kindly clarify,

Thanks!

Question regarding the Python random seed & achieving replicable results

Hey there! 👋

I'm seeking some help to better understand how the custom Python random seed impacts the replicability of the results.

Currently this project is using a seed value of 20.
However, I've noticed that when evaluating a group of 10 images, the most aesthetically pleasing image changes depending on the seed value used. I find this somewhat confusing, as I expected the "best" image in the group to remain consistent regardless of the Python random seed value.

I'd appreciate any insights or advice you can provide to help clarify this issue.
Is there a special reason why 20 is used here? Does this value produce the best results overall?

Thank you for your assistance! ❤️🖖

MSU Video Quality Metrics Benchmark Invitation

Hello! We kindly invite you to participate in our video quality metrics benchmark. You can submit MANIQA to the benchmark by following the submission steps described here. The dataset distortions are compression artifacts on professional and user-generated content. The full dataset is used to measure methods' overall performance, so we do not share it, in order to avoid overfitting. Nevertheless, we provide the open part of it (around 1,000 videos) with our paper "Video compression dataset and benchmark of learning-based video-quality metrics", accepted to NeurIPS 2022.

Get score for individual images

Hi. Thanks for this interesting work.
I am interested in using the inference.py script to get an individual score for each image in a folder.
What parameters do I need to change in the configuration?
I see there is this:

    # config file
    config = Config({
        # dataset path
        "db_name": "PIPAL",
        "test_dis_path": "/mnt/data_16TB/ysd21/IQA/NTIRE2022_NR_Valid_Dis/",
        
        # optimization
        "batch_size": 10,
        "num_avg_val": 1,
        "crop_size": 224,

        # device
        "num_workers": 8,

        # load & save checkpoint
        "valid": "./output/valid",
        "valid_path": "./output/valid/inference_valid",
        "model_path": "./output/models/model_maniqa/epoch1"
    })

Do I just need to change the batch_size from 10 to 1?
Thank you.
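
One possible approach, sketched under assumptions (released full-model checkpoint, illustrative preprocessing, crop averaging in the spirit of num_avg_val), is to score each file independently; batch_size only affects throughput, not the individual scores:

    # Hedged sketch: per-image scores for every file in a folder, averaging the prediction
    # over a few random 224x224 crops, mirroring what num_avg_val does in the repo.
    # Preprocessing is illustrative; mirror the repo's transforms for comparable numbers.
    import glob
    import torch
    from PIL import Image
    from torchvision import transforms

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    net = torch.load("ckpt_valid.ckpt", map_location=device)  # released full-model checkpoint
    net.eval()

    num_avg_val, crop_size = 5, 224
    preprocess = transforms.Compose([
        transforms.RandomCrop(crop_size),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
    ])

    for path in sorted(glob.glob("/path/to/images/*.bmp")):
        img = Image.open(path).convert("RGB")
        with torch.no_grad():
            score = sum(float(net(preprocess(img).unsqueeze(0).to(device)))
                        for _ in range(num_avg_val)) / num_avg_val
        print(path, score)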

a question about released checkpoint

Hi!
In Table 3 of the paper, MANIQA-s achieves 0.686/0.707 on the validation set and 0.667/0.702 on the test set. Are these two sets of results obtained from the same model checkpoint?
I tested and submitted using the checkpoint and test script you released, and achieved 0.69/0.71 on the validation set but only 0.66/0.68 on the test set. So I would like to know whether both results come from the same parameters.
Thanks!

MLP RATIO

Hi!
Thank you for the amazing job!
I have tried finetuning and things work smoothly :)

One thing I removed is the mlp_ratio argument.
It caused an error when training from the koniq10k checkpoint.

In the swin.py file, the mlp_ratio argument is used through self.mlp_ratio but is never initialized with self.mlp_ratio = mlp_ratio.
What is the utility of this argument exactly?
Is it set to one?

Thanks!

A question about the performance reported in your paper

Hi, is the performance on the synthetic-distortion datasets reported in your paper, e.g. CSIQ, TID2013, and KADID-10K, obtained by splitting the training and test sets by reference image with an 80/20 ratio, or is the split done directly over all distorted images?

A question about Restormer

Hello, I would like to ask: in Restormer, transposed attention is computed over the channel dimension. How does this implicitly capture global features?

Regarding loss convergence?

Hello authors,

Thanks for open-sourcing your code and models! It really helps in reproducing the results.

My query is regarding loss convergence. I tried to train the model on the KonIQ dataset before you modified the code, and the network seems to overfit heavily, with a large gap between training and validation loss. I even tried changing the loss function and adding normalisation, but it didn't help.

Any suggestions regarding the same? Can you also share the loss plots to verify the convergence criteria?

Thanks!

The target machine actively refused the connection

Hello, when using your code, after modifying the dataset name and training paths as described, I keep getting the error "the target machine actively refused the connection". What causes this, and how can I solve it?

About the meaning of the predicted score

Thanks for open-sourcing this work! I have just started working on IQA, so I would like to ask what the score predicted for a distorted image actually means. Does a larger value indicate higher image quality? I tried the weights you released, and the predicted scores always seem to be small values on the order of 0.01.

scale config when using konIQ10k checkpoint

Hi!

I wanted to know which scale to use (for the SSTB) when using the checkpoint from KonIQ10k.
The paper mentions 0.1 while the code uses 0.8, so I just want to confirm.

Thanks a lot!

Dataset

Hello!
Nice work!
I was wondering how to get the PIPAL dataset.

torch 1.12.1

Running inference.py after upgrading to pytorch 1.12.1:

Traceback (most recent call last):
  File "convert-video.py", line 33, in <module>
    maniqa_scores = MANIQA.from_files([
  File "/home/max/MANIQA/__init__.py", line 53, in from_files
    pred += net(x_d)
  File "/home/max/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/max/MANIQA/models/maniqa.py", line 121, in forward
    _x = self.vit(x)
  File "/home/max/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/max/MANIQA/models/timm/models/vision_transformer.py", line 407, in forward
    x = self.forward_features(x)
  File "/home/max/MANIQA/models/timm/models/vision_transformer.py", line 398, in forward_features
    x = self.blocks(x)
  File "/home/max/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/max/.local/lib/python3.8/site-packages/torch/nn/modules/container.py", line 139, in forward
    input = module(input)
  File "/home/max/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1148, in _call_impl
    result = forward_call(*input, **kwargs)
  File "/home/max/MANIQA/models/timm/models/vision_transformer.py", line 273, in forward
    x = x + self.drop_path(self.mlp(self.norm2(x)))
  File "/home/max/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/max/MANIQA/models/timm/models/layers/mlp.py", line 27, in forward
    x = self.act(x)
  File "/home/max/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/max/.local/lib/python3.8/site-packages/torch/nn/modules/activation.py", line 681, in forward
    return F.gelu(input, approximate=self.approximate)
  File "/home/max/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1207, in __getattr__
    raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'GELU' object has no attribute 'approximate'
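
This error pattern typically means a full pickled model saved under an older PyTorch is being loaded under 1.12+, where nn.GELU.forward reads an approximate attribute that the unpickled instances lack. A hedged workaround sketch (pinning torch below 1.12, or re-saving the checkpoint as a state_dict, are alternatives):

    # Workaround sketch, not an official fix: give nn.GELU a class-level default so that
    # unpickled pre-1.12 instances fall back to it when forward() reads self.approximate.
    import torch
    import torch.nn as nn

    nn.GELU.approximate = "none"  # "none" matches the pre-1.12 exact GELU; harmless on older versions

    net = torch.load("ckpt_valid.ckpt", map_location="cpu")
    net.eval()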

about pipal21_val.txt

Hello, I see pipal21_val.txt in your repo,
but the NTIRE 2021 competition did not release the MOS labels for the validation set.
Where did you get the labels from? :)
Thank you for sharing your code, nice work~

Question about dataset partition

Thanks for your great work. I have some questions about the dataset partition strategy on LIVE, CSIQ, TID2013 and KADID-10k, because it can make a great difference in performance.

As we know, the distorted images in these datasets are synthesized from a small number of reference images; in other words, the same reference image may have many corresponding distorted versions. To avoid content bias between training and validation, previous works such as TReS usually split the datasets according to the reference images.

However, in the script utils/process.py, it seems that the experiments in the paper simply split the distorted images 8:2 without considering the reference images. I ran some simple experiments on KADID-10K with these two partition strategies and found that random splitting without considering reference images leads to a large improvement in PLCC/SRCC.

Therefore, performance with a simple random partition may not be reliable because of content overlap between training and validation images.

performance on pipal_val21

Hello, would you please release the performance on pipal_val21? I don't find it in your paper.
I ask because in my experiments with pipal_train21 and pipal_val21, the model seems to overfit quickly.
The best validation SRCC/PLCC (about ~0.64) appears in the first several epochs and then decreases persistently (down to ~0.4), while the training metric keeps increasing.
I tried increasing the dropout rate, but with no obvious effect.
Should I stop the training earlier? Any suggestions are appreciated.

About time efficiency of predicting one image

Hi,

Glad to see this fabulous work! 👏 However, I have not found any data in the paper about the time cost of assessing an image. I'm curious about the model's runtime: how fast does it produce results? Could your team provide CPU/GPU timings for a few different types of CPU/GPU? Several popular devices would be enough.

Perhaps your team is more focused on accuracy? Please let me know if you're interested.

A question about TABlock

attn = q @ k.transpose(-2, -1) * self.norm_fact
attn = self.softmax(attn)
x = (attn @ v).transpose(1, 2).reshape(B, C, N)
x = self.proj_drop(x)
x = x + _x
Hello,
Regarding the TABlock code: as I understand it, attn has shape (B, C, C) and v has shape (B, C, N), so the result of the matrix multiplication already has shape (B, C, N). Why is transpose needed here to swap dimensions 1 and 2? The Swin Transformer code has this step because it uses multi-head attention, but your code does not seem to use multi-head attention.

dual branch and output score normalization

Hey!
Thanks for being so responsive.
I have another question again...

In the dual branch, fc_score is a fully connected branch that ends with a ReLU, and the fc_weight branch ends with a sigmoid. Am I correct that this makes the score q range between 0 and +infinity?

Is the network supposed to output normalized scores?

some questions about your model

Hi, thank you for your work. When I run the inference.py file you provided, I have some questions.

  1. As shown in the code in the screenshot below, what does the parameter config.num_avg_val mean? I see that it is set to 1.
  2. Can I interpret the pred in "pred /= config.num_avg_val" as the image quality score, and does it range from 0 to 1?
  3. Is the "ckpt_valid" model provided at this URL (https://github.com/IIGROUP/MANIQA/releases/tag/PIPAL22-VALID-CKPT) the final model you trained, or a model that has not been fully trained?
Looking forward to your answer.
[screenshot of the inference.py code]

code for other datasets?

Hi,

Thanks for your code.
Do you mind sharing the code for other datasets like KONIQ, TID2013 etc.?
Because we cannot reproduce the performance you reported in the paper.

Thanks

Hi! A small question about the weight score. It looks weird.

The authors divide the whole input image into N×N small patches and use one branch to calculate a weight score for each patch. Since the target score of every patch of an image is the same (say all of them are scored 1.4), how would the network learn different scores for different patches? If the initial score of each patch is the same, why is the network able to learn distinct scores for these patches?

Using with my own images

Hello,

Congrats for your work and also for being the top approach in the NTIRE 2022, NR-IQA track.

I am just wondering how to use your technique on my own high-resolution images, which were generated by blind/unsupervised deep learning methods. In other words, I do not have ground-truth high-resolution images, only the low-resolution ones; the high-resolution images were created from the low-resolution ones alone.

Best regards.

Hey @vsantjr

Could you please share an update on this if you have made any progress?
I am trying to do the same.

Hello piba941,

In my case, I trained with the PIPAL datasets as shown in this repo. Then, I got the best model and used 224 x 224 images.

Best.

some question for model architecture

In the models/maniqa.py file:

def extract_feature(self, save_output):
    x6 = save_output.outputs[6][:, 1:]
    x7 = save_output.outputs[7][:, 1:]
    x8 = save_output.outputs[8][:, 1:]
    x9 = save_output.outputs[9][:, 1:]

You use the outputs of ViT blocks 6, 7, 8 and 9. Why did you choose the outputs of these particular blocks?

Are you interested in maintaining the repo?

The concept of reference-free assessment seems pretty useful; it seems to me it could find commercial use cases.
The code might need some refactoring, though, and it would be cool to have CPU-compatible models.
So, are you interested in maintaining the repo? I might make some PRs, but I don't know whether they're even needed.

hard to get a checkpoint with high srcc and plcc

Hello! We are glad to see your work. After running your train_maniqa, we have a few questions and hope you can help.
We used the hyperparameters from your code, identical except for an increased val_freq, but the checkpoint we trained lags noticeably behind both the numbers in the paper and the released checkpoint when submitted to NTIRE 2022. Our checkpoint reaches PLCC = 0.667, SRCC = 0.691 on pipal_val.txt, and PLCC = 0.67, SRCC = 0.65 on the NTIRE 2022 NR-IQA validation set. Do you have any suggestions for this situation?

Provide directory and file info on data folder

Can you provide some documentation on what the directory layout of the data folder should look like, along with the expected files for training with the PIPAL dataset (Distortion_1 to _4, Train_ref, Train_label, etc.)?

Thanks in advance for this.

Question about the training dataset

Where can the PIPAL training set be downloaded?
Could you also release the code for training on other datasets, such as TID2013 or LIVE?
Thanks.

timm import

Hi!
Just out of curiosity: is there any reason for copying the timm directory rather than pinning a specific timm version in the requirements? Were some of the timm files changed compared with the timm version you used?

Thanks again!

Could you provide the dataset?

"train_dis_path": "/mnt/data_16TB/wth22/IQA_dataset/PIPAL/Train_Distort/",
"val_dis_path": "/mnt/data_16TB/wth22/IQA_dataset/PIPAL/Val_Distort/",
"train_txt_file_name": "./data/pipal21_train.txt",
"val_txt_file_name": "./data/pipal21_val.txt",

Please provide the dataset!

weighting branch and scoring branch

Hi

In your paper, it is shown that the final quality score is obtained by combining the weighting branch and the scoring branch.
The only difference between the two branches seems to be the final activation function, i.e. ReLU vs. sigmoid.
Can you please share the intuition regarding the architecture of the two branches?

setup.cfg

Hey! Is it possible to add a setup.cfg so we can use the repo as a package?
Thanks so much for the great work!

Cannot achieve reported test results using pretrained model

Hello, and congratulations on the paper and on winning the NR-IQA track at the NTIRE 2022 competition.

I've been trying to replicate the test results for the NTIRE 2022 test set that were reported in the paper using your code and the model checkpoint you have provided, but without success.
Running the inference script and uploading the output to the CodaLab server, I managed to obtain the following results:
SROCC: 0.65, PLCC: 0.67

However, looking at the competition's leaderboard, I see that you have obtained
SROCC: 0.70, PLCC: 0.74

Is there a discrepancy between the GitHub code / model checkpoint and the final model that you used in the competition? In case this difference in scores is caused only by the use of ensembles, can you kindly provide more details about which models you used in the final ensemble, as well as the ensemble strategy? Thank you!
