few-shot-meta-baseline's People

Contributors

cyvius96, yinboc

few-shot-meta-baseline's Issues

Results in Table 11

Would it be possible to release the config files for cos-metric based training? Thanks.

EOFError: Ran out of input

Hello
Thank you for your awesome work.
When I try to run the train_classifier.py code with the miniImageNet dataset, I receive the following error:

./save\classifier_mini-imagenet_resnet12 exists, remove? ([y]/n): n
mini-imagenet
./materials\mini-imagenet
train dataset: torch.Size([3, 80, 80]) (x38400), 64
mini-imagenet
./materials\mini-imagenet
val dataset: torch.Size([3, 80, 80]) (x18748), 64
mini-imagenet
./materials\mini-imagenet
fs dataset: torch.Size([3, 80, 80]) (x12000), 20
num params: 8.0M
Traceback (most recent call last):
  File "train_classifier.py", line 281, in <module>
    main(config)
  File "train_classifier.py", line 148, in main
    for data, label in tqdm(train_loader, desc='train', leave=False):
  File "C:\Users\nusra\AppData\Local\Programs\Python\Python37\lib\site-packages\tqdm\std.py", line 1171, in __iter__
    for obj in iterable:
  File "C:\Users\nusra\AppData\Local\Programs\Python\Python37\lib\site-packages\torch\utils\data\dataloader.py", line 352, in __iter__
    return self._get_iterator()
  File "C:\Users\nusra\AppData\Local\Programs\Python\Python37\lib\site-packages\torch\utils\data\dataloader.py", line 294, in _get_iterator
    return _MultiProcessingDataLoaderIter(self)
  File "C:\Users\nusra\AppData\Local\Programs\Python\Python37\lib\site-packages\torch\utils\data\dataloader.py", line 801, in __init__
    w.start()
  File "C:\Users\nusra\AppData\Local\Programs\Python\Python37\lib\multiprocessing\process.py", line 112, in start
    self._popen = self._Popen(self)
  File "C:\Users\nusra\AppData\Local\Programs\Python\Python37\lib\multiprocessing\context.py", line 223, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "C:\Users\nusra\AppData\Local\Programs\Python\Python37\lib\multiprocessing\context.py", line 322, in _Popen
    return Popen(process_obj)
  File "C:\Users\nusra\AppData\Local\Programs\Python\Python37\lib\multiprocessing\popen_spawn_win32.py", line 89, in __init__
    reduction.dump(process_obj, to_child)
  File "C:\Users\nusra\AppData\Local\Programs\Python\Python37\lib\multiprocessing\reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
AttributeError: Can't pickle local object 'MiniImageNet.__init__.<locals>.convert_raw'
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\Users\nusra\AppData\Local\Programs\Python\Python37\lib\multiprocessing\spawn.py", line 105, in spawn_main
    exitcode = _main(fd)
  File "C:\Users\nusra\AppData\Local\Programs\Python\Python37\lib\multiprocessing\spawn.py", line 115, in _main
    self = reduction.pickle.load(from_parent)
EOFError: Ran out of input

The config is as follows:

{'train_dataset': 'mini-imagenet', 'train_dataset_args': {'split': 'train', 'augment': 'resize'}, 'val_dataset': 'mini-imagenet', 'val_dataset_args': {'split': 'train_phase_val'}, 'fs_dataset': 'mini-imagenet', 'fs_dataset_args': {'split': 'test'}, 'eval_fs_epoch': 5, 'model': 'classifier', 'model_args': {'encoder': 'resnet12', 'encoder_args': {}, 'classifier': 'linear-classifier', 'classifier_args': {'n_classes': 64}}, 'batch_size': 8, 'max_epoch': 100, 'optimizer': 'sgd', 'optimizer_args': {'lr': 0.1, 'weight_decay': 0.0005, 'milestones': [90]}, 'save_epoch': 5, 'visualize_datasets': True}

I would appreciate it if you give me any suggestions.
Thank you
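
For reference, this failure comes from Windows using spawn-based multiprocessing: each DataLoader worker has to pickle the dataset, which fails on the locally defined convert_raw closure. A minimal workaround sketch, assuming train_loader is built in train_classifier.py roughly like this (the argument names are an assumption), is to load in the main process:

# Sketch of a Windows-friendly DataLoader setup; num_workers=0 avoids pickling the dataset.
from torch.utils.data import DataLoader

train_loader = DataLoader(train_dataset,                    # the dataset built in main()
                          batch_size=config['batch_size'],
                          shuffle=True,
                          num_workers=0,                     # 0 = load in the main process
                          pin_memory=True)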

Question about the meta-test.

Dear author,
I have some questions about the meta-test stage.
In your paper, you apply consistent sampling, and the same 800 testing tasks are sampled for estimating the performance. But in test_few_shot.py, I find the random seed is fixed to 0 and the default value of the test-epochs parameter is 10. Do you mean running the same 800 testing tasks ten times and computing the average accuracy?
If so, since the meta-test stage is non-parametric, why do different test epochs give different accuracies?
Looking forward to your reply!

some training problems using "max_epoch=100"

When I tried to run train_meta_baseline.py with "max_epoch=100", it always stopped at epoch 83 at
"tval .................... 0%"
It seems that something is wrong with "tval".

how to run on CPU?

I tried this in my code:

model = models.load(torch.load(config['load'], map_location='cpu'))

but then I get this error:

Traceback (most recent call last):
  File "test_few_shot.py", line 125, in <module>
    main(config)
  File "test_few_shot.py", line 77, in main
    logits = model(x_shot, x_query).view(-1, n_way)
  File "/hdd/anaconda3/envs/fsl_last/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
TypeError: forward() takes 2 positional arguments but 3 were given

Please tell me how to run test_few_shot.py on the CPU.
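
A minimal sketch of CPU-only loading, under the assumption that the checkpoint is the dict saved by the training scripts and that models.load rebuilds the model from it (the "forward() takes 2 positional arguments" error suggests a classifier-only checkpoint, whose forward takes a single input, was loaded instead of a meta-baseline one):

import torch
import models  # repo module; assumed to rebuild a model from a saved checkpoint dict

ckpt = torch.load(config['load'], map_location='cpu')  # keep every tensor on the CPU
model = models.load(ckpt)  # must point at a meta-baseline checkpoint, not the classifier one
model.eval()

# Any .cuda() calls in test_few_shot.py also need to be removed or guarded,
# e.g. data.to(torch.device('cpu')) instead of data.cuda().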

Q

Hello, I have a question. During the run, this problem appeared:
No such file or directory: './materials\mini-imagenet\miniImageNet_category_split_train_phase_train.pickle'
I'd appreciate your help!

The expected results are not achieved on the tiered-imagenet

Thank you for sharing, I really appreciate it. I recently used your code to reproduce the experimental results in your paper. I had no problem with the results on mini-imagenet, but when I used the tiered-resnet12.pth file from the provided pre-trained models to train on the tiered-imagenet dataset, I found that with the 5-way 1-shot setting I could only reach about 63%, not the 68% reported in the paper.

Question about training from scratch.

Great work!
I have some questions about the training from scratch in Table 5.
Do you mean training the second stage (encoder + cosine similarity) without the first stage, i.e. setting load_encoder and load to None in train_meta_mini.yaml? If so, what are the details of the optimizer? Is it the same as in the first stage?

loss/acc averaging

In train_meta.py, when the average train/val loss and accuracy get updated, it seems like the argument n in add() should be set to the number of episodes per mini-batch.
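
For context, a weighted running average along these lines would make the per-episode weighting explicit (my own sketch of the idea, assuming an Averager-style utility similar to the one in utils.py; the class actually in the repo may differ):

class Averager:
    """Running average where each add() carries a weight n, e.g. episodes per mini-batch."""

    def __init__(self):
        self.n = 0.0
        self.v = 0.0

    def add(self, v, n=1.0):
        # Weight this update by n so that larger mini-batches count proportionally more.
        self.v = (self.v * self.n + v * n) / (self.n + n)
        self.n += n

    def item(self):
        return self.v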

Pretrained models for meta-baseline

Hi! Thanks for this code. I was wondering if it would be possible to share pretrained models for the Meta-Baseline (not just the Classifier-Baseline) on ImageNet-800?

Where is the learnable parameter in your code before cosine similarity?

in your paper:
Scaling the cosine similarity. Since cosine similarity has the value range of [-1, 1], when it is used to compute the logits, it is important to scale the value before applying the Softmax function during training. We add a learnable parameter r similar to recent works (Gidaris & Komodakis, 2018; Qi et al., 2018; Oreshkin et al., 2018), so that the predicted probability in training becomes:

I can't find this learnable parameter r in your code. Your classifier seems to use the cosine similarity directly?
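
For reference, a scaled cosine-similarity head usually looks something like the sketch below. This is my own illustration of the mechanism described in the quoted paragraph, not necessarily the repo's implementation; the class name CosineClassifier and init_scale are made up:

import torch
import torch.nn as nn
import torch.nn.functional as F

class CosineClassifier(nn.Module):
    """Cosine-similarity logits scaled by a single learnable temperature."""

    def __init__(self, in_dim, n_classes, init_scale=10.0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(n_classes, in_dim))
        self.scale = nn.Parameter(torch.tensor(init_scale))  # the learnable scaling factor

    def forward(self, x):
        x = F.normalize(x, dim=-1)            # unit-normalize the features
        w = F.normalize(self.weight, dim=-1)  # unit-normalize the class weights
        return self.scale * (x @ w.t())       # scaled cosine similarities used as logits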

An error occurred during operation

Hello, I have downloaded the datasets and put them in the appropriate folder, but when I run "python train_classifier.py --config configs/train_classifier_mini.yaml", this problem occurs: "FileNotFoundError: [Errno 2] No such file or directory: 'miniImageNet_category_split_train_phase_train.pickle'". I can't find a way to solve this. Could you tell me how to solve it? Thank you for reading, and I look forward to your reply.

partition of dataset

  1. How do you divide the mini-imagenet dataset into training, validation, and test sets?
  2. Does your prototypical network code use the same partition?

Testing the model

Hey!
I have trained the model as you described in the repo, and I now have the weights file. I'm stuck at how to evaluate or test my model: how do I feed my query and target images to the model? I ran the test_few_shot.py file as you said, and it gave an error like this:
set gpu: 0
dataset: torch.Size([3, 80, 80]) (x12000), 20
num params: 0.0K
Traceback (most recent call last):
  File "test_few_shot.py", line 125, in <module>
    main(config)
  File "test_few_shot.py", line 77, in main
    logits = model(x_shot, x_query).view(-1, n_way)
  File "/home/malesh/anaconda3/envs/att_2/lib/python3.7/site-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/mnt/dash/Alpha_Share/LOGO_POC/RnD/few_shot/few-shot-meta-baseline-master/models/meta_baseline.py", line 31, in forward
    x_tot = self.encoder(torch.cat([x_shot, x_query], dim=0))
TypeError: 'NoneType' object is not callable

How do I rectify this? Please provide more details on evaluating the model.

I can't find the mixup/label smoothing operation

Hi authors, thanks very much for the amazing work. You say that mixup/label smoothing are used in your experiments, but I can't find where they are. Would you mind showing me the location? If it's not the latest version, I would appreciate it if you could commit your changes.

Pretrained models

Hi, awesome work! Could you please tell us how to use the pretrained models you offer? Are they used just like epoch-last.pth?

The meta-dataset folder

Hi, thanks for your impressive work!
I am wondering what the folder meta-dataset is for, since it is an independent folder. How and when should I use it?
Should I run step 2 ("Training Meta-Baseline: python train_meta.py --config configs/train_meta_mini.yaml") in the meta-dataset folder or in the few-shot-meta-baseline folder? It is a little confusing.

about miniImageNet datasets

Hello!
In the train split of the miniImageNet dataset there are 64 classes, each with 600 samples, which means 38400 samples. With the 5-way 1-shot experimental setting, one task requires 80 images and a batch has 200 tasks,
so an epoch has 4 batches, which means 64000 images are used. Did you use data augmentation, or are samples reused within one epoch? Thank you for your patience!

classifier related to the baseline in "A Closer Look at Few-Shot Learning"

Hi authors,

Thanks for your paper. As mentioned in it, your classifier-baseline method is quite similar to Baseline++. I was wondering why the method outperforms Baseline/Baseline++ from "A Closer Look at Few-Shot Learning"? What is the difference between your method and Baseline/Baseline++ that makes yours perform so well?

I would appreciate it if I can get your reply!

Thanks!

train_image_size

Great work! The image size for the mini dataset in your code is set to 3x80x80, so is the accuracy in your paper obtained with 3x80x80? Why don't you use 3x84x84? I'd appreciate your help!

Higher performance for classifier-baseline but lower performance for meta-baseline

Hi, thanks for the amazing and well-documented code!

I directly ran your code to try to reproduce the results on mini-ImageNet. However, compared to the accuracies reported in the paper, I observed higher performance for classifier-baseline but slightly lower performance for meta-baseline. Here are my results for two runs, with random seeds 1 and 2. I wonder if you have observed similar variance across different runs? Thank you so much for your help!

        classifier 1-shot   classifier 5-shot   meta 1-shot   meta 5-shot
run1    60.3                78.47               62.75         79.01
run2    60.69               78.32               62.99         79.24

base/novel class generalization

Thanks for your good paper and code. I have a little confusion about base- and novel-class generalization in Figure 3 and Figure 1b. If I understand correctly, the gap between base- and novel-class generalization appears in the meta-baseline stage, after the classifier-baseline stage. Novel-class generalization, which is also the performance we care about most, reaches its peak at the first epoch. So does this mean there is no need to continue training after the classifier-baseline stage? I also find that the meta-learning stage of meta-baseline only has an effect on miniImageNet, i.e. increasing the training epochs improves performance, but it has no effect on tieredImageNet and ImageNet-800.

If there is something wrong, please correct me.

AttributeError: Can't pickle local object 'MiniImageNet.__init__.<locals>.convert_raw'

I was testing the code with "python train_classifier.py --config configs/train_classifier_mini.yaml" and got this error message. The output in my PowerShell console is:

set gpu: 0
./save\classifier_mini-imagenet_resnet12 exists, remove? ([y]/n): y
train dataset: torch.Size([3, 80, 80]) (x38400), 64
val dataset: torch.Size([3, 80, 80]) (x18748), 64
fs dataset: torch.Size([3, 80, 80]) (x12000), 20
num params: 8.0M
train: 0%|          | 0/1200 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "train_classifier.py", line 279, in <module>
    main(config)
  File "train_classifier.py", line 147, in main
    for data, label in tqdm(train_loader, desc='train', leave=False):
  File "C:\Users\Administrator\anaconda3\lib\site-packages\tqdm\std.py", line 1107, in __iter__
    for obj in iterable:
  File "C:\Users\Administrator\anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 279, in __iter__
    return _MultiProcessingDataLoaderIter(self)
  File "C:\Users\Administrator\anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 719, in __init__
    w.start()
  File "C:\Users\Administrator\anaconda3\lib\multiprocessing\process.py", line 112, in start
    self._popen = self._Popen(self)
  File "C:\Users\Administrator\anaconda3\lib\multiprocessing\context.py", line 223, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "C:\Users\Administrator\anaconda3\lib\multiprocessing\context.py", line 322, in _Popen
    return Popen(process_obj)
  File "C:\Users\Administrator\anaconda3\lib\multiprocessing\popen_spawn_win32.py", line 89, in __init__
    reduction.dump(process_obj, to_child)
  File "C:\Users\Administrator\anaconda3\lib\multiprocessing\reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
AttributeError: Can't pickle local object 'MiniImageNet.__init__.<locals>.convert_raw'
PS D:\few-shot-meta-baseline-master> Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\Users\Administrator\anaconda3\lib\multiprocessing\spawn.py", line 105, in spawn_main
    exitcode = _main(fd)
  File "C:\Users\Administrator\anaconda3\lib\multiprocessing\spawn.py", line 115, in _main
    self = reduction.pickle.load(from_parent)
EOFError: Ran out of input

Has anyone run into the same problem under Windows 10?
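
One common way around this pickling error, besides setting num_workers=0, is to move the transform out of __init__ so that it is a module-level (and therefore picklable) function. A sketch only, assuming convert_raw is currently a nested function that un-normalizes images for visualization (the statistics below are placeholders, not the repo's values):

import torch

# Defined at module level instead of inside MiniImageNet.__init__, so spawn can pickle it.
NORM_MEAN = torch.tensor([0.5, 0.5, 0.5]).view(3, 1, 1)  # placeholder statistics
NORM_STD = torch.tensor([0.5, 0.5, 0.5]).view(3, 1, 1)

def convert_raw(x):
    """Undo the normalization applied to an image tensor of shape (3, H, W)."""
    return x * NORM_STD + NORM_MEAN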

How to speed up dataloading for Meta-Dataset?

Hi, thanks for the amazing code!

I'm running the training script for Meta-Dataset, but I found the GPU utilization to be very low. I'm not very familiar with TensorFlow; could you help me figure out how to speed up the data loading? Thanks a lot!

Is it possible to use a pretrained model on Classifier-baseline training?

Just out of curiosity: my own dataset is quite small, with only 51 classes and fewer than 100 examples per class. Do you think it is possible to use a pretrained model, say ResNet-18, as the starting point for Classifier-Baseline training, and then use the trained model for Meta-Baseline training, and so on?

I've looked into the code and set pretrained=True in def resnet18(pretrained=True, progress=True, **kwargs). However, the result did not seem to be any different from pretrained=False. Based on train_classifier_im800.yaml, the model_args is

  • encoder: resnet18
  • encoder_args: {}
  • classifier: linear-classifier
  • classifier_args: {n_classes: 800}

It just specifies resnet18 as the backbone encoder. Can I set encoder_args to something like encoder_args: {pretrained: true}?
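
Whether that flag has any effect depends on the repo's resnet18 implementation; if it does not support a pretrained path, one rough alternative is to copy torchvision's ImageNet weights into the encoder before training and let mismatched layers be skipped. A sketch only, with encoder standing in for the model's backbone and no guarantee that the layer names line up:

import torch
from torchvision.models import resnet18 as tv_resnet18

# Copy whatever parameters match by name; strict=False skips the rest.
pretrained_sd = tv_resnet18(pretrained=True).state_dict()
missing, unexpected = encoder.load_state_dict(pretrained_sd, strict=False)
print(f'{len(missing)} missing and {len(unexpected)} unexpected keys were skipped')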

Training on my own dataset

How can I train on my own dataset? In baseline_classfiy, serialized files need to be loaded; do I also need to store my own data in the same serialized (pickle) format as mini-imagenet and then load it?
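
If the loader really does expect mini-imagenet-style pickles, serializing a custom dataset might look like the sketch below. The layout is an assumption (a dict with 'data' as an array of RGB images and 'labels' as integer class ids); check datasets/mini_imagenet.py for the exact keys before relying on it:

import pickle
import numpy as np

# images: a list of HxWx3 uint8 arrays; class_ids: a list of int labels of the same length.
split = {'data': np.stack(images), 'labels': list(class_ids)}

with open('./materials/my-dataset/my_dataset_train.pickle', 'wb') as f:
    pickle.dump(split, f, protocol=pickle.HIGHEST_PROTOCOL)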

How did you get the encoder that you shared in the GitHub project?

Thank you for sharing, I really appreciate it.

I have tried your code, and using the encoder you provided it is easy to get the expected result on mini-imagenet. However, when I use your classifier script, it is difficult to train an encoder that reaches the expected result. Can you share some details or tips on how to use the classifier script to get such a good encoder? How did you use your classifier script to obtain the encoder you provided?

Thank you~

tval_dataset uses the test dataset?

train_dataset: mini-imagenet
train_dataset_args: {split: train}
tval_dataset: mini-imagenet
tval_dataset_args: {split: test}
val_dataset: mini-imagenet
val_dataset_args: {split: val}

Dataset problem

the miniImageNet dataset used in your paper:
miniImageNet_category_split_train_phase_train.pickle
miniImageNet_category_split_train_phase_val.pickle
miniImageNet_category_split_train_phase_test.pickle
miniImageNet_category_split_val.pickle
miniImageNet_category_split_test.pickle
I know:
① miniImageNet_category_split_val.pickle is the validation set of miniImageNet, which has 16 classes with 600 images each.
② miniImageNet_category_split_test.pickle is the test set of miniImageNet, which has 20 classes with 600 images each.
③ miniImageNet_category_split_train_phase_train.pickle is the train set of miniImageNet, which has 64 classes with 600 images each.
But I don't know the details of miniImageNet_category_split_train_phase_val.pickle and miniImageNet_category_split_train_phase_test.pickle.
Is there any overlap between these two files and the data in miniImageNet_category_split_train_phase_train.pickle, or is it something else?

Error reported when training the classifier using Meta-Dataset on multiple GPUs

Hi there,

Thanks for the great work!

I followed the README.md instructions on training the classifier/multi-classifier on Meta-Dataset. However, when using multiple GPUs, the following error is reported:

Traceback (most recent call last):
  File "train_classifier.py", line 195, in <module>
    main(config)
  File "train_classifier.py", line 107, in main
    loss = F.cross_entropy(logits, label)
  File "/home/anaconda3/envs/fewshot_torch17/lib/python3.7/site-packages/torch/nn/functional.py", line 2468, in cross_entropy
    return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
  File "/home/anaconda3/envs/fewshot_torch17/lib/python3.7/site-packages/torch/nn/functional.py", line 2264, in nll_loss
    ret = torch._C._nn.nll_loss(input, target, weight, Reduction.get_enum(reduction), ignore_index)
RuntimeError: Assertion `THCTensor_(checkGPU)(state, 4, input, target, output, total_weight)' failed. Some of weight/gradient/input tensors are located on different GPUs. Please move them to a single one. at /opt/conda/conda-bld/pytorch_1603729006826/work/aten/src/THCUNN/generic/ClassNLLCriterion.cu:28

No changes were made to the original code. I am wondering if you have any clue about this?
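
The assertion usually means the model's replicas and the inputs or labels ended up on different devices. A minimal sketch of the usual single-process nn.DataParallel pattern, with model and train_loader standing in for the objects built in train_classifier.py (an illustration, not the repo's exact multi-GPU code):

import torch
import torch.nn as nn
import torch.nn.functional as F

device = torch.device('cuda:0')
model = nn.DataParallel(model).to(device)  # DataParallel scatters/gathers across GPUs itself

for data, label in train_loader:
    data, label = data.to(device), label.to(device)  # keep inputs and targets on the primary GPU
    logits = model(data)                             # gathered back onto cuda:0
    loss = F.cross_entropy(logits, label)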

Custom dataset and pretrained weights

Hi,
I'm a newbie to deep learning. I want to train on my own dataset and evaluate it. Can you elaborate on how this can be done, and how do I pass the model path and my query and target images to the evaluation script?

Prototypical Networks?

Hi,

Thanks for your work.

Recently I have been reading the paper "Prototypical Networks". I found that your idea is quite similar to that one. Could you explain a little bit about the main difference between your idea and theirs?

What would the accuracy of ProtoNets on miniImageNet be if the backbone were changed to the ResNet used in your paper?

Code meta-dataset

Thanks for your code! I have a question about the code in meta-dataset: how do I use it? I also can't find your pre-trained model. Can you tell me the purpose of this folder's code and the file location of the pre-trained model? I look forward to your reply!

Thank you for your attention

I think your idea is very brilliant, so I want to run it.
But after I downloaded the datasets such as mini-imagenet and ran the code as you did ("python train_classifier.py --config configs/train_classifier_mini.yaml"), something bad occurred:
FileNotFoundError: [Errno 2] No such file or directory: './materials/mini-imagenet/miniImageNet_category_split_train_phase_train.pickle'
Could you help me deal with it? I will be grateful! Thanks.
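
For context, the path in this error and the file names listed in the "Dataset problem" issue above suggest the loader expects the split pickles directly under ./materials/mini-imagenet/, roughly like this (layout inferred from the error messages, not from the README):

materials/
  mini-imagenet/
    miniImageNet_category_split_train_phase_train.pickle
    miniImageNet_category_split_train_phase_val.pickle
    miniImageNet_category_split_train_phase_test.pickle
    miniImageNet_category_split_val.pickle
    miniImageNet_category_split_test.pickle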

Pretrained models

Do you plan to release the pretrained models?
Not everyone can afford pretraining on 8 GPUs.
Thanks.
