dbiir / uer-py

Open Source Pre-training Model Framework in PyTorch & Pre-trained Model Zoo

Home Page: https://github.com/dbiir/UER-py/wiki

License: Apache License 2.0

bert pre-training fine-tuning gpt chinese natural-language-processing pytorch elmo classification ner

uer-py's Introduction

English | 中文


Pre-training has become an essential part of NLP. UER-py (Universal Encoder Representations) is a toolkit for pre-training on general-domain corpora and fine-tuning on downstream tasks. UER-py maintains model modularity and supports research extensibility. It facilitates the use of existing pre-training models and provides interfaces for users to extend them further. With UER-py, we build a model zoo that contains pre-trained models of different properties. See the UER-py project Wiki for full documentation.



🚀 We have open-sourced TencentPretrain, a refactored new version of UER-py. TencentPretrain supports multi-modal models and enables training of large models. If you are interested in text models of medium size (fewer than one billion parameters), we recommend continuing to use the UER-py project.




Features

UER-py has the following features:

  • Reproducibility UER-py has been tested on many datasets and should match the performance of the original pre-training model implementations such as BERT, GPT-2, ELMo, and T5.
  • Model modularity UER-py is divided into the following components: embedding, encoder, target embedding (optional), decoder (optional), and target. Ample modules are implemented in each component. Clear and robust interfaces allow users to combine modules to construct pre-training models with as few restrictions as possible.
  • Model training UER-py supports CPU mode, single-GPU mode, and distributed training mode.
  • Model zoo With the help of UER-py, we pre-train and release models of different properties. Proper selection of pre-trained models is important to the performance of downstream tasks.
  • SOTA results UER-py supports comprehensive downstream tasks (e.g. classification and machine reading comprehension) and provides winning solutions for many NLP competitions.
  • Abundant functions UER-py provides abundant functions related to pre-training, such as feature extraction and text generation.

Requirements

  • Python >= 3.6
  • torch >= 1.1
  • six >= 1.12.0
  • argparse
  • packaging
  • regex
  • For the pre-trained model conversion (related to TensorFlow) you will need TensorFlow
  • For tokenization with a sentencepiece model you will need SentencePiece
  • For developing a stacking model you will need LightGBM and BayesianOptimization
  • For pre-training with whole word masking you will need a word segmentation tool such as jieba
  • For using CRF in the sequence labeling downstream task you will need pytorch-crf
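
For a quick setup, the core requirements can be installed with pip (a minimal sketch; requirements.txt in the repository root is the authoritative list, and the optional packages above are only needed for the corresponding features):

pip3 install "torch>=1.1" "six>=1.12.0" argparse packaging regex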

Quickstart

This section uses several commonly used examples to demonstrate how to use UER-py. More details are discussed in the Instructions section. We first use BERT (a text pre-training model) on a book review sentiment classification dataset: we pre-train the model on the book review corpus and then fine-tune it on the book review sentiment classification dataset. There are three input files: the book review corpus, the book review sentiment classification dataset, and the vocabulary. All files are encoded in UTF-8 and included in this project.

The format of the corpus for BERT is as follows (one sentence per line and documents are delimited by empty lines):

doc1-sent1
doc1-sent2
doc1-sent3

doc2-sent1

doc3-sent1
doc3-sent2

The book review corpus is obtained from the book review classification dataset. We remove the labels and split each review into two parts from the middle to construct a document with two sentences (see book_review_bert.txt in the corpora folder), as sketched below.
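
For illustration, a corpus in this format can be produced from raw reviews with a short script like the following (a sketch only, not the script actually used to build book_review_bert.txt; file names are assumptions):

# Build a BERT-style corpus: one sentence per line, documents separated by an
# empty line. Each review is split in the middle into two "sentences".
with open("reviews.txt", encoding="utf-8") as fin, \
     open("book_review_bert.txt", "w", encoding="utf-8") as fout:
    for line in fin:
        review = line.strip()
        if not review:
            continue
        mid = len(review) // 2
        fout.write(review[:mid] + "\n")
        fout.write(review[mid:] + "\n")
        fout.write("\n")  # empty line delimits documents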

The format of the classification dataset is as follows:

label    text_a
1        instance1
0        instance2
1        instance3

Label and instance are separated by \t. The first row is a list of column names. For n-way classification, the label should be an integer in the range 0 to n-1 (inclusive).
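
Such a file can be read with Python's standard csv module (a minimal sketch; the file path is just an example):

import csv

# First row is the header ("label" and "text_a" separated by a tab),
# then one labeled instance per row.
with open("datasets/book_review/train.tsv", encoding="utf-8") as f:
    reader = csv.DictReader(f, delimiter="\t")
    rows = [(int(row["label"]), row["text_a"]) for row in reader]
print(rows[:3])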

We use Google's Chinese vocabulary file models/google_zh_vocab.txt, which contains 21128 Chinese characters.

We first pre-process the book review corpus. In the pre-processing stage, the corpus needs to be processed into the format required by the specified pre-training model (--data_processor):

python3 preprocess.py --corpus_path corpora/book_review_bert.txt --vocab_path models/google_zh_vocab.txt \
                      --dataset_path dataset.pt --processes_num 8 --data_processor bert

Notice that six>=1.12.0 is required.

Pre-processing is time-consuming. Using multiple processes can largely accelerate it (--processes_num). The BERT tokenizer is used by default (--tokenizer bert). After pre-processing, the raw text is converted to dataset.pt, which is the input of pretrain.py. We then download Google's pre-trained Chinese BERT model google_zh_model.bin (in UER format; the original model is from here) and put it in the models folder.

We load the pre-trained Chinese BERT model and further pre-train it on the book review corpus. A pre-training model is usually composed of embedding, encoder, and target layers. To build a pre-training model, we should provide the related information. The configuration file (--config_path) specifies the modules and hyper-parameters used by the pre-training model. More details can be found in models/bert/base_config.json. Suppose we have a machine with 8 GPUs:

python3 pretrain.py --dataset_path dataset.pt --vocab_path models/google_zh_vocab.txt \
                    --pretrained_model_path models/google_zh_model.bin \
                    --config_path models/bert/base_config.json \
                    --output_model_path models/book_review_model.bin \
                    --world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
                    --total_steps 5000 --save_checkpoint_steps 1000 --batch_size 32

mv models/book_review_model.bin-5000 models/book_review_model.bin

Notice that the model trained by pretrain.py is saved with a suffix that records the training step (--total_steps). We can remove the suffix for ease of use.
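
Before moving on, note that the same command also works with a single GPU; a sketch, assuming the remaining flags keep the values used above:

python3 pretrain.py --dataset_path dataset.pt --vocab_path models/google_zh_vocab.txt \
                    --pretrained_model_path models/google_zh_model.bin \
                    --config_path models/bert/base_config.json \
                    --output_model_path models/book_review_model.bin \
                    --world_size 1 --gpu_ranks 0 \
                    --total_steps 5000 --save_checkpoint_steps 1000 --batch_size 32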

Then we fine-tune the pre-trained model on the downstream classification dataset. We use the embedding and encoder layers of book_review_model.bin, which is the output of pretrain.py:

python3 finetune/run_classifier.py --pretrained_model_path models/book_review_model.bin \
                                   --vocab_path models/google_zh_vocab.txt \
                                   --config_path models/bert/base_config.json \
                                   --train_path datasets/book_review/train.tsv \
                                   --dev_path datasets/book_review/dev.tsv \
                                   --test_path datasets/book_review/test.tsv \
                                   --epochs_num 3 --batch_size 32

The default path of the fine-tuned classifier model is models/finetuned_model.bin. Note that the actual batch size of pre-training is --batch_size times --world_size, while the actual batch size of a downstream task (e.g. classification) is simply --batch_size; the small calculation below makes this concrete.
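
A quick sanity check of that arithmetic, using the values from the commands above:

# Effective batch size of pre-training = per-process batch size * number of processes.
batch_size = 32   # --batch_size
world_size = 8    # --world_size
print(batch_size * world_size)  # 256 sequences per pre-training step

We then do inference with the fine-tuned model: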

python3 inference/run_classifier_infer.py --load_model_path models/finetuned_model.bin \
                                          --vocab_path models/google_zh_vocab.txt \
                                          --config_path models/bert/base_config.json \
                                          --test_path datasets/book_review/test_nolabel.tsv \
                                          --prediction_path datasets/book_review/prediction.tsv \
                                          --labels_num 2

--test_path specifies the path of the file to be predicted; the file should contain a text_a column. --prediction_path specifies the path of the file with prediction results. We need to explicitly specify the number of labels with --labels_num; the above dataset is a two-way classification dataset.
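
For reference, the file to be predicted follows the classification format shown earlier, but without the label column (an illustrative sketch):

text_a
instance1
instance2
instance3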


The above content provides basic ways of using UER-py to pre-process, pre-train, fine-tune, and do inference. More use cases can be found in the complete ➡️ quickstart ⬅️, which contains abundant use cases covering most pre-training related application scenarios. We recommend that users read the complete quickstart to make full use of the project.


Pre-training data

This section provides links to a range of ➡️ pre-training data ⬅️ . UER can load these pre-training data directly.


Downstream datasets

This section provides links to a range of ➡️ downstream datasets ⬅️ . UER can load these datasets directly.


Modelzoo

With the help of UER, we pre-trained models of different properties (e.g. models based on different corpora, encoders, and targets). Detailed introduction of pre-trained models and their download links can be found in ➡️ modelzoo ⬅️ . All pre-trained models can be loaded by UER directly.


Instructions

UER-py is organized as follows:

UER-py/
    |--uer/
    |    |--embeddings/ # contains modules of embedding component
    |    |--encoders/ # contains modules of encoder component such as RNN, CNN, Transformer
    |    |--decoders/ # contains modules of decoder component
    |    |--targets/ # contains modules of target component such as language modeling, masked language modeling
    |    |--layers/ # contains frequently-used NN layers
    |    |--models/ # contains model.py, which combines modules of different components
    |    |--utils/ # contains frequently-used utilities
    |    |--model_builder.py
    |    |--model_loader.py
    |    |--model_saver.py
    |    |--opts.py
    |    |--trainer.py
    |
    |--corpora/ # contains pre-training data
    |--datasets/ # contains downstream tasks
    |--models/ # contains pre-trained models, vocabularies, and configuration files
    |--scripts/ # contains useful scripts for pre-training models
    |--finetune/ # contains fine-tuning scripts for downstream tasks
    |--inference/ # contains inference scripts for downstream tasks
    |
    |--preprocess.py
    |--pretrain.py
    |--README.md
    |--README_ZH.md
    |--requirements.txt
    |--LICENSE

The code is organized around components (e.g. embeddings, encoders). Users can use and extend it with little effort.

Comprehensive examples of using UER can be found in ➡️ instructions ⬅️ , which help users quickly implement pre-training models such as BERT, GPT-2, ELMo, T5 and fine-tune pre-trained models on a range of downstream tasks.


Competition solutions

UER-py has been used in winning solutions of many NLP competitions. In this section, we provide some examples of using UER-py to achieve SOTA results on NLP competitions, such as CLUE. See ➡️ competition solutions ⬅️ for more detailed information.


Citation

If you use UER-py or its pre-trained models in academic work, please cite the system paper published at EMNLP 2019:

@article{zhao2019uer,
  title={UER: An Open-Source Toolkit for Pre-training Models},
  author={Zhao, Zhe and Chen, Hui and Zhang, Jinbin and Zhao, Xin and Liu, Tao and Lu, Wei and Chen, Xi and Deng, Haotang and Ju, Qi and Du, Xiaoyong},
  journal={EMNLP-IJCNLP 2019},
  pages={241},
  year={2019}
}

Contact information

For communication related to this project, please contact Zhe Zhao ([email protected]; [email protected]) or Yudong Li ([email protected]) or Cheng Hou ([email protected]) or Wenhang Shi ([email protected]).

This work was instructed by my enterprise mentors Qi Ju, Xuefeng Yang, Haotang Deng and school mentors Tao Liu, Xiaoyong Du.

We also got a lot of help from Weijie Liu, Lusheng Zhang, Jianwei Cui, Xiayu Li, Weiquan Mao, Xin Zhao, Hui Chen, Jinbin Zhang, Zhiruo Wang, Peng Zhou, Haixiao Liu, and Weijian Wu.


uer-py's Issues

ALBERT support?

Could support for the ALBERT family be added? Either official support, or a tutorial on how to convert an existing ALBERT model to this project's format?

How can this project be applied to English tasks?

Hello! First of all, thank you very much for your team's work; it is very meaningful.
I would like to apply this project to tagging tasks on English text (specifically NER, chunking, and POS), but I could not find a pre-trained model that uses an English corpus. Does the project provide a corresponding .bin file?
If I generate the .bin model myself, do I need to first convert the TensorFlow pre-trained model to the Hugging Face version with the script Hugging Face provides, then convert the Hugging Face version to this project's .bin format with your script, and finally use that model to train the downstream task (with the English vocabulary provided by Google)?

Looking forward to your reply!

Can't load Reviews+LstmEncoder+LmTarget pretrain model

Hi,
after I run

python pretrain.py --dataset_path dataset.pt --vocab_path models/google_zh_vocab.txt \
                   --output_model_path models/model.bin --gpu_ranks 0 \
                   --total_steps 20000 --save_checkpoint_steps 5000 \
                   --encoder lstm --target lm --learning_rate 1e-3 \
                   --pretrained_model_path reviews_lstm_lm_model.bin \
                   --config_path models/rnn_config.json --batch_size 16

it shows the following error (error screenshot omitted).
Can you tell me where I made a mistake? Thanks!

Questions about the training steps in the quantitative evaluation

Hello, thank you for your exploration of pre-training model frameworks; it is instructive for our current work.
I have a few questions:

  1. Stage 1 of the quantitative evaluation says: "We train with batch size of 256 sequences and each sequence contains 256 tokens. We load Google's pretrained models and train upon it for 500,000 steps. The learning rate is 2e-5 and other optimizer settings are identical with Google BERT. BERT tokenizer is used." In this step, is the BERT model retrained from scratch on the domain-specific corpus, or is Google's BERT (pre-trained on Wikipedia) loaded and then trained further?
  2. Should the datasets used for downstream-task training in stage 2 and fine-tuning in stage 3 be the same? Or should the training dataset be much larger than the fine-tuning dataset? In the book review classification demo, the fine-tuning train+dev+test corresponds exactly to the book_review.txt dataset used for training.

Looking forward to your reply.

Error when running run_mrc.py on my own MRC dataset

I converted my own machine reading comprehension data into the SQuAD format and ran run_mrc.py.
Reading the data and training work fine, but an AssertionError is raised during evaluation:

Traceback (most recent call last):
  File "run_mrc_test.py", line 590, in <module>
    main()
  File "run_mrc_test.py", line 575, in main
    result = evaluate(args, False)
  File "run_mrc_test.py", line 474, in evaluate
    assert len(pred_list) == len(examples)
AssertionError

I then printed the lengths of several tensors while the evaluate function was running:

len(start_pred_all):1997, len(dataset):1997
len(start_logits_all):1997, len(dataset):1997
len(pred_list):1906, len(examples):1920

The first two assertions are satisfied, but pred_list and examples have different lengths.
dataset and examples also differ in length, though I am not sure whether they should be equal.
I am still reading the code of this script. Where might the problem be?

Could you please provide more details about the word-based bert?

Hi!

I am working on a task where, given a sentence and a word XXX, I need to find the word in the sentence that is most similar to XXX.

I notice that you provide a word-based BERT model, which I believe is suitable for my task, but I don't understand how it works. Is it trained on Chinese words directly? How does it handle OOV words? Can we add more words to the vocabulary?

I would appreciate it if you could provide more information about this BERT model.
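
Independently of UER's feature extractor, the lookup itself can be sketched like this (illustrative only; the tensors are random placeholders standing in for extracted word embeddings):

import torch
import torch.nn.functional as F

# query: embedding of the word XXX; sentence: embeddings of the words in the sentence.
query = torch.randn(768)
sentence = torch.randn(10, 768)
scores = F.cosine_similarity(sentence, query.unsqueeze(0), dim=-1)
print("most similar word index:", torch.argmax(scores).item())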

Could you provide the settings of the pretrained LstmEncoder models?

The provided feature_extractor.py and models/google_config.json work perfectly with BertEncoder pre-trained models, but no settings are provided for the LstmEncoder pre-trained models.

Could you please provide the settings for the pre-trained (Mixedlarge corpus & Amazon reviews)+LstmEncoder+(LmTarget & ClsTarget) model? When I use the default models/rnn_config.json for this model, it fails with:

RuntimeError: Error(s) in loading state_dict for Model:
        Missing key(s) in state_dict: "target.output_layer.weight", "target.output_layer.bias".
        Unexpected key(s) in state_dict: "target.linear_1.weight", "target.linear_1.bias", "target.linear_2.weight", "target.linear_2.bias".

I'm not sure where the issue lies (probably the multiple targets, I guess). More detailed parameter settings of the pre-trained models should be provided, since we don't know how to match the training parameters at inference time.

Thank you in advance!

seq_length

Hello, if I want to change the sequence length, how should the model be changed?
When running preprocess I changed seq_length from 128 to 1500, but I could not find a corresponding parameter to change for pretrain, which led to an error about tensor dimensions. How should I modify the parameters? And if I change the sequence length, can the pre-trained models you provide still be used?

exit code: 135

Hi, I have trouble running pretrain.py, as follows:
/opt/conda/lib/python3.6/multiprocessing/semaphore_tracker.py:143: UserWarning: semaphore_tracker: There appear to be 2 leaked semaphores to clean up at shutdown
len(cache))
/opt/pipeline-codes/pipeline_codes/run.sh: line 3: 10 Bus error python -u $@
exit code: 135

The problem seems to be in uer/trainer.py, line 45:

mp.spawn(worker, nprocs=args.ranks_num, args=(args.gpu_ranks, args, model),
         daemon=False)

About sequence labeling

Thanks for sharing. Why are [CLS] and [SEP] not added at the beginning and end of sequences for sequence labeling?

Error when converting the Hugging Face bert-base-chinese model to UER format

Using the model downloaded from Hugging Face (https://cdn.huggingface.co/bert-base-chinese-pytorch_model.bin) and the convert_bert_from_huggingface_to_uer.py script from scripts, I get the following error:

Traceback (most recent call last):
File "convert_bert_from_huggingface_to_uer.py", line 22, in
output_model["embedding.layer_norm.gamma"] = input_model["bert.embeddings.LayerNorm.weight"]
KeyError: 'bert.embeddings.LayerNorm.weight'

The command:

python convert_bert_from_huggingface_to_uer.py \
    --input_model_path ../models/bert-base-chinese-pytorch_model.bin \
    --output_model_path ../models/google_zh_model.bin

Two minor flaws in run_ner

Excellent work! I am studying it carefully.

On line 176, model.load_state_dict(torch.load(args.pretrained_model_path), strict=False) fails on machines without a GPU. I suggest moving line 187, device = torch.device("cuda" if torch.cuda.is_available() else "cpu"), before it and changing the call to model.load_state_dict(torch.load(args.pretrained_model_path, map_location=device), strict=False).
On line 332, ooptimizer = AdamW(optimizer_grouped_parameters, lr=args.learning_rate, correct_bias=False) has an extra "o".
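
In code, the reported fix corresponds roughly to the following lines inside run_ner.py (a sketch of the suggestion above, not an actual patch; model, args, AdamW, and optimizer_grouped_parameters are names taken from that script):

import torch

# Select the device before loading the checkpoint, so that a model saved on GPU
# can also be loaded on a CPU-only machine.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.load_state_dict(torch.load(args.pretrained_model_path, map_location=device),
                      strict=False)

# And drop the stray "o" on line 332: ooptimizer -> optimizer.
optimizer = AdamW(optimizer_grouped_parameters, lr=args.learning_rate, correct_bias=False)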

GPT generation

When generating with the generate.py script, it reports "segmentation fault (core dumped)".

mixed_large_24_model.bin shows bad results when converted into TensorFlow version

Hi,
Thank you for providing mixed_large_24_model.bin.
I was trying to convert this model using scripts/convert_bert_from_uer_to_google.py.
Everything goes fine without any alerts.
However, when I tried to fine-tune on a simple binary text classification task, it showed random results (i.e. acc = 50%). I used Google's vocabulary and the BERT-large config file and did not change other settings.
I think the conversion may not have been successful, which results in the bad performance.
Any ideas?

Multi-node training

What is the exact procedure for training with multiple GPUs across multiple nodes?

The example from the README:

Node-0 : python3 pretrain.py --dataset_path dataset.pt --vocab_path models/google_zh_vocab.txt \
                             --pretrained_model_path models/google_model.bin --output_model_path models/output_model.bin \
                             --encoder bert --target bert --world_size 16 --gpu_ranks 0 1 2 3 4 5 6 7 --master_ip tcp://node-0-addr:port
Node-1 : python3 pretrain.py --dataset_path dataset.pt --vocab_path models/google_zh_vocab.txt \
                             --pretrained_model_path models/google_model.bin --output_model_path models/output_model.bin \
                             --encoder bert --target bert --world_size 16 --gpu_ranks 8 9 10 11 12 13 14 15 --master_ip tcp://node-0-addr:port

这里"Node-0:"和"Node-1:"是什么意思呢?是两条分开的命令吗?在两台机器上分别运行pretrain.py?那怎么保证同时训练呢?

如何在同一个命令中指定多个节点进行训练呢?(例如slurm集群多节点训练?)

Error when using reviews_lstm_lm_model

Sorry, I have another question.
I want to use reviews_lstm_lm_model for a sequence labeling task, with rnn_config.json as the config file, but loading the model raises an error:

Traceback (most recent call last):
File "tagger.py", line 383, in
main()
File "tagger.py", line 164, in main
bert_model.load_state_dict(torch.load(args.pretrained_model_path), strict=False)
File "/home/wangzs/anaconda3/envs/torch/lib/python3.6/site-packages/torch/nn/modules/module.py", line 777, in load_state_dict
self.class.name, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for Model:
size mismatch for encoder.rnn.weight_ih_l1: copying a param with shape torch.Size([2048, 512]) from checkpoint, the shape in current model is torch.Size([2048, 1024]).

Adding extra domain-specific vocabulary

I have some domain-specific terms that I would like to add to the vocabulary. google_zh_vocab.txt has 100 unused slots, but that is far from enough. What should I do if I want to add tens of thousands of domain terms?

According to this answer, adding new words to the vocabulary is possible:
google-research/bert#9

(b) Append it to the end of the vocab, and write a script which generates a new checkpoint that is identical to the pre-trained checkpoint, but with a bigger vocab where the new embeddings are randomly initialized (for initialization we used tf.truncated_normal_initializer(stddev=0.02)). This will likely require mucking around with some tf.concat() and tf.assign() calls.

However, I am not sure how to do this concretely.

Could UER-py consider adding support for an extra user vocabulary?
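
In PyTorch terms, the quoted suggestion corresponds roughly to the following sketch (illustrative only; the tensor below is a random placeholder, not UER's actual embedding weight, and the truncated normal is approximated with a plain normal):

import torch

# Grow the word embedding matrix to cover newly appended vocabulary entries.
# Existing rows are kept; new rows are randomly initialized with stddev 0.02.
old_emb = torch.randn(21128, 768)      # placeholder for the pre-trained embedding
num_new_tokens = 10000
new_rows = torch.empty(num_new_tokens, old_emb.size(1)).normal_(mean=0.0, std=0.02)
new_emb = torch.cat([old_emb, new_rows], dim=0)
print(new_emb.shape)                   # torch.Size([31128, 768])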

Can the vocabulary be extended?

Hello, when pre-training, can the Chinese BERT vocabulary be modified to add some OOV characters for training?

What's the strategy to get sentence embedding from pretrained models?

When using the Feature Extractor API, what is the strategy for obtaining sentence embeddings from the pre-trained Webqa2019+BertEncoder+BertTarget and Reviews+LstmEncoder+LmTarget models?

E.g.

  • For BertEncoder, is it the mean of all token embeddings?
  • Does it include or exclude the embeddings of [CLS] and other special tokens when computing the sentence embedding?
  • For LstmEncoder, is the sentence embedding the average of the two layers or of just one layer?

Thank you in advance!
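
For the BERT case, mean pooling that excludes special tokens can be sketched as follows (illustrative only; this is not necessarily what UER's feature extractor does, and the tensors are random placeholders):

import torch

# hidden: [seq_length, hidden_size] token embeddings for one sentence.
# mask: 1 for ordinary tokens, 0 for [CLS], [SEP] and padding positions.
hidden = torch.randn(8, 768)
mask = torch.tensor([0, 1, 1, 1, 1, 1, 0, 0], dtype=torch.float32)
sentence_emb = (hidden * mask.unsqueeze(-1)).sum(dim=0) / mask.sum()
print(sentence_emb.shape)  # torch.Size([768])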

Quickstart error

Environment:

Python 3.6, PyTorch 1.1

Command:

python pretrain.py --dataset_path dataset.pt --vocab_path models/google_vocab.txt \
                   --pretrained_model_path models/google_model.bin \
                   --output_model_path models/book_review_model.bin \
                   --world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
                   --total_steps 20000 --save_checkpoint_steps 5000 \
                   --encoder bert --target bert

Log:

Using distributed mode for training.
Vocabulary file line 344 has bad format token
Vocabulary Size: 21128
Bus error (core dumped)
/opt/conda/lib/python3.6/multiprocessing/semaphore_tracker.py:143: UserWarning: semaphore_tracker: There appear to be 2 leaked semaphores to clean up at shutdown
len(cache))

MRC result is poor

The CMRC result is poor. Have you ever tested this result, and if so, what were your parameters? Thanks.

About vocab.txt

Hello, do the other pre-trained models, such as the People's Daily model, use the original google_vocab.txt as their vocabulary?

About pre-training with the MLM target

Hello, I see that when building the MLM dataset, [CLS] and [SEP] are not added at the beginning and end of each example. Is this an omission in the code, or are these two tokens not needed when training with MLM?
