deepseek-ai / deepseek-vl

DeepSeek-VL: Towards Real-World Vision-Language Understanding

Home Page: https://huggingface.co/spaces/deepseek-ai/DeepSeek-VL-7B

License: MIT License

vision-language-model vision-language-pretraining foundation-models

deepseek-vl's Introduction

DeepSeek-VL

📥 Model Download | ⚡ Quick Start | 📜 License | 📖 Citation
📄 Paper Link | 🤗 Huggingface Paper Link | 👁️ Demo

1. Introduction

Introducing DeepSeek-VL, an open-source Vision-Language (VL) model designed for real-world vision and language understanding applications. DeepSeek-VL has general multimodal understanding capabilities and can process logical diagrams, web pages, formulas, scientific literature, natural images, and embodied-intelligence scenarios in complex settings.

DeepSeek-VL: Towards Real-World Vision-Language Understanding

Haoyu Lu*, Wen Liu*, Bo Zhang**, Bingxuan Wang, Kai Dong, Bo Liu, Jingxiang Sun, Tongzheng Ren, Zhuoshu Li, Hao Yang, Yaofeng Sun, Chengqi Deng, Hanwei Xu, Zhenda Xie, Chong Ruan (*Equal Contribution, **Project Lead)

2. Release

2024-03-14: Demo for DeepSeek-VL-7B available on Hugging Face.
Check out the gradio demo of DeepSeek-VL-7B at https://huggingface.co/spaces/deepseek-ai/DeepSeek-VL-7B. Experience its capabilities firsthand!
2024-03-13: Added support for the DeepSeek-VL Gradio demo.
2024-03-11: DeepSeek-VL family released, including DeepSeek-VL-7B-base, DeepSeek-VL-7B-chat, DeepSeek-VL-1.3B-base, and DeepSeek-VL-1.3B-chat.
The release includes a diverse set of models tailored for various applications within the DeepSeek-VL family. The models come in two sizes: 7B and 1.3B parameters, each offering base and chat variants to cater to different needs and integration scenarios.

3. Model Downloads

We release the DeepSeek-VL family, including the 1.3B-base, 1.3B-chat, 7B-base, and 7B-chat models, to the public in order to support a broader and more diverse range of research within both academic and commercial communities. Please note that the use of these models is subject to the terms outlined in the License section; commercial usage is permitted under those terms.

Huggingface

| Model                 | Sequence Length | Download        |
|-----------------------|-----------------|-----------------|
| DeepSeek-VL-1.3B-base | 4096            | 🤗 Hugging Face |
| DeepSeek-VL-1.3B-chat | 4096            | 🤗 Hugging Face |
| DeepSeek-VL-7B-base   | 4096            | 🤗 Hugging Face |
| DeepSeek-VL-7B-chat   | 4096            | 🤗 Hugging Face |

4. Quick Start

Installation

In a Python >= 3.8 environment, install the necessary dependencies by running the following command:

pip install -e .

Simple Inference Example

import torch
from transformers import AutoModelForCausalLM

from deepseek_vl.models import VLChatProcessor, MultiModalityCausalLM
from deepseek_vl.utils.io import load_pil_images


# specify the path to the model
model_path = "deepseek-ai/deepseek-vl-7b-chat"
vl_chat_processor: VLChatProcessor = VLChatProcessor.from_pretrained(model_path)
tokenizer = vl_chat_processor.tokenizer

vl_gpt: MultiModalityCausalLM = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True)
vl_gpt = vl_gpt.to(torch.bfloat16).cuda().eval()

## single image conversation example
conversation = [
    {
        "role": "User",
        "content": "<image_placeholder>Describe each stage of this image.",
        "images": ["./images/training_pipelines.jpg"],
    },
    {"role": "Assistant", "content": ""},
]

## multiple images (or in-context learning) conversation example
# conversation = [
#     {
#         "role": "User",
#         "content": "<image_placeholder>A dog wearing nothing in the foreground, "
#                    "<image_placeholder>a dog wearing a santa hat, "
#                    "<image_placeholder>a dog wearing a wizard outfit, and "
#                    "<image_placeholder>what's the dog wearing?",
#         "images": [
#             "images/dog_a.png",
#             "images/dog_b.png",
#             "images/dog_c.png",
#             "images/dog_d.png",
#         ],
#     },
#     {"role": "Assistant", "content": ""}
# ]

# load images and prepare for inputs
pil_images = load_pil_images(conversation)
prepare_inputs = vl_chat_processor(
    conversations=conversation,
    images=pil_images,
    force_batchify=True
).to(vl_gpt.device)

# run image encoder to get the image embeddings
inputs_embeds = vl_gpt.prepare_inputs_embeds(**prepare_inputs)

# run the model to get the response
outputs = vl_gpt.language_model.generate(
    inputs_embeds=inputs_embeds,
    attention_mask=prepare_inputs.attention_mask,
    pad_token_id=tokenizer.eos_token_id,
    bos_token_id=tokenizer.bos_token_id,
    eos_token_id=tokenizer.eos_token_id,
    max_new_tokens=512,
    do_sample=False,
    use_cache=True
)

answer = tokenizer.decode(outputs[0].cpu().tolist(), skip_special_tokens=True)
print(f"{prepare_inputs['sft_format'][0]}", answer)

CLI Chat

python cli_chat.py --model_path "deepseek-ai/deepseek-vl-7b-chat"

# or local path
python cli_chat.py --model_path "local model path"

Gradio Demo

pip install -e .[gradio]

python deepseek_vl/serve/app_deepseek.py

Have Fun!

5. License

This code repository is licensed under the MIT License. The use of the DeepSeek-VL Base/Chat models is subject to the DeepSeek Model License. The DeepSeek-VL series (including Base and Chat) supports commercial use.

6. Citation

@misc{lu2024deepseekvl,
      title={DeepSeek-VL: Towards Real-World Vision-Language Understanding},
      author={Haoyu Lu and Wen Liu and Bo Zhang and Bingxuan Wang and Kai Dong and Bo Liu and Jingxiang Sun and Tongzheng Ren and Zhuoshu Li and Hao Yang and Yaofeng Sun and Chengqi Deng and Hanwei Xu and Zhenda Xie and Chong Ruan},
      year={2024},
      eprint={2403.05525},
      archivePrefix={arXiv},
      primaryClass={cs.AI}
}

7. Contact

If you have any questions, please raise an issue or contact us at [email protected].

deepseek-vl's People

Contributors

benjamin-eecs, fodark, haoy945, rerv, sinanakkoyun, stevenliuwen, youho99


deepseek-vl's Issues

Support for M1 Mac, or non-cuda devices

Seems like parts of the code explicitly call cuda functions, so you can't just switch it to MPS for Macs. Any roadmap for supporting Macs?

Io.py

def load_pretrained_model(model_path: str):
    vl_chat_processor: VLChatProcessor = VLChatProcessor.from_pretrained(model_path)
    tokenizer = vl_chat_processor.tokenizer

    vl_gpt: MultiModalityCausalLM = AutoModelForCausalLM.from_pretrained(
        model_path, trust_remote_code=True
    )
    vl_gpt = vl_gpt.to(torch.bfloat16).cuda().eval()

error

  File "/Users/user/projects/ai/DeepSeek-VL/deepseek_vl/utils/io.py", line 37, in load_pretrained_model
    vl_gpt = vl_gpt.to(torch.bfloat16).cuda().eval()
  File "/Users/user/miniconda3/envs/py39/lib/python3.9/site-packages/transformers/modeling_utils.py", line 2528, in cuda
    return super().cuda(*args, **kwargs)
  File "/Users/user/miniconda3/envs/py39/lib/python3.9/site-packages/torch/nn/modules/module.py", line 911, in cuda
    return self._apply(lambda t: t.cuda(device))
  File "/Users/user/miniconda3/envs/py39/lib/python3.9/site-packages/torch/nn/modules/module.py", line 802, in _apply
    module._apply(fn)
  File "/Users/user/miniconda3/envs/py39/lib/python3.9/site-packages/torch/nn/modules/module.py", line 802, in _apply
    module._apply(fn)
  File "/Users/user/miniconda3/envs/py39/lib/python3.9/site-packages/torch/nn/modules/module.py", line 802, in _apply
    module._apply(fn)
  [Previous line repeated 2 more times]
  File "/Users/user/miniconda3/envs/py39/lib/python3.9/site-packages/torch/nn/modules/module.py", line 825, in _apply
    param_applied = fn(param)
  File "/Users/user/miniconda3/envs/py39/lib/python3.9/site-packages/torch/nn/modules/module.py", line 911, in <lambda>
    return self._apply(lambda t: t.cuda(device))
  File "/Users/user/miniconda3/envs/py39/lib/python3.9/site-packages/torch/cuda/__init__.py", line 293, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
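
For anyone hitting this, here is a minimal sketch of a device-agnostic variant of the loader above; the device-selection and dtype logic are assumptions on my part, not code from the repo, and MPS support still depends on which ops the model uses:

import torch
from transformers import AutoModelForCausalLM

from deepseek_vl.models import MultiModalityCausalLM, VLChatProcessor


def load_pretrained_model(model_path: str, device: str = None):
    # Pick the best available backend instead of hard-coding .cuda()
    if device is None:
        if torch.cuda.is_available():
            device = "cuda"
        elif torch.backends.mps.is_available():
            device = "mps"
        else:
            device = "cpu"

    # bfloat16 is not supported everywhere; fall back to float16 on MPS and float32 on CPU
    if device == "cuda":
        dtype = torch.bfloat16
    elif device == "mps":
        dtype = torch.float16
    else:
        dtype = torch.float32

    vl_chat_processor: VLChatProcessor = VLChatProcessor.from_pretrained(model_path)
    tokenizer = vl_chat_processor.tokenizer

    vl_gpt: MultiModalityCausalLM = AutoModelForCausalLM.from_pretrained(
        model_path, trust_remote_code=True
    )
    vl_gpt = vl_gpt.to(dtype).to(device).eval()
    return tokenizer, vl_chat_processor, vl_gpt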

About the preprocessing of the high- and low-resolution inputs

SAM uses different normalization means than SigLIP,

but from the code, the VLChatProcessor just uses 1024 as the size and imagenet_mean for the input; this is right for SAM, but not for SigLIP.

The processed tensors are sent directly to the HybridTower without any further processing.

Why is it done this way? Is this correct?
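
For context on the question above, the two encoders are usually preprocessed with different statistics. The constants below are the commonly used defaults for SAM and SigLIP, shown only to illustrate the difference; they are not a statement about what DeepSeek-VL's processor actually does internally:

import torch

# Commonly used SAM preprocessing: resize/pad to 1024 and normalize with ImageNet stats on a 0-255 scale
SAM_MEAN = torch.tensor([123.675, 116.28, 103.53]).view(3, 1, 1)
SAM_STD = torch.tensor([58.395, 57.12, 57.375]).view(3, 1, 1)

# Commonly used SigLIP preprocessing: normalize a 0-1 image to [-1, 1], i.e. mean 0.5 / std 0.5
SIGLIP_MEAN = torch.tensor([0.5, 0.5, 0.5]).view(3, 1, 1)
SIGLIP_STD = torch.tensor([0.5, 0.5, 0.5]).view(3, 1, 1)


def normalize_for_sam(pixels_0_255: torch.Tensor) -> torch.Tensor:
    return (pixels_0_255 - SAM_MEAN) / SAM_STD


def normalize_for_siglip(pixels_0_1: torch.Tensor) -> torch.Tensor:
    return (pixels_0_1 - SIGLIP_MEAN) / SIGLIP_STD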

ROCm Support

Would love to see AMD/ROCm support. I know you guys are busy though. If I have the time I'll put in a pull request.

Fine-tuning Script

Congratulations to DeepSeek for the wonderful work. I wonder if there is a script for fine-tuning DeepSeek-VL? Thanks!

Multi-image mixed input

Does the DeepSeek-VL series support input of multiple images? This doesn't seem to be stated in the paper, but the images field in the example script is a list, which suggests it is supported.

Does the sam-branch use the Vary initialization for dense OCR?

Hi,
I read your report, and I think the pipeline is very similar to Vary. I have a question:
Does the sam-branch use the Vary initialization for dense OCR? Based on my experiments, the vision latent output by the original Sam is noisy for text-latent-based LLM.

Finetuning

Hi, how can I replace this model's language model with another model and fine-tune it?
And can I fine-tune this model as-is, without such a swap?

Is text-generation-webui supported?

ValueError: The checkpoint you are trying to load has model type multi_modality but Transformers does not recognize this architecture. This could be because of an issue with the checkpoint, or because your version of Transformers is out of date.

RuntimeError: cuDNN error: CUDNN_STATUS_NOT_INITIALIZED

With Python 3.10 + CUDA 11.7/11.8 + torch 2.0.1, running inference.py fails with the following error:
return F.conv2d(input, weight, bias, self.stride,
RuntimeError: cuDNN error: CUDNN_STATUS_NOT_INITIALIZED
Switching to Python 3.8 resolved it.

The model keeps repeating its last sentence. What causes this, and is there a way to fix it?

User: <image_placeholder>Describe this image

Assistant: This image captures the moment an autonomous car navigates a city street. The car is the central focus of the image, positioned in the middle of the frame and moving toward its right side. The street it is driving on is marked with yellow lines indicating its path.

On either side of the car, two numeric markers are visible. The marker on the left shows the car's speed, while the marker on the right shows its distance. These markers give a clear indication of the car's direction and distance.

To the right of the car there is a blue icon. This icon may represent a specific feature or setting, such as steering assist or lane-keeping assist.

To the left of the car there is a gray icon. This icon may represent a specific feature or setting, such as automatic parking or automatic lane-change assist.

Above the car there is a yellow icon. This icon may represent a specific feature or setting, such as automatic lane-change assist or lane-keeping assist.

Above the car there is a gray icon. This icon may represent a specific feature or setting, such as automatic parking or automatic lane-change assist.

Above the car there is a gray icon. This icon may represent a specific feature or setting, such as automatic parking or automatic lane-change assist.

Above the car there is a gray icon. This icon may represent a specific feature or setting, such as automatic parking or automatic lane-change assist.

Above the car there is a gray icon. This icon may represent a specific feature or setting, such as automatic parking or automatic lane-change assist.

Above the car there is a gray icon. This icon may represent a specific feature or setting, such as automatic parking or automatic lane-change assist.

Above the car there is a gray icon. This icon may represent a specific feature or setting, such as automatic parking or automatic lane-change assist.

Above the car there is a gray icon. This icon may represent a specific feature or setting, such as automatic parking or automatic lane-change assist.

Above the car there is a gray icon. This icon may represent a specific feature or setting, such as automatic parking or automatic lane-change assist.

Above the car there is a gray icon. This icon may represent a specific feature or setting, such as automatic parking or automatic lane-change assist.

Above the car there is a gray icon. This icon may represent a specific feature or setting, such as automatic parking or automatic lane-change
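
Not an official answer to the looping issue above, but standard transformers generation options sometimes reduce this kind of repetition. A hedged sketch that tweaks the generate() call from the Quick Start example; the parameter values are illustrative, not recommendations from the authors:

# Same call as in the Quick Start example, with anti-repetition knobs added
outputs = vl_gpt.language_model.generate(
    inputs_embeds=inputs_embeds,
    attention_mask=prepare_inputs.attention_mask,
    pad_token_id=tokenizer.eos_token_id,
    bos_token_id=tokenizer.bos_token_id,
    eos_token_id=tokenizer.eos_token_id,
    max_new_tokens=512,
    do_sample=True,          # greedy decoding tends to loop; sampling often helps
    temperature=0.7,
    top_p=0.9,
    repetition_penalty=1.1,  # penalize tokens that have already been generated
    use_cache=True,
)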

'FieldInfo' object has no attribute 'required'

RuntimeError: Failed to import transformers.models.llama.modeling_llama because of the following error (look up to see its traceback):
Failed to import transformers.generation.utils because of the following error (look up to see its traceback):
'FieldInfo' object has no attribute 'required'

transformers==4.38.2

How can this be resolved?

TypeError: Object of type AlignerConfig is not JSON serializable

When I run vl_gpt: MultiModalityCausalLM = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True), it raises TypeError: Object of type AlignerConfig is not JSON serializable. My model files are the 1.3B-chat version downloaded from Hugging Face.

tips for running the model in FP16 on 24GB GPU

I tried to run this model in Gradio GUI on Windows 10 but I had a few issues.

  1. It was loading weights to CPU
  2. Weights were loaded seemingly in FP32, so it was overflowing my VRAM and was therefore super slow.

I modified Inference.py (the one in deepseek_vl\serve) a bit to fix those issues.
I also made sure that my torch was installed with cuda 11.8 and not cpu-only mode.

So, if someone else runs into problems when running this model on 24GB of VRAM, this issue might help you.

Installation instructions (assuming you're already in a virtual env, which you should be using)

git clone https://github.com/deepseek-ai/DeepSeek-VL
cd DeepSeek-VL
pip install torch==2.0.1 torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
pip install -e .[gradio]
##replace inference.py file in the repo in folder  deepseek_vl/serve with the one provided by me
##if you have the model downloaded locally, maybe change the path in app_deepseek.py to a local one
python deepseek_vl/serve/app_deepseek.py

Here's the Inference.py that works for me
https://gist.github.com/adamo1139/511f63c01c6088d7747f47628ffc970c
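
For reference, a minimal sketch of the kind of loading change described here; the linked gist may do it differently, and exact memory behavior depends on your setup. It loads the weights directly in half precision so they never materialize in FP32, then moves the model onto the GPU:

import torch
from transformers import AutoModelForCausalLM

from deepseek_vl.models import MultiModalityCausalLM, VLChatProcessor

model_path = "deepseek-ai/deepseek-vl-7b-chat"
vl_chat_processor: VLChatProcessor = VLChatProcessor.from_pretrained(model_path)
tokenizer = vl_chat_processor.tokenizer

# Load weights in FP16 and keep CPU RAM usage low during loading;
# roughly 7B parameters * 2 bytes is about 14 GB of weights, which fits in 24 GB of VRAM.
vl_gpt: MultiModalityCausalLM = AutoModelForCausalLM.from_pretrained(
    model_path,
    trust_remote_code=True,
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,
)
vl_gpt = vl_gpt.cuda().eval()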

I will be closing it down, just want to leave a trace that will hopefully save some time for others.

ImportError: cannot import name 'Mapping' from 'collections'

Hey, thanks a lot for sharing this great accomplishment with the community!
I have just tried running the cli_chat on Python3.11 and I get ImportError: cannot import name 'Mapping' from 'collections' at https://github.com/deepseek-ai/DeepSeek-VL/blob/main/deepseek_vl/models/modeling_vlm.py#L21.
I believe this is due to attrdict being broken with python3.10+ https://stackoverflow.com/questions/72361026/how-can-i-get-attrdict-module-working-in-python.

Easiest thing would be to update Readme/requirements to specify >=python3.8, <3.10.
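
As a hypothetical workaround (not something the repo ships), the attrdict usage can be replaced with a tiny dict subclass that exposes keys as attributes, which sidesteps the broken collections.Mapping import on Python 3.10+; unlike the real attrdict, this minimal version does not wrap nested dicts:

class AttrDict(dict):
    """Minimal stand-in for attrdict.AttrDict: dict keys readable as attributes."""

    def __getattr__(self, name):
        try:
            return self[name]
        except KeyError as exc:
            raise AttributeError(name) from exc

    def __setattr__(self, name, value):
        self[name] = value


# usage example
cfg = AttrDict({"image_size": 1024, "select_layer": -1})
print(cfg.image_size)  # 1024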

How to convert it to GGUF/GGML for general use?

Sorry for the dumb question, but I did search for answers and tried a few things before asking.
Using llama.cpp

python ./convert-hf-to-gguf.py \
../../deepseek-vl-7b-chat \
--outtype f16 \
--outfile ../../deepseek-vl-7b-chat/deepseek-v1-7b-chat.gguf

returned

Loading model: deepseek-vl-7b-chat
Traceback (most recent call last):
  File "/home/zhangyuanfeng/software/ollama/llm/llama.cpp/./convert-hf-to-gguf.py", line 2099, in <module>
    main()
  File "/home/zhangyuanfeng/software/ollama/llm/llama.cpp/./convert-hf-to-gguf.py", line 2079, in main
    model_class = Model.from_model_architecture(hparams["architectures"][0])
  File "/home/zhangyuanfeng/software/ollama/llm/llama.cpp/./convert-hf-to-gguf.py", line 215, in from_model_architecture
    raise NotImplementedError(f'Architecture {arch!r} not supported!') from None
NotImplementedError: Architecture 'MultiModalityCausalLM' not supported!

So is there any feasible method? Thx.

dataset format of pretraining stage

How did you unify the format of the pretraining dataset? During the supervised fine-tuning stage, the training data are curated as question-and-answer pairs. For caption or detection datasets, do they follow the same format as the SFT data, and how do you collect questions for such data when they originally contain only ground truth such as captions or boxes?

Have you evaluated its potential as a web or Windows agent?

As the title says.
Current open-source multimodal LLMs seem to have only limited agent capabilities: they either cannot fully understand text, cannot handle image recognition at the same time, lack task-planning ability, or lack function-calling or interpreter capabilities.

Among DeepSeek's current models, the 67B one has potential as a text agent, but it is too large.

Really looking forward to DeepSeek's work in the agent space.

Does not start

I installed this into a Python 3.8.0 Conda environment with the "pip install -e ." command. Now, when I try to run it, I only get these lines:

(DeepSeekVL) C:\Users\Troc\DeepSeek-VL>python deepseek_vl/serve/app_deepseek.py
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Loading checkpoint shards:   0%|                                                                 | 0/3 [00:00<?, ?it/s]

After that, the process just quits with zero explanation. The only sign that something happened is a slight bump in RAM usage. Did I do something incorrectly?

Batching

Is this code "optimal" for batched inference and preprocessing?

4-bit quant?

Hi! Thank you for releasing this multimodal model. First tests are impressive. Even the 1.3B version is good for its size.
It is just that the 7B version in full precision is still taxing on the personal hardware we have at home.
Would it be possible to quantize it to int4, as Qwen did with their Qwen-VL-Chat-Int4?
I think it would be best if you could do it and put it in your HF repo so the community can use it.
If not, maybe you could give us some guidelines on how to do it.
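
Until an official int4 checkpoint exists, one option is on-the-fly 4-bit quantization with bitsandbytes at load time. This is a sketch under the assumption that the custom MultiModalityCausalLM code tolerates quantized loading, which has not been verified here:

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

from deepseek_vl.models import VLChatProcessor

model_path = "deepseek-ai/deepseek-vl-7b-chat"
vl_chat_processor = VLChatProcessor.from_pretrained(model_path)
tokenizer = vl_chat_processor.tokenizer

# NF4 quantization with bf16 compute; requires the bitsandbytes package and a CUDA GPU
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

vl_gpt = AutoModelForCausalLM.from_pretrained(
    model_path,
    trust_remote_code=True,
    quantization_config=bnb_config,
    device_map="auto",
).eval()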

Can you address the work you have referenced?

For instance:

  1. The SAM pretrained model isn't listed or cited in the README;
  2. The overall architecture looks very similar to Vary; please add a citation to it, and describe the modifications you made compared to the original work.

Thanks.

Gradio gets stuck loading the model

With the app_deepseek.py script, Gradio gets stuck on "loading" while loading the model (deepseek-vl-7b-chat). What could be the cause? The GPU is a V100.

error when i run inference.py

Traceback (most recent call last):
  File "inference.py", line 56, in <module>
    outputs = vl_gpt.language_model.generate(
  File "/root/miniconda3/envs/py38/lib/python3.8/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/root/miniconda3/envs/py38/lib/python3.8/site-packages/transformers/generation/utils.py", line 1527, in generate
    result = self._greedy_search(
  File "/root/miniconda3/envs/py38/lib/python3.8/site-packages/transformers/generation/utils.py", line 2411, in _greedy_search
    outputs = self(
  File "/root/miniconda3/envs/py38/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/miniconda3/envs/py38/lib/python3.8/site-packages/transformers/models/llama/modeling_llama.py", line 1196, in forward
    outputs = self.model(
  File "/root/miniconda3/envs/py38/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/miniconda3/envs/py38/lib/python3.8/site-packages/transformers/models/llama/modeling_llama.py", line 990, in forward
    causal_mask = self._update_causal_mask(attention_mask, inputs_embeds, cache_position)
  File "/root/miniconda3/envs/py38/lib/python3.8/site-packages/transformers/models/llama/modeling_llama.py", line 1076, in _update_causal_mask
    causal_mask = torch.triu(causal_mask, diagonal=1)
RuntimeError: "triu_tril_cuda_template" not implemented for 'BFloat16'

python=3.8
accelerate 0.28.0
aiofiles 23.2.1
altair 5.2.0
annotated-types 0.6.0
anyio 4.3.0
attrdict 2.0.1
attrs 23.2.0
Brotli 1.0.9
certifi 2024.2.2
charset-normalizer 2.0.4
click 8.1.7
colorama 0.4.5
contourpy 1.1.1
cycler 0.12.1
deepseek_vl 1.0.0 /app
einops 0.7.0
exceptiongroup 1.2.0
fastapi 0.110.0
ffmpy 0.3.2
filelock 3.13.1
fonttools 4.50.0
fsspec 2024.3.1
gmpy2 2.1.2
gradio 3.48.0
gradio_client 0.6.1
h11 0.14.0
httpcore 1.0.4
httpx 0.27.0
huggingface-hub 0.22.0
idna 3.4
importlib_metadata 7.1.0
importlib_resources 6.4.0
Jinja2 3.1.3
jsonschema 4.21.1
jsonschema-specifications 2023.12.1
kiwisolver 1.4.5
latex2mathml 3.77.0
Markdown 3.4.1
MarkupSafe 2.1.3
matplotlib 3.7.5
mdtex2html 1.3.0
mkl-fft 1.3.8
mkl-random 1.2.4
mkl-service 2.4.0
mpmath 1.3.0
networkx 3.1
numpy 1.24.3
orjson 3.9.15
packaging 24.0
pandas 2.0.3
pillow 10.2.0
pip 23.3.1
pkgutil_resolve_name 1.3.10
psutil 5.9.8
pydantic 2.6.4
pydantic_core 2.16.3
pydub 0.25.1
Pygments 2.12.0
pyparsing 3.1.2
pypinyin 0.50.0
PySocks 1.7.1
python-dateutil 2.9.0.post0
python-multipart 0.0.9
pytz 2024.1
PyYAML 6.0.1
referencing 0.34.0
regex 2023.12.25
requests 2.31.0
rpds-py 0.18.0
safetensors 0.4.2
semantic-version 2.10.0
sentencepiece 0.1.96
setuptools 68.2.2
six 1.16.0
sniffio 1.3.1
starlette 0.36.3
sympy 1.12
tiktoken 0.5.2
timm 0.9.16
tokenizers 0.15.2
toolz 0.12.1
torch 2.0.1
torchaudio 2.0.2
torchvision 0.15.2
tqdm 4.64.0
transformers 4.39.1
triton 2.0.0
typing_extensions 4.9.0
tzdata 2024.1
urllib3 2.1.0
uvicorn 0.29.0
websockets 11.0.3
wheel 0.41.2
zipp 3.18.1

question: help needed for arch clarification

Hi! I'd love vLLM and exllamav2 support and wanted to help with that. Is the LLM side of the model pure llama2 architecture (not counting the lin. proj. layers)?

Thank you in advance
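
One way to check what the checkpoint itself declares is to load its config with trust_remote_code and inspect it. The language_config field name below is an assumption about the repo's config layout, so printing the whole config is the safe starting point:

from transformers import AutoConfig

config = AutoConfig.from_pretrained(
    "deepseek-ai/deepseek-vl-7b-chat", trust_remote_code=True
)

# Print everything the checkpoint declares about its sub-modules
print(config)

# If the config exposes a language-model sub-config, this shows which
# decoder architecture the LLM side reports
lm_cfg = getattr(config, "language_config", None)
if lm_cfg is not None:
    print(lm_cfg)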
