
hila-chefer / transformer-mm-explainability

705 stars · 8 watchers · 105 forks · 25.94 MB

[ICCV 2021- Oral] Official PyTorch implementation for Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers, a novel method to visualize any Transformer-based network. Including examples for DETR, VQA.

License: MIT License

Jupyter Notebook 90.17% Dockerfile 0.01% Python 9.71% C 0.02% Shell 0.03% JavaScript 0.05% CSS 0.01%
transformers transformer vqa detr visualization explainability explainable-ai interpretability lxmert visualbert

transformer-mm-explainability's People

Contributors

bpiyush, hila-chefer, josh-freeman, t-hichef


transformer-mm-explainability's Issues

attn_grad

Dear Hila,

Thank you for your work, I really like it.

In the CLIP notebook,

    image_attn_blocks = list(dict(model.visual.transformer.resblocks.named_children()).values())

then

    grad = blk.attn_grad
    cam = blk.attn_probs

If I understand correctly, each blk is a CLIP ResidualAttentionBlock. But there are no attn_grad or attn_probs attributes in the ResidualAttentionBlock class; are they inherited from nn.Module? I tried to google it, but I could not find any related resource.

Similarly, in the ViT notebook, there is

    grad = blk.attn.get_attn_gradients()
    cam = blk.attn.get_attention_map()

The function is from here, but I still have trouble understanding how you obtain the gradient and the attention map. Sorry, I am new to PyTorch.

Could you help me to understand your implementation?

Thank you for your help.

Best Wishes,

Alex
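
A minimal sketch of how such attributes are typically populated with PyTorch hooks (the class and attribute names here are illustrative; the repo's modified CLIP blocks may differ in the details):

    import torch.nn as nn

    class AttentionWithHooks(nn.Module):
        """Toy self-attention that stores the attention map and its gradient."""
        def __init__(self, embed_dim, num_heads):
            super().__init__()
            self.num_heads = num_heads
            self.qkv = nn.Linear(embed_dim, embed_dim * 3)
            self.proj = nn.Linear(embed_dim, embed_dim)
            self.attn_probs = None   # A, saved on the forward pass
            self.attn_grad = None    # dL/dA, saved on the backward pass

        def save_attn_grad(self, grad):
            self.attn_grad = grad

        def forward(self, x):
            B, N, C = x.shape
            head_dim = C // self.num_heads
            qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, head_dim).permute(2, 0, 3, 1, 4)
            q, k, v = qkv[0], qkv[1], qkv[2]                 # each: (B, heads, N, head_dim)
            attn = (q @ k.transpose(-2, -1)) * head_dim ** -0.5
            attn = attn.softmax(dim=-1)
            self.attn_probs = attn                            # keep the attention map A
            if attn.requires_grad:
                attn.register_hook(self.save_attn_grad)       # fills attn_grad on backward()
            out = (attn @ v).transpose(1, 2).reshape(B, N, C)
            return self.proj(out)

After backward() is called on a scalar score, each block then exposes attn_probs and attn_grad, which is what the notebook reads.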

How can I choose the method when I run the script?

CUDA_VISIBLE_DEVICES=0 PYTHONPATH=`pwd` python VisualBERT/run.py --method=<method_name> --is-text-pert=<true/false> --is-positive-pert=<true/false> --num-samples=10000 config=projects/visual_bert/configs/vqa2/defaults.yaml model=visual_bert dataset=vqa2 run_type=val checkpoint.resume_zoo=visual_bert.finetuned.vqa2.from_coco_train env.data_dir=/path/to/data_dir training.num_workers=0 training.batch_size=1 training.trainer=mmf_pert training.seed=1234

Swin Transformer

Hello Hila,
Thank you for your great work. It is impressive.
Right now, I am working on visualizing attention maps with Swin Transformer. Your work brings me some interesting insights. In your code CLIP-explainability.ipynb

for blk in image_attn_blocks:
      grad = blk.attn_grad
      cam = blk.attn_probs
      cam = cam.reshape(-1, cam.shape[-1], cam.shape[-1])
      grad = grad.reshape(-1, grad.shape[-1], grad.shape[-1])
      cam = grad * cam
      cam = cam.clamp(min=0).mean(dim=0)
      R += torch.matmul(cam, R)

the shapes of grad and cam are assumed to be consistent across attention blocks. However, in the Swin Transformer the patch resolution changes across stages, which results in attention matrices of different sizes.
Can you give me some advice on how I can apply your work to generate relevance maps for the Swin Transformer?
Thank you for your time.
Best wishes,
Kevin

CLIP ViT-B/16

Hi Hila, thanks for your great work. I am trying to run the CLIP code from the notebook on the ViT-B/16 model, but I am getting attention maps that don't make any sense (not able to get similar results to what's in the notebook). For the ViT-B/32 model, I'm able to reproduce the results, but for some reason the ViT-B/16 model is causing an issue. Do you know why this is? The only things in the code I needed to change are:

  • Add the link to ViT-B/16 in the _MODELS dictionary in CLIP/clip/clip.py
  • Change the reshape in interpret from (1, 1, 7, 7) to (1, 1, dim, dim) where dim = int(image_relevance.numel() ** 0.5)

Thanks!
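
For reference, a resolution-agnostic version of that reshape could look like the following sketch (the function name and the bilinear upsampling choice are mine, mirroring what the notebook does for the 7x7 case):

    import torch
    import torch.nn.functional as F

    def upsample_image_relevance(image_relevance: torch.Tensor, input_res: int = 224) -> torch.Tensor:
        # Reshape a per-image relevance vector to its square patch grid and upsample it.
        dim = int(image_relevance.numel() ** 0.5)        # 7 for ViT-B/32, 14 for ViT-B/16 at 224 px
        assert dim * dim == image_relevance.numel(), "relevance length must be a perfect square"
        relevance = image_relevance.reshape(1, 1, dim, dim)
        return F.interpolate(relevance, size=input_res, mode='bilinear', align_corners=False)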

save_visual_results in visualBERT

Hi Hila,
Could you point me to where the save_visual_results function is defined? I use ViLT as the multimodal transformer, but I cannot use num_tokens = image_attn_blocks[0].attn_probs.shape[-1] to set the number of tokens. For example, for ViLT on the VQA task with a 384x384 image, the number of mixed vision and text tokens is 185 including the [CLS] token, so there are 144 vision tokens and 40 text tokens (max length).
Thanks very much.
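
For the numbers quoted above, the token bookkeeping works out as follows (a sketch assuming ViLT's 32x32 patches):

    patch_size = 32
    image_size = 384
    num_image_tokens = (image_size // patch_size) ** 2    # 12 * 12 = 144 vision tokens
    num_text_tokens = 40                                  # max text length
    num_tokens = 1 + num_image_tokens + num_text_tokens   # 185, including the [CLS] token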

COCO 2014 or 2017?

Dear @hila-chefer ,

Thank you for releasing this repo of your fascinating work!!

Would you mind clarifying these two questions about your results for me? :)

  1. Were you doing detection or segmentation (i.e. evaluated on bounding boxes or polygons)? I see these two terms used interchangeably in Table 1 and Fig. 6.
  2. Were your COCO results evaluated on COCO 2014 or COCO 2017? (In the README I see "COCO_val2014", but coco.py reads "val2017.json".) I could not find this detail in the paper.

Thank you so much!

Anh

Question about ViT

Thanks for the great work.
I want to apply your ViT work (both the CVPR 2021 and the ICCV 2021 work) from the base 224 version to vit_base_patch16_384, because I think it will produce a better relevancy map.
Can I directly modify the config here to the 384 x 384 config and download the pre-trained weights for the 384 version?
Or do I need to make other changes?
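
A minimal sketch of what loading the 384-pixel variant could look like, assuming a timm-style model zoo (the repo's modified ViT would still need its own hooks and its own pretrained-weight path):

    import timm

    # Hypothetical: load the 384-resolution checkpoint directly instead of patching the 224 config.
    model = timm.create_model('vit_base_patch16_384', pretrained=True)

    # A 384x384 input with 16x16 patches gives (384 // 16) ** 2 + 1 = 577 tokens instead of 197,
    # so any hard-coded 14x14 (or 7x7) reshape of the relevance map must be updated accordingly.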

Thank you in advance for your help.

Question about the visualization for ViT

Hi, thank you very much for open-sourcing such wonderful work!
I have a question about the visualization of ViT.

In the notebook for ViT, you use rule 6 to calculate the relevance. But as you mentioned in the paper: "Since Eq. 5 uses gradients to average across heads, the values of R_qq are typically small due to the multiplication with the gradients. We wish to account equally both for the fact that each token influences itself and for the contextualization by the self-attention mechanism. Therefore, we normalize each row in R_qq so that it sums to 1."

So why do you not normalize each row for the visualization of ViT?
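
For reference, the row normalization described in that quote amounts to the following (a sketch, not the notebook's code):

    import torch

    def normalize_rows(R: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
        # Normalize each row of the relevancy matrix so it sums to 1, balancing each token's
        # self-relevance against the contextualization added by the self-attention mechanism.
        return R / (R.sum(dim=-1, keepdim=True) + eps)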

No negative word importance

Hello, I have tried to use the CLIP notebook, but concerning the word importance, no configuration has ever output any negative importance.

I attach a sample code snippet:

img_path = "CLIP/glasses.png"
img = preprocess(Image.open(img_path)).unsqueeze(0).to(device)
texts = ["a bear"]
text = clip.tokenize(texts).to(device)

R_text, R_image = interpret(model=model, image=img, texts=text, device=device)
batch_size = text.shape[0]
for i in range(batch_size):
  show_heatmap_on_text(texts[i], text[i], R_text[i])
  show_image_relevance(R_image[i], img, orig_image=Image.open(img_path))
  plt.show()

The output is the following (see the attached image).

Thanks for the help :)

Use non-hacked models

Is it possible to make it so that attn_probs and the other attributes necessary for explainability are present on the residual attention blocks of a model coming from here or here?

Cheerio

Is this really using the technique from the publication?

The top of this repo's README links the article [ICCV 2021- Oral] PyTorch Implementation of Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers, but I'm looking at the CLIP_explainability.ipynb notebook, and it appears to me as if this does not demonstrate the technique introduced by the paper. Have I missed something? Or should this notebook be updated?

Each of the examples in the notebook does no more than give heatmaps using the output of the helper function interpret, which makes use of simple self-attention, whereas the publication computed self-attention with effects from co-attention (cf. equation 11).

Here's a relevant excerpt from interpret with comments to indicate correspondence between the python code and the publication. Equation 11 is absent.

    R = torch.eye(num_tokens, num_tokens, dtype=image_attn_blocks[0].attn_probs.dtype).to(device) # eq 1: self-attn Relevancy map
    R = R.unsqueeze(0).expand(batch_size, num_tokens, num_tokens)
    for i, blk in enumerate(image_attn_blocks):
        if i < start_layer:
          continue
        grad = torch.autograd.grad(one_hot, [blk.attn_probs], retain_graph=True)[0].detach()
        cam = blk.attn_probs.detach() # A (attention map)
        cam = cam.reshape(-1, cam.shape[-1], cam.shape[-1])
        grad = grad.reshape(-1, grad.shape[-1], grad.shape[-1])
        cam = grad * cam # A-bar, eq 5
        cam = cam.reshape(batch_size, -1, cam.shape[-1], cam.shape[-1])
        cam = cam.clamp(min=0).mean(dim=1)
        R = R + torch.bmm(cam, R) # eq 6. It's not eq 7 b/c 7 starts from an R which is zeros, whereas this starts from an R which is identity.

self.attn_probs in ResidualAttentionBlock() causes problems - how to make explainability work with mlfoundations / open_clip model

Hi,

I am running experiments using the OpenCLIP model from mlfoundations. It turns out that OpenCLIP's architecture is different from the CLIP architecture you defined. I encountered the same error as mentioned here, then tried to add the missing parts, but somehow I still get attn_probs = None at the end, instead of a tensor computed along the way.

Could you please guide me on how to make your code work with OpenCLIP? In particular, what would be enough to modify in the respective OpenCLIP modules (transformer.py is the module equivalent to your model.py)?

P.S. If you are interested in working on this for hourly compensation, I would gladly discuss it. Please let me know.

When trying to use the Colab notebook for RN50, I'm getting AttributeError: 'ModifiedResNet' object has no attribute 'transformer'

I encountered an issue while running the Colab notebook with the CLIP RN50 model. Specifically, I received an AttributeError stating that the 'ModifiedResNet' object does not have an attribute named 'transformer'. Upon examining the source code in model.py, it is evident that the ModifiedResNet class does not possess a 'transformer' attribute like the VisualTransformer class does. Although I attempted to make changes, I was unable to resolve the problem.

I would like to verify whether it is feasible to achieve the desired functionality, and if so, I kindly request your assistance in providing suggestions on how to proceed.

Problems with running it in Google Colab

The first cell went through, however with the following error message:
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
xarray-einstats 0.2.2 requires numpy>=1.21, but you have numpy 1.19.2 which is incompatible.
torchtext 0.13.0 requires torch==1.12.0, but you have torch 1.7.0 which is incompatible.
torchaudio 0.12.0+cu113 requires torch==1.12.0, but you have torch 1.7.0 which is incompatible.
tensorflow 2.8.2+zzzcolab20220527125636 requires numpy>=1.20, but you have numpy 1.19.2 which is incompatible.
pymc3 3.11.5 requires scipy<1.8.0,>=1.7.3, but you have scipy 1.5.2 which is incompatible.
fastai 2.7.6 requires torchvision>=0.8.2, but you have torchvision 0.8.1 which is incompatible.
datascience 0.10.6 requires folium==0.2.1, but you have folium 0.8.3 which is incompatible.
albumentations 0.1.12 requires imgaug<0.2.7,>=0.2.5, but you have imgaug 0.2.9 which is incompatible.

Then it stopped in the second cell. First, with from lxmert.lxmert.src.modeling_frcnn import GeneralizedRCNN, I got AttributeError: module 'PIL.Image' has no attribute 'Resampling'. I pip installed Pillow==9.0.0 following https://stackoverflow.com/questions/71738218/module-pil-has-not-attribute-resampling. Afterwards, I obtained AttributeError: module 'matplotlib.cbook' has no attribute '_deprecate_privatize_attribute' with from captum.attr import visualization. I cannot find a solution to get past this error.

I would be most grateful if you could help.

In 6.2 LXMERT, I get the error "requests.exceptions.MissingSchema: Invalid URL 'val2014COCO_val2014_000000092107.jpg': No schema supplied."

Running the script:
!CUDA_VISIBLE_DEVICES=0 PYTHONPATH=`pwd` python lxmert/lxmert/perturbation.py --COCO_path val2014 --method transformer_att --is-text-pert false --is-positive-pert true

gives the error:
`loading configuration file cache
loading weights file https://cdn.huggingface.co/unc-nlp/frcnn-vg-finetuned/pytorch_model.bin from cache at /root/.cache/torch/transformers/57f6df6abe353be2773f2700159c65615babf39ab5b48114d2b49267672ae10f.77b59256a4cf8343ae0f923246a81489fc8d82f98d082edc2d2037c977c0d9d0
All model checkpoint weights were used when initializing GeneralizedRCNN.

All the weights of GeneralizedRCNN were initialized from the model checkpoint at unc-nlp/frcnn-vg-finetuned.
If your task is similar to the task the model of the checkpoint was trained on, you can already use GeneralizedRCNN for predictions without further training.
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/container.py:435: UserWarning: Setting attributes on ParameterList is not supported.
warnings.warn("Setting attributes on ParameterList is not supported.")
Load 214354 data from split(s) valid.
Load 214354 data from split(s) valid.
0% 0/10000 [00:00<?, ?it/s]runnig positive pert test for text modality with method transformer_att
0% 0/10000 [00:00<?, ?it/s]
Traceback (most recent call last):
File "lxmert/lxmert/perturbation.py", line 254, in
main(args)
File "lxmert/lxmert/perturbation.py", line 218, in main
R_t_t, R_t_i = baselines.generate_transformer_attr(item)
File "/content/drive/MyDrive/可解释性代码/Transformer-MM-Explainability-main/Transformer-MM-Explainability-main/lxmert/lxmert/src/ExplanationGenerator.py", line 375, in generate_transformer_attr
output = self.model_usage.forward(input).question_answering_score
File "lxmert/lxmert/perturbation.py", line 50, in forward
images, sizes, scales_yx = self.image_preprocess(image_file_path)
File "/content/drive/MyDrive/可解释性代码/Transformer-MM-Explainability-main/Transformer-MM-Explainability-main/lxmert/lxmert/src/processing_image.py", line 110, in call
torch.as_tensor(img_tensorize(images.pop(i), input_format=self.input_format))
File "/content/drive/MyDrive/可解释性代码/Transformer-MM-Explainability-main/Transformer-MM-Explainability-main/lxmert/lxmert/src/vqa_utils.py", line 550, in img_tensorize
img = get_image_from_url(im)
File "/content/drive/MyDrive/可解释性代码/Transformer-MM-Explainability-main/Transformer-MM-Explainability-main/lxmert/lxmert/src/vqa_utils.py", line 518, in get_image_from_url
response = requests.get(url)
File "/usr/local/lib/python3.7/dist-packages/requests/api.py", line 76, in get
return request('get', url, params=params, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/requests/api.py", line 61, in request
return session.request(method=method, url=url, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/requests/sessions.py", line 516, in request
prep = self.prepare_request(req)
File "/usr/local/lib/python3.7/dist-packages/requests/sessions.py", line 459, in prepare_request
hooks=merge_hooks(request.hooks, self.hooks),
File "/usr/local/lib/python3.7/dist-packages/requests/models.py", line 314, in prepare
self.prepare_url(url, params)
File "/usr/local/lib/python3.7/dist-packages/requests/models.py", line 388, in prepare_url
raise MissingSchema(error)
requests.exceptions.MissingSchema: Invalid URL 'val2014COCO_val2014_000000092107.jpg': No schema supplied. Perhaps you meant http://val2014coco_val2014_000000092107.jpg/?`
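
The path in the error suggests the COCO directory and the file name are concatenated without a separator, so requests treats the result as a URL. A hedged sketch of the kind of fix (the variable names are hypothetical; the actual code lives in the repo's image preprocessing path):

    import os

    # Hypothetical sketch: join the COCO directory and the image file name explicitly
    # (or pass --COCO_path with a trailing slash) so that 'val2014' + 'COCO_val2014_....jpg'
    # does not collapse into a schema-less "URL" that requests then tries to fetch.
    coco_path = "val2014"                                  # value passed via --COCO_path
    file_name = "COCO_val2014_000000092107.jpg"            # example file from the error
    image_file_path = os.path.join(coco_path, file_name)   # 'val2014/COCO_val2014_000000092107.jpg'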

Generate relevance matrix in ViT of Hugging Face

Hi, thank you for this great work!

I have trained a Transformer model with the Hugging Face ViT. When I tried to visualise the attention maps, I found your work. I am quite interested, but I find that your code and Hugging Face's are different. I tried to modify the source code like this:

class ViTLayer(nn.Module):

    def save_attn_gradients(self, attn_gradients):
        self.attn_gradients = attn_gradients

    def forward(self, hidden_states, head_mask=None, output_attentions=False):
        self_attention_outputs = self.attention(
            self.layernorm_before(hidden_states),  # in ViT, layernorm is applied before self-attention
            head_mask,
            output_attentions=output_attentions,
        )

        self_attention_outputs.register_hook(self.save_attn_gradients)

I am new to Transformer. I am not sure whether I register the hook in the right tensor. Can you help me check it?
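
For comparison, one way to capture the attention map and its gradient in the Hugging Face ViT is to hook the tensor inside ViTSelfAttention rather than the tuple returned by self.attention (a sketch; the class path and forward signature are assumed from a recent transformers version and may differ):

    from transformers.models.vit.modeling_vit import ViTSelfAttention

    class ViTSelfAttentionWithHooks(ViTSelfAttention):
        # Stores the softmax attention map and its gradient for the relevance computation.
        def forward(self, hidden_states, head_mask=None, output_attentions=False):
            # Ask the parent for the attention probabilities explicitly.
            context, attention_probs = super().forward(hidden_states, head_mask, output_attentions=True)
            self.attention_map = attention_probs              # A
            if attention_probs.requires_grad:
                attention_probs.register_hook(
                    lambda grad: setattr(self, "attn_gradients", grad)  # dL/dA, filled on backward()
                )
            return (context, attention_probs) if output_attentions else (context,)

The subclass would then replace something like layer.attention.attention in each encoder layer (attribute names vary across transformers versions) before running the relevance rule.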

Thank you very much!

The data set downloaded automatically is too large

Hi, I want to reproduce your results in order to follow up on your work. The problem is that when I execute the following script:

CUDA_VISIBLE_DEVICES=0 PYTHONPATH=`pwd` python VisualBERT/run.py --method=<method_name> --is-text-pert=<true/false> --is-positive-pert=<true/false> --num-samples=10000 config=projects/visual_bert/configs/vqa2/defaults.yaml model=visual_bert dataset=vqa2 run_type=val checkpoint.resume_zoo=visual_bert.finetuned.vqa2.from_coco_train env.data_dir=/path/to/data_dir training.num_workers=0 training.batch_size=1 training.trainer=mmf_pert training.seed=1234

Colab downloads the dataset, which reaches 104 GB in size.

I'm asking whether I did the right thing. If so, is there an alternative to downloading such a large dataset? If it is wrong, how do I execute the script correctly? Thanks for reading.

Request for vanilla example notebook

Wonderful paper. To check whether I'm getting something very wrong, this is my understanding of the differences between the two papers.

Transformer-Explainability
You generate the building blocks with relprop as a companion function to propagate the relevancies backward.

This is LRP, working backwards from the CAMs (Class Activation Mapping), so you propagate from outputs to inputs. To back-propagate you have to hard-code all the flow, i.e. all the concats and splits of data, e.g. when you have to diverge to cam1, cam2, then use /=2 for the matrix multiplication, then rejoin them in the clone during self-attention. This is awkward, and is what you're referring to when you say 'LRP requires a custom implementation of all network layers.' in the MM paper.

Transformer-MM-Explainability
You work forwards and simply add the methods

get_attn
save_attn
save_attn_gradients
get_attn_gradients

and the hooks in the forward pass to save them. This makes tracking things far easier as you don't need to reverse engineer the flow.

Request
You have great alterations of the DETR, CLIP, LXMERT, and VisualBERT repos, allowing all the interaction coupling scores for the baselines and your method to be calculated and to plug smoothly into the whole repo.

Could you provide an example using the forward-pass formulation of Transformer-MM-Explainability on a vanilla Transformer model (just your interaction scores, not all the other baseline methods) to act as a very simple, single-modality demonstrative example, ideally a Jupyter notebook with comments?

Using the methods for a custom architecture

Hello! Thank you for the excellent work and examples, also really appreciate the time you've taken to respond to queries.

In this spirit, I attempted to apply the mechanism to understand the multi-modal architecture of https://github.com/facebookresearch/av_hubert, which maps frames of a video -> text (with an encoder that does video frames -> tokens and a decoder that takes these as input and outputs words).

To summarize what I attempted: I took the CLIP example as a reference, saved the attention weights from the output projection, and then used the gradients to compute the Grad-CAM-style maps and work further.

But it did not work out as expected in the context of this problem. I have attached an example visualization of the output that I got.

So, to TL;DR my questions:
(1) If I have a custom architecture which is not a straightforward classification task, what modifications do I need to make to incorporate the rules presented in the paper?
(2) Are the 'attn_probs' the softmax attention outputs of each layer in the CLIP example? (I assumed so, but was not sure based on the code.)

Example of the output I am getting: (attached visualization)

Thank you in advance!

Batching for CLIP Explainability

Is there a way to batch the heatmap creation process in CLIP_explainability? Specifically, I would like to pass in a batch_size-length list of (image, text) pairs and get a list of batch_size heatmaps, one per (image, text) pair. (I think this was asked in your other repo; you referenced the batching in the new ViT notebook, but it looks quite similar to the CLIP ipynb in this repo, and I don’t see how either supports creating multiple heatmaps for multiple images at once. Perhaps I am missing something.)

Batching the forward pass is straightforward, but it seems difficult to batch the backward pass in interpret, when we are starting with the relevant image logit to call backward on. Currently, I call one_hot.backward(.) on every image in the batch, which is quite time consuming.
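
One hedged sketch of batching that backward pass: when each image's selected score depends only on its own tokens, differentiating the sum of the per-sample scores gives the same per-sample gradients as separate backward calls. The variable names below are hypothetical, loosely following interpret:

    import torch

    # Select one score per image (e.g. the image-text logit that the one-hot target would pick).
    selected = logits_per_image[torch.arange(batch_size), target_text_idx]

    # One backward pass for the whole batch: because the samples do not interact in the graph,
    # the gradient of the sum w.r.t. each block's attention map equals the per-sample gradients.
    grads = torch.autograd.grad(
        selected.sum(),
        [blk.attn_probs for blk in image_attn_blocks],
        retain_graph=True,
    )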

Thanks for making your code public, and I appreciate your help in advance.

SSL error

self.frcnn_cfg = Config.from_pretrained("unc-nlp/frcnn-vg-finetuned")
An SSL error occurred while the model was loading.
I would like to ask how to solve this; looking forward to your answer, thanks!

Details about the changes in the code of base models

I am trying to study the code in this repository. However, it is difficult to figure out the changes that have been made in the subfolders of the base models (VisualBERT, LXMERT, DETR, etc.) for this project.

Since the original repositories of the base models may have changed after the code was copied into this repository (i.e. their histories may not align), it becomes difficult to compare the Git diff.

It would be helpful to attach the git commit tag/id of each base model repository corresponding to the commit that was cloned. Using the commit tag, it would be convenient to align the original code with the code in this repository and compare the changes to the models.

Additionally, documenting those changes would be helpful for future research, though probably time-consuming.

Question about the CLIP Demo

Hello, I have two questions about the CLIP demo; may I take the liberty of asking you?
(1) variable "image_relevance" in function "interpret"
The code "image_relevance = R[:, 0, 1:]" is applied after updating the attention relevance map; I am confused about why the first row of the relevance scores is chosen. Does this correspond to the statement in the paper, "to extract relevancies per text token, one should consider the first row of R_tt, and to extract the image token relevancies, consider the first row in R_ti which describes the connections between the [CLS] token and each image token", in Section 3.1? Or can you give me some other insight?

(2) variable "R_text" in function "show_heatmap_on_text"
I am also confused about why "CLS_idx = text_encoding.argmax(dim=-1)"; the [CLS] token is usually the first token. Can you give me some insight about this?
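
For context, CLIP's text encoder pools its output from the end-of-text token, which has the highest token id in the tokenized sequence, which is why the argmax is used. A sketch of what CLIP's encode_text does at that step (paraphrased, not the repo's exact code):

    import torch

    def pool_text_features(x: torch.Tensor, text: torch.Tensor, text_projection: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, width) transformer output; text: (batch, seq_len) token ids.
        # The end-of-text token has the largest token id, so argmax over the ids finds its
        # position; its hidden state is projected to give the final text embedding.
        eot_idx = text.argmax(dim=-1)
        return x[torch.arange(x.shape[0]), eot_idx] @ text_projection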

Thanks!

How to apply this work to the google/vit model from Hugging Face?

Thanks for the great work,

When I tried to follow the ViT notebook I got a lot of attribute errors, and I think the model architecture and layer names may not be the same as google/vit-base-patch16-224-in21k. Can you please help me with the steps and modifications needed to apply the explainability method to the mentioned model?

Questions about the Relevance Matrix

First of all, thanks for the great work! It's always nice to see approaches that are not too complicated to implement but still yield impressive results!

I have a question regarding the final Relevancy matrix. In the code for the different models I see that before the relevancy matrix is sent back, we pick out the scores for the first token and then remove the "self" relevancy score for this token R[0, 1:]. This is done since in this case this token represents the CLS token, right? And the "self" relevancy is removed since we only want to keep the entries for the tokens from the original data, is that correct?
I am using your approach for a GPT model and it has been working well. I initially tested it on a toy problem where the model had to predict whether a certain word was present in the input or not (classification was done on a SEP token at the end of the sequence) and the relevancy score properly highlighted the words if they were present. I am now trying to generate embeddings for another task by extracting the final SEP token and using it as embedding. I have tried a similar approach to the positive and negative perturbation from your paper and noticed that I can remove up to 90% of the tokens from the input and still get very similar final embeddings. My issue is now how to interpret these tokens that get high relevance scores. For some reason the token at position 0 almost always get the highest score and in this case this token is not the CLS token, instead it can be any token from the vocabulary. So this made me question if there perhaps were any other reasons why R[0, 1:] is performed in your code?

Questions about CLIP visualization.

I do not understand why the CLIP visualization only accumulates relevance over the last two layers, due to the num_layers variable. Could you share some insights with us?

The "num_layers" variable makes the CLIP heat map clearer, but I think the CLIP and ViT visualizations should be similar, or at least comparable, because they share the same architecture.

Question about the CLIP Demo

Thank you for creating this great repository! I had a few questions specifically about the CLIP demo,

  • Is the attention map visualization based on the method described in your paper, Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers? I was a little confused about why the maps look different from the ones for LXMERT / DETR, and why there isn't a visualization for text token attention.
  • How would you suggest visualizing the attention for the CLIP ResNet backbone?

Thanks!
