awslabs / gap-text2sql

GAP-text2SQL: Learning Contextual Representations for Semantic Parsing with Generation-Augmented Pre-Training

Home Page: https://arxiv.org/abs/2012.10309

License: Apache License 2.0

Jsonnet 0.81% Python 98.42% Shell 0.08% Jupyter Notebook 0.69%
text2sql semantic-parsing deep-learning language-model pretrained-models machine-learning nlu nlp pytorch text-generation

gap-text2sql's Introduction

GAP-text2SQL: Learning Contextual Representations for Semantic Parsing with Generation-Augmented Pre-Training

Code and model from our AAAI 2021 paper

Updates

[2021/02/05] Added support for running the model on your own databases and queries. Check out the notebook.

Abstract

Most recently, there has been significant interest in learning contextual representations for various NLP tasks by leveraging large-scale text corpora to train large neural language models with self-supervised learning objectives, such as Masked Language Modeling (MLM). However, based on a pilot study, we observe three issues with existing general-purpose language models when they are applied to text-to-SQL semantic parsers: they fail to detect column mentions in the utterances, fail to infer column mentions from cell values, and fail to compose complex SQL queries. To mitigate these issues, we present a model pre-training framework, Generation-Augmented Pre-training (GAP), that jointly learns representations of natural language utterances and table schemas by leveraging generation models to generate pre-training data. GAP MODEL is trained on 2M utterance-schema pairs and 30K utterance-schema-SQL triples, whose utterances are produced by generative models. Based on experimental results, neural semantic parsers that leverage GAP MODEL as a representation encoder obtain new state-of-the-art results on both SPIDER and CRITERIA-TO-SQL benchmarks.

Setup

conda create --name gap-text2sql python=3.7
source activate gap-text2sql
conda install pytorch=1.5 cudatoolkit=10.2 -c pytorch
pip install -r requirements.txt
python -c "import nltk; nltk.download('stopwords'); nltk.download('punkt')"

Download the dataset

pip install gdown
cd rat-sql-gap
gdown --id 1_AckYkinAnhqmRQtGsQgUKAnTHxxX5J0
unzip spider.zip
bash data/spider/generate.sh ./spider

Build dataset directory

mkdir data/spider-bart
cp ./spider/tables.json data/spider-bart/
cp ./spider/train_spider.json data/spider-bart/
cp ./spider/train_others.json data/spider-bart/
cp ./spider/dev.json data/spider-bart/
ln -s $(pwd)/spider/database data/spider-bart/database
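Before preprocessing, it can help to verify the layout built above; a small sketch (paths taken from the commands, nothing else assumed):

import os

# Files copied / linked into data/spider-bart by the commands above.
for name in ["tables.json", "train_spider.json", "train_others.json", "dev.json", "database"]:
    path = os.path.join("data/spider-bart", name)
    print(path, "OK" if os.path.exists(path) else "MISSING")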

Download the library

mkdir third_party
wget http://nlp.stanford.edu/software/stanford-corenlp-full-2018-10-05.zip
unzip stanford-corenlp-full-2018-10-05.zip -d third_party/

Start the Stanford library

pushd third_party/stanford-corenlp-full-2018-10-05
nohup java -mx4g -cp "*" edu.stanford.nlp.pipeline.StanfordCoreNLPServer -port 8999 -timeout 15000 > server.log &
popd
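To confirm the CoreNLP server is listening on port 8999 (the port passed above) before running preprocessing, a minimal check:

import urllib.request

# The server's root URL returns a small HTML page once it is up.
with urllib.request.urlopen("http://localhost:8999", timeout=15) as resp:
    print("CoreNLP server responded with status", resp.status)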

Download the checkpoint

mkdir -p logdir/bart_run_1/bs\=12\,lr\=1.0e-04\,bert_lr\=1.0e-05\,end_lr\=0e0\,att\=1/
mkdir ie_dirs
aws s3 cp s3://gap-text2sql-public/checkpoint-artifacts/gap-finetuned-checkpoint logdir/bart_run_1/bs\=12\,lr\=1.0e-04\,bert_lr\=1.0e-05\,end_lr\=0e0\,att\=1/model_checkpoint-00041000

mkdir -p pretrained_checkpoint
aws s3 cp s3://gap-text2sql-public/checkpoint-artifacts/pretrained-checkpoint pretrained_checkpoint/pytorch_model.bin

Alternatively, you can download them here if you don't have awscli: gap-finetuned-checkpoint and pretrained-checkpoint

curl https://gap-text2sql-public.s3.amazonaws.com/checkpoint-artifacts/gap-finetuned-checkpoint -o logdir/bart_run_1/bs\=12\,lr\=1.0e-04\,bert_lr\=1.0e-05\,end_lr\=0e0\,att\=1/model_checkpoint-00041000
curl https://gap-text2sql-public.s3.amazonaws.com/checkpoint-artifacts/pretrained-checkpoint -o pretrained_checkpoint/pytorch_model.bin
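A hedged way to verify the downloads: the traceback excerpts later on this page show the checkpoint being restored via checkpoint["model"], so loading it with torch and listing its top-level keys is a reasonable smoke test:

import os
import torch

ckpt_path = ("logdir/bart_run_1/bs=12,lr=1.0e-04,bert_lr=1.0e-05,end_lr=0e0,att=1/"
             "model_checkpoint-00041000")
print("size (MB):", os.path.getsize(ckpt_path) // 2**20)
ckpt = torch.load(ckpt_path, map_location="cpu")
print("top-level keys:", list(ckpt.keys()))  # expect a "model" entry, per saver.py usage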

Preprocess dataset

python run.py preprocess experiments/spider-configs/gap-run.jsonnet

Inference

python run.py eval experiments/spider-configs/gap-run.jsonnet

You then get the inference results and evaluation results at the following paths: ie_dirs/bart_run_1_true_1-step41000.infer and ie_dirs/bart_run_1_true_1-step41000.eval.
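To inspect the predictions programmatically, a sketch that assumes the .infer file holds one JSON object per line (check the actual field names on your output; they are not documented here):

import json

with open("ie_dirs/bart_run_1_true_1-step41000.infer") as f:
    for i, line in enumerate(f):
        record = json.loads(line)
        print(sorted(record.keys()))  # discover the record schema first
        if i == 2:
            break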

Training

python run.py train experiments/spider-configs/gap-run.jsonnet

Security

See CONTRIBUTING for more information.

License

This project is licensed under the Apache-2.0 License.

gap-text2sql's People

Contributors

amazon-auto, impavidity, pnpnpn


gap-text2sql's Issues

Input sizes bigger than 512 tokens

I tried to run the eval script on the baseball_1 and cre_Drama_Workshop_Groups databases, but I got an error related to the input size.
How do you overcome this error?
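A way to diagnose which databases blow past the limit (a sketch, not an official workaround): tokenize the serialized question-plus-schema with the facebook/bart-large tokenizer named in the eval config further down this page and count tokens. The exact serialization is the encoder's business, so treat this as an approximation:

from transformers import BartTokenizer

tok = BartTokenizer.from_pretrained("facebook/bart-large")
text = "your question plus the serialized schema here"  # approximation of the encoder input
n = len(tok.tokenize(text))
print(n, "tokens;", "over the 512 limit" if n > 512 else "within the limit")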

Errors in inference on my table

Hello,
I want to know why this error happened. Is it because of the table data format, or because of missing data in the table?

Traceback (most recent call last):
File "my_test.py", line 91, in
code, ret_v = infer("how much dollar signed in 2021?")
File "my_test.py", line 86, in infer
output = inferer._infer_one(model, data_item, preproc_data, beam_size=1, use_heuristic=True)
File "/Users/Documents/gap-text2sql-main/rat-local-version/seq2struct/commands/infer.py", line 97, in _infer_one
model, data_item, preproc_item, beam_size=beam_size, max_steps=1000, from_cond=False)
File "/Users/Documents/gap-text2sql-main/rat-local-version/seq2struct/models/spider/spider_beam_search.py", line 21, in beam_search_with_heuristics
inference_state, next_choices = model.begin_inference(orig_item, preproc_item)
File "/Users/Documents/gap-text2sql-main/rat-local-version/seq2struct/models/enc_dec.py", line 133, in begin_inference
enc_state, = self.encoder([enc_input])
File "/Users/anaconda3/envs/chat/lib/python3.6/site-packages/torch/nn/modules/module.py", line 550, in call
result = self.forward(*input, **kwargs)
File "/Users/Documents/gap-text2sql-main/rat-local-version/seq2struct/models/spider/spider_enc.py", line 1416, in forward
padded_token_lists, att_mask_lists, tok_type_lists = self.pad_sequence_for_bert_batch(batch_token_lists)
File "/Users/Documents/gap-text2sql-main/rat-local-version/seq2struct/models/spider/spider_enc.py", line 1556, in pad_sequence_for_bert_batch
max_len = max([len(it) for it in tokens_lists])
ValueError: max() arg is an empty sequence
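For what it's worth, the failing line receives an empty batch of token lists, which usually means the question/schema produced no tokens upstream. A defensive sketch of that spot in seq2struct/models/spider/spider_enc.py, to surface the real cause instead of the opaque max() error (a workaround, not an official fix):

# In pad_sequence_for_bert_batch:
if not tokens_lists:
    raise ValueError(
        "Empty token batch: the question/schema tokenized to nothing; "
        "check the table format and the preprocessed data.")
max_len = max(len(it) for it in tokens_lists)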

Problems in downloading the dataset

Hello,

I am going through your setup instructions in the README, and for downloading the dataset there's this command:
bash data/spider-20190205/generate.sh ./spider, which is meant to be run from the rat-sql-gap dir, I believe; however, there is no directory containing this generation script, and I cannot find the script anywhere.

Please let me know if I am missing something obvious.

Thanks!

The code is missing the relogic.logickit module! Please help!

I'm trying to pre-train GAP on my own datasets from scratch, but I found that there is no logickit package under relogic (more specifically, the 'AverageSpanExtractor' class is essential), which is needed by the TaBARTModel in 'relogic/pretrainkit/models/semparse/tabart.py' and elsewhere.
Has the author not uploaded it yet?
Hope someone can help.
Thanks

Where is the pre-training data stored? I want to know the format of the input data.

In the relogic folder, regarding tabart-pretraining.py: compared with the rat-sql part, I didn't find a specific config file like xx.jsonnet.
Are the paths of all input data and configuration files specified by the user? How can I learn more about the input data? I have a project in which I want to use the GAP method to train on a Chinese dataset, so it is important for me to know the format of the original pre-training input data.
I would be grateful if anyone could tell me. Thanks

Can we train our own questions and SQL queries?

Hello,
I am working on this module but I am not able to train the model with the own questions and queries except spider.

I want to train the model with my own questions and queries to generate the queries for the same type of questions. If possible, please explain the process.
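For reference, a hedged sketch of the Spider-style record that train_spider.json entries follow (the db_id must match a schema in tables.json; Spider's own preprocessing scripts additionally produce token lists and a parsed "sql" field that this repo's preprocessing consumes):

# Hypothetical example record; field values are illustrative only.
example = {
    "db_id": "my_database",                       # must exist in tables.json
    "question": "How many employees are there?",
    "query": "SELECT count(*) FROM employee",
}

For inference only, the notebook mentioned under Updates shows how to run the released model on your own databases and questions without retraining.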

Problem preprocessing the data

When I run python run.py preprocess experiments/spider-configs/gap-run.jsonnet, I get this error:

Traceback (most recent call last):
File "run.py", line 104, in
main()
File "run.py", line 62, in main
preprocess.main(preprocess_config)
File "/home/lishuan/hxh/text2sql/gap-text2sql-main/rat-sql-gap/seq2struct/commands/preprocess.py", line 44, in main
preprocessor.preprocess()
File "/home/lishuan/hxh/text2sql/gap-text2sql-main/rat-sql-gap/seq2struct/commands/preprocess.py", line 23, in preprocess
data = registry.construct('dataset', self.config['data'][section])
File "/home/lishuan/hxh/text2sql/gap-text2sql-main/rat-sql-gap/seq2struct/utils/registry.py", line 36, in construct
**kwargs)
File "/home/lishuan/hxh/text2sql/gap-text2sql-main/rat-sql-gap/seq2struct/utils/registry.py", line 44, in instantiate
raise ValueError('Unsupported kind for param {}: {}'.format(name, param.kind))
ValueError: Unsupported kind for param args: 2

Why are the EXEC scores always zero?

For every model I train, including the pretrained model, the EXEC scores are zero. Does this mean that none of the generated queries matches the gold SQL query when run to produce an output table?

Why does overfitting on train yield better scores?

I've tried to retrain your model and managed to get the same scores as you report.
What I don't understand is:

  • there is no early stopping
  • the model used is the last saved checkpoint, at which point the model heavily overfits the training set (loss very close to 0)

However, I tried to score another model checkpoint with a lower validation loss, and it yielded worse EXACT matches.
So why does overfitting the training set yield better scores?

How to execute own queries?

Hello, I would like to use my own questions and databases, but when I try to change the Spider JSON files I get this error:

RuntimeError: Error(s) in loading state_dict for EncDecModel:
	size mismatch for decoder.rule_logits.2.weight: copying a param with shape torch.Size([97, 128]) from checkpoint, the shape in current model is torch.Size([76, 128]).
	size mismatch for decoder.rule_logits.2.bias: copying a param with shape torch.Size([97]) from checkpoint, the shape in current model is torch.Size([76]).
	size mismatch for decoder.rule_embedding.weight: copying a param with shape torch.Size([97, 128]) from checkpoint, the shape in current model is torch.Size([76, 128]).
	size mismatch for decoder.node_type_embedding.weight: copying a param with shape torch.Size([55, 64]) from checkpoint, the shape in current model is torch.Size([49, 64]).

Is there an elegant solution for testing my own data?
Thanks in advance!
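Context that may help: the decoder's grammar-rule vocabulary appears to be induced from the preprocessed dataset, so changing the Spider JSON files changes the rule count, which then no longer matches the released checkpoint (97 rules). A small diagnostic sketch, using the checkpoint["model"] layout visible in the tracebacks on this page:

import torch

ckpt = torch.load("logdir/bart_run_1/bs=12,lr=1.0e-04,bert_lr=1.0e-05,end_lr=0e0,att=1/"
                  "model_checkpoint-00041000", map_location="cpu")
state = ckpt["model"]
for name in ("decoder.rule_logits.2.weight",
             "decoder.rule_embedding.weight",
             "decoder.node_type_embedding.weight"):
    # Compare these against your freshly constructed model to see the mismatch.
    print(name, tuple(state[name].shape))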

I can't find AverageSpanExtractor module :(

Hi @Impavidity @pnpnpn,
I'm referring to your code for my text-to-SQL study.
To modify the GAP encoder part, I tuned relogic/tabart-pretraining.py.
In order to execute the code, the AverageSpanExtractor module is needed,
but I can't find this module at relogic.logickit.modules.span_extractors.average_span_extractor.
Is the module missing from GitHub?

Best regards!

ON clauses are missing in the inference output

Hi,

I am facing an issue identifying the ON clause in the output:

Question: List the pilots who are from London
Result: SELECT pilot.Name FROM pilot JOIN match WHERE match.Location = 'terminal'

Is there any way to recover the ON clauses?

Thanks in advance

Generators used for pre-training

The paper mentions the use of a SQL-to-Text and a Table-to-Text model to generate synthetic samples for pre-training. I would like to use these models to generate synthetic training examples for my own custom datasets. It doesn't seem like the weights for these models were made public; is there any way I can train these models myself? I saw some code under relogic and pretrainkit which seems relevant, but I couldn't figure out what data it uses or how to run it. Thanks!

The number of grammar rules (94) is inconsistent with the pre-trained model (97)

I find that the number of grammar rules is 94 when I follow the instructions to preprocess the data (with hyper-parameter fs=2), but the size of rule_embedding in the pre-trained checkpoint is 97.

Traceback (most recent call last):
File "run.py", line 104, in
main()
File "run.py", line 83, in main
infer.main(infer_config)
File "/home/yuhao/zhenwen/repair_model/gap-text2sql-main/rat-sql-gap/seq2struct/commands/infer.py", line 239, in main
model = inferer.load_model(args.logdir, args.step)
File "/home/yuhao/zhenwen/repair_model/gap-text2sql-main/rat-sql-gap/seq2struct/commands/infer.py", line 48, in load_model
last_step = saver.restore(logdir, step=step, map_location=self.device, item_keys=["model"])
File "/home/yuhao/zhenwen/repair_model/gap-text2sql-main/rat-sql-gap/seq2struct/utils/saver.py", line 122, in restore
items2restore, model_dir, map_location, step)
File "/home/yuhao/zhenwen/repair_model/gap-text2sql-main/rat-sql-gap/seq2struct/utils/saver.py", line 40, in load_checkpoint
item_dict[item_name].load_state_dict(checkpoint[item_name])
File "/home/yuhao/.conda/envs/zhenwen/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1407, in load_state_dict
self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for EncDecModel:
Unexpected key(s) in state_dict: "decoder.state_update._input_dropout_mask", "decoder.state_update._h_dropout_mask".
size mismatch for decoder.rule_logits.2.weight: copying a param with shape torch.Size([97, 128]) from checkpoint, the shape in current model is torch.Size([94, 128]).
size mismatch for decoder.rule_logits.2.bias: copying a param with shape torch.Size([97]) from checkpoint, the shape in current model is torch.Size([94]).
size mismatch for decoder.rule_embedding.weight: copying a param with shape torch.Size([97, 128]) from checkpoint, the shape in current model is torch.Size([94, 128])

Can anyone help figure out what "terminal" means here?

While executing the natural language query "Birth year of Gina Rinehart", we got the following SQL query output.
The output SQL query is okay, but wherever the query refers to a literal value (in this case, 'Gina Rinehart'), we get "terminal" instead.

Natural language query (from Spider's singer database): Birth year of Gina Rinehart.
Corresponding output SQL query: SELECT singer.Birth_Year FROM singer WHERE singer.Name = 'terminal'

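A plausible explanation, consistent with the eval config dump later on this page ('include_literals': False): the parser is trained to abstract away literal values, so every literal slot in the predicted SQL is emitted as the placeholder token 'terminal'. If you post-process the output yourself, a naive sketch that substitutes your own candidate values:

def fill_terminals(sql, values):
    # Replace each 'terminal' placeholder with the next candidate value, escaping quotes.
    for v in values:
        sql = sql.replace("'terminal'", "'" + v.replace("'", "''") + "'", 1)
    return sql

print(fill_terminals(
    "SELECT singer.Birth_Year FROM singer WHERE singer.Name = 'terminal'",
    ["Gina Rinehart"]))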

Error while running Inference

Hi,

Thanks for open-sourcing the project.
I was trying this on a non-GPU Windows 10 machine (conda environment, Python 3.7.9, PyTorch 1.5).
I was able to run the dataset preprocessing, but got the below error while running inference:

(envs)
Lenovo-PC MINGW64 /d/NLP/NL_to_SQL/gap-text2sql/rat-sql-gap (main)
$ python run.py eval experiments/spider-configs/gap-run.jsonnet
WARNING <class 'seq2struct.models.enc_dec.EncDecModel.Preproc'>: superfluous {'name': 'EncDec'}
WARNING <class 'seq2struct.models.enc_dec.EncDecModel'>: superfluous {'decoder_preproc': {'grammar': {'clause_order': None, 'end_with_from': True, 'factorize_sketch': 2, 'include_literals': False, 'infer_from_conditions': True, 'name': 'spider', 'output_from': True, 'use_table_pointer': True}, 'save_path': 'data/spider-bart/nl2code-1115,output_from=true,fs=2,emb=bart,cvlink', 'use_seq_elem_rules': True}, 'encoder_preproc': {'bart_version': 'facebook/bart-large', 'compute_cv_link': True, 'compute_sc_link': True, 'db_path': 'data/spider-bart/database', 'fix_issue_16_primary_keys': True, 'include_table_name_in_column': False, 'save_path': 'data/spider-bart/nl2code-1115,output_from=true,fs=2,emb=bart,cvlink'}}
Parameter containing:
tensor([[-0.0370, 0.1117, 0.1829, ..., 0.2054, 0.0578, -0.0750],
[ 0.0055, -0.0049, -0.0069, ..., -0.0030, 0.0038, 0.0087],
[-0.0448, 0.4604, -0.0604, ..., 0.1073, 0.0310, 0.0477],
...,
[-0.0138, 0.0278, -0.0467, ..., 0.0455, -0.0265, 0.0125],
[-0.0043, 0.0153, -0.0567, ..., 0.0496, 0.0108, -0.0099],
[ 0.0053, 0.0324, -0.0179, ..., -0.0085, 0.0223, -0.0020]],
requires_grad=True)
Updated the model with ./pretrained_checkpoint\pytorch_model.bin
Parameter containing:
tensor([[-0.0383, 0.1205, 0.1776, ..., 0.1973, 0.0594, -0.0699],
[ 0.0046, -0.0023, -0.0084, ..., -0.0036, 0.0047, 0.0084],
[-0.0460, 0.4671, -0.0650, ..., 0.1027, 0.0256, 0.0475],
...,
[ 0.0086, 0.0037, 0.0363, ..., -0.0296, -0.0097, -0.0068],
[-0.0160, 0.0123, 0.0015, ..., 0.0040, 0.0185, 0.0038],
[-0.0049, -0.0121, -0.0235, ..., 0.0200, 0.0148, -0.0020]],
requires_grad=True)
Loading model from logdir/bart_run_1\bs=12,lr=1.0e-04,bert_lr=1.0e-05,end_lr=0e0,att=1\model_checkpoint-00041000
Traceback (most recent call last):
File "run.py", line 104, in
main()
File "run.py", line 83, in main
infer.main(infer_config)
File "D:\NLP\NL_to_SQL\gap-text2sql\rat-sql-gap\seq2struct\commands\infer.py", line 215, in main
model = inferer.load_model(args.logdir, args.step)
File "D:\NLP\NL_to_SQL\gap-text2sql\rat-sql-gap\seq2struct\commands\infer.py", line 48, in load_model
last_step = saver.restore(logdir, step=step, map_location=self.device, item_keys=["model"])
File "D:\NLP\NL_to_SQL\gap-text2sql\rat-sql-gap\seq2struct\utils\saver.py", line 122, in restore
items2restore, model_dir, map_location, step)
File "D:\NLP\NL_to_SQL\gap-text2sql\rat-sql-gap\seq2struct\utils\saver.py", line 40, in load_checkpoint
item_dict[item_name].load_state_dict(checkpoint[item_name])
File "D:\NLP\NL_to_SQL\gap-text2sql\envs\lib\site-packages\torch\nn\modules\module.py", line 847, in load_state_dict
self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for EncDecModel:
size mismatch for decoder.rule_logits.2.weight: copying a param with shape torch.Size([97, 128]) from checkpoint, the shape in current model is torch.Size([94, 128]).
size mismatch for decoder.rule_logits.2.bias: copying a param with shape torch.Size([97]) from checkpoint, the shape in current model is torch.Size([94]).
size mismatch for decoder.rule_embedding.weight: copying a param with shape torch.Size([97, 128]) from checkpoint, the shape in current model is torch.Size([94, 128]).
(envs)
Lenovo-PC MINGW64 /d/NLP/NL_to_SQL/gap-text2sql/rat-sql-gap (main)
$

Can you guide me on where I need to make changes?

The JOIN table part of the output SQL does not contain ON clauses.

Hello,
I found that the output SQL does not contain ON clauses.
For example:
"question": "Show the stadium name and the number of concerts in each stadium."
"predicted": "SELECT stadium.Name, Count(*) FROM stadium JOIN concert GROUP BY stadium.Stadium_ID"
"gold": "SELECT T2.name, count(*) FROM concert AS T1 JOIN stadium AS T2 ON T1.stadium_id = T2.stadium_id GROUP BY T1.stadium_id"
I want to know how to get the ON clause "ON T1.stadium_id = T2.stadium_id".
Thank you.
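One after-the-fact option (a sketch under the assumption that your schemas follow Spider's tables.json format, with column_names_original, table_names_original, and foreign_keys) is to look up a foreign key linking the two joined tables and rebuild the ON condition:

import json

def on_clause(tables_json_path, db_id, t1, t2):
    # Search the Spider schema for a foreign key connecting tables t1 and t2.
    schema = {s["db_id"]: s for s in json.load(open(tables_json_path))}[db_id]
    cols = schema["column_names_original"]        # [[table_idx, column_name], ...]
    tabs = [t.lower() for t in schema["table_names_original"]]
    for c1, c2 in schema["foreign_keys"]:         # pairs of column indices
        ta, ca = tabs[cols[c1][0]], cols[c1][1]
        tb, cb = tabs[cols[c2][0]], cols[c2][1]
        if {ta, tb} == {t1.lower(), t2.lower()}:
            return "ON {}.{} = {}.{}".format(ta, ca, tb, cb)
    return None

print(on_clause("data/spider-bart/tables.json", "concert_singer", "concert", "stadium"))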

ModuleNotFoundError: No module named 'relogic.logickit.dataflow'

I'm trying to pre-train GAP on my own datasets from scratch, but I found that there is no "dataflow" folder under relogic (more specifically, 'relogic.logickit.dataflow.semtransparse.grammar.keywords' is missing), which is needed by the TaBARTModel in 'relogic/pretrainkit/models/semparse/modeling_bart_copy.py'.

from relogic.logickit.dataflow.semtransparse.grammar.keywords import SKETCH_KEYWORDS, KEYWORDS

Hope someone could help on this.
Thanks

How to comprehend the evaluation results?

Hello,
When I get the evaluation results, I am not sure about the meaning of "predicted_parse_error" and "exact". I guess that when "predicted_parse_error" is true it means the model couldn't produce a predicted SQL query; is that right? I also found that "exact" has three possible values: true, 0, and false. I guess that when "exact" is true, the predicted SQL is right, but what do 0 and false mean?
Thank you.

Can we get the pretraining data?

Can you please let me know how to download the additional 30K examples [(utterance, schema, SQL) triples] that were used for pre-training?

Thanks in advance!

Cannot access model checkpoint

Hi, I am trying to learn from and test your work, strictly following your instructions, but I cannot access the model checkpoint from the AWS S3 bucket. Is the project closed now, is the checkpoint bucket no longer public, or am I having some other connectivity issue?
Running the following command:
aws s3 cp s3://gap-text2sql-public/checkpoint-artifacts/gap-finetuned-checkpoint logdir/bart_run_1/bs\=12\,lr\=1.0e-04\,bert_lr\=1.0e-05\,end_lr\=0e0\,att\=1/model_checkpoint-00041000
Getting this error:
fatal error: An error occurred (403) when calling the HeadObject operation: Forbidden
Would be grateful for feedback,
Regards

Query formed for total irrelevant input string

Hi,
I was trying this on a dummy database with a couple of tables. I observed that for totally irrelevant input text, it still outputs a query. Can we have a confidence score along with the query in the response, so that we can look at the confidence value and decide how relevant the query is? A screenshot of the issue was attached.
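Nothing like this ships with the repo, but if you can get per-hypothesis log-scores out of the beam search (the beam objects here are hypothetical; adapt to whatever your inference code returns), a crude confidence is a softmax over the beam scores:

import math

def top_beam_confidence(beam_scores):
    # beam_scores: log-scores, best hypothesis first (hypothetical; adapt to your beams).
    m = max(beam_scores)
    exps = [math.exp(s - m) for s in beam_scores]
    return exps[0] / sum(exps)

print(top_beam_confidence([-1.2, -4.7, -5.3]))  # e.g. ~0.95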

Issue while loading the model

Hey @Impavidity @TheurgicDuke771,
I'm facing the same issue; I don't think there is an issue with the (Spider) data I'm using.

Loading model from logdir/bart_run_1/bs=12,lr=1.0e-04,bert_lr=1.0e-05,end_lr=0e0,att=1/model_checkpoint-00041000

---------------------------------------------------------------------------

KeyError                                  Traceback (most recent call last)

<ipython-input-45-2534e992a832> in <module>()
----> 1 model = inferer.load_model(model_dir, checkpoint_step)

6 frames

/content/gap-text2sql/rat-sql-gap/seq2struct/models/variational_lstm.py in _hook_remove_dropout_masks_from_state_dict(cls, instance, state_dict, prefix, local_metadata)
     75     @classmethod
     76     def _hook_remove_dropout_masks_from_state_dict(cls, instance, state_dict, prefix, local_metadata):
---> 77         del state_dict[prefix + '_input_dropout_mask']
     78         del state_dict[prefix + '_h_dropout_mask']
     79 

KeyError: 'decoder.state_update._input_dropout_mask'

The folder structure can be seen in the attached screenshot.

Please guide me to tackle this issue.
Thanks in advance.
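A workaround that has the same effect as the hook shown above but tolerates checkpoints without the dropout-mask keys (an unofficial patch to seq2struct/models/variational_lstm.py):

@classmethod
def _hook_remove_dropout_masks_from_state_dict(cls, instance, state_dict, prefix, local_metadata):
    # pop() with a default instead of del, so missing keys don't raise KeyError.
    state_dict.pop(prefix + '_input_dropout_mask', None)
    state_dict.pop(prefix + '_h_dropout_mask', None)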

About max_steps

Hello!
Is max_steps set to 41000? The RAT-SQL BERT version sets it to 81000.
I want to confirm this. Thank you!

Export to ONNX?

Hi everyone. After training the model in this project, is it possible to export it to ONNX format using the PyTorch API?

The Google Folders mentioned in the scripts are not available...

./BERTimbau-base.sh
Folder structure preparation
Download checkpoint
data/ ie_dirs/ logdir/ mT5_large.sh* seq2struct/ spider/ venv/
/Users/avgr/git/gap-text2sql/mrat-sql-gap/venv/lib/python3.9/site-packages/urllib3/__init__.py:34: NotOpenSSLWarning: urllib3 v2.0 only supports OpenSSL 1.1.1+, currently the 'ssl' module is compiled with 'LibreSSL 2.8.3'. See: urllib3/urllib3#3020
warnings.warn(
/Users/avgr/git/gap-text2sql/mrat-sql-gap/venv/lib/python3.9/site-packages/gdown/cli.py:126: FutureWarning: Option --id was deprecated in version 4.3.1 and will be removed in 5.0. You don't need to pass it anymore to use a file ID.
warnings.warn(
Access denied with the following error:

    Cannot retrieve the public link of the file. You may need to change
    the permission to 'Anyone with the link', or have had many accesses. 

You may still be able to access the file from the browser:

     https://drive.google.com/uc?id=1gIZS0RuIxdjmm7sNbA3R6p6--9iMJmW8 
