SAM

SAM is a learning-based method for high-fidelity database generation using deep autoregressive models.

You can learn more about SAM in our SIGMOD 2022 paper, SAM: Database Generation from Query Workloads with Supervised Autoregressive Models.


Getting Started

This project contains two main directories:

sam_single: SAM for single-relation database generation

sam_multi: SAM for multi-relation database generation

Here we give a quick example (~10 minutes) of using SAM to generate the IMDB database from a pre-trained autoregressive model. More detailed instructions can be found in the README of each directory.

Set up the conda environment for the project:

conda env create -f environment.yml
conda activate sam

Enter the directory and download the IMDB database:

cd sam_multi
bash scripts/download_imdb.sh

Generate the IMDB database using the pre-trained model at ./sam_multi/models/uaeq-mscn-400.pt. The model is trained on the first 400 queries of the MSCN workload. The generated CSV files are saved to ./sam_multi/generated_database/imdb.

python run_dbgen.py --run data-generation-job-light-mscn-worklod
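
As a quick sanity check before importing, you can inspect the generated CSV files. The sketch below assumes the default output path from the previous step and that pandas is available in the sam environment:

import pandas as pd

# Default output directory of the generation step above (relative to sam_multi).
out_dir = "./generated_database/imdb"

# Print shape and column names of a few generated tables.
for name in ["title_100", "movie_keyword_100", "cast_info_100"]:
    df = pd.read_csv(f"{out_dir}/{name}.csv")
    print(name, df.shape, list(df.columns))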

To test the fidelity of the generated database, import the files into a PostgreSQL database:

create table title (id int PRIMARY KEY, production_year int, kind_id int);
copy title from 'SAM/sam_multi/generated_database/imdb/title_100.csv' delimiter ',' header csv;

create table movie_keyword (movie_id int, keyword_id int);
copy movie_keyword from 'SAM/sam_multi/generated_database/imdb/movie_keyword_100.csv' delimiter ',' header csv;

create table movie_info_idx (movie_id int, info_type_id int);
copy movie_info_idx from 'SAM/sam_multi/generated_database/imdb/movie_info_idx_100.csv' delimiter ',' header csv;

create table movie_info (movie_id int, info_type_id int);
copy movie_info from 'SAM/sam_multi/generated_database/imdb/movie_info_100.csv' delimiter ',' header csv;

create table movie_companies (movie_id int, company_type_id int, company_id int);
copy movie_companies from 'SAM/sam_multi/generated_database/imdb/movie_companies_100.csv' delimiter ',' header csv;

create table cast_info (movie_id int, role_id int, person_id int);
copy cast_info from 'SAM/sam_multi/generated_database/imdb/cast_info_100.csv' delimiter ',' header csv;
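
Note that PostgreSQL's COPY reads files from the server's filesystem, so the paths above must be absolute and visible to the server process; when loading from a client machine, psql's \copy is the usual alternative. As an illustration, here is a minimal sketch that streams the CSVs from the client side with psycopg2 (the psycopg2 dependency and connection settings are assumptions, not part of the SAM repo):

import psycopg2

# Hypothetical connection settings; adjust to your PostgreSQL setup.
conn = psycopg2.connect(dbname="imdb", user="postgres")
tables = ["title", "movie_keyword", "movie_info_idx",
          "movie_info", "movie_companies", "cast_info"]

with conn, conn.cursor() as cur:
    for t in tables:
        # Client-side COPY: each CSV is read locally and streamed to the server.
        with open(f"generated_database/imdb/{t}_100.csv") as f:
            cur.copy_expert(f"COPY {t} FROM STDIN WITH (FORMAT csv, HEADER)", f)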

Run the 400 training queries on the generated database and compute the resulting Q-error:

python query_execute.py --queries ./queries/mscn_400.sql --cards ./queries/mscn_400_card.csv
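
For reference, the Q-error compares a query's cardinality on the generated database against its true cardinality from the workload: Q-error = max(true/generated, generated/true), so 1.0 means a perfect match. A minimal sketch of the metric:

def q_error(true_card: float, generated_card: float) -> float:
    """Q-error = max(true/generated, generated/true); 1.0 is a perfect match."""
    # Guard against zero counts, a common convention when computing Q-error.
    true_card = max(true_card, 1.0)
    generated_card = max(generated_card, 1.0)
    return max(true_card / generated_card, generated_card / true_card)

# Example: 950 rows on the generated database vs. a true cardinality of 1000
# gives a Q-error of ~1.05.
print(q_error(1000, 950))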

Citation

@inproceedings{yang2022sam,
  title     = {SAM: Database Generation from Query Workloads with Supervised Autoregressive Models},
  author    = {Yang, Jingyi and Wu, Peizhi and Cong, Gao and Zhang, Tieying and He, Xiao},
  booktitle = {Proceedings of the 2022 International Conference on Management of Data},
  pages     = {1542--1555},
  year      = {2022},
  location  = {Philadelphia, PA, USA},
  publisher = {Association for Computing Machinery}
}

Acknowledgements

This project builds on top of UAE and NeuroCard.

License

This project is licensed under the NTUItive Dual License; see LICENSE.rtf.


sam's Issues

Segmentation fault, no luck

[screenshot of the segmentation fault]
After struggling with OS issues, I switched to a CentOS x86 machine, still no luck:
(sam) [root@copy-of-vm-ee-centos76-v1 sam_multi]# uname -a
Linux copy-of-vm-ee-centos76-v1.05 3.10.0-1127.el7.x86_64 #1 SMP Tue Mar 31 23:36:51 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
@jamesyang2333, could you please help?

Error when testing the generated database with 1000 queries (sam_single)

I get the error below when trying to run the 1000 test queries on the generated database with sam_single. It only occurs with the "dmv" dataset; the "census" run is fine.

python query_execute_single.py --dataset dmv --data-file ./generated_data_tables/dmv.csv --query-file ./queries/dmv_test.txt

Traceback (most recent call last):
  File "query_execute_single.py", line 43, in <module>
    cols = [sample_table.columns[sample_table.ColumnIndex(col)] for col in train_data_raw['column'][i]]
  File "query_execute_single.py", line 43, in <listcomp>
    cols = [sample_table.columns[sample_table.ColumnIndex(col)] for col in train_data_raw['column'][i]]
  File "/home/mltest/SAM/sam_single/common.py", line 152, in ColumnIndex
    assert name in self.name_to_index
AssertionError
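
The assertion fires in ColumnIndex when a column name referenced by the workload is missing from the table loaded from the generated CSV. A quick way to narrow it down is to diff the query columns against the CSV header; the sketch below is a generic diagnostic (pandas and the placeholder column names are assumptions):

import pandas as pd

# Columns actually present in the generated dmv CSV (header only).
csv_cols = set(pd.read_csv("./generated_data_tables/dmv.csv", nrows=0).columns)

# Column names referenced by the test queries; fill these in from the parsed
# workload (e.g. train_data_raw['column']). The two below are placeholders.
query_cols = {"Registration Class", "State"}

print("missing from CSV:", sorted(query_cols - csv_cols))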

Testing Training Dataset Size vs Training Time Relationship

I am attempting to test the relationship between the training dataset size and training time in the SAM repository. I adjusted the train_queries variable in sam_multi/experiments.py to 1000 and ran the following command:

python run_uae.py --run job-light-ranges-mscn-workload

However, I encountered the following error:

Traceback (most recent call last):
  File "/root/anaconda3/envs/sam/lib/python3.7/site-packages/ray/tune/trial_runner.py", line 471, in _process_trial
    result = self.trial_executor.fetch_result(trial)
  File "/root/anaconda3/envs/sam/lib/python3.7/site-packages/ray/tune/ray_trial_executor.py", line 430, in fetch_result
    result = ray.get(trial_future[0], DEFAULT_GET_TIMEOUT)
  File "/root/anaconda3/envs/sam/lib/python3.7/site-packages/ray/worker.py", line 1538, in get
    raise value.as_instanceof_cause()
ray.exceptions.RayTaskError(RuntimeError): ray::NeuroCard.train() (pid=81614, ip=172.17.0.5)
  File "python/ray/_raylet.pyx", line 479, in ray._raylet.execute_task
  File "python/ray/_raylet.pyx", line 432, in ray._raylet.execute_task.function_executor
  File "/root/anaconda3/envs/sam/lib/python3.7/site-packages/ray/tune/trainable.py", line 332, in train
    result = self.step()
  File "/root/anaconda3/envs/sam/lib/python3.7/site-packages/ray/tune/trainable.py", line 636, in step
    result = self._train()
  File "run_uae.py", line 1264, in _train
    q_weight=self.q_weight if self.semi_train else 0
  File "run_uae.py", line 542, in run_epoch_query_only
    all_loss.backward(retain_graph=True)
  File "/root/anaconda3/envs/sam/lib/python3.7/site-packages/torch/tensor.py", line 195, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/root/anaconda3/envs/sam/lib/python3.7/site-packages/torch/autograd/__init__.py", line 99, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: Function 'MmBackward' returned nan values in its 0th output.

In the job-light-ranges-mscn-workload configuration within sam_multi/experiments.py, are there any additional parameters or settings that need to be adjusted to properly test the relationship between training dataset size and training time?
I appreciate your time and assistance. Looking forward to your guidance on resolving this issue. Thank you!
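
The "MmBackward" message comes from PyTorch's autograd anomaly detection, which flags the op whose backward pass produced NaNs. Two generic debugging levers, not SAM-specific advice: enable anomaly mode to locate the offending op, and clip gradients before the optimizer step. A self-contained toy sketch:

import torch

# Anomaly mode pinpoints the forward op whose backward yields NaN
# (slow; enable for debugging only).
torch.autograd.set_detect_anomaly(True)

# Toy stand-ins for the model and optimizer used in run_uae.py.
model = torch.nn.Linear(4, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

loss = model(torch.randn(8, 4)).pow(2).mean()
loss.backward()

# Clipping gradients before stepping is a common guard against
# exploding or NaN gradients in query-driven training.
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
optimizer.step()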
