automl / naslib

NASLib is a Neural Architecture Search (NAS) library for facilitating NAS research for the community by providing interfaces to several state-of-the-art NAS search spaces and optimizers.

License: Apache License 2.0

Python 76.55% Shell 6.22% Jupyter Notebook 17.23%
neural-architecture-search nas automl

naslib's Introduction

** For the Zero-Cost NAS Competition, please switch to the automl-conf-competition branch **


NASLib is a modular and flexible framework created with the aim of providing a common codebase to the community to facilitate research on Neural Architecture Search (NAS). It offers high-level abstractions for designing and reusing search spaces, as well as interfaces to benchmarks and evaluation pipelines, enabling the implementation and extension of state-of-the-art NAS methods with only a few lines of code. The modularized nature of NASLib allows researchers to easily innovate on individual components (e.g., define a new search space while reusing an optimizer and evaluation pipeline, or propose a new optimizer with existing search spaces). It is designed to be modular, extensible and easy to use.

NASLib was developed by the AutoML Freiburg group, and with the help of the NAS community we are constantly adding new search spaces, optimizers and benchmarks to the library. Please reach out to [email protected] for any questions or potential collaborations.

(Figure: NASLib overview)

Setup | Usage | Docs | Contributing | Cite

Setup

While installing the repository, we recommend creating a new conda environment. Install the GPU or CPU version of PyTorch for your setup:

conda create -n mvenv python=3.7
conda install pytorch torchvision torchaudio cudatoolkit=11.1 -c pytorch -c nvidia

Then run the following commands to install NASLib (via its setup.py), which will also install all the packages listed in requirements.txt:

pip install --upgrade pip setuptools wheel
pip install -e .

To validate the setup, you can run tests:

cd tests
coverage run -m unittest discover -v

The test coverage can be seen with coverage report.

Queryable Benchmarks

NASLib allows you to query the following (tabular and surrogate) benchmarks for the performance of any architecture, for a given search space, dataset and task. To set them up, simply download the benchmark data files from these URLs and place them in naslib/data.

NAS-Bench-101
  Task: Image Classification
  Datasets: CIFAR10
  Data URL: cifar10
  Required files: naslib/data/nasbench_only108.pkl

NAS-Bench-201
  Task: Image Classification
  Datasets: CIFAR10, CIFAR100, ImageNet16-120
  Data URLs: cifar10, cifar100, imagenet
  Required files: naslib/data/nb201_cifar10_full_training.pickle,
                  naslib/data/nb201_cifar100_full_training.pickle,
                  naslib/data/nb201_ImageNet16_full_training.pickle

NAS-Bench-301
  Task: Image Classification
  Datasets: CIFAR10
  Data URLs: cifar10, models
  Required files: naslib/data/nb301_full_training.pickle,
                  naslib/data/nb_models/...

NAS-Bench-ASR
  Task: Automatic Speech Recognition
  Datasets: TIMIT
  Data URL: timit
  Required files: naslib/data/nb-asr-bench-gtx-1080ti-fp32.pickle,
                  naslib/data/nb-asr-bench-jetson-nano-fp32.pickle,
                  naslib/data/nb-asr-e40-1234.pickle,
                  naslib/data/nb-asr-e40-1235.pickle,
                  naslib/data/nb-asr-e40-1236.pickle,
                  naslib/data/nb-asr-info.pickle

NAS-Bench-NLP
  Task: Natural Language Processing
  Datasets: Penn Treebank
  Data URLs: ptb, models
  Required files: naslib/data/nb_nlp.pickle,
                  naslib/data/nbnlp_v01/...

TransNAS-Bench-101
  Task: 7 Computer Vision tasks
  Datasets: Taskonomy
  Data URL: taskonomy
  Required files: naslib/data/transnas-bench_v10141024.pth

For NAS-Bench-301 and NAS-Bench-NLP, you will additionally have to install the NASBench301 API from here.

Once set up, you can test if the APIs work as follows:

python test_benchmark_apis.py --all --show_error

You can also test any one API.

python test_benchmark_apis.py --search_space <search_space> --show_error
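
Once a benchmark passes these checks, you can also query it directly from Python. The snippet below is a minimal sketch assuming the NAS-Bench-201 files listed above are in naslib/data; the helper names (get_dataset_api, Metric, sample_random_architecture) follow NASLib's interfaces but should be double-checked against your checkout.

# Hedged sketch: look up the tabular result for a random NAS-Bench-201 cell.
from naslib.search_spaces import NasBench201SearchSpace
from naslib.search_spaces.core.query_metrics import Metric
from naslib.utils import get_dataset_api

dataset_api = get_dataset_api(search_space="nasbench201", dataset="cifar10")

graph = NasBench201SearchSpace()
graph.sample_random_architecture(dataset_api=dataset_api)

# Query the benchmark instead of training the sampled architecture.
val_acc = graph.query(Metric.VAL_ACCURACY, dataset="cifar10", dataset_api=dataset_api)
print(val_acc)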

Usage

To get started, check out demo.py.

from naslib.defaults.trainer import Trainer
from naslib.optimizers import DARTSOptimizer
from naslib.search_spaces import SimpleCellSearchSpace
from naslib.utils import get_config_from_args

config = get_config_from_args()  # parse the config file and CLI arguments

search_space = SimpleCellSearchSpace()

optimizer = DARTSOptimizer(**config.search)
optimizer.adapt_search_space(search_space, config.dataset)

trainer = Trainer(optimizer, config)
trainer.search()        # Search for an architecture
trainer.evaluate()      # Evaluate the best architecture

For more examples see naslib tutorial, intro to search spaces and intro to predictors.
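
As a taste of the predictor interface mentioned above, here is a hedged sketch: the ZeroCost predictor and its query(graph, dataloader=...) call follow the zero-cost tutorials, but treat the exact constructor arguments as assumptions and swap in your own data loader.

# Hedged sketch: score an untrained NAS-Bench-201 cell with a zero-cost proxy.
import torch
from torch.utils.data import DataLoader, TensorDataset

from naslib.predictors import ZeroCost
from naslib.search_spaces import NasBench201SearchSpace

# A tiny dummy loader for illustration; use a real CIFAR-10 train loader in practice.
dummy_data = TensorDataset(torch.randn(64, 3, 32, 32), torch.randint(0, 10, (64,)))
train_loader = DataLoader(dummy_data, batch_size=32)

graph = NasBench201SearchSpace()
graph.sample_random_architecture()
graph.parse()  # instantiate the PyTorch modules before scoring

predictor = ZeroCost(method_type="synflow")  # assumed keyword; other proxies exist (e.g. snip, fisher)
score = predictor.query(graph, dataloader=train_loader)
print(score)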

Scripts for running multiple experiments on a cluster

The scripts folder contains code for generating config files for running experiments across various configurations and seeds. It writes them into the naslib/configs folder.

cd scripts
bash bbo/make_configs_asr.sh

It also contains scheduler.sh files that automatically read these generated config files and submit a corresponding job to the cluster using SLURM.

Contributing

We welcome contributions to the library along with any potential issues or suggestions. Please create a pull request to the Develop branch.

Cite

If you use this code in your own work, please use the following bibtex entries:

@misc{naslib-2020,
  title={NASLib: A Modular and Flexible Neural Architecture Search Library},
  author={Ruchte, Michael and Zela, Arber and Siems, Julien and Grabocka, Josif and Hutter, Frank},
  year={2020},
  publisher={GitHub},
  howpublished={\url{https://github.com/automl/NASLib}}
}
  
@inproceedings{mehta2022bench,
  title={NAS-Bench-Suite: NAS Evaluation is (Now) Surprisingly Easy},
  author={Mehta, Yash and White, Colin and Zela, Arber and Krishnakumar, Arjun and Zabergja, Guri and Moradian, Shakiba and Safari, Mahmoud and Yu, Kaicheng and Hutter, Frank},
  booktitle={International Conference on Learning Representations},
  year={2022}
}

(Figure: comparison of 31 performance predictors)

NASLib has been used to run an extensive comparison of 31 performance predictors (figure above). See the separate readme: predictors.md and our paper: How Powerful are Performance Predictors in Neural Architecture Search?

@article{white2021powerful,
  title={How Powerful are Performance Predictors in Neural Architecture Search?},
  author={White, Colin and Zela, Arber and Ru, Robin and Liu, Yang and Hutter, Frank},
  journal={Advances in Neural Information Processing Systems},
  volume={34},
  year={2021}
}

naslib's People

Contributors

abhash-er, arberzela, crwhite14, dependabot[bot], gierle, gurizab, jr2021, juliensiems, mahmoud-safari, mlindauer, neonkraft, rheasukthanker, rubinxin, ruchtem, rufaim, shakibamrd, shashankskagnihotri, shenyann, yangliu-re, yashsmehta


naslib's Issues

How do I use my custom dataset?

I don't want CIFAR, Taskonomy or any other built-in dataset; I have my own data, but I can't get it to work. I couldn't replace CIFAR10, I don't know the expected data format, and I'm overwhelmed with errors. Can you help me with this?

Does the ImageNet-16 dataset work fine on the competition server?

Hi, I tried to clone the automl-competition repo to my local machine and set up an offline evaluation with the scripts provided by the tutorial. It seems all datasets work fine except ImageNet-16 (hash check failed).

Traceback (most recent call last):
  ...
  File "//workspace/automl_naslib/naslib/utils/DownsampledImageNet.py", line 80, in __init__
    entry = pickle.load(f, encoding="latin1")
_pickle.UnpicklingError: invalid load key, '\x1d'.

Querying ZeroCost predictor in Supported Optimizers

A problem occurs in the Bananas optimizer when an architecture outside of the search space is sampled and the zero_cost_scores must be calculated on the fly using the ZeroCost predictor:

Exception has occurred: StopIteration
exception: no description
  File "/home/jhaa/NASLib/naslib/predictors/utils/pruners/measures/model_stats.py", line 7, in get_model_stats
    model_stats = tw.ModelStats(model, input_tensor_shape,
  File "/home/jhaa/NASLib/naslib/predictors/utils/pruners/predictive.py", line 141, in find_measures
    model_stats = get_model_stats(
  File "/home/jhaa/NASLib/naslib/predictors/zerocost.py", line 31, in query
    score = predictive.find_measures(
  File "/home/jhaa/NASLib/naslib/optimizers/discrete/bananas/optimizer.py", line 93, in query_zc_scores
    score = zc_method.query(arch, dataloader=zc_method.train_loader)
  File "/home/jhaa/NASLib/naslib/optimizers/discrete/bananas/optimizer.py", line 241, in _get_best_candidates
    model.zc_scores = self.query_zc_scores(model.arch)
  File "/home/jhaa/NASLib/naslib/optimizers/discrete/bananas/optimizer.py", line 232, in new_epoch
    self.next_batch = self._get_best_candidates(candidates, acq_fn)
  File "/home/jhaa/NASLib/naslib/defaults/trainer.py", line 112, in search
    self.optimizer.new_epoch(e)
  File "/home/jhaa/NASLib/naslib/runners/bbo/runner.py", line 75, in <module>
    trainer.search(resume_from="", summary_writer=writer, report_incumbent=False)

Currently, we are passing model.arch into the zc_method.query() function. This differs slightly from the Fall-School Tutorial, where a graph is initialized, an architecture is sampled, the graph is parsed, and then the graph object is passed in.
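
For reference, my paraphrase of that tutorial-style flow (not verbatim from the tutorial; zc_method and dataset_api are assumed to be set up as in the optimizer) looks like this:

# Paraphrased tutorial flow: the parsed graph object (not model.arch)
# is what gets passed to the predictor.
from naslib.search_spaces import NasBench201SearchSpace

graph = NasBench201SearchSpace()
graph.sample_random_architecture(dataset_api=dataset_api)  # dataset_api loaded beforehand
graph.parse()

score = zc_method.query(graph, dataloader=zc_method.train_loader)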

Zero-Cost Predictor - Run on all 15,625 architectures for NAS-Bench-201

Hi, I wanted to test a Zero-Cost Predictor on all 15,625 architectures for NASBench201. In the NAS-Bench-Suite-Zero paper you state that you did this for all the architectures -

For all NAS-Bench-201 and TransNAS-Bench-101 tasks, we evaluate all ZC proxy values and the respective runtimes, for all architectures

But I am running into some issues. The predictor_config.yaml only has an option for random sampling of architectures via the parameter uniform_random.

I wrote my own code to exhaustively sample all architectures using naslib.search_spaces.nasbench201.graph.get_arch_iterator(). However, PredictorEvaluator.load_dataset() then loads all 15,625 graphs into a list, which runs out of memory on my machine at around 6,000 architectures. I wanted to check whether I am going about this the best way or whether there is a better approach.
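
To make the memory concern concrete, the kind of lazy, one-graph-at-a-time loop I was hoping for would look roughly like this (a rough sketch only; the iterator usage and set_spec are assumptions on my side, and score_architecture stands for whatever predictor call is used):

# Rough sketch: evaluate architectures one at a time instead of materialising
# all 15,625 parsed graphs in a list (names partly hypothetical).
from naslib.search_spaces import NasBench201SearchSpace
from naslib.utils import get_dataset_api

dataset_api = get_dataset_api(search_space="nasbench201", dataset="cifar10")
scores = []

for spec in NasBench201SearchSpace().get_arch_iterator(dataset_api):
    graph = NasBench201SearchSpace()
    graph.set_spec(spec)              # assumed setter, mirroring the NB301 example elsewhere
    graph.parse()
    scores.append(score_architecture(graph))  # score_architecture: your ZC predictor call
    del graph                         # keep only the scores, not the parsed graphs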

Thanks.

Issue with Running Predictor Experiments Due to Missing benchmarks Folder

Hi,

I'm currently working on predictor experiments with NASLib and encountered a snag. I was trying to use the script /NASLib/scripts/predictors/run_nb101.sh, but it seems to need access to a benchmarks folder. However, this folder appears to be missing in the source code, specifically in the path NASLib/naslib/benchmarks/create_configs.py.

Could you please provide some guidance on how to set up or find this benchmarks folder? Any advice or direction towards the right documentation would be very much appreciated.

Thanks for your help!

Best,
José Carreira

How to initialize a NASBench-301 model for training?

For a hash, say

from naslib.search_spaces import nasbench301
hash = '([(0,6),(1,2),(1,4),(0,6),(0,6),(1,3),(4,0),(3,6)],[(0,6),(1,2),(1,4),(0,6),(0,6),(1,3),(4,0),(3,6)])'
nbg = nasbench301.graph.NasBench301SearchSpace()
nbg.set_spec(eval(hash))
nbg.prepare_evaluation()
nbg.parse()
print(nbg.adj)
nbg = nasbench301.graph.NasBench301SearchSpace()
hash = '([(1,3),(0,5),(0,4),(2,4),(1,4),(2,0),(2,5),(1,1)],[(1,3),(0,5),(0,4),(2,4),(1,4),(2,0),(2,5),(1,1)])'
nbg.set_spec(eval(hash))
nbg.prepare_evaluation()
nbg.parse()
print(nbg.adj)

The adjacency matrix and other properties of the model 'nbg' do not change.

How can I initialize a NASBench-301 model with NASLib such that it can be trained?
Also, is it possible to extract a pure PyTorch model from the nbg graph?

Missing Layers in Search Space for NASBench-201

Hi everyone,

I was comparing the code for the search space in the original repo (as per the changelog) with the NASLib implementation, since we were observing consistently worse performance (down by 5-6 percentage points) when trying to re-create the top-10 or even top-100 architectures' performance. I noticed that, in the NASLib implementation here, compared to the corresponding original code here, there is a missing BatchNorm + ReLU before the global adaptive pooling layer. Interestingly, I did not see any mention of these layers in the paper on arXiv. But introducing them by making the following change to the above-mentioned NASLib code instantly re-created the expected top-10 model performance:

# post-processing
self.edges[edge_names["postproc"]].set('op', ops.Sequential(
    nn.BatchNorm2d(channels[-1]),
    nn.ReLU(inplace=False),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(channels[-1], self.num_classes)
))

Search algorithms for zero-shot NAS

Hi,

Are there any plans to add a centralized API for the different zero-shot search algorithms?
For example, "random search" from the NAS-without-training paper, "prune-based search" from the TE-NAS paper, "genetic search" from Zen-NAS, "reinforce", etc.

Error during submission on ZeroCost Competition

Hi, I came across this error after submitting to CodaLab, and I believe something is wrong in the evaluation system.

Traceback (most recent call last):
  File "/worker/worker.py", line 705, in run
    put_blob(stdout_url, stdout_file)
  File "/worker/worker.py", line 214, in put_blob
    'x-ms-version': '2018-03-28',
  File "/usr/local/lib/python2.7/dist-packages/requests/api.py", line 99, in put
    return request('put', url, data=data, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/requests/api.py", line 44, in request
    return session.request(method=method, url=url, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 335, in request
    resp = self.send(prep, **send_kwargs)
  File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 438, in send
    r = adapter.send(request, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/requests/adapters.py", line 324, in send
    raise ConnectionError(sockerr)
ConnectionError: [Errno -3] Temporary failure in name resolution

Recreate a subset of benchmarks

Hi, what would be the best way to recreate a subset of the benchmark accuracy data from the ZC json?
For example, I would like to replicate the validation accuracy obtained for architectures 1-100 (by index) from NAS-Bench-201. I understand that the val accuracies and corresponding ZC scores were open-sourced to remove the need for training, but I need to perform training for an internal evaluation. How do you suggest going about this?
Thank you.

XGBoost experiments generalization

This question is more about what the paper ("NAS-Bench-Suite-Zero: Accelerating Research on Zero Cost Proxies") describes in Table 10 of Appendix D. It is still unclear to me whether XGBoost was trained on ZC scores and/or encodings from one single benchmark and then tested on the rest of the benchmarks, or whether it was trained on a subset of models from a benchmark and later tested on models from that same benchmark.

Official Release

Hi,

When should we expect the official release? And until then, which branch is the most up-to-date yet reliable to use?
TIA.

Zero-Cost Metrics

I downloaded the zero-cost metrics using the script download_nbs_zero.sh. Can you explain how I can relate an architecture encoding such as (4, 0, 3, 1, 4, 3) in nasbench201 to an architecture string such as |nor_conv_1x1~0|+|avg_pool_3x3~0|avg_pool_3x3~1|+|avg_pool_3x3~0|nor_conv_3x3~1|avg_pool_3x3~2|? In addition, can you explain how you extract the validation accuracy from the NASBench201 API? Thank you!

Extra .json files for zc_intro.py

I am running tutorial/zc_intro.py and it returns this error:
[Errno 2] No such file or directory: '/home.../naslib/data/zc_nasbench201.json'

  File "/home.../naslib/utils/get_dataset_api.py", line 24, in get_zc_benchmark_api
    with open(datafile_path) as f:
  File "/home.../tutorial/zc_intro.py", line 113, in <module>
    zc_api = get_zc_benchmark_api('nasbench201', 'cifar10')

FileNotFoundError: [Errno 2] No such file or directory: '/home.../naslib/data/zc_nasbench201.json'

I have already checked the data for the benchmark but this file was not there to download.

Drive links no longer exist

The download script (source scripts/bash_scripts/download_nbs_zero.sh all) no longer works; it seems the Drive folders do not exist anymore. Is this data hosted somewhere else? This is for running the correlation results on the zerocost branch.

Storing of models or architectures in optimizers

In the get_candidates function of the zerocost branch optimizers Bananas and Npenas, there is a discrepancy in how the candidates in next_batch are stored.

If the acquisition function is being optimized via "random_sampling", then model is being stored:

...
candidates.append(model)

Otherwise, if it is being optimized via "mutation", then model.arch is being stored:

...
    candidate = arch
candidates.append(candidate)

However, the function get_best_candidates (which is called directly after get_new_candidates) treats candidates as a list of models:

values = [acq_fn(model.arch, [{'zero_cost_scores' : model.zc_scores}]) for model in candidates]

Does this imply that the optimization of the acquisition function via "mutation" is not used in the main loop of either Bananas or Npenas? If so, how and when should this option be used?

Checkpoint restart training

Hi,
When I try to restart my training from the stored checkpoint it takes some random architecture instead of the one already stored. Is there any way to avoid this?

Currently I am running the DrNAS, DARTS and GDAS optimizers for search and evaluation purposes with the CIFAR-10 dataset.

Nasbench301 and Naslib inconsistent dependencies

Hi,

I have been trying to set up NASLib for a few days. I am trying to run naslib/benchmarks/darts/runner.py, but the get_darts_api function tries to import the nasbench301 library, and the requirements.txt files of nasbench301 and naslib are inconsistent: for example, nasbench301 tries to install torch==1.5.0 while naslib tries to install torch==1.9.0. That's why I get an error when I try to run it. Can you help me solve this dependency problem? How should I edit the requirements.txt file?


Inconsistent dependencies

Hi everyone,

I have been trying to set up NASLib for a few days now and have been running into issues with the stated dependencies of the library. A number of dependencies have very precise version numbers, such as numpy==1.16.4 and pytorch==1.15.4, which are causing compatibility problems down the line for other libraries which don't have any version numbers specified at all. My estimate is that the latest versions of some other libraries deeper in the dependency tree, which are being pulled by conda/pip, are incompatible with the specific older versions specified in the requirements.txt. As of writing this issue, I have been unable to satisfy all the version dependencies in the requirements.txt with any of the python versions I tested, including 3.5 - which was specified as the minimum working version in the setup.py. Could you kindly provide some advice on this?

Setting up an environment that ignored the version-specific requirements but did include the requested libraries resulted in the unit tests failing with these errors

Taskonomy dataset loading

Hello,
Can you help me with downloading and loading Taskonomy dataset data?

The download script only fetches the raw pixel PNGs, the "class_object" labels and the "class_scene" labels. That only covers the "autoencoder", "class_object", "class_scene" and "jigsaw" tasks. I've found that the "normal" task labels can also be downloaded from downloads.cs.stanford.edu, similarly to "class_object". That makes 5 out of 7 in total. How do I get the "room_layout" and "segmentsemantic" tasks?

Loading TaskonomyDataset also requires some JSON files containing template paths; in the loading configs (like here) they are referred to as "final5K_splits".
How do I generate those files? I cannot seem to find any scripts related to this.

NasBench-201 Benchmark API

Hi,

How can I download nb201_cifar10_full_training.pickle, nb201_cifar100_full_training.pickle and nb201_ImageNet16_full_training.pickle files? I can't find these files. Can you help me?

Thanks

Provide an option to disable in-place ReLU so that PyTorch's register_full_backward_hook can be used

Hi, dear organizers:
I came across some issues while coding my submission. It seems there is no option to redefine the network, which means in-place ReLU breaks register_full_backward_hook by raising errors like:

RuntimeError: Output 0 of BackwardHookFunctionBackward is a view and is being modified inplace. This view was created inside a custom Function (or because an input was returned as-is) and the autograd logic to handle view+inplace would override the custom backward associated with the custom Function, leading to incorrect gradients. This behavior is forbidden. You can remove this warning by cloning the output of the custom Function.

These errors occur when I validate the setup by running tests

Hope to get some help. These errors occur when I execute the command:
coverage run -m unittest discover -v
The test result is "Ran 81 tests in 247.720s, FAILED (failures=6, errors=18)".

  1. Six errors of the following kind:
======================================================================
ERROR: test_get_config_from_args_default_bbo_bs (test_utils.UtilsTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/data/marui/NAS/NASLib-Develop/tests/test_utils.py", line 38, in test_get_config_from_args_default_bbo_bs
    self._test_get_config_from_args_default(config_type="bbo-bs")
  File "/data/marui/NAS/NASLib-Develop/tests/test_utils.py", line 71, in _test_get_config_from_args_default
    config_child = utils.get_config_from_args(config_type=config_type)
  File "/data/marui/NAS/NASLib-Develop/naslib/utils/__init__.py", line 205, in get_config_from_args
    value) if arg in config else eval(value)
  File "<string>", line 1, in <module>
NameError: name 'v' is not defined
  2. Six AssertionErrors like the following:
======================================================================
FAIL: test_forward_pass_aux_head (test_nb301_search_space.NasBench301SearchSpaceTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/data/marui/NAS/NASLib-Develop/tests/test_nb301_search_space.py", line 120, in test_forward_pass_aux_head
    self.assertEqual(aux_out.shape, (3, 512, 8, 8))
AssertionError: torch.Size([3, 256, 8, 8]) != (3, 512, 8, 8)
  3. Twelve RuntimeErrors like the following:
======================================================================
ERROR: test_feed_forward (test_hierarchical_search_space.HierarchicalDartsIntegrationTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/data/marui/NAS/NASLib-Develop/tests/test_hierarchical_search_space.py", line 50, in test_feed_forward
    logits = final_arch(data_train[0])
  File "/data/marui/anaconda3/envs/naslib/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/data/marui/NAS/NASLib-Develop/naslib/search_spaces/core/graph.py", line 410, in forward
    edge_output = edge_data.op.forward(x, edge_data=edge_data)
  File "/data/marui/NAS/NASLib-Develop/naslib/search_spaces/core/primitives.py", line 388, in forward
    return self.seq(x)
  File "/data/marui/anaconda3/envs/naslib/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/data/marui/anaconda3/envs/naslib/lib/python3.7/site-packages/torch/nn/modules/container.py", line 139, in forward
    input = module(input)
  File "/data/marui/anaconda3/envs/naslib/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/data/marui/anaconda3/envs/naslib/lib/python3.7/site-packages/torch/nn/modules/conv.py", line 457, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "/data/marui/anaconda3/envs/naslib/lib/python3.7/site-packages/torch/nn/modules/conv.py", line 454, in _conv_forward
    self.padding, self.dilation, self.groups)
RuntimeError: Given groups=1, weight of size [64, 64, 3, 3], expected input[2, 3, 32, 32] to have 64 channels, but got 3 channels instead

Are these errors normal? Is there something wrong with my environment settings? How should I resolve them?
Looking forward to everyone's response, thanks!

Extend predictors to zero-cost case

In the zerocost branch, Ensemble has been extended to the Zero Cost case and contains a single option for its base predictor, XGBoost. The XGBoost predictor has been adapted to the zero-cost case by implementing a set_pre_computations function, as well as modifying the BaseTree class. Currently, the Ensemble class supports only this single predictor:

trainable_predictors = {
    "xgb": XGBoost(
        ss_type=self.ss_type, zc=self.zc, encoding_type="adjacency_one_hot", zc_only=self.zc_only
    )
}

Should all other predictors be available in the merged Ensemble class and extended to the zero-cost case?

Examples on TransNAS-Bench-101

Are there any examples of how to train an architecture from the TransNAS-Bench-101 search space? And where can I download the dataset to train the architecture?

How to recover the search space from `NAS-Bench-301`?

Hi, thank you for sharing these benchmark tools!

Let me ask one question about a dataset provided by this repo.

When I looked at the NAS-Bench-301 data trained on CIFAR-10, whose URL is provided in the README.md of this repo, the architectures appear to be encoded numerically, unlike NAS-Bench-201 (see the code and its output below).

import pickle

data = pickle.load(open("./nb301_full_training.pickle", "rb"))
print(list(data.keys())[0])
(((0, 6), (1, 4), (0, 0), (1, 5), (1, 4), (3, 2), (0, 6), (3, 2)),
 ((0, 1), (1, 4), (0, 1), (1, 4), (2, 6), (3, 4), (1, 5), (2, 1)))

I'm wondering if the encoding rule is documented somewhere. I tried to figure it out myself but found it difficult, because there are at least four related NAS-Bench-301 repositories.

Best regards,

No Single Path Search Space NAS Benchmark

The search spaces of NAS benchmarks can be classified into two categories: one is NAS-Bench-101, which uses a DAG to represent the search space, and the other is DARTS-like search spaces such as NAS-Bench-201, NAS-Bench-301, NATS-Bench, etc.

Why is there no single-path search space, such as the ones used in Single Path One Shot, FairNAS, AngleNAS, RLNAS, etc.?

How to parse any arch in NAS-Bench-101 as a PyTorch class to train

Hi,
I'm trying to sample a few architectures from NAS-Bench-101, query their scores, and also get the torch model corresponding to each architecture.

I want to use the model to do some custom training and testing.

In core/graph.py, it says we should use the following code to parse the model into a PyTorch module:

    **Use as pytorch module**
    If you want to learn the weights of the operations or any
    other parameters of the graph you have to parse it first.
    >>> graph = getFancySearchSpace()
    >>> graph.parse()
    >>> logits = graph(data)
    >>> optimizer.min(loss(logits, target))

But graph.parse() fails if I use NasBench101SearchSpace as the graph instance.

Randomly sample architectures/cells from search spaces

I am trying, without success, to sample random architectures from the defined search spaces, such as the DARTS and SimpleCell spaces.

Is it possible to sample an entire PyTorch object representing the whole architecture? If not, is it possible to sample all the possible cells from the search spaces? Can you provide some code examples for doing this?
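
To make the question concrete, the kind of usage I am after would be something like the following (hypothetical on my side, this is not working code I have):

# Hypothetical usage I am hoping for: sample a random DARTS architecture and
# use the parsed graph like a regular nn.Module.
import torch
from naslib.search_spaces import DartsSearchSpace

graph = DartsSearchSpace()
graph.sample_random_architecture()   # pick a random cell/architecture
graph.parse()                        # instantiate the PyTorch modules

x = torch.randn(2, 3, 32, 32)        # a dummy CIFAR-sized batch
logits = graph(x)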

Thanks.

How to get the trained weights of architecture networks from the NAS benchmarks

Dear Sir,

Thank you for your interesting and valuable work on NASBench.

I am currently studying zero-cost proxy methods and would greatly appreciate it if you could provide information on how to access the trained weights of the architecture networks within NASBench.

Your assistance would be much appreciated.

Thank you for your time and consideration.

Sincerely,

TransNAS-Bench-101 Download

Hi, the download link of TransNAS-Bench-101 is invalid now. I wonder if anyone has the data file and would like to share it with me. Thanks for the help!

Zero cost configuration parameters for optimizers

We have included several new parameters for Bananas and Npenas in order to determine whether the zero cost proxies are used, and whether they come from the benchmark or the ZeroCost predictor:

  • zc
  • use_zc_api
  • zc_only

Should these be included as initialization parameters, or in the config files?

If they were to be included in the configs, create_config.py would need to be modified.

Why does the zero-cost predictor not integrate with the model search space module?

The file NASLib/tree/Develop/naslib/predictors/zerocost_v1.py uses the search space definitions in NASLib/tree/Develop/naslib/predictors/utils/models, which basically re-define some architectures of nasbench101 and nasbench201.

However, there is already a well-structured module of search spaces (naslib/search_spaces) that includes many search spaces.
Why doesn't zerocost_v1 use the models defined there?

How to use NB301?

Hello,

I was trying to use NASBench301 based on the example notebook here, but there are lots of dependency incompatibilities with NASLib.

It also requires a certain branch of Auto-PyTorch, and Auto-PyTorch also has incompatibility issues with NASLib.

Following another reference in NASLib's readme I reached the nas-bench-x11 repository, and it also has some incompatibility issues.

My question is: what is the suggested and proper way to use the NB301 benchmark?

Based on that, we are planning to develop a surrogate model for a task.

Thank you in advance,

Simplify package dependencies by removing unnecessary imports

I currently use NASLib as a backend for defining my own search space for a benchmark. I noticed while sharing my code that the following imports introduce a number of dependencies that are not strictly necessary in use cases where only a subset of all the search spaces defined by NASLib are to be used. Auto-importing these search spaces enforces the constraint that the dependencies of each and every search space must be satisfied in order to use any single one of the search spaces. Is this truly necessary? For reference, I am currently using only the NASBench-201 search space.

from .simple_cell.graph import SimpleCellSearchSpace
from .darts.graph import DartsSearchSpace
from .nasbench101.graph import NasBench101SearchSpace
from .nasbench201.graph import NasBench201SearchSpace
from .nasbenchnlp.graph import NasBenchNLPSearchSpace
from .nasbenchasr.graph import NasBenchASRSearchSpace
from .hierarchical.graph import HierarchicalSearchSpace
from .transbench101.graph import TransBench101SearchSpace
from .transbench101.api import TransNASBenchAPI
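
One possible direction, purely as a sketch of what I mean (PEP 562 module-level __getattr__; not tested against NASLib's actual layout, so treat the module paths as placeholders):

# Sketch of a lazy naslib/search_spaces/__init__.py: a search space module is
# imported only when its class is first requested, so unused search spaces
# never pull in their dependencies.
import importlib

_LAZY = {
    "NasBench201SearchSpace": ".nasbench201.graph",
    "DartsSearchSpace": ".darts.graph",
    # ... remaining search spaces ...
}

def __getattr__(name):
    if name in _LAZY:
        module = importlib.import_module(_LAZY[name], __name__)
        return getattr(module, name)
    raise AttributeError(f"module {__name__!r} has no attribute {name!r}")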

Correspondence between numbers and operations in NAS-Bench-Suite-Zero

Thank you for your fantastic work on NAS-Bench-Suite-Zero.
I found a lot of zc_<name_of_nasbench>.json files, like zc_nasbench201.json, but I can't tell which tuples represent which actual architectures in the search space.
For example, I don't know what architecture (4, 0, 3, 1, 4, 3) in zc_nasbench201.json represents (something like |avg_pool_3x3~0|+|skip_connect~0|nor_conv_1x1~1|+|none~0|avg_pool_3x3~1|nor_conv_1x1~2|).
I would be glad to know. Thank you.

Setup script fails

When running the pip install -e . command, setup fails. It seems to trip up on the numpy installation, but numpy is already installed (at a version consistent with the one required).
(Screenshot of the pip error message)
See the error message above. The result is consistent across three different systems: an M1 Mac and two Linux machines. Although it does not make the entire setup script fail, torchaudio also cannot be installed with the command suggested under Setup in the NASLib docs. Additionally, many of the packages specified in requirements.txt seem to have conflicts.

Unit tests failing on commits

I have been attempting to use the library recently, but I have noticed that all the unit tests fail, and have done so for all the commits I have checked. I have also tried the tutorial and some of the methods, but the same errors as in the unit tests come up. Do you know what the last working commit for the library is? When I try to debug, the errors seem to originate mainly in the graph construction.

Documentation

In the tutorial, when, for example, it is written "with NB201() you can simply query the benchmark", you could provide some examples or more precise documentation. It's not clear what exactly you can fill in for the query() arguments.
It seems like you would be able to do NB201.query('cifar10', Metric.TEST_ACCURACY).
Instead of listing the setup and the call, the reader is guided to use a config file. It would be better to give the config parameters in the documentation together with the function calls, and then say that these settings can be collected in config files.

In the video and tutorial, I find it difficult to see why the introductory example doesn't actually train any models while the one using a NAS predictor supposedly does. It seems as if the former happens simply because the NASBenchSearchSpace is loaded, but that stays the same in the second (XGBoost) example; there you do have load_labeled=False in one of the calls. That sounds like it may prevent the accuracy lookup, but it is more likely controlled by the configuration file that is loaded. I would make this explicit, either in the code or in the configuration file.

Downloading taskonomy with download_tnb.sh

I am having trouble downloading the Taskonomy dataset using download_tnb.sh. I created a directory for each of the buildings and ran the script. Now my files look like this (giving one filename per folder as an example):
dataset/
----benevolence/
--------class_object/
------------point_0_view_0_domain_class_object.npy
--------class_scene/
------------point_0_view_0_domain_class_scene.npy
--------normal/
------------point_0_view_0_domain_normal.npy
--------rgb/
------------point_0_view_0_domain_rgb.png
----other_building/
etc

With this file structure, generate_splits.py looks at the wrong files and my splits are empty; for example, benevolence.json has "[]", an empty list, as its entry.

I have been struggling to find the right way to download the data with the correct naming and could really use some help.

Source of validation accuracy in zero-cost case

In the zero-cost branch optimizers Npenas and Bananas, the validation accuracy of architectures is being queried from the zero-cost-benchmark as follows:

model.accuracy = self.zc_api[str(model.arch_hash)]['val_accuracy']

The question is, whether this supports the case where the user wants to use the ZeroCost predictor because their dataset or search space is not supported by the zero-cost benchmark.

If this is a case that we want to support, one option would be to introduce a parameter use_zc_api and use it as follows:

if self.use_zc_api:
    model.accuracy = self.zc_api[str(model.arch_hash)]['val_accuracy']
else:
    model.accuracy = model.arch.query(
        self.performance_metric, self.dataset, dataset_api=self.dataset_api
    )

NB301 cannot be used

When installed via pip or from GitHub, with the full data, test_benchmark_apis.py always fails.

Is there any help doc for installing NB301? Or is the problem that the API in NB301 has changed?
