
candidate_models's Introduction


Brain-Score is a platform to evaluate computational models of brain function on their match to brain measurements in primate vision. The intent of Brain-Score is to adopt many (ideally all) of the experimental benchmarks in the field for the purpose of model testing, falsification, and comparison. To that end, Brain-Score operationalizes experimental data into quantitative benchmarks that any model candidate following the BrainModel interface can be scored on.

Note that you can only access a limited set of public benchmarks when running locally. To score a model on all benchmarks, submit it via the brain-score.org website.

See the documentation for more details, e.g. for submitting a model or benchmark to Brain-Score. For a step-by-step walkthrough on submitting models to the Brain-Score website, see these web tutorials.

See these code examples on scoring models, retrieving data, and using and defining benchmarks and metrics. These earlier examples might still be helpful, but their usage has been deprecated since the 2.0 update.

Brain-Score is made by and for the community. To contribute, please send in a pull request.

Local installation

You will need Python = 3.7 and pip >= 18.1.

pip install git+https://github.com/brain-score/vision

Test if the installation is successful by scoring a model on a public benchmark:

from brainscore_vision.benchmarks import public_benchmark_pool

benchmark = public_benchmark_pool['dicarlo.MajajHong2015public.IT-pls']
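# my_model() is a placeholder: substitute your own model implementing the BrainModel interface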
model = my_model()
score = benchmark(model)

# >  <xarray.Score ()>
# >  array(0.07637264)
# >  Attributes:
# >      error:                 <xarray.Score ()>\narray(0.00548197)
# >      raw:                   <xarray.Score ()>\narray(0.22545106)\nAttributes:\...
# >      ceiling:               <xarray.DataArray ()>\narray(0.81579938)\nAttribut...
# >      model_identifier:      my-model
# >      benchmark_identifier:  dicarlo.MajajHong2015public.IT-pls

Some steps may take minutes because data has to be downloaded during first-time use.

Environment Variables

Variable             Description
RESULTCACHING_HOME   directory to cache results (benchmark ceilings) in, ~/.result_caching by default (see https://github.com/brain-score/result_caching)
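
For example, the cache location can be redirected before any Brain-Score imports. This is a minimal sketch assuming the variable is read at import time; the path is only an illustration:

import os

# point result_caching at a custom cache directory (illustrative path)
os.environ['RESULTCACHING_HOME'] = '/data/brainscore_cache'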

License

MIT license

Troubleshooting

`ValueError: did not find HDF5 headers` during netcdf4 installation: pip seems to fail to properly set up the HDF5_DIR required by netcdf4. Use conda instead: `conda install netcdf4`
Repeated runs of a benchmark / model do not change the outcome even though the code was changed: results (scores, activations) are cached on disk using https://github.com/mschrimpf/result_caching. Delete the corresponding file or directory to clear the cache.
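
A minimal sketch of clearing the whole cache, assuming the default ~/.result_caching location; deleting only the sub-directory for the affected function is usually enough:

import os
import shutil

# remove cached scores/activations so the next run recomputes them
cache_dir = os.path.expanduser('~/.result_caching')
if os.path.isdir(cache_dir):
    shutil.rmtree(cache_dir)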

CI environment

Add CI-related build commands to test_setup.sh. The script is executed in the CI environment for unit tests.

References

If you use Brain-Score in your work, please cite "Brain-Score: Which Artificial Neural Network for Object Recognition is most Brain-Like?" (technical) and "Integrative Benchmarking to Advance Neurally Mechanistic Models of Human Intelligence" (perspective) as well as the respective benchmark sources.

@article{SchrimpfKubilius2018BrainScore,
  title={Brain-Score: Which Artificial Neural Network for Object Recognition is most Brain-Like?},
  author={Martin Schrimpf and Jonas Kubilius and Ha Hong and Najib J. Majaj and Rishi Rajalingham and Elias B. Issa and Kohitij Kar and Pouya Bashivan and Jonathan Prescott-Roy and Franziska Geiger and Kailyn Schmidt and Daniel L. K. Yamins and James J. DiCarlo},
  journal={bioRxiv preprint},
  year={2018},
  url={https://www.biorxiv.org/content/10.1101/407007v2}
}

@article{Schrimpf2020integrative,
  title={Integrative Benchmarking to Advance Neurally Mechanistic Models of Human Intelligence},
  author={Schrimpf, Martin and Kubilius, Jonas and Lee, Michael J and Murty, N Apurva Ratan and Ajemian, Robert and DiCarlo, James J},
  journal={Neuron},
  year={2020},
  url={https://www.cell.com/neuron/fulltext/S0896-6273(20)30605-X}
}

candidate_models's People

Contributors

anayebi, dapello, franzigeiger, jjpr-mit, mike-ferguson, mschrimpf, qbilius, tiagogmarques


candidate_models's Issues

quick question regarding batch norm for vgg models

Hi @mschrimpf,
I hope all is going well.
Quick question: for models "vgg-16" and "vgg-19" on brain-score.org, do these correspond to PyTorch models torchvision.models.vgg16_bn or to torchvision.models.vgg16 (with / without batch norm)?
Cheers, Robert

store pca projection

Instead of re-computing the PCA projection for all stimuli, store it and re-use it.
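
A minimal sketch of the idea using scikit-learn and joblib rather than the actual candidate_models/model_tools implementation; the file name and component count are illustrative:

import os
import joblib
import numpy as np
from sklearn.decomposition import PCA

PCA_PATH = 'stimuli_pca_1000.joblib'  # illustrative cache file

def get_pca(activations: np.ndarray, n_components: int = 1000) -> PCA:
    # re-use a previously fitted projection if one was stored, otherwise fit and store it
    if os.path.exists(PCA_PATH):
        return joblib.load(PCA_PATH)
    pca = PCA(n_components=n_components)
    pca.fit(activations)
    joblib.dump(pca, PCA_PATH)
    return pca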

ERROR: Cannot find key: --model

Hi, I am trying to run the minimal example that is in the documentation, but I am facing this error when invoking candidate_models:

2020-02-24 15:57:15,230 INFO:main:Running candidate_models --model alexnet
ERROR: Cannot find key: --model

Not sure if I am running it correctly. Thanks.

ModuleNotFoundError: No module named 'unsup_vvs.network_training.models.simclr'

I'm trying to use candidate_models with Brain Score, but I received the following error:

  File "python3.7/site-packages/candidate_models/base_models/unsupervised_vvs/__init__.py", line 77, in __call__
    cfg_kwargs=self.CFG_KWARGS.get(identifier, {}))
  File "python3.7/site-packages/candidate_models/base_models/unsupervised_vvs/__init__.py", line 133, in __get_tf_model
    model_type=model_type, cfg_kwargs=cfg_kwargs)
  File "python3.7/site-packages/candidate_models/base_models/unsupervised_vvs/__init__.py", line 157, in _build_model_ending_points
    **cfg_kwargs)
  File "python3.7/site-packages/unsup_vvs/neural_fit/cleaned_network_builder.py", line 189, in get_network_outputs
    ending_points = get_simclr_ending_points(inputs)
  File "python3.7/site-packages/unsup_vvs/neural_fit/cleaned_network_builder.py", line 22, in get_simclr_ending_points
    from unsup_vvs.network_training.models.simclr import resnet
ModuleNotFoundError: No module named 'unsup_vvs.network_training.models.simclr'

cannot import name 'LayerModel'

from candidate_models import score_model raises an error: cannot import name 'LayerModel'

candidate_models/base_models/cornet.py attempts to import from model_tools.brain_transformation import LayerModel, but LayerModel is not defined.
Removing LayerModel resolved the error.

Issue with code crashing due to high RAM usage

I am currently evaluating a 3D ResNet that has been pre-trained on short video segments. However, I encountered a problem while running the "score_model" function with an instance of the "ModelCommitment" class. It appears that the code is utilizing all available memory resources, causing it to crash.

To reproduce this issue, I modified the "preprocess_images" method located in the "activations/pytorch.py" Python file. In my modification, I pass concatenated images as a 4D input to the model. The activations model is then initialized based on this preprocessing step.

During execution, I monitored both GPU and CPU activity. Although the GPU usage is temporary, the CPU usage gradually increases until it eventually freezes the procedure. I would like to note that when using small resolutions for input images that are concatenated to create a static video frame, everything works fine. However, increasing the input image size beyond 32x32 leads to the problem mentioned above.

Please investigate this issue and provide guidance on resolving it. Thank you.

Layers corresponding to each region

Hi, I see here that you list the layer that you used to compare to V1, but I can't find the layers you used for V2, V4, IT, or Behavior. I was wondering where I can find this information for all the candidate models you benchmarked.

Cannot install candidate models because of xarray version conflict

I tried to install candidate_models to test my trained CORnet-S via pip install "candidate_models @ git+https://github.com/brain-score/candidate_models", but it fails with the error message below.

ERROR: Cannot install brain-score and candidate-models==0.1.0 because these package versions have conflicting dependencies.

The conflict is caused by:
candidate-models 0.1.0 depends on xarray<=0.12
brainio-base 0.1.0 depends on xarray==0.16.1

To fix this you could try to:

  1. loosen the range of package versions you've specified
  2. remove package versions to allow pip attempt to solve the dependency conflict

ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/user_guide/#fixing-conflicting-dependencies

I tried installing it on different servers with both Windows and Linux, and cannot solve the problem.
What should I do?

TF-slim models label_offset is not handled

All the models that have a non-zero label offset (i.e., TensorFlow-Slim models with a background class at index zero) are outputting the wrong ImageNet logits. E.g., inception_v1 thereby produces a score of 0 on the ImageNet benchmark.
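
A minimal sketch of the fix, assuming logits of shape (batch, 1001) where index 0 is the background class; the offset value would come from each model's metadata rather than being hard-coded:

import numpy as np

def drop_background_class(logits: np.ndarray, label_offset: int = 1) -> np.ndarray:
    # shift TF-slim logits so that output i aligns with ImageNet label i
    return logits[:, label_offset:]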

package ImageNet validation in brainscore instead of manually loading

Right now, ImageNet validation images are downloaded through candidate_models to initialize the PCA.
The suggestion here is to package ImageNet-val in brainscore as a StimulusSet so that we can just retrieve it from there.
However, this introduces an additional dependency between the two frameworks, such that candidate_models with PCA can no longer be used without brainscore.

R50 example

Hi,

I would like to test some R50 weights that I have, but am struggling to translate the AlexNet sample to work for ResNet50.
If possible could you please point me to some implementation / code that allows me to just specify the weights and otherwise submits a ResNet50?

Best
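
A minimal sketch along the lines of the AlexNet example, assuming the model_tools helpers that candidate_models builds on (PytorchWrapper, ModelCommitment); the identifier, weight path, and layer choices are illustrative:

import functools
import torchvision.models
from model_tools.activations.pytorch import PytorchWrapper, load_preprocess_images
from model_tools.brain_transformation import ModelCommitment

model = torchvision.models.resnet50(pretrained=False)
# load your own weights here, e.g. model.load_state_dict(torch.load('my_r50_weights.pth'))
preprocessing = functools.partial(load_preprocess_images, image_size=224)
activations_model = PytorchWrapper(identifier='my-resnet50', model=model, preprocessing=preprocessing)
brain_model = ModelCommitment(identifier='my-resnet50', activations_model=activations_model,
                              layers=['layer1', 'layer2', 'layer3', 'layer4'])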

visual-degrees branch tries to access PixelsToDegrees from model_tools which was moved to brainscore

In candidate_models/model_commitments/ml_pool.py, an activation_hook is added that no longer exists:

from model_tools.brain_transformation import ModelCommitment, PixelsToDegrees

class Hooks:
    HOOK_SEPARATOR = "--"

    def __init__(self):
        pca_components = 1000
        self.activation_hooks = {
            f"pca_{pca_components}": lambda activations_model: LayerPCA.hook(
                activations_model, n_components=pca_components),
            "degrees": lambda activations_model: PixelsToDegrees.hook(
                activations_model, target_pixels=activations_model.image_size)}

containerify models

Brain-Score now allows the submission of zip files and runs them in their own conda environment. We are starting to run into version mismatches between the many different models in candidate_models (e.g. 8885cd4) so should start to tease these apart.

Especially together with brain-score/vision#231, each model family can be its own submission that Brain-Score then provides for download without version conflicts.

cross validation is slow

Hi

I am working with public benchmarks for IT and V4 layers. My goal is to get scores for a modified 2-block resnet architecture for each layer on the benchmarks to select layers to run on private benchmarks.
It is taking me around 7 hours for the encoder.conv1 layer, of which cross-validation takes ~5-6 hours, and similarly for the other layers.
Attaching a screenshot for reference.

ONNX i/o

From Ko:

I was wondering whether you have considered the ONNX format (https://onnx.ai) for dealing with neural network models built on various different platforms. This is the open neural network exchange format, especially built for interchangeable AI models built on almost any platform (PyTorch, TensorFlow, Caffe, etc.). This has been developed by Microsoft and Facebook. It seems fairly simple to convert any existing model built on any of these platforms to ONNX format and share it with others (who might not be using the same platform for their own model development endeavors).

I was thinking that maybe, for simplicity at our end, we can make it mandatory to convert every model to the ONNX format before submission.

Having ONNX I/O (import/export) is probably a good idea once we want to release mapped brain-models.
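
A minimal sketch of what the export step could look like for a PyTorch model; torch.onnx.export is the standard entry point, and the model, input size, and file name are illustrative:

import torch
import torchvision.models

model = torchvision.models.resnet18(pretrained=True)
model.eval()
dummy_input = torch.randn(1, 3, 224, 224)  # one example image batch
# write the traced graph and weights to the interchange format
torch.onnx.export(model, dummy_input, 'resnet18.onnx')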

Difference between public and private benchmarks?

I'm trying to replicate the brain-scoring of CORnet-S. I noticed that the brain-score leaderboard uses the MajajHong2015.IT-pls (V3) benchmark while the PyTorch candidate_models example uses dicarlo.MajajHong2015public.IT-pls. Are these the same?
