
pysaliency's Introduction

Pysaliency


Pysaliency is a Python package for saliency modelling. It aims to provide a unified interface to both the traditional saliency maps used in saliency modelling and probabilistic saliency models.

Pysaliency can evaluate most commonly used saliency metrics, including AUC, sAUC, NSS, CC, image-based KL divergence, fixation-based KL divergence and SIM for saliency map models, as well as log likelihood and information gain for probabilistic models.

Installation

You can install pysaliency from PyPI via

pip install pysaliency

Quickstart

import matplotlib.pyplot as plt

import pysaliency

dataset_location = 'datasets'
model_location = 'models'

mit_stimuli, mit_fixations = pysaliency.external_datasets.get_mit1003(location=dataset_location)
aim = pysaliency.AIM(location=model_location)
saliency_map = aim.saliency_map(mit_stimuli.stimuli[0])

plt.imshow(saliency_map)


auc = aim.AUC(mit_stimuli, mit_fixations)

If you already have saliency maps for some dataset, you can import them into pysaliency easily:

my_model = pysaliency.SaliencyMapModelFromDirectory(mit_stimuli, '/path/to/my/saliency_maps')
auc = my_model.AUC(mit_stimuli, mit_fixations)
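
The same model object exposes further metrics. A hedged sketch using method names that appear elsewhere on this page (NSSs, and fixation_based_KL_divergence with nonfixations='uniform'); the exact signatures are an assumption here:

nss_per_fixation = my_model.NSSs(mit_stimuli, mit_fixations)                                           # per-fixation NSS values
fb_kl = my_model.fixation_based_KL_divergence(mit_stimuli, mit_fixations, nonfixations='uniform')      # fixation-based KL divergence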

Check out the Tutorial for a more detailed introduction!

Included datasets and models

Pysaliency provides several important datasets:

  • MIT1003
  • MIT300
  • CAT2000
  • Toronto
  • Koehler
  • iSUN
  • SALICON (both the 2015 and the 2017 edition, each with both the original mouse traces and the inferred fixations)
  • FIGRIM
  • OSIE
  • NUSEF (the part with public images)

and some influential models:

  • AIM
  • SUN
  • ContextAwareSaliency
  • BMS
  • GBVS
  • GBVSIttiKoch
  • Judd
  • IttiKoch
  • RARE2012
  • CovSal

These models use the original code, which is often MATLAB. Therefore, a MATLAB licence is required to run them, although quite a few of them also work with Octave (see below).

Using Octave

Pysaliency will fall back to Octave if MATLAB is not installed. Some models work with Octave, e.g. AIM and GBVSIttiKoch. On Debian/Ubuntu you need to install octave, octave-image, octave-statistics and liboctave-dev.
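
For example, the packages above can be installed with:

sudo apt install octave octave-image octave-statistics liboctave-dev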

These models and datasets seem to work with Octave:

  • models
    • AIM
    • GBVSIttiKoch
  • datasets
    • Toronto
    • MIT1003
    • MIT300
    • SALICON

Dependencies

The Judd model needs some additional libraries. On Ubuntu/Debian you need to install these packages: libopencv-core-dev, libopencv-flann-dev, libopencv-imgproc-dev, libopencv-photo-dev, libopencv-video-dev, libopencv-features2d-dev, libopencv-objdetect-dev, libopencv-calib3d-dev and libopencv-ml-dev, as well as the OpenCV contrib header opencv2/contrib/contrib.hpp.
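
For example, the packages above can be installed with:

sudo apt install libopencv-core-dev libopencv-flann-dev libopencv-imgproc-dev libopencv-photo-dev libopencv-video-dev libopencv-features2d-dev libopencv-objdetect-dev libopencv-calib3d-dev libopencv-ml-dev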

pysaliency's People

Contributors

matthias-k, naman0210


pysaliency's Issues

how to use own model?

Hi matthias-k,
I tried the sample code you gave, but it failed.
How can I use pysaliency to evaluate the effectiveness of my model if I use it to generate saliency images (.png) for the test set?

The fixations are in .jpg format.
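
A minimal sketch of how such precomputed .png saliency maps can be evaluated, following the README quickstart above (the directory path is hypothetical, and test_stimuli/test_fixations are placeholders for your dataset; the saliency map files must correspond to the stimuli):

my_model = pysaliency.SaliencyMapModelFromDirectory(test_stimuli, '/path/to/my/saliency_maps')
auc = my_model.AUC(test_stimuli, test_fixations)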

pysaliency roc not found

I installed the pysaliency module, but I still get the following error:

File "/home/pysaliency-master/pysaliency/__init__.py", line 8, in <module>
  from .saliency_map_models import SaliencyMapModel, GeneralSaliencyMapModel, FixationMap
File "/home/pysaliency-master/pysaliency/saliency_map_models.py", line 13, in <module>
  from .roc import general_roc
ImportError: No module named roc

I am using Python 2.7 on an Ubuntu system.

Error installing pysaliency through pip requirements.txt

Setting:

I have requirements.txt as below:

...
pysaliency==0.2.20
...

Steps to reproduce the problem:

sudo pip3 install -r requirements.txt

Error:

    ERROR: Command errored out with exit status 1:
     command: /Users/nam.wook.kim/Projects/vislab/env/bin/python3 -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/private/var/folders/hf/s4wflwvn133cxdhqnln404vw0000gn/T/pip-install-0911sp1o/pysaliency_a0e3a46569494c3ba607eb6126b4c59b/setup.py'"'"'; __file__='"'"'/private/var/folders/hf/s4wflwvn133cxdhqnln404vw0000gn/T/pip-install-0911sp1o/pysaliency_a0e3a46569494c3ba607eb6126b4c59b/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /private/var/folders/hf/s4wflwvn133cxdhqnln404vw0000gn/T/pip-pip-egg-info-og66mmr4
         cwd: /private/var/folders/hf/s4wflwvn133cxdhqnln404vw0000gn/T/pip-install-0911sp1o/pysaliency_a0e3a46569494c3ba607eb6126b4c59b/
    Complete output (5 lines):
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "/private/var/folders/hf/s4wflwvn133cxdhqnln404vw0000gn/T/pip-install-0911sp1o/pysaliency_a0e3a46569494c3ba607eb6126b4c59b/setup.py", line 7, in <module>
        import numpy as np
    ModuleNotFoundError: No module named 'numpy'
    ----------------------------------------
WARNING: Discarding https://files.pythonhosted.org/packages/04/51/bedc02df4be75662d2d11a474c669c67ba4dc495be249a5e6cd4be7a0625/pysaliency-0.2.20.tar.gz#sha256=f3a05df32e4b31a2e459ccfa653686cdde169a45017ac6e15af6cfac276187b3 (from https://pypi.org/simple/pysaliency/). Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
ERROR: Could not find a version that satisfies the requirement pysaliency==0.2.20 (from versions: 0.2.0, 0.2.3, 0.2.17, 0.2.19, 0.2.20, 0.2.21)
ERROR: No matching distribution found for pysaliency==0.2.20

Solution

Installing Cython and numpy before pip install solves the issue.

sudo pip3 install Cython
sudo pip3 install numpy
sudo pip3 install -r requirements.txt

Unable to download MIT1003 in Colab

I wanted to download the MIT1003 dataset in a Colab environment. As suggested in the Readme, I tried installing Octave and the other packages, but it still runs into an error.

I installed Octave and related packages in Colab via:

! apt install octave
! apt install octave liboctave-dev
! apt install octave-statistics
! apt install octave-image

Running this

dataset_location = 'datasets'
model_location = 'models'
mit_stimuli, mit_fixations = pysaliency.external_datasets.get_mit1003(location=dataset_location)

It ran into the following error:

Downloading file: 235MB [00:07, 33.3MB/s]
Checking md5 sum...
Downloading file: 47.1MB [00:02, 23.1MB/s]
Checking md5 sum...
Downloading file: 32.8kB [00:00, 430kB/s]
Checking md5 sum...
Creating stimuli
Creating fixations
Running original code to extract fixations. This can take some minutes.
Warning: In the IPython Notebook, the output is shown on the console instead of the notebook.

CalledProcessError                        Traceback (most recent call last)
<ipython-input-...> in <module>()
      3
      4
----> 5 mit_stimuli, mit_fixations = pysaliency.external_datasets.get_mit1003(location=dataset_location)

3 frames
/usr/lib/python3.7/subprocess.py in check_call(*popenargs, **kwargs)
    361     if cmd is None:
    362         cmd = popenargs[0]
--> 363         raise CalledProcessError(retcode, cmd)
    364     return 0
    365

CalledProcessError: Command '['/usr/bin/octave', '--traditional', '--eval', "try;extract_all_fixations;;catch exc;struct_levels_to_print(10);print_struct_array_contents(true);disp(lasterror);for i=1:size(lasterror.stack);disp(lasterror.stack(i));end;disp('ERROR');exit(1);end;quit"]' returned non-zero exit status 1.

Is there any solution to this issue? Please help.

strange issue in pysaliency-matlab cooperation

I have a strange issue running pysaliency: it notoriously reports a 'file not found' error whenever the code uses MATLAB.
Following the Quickstart:

1. It happened when downloading the MIT1003 dataset. Magically, when I called pysaliency.get_mit1003 interactively while debugging, it downloaded, so now I have the fixations and stimuli (so I can at least get the center bias for DeepGaze).

But in the very next step it happens again:

2.

aim = pysaliency.AIM(location=model_location)

Python reports:

File "C:\Users\squro\.conda\envs\DS1\lib\site-packages\scipy\io\matlab\mio.py", line 45, in open_file
  return open(file_like, mode), True
FileNotFoundError: [Errno 2] No such file or directory: 'C:\Users\squro\AppData\Local\Temp\tmpcaf9knjd\saliency_map.mat'

whereas the MATLAB console shows:

Reading Image.
Error using imread>get_full_filename (line 567)
File "C:\Users\squro\AppData\Local\Temp\tmp4j41rzxk\stimulus.png" does not exist.

Error in imread (line 374)
fullname = get_full_filename(filename);

Error in AIM (line 142)
inimage = (imresize(im2double((imread(filename))),resizesize));

Error in AIM_wrapper (line 5)
saliency_map = AIM(filename, 1.0, convolve, filters);
ERROR

What is strange: Python (3.7.7) reports the error even before the MATLAB (2020a) console opens.
Edit: Win10 OS; maybe that is the issue.

About salicon-2017 evaluation

The gaze locations of the SALICON-2017 challenge include a few points beyond the image boundary. For example, the image size is (480, 640) but a gaze location is (xxx, 641). The question is how to deal with these points in the evaluation; in this code, they raise an out-of-bounds error.
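
One possible workaround (a suggestion, not an official recommendation) is to clip the offending coordinates to the image bounds before evaluation. A minimal numpy sketch with made-up coordinates:

import numpy as np

height, width = 480, 640
xs = np.array([120.0, 641.0])    # example gaze x coordinates; the second is out of bounds
ys = np.array([50.0, 479.0])
xs = np.clip(xs, 0, width - 1)   # 641 -> 639
ys = np.clip(ys, 0, height - 1)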

clip_out_of_stimulus_fixations broken after refactoring

Note to myself: clip_out_of_stimulus_fixations seems broken after the refactoring, because it applies np.clip to a VariableLengthArray which seems to fail in very funny ways. I need to add a test for clip_out_of_stimulus_fixations and fix the bug.

Broken download link for the RARE2012 model

Hello matthias-k,

the download link for the RARE2012 model seems to be broken. Is there a new one that works, or is there another way to get the patched model?

With kind regards
Marlow

Context Aware Saliency

Recently there have been cyberattacks on the Technion university, so I can't access ContextAwareSaliency from them. Are there any alternative links or models for ContextAwareSaliency? Thank you.

Error while downloading mit1003

Hi!
I'm downloading the MIT1003 dataset using the command

mit1003_stimuli, mit1003_fixations = pysaliency.get_mit1003(location='test_datasets_py3')

and I'm getting the following error (tried on macOS & Ubuntu with Octave):

scalar structure containing the fields:

file: 1x40 sq_string
name: 1x21 sq_string
line: 1x1 scalar
column: 1x1 scalar
scope: 1x1 scalar
context: 1x1 scalar

ERROR

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/vadym/.conda/envs/pysaliency3/lib/python3.6/site-packages/pysaliency/external_datasets.py", line 350, in get_mit1003
    run_matlab_cmd('extract_all_fixations;', cwd=temp_dir)
  File "/home/vadym/.conda/envs/pysaliency3/lib/python3.6/site-packages/pysaliency/utils.py", line 256, in run_matlab_cmd
    sp.check_call([matlab] + args, cwd=cwd)
  File "/home/vadym/.conda/envs/pysaliency3/lib/python3.6/subprocess.py", line 291, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['/usr/bin/octave', '--traditional', '--eval', "try;extract_all_fixations;;catch exc;disp(lasterror);for i=1:size(lasterror.stack);disp(lasterror.stack(i));end;disp('ERROR');exit(1);end;quit"]' returned non-zero exit status 1.

FileNotFoundError when attempting to download MIT1003 database

Hello,

I've encountered a FileNotFoundError issue while trying to download the MIT1003 dataset using pysaliency. Below is the error traceback:

FileNotFoundError                Traceback (most recent call last)
File ~\anaconda3\Lib\site-packages\scipy\io\matlab\_mio.py:39, in _open_file(file_like, appendmat, mode)
     38 try:
---> 39     return open(file_like, mode), True
     40 except OSError as e:
     41     # Probably "not found"

FileNotFoundError: [Errno 2] No such file or directory: '..\\Local\\Temp\\tmpwnyng9c2\\extracted\\i05june05_static_street_boston_p1010764.jpeg_CNG.mat'

This error occurs during the extraction and loading process of the MIT1003 dataset. It seems like the script is unable to find or access the .mat file after downloading and extracting the dataset.

I am using pysaliency version 0.2.21 on an Anaconda environment with Python. I have also ensured that all dependencies are up to date and have attempted to run the code on different MATLAB versions (2023b and 2020a) to check for compatibility issues, but the error persists.

Could you please provide guidance on how to resolve this issue, or is there a known workaround to successfully download and access the MIT1003 dataset through pysaliency?

Thank you for your assistance.

[Documentation] Getting pysaliency to work on OS X

I couldn't get this to work on OS X (clang: error: unsupported option '-fopenmp') until I installed a new version of gcc. Here's what worked for me:

sudo port install gcc7
sudo port select --set gcc mp-gcc7
python setup.py install

Trying to understand fixations.x_hist vs. fixations.scanpath

Hello,

I'm trying to understand fixations.x_hist vs. fixations.scanpaths. From my understanding, x_hist is a list of fixation histories with items of variable length. Essentially, you have a growing list "sandwiched" between empty lists, and the item before an empty list represents a complete scanpath. I'm following the tutorial notebook and am working with MIT1003, so to test this understanding I have the following block of code:

for x in fixations.x_hist[:20]: print(x)
print('-'*100)
print(scanpaths.xs[0])
print(scanpaths.xs[1])
print(scanpaths.xs[2])

This gives me the following output:
[screenshot of the color-coded output omitted]

(I pasted the output as an image for the color coding). I expected the colored outputs to match based on my code. It seems like they match but the scanpaths have an extra item (highlighted in yellow).

I'm wondering if my understanding of the data structure is wrong. I would love some clarity.
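
One reading that would be consistent with the output described above (an assumption, not a confirmed answer): x_hist holds only the fixations preceding the current fixation, while fixations.x holds the current one, so a full scanpath is the last fixation's history plus that fixation itself:

i = 19                                                   # hypothetical index of the last fixation of a scanpath
full_xs = list(fixations.x_hist[i]) + [fixations.x[i]]   # should match the corresponding scanpaths.xs entry

That would explain the extra item highlighted in yellow.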

Fixation Based KL-Divergence Memory Use Issue

Hi Matthias,

Back again with another memory-related issue -- and don't worry, I've updated pysaliency to the latest release since my last post :P

Setting 'caching' to False when using SaliencyMapModelFromDirectory still helps me get around some of the memory use issues I described in issue #9 .
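
For reference, that workaround looks like this (the path is hypothetical):

my_model = pysaliency.SaliencyMapModelFromDirectory(stimuli, '/path/to/saliency_maps', caching=False)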

However, I'm running into what feels like a related issue when I try and use "fixation_based_KL_divergence" on the same data when the "nonfixations" flag is set to "uniform".

After examining the code for this a bit, I id'd what I think is the likely culprit: with the flag set this way, you create the nonfixation values by appending the flattened salience map values to a list in a loop (all this happens between lines 531-545 in "saliency_map_models.py").

This is workable for small to medium-sized sets (I've tested it on the MIT 1003, Toronto, and Koehler datasets).

But given the large size of the non-fixation arrays, with the CAT 2000 dataset (and presumably some of the other big ones) the growth of that list ends up filling both my available RAM and swap partition (~64gb total) and my system just kills python before the analysis finishes.

As a simple kludgy work-around, it was straightforward to copy the relevant part of the code and make it run over individual images/maps/fixation sets, then average the scores. But this seems to lead to an overestimate of the divergence, e.g. for the MIT 1003 set:

[screenshot: per-image KL divergence results omitted]

Do you run into the same issue on whatever your dev system for pysaliency is? If you do, do you have a recommended work-around?

As a more general point (and I'm happy to file a feature request separately to this end if that'd help with tracking): would you consider providing some kind of concordance between the MIT Saliency Benchmark metrics and the combinations of functions/settings that pysaliency provides that align with them?

Asking mostly because if I'm giving us both needless headaches because I incorrectly thought this combination of analysis function/flags is the one needed for reporting consistent with the public benchmarks, I'll be giving myself a good kicking 😝

Thanks again!

Error while getting mit1003 dataset

Hi!
I have installed pysaliency successfully, but when I tried to run the quickstart example, an error occurred.
Logs as follows:

File "C:/Users/Administrator/PycharmProjects/untitled/eye_fixations/test.py", line 7, in
mit_stimuli, mit_fixations = pysaliency.external_datasets.get_mit1003(location=dataset_location)
File "C:\ProgramData\Anaconda3\lib\site-packages\pysaliency\external_datasets.py", line 422, in get_mit1003
return _get_mit1003('MIT1003', location=location, include_initial_fixation=False)
File "C:\ProgramData\Anaconda3\lib\site-packages\pysaliency\external_datasets.py", line 269, in _get_mit1003
stimuli = _load(os.path.join(location, 'stimuli.hdf5'))
File "C:\ProgramData\Anaconda3\lib\site-packages\pysaliency\external_datasets.py", line 78, in _load
return dill.load(open(pydat_filename, 'rb'))
FileNotFoundError: [Errno 2] No such file or directory: 'datasets\MIT1003\stimuli.pydat'

CAT 2000 Dataset/Memory Use Issues

Hi Matthias,

Nice work -- this package's great.

Having an issue with the code for the CAT 2000 training data set/the package's memory footprint.

I've created some saliency maps for the CAT 2000 training data outside pysaliency itself.

I can load the stimuli and fixations for the set from a local copy using the "pysaliency.get_cat2000_train()" command.

The saliency maps are organized into the same folder structure as the source data, with each saliency map contained in its category specific folder.

If I try to load them from the folder containing these category-specific sub-folders using the "pysaliency.SaliencyMapModelFromDirectory" command, I get the error in the screenshot at the end of this issue.

Looks like this comes from the fact that this function doesn't work recursively. I can get around this by using the "pysaliency.SaliencyMapModelFromFiles" function with a list of paths to each saliency map (see the sketch below).
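
A sketch of that workaround under the category sub-folder layout described above (paths hypothetical; that the sorted file list lines up with the order of the stimuli is an assumption here):

import os
from glob import glob

import pysaliency

cat2000_stimuli, cat2000_fixations = pysaliency.get_cat2000_train(location='datasets')
# collect one saliency map per stimulus from the category-specific sub-folders
files = sorted(glob(os.path.join('/path/to/saliency_maps', '*', '*.png')))
my_model = pysaliency.SaliencyMapModelFromFiles(cat2000_stimuli, files)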

The trouble is that this seems to use quite a bit of memory.

My computer has 32gb of ram and a 32gb swap partition. Calculating the Judd-AUC score using the "SaliencyMapModelFromDirectory" function for the MIT 1003 dataset consumes about 8.1gb. I note also that even after the score is calculated, the additional memory used is not released without restarting the ipython kernel.

If I try to calculate the same score for the CAT 2000 training set using the "SaliencyMapModelFromFiles" function, it fills both the ram and the swap partition completely, causing the ipython kernel to die.

Could you make a recommendation on how to work with this dataset in a slightly more memory efficient way? Do you have a sense of what might be responsible for the memory use issue otherwise?

In case you think this might be a system/python environment specific issue, here are (I think all of) the relevant specs:

OS: Ubuntu 16.04 LTS
Python environment:

  • Anaconda
  • Python 3.5.5
  • Numpy 1.14.5
  • imageio 2.3.0
  • boltons 18.0.0
  • scipy 1.1.0
  • pysaliency built and installed from source using the version hosted here (looks like I cloned the repo on September 24, 2018, so the state of the codebase is at commit 6d9c394)

[screenshot: error from SaliencyMapModelFromDirectory omitted]

Thanks again!

Having issues with pysaliency.external_datasets.coco_freeview.get_COCO_Freeview_train

Hi,

This is a great repo! I had an issue with the pysaliency.external_datasets.coco_freeview.get_COCO_Freeview_train function. Specifically, it seems that the function creates the stimuli and fixations, but then crashes out at the very end with the following error:

UFuncTypeError: ufunc 'logical_or' did not contain a loop with signature matching types (None, <class 'numpy.dtypes.StrDType'>) -> None

Would someone be able to help me resolve this error?

BTW, I was able to work with the MIT1003 and CAT2000 datasets using this repo.

All the best!

Request for updating the pip version

Hi,

I was wondering if you would mind updating the pip version (currently there is only a 2021 version there) to match the version here?

Thanks,

google_drive download code is invalid

In pysaliency.external_datasets._get_SALICON_stimuli (#1293):

download_file_from_google_drive('1g8j-hTT-51IG1UFwP0xTGhLdgIUCW5e5', stimuli_file)
check_file_hash(stimuli_file, 'eb2a1bb706633d1b31fc2e01422c5757')

pip install not up to date

Hi!

I have tried to pip install pysaliency, but the code I obtained does not seem up to date with the one in this repo (for example, it still includes the unsupported imports from scipy.misc).

I have tried to download the code and install it directly, but I have run into a number of problems.
Here are the steps which I believe yielded the best results:

  • download and unzip the project
  • move pysaliency-master to /home/me/.local/lib/python3.6/site-packages/
  • cd to pysaliency-master and run python3 setup.py build_ext --inplace

When I try to run a script containing the quickstart, I get the following error:

Traceback (most recent call last):
  File "AIM_main.py", line 6, in <module>
    mit_stimuli, mit_fixations = pysaliency.external_datasets.get_mit1003(location=dataset_location)
AttributeError: module 'pysaliency' has no attribute 'external_datasets'

where AIM_main.py is the name of my script.

I am using Python 3.6 on Ubuntu 18.04 and am using Octave (although I doubt Octave has anything to do with this particular error).

Hope I have been clear,
Many thanks in advance

Error with AUC shuffled and NSS

Hello @matthias-k ,

I just found this error when I tried to compute shuffled AUC and NSS:

aucs = self.AUC_per_image(stimuli, fixations, nonfixations=nonfixations, verbose=verbose)
  File "/home/env/local/lib/python2.7/site-packages/pysaliency/saliency_map_models.py", line 253, in AUC_per_image
    xs *= stimuli.sizes[n][1]/widths[other_ns]
TypeError: Cannot cast ufunc multiply output from dtype('float64') to dtype('int64') with casting rule 'same_kind'

nss = my_model.NSSs(stimuli_salicon_val[0:50], fixations_salicon_val[fixations_salicon_val.n < 50], verbose=True)
  File "/home/env/local/lib/python2.7/site-packages/pysaliency/saliency_map_models.py", line 468, in NSSs
    _values -= mean
TypeError: Cannot cast ufunc subtract output from dtype('float64') to dtype('uint8') with casting rule 'same_kind'

I think the problem can be solved by adding

xs *= (stimuli.sizes[n][1]/widths[other_ns]).astype(xs.dtype)

in both cases. Is that correct?

Cheers,
Junting
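
For context, the casting behaviour behind both errors can be reproduced in a few lines of plain numpy:

import numpy as np

xs = np.array([100, 200, 300])      # int64, like the fixation coordinates above
try:
    xs *= 0.5                       # in-place multiply would have to cast float64 back to int64
except TypeError as e:
    print(e)                        # Cannot cast ufunc ... with casting rule 'same_kind'
xs = (xs * 0.5).astype(xs.dtype)    # explicit cast, as in the fix proposed above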

Error with import pysaliency

Hello,

I am getting an error when I run import pysaliency:

Traceback (most recent call last):
  File "01-evaluate.py", line 1, in <module>
    import pysaliency
  File "/home/pysaliency-master/pysaliency/__init__.py", line 8, in <module>
    from .saliency_map_models import SaliencyMapModel, GeneralSaliencyMapModel, FixationMap
  File "/home/pysaliency-master/pysaliency/saliency_map_models.py", line 13, in <module>
    from .roc import general_roc
ImportError: No module named roc

I have followed all installation instructions:

pip install numpy scipy theano Cython natsort dill hdf5storage
pip install git+https://github.com/matthias-k/optpy
pip install git+https://github.com/matthias-k/pysaliency
python setup.py install

Best,

Error when running code in “QuickStart”

Hello! @matthias-k

After installing the pysaliency package and running the code from your "Quickstart", the following error occurred.

In [4]: import pysaliency

In [5]: dataset_location = 'datasets'
   ...: model_location = 'models'
   ...:
   ...: mit_stimuli, mit_fixations = pysaliency.external_datasets.get_mit1003(location=dataset_location)
   ...: aim = pysaliency.AIM(location=model_location)
   ...: saliency_map = aim.saliency_map(mit_stimuli.stimuli[0])
   ...:
   ...: plt.imshow(saliency_map)
   ...:
   ...:
   ...: auc = aim.AUC(mit_stimuli, mit_fixations)
Downloading file:  16%|███████▋                                         | 36.8M/235M [01:59<10:41, 309kB/s]
Checking md5 sum...
~/Anaconda/anaconda3/envs/pytorch_dl/lib/python3.6/site-packages/pysaliency/utils.py:327: UserWarning: MD5 sum of /var/folders/8n/d46946h949n5jj4ndz0q4khr0000gn/T/tmp7rcls1ne/ALLSTIMULI.zip has changed. Expected 0d7df8b954ecba69b6796e77b9afe4b6 but got 78bc4e1d01996155ed7dc0267b751429. This might lead to this code producing wrong data.
  " this code producing wrong data.".format(filename, md5_hash, file_hash))
Downloading file: 47.1MB [00:44, 1.05MB/s]
Checking md5 sum...
Downloading file: 32.8kB [00:00, 318kB/s]
Checking md5 sum...
Creating stimuli
---------------------------------------------------------------------------
BadZipFile                                Traceback (most recent call last)
<ipython-input-5-e4d86b1dc162> in <module>
      2 model_location = 'models'
      3
----> 4 mit_stimuli, mit_fixations = pysaliency.external_datasets.get_mit1003(location=dataset_location)
      5 aim = pysaliency.AIM(location=model_location)
      6 saliency_map = aim.saliency_map(mit_stimuli.stimuli[0])

Help running

Sorry for the inconvenience, but I'm having trouble running the code provided in the getting started section with Octave.

I've installed it using

pip3 install git+https://github.com/matthias-k/pysaliency.git

and ran it with python3, but I'm getting the error below:

Loading DATA/ems i103385030.jpeg
  scalar structure containing the fields:

    message = 'nanmedian' undefined near line 313 column 28
    identifier = Octave:undefined-function
    stack =

      3x1 struct array containing the fields:

        file =
        {
          [1,1] = /tmp/tmpd_xkgbr1/DatabaseCode/checkFixations.m
          [2,1] = /tmp/tmpd_xkgbr1/extract_fixations.m
          [3,1] = /tmp/tmpd_xkgbr1/extract_all_fixations.m
        }


        name =
        {
          [1,1] = checkFixations
          [2,1] = extract_fixations
          [3,1] = extract_all_fixations
        }


        line =
        {
          [1,1] =  313
          [2,1] =  11
          [3,1] =  1118
        }


        column =
        {
          [1,1] =  26
          [2,1] =  19
          [3,1] =  1
        }


        scope =
        {
          [1,1] = []
          [2,1] = []
          [3,1] = []
        }


        context =
        {
          [1,1] = 0
          [2,1] = 0
          [3,1] = 0
        }


  scalar structure containing the fields:

    file = /tmp/tmpd_xkgbr1/DatabaseCode/checkFixations.m
    name = checkFixations
    line =  313
    column =  26
    scope = []
    context = 0
  scalar structure containing the fields:

    file = /tmp/tmpd_xkgbr1/extract_fixations.m
    name = extract_fixations
    line =  11
    column =  19
    scope = []
    context = 0
  scalar structure containing the fields:

    file = /tmp/tmpd_xkgbr1/extract_all_fixations.m
    name = extract_all_fixations
    line =  1118
    column =  1
    scope = []
    context = 0
__ERROR__
Traceback (most recent call last):
  File "aim.py", line 6, in <module>
    mit_stimuli, mit_fixations = pysaliency.external_datasets.get_mit1003(location=dataset_location)
  File "/home/pedrofeo/.local/lib/python3.7/site-packages/pysaliency/external_datasets.py", line 450, in get_mit1003
    return _get_mit1003('MIT1003', location=location, include_initial_fixation=False)
  File "/home/pedrofeo/.local/lib/python3.7/site-packages/pysaliency/external_datasets.py", line 362, in _get_mit1003
    run_matlab_cmd('extract_all_fixations;', cwd=temp_dir)
  File "/home/pedrofeo/.local/lib/python3.7/site-packages/pysaliency/utils.py", line 309, in run_matlab_cmd
    sp.check_call([matlab] + args, cwd=cwd)
  File "/usr/lib/python3.7/subprocess.py", line 347, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['/usr/bin/octave', '--traditional', '--eval', "try;extract_all_fixations;;catch exc;struct_levels_to_print(10);print_struct_array_contents(true);disp(lasterror);for i=1:size(lasterror.stack);disp(lasterror.stack(i));end;disp('__ERROR__');exit(1);end;quit"]' returned non-zero exit status 1.

Unable to run evaluate.py

Hi Matthias, I'm trying to evaluate a model using your code, but I get the following error:

python3 evaluate.py evaluate-model -d MIT1003 /home/paul/Downloads/model.pth.tar
Loading dataset mit1003
Traceback (most recent call last):
  File "evaluate.py", line 545, in <module>
    cli()
  File "/usr/lib/python3/dist-packages/click/core.py", line 829, in __call__
    return self.main(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/click/core.py", line 782, in main
    rv = self.invoke(ctx)
  File "/usr/lib/python3/dist-packages/click/core.py", line 1259, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/usr/lib/python3/dist-packages/click/core.py", line 1066, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/lib/python3/dist-packages/click/core.py", line 610, in invoke
    return callback(*args, **kwargs)
  File "evaluate.py", line 125, in evaluate_model
    _evaluate_model(dataset, type, output, evaluation, model_location)
  File "evaluate.py", line 132, in _evaluate_model
    model = load_probabilistic_model(dataset, model_location)
  File "evaluate.py", line 81, in load_probabilistic_model
    model = HDF5Model(stimuli, location)
  File "/home/paul/.local/lib/python3.8/site-packages/pysaliency/precomputed_models.py", line 210, in __init__
    self.parent_model = HDF5SaliencyMapModel(stimuli = stimuli,
  File "/home/paul/.local/lib/python3.8/site-packages/pysaliency/precomputed_models.py", line 190, in __init__
    self.hdf5_file = h5py.File(self.filename, 'r')
  File "/home/paul/.local/lib/python3.8/site-packages/h5py/_hl/files.py", line 442, in __init__
    fid = make_fid(name, mode, userblock_size,
  File "/home/paul/.local/lib/python3.8/site-packages/h5py/_hl/files.py", line 195, in make_fid
    fid = h5f.open(name, flags, fapl=fapl)
  File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
  File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
  File "h5py/h5f.pyx", line 96, in h5py.h5f.open
OSError: Unable to open file (file signature not found)

After digging deeper into the precomputed_models.py file (self.hdf5_file = h5py.File(self.filename, 'r')), the filename here is /home/paul/Downloads/model.pth.tar, which isn't a valid h5py file. What should I do?
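
As a quick sanity check (a suggestion, not from the thread): h5py can tell you whether a file is HDF5 at all; HDF5Model expects an HDF5 file of precomputed predictions, not a PyTorch checkpoint:

import h5py

print(h5py.is_hdf5('/home/paul/Downloads/model.pth.tar'))  # False: a .pth.tar checkpoint is not an HDF5 file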

[Not an issue, but a question] - Raw eye tracking data

Hi. Again, I really appreciate this great work.

In addition to fixations, I am also interested in the raw eye tracking data. From my experience, MIT1003 provides this raw data, but I don't think other large datasets like CAT2000 and COCO Freeview do; I wanted to confirm this. Also, if anyone knows of any large datasets that provide both the stimuli and the raw eye tracking data before it gets processed into fixations/saccades, I'd love to check them out.

Thanks!
