Explanation: Input-Feature Unit-Output Correlation Maps (Envelope-Activation Correlation)

Filter to frequency bands:

for low_cut_hz, high_cut_hz in filterbands:
    log.info("Compute filterband from {:.1f} to {:.1f}...".format(
        low_cut_hz, high_cut_hz))
    if low_cut_hz > 0 and high_cut_hz < 125:
        filtered = bandpass_topo(train_topo, low_cut_hz,
                                 high_cut_hz, sampling_rate=250.0, axis=0,
                                 filt_order=4)
    elif low_cut_hz == 0:
        filtered = lowpass_topo(train_topo, high_cut_hz,
                                sampling_rate=250.0, axis=0, filt_order=4)
    else:
        # remaining case: high_cut_hz at or above the Nyquist frequency (125 Hz);
        # truncated in the original snippet, presumably a highpass filter
        pass

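For orientation, a bandpass step along the time axis might look roughly like the sketch below. This is only an assumption about what a helper such as bandpass_topo does (Butterworth design and zero-phase filtering assumed); the actual implementation in the repository may differ.

import numpy as np
import scipy.signal

def bandpass_sketch(data, low_cut_hz, high_cut_hz, sampling_rate=250.0,
                    axis=0, filt_order=4):
    # Butterworth bandpass filter with normalized cutoffs (assumed design).
    nyquist = sampling_rate / 2.0
    b, a = scipy.signal.butter(filt_order,
                               [low_cut_hz / nyquist, high_cut_hz / nyquist],
                               btype='bandpass')
    # Zero-phase filtering along the time axis to avoid phase shifts.
    return scipy.signal.filtfilt(b, a, data, axis=axis)
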
Compute envelope

(absolute value of the Hilbert transform):

env = np.abs(scipy.signal.hilbert(batches_topo, axis=2))
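
As a small self-contained illustration (not repository code), the Hilbert envelope recovers the slowly varying amplitude of an oscillation:

import numpy as np
import scipy.signal

t = np.linspace(0, 2, 500, endpoint=False)            # 2 s at 250 Hz
amplitude = 1.0 + 0.5 * np.sin(2 * np.pi * 1.0 * t)   # slow amplitude modulation
x = amplitude * np.sin(2 * np.pi * 12.0 * t)          # 12 Hz oscillation
envelope = np.abs(scipy.signal.hilbert(x))            # approximately recovers `amplitude`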

Square Envelope

(square_before_mean was True in our setting)
[Envelope was saved to a file and reloaded]

if square_before_mean:
    env = np.square(env)

⚠️ Note: there is possibly a mistake/discrepancy in the paper: we square before averaging (the next step), not after. ⚠️

Compute Moving Average of the envelope within the receptive field of the corresponding layer

Basic steps:

  1. Determine the receptive field size of the layer:

     def transform_to_meaned_trial_env(env, model, i_layer, train_set,
                                       n_inputs_per_trial):
         all_layers = lasagne.layers.get_all_layers(model)
         layer = all_layers[i_layer]
         field_size = get_receptive_field_size(layer)

  2. Average the envelopes within the receptive field using average pooling (see the plain-NumPy sketch after this list):

     meaned_env = get_meaned_batch_env(env, field_size)
     # the averaging itself is done with Theano average pooling:
     pooled = downsample.max_pool_2d(inputs, ds=(field_size, 1), st=(1, 1),
                                     ignore_border=True, mode='average_exc_pad')

  3. Aggregate per-trial envelopes from the per-batch envelopes:

     fb_envs_per_trial = fb_env.reshape(n_trials, n_inputs_per_trial,
                                        fb_env.shape[1], fb_env.shape[2],
                                        fb_env.shape[3])
     trial_env = transform_to_trial_acts(fb_envs_per_trial,
                                         [n_inputs_per_trial] * n_trials,
                                         n_sample_preds=n_sample_preds,
                                         n_trial_len=n_trial_len)

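The pooling above is simply a mean over every window of field_size consecutive time steps (stride 1, no padding). A framework-free sketch of the same moving average in plain NumPy (assuming time on axis 2, as in the Hilbert transform above; not the repository code):

import numpy as np

def moving_average_over_time(env, field_size):
    # env: e.g. (n_batch, n_channels, n_times, 1); mean over each window of
    # field_size consecutive samples along the time axis, stride 1, no padding
    n_times = env.shape[2]
    windows = [env[:, :, i:i + field_size].mean(axis=2)
               for i in range(n_times - field_size + 1)]
    return np.stack(windows, axis=2)
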
Compute Correlation with Activations

For the trained model:

topo_corrs = compute_trial_topo_corrs(model, i_layer, train_set,
                                      exp.iterator, trial_env,
                                      split_per_class=True)

and for the random (untrained) model:

rand_topo_corrs = compute_trial_topo_corrs(rand_model, i_layer, train_set,
                                           exp.iterator, trial_env,
                                           split_per_class=True)

Compute Activations

trial_acts = compute_trial_acts(model, i_layer, iterator, train_set)

That is, compute per-batch activations and then aggregate them into per-trial activations:
# compute number of inputs per trial
i_trial_starts, i_trial_ends = compute_trial_start_end_samples(
    train_set.y, check_trial_lengths_equal=True,
    input_time_length=iterator.input_time_length)
# +1 since the trial end index is inclusive
n_trial_len = i_trial_ends[0] - i_trial_starts[0] + 1
n_inputs_per_trial = int(np.ceil(n_trial_len / float(iterator.n_sample_preds)))
log.info("Create theano function...")
all_layers = lasagne.layers.get_all_layers(model)
all_out_fn = create_pred_fn(all_layers[i_layer])
assert iterator.input_time_length == get_input_time_length(model)
assert iterator.n_sample_preds == get_n_sample_preds(model)
log.info("Compute activations...")
all_outs_per_batch = [all_out_fn(batch[0])
                      for batch in iterator.get_batches(train_set, False)]
batch_sizes = [len(batch[0]) for batch in iterator.get_batches(train_set, False)]
all_outs_per_batch = np.array(all_outs_per_batch)
n_trials = len(i_trial_starts)
log.info("Transform to trial activations...")
trial_acts = get_trial_acts(all_outs_per_batch,
                            batch_sizes, n_trials=n_trials,
                            n_inputs_per_trial=n_inputs_per_trial,
                            n_trial_len=n_trial_len,
                            n_sample_preds=iterator.n_sample_preds)
log.info("Done.")

Compute Correlation of Envelope and Activations

topo_corrs = compute_topo_corrs(trial_env, trial_acts)

Inside compute_topo_corrs, envelope and activations are flattened and correlated:

flat_trial_env = trial_env.transpose(2, 0, 1, 3).reshape(
    trial_env.shape[0] * trial_env.shape[2],
    trial_env.shape[1] * trial_env.shape[3])
flat_trial_acts = trial_acts.transpose(1, 0, 2).reshape(
    trial_acts.shape[1], -1)
# equivalent, but more memory-hungry:
# flat_corrs = np.corrcoef(flat_trial_env, flat_trial_acts)
# relevant_corrs = flat_corrs[:flat_trial_env.shape[0],
#                             flat_trial_env.shape[0]:]
relevant_corrs = corr(flat_trial_env, flat_trial_acts)
topo_corrs = relevant_corrs.reshape(trial_env.shape[2], trial_env.shape[0],
                                    trial_acts.shape[1])
return topo_corrs

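The corr helper computes the Pearson correlation between every row of flat_trial_env and every row of flat_trial_acts, i.e. the upper-right block of np.corrcoef(flat_trial_env, flat_trial_acts) shown in the commented-out lines, just without materializing the full correlation matrix. A minimal sketch of such a helper, assuming exactly this behavior:

import numpy as np

def corr_sketch(a, b):
    # a: (n_a, n_samples), b: (n_b, n_samples)
    # returns (n_a, n_b) Pearson correlations between rows of a and rows of b
    a_c = a - a.mean(axis=1, keepdims=True)
    b_c = b - b.mean(axis=1, keepdims=True)
    cov = a_c.dot(b_c.T) / a.shape[1]
    return cov / np.outer(a_c.std(axis=1), b_c.std(axis=1))
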
In the end, these correlations for the trained and the untrained model are saved:

np.save('{:s}.labelsplitted.env_corrs.{:s}'.format(base_name, file_name_end), topo_corrs)
np.save('{:s}.labelsplitted.env_rand_corrs.{:s}'.format(base_name, file_name_end), rand_topo_corrs)

Once you have these correlations for the trained and the untrained model, you can average them across the units of a layer and then compute their difference (trained versus untrained model correlations; see the sketch below). This is Figure 15 in https://onlinelibrary.wiley.com/doi/full/10.1002/hbm.23730

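A minimal sketch of that post-processing, assuming the units of the layer are on the last axis of the saved arrays (as in the reshape inside compute_topo_corrs); the file names here are only placeholders:

import numpy as np

# placeholder file names; in practice use the .labelsplitted.env_corrs files saved above
topo_corrs = np.load('base_name.labelsplitted.env_corrs.npy')
rand_topo_corrs = np.load('base_name.labelsplitted.env_rand_corrs.npy')

# average across the units of the layer (last axis)
mean_corrs = topo_corrs.mean(axis=-1)
mean_rand_corrs = rand_topo_corrs.mean(axis=-1)

# difference between trained and untrained model correlations (cf. Figure 15)
corr_diff = mean_corrs - mean_rand_corrs
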
As a comparison, we also compute the correlations of the envelope with the class labels (no network involved!); a sketch of this is given below. These are shown in the rightmost plots of Figure 15, and class-resolved (per class) in Figure 14.

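A hedged sketch of this label correlation, assuming per-trial mean envelopes and a one-vs-rest indicator per class (names and array layout here are illustrative, not taken from the repository):

import numpy as np

def envelope_label_correlation(trial_envelopes, labels, n_classes):
    # trial_envelopes: (n_trials, n_channels) mean envelope per trial and channel (assumed layout)
    # labels: (n_trials,) integer class labels
    # returns (n_channels, n_classes) Pearson correlations between the envelope
    # and a one-vs-rest indicator for each class
    one_hot = np.eye(n_classes)[labels]
    env_c = trial_envelopes - trial_envelopes.mean(axis=0, keepdims=True)
    lab_c = one_hot - one_hot.mean(axis=0, keepdims=True)
    cov = env_c.T.dot(lab_c) / len(labels)
    return cov / np.outer(env_c.std(axis=0), lab_c.std(axis=0))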