google-research / federated

A collection of Google research projects related to Federated Learning and Federated Analytics.

License: Apache License 2.0

Starlark 1.58% Python 97.16% Jupyter Notebook 1.18% Shell 0.08%

federated's Introduction

Federated Research

Federated Research is a collection of research projects related to Federated Learning and Federated Analytics. Federated learning is an approach to machine learning where a shared global model is trained across many participating clients that keep their training data locally. Federated analytics is the practice of applying data science methods to the analysis of raw data that is stored locally on users’ devices.

Many of the projects contained in this repository use TensorFlow Federated (TFF), an open-source framework for machine learning and other computations on decentralized data. For an overview and introduction to TFF, please see the list of tutorials. For information on using TFF for research, see TFF for research.
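For a flavor of what TFF code looks like, here is a minimal sketch in the style of the TFF tutorials (assuming only a working tensorflow_federated installation):

```python
import tensorflow_federated as tff

# A federated computation is declared with a decorator and can then be
# invoked like an ordinary Python function.
@tff.federated_computation
def hello_world():
  return 'Hello, World!'

print(hello_world())  # b'Hello, World!'
```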

Recommended Usage

The main purpose of this repository is to reproduce experimental results in related papers. None of the projects (or subfolders) here is intended to be a reusable framework or package.

  • The recommended usage for this repository is to git clone it and follow the instructions in each independent project to run the code, usually with bazel.

There is a special module, utils/, that is widely used as a dependency for projects in this repository. Some of the functions in utils/ are in the process of being upstreamed into the TFF package. However, utils/ is not guaranteed to be a stable API, and the code may change at any time.

  • The recommended usage for utils/ is to fork the necessary piece of code for your own research projects.
  • If you find utils/ (and perhaps other projects) helpful as a module that your own projects should depend on (and you accept the risk of depending on potentially unstable and unsupported code), you can use git submodule and add the module to your Python path.

Contributing

This repository contains Google-affiliated research projects related to federated learning and analytics. If you are working with Google collaborators and would like to feature your research project here, please review the contribution guidelines for coding style, best practices, etc.

Pull Requests

We currently do not accept pull requests for this repository. If you have feature requests or encounter a bug, please file an issue with the project owners.

Issues

Please use GitHub issues to communicate with project owners for requests and bugs. Add [project/folder name] to the issue title so that we can easily find the best person to respond.

Questions

If you have questions related to TensorFlow Federated, please direct your questions to Stack Overflow using the tensorflow-federated tag.

If you would like more information on federated learning, please see the following introduction to federated learning. For a more in-depth discussion, see the following manuscripts.

federated's People

Contributors

abhin-shah, advaitgadhikar, alshedivat, dancingclover, galenmandrew, garyxcheng, hawkinsp, hsidahmed865, jkr26, jonycgn, jsimsa, jywa, kairouzp, karan1149, kenziyuliu, luyang1125, michaelreneer, nicolemitchell, nightldj, prachetit, qlzh727, rchen152, saugenst, slowbull, wennanzhu, wushanshan, xiaoyux11, zacharygarrett, zcharles8, zhuchen03


federated's Issues

[optimization] Maybe construct a LearningRateSchedule instead of a server_lr_schedule callback?

server_lr = server_lr_schedule(server_state.round_num)

Is there a specific reason not to use the optimizer's ability to receive a LearningRateSchedule as a parameter? The LearningRateSchedule can be constructed from the command-line parameters, and the code that handles server_lr_schedule could be removed. (The learning rate would then be calculated from the optimizer's iterations variable, which is stored in the server state and restored each round.)

I understand that this is a problem on the client, but why is this behavior used on the server? Am I missing something here?

Thank you!
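For reference, a minimal sketch of the proposed alternative using the standard Keras API (not this repo's code; the schedule parameters below are hypothetical stand-ins for the command-line flags):

```python
import tensorflow as tf

# Construct a LearningRateSchedule from (hypothetical) flag values and hand it
# directly to the optimizer; the optimizer then derives the learning rate from
# its own `iterations` variable at every step.
schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=1.0,  # hypothetical flag value
    decay_steps=100,            # hypothetical flag value
    decay_rate=0.9)             # hypothetical flag value
server_optimizer = tf.keras.optimizers.SGD(learning_rate=schedule)
```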

Privacy Analysis of DP-FTRL

[Screenshot: DP-FTRL algorithm pseudocode]

From what I understand, the privacy analysis depends on the tree aggregation protocol and the noise it adds.

On line 7 of the algorithm above, would the privacy analysis still hold if we changed the FTRL update to a standard SGD update rule?

How to load and process my own datasets

Hi,
In this framework, you use some datasets available from TFF (for example, tff.simulation.datasets.emnist.load_data()). I am wondering how I can use my own dataset for an image classification task. My dataset consists of images and labels. Could you please give me a hint on how to convert my dataset into your input data format, or how to modify your code for my own dataset?
Thanks.
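Not an official answer, but one way to wrap per-client data with the public TFF simulation API is sketched below (`images_for` and `labels_for` are hypothetical loaders for your own files):

```python
import collections
import tensorflow as tf
import tensorflow_federated as tff

def images_for(client_id):
  # Hypothetical loader: replace with code that reads this client's images.
  del client_id
  return tf.zeros([10, 28, 28, 1])

def labels_for(client_id):
  # Hypothetical loader: replace with code that reads this client's labels.
  del client_id
  return tf.zeros([10], dtype=tf.int32)

# Build one tf.data.Dataset per client id and wrap the collection as a
# ClientData that TFF simulations can consume.
def create_dataset_for_client(client_id):
  return tf.data.Dataset.from_tensor_slices(
      collections.OrderedDict(pixels=images_for(client_id),
                              label=labels_for(client_id)))

client_data = tff.simulation.datasets.ClientData.from_clients_and_tf_fn(
    client_ids=['client_0', 'client_1'],
    serializable_dataset_fn=create_dataset_for_client)
```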

Issues with federated-gans

pip install absl-py
pip install attr
pip install dm-tree
pip install numpy
pip install Pillow
pip install tensorboard
pip install tensorflow
pip install tensorflow-federated
pip install tensorflow-gan
pip install tensorflow-privacy

Could you please provide a requirements doc or the specific version of each library?

I ran into a lot of version issues with tensorflow and the other packages.

Thanks.

Shakespeare optimization throws an error

Hello, when running the following on the latest version (with tff-nightly and tf-nightly):

python optimization/main/federated_trainer.py \
  --task=shakespeare \
  --clients_per_round=10 \
  --client_datasets_random_seed=1 \
  --client_epochs_per_round=1 \
  --total_rounds=1200 \
  --client_batch_size=4 \
  --shakespeare_sequence_length=80 \
  --client_optimizer=sgd \
  --client_learning_rate=1 \
  --server_optimizer=sgd \
  --server_learning_rate=1 \
  --server_sgd_momentum=0.0 \
  --experiment_name=shakespeare_baseline

I'm really just running the baseline from the adaptive optimization paper, so I can't think of any reason why this should fail.

I'm getting the following error:

Traceback (most recent call last):
  File "/home/amitport/federated-learning-research/federated_learning_research/google_tff_research/optimization/main/federated_trainer.py", line 264, in <module>
    app.run(main)
  File "/home/amitport/.conda/envs/tff/lib/python3.8/site-packages/absl/app.py", line 303, in run
    _run_main(main, args)
  File "/home/amitport/.conda/envs/tff/lib/python3.8/site-packages/absl/app.py", line 251, in _run_main
    sys.exit(main(argv))
  File "/home/amitport/federated-learning-research/federated_learning_research/google_tff_research/optimization/main/federated_trainer.py", line 227, in main
    runner_spec = federated_shakespeare.configure_training(
  File "/home/amitport/federated-learning-research/federated_learning_research/google_tff_research/optimization/shakespeare/federated_shakespeare.py", line 98, in configure_training
    iterative_process = task_spec.iterative_process_builder(tff_model_fn)
  File "/home/amitport/federated-learning-research/federated_learning_research/google_tff_research/optimization/main/federated_trainer.py", line 201, in iterative_process_builder
    return fed_avg_schedule.build_fed_avg_process(
  File "/home/amitport/federated-learning-research/federated_learning_research/google_tff_research/optimization/shared/fed_avg_schedule.py", line 269, in build_fed_avg_process
    dummy_model = model_fn()
  File "/home/amitport/federated-learning-research/federated_learning_research/google_tff_research/optimization/shakespeare/federated_shakespeare.py", line 93, in tff_model_fn
    keras_model=model_builder(),
  File "/home/amitport/federated-learning-research/federated_learning_research/google_tff_research/optimization/shakespeare/federated_shakespeare.py", line 33, in create_shakespeare_model
    return shakespeare_models.create_recurrent_model(
  File "/home/amitport/federated-learning-research/federated_learning_research/google_tff_research/utils/models/shakespeare_models.py", line 55, in create_recurrent_model
    model.add(lstm_layer_builder())
  File "/home/amitport/.conda/envs/tff/lib/python3.8/site-packages/tensorflow/python/training/tracking/base.py", line 536, in _method_wrapper
    result = method(self, *args, **kwargs)
  File "/home/amitport/.conda/envs/tff/lib/python3.8/site-packages/tensorflow/python/keras/engine/sequential.py", line 228, in add
    output_tensor = layer(self.outputs[0])
  File "/home/amitport/.conda/envs/tff/lib/python3.8/site-packages/tensorflow/python/keras/layers/recurrent.py", line 668, in __call__
    return super(RNN, self).__call__(inputs, **kwargs)
  File "/home/amitport/.conda/envs/tff/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer.py", line 970, in __call__
    return self._functional_construction_call(inputs, args, kwargs,
  File "/home/amitport/.conda/envs/tff/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer.py", line 1108, in _functional_construction_call
    outputs = self._keras_tensor_symbolic_call(
  File "/home/amitport/.conda/envs/tff/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer.py", line 841, in _keras_tensor_symbolic_call
    return self._infer_output_signature(inputs, args, kwargs, input_masks)
  File "/home/amitport/.conda/envs/tff/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer.py", line 881, in _infer_output_signature
    outputs = call_fn(inputs, *args, **kwargs)
  File "/home/amitport/.conda/envs/tff/lib/python3.8/site-packages/tensorflow/python/keras/layers/recurrent_v2.py", line 1153, in call
    inputs, initial_state, _ = self._process_inputs(inputs, initial_state, None)
  File "/home/amitport/.conda/envs/tff/lib/python3.8/site-packages/tensorflow/python/keras/layers/recurrent.py", line 868, in _process_inputs
    initial_state = self.get_initial_state(inputs)
  File "/home/amitport/.conda/envs/tff/lib/python3.8/site-packages/tensorflow/python/keras/layers/recurrent.py", line 650, in get_initial_state
    init_state = get_initial_state_fn(
  File "/home/amitport/.conda/envs/tff/lib/python3.8/site-packages/tensorflow/python/keras/layers/recurrent.py", line 2516, in get_initial_state
    return list(_generate_zero_filled_state_for_cell(
  File "/home/amitport/.conda/envs/tff/lib/python3.8/site-packages/tensorflow/python/keras/layers/recurrent.py", line 2998, in _generate_zero_filled_state_for_cell
    return _generate_zero_filled_state(batch_size, cell.state_size, dtype)
  File "/home/amitport/.conda/envs/tff/lib/python3.8/site-packages/tensorflow/python/keras/layers/recurrent.py", line 3014, in _generate_zero_filled_state
    return nest.map_structure(create_zeros, state_size)
  File "/home/amitport/.conda/envs/tff/lib/python3.8/site-packages/tensorflow/python/util/nest.py", line 867, in map_structure
    structure[0], [func(*x) for x in entries],
  File "/home/amitport/.conda/envs/tff/lib/python3.8/site-packages/tensorflow/python/util/nest.py", line 867, in <listcomp>
    structure[0], [func(*x) for x in entries],
  File "/home/amitport/.conda/envs/tff/lib/python3.8/site-packages/tensorflow/python/keras/layers/recurrent.py", line 3011, in create_zeros
    return array_ops.zeros(init_state_size, dtype=dtype)
  File "/home/amitport/.conda/envs/tff/lib/python3.8/site-packages/tensorflow/python/util/dispatch.py", line 206, in wrapper
    return target(*args, **kwargs)
  File "/home/amitport/.conda/envs/tff/lib/python3.8/site-packages/tensorflow/python/ops/array_ops.py", line 2911, in wrapped
    tensor = fun(*args, **kwargs)
  File "/home/amitport/.conda/envs/tff/lib/python3.8/site-packages/tensorflow/python/ops/array_ops.py", line 2960, in zeros
    output = _constant_if_small(zero, shape, dtype, name)
  File "/home/amitport/.conda/envs/tff/lib/python3.8/site-packages/tensorflow/python/ops/array_ops.py", line 2896, in _constant_if_small
    if np.prod(shape) < 1000:
  File "<__array_function__ internals>", line 5, in prod
  File "/home/amitport/.conda/envs/tff/lib/python3.8/site-packages/numpy/core/fromnumeric.py", line 3030, in prod
    return _wrapreduction(a, np.multiply, 'prod', axis, dtype, out,
  File "/home/amitport/.conda/envs/tff/lib/python3.8/site-packages/numpy/core/fromnumeric.py", line 87, in _wrapreduction
    return ufunc.reduce(obj, axis, dtype, out, **passkwargs)
  File "/home/amitport/.conda/envs/tff/lib/python3.8/site-packages/tensorflow/python/framework/ops.py", line 871, in __array__
    raise NotImplementedError(
NotImplementedError: Cannot convert a symbolic Tensor (lstm/strided_slice:0) to a numpy array. This error may indicate that you're trying to pass a Tensor to a NumPy call, which is not supported

Thank you
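For context, a hedged diagnostic rather than a confirmed fix: this exact NotImplementedError is widely reported when NumPy 1.20+ is installed alongside TF builds whose Keras RNN code calls np.prod on symbolic shapes, so checking the installed versions is a reasonable first step:

```python
import numpy as np
import tensorflow as tf

# NumPy >= 1.20 together with certain TF builds is a known trigger for
# "Cannot convert a symbolic Tensor ... to a numpy array" in Keras RNN layers.
print('numpy:', np.__version__)
print('tensorflow:', tf.__version__)
```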

[distributed-dp] DSkellam goes weird with small num_bits

To whom it may concern,

When using the following interface to calculate the local_stddev for DSkellam,

def skellam_params(epsilon,

I obtained different (scale, local_stddev) pairs when I tried different configurations. What struck me as strange is that when I swept bits from 20 down to 16, keeping everything else the same, the results:

  1. First change smoothly while bits >= 17;
  2. Then, all of a sudden, the scale and local_stddev both change drastically, implying they are no longer useful for actual training (e.g., local_stddev jumps past 160,000).

For your convenience, I also made a table for the result:

bits          20     19     18     17     16
local_stddev  5.23   5.29   5.52   6.88   162582.83

Please refer to the appended execution results for more details. It seems that something in the optimization process breaks when bits is too small. My questions:

  1. Has anyone encountered a similar issue?
  2. Does this indicate a limitation of DSkellam, e.g., that it is not necessarily suited to low-bit quantization? (Though I would not consider 16 bits a typical "low-bit" scenario.)

Thanks for your attention, and I'm really looking forward to your help!


Detailed execution results.

>>> skellam_params(epsilon=6,
... l2_clip=3,
... bits=20,
... num_clients=16,
... dim=16777216,
... q=1.0,
... steps=150,
... beta=np.exp(-0.5),
... delta=0.01)
(8348.494662007102, 5.233357786360573)

>>> skellam_params(epsilon=6,
... l2_clip=3,
... bits=19,
... num_clients=16,
... dim=16777216,
... q=1.0,
... steps=150,
... beta=np.exp(-0.5),
... delta=0.01)
(4132.065287771359, 5.286782384491639)

>>> skellam_params(epsilon=6,
... l2_clip=3,
... bits=18,
... num_clients=16,
... dim=16777216,
... q=1.0,
... steps=150,
... beta=np.exp(-0.5),
... delta=0.01)
(1979.514945816654, 5.517849242526566)

>>> skellam_params(epsilon=6,
... l2_clip=3,
... bits=17,
... num_clients=16,
... dim=16777216,
... q=1.0,
... steps=150,
... beta=np.exp(-0.5),
... delta=0.01)
(793.6147645649165, 6.881591755490655)

>>> skellam_params(epsilon=6,
... l2_clip=3,
... bits=16,
... num_clients=16,
... dim=16777216,
... q=1.0,
... steps=150,
... beta=np.exp(-0.5),
... delta=0.01)
(0.02190604694235002, 162582.82632490725)
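For reproducibility, the sweep above condenses to a few lines (assuming skellam_params is importable from this repo's distributed_dp utilities; the call signature is copied from the REPL session):

```python
import numpy as np
# from distributed_dp import ...  # hypothetical import; adjust to your checkout

# Reproduce the sweep behind the table above: only `bits` varies.
for bits in (20, 19, 18, 17, 16):
  scale, local_stddev = skellam_params(
      epsilon=6, l2_clip=3, bits=bits, num_clients=16, dim=16777216,
      q=1.0, steps=150, beta=np.exp(-0.5), delta=0.01)
  print(f'bits={bits}: scale={scale:.6g}, local_stddev={local_stddev:.6g}')
```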

Flags not consistent in federated/utils/optimizers/optimizer_utils.py

Not a major/urgent issue, but just FYI:

In define_optimizer_flags the optimizer flag is defined one way, while in create_optimizer_fn_from_flags it is looked up another way.

For example, if your prefix is '' then define_optimizer_flags defines the flag '_optimizer' while create_optimizer_fn_from_flags looks for 'optimizer'. I fixed it in my own repo by using the `prefixed` method in both define_optimizer_flags and create_optimizer_fn_from_flags.
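A minimal sketch of the described fix (the `prefixed` helper below is modeled on the behavior the report implies, not copied from the repo):

```python
def prefixed(prefix, name):
  # Join the flag prefix and base name, avoiding the leading underscore that
  # appears when the prefix is empty -- the inconsistency described above.
  return f'{prefix}_{name}' if prefix else name

# Both define_optimizer_flags and create_optimizer_fn_from_flags should build
# the flag name the same way:
assert prefixed('', 'optimizer') == 'optimizer'
assert prefixed('client', 'optimizer') == 'client_optimizer'
```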

What is the recommended usage?

I'm a little confused.

I was hoping I could just git submodule this or pip install tff-research or pip install -e git+https://github.com/google-research/federated.

Could you share the intended installation process? (For example, how do I use just utils/ from this project?) I could make it work with some ugly hacks, but I'd rather there were a direct way, preferably without having to build everything with bazel.
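For what it's worth, the submodule route the README mentions boils down to something like the following sketch (the submodule path is a hypothetical example):

```python
import sys

# After `git submodule add https://github.com/google-research/federated third_party/federated`,
# make the checkout importable from your project.
sys.path.append('third_party/federated')

from utils import utils_impl  # one of the modules under utils/; verify against your checkout
```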

Computing epsilon

Hi, I know that this is a work in progress, but I am curious how we can compute epsilon for this implementation of DP for federated learning.

Is it possible to use the epsilon computation proposed here (https://arxiv.org/abs/1908.10530) for the federated model?
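A hedged sketch, not this repo's code, of applying the RDP accountant from tensorflow_privacy at the round level, treating each federated round as one step with client sampling probability q = clients_per_round / total_clients:

```python
from tensorflow_privacy.privacy.analysis.rdp_accountant import (
    compute_rdp, get_privacy_spent)

# All values below are hypothetical placeholders.
q = 10 / 3400            # clients_per_round / total_clients
noise_multiplier = 1.0   # server-side noise multiplier
rounds = 1200            # number of federated rounds
orders = [1 + x / 10.0 for x in range(1, 100)] + list(range(11, 64))

rdp = compute_rdp(q=q, noise_multiplier=noise_multiplier,
                  steps=rounds, orders=orders)
eps, _, _ = get_privacy_spent(orders, rdp, target_delta=1 / 3400)
print('epsilon =', eps)
```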

Number of clients in the mnist dataset

Hi,
Could you tell me which parameter sets the number of clients in the mnist dataset?
In emnist_dataset_test.py, that's the variable TOTAL_NUM_CLIENTS, but in emnist_dataset_test.py I didn't see any variable that sets the number of clients.
Thanks.

AttributeError while trying to run optimization

Hi,
I am currently trying to run:

bazel run :trainer -- --task=emnist_character --total_rounds=100 --client_optimizer=sgd --client_learning_rate=0.1 --client_batch_size=20 --server_optimizer=sgd --server_learning_rate=1.0 --clients_per_round=10 --experiment_name=emnist_fedavg_experiment

The error I get is:

Traceback (most recent call last):
  File "/home/jui/.cache/bazel/_bazel_jui/6be0c55f2c2ec3b2dd0eade5c0ad91be/execroot/org_federated_research/bazel-out/k8-opt/bin/optimization/trainer.runfiles/org_federated_research/optimization/trainer.py", line 31, in <module>
    from optimization import fed_avg_schedule
  File "/home/jui/.cache/bazel/_bazel_jui/6be0c55f2c2ec3b2dd0eade5c0ad91be/execroot/org_federated_research/bazel-out/k8-opt/bin/optimization/trainer.runfiles/org_federated_research/optimization/fed_avg_schedule.py", line 37, in <module>
    ModelBuilder = Callable[[], tff.learning.Model]
AttributeError: module 'tensorflow_federated' has no attribute 'learning'

And a second error:

Traceback (most recent call last):
  File "/home/jui/.cache/bazel/_bazel_jui/6be0c55f2c2ec3b2dd0eade5c0ad91be/execroot/org_federated_research/bazel-out/k8-opt/bin/optimization/trainer.runfiles/org_federated_research/optimization/trainer.py", line 32, in <module>
    from utils import task_utils
  File "/home/jui/.cache/bazel/_bazel_jui/6be0c55f2c2ec3b2dd0eade5c0ad91be/execroot/org_federated_research/bazel-out/k8-opt/bin/optimization/trainer.runfiles/org_federated_research/utils/task_utils.py", line 25, in <module>
    tff.simulation.baselines.cifar100.create_image_classification_task,
AttributeError: module 'tensorflow_federated' has no attribute 'simulation'

And here is the output of pip freeze of my virtual env:

absl-py==0.12.0
astunparse==1.6.3
attrs==21.2.0
bazel==0.0.0.20200723
cached-property==1.5.2
cachetools==3.1.1
certifi==2021.10.8
charset-normalizer==2.0.7
clang==5.0
cloudpickle==2.0.0
dataclasses==0.8
decorator==5.1.0
dill==0.3.4
dm-tree==0.1.6
flatbuffers==1.12
future==0.18.2
gast==0.4.0
google-auth==1.35.0
google-auth-oauthlib==0.4.6
google-pasta==0.2.0
googleapis-common-protos==1.53.0
grpcio==1.37.0
h5py==3.1.0
idna==3.3
importlib-metadata==4.8.1
importlib-resources==5.3.0
jax==0.2.17
jaxlib==0.1.69
keras==2.6.0
keras-nightly==2.7.0.dev2021062900
Keras-Preprocessing==1.1.2
libclang==11.1.0
Markdown==3.3.4
mpmath==1.2.1
numpy==1.19.5
oauthlib==3.1.1
opt-einsum==3.3.0
pandas==1.1.5
portpicker==1.3.9
promise==2.3
protobuf==3.19.0
pyasn1==0.4.8
pyasn1-modules==0.2.8
python-dateutil==2.8.2
pytz==2021.3
requests==2.26.0
requests-oauthlib==1.3.0
retrying==1.3.3
rsa==4.7.2
scipy==1.1.0
semantic-version==2.8.5
six==1.15.0
tb-nightly==2.6.0a20210806
tensorboard==2.7.0
tensorboard-data-server==0.6.1
tensorboard-plugin-wit==1.8.0
tensorflow==2.6.0
tensorflow-addons==0.14.0
tensorflow-datasets==4.4.0
tensorflow-estimator==2.6.0
tensorflow-federated-nightly==0.19.0.dev20210901
tensorflow-hub==0.12.0
tensorflow-metadata==1.2.0
tensorflow-model-optimization==0.5.0
tensorflow-privacy==0.7.3
tensorflow-probability==0.14.1
tensorflow-text-nightly==2.7.0.dev20210811
termcolor==1.1.0
tf-estimator-nightly==2.7.0.dev2021092408
tf-nightly==2.7.0.dev20210806
tfa-nightly==0.15.0.dev20211014164307
tqdm==4.28.1
typeguard==2.13.0
typing-extensions==3.7.4.3
urllib3==1.26.7
Werkzeug==2.0.2
wrapt==1.12.1
zipp==3.6.0

I suspect these errors are due to package version differences. However, I am unable to find the exact package that's causing the errors. Also, I'm using tensorflow-federated-nightly==0.19.0.dev20210901 as suggested in other issues.

[optimization] Reproducing the results of the paper "Adaptive Federated Optimization"

Hi there,

I am trying to reproduce the FedAdam result on cifar100 in Figure 1 using the command below:

bazel run :trainer -- --task=cifar100_image --total_rounds=4000 --client_optimizer=sgd --client_learning_rate=0.031622 --client_batch_size=20 --server_optimizer=adam --server_learning_rate=1.0 --clients_per_round=10 --client_epochs_per_round=1 --experiment_name=fed_Adaptive_sgd_adam_opt -server_adam_epsilon=0.1 --client_datasets_random_seed=1

I got a validation accuracy of around 45%, but the paper reports 53%.

Did I miss something there?
Thanks in advance for your help!

bazel test shared:fed_avg_schedule_test doesn't pass tests due to GPU memory usage

Environment: this docker container with tf-nightly and tff-nightly; GPU: Tesla V100-SXM2 32 GB

I ran bazel test shared:fed_avg_schedule_test and it did not pass the tests because GPU memory was exhausted:
test.log

I was wondering what could be done to get this to work. Ideally, I would like to follow the same testing practices as this repo for my own code and experiments.

I can run

bazel run main:federated_trainer -- --task=emnist_cr --total_rounds=100 \
  --client_optimizer=sgd --client_learning_rate=0.1 --client_batch_size=20 \
  --server_optimizer=sgd --server_learning_rate=1.0 --clients_per_round=10 \
  --client_epochs_per_round=1 --experiment_name=emnist_fedavg_experiment

(the original example) just fine.
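One general TensorFlow knob worth trying, offered as a hedged suggestion rather than this repo's documented practice, is to stop TF from pre-allocating the whole GPU in every test process:

```python
import tensorflow as tf

# Enable on-demand GPU memory growth so parallel bazel test processes do not
# each try to claim all 32 GB up front.
for gpu in tf.config.list_physical_devices('GPU'):
  tf.config.experimental.set_memory_growth(gpu, True)
```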

Computing epsilon for Federated Learning with differential Privacy

Hi developers,

My dataset has 3012 examples.
Following your suggestion in this issue, I calculate the sampling ratio as (clients per round / total clients). My question is whether the T "steps" parameter (the number of rounds of federated model training) in the compute_epsilon function should be (FLAGS.total_rounds * 3012) // FLAGS.client_batch_size, or whether the sampling ratio should be included as well, since a federated learning step depends on the number of clients and clients_per_round. If yes, it would look like this:
eps = compute_epsilon((FLAGS.total_rounds * 3012 * (FLAGS.clients_per_round / number_clients)) // FLAGS.client_batch_size)

  • The other question is that my epsilon value is very high (meaning not very private), even though I played with several parameters: number of clients, batch size, noise multiplier, and the different kinds of T steps mentioned above. Nothing improved it much, so I doubt my algorithm for computing epsilon.
    Do you have a suggestion? Or could you please add the epsilon computation to your code, so everyone will be clear about it?
    Thanks.
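To make the two candidate step counts concrete, here is the arithmetic from the question with hypothetical flag values (this illustrates the question, not which formula the repo intends):

```python
# Hypothetical values standing in for the FLAGS used above.
total_rounds, client_batch_size = 100, 16
clients_per_round, number_clients = 10, 50
num_examples = 3012

# Candidate 1: rounds scaled by the per-client example count only.
steps_v1 = (total_rounds * num_examples) // client_batch_size
# Candidate 2: additionally scaled by the client sampling ratio.
steps_v2 = int(total_rounds * num_examples *
               (clients_per_round / number_clients)) // client_batch_size
print(steps_v1, steps_v2)
```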

Unnecessary `del` in federated GAN code?

This is more a comment than an issue, but I was looking through the federated GAN code and found a piece of code that surprised me:

In /gans/experiments/emnist/train.py:348-350 (and similarly in 352-364) the following function is defined:

def server_gen_inputs_fn(round_num):
    del round_num
    return next(server_gen_inputs_iterator)

What is the purpose of round_num being passed as an argument (by value) when the first line of code inside the function deletes it?
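Not an official answer, but `del` on an argument is a common Python idiom for callbacks: the function must accept `round_num` to satisfy the caller's expected signature, and deleting it documents (and tells linters) that the argument is intentionally unused. A generic sketch:

```python
def constant_gen_inputs_fn(round_num):
  # The caller always passes the round number; this implementation ignores it,
  # and `del` makes that explicit instead of leaving an unused name around.
  del round_num
  return 'same inputs every round'
```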

`get_model_weights` not yet available in TFF

Hi,

I was using the compression folder to understand some basics of TensorFlow Federated. I got blocked by the error below and couldn't figure out a solution.

Traceback (most recent call last):
  File "run_experiment.py", line 245, in <module>
    app.run(main)
  File "/data/zehaohuang/anaconda/envs/fetchsgd/lib/python3.7/site-packages/absl/app.py", line 299, in run
    _run_main(main, args)
  File "/data/zehaohuang/anaconda/envs/fetchsgd/lib/python3.7/site-packages/absl/app.py", line 250, in _run_main
    sys.exit(main(argv))
  File "run_experiment.py", line 241, in main
    run_experiment()
  File "run_experiment.py", line 233, in run_experiment
    **training_loop_dict)
  File "/home/eecs/zehaohuang/federated/utils/training_loop.py", line 195, in run
    _check_iterative_process_compatibility(iterative_process)
  File "/home/eecs/zehaohuang/federated/utils/training_loop.py", line 128, in _check_iterative_process_compatibility
    raise compatibility_error
utils.training_loop.IterativeProcessCompatibilityError: The iterative_process argument must be of type`tff.templates.IterativeProcess`, and must have an attribute `get_model_weights`, which must be a `tff.Computation`. This computation must accept as input the state of `iterative_process`, and its output must be a nested structure of tensors matching the input shape of `validation_fn`.

The command I used was the one suggested in the README file of the compression folder:

python3 run_experiment.py --client_optimizer=sgd --client_learning_rate=0.2 --server_optimizer=sgd --server_learning_rate=1.0 --use_compression=True --broadcast_quantization_bits=8 --aggregation_quantization_bits=8 --use_sparsity_in_aggregation=True

Is there a way to work around this?
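A hedged sketch of one workaround, modeled on how other projects in this repo attach the attribute (e.g. optimization/shared/fed_avg_schedule.py); the names and the state type here are assumptions:

```python
import tensorflow_federated as tff

# Sketch only: assume `iterative_process` and its TFF state type
# `server_state_type` come from the process builder you are using.

@tff.tf_computation(server_state_type)
def get_model_weights(server_state):
  # Expose the model weights as a tff.Computation so the training loop's
  # compatibility check passes.
  return server_state.model

iterative_process.get_model_weights = get_model_weights
```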

[distributed-dp] running of fl_run: AttributeError: module 'tensorflow_federated.python.program' has no attribute 'TensorBoardReleaseManager'

Hi All,

Thank you for publishing the code.

I installed the packages in the same way as recommended here: #57 (comment)

(tff) ady@vws35:~/code/federated/distributed_dp$ pip list
Package                       Version
----------------------------- -------------------
absl-py                       1.0.0
astunparse                    1.6.3
attrs                         21.2.0
cachetools                    3.1.1
certifi                       2021.10.8
charset-normalizer            2.0.12
cloudpickle                   2.0.0
cycler                        0.11.0
decorator                     5.1.1
dill                          0.3.4
dm-tree                       0.1.6
farmhashpy                    0.4.0
flatbuffers                   2.0
fonttools                     4.30.0
gast                          0.5.3
google-auth                   2.6.0
google-auth-oauthlib          0.4.6
google-pasta                  0.2.0
googleapis-common-protos      1.55.0
grpcio                        1.34.1
h5py                          3.6.0
idna                          3.3
importlib-metadata            4.11.3
jax                           0.2.28
jaxlib                        0.1.76
keras                         2.8.0
Keras-Preprocessing           1.1.2
kiwisolver                    1.4.0
libclang                      13.0.0
Markdown                      3.3.6
matplotlib                    3.5.1
mpmath                        1.2.1
numpy                         1.21.5
oauthlib                      3.2.0
opt-einsum                    3.3.0
packaging                     21.3
pandas                        1.4.1
Pillow                        9.0.1
pip                           21.2.4
portpicker                    1.3.9
promise                       2.3
protobuf                      3.19.4
pyasn1                        0.4.8
pyasn1-modules                0.2.8
pyparsing                     3.0.7
python-dateutil               2.8.2
pytz                          2021.3
requests                      2.27.1
requests-oauthlib             1.3.1
rsa                           4.8
scipy                         1.8.0
semantic-version              2.8.5
setuptools                    58.0.4
six                           1.16.0
tensorboard                   2.8.0
tensorboard-data-server       0.6.1
tensorboard-plugin-wit        1.8.1
tensorflow                    2.8.0
tensorflow-addons             0.16.1
tensorflow-datasets           4.5.2
tensorflow-estimator          2.8.0
tensorflow-federated          0.20.0
tensorflow-io-gcs-filesystem  0.24.0
tensorflow-metadata           1.7.0
tensorflow-model-optimization 0.7.1
tensorflow-privacy            0.7.3
tensorflow-probability        0.16.0
termcolor                     1.1.0
tf-estimator-nightly          2.8.0.dev2021122109
tqdm                          4.28.1
typeguard                     2.13.3
typing_extensions             4.1.1
urllib3                       1.26.9
Werkzeug                      2.0.3
wheel                         0.37.1
wrapt                         1.14.0
zipp                          3.7.0
(tff) ady@vws35:~/code/federated/distributed_dp$ python --version
Python 3.9.7

However, I run into the following problem:

(tff) ady@vws35:~/code/federated/distributed_dp$ bazel run :fl_run --     --task=emnist_character     --server_optimizer=sgd     --server_learning_rate=1     --server_sgd_momentum=0.9     --client_optimizer=sgd     --client_learning_rate=0.03     --client_batch_size=20     --experiment_name=my_emnist_test     --epsilon=10     --l2_norm_clip=0.03     --dp_mechanism=ddgauss     --logtostderr  --total_rounds 2
WARNING: Output base '/h/ady/.cache/bazel/_bazel_ady/39df1af3e8de7748262d01b9bcee607d' is on NFS. This may lead to surprising failures and undetermined behavior.
DEBUG: Rule 'rules_python' indicated that a canonical reproducible form can be obtained by modifying arguments commit = "a0fbf98d4e3a232144df4d0d80b577c7a693b570", shallow_since = "1586444447 +0200" and dropping ["tag"]
DEBUG: Repository rules_python instantiated at:
  /h/ady/code/federated/WORKSPACE:5:15: in <toplevel>
Repository rule git_repository defined at:
  /h/ady/.cache/bazel/_bazel_ady/39df1af3e8de7748262d01b9bcee607d/external/bazel_tools/tools/build_defs/repo/git.bzl:199:33: in <toplevel>
INFO: Analyzed target //distributed_dp:fl_run (0 packages loaded, 0 targets configured).
INFO: Found 1 target...
Target //distributed_dp:fl_run up-to-date:
  bazel-bin/distributed_dp/fl_run
INFO: Elapsed time: 0.396s, Critical Path: 0.02s
INFO: 1 process: 1 internal.
INFO: Build completed successfully, 1 total action
INFO: Running command line: bazel-bin/distributed_dp/fl_run '--task=emnist_character' '--server_optimizer=sgd' '--server_learning_rate=1' '--server_sgd_momentum=0.9' '--client_optimizer=sgd' '--client_learning_rate=0.03' '--client_batch_size=20' '--experiment_name=my_emnist_test' '--epsilon=10' '--l2_norm_clip=0.03' '--dp_mechanism=ddgauss'INFO: Build completed successfully, 1 total action
2022-03-17 16:06:23.299187: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
2022-03-17 16:06:23.299213: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
2022-03-17 16:06:30.611416: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
2022-03-17 16:06:30.612106: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcublas.so.11'; dlerror: libcublas.so.11: cannot open shared object file: No such file or directory
2022-03-17 16:06:30.612540: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcublasLt.so.11'; dlerror: libcublasLt.so.11: cannot open shared object file: No such file or directory
2022-03-17 16:06:30.613066: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcufft.so.10'; dlerror: libcufft.so.10: cannot open shared object file: No such file or directory
2022-03-17 16:06:30.613540: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcurand.so.10'; dlerror: libcurand.so.10: cannot open shared object file: No such file or directory
2022-03-17 16:06:30.613981: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcusolver.so.11'; dlerror: libcusolver.so.11: cannot open shared object file: No such file or directory
2022-03-17 16:06:30.614415: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcusparse.so.11'; dlerror: libcusparse.so.11: cannot open shared object file: No such file or directory
2022-03-17 16:06:30.614841: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudnn.so.8'; dlerror: libcudnn.so.8: cannot open shared object file: No such file or directory
2022-03-17 16:06:30.614853: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1850] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.
Skipping registering GPU devices...
I0317 16:06:30.714181 140367148568768 sql_client_data.py:127] Loaded 3400 client ids from SQL database.
2022-03-17 16:06:30.717065: I tensorflow/core/platform/cpu_feature_guard.cc:151] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 AVX512F FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
I0317 16:06:30.906410 140367148568768 sql_client_data.py:127] Loaded 3400 client ids from SQL database.
I0317 16:06:31.114517 140367148568768 keras_utils.py:365] Adding default num_examples metric to model
I0317 16:06:31.114643 140367148568768 keras_utils.py:368] Adding default num_batches metric to model
I0317 16:06:31.190037 140367148568768 keras_utils.py:365] Adding default num_examples metric to model
I0317 16:06:31.190158 140367148568768 keras_utils.py:368] Adding default num_batches metric to model
I0317 16:06:31.195640 140367148568768 fl_utils.py:71] Shared DP Parameters:
I0317 16:06:31.195845 140367148568768 fl_utils.py:72] {'clip': 0.03,
 'delta': 0.0002941176470588235,
 'dim': 1018174,
 'epsilon': 10.0,
 'mechanism': 'ddgauss',
 'num_clients': 3400,
 'num_clients_per_round': 100,
 'num_rounds': 2,
 'sampling_rate': 0.029411764705882353}
I0317 16:09:09.068672 140367148568768 fl_utils.py:151] ddgauss parameters:
I0317 16:09:09.068935 140367148568768 fl_utils.py:152] {'beta': 0.6065306597126334,
 'bits': 16,
 'dim': 1018174,
 'gamma': 0.0001248800740568264,
 'inflated_l2': 0.0707097967941405,
 'k_stddevs': 4,
 'local_stddev': 0.002430822051759469,
 'mechanism': 'ddgauss',
 'noise_mult_clip': 0.8102740172531564,
 'noise_mult_inflated': 0.3437744360709157,
 'padded_dim': 1048576.0,
 'scale': 8007.682631137392}
I0317 16:09:09.069010 140367148568768 ddpquery_utils.py:44] Conditional rounding set to True (beta = 0.606531)
I0317 16:09:09.157962 140367148568768 keras_utils.py:365] Adding default num_examples metric to model
I0317 16:09:09.158077 140367148568768 keras_utils.py:368] Adding default num_batches metric to model
I0317 16:09:10.512192 140367148568768 keras_utils.py:365] Adding default num_examples metric to model
I0317 16:09:10.512315 140367148568768 keras_utils.py:368] Adding default num_batches metric to model
I0317 16:09:12.278361 140367148568768 keras_utils.py:365] Adding default num_examples metric to model
I0317 16:09:12.278485 140367148568768 keras_utils.py:368] Adding default num_batches metric to model
I0317 16:09:14.021716 140367148568768 keras_utils.py:365] Adding default num_examples metric to model
I0317 16:09:14.021839 140367148568768 keras_utils.py:368] Adding default num_batches metric to model
I0317 16:09:14.286071 140367148568768 keras_utils.py:365] Adding default num_examples metric to model
I0317 16:09:14.286196 140367148568768 keras_utils.py:368] Adding default num_batches metric to model
Traceback (most recent call last):
  File "/h/ady/.cache/bazel/_bazel_ady/39df1af3e8de7748262d01b9bcee607d/execroot/org_federated_research/bazel-out/k8-opt/bin/distributed_dp/fl_run.runfiles/org_federated_research/distributed_dp/fl_run.py", line 290, in <module>
    app.run(main)
  File "/h/ady/.conda/envs/tff/lib/python3.9/site-packages/absl/app.py", line 312, in run
    _run_main(main, args)
  File "/h/ady/.conda/envs/tff/lib/python3.9/site-packages/absl/app.py", line 258, in _run_main
    sys.exit(main(argv))
  File "/h/ady/.cache/bazel/_bazel_ady/39df1af3e8de7748262d01b9bcee607d/execroot/org_federated_research/bazel-out/k8-opt/bin/distributed_dp/fl_run.runfiles/org_federated_research/distributed_dp/fl_run.py", line 272, in main
    program_state_manager, metrics_managers = training_utils.create_managers(
  File "/h/ady/.cache/bazel/_bazel_ady/39df1af3e8de7748262d01b9bcee607d/execroot/org_federated_research/bazel-out/k8-opt/bin/distributed_dp/fl_run.runfiles/org_federated_research/utils/training_utils.py", line 65, in create_managers
    tensorboard_release_manager = tff.program.TensorBoardReleaseManager(
AttributeError: module 'tensorflow_federated.python.program' has no attribute 'TensorBoardReleaseManager'

[differential_privacy] Learning rates used for Adaptive Clipping experiments

Hi,

I am trying to reproduce the experiments in "Differentially Private Learning with Adaptive Clipping" (2021), the source code for which is provided under federated/differential_privacy. The paper does not report the final server learning rates used for DP-FedAvgM with clipping enabled. It simply states the following in Section 3.1:

Therefore, for all approaches with clipping—fixed or adaptive—we search over a small grid of five server learning rates, scaling the values in Table 1 by {1, 10^(1/4), 10^(1/2), 10^(3/4), 10}. For all configurations, we report the best performing model whose server learning rate was chosen from this small grid on the validation set.

It is not computationally feasible for me to search for the optimal server learning rate in every possible configuration, so I was hoping you could specify the learning rates that were used to train the best-performing models. Thank you.
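For concreteness, the quoted five-point grid is straightforward to enumerate (the base value from Table 1 is hypothetical here):

```python
base_lr = 0.5  # hypothetical Table 1 server learning rate
# Scale by {1, 10^(1/4), 10^(1/2), 10^(3/4), 10}.
grid = [base_lr * 10 ** (k / 4) for k in range(5)]
print(grid)
```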

Some questions when running examples in the [federated optimization] module

Sorry to bother you. I want to run your examples, but something went wrong when I executed the following command.

-<zry@zjudai-PowerEdge-R740:~/experiences/federated/optimization [master*]>-                                                                                        -<pts/9>-
-<%>- bazel run :trainer -- --task=emnist_character --total_rounds=100 --client_optimizer=sgd --client_learning_rate=0.1 --client_batch_size=20 --server_optimizer=adagrad --server_learning_rate=1.0 --clients_per_round=10 --client_epochs_per_round=1 --experiment_name=emnist_fedavg_experiment


DEBUG: Rule 'rules_python' indicated that a canonical reproducible form can be obtained by modifying arguments commit = "a0fbf98d4e3a232144df4d0d80b577c7a693b570", shallow_since = "1586444447 +0200" and dropping ["tag"]
DEBUG: Repository rules_python instantiated at:
  /home/zry/experiences/federated/WORKSPACE:5:15: in <toplevel>
Repository rule git_repository defined at:
  /home/zry/.cache/bazel/_bazel_zry/3e380758883002d02020be6e7615e6b0/external/bazel_tools/tools/build_defs/repo/git.bzl:199:33: in <toplevel>
INFO: Analyzed target //optimization:trainer (1 packages loaded, 11 targets configured).
INFO: Found 1 target...
Target //optimization:trainer up-to-date:
  bazel-bin/optimization/trainer
INFO: Elapsed time: 0.135s, Critical Path: 0.01s
INFO: 1 process: 1 internal.
INFO: Build completed successfully, 1 total action
INFO: Running command line: bazel-bin/optimization/trainer '--task=emnist_character' '--total_rounds=100' '--client_optimizer=sgd' '--client_learning_rate=0.1' '--client_batchINFO: Build completed successfully, 1 total action
/home/zry/miniconda/envs/faderatedOptimization/lib/python3.9/site-packages/tensorflow_addons/utils/ensure_tf_install.py:37: UserWarning: You are currently using a nightly version of TensorFlow (2.8.0-dev20211003). 
TensorFlow Addons offers no support for the nightly versions of TensorFlow. Some things might work, some other might not. 
If you encounter a bug, do not file an issue on GitHub.
  warnings.warn(
2021-10-05 12:10:45.349536: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcusolver.so.11'; dlerror: libcusolver.so.11: cannot open shared object file: No such file or directory
2021-10-05 12:10:45.350239: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1850] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.
Skipping registering GPU devices...
I1005 12:10:45.357539 140653232485184 sql_client_data.py:116] Loaded 3400 client ids from SQL database.
I1005 12:10:45.441872 140653232485184 sql_client_data.py:116] Loaded 3400 client ids from SQL database.
WARNING:tensorflow:From /home/zry/miniconda/envs/faderatedOptimization/lib/python3.9/site-packages/keras/optimizer_v2/adagrad.py:83: calling Constant.__init__ (from tensorflow.python.ops.init_ops) with dtype is deprecated and will be removed in a future version.
Instructions for updating:
Call initializer instance with the dtype argument instead of passing it to the constructor
W1005 12:10:46.078216 140653232485184 deprecation.py:541] From /home/zry/miniconda/envs/faderatedOptimization/lib/python3.9/site-packages/keras/optimizer_v2/adagrad.py:83: calling Constant.__init__ (from tensorflow.python.ops.init_ops) with dtype is deprecated and will be removed in a future version.
Instructions for updating:
Call initializer instance with the dtype argument instead of passing it to the constructor
/home/zry/.cache/bazel/_bazel_zry/3e380758883002d02020be6e7615e6b0/execroot/org_federated_research/bazel-out/k8-opt/bin/optimization/trainer.runfiles/org_federated_research/utils/training_utils.py:55: UserWarning: `configure_managers` is deprecated, please use `configure_output_managers` instead.
  warnings.warn('`configure_managers` is deprecated, please use '
I1005 12:10:50.664070 140653232485184 training_utils.py:69] Writing...
I1005 12:10:50.664207 140653232485184 training_utils.py:70]     checkpoints to: /tmp/fed_opt/checkpoints/emnist_fedavg_experiment
I1005 12:10:50.664256 140653232485184 training_utils.py:71]     CSV metrics to: /tmp/fed_opt/results/emnist_fedavg_experiment/experiment.metrics.csv
I1005 12:10:50.664298 140653232485184 training_utils.py:72]     TensorBoard summaries to: /tmp/fed_opt/logdir/emnist_fedavg_experiment
I1005 12:10:50.670380 140653232485184 training_loop.py:369] Initializing simulation process
I1005 12:10:50.952509 140653232485184 training_loop.py:373] Running on loop start callback
Traceback (most recent call last):
  File "/home/zry/miniconda/envs/faderatedOptimization/lib/python3.9/site-packages/tensorflow/python/util/nest.py", line 649, in _pack_sequence_as
    final_index, packed = _packed_nest_with_indices(structure, flat_sequence,
  File "/home/zry/miniconda/envs/faderatedOptimization/lib/python3.9/site-packages/tensorflow/python/util/nest.py", line 613, in _packed_nest_with_indices
    new_index, child = _packed_nest_with_indices(s, flat, index, is_seq,
  File "/home/zry/miniconda/envs/faderatedOptimization/lib/python3.9/site-packages/tensorflow/python/util/nest.py", line 618, in _packed_nest_with_indices
    packed.append(flat[index])
IndexError: list index out of range

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/zry/.cache/bazel/_bazel_zry/3e380758883002d02020be6e7615e6b0/execroot/org_federated_research/bazel-out/k8-opt/bin/optimization/trainer.runfiles/org_federated_research/optimization/trainer.py", line 161, in <module>
    app.run(main)
  File "/home/zry/miniconda/envs/faderatedOptimization/lib/python3.9/site-packages/absl/app.py", line 303, in run
    _run_main(main, args)
  File "/home/zry/miniconda/envs/faderatedOptimization/lib/python3.9/site-packages/absl/app.py", line 251, in _run_main
    sys.exit(main(argv))
  File "/home/zry/.cache/bazel/_bazel_zry/3e380758883002d02020be6e7615e6b0/execroot/org_federated_research/bazel-out/k8-opt/bin/optimization/trainer.runfiles/org_federated_research/optimization/trainer.py", line 145, in main
    state = tff.simulation.run_simulation(
  File "/home/zry/miniconda/envs/faderatedOptimization/lib/python3.9/site-packages/tensorflow_federated/python/simulation/training_loop.py", line 297, in run_simulation
    return run_simulation_with_callbacks(process, client_selection_fn,
  File "/home/zry/miniconda/envs/faderatedOptimization/lib/python3.9/site-packages/tensorflow_federated/python/simulation/training_loop.py", line 374, in run_simulation_with_callbacks
    state, start_round = on_loop_start(initial_state)
  File "/home/zry/miniconda/envs/faderatedOptimization/lib/python3.9/site-packages/tensorflow_federated/python/simulation/training_loop.py", line 161, in on_loop_start
    start_state, start_round = _load_initial_checkpoint(
  File "/home/zry/miniconda/envs/faderatedOptimization/lib/python3.9/site-packages/tensorflow_federated/python/simulation/training_loop.py", line 73, in _load_initial_checkpoint
    ckpt_state, ckpt_round = file_checkpoint_manager.load_latest_checkpoint(
  File "/home/zry/miniconda/envs/faderatedOptimization/lib/python3.9/site-packages/tensorflow_federated/python/simulation/checkpoint_manager.py", line 126, in load_latest_checkpoint
    return self._load_checkpoint_from_path(structure, checkpoint_path)
  File "/home/zry/miniconda/envs/faderatedOptimization/lib/python3.9/site-packages/tensorflow_federated/python/simulation/checkpoint_manager.py", line 160, in _load_checkpoint_from_path
    state = tf.nest.pack_sequence_as(structure, flat_obj)
  File "/home/zry/miniconda/envs/faderatedOptimization/lib/python3.9/site-packages/tensorflow/python/util/nest.py", line 774, in pack_sequence_as
    return _pack_sequence_as(structure, flat_sequence, expand_composites)
  File "/home/zry/miniconda/envs/faderatedOptimization/lib/python3.9/site-packages/tensorflow/python/util/nest.py", line 656, in _pack_sequence_as
    raise ValueError(
ValueError: Could not pack sequence. Structure had 18 elements, but flat_sequence had 10 elements.  Structure: ServerState(model=ModelWeights(trainable=[...
  [several hundred lines of raw model weight arrays elided]
       0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
       0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
       0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
       0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
       0., 0., 0., 0., 0., 0., 0., 0., 0.], dtype=float32), array([[ 0.06710801,  0.14207336,  0.13825569, ...,  0.13606563,
         0.03971115, -0.17258172],
       [-0.15345757, -0.16082966,  0.16150314, ..., -0.12163257,
         0.03821175,  0.15300903],
       [ 0.11806652,  0.16615698, -0.17051263, ..., -0.09635399,
        -0.04718627, -0.04817332],
       ...,
       [ 0.00703666,  0.08381972, -0.11201484, ..., -0.1534979 ,
         0.10802796,  0.16509828],
       [-0.00427498, -0.12999111,  0.0295525 , ...,  0.15278795,
        -0.14315034,  0.04883748],
       [ 0.05544043, -0.05918136,  0.13713005, ...,  0.17039368,
        -0.04010352, -0.06717037]], dtype=float32), array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
       0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
       0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
       0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], dtype=float32)], non_trainable=[]), optimizer_state=[0, array([[[[0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1,
          0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1,
          0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1]],

        [[0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1,
          0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1,
          0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1]],

        [[0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1,
          0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1,
          0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1]]],


       [[[0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1,
          0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1,
          0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1]],

        [[0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1,
          0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1,
          0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1]],

        [[0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1,
          0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1,
          0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1]]],


       [[[0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1,
          0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1,
          0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1]],

        [[0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1,
          0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1,
          0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1]],

        [[0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1,
          0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1,
          0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1]]]], dtype=float32), array([0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1,
       0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1,
       0.1, 0.1, 0.1, 0.1, 0.1, 0.1], dtype=float32), array([[[[0.1, 0.1, 0.1, ..., 0.1, 0.1, 0.1],
         [0.1, 0.1, 0.1, ..., 0.1, 0.1, 0.1],
         [0.1, 0.1, 0.1, ..., 0.1, 0.1, 0.1],
         ...,
         [0.1, 0.1, 0.1, ..., 0.1, 0.1, 0.1],
         [0.1, 0.1, 0.1, ..., 0.1, 0.1, 0.1],
         [0.1, 0.1, 0.1, ..., 0.1, 0.1, 0.1]],

        [[0.1, 0.1, 0.1, ..., 0.1, 0.1, 0.1],
         [0.1, 0.1, 0.1, ..., 0.1, 0.1, 0.1],
         [0.1, 0.1, 0.1, ..., 0.1, 0.1, 0.1],
         ...,
         [0.1, 0.1, 0.1, ..., 0.1, 0.1, 0.1],
         [0.1, 0.1, 0.1, ..., 0.1, 0.1, 0.1],
         [0.1, 0.1, 0.1, ..., 0.1, 0.1, 0.1]],

        [[0.1, 0.1, 0.1, ..., 0.1, 0.1, 0.1],
         [0.1, 0.1, 0.1, ..., 0.1, 0.1, 0.1],
         [0.1, 0.1, 0.1, ..., 0.1, 0.1, 0.1],
         ...,
         [0.1, 0.1, 0.1, ..., 0.1, 0.1, 0.1],
         [0.1, 0.1, 0.1, ..., 0.1, 0.1, 0.1],
         [0.1, 0.1, 0.1, ..., 0.1, 0.1, 0.1]]],


       [[[0.1, 0.1, 0.1, ..., 0.1, 0.1, 0.1],
         [0.1, 0.1, 0.1, ..., 0.1, 0.1, 0.1],
         [0.1, 0.1, 0.1, ..., 0.1, 0.1, 0.1],
         ...,
         [0.1, 0.1, 0.1, ..., 0.1, 0.1, 0.1],
         [0.1, 0.1, 0.1, ..., 0.1, 0.1, 0.1],
         [0.1, 0.1, 0.1, ..., 0.1, 0.1, 0.1]],

        [[0.1, 0.1, 0.1, ..., 0.1, 0.1, 0.1],
         [0.1, 0.1, 0.1, ..., 0.1, 0.1, 0.1],
         [0.1, 0.1, 0.1, ..., 0.1, 0.1, 0.1],
         ...,
         [0.1, 0.1, 0.1, ..., 0.1, 0.1, 0.1],
         [0.1, 0.1, 0.1, ..., 0.1, 0.1, 0.1],
         [0.1, 0.1, 0.1, ..., 0.1, 0.1, 0.1]],

        [[0.1, 0.1, 0.1, ..., 0.1, 0.1, 0.1],
         [0.1, 0.1, 0.1, ..., 0.1, 0.1, 0.1],
         [0.1, 0.1, 0.1, ..., 0.1, 0.1, 0.1],
         ...,
         [0.1, 0.1, 0.1, ..., 0.1, 0.1, 0.1],
         [0.1, 0.1, 0.1, ..., 0.1, 0.1, 0.1],
         [0.1, 0.1, 0.1, ..., 0.1, 0.1, 0.1]]],


       [[[0.1, 0.1, 0.1, ..., 0.1, 0.1, 0.1],
         [0.1, 0.1, 0.1, ..., 0.1, 0.1, 0.1],
         [0.1, 0.1, 0.1, ..., 0.1, 0.1, 0.1],
         ...,
         [0.1, 0.1, 0.1, ..., 0.1, 0.1, 0.1],
         [0.1, 0.1, 0.1, ..., 0.1, 0.1, 0.1],
         [0.1, 0.1, 0.1, ..., 0.1, 0.1, 0.1]],

        [[0.1, 0.1, 0.1, ..., 0.1, 0.1, 0.1],
         [0.1, 0.1, 0.1, ..., 0.1, 0.1, 0.1],
         [0.1, 0.1, 0.1, ..., 0.1, 0.1, 0.1],
         ...,
         [0.1, 0.1, 0.1, ..., 0.1, 0.1, 0.1],
         [0.1, 0.1, 0.1, ..., 0.1, 0.1, 0.1],
         [0.1, 0.1, 0.1, ..., 0.1, 0.1, 0.1]],

        [[0.1, 0.1, 0.1, ..., 0.1, 0.1, 0.1],
         [0.1, 0.1, 0.1, ..., 0.1, 0.1, 0.1],
         [0.1, 0.1, 0.1, ..., 0.1, 0.1, 0.1],
         ...,
         [0.1, 0.1, 0.1, ..., 0.1, 0.1, 0.1],
         [0.1, 0.1, 0.1, ..., 0.1, 0.1, 0.1],
         [0.1, 0.1, 0.1, ..., 0.1, 0.1, 0.1]]]], dtype=float32), array([0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1,
       0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1,
       0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1,
       0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1,
       0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1],
      dtype=float32), array([[0.1, 0.1, 0.1, ..., 0.1, 0.1, 0.1],
       [0.1, 0.1, 0.1, ..., 0.1, 0.1, 0.1],
       [0.1, 0.1, 0.1, ..., 0.1, 0.1, 0.1],
       ...,
       [0.1, 0.1, 0.1, ..., 0.1, 0.1, 0.1],
       [0.1, 0.1, 0.1, ..., 0.1, 0.1, 0.1],
       [0.1, 0.1, 0.1, ..., 0.1, 0.1, 0.1]], dtype=float32), array([0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1,
       0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1,
       0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1,
       0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1,
       0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1,
       0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1,
       0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1,
       0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1,
       0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1,
       0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1],
      dtype=float32), array([[0.1, 0.1, 0.1, ..., 0.1, 0.1, 0.1],
       [0.1, 0.1, 0.1, ..., 0.1, 0.1, 0.1],
       [0.1, 0.1, 0.1, ..., 0.1, 0.1, 0.1],
       ...,
       [0.1, 0.1, 0.1, ..., 0.1, 0.1, 0.1],
       [0.1, 0.1, 0.1, ..., 0.1, 0.1, 0.1],
       [0.1, 0.1, 0.1, ..., 0.1, 0.1, 0.1]], dtype=float32), array([0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1,
       0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1,
       0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1,
       0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1,
       0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1], dtype=float32)], round_num=0.0), flat_sequence: [<tf.Tensor: shape=(3, 3, 1, 32), dtype=float32, numpy=
array([[[[ 1.16695147e-02, -1.68384433e-01,  8.09514448e-02,
          -4.59272321e-03,  2.09490120e-01, -1.75392658e-01,
           7.90535629e-01,  1.09946188e-02,  1.96537077e-01,
           8.80602524e-02, -3.58418256e-01,  5.92335045e-01,
           4.31125253e-01, -4.31660026e-01, -5.08282125e-01,
          -7.39256898e-03,  1.11976452e-03, -7.04426169e-02,
          -1.31761268e-01,  1.24570817e-01,  6.97333226e-03,
          -2.66125500e-01, -2.14709826e-02, -2.37043872e-01,
           5.09121008e-02, -3.71821612e-01, -2.19227299e-01,
          -8.67451355e-02, -1.31678998e-01, -2.64436871e-01,
          -2.81555206e-01, -8.07608142e-02]],

        [[-3.23685557e-01, -1.97732523e-01, -3.59651633e-02,
           9.62262154e-02, -3.97415161e-01,  1.53946519e-01,
           8.80663693e-01,  2.89618433e-01,  1.00045145e-01,
           3.07005614e-01, -5.73517799e-01, -1.48900241e-01,
           6.82129443e-01, -4.34498824e-02, -1.94983408e-01,
          -1.96484122e-02,  8.73981416e-02,  8.98429900e-02,
          -5.10894433e-02,  1.84513003e-01,  1.69746261e-02,
          -4.85088944e-01, -2.65121043e-01, -5.49785271e-02,
          -9.63986143e-02, -2.70034373e-01,  3.53312433e-01,
          -8.40127170e-02, -2.09615938e-02, -2.81591326e-01,
          -2.16679797e-01, -7.21340626e-02]],

        [[-1.21961229e-01,  1.68002620e-01,  8.41088817e-02,
          -1.05043024e-01, -2.10543334e-01,  3.19517195e-01,
           2.30688974e-01,  2.38988504e-01,  1.24755159e-01,
          -6.38982207e-02, -5.11259794e-01,  2.43757829e-01,
           5.75974405e-01,  2.99747497e-01, -1.50803521e-01,
           3.71607877e-02, -1.39833033e-01, -1.07406132e-01,
           9.44316536e-02,  3.36592257e-01, -9.31903869e-02,
          -6.88107431e-01,  1.54609308e-01,  7.03968406e-02,
           1.14358850e-02,  3.50388855e-01,  5.06917477e-01,
           1.76062640e-02,  7.28423847e-03,  2.15868592e-01,
          -5.77952117e-02,  1.33758962e-01]]],


       [[[-1.85566381e-01, -7.85756484e-03, -2.93120444e-02,
          -2.87204608e-02, -3.46040815e-01, -5.11374772e-01,
           7.07001686e-01,  2.18702421e-01, -4.05541390e-01,
          -2.25637063e-01,  2.85682797e-01, -3.59243691e-01,
          -2.45929211e-01, -2.54183322e-01, -2.16774330e-01,
           3.77104571e-03, -8.90721306e-02, -9.80253890e-02,
          -1.25623152e-01, -3.02696824e-01,  4.88587795e-03,
           7.11502433e-02, -2.72954047e-01, -4.32160735e-01,
          -4.21797857e-02, -4.81399477e-01, -6.99722648e-01,
          -2.45447848e-02, -7.73590617e-03, -1.73519373e-01,
          -1.49208471e-01,  2.35980362e-01]],

        [[ 5.36140008e-03, -9.88016650e-02, -1.13017596e-01,
          -4.51375879e-02, -2.46484697e-01, -1.67061463e-01,
           7.25140572e-01, -2.80674726e-01,  4.23018001e-02,
          -1.92830548e-01,  5.66628240e-02, -5.87748468e-01,
           3.56853276e-01, -9.53182355e-02,  1.24771588e-01,
           9.07629821e-03, -1.39414981e-01, -4.03503160e-04,
          -1.03460804e-01,  2.66834013e-02,  4.57645357e-02,
           3.29584181e-01, -2.06848644e-02, -1.00769080e-01,
           6.90264925e-02,  1.08576845e-02,  3.49833444e-02,
          -3.16838324e-02, -1.35358036e-01, -2.63065040e-01,
           1.00408599e-01,  3.43416989e-01]],

        [[ 1.48327112e-01, -1.93205681e-02,  5.02806492e-02,
          -1.41557410e-01,  1.82420805e-01,  6.15229979e-02,
          -2.04819083e-01, -3.67357016e-01,  3.15815598e-01,
          -3.34568232e-01,  2.12826967e-01,  5.55849791e-01,
          -6.62370101e-02,  4.18981731e-01,  5.28440364e-02,
          -3.24145369e-02, -9.94497612e-02, -1.01720840e-01,
          -5.75170405e-02, -4.96930815e-02, -1.39397427e-01,
           1.64565027e-01,  2.08030075e-01,  2.58581400e-01,
          -5.21896034e-02,  6.22296870e-01,  4.70508486e-01,
          -1.08293407e-01,  1.33779328e-02,  1.52158231e-01,
           1.51105402e-02, -2.44861603e-01]]],


       [[[ 5.60954437e-02,  2.23096639e-01, -8.19787476e-03,
          -5.20726331e-02,  7.83035904e-02,  3.15946154e-02,
          -9.24121812e-02,  1.86978951e-01, -5.67896426e-01,
           1.02460451e-01,  3.87131751e-01, -6.60104573e-01,
          -7.77755260e-01, -2.34034345e-01,  3.90989691e-01,
           2.84432415e-02,  1.22279674e-01, -7.52909184e-02,
          -4.95365821e-02, -3.23704362e-01, -1.28359526e-01,
           5.06213605e-01, -3.79553214e-02, -2.72384018e-01,
           3.77077907e-02, -3.88529420e-01, -4.50823069e-01,
          -9.53851566e-02, -7.68376165e-04,  1.01251692e-01,
           2.07601920e-01,  2.12352261e-01]],

        [[ 1.85227662e-01,  1.17307473e-02,  9.42953303e-02,
           5.11405282e-02,  3.59562069e-01,  6.66990280e-02,
          -6.24670446e-01, -3.07646453e-01, -1.05213344e-01,
           6.08949140e-02,  3.89597148e-01, -3.53870660e-01,
          -4.94767666e-01, -2.71437801e-02,  3.98181140e-01,
          -9.80087817e-02, -5.68412766e-02, -6.18246384e-02,
          -9.06206071e-02, -2.55389363e-01, -1.03112996e-01,
           7.74151921e-01,  2.44697154e-01,  3.53893310e-01,
          -7.99517632e-02,  9.59883332e-02, -3.23618323e-01,
          -2.08093151e-02,  3.50988209e-02,  2.06206918e-01,
           1.97994202e-01,  2.00895026e-01]],

        [[ 1.21342704e-01, -4.65189293e-03, -1.17033377e-01,
          -1.07265733e-01,  2.29665130e-01, -2.26529576e-02,
          -1.02689087e+00,  6.23436645e-02,  2.82413334e-01,
           1.67699337e-01,  4.52404350e-01,  5.27616441e-01,
          -2.62428045e-01,  2.68392652e-01,  7.74346888e-02,
           2.15692110e-02, -1.33247539e-01,  1.29389688e-01,
          -1.00105211e-01,  1.17759936e-01, -7.17788935e-03,
           3.84614319e-01, -1.05326965e-01,  3.34949166e-01,
          -3.46789993e-02,  4.05188620e-01,  1.03057563e-01,
          -3.85998413e-02, -7.17032403e-02,  3.47919703e-01,
           2.19164982e-01, -3.10689926e-01]]]], dtype=float32)>, <tf.Tensor: shape=(32,), dtype=float32, numpy=
array([ 1.2798485e-01,  1.3374409e-01, -7.5941235e-03, -1.2759258e-02,
        2.4146558e-01,  3.4399191e-01, -4.3884769e-01,  1.9516246e-01,
        2.2913916e-01,  1.6411576e-01,  1.0974486e-01,  4.8637685e-01,
        2.5281924e-01,  1.4211978e-01,  2.3165704e-01, -2.8963966e-02,
       -6.4327352e-05, -8.0365356e-04, -2.3900087e-03,  2.9359418e-01,
       -8.0265719e-03, -1.6454977e-01,  1.2118510e-01,  1.4842722e-01,
       -3.1922910e-02,  2.5482392e-01,  3.7399158e-01, -1.6018941e-05,
       -9.8846881e-03,  1.6005120e-01,  7.2506823e-02, -5.1147435e-02],
      dtype=float32)>, <tf.Tensor: shape=(3, 3, 32, 64), dtype=float32, numpy=
array([[[[-0.05149717, -0.06478052,  0.0101147 , ...,  0.03458007,
          -0.07951234, -0.02714727],
         [ 0.04601062, -0.04352377,  0.09463298, ...,  0.0648082 ,
           0.04029906,  0.01583023],
         [ 0.05914894,  0.07493652, -0.0713056 , ..., -0.01779006,
          -0.0511334 , -0.04742443],
         ...,
         [ 0.02872495,  0.05499277, -0.04029483, ..., -0.02730034,
           0.05713897, -0.02631376],
         [ 0.0079747 ,  0.06337251,  0.00124458, ...,  0.04893341,
           0.05640786,  0.04010881],
         [-0.05578797, -0.02296224,  0.01170116, ...,  0.06649599,
           0.05722781, -0.03085676]],

        [[-0.00572343, -0.02978102,  0.06535158, ...,  0.03726587,
          -0.0742821 , -0.08666833],
         [-0.07746004,  0.03558621,  0.06409723, ...,  0.04667291,
          -0.00440917, -0.07841256],
         [ 0.03670233, -0.04292921,  0.07547685, ...,  0.04637239,
          -0.01058017, -0.05636743],
         ...,
         [-0.02849755, -0.01978916,  0.07527698, ..., -0.05339618,
          -0.02423373,  0.049601  ],
         [ 0.04288183,  0.06586208,  0.01457915, ...,  0.03530099,
          -0.06398729, -0.08290093],
         [ 0.06723362,  0.0292151 , -0.01140832, ..., -0.00369346,
           0.10155061,  0.04770206]],

        [[ 0.00063172,  0.01696675, -0.02189039, ..., -0.03084326,
          -0.08218723, -0.0365196 ],
         [-0.06003157, -0.00123441,  0.02399188, ..., -0.05926755,
          -0.04588291, -0.04742533],
         [-0.06865176,  0.04854007, -0.07848576, ...,  0.001809  ,
           0.0202535 , -0.03745435],
         ...,
         [ 0.07188517, -0.01870459,  0.06468269, ...,  0.02604212,
          -0.0952572 ,  0.05608444],
         [ 0.00327452,  0.00196609,  0.02933738, ...,  0.01751942,
          -0.04975957, -0.00564933],
         [-0.09727669, -0.04155525, -0.0808513 , ...,  0.01875488,
           0.05782194, -0.06406955]]],


       [[[ 0.05936607, -0.01542661, -0.0295011 , ...,  0.0553518 ,
          -0.09351768,  0.01630509],
         [-0.08279083, -0.07961085, -0.00141257, ..., -0.00295766,
           0.05630493,  0.03589067],
         [-0.04948141, -0.00826998,  0.04745482, ..., -0.03547035,
          -0.0055791 , -0.03066785],
         ...,
         [ 0.03727515,  0.09058099, -0.01614488, ..., -0.00158172,
           0.05931318, -0.06153264],
         [-0.04125806,  0.0571633 , -0.03386265, ..., -0.06938403,
           0.05973028,  0.06836288],
         [ 0.02720982, -0.02575087,  0.03605637, ...,  0.05639995,
          -0.03120164,  0.02341359]],

        [[-0.06651479,  0.07962425, -0.08439811, ..., -0.00333816,
          -0.05868227, -0.05517734],
         [ 0.06361733, -0.01842716,  0.07608585, ..., -0.04359341,
           0.02001454,  0.07099058],
         [ 0.05537403, -0.02418736,  0.00268303, ...,  0.03582846,
           0.00226729,  0.03286232],
         ...,
         [ 0.01327598, -0.06678947, -0.03070209, ..., -0.0286341 ,
           0.00119874,  0.01138733],
         [-0.10910524,  0.07902722, -0.07663438, ..., -0.04776132,
           0.00246474,  0.03597804],
         [-0.05474539, -0.08690876, -0.03235421, ..., -0.10224095,
           0.03078091, -0.01763843]],

        [[ 0.04914063, -0.04872016,  0.01925665, ..., -0.02603931,
          -0.06959372, -0.08517073],
         [ 0.02192282,  0.00197323,  0.05193935, ...,  0.04174311,
           0.00765755,  0.022777  ],
         [-0.0328092 ,  0.00542083, -0.05456557, ...,  0.07462636,
           0.05480869,  0.01771777],
         ...,
         [ 0.06259646, -0.00098498,  0.03926713, ..., -0.06319539,
          -0.00073602,  0.01279554],
         [-0.0274897 ,  0.06440811,  0.07240936, ..., -0.02511716,
           0.02528956, -0.10477223],
         [-0.02955782,  0.04404796, -0.07779989, ...,  0.04208626,
          -0.11796902, -0.00343067]]],


       [[[-0.01280594, -0.0121543 , -0.05409344, ..., -0.04819906,
          -0.07858101,  0.04584826],
         [-0.0886223 ,  0.03728177,  0.06152387, ...,  0.02516106,
          -0.05853506,  0.09298651],
         [-0.03413287,  0.02343052,  0.00148465, ..., -0.04983848,
           0.06605998, -0.0366875 ],
         ...,
         [-0.0922038 , -0.06889572, -0.06738836, ...,  0.03571727,
          -0.02848814, -0.06392477],
         [ 0.05111682,  0.04308618, -0.00858286, ...,  0.00859857,
          -0.06353313,  0.07860751],
         [ 0.00987281,  0.01301073, -0.04054661, ..., -0.13072807,
           0.0312474 , -0.05712461]],

        [[-0.019525  ,  0.03159716, -0.00119206, ...,  0.0745156 ,
           0.02119745, -0.05222953],
         [ 0.03513197,  0.04432765, -0.00843704, ...,  0.01449043,
          -0.06759445,  0.08548926],
         [ 0.01977573, -0.08572512, -0.07811151, ..., -0.02193626,
          -0.04595621, -0.00868012],
         ...,
         [-0.08034407, -0.06068813, -0.05787837, ..., -0.00720917,
           0.02846966, -0.04123665],
         [-0.01922107, -0.04154959, -0.00455273, ...,  0.01408329,
          -0.05098459,  0.0635207 ],
         [-0.01939552, -0.04437842, -0.03373186, ..., -0.05648719,
          -0.0579007 , -0.0818284 ]],

        [[-0.02263805,  0.00835604, -0.00206674, ...,  0.01269813,
           0.09865518,  0.04852647],
         [ 0.02887385,  0.00528235,  0.03978411, ..., -0.04968354,
          -0.01180815, -0.04692629],
         [-0.07663541, -0.08446281, -0.05346664, ...,  0.01446544,
          -0.06389499, -0.07556653],
         ...,
         [-0.00751158,  0.04169579,  0.08737116, ..., -0.02075274,
          -0.036677  , -0.03277891],
         [ 0.02556275, -0.00071542,  0.06043949, ...,  0.05750635,
          -0.08433686, -0.0801181 ],
         [ 0.03076835,  0.03448997, -0.05512709, ...,  0.0992207 ,
          -0.09456592,  0.01315317]]]], dtype=float32)>, <tf.Tensor: shape=(64,), dtype=float32, numpy=
array([ 0.04494539,  0.00425469,  0.01686814,  0.06894772, -0.01624818,
        0.1358657 ,  0.03693029,  0.0368302 , -0.0072854 ,  0.04130153,
       -0.04658092,  0.0417401 ,  0.00835811,  0.06138914,  0.02495698,
       -0.033563  ,  0.00936771,  0.01700203,  0.00834112,  0.04614886,
        0.00726132,  0.02281104,  0.013099  , -0.12727852,  0.02048355,
        0.02589151, -0.03464295,  0.00239327,  0.03161589,  0.00520899,
        0.04625348,  0.03766713,  0.05464765,  0.0027574 ,  0.00182667,
       -0.01340767,  0.04325823,  0.05698333,  0.01137527, -0.04063511,
       -0.04180152,  0.0068675 ,  0.02925838,  0.00159267, -0.047934  ,
        0.03324355, -0.01275439,  0.00811922, -0.0448549 , -0.00606798,
       -0.00231429,  0.01271121,  0.0137297 , -0.00175962, -0.06443261,
        0.07628222, -0.03946518, -0.01771106,  0.07674828,  0.04972924,
        0.00751509, -0.00614078, -0.00019453, -0.03033596], dtype=float32)>, <tf.Tensor: shape=(9216, 128), dtype=float32, numpy=
array([[-0.00919814,  0.00615945, -0.00828591, ..., -0.00274897,
        -0.02287684, -0.00859622],
       [ 0.02297141, -0.00843501,  0.01214664, ..., -0.02060565,
         0.00077625, -0.00150835],
       [-0.01769582,  0.02367247, -0.02250859, ..., -0.01399742,
        -0.01251451, -0.00015257],
       ...,
       [ 0.01232074, -0.01712592, -0.01460047, ..., -0.0097532 ,
         0.02035574,  0.00714262],
       [ 0.02577922,  0.00266433, -0.0159087 , ...,  0.02311403,
         0.02123032, -0.00833546],
       [ 0.01958366,  0.00182447,  0.0022031 , ..., -0.00849599,
        -0.01668546, -0.0170302 ]], dtype=float32)>, <tf.Tensor: shape=(128,), dtype=float32, numpy=
array([ 0.07734939, -0.02234575,  0.01439793, -0.0299252 , -0.01462307,
       -0.00886392, -0.01139552,  0.00559482,  0.04127979, -0.01196769,
       -0.0214373 ,  0.01707714, -0.05443231,  0.01535976,  0.16560829,
        0.06329321,  0.05265555,  0.01860128, -0.01376393, -0.02081051,
       -0.00295314,  0.04943726, -0.02570331,  0.03997792,  0.00217658,
        0.06879124, -0.06496799, -0.01858464, -0.00429473, -0.02422868,
       -0.03128722,  0.09425041,  0.1123165 ,  0.13730367,  0.07247649,
        0.03562471,  0.05070863,  0.031524  ,  0.01795219, -0.04069934,
        0.03326195,  0.00897864,  0.02801744,  0.06719058,  0.04442226,
       -0.03811817,  0.01250446,  0.01872982,  0.04468756,  0.06648097,
       -0.04473203, -0.00034002, -0.00126831, -0.00290502,  0.00283371,
       -0.00139494,  0.02590998,  0.00248474, -0.07239126,  0.01806799,
        0.0017131 ,  0.09394667,  0.05479456,  0.00896651, -0.02203909,
       -0.03812313, -0.03407428,  0.00374448,  0.00994449,  0.03809858,
        0.10023893, -0.00860863,  0.07667832,  0.04571465,  0.06720333,
        0.00227141, -0.00690709,  0.01152134,  0.14687265,  0.00251609,
       -0.00331101, -0.01406122,  0.04174675, -0.01638682,  0.01990099,
        0.04895613, -0.07097536, -0.04455026, -0.02276479,  0.04397298,
        0.03028593,  0.03422193, -0.00934881,  0.00062588, -0.02278692,
       -0.00247059,  0.07501619, -0.06006632,  0.0889518 ,  0.01343257,
        0.00837272, -0.01861023,  0.11996674, -0.01116913,  0.03002656,
        0.12550598, -0.0168479 ,  0.00320679, -0.00046115, -0.02492448,
        0.00248479,  0.05433161,  0.1253587 , -0.03221332,  0.06650379,
       -0.01310642,  0.00672044,  0.01338116,  0.06384262,  0.00978898,
        0.01042999, -0.02650085,  0.06619064, -0.03112095, -0.03904219,
       -0.05231171,  0.09092057,  0.00754548], dtype=float32)>, <tf.Tensor: shape=(128, 62), dtype=float32, numpy=
array([[ 0.075972  ,  0.06718174, -0.02517173, ...,  0.06122297,
        -0.16623211, -0.0635957 ],
       [-0.08798219, -0.08997547, -0.1408607 , ...,  0.00502468,
         0.09869459,  0.06174567],
       [-0.1119482 ,  0.05686657, -0.3050513 , ..., -0.09150026,
         0.16040288, -0.06428335],
       ...,
       [ 0.20456244, -0.01768758,  0.19464958, ...,  0.01887489,
         0.09758404, -0.06275001],
       [ 0.08484643,  0.17294674, -0.13941547, ..., -0.04182433,
        -0.15331845, -0.07252296],
       [ 0.1606109 , -0.07071617,  0.04775089, ...,  0.08454833,
        -0.1376703 ,  0.19948734]], dtype=float32)>, <tf.Tensor: shape=(62,), dtype=float32, numpy=
array([ 0.1600232 ,  0.2729942 ,  0.12787889,  0.03186841,  0.12073471,
        0.15902543,  0.0708076 ,  0.09770241,  0.28072298, -0.04438544,
        0.00805524, -0.03862653,  0.09057388, -0.1038434 ,  0.00543488,
        0.01641321, -0.11848438, -0.05479115,  0.10295592, -0.06920211,
       -0.07624936, -0.04894207, -0.05150959, -0.00453593,  0.08123987,
       -0.02040869, -0.07013419, -0.03618506,  0.13615358,  0.05130737,
        0.11026313, -0.04382328, -0.01423698, -0.04116461, -0.04752456,
       -0.07080534,  0.02582982, -0.11607827, -0.07461235, -0.00265756,
        0.20662203, -0.08804544,  0.00814969,  0.02124957, -0.03852018,
       -0.07703929, -0.09115116,  0.14367674, -0.13087021,  0.10267537,
       -0.12703255, -0.12746312, -0.14222364,  0.09113843, -0.08114596,
        0.15477304, -0.10664268, -0.10052184, -0.14717786, -0.08346131,
       -0.06111998, -0.12765335], dtype=float32)>, <tf.Tensor: shape=(), dtype=int64, numpy=100>, <tf.Tensor: shape=(), dtype=float32, numpy=100.0>].

Can you provide a Linux version of jaxlib 0.3.14 for download

Please, can you provide a Linux version of jaxlib 0.3.14 for download? Installing it with pip fails:

ERROR: Ignored the following yanked versions: 0.1.63, 0.4.0, 0.4.15
ERROR: Could not find a version that satisfies the requirement jaxlib==0.3.14 (from tensorflow-federated) (from versions: 0.4.1, 0.4.2, 0.4.3, 0.4.4, 0.4.6, 0.4.7, 0.4.9, 0.4.10, 0.4.11, 0.4.12, 0.4.13, 0.4.14, 0.4.16, 0.4.17, 0.4.18, 0.4.19, 0.4.20)
ERROR: No matching distribution found for jaxlib==0.3.14
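
For what it's worth, jaxlib 0.3.14 only shipped wheels for Python 3.7-3.10 (if I recall correctly), so a newer interpreter will report exactly this "no matching distribution" error. On a supported Python, the wheel can likely also be fetched from the JAX release index rather than PyPI:

pip install jaxlib==0.3.14 -f https://storage.googleapis.com/jax-releases/jax_releases.html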

NotImplementedError("b/162106885") for Optimization Folder

Environment: Tensorflow 2.3.0, Tensorflow Federated 0.17.0, Ubuntu 18.04, Bazel 3.1

When I run the given command
bazel run main:federated_trainer -- --task=emnist_cr --total_rounds=100 \
  --client_optimizer=sgd --client_learning_rate=0.1 --client_batch_size=20 \
  --server_optimizer=sgd --server_learning_rate=1.0 --clients_per_round=10 \
  --client_epochs_per_round=1 --experiment_name=emnist_fedavg_experiment

I get the following error
DEBUG: Rule 'rules_python' indicated that a canonical reproducible form can be obtained by modifying arguments commit = "a0fbf98d4e3a232144df4d0d80b577c7a693b570", shallow_since = "1586444447 +0200" and dropping ["tag"]
DEBUG: Repository rules_python instantiated at:
  no stack (--record_rule_instantiation_callstack not enabled)
Repository rule git_repository defined at:
  /jet/home/houc/.cache/bazel/_bazel_houc/c7f7578c4b4c04555c85530cc5b041a3/external/bazel_tools/tools/build_defs/repo/git.bzl:195:18: in <toplevel>
INFO: Analyzed target //optimization/main:federated_trainer (0 packages loaded, 0 targets configured).
INFO: Found 1 target...
Target //optimization/main:federated_trainer up-to-date:
  bazel-bin/optimization/main/federated_trainer
INFO: Elapsed time: 0.184s, Critical Path: 0.01s
INFO: 0 processes.
INFO: Build completed successfully, 1 total action
INFO: Running command line: bazel-bin/optimization/main/federated_trainer '--task=emnist_cr' '--total_rounds=100' '--client_optimizer=sgd' '--client_learning_rate=0.1' '--client_batch_size
INFO: Build completed successfully, 1 total action
2021-03-02 16:33:04.617392: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
I0302 16:33:29.877769 140271334733632 client_data.py:154] Using newer tf.data.Dataset construction behavior.
2021-03-02 16:33:29.883310: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/cuda/extras/CUPTI/lib64:/usr/local/cuda/compat/lib:/usr/local/nvidia/lib:/usr/local/nvidia/lib64:/.singularity.d/libs
2021-03-02 16:33:29.883337: W tensorflow/stream_executor/cuda/cuda_driver.cc:312] failed call to cuInit: UNKNOWN ERROR (303)
2021-03-02 16:33:29.883361: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (br013.ib.bridges2.psc.edu): /proc/driver/nvidia/version does not exist
2021-03-02 16:33:29.941923: I tensorflow/core/platform/profile_utils/cpu_utils.cc:104] CPU Frequency: 2245890000 Hz
2021-03-02 16:33:29.962079: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x57f41e0 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2021-03-02 16:33:29.962143: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version
I0302 16:33:39.132875 140271334733632 client_data.py:154] Using newer tf.data.Dataset construction behavior.
Traceback (most recent call last):
  File "/jet/home/houc/.cache/bazel/_bazel_houc/c7f7578c4b4c04555c85530cc5b041a3/execroot/org_federated_research/bazel-out/k8-opt/bin/optimization/main/federated_trainer.runfiles/org_federated_research/optimization/main/federated_trainer.py", line 261, in <module>
    app.run(main)
  File "/jet/home/houc/.local/lib/python3.6/site-packages/absl/app.py", line 299, in run
    _run_main(main, args)
  File "/jet/home/houc/.local/lib/python3.6/site-packages/absl/app.py", line 250, in _run_main
    sys.exit(main(argv))
  File "/jet/home/houc/.cache/bazel/_bazel_houc/c7f7578c4b4c04555c85530cc5b041a3/execroot/org_federated_research/bazel-out/k8-opt/bin/optimization/main/federated_trainer.runfiles/org_federated_research/optimization/main/federated_trainer.py", line 219, in main
    task_spec, model=FLAGS.emnist_cr_model)
  File "/jet/home/houc/.cache/bazel/_bazel_houc/c7f7578c4b4c04555c85530cc5b041a3/execroot/org_federated_research/bazel-out/k8-opt/bin/optimization/main/federated_trainer.runfiles/org_federated_research/optimization/emnist/federated_emnist.py", line 82, in configure_training
    @tff.tf_computation(tf.string)
  File "/jet/home/houc/.local/lib/python3.6/site-packages/tensorflow_federated/python/core/impl/wrappers/computation_wrapper.py", line 407, in __call__
    result = fn_to_wrap(*args, **kwargs)
  File "/jet/home/houc/.cache/bazel/_bazel_houc/c7f7578c4b4c04555c85530cc5b041a3/execroot/org_federated_research/bazel-out/k8-opt/bin/optimization/main/federated_trainer.runfiles/org_federated_research/optimization/emnist/federated_emnist.py", line 84, in build_train_dataset_from_client_id
    client_dataset = emnist_train.dataset_computation(client_id)
  File "/jet/home/houc/.local/lib/python3.6/site-packages/tensorflow_federated/python/simulation/hdf5_client_data.py", line 86, in dataset_computation
    raise NotImplementedError("b/162106885")
NotImplementedError: b/162106885

I notice that commits are still being actively made in this folder. Is it not currently stable? And if not, is there a commit ID at which the code will run? Or is this folder no longer compatible with the environment I specified at the beginning of the post?

TFF reconstruction for time-series prediction

Hi,
I have time-series data and need to predict the next value. I am looking to use the TensorFlow Federated reconstruction framework for this. However, I am a little confused about the correct way to map the sample Stack Overflow prediction problem (word embeddings) to time-series prediction (in my case).

Alternative 1: Let's say my lookback period is 6 values (embeddings). The global model has 'g' layers, extended by 'l' local layers, so the overall model has 'g+l' layers for the final prediction. The input stays the same across the local and global layers.
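
For concreteness, a minimal Keras sketch of this split (the layer sizes are hypothetical; this only illustrates the 'g'+'l' architecture, not the reconstruction training API itself):

import tensorflow as tf

LOOKBACK = 6  # hypothetical lookback window of 6 past values

# 'g' global layers: trained federatedly and shared across clients.
global_layers = [
    tf.keras.layers.Dense(32, activation='relu', input_shape=(LOOKBACK,)),
    tf.keras.layers.Dense(16, activation='relu'),
]
# 'l' local layers: reconstructed per client, never aggregated.
local_layers = [
    tf.keras.layers.Dense(1),  # predicts the next value in the series
]
model = tf.keras.Sequential(global_layers + local_layers)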

Alternative 2: Let's say the overall model has 'n' layers (both the local and the global model have n layers), but the lookback period for the global model is 6 values while the local model uses 24 values, effectively treating the additional 18 (24-6) values like an out-of-vocabulary (OOV) vocabulary.

Please help me understand how to frame the problem. I can provide more details if required.

Which of the two alternatives is the correct way to think about this?

TypeError: The supplied argument maps to TFF type.. which is incompatible with the requested type

Hello;
While running the script emnist_with_targeted_attack, I ran into the following error:

....
....
....
Building Iterative Process!
Traceback (most recent call last):
  File "emnist_with_targeted_attack.py", line 327, in <module>
    app.run(main)
  File "C:\Users\ngndiaye\AppData\Roaming\Python\Python37\site-packages\absl\app.py", line 299, in run
    _run_main(main, args)
  File "C:\Users\ngndiaye\AppData\Roaming\Python\Python37\site-packages\absl\app.py", line 250, in _run_main
    sys.exit(main(argv))
  File "emnist_with_targeted_attack.py", line 272, in main
    server_optimizer_fn=server_optimizer_fn)
  File "C:\Users\ngndiaye\federated\targeted_attack\attacked_fedavg.py", line 470, in build_federated_averaging_process_attacked
    federated_dataset_type)
  File "C:\Users\ngndiaye\federated\targeted_attack\attacked_fedavg.py", line 375, in build_run_one_round_fn_attacked
    federated_bool_type)
  File "C:\Users\ngndiaye\AppData\Roaming\Python\Python37\site-packages\tensorflow_federated\python\core\impl\wrappers\computation_wrapper.py", line 407, in __call__
    result = fn_to_wrap(*args, **kwargs)
  File "C:\Users\ngndiaye\federated\targeted_attack\attacked_fedavg.py", line 403, in run_one_round
    weight=weight_denom)
  File "C:\Users\ngndiaye\AppData\Roaming\Python\Python37\site-packages\tensorflow_federated\python\core\impl\utils\function_utils.py", line 520, in __call__
    arg = pack_args(self._type_signature.parameter, args, kwargs, context)
  File "C:\Users\ngndiaye\AppData\Roaming\Python\Python37\site-packages\tensorflow_federated\python\core\impl\utils\function_utils.py", line 316, in pack_args
    arg = pack_args_into_struct(args, kwargs, parameter_type, context)
  File "C:\Users\ngndiaye\AppData\Roaming\Python\Python37\site-packages\tensorflow_federated\python\core\impl\utils\function_utils.py", line 237, in pack_args_into_struct
    result_elements.append((name, context.ingest(arg_value, elem_type)))
  File "C:\Users\ngndiaye\AppData\Roaming\Python\Python37\site-packages\tensorflow_federated\python\core\impl\federated_context\federated_computation_context.py", line 101, in ingest
    val = value_impl.to_value(val, type_spec, self._context_stack)
  File "C:\Users\ngndiaye\AppData\Roaming\Python\Python37\site-packages\tensorflow_federated\python\core\impl\value_impl.py", line 464, in to_value
    'the requested type {}.'.format(result.type_signature, type_spec))

TypeError: The supplied argument maps to TFF type {<float32[3,3,1,32],float32[32],float32[3,3,32,64],float32[64],float32[9216,128],float32[128],float32[128,10],float32[10]>}@Clients, which is incompatible with the requested type {<trainable=<float32[3,3,1,32],float32[32],float32[3,3,32,64],float32[64],float32[9216,128],float32[128],float32[128,10],float32[10]>,non_trainable=<>>}@Clients.
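
For reference, a hedged sketch of what the requested type corresponds to: the raw tuple of weights wrapped in TFF's trainable/non_trainable structure (this assumes tff.learning.ModelWeights is exposed in this TFF version; the arrays below are zero-filled stand-ins with the shapes from the error message):

import numpy as np
import tensorflow_federated as tff

shapes = [(3, 3, 1, 32), (32,), (3, 3, 32, 64), (64,),
          (9216, 128), (128,), (128, 10), (10,)]
weight_tensors = [np.zeros(s, np.float32) for s in shapes]

# The supplied value is a bare tuple like weight_tensors; the computation
# expects it wrapped as <trainable=..., non_trainable=<>>.
model_weights = tff.learning.ModelWeights(
    trainable=weight_tensors, non_trainable=[])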

I used TensorFlow 2.3.1, TensorFlow Federated 0.17.0, and Keras 2.4.0, all installed via pip.
Could you specify which versions of TF/Keras to use with TFF? Thanks a lot.

Calculating epsilon budget for the federated model with DPquery

Hi, is it possible to get the epsilon budget as an evaluation metric in the example in federated/differential_privacy/stackoverflow/run_federated.py? Or how can it be calculated, given the parameters of this federated model, using the RDP accountant from tensorflow_privacy.privacy.analysis.rdp_accountant?
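
For what it's worth, a hedged sketch of computing epsilon offline with the RDP accountant, assuming amplification-by-sampling with rate q = clients_per_round / total_clients and one accountant step per round (all the numbers below are hypothetical placeholders):

from tensorflow_privacy.privacy.analysis import rdp_accountant

clients_per_round = 100        # hypothetical
total_clients = 342_477        # e.g. StackOverflow training clients
noise_multiplier = 0.5         # hypothetical DPQuery noise multiplier
total_rounds = 1500            # hypothetical
target_delta = 1.0 / total_clients

q = clients_per_round / total_clients  # per-round sampling probability
orders = [1 + x / 10.0 for x in range(1, 100)] + list(range(12, 64))
rdp = rdp_accountant.compute_rdp(
    q=q, noise_multiplier=noise_multiplier, steps=total_rounds,
    orders=orders)
eps, _, opt_order = rdp_accountant.get_privacy_spent(
    orders, rdp, target_delta=target_delta)
print(f'epsilon = {eps:.2f} at delta = {target_delta:.2e}')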

CIFAR10 experiment in Adaptive Federated Optimization

Hi there,
I'm trying to reproduce the CIFAR-10 experiment in "Adaptive Federated Optimization", but I am not sure which Dirichlet concentration value to use. I see in

dirichlet_parameter: float = 1

that dirichlet_parameter defaults to 1, and that calls to load_cifar10_federated() do not change it. However, the paper (Appendix C.2) sets the Dirichlet parameter to 0.1. Could you please clarify which value should be passed to the numpy dirichlet function to allow reproducibility? Thanks in advance.

[distributed_dp] Including package versions into the requirements file

Hi everyone,

First of all, thank you very much for providing the very nice distributed_dp package.

I was trying to get it to work and installed the packages referenced in https://github.com/google-research/federated/blob/master/distributed_dp/requirements.txt. Unfortunately, even though I installed the nightly builds of all the packages as indicated in the README, there seem to be compatibility issues.

I've tried a couple of different combinations of versions for tf, tf-federated, tf-privacy, and tf-estimator, but the code did not run with any of them.

My current setup is

...
python                    3.9.7                h12debd9_1
keras-nightly             2.9.0.dev2022030808          pypi_0    pypi
tb-nightly                2.9.0a20220307           pypi_0    pypi
tensorboard               2.8.0                    pypi_0    pypi
tensorboard-data-server   0.6.1                    pypi_0    pypi
tensorboard-plugin-wit    1.6.0                      py_0
tensorflow-datasets       4.5.2                    pypi_0    pypi
tensorflow-federated-nightly 0.19.0.dev20220218          pypi_0    pypi
tensorflow-io-gcs-filesystem 0.24.0                   pypi_0    pypi
tensorflow-metadata       1.7.0                    pypi_0    pypi
tensorflow-model-optimization 0.7.1                    pypi_0    pypi
tensorflow-privacy        0.7.3                    pypi_0    pypi
tensorflow-probability    0.15.0                   pypi_0    pypi
tf-estimator-nightly      2.9.0.dev2022030809          pypi_0    pypi
tf-nightly                2.9.0.dev20220308          pypi_0    pypi
... 

In this setup, I get the error

Traceback (most recent call last):
  File "/home/fraboeni/.cache/bazel/_bazel_fraboeni/eb0df9f25fbadff22165e0e943d33a0f/execroot/org_federated_research/bazel-out/k8-opt/bin/distributed_dp/fl_run.runfiles/org_federated_research/distributed_dp/fl_run.py", line 28, in <module>
    from distributed_dp import fl_utils
  File "/home/fraboeni/.cache/bazel/_bazel_fraboeni/eb0df9f25fbadff22165e0e943d33a0f/execroot/org_federated_research/bazel-out/k8-opt/bin/distributed_dp/fl_run.runfiles/org_federated_research/distributed_dp/fl_utils.py", line 22, in <module>
    from distributed_dp import accounting_utils
  File "/home/fraboeni/.cache/bazel/_bazel_fraboeni/eb0df9f25fbadff22165e0e943d33a0f/execroot/org_federated_research/bazel-out/k8-opt/bin/distributed_dp/fl_run.runfiles/org_federated_research/distributed_dp/accounting_utils.py", line 21, in <module>
    import tensorflow_privacy as tfp
  File "/home/fraboeni/.conda/envs/tf-federated/lib/python3.9/site-packages/tensorflow_privacy/__init__.py", line 30, in <module>
    from tensorflow_privacy import v1
  File "/home/fraboeni/.conda/envs/tf-federated/lib/python3.9/site-packages/tensorflow_privacy/v1/__init__.py", line 32, in <module>
    from tensorflow_privacy.privacy.estimators.v1.dnn import DNNClassifier as DNNClassifierV1
  File "/home/fraboeni/.conda/envs/tf-federated/lib/python3.9/site-packages/tensorflow_privacy/privacy/estimators/v1/dnn.py", line 19, in <module>
    from tensorflow_privacy.privacy.estimators.v1 import head as head_lib
  File "/home/fraboeni/.conda/envs/tf-federated/lib/python3.9/site-packages/tensorflow_privacy/privacy/estimators/v1/head.py", line 22, in <module>
    from tensorflow.python.ops import lookup_ops  # pylint: disable=g-direct-tensorflow-import
ImportError: cannot import name 'lookup_ops' from 'tensorflow.python.ops' (unknown location)

when running bazel run :fl_run

My question is the following: could you share version numbers in your requirements.txt file for which the code runs successfully?

[Distributed DP] Cannot capture a result of an unsupported type NoneType

Under the distributed_dp directory, I ran the command

bazel run :fl_run -- \
    --task=emnist_character \
    --server_optimizer=sgd \
    --server_learning_rate=1 \
    --server_sgd_momentum=0.0 \
    --client_optimizer=sgd \
    --client_learning_rate=0.01 \
    --client_sgd_momentum=0.9 \
    --client_batch_size=20 \
    --clients_per_round=100 \
    --experiment_name=my_emnist_test \
    --epsilon=6 \
    --num_bits=20 \
    --l2_norm_clip=1 \
    --k_stddevs=3 \
    --client_epochs_per_round=2 \
    --dp_mechanism=dskellam \
    --total_rounds=50 \
    --logtostderr > log.txt 2>&1

In the log which I used to collect redirected messages, I obtained

Loading:
Loading: 0 packages loaded
DEBUG: Rule 'rules_python' indicated that a canonical reproducible form can be obtained by modifying arguments commit = "a0fbf98d4e3a232144df4d0d80b577c7a693b570", shallow_since = "1586444447 +0200" and dropping ["tag"]
DEBUG: Repository rules_python instantiated at:
  /data/samuel/federated/WORKSPACE:5:15: in <toplevel>
Repository rule git_repository defined at:
  /data/samuel/.cache/bazel/_bazel_zjiangaj/3fe67cea0bf4ee0dd0c8eb14b19e1272/external/bazel_tools/tools/build_defs/repo/git.bzl:199:33: in <toplevel>
Analyzing: target //distributed_dp:fl_run (0 packages loaded, 0 targets configured)
DEBUG: Rule 'rules_license' indicated that a canonical reproducible form can be obtained by modifying arguments commit = "e3bdc544ef373156da36638b774d65ff2d978bfa", shallow_since = "1667362800 -0400" and dropping ["tag"]
DEBUG: Repository rules_license instantiated at:
  /data/samuel/federated/WORKSPACE:14:15: in <toplevel>
Repository rule git_repository defined at:
  /data/samuel/.cache/bazel/_bazel_zjiangaj/3fe67cea0bf4ee0dd0c8eb14b19e1272/external/bazel_tools/tools/build_defs/repo/git.bzl:199:33: in <toplevel>
INFO: Analyzed target //distributed_dp:fl_run (0 packages loaded, 0 targets configured).
INFO: Found 1 target...
[0 / 1] [Prepa] BazelWorkspaceStatusAction stable-status.txt
Target //distributed_dp:fl_run up-to-date:
  bazel-bin/distributed_dp/fl_run
INFO: Elapsed time: 4.312s, Critical Path: 0.02s
INFO: 1 process: 1 internal.
INFO: Build completed successfully, 1 total action
INFO: Running command line: bazel-bin/distributed_dp/fl_run '--task=emnist_character' '--server_optimizer=sgd' '--server_learning_rate=1' '--server_sgd_momentum=0.0' '--client_optimizer=sgd' '--client_learning_rate=0.01' '--client_sgd_momentum=0.9' '--client_batch_size=20' '--clients_per_round=100' '--experiment_name=my_emnist_test' '--epsilon=6' '--num_bits=20' '--l2_norm_clip=1' '--k_stddevs=3' '--client_epochs_per_round=2' '--dp_mechanism=dskellam' '--total_rounds=50' --logtostderr
INFO: Build completed successfully, 1 total action
I0317 04:07:43.299935 140328278546240 sql_client_data.py:127] Loaded 3400 client ids from SQL database.
I0317 04:07:50.917061 140328278546240 sql_client_data.py:127] Loaded 3400 client ids from SQL database.
I0317 04:07:51.318575 140328278546240 keras_utils.py:362] Adding default num_examples metric to model
I0317 04:07:51.318807 140328278546240 keras_utils.py:365] Adding default num_batches metric to model
I0317 04:07:51.492904 140328278546240 keras_utils.py:362] Adding default num_examples metric to model
I0317 04:07:51.493148 140328278546240 keras_utils.py:365] Adding default num_batches metric to model
I0317 04:07:51.502441 140328278546240 fl_utils.py:72] Shared DP Parameters:
I0317 04:07:51.502845 140328278546240 fl_utils.py:73] {'clip': 1.0,
 'delta': 0.0002941176470588235,
 'dim': 1018174,
 'epsilon': 6.0,
 'mechanism': 'dskellam',
 'num_clients': 3400,
 'num_clients_per_round': 100,
 'num_rounds': 50,
 'sampling_rate': 1.0}
I0317 04:07:52.706335 140328278546240 fl_utils.py:152] dskellam parameters:
I0317 04:07:52.706791 140328278546240 fl_utils.py:153] {'beta': 0.6065306597126334,
 'bits': 20,
 'dim': 1018174,
 'gamma': 2.8479735556541443e-05,
 'inflated_l2': 1.000120752105709,
 'k_stddevs': 3,
 'local_stddev': 0.49762363478987853,
 'mechanism': 'dskellam',
 'noise_mult_clip': 4.976236347898785,
 'noise_mult_inflated': 4.975635529431367,
 'padded_dim': 1048576.0,
 'scale': 35112.68558005667}
I0317 04:07:52.706896 140328278546240 ddpquery_utils.py:44] Conditional rounding set to True (beta = 0.606531)
I0317 04:07:52.983747 140328278546240 keras_utils.py:362] Adding default num_examples metric to model
I0317 04:07:52.983982 140328278546240 keras_utils.py:365] Adding default num_batches metric to model
Traceback (most recent call last):
  File "/data/samuel/.cache/bazel/_bazel_zjiangaj/3fe67cea0bf4ee0dd0c8eb14b19e1272/execroot/org_federated_research/bazel-out/k8-opt/bin/distributed_dp/fl_run.runfiles/org_federated_research/distributed_dp/fl_run.py", line 304, in <module>
    app.run(main)
  File "/data/samuel/anaconda3/envs/google/lib/python3.9/site-packages/absl/app.py", line 312, in run
    _run_main(main, args)
  File "/data/samuel/anaconda3/envs/google/lib/python3.9/site-packages/absl/app.py", line 258, in _run_main
    sys.exit(main(argv))
  File "/data/samuel/.cache/bazel/_bazel_zjiangaj/3fe67cea0bf4ee0dd0c8eb14b19e1272/execroot/org_federated_research/bazel-out/k8-opt/bin/distributed_dp/fl_run.runfiles/org_federated_research/distributed_dp/fl_run.py", line 246, in main
    iterative_process = tff.learning.algorithms.build_unweighted_fed_avg(
  File "/data/samuel/anaconda3/envs/google/lib/python3.9/site-packages/tensorflow_federated/python/learning/algorithms/fed_avg.py", line 324, in build_unweighted_fed_avg
    return build_weighted_fed_avg(
  File "/data/samuel/anaconda3/envs/google/lib/python3.9/site-packages/tensorflow_federated/python/learning/algorithms/fed_avg.py", line 198, in build_weighted_fed_avg
    aggregator = model_aggregator.create(model_update_type,
  File "/data/samuel/anaconda3/envs/google/lib/python3.9/site-packages/tensorflow_federated/python/aggregators/factory_utils.py", line 52, in create
    aggregator = self._factory.create(value_type)
  File "/data/samuel/anaconda3/envs/google/lib/python3.9/site-packages/tensorflow_federated/python/aggregators/mean.py", line 200, in create
    value_sum_process = self._value_sum_factory.create(value_type)
  File "/data/samuel/anaconda3/envs/google/lib/python3.9/site-packages/tensorflow_federated/python/aggregators/robust.py", line 378, in create
    inner_agg_process = inner_agg_factory.create(value_type)
  File "/data/samuel/anaconda3/envs/google/lib/python3.9/site-packages/tensorflow_federated/python/aggregators/differential_privacy.py", line 335, in create
    get_noised_result = computations.tf_computation(
  File "/data/samuel/anaconda3/envs/google/lib/python3.9/site-packages/tensorflow_federated/python/core/impl/wrappers/computation_wrapper.py", line 496, in __call__
    wrapped_func = self._strategy(
  File "/data/samuel/anaconda3/envs/google/lib/python3.9/site-packages/tensorflow_federated/python/core/impl/wrappers/computation_wrapper.py", line 237, in __call__
    return wrapped_fn_generator.send(result)
  File "/data/samuel/anaconda3/envs/google/lib/python3.9/site-packages/tensorflow_federated/python/core/impl/wrappers/computation_wrapper.py", line 80, in _wrap_concrete
    concrete_fn = generator.send(result)
  File "/data/samuel/anaconda3/envs/google/lib/python3.9/site-packages/tensorflow_federated/python/core/impl/wrappers/computation_wrapper_instances.py", line 63, in _tf_wrapper_fn
    comp_pb, extra_type_spec = tf_serializer.send(result)
  File "/data/samuel/anaconda3/envs/google/lib/python3.9/site-packages/tensorflow_federated/python/core/impl/tensorflow_context/tensorflow_serialization.py", line 111, in tf_computation_serializer
    result_type, result_binding = tensorflow_utils.capture_result_from_graph(
  File "/data/samuel/anaconda3/envs/google/lib/python3.9/site-packages/tensorflow_federated/python/core/impl/utils/tensorflow_utils.py", line 295, in capture_result_from_graph
    element_type_binding_pairs = [
  File "/data/samuel/anaconda3/envs/google/lib/python3.9/site-packages/tensorflow_federated/python/core/impl/utils/tensorflow_utils.py", line 296, in <listcomp>
    capture_result_from_graph(e, graph) for e in result
  File "/data/samuel/anaconda3/envs/google/lib/python3.9/site-packages/tensorflow_federated/python/core/impl/utils/tensorflow_utils.py", line 328, in capture_result_from_graph
    raise UnsupportedGraphResultError(
tensorflow_federated.python.core.impl.utils.tensorflow_utils.UnsupportedGraphResultError: Cannot capture a result of an unsupported type NoneType.

By the way, I am using Python 3.9.7 with pip-installed packages (only the ones related to TensorFlow are listed):

tensorboard                   2.8.0
tensorboard-data-server       0.6.1
tensorboard-plugin-wit        1.8.1
tensorflow                    2.8.4
tensorflow-addons             0.19.0
tensorflow-datasets           4.5.2
tensorflow-estimator          2.8.0
tensorflow-federated          0.24.0
tensorflow-io-gcs-filesystem  0.31.0
tensorflow-metadata           1.12.0
tensorflow-model-optimization 0.7.3
tensorflow-privacy            0.8.0
tensorflow-probability        0.15.0

If you have been able to run the above command successfully, could you help me with this issue? Thank you in advance.

Data loading error

Hi,
Thanks for your framework. I tried to run your code but got the error "TypeError: Values of type <label=int32,pixels=float32[28,28]>* cannot be cast to type <pixels=float32[28,28],label=int32>*" when loading the EMNIST data. Do you know how I can fix it? Thanks.
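
The two types differ only in field order, so a common workaround (a hedged sketch, not necessarily the intended fix for this repo) is to re-map each element into the expected ordering:

import collections
import tensorflow as tf

def reorder(element):
  # Re-emit the element with fields in the order the computation expects.
  return collections.OrderedDict(
      pixels=element['pixels'], label=element['label'])

# Dummy dataset standing in for the EMNIST client data that yields
# <label, pixels> elements.
dataset = tf.data.Dataset.from_tensors(collections.OrderedDict(
    label=tf.constant(0), pixels=tf.zeros([28, 28])))
reordered = dataset.map(reorder)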

Stackoverflow validation dataset

It seems that the validation set picks 10,000 examples at random from the test dataset at the beginning of the iterations. Does that mean the validation set varies across experiments but stays fixed across rounds? To run experiments over different hyperparameters, should I fix the seed in create_tf_dataset_from_all_clients? Since 10,000 is rather small, different skewness in the partial samples could make comparisons unconvincing. Thank you!
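
If it helps, a hedged sketch of pinning the shuffle so the 10,000-example validation sample is reproducible across runs (assuming a TFF version whose create_tf_dataset_from_all_clients accepts a seed argument):

import tensorflow_federated as tff

_, _, test_data = tff.simulation.datasets.stackoverflow.load_data()
test_ds = test_data.create_tf_dataset_from_all_clients(seed=42)
validation_ds = test_ds.take(10_000)  # fixed subset for every experiment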

"AttributeError: 'GFile' object has no attribute 'readable' " when try to generate a centralized training benchmark.

I used the following command to get a centralized training benchmark, and I face a GFile attribute error every time after the first epoch:

bazel run main:centralized_trainer -- --task=emnist_cr --num_epochs=100 --experiment_name=emnist_central_experiment --centralized_optimizer=sgd --centralized_learning_rate=0.1 --batch_size=64

A TF 2.4 and tff-nightly environment was used, as mentioned in another issue, NotImplementedError("b/162106885") for Optimization Folder (#24).
(Two screenshots attached to the original issue show the error and the call stack where it occurs.)

Cifar10 dataset setting get error with flexible number of client

Hi, I am running the "Federated Learning with differential privacy" folder.
It seems CIFAR-10 only works when the number of clients equals the number of classes (10). When the number of clients differs from the number of classes (for example, 20 clients while CIFAR-10 has 10 classes), the system raises an error. I think the code needs to support a flexible number of clients. Do you have any suggestions?
I changed the number of clients using the code below:
(code screenshot attached to the original issue)

And it raises this error:
(error screenshot attached to the original issue)
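
Until the loader supports this directly, a hedged sketch (independent of the repo's code) of splitting CIFAR-10 labels across an arbitrary number of clients with a Dirichlet partition:

import numpy as np

def dirichlet_partition(labels, num_clients, alpha=0.1, seed=0):
  """Assigns example indices to clients, class by class."""
  rng = np.random.default_rng(seed)
  num_classes = int(labels.max()) + 1
  client_indices = [[] for _ in range(num_clients)]
  for c in range(num_classes):
    idx = np.flatnonzero(labels == c)
    rng.shuffle(idx)
    # Split this class's examples among clients in Dirichlet proportions.
    proportions = rng.dirichlet(alpha * np.ones(num_clients))
    cuts = (np.cumsum(proportions) * len(idx)).astype(int)[:-1]
    for client_id, part in enumerate(np.split(idx, cuts)):
      client_indices[client_id].extend(part.tolist())
  return client_indices

# 20 clients over 10 classes works fine (hypothetical labels).
labels = np.random.default_rng(1).integers(0, 10, size=50_000)
parts = dirichlet_partition(labels, num_clients=20)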

Thanks.

`dataset.reduce` error in Multi-GPU simulation of `optimization`

Hi there,

I am trying to launch multi-GPU experiments based on research/optimization, but I keep getting errors involving dataset.reduce, as below:

ValueError: Detected dataset reduce op in multi-GPU TFF simulation: `use_experimental_simulation_loop=True` for `tff.learning`; or use `for ... in iter(dataset)` for your own dataset iteration.Reduce op will be functional after b/159180073.

I tried to replace this line and this line (links in the original issue) with for batch in iter(dataset), but the issue persists. I couldn't find any other potential usage of dataset.reduce.
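
For reference, a minimal sketch of the rewrite the error message asks for: an explicit iteration loop in place of dataset.reduce (a toy stand-in, not the repo's actual client update):

import tensorflow as tf

@tf.function
def count_examples(dataset):
  # Explicit loop instead of dataset.reduce, which the error says is
  # unsupported in multi-GPU TFF simulation.
  num_examples = tf.constant(0)
  for batch in iter(dataset):
    num_examples += tf.shape(batch)[0]
  return num_examples

ds = tf.data.Dataset.from_tensor_slices(tf.zeros([10, 3])).batch(4)
print(count_examples(ds))  # -> 10

The error also points at use_experimental_simulation_loop=True for the tff.learning process builders, which may be the more direct fix here.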

Here is the command I used to reproduce this issue:

bazel run main:federated_trainer -- --task=emnist_cr --total_rounds=100 \
--client_optimizer=sgd --client_learning_rate=0.1 --client_batch_size=20 \
--server_optimizer=sgd --server_learning_rate=1.0 --clients_per_round=10 \
--client_epochs_per_round=1 --experiment_name=emnist_fedavg_experiment

Any help will be greatly appreciated.

federated_trainer.py slowness

I'm currently running optimization/main/federated_trainer.py with EMNIST, nightly TF and TFF, and a CUDA 2080 Ti, and each round takes about a minute.

I'm not sure if this qualifies as a performance issue, but if I'm not mistaken, performance was previously much better with the GPU (about 10 seconds per round).

exact execution parameters (baseline FedAvg):
--task=emnist_cr --clients_per_round=10 --client_datasets_random_seed=1 --client_epochs_per_round=1 --total_rounds=1500 --client_batch_size=20 --emnist_cr_model=cnn --client_optimizer=sgd --client_learning_rate=0.1 --server_optimizer=sgd --server_learning_rate=1 --server_sgd_momentum=0.0

StackOverflow NWP centralized dataset consumes > 64 GB of RAM

My machine runs out of RAM when trying to run a centralized baseline on StackOverflow NWP. The following code reproduces the issue (the process gets killed once the machine's 64 GB of RAM are exhausted):

>>> from utils.datasets import stackoverflow_word_prediction
>>> datasets = stackoverflow_word_prediction.get_centralized_datasets(vocab_size=10000, max_sequence_length=20)

I believe the high memory usage comes from this line (linked in the original issue), which calls create_tf_dataset_from_all_clients(); that in turn creates the centralized dataset from tensor slices, so the whole dataset is kept in memory.

Is it possible to somehow create the centralized SO NWP dataset without keeping everything in RAM?
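
One hedged idea (a sketch, not the library's built-in path): chain per-client datasets lazily instead of materializing all examples with from_tensor_slices:

import tensorflow_federated as tff

train_data, _, _ = tff.simulation.datasets.stackoverflow.load_data()

def streamed_dataset(client_data, client_ids):
  # tf.data concatenation is lazy, so examples stream from disk instead
  # of being held in memory all at once.
  ds = client_data.create_tf_dataset_for_client(client_ids[0])
  for cid in client_ids[1:]:
    ds = ds.concatenate(client_data.create_tf_dataset_for_client(cid))
  return ds

# A manageable subset; chaining all ~342k clients this way would build
# an impractically large graph.
centralized_subset = streamed_dataset(train_data, train_data.client_ids[:1000])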

TimeoutError: [Errno 110] Connection timed out

Hi, thanks for sharing. I tried to run your code as the example usage describes:

bazel run main:federated_trainer -- --task=emnist_cr --total_rounds=100 \
  --client_optimizer=sgd --client_learning_rate=0.1 --client_batch_size=20 \
  --server_optimizer=sgd --server_learning_rate=1.0 --clients_per_round=10 \
  --client_epochs_per_round=1 --experiment_name=emnist_fedavg_experiment

But I got the following errors. Could you help me fix them? Thanks.

(Two error screenshots, captured 2021-05-24, are attached to the original issue.)
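
A hedged guess based on the title: if the timeout happens while downloading the EMNIST dataset, pre-fetching it into a local cache may help (assuming a TFF version whose load_data accepts cache_dir; the path is hypothetical):

import tensorflow_federated as tff

# Download once; subsequent runs reuse the cached files.
emnist_train, emnist_test = tff.simulation.datasets.emnist.load_data(
    only_digits=False, cache_dir='/tmp/tff_datasets')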

On Dirichlet Concentration Factor

The Dirichlet alpha is the parameter passed to np.random.dirichlet.

According to the code, alpha = concentration_factor * prior.

In the paper "Adaptive Federated Optimization", a Dirichlet parameter of 0.1 is used for CIFAR-10.

Does that mean concentration_factor * prior == 0.1 (i.e., concentration_factor = 1.0 and prior = 0.1 per class, so the Dirichlet alpha is effectively 0.1)?
Or does it mean concentration_factor = 0.1 (so the Dirichlet alpha is effectively 0.01 per class for CIFAR-10)?

Referencing the line where this equation is applied:

multinomial = self._rng.dirichlet(self._concentration_factor *
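
A hedged numpy illustration of the two readings (num_classes=10 for CIFAR-10, with prior assumed uniform at 0.1 per class):

import numpy as np

rng = np.random.default_rng(0)
prior = np.full(10, 0.1)

# Reading 1: concentration_factor = 1.0, so alpha = 1.0 * prior = 0.1/class.
print(rng.dirichlet(1.0 * prior))
# Reading 2: concentration_factor = 0.1, so alpha = 0.1 * prior = 0.01/class.
print(rng.dirichlet(0.1 * prior))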
