Comments (10)
The issue might be that, by default, tf.losses.mean_squared_error
aggregates the losses over all the examples it's given. Can you try this instead?
cost = tf.losses.mean_squared_error(labels=y_true,
                                    predictions=y_pred,
                                    reduction=tf.losses.Reduction.NONE)
from privacy.
Thanks for bringing this up, MADONOKOUKI. Can I ask what the value of batch_size is?
@schien1729
Thank you for replying. The batch size is 256.
It seems that the code thinks the loss you're passing (the variable cost) has length 1, and therefore can't split it into microbatches. cost should be a vector of length 256 (your batch size). Is it possible that you've turned it into a scalar, perhaps by using a reduce function?
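For intuition, here is a minimal NumPy sketch (standing in for the TensorFlow ops; the shapes are illustrative) of the difference between a per-example loss vector and a reduced scalar:

```python
import numpy as np

batch_size, num_features = 256, 784
rng = np.random.default_rng(0)
y_true = rng.random((batch_size, num_features))
y_pred = rng.random((batch_size, num_features))

# Per-example MSE: reduce over the feature axis only, so each of the
# 256 examples keeps its own loss value.
vector_loss = np.mean((y_true - y_pred) ** 2, axis=1)  # shape (256,)

# Reducing again (as tf.reduce_mean or the default loss reduction does)
# collapses the vector to a single scalar, which DP-SGD cannot split
# into microbatches.
scalar_loss = np.mean(vector_loss)  # 0-d scalar
```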
I used mean squared error because I want to minimize the reconstruction error between input and output.
So I wrote:
ae_net = Autoencoder(inputDim, l2scale, compressDims, aeActivation,
                     decompressDims, dataType)  # autoencoder network
clipnorm = 5.0
standard_deviation = 0.0001
# tf Graph input (only pictures)
X = tf.placeholder("float", [None, inputDim])
# Construct model
loss, latent, output = ae_net(X)
print(output.shape)
# Prediction
y_pred = output
# Targets (labels) are the input data.
y_true = X
cost = tf.reduce_mean(tf.pow(y_true - y_pred, 2))
print(y_pred)
print(y_true)
cost = tf.losses.mean_squared_error(labels=y_true,
                                    predictions=y_pred)
# Calculate loss as a vector (to support microbatches in DP-SGD).
# vector_loss = tf.nn.sparse_softmax_cross_entropy_with_logits(
#     labels=y_true, logits=y_pred)
# cost = tf.reduce_mean(vector_loss)
# optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)  # in medgan
# Use DP version of GradientDescentOptimizer. For illustration purposes,
# we do that here by calling optimizer_from_args() explicitly, though DP
# versions of standard optimizers are available in dp_optimizer.
optimizer = dp_optimizer.DPGradientDescentGaussianOptimizer(
    l2_norm_clip=1.0,
    noise_multiplier=1.1,
    num_microbatches=256,
    learning_rate=.15,
    population_size=60000).minimize(cost)
But is this cost function wrong? If so, which TensorFlow cost function should I use?
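For context, with num_microbatches=256 the optimizer needs to regroup the loss into 256 microbatches, which is only possible when the loss is still a per-example vector. A hedged NumPy sketch of that reshape-and-average step (illustrating the mechanism, not TF Privacy's actual implementation):

```python
import numpy as np

num_microbatches = 256

# A per-example loss vector (one entry per example in the batch)
# can be regrouped into microbatches and averaged per group.
vector_loss = np.ones(256)
microbatch_losses = vector_loss.reshape(num_microbatches, -1).mean(axis=1)

# A loss that has already been reduced is a single value of size 1,
# and reshaping it into 256 rows fails -- the shape-divisibility error.
scalar_loss = np.ones(())
try:
    scalar_loss.reshape(num_microbatches, -1)
except ValueError:
    print("cannot reshape a size-1 loss into 256 microbatches")
```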
Thanks!!!!
cost = tf.losses.mean_squared_error(labels=y_true,
                                    predictions=y_pred,
                                    reduction="none")
I can train my code with this function. I couldn't import Reduction, so I directly wrote the string "none" instead.
Hi, I'm having this same error on the MNIST Keras example shipped with the package
ValueError: Dimension size must be evenly divisible by 250 but is 1 for 'training/TFOptimizer/Reshape' (op: 'Reshape') with input shapes: [], [2] and with input tensors computed as partial shapes: input[1] = [250,?].
but only in TensorFlow 1.13; in 1.12 it works fine. Any ideas? Thanks!
Have you made the one-liner modification documented at the top of the MNIST Keras example?
Sorry, completely missed it :) I just copied the relevant code somewhere else, so I ended up not looking at the docstring.
So I guess this is just #21, never mind my comment!
I'm also running into the same problem. How can I solve it, please?