ValueError: No gradients provided for any variable in the TensorFlow research model dp_sgd

leechuchen
8 April 2018 at 07:34

I tried to run the dp_sgd model from https://github.com/tensorflow/models/tree/master/research/differential_privacy. After following the steps described in README.md, I got the following error message on my Mac.

lizhuzhende-MacBook-Air:dp janicelee$ bazel-bin/differential_privacy/dp_sgd/dp_mnist/dp_mnist     --training_data_path=data/mnist_train.tfrecord     --eval_data_path=data/mnist_test.tfrecord     --save_path=./tmp/mnist_dir

Traceback (most recent call last):
  File "/Users/janicelee/sd/ve/dp/bazel-bin/differential_privacy/dp_sgd/dp_mnist/dp_mnist.runfiles/__main__/differential_privacy/dp_sgd/dp_mnist/dp_mnist.py", line 507, in <module>
    tf.app.run()
  File "/Users/janicelee/sd/ve/privacy/lib/python3.5/site-packages/tensorflow/python/platform/app.py", line 30, in run
    sys.exit(main(sys.argv))
  File "/Users/janicelee/sd/ve/dp/bazel-bin/differential_privacy/dp_sgd/dp_mnist/dp_mnist.runfiles/__main__/differential_privacy/dp_sgd/dp_mnist/dp_mnist.py", line 503, in main
    eval_steps=FLAGS.eval_steps)
  File "/Users/janicelee/sd/ve/dp/bazel-bin/differential_privacy/dp_sgd/dp_mnist/dp_mnist.runfiles/__main__/differential_privacy/dp_sgd/dp_mnist/dp_mnist.py", line 337, in Train
    cost, global_step=global_step)
  File "/Users/janicelee/sd/ve/dp/bazel-bin/differential_privacy/dp_sgd/dp_mnist/dp_mnist.runfiles/__main__/differential_privacy/dp_sgd/dp_optimizer/dp_optimizer.py", line 145, in minimize
    global_step=global_step, name=name)
  File "/Users/janicelee/sd/ve/privacy/lib/python3.5/site-packages/tensorflow/python/training/optimizer.py", line 298, in apply_gradients
    (grads_and_vars,))
ValueError: No gradients provided for any variable: ()

The error occurs when minimize is called in dp_optimizer.py:

   def minimize(self, loss, global_step=None, var_list=None,
               name=None):
    """Minimize using sanitized gradients.

    This gets a var_list which is the list of trainable variables.
    For each var in var_list, we defined a grad_accumulator variable
    during init. When batches_per_lot > 1, we accumulate the gradient
    update in those. At the end of each lot, we apply the update back to
    the variable. This has the effect that for each lot we compute
    gradients at the point at the beginning of the lot, and then apply one
    update at the end of the lot. In other words, semantically, we are doing
    SGD with one lot being the equivalent of one usual batch of size
    batch_size * batches_per_lot.
    This allows us to simulate larger batches than our memory size would permit.

    The lr and the num_steps are in the lot world.

    Args:
      loss: the loss tensor.
      global_step: the optional global step.
      var_list: the optional variables.
      name: the optional name.
    Returns:
      the operation that runs one step of DP gradient descent.
    """

    # First validate the var_list

    if var_list is None:
      var_list = tf.trainable_variables()
    for var in var_list:
      if not isinstance(var, tf.Variable):
        raise TypeError("Argument is not a variable.Variable: %s" % var)

    # Modification: apply gradient once every batches_per_lot many steps.
    # This may lead to smaller error

    if self._batches_per_lot == 1:
      sanitized_grads = self.compute_sanitized_gradients(
          loss, var_list=var_list)

      grads_and_vars = zip(sanitized_grads, var_list)
      self._assert_valid_dtypes([v for g, v in grads_and_vars if g is not None])


      apply_grads = self.apply_gradients(grads_and_vars,
                                         global_step=global_step, name=name)

      return apply_grads

    # Condition for deciding whether to accumulate the gradient
    # or actually apply it.
    # we use a private self_batch_count to keep track of number of batches.
    # global step will count number of lots processed.

    update_cond = tf.equal(tf.constant(0),
                           tf.mod(self._batch_count,
                                  tf.constant(self._batches_per_lot)))

    # Things to do for batches other than last of the lot.
    # Add non-noisy clipped grads to shadow variables.
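
For context, my understanding of the lot accumulation that the docstring describes is roughly the following simplified sketch (my own illustration, not code from the repository): each batch's gradient is added into a shadow accumulator variable, and one update is applied to the weights once per lot of batches_per_lot batches (whether the lot update is summed or averaged is a detail I gloss over here by averaging).

import tensorflow as tf

batches_per_lot = 4
lr = 0.1

var = tf.Variable(tf.zeros([10]))                          # trainable weights
batch_count = tf.Variable(0, trainable=False)              # batches seen so far
grad_accum = tf.Variable(tf.zeros([10]), trainable=False)  # shadow accumulator

loss = tf.reduce_sum(tf.square(var - 1.0))
grad = tf.gradients(loss, [var])[0]

# Every batch: add this batch's gradient into the accumulator.
accum_op = tf.assign_add(grad_accum, grad)

# Is this the last batch of the current lot?
is_last_in_lot = tf.equal(tf.mod(batch_count + 1, batches_per_lot), 0)

def apply_and_reset():
    # Apply one (here: averaged) update, then clear the accumulator.
    apply_op = tf.assign_sub(var, lr * grad_accum / batches_per_lot)
    with tf.control_dependencies([apply_op]):
        reset_op = tf.assign(grad_accum, tf.zeros_like(grad_accum))
    with tf.control_dependencies([reset_op]):
        return tf.constant(True)

def keep_accumulating():
    return tf.constant(False)

with tf.control_dependencies([accum_op]):
    applied = tf.cond(is_last_in_lot, apply_and_reset, keep_accumulating)

# One train_op per batch; the weights only move once per lot.
with tf.control_dependencies([applied]):
    train_op = tf.assign_add(batch_count, 1)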

My Python version is 3.5.3, my tensorflow version is 0.10.0, and my bazel version is 0.3.1. What is the cause of this error and how can I fix it?

Thanks!


Answers (1)

ike
28 April 2018 at 15:23

I had similar problems, which were fixed by using models/research/slim/download_and_convert_data.py to create tfrecords in the correct format, as described here: https://github.com/tensorflow/models/issues/2605
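
If it helps, the conversion step for MNIST would look roughly like this (the --dataset_name and --dataset_dir flags are my reading of the slim README, not something quoted in the linked issue; point the output directory at the same path you pass to dp_mnist via --training_data_path and --eval_data_path):

cd models/research/slim
python download_and_convert_data.py --dataset_name=mnist --dataset_dir=/path/to/dp/data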

sɐunıɔןɐqɐp
28 April 2018 at 15:31

How to write a good answer? coderhelper.com/help/how-to-answer