TensorFlow Constrained Optimization (TFCO)

TFCO is a library for optimizing inequality-constrained problems in TensorFlow 1.14 and later (including TensorFlow 2). In the most general case, both the objective function and the constraints are represented as Tensors, giving users the maximum amount of flexibility in specifying their optimization problems. Constructing these Tensors can be cumbersome, so we also provide helper functions to make it easy to construct constrained optimization problems based on rates, i.e. proportions of the training data on which some event occurs (e.g. the error rate, true positive rate, recall, etc).

For full details, motivation, and theoretical results on the approach taken by this library, please refer to:

Cotter, Jiang and Sridharan. "Two-Player Games for Efficient Non-Convex Constrained Optimization". ALT'19, arXiv

and:

Narasimhan, Cotter and Gupta. "Optimizing Generalized Rate Metrics with Three Players". NeurIPS'19

which will be referred to as [CoJiSr19] and [NaCoGu19], respectively, throughout the remainder of this document. For more information on this library's optional "two-dataset" approach to improving generalization, please see:

Cotter, Gupta, Jiang, Srebro, Sridharan, Wang, Woodworth and You. "Training Well-Generalizing Classifiers for Fairness Metrics and Other Data-Dependent Constraints". ICML'19, arXiv

which will be referred to as [CotterEtAl19].

Proxy constraints

Imagine that we want to constrain the recall of a binary classifier to be at least 90%. Since the recall is proportional to the number of true positive classifications, which itself is a sum of indicator functions, this constraint is non-differentiable, and therefore cannot be used in a problem that will be optimized using a (stochastic) gradient-based algorithm.

For this and similar problems, TFCO supports so-called proxy constraints, which are differentiable (or sub/super-differentiable) approximations of the original constraints. For example, one could create a proxy recall function by replacing the indicator functions with sigmoids. During optimization, each proxy constraint function will be penalized, with the magnitude of the penalty being chosen to satisfy the corresponding original (non-proxy) constraint.
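
To make this concrete, here's a small NumPy sketch (illustrative only, not TFCO code) contrasting the true recall, built from indicator functions, with a sigmoid-based proxy:

```python
import numpy as np

def true_recall(labels, scores):
    # True recall uses the indicator 1(score > 0), which has zero gradient
    # almost everywhere, so it can't drive a gradient-based optimizer.
    positives = labels > 0
    return np.mean(scores[positives] > 0)

def proxy_recall(labels, scores):
    # Replacing the indicator with a sigmoid yields a differentiable
    # approximation that gradient-based optimizers can work with.
    positives = labels > 0
    return np.mean(1.0 / (1.0 + np.exp(-scores[positives])))

labels = np.array([1, 1, 1, 0, 0])
scores = np.array([2.0, -1.0, 3.0, -2.0, 1.0])
```

During training, TFCO penalizes the proxy, but chooses the penalty's magnitude based on the original indicator-based constraint.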

Rate helpers

While TFCO can optimize "low-level" constrained optimization problems represented in terms of Tensors (by creating a ConstrainedMinimizationProblem directly), one of TFCO's main goals is to make it easy to configure and optimize problems based on rates. This includes both very simple settings, e.g. maximizing precision subject to a recall constraint, and more complex, e.g. maximizing ROC AUC subject to the constraint that the maximum and minimum error rates over some particular slices of the data should be within 10% of each other. To this end, we provide high-level "rate helpers", for which proxy constraints are handled automatically, and with which one can write optimization problems in simple mathematical notation (i.e. minimize this expression subject to this list of algebraic constraints).

These helpers include a number of functions for constructing (in "binary_rates.py", "multiclass_rates.py" and "general_rates.py") and manipulating ("operations.py", and Python arithmetic operators) rates. Some of these, as described in [NaCoGu19], require introducing slack variables and extra implicit constraints to the resulting optimization problem, which, again, is handled automatically.

Shrinking

This library is designed to deal with a very flexible class of constrained problems, but this flexibility can make optimization considerably more difficult: on a non-convex problem, if one uses the "standard" approach of introducing a Lagrange multiplier for each constraint, and then jointly maximizing over the Lagrange multipliers and minimizing over the model parameters, then a stable stationary point might not even exist. Hence, in such cases, one might experience oscillation, instead of convergence.

Thankfully, it turns out that even if, over the course of optimization, no particular iterate does a good job of minimizing the objective while satisfying the constraints, the sequence of iterates, on average, usually will. This observation suggests the following approach: at training time, we'll periodically snapshot the model state during optimization; then, at evaluation time, each time we're given a new example to evaluate, we'll sample one of the saved snapshots uniformly at random, and apply it to the example. This stochastic model will generally perform well, both with respect to the objective function, and the constraints.
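
A toy saddle-point problem illustrates both points (a sketch, not TFCO code): alternating gradient updates on a bilinear stand-in for a Lagrangian oscillate around the solution indefinitely, while the average of the iterates approaches it.

```python
import numpy as np

# Toy saddle-point problem L(x, lam) = x * lam (a stand-in for a
# Lagrangian), whose solution is (0, 0). Alternating gradient descent on
# x and ascent on lam orbits the solution instead of converging to it,
# but the average of the iterates is close to it.
x, lam, eta = 1.0, 0.0, 0.1
xs = []
for _ in range(500):
    x = x - eta * lam    # descent step on the model parameter
    lam = lam + eta * x  # ascent step on the multiplier
    xs.append(x)

final_magnitude = abs(x)             # still oscillating, magnitude ~1
averaged_magnitude = abs(np.mean(xs))  # much closer to the solution
```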

In fact, we can do better: it's possible to post-process the set of snapshots to find a distribution over at most m+1 snapshots, where m is the number of constraints, that will be at least as good (and will often be much better) than the (much larger) uniform distribution described above. If you're unable or unwilling to use a stochastic model at all, then you can instead use a heuristic to choose the single best snapshot.

In many cases, these issues can be ignored. However, if you experience oscillation during training, or if you want to squeeze every last drop of performance out of your model, consider using the "shrinking" procedure of [CoJiSr19], which is implemented in the "candidates.py" file.
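
To give the flavor, here is a hypothetical snapshot-selection heuristic in NumPy (illustrative only, not the actual procedure implemented in "candidates.py"): among the snapshots with the smallest maximum constraint violation, take the one with the smallest objective.

```python
import numpy as np

def best_candidate_index(objectives, violations):
    # Hypothetical heuristic (not TFCO's actual implementation): among the
    # candidates with the smallest maximum constraint violation, return the
    # index of the one with the smallest objective value.
    max_violation = np.maximum(violations, 0.0).max(axis=1)
    feasible = np.flatnonzero(max_violation <= max_violation.min())
    return feasible[np.argmin(objectives[feasible])]

# One row per candidate snapshot; one column per constraint.
objectives = np.array([0.5, 0.3, 0.4])
violations = np.array([[0.0], [0.2], [-0.1]])
```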

Public contents

  • constrained_minimization_problem.py: contains the ConstrainedMinimizationProblem interface, representing an inequality-constrained problem. Your own constrained optimization problems should be represented using implementations of this interface. If using the rate-based helpers, such objects can be constructed as RateMinimizationProblems.

  • candidates.py: contains two functions, find_best_candidate_distribution and find_best_candidate_index. Both of these functions are given a set of candidate solutions to a constrained optimization problem, from which the former finds the best distribution over at most m+1 candidates, and the latter heuristically finds the single best candidate. As discussed above, the set of candidates will typically be model snapshots saved periodically during optimization. Both of these functions require that scipy be installed.

    The find_best_candidate_distribution function implements the approach described in Lemma 3 of [CoJiSr19], while find_best_candidate_index implements the heuristic used for hyperparameter search in the experiments of Section 5.2.

  • Optimizing general inequality-constrained problems

    • constrained_optimizer.py: contains ConstrainedOptimizerV1 and ConstrainedOptimizerV2, which inherit from tf.compat.v1.train.Optimizer and tf.keras.optimizers.Optimizer, respectively, and are the base classes for our constrained optimizers. The main difference between our constrained optimizers, and normal TensorFlow optimizers, is that ours can optimize ConstrainedMinimizationProblems in addition to loss functions.

    • lagrangian_optimizer.py: contains the LagrangianOptimizerV1 and LagrangianOptimizerV2 implementations, which are constrained optimizers implementing the Lagrangian approach discussed above (with additive updates to the Lagrange multipliers). We recommend these optimizers for problems without proxy constraints. They may also work well on problems with proxy constraints, but in that case we recommend using a proxy-Lagrangian optimizer instead.

      These optimizers are most similar to Algorithm 3 in Appendix C.3 of [CoJiSr19], which is discussed in Section 3. The two differences are that they use proxy constraints (if they're provided) in the update of the model parameters, and use wrapped Optimizers, instead of SGD, for the "inner" updates.

    • proxy_lagrangian_optimizer.py: contains the ProxyLagrangianOptimizerV1 and ProxyLagrangianOptimizerV2 implementations, which are constrained optimizers implementing the proxy-Lagrangian approach mentioned above. We recommend using these optimizers for problems with proxy constraints.

      A ProxyLagrangianOptimizerVx optimizer with multiplicative swap-regret updates is most similar to Algorithm 2 in Section 4 of [CoJiSr19], with the difference being that it uses wrapped Optimizers, instead of SGD, for the "inner" updates.

  • Helpers for constructing rate-based optimization problems

    • subsettable_context.py: contains the rate_context function, which takes a Tensor of predictions (or, in eager mode, a nullary function returning a Tensor, i.e. the output of a TensorFlow model, through which gradients can be propagated), and optionally Tensors of labels and weights, and returns an object representing a (subset of a) minibatch on which one may calculate rates.

      The related split_rate_context function takes two sets of predictions, labels and weights Tensors: the first for the "penalty" portion of the objective, and the second for the "constraint" portion. The purpose of splitting the context is to improve generalization performance: see [CotterEtAl19] for full details.

      The most important property of these objects is that they are subsettable: if you want to calculate a rate on e.g. only the negatively-labeled examples, or only those examples belonging to a certain protected class, then this can be accomplished via the subset method. However, you should use great caution with the subset method: if the desired subset is a very small proportion of the dataset (e.g. a protected class that's an extreme minority), then the resulting stochastic gradients will be noisy, and during training your model will converge very slowly. Instead, it is usually better (but less convenient) to create an entirely separate dataset for each rare subset, and to construct each subset context directly from each such dataset.

    • binary_rates.py, multiclass_rates.py, and general_rates.py: contain functions for constructing rates from contexts. These rates are the "heart" of this library, and can be combined into more complicated expressions using Python arithmetic operators, or into constraints using comparison operators.

    • operations.py: contains functions for manipulating rate expressions, including wrap_rate, which can be used to convert a Tensor into a rate object, as well as lower_bound and upper_bound, which convert lists of rates into rates representing lower- and upper-bounds on all elements of the list.

    • loss.py: contains loss functions used in constructing rates. These can be passed as the optional penalty_loss and constraint_loss parameters to the rate-construction functions in "binary_rates.py", "multiclass_rates.py" and "general_rates.py" (above).

    • rate_minimization_problem.py: contains the RateMinimizationProblem class, which constructs a ConstrainedMinimizationProblem (suitable for use by ConstrainedOptimizers) from a rate expression to minimize, and a list of rate constraints to impose.
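
As an aside on the subsetting caution above: the noisiness of minibatch rate estimates for rare subsets is easy to see in a small NumPy sketch (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
# Membership indicator for a rare group: roughly 2% of 10,000 examples.
in_group = rng.random(10000) < 0.02

# The group's rate estimated from the full dataset, versus estimates from
# size-100 minibatches: the minibatch estimates are extremely noisy, since
# most minibatches contain only a handful of group members.
full_rate = in_group.mean()
batch_rates = [in_group[rng.integers(0, 10000, size=100)].mean()
               for _ in range(200)]
```

Feeding such a rare group through its own dedicated input stream, as suggested above, makes every minibatch estimate for that group far less noisy.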

Convex example using proxy constraints

This is a simple example of recall-constrained optimization on simulated data: we seek a classifier that minimizes the average hinge loss while constraining recall to be at least 90%.

We'll start with the required imports—notice the definition of tfco:

import math
import numpy as np
from six.moves import xrange
import tensorflow as tf

import tensorflow_constrained_optimization as tfco

We'll next create a simple simulated dataset by sampling 1000 random 10-dimensional feature vectors from a Gaussian, finding their labels using a random "ground truth" linear model, and then adding noise by randomly flipping 200 labels.

# Create a simulated 10-dimensional training dataset consisting of 1000 labeled
# examples, of which 800 are labeled correctly and 200 are mislabeled.
num_examples = 1000
num_mislabeled_examples = 200
dimension = 10
# We will constrain the recall to be at least 90%.
recall_lower_bound = 0.9

# Create random "ground truth" parameters for a linear model.
ground_truth_weights = np.random.normal(size=dimension) / math.sqrt(dimension)
ground_truth_threshold = 0

# Generate a random set of features for each example.
features = np.random.normal(size=(num_examples, dimension)).astype(
    np.float32) / math.sqrt(dimension)
# Compute the labels from these features given the ground truth linear model.
labels = (np.matmul(features, ground_truth_weights) >
          ground_truth_threshold).astype(np.float32)
# Add noise by randomly flipping num_mislabeled_examples labels.
mislabeled_indices = np.random.choice(
    num_examples, num_mislabeled_examples, replace=False)
labels[mislabeled_indices] = 1 - labels[mislabeled_indices]

We're now ready to construct our model, and the corresponding optimization problem. We'll use a linear model of the form f(x) = w^T x - t, where w is the weights, and t is the threshold.

# Create variables containing the model parameters.
weights = tf.Variable(tf.zeros(dimension), dtype=tf.float32, name="weights")
threshold = tf.Variable(0.0, dtype=tf.float32, name="threshold")

# Create the optimization problem.
constant_labels = tf.constant(labels, dtype=tf.float32)
constant_features = tf.constant(features, dtype=tf.float32)
def predictions():
  return tf.tensordot(constant_features, weights, axes=(1, 0)) - threshold

Notice that predictions is a nullary function returning a Tensor. This is needed to support eager mode, but in graph mode, it's fine for it to simply be a Tensor. To see how this example could work in graph mode, please see the Jupyter notebook containing a more-comprehensive version of this example (Recall_constraint.ipynb).

Now that we have the output of our linear model (in the predictions variable), we can move on to constructing the optimization problem. At this point, there are two ways to proceed:

  1. We can use the rate helpers provided by the TFCO library. This is the easiest way to construct optimization problems based on rates (where a "rate" is the proportion of training examples on which some event occurs).
  2. We could instead create an implementation of the ConstrainedMinimizationProblem interface. This is the most flexible approach. In particular, it is not limited to problems expressed in terms of rates.

Here, we'll only consider the first of these options. To see how to use the second option, please refer to Recall_constraint.ipynb.

Rate helpers

The main motivation of TFCO is to make it easy to create and optimize constrained problems written in terms of linear combinations of rates, where a "rate" is the proportion of training examples on which an event occurs (e.g. the false positive rate, which is the number of negatively-labeled examples on which the model makes a positive prediction, divided by the number of negatively-labeled examples). Our current example (minimizing a hinge relaxation of the error rate subject to a recall constraint) is such a problem.

# Like the predictions, in eager mode, the labels should be a nullary function
# returning a Tensor. In graph mode, you can drop the lambda.
context = tfco.rate_context(predictions, labels=lambda: constant_labels)
problem = tfco.RateMinimizationProblem(
    tfco.error_rate(context), [tfco.recall(context) >= recall_lower_bound])

The first argument of all rate-construction helpers (error_rate and recall are the ones used here) is a "context" object, which represents what we're taking the rate of. For example, in a fairness problem, we might wish to constrain the positive_prediction_rates of two protected classes (i.e. two subsets of the data) to be similar. In that case, we would create a context representing the entire dataset, then call the context's subset method to create contexts for the two protected classes, and finally call the positive_prediction_rate helper on the two resulting contexts. Here, we only create a single context, representing the entire dataset, since we're only concerned with the error rate and recall.

In addition to the context, rate-construction helpers also take two optional named parameters—not used here—penalty_loss and constraint_loss, of which the former is used to define the proxy constraints, and the latter the "true" constraints. These default to the hinge and zero-one losses, respectively. The consequence is that we will attempt to minimize the average hinge loss (a relaxation of the error rate, via the penalty_loss), while constraining the true recall (via the constraint_loss), by essentially learning how much we should penalize the hinge relaxation of the recall (the penalty_loss, again).
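
The relationship between the two default losses can be seen in a short NumPy sketch (illustrative only): the hinge loss upper-bounds the zero-one loss, which is what makes it a usable differentiable stand-in.

```python
import numpy as np

# The hinge loss upper-bounds the zero-one loss, which is what makes it a
# usable differentiable surrogate (the penalty_loss default) for the true
# zero-one quantity (the constraint_loss default).
signed_labels = np.array([1.0, 1.0, -1.0, -1.0])  # labels in {-1, +1}
predictions = np.array([0.5, -0.2, -2.0, 0.1])

zero_one = np.mean(signed_labels * predictions <= 0)  # error rate
hinge = np.mean(np.maximum(0.0, 1.0 - signed_labels * predictions))
```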

The RateMinimizationProblem class implements the ConstrainedMinimizationProblem interface, and is constructed from a rate expression to be minimized (the first parameter), subject to a list of rate constraints (the second). Using this class is typically more convenient and readable than constructing a ConstrainedMinimizationProblem manually: the objects returned by error_rate and recall—and all other rate-constructing and rate-combining functions—can be manipulated using Python arithmetic operators (e.g. "0.5 * tfco.error_rate(context1) - tfco.true_positive_rate(context2)"), or converted into a constraint using a comparison operator.
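
As a toy sketch of how such deferred expressions can support Python operators (this illustrates the idea only, and is not TFCO's implementation):

```python
# Toy sketch (not TFCO's implementation) of rate expressions that support
# Python arithmetic and comparisons via operator overloading, evaluating
# lazily the way TFCO's rate objects do.
class Expr:
    def __init__(self, fn):
        self.fn = fn  # nullary function producing the expression's value

    def __sub__(self, other):
        return Expr(lambda: self.fn() - other.fn())

    def __rmul__(self, scalar):
        return Expr(lambda: scalar * self.fn())

    def __le__(self, bound):
        # A comparison yields a "constraint" record: expression <= bound.
        return ("constraint", self, bound)

error_rate = Expr(lambda: 0.4)
true_positive_rate = Expr(lambda: 0.6)
expression = 0.5 * error_rate - true_positive_rate
```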

Wrapping up

We're almost ready to train our model, but first we'll create a couple of functions to measure its performance. We're interested in two quantities: the average hinge loss (which we seek to minimize), and the recall (which we constrain).

def average_hinge_loss(labels, predictions):
  # Recall that the labels are binary (0 or 1).
  signed_labels = (labels * 2) - 1
  return np.mean(np.maximum(0.0, 1.0 - signed_labels * predictions))

def recall(labels, predictions):
  # Recall that the labels are binary (0 or 1).
  positive_count = np.sum(labels)
  true_positives = labels * (predictions > 0)
  true_positive_count = np.sum(true_positives)
  return true_positive_count / positive_count

As was mentioned earlier, a Lagrangian optimizer often suffices for problems without proxy constraints, but a proxy-Lagrangian optimizer is recommended for problems with proxy constraints. Since this problem contains proxy constraints, we use the ProxyLagrangianOptimizerV2.

For this problem, the constraint is fairly easy to satisfy, so we can use the same "inner" optimizer (an Adagrad optimizer with a learning rate of 1) for optimization of both the model parameters (weights and threshold), and the internal parameters associated with the constraints (these are the analogues of the Lagrange multipliers used by the proxy-Lagrangian formulation). For more difficult problems, it will often be necessary to use different optimizers, with different learning rates (presumably found via a hyperparameter search): to accomplish this, pass both the optimizer and constraint_optimizer parameters to ProxyLagrangianOptimizerV2's constructor.

Since this is a convex problem (both the objective and proxy constraint functions are convex), we can just take the last iterate. Periodic snapshotting, and the use of the find_best_candidate_distribution or find_best_candidate_index functions, is generally only necessary for non-convex problems (and even then, it isn't always necessary).

# ProxyLagrangianOptimizerV2 is based on tf.keras.optimizers.Optimizer.
# ProxyLagrangianOptimizerV1 (which we do not use here) would work equally well,
# but is based on the older tf.compat.v1.train.Optimizer.
optimizer = tfco.ProxyLagrangianOptimizerV2(
    optimizer=tf.keras.optimizers.Adagrad(learning_rate=1.0),
    num_constraints=problem.num_constraints)

# In addition to the model parameters (weights and threshold), we also need to
# optimize over any trainable variables associated with the problem (e.g.
# implicit slack variables and weight denominators), and those associated with
# the optimizer (the analogues of the Lagrange multipliers used by the
# proxy-Lagrangian formulation).
var_list = ([weights, threshold] + problem.trainable_variables +
            optimizer.trainable_variables())

for ii in xrange(1000):
  optimizer.minimize(problem, var_list=var_list)

trained_weights = weights.numpy()
trained_threshold = threshold.numpy()

trained_predictions = np.matmul(features, trained_weights) - trained_threshold
print("Constrained average hinge loss = %f" % average_hinge_loss(
    labels, trained_predictions))
print("Constrained recall = %f" % recall(labels, trained_predictions))

Notice that this code is intended to run in eager mode (there is no session): in Recall_constraint.ipynb, we also show how to train in graph mode. Running this code results in the following output (due to the randomness of the dataset, you'll get a different result when you run it):

Constrained average hinge loss = 0.683846
Constrained recall = 0.899791

As we hoped, the recall is extremely close to 90%—and, thanks to the fact that the optimizer uses a (hinge) proxy constraint only when needed, and the actual (zero-one) constraint whenever possible, this is the true recall, not a hinge approximation.

For comparison, let's try optimizing the same problem without the recall constraint:

optimizer = tf.keras.optimizers.Adagrad(learning_rate=1.0)
var_list = [weights, threshold]

for ii in xrange(1000):
  # For optimizing the unconstrained problem, we just minimize the "objective"
  # portion of the minimization problem.
  optimizer.minimize(problem.objective, var_list=var_list)

trained_weights = weights.numpy()
trained_threshold = threshold.numpy()

trained_predictions = np.matmul(features, trained_weights) - trained_threshold
print("Unconstrained average hinge loss = %f" % average_hinge_loss(
    labels, trained_predictions))
print("Unconstrained recall = %f" % recall(labels, trained_predictions))

This code gives the following output (again, you'll get a different answer, since the dataset is random):

Unconstrained average hinge loss = 0.612755
Unconstrained recall = 0.801670

Because there is no constraint, the unconstrained problem does a better job of minimizing the average hinge loss, but naturally doesn't approach 90% recall.

More examples

The examples directory contains several illustrations of how one can use this library:

  • Colaboratory notebooks:

    1. Recall_constraint.ipynb: Start here! This is a more-comprehensive version of the above simple example. In particular, it can run in either graph or eager modes, shows how to manually create a ConstrainedMinimizationProblem instead of using the rate helpers, and illustrates the use of both V1 and V2 optimizers.

    2. Recall_constraint_keras.ipynb: Same as Recall_constraint.ipynb, but uses Keras instead of raw TensorFlow.

    3. Recall_constraint_estimator.ipynb: Same as Recall_constraint.ipynb, but uses a canned estimator instead of raw TensorFlow. See PRAUC_training.ipynb for a tutorial on using TFCO with a custom estimator.

    4. Wiki_toxicity_fairness.ipynb: This notebook shows how to train a fair classifier to predict whether a comment posted on a Wiki Talk page contains toxic content. The notebook discusses two criteria for fairness, and shows how to enforce them by constructing a rate-based optimization problem.

    5. CelebA_fairness.ipynb: This notebook shows how to train a fair classifier to detect a celebrity's smile in images using tf.keras and the large-scale CelebFaces Attributes dataset. The model trained in this notebook is evaluated for fairness across age groups, with the false positive rate set as the constraint.

    6. PRAUC_training.ipynb: This notebook shows how to train a model to maximize the Area Under the Precision-Recall Curve (PR-AUC). We'll show how to train the model both (i) with plain TensorFlow (in eager mode), and (ii) with a custom tf.Estimator.

  • Jupyter notebooks:

    1. Fairness_adult.ipynb: This notebook shows how to train classifiers for fairness constraints on the UCI Adult dataset using the helpers for constructing rate-based optimization problems.

    2. Minibatch_training.ipynb: This notebook describes how to solve a rate-constrained training problem using minibatches. The notebook focuses on problems where one wishes to impose a constraint on a group of examples constituting an extreme minority of the training set, and shows how one can speed up convergence by using separate streams of minibatches for each group.

    3. Oscillation_compas.ipynb: This notebook illustrates the oscillation issue raised in the "shrinking" section (above): it's possible that the individual iterates won't converge when using the Lagrangian approach to training with fairness constraints, even though they do converge on average. This motivates more careful selection of solutions, or the use of a stochastic classifier.

    4. Post_processing.ipynb: This notebook describes how to use the shrinking procedure of [CoJiSr19], as discussed in the "shrinking" section (above), to post-process the iterates of a constrained optimizer and construct a stochastic classifier from them. For applications where a stochastic classifier is not acceptable, we show how to use a heuristic to pick the best deterministic classifier from the iterates found by the optimizer.

    5. Generalization_communities.ipynb: This notebook shows how to improve fairness generalization performance on the UCI Communities and Crime dataset with the split dataset approach of [CotterEtAl19], using the split_rate_context helper.

    6. Churn.ipynb: This notebook describes how to use rate constraints for low-churn classification: that is, training for accuracy while ensuring that the predictions don't differ much from those of a baseline model.

tensorflow_constrained_optimization's Issues

Loss is negative when constraints are defined and the goal is to minimize the objective function

import tensorflow as tf
import numpy as np
import tensorflow_constrained_optimization as tfco

Define the HouseholdAppliance class

class HouseholdAppliance:
    def __init__(self, name, power_consumption, active_duration, desired_start_times):
        self.name = name
        self.power_consumption = power_consumption
        self.active_duration = active_duration
        self.desired_start_times = desired_start_times

Define the RenewableEnergyScheduler class

class RenewableEnergyScheduler(tfco.ConstrainedMinimizationProblem):
    def __init__(self, appliances_data, Θ, Bmax, g_hat_ms, c, En, target_schedule):
        self.appliances_data = appliances_data
        self.N = len(appliances_data)  # Number of devices
        self.S = 24  # Number of slots
        self.Θ = Θ  # Inverter supply limit
        self.Bmax = Bmax  # Battery capacity
        self.g_hat_ms = g_hat_ms  # Forecasted renewable energy generation
        self.c = c  # User dissatisfaction costs
        self.En = En  # Energy consumption of each device
        self.target_schedule = target_schedule  # Target schedule for appliances
        self.schedule = self.optimize_schedule()

    @property
    def num_constraints(self):
        return 4  # Number of constraints

    def objective(self, schedule):
        # Compute the objective function (1)
        return tf.reduce_sum(tf.multiply(schedule, self.c))

    def constraints(self, schedule):
        constraints_list = []

        # Constraint 1: Uniqueness and Operation (Equation 2)
        for n in range(self.N):
            for s in range(self.S):
                #window_sum = tf.reduce_sum(schedule[n, s:s+self.appliances_data[n][2]]) == 1
                window_sum = schedule[n, s:s+self.appliances_data[n][2]] == 1
                constraint_1 = window_sum
                constraints_list.append(constraint_1)
                print(f"Constraint 1 - Appliance {n+1}, Slot {s+1}: {constraint_1}")

        # Constraint 2: Inverter Limitation (Equation 3)
        for s in range(self.S):
            for n in range(self.N):
                constraint_2 = tf.reduce_sum(tf.multiply(self.En[n], schedule[n, max(0, s - (self.appliances_data[n][2] - 1)):s + 1])) <= self.Θ
                constraints_list.append(constraint_2)
                print(f"Constraint 2 - Slot {s+1}, Appliance {n+1}: {constraint_2}")

        # Constraint 3: Maximum Storage (Equation 4)
        for s in range(self.S):
            constraint_3 = tf.reduce_sum(tf.multiply(self.En, schedule[:, s])) <= self.g_hat_ms[s] + self.Bmax
            constraints_list.append(constraint_3)
            print(f"Constraint 3 - Slot {s+1}: {constraint_3}")

        # Constraint 4: Total Consumption (Equation 5)
        for s in range(self.S):
            total_consumption = tf.reduce_sum(tf.multiply(self.En, schedule[:, :s + 1]))
            total_generation = tf.reduce_sum(self.g_hat_ms[:s + 1])
            constraint_4 = total_consumption <= self.Bmax + total_generation
            constraints_list.append(constraint_4)
            print(f"Constraint 4 - Slot {s+1}: {constraint_4}")

        constraints = tf.stack(constraints_list)
        return constraints

    def optimize_schedule(self):
        # Define the optimization variables
        schedule = tf.Variable(np.random.uniform(0, 1, (self.N, self.S)), dtype=tf.float32)
        print("Initial schedule")
        print(schedule)
        #target_schedule = tf.Variable(schedule_duration_array, dtype=tf.float32)

        # Define the optimizer
        optimizer = tf.optimizers.Adam(learning_rate=0.01)

        # Optimization loop
        for step in range(1000):
            with tf.GradientTape() as tape:
                loss = self.objective(schedule)
                # Add mean squared error between the current schedule and the target schedule
                loss += tf.reduce_mean(tf.square(schedule - self.target_schedule))
            gradients = tape.gradient(loss, [schedule])
            optimizer.apply_gradients(zip(gradients, [schedule]))

            if step % 100 == 0:
                print("Step:", step, "Loss:", loss.numpy())
                #print("Optimal Solution:")
                #print(schedule.numpy())

        return schedule.numpy()

Placeholder data for parameters

Θ = 10 # Inverter supply limit
Bmax = 200*0.75 # Battery capacity
g_hat_ms = np.random.rand(24).astype(np.float32) * 10 # Forecasted renewable energy generation

Dissatisfaction cost profiles for each appliance

c = np.array([
[ 0.94532998, 0.91934309, 0.89351733, 0.87420559, 0.86701924,
0.87420559, 0.89351733, 0.91934309, 0.94532998, 0.96684095,
0.98200301, 0.99125937, 0.99620134, 0.99852272, 0.99948591,
0.99983991, 0.99995539, 0.99998888, 0.99999752, 0.9999995,
0.99999991, 0.99999999, 1., 1. ],
[100., 100., 100., 100., 100.,
100., 0.86701924, 0.87420559, 0.89351733, 0.91934309,
0.94532998, 0.96684095, 0.98200301, 0.99125937, 0.99620134,
0.99852272, 0.99948591, 0.99983991, 0.99995539, 0.99998888,
0.99999752, 0.9999995, 0.99999991, 0.99999999],
[ 0.92634597, 0.92179146, 0.92021154, 0.92179146, 0.92634597,
0.93335508, 0.94206169, 0.95160586, 0.96116279, 0.97005451,
0.97781583, 0.98420997, 0.98920181, 0.99290508, 0.99552109,
0.99728341, 0.99841691, 0.99911363, 0.99952318, 0.99975356,

0.99987762, 0.99994161, 0.99997323, 0.99998821],
[ 1., 0.99999999, 0.99999851, 0.99986617, 0.99556815,
0.94600903, 0.75802928, 0.60105772, 0.75802928, 0.94600903,
0.99556815, 0.99986617, 0.99999851, 0.99999999, 1.,
1., 1., 1., 1., 1.,
1., 1., 1., 1. ],
[ 1., 1., 1., 1., 1.,
1., 1., 1., 0.99999995, 0.99999926,
0.99999201, 0.99993308, 0.99956366, 0.99778408, 0.99123585,
0.97300452, 0.9352412, 0.87901464, 0.82396734, 0.80052886,
0.82396734, 0.87901464, 0.9352412, 0.97300452],
[100., 100., 100., 100., 100.,
100., 100., 100., 100., 100.,
100., 100., 100., 100., 100.,
100., 0.82396734, 0.80052886, 0.82396734, 100.,
100., 100., 100., 100. ],
[100., 100., 100., 100., 100.,
100., 100., 100., 100., 100.,
100., 100., 100., 100., 100.,
100., 100., 100., 100., 100.,
0.80052886, 0.82396734, 0.87901464, 0.9352412 ],
[ 0.99999851, 0.99986617, 0.99556815, 0.94600903, 0.75802928,
0.60105772, 0.75802928, 0.94600903, 0.99556815, 0.99986617,
0.99999851, 0.99999999, 1., 1., 1.,
1., 1., 1., 1., 1.,
1., 1., 1., 1. ],
[ 1., 1., 1., 1., 1.,
1., 1., 1., 1., 1.,
0.99999999, 0.99999851, 0.99986617, 0.99556815, 0.94600903,
0.75802928, 0.60105772, 0.75802928, 0.94600903, 0.99556815,
0.99986617, 0.99999851, 0.99999999, 1. ],
[ 0.99123585, 0.97300452, 0.9352412, 0.87901464, 0.82396734,
0.80052886, 0.82396734, 0.87901464, 0.9352412, 0.97300452,
0.99123585, 0.99778408, 0.99956366, 0.99993308, 0.99999201,
0.99999926, 0.99999995, 1., 1., 1.,
1., 1., 1., 1. ],
[ 1., 1., 1., 1., 1.,
1., 0.99999995, 0.99999926, 0.99999201, 0.99993308,
0.99956366, 0.99778408, 0.99123585, 0.97300452, 0.9352412,
0.87901464, 0.82396734, 0.80052886, 0.82396734, 0.87901464,
0.9352412, 0.97300452, 0.99123585, 0.99778408]
]) # User dissatisfaction costs

# Adjust c to match the number of appliances

c = np.vstack((c, np.zeros((12 - c.shape[0], c.shape[1]))))

# Placeholder data for appliances

appliances_data = [
('Washing Machine (warm wash)', 2.3, 2, [14], 11, 17),
('Dryer (avg. load)', 3, 2, [16, 15], 12, 18),
('Robot Vacuum Cleaner', 0.007, 2, [15], 12, 18),
('Iron', 1.08, 2, [8], 6, 10),
('TV', 0.15, 3, [20], 18, 22),
('Refrigerator', 0.083, 24, list(range(1, 25)), 0, 24),
('Oven', 2.3, 1, [18], 16, 20),
('Dishwasher', 2, 2, [21], 18, 22),
('Electric Water Heater', 0.7, 1, [6, 17], 4, 8),
('Central AC', 3, 2, [6, 18], 4, 8),
('Pool Filter Pump', 1.12, 8, [10], 8, 12),
('Electric Vehicle Charger', 7.7, 8, [21, 18, 23], 16, 20)
]

# Placeholder data for energy consumption

En = np.array([[appliance[1]] for appliance in appliances_data])
# En = En / np.array([[appliance[2]] for appliance in appliances_data])

# Placeholder data for the target schedule

target_schedule = schedule_arrays

# Instantiate the RenewableEnergyScheduler class with the target schedule

#renewable_energy_scheduler = RenewableEnergyScheduler(appliances_data, Θ, Bmax, g_hat_ms, c, En, schedule_arrays)
renewable_energy_scheduler = RenewableEnergyScheduler(appliances_data, Θ, Bmax, g_hat_ms, c, En, target_schedule)

# Optimize the schedule and store the optimal solution

optimal_schedule = renewable_energy_scheduler.schedule
print("Optimal Schedule:")
print(optimal_schedule)
Initial schedule
<tf.Variable 'Variable:0' shape=(12, 24) dtype=float32, numpy=
array([[0.13256074, 0.16450572, 0.8457951 , 0.9453654 , 0.81355965,
0.7358516 , 0.08794184, 0.15333892, 0.81696177, 0.38021332,
0.84903854, 0.7157349 , 0.498208 , 0.5965678 , 0.5921458 ,
0.8206383 , 0.01229873, 0.22936128, 0.18474518, 0.91656625,
0.7686425 , 0.07542033, 0.12613778, 0.65400505],
[0.33809242, 0.75684154, 0.43492907, 0.50518215, 0.15035065,
0.9049483 , 0.6134951 , 0.36996025, 0.35187778, 0.7948585 ,
0.58176315, 0.8380317 , 0.75236124, 0.39556384, 0.9194925 ,
0.5489616 , 0.93027705, 0.41712767, 0.55013955, 0.4053882 ,
0.30683264, 0.7026022 , 0.7230601 , 0.3995844 ],
[0.32268175, 0.01340857, 0.64992076, 0.35076147, 0.8179076 ,
0.34622508, 0.9164622 , 0.75923556, 0.0900602 , 0.95063716,
0.96696264, 0.9229608 , 0.5110154 , 0.63442034, 0.23639703,
0.3326485 , 0.65664124, 0.29286385, 0.4382759 , 0.86535966,
0.7005246 , 0.12657063, 0.7797606 , 0.14924023],
[0.27287987, 0.76617527, 0.7999852 , 0.12181633, 0.59384966,
0.47052372, 0.03964209, 0.9590311 , 0.1172531 , 0.45505023,
0.9404046 , 0.63904136, 0.39507118, 0.46241736, 0.19111228,
0.6045205 , 0.932783 , 0.63698184, 0.2173754 , 0.47482145,
0.7889698 , 0.9776223 , 0.6412317 , 0.8131258 ],
[0.908892 , 0.4567255 , 0.8963704 , 0.69411117, 0.3320809 ,
0.35173753, 0.5753125 , 0.8454456 , 0.86093956, 0.13236354,
0.8394588 , 0.54318726, 0.9416756 , 0.49923044, 0.81538606,
0.62006575, 0.9129011 , 0.03342194, 0.01958979, 0.01530793,
0.47847247, 0.616623 , 0.13644028, 0.6019739 ],
[0.1364083 , 0.4845891 , 0.62079614, 0.5805148 , 0.5617434 ,
0.10629182, 0.7125081 , 0.00751793, 0.92072976, 0.8776288 ,
0.84996295, 0.75084686, 0.2285789 , 0.0884176 , 0.55371714,
0.76298535, 0.45321438, 0.01752547, 0.19689475, 0.23334728,
0.10112004, 0.5434103 , 0.40526927, 0.25025436],
[0.5887951 , 0.9824955 , 0.06278563, 0.86175907, 0.7108596 ,
0.26583236, 0.48685464, 0.828518 , 0.62908214, 0.47313878,
0.00100934, 0.6995777 , 0.9216821 , 0.08334628, 0.28457424,
0.99559844, 0.8617576 , 0.78255254, 0.89783245, 0.43711054,
0.6860407 , 0.06558135, 0.16885492, 0.37971818],
[0.45379946, 0.49317613, 0.04614437, 0.80603045, 0.25451985,
0.04263728, 0.28037307, 0.77474093, 0.48194072, 0.06569015,
0.25131875, 0.9613897 , 0.75255924, 0.4056748 , 0.673719 ,
0.91341287, 0.5984254 , 0.3761663 , 0.02007845, 0.0668097 ,
0.37676576, 0.3182803 , 0.12599325, 0.2522299 ],
[0.922594 , 0.89071524, 0.92682 , 0.854768 , 0.8953844 ,
0.80403674, 0.6837802 , 0.45521712, 0.24843054, 0.5990562 ,
0.50878376, 0.56324077, 0.43249542, 0.8839581 , 0.70151794,
0.40107617, 0.16632111, 0.59376204, 0.05594517, 0.5786778 ,
0.2817951 , 0.47485313, 0.50758684, 0.05792021],
[0.62806696, 0.57973677, 0.64528805, 0.9347828 , 0.5556218 ,
0.76326126, 0.7087708 , 0.82534695, 0.382603 , 0.89293385,
0.5971417 , 0.5248764 , 0.44572458, 0.17729422, 0.18038546,
0.17450024, 0.62628835, 0.53557783, 0.43451858, 0.8244354 ,
0.92621034, 0.11028443, 0.42211816, 0.82358235],
[0.7612235 , 0.27980143, 0.6984881 , 0.48584342, 0.52915305,
0.37066036, 0.06534988, 0.9840117 , 0.8012437 , 0.3205646 ,
0.49802637, 0.09734588, 0.8020785 , 0.06789444, 0.61004436,
0.9911238 , 0.36757872, 0.65057296, 0.2414208 , 0.8390205 ,
0.5847151 , 0.6963583 , 0.9320667 , 0.61821 ],
[0.32159528, 0.35302743, 0.4947227 , 0.26265025, 0.98619455,
0.03669829, 0.8141584 , 0.31539217, 0.14040574, 0.407636 ,
0.00701922, 0.6909225 , 0.25403583, 0.3982064 , 0.8695893 ,
0.21706939, 0.5490212 , 0.4424745 , 0.53931886, 0.055005 ,
0.73614633, 0.4397625 , 0.80743927, 0.7542086 ]], dtype=float32)>
Step: 0 Loss: 2582.6624
Step: 100 Loss: -2323.9253
Step: 200 Loss: -7227.796
Step: 300 Loss: -12129.0
Step: 400 Loss: -17027.582
Step: 500 Loss: -21923.582
Step: 600 Loss: -26817.035
Step: 700 Loss: -31707.977
Step: 800 Loss: -36596.492
Step: 900 Loss: -41482.508
Optimal Schedule:
[[-9.70690823e+00 -9.67034531e+00 -8.98508453e+00 -8.88323307e+00
-9.01200199e+00 -9.09245872e+00 -9.74326515e+00 -9.68150425e+00
-9.02451229e+00 -9.46319103e+00 -8.99735641e+00 -9.13304901e+00
-9.35110950e+00 -9.25320721e+00 -9.25670815e+00 -9.02851582e+00
-9.83707619e+00 -9.61917973e+00 -9.66374969e+00 -8.93271446e+00
-9.08047962e+00 -9.77295971e+00 -9.72336483e+00 -9.19499683e+00]
[-9.66043568e+00 -9.24167633e+00 -9.56359482e+00 -9.49333954e+00
-9.84817600e+00 -9.09357357e+00 -9.21319485e+00 -9.45643902e+00
-9.47967911e+00 -9.04079628e+00 -9.25943375e+00 -9.00703049e+00
-9.09502792e+00 -9.45288086e+00 -8.93026924e+00 -9.30076408e+00
-8.91893959e+00 -9.43159008e+00 -9.29874134e+00 -9.44440651e+00
-9.54179287e+00 -9.14750862e+00 -9.12601280e+00 -9.45020485e+00]
[-9.51489544e+00 -9.82296371e+00 -9.18696976e+00 -9.48478031e+00
-9.01904106e+00 -9.49262333e+00 -8.92337990e+00 -9.08322811e+00
-9.75206089e+00 -8.89394855e+00 -8.88000011e+00 -8.92386627e+00
-9.33614540e+00 -9.21453667e+00 -9.61253452e+00 -9.51558876e+00
-9.19318008e+00 -9.55561256e+00 -9.41041851e+00 -8.98488331e+00
-9.14850426e+00 -9.72292614e+00 -9.07043076e+00 -9.69921398e+00]
[-9.57677746e+00 -9.08294392e+00 -9.05023003e+00 -9.72659016e+00
-9.25440121e+00 -9.37065792e+00 -9.75878429e+00 -8.79047489e+00
-9.68321037e+00 -9.38492107e+00 -8.90928459e+00 -9.21098232e+00
-9.45471478e+00 -9.38637543e+00 -9.65845871e+00 -9.24548626e+00
-8.91757488e+00 -9.21199799e+00 -9.63115501e+00 -9.37398815e+00
-9.06017303e+00 -8.87277985e+00 -9.20775414e+00 -9.03710079e+00]
[-8.94038296e+00 -9.39312744e+00 -8.95394421e+00 -9.15598869e+00
-9.51657009e+00 -9.49694061e+00 -9.27466774e+00 -9.00481510e+00
-8.98933983e+00 -9.71714592e+00 -9.01079369e+00 -9.30568409e+00
-8.90756321e+00 -9.35032463e+00 -9.03350258e+00 -9.22462177e+00
-8.92695618e+00 -9.79487419e+00 -9.79698277e+00 -9.79577637e+00
-9.33724213e+00 -9.21109200e+00 -9.70125771e+00 -9.24382019e+00]
[-9.86211491e+00 -9.51393509e+00 -9.37772274e+00 -9.41800690e+00
-9.43677711e+00 -9.89223003e+00 -9.28600979e+00 -9.99099922e+00
-9.07779312e+00 -9.12089157e+00 -9.14855766e+00 -9.24766922e+00
-9.76994991e+00 -9.91010284e+00 -9.44480515e+00 -9.23553276e+00
-9.36246490e+00 -9.79187870e+00 -9.61837196e+00 -9.76517963e+00
-9.89740276e+00 -9.45510960e+00 -9.59325600e+00 -9.74827385e+00]
[-9.40972710e+00 -9.01603031e+00 -9.93573380e+00 -9.13676167e+00
-9.28765774e+00 -9.73269463e+00 -9.51167011e+00 -9.17000198e+00
-9.36943722e+00 -9.52538586e+00 -9.99750805e+00 -9.29893780e+00
-9.07684135e+00 -9.91517544e+00 -9.71395302e+00 -9.00292683e+00
-9.13676357e+00 -9.21596527e+00 -9.10069180e+00 -9.56141281e+00
-9.12449551e+00 -9.74947071e+00 -9.65823746e+00 -9.45827579e+00]
[-9.39604759e+00 -9.35669327e+00 -9.80259514e+00 -9.03435612e+00
-9.54619598e+00 -9.70109463e+00 -9.52039242e+00 -9.06680107e+00
-9.36726284e+00 -9.78265762e+00 -9.59831905e+00 -8.88793659e+00
-9.09654808e+00 -9.44306087e+00 -9.17529869e+00 -8.93586349e+00
-9.25157452e+00 -9.47253513e+00 -9.82823849e+00 -9.78263283e+00
-9.47300148e+00 -9.53035831e+00 -9.72244072e+00 -9.59741020e+00]
[-8.92668915e+00 -8.95959187e+00 -8.92352676e+00 -8.99550629e+00
-8.95387173e+00 -9.04618263e+00 -9.16631031e+00 -9.39356804e+00
-9.60120487e+00 -9.25094509e+00 -9.34005833e+00 -9.28672314e+00
-9.41731262e+00 -8.96460629e+00 -9.13993168e+00 -9.39991188e+00
-9.58082104e+00 -9.20571899e+00 -9.78354359e+00 -9.27062416e+00
-9.56785297e+00 -9.37395573e+00 -9.34231758e+00 -9.79150963e+00]
[-9.22062397e+00 -9.26603127e+00 -9.19303322e+00 -8.89337540e+00
-9.26180077e+00 -9.04740906e+00 -9.10731316e+00 -9.00265598e+00
-9.45539474e+00 -8.95206261e+00 -9.25043583e+00 -9.32364655e+00
-9.40298653e+00 -9.67225075e+00 -9.66810322e+00 -9.67398453e+00
-9.22268391e+00 -9.31435776e+00 -9.41531181e+00 -9.02474403e+00
-8.92413425e+00 -9.73920441e+00 -9.42769814e+00 -9.02559662e+00]
[-9.08895016e+00 -9.56986046e+00 -9.15161705e+00 -9.36404324e+00
-9.32077312e+00 -9.47910118e+00 -9.78302002e+00 -8.86534214e+00
-9.04791069e+00 -9.52806568e+00 -9.35073948e+00 -9.75178814e+00
-9.04680061e+00 -9.77729702e+00 -9.22823620e+00 -8.83711147e+00
-9.44953632e+00 -9.15989971e+00 -9.57392120e+00 -8.99037552e+00
-9.25353146e+00 -9.14841938e+00 -8.91694736e+00 -9.23147774e+00]
[ 1.00000036e+00 -5.79812213e-24 1.00000000e+00 1.00000000e+00
-1.08714514e-20 1.00000072e+00 1.57738216e-22 -7.29723553e-24
6.66426537e-25 6.15038284e-24 1.00000083e+00 1.04001440e-22
9.99999940e-01 1.06457667e-24 1.35238921e-21 -4.12253029e-24
1.00000024e+00 9.99999821e-01 1.00000024e+00 7.99417393e-25
1.61733954e-22 9.99999821e-01 4.20054801e-22 4.26986711e-23]]
Can you please help resolve this issue? The loss becomes negative, and the optimal schedule contains negative values, although it is supposed to lie in [0, 1].
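For what it's worth, this behavior usually means the decision variable is effectively unconstrained: nothing in the loss keeps `schedule` inside [0, 1], so gradient descent can push it arbitrarily far negative and drive the loss toward minus infinity. A minimal numpy sketch (with made-up values) of the two standard remedies, sigmoid reparameterization and projection by clipping:

```python
import numpy as np

# Unconstrained logits: gradient descent may push these anywhere.
z = np.array([-9.7, 0.0, 3.2])

# Remedy 1: sigmoid reparameterization. Optimizing over z while using
# sigmoid(z) as the schedule keeps every entry strictly inside (0, 1).
schedule = 1.0 / (1.0 + np.exp(-z))

# Remedy 2: projection. After each plain gradient step, clip the
# variable back into the box [0, 1].
projected = np.clip(z, 0.0, 1.0)
```

Either approach guarantees the reported schedule stays in the required range, regardless of how the underlying variable moves during optimization.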

maximize AUCROC

I am running the TFCO optimizer to maximize AUC-ROC on an imbalanced dataset. You can find my notebook attached; I would appreciate it if you could take a look. I am getting the following error:
~\AppData\Local\Continuum\anaconda3\lib\site-packages\tensorflow_core\python\ops\cond_v2.py in _IfGrad(op, *grads)
164 # Resolve references to forward graph tensors in grad graphs and ensure
165 # they are in-scope, i.e., belong to one of outer graphs of the grad graph.
--> 166 true_grad_inputs = _resolve_grad_inputs(true_graph, true_grad_graph)
167 false_grad_inputs = _resolve_grad_inputs(false_graph, false_grad_graph)
168

~\AppData\Local\Continuum\anaconda3\lib\site-packages\tensorflow_core\python\ops\cond_v2.py in _resolve_grad_inputs(cond_graph, grad_graph)
422 # cond_graph.
423 if t.graph != grad_graph.outer_graph:
--> 424 assert t.graph == cond_graph
425 # internal_captures are not treated as intermediates and hence not added
426 # to If op outputs. So we get the outer tensor corresponding to those

AssertionError:

Directly optimize AUC-notebook.zip

Precision rate not correct.

Hi, thanks for the great work. I am trying to replicate the Keras example provided. It worked as intended with the recall constraint. But when I try to maximize recall subject to a precision constraint (>= 0.9), it always ends up ignoring the precision constraint. I then removed the constraint entirely and only maximized precision; it again ended up maximizing recall instead. So I figure there is something wrong with the precision rate. The code can be found below.
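For reference, precision and recall share a numerator but have different denominators, which is why a mix-up between the two rates can go unnoticed. A small numpy check of the definitions, on toy data:

```python
import numpy as np

labels = np.array([1, 1, 1, 1, 0, 0, 0, 0])  # ground truth
preds = np.array([1, 1, 0, 0, 1, 0, 0, 0])   # thresholded predictions

tp = np.sum((preds == 1) & (labels == 1))  # true positives
fp = np.sum((preds == 1) & (labels == 0))  # false positives
fn = np.sum((preds == 0) & (labels == 1))  # false negatives

precision = tp / (tp + fp)  # fraction of positive predictions that are correct
recall = tp / (tp + fn)     # fraction of actual positives that are recovered
```

Comparing the library's reported rate against a direct computation like this on a fixed batch is a quick way to confirm whether the precision rate itself is wrong.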

Building Fair Model with Bert

I am trying to build a fairness model using BERT, extending Wiki_toxicity_fairness. I have successfully built all kinds of deep learning models (LSTM, Bi-LSTM, CNN-LSTM, CNN+Bi-LSTM). I then created a BERT model, which works fine with unconstrained optimization.

Can someone please help?

This is how I create the model:

def create_model(max_seq_len, bert_ckpt_file):

    with tf.io.gfile.GFile(bert_config_file, "r") as reader:
        bc = StockBertConfig.from_json_string(reader.read())
    bert_params = map_stock_config_to_params(bc)
    bert_params.adapter_size = None
    bert = BertModelLayer.from_params(bert_params, name="bert")

    input_ids = keras.layers.Input(shape=(max_seq_len,), dtype='int32', name="input_ids")
    bert_output = bert(input_ids)

    print("bert shape", bert_output.shape)

    cls_out = keras.layers.Lambda(lambda seq: seq[:, 0, :])(bert_output)
    cls_out = keras.layers.Dropout(0.5)(cls_out)
    logits = keras.layers.Dense(units=768, activation="tanh")(cls_out)
    logits = keras.layers.Dropout(0.5)(logits)
    logits = keras.layers.Dense(units=len(classes), activation="softmax")(logits)

    model = keras.Model(inputs=input_ids, outputs=logits)
    model.build(input_shape=(None, max_seq_len))

    load_stock_weights(bert, bert_ckpt_file)

    return model

The unconstrained model works fine.

But when I try to use this model in the constrained and robust models, I get an error in optimizer.minimize like:

Gradient update.

---> 53 optimizer.minimize(problem, var_list=var_list)
54
55 # Once in every skip_steps iterations, snapshot model parameters

~/Desktop/sharmi/datascience/CrypTen/samplevenv/lib/python3.7/site-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py in minimize(self, loss, var_list, grad_loss, name)
316 """
317 grads_and_vars = self._compute_gradients(
--> 318 loss, var_list=var_list, grad_loss=grad_loss)
319
320 return self.apply_gradients(grads_and_vars, name=name)

~/Desktop/sharmi/datascience/CrypTen/samplevenv/lib/python3.7/site-packages/tensorflow_constrained_optimization/python/train/constrained_optimizer.py in _compute_gradients(self, loss, var_list, grad_loss)
627 loss_fn = self._formulation.get_loss_fn(loss)
628 return super(ConstrainedOptimizerV2, self)._compute_gradients(
--> 629 loss_fn, var_list=var_list, grad_loss=grad_loss)
630
631 def _create_slots(self, var_list):

~/Desktop/sharmi/datascience/CrypTen/samplevenv/lib/python3.7/site-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py in _compute_gradients(self, loss, var_list, grad_loss)
350 if not callable(var_list):
351 tape.watch(var_list)
--> 352 loss_value = loss()
353 if callable(var_list):
354 var_list = var_list()

~/Desktop/sharmi/datascience/CrypTen/samplevenv/lib/python3.7/site-packages/tensorflow_constrained_optimization/python/train/proxy_lagrangian_optimizer.py in partial_loss_gradient_fn()
574 def partial_loss_gradient_fn():
575 objective, constraints, proxy_constraints = (
--> 576 minimization_problem.components())
577 constraints = tf.reshape(constraints, shape=(num_constraints,))
578 if proxy_constraints is None:

~/Desktop/sharmi/datascience/CrypTen/samplevenv/lib/python3.7/site-packages/tensorflow_constrained_optimization/python/rates/rate_minimization_problem.py in components(self)
265 # will be evaluated (differentiated) only once.
266 value_memoizer = {}
--> 267 objective = self._objective(self._structure_memoizer, value_memoizer)
268 constraints = tf.stack([
269 cc(self._structure_memoizer, value_memoizer) for cc in self._constraints

~/Desktop/sharmi/datascience/CrypTen/samplevenv/lib/python3.7/site-packages/tensorflow_constrained_optimization/python/rates/deferred_tensor.py in call(self, structure_memoizer, value_memoizer)
356 A Tensor-like object containing the value of this DeferredTensor.
357 """
--> 358 value, _ = self._value_and_auto_cast(structure_memoizer, value_memoizer)
359 return value
360

~/Desktop/sharmi/datascience/CrypTen/samplevenv/lib/python3.7/site-packages/tensorflow_constrained_optimization/python/rates/deferred_tensor.py in _value_and_auto_cast(self, structure_memoizer, value_memoizer)
506 # pylint: disable=protected-access
507 arg._value_and_auto_cast(structure_memoizer, value_memoizer)
--> 508 for arg in self._args
509 # pylint: enable=protected-access
510 ]

~/Desktop/sharmi/datascience/CrypTen/samplevenv/lib/python3.7/site-packages/tensorflow_constrained_optimization/python/rates/deferred_tensor.py in (.0)
506 # pylint: disable=protected-access
507 arg._value_and_auto_cast(structure_memoizer, value_memoizer)
--> 508 for arg in self._args
509 # pylint: enable=protected-access
510 ]

~/Desktop/sharmi/datascience/CrypTen/samplevenv/lib/python3.7/site-packages/tensorflow_constrained_optimization/python/rates/deferred_tensor.py in _value_and_auto_cast(self, structure_memoizer, value_memoizer)
583 values[ii] = tf.cast(values[ii], dtype=dtype)
584
--> 585 result = (self._callback(*values), auto_cast)
586
587 if value_memoizer is not None and key not in value_memoizer:

~/Desktop/sharmi/datascience/CrypTen/samplevenv/lib/python3.7/site-packages/tensorflow_constrained_optimization/python/rates/term.py in average_loss_fn(positive_weights_value, negative_weights_value, predictions_value)
996 axis=1)
997 losses = self._loss.evaluate_binary_classification(
--> 998 predictions_value, weights)
999
1000 return tf.reduce_mean(losses)

~/Desktop/sharmi/datascience/CrypTen/samplevenv/lib/python3.7/site-packages/tensorflow_constrained_optimization/python/rates/loss.py in evaluate_binary_classification(self, predictions, weights)
313 Tensor with exactly two columns.
314 """
--> 315 predictions = _convert_to_binary_classification_predictions(predictions)
316 columns = helpers.get_num_columns_of_2d_tensor(weights, name="weights")
317 if columns != 2:

~/Desktop/sharmi/datascience/CrypTen/samplevenv/lib/python3.7/site-packages/tensorflow_constrained_optimization/python/rates/loss.py in _convert_to_binary_classification_predictions(predictions)
60 raise TypeError("predictions must be floating-point")
61
---> 62 return helpers.convert_to_1d_tensor(predictions, name="predictions")
63
64

~/Desktop/sharmi/datascience/CrypTen/samplevenv/lib/python3.7/site-packages/tensorflow_constrained_optimization/python/rates/helpers.py in convert_to_1d_tensor(tensor, name)
68
69 if sum(dim.value != 1 for dim in dims) > 1:
---> 70 raise ValueError("all but one dimension of %s must have size one" % name)
71
72 # Reshape to a rank-1 Tensor.

ValueError: all but one dimension of predictions must have size one
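The final ValueError suggests the predictions passed to TFCO have more than one column: the model above ends in a softmax Dense layer with `len(classes)` units, while TFCO's binary-classification rates expect a single score per example. A minimal numpy illustration of reducing a two-class softmax output to a single score column (an assumed two-class setup):

```python
import numpy as np

# Hypothetical two-class softmax output: one column per class.
probs = np.array([[0.9, 0.1],
                  [0.2, 0.8]])

# TFCO's binary rates want a rank-1 array of scores, one per example;
# taking the positive-class column (or the logit difference) achieves that.
scores = probs[:, 1]
```

In the Keras setup above, the analogous fix would be to have the prediction function return a single-column score rather than the full softmax output.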


Code for invoking constrained BERT model

# Create model, and separate prediction functions for the two streams.
# For the predictions, we use a nullary function returning a Tensor to
# support eager mode.

# Create, compile and fit model.

model_constrained = create_model(data.max_seq_len, bert_ckpt_file)
def predictions():
    return model_constrained(features_tensor)

def predictions_sen():
    return model_constrained(features_tensor_sen)

# ---------------- Building constraints ----------------

#batch_size = 128
epsilon = 0.02 # Desired false-positive rate threshold.

# Set up separate contexts for the two minibatch streams.

context = tfco.rate_context(predictions, lambda: labels_tensor)
context_sen = tfco.rate_context(predictions_sen, lambda: labels_tensor_sen)

print(np.shape(predictions))
print(np.shape(predictions_sen))

# Compute the objective using the first stream.

objective = tfco.error_rate(context)

# Compute the constraints using the second stream. For each group, subset
# the examples belonging to that group from the second stream, and add a
# constraint on the group's false positive rate.

constraints = []
for ii in range(num_groups):
    # We pass the group index ii as a default argument to the function, so
    # that the function retains a local copy of this variable and is
    # unaffected by changes to the variable outside the function.
    context_sen_subset = context_sen.subset(
        lambda kk=ii: groups_tensor_sen[:, kk] > 0)
    constraints.append(
        tfco.false_positive_rate(context_sen_subset) <= epsilon)

# Create a rate minimization problem.

problem = tfco.RateMinimizationProblem(objective, constraints)

# Set up a constrained optimizer.

optimizer = tfco.ProxyLagrangianOptimizerV2(
    optimizer=tf.keras.optimizers.Adam(learning_rate=hparams["learning_rate"]),
    num_constraints=problem.num_constraints)

# The list of variables to optimize includes the model weights, and the
# trainable variables from the rate minimization problem and the
# constrained optimizer.

var_list = (model_constrained.trainable_weights + problem.trainable_variables +
            optimizer.trainable_variables())

# Create a temporary directory to record model snapshots.

#batch_size = 128
temp_directory = tempfile.mktemp()
os.mkdir(temp_directory)

# Indices of sensitive group members.

protected_group_indices = np.nonzero(groups_train.sum(axis=1))[0]

num_examples = data.train_x.shape[0]
print(num_examples)
num_examples_sen = protected_group_indices.shape[0]
batch_size = hparams["batch_size"]

# Number of steps needed for one epoch over the training sample.

num_steps = int(num_examples / batch_size)

# Number of steps to skip before checkpointing the current model.

skip_steps = int(num_steps / 10)

# Lists of recorded objectives and constraint violations.

objectives_list = []
violations_list = []

start_time = time.time()

# Loop over minibatches.

for batch_index in range(num_steps):
    # Indices for the current minibatch in the first stream.
    batch_indices = np.arange(batch_index * batch_size, (batch_index + 1) * batch_size)
    batch_indices = [ind % num_examples for ind in batch_indices]
    #print(batch_indices)

    # Indices for the current minibatch in the second stream.
    batch_indices_sen = np.arange(batch_index * batch_size, (batch_index + 1) * batch_size)
    batch_indices_sen = [protected_group_indices[ind % num_examples_sen]
                         for ind in batch_indices_sen]
    #print(batch_indices_sen)

    # Assign features, labels and groups from the minibatches to the respective tensors.
    features_tensor.assign(data.train_x[batch_indices, :])
    labels_tensor.assign(labels_train[batch_indices])

    features_tensor_sen.assign(data.train_x[batch_indices_sen, :])
    labels_tensor_sen.assign(labels_train[batch_indices_sen])
    groups_tensor_sen.assign(groups_train[batch_indices_sen, :])

    print("****************", np.shape(var_list), problem)

    # Gradient update.
    optimizer.minimize(problem, var_list=var_list)

    # Once in every skip_steps iterations, snapshot the model parameters
    # and evaluate the objective and constraint violations on the validation set.
    if batch_index % skip_steps == 0:
        # Evaluate the model on the validation set.
        scores_vali = model_constrained.predict(valid)

        error = error_rate(labels_vali, scores_vali)  # Error rate
        group_fprs = group_false_positive_rates(
            labels_vali, scores_vali, groups_vali)
        violations = [z - epsilon for z in group_fprs]  # FPR constraint violations

        objectives_list.append(error)
        violations_list.append(violations)

        # Save model weights to the temporary directory.
        model_constrained.save_weights(
            temp_directory + "constrained_" +
            str(int(batch_index / skip_steps)) + ".h5")

    # Display the most recently recorded objective and the maximum constraint violation.
    elapsed_time = time.time() - start_time
    sys.stdout.write(
        "\rIteration %d / %d: Elapsed time = %d secs, Error = %.3f, Violation = %.3f " %
        (batch_index + 1, num_steps, elapsed_time, objectives_list[-1],
         max(violations_list[-1])))

# Select the best model from the recorded iterates, using TFCO's
# find-best-candidates heuristic.

best_index = tfco.find_best_candidate_index(
    np.array(objectives_list), np.array(violations_list), rank_objectives=False)

# Load model weights for the best iterate from the snapshots saved previously.

model_constrained.load_weights(
    temp_directory + "constrained_" + str(best_index) + ".h5")

Projected SGD updates for the parameters

Dear authors,
As mentioned in "lagrangian_optimizer.py", lines 34-38, the optimizer is most similar to Algorithm 3 of the paper. I am curious whether you project the parameters onto the constraint surface after each SGD update.
It seems that in the code, the projection is applied only to the multipliers, and not to the parameters. As far as I understand, without such a projection we cannot guarantee that the constraints are met at the end.

I would appreciate it if you could clarify this point, which is quite important for me.

Thanks a lot,
Maryam
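For context, a projected update for the parameters would look like the following numpy sketch, with a hypothetical box constraint [0, 1] as the feasible set (as the question notes, the Lagrangian optimizers project only the multipliers; the parameters are steered by the Lagrangian penalty instead):

```python
import numpy as np

theta = np.array([0.9, 0.4])  # model parameters
grad = np.array([2.0, -3.0])  # gradient of the Lagrangian w.r.t. theta
lr = 0.1

theta = theta - lr * grad         # plain SGD step: may leave the feasible set
theta = np.clip(theta, 0.0, 1.0)  # projection back onto the box [0, 1]
```

This kind of projection is only practical when the feasible set is simple (a box or simplex); for data-dependent rate constraints, there is no cheap projection, which is why the Lagrangian approach penalizes violations rather than projecting them away.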

please push out 0.3 release to PyPi

As others and I have discovered, the current version of TensorFlow is now incompatible with TFCO release 0.2 on PyPI.

Would it be possible to bring the TFCO version on PyPI up to date, so that those trying to get the sample notebooks working can install directly from PyPI instead of pulling the latest TFCO from GitHub?

TypeError: _compute_gradients() got an unexpected keyword argument 'tape'

When trying to use TFCO - either for my own problem, or just reproducing the Proxy Lagrangian problem in the README - I get a TypeError:

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-6-61c999c50081> in <module>
     15 
     16 for ii in xrange(1000):
---> 17   optimizer.minimize(problem, var_list=var_list)
     18 
     19 trained_weights = weights.numpy()

/usr/local/anaconda3/lib/python3.7/site-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py in minimize(self, loss, var_list, grad_loss, name, tape)
    495     """
    496     grads_and_vars = self._compute_gradients(
--> 497         loss, var_list=var_list, grad_loss=grad_loss, tape=tape)
    498     return self.apply_gradients(grads_and_vars, name=name)
    499 

TypeError: _compute_gradients() got an unexpected keyword argument 'tape'

I am running (from pip list):

tensorflow                          2.4.1
tensorflow-constrained-optimization 0.2
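A plausible explanation (an assumption, not verified against every release) is that the `tape` keyword argument was added to Keras' `OptimizerV2._compute_gradients` around TensorFlow 2.4, after TFCO 0.2 was published, so the overridden method no longer matches the base-class signature. A small helper sketch for flagging the incompatible pairing (the version cutoffs here are assumptions):

```python
def needs_newer_tfco(tf_version, tfco_version):
    """Flags the pairing that raises the 'tape' TypeError (assumed cutoffs)."""
    tf_mm = tuple(int(x) for x in tf_version.split(".")[:2])
    tfco_mm = tuple(int(x) for x in tfco_version.split(".")[:2])
    # TF >= 2.4 passes tape= into _compute_gradients; TFCO <= 0.2 predates it.
    return tf_mm >= (2, 4) and tfco_mm <= (0, 2)
```

Installing a TFCO build newer than 0.2 (e.g. from the GitHub source, as the previous issue requests) is the usual workaround.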

Constrained minimization in a regression task

I tried to impose a hard constraint in a regression task using the ConstrainedMinimizationProblem interface.
I created a toy model which predicts the energy E of a particle from its momentum p. For each prediction, the energy-momentum relation E² = m² + p² should be fulfilled (m is the fixed mass of the particle). To approach this equality constraint, I set two inequality constraints that confine the mass to a small interval enclosing the true value.
I trained the model and computed the mass in a final step. I expected only values within the given interval, but I received a broad distribution.
My attempts can be found here:
https://colab.research.google.com/drive/1UizjQE2aFsW39JiJHFkwqPTWoGhyMOuh?usp=sharing
I would be glad to receive some suggestions! :)
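For concreteness, the two inequality constraints described above can be written in the usual "violation <= 0 means satisfied" convention as follows; this numpy sketch (with made-up numbers) only illustrates the constraint algebra, not the training setup:

```python
import numpy as np

# Target interval [m_lo, m_hi] around the true mass (made-up numbers).
m_true, half_width = 1.0, 0.05
m_lo, m_hi = m_true - half_width, m_true + half_width

# Predicted energies and momenta for two particles.
E = np.array([1.5, 2.0])
p = np.array([1.1, 1.7])
m2 = E**2 - p**2  # squared mass implied by each prediction

# Two inequality constraints equivalent to m_lo^2 <= E^2 - p^2 <= m_hi^2.
g_lower = m_lo**2 - m2  # violated when the implied mass is too small
g_upper = m2 - m_hi**2  # violated when the implied mass is too large

satisfied = (g_lower <= 0) & (g_upper <= 0)
```

Note that a Lagrangian method only drives the *average* violation toward zero over training; individual predictions can still land outside the interval, which is one possible source of the broad mass distribution observed.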

Support GPUs?

Hello, does this library run on GPUs? I am interested in solving an inequality-constrained minimization problem.

Constrained optimization problem using a keras deep neural network model

Hi, I am currently facing an issue while implementing a constrained optimization problem. Although I can run the code, the constraints are not satisfied in the converged solution, and I require that they hold strictly. Is it possible that, when using the ConstrainedMinimizationProblem interface, the constraints end up not being satisfied? Also, if possible, is there a way to track how the Lagrange multipliers vary with each minimization iteration? That would let me tell whether the constraints are being satisfied. Thank you.

The optimization problem is shown below:

Goal: I want to obtain the optimized values of m, p and t that maximize the objective function below.

Objective function f(m, p, t) = (L^1.5) / D
where L and D are both functions of m, p and t.
Both L and D are evaluated using a keras deep neural network, where a given m, p and t values would output its L and D values.

Constraints:
0.01 <= m <= 0.06
0.1 <= p <= 0.6
0.18 <= t <= 0.24
(there is another constraint, which is more complicated to write down, but is also a constraint on m, p and t)

Please do let me know if more information about the problem is needed. Thank You.
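On the multiplier-tracking question above: in a Lagrangian formulation, the multiplier of each constraint grows while that constraint is violated and decays toward zero once it is satisfied, so plotting the multipliers over iterations is a useful diagnostic. A toy sketch of the projected ascent update (illustrative only, not TFCO's internal state):

```python
lam = 0.0  # Lagrange multiplier for one inequality constraint
lr = 0.5   # multiplier step size

history = []
for violation in [0.2, 0.1, -0.05, -0.2]:
    # Ascent step on the multiplier, projected onto lam >= 0: the multiplier
    # rises while the constraint is violated and shrinks once it is satisfied.
    lam = max(0.0, lam + lr * violation)
    history.append(lam)
```

A multiplier that keeps growing throughout training is a strong sign the corresponding constraint is never being met. Note also that Lagrangian methods enforce constraints only at (approximate) convergence and on average, so strictly hard constraints like the box bounds on m, p and t may be better handled by reparameterizing the variables to lie in their ranges.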

Minimizing TPR on Adult dataset using Keras Model

I am trying to run the Recall constraint notebook with my own Keras model, to implement an Equality of Opportunity constraint under a sensitive attribute (instead of a recall constraint) on the Adult Income dataset.
But the KerasPlaceholder mentioned there only fetches the current batch's predictions and labels. How can I fetch the corresponding sensitive attribute of the input batch from the KerasPlaceholder? Basically, how do I implement the constraint from the adult notebook using the KerasPlaceholder? Any help would be much appreciated!
Thank you.

Can we use losses other than rate-based losses?

It seems that the examples in the repository only use error_rate as the objective function. Would we get reasonable results if we used other out-of-the-box loss functions supported by TensorFlow (e.g., binary cross-entropy or MSE)?
