
I am trying to implement multivariate regression using the TensorFlow 2 API.

import tensorflow as tf
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

df = pd.DataFrame({'A': np.array([100, 105.4, 108.3, 111.1, 113, 114.7]),
                   'B': np.array([11, 11.8, 12.3, 12.8, 13.1, 13.6]),
                   'C': np.array([55, 56.3, 57, 58, 59.5, 60.4]),
                   'Target': np.array([4000, 4200.34, 4700, 5300, 5800, 6400])})

X = df.iloc[:, :3].values
Y = df.iloc[:, 3].values

plt.scatter(X[:, 0], Y)
plt.show()

X = tf.convert_to_tensor(X, dtype=tf.float32)
Y = tf.convert_to_tensor(Y, dtype=tf.float32)

def poly_model(X, w, b):
    mult = tf.matmul(X, w)
    pred = tf.add(tf.matmul(X, w), b)
    return pred

w = tf.cast(tf.Variable(np.random.randn(3, 1), name='weight'), tf.float32)
b = tf.Variable(np.random.randn(), name='bias')

model = poly_model(X, w, b)

cost = tf.reduce_sum(tf.square(Y - model))

train_op = tf.optimizers.SGD(0.001)

train_op.minimize(cost, var_list=[w])

At the last line it throws:

TypeError: 'tensorflow.python.framework.ops.EagerTensor' object is not callable

Also, I am a bit confused:

1) How do I proceed without using Session? Do I just do something like output = train_op(X)?

2) Do I need to use tf.GradientTape() as tape, or is that only for graphs?

Error trace:

TypeError                                 Traceback (most recent call last)
<ipython-input-1-ffbbbe1a3709> in <module>()
     32 train_op = tf.optimizers.SGD(0.001)
     33 
---> 34 train_op.minimize(cost, var_list=[w])

~/anaconda3/envs/dpl/lib/python3.6/site-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py in minimize(self, loss, var_list, grad_loss, name)
    294     """
    295     grads_and_vars = self._compute_gradients(
--> 296         loss, var_list=var_list, grad_loss=grad_loss)
    297 
    298     return self.apply_gradients(grads_and_vars, name=name)

~/anaconda3/envs/dpl/lib/python3.6/site-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py in _compute_gradients(self, loss, var_list, grad_loss)
    326     with backprop.GradientTape() as tape:
    327       tape.watch(var_list)
--> 328       loss_value = loss()
    329     grads = tape.gradient(loss_value, var_list, grad_loss)
    330 

TypeError: 'tensorflow.python.framework.ops.EagerTensor' object is not callable
  • Have you tried enabling eager execution? tf.enable_eager_execution() # requires r1.7 tensorflow.org/tutorials/eager/eager_basics Commented Mar 12, 2019 at 18:46
  • @ScottSkiles: I am using TensorFlow 2.0, as I said, in which eager execution is enabled by default. If I check whether it is enabled, it is true. Commented Mar 12, 2019 at 22:20
  • Ah, did not realize that. Will follow up. Commented Mar 13, 2019 at 1:05
  • Can you add more of the stack trace leading up to the error message? Commented Mar 13, 2019 at 19:09
  • @ScottSkiles: I updated the question with the trace. Commented Mar 13, 2019 at 21:01

1 Answer


2) You definitely need to use GradientTape.

Check out the Effective TF2 Guide.
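
As for the TypeError itself: in TF2, optimizer.minimize expects loss to be a zero-argument callable rather than a precomputed tensor, which is exactly why the traceback dies at loss_value = loss(). Note also that tf.cast(tf.Variable(...)) returns a plain tensor, so your w is not a trainable variable as written. Here is a minimal sketch of the fixed last step, assuming the X, Y, and poly_model from your question:

w = tf.Variable(np.random.randn(3, 1).astype(np.float32), name='weight')
b = tf.Variable(np.random.randn(1).astype(np.float32), name='bias')

# `cost` must be a callable that recomputes the loss, not a tensor.
# squeeze turns the (6, 1) prediction into shape (6,) to match Y.
cost = lambda: tf.reduce_sum(tf.square(Y - tf.squeeze(poly_model(X, w, b))))

train_op = tf.optimizers.SGD(0.001)
train_op.minimize(cost, var_list=[w, b])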

1) Something like this:

import tensorflow as tf
import numpy as np

print("TensorFlow version: {}".format(tf.__version__))
print("Eager execution: {}".format(tf.executing_eagerly()))

# Transpose so rows are samples and columns are features: shape (6, 3).
x = np.array([
    [100, 105.4, 108.3, 111.1, 113, 114.7],
    [11, 11.8, 12.3, 12.8, 13.1, 13.6],
    [55, 56.3, 57, 58, 59.5, 60.4]
]).T.astype(np.float32)

y = np.array([4000, 4200.34, 4700, 5300, 5800, 6400], dtype=np.float32)


class Model(object):
    def __init__(self, x, y):
        # Initialize the weights and bias to random values:
        # one weight per feature, plus a single bias.
        self.W = tf.Variable(tf.random.normal((x.shape[1], 1)))
        self.b = tf.Variable(tf.random.normal((1,)))

    def __call__(self, x):
        # Linear model: (6, 3) @ (3, 1) -> (6, 1), squeezed to (6,).
        return tf.squeeze(tf.matmul(x, self.W) + self.b)


def loss(predicted_y, desired_y):
    return tf.reduce_sum(tf.square(predicted_y - desired_y))

optimizer = tf.optimizers.Adam(0.1)


def train(model, inputs, outputs):
    # Record the forward pass on the tape, then differentiate the loss
    # with respect to the trainable variables and apply the gradients.
    with tf.GradientTape() as t:
        current_loss = loss(model(inputs), outputs)
    grads = t.gradient(current_loss, [model.W, model.b])
    optimizer.apply_gradients(zip(grads, [model.W, model.b]))
    return current_loss


model = Model(x, y)

for i in range(10000):
    current_loss = train(model, x, y)
    if i % 1000 == 0:
        print("step {}: loss {}".format(i, current_loss.numpy()))
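
Regarding the "only for graphs" part of question 2: GradientTape is how gradients are computed in eager mode, and the exact same train step can optionally be compiled into a graph with tf.function for speed. A sketch, assuming the train function above (this is optional, not required for correctness):

train_fast = tf.function(train)  # traces `train` into a graph on first call

for i in range(1000):
    train_fast(model, x, y)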

Comments

  • I am not sure. It doesn't state anywhere that it is mandatory, only recommended.
  • From what I understand, that's how gradients are calculated and applied with eager execution.
  • @DecentGradient: Nice one! For a great answer, it would be really helpful if you explained how to fix the problem using GradientTape!
  • Also, I don't see anything in your link. Can you link to something more specific than the entire 2.0 change notes?
