4

In the end you can see that i have tried converting this into a numpy array but I don't understand why tensorflow dosen't support it? I have looked at the other related pages but none seemed to help. Is there some other format i have to do to the data in order to properly fit in model?

this is what keras says: x
Vector, matrix, or array of training data (or list if the model has multiple inputs). If all inputs in the model are named, you can also pass a list mapping input names to data. x can be NULL (default) if feeding from framework-native tensors (e.g. TensorFlow data tensors).

y
Vector, matrix, or array of target (label) data (or list if the model has multiple outputs). If all outputs in the model are named, you can also pass a list mapping output names to data. y can be NULL (default) if feeding from framework-native tensors (e.g. TensorFlow data tensors).

import pandas as pd
from sklearn import preprocessing
from collections import deque
import numpy as np
import random as rd
import time
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, LSTM, BatchNormalization



data = pd.read_csv("TSLA.csv")

data.set_index("Date", inplace=True)
data = data[["Close", "Volume"]]

Back_period_history = 100
Future_predict = 10


def classify(current, future):
    if float(future) > float(current):
        return 1
    else:
        return 0


data["future"] = data["Close"].shift(-Future_predict)
data["target"] = list(map(classify, data["Close"], data["future"]))


#print(data.head(20))

times = sorted(data.index.values)
last_10pct = times[-int(0.1*len(times))]

validation_data = data[(data.index >= last_10pct)]
data = data[(data.index < last_10pct)]

def preproccesing(data):
    data = data.drop("future", 1)

    for col in data.columns:
        if col != "target":
            data[col] = data[col].pct_change()
            data.dropna(inplace=True)
            data[col] = preprocessing.scale(data[col].values)
        data.dropna(inplace = True)

        sequential_data = []
        prev_days = deque(maxlen=Back_period_history)
        for i in data.values:
            prev_days.append([n for n in i[:-1]])
            if len(prev_days) == Back_period_history:
                sequential_data.append([np.array(prev_days), i[-1]])

        rd.shuffle(sequential_data)

        buys = []
        sells = []

        for seq, target in sequential_data:
            if target == 0:
                sells.append([seq, target])
            elif target == 1:
                buys.append([seq,target])

        rd.shuffle(buys)
        rd.shuffle(sells)

        lower = min(len(buys), len(sells))

        buys = buys[:lower]
        sells = sells[:lower]

        sequential_data = buys+sells

        rd.shuffle(sequential_data)

        X = []
        y = []

        for seq, target in sequential_data:
            X.append(sequential_data)
            y.append(target)

        return np.array(X),y


train_x, train_y = preproccesing(data)
validation_x, validation_y = preproccesing(validation_data)

model = Sequential()

model.add(LSTM(
    128, input_shape = (train_x.shape[1:]), activation = "relu", return_sequences = True
))
model.add(Dropout(0.2))
model.add(BatchNormalization())

model.add(LSTM(
    128, input_shape = (train_x.shape[1:]), activation = "relu", return_sequences = True
))
model.add(Dropout(0.2))
model.add(BatchNormalization())

model.add(LSTM(
    128, input_shape = (train_x.shape[1:]), activation = "relu", return_sequences = True
))
model.add(Dropout(0.2))
model.add(BatchNormalization())

model.add(Dense(32, activation = "relu"))
model.add(Dropout(0.2))

model.add(Dense(2, activation = "softmax"))

opt = tf.keras.optimizers.Adam()

model.compile(loss="mse", optimizer=opt, metrics=["accuracy"])

train_x = np.asarray(train_x)
train_y = np.asarray(train_y)
validation_x = np.asarray(validation_x)
validation_y = np.asarray(validation_y)

history = model.fit(train_x, train_y, batch_size = 64, epochs = 7, validation_data = (validation_x, validation_y))```

 
5
  • 2
    Does this answer your question? (Keras) ValueError: Failed to convert a NumPy array to a Tensor (Unsupported object type float) Commented Jun 29, 2020 at 15:15
  • i tried and it dosent Commented Jun 29, 2020 at 15:30
  • Full traceback if you want more than superficial help and guesses. Commented Jun 29, 2020 at 15:46
  • i fixed the issue using convert to tensor but it gives this issue Commented Jun 29, 2020 at 16:05
  • tensorflow.python.framework.errors_impl.InvalidArgumentError: assertion failed: [Condition x == y did not hold element-wise:] [x (sparse_categorical_crossentropy/SparseSoftmaxCrossEntropyWithLogits/Shape_1:0) = ] [64 1] [y (sparse_categorical_crossentropy/SparseSoftmaxCrossEntropyWithLogits/strided_slice:0) = ] [64 100] [[node sparse_categorical_crossentropy/SparseSoftmaxCrossEntropyWithLogits/assert_equal_1/Assert/Assert (defined at FIRST_ML.py:130) ]] [Op:__inference_train_function_6406] Function call stack: train_function Commented Jun 29, 2020 at 16:06

1 Answer 1

4

this worked tf.convert_to_tensor(y)

Sign up to request clarification or add additional context in comments.

Comments

Your Answer

By clicking “Post Your Answer”, you agree to our terms of service and acknowledge you have read our privacy policy.

Start asking to get answers

Find the answer to your question by asking.

Ask question

Explore related questions

See similar questions with these tags.