
I am not really a TensorFlow expert. I have been using it with provided models and code, have played around with it a bit, and am trying to get better at it.

I got hold of a model that I'd like to play around with, in the form of a TensorFlow.js model (?). It consists of a model.json plus a number of "shard1ofX" weight files. I also got some JS code to go with it, which I roughly understand, but I am not really a JS developer, and I would also like to use the model and code in a standalone application rather than on the web.

The model gets loaded like this in the JS implementation:

tf.loadGraphModel(path_to_model_json)

Is it somehow possible to read this model with the Python TensorFlow implementation in order to use it? Googling around, I found a ton of information about converting a model TO the TensorFlow.js format, but not the other way around.

Help would be greatly appreciated!

2 Answers


The tfjs-converter project supports going from JavaScript to Python. I haven't tested this, but it looks like these flags should get the job done:

tensorflowjs_converter --input_format tfjs_layers_model \
    --output_format keras_saved_model \
    /tmp/tensorflowjs_model \
    /tmp/keras_model

https://github.com/tensorflow/tfjs/blob/master/tfjs-converter/README.md#format-conversion-support-tables
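
If that conversion succeeds, the result is a Keras SavedModel directory that you can load with the regular Keras API in Python. A minimal sketch, assuming the output path from the command above and a placeholder input shape (adjust both to your model):

import numpy as np
import tensorflow as tf

# Load the Keras SavedModel produced by the converter
model = tf.keras.models.load_model('/tmp/keras_model')
model.summary()

# Run a prediction on a dummy input (replace the shape with your model's actual input shape)
dummy_input = np.zeros((1, 224, 224, 3), dtype=np.float32)
prediction = model.predict(dummy_input)
print(prediction.shape)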


3 Comments

That seems to work with a layers model. But I only have a graph model, and it seems there is no way back from that?
Looks like someone wrote a tool for converting a graph model to a saved model. Might work for you? github.com/patlevin/tfjs-to-tf
That looks as if it should do exactly what I want, but unfortunately it crashes with a rather cryptic error message: Error: Unsupported data type: 21. I will have to look into whether this is a problem with my environment or versioning, or with the model I have, but thanks a ton anyway!

You can use the tfjs-graph-converter library:

First, install it with pip:

pip install tfjs-graph-converter

Then you can use it like this:

import numpy as np
import tensorflow as tf

from tfjs_graph_converter.api import load_graph_model, graph_to_function_v2

# Load the TF.js graph model and wrap it in a callable TF function
graph = load_graph_model('model_js/model.json')
model_function = graph_to_function_v2(graph)

# Convert the inputs (NumPy arrays) to tensors, adding a batch dimension
x_tensor = tf.convert_to_tensor(np.expand_dims(input_npy, axis=0), dtype=tf.float32)
y_tensor = tf.convert_to_tensor(np.expand_dims(ground_truth_npy, axis=0), dtype=tf.float32)

# Get the prediction (first tensor in the list of outputs) and calculate a loss (MSE as an example)
y_pred = model_function(x_tensor)[0]
loss = tf.reduce_mean(tf.square(y_pred - y_tensor))

PS: My inputs were long arrays of shape (n, 1), so I had to use np.expand_dims in my code. However, the core logic stays pretty much the same. Hope this helps, have a nice day! :D
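
For completeness: if you would rather end up with a regular TensorFlow SavedModel on disk instead of calling the graph directly, the same library also provides a conversion helper. This is a rough sketch only, assuming the graph_model_to_saved_model function and argument order shown in the project's README, with placeholder paths; double-check against the README if it errors:

import tensorflow as tf
from tfjs_graph_converter.api import graph_model_to_saved_model

# Convert the TF.js graph model directory into a standard SavedModel directory
graph_model_to_saved_model('model_js', 'saved_model_dir', ['serve'])

# Load it back like any other SavedModel and inspect the serving signatures
loaded = tf.saved_model.load('saved_model_dir')
print(list(loaded.signatures.keys()))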

