
I've trained a .pb object detection model in Python using Colab and converted it to the model.json format using the TensorFlow.js converter. I need to load this model inside the browser (no Node.js!) and run inference there. This is the structure of the model folder the converter produced:

model
| - model.json
| - labels.json
| - group1-shard1of2.bin
| - group1-shard2of2.bin

The TensorFlow documentation suggests the following to load such a model:

const model = await tf.loadGraphModel('model/model.json');

or

const model = await tf.loadLayersModel('model/model.json');

I am using the tf.loadGraphModel function. Loading the model works flawlessly, but when I try to run inference with it using this code:

// detect objects in the image.
const img = document.getElementById('img'); 
model.predict(img).then(predictions => {
    console.log('Predictions: ');
    console.log(predictions);
});

it throws the following error:

Uncaught (in promise) Error: The dict provided in model.execute(dict) has keys [...] that are not part of graph
at e.t.checkInputs (graph_executor.js:607)
at e.t.execute (graph_executor.js:193)
at e.t.execute (graph_model.js:338)
at e.t.predict (graph_model.js:291)
at predictImages (detector.php:39)

Am I using the wrong loading function, did the model loading process fail (even though it didn't throw any errors), or is the inference code wrong? Thanks in advance for your support!

EDIT: After using @edkeveked's suggestion to convert the image to a tensor first using this code:

const tensor = tf.browser.fromPixels(img);

and running inference using this:

model.predict(tensor.expandDims(0));

I got this error message:

Uncaught (in promise) Error: This execution contains the node 'Preprocessor/map/while/Exit_2', which has the dynamic op 'Exit'. Please use model.executeAsync() instead. Alternatively, to avoid the dynamic ops, specify the inputs [Preprocessor/map/TensorArrayStack/TensorArrayGatherV3]
    at e.t.compile (graph_executor.js:162)
    at e.t.execute (graph_executor.js:212)
    at e.t.execute (graph_model.js:338)
    at e.t.predict (graph_model.js:291)
    at predictImages (detector.php:38)

After replacing model.predict() with model.executeAsync(), it returned a result that was not what I expected to get from an object detection model:

detector.php:40 (2) [e, e]
  0: e {kept: false, isDisposedInternal: false, shape: Array(3), dtype: "float32", size: 3834, …}
  1: e {kept: false, isDisposedInternal: false, shape: Array(4), dtype: "float32", size: 7668, …}
  length: 2
  __proto__: Array(0)

This is my complete code so far (images added in HTML using PHP):

<script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs/dist/tf.min.js"></script>
    <!-- Load the coco-ssd model. -->
    <script src="https://cdn.jsdelivr.net/npm/@tensorflow-models/coco-ssd"></script>

    <script>
    async function predictImages() {
        console.log("loading model");
        // Load the model.
        const model = await tf.loadGraphModel('model/model.json');
        console.log("model loaded.");
        
        // predict for all images
        for (let i = 0; i <= 4; i++) {
            const img = document.getElementById('img' + i); // check if the image exists
            if (img != null) {
                console.log("doc exists: " + 'img' + i);
                const tensor = tf.browser.fromPixels(img);
                model.executeAsync(tensor.expandDims(0)).then(predictions => {
                    console.log('Predictions: ');
                    console.log(predictions);
                });
             
            } else {
                break;
            }
        }
    }
    predictImages();
    </script>

1 Answer


model.predict expects a tensor, but it is being given an HTMLImageElement here. A tensor first needs to be constructed from the HTMLImageElement:

const tensor = tf.browser.fromPixels(img)

The tensor can then be passed as the parameter to model.predict:

model.predict(tensor) // fromPixels returns a 3D tensor

Last but not least, make sure that the tensor shape is the one expected by the model (3D or 4D). If the model expects a 4D tensor, it should rather be:

model.predict(tensor.expandDims(0))
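
Putting it together, a minimal sketch (the model path and the dynamic-op error are taken from the question, so executeAsync is used in place of predict; the 'img' element id is an assumption):

const model = await tf.loadGraphModel('model/model.json');
const img = document.getElementById('img'); // assumed element id
// fromPixels produces a 3D [height, width, 3] tensor; expandDims(0) adds the batch axis.
const input = tf.browser.fromPixels(img).expandDims(0);
// predict works for static graphs; this graph contains dynamic ops, so use executeAsync.
const predictions = await model.executeAsync(input);
console.log(predictions);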

9 Comments

Loading the image as you suggested works flawlessly, but when I run prediction using the code I added to my question, it returns this: detector.php:40 (2) [e, e], 0: e {kept: false, isDisposedInternal: false, shape: Array(3), dtype: "float32", size: 3834, …}, 1: e {kept: false, isDisposedInternal: false, shape: Array(4), dtype: "float32", size: 7668, …}, length: 2. What is the problem with my code?
There is no issue with the code. predictions is an array of tensors. If you want to get their values, you will need to call predictions[index].dataSync() with index being either 0 or 1.
It depends on what the output of your model is. However, if you want the returned data to keep the same shape as the tensor, you can use arraySync instead of dataSync (see the sketch after these comments).
I think that is another question. You can close this one and open a new question showing the output you currently have.
You can upvote this answer and mark it as accepted since it solves your initial issue about predicting from the image :)
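
For illustration, a hedged sketch of reading the values out of the two returned tensors. Which output holds scores and which holds boxes depends on how the model was exported; the interpretations in the comments are assumptions based on the logged sizes (3834 = 1917 × 2 and 7668 = 1917 × 1 × 4, typical of an SSD-style detector):

const [first, second] = predictions;  // the two tensors logged above
console.log(first.shape);             // e.g. [1, 1917, 2]: per-anchor class scores (assumed)
console.log(second.shape);            // e.g. [1, 1917, 1, 4]: per-anchor box coordinates (assumed)
const flat = first.dataSync();        // flat TypedArray of the values
const nested = second.arraySync();    // nested JS array keeping the tensor's shape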
