
I would like to use TensorFlow.js with coco-ssd and mobilenet on the server side with Node.js. I've already written a client-side script that works: when the user submits a form, I run tfjs:

const img = new Image(100, 100);
img.src = "..."; // base64-encoded image

// Load the model.
mobilenet.load().then(async model => {
    const post_predictions = []; 
    model.classify(img).then(classify_predictions => {
        classify_predictions.forEach(function(element){
            const each_class = element["className"].split(", ");
            each_class.forEach(function(this_element){
                post_predictions.push([this_element, (element.probability*100)]);
            })
        })
        cocoSsd.load().then(model => {
            // Detect objects in the image.
            model.detect(img).then(predictions => {
                predictions.forEach(function(this_element){
                    post_predictions.unshift([this_element.class, (this_element.score*100)]);
                });
                post_predictions.sort(function(a, b) {
                    return b[1]-a[1];
                });

                console.log(post_predictions);
            });
        })
    });
});
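
For reference, the same flow can be flattened with async/await; this sketch is logically identical to the nested version above:

// Sketch: the same classify-then-detect flow, flattened with async/await.
async function classifyAndDetect(img) {
    const post_predictions = [];

    // MobileNet classification.
    const mobilenetModel = await mobilenet.load();
    const classifications = await mobilenetModel.classify(img);
    for (const element of classifications) {
        for (const this_class of element.className.split(", ")) {
            post_predictions.push([this_class, element.probability * 100]);
        }
    }

    // COCO-SSD object detection.
    const cocoModel = await cocoSsd.load();
    const detections = await cocoModel.detect(img);
    for (const detection of detections) {
        post_predictions.unshift([detection.class, detection.score * 100]);
    }

    // Highest score first.
    post_predictions.sort((a, b) => b[1] - a[1]);
    console.log(post_predictions);
}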

I would like to do the same on the server side, but I have no idea which modules to require or how to load an image from its base64 data.

I tried to install coco-ssd and mobilenet on my server with:

npm i @tensorflow-models/mobilenet

npm i @tensorflow-models/coco-ssd

And then I tried to install TensorFlow.js for Node with:

npm i @tensorflow/tfjs-node

But when I do:

npm i tensorflow

I get this error:

npm ERR! code EBADPLATFORM
npm ERR! notsup Unsupported platform for [email protected]: wanted {"os":"linux,darwin","arch":"any"} (current: {"os":"win32","arch":"x64"})
npm ERR! notsup Valid OS: linux,darwin
npm ERR! notsup Valid Arch: any
npm ERR! notsup Actual OS: win32
npm ERR! notsup Actual Arch: x64
npm ERR! A complete log of this run can be found in:
npm ERR!     C:\Users\johan\AppData\Roaming\npm-cache\_logs\2020-02-16T05_27_15_276Z-debug.log

Please, someone help me 🙏 Thanks.

2 Answers


I also encountered a different problem when I ran npm i @tensorflow-models/mobilenet; there seems to be an issue with the package itself.

You can try this as an alternative: I ended up loading TensorFlow.js and the MobileNet model from a CDN instead. The script tags are included in the index.html below.

Here are the steps:
1. Create a simple Node project using npm init. This creates a package.json file, which is where installed packages are listed.
2. Run npm install express --save on the command line so that the express package is added to package.json.
3. Create an index.html file with the following code. On the UI side, you will be asked to upload an image, which is then classified, with the predictions logged to the console.

<script src="https://cdn.jsdelivr.net/npm/@tensorflow/[email protected]/dist/tf.min.js"></script>
<!-- Load the MobileNet model. -->
<script src="https://cdn.jsdelivr.net/npm/@tensorflow-models/[email protected]/dist/mobilenet.min.js"></script>
<input type='file' />
<br><img id="myImg" src="#" alt="your image will be displayed here" >

<script>
window.addEventListener('load', function () {
    document.querySelector('input[type="file"]').addEventListener('change', function () {
        if (this.files && this.files[0]) {
            var img = document.querySelector('img'); // $('img')[0]
            img.src = URL.createObjectURL(this.files[0]); // set src to blob url
            img.onload = imageIsLoaded;
        }
    });
});

async function run() {
    const img = document.getElementById('myImg');
    console.log(img);
    const version = 2;
    const alpha = 0.5;
    // Load the model.
    const model = await mobilenet.load({version, alpha});

    // Classify the image.
    const predictions = await model.classify(img);
    console.log('Predictions');
    console.log(predictions);

    // Get the logits.
    const logits = model.infer(img);
    console.log('Logits');
    logits.print(true);

    // Get the embedding.
    const embedding = model.infer(img, true);
    console.log('Embedding');
    embedding.print(true);
}

function imageIsLoaded() {
    run();
}

</script>

Step 4: Create a server.js file. This file renders the index file on your local server using the express package. Below is the code:

const express = require('express');
const app = express();

app.get('/', function (req, res) {
    res.sendFile('/demo/index.html', { root: __dirname });
});

const port = 3000;
app.listen(port, function () {
    console.log(`Listening at port ${port}`);
});
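
Before opening the browser, start the server (assuming the file is saved as server.js):

node server.js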

Step 5: Go to the browser and open localhost:3000.

UPDATE: LOADING ON NODE.JS
It seems that the problem is the order of installation.
Step 1: Install the following packages

npm install @tensorflow/tfjs @tensorflow/tfjs-node --save
// or...
npm install @tensorflow/tfjs @tensorflow/tfjs-node-gpu --save

Step 2: You can now install @tensorflow-models/mobilenet:

npm install @tensorflow-models/mobilenet --save

Step 3: Server.js Example Usage

const tf = require('@tensorflow/tfjs');
// Load the binding (CPU computation)
require('@tensorflow/tfjs-node');
const mobilenet = require('@tensorflow-models/mobilenet');

// for getting the image data
var image = require('get-image-data');

image('./cup.jpg', async function (err, image) {
    // Repack the RGBA pixel data into an int32 RGB tensor.
    const numChannels = 3;
    const numPixels = image.width * image.height;
    const values = new Int32Array(numPixels * numChannels);
    const pixels = image.data;
    for (let i = 0; i < numPixels; i++) {
        for (let channel = 0; channel < numChannels; ++channel) {
            // Copy R, G and B; skip the alpha channel.
            values[i * numChannels + channel] = pixels[i * 4 + channel];
        }
    }
    const outShape = [image.height, image.width, numChannels];
    const input = tf.tensor3d(values, outShape, 'int32');
    await load(input);
});

async function load(img){
    // Load the model.
    const model = await mobilenet.load();

    // Classify the image.
    const predictions = await model.classify(img);

    console.log('Predictions: ');
    console.log(predictions);
}
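
The original question also asked about coco-ssd; the same 3D tensor can be fed to its detector. A minimal sketch, assuming @tensorflow-models/coco-ssd is installed the same way as mobilenet (the detect function name here is illustrative):

// Sketch: object detection with coco-ssd on the same tensor input.
// Assumes: npm install @tensorflow-models/coco-ssd --save
const cocoSsd = require('@tensorflow-models/coco-ssd');

async function detect(img) {
    // Load the model.
    const model = await cocoSsd.load();

    // Detect objects in the image (accepts a 3D tensor like the one built above).
    const predictions = await model.detect(img);

    console.log('Detections: ');
    console.log(predictions);
}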



4 Comments

Yes, thanks. Actually I was doing that, but when the user submits a form we want it to be fast. On the client side TensorFlow is really slow and even crashes the computer sometimes, so that's why I would like to do it asynchronously on the Node.js server side.
Hi @YoyoBu, I have added a sample implementation using Node.js. It seems that the problem is that you are missing a step, which is in my update above. Let me know if this works for you. Thanks.
OK, I will try. But what is the difference between tfjs-node-gpu and tfjs-node?
Yup, those packages are just dependencies; in the meantime you can just ignore it. Regarding tfjs-node-gpu: you install it when your server has a GPU.

I did this using @tensorflow-models/coco-ssd and @tensorflow/tfjs-node.

The reason I'm posting my answer is to show how I got the image data. TF_Support's answer has this code:

// for getting the data images
var image = require('get-image-data')

image('./cup.jpg', async function (err, image) {

    const numChannels = 3;
    const numPixels = image.width * image.height;
    const values = new Int32Array(numPixels * numChannels);
    const pixels = image.data;
    for (let i = 0; i < numPixels; i++) {
        for (let channel = 0; channel < numChannels; ++channel) {
            values[i * numChannels + channel] = pixels[i * 4 + channel];
        }
    }
    const outShape = [image.height, image.width, numChannels];
    const input = tf.tensor3d(values, outShape, 'int32');
    await load(input);
});

Whereas I just did this and it seems to work:

// resp.body holds the raw image bytes (resp here is an HTTP response in my code)
const image = await tf.node.decodeImage(resp.body, 3);
const catObjects = await catModel.detect(image);
image.dispose();

where catModel is

catModel = await cocoSsd.load();

Up top in my code I have:

const tf = require("@tensorflow/tfjs-node");
const cocoSsd = require("@tensorflow-models/coco-ssd");

I'm not sure that you'd need to use the get-image-data package. The result here is the same: both methods end up spitting out 3D tensors.
Quote from the tfjs-node docs:

Given the encoded bytes of an image, it returns a 3D or 4D tensor of the decoded image. Supports BMP, GIF, JPEG and PNG formats.
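
Since the original question involved a base64-encoded image from a form: decodeImage takes raw bytes, so a base64 string can be passed through a Buffer first. A minimal sketch (base64Image is an illustrative variable name):

// Sketch: decoding a base64-encoded image on the server.
// base64Image is assumed to be a plain base64 string
// (strip any "data:image/...;base64," prefix beforehand).
const imageBuffer = Buffer.from(base64Image, "base64");
const imageTensor = tf.node.decodeImage(imageBuffer, 3); // 3 channels (RGB)
const objects = await catModel.detect(imageTensor);
imageTensor.dispose();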

Caveats:

I still get the warning that says it's slow:

"Hi there 👋. Looks like you are running TensorFlow.js in Node.js...."

etc. (see TF_Support's answer), but it works.

Note that I've opened a question regarding how to get rid of that message and properly use resources to speed things up:

Tensorflow-node not recognized with cocoSsd on node.js

3 Comments

RE: "the answer above": The only way to directly respond to another answer is in the Comment section for that post. If you want to refer to a specific post, You need to include link.(click share beneath the past to copy it). The order answers appear on any page depends on the visitors settings (date, upvotes, etc), and will also change over time as answers are added, deleted, and voted on.
Thanks! I'm new. I did want to link to the post but I didn't know how to do it, so the share button info is good to know. I'll edit this now.
Great. I fixed this one...for future posts, look into markdown for formatting, rather than using html. Also, Welcome to SO!
