I need to execute a shell command from my Node.js script, read its output and terminate that program after a certain number of bytes is read. (More precisely, I want to do a partial download of a file via smbget).
The most obvious approach, I guess, is to use child_process.spawn(), buffer the output manually, and simply kill() the process once enough data has been read.
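Something along these lines (just a sketch; cat /dev/urandom stands in for the actual smbget invocation, and 100 is a placeholder byte count):

var child_process = require('child_process');
var fs = require('fs');

var wanted = 100; // stop once we have this many bytes
var child = child_process.spawn('cat', ['/dev/urandom']);
var chunks = [];
var seen = 0;

child.stdout.on('data', function (chunk) {
  chunks.push(chunk);
  seen += chunk.length;
  if (seen >= wanted) {
    child.stdout.removeAllListeners('data'); // don't fire again
    child.kill(); // SIGTERM, we have all we need
    fs.writeFileSync('foo.dat', Buffer.concat(chunks).slice(0, wanted));
  }
});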
And this works nicely, except that it looks a bit clunky. So instead I wanted to be clever (TM) and use head: I wired everything up as indicated in the child_process docs (or, somewhat more conveniently, using procstreams) to produce a pipeline equivalent to cat /dev/urandom | head --bytes=10. Alas, everything goes up in flames like so:
events.js:72
        throw er; // Unhandled 'error' event
              ^
Error: read ECONNRESET
    at errnoException (net.js:883:11)
    at Pipe.onread (net.js:539:19)
probably because head exits as soon as it has its ten bytes and just clubs the stream to death while cat is still writing. I couldn't find a way to catch or otherwise handle that error (although that could just be because I'm a node n00b :).
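For reference, the spawn-based wiring looked roughly like this (a sketch from memory; the 'error' handlers at the end are where I would have expected to intercept the crash, but it blows up anyway):

var spawn = require('child_process').spawn;

// Equivalent of: cat /dev/urandom | head --bytes=10
var cat  = spawn('cat', ['/dev/urandom']);
var head = spawn('head', ['--bytes=10']);
cat.stdout.pipe(head.stdin);

var chunks = [];
head.stdout.on('data', function (chunk) { chunks.push(chunk); });
head.stdout.on('end', function () {
  var data = Buffer.concat(chunks); // the ten bytes we wanted
});

// Attempts to handle the breakage when head exits early:
cat.stdout.on('error', function (err) { /* never seems to fire */ });
head.stdin.on('error', function (err) { /* ... */ });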
Alternatively, I could do the following:
var child_process = require('child_process');
var fs = require('fs');

var cmd = 'cat /dev/urandom | head --bytes=100';
child_process.exec(cmd, function (err, stdout, stderr) {
  // ...
});
except that I can't access the raw (binary) data anymore. When I call
fs.writeFileSync('foo.dat', stdout);
the output will have been decoded as UTF-8 (mangling any invalid byte sequences along the way), resulting in the file being around 180 bytes instead of the expected 100 bytes.
This can be circumvented by passing a second parameter to exec:
{ encoding: 'binary' }
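In full, that would look something like this (a sketch, reusing cmd from above; the 'binary'-encoded string has to be turned back into a Buffer before writing):

child_process.exec(cmd, { encoding: 'binary' }, function (err, stdout, stderr) {
  // Re-encode the 'binary' (latin1) string back into raw bytes.
  fs.writeFileSync('foo.dat', new Buffer(stdout, 'binary'));
});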
Unfortunately, the docs say that this is deprecated.
What is the correct way of doing this? Or do I absolutely need to buffer myself?