Chaining (HelloNode15_Streams / first 3 examples)
Assume that you have an archive and want to decompress it. There are a number of ways to achieve this. But the easiest and cleanest way is to use piping and chaining. Have a look at the following snippet:
var fs = require('fs');
var zlib = require('zlib');
fs.createReadStream('input.txt.gz')
.pipe(zlib.createGunzip())
.pipe(fs.createWriteStream('output.txt'));
First, we create a simple readable stream from the file input.txt.gz. Next, we pipe this stream into another stream zlib.createGunzip() to un-gzip the content. Lastly, as streams can be chained, we add a writable stream in order to write the un-gzipped content to the file.
Chaining is a mechanism to connect the output of one stream to another stream and create a chain of multiple stream operations. It is normally used with piping operations. Now we'll use piping and chaining to first compress a file and then decompress it.
(See the 1_fs.pipe example.
var fs = require("fs");
var zlib = require('zlib');
// Compress the file input.txt to input.txt.gz
fs.createReadStream('input.txt')
.pipe(zlib.createGzip())
.pipe(fs.createWriteStream('input.txt.gz'));
console.log("File Compressed.");
// Decompress the file input.txt.gz to output.txt
fs.createReadStream('input.txt.gz')
.pipe(zlib.createGunzip())
.pipe(fs.createWriteStream('output.txt'));
console.log("File Decompressed.");
The zlib.createGzip() method creates a gzip compression stream: the contents of input.txt are written into (placed in) the input.txt.gz archive.
The zlib.createGunzip() method extracts a gzip archive: input.txt.gz is extracted and its contents are written to output.txt.
If we put these two operations back to back (sequentially) in the same file, createGunzip() does not work properly, because decompression may start before the compressed file has been fully written. That's why I wrote them in separate files.
It also works correctly if we use a callback to make sure the first step finishes before the second one starts. )
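For instance, one way to sequence the two steps in a single file is to wait for the write stream's finish event before starting the decompression. A minimal sketch, assuming the same file names as above:
var fs = require('fs');
var zlib = require('zlib');
// Compress first; pipe() returns the destination write stream, so we can
// listen for its 'finish' event before starting the decompression.
fs.createReadStream('input.txt')
  .pipe(zlib.createGzip())
  .pipe(fs.createWriteStream('input.txt.gz'))
  .on('finish', function () {
    // Only now is input.txt.gz fully written, so it is safe to gunzip it.
    fs.createReadStream('input.txt.gz')
      .pipe(zlib.createGunzip())
      .pipe(fs.createWriteStream('output.txt'))
      .on('finish', function () {
        console.log('File compressed and then decompressed.');
      });
  });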
Additional Methods
Some of the important methods that can be used on readable streams are:
- Readable.pause() – This method pauses the stream. If the stream is already flowing, it won't emit data events anymore. The data will be kept in the buffer. If you call this on a static (non-flowing) stream, the stream starts flowing, but data events won't be emitted. (If the stream is flowing, the flow is stopped; for example, if data is being read from one source and written to another, that operation is paused.
If the stream is not flowing, it starts flowing.
In both cases, after Readable.pause() is called, no more data events will be emitted.)
- Readable.resume() – Resumes a paused stream. (The paused stream starts flowing again.)
- readable.unpipe() – This removes destination streams from pipe destinations. If an argument is passed, it stops the readable stream from piping into that particular destination stream. Otherwise, all the destination streams are removed. (If unpipe() is called without an argument, piping is removed from all the destination streams this readable stream was piped into. If unpipe() is called with an argument, piping is removed only from the specified destination stream.) A small pause/resume sketch follows this list.
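For example, a minimal sketch of pause() and resume() (assuming a local file1.txt exists) that throttles reading by pausing after each chunk:
var fs = require('fs');
var readableStream = fs.createReadStream('file1.txt');

readableStream.on('data', function (chunk) {
  console.log('Read ' + chunk.length + ' bytes, pausing for 1 second...');
  readableStream.pause();             // no more data events until resume()
  setTimeout(function () {
    readableStream.resume();          // the stream starts flowing again
  }, 1000);
});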
Writable Streams
Writable streams let you write data to a destination. Like readable streams, these are also EventEmitters and emit various events at various points. Let's see the various methods and events available in writable streams. (Writable streams let us write data to a destination. Writable streams, like readable streams, are EventEmitters; therefore they can emit various events at various times.)
Writing to Streams
To write data to a writable stream you need to call write() on the stream instance. (To write data to a writable stream, we call the stream object's write() method.)
Example:
var fs = require('fs');
var readableStream = fs.createReadStream('file1.txt');
var writableStream = fs.createWriteStream('file2.txt');
readableStream.setEncoding('utf8');
readableStream.on('data', function(chunk) {
writableStream.write(chunk);
});
The above code is straightforward. It simply reads chunks of data from an input stream and writes to the destination using write(). This function returns a Boolean value indicating if the operation was successful. If true, then the write was successful and you can keep writing more data. If false is returned, it means the stream's internal buffer is full and you can't write anything more at the moment. The writable stream will let you know when you can start writing more data by emitting a drain event. (In this example a readable stream and then a writable stream object are created.
UTF-8 is chosen as the readable stream's encoding.
By writing readableStream.on('data', function(chunk) { writableStream.write(chunk); });
we do the following: as long as there is data to be read from the input, a data event is fired and data is read from the input;
after a chunk is read, the callback writes it to the writable stream by calling the write() method.
If the write succeeded, write() returns true and we can keep writing more data.
If write() returns false, the stream's internal buffer is full and we should stop writing for the moment.
When the drain event is emitted it means: we can write to this writable stream again.)
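A minimal sketch of respecting the drain event (assuming an output file big.txt): keep writing while write() returns true, and wait for drain when it returns false.
var fs = require('fs');
var writableStream = fs.createWriteStream('big.txt');
var i = 0;

function writeMore() {
  var ok = true;
  while (i < 1000000 && ok) {
    ok = writableStream.write('line ' + i + '\n');   // false means the internal buffer is full
    i++;
  }
  if (i < 1000000) {
    writableStream.once('drain', writeMore);          // continue once the buffer has drained
  } else {
    writableStream.end();
  }
}
writeMore();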
End of Data
When you don't have more data to write you can simply call end() to notify the stream that you have finished writing. Assuming res is an HTTP response object, you often do the following to send the response to the browser: (If there is no more data left to write to the writable stream, we can call the writable stream object's end() method to explicitly stop writing to the stream. In the example below, assume res is an HTTP response object; res is also a writable stream. This is how we explicitly finish the response sent to the browser.)
res.write('Some Data!!');
res.end('Ended.');
When end() is called and every chunk of data has been flushed, a finish event is emitted by the stream. Just note that you can't write to the stream after calling end(). For example, the following will result in an error. (When end() is called, any data still waiting in the buffer is flushed, i.e. written to the stream, and the stream emits the finish event. After calling end() we can no longer write to this stream. For example, the following example will throw an error:)
res.write('Some Data!!');
res.end();
res.write('Trying to write again'); //Error!
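As a small sketch of this in context (the port 8080 is only an assumption), the response object of an HTTP server is a writable stream, so write() and end() behave exactly as described above:
var http = require('http');

http.createServer(function (req, res) {
  res.write('Some Data!!');
  res.end('Ended.');        // no further res.write() calls are allowed after this
}).listen(8080);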
Here are some important events related to writable streams:
- error – Emitted to indicate that an error has occurred while writing/piping.
- pipe – When a readable stream is piped into a writable stream, this event is emitted by the writable stream.
- unpipe – Emitted when you call unpipe on the readable stream and stop it from piping into the destination stream.
(Some of the important events related to writable streams are:
error : if an error occurs while writing/piping, the error event is emitted.
pipe : when a readable stream is piped into a writable stream, the writable stream emits the pipe event.
unpipe : if we call a readable stream's unpipe method, the corresponding destination stream emits the unpipe event and piping into that destination stream stops.)
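A minimal sketch of the pipe and unpipe events, assuming local files src.txt and dst.txt:
var fs = require('fs');
var readable = fs.createReadStream('src.txt');
var writable = fs.createWriteStream('dst.txt');

writable.on('pipe', function (src) {
  console.log('pipe event: a readable stream was piped into this writable stream');
});
writable.on('unpipe', function (src) {
  console.log('unpipe event: the readable stream stopped piping into this stream');
});

readable.pipe(writable);
setTimeout(function () {
  readable.unpipe(writable);   // triggers the 'unpipe' event on the writable stream
}, 100);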
Conclusion
This was all about the basics of streams. Streams, pipes, and chaining are the core and most powerful features in Node.js. If used responsibly, streams can indeed help you write neat and performant code to perform I/O. (We've learned the basics of streams. Streams, pipes, and chaining are among the most powerful features of Node.js. Streams let us perform high-performance I/O.)
******
Writing to a stream
Let's examine the code at C:\Users\IBM_ADMIN\Desktop\tutorials rad started on may 2016\Web development\Node.js\HelloNode15_Streams\6_fs.createWriteStream.write()_finish(). A writable stream is created by calling the createWriteStream() method. Then writerStream.write(data,'UTF8') writes the string named data to this stream in UTF-8. Calling the writable stream's end() method ends writing to the stream; when end() is called the writable stream emits the finish event, and when that event is emitted we print "Write completed." to the console. If an error occurs while writing to the stream, the writable stream emits the error event and err.stack is printed to the screen; we set that up like this:
writerStream.on('error', function(err){ console.log(err.stack); })
var fs = require("fs");
var data = 'Simply Easy Learning';
// Create a writable stream
var writerStream = fs.createWriteStream('output.txt');
// Write the data to stream with encoding to be utf8
writerStream.write(data,'UTF8');
// Mark the end of file
writerStream.end();
// Handle stream events --> finish, and error
writerStream.on('finish', function() {
console.log("Write completed.");
});
writerStream.on('error', function(err){
console.log(err.stack);
});
console.log("Program Ended");
Now run the main.js to see the result:
$ node main.js
Verify the Output
Program Ended
Write completed.
Now open output.txt created in your current directory and verify that it contains the following content:
Simply Easy Learning
30 - Object Streams
https://nodesource.com/blog/understanding-object-streams/
I didn't understand this, but it's not very important.
31 - Buffer
http://www.tutorialspoint.com/nodejs/nodejs_buffers.htm
Pure JavaScript is Unicode friendly but not nice to binary data. When dealing with TCP streams or the file system, it's necessary to handle octet streams. Node provides the Buffer class, which provides instances to store raw data, similar to an array of integers, but corresponding to a raw memory allocation outside the V8 heap. (The Buffer class is used to hold raw data, i.e. binary data. We use the Buffer class to handle binary streams.)
The Buffer class is a global class and can be accessed in an application without importing the buffer module. (Buffer is a global class, so there is no need to import a buffer module.)
Creating Buffers
A Node Buffer can be constructed in a variety of ways. (It is possible to create Buffer objects in different ways.)
Method 1
Following is the syntax to create an uninitialized Buffer of 10 octets: (Below, an empty (uninitialized) Buffer of 10 bytes is created.)
var buf = new Buffer(10);
Method 2
Following is the syntax to create a Buffer from a given array: (To create a Buffer object and initialize it with a given array:)
var buf = new Buffer([10, 20, 30, 40, 50]);
Method 3
Following is the syntax to create a Buffer from a given string and optionally an encoding type: (To create a Buffer object and initialize it with a given string:)
var buf = new Buffer("Simply Easy Learning", "utf-8");
Though "utf8" is the default encoding but you can use either of the encodings "ascii", "utf8", "utf16le", "ucs2", "base64" or "hex".( utf8 default encoding'dir. Şu encoding'leri de kullanabiliriz: "ascii", "utf8", "utf16le", "ucs2", "base64" veya "hex".)
Writing to Buffers
( Starting at the buffer's offset index, up to length bytes of the string given as the first parameter to write() are written into the buffer. The write() method returns the number of bytes, i.e. characters, written to the buffer object. If there is not enough room in the buffer object to write the whole string, only as much of it as fits is written. )
Syntax
Following is the syntax of the method to write into a Node Buffer:
buf.write(string[, offset][, length][, encoding])
Parameters
Here is the description of the parameters used:
- string - This is string data to be written to buffer.
- offset - This is the index of the buffer to start writing at. Default value is 0.
- length - This is the number of bytes to write. Defaults to buffer.length
- encoding - Encoding to use. 'utf8' is the default encoding
Return Value
This method returns number of octets written. If there is not enough space in the buffer to fit the entire string, it will write a part of the string.
Example
buf = new Buffer(256);
len = buf.write("Simply Easy Learning");
console.log("Octets written : "+ len);
When above program is executed, it produces following result:
Octets written : 20
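A small sketch of the partial-write case mentioned above: a 5-byte buffer only has room for the first 5 characters of the string.
var small = new Buffer(5);
var written = small.write("Simply Easy Learning");
console.log("Octets written : " + written);   // Octets written : 5
console.log(small.toString());                // Simpl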
Reading from Buffers
Syntax
(To read the contents of a buffer object, the toString() method is called. It returns the data read from the buffer, decoded with the requested encoding, as a string.)
Following is the syntax of the method to read data from a Node Buffer:
buf.toString([encoding][, start][, end])
Parameters
Here is the description of the parameters used:
- encoding - Encoding to use. 'utf8' is the default encoding
- start - Beginning index to start reading, defaults to 0.
- end - End index to end reading, defaults is complete buffer.
Return Value
This method decodes and returns a string from buffer data encoded using the specified character set encoding.
Example
buf = new Buffer(26);
for (var i = 0 ; i < 26 ; i++) {
buf[i] = i + 97;
}
console.log( buf.toString('ascii')); // outputs: abcdefghijklmnopqrstuvwxyz
console.log( buf.toString('ascii',0,5)); // outputs: abcde
console.log( buf.toString('utf8',0,5)); // outputs: abcde
console.log( buf.toString(undefined,0,5)); // encoding defaults to 'utf8', outputs abcde
When above program is executed, it produces following result:
abcdefghijklmnopqrstuvwxyz
abcde
abcde
abcde
Convert Buffer to JSON
( To convert a buffer object to JSON, i.e. to obtain the buffer object's JSON representation, the toJSON() method is called. )
Syntax
Following is the syntax of the method to convert a Node Buffer into JSON object:
buf.toJSON()
Return Value
This method returns a JSON-representation of the Buffer instance.
Example
var buf = new Buffer('Simply Easy Learning');
var json = buf.toJSON();
console.log(json);
When above program is executed, it produces following result:
[ 83, 105, 109, 112, 108, 121, 32, 69, 97, 115, 121, 32, 76, 101, 97, 114, 110, 105, 110, 103 ]
Concatenate Buffers
(The first parameter of concat() is an array containing the buffers we want to concatenate. concat() returns the combined buffer.)
Syntax
Following is the syntax of the method to concatenate Node buffers to a single Node Buffer:
Buffer.concat(list[, totalLength])
Parameters
Here is the description of the parameters used:
- list - Array List of Buffer objects to be concatenated
- totalLength - This is the total length of the buffers when concatenated
Return Value
This method returns a Buffer instance.
Example
var buffer1 = new Buffer('TutorialsPoint ');
var buffer2 = new Buffer('Simply Easy Learning');
var buffer3 = Buffer.concat([buffer1,buffer2]);
console.log("buffer3 content: " + buffer3.toString());
When above program is executed, it produces following result:
buffer3 content: TutorialsPoint Simply Easy Learning
Compare Buffers
(Similar to comparing two strings alphabetically, this method is called to find out which of the strings held by two buffers comes earlier/later in sort order.)
Following is the syntax of the method to compare two Node buffers:
buf.compare(otherBuffer);
Parameters
Here is the description of the parameters used:
- otherBuffer - This is the other buffer which will be compared with buf
Return Value
Returns a number indicating whether this comes before or after or is the same as the otherBuffer in sort order.
Example
var buffer1 = new Buffer('ABC');
var buffer2 = new Buffer('ABCD');
var result = buffer1.compare(buffer2);
if(result < 0) {
console.log(buffer1 +" comes before " + buffer2);
}else if(result == 0){
console.log(buffer1 +" is same as " + buffer2);
}else {
console.log(buffer1 +" comes after " + buffer2);
}
When above program is executed, it produces following result:
ABC comes before ABCD
Copy Buffer
(The contents of the buffer object named buf are copied into the buffer object named targetBuffer.
The bytes of buf from index sourceStart up to index sourceEnd are copied,
and then pasted into targetBuffer starting at index targetStart.
buffer1.copy(buffer2); copies buffer1 into buffer2. )
Syntax
Following is the syntax of the method to copy a node buffer:
buf.copy(targetBuffer[, targetStart][, sourceStart][, sourceEnd])
Parameters
Here is the description of the parameters used:
- targetBuffer - Buffer object where buffer will be copied.
- targetStart - Number, Optional, Default: 0
- sourceStart - Number, Optional, Default: 0
- sourceEnd - Number, Optional, Default: buffer.length
Return Value
No return value. Copies data from a region of this buffer to a region in the target buffer even if the target memory region overlaps with the source. If undefined the targetStart and sourceStart parameters default to 0 while sourceEnd defaults to buffer.length.
Example
var buffer1 = new Buffer('ABC');
//copy a buffer
var buffer2 = new Buffer(3);
buffer1.copy(buffer2);
console.log("buffer2 content: " + buffer2.toString());
When above program is executed, it produces following result:
buffer2 content: ABC
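A small sketch of copying with the optional offsets (targetStart, sourceStart, sourceEnd): copy only the bytes 'BC' from buffer1 into the middle of another buffer.
var buffer1 = new Buffer('ABC');
var buffer4 = new Buffer('xyz');
// target index 1, source indexes 1 (inclusive) to 3 (exclusive)
buffer1.copy(buffer4, 1, 1, 3);
console.log("buffer4 content: " + buffer4.toString());   // xBC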
Slice Buffer
(var buffer1 = new Buffer('TutorialsPoint');
var buffer2 = buffer1.slice(0,9);
buffer1.slice(0,9) returns 'Tutorials' )
Following is the syntax of the method to get a sub-buffer of a node buffer:
buf.slice([start][, end])
Parameters
Here is the description of the parameters used:
- start - Number, Optional, Default: 0
- end - Number, Optional, Default: buffer.length
Return Value
Returns a new buffer which references the same memory as the old, but offset and cropped by the start (defaults to 0) and end (defaults to buffer.length) indexes. Negative indexes start from the end of the buffer.
Example
var buffer1 = new Buffer('TutorialsPoint');
//slicing a buffer
var buffer2 = buffer1.slice(0,9);
console.log("buffer2 content: " + buffer2.toString());
When above program is executed, it produces following result:
buffer2 content: Tutorials
Buffer Length
(The length property returns the buffer's length, i.e. its size in bytes.)
buf.length;
Return Value
Returns a size of buffer in bytes.
Example
var buffer = new Buffer('TutorialsPoint');
//length of the buffer
console.log("buffer length: " + buffer.length);
When above program is executed, it produces following result:
buffer length: 14
Methods Reference
Following is a reference of the Buffer class available in Node.js. For further detail you can refer to the official documentation.
1. new Buffer(size) – Allocates a new buffer of size octets. Note that size must be no more than kMaxLength; otherwise, a RangeError will be thrown.
2. new Buffer(buffer) – Copies the passed buffer data onto a new Buffer instance.
3. new Buffer(str[, encoding]) – Allocates a new buffer containing the given str. encoding defaults to 'utf8'.
4. buf.length – Returns the size of the buffer in bytes. Note that this is not necessarily the size of the contents. length refers to the amount of memory allocated for the buffer object. It does not change when the contents of the buffer are changed.
5. buf.write(string[, offset][, length][, encoding]) – Writes string to the buffer at offset using the given encoding. offset defaults to 0, encoding defaults to 'utf8'. length is the number of bytes to write. Returns the number of octets written.
32 - child_process
http://www.tutorialspoint.com/nodejs/nodejs_scaling_application.htm
https://nodejs.org/api/child_process.html#child_process_child_process
http://www.codingdefined.com/2014/08/difference-between-fork-spawn-and-exec.html
https://gist.github.com/leommoore/4484379
http://www.hacksparrow.com/difference-between-spawn-and-exec-of-node-js-child_process.html
(Child processes and parent processes usually have stdio (standard input/output) streams.
We can create a child process by calling any one of three methods in the child_process module: exec, spawn and fork.
Node.js runs on a single thread and uses an event-driven paradigm to handle concurrency. By creating child processes we can do work in parallel on multi-core CPUs.
exec() : runs a command in a shell. The command's output is collected in a buffer and delivered through stdio. As the first parameter we write the exact, complete command we want the shell to run. The second parameter is optional. Once the command has finished running in the shell, the callback passed as the third parameter is called.
var workerProcess = child_process.exec('node support.js '+i, function (error, stdout, stderr) { ... });
workerProcess.on('exit', function (code) { console.log('Child process exited with exit code '+code); });
This method returns a ChildProcess. The 'exit' event is emitted after the child process ends, i.e. once the child process finishes, the exit event is emitted.
spawn() : creates a new process and runs the command in that process. child_process.spawn(command[, args][, options])
Only the command name is given as the first parameter of this method. The arguments to pass along with the command are given as an array in the second parameter.
The most significant difference between child_process.spawn and child_process.exec is in what they return - spawn returns a stream and exec returns a buffer.
(Both methods return a ChildProcess. With exec, the command's output goes into a buffer and the buffer is delivered through stdio; with spawn, the command's output is written to a stream. Frankly, I didn't fully understand this either.)
The spawn() method returns streams (stdout & stderr) and it should be used when the process returns a large amount of data (if the process is going to return a lot of data, spawn should be used).
spawn() starts receiving the response as soon as the process starts executing (spawn starts returning data as soon as the process starts running; we can follow it by reading continuously from the stream).
fork() : a special case of spawn() used to create Node.js processes. It creates a new process and runs the given module in that process. child_process.fork(modulePath[, args][, options])
The module we want to run in the child process, i.e. the name of the .js file, is given to fork() as the first parameter. The array in the second parameter contains the arguments.
Example:
var worker_process = child_process.fork("support.js", [i]);
worker_process.on('close', function (code) {
console.log('child process exited with code ' + code);
});
The fork method returns object with a built-in communication channel in addition to having all the methods in a normal ChildProcess instance.
)
Node.js runs in single-threaded mode, but it uses an event-driven paradigm to handle concurrency. It also facilitates the creation of child processes to leverage parallel processing on multi-core CPU based systems.
Child processes always have three streams: child.stdin, child.stdout, and child.stderr, which may be shared with the stdio streams of the parent process.
Node provides child_process module which has following three major ways to create child process.
- exec - child_process.exec method runs a command in a shell/console and buffers the output.
- spawn - child_process.spawn launches a new process with a given command
- fork - The child_process.fork method is a special case of the spawn() to create child processes.
The exec() method
child_process.exec method runs a command in a shell and buffers the output. It has the following signature:
child_process.exec(command[, options], callback)
Parameters
Here is the description of the parameters used:
- command String The command to run, with space-separated arguments
- options Object may comprise one or more of the following options:
- cwd String Current working directory of the child process
- env Object Environment key-value pairs
- encoding String (Default: 'utf8')
- shell String Shell to execute the command with (Default: '/bin/sh' on UNIX, 'cmd.exe' on Windows, The shell should understand the -c switch on UNIX or /s /c on Windows. On Windows, command line parsing should be compatible with cmd.exe.)
- timeout Number (Default: 0)
- maxBuffer Number (Default: 200*1024)
- killSignal String (Default: 'SIGTERM')
- uid Number Sets the user identity of the process.
- gid Number Sets the group identity of the process.
- callback Function gets three arguments: error, stdout and stderr, which receive the output when the process terminates
The exec() method returns a buffer with a max size and waits for the process to end and tries to return all the buffered data at once.
Example
Let us create two js file named support.js and master.js:
File: support.js
console.log("Child Process " + process.argv[2] + " executed." );
File: master.js
const fs = require('fs');
const child_process = require('child_process');
for(var i=0; i<3; i++) {
var workerProcess = child_process.exec('node support.js '+i,
function (error, stdout, stderr) {
if (error) {
console.log(error.stack);
console.log('Error code: '+error.code);
console.log('Signal received: '+error.signal);
}
console.log('stdout: ' + stdout);
console.log('stderr: ' + stderr);
});
workerProcess.on('exit', function (code) {
console.log('Child process exited with exit code '+code);
});
}
Now run the master.js to see the result:
$ node master.js
Verify the output:
Child process exited with exit code 0
stdout: Child Process 1 executed.
stderr:
Child process exited with exit code 0
stdout: Child Process 0 executed.
stderr:
Child process exited with exit code 0
stdout: Child Process 2 executed.
The spawn() method
child_process.spawn method launches a new process with a given command. It has the following signature:
child_process.spawn(command[, args][, options])
Parameters
Here is the description of the parameters used:
- command String The command to run
- args Array List of string arguments
- options Object may comprise one or more of the following options:
- cwd String Current working directory of the child process
- env Object Environment key-value pairs
- stdio Array|String Child's stdio configuration
- customFds Array Deprecated File descriptors for the child to use for stdio
- detached Boolean The child will be a process group leader
- uid Number Sets the user identity of the process.
- gid Number Sets the group identity of the process.
The spawn() method returns streams (stdout & stderr) and it should be used when the process returns large amount of data. spawn() starts receiving the response as soon as the process starts executing.
Example
Create two js file named support.js and master.js:
File: support.js
console.log("Child Process " + process.argv[2] + " executed." );
File: master.js
const fs = require('fs');
const child_process = require('child_process');
for(var i=0; i<3; i++) {
var workerProcess = child_process.spawn('node', ['support.js', i]);
workerProcess.stdout.on('data', function (data) {
console.log('stdout: ' + data);
});
workerProcess.stderr.on('data', function (data) {
console.log('stderr: ' + data);
});
workerProcess.on('close', function (code) {
console.log('child process exited with code ' + code);
});
}
Now run the master.js to see the result:
$ node master.js
Verify the output:
stdout: Child Process 0 executed.
child process exited with code 0
stdout: Child Process 1 executed.
stdout: Child Process 2 executed.
child process exited with code 0
child process exited with code 0
The fork method
child_process.fork method is a special case of the spawn() to create Node processes. It has the following signature
child_process.fork(modulePath[, args][, options])
Parameters
Here is the description of the parameters used:
- modulePath String The module to run in the child
- args Array List of string arguments
- options Object may comprise one or more of the following options:
- cwd String Current working directory of the child process
- env Object Environment key-value pairs
- execPath String Executable used to create the child process
- execArgv Array List of string arguments passed to the executable (Default: process.execArgv)
- silent Boolean If true, stdin, stdout, and stderr of the child will be piped to the parent, otherwise they will be inherited from the parent, see the "pipe" and "inherit" options for spawn()'s stdio for more details (default is false)
- uid Number Sets the user identity of the process.
- gid Number Sets the group identity of the process.
The fork method returns object with a built-in communication channel in addition to having all the methods in a normal ChildProcess instance.
Example
Create two js file named support.js and master.js:
File: support.js
console.log("Child Process " + process.argv[2] + " executed." );
File: master.js
const fs = require('fs');
const child_process = require('child_process');
for(var i=0; i<3; i++) {
var worker_process = child_process.fork("support.js", [i]);
worker_process.on('close', function (code) {
console.log('child process exited with code ' + code);
});
}
Now run the master.js to see the result:
$ node master.js
Verify the output:
Child Process 0 executed.
Child Process 1 executed.
Child Process 2 executed.
child process exited with code 0
child process exited with code 0
child process exited with code 0
Explanation 2 :
Difference between spawn and exec functions of child_process
The Node.js Child Processes module (child_process) has two functions spawn and exec, using which we can start a child process to execute other programs on the system.
The most significant difference between child_process.spawn and child_process.exec is in what they return - spawn returns a stream and exec returns a buffer.
child_process.spawn returns an object with stdout and stderr streams. You can tap on the stdout stream to read data that the child process sends back to Node.
stdout being a stream has the "data", "end", and other events that streams have.
spawn is best used to when you want the child process to return a large amount of data to Node - image processing, reading binary data etc.
child_process.spawn is "asynchronously asynchronous", meaning it starts sending back data from the child process in a stream as soon as the child process starts executing.
You can see an example here where I used spawn to read the results of a curl request to Node. (child_process.spawn() returns an object that contains the stdout and stderr streams. To see what the child process sends back to Node, i.e. what it prints, we watch the child process's stdout stream. Since stdout is a stream, it has the "data", "end" and other events that streams have.
If we expect the child process to return a lot of data (for example when doing image processing), we should use the spawn() method. As soon as the child process starts running, its output starts arriving on the stream.
In the example below, a child process is created that reads the response to a curl request.
In this example we download a file from Node.js using curl. We created a new process with spawn() and ran the curl command in it; as long as the command produces output, the stream's data event is emitted and the corresponding callback is called. When there is no more data to read, the end event is emitted. When the process finishes, the exit event is emitted.)
We will be calling curl using child_process's spawn method.
We are using spawn instead of exec for the sake of convenience - spawn returns a stream with data event and doesn't have buffer size issue unlike exec.
That doesn't mean exec is inferior to spawn; in fact we will use exec to download files using wget.
// Function to download a file using curl
// (assumed setup for this snippet: fs, url, spawn and a DOWNLOAD_DIR target directory)
var fs = require('fs');
var url = require('url');
var spawn = require('child_process').spawn;
var DOWNLOAD_DIR = './downloads/';
var download_file_curl = function(file_url) {
// extract the file name
var file_name = url.parse(file_url).pathname.split('/').pop();
// create an instance of writable stream
var file = fs.createWriteStream(DOWNLOAD_DIR + file_name);
// execute curl using child_process' spawn function
var curl = spawn('curl', [file_url]);
// add a 'data' event listener for the spawn instance
curl.stdout.on('data', function(data) { file.write(data); });
// add an 'end' event listener to close the writeable stream
curl.stdout.on('end', function(data) {
file.end();
console.log(file_name + ' downloaded to ' + DOWNLOAD_DIR);
});
// when the spawn child process exits, check if there were any errors and close the writeable stream
curl.on('exit', function(code) {
if (code != 0) {
console.log('Failed: ' + code);
}
});
};
The way data was written to the instance of fs.createWriteStream is similar to way we did for HTTP.get.
The only difference is that the data and end events are listened on the stdout object of spawn. Also we listen to spawn's exit event to make note of any errors.
The child_process.spawn() method spawns a new process using the given command, with command line arguments in args. If omitted, args defaults to an empty array.
fork: The child_process.fork() method is a special case of child_process.spawn() used specifically to spawn new Node.js processes. Like child_process.spawn(), a ChildProcess object is returned. The returned ChildProcess will have an additional communication channel built in that allows messages to be passed back and forth between the parent and child. See child.send() for details.
It is important to keep in mind that spawned Node.js child processes are independent of the parent, with the exception of the IPC communication channel that is established between the two. Each process has its own memory, with its own V8 instance. Because of the additional resource allocations required, spawning a large number of child Node.js processes is not recommended.
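A minimal sketch of that IPC channel (the file names parent.js and child.js are only assumptions): the parent and the forked child exchange messages with send() and the message event.
// parent.js
var child_process = require('child_process');
var child = child_process.fork('child.js');
child.on('message', function (msg) {
  console.log('parent received:', msg);
});
child.send({ hello: 'from parent' });

// child.js
process.on('message', function (msg) {
  console.log('child received:', msg);
  process.send({ hello: 'from child' });
});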
child_process.exec returns the whole buffered output from the child process. By default the buffer size is set at 200k. If the child process returns anything more than that, your program will crash with the error message "Error: maxBuffer exceeded". You can fix that problem by setting a bigger buffer size in the exec options. But you should not do it, because exec is not meant for processes that return HUGE buffers to Node. You should use spawn for that. So what do you use exec for? Use it to run programs that return result statuses, instead of data.
child_process.exec is "synchronously asynchronous", meaning although the exec is asynchronous, it waits for the child process to end and tries to return all the buffered data at once. If the buffer size of exec is not set big enough, it fails with a "maxBuffer exceeded" error.
See an example here where I used exec to execute wget to download files and update Node with the status the execution.
So there it is - the differences between spawn and exec of Node's child_process. Use spawn when you want the child process to return huge binary data to Node, use exec when you want the child process to return simple status messages.
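If you do need exec for something that prints more than the default 200k, the buffer limit can be raised through the options object. A minimal sketch (the command 'node --version' is only an illustrative, small-output command):
var exec = require('child_process').exec;

exec('node --version', { maxBuffer: 1024 * 1024 }, function (error, stdout, stderr) {
  if (error) {
    console.log('exec failed: ' + error.message);
    return;
  }
  console.log('stdout: ' + stdout);
});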
Explanation 3 :
We can create a child process using require('child_process').spawn(), require('child_process').fork() or require('child_process').exec(). So you might ask: what, then, is the difference between these?
require('child_process').spawn() starts sending back data from the child process in a stream as soon as the child process starts executing. When you run this command, it sends a system command that will run in its own process rather than executing code within your Node process. No new V8 instance is created and only one copy of the node module is active on the processor. It is used when you want the child process to return a large amount of data to Node.
child_process.spawn(command, [args], [options])
Suppose you have a file named YourJs.js as
console.log("Process " + process.argv[2]);
var child_process = require('child_process');
var ls = child_process.spawn('node', ['YourJs.js', '0']);   // arguments are passed as an array
ls.stdout.on('data', function(data) {
console.log('stdout: ' + data);
});
When you run the above piece of code, you will get the output stdout: Process 0
require('child_process').fork() is a special instance of spawn that runs a new instance of the V8 engine. This actually means you are creating multiple workers running on the same Node code base for different tasks.
var child_process = require('child_process');
// silent: true pipes the child's stdout back to the parent so we can read it here
var ls = child_process.fork('YourJs.js', ['0'], { silent: true });
ls.stdout.on('data', function(data) {
console.log('stdout: ' + data);
});
When you run the above piece of code, you will get the output stdout: Process 0
require('child_process').exec() returns a buffer from the child process. The default buffer size is 200k. It is asynchronous, but it waits for the child process to end and tries to return all the buffered data at once. If the data returned from the child process is greater than 200k, you will get a maxBuffer exceeded error.
var child_process = require('child_process');
var ls = child_process.exec('node YourJs.js 0', function (error, stdout, stderr) {
if (error)
console.log(error.code);
console.log('stdout: ' + stdout);
});
When you run the above piece of code, you will get the output stdout: Process 0