While writing JS code, developers may rarely find themselves implementing iterables directly, relying instead on built-ins such as Arrays and Maps to hold their data and drive their iterations. So why should Iterables be used, and what was the purpose of introducing them to the JavaScript language? That is what this article investigates.
The advantages in short
Before going into detailed explanations and code examples, here is a summary of the reasons why Iterables are helpful in writing better code.
Abstraction for separation of concerns
The Iterable protocol allows the logic of the actual iteration over the items in a suitable object to be isolated in the implementation of the Iterable. The action that needs to be performed on the items themselves is completely decoupled from this logic, making the code much more readable and testable.
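To make the protocol concrete, here is a minimal sketch of a custom Iterable (the countTo name and the counting logic are purely illustrative): any object with a Symbol.iterator method can be consumed by a for...of loop, so the action performed on each item stays fully decoupled from how the items are produced.

```javascript
// A minimal custom Iterable that counts from 1 up to a limit.
// Any object with a [Symbol.iterator] method returning an iterator
// (an object with a next() method) satisfies the Iterable protocol.
const countTo = (limit) => ({
  [Symbol.iterator]() {
    let n = 0;
    return {
      next: () =>
        n < limit
          ? { value: ++n, done: false }
          : { value: undefined, done: true }
    };
  }
});

// The consuming code only says what to do with each item.
for (const n of countTo(3)) {
  console.log(n); // prints 1, 2, 3
}
```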
Memory usage
Iterating over an object that is not an Array often requires the code to first collect all of its items into an Array and then process them. This can create a potentially large collection of items in memory, which is inefficient. Iterables solve this problem by producing items one at a time.
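As a sketch of the idea (the evensUpTo generator below is a made-up example), a generator function yields items lazily, so the consumer never needs the full collection in memory and can even stop early:

```javascript
// A generator produces values lazily, one per yield, instead of
// building the whole collection up front.
function* evensUpTo(limit) {
  for (let n = 0; n <= limit; n += 2) {
    yield n; // only the current value exists; no Array is allocated
  }
}

// The consumer can stop early; values after the break are never produced.
const firstFew = [];
for (const n of evensUpTo(1000000)) {
  if (firstFew.length === 3) break;
  firstFew.push(n);
}
console.log(firstFew); // prints [ 0, 2, 4 ]
```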
Avoiding callback hell
Another mechanism traditionally used to operate on a collection of items while iterating over it is to pass a callback to the function that performs the iteration. This leads to the well-known callback hell problem; iterables provide an easy mechanism to achieve the same result without running into callback issues.
Understanding the advantages
For a more detailed understanding of the advantages of using Iterables in JS code, the following sections take up a problem and progressively improve its implementation.
The first shot
Consider the following code, written to find all files in a folder whose contents contain a certain text.
const {promises: fsp} = require('fs');
const path = require('path');
async function getMatchingFiles(folderPath, text, results = []) {
  // Collecting items for iteration
  const folderContents = await fsp.readdir(folderPath);
  for (const folderContent of folderContents) {
    const fullContentPath = path.join(folderPath, folderContent);
    const stat = await fsp.stat(fullContentPath);
    if (stat.isDirectory()) {
      // Await the recursion so subfolder results are gathered before returning
      await getMatchingFiles(fullContentPath, text, results);
    } else {
      const fileContent = await fsp.readFile(fullContentPath, 'utf8');
      // Filtering items
      if (fileContent.includes(text)) {
        // Gathering results
        results.push(fullContentPath);
      }
    }
  }
  return results;
}
The problem with this is straightforward: the function getMatchingFiles is complicated, hard to read and hard to test, because it combines the logic of iterating, filtering and gathering results.
Improve by abstraction for separation of concerns
The next logical step would be to separate the iteration and filtering from the logic that collects the files in a folder.
const {promises: fsp} = require('fs');
const path = require('path');
async function getAllFilePathsInFolder(folderPath) {
  const folderContents = await fsp.readdir(folderPath);
  let results = [];
  for (const folderContent of folderContents) {
    const fullContentPath = path.join(folderPath, folderContent);
    const stat = await fsp.stat(fullContentPath);
    results = [
      ...results,
      ...(
        stat.isDirectory() ?
          // Await the recursion; spreading a pending Promise would throw
          (await getAllFilePathsInFolder(fullContentPath)) :
          [fullContentPath]
      )
    ];
  }
  return results;
}
async function getMatchingFiles(folderPath, text) {
  // The entire list of items is collected before it can be filtered
  const allFilePathsInFolder = await getAllFilePathsInFolder(folderPath);
  const results = [];
  for (const filePath of allFilePathsInFolder) {
    const content = await fsp.readFile(filePath, 'utf8');
    if (content.includes(text)) {
      results.push(filePath);
    }
  }
  return results;
}
Here the problem of inefficient memory use comes up. Notice that the function getAllFilePathsInFolder collects all the files in the folder and returns them in an Array. So the entire list of files, which can be very large compared to the filtered results actually needed, has to be kept in memory. This can be problematic.
Improve with better memory management
To reduce the amount of data kept in memory, the code needs to ensure that the filtering happens while the files in the folder are being found, while also making sure that it does not go back to combining the logic of finding files and filtering in one function.
const {promises: fsp} = require('fs');
const path = require('path');
async function getAllFilePathsInFolder(folderPath, filterFunction) {
  const folderContents = await fsp.readdir(folderPath);
  let results = [];
  for (const folderContent of folderContents) {
    const fullContentPath = path.join(folderPath, folderContent);
    const stat = await fsp.stat(fullContentPath);
    results = [
      ...results,
      ...(
        stat.isDirectory() ?
          // Pass the filter down and await the recursion
          (await getAllFilePathsInFolder(fullContentPath, filterFunction)) :
          // This function additionally applies the filtering function
          ((await filterFunction(fullContentPath)) ? [fullContentPath] : [])
      )
    ];
  }
  // Filtered results are returned without extra memory usage
  return results;
}
const fileContainsText = async (filePath, text) => {
  const content = await fsp.readFile(filePath, 'utf8');
  return content.includes(text);
};
async function getMatchingFiles(folderPath, text) {
  // The filtering logic is abstracted into a function but passed in as a callback
  return getAllFilePathsInFolder(folderPath, (filePath) => fileContainsText(filePath, text));
}
This mechanism of passing a filter function to the file-finder function is effective in keeping the filter logic appropriately abstracted and in improving memory utilisation. But it solves the abstraction problem only partially, because the function collecting the files still has to accept an extra argument specifically to handle filtering when it is required. It also lands in the famous callback hell problem. Passing one function as an argument to another has caused enough trouble already to motivate several new concepts in JS, like Promises; Iterables are another such concept.
The Iterable solution
Now, implementing the solution to this problem with Iterables allows the code to be written with appropriate abstraction while making sure that functions are not passed around as callbacks.
const {promises: fsp} = require('fs');
const path = require('path');
async function* filesInFolder(folderPath) {
  const files = await fsp.readdir(folderPath);
  for (const file of files) {
    const fileFullPath = path.join(folderPath, file);
    const stat = await fsp.stat(fileFullPath);
    if (stat.isDirectory()) {
      // Delegate to the nested iterable for subfolders
      yield* filesInFolder(fileFullPath);
    } else {
      yield fileFullPath;
    }
  }
}
const fileContainsText = async (filePath, text) => {
  const content = await fsp.readFile(filePath, 'utf8');
  return content.includes(text);
};
async function getMatchingFiles(folderPath, text) {
  const results = [];
  // Iterable logic takes care of collecting the items of the iteration
  for await (const filePath of filesInFolder(folderPath)) {
    // The filter function is independently applied
    if (await fileContainsText(filePath, text)) {
      results.push(filePath);
    }
  }
  // Filtered results are returned without extra memory usage
  return results;
}
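A further benefit of this shape is that the consumer can stop the iteration early, something a callback-based walker cannot do cleanly. The sketch below uses an in-memory async generator as a stand-in for the file walk (produce, firstMatch and the sample names are all hypothetical):

```javascript
// Stand-in async generator; the counter records how many items
// it actually produced before the consumer stopped.
async function* produce(items, counter) {
  for (const item of items) {
    counter.produced++;
    yield item;
  }
}

// Returning from inside for await...of abandons the iteration,
// so no further items are produced.
async function firstMatch(iterable, predicate) {
  for await (const item of iterable) {
    if (await predicate(item)) return item;
  }
  return null;
}

(async () => {
  const counter = { produced: 0 };
  const match = await firstMatch(
    produce(['a.txt', 'b.md', 'c.txt'], counter),
    async (name) => name.endsWith('.md')
  );
  console.log(match, counter.produced); // prints b.md 2
})();
```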