Building a Bulk Asynchronous Bird Recipient Validation Tool

Zachary Samuels

May 26, 2022


Key Takeaways

    • The author built a bulk recipient validation tool to validate millions of email addresses efficiently using Bird’s Recipient Validation API.

    • Node.js proved faster and more scalable than Python due to its non-blocking I/O and lack of GIL limitations.

    • The tool reads CSV files asynchronously, calls the validation API for each email, and writes results to a new CSV in real time.

    • The approach avoids memory bottlenecks and improves throughput to about 100,000 validations in under a minute.

    • Future improvements could include better retry handling, a user-friendly UI, or migrating to serverless environments for scalability.

Q&A Highlights

  • What is the purpose of the Bulk Asynchronous Recipient Validation Tool?

    It validates large volumes of email addresses by integrating directly with Bird’s Recipient Validation API, outputting verified results quickly without manual uploads.

  • Why was Python initially used and later replaced by Node.js?

    Python’s Global Interpreter Lock (GIL) limited concurrency, while Node.js allowed true asynchronous execution, resulting in far faster parallel API calls.

  • How does the tool handle large files without running out of memory?

    Instead of loading all data at once, the script processes each CSV line individually—sending the validation request and immediately writing results to a new CSV file.

  • What problem does the tool solve for developers?

    It enables email list validation at scale, overcoming the 20MB limit of SparkPost’s UI-based validator and eliminating the need to upload multiple files manually.

  • How fast is the final version of the program?

    Around 100,000 validations complete in 55 seconds, compared to over a minute using the UI version.

  • What issues were encountered on Windows systems?

    Node.js HTTP client connection pooling caused “ENOBUFS” errors after many concurrent requests, which were fixed by configuring axios connection reuse.

  • What future enhancements are suggested?

    Adding error handling and retries, creating a front-end interface, or implementing the tool as a serverless Azure Function for better scalability and resilience.

If you are looking for a simple, fast program that takes in a CSV, calls the Recipient Validation API, and outputs a CSV, this program is for you.

When building email applications, developers often need to integrate multiple services and APIs. Understanding email API fundamentals in cloud infrastructure provides the foundation for building robust tools like the bulk validation system we'll create in this guide.

One of the questions we occasionally receive is: how can I bulk-validate email lists with Recipient Validation? There are two options: upload a file through the SparkPost UI for validation, or make an individual call per email to the API (as the API validates a single email at a time).

The first option works great but has a limit of 20MB (about 500,000 addresses). What if someone has an email list containing millions of addresses? That could mean splitting it into thousands of CSV file uploads.

Since uploading thousands of CSV files seems impractical, I took that use case and began to wonder how fast I could get the API to run. In this blog post, I will explain what I tried and how I eventually arrived at a program that could complete around 100,000 validations in 55 seconds (whereas in the UI I got around 100,000 validations in 1 minute 10 seconds). And while that still works out to roughly 100 hours for about 654 million validations, the script can run in the background, saving significant time.
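
As a quick sanity check on that estimate, here is the arithmetic as a one-liner (illustrative only, not part of the tool):

// 654 million emails at 100,000 validations per 55 seconds
const hours = (654_000_000 / 100_000) * 55 / 3600;
console.log(hours.toFixed(1)); // ≈ 99.9 hours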

The final version of this program can be found here.

My first mistake: using Python

Python is one of my favorite programming languages. It excels in many areas and is incredibly straightforward. However, one area it does not excel in is concurrency. While Python can run asynchronous functions, it is constrained by what is known as the Python Global Interpreter Lock, or GIL.

“The Python Global Interpreter Lock or GIL, in simple words, is a mutex (or a lock) that allows only one thread to hold the control of the Python interpreter.

This means that only one thread can be in a state of execution at any point in time. The impact of the GIL isn’t visible to developers who execute single-threaded programs, but it can be a performance bottleneck in CPU-bound and multi-threaded code.”

Since the GIL allows only one thread to execute at a time, even on multi-core systems, it has gained a reputation as an “infamous” feature of Python (see Real Python’s article on the GIL, from which the quote above is taken).

At first, I wasn’t aware of the GIL, so I started programming in Python. In the end, even though my program was asynchronous, it kept locking up, and no matter how many threads I added, I still only got about 12-15 iterations per second.

The main portion of the asynchronous function in Python can be seen below:

import aiohttp
from tqdm import tqdm
from urllib.parse import urljoin  # Equivalent to requests.compat.urljoin

async def validateRecipients(f, fh, apiKey, snooze, count):
    # f: csv reader over the infile; fh: csv writer for the outfile;
    # url (the single-recipient validation endpoint) is defined elsewhere
    h = {
        'Authorization': apiKey,
        'Accept': 'application/json'
    }
    with tqdm(total=count) as pbar:  # Progress bar over the total line count
        async with aiohttp.ClientSession() as session:
            for address in f:
                for i in address:
                    thisReq = urljoin(url, i)  # Build the request URL for this email
                    async with session.get(thisReq, headers=h, ssl=False) as resp:
                        content = await resp.json()
                        row = content['results']
                        row['email'] = i
                        fh.writerow(row)  # Write the result row immediately
                        pbar.update(1)

So I scrapped using Python and went back to the drawing board…

I settled on Node.js due to its ability to perform non-blocking I/O operations extremely well. Another excellent option for handling asynchronous API processing is building serverless webhook consumers with Azure Functions, which can efficiently handle variable workloads. I am also pretty familiar with programming in Node.js.

Utilizing asynchronous aspects of Node.js, this approach worked well. For more details about asynchronous programming in Node.js, see RisingStack’s guide to asynchronous programming in Node.js.
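
To illustrate why this helps, here is a minimal sketch (not the tool itself) of Node.js holding many validation requests in flight at once; the email list and API-key variable are placeholders:

const axios = require("axios");

// Placeholder inputs, not from the real tool
const emails = ["a@example.com", "b@example.com", "c@example.com"];
const BASE_URL = "https://api.sparkpost.com/api/v1/recipient-validation/single/";

async function validateAll() {
    // All requests are dispatched immediately; the event loop interleaves
    // the network I/O, so no call blocks the others
    const results = await Promise.all(
        emails.map((email) =>
            axios
                .get(BASE_URL + email, {
                    headers: { Authorization: process.env.SPARKPOST_API_KEY },
                })
                .then((resp) => resp.data.results)
        )
    );
    console.log(results);
}

validateAll();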

My second mistake: trying to read the file into memory

My initial idea was as follows:

Flowchart illustrating the process of validating a CSV list of emails, starting with ingestion, format checking, asynchronous API validation, result aggregation, and concluding with outputting to a CSV file.


First, ingest a CSV list of emails. Second, load the emails into an array and check that they are in the correct format. Third, asynchronously call the recipient validation API. Fourth, wait for the results and load them into a variable. And finally, output this variable to a CSV file.

This worked very well for smaller files. The issue arose when I tried to run 100,000 emails through: the program stalled at around 12,000 validations. With the help of one of our front-end developers, I saw that the issue was loading all the results into a variable (and thereby quickly running out of memory). If you would like to see the first iteration of this program, I have linked it here: Version 1 (NOT RECOMMENDED).


My revised approach was as follows:

Flowchart illustrating an email processing workflow, showing steps from ingesting a CSV list of emails to outputting results to a CSV file, with asynchronous validation via an API.


First, ingest a CSV list of emails. Second, count the number of emails in the file for reporting purposes. Third, as each line is read asynchronously, call the recipient validation API and output the results to a CSV file.

Thus, for each line read in, I call the API and write out the results asynchronously so as not to keep any of this data in long-term memory. I also removed the email syntax checking after speaking with the recipient validation team, as they informed me Recipient Validation already has checks built in to determine whether an email is valid.


Breaking down the final code

After reading in and validating the terminal arguments, I run the following code. First, I read in the CSV file of emails and count each line. This function has two purposes: 1) it allows me to accurately report on file progress (as we will see later), and 2) it allows me to stop the timer once the number of completed validations equals the number of emails in the file. I added the timer so I could run benchmarks and make sure I was getting good results.

let count = 0; // Line count
require("fs")
    .createReadStream(myArgs[1])
    .on("data", function (chunk) {
        // Read the infile and increase the count for each newline (byte 10)
        for (let i = 0; i < chunk.length; ++i)
            if (chunk[i] === 10) count++;
    })
    .on("close", function () {
        // After all lines have been counted, run the recipient validation function
        validateRecipients.validateRecipients(count, myArgs);
    });
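
For context on myArgs: the script takes its file paths from the command line. A minimal sketch of the argument handling might look like the following (the flag names here are an assumption, not necessarily the published tool's exact interface):

// Hypothetical invocation: node index.js --infile input.csv --outfile output.csv
// myArgs[1] is then the input path and myArgs[3] the output path
const myArgs = process.argv.slice(2);
if (myArgs.length < 4) {
    console.error("Usage: node index.js --infile <in.csv> --outfile <out.csv>");
    process.exit(1);
}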


I then call the validateRecipients function. Note that this function is asynchronous. After validating that the infile and outfile are in CSV format, I start a program timer using the JSDOM library and write a header row to the outfile.

async function validateRecipients(email_count, myArgs) {
    if (
        // If both the infile and outfile are in .csv format
        extname(myArgs[1]).toLowerCase() == ".csv" &&
        extname(myArgs[3]).toLowerCase() == ".csv"
    ) {
        let completed = 0; // Counter for each API call
        email_count++; // The line counter returns #lines - 1, so correct the total
        // Start a timer
        const { window } = new JSDOM();
        const start = window.performance.now();
        const output = fs.createWriteStream(myArgs[3]); // Outfile
        output.write(
            "Email,Valid,Result,Reason,Is_Role,Is_Disposable,Is_Free,Delivery_Confidence\n"
        ); // Write the headers in the outfile
        // ... the read/validate/write loop shown next runs inside this block
    }
}

The following script is really the bulk of the program, so I will break it up and explain what is happening. For each line of the infile:

fs.createReadStream(myArgs[1])
    .pipe(csv.parse({ headers: false }))
    .on("data", async (email) => {
        let url =
            SPARKPOST_HOST +
            "/api/v1/recipient-validation/single/" +
            email;
        // For each row read from the infile, call the SparkPost Recipient
        // Validation API; the .then handler for this request is shown below
        await axios
            .get(url, {
                headers: {
                    Authorization: SPARKPOST_API_KEY,
                },
            })

Then, on the response:

  • Add the email to the JSON (to be able to print out the email in the CSV)

  • Check whether reason is null and, if so, populate it with an empty value (this keeps the CSV format consistent, as reason is only present in some responses)

  • Set the options and keys for the json2csv module.

  • Convert the JSON to CSV and output (utilizing json2csv)

  • Write progress in the terminal

  • Finally, if the number of completed validations equals the number of emails in the file, stop the timer and print out the results


.then(function (response) {
    response.data.results.email = String(email); 
    // Adds the email as a value/key pair to the response JSON for output
    if (!response.data.results.reason) response.data.results.reason = "";
    // If reason is null, set it to blank so the CSV is uniform
    // Utilizes json-2-csv to convert the JSON to CSV format and output
    let options = {
        prependHeader: false, // Disables JSON values from being added as header rows for every line
        keys: [
            "results.email",
            "results.valid",
            "results.result",
            "results.reason",
            "results.is_role",
            "results.is_disposable",
            "results.is_free",
            "results.delivery_confidence",
        ], // Sets the order of keys
    };
    let json2csvCallback = function (err, csv) {
        if (err) throw err;
        output.write(`${csv}\n`);
    };
    converter.json2csv(response.data, json2csvCallback, options);
    completed++; // Increase the API counter
    process.stdout.write(`Done with ${completed} / ${email_count}\r`); 
    // Output status of Completed / Total to the console without showing new lines
    // If all emails have completed validation
    if (completed == email_count) {
        const stop = window.performance.now(); // Stop the timer
        console.log(
            `All emails successfully validated in ${(stop - start) / 1000} seconds`
        );
    }
});
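
Putting it together, a run from the terminal might look like the following (the flag names are an assumption based on how myArgs is indexed, and the timing line is illustrative of the ~55-second benchmark above):

$ node index.js --infile emails.csv --outfile results.csv
Done with 100000 / 100000
All emails successfully validated in 55.0 seconds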

 

One final issue I found: while this worked great on Mac, I ran into the following error on Windows after around 10,000 validations:

Error: connect ENOBUFS XX.XX.XXX.XXX:443 – Local (undefined:undefined) with email XXXXXXX@XXXXXXXXXX.XXX

After doing some further research, this appears to be an issue with the Node.js HTTP client connection pool not reusing connections. I found this Stack Overflow article on the issue, and after further digging, found a good default config for the axios library that resolved it. I am still not certain why this issue only happens on Windows and not on Mac.
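
The fix follows the common pattern of giving axios keep-alive agents so sockets are reused rather than opened per request. A minimal sketch (the exact settings the tool ships with may differ):

const http = require("http");
const https = require("https");
const axios = require("axios");

// Reuse TCP connections across requests instead of opening a new socket
// for every validation call, which can exhaust local ports on Windows
const axiosInstance = axios.create({
    httpAgent: new http.Agent({ keepAlive: true }),
    httpsAgent: new https.Agent({ keepAlive: true }),
});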

Next Steps

If you are looking for a simple, fast program that takes in a CSV, calls the Recipient Validation API, and outputs a CSV, this program is for you.

Some additions to this program would be the following:

  • Build a front end or easier UI for use

  • Better error and retry handling, since the program currently doesn’t retry a call if the API throws an error for some reason (see the sketch after this list)

  • Consider implementing as a serverless Azure Function for automatic scaling and reduced infrastructure management
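
As a starting point for the retry idea, a simple exponential-backoff wrapper around the validation call might look like this sketch (not part of the current tool):

const axios = require("axios");

// Hypothetical helper: retry a failed validation call with exponential
// backoff before giving up for good
async function validateWithRetry(url, headers, maxRetries = 3) {
    for (let attempt = 0; attempt <= maxRetries; attempt++) {
        try {
            return await axios.get(url, { headers });
        } catch (err) {
            if (attempt === maxRetries) throw err; // Out of retries
            const delayMs = 500 * 2 ** attempt; // 500 ms, 1 s, 2 s, ...
            await new Promise((resolve) => setTimeout(resolve, delayMs));
        }
    }
}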


I’d also be curious to see whether faster results could be achieved with another language such as Golang or Erlang/Elixir. Beyond language choice, infrastructure limitations can also impact performance; we learned this firsthand when we hit undocumented DNS limits in AWS that affected our high-volume email processing systems.

For developers interested in combining API processing with visual workflow tools, check out how to integrate Flow Builder with Google Cloud Functions for no-code automation workflows.

Please feel free to provide me any feedback or suggestions for expanding this project.
