Google Cloud Tutorial - Architecting a Solution with Long-Running Tasks
Use Google Cloud Tasks to throttle HTTP requests for long-running tasks
As part of an onboarding process for our customers, my system required long-running background tasks that retrieve multiple assets in the range of 10-30 MB each. Simplifying and breaking down the process of acquiring each of these assets, while keeping costs flat or lower, posed challenges.
Goals
- Considerations for Long-Running Tasks
- Find GCP Services that can run long-running tasks
- Create a Long-Running Task
For more context, I need to onboard users to our system with minimal disruption to their current hosting solution. Their existing assets are publicly exposed via URLs in an RSS feed, so porting those assets into my system can be done by making HTTP requests for them.
Firebase
My Firebase instance is intended to serve as the authenticator, database, storage, and, through its functions, the event orchestrator.
After authentication, the user initiates verification through an email, while a Firebase function watching a user-changed event verifies the user. Once I've verified the user, I start collecting assets from the user's RSS feed.
Considerations for Long-Running Tasks
With many different options to accomplish this particular task, I decided to use technologies that were mostly in my current tool set. I needed to think about what gave the best chance of success for retrieving assets and what information would be most beneficial to customers as their assets were downloaded.
One Asset at a Time or Multiple
With cloud architecture, I considered the scaling required for retrieving one asset vs. multiple. My considerations left me with a few questions:
- Cost: how do I keep this within a free tier, or at a minimum, to meet my goal?
- How much memory does each instance need in order to download assets without breaking thresholds that trigger scaling and increase costs?
- How many requests could each instance handle before it had to scale horizontally?
- If memory is limited, should I be processing one asset or multiple assets at a time?
- What's an acceptable waiting time for the customer to have their assets ported over to our hosting?
Finding GCP Services for Long-Running Tasks
One of the selling points of the cloud is that you're billed on resource usage. This typically means free tiers exist for many services.
App Engine provides a free tier within the standard runtime environments. Below is a Node YAML configuration for the free tier in App Engine.
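The original app.yaml isn't reproduced here, but a minimal configuration matching the description that follows might look like this (the runtime version is an assumption; use whichever supported Node.js standard runtime your project targets):

```yaml
# app.yaml - App Engine standard environment, free-tier friendly settings
runtime: nodejs20     # assumption: any supported Node.js standard runtime
instance_class: F1    # 256 MB of memory, 600 MHz CPU
automatic_scaling:
  min_instances: 0    # scale to zero when idle to keep costs down
  max_instances: 1    # never run more than one instance
```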
Viewing the configuration, the instance_class uses an F1 instance class. This allots 256 MB of memory and a 600 MHz CPU limit. Also notice the automatic_scaling: the maximum number of instances allocated is one, with a minimum of zero. This keeps our costs low as usage of the service varies while customers onboard. The 256 MB satisfies our memory requirements as the service pulls different assets for our customers. App Engine also allows long-running tasks to perform operations up to twenty-four hours long, so should more complex operations arise in the future, this GCP service will suffice.
Throttling the Number of Requests
Now that I had the App Engine service configured with code running on it, I weighed the pros and cons of handling the retrieval of all assets at once versus one at a time. What GCP service or mechanism could I leverage to delegate the responsibility of throttling the number of requests to the App Engine service?
Weighing the option of retrieving all assets at once revealed some downsides:
- Would retrieving assets all at once cause the server to run out of memory?
- Does this block other requests until finished?
- If it fails, does it all fail?
At this point, I decided retrieving one asset at a time would be best. I can limit the transaction to a single asset, which involves reading a database record to retrieve the asset's details, making an HTTP request to download the asset, saving the asset to Firebase Storage, and writing back to the record in the database. But I'd need a mechanism to throttle the number of requests made to my service in order to avoid a self-inflicted denial of service and remain within the memory limits.
Google Cloud Tasks - Creating and Throttling Requests
Google Cloud Tasks provides features that allow asynchronous execution of work via App Engine or any HTTP endpoint. For my purposes, I created a task per asset to save it off into Cloud Storage and update the database record once the asset was created.
Set Up the Cloud Tasks Service
In Google Cloud Console, open Cloud Tasks, and enable the API.
Cloud Tasks organizes work into queues. Presently, the interface in the Google Cloud Console limits the creation of these queues, so I used the gcloud command-line tool to generate the queue.
Open the Cloud Shell. In the command line, run: gcloud tasks queues create email
This creates a queue named email.
Throttling the Number of Tasks
Looking at the queue's default configuration, the Max Rate presently stands at 500/s, or 500 tasks dispatched per second. The Max Concurrent value also needs to be lowered in order to throttle the number of requests handled by the App Engine instance. I only want to handle about two requests concurrently, in case the files being downloaded on the instance max out its memory.
To throttle the number of requests, I'll need to update the queue's Max Concurrent and Max Rate values. Open the Cloud Shell again and run the following command: gcloud tasks queues update email --max-concurrent-dispatches=2 --max-dispatches-per-second=10
Creating Tasks from Firebase
With my GCP services in place to handle the load of memory-intensive asset retrieval, I needed to create a task in the queue. For my purposes, task creation happens after user verification through Firebase. Capturing this event, I created a Firebase function that handles creating the task and redirects the user via the verification URL supplied by Firebase.
The userVerified method handles generating a Cloud Task in createPodcastImportTask. To keep things easy to reference, the name of the task uses the podcastId. The task pushes to the configured httpRequest. I'm sending a JSON object containing the podcastId, base64 encoded as required by Cloud Tasks.
Using the CloudTasksClient from the @google-cloud/tasks npm package, generating the task requires a few parameters:
- The projectId where the Cloud Tasks service exists
- The region where the Cloud Tasks service exists, like us-central1
- The queueName set up in the prior steps
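The full function isn't reproduced here, but a sketch of createPodcastImportTask might look like the following. It only uses names already described above; the project id, target URL, and payload shape are assumptions for illustration:

```typescript
import { CloudTasksClient } from '@google-cloud/tasks';

const client = new CloudTasksClient();

// Sketch: create one Cloud Task that asks the App Engine service to import one podcast's assets.
// The userVerified function would call this after the user's email is verified.
export async function createPodcastImportTask(podcastId: string): Promise<void> {
  const projectId = 'my-gcp-project';   // assumption: your GCP project id
  const region = 'us-central1';         // region selected for the Cloud Tasks service
  const queueName = 'email';            // queue created in the prior steps

  // Reference to the task queue, built from the parameters above.
  const parent = client.queuePath(projectId, region, queueName);

  await client.createTask({
    parent,
    task: {
      // Prepend the queuePath value so the task name is fully qualified.
      name: `${parent}/tasks/${podcastId}`,
      httpRequest: {
        httpMethod: 'POST',
        url: 'https://my-service.appspot.com/import', // assumption: endpoint that handles the import
        headers: { 'Content-Type': 'application/json' },
        // Cloud Tasks requires the HTTP request body to be base64 encoded.
        body: Buffer.from(JSON.stringify({ podcastId })).toString('base64'),
      },
    },
  });
}
```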
I need a reference to the task queue using the parameters above. As shown in the code above, client.queuePath generates the Cloud Tasks queue reference. If the task has a name, prepend the value returned from queuePath to the task name.
The queue then populates with one task per asset, and each task calls an HTTP endpoint that handles retrieving that asset.
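To round out the picture, here is a sketch of what that App Engine endpoint could look like, following the one-asset-at-a-time transaction described earlier: read the record, download the asset, store it, and update the record. The Express route, Firestore collection, and field names are assumptions, and the global fetch assumes a Node 18+ runtime:

```typescript
import express from 'express';
import * as admin from 'firebase-admin';

admin.initializeApp(); // assumes default credentials and a default Storage bucket on App Engine
const app = express();
app.use(express.json());

// Sketch: handle one Cloud Tasks dispatch per asset.
// '/import', the 'podcasts' collection, and the field names are illustrative.
app.post('/import', async (req, res) => {
  const { podcastId } = req.body;

  try {
    // 1. Read the database record to find the asset's source URL.
    const docRef = admin.firestore().collection('podcasts').doc(podcastId);
    const snapshot = await docRef.get();
    const assetUrl = snapshot.get('assetUrl');

    // 2. Make an HTTP request to download the asset (Node 18+ global fetch).
    const response = await fetch(assetUrl);
    const buffer = Buffer.from(await response.arrayBuffer());

    // 3. Save the asset into Firebase (Cloud) Storage.
    const file = admin.storage().bucket().file(`assets/${podcastId}`);
    await file.save(buffer);

    // 4. Write back to the record so the app knows the asset has been ported.
    await docRef.update({ imported: true });

    res.status(200).send('ok');
  } catch (err) {
    // A non-2xx response tells Cloud Tasks to retry the task later.
    res.status(500).send('import failed');
  }
});

const port = Number(process.env.PORT) || 8080;
app.listen(port);
```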
Conclusion
Creating Cloud Tasks requires enabling the API in the Cloud Console, and task queues can be throttled through their configuration, as shown above with the gcloud command-line tool.
Finally, Cloud Tasks can be generated using the @google-cloud/tasks CloudTasksClient class, allowing HTTP endpoints to be called to handle the processing of each task.