Tutorials: The world of High Performance Distributed & Async Task Queue(s) with Celery

Wednesday - May 15th, 2024 9 a.m.-12:30 p.m. in Room 310/311

Presented by:

Description

All modern systems use distributed and asynchronous tasks to make proper use of the available hardware and software resources in a safe and reliable manner. The ability to create and distribute asynchronous tasks drastically affects both the performance and the capability of a system.

The most popular way of creating asynchronous task queues involves streaming or pub/sub infrastructure such as Apache Kafka, RabbitMQ, or Kinesis, all of which are, more or less, equally suitable for a wide variety of scenarios.

So why are we talking about something else here, namely “Celery”? The answer is the Python programming language.

If you're building a system in Python (especially a microservices-based and/or distributed system) and want an asynchronous (and distributed) task queue that is pure Python, simple, fast, and diminishes the boundaries between distributed systems, then you should consider using “Celery”.

Using something like “Celery” is not about being better than the other solutions out there; it’s about being able to make use of the Python ecosystem and build things faster without leaving the Python programming language for something else.

Welcome to this tutorial on Celery, the open source distributed task queue. In this tutorial, we'll learn how to use “Celery” to create an end-to-end system. We’ll also learn how to visualize the distributed task queues at runtime using “Flower”.
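To give a flavor of what the tutorial covers, here is a minimal sketch of a Celery task. It assumes a local Redis broker at redis://localhost:6379/0 and a module named tasks.py; both are illustrative choices, not part of the tutorial materials.

```python
# tasks.py -- a minimal Celery app (illustrative sketch, assumes a local Redis broker)
from celery import Celery

app = Celery(
    "tasks",
    broker="redis://localhost:6379/0",    # where tasks are queued
    backend="redis://localhost:6379/0",   # where results are stored
)

@app.task
def add(x, y):
    # Executes on a worker process, not in the caller's process.
    return x + y

if __name__ == "__main__":
    # Enqueue the task and block until a worker returns the result.
    result = add.delay(2, 3)
    print(result.get(timeout=10))
```

With a worker started via `celery -A tasks worker --loglevel=info`, calling `add.delay(2, 3)` enqueues the work and returns an AsyncResult; Flower can then be launched with `celery -A tasks flower` to watch the queue at runtime.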

This tutorial will include classroom exercises, post-class homework, and complementary readings. All presentation materials, code, and exercises will be shared in advance (~2-3 days), and the solutions to the exercises will be shared after the tutorial is completed.