How to scale this application

The example we built in this chapter consists of two parts that run independently and use RabbitMQ to exchange tasks and results.

Scaling this application is very easy: you just need to start as many workers as you want, and they don't have to run on the same server. You can run workers on a distributed cluster of servers. If your workers can't handle the load, you can start extra workers, which will immediately consume the waiting tasks and start the decoding process.
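To make the load balancing concrete, here is a minimal sketch of a worker's consuming loop, written against a recent version of the lapin crate. The broker address, the queue name `tasks`, and the crate version are assumptions for illustration, not the chapter's exact code. Every worker instance runs the same loop; RabbitMQ delivers each waiting message to one of the connected consumers, so adding capacity is just a matter of launching more copies of this process.

```rust
use futures_lite::stream::StreamExt;
use lapin::{options::*, types::FieldTable, Connection, ConnectionProperties, Result};

fn main() -> Result<()> {
    async_global_executor::block_on(async {
        // Broker address and queue name are assumptions for this sketch.
        let addr = "amqp://127.0.0.1:5672/%2f";
        let conn = Connection::connect(addr, ConnectionProperties::default()).await?;
        let channel = conn.create_channel().await?;
        channel
            .queue_declare("tasks", QueueDeclareOptions::default(), FieldTable::default())
            .await?;
        // Every worker registers a consumer on the same queue; RabbitMQ hands
        // each pending task to exactly one of the connected workers.
        let mut consumer = channel
            .basic_consume(
                "tasks",
                "worker", // consumer tag
                BasicConsumeOptions::default(),
                FieldTable::default(),
            )
            .await?;
        while let Some(delivery) = consumer.next().await {
            let delivery = delivery?;
            // ... decode the task payload here ...
            delivery.ack(BasicAckOptions::default()).await?;
        }
        Ok(())
    })
}
```

Because the queue lives in the broker rather than in any worker, new workers can join or leave at any time without coordination.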

But this system has a potential bottleneck: the message broker. For this example, you can handle it by starting extra independent message brokers; RabbitMQ, for example, supports multiple instances through its clustering feature. Your server can hold connections to multiple message brokers, but you can't spawn as many independent brokers as you want, because each one will end up holding a different set of tasks.
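If you do run several brokers, the publishing side has to know about all of them. The following sketch (the broker addresses are hypothetical) simply opens a connection to every reachable broker so the server can spread tasks across them; it does not solve the problem that each independent broker then holds a different subset of tasks.

```rust
use lapin::{Connection, ConnectionProperties, Result};

// Hypothetical addresses of independent brokers (or of cluster nodes).
const BROKERS: &[&str] = &[
    "amqp://rabbit-1.internal:5672/%2f",
    "amqp://rabbit-2.internal:5672/%2f",
];

fn main() -> Result<()> {
    async_global_executor::block_on(async {
        let mut connections = Vec::new();
        for addr in BROKERS {
            // Keep whatever brokers are reachable; skip the ones that are down.
            match Connection::connect(addr, ConnectionProperties::default()).await {
                Ok(conn) => connections.push(conn),
                Err(err) => eprintln!("broker {addr} unavailable: {err}"),
            }
        }
        println!("connected to {} broker(s)", connections.len());
        // The server can now publish tasks over any of these connections.
        Ok(())
    })
}
```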

Is it possible to share a single list of tasks? Yes, you can use a traditional database or a storage such as Redis, but that becomes another bottleneck, because it's hard to serve millions of clients from the same database instance.

How can we handle the database bottleneck? You can split the task list by client and keep each client's list on a single storage instance, as sketched below. If you want to provide a feature where your clients share tasks with each other, you can create shared lists and store them in a database, which won't carry a heavy load in this case.
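Splitting by client boils down to a deterministic mapping from a client identifier to one storage instance. Here is a minimal sketch of such a mapping; the shard addresses are hypothetical, and the standard library hasher stands in for whatever consistent-hashing scheme you would use in production.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hypothetical storage instances, one per shard.
const SHARDS: &[&str] = &[
    "redis://tasks-shard-0.internal/",
    "redis://tasks-shard-1.internal/",
    "redis://tasks-shard-2.internal/",
];

/// Every task for a given client goes to the same shard,
/// so no single storage instance has to hold all task lists.
fn shard_for_client(client_id: &str) -> &'static str {
    let mut hasher = DefaultHasher::new();
    client_id.hash(&mut hasher);
    SHARDS[(hasher.finish() as usize) % SHARDS.len()]
}

fn main() {
    for client in ["alice", "bob", "carol"] {
        println!("{client} -> {}", shard_for_client(client));
    }
}
```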

As you can see, scaling is not a precisely defined process, and you have to do some experimenting to achieve the desired result. In any case, you should strive to separate tasks across microservices and use messages or RPCs to achieve loose coupling, as well as good performance, for your application.
