This is a rather high-level ALib Module. It is by design separated from the low-level module ALib Threads, which provides just fundamental functionality like a simple thread class, thread registration and identification, and several locking and asynchronous-notification mechanisms.
In contrast to that, this module introduces a thread model. So what is that? The answer is that programmers have various options to implement and manage parallelism. The choice depends on the type of application and is, to a good degree, also a matter of "taste". Once more, ALib here chooses a rather traditional design.
This is the main reason why this module is separated from the low-level functionality: It is purely up to the library's user whether or not to select this module into an ALib Distribution. The second important reason for separating this module from module ALib Threads is simply to avoid massive cyclic dependencies with other modules.
Massively multi-threaded applications like server software do not use simple thread pools, but rather more sophisticated concepts like partitioned thread pools or task-affinity scheduling. Supporting such software is not within the ambition and scope of ALib.
Instead, ALib aims to provide reasonably efficient concepts that are easy to manage and understand.
This module differentiates between two sorts of worker threads, which are here discussed in theory.
Dedicated threads can be established to process tasks that are related by data, context, or type. Each thread works with a specific dataset or set of tasks, which minimizes the need for locking and helps avoid conflicts.
The advantages are:
- A lower need for synchronization: if the datasets are disjoint, fewer locks are required.
- Good cache efficiency, because each thread repeatedly works on the same, fixed data.
- A higher degree of application control, and probably better-structured code entities.
When It’s Ideal: This approach is especially effective when you have distinct task categories with minimal cross-dependency. It’s common in real-time systems, certain types of game loops, or applications where tasks have strict priorities or dependencies on specific resources.
Very common in modern libraries are thread pools. Those are collections of threads that are assigned tasks as they become available. Each task is handled by the next available thread, regardless of its type or data.
Advantages:
- High CPU utilization: all threads are actively used when work is available.
- Very adaptable to changing load, because the pool balances tasks dynamically.
- Lower implementation complexity, because thread management is handled by the library.
When It’s Ideal: Thread pools are ideal for workloads where tasks are mostly independent, like web servers handling requests, background processing jobs, or any system where tasks are numerous and lightweight.
The following table recaps the pros and cons associated with each type:
Aspect | Dedicated Threads | Thread Pools (Library Approach) |
---|---|---|
Task Locality | High, due to grouped tasks by type/data | Low to moderate, as tasks go to any thread |
Synchronization Need | Lower, fewer locks if data sets are disjoint | Higher, requires locks or atomic operations for shared data |
CPU Utilization | May have idle threads if tasks are unbalanced | High, as all threads are actively used when needed |
Cache Efficiency | Good, better locality due to fixed data per thread | Variable, depends on task scheduling |
Adaptability to Load | Less adaptable, might require load-balancing strategies | Very adaptable, dynamic balancing by pool |
Implementation Complexity | Higher, requires more explicit management | Lower, handled by the library |
Application Control | Higher, and probably better structured code entities | Lower, handled by the library |
The class ThreadPool implements the concept of pooled threads, as discussed in the introductory sections above.
Let us start with a simple sample. The first thing to do is to define a job-type, derived from class Job:
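The original code snippet is not reproduced here; the following is a minimal, self-contained sketch of what such a job-type could look like. The stand-in base class Job and the signature of its virtual Do method are assumptions made for illustration; real code would derive from the Job class provided by this module.

```cpp
// Hypothetical stand-in for the base class Job; real code would include the
// ALib headers and derive from the library's Job type instead.
struct Job
{
    virtual ~Job()      = default;
    virtual bool Do()   = 0;    // performs the work; this signature is an assumption
};

// A sample job-type carrying one input value and one result.
struct MyJob : Job
{
    int input;
    int result = 0;

    explicit MyJob(int i) : input(i) {}

    // The work to perform: here we simply double the input.
    bool Do() override
    {
        result = input * 2;
        return true;
    }
};

// Small helper that exercises the type synchronously.
inline int RunMyJob(int value)
{
    MyJob job(value);
    job.Do();
    return job.result;
}
```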
With this in place, we can pass jobs of this type to a thread pool:
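As a sketch of the scheduling step, the following self-contained example uses a hypothetical minimal ThreadPool stand-in; the constructor taking a thread count and the method name Schedule are assumptions, and for brevity the jobs are folded into small lambdas instead of Job objects.

```cpp
#include <atomic>
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// Hypothetical minimal stand-in for a thread pool: worker threads take jobs
// from a shared queue; the real ALib class offers a richer interface.
class ThreadPool
{
  public:
    explicit ThreadPool(std::size_t n)
    {
        for (std::size_t i = 0; i < n; ++i)
            workers.emplace_back([this]{ WorkLoop(); });
    }

    ~ThreadPool()   // drains the queue and joins all workers
    {
        { std::lock_guard<std::mutex> lock(mtx);  done = true; }
        cv.notify_all();
        for (auto& t : workers)
            t.join();
    }

    // Hands a job over to the next available worker thread.
    void Schedule(std::function<void()> job)
    {
        { std::lock_guard<std::mutex> lock(mtx);  jobs.push(std::move(job)); }
        cv.notify_one();
    }

  private:
    void WorkLoop()
    {
        for (;;)
        {
            std::function<void()> job;
            {
                std::unique_lock<std::mutex> lock(mtx);
                cv.wait(lock, [this]{ return done || !jobs.empty(); });
                if (jobs.empty())
                    return;                     // shut down and queue drained
                job = std::move(jobs.front());
                jobs.pop();
            }
            job();
        }
    }

    std::vector<std::thread>          workers;
    std::queue<std::function<void()>> jobs;
    std::mutex                        mtx;
    std::condition_variable           cv;
    bool                              done = false;
};

// Schedules a batch of jobs and sums their (doubled) results.
inline int SumDoubled(int upTo)
{
    std::atomic<int> sum{0};
    {
        ThreadPool pool(4);
        for (int i = 1; i <= upTo; ++i)
            pool.Schedule([&sum, i]{ sum += i * 2; });
    }   // pool destructor waits for all jobs to finish
    return sum.load();
}
```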
This already demonstrates the basic idea of how the type can be used.
This might be all that needs to be explained in this chapter.
As discussed in the introductory sections above, one principal type of thread is one that is "dedicated to a group of jobs".
While the foundational module ALib Threads already provides the simple type Thread, which implements this concept following the design of class Thread of the Java programming language, this module introduces a more sophisticated implementation with the class DedicatedWorker.
Here is a quick sample code that demonstrates the use of this class. As a prerequisite, we rely on the same class MyJob that was introduced in the previous section:
Having this in place, a simple dedicated worker is quickly created and fed with such a job:
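Since the original snippet is not included here, the following self-contained sketch restates the MyJob stand-in and adds a hypothetical minimal DedicatedWorker; the names Push and WaitIdle, as well as the completion handling, are assumptions made for illustration:

```cpp
#include <condition_variable>
#include <deque>
#include <memory>
#include <mutex>
#include <thread>

// Hypothetical stand-ins mimicking the Job/DedicatedWorker pair discussed in
// the text; the real ALib classes offer a richer interface.
struct Job
{
    virtual ~Job()      = default;
    virtual void Do()   = 0;
};

struct MyJob : Job
{
    int input;
    int result = 0;
    explicit MyJob(int i) : input(i) {}
    void Do() override    { result = input * 2; }
};

class DedicatedWorker
{
  public:
    DedicatedWorker()     { thread = std::thread([this]{ Loop(); }); }
    ~DedicatedWorker()
    {
        { std::lock_guard<std::mutex> lock(mtx);  stop = true; }
        cv.notify_one();
        thread.join();
    }

    // Shares ownership of the job with the caller, so the sender can
    // inspect the result after processing.
    void Push(std::shared_ptr<Job> job)
    {
        { std::lock_guard<std::mutex> lock(mtx);  queue.push_back(std::move(job)); }
        cv.notify_one();
    }

    // Blocks until all pushed jobs have been processed.
    void WaitIdle()
    {
        std::unique_lock<std::mutex> lock(mtx);
        idleCv.wait(lock, [this]{ return queue.empty() && !busy; });
    }

  private:
    void Loop()
    {
        for (;;)
        {
            std::shared_ptr<Job> job;
            {
                std::unique_lock<std::mutex> lock(mtx);
                cv.wait(lock, [this]{ return stop || !queue.empty(); });
                if (queue.empty())
                    return;
                job = queue.front();
                queue.pop_front();
                busy = true;
            }
            job->Do();
            { std::lock_guard<std::mutex> lock(mtx);  busy = false; }
            idleCv.notify_all();
        }
    }

    std::deque<std::shared_ptr<Job>>  queue;
    std::mutex                        mtx;
    std::condition_variable           cv, idleCv;
    bool                              stop = false;
    bool                              busy = false;
    std::thread                       thread;    // declared last: started after the state above
};

// Creates a worker, feeds it one job, and waits for the result.
inline int RunDedicatedSample(int value)
{
    auto job = std::make_shared<MyJob>(value);
    DedicatedWorker worker;
    worker.Push(job);
    worker.WaitIdle();
    return job->result;
}
```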
As the sample shows, two interface versions are offered: one that returns the Job instance and a second that does not return anything. Both versions have advantages and disadvantages (explained in the comments above and in the reference documentation).
In more complicated cases, it may be necessary to receive the job in order to periodically check whether it has been processed, but then the sender may "lose interest" in it. To enable due deletion of an unprocessed job, the method DeleteJobDeferred is offered. It pushes a new job (of a special internal type) into the execution queue of the worker, which takes care of the deletion.
The sample furthermore showed that the very same job-type that was used with class ThreadPool in the previous section can likewise be used with the dedicated worker. The advantage lies exactly here: a job can be used with both concepts.
However, this is usually neither needed nor wanted, simply because the decision of which threading concept to use depends precisely on the nature of the job-types!
Therefore, the more common option of processing jobs with class DedicatedWorker is to override its virtual method process and perform execution there.
Here is a sample code:
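In place of the original snippet, here is a self-contained, single-threaded sketch of the dispatch logic only (threading omitted for clarity). The method names process and Job::Do follow the text; everything else is an assumption. If process returns true, Job::Do is skipped: the override below triples the input, while the job's own Do would have doubled it.

```cpp
// Single-threaded sketch of the dispatch logic: if the overridden
// process() returns true, the job's own Do() is never invoked.
struct Job
{
    virtual ~Job()      = default;
    virtual void Do()   = 0;
};

struct MyJob : Job
{
    int input;
    int result = 0;
    explicit MyJob(int i) : input(i) {}
    void Do() override    { result = input * 2; }   // doubles the input
};

struct Worker   // stand-in for a dedicated worker
{
    virtual ~Worker()   = default;

    // Returning true signals: "handled here, do not call Job::Do()."
    virtual bool process(Job& job)    { (void) job; return false; }

    void Execute(Job& job)
    {
        if (!process(job))    // fall back to Job::Do only if not handled
            job.Do();
    }
};

struct MyWorker : Worker
{
    bool process(Job& job) override
    {
        if (auto* myJob = dynamic_cast<MyJob*>(&job))
        {
            myJob->result = myJob->input * 3;   // triples the input instead
            return true;                        // Job::Do will not be called
        }
        return false;
    }
};

// Dispatched through the overriding worker: process() handles the job.
inline int Tripled(int value)
{
    MyJob    job(value);
    MyWorker worker;
    worker.Execute(job);
    return job.result;
}

// Dispatched through the base worker: process() declines, Job::Do runs.
inline int Doubled(int value)
{
    MyJob  job(value);
    Worker worker;
    worker.Execute(job);
    return job.result;
}
```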
If the overridden method returns true for a passed job, the virtual method Job::Do is not even called. In the sample above, both implementations even do different things: the first doubles the input value, the second triples it.
Let us summarize this:
This pair of classes offers yet another way to execute tasks asynchronously.
Here is a quick sample:
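The original sample is not reproduced here; instead, the following is a self-contained, single-threaded sketch of the idea behind such a trigger mechanism. Only the interface name Triggered and its method Trigger are taken from the text; the members of the stand-in class Trigger (Add, Pulse) are assumptions, and a real implementation would invoke attached objects periodically and asynchronously.

```cpp
#include <vector>

// Hypothetical single-threaded stand-ins for a Trigger/Triggered pair.
struct Triggered
{
    virtual ~Triggered()    = default;
    virtual void Trigger()  = 0;    // invoked by a trigger
};

class Trigger
{
  public:
    // Attaches an object so that it receives future trigger events.
    void Add(Triggered& t)    { targets.push_back(&t); }

    // Stand-in for one trigger period: invokes all attached objects once.
    void Pulse()
    {
        for (auto* t : targets)
            t->Trigger();
    }

  private:
    std::vector<Triggered*> targets;
};

// A sample object that counts how often it was triggered.
struct Counter : Triggered
{
    int count = 0;
    void Trigger() override    { ++count; }
};

// Attaches a counter to a trigger and fires a number of pulses.
inline int CountPulses(int pulses)
{
    Counter counter;
    Trigger trigger;
    trigger.Add(counter);
    for (int i = 0; i < pulses; ++i)
        trigger.Pulse();
    return counter.count;
}
```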
As a final note, class DedicatedWorker implements the interface Triggered so that it can be attached to a trigger. If this is done, a trigger-job will be pushed into its command queue, and with that, the execution of the interface method Triggered::Trigger is performed asynchronously.