├── LICENSE
├── License.h
├── README.md
├── main.cpp
├── thread_pool.cpp
└── thread_pool.h

--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
MIT License

Copyright (c) 2019 AtanasRusevPros

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

--------------------------------------------------------------------------------
/License.h:
--------------------------------------------------------------------------------
#pragma once
#ifndef CTP_THREAD_POOL_LICENSE_H
#define CTP_THREAD_POOL_LICENSE_H

/***********************************************************************************************************************
Copyright 2019 Atanas Rusev and Ferai Ali

Permission is hereby granted, free of charge, to any person obtaining a copy of this software
and associated documentation files (the "Software"), to deal in the Software without restriction,
including without limitation the rights to use, copy, modify, merge, publish, distribute,
sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or
substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR
PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE
FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
***********************************************************************************************************************/
#endif // CTP_THREAD_POOL_LICENSE_H

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
/***********************************************************************************************************************
 * @file thread_pool.cpp
 *
 * @brief Template based Thread Pool - the Pimpl concept implementation. It accepts jobs at 3 priority levels.
 *
 * @details This Thread pool is implemented as a class with template based functions, so that it can accept
 * different input job types - a lambda, a class method, a functor or a function.
 *
 * It is based on the Pimpl paradigm:
 * "Pointer to implementation" or "pImpl" is a C++ programming technique[1] that removes
 * implementation details of a class from its object representation by placing them in a
 * separate class, accessed through an opaque pointer.
 *
 * Job insertion and extraction are kept safe via one single mutex to avoid race conditions.
 *
 * There are 3 queues - Critical (2), High (1), and Normal (0) priority.
 *
 * Each thread sequentially checks the queues stored in a map of key-value pairs - each pair holds a priority
 * and its dedicated job queue. The worker threads themselves are kept in a separate vector.
 *
 * Once all queues are empty, the current thread is blocked until notified via a condition variable.
 *
 * The condition variable wait puts the blocked thread to sleep.
 *
 * There is a shutdown function which, via a boolean flag, ensures all threads stop taking new jobs.
 * It is called in the destructor. It joins all threads and waits for each of them to finish executing
 * and exit.
 *
 * The code is based completely on C++11 features. The purpose is to be able to integrate it
 * in older projects which have not yet reached C++14 or higher. If you need newer features,
 * fork the code and take it to the next level yourself.
 *
 * @author Atanas Rusev and Ferai Ali
 *
 * @copyright 2019 Atanas Rusev and Ferai Ali, MIT License. Check the License.h file in the library.
 */

# Usage:
Create an object at the beginning of your program, with an optional number of threads:

    CTP::ThreadPool thread_pool(optional_thread_count);

If no parameter is given, the Thread Pool creates X threads, where X is the number of supported hardware threads as reported by std::thread::hardware_concurrency().

Then simply call thread_pool.Schedule(...) with a lambda or a function.

The main.cpp in the project illustrates how it was tested and how it works.
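A minimal usage sketch (the lambdas and values below are illustrative only; the two Schedule overloads are the ones declared in thread_pool.h):

```cpp
#include "thread_pool.h"

#include <iostream>

int main()
{
    // Defaults to std::thread::hardware_concurrency() worker threads.
    CTP::ThreadPool thread_pool;

    // Default overload: the job goes into the Normal priority queue; Schedule returns a std::future.
    auto answer = thread_pool.Schedule([] { return 21 * 2; });

    // Explicit priority overload: Critical jobs are drained before High and Normal ones.
    auto urgent = thread_pool.Schedule(CTP::Priority::Critical, [](int x) { return x + 1; }, 41);

    // get() blocks until the corresponding job has been executed by a worker thread.
    std::cout << answer.get() << " " << urgent.get() << std::endl;
    return 0;
}
```

Each Schedule call returns a std::future typed after the callable's return type, so results (and exceptions) propagate back to the caller.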

# More Information:
For more background on the thread pool pattern, you can check this short article I wrote:
http://atanasrusev.com/2019/09/13/thread-pool-design-pattern/

The following is a list of the most important C++ elements and concepts used in the code; it should help you understand it fully:

https://en.cppreference.com/w/cpp/thread/thread/hardware_concurrency
https://en.cppreference.com/w/cpp/thread/unique_lock
https://en.cppreference.com/w/cpp/thread/sleep_for
https://en.cppreference.com/w/cpp/thread/future
https://en.cppreference.com/w/cpp/utility/move
https://en.cppreference.com/w/cpp/container/map/emplace
https://en.cppreference.com/w/cpp/container/map
https://www.geeksforgeeks.org/descending-order-map-multimap-c-stl/
https://en.cppreference.com/w/cpp/memory/unique_ptr
https://en.cppreference.com/w/cpp/utility/functional/function
https://en.cppreference.com/w/cpp/language/default_constructor
https://en.cppreference.com/w/cpp/utility/functional/bind
https://en.cppreference.com/w/cpp/thread/condition_variable/wait
https://en.cppreference.com/w/cpp/language/pimpl
https://en.cppreference.com/w/cpp/memory/shared_ptr
https://en.cppreference.com/w/cpp/memory/shared_ptr/make_shared
https://en.cppreference.com/w/cpp/thread/packaged_task

--------------------------------------------------------------------------------
/main.cpp:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/AtanasRusevPros/CPP11_ThreadPool/b2a031cd7b4abb45877c4c4c8238497b88663138/main.cpp

--------------------------------------------------------------------------------
/thread_pool.cpp:
--------------------------------------------------------------------------------
/***********************************************************************************************************************
 * @file thread_pool.cpp
 *
 * @brief Template based Thread Pool - the Pimpl concept implementation. It accepts jobs at 3 priority levels.
 *
 * @details This Thread pool is implemented as a class with template based functions, so that it can accept
 * different input job types - a lambda, a class method, a functor or a function.
 *
 * It is based on the Pimpl paradigm:
 * "Pointer to implementation" or "pImpl" is a C++ programming technique[1] that removes
 * implementation details of a class from its object representation by placing them in a
 * separate class, accessed through an opaque pointer.
 *
 * Job insertion and extraction are kept safe via one single mutex to avoid race conditions.
 *
 * There are 3 queues - Critical (2), High (1), and Normal (0) priority.
 *
 * Each thread sequentially checks the queues stored in a map of key-value pairs - each pair holds a priority
 * and its dedicated job queue. The worker threads themselves are kept in a separate vector.
 *
 * Once all queues are empty, the current thread is blocked until notified via a condition variable.
 *
 * The condition variable wait puts the blocked thread to sleep.
 *
 * There is a shutdown function which, via a boolean flag, ensures all threads stop taking new jobs.
 * It is called in the destructor. It joins all threads and waits for each of them to finish executing
 * and exit.
 *
 * The code is based completely on C++11 features. The purpose is to be able to integrate it
 * in older projects which have not yet reached C++14 or higher.
 * If you need newer features, fork the code and take it to the next level yourself.
 *
 * @author Atanas Rusev and Ferai Ali
 *
 * @copyright 2019 Atanas Rusev and Ferai Ali, MIT License. Check the License.h file in the library.
 *
 ***********************************************************************************************************************/

#include "thread_pool.h"

#include <atomic>
#include <condition_variable>
#include <map>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

namespace CTP
{
//-----------------------------------------------------------------------------
/// Thread Pool Implementation
//-----------------------------------------------------------------------------
class ThreadPool::impl
{
public:
    // the main function for initializing the pool and starting the threads
    void Init(size_t threadCount);

    // explicitly shut down the threads - call this whenever you want the threads
    // to be stopped. Currently this is performed in the destructor, relieving
    // the user from the need to call it himself!
    void Shutdown();

    // the AddJob function takes an Rvalue reference to a std::function object.
    // This std::function object contains a Callable that returns no result and takes no arguments.
    void AddJob(std::function<void()>&& job, Priority priority);

private:
    // this flag controls the main loop of each worker thread started in Init. While it is true
    // the loop keeps popping jobs from the queues.
    // Initialized as true so that once Init is called the Thread Pool is operational.
    // It is atomic because the workers read it while Shutdown() writes it from another thread.
    std::atomic<bool> m_running{ true };

    // m_guard is a mutex that is used while adding a job or extracting one from the queue.
    // Together with the condition variable these control adding jobs to the queue
    // and extracting them so that there are no race conditions.
    std::mutex m_guard;
    std::condition_variable m_cvSleepCtrl;

    // the vector of threads which will process the jobs
    std::vector<std::thread> m_workers;

    // a map of kvp - Key-Value Pairs. Each pair is a priority level together with its
    // corresponding dedicated Queue. This means for each priority we have a separate Queue.
    // The last part - std::greater<Priority> - sorts the map in descending order based on the Priority!
    std::map<Priority, std::queue<std::function<void()>>, std::greater<Priority>> m_jobsByPriority;
};

// The Constructor simply initializes the single pointer declared in the header file as the member:
// std::unique_ptr<impl> m_impl;
// (plain new is used instead of std::make_unique, which is a C++14 feature, to keep the code strictly C++11)
ThreadPool::ThreadPool(size_t threadCount)
    : m_impl(new impl())
{
    // the only functionality of the Constructor is to call Init, which effectively starts the threads.
    // You can of course add more functionality here.
    m_impl->Init(threadCount);
}

// Destructor
ThreadPool::~ThreadPool()
{
    // Via a call to Shutdown(): simply notify all threads to finish their work by waking them up.
    // In addition the boolean flag that controls the execution of the threads is set to false.
    m_impl->Shutdown();
}

void ThreadPool::AddJob(std::function<void()> job, Priority priority)
{
    m_impl->AddJob(std::move(job), priority);
}

/***********************************************************************************************************************
 * @brief The main function for initializing the pool and starting the threads.
 *
 * @details This is the main function that starts the threads and feeds them with jobs.
 *          Each thread's work is defined by a lambda that is executed inside the newly created thread.
 *          The 3 queues are checked sequentially, in decreasing priority order, for the next job to execute.
 *
 * @pre None
 * @post None
 * @param[in] threadCount - the number of threads you want to start
 * @return None
 *
 * @author Atanas Rusev and Ferai Ali
 *
 * @copyright 2019 Atanas Rusev and Ferai Ali, MIT License. Check the License file in the library.
 *
 ***********************************************************************************************************************/
void ThreadPool::impl::Init(size_t threadCount)
{
    // First we explicitly initialize the 3 queues
    m_jobsByPriority[Priority::Normal] = {};
    m_jobsByPriority[Priority::High] = {};
    m_jobsByPriority[Priority::Critical] = {};

    // now explicitly reserve in the vector of threads the exact number of threads wished
    m_workers.reserve(threadCount);

    // this is where each thread is created to consume jobs from the queues.
    // Even if we have e.g. only normal jobs - and as we have multiple threads -
    // we must lock both when a job is added and when a job is extracted to avoid race conditions.
    for (size_t i = 0; i < threadCount; i++)
    {
        //------------------------------------------------------------------------------------
        // MAIN EXECUTION BLOCK of each thread
        //------------------------------------------------------------------------------------
        // push back a lambda function for each thread. Capturing the "this" pointer inside the lambda
        // automatically makes all the member variables of this object accessible inside the lambda.
        // This means the next code is executed INSIDE the corresponding thread:
        m_workers.push_back(std::thread([this]() {

            // the flag m_running is initialized as true upon object creation.
            // We check it here to know when to stop consuming jobs.
            while (m_running)
            {
                // we create here one empty function wrapper. For std::function, cppreference.com says:
                /*-------------------------------------------------------------------------------------------------
                Class template std::function is a general-purpose polymorphic function wrapper. Instances
                of std::function can store, copy, and invoke any Callable target -- functions, lambda
                expressions, bind expressions, or other function objects, as well as pointers to member
                functions and pointers to data members.

                The stored callable object is called the target of std::function. If a std::function contains
                no target, it is called empty. Invoking the target of an empty std::function results in a
                std::bad_function_call exception being thrown.
                ---------------------------------------------------------------------------------------------------*/
                std::function<void()> job;

                { // here follows the part that needs to be locked - so we create a unique_lock object
                  // and pass to it our mutex.
                    std::unique_lock<std::mutex> ul(m_guard);

                    // once we hold the mutex we can wait on it via the condition variable.
                    // wait causes the current thread to block until the condition variable is
                    // notified or a spurious wakeup occurs, optionally looping until some predicate is satisfied.
                    // In our case we wait on the lambda's return result. We pass again "this" to make the member
                    // variables of this object accessible inside the lambda.
                    // The overload of wait used here accepts a lock AND a predicate. If the wait should continue,
                    // the predicate (i.e. the lambda) shall return false.
                    m_cvSleepCtrl.wait(ul, [this]() {

                        // if the flag is set to false, explicitly stop processing jobs from the queues
                        if (false == m_running)
                        {
                            return true;
                        }

                        // the given thread loops here through all queues
                        bool allQueuesEmpty = true;
                        for (const auto& kvp : m_jobsByPriority) // kvp - acronym for Key-Value Pair
                        {
                            allQueuesEmpty &= kvp.second.empty();
                        }
                        // we return here the result - if all queues ARE EMPTY we return false and the
                        // condition variable will continue waiting
                        return !allQueuesEmpty;
                    });

                    // once we are done waiting, we loop through the Key-Value Pairs by priority to get the
                    // next job from the Queues. Remember - those are sorted in descending order upon map creation!
                    for (auto& kvp : m_jobsByPriority)
                    {
                        auto& jobs = kvp.second; // we take here the Queue for the given Priority
                        if (jobs.empty())        // if the current Queue is empty - we go to the next Queue
                        {
                            continue;
                        }
                        job = std::move(jobs.front()); // once we know the current queue has a job we move it out
                        jobs.pop();                    // and we pop that element from this Queue
                        break;
                    }
                }

                // and finally we execute the job
                if (job != nullptr)
                {
                    job();
                }
            }
        }));
    }
}

/***********************************************************************************************************************
 * @brief Explicitly shuts down the threads - call this whenever you want the threads to be stopped.
 *
 * @details Currently this is performed in the destructor, relieving the user from the need to call it himself!
 *          Once the function is called, each thread will finish its current job and will not take a new one.
 *          Calling it in the destructor means it is performed when the pool object is destroyed, typically at
 *          program termination. Of course it is expected that by then all future objects have been consumed,
 *          so the shutdown is safe and thread exit is correctly and undoubtedly waited for.
 *
 * @pre None
 * @post None
 * @param[in] None
 * @param[out] None
 * @return None
 *
 * @author Atanas Rusev and Ferai Ali
 *
 * @copyright 2019 Atanas Rusev and Ferai Ali, MIT License. Check the License file in the library.
 *
 ***********************************************************************************************************************/
void ThreadPool::impl::Shutdown()
{
    // set the flag so that no thread keeps extracting jobs and running the main loop code.
    // The mutex is held while clearing the flag so that a worker cannot check the wait predicate
    // and go back to sleep after the flag has already been cleared (a lost wakeup).
    {
        std::unique_lock<std::mutex> ul(m_guard);
        m_running = false;
    }

    // now notify all threads (effectively waking them up) so that they either execute their last job
    // and/or directly stop working as the main flag is false
    m_cvSleepCtrl.notify_all();

    // finally join all threads to ensure all of them have finished before destroying the thread pool
    for (auto& worker : m_workers)
    {
        if (worker.joinable())
        {
            worker.join();
        }
    }
}


/***********************************************************************************************************************
 * @brief Adds a job to the queue of the given priority level.
 *
 * @details The common mutex is locked first, so that job insertion cannot race with the worker threads
 *          extracting jobs. The job is then moved into the queue that corresponds to the requested
 *          priority, and finally one waiting worker thread is notified via the condition variable.
 *
 * @pre None
 * @post None
 * @param[in] job - the type-erased void() callable to be executed
 * @param[in] priority - the priority queue the job is placed into
 * @return None
 *
 * @author Atanas Rusev and Ferai Ali
 *
 * @copyright 2019 Atanas Rusev and Ferai Ali, MIT License. Check the License file in the library.
 *
 ***********************************************************************************************************************/
void ThreadPool::impl::AddJob(std::function<void()>&& job, Priority priority)
{
    // first we lock our "one single common" thread pool mutex to ensure no overlapping (race condition)
    // at job addition
    std::unique_lock<std::mutex> ul(m_guard);

    // then we add the new job
    m_jobsByPriority[priority].emplace(std::move(job));

    // finally we notify at least one thread
    m_cvSleepCtrl.notify_one();
}
} // end of namespace CTP

--------------------------------------------------------------------------------
/thread_pool.h:
--------------------------------------------------------------------------------
/***********************************************************************************************************************
 * @file thread_pool.h
 *
 * @brief Template based Thread Pool with Pimpl concept implementation. It accepts jobs at 3 priority levels.
 *
 * @details This Thread pool is implemented as a class with template based functions, so that it can accept
 * different input job types - a lambda, a class method, or a function.
 *
 * It is based on the Pimpl paradigm:
 * "Pointer to implementation" or "pImpl" is a C++ programming technique[1] that removes
 * implementation details of a class from its object representation by placing them in a
 * separate class, accessed through an opaque pointer.
 *
 * There are 3 queues - Critical (2), High (1), and Normal (0) priority.
 *
 * The code is based completely on C++11 features.
 * The purpose is to be able to integrate it in older projects which have not yet reached C++14 or higher.
 * If you need newer features, fork the code and take it to the next level yourself.
 *
 * @author Atanas Rusev and Ferai Ali
 *
 * @copyright 2019 Atanas Rusev and Ferai Ali, MIT License. Check the License.h file in the library.
 *
 ***********************************************************************************************************************/
#pragma once
#ifndef CTP_THREAD_POOL_H
#define CTP_THREAD_POOL_H

#include <functional>
#include <future>
#include <memory>
#include <thread>
#include <type_traits>

namespace CTP
{
template <typename F, typename... Args>
using JobReturnType = typename std::result_of<F(Args...)>::type;

// this is the priority of the jobs. Most jobs should run at Normal priority.
enum class Priority : size_t
{
    Normal,
    High,
    Critical
};

class ThreadPool
{
public:
    // with this constructor we take by default the number of hardware threads available.
    // Pay attention - an Intel CPU with Hyperthreading will report twice the number of physical cores.
    // If you want to explicitly limit the number of threads to the number of cores and NOT use hyperthreading,
    // you have to write Windows-, Mac- or Linux-specific code!
    ThreadPool(size_t threadCount = std::thread::hardware_concurrency());

    // Defaulted move constructor and move assignment: the pool can be moved (ownership of the
    // implementation pointer is transferred), even though copying is forbidden below.
    ThreadPool(ThreadPool&&) = default;
    ThreadPool& operator=(ThreadPool&&) = default;

    ~ThreadPool();

    // explicitly forbid copy construction and copy assignment, so that there is only one thread pool!
    ThreadPool(const ThreadPool&) = delete;
    ThreadPool& operator=(const ThreadPool&) = delete;

    //-----------------------------------------------------------------------------
    /// Adds a job for a given priority level. Returns a future.
    //
    // This is a template function that takes a callable of deduced type, hence we
    // are freed from the necessity to define overloaded versions for different input.
    // The callable is transferred as an Rvalue (forwarding) reference.
    // The arguments are provided as variadic template args.
    // The return type is a trailing return type. Reason - different functions may
    // have different return types. In addition we receive a std::future to be
    // able to get notified when the job is done.
    //-----------------------------------------------------------------------------
    template <typename F, typename... Args>
    auto Schedule(Priority priority, F&& f, Args&&... args)
        -> std::future<JobReturnType<F, Args...>>
    {
        auto job = std::make_shared<std::packaged_task<JobReturnType<F, Args...>()>>
        (
            std::bind(std::forward<F>(f), std::forward<Args>(args)...)
        );

        AddJob([job] { (*job)(); }, priority);
        return job->get_future();
    }

    //-----------------------------------------------------------------------------
    /// Adds a job with DEFAULT priority level (Normal). Returns a future.
    //-----------------------------------------------------------------------------
    template <typename F, typename... Args>
    auto Schedule(F&& f, Args&&... args)
        -> std::future<JobReturnType<F, Args...>>
    {
        return Schedule(Priority::Normal, std::forward<F>(f), std::forward<Args>(args)...);
    }

private:
    // internally a job is a void function with no arguments
    void AddJob(std::function<void()> job, Priority priority);

    // we use the Pimpl technique, so we need an implementation class and a unique pointer to it.
    // The class is only declared here and fully defined in thread_pool.cpp, which is the essence
    // of the Pimpl concept.
    class impl;
    // the pointer is based on the std::unique_ptr<...> template. This is a smart pointer that owns and
    // manages another object through a pointer and disposes of that object when the unique_ptr goes out of scope.
    // The object is disposed of using the associated deleter when either of the following happens:
    // - the managing unique_ptr object is destroyed
    // - the managing unique_ptr object is assigned another pointer via operator= or reset().
    std::unique_ptr<impl> m_impl;
};

} // end of namespace CTP

#endif // CTP_THREAD_POOL_H
--------------------------------------------------------------------------------
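
Addendum: as the header comments note, Schedule() is not limited to lambdas. Below is a minimal sketch of scheduling a member function and a free function through the same API - the Worker type, its Compute() method and the Sum() function are hypothetical, used here only for illustration:

```cpp
#include "thread_pool.h"

#include <iostream>

// Hypothetical types, used only to illustrate the different job kinds Schedule() accepts.
struct Worker
{
    int Compute(int x) const { return x * x; }
};

int Sum(int a, int b) { return a + b; }

int main()
{
    CTP::ThreadPool pool(4); // explicitly ask for four worker threads

    Worker w;
    // Member function: Schedule() forwards its arguments to std::bind, so pass the
    // member-function pointer, then the object pointer, then the call arguments.
    auto squared = pool.Schedule(CTP::Priority::High, &Worker::Compute, &w, 7);

    // Plain free function, scheduled through the default (Normal) priority overload.
    auto total = pool.Schedule(Sum, 2, 3);

    std::cout << squared.get() << " " << total.get() << std::endl; // prints "49 5"
    return 0;
}
```

Because Schedule() forwards everything through std::bind and wraps the result in a std::packaged_task, any callable that std::result_of can name will work, and its return value (or exception) travels back to the caller through the returned std::future.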