Introduction to Concurrency
Concurrency in C++ allows you to write programs that perform multiple tasks seemingly at the same time. This is crucial for modern applications, especially those dealing with I/O-bound operations, complex computations, or responsive user interfaces. Modern C++ provides a powerful, standardized set of tools in the standard library, spread across headers such as `<thread>`, `<mutex>`, `<future>`, and `<atomic>`.
C++ Threads (`<thread>`)
The fundamental building block for concurrency in C++ is the `std::thread`. A `std::thread` object represents an independent flow of execution. You can create new threads by passing a function or a callable object to the `std::thread` constructor.
Key operations:
- `join()`: Waits for the thread to finish its execution.
- `detach()`: Allows the thread to run independently of the `std::thread` object; the program continues without waiting for the detached thread.
- `hardware_concurrency()`: A static member that returns the number of hardware threads available. The value is only a hint and may be 0 if it cannot be determined.
Example of creating and joining a thread:
```cpp
#include <iostream>
#include <thread>

void worker_function() {
    std::cout << "Worker thread is running." << std::endl;
}

int main() {
    std::cout << "Main thread starting." << std::endl;
    std::thread worker(worker_function); // Create a new thread
    std::cout << "Main thread waiting for worker to finish." << std::endl;
    worker.join(); // Wait for the worker thread to complete
    std::cout << "Worker thread finished. Main thread exiting." << std::endl;
    return 0;
}
```
Mutexes and Locking
When multiple threads access shared data, race conditions can occur, leading to unpredictable behavior. Mutexes (Mutual Exclusion) are used to protect shared resources. Only one thread can own a mutex at a time. Other threads attempting to lock a locked mutex will block until it's unlocked.
C++ provides `std::mutex` and RAII-based lock guards like `std::lock_guard` and `std::unique_lock`.
Example using `std::lock_guard`:
```cpp
#include <iostream>
#include <thread>
#include <mutex>

std::mutex mtx;
int shared_counter = 0;

void increment() {
    for (int i = 0; i < 10000; ++i) {
        std::lock_guard<std::mutex> lock(mtx); // Lock acquired here
        shared_counter++;
        // Lock is automatically released when 'lock' goes out of scope
    }
}

int main() {
    std::thread t1(increment);
    std::thread t2(increment);
    t1.join();
    t2.join();
    std::cout << "Final counter value: " << shared_counter << std::endl; // Should be 20000
    return 0;
}
```
Futures and Promises
Futures and Promises provide a way to manage the results of asynchronous operations. A `std::promise` is used by one thread to set a value or an exception that can later be retrieved by another thread through the associated `std::future`.
Example:
```cpp
#include <iostream>
#include <future>
#include <thread>
#include <chrono>

int calculate_sum(int a, int b) {
    std::this_thread::sleep_for(std::chrono::seconds(2)); // Simulate work
    return a + b;
}

int main() {
    std::promise<int> p;
    std::future<int> f = p.get_future();
    // Launch a thread to compute the sum and set the promise
    std::thread t([&p]() {
        int result = calculate_sum(10, 20);
        p.set_value(result); // Set the result in the promise
    });
    std::cout << "Waiting for result..." << std::endl;
    int sum = f.get(); // Blocks until the future has a value
    std::cout << "The sum is: " << sum << std::endl;
    t.join();
    return 0;
}
```
Atomic Operations
Atomic operations are operations that are guaranteed to be executed indivisibly. They are useful for simple data types that are accessed by multiple threads and don't require the overhead of a full mutex lock.
The `<atomic>` header provides types like `std::atomic<T>`.
Example:
```cpp
#include <iostream>
#include <thread>
#include <atomic>

std::atomic<int> atomic_counter{0}; // Brace-init: std::atomic is not copyable

void increment_atomic() {
    for (int i = 0; i < 10000; ++i) {
        atomic_counter++; // Atomic increment
    }
}

int main() {
    std::thread t1(increment_atomic);
    std::thread t2(increment_atomic);
    t1.join();
    t2.join();
    std::cout << "Final atomic counter value: " << atomic_counter << std::endl; // Should be 20000
    return 0;
}
```
Asynchronous Operations (`std::async`)
The `<future>` header provides a higher-level abstraction for launching tasks asynchronously: `std::async` runs a function asynchronously (potentially on a new thread) and returns a `std::future` that will hold the result.
Launch policies:
- `std::launch::async`: Runs the function as if on a new thread.
- `std::launch::deferred`: Defers execution; the function runs lazily, on the calling thread, when `get()` or `wait()` is called on the future.
If no policy is specified, the implementation may choose either.
Example:
```cpp
#include <iostream>
#include <future>
#include <chrono>
#include <thread> // for std::this_thread::sleep_for

int long_computation(int x) {
    std::this_thread::sleep_for(std::chrono::seconds(3));
    return x * x;
}

int main() {
    std::cout << "Launching asynchronous computation..." << std::endl;
    // std::launch::async ensures it runs in a separate thread
    std::future<int> fut = std::async(std::launch::async, long_computation, 5);
    std::cout << "Doing other work while computation runs..." << std::endl;
    std::this_thread::sleep_for(std::chrono::seconds(1));
    std::cout << "Waiting for result..." << std::endl;
    int result = fut.get(); // Get the result
    std::cout << "The result is: " << result << std::endl; // Should be 25
    return 0;
}
```
Synchronization Primitives
Beyond mutexes, C++ offers other synchronization primitives to coordinate threads:
- `std::condition_variable`: Lets threads block until a condition becomes true; waiting threads can be notified when the condition may have changed.
- `std::counting_semaphore` / `std::binary_semaphore` (C++20): Limit the number of threads that can access a resource simultaneously.
- `std::latch` and `std::barrier` (C++20): Coordinate groups of threads at a single-use (latch) or reusable (barrier) synchronization point.
Best Practices
- Minimize Shared Mutable State: The less data threads share and modify, the easier concurrency is to manage.
- Use RAII for Locks: Always use `std::lock_guard` or `std::unique_lock` to ensure mutexes are properly released, even if exceptions occur.
- Understand Lock Granularity: Lock only the critical sections of code necessary to protect shared data. Over-locking can serialize execution and negate the benefits of concurrency.
- Prefer Higher-Level Abstractions: Use `std::async`, `std::future`, and `std::promise` when possible, as they often lead to cleaner code than direct `std::thread` management.
- Be Mindful of Deadlocks: Avoid situations where threads are waiting indefinitely for each other to release resources.
- Test Thoroughly: Concurrent bugs can be subtle and hard to reproduce. Rigorous testing under various conditions is essential.