The Palos Publishing Company


How to Safely Share Resources Between Threads in C++

When developing multithreaded applications in C++, safely sharing resources between threads is crucial to avoid issues such as data races, deadlocks, and other synchronization problems. The goal is to ensure that multiple threads can access shared resources without corrupting data or causing unexpected behaviors. Here’s how to achieve that with the various tools provided in the C++ Standard Library.

1. Mutexes and Locks

A common and effective way to manage shared resources is by using mutexes (short for mutual exclusion). A mutex ensures that only one thread at a time can access a specific resource. In C++, the <mutex> header provides the std::mutex class, which can be used to create a mutex for resource protection.

Basic Usage of Mutex

To use a mutex, lock it before accessing the shared resource and unlock it when the operation is complete. Here’s an example:

cpp
#include <iostream>
#include <thread>
#include <mutex>

std::mutex mtx;          // Mutex to protect the shared resource
int shared_resource = 0;

void increment_resource() {
    mtx.lock();          // Lock the mutex to protect the resource
    ++shared_resource;
    std::cout << "Shared resource: " << shared_resource << std::endl;
    mtx.unlock();        // Unlock the mutex
}

int main() {
    std::thread t1(increment_resource);
    std::thread t2(increment_resource);
    t1.join();
    t2.join();
    return 0;
}

In this example, both threads try to increment the shared resource, but only one thread can access it at a time because the mutex is locked and unlocked around the critical section (the code that modifies the shared resource).

Using std::lock_guard for Automatic Locking

Instead of manually locking and unlocking the mutex, you can use std::lock_guard to automatically handle the locking and unlocking. It’s an RAII-style class, meaning it locks the mutex when it is created and unlocks it when it goes out of scope.

cpp
void increment_resource() {
    std::lock_guard<std::mutex> lock(mtx); // Automatically locks the mutex
    ++shared_resource;
    std::cout << "Shared resource: " << shared_resource << std::endl;
} // lock goes out of scope here, unlocking the mutex

This approach reduces the risk of accidentally forgetting to unlock the mutex, which could lead to deadlocks or undefined behavior.
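Since C++17, std::scoped_lock extends the same RAII idea to several mutexes at once, acquiring them all with a deadlock-avoidance algorithm. A minimal sketch, where the two balances and the transfer function are illustrative, not part of the examples above:

```cpp
#include <mutex>

std::mutex m_a, m_b;              // each mutex protects one balance
int balance_a = 100, balance_b = 0;

// Move money between two shared balances. std::scoped_lock locks
// both mutexes together, so two opposing transfers cannot deadlock.
void transfer(int amount) {
    std::scoped_lock lock(m_a, m_b); // locks both; unlocks on scope exit
    balance_a -= amount;
    balance_b += amount;
}
```

Because both locks are taken in one call, the classic deadlock (thread 1 holds m_a and waits for m_b while thread 2 holds m_b and waits for m_a) cannot occur.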

2. std::unique_lock for More Control

std::unique_lock provides more flexibility than std::lock_guard, allowing you to manually unlock and re-lock the mutex within the scope. It also supports deferred locking and timed locking.

cpp
void increment_resource() {
    std::unique_lock<std::mutex> lock(mtx); // Lock the mutex
    ++shared_resource;
    std::cout << "Shared resource: " << shared_resource << std::endl;
    lock.unlock(); // Explicitly unlock the mutex
    // Do some other non-critical work
    lock.lock();   // Re-lock the mutex
}
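The deferred-locking mode mentioned above pairs naturally with std::lock when two mutexes must be acquired together. A sketch, assuming two hypothetical resources not defined elsewhere in this article:

```cpp
#include <mutex>

std::mutex mtx_a, mtx_b;
int resource_a = 0, resource_b = 0;

void update_both() {
    // Construct the locks without locking yet (std::defer_lock),
    // then acquire both via std::lock's deadlock-avoidance algorithm.
    std::unique_lock<std::mutex> la(mtx_a, std::defer_lock);
    std::unique_lock<std::mutex> lb(mtx_b, std::defer_lock);
    std::lock(la, lb);
    ++resource_a;
    ++resource_b;
} // both unique_locks unlock automatically here
```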

3. std::shared_mutex for Shared Access

In some cases, you might want to allow multiple threads to read from a shared resource but restrict writing to one thread at a time. This can be achieved using std::shared_mutex (introduced in C++17), which provides both exclusive (write) and shared (read) locking.

  • Exclusive lock (write) – only one thread can hold it at a time.

  • Shared lock (read) – multiple threads can hold it simultaneously.

Here’s how to use it:

cpp
#include <iostream>
#include <thread>
#include <shared_mutex>

std::shared_mutex smtx; // Shared mutex for concurrent read/write access
int shared_data = 0;

void reader() {
    std::shared_lock<std::shared_mutex> lock(smtx); // Shared lock
    std::cout << "Reading shared data: " << shared_data << std::endl;
}

void writer() {
    std::unique_lock<std::shared_mutex> lock(smtx); // Exclusive lock
    ++shared_data;
    std::cout << "Writing shared data: " << shared_data << std::endl;
}

int main() {
    std::thread t1(reader);
    std::thread t2(writer);
    std::thread t3(reader);
    t1.join();
    t2.join();
    t3.join();
    return 0;
}

In this example, multiple readers can access the shared data concurrently, but the writer has exclusive access to modify it.

4. Atomic Operations

For simple shared data types like integers or pointers, atomic operations can be a lightweight and highly efficient alternative to mutexes. The C++ Standard Library provides atomic types in the <atomic> header.

Atomic operations allow safe access to shared variables without the need for locks: each atomic operation completes as a single indivisible step, typically backed by hardware instructions, so no other thread can observe it half-finished.

cpp
#include <iostream>
#include <atomic>
#include <thread>

std::atomic<int> shared_counter(0);

void increment_counter() {
    ++shared_counter; // Atomic increment
    std::cout << "Counter: " << shared_counter.load() << std::endl;
}

int main() {
    std::thread t1(increment_counter);
    std::thread t2(increment_counter);
    t1.join();
    t2.join();
    return 0;
}

In this example, std::atomic<int> guarantees that the increment operation is thread-safe without needing a mutex.
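Beyond plain increments, atomics offer read-modify-write operations such as fetch_add and compare_exchange_weak, the usual building block for lock-free updates. A sketch of a lock-free maximum tracker (the variable and function names are illustrative):

```cpp
#include <atomic>

std::atomic<int> max_seen(0);

// Raise max_seen to v if v is larger, without any lock:
// retry the compare-exchange until our value sticks or becomes obsolete.
void record(int v) {
    int current = max_seen.load();
    while (v > current &&
           !max_seen.compare_exchange_weak(current, v)) {
        // On failure, current is reloaded with the latest value
        // and the loop re-checks whether v is still larger.
    }
}
```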

5. Condition Variables for Thread Synchronization

Sometimes, threads need to wait for a condition to be met before proceeding. This is where condition variables come into play. Condition variables allow threads to sleep until they are notified that the condition has been met.

Condition variables are often used with mutexes to ensure that shared data is protected while a thread waits for a signal.

Here’s a basic example using std::condition_variable:

cpp
#include <iostream>
#include <thread>
#include <mutex>
#include <condition_variable>

std::mutex mtx;
std::condition_variable cv;
bool ready = false;

void print_id(int id) {
    std::unique_lock<std::mutex> lock(mtx);
    cv.wait(lock, []{ return ready; }); // Wait until ready is true
    std::cout << "Thread " << id << " is printing\n";
}

void go() {
    std::unique_lock<std::mutex> lock(mtx);
    ready = true; // Notify all threads that they can proceed
    cv.notify_all();
}

int main() {
    std::thread threads[10];
    for (int i = 0; i < 10; ++i)
        threads[i] = std::thread(print_id, i);
    std::cout << "Preparing to print...\n";
    go();
    for (auto& t : threads)
        t.join();
    return 0;
}

In this example, the threads wait until the ready flag is set to true before proceeding.
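The classic application of condition variables is a producer/consumer hand-off: the consumer sleeps under the mutex until the producer signals that data is available. A minimal sketch using a queue (names like produce and consume are illustrative):

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>

std::mutex q_mtx;
std::condition_variable q_cv;
std::queue<int> pending;

void produce(int v) {
    {
        std::lock_guard<std::mutex> lock(q_mtx);
        pending.push(v);
    } // release the lock before notifying, so the woken thread isn't blocked
    q_cv.notify_one();
}

int consume() {
    std::unique_lock<std::mutex> lock(q_mtx);
    // The predicate guards against spurious wakeups: only proceed
    // when there is actually something in the queue.
    q_cv.wait(lock, []{ return !pending.empty(); });
    int v = pending.front();
    pending.pop();
    return v;
}
```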

6. Thread-safe Containers

If you’re sharing collections of data (like vectors or maps) between threads, you can either use thread-safe containers or manually protect access to them with locks. The C++ Standard Library does not provide thread-safe containers, but third-party libraries such as Intel’s oneTBB (formerly Threading Building Blocks, with types like concurrent_vector and concurrent_hash_map) offer thread-safe alternatives.

Alternatively, you can use standard containers, but with locks in place around their access.
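A minimal sketch of that manual approach: wrap a standard container and a mutex in one class so every access goes through the lock. The class name here is an illustration, not a standard type:

```cpp
#include <mutex>
#include <vector>
#include <cstddef>

class SafeVector {
    mutable std::mutex mtx_;  // guards every access to data_
    std::vector<int> data_;
public:
    void push(int v) {
        std::lock_guard<std::mutex> lock(mtx_);
        data_.push_back(v);
    }
    std::size_t size() const {
        std::lock_guard<std::mutex> lock(mtx_);
        return data_.size();
    }
};
```

Note that the lock protects individual operations; a compound sequence such as "check size, then push" would still need its own lock held across both steps to be atomic.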

Conclusion

In C++, safely sharing resources between threads is about choosing the right synchronization primitives for your needs. You can use mutexes, atomic operations, condition variables, or even shared locks depending on the complexity of your program and the type of resource being shared. Here’s a quick summary of the techniques:

  • Mutexes protect shared resources with exclusive access.

  • std::lock_guard and std::unique_lock simplify mutex management.

  • std::shared_mutex allows concurrent read access while ensuring exclusive write access.

  • Atomic operations are lightweight and efficient for simple types.

  • Condition variables help synchronize threads based on certain conditions.

By understanding the different synchronization mechanisms, you can ensure that your multithreaded C++ programs are both efficient and safe.
