Category Archives: Synchronization

binary semaphore, mutex, semaphore


A mutex provides mutual exclusion: either the producer or the consumer can hold the mutex and proceed with its work. While the producer is filling the buffer, the consumer must wait, and vice versa.

At any point in time, only one thread can work with the entire buffer.

A semaphore is a generalized mutex. Instead of a single 16 KB buffer, we can split it into sixteen 1 KB buffers (identical resources) and associate a semaphore, initialized to sixteen, with them. The consumer and producer can then work on different buffers at the same time. For a semaphore, think of taking the semaphore as consuming one unit of the resource, not as taking ownership of it.

A mutex is a locking mechanism used to synchronize access to a resource. Only one task (a thread or a process, depending on the OS abstraction) can acquire the mutex at a time. This means there is ownership associated with a mutex: only the owner can release the lock.

A semaphore is a signaling mechanism (an "I am done, you can carry on" kind of signal). For example, if you are listening to songs (assume that is one task) on your mobile and your friend calls you, an interrupt is triggered, upon which an interrupt service routine (ISR) signals the call-processing task to wake up.

A mutex and a binary semaphore look essentially the same: both can take the values 0 or 1. However, there is a significant difference between them that makes mutexes more efficient than binary semaphores: a mutex can be unlocked only by the thread that locked it. A mutex has a concept of ownership.

If you want at most four threads to be able to enter a section, you can protect it with a semaphore and initialize that semaphore to four.

The first four threads will be able to decrement the semaphore and enter the region, but at that point the semaphore will be zero, and any other thread will block outside the critical region until one of the current threads leaves and signals the semaphore. For instance, if you need to limit the number of simultaneously open file descriptors among multiple threads to four, this pattern applies.

Binary Semaphore:

A very simple example of a critical section protected by a semaphore lock: a global variable numTickets tracks the number of tickets remaining to sell. We create many threads that all attempt to sell tickets until the tickets are gone. However, we must control access to this global variable lest we sell more tickets than really exist. We have a semaphore lock that allows only one seller thread to access the numTickets variable at a time. Before attempting to sell a ticket, a thread must acquire the lock by waiting on the semaphore, and then release the lock when through by signalling the semaphore.

So in the example we have one piece of global data, the number of tickets remaining to sell, to which we want to coordinate access by multiple threads. In this case, a binary semaphore serves as a lock to guarantee that at most one thread is examining or changing the value of the variable at any given time. When the program is run, it creates a number of threads that together attempt to sell all the available tickets.

Q: Prevent running multiple instances of an application on Windows:

The most common method is to use a mutex, similar to the following:
int WINAPI WinMain(...)
{
    const char szMutex[] = "your text";
    HANDLE hHandle = CreateMutex(NULL, TRUE, szMutex);
    if (GetLastError() == ERROR_ALREADY_EXISTS)
    {
        // Program already running somewhere
        CloseHandle(hHandle);
        return 1;              // Exit program
    }

    // Program runs...

    // Upon app closing:
    ReleaseMutex(hHandle);     // Explicitly release the mutex
    CloseHandle(hHandle);      // Close the handle before terminating
    return 0;
}
You have to make sure that you clean up properly – a program crash that doesn’t release the mutex could possibly prevent the program from running again, though in theory the OS will clean up any dangling mutexes once the process ends.

Another method commonly used is to search window titles for the program title:

HWND hWnd = ::FindWindow(
    LPCTSTR lpClassName,   // pointer to class name
    LPCTSTR lpWindowName   // pointer to window name
);

If the return value is NULL, the window was not found, so the program is not running. If it is found, you can bring focus to the running app before closing this new instance, or show a message, so the user isn’t left wondering why the app didn’t open.

if (hWnd != NULL)
{
    // A window with that title already exists: exit this instance
}

What is the best way, on the Linux platform, for a process (a C++ application) to check that another instance of it is not already running?

The standard way to do this is to create a pidfile somewhere, typically containing the pid of your program.

You don’t need to put the pid in there; you could just put an exclusive lock on the file. If you open it for reading/writing and flock it with LOCK_EX | LOCK_NB, the call will fail if the file is already locked. This is free of race conditions, and the lock is released automatically if the program crashes.

Normally you’d want to do it per-user, so the user’s home directory is a good place to put the file.

If it’s a daemon, a path like /var/run is better.


A latch can be used, say, when you make multiple async API calls (say n) and want to make sure you proceed only once you have received all the responses.

In this case you initialize the latch to n and count it down once for every response.

The code will not execute further until the count reaches zero.

Sample code in Java:

private static ConcurrentHashMap<String, CountDownLatch> mRequestIdCountDownLatchMap = new ConcurrentHashMap<>();
// Illustrative map for storing responses per request; the original used an unclear Response.put(...)
private static ConcurrentHashMap<String, Response> mResponseMap = new ConcurrentHashMap<>();

private static void Initialize(String id, String vId, UriInfo info, HttpHeaders httpHeaders)
        throws InvalidRequestException, InterruptedException {
    String requestId = httpHeaders.getHeaderString("ID");
    mRequestIdCountDownLatchMap.put(requestId, new CountDownLatch(2));
    mClient.api1(vId, httpHeaders);   // "Calling api1"
    mClient.api2(vId, httpHeaders);   // "Calling api2"
    // "Waiting for CountDownLatch": block (with a timeout) until both handlers count down
    mRequestIdCountDownLatchMap.getOrDefault(requestId, new CountDownLatch(0))
            .await(3000, TimeUnit.MICROSECONDS);
    // "Got api1 and api2 response"
}

// Will be called when the server sends the api1 response
public static void getapi1Handler(Response response, HttpHeaders httpHeaders)
        throws InvalidRequestException {
    String requestId = httpHeaders.getHeaderString("ID");
    mResponseMap.put(requestId, response);
    mRequestIdCountDownLatchMap.get(requestId).countDown();
}

Similarly, api2 also counts the latch down after storing its response:

// Will be called when the server sends the api2 response
public static void getapi2Handler(Response response, HttpHeaders httpHeaders)
        throws InvalidRequestException {
    String requestId = httpHeaders.getHeaderString("ID");
    mResponseMap.put(requestId, response);
    mRequestIdCountDownLatchMap.get(requestId).countDown();
}


Scoped Locks

Given an m-ary tree, implement two functions, lock and unlock, on a node n. You can lock a node only if none of the nodes in its subtree is locked and none of its ancestors is locked. Unlocking a node releases the lock. Suggest a good data structure for solving this, and implement the lock and unlock functions in O(log n) time.

semaphore vs spin-locks