MODULE V

PROCESSES AND THREADS: SYNCHRONIZATION

Part 1: STORY

What do we have in this Module?

  1. Process & Thread: Synchronization Story

  2. Basics of Concurrency

  3. Process Interaction (Interprocess)

  4. Some definitions:

       1. Competition Among Processes for Resources

       2. Cooperation Among Processes by Sharing

       3. Cooperation Among Processes by Communication

       4. Requirement for Mutual Exclusion

  5. Back To Windows OS: Synchronization

  6. Synchronization Objects

  7. Wait Functions

  8. Single-object Wait Functions

  9. Multiple-object Wait Functions

  10. Alertable Wait Functions

  11. Registered Wait Functions

  12. Wait Functions and Synchronization Objects

  13. Wait Functions and Creating Windows

My Training Period: xx hours. Before you begin, read some instruction here.

 

The expected skills for this session are:


The Process & Thread: Synchronization Story

 

Basics of Concurrency

 

In a single-processor, multitasking system, processes/threads are interleaved in time to yield the appearance of simultaneous execution. Even though actual parallel processing is not achieved, and even though there is a certain amount of overhead involved in switching back and forth between processes/threads, interleaved execution provides major benefits in processing efficiency and in program structuring. In a multiple-processor system, it is possible not only to interleave processes/threads but to overlap them as well. Both techniques can be viewed as examples of concurrent processing, and both present the same problems, such as sharing (global) resources, e.g. global variables, and managing the allocation of resources optimally, e.g. the requested use of a particular I/O channel or device. The following figure tries to describe the interleaving of processes. P stands for process and t is time.

 

Basic of interleaving concept

 

Figure 1: Interleave concept.

 

The following is another figure describing interleaving and overlapping as well.

 

Interleaving and overlapping concepts

 

Figure 2: Process/thread Interleaving and overlapping

 

Process Interaction (Interprocess)

 

In networked and distributed computing, much more process interaction occurs. Let us now consider the concurrency conditions that arise when processes interact with one another instead of running standalone. Theoretically, from the operating system's point of view, these interactions can be classified on the basis of the degree to which processes are aware of each other's existence. They are summarized in Table 1. Keep in mind that, in practice, several processes may exhibit aspects of both competition and cooperation.

 

Degree of awareness: Processes unaware of each other, e.g. multitasking of multiple independent processes.

Relationship: Competition for resources, e.g. two independent applications may both want access to the same disk or file; the operating system must regulate these accesses.

Influence that one process has on the other:
  1. Results of one process are independent of the actions of others.
  2. Timing of a process may be affected.

Potential control problems:
  1. Mutual exclusion.
  2. Deadlock (for renewable resources).
  3. Starvation.

Degree of awareness: Processes indirectly aware of each other, e.g. using a shared object such as an I/O buffer.

Relationship: Cooperation by sharing the common object.

Influence that one process has on the other:
  1. Results of one process may depend on information obtained from others.
  2. Timing of a process may be affected.

Potential control problems:
  1. Mutual exclusion.
  2. Deadlock (for renewable resources).
  3. Starvation.
  4. Data coherence.

Degree of awareness: Processes directly aware of each other, e.g. having communication primitives available to them and designed to work jointly.

Relationship: Cooperation by communication.

Influence that one process has on the other:
  1. Results of one process may depend on information obtained from others.
  2. Timing of a process may be affected.

Potential control problems:
  1. Deadlock (for consumable resources).
  2. Starvation.

 

Table 1

 

Some definitions

  1. Race condition - In a multithreaded application, a condition that occurs when multiple threads access a data item without coordination, possibly causing inconsistent results depending on which thread reaches the data item first (a minimal sketch follows this list).

  2. Deadlock - In multithreaded applications, a threading problem that occurs when each member of a set of threads is waiting for another member of the set. In the end, no thread gets the resource and all keep waiting.

  3. Concurrency - The ability of more than one transaction or process to access the same data at the same time. When the data changes, for example in a database table's cell, this issue must be handled carefully.

  4. Asynchronous call - A call to a function that is executed separately, so that the caller can continue processing instructions without waiting for the function to return.

  5. Synchronous call - A function call that does not allow further instructions in the calling process to be executed until the function returns. There are two types of file I/O synchronization: synchronous file I/O and asynchronous file I/O. Asynchronous file I/O is also referred to as overlapped I/O.

  6. Synchronous file I/O - A thread starts an I/O operation and immediately enters a wait state until the I/O request has completed.

  7. Asynchronous file I/O - A thread performing asynchronous file I/O sends an I/O request to the kernel. If the request is accepted by the kernel, the thread continues processing another job until the kernel signals to the thread that the I/O operation is complete. It then interrupts its current job and processes the data from the I/O operation as necessary.
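
The following is a minimal sketch (not from the original tutorial; the counter name and loop count are arbitrary) of the race condition defined above: two threads perform an unsynchronized read-modify-write on a shared counter, so the final value is usually less than the expected 2000000.

/* Race condition sketch: two threads increment a shared counter without
   synchronization, so updates are lost. */
#include <windows.h>
#include <stdio.h>

#define LOOPS 1000000

static volatile LONG g_Counter = 0;   /* shared data, deliberately unprotected */

DWORD WINAPI Worker(LPVOID lpParam)
{
    int i;
    for (i = 0; i < LOOPS; i++)
        g_Counter = g_Counter + 1;    /* read-modify-write: not atomic */
    return 0;
}

int main(void)
{
    HANDLE hThreads[2];
    hThreads[0] = CreateThread(NULL, 0, Worker, NULL, 0, NULL);
    hThreads[1] = CreateThread(NULL, 0, Worker, NULL, 0, NULL);
    WaitForMultipleObjects(2, hThreads, TRUE, INFINITE);
    printf("Expected %d, got %ld\n", 2 * LOOPS, g_Counter);
    CloseHandle(hThreads[0]);
    CloseHandle(hThreads[1]);
    return 0;
}

On a single processor the interleaving alone is enough to lose updates; on multiple processors the overlapped execution makes lost updates even more likely.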

 

Competition Among Processes for Resources

 

Concurrent processes come into conflict with each other when they are competing for the use of the same resource. Two or more processes need to access a resource during the course of their execution. Each process is unaware of the existence of the other processes, and each is to be unaffected by the execution of the other processes. It follows from this that each process should leave the state of any resource that it uses unaffected. Examples of resources include I/O devices, memory, processor time and the clock. The execution of one process may affect the behavior of competing processes. If two processes both wish to access a single resource, then one process will be allocated that resource by the operating system and the other one will have to wait. In an extreme case the blocked process may never get access to the resource and hence will never successfully terminate. In the case of competing processes, three control problems must be solved.

 

  1. The need for mutual exclusion. Suppose two or more processes require access to a single non-sharable resource, such as a printer. During the course of execution, each process will be sending commands to the I/O device, receiving status information, and sending and/or receiving data. We will refer to such a resource as a critical resource, and the portion of the program that uses it as a critical section of the program. It is important that only one program at a time be allowed in its critical section. We cannot simply rely on the operating system to understand and enforce this restriction, because the detailed requirement may not be obvious. Well, in the case of the printer, for example, we want any individual process to have control of the printer while it prints an entire file, or else lines from competing processes will be interleaved :o). A minimal sketch using a Win32 mutex is given at the end of this section.

  2. Another control problem is deadlock (the permanent blocking of a set of processes that either compete for system resources or communicate with each other). Consider two processes, P1 and P2, and two critical resources, R1 and R2. Suppose that each process needs access to both resources to perform part of its function. Then it is possible to have the following situation: R1 is assigned by the operating system to P2, and R2 is assigned to P1. Each process is waiting for one of the two resources, and neither will release the resource that it already owns until it has acquired the other resource and performed its critical section. Both processes are deadlocked.

  3. The final control problem is starvation. Suppose that three processes, P1, P2 and P3, each require periodic access to resource R. Consider the situation in which P1 is in possession of the resource, and both P2 and P3 are delayed, waiting for that resource. When P1 exits its critical section, either P2 or P3 should be allowed access to R. Assume that P3 is granted access and that, before it completes its critical section, P1 again requires access. If P1 is granted access after P3 has finished, and if P1 and P3 repeatedly grant access to each other, then P2 may be denied access to the resource indefinitely, even though there is no deadlock.

 

Control of competition inevitably involves the operating system because it is the operating system that allocates resources.
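
The following is a minimal sketch (the mutex and thread names are illustrative, not part of the original text) of enforcing mutual exclusion on a non-sharable resource with a Win32 mutex: each "print job" owns the mutex for the whole of its critical section, so the five output lines of one thread are never interleaved with those of the other.

/* Mutual exclusion sketch: a mutex guards the "printer" critical resource. */
#include <windows.h>
#include <stdio.h>

static HANDLE g_hPrinterMutex;   /* guards the critical resource */

DWORD WINAPI PrintJob(LPVOID lpParam)
{
    int line;
    /* Entry to the critical section: block until we own the mutex. */
    WaitForSingleObject(g_hPrinterMutex, INFINITE);
    for (line = 0; line < 5; line++)
        printf("Thread %lu printing line %d\n", GetCurrentThreadId(), line);
    /* Exit from the critical section: let a competing thread proceed. */
    ReleaseMutex(g_hPrinterMutex);
    return 0;
}

int main(void)
{
    HANDLE hThreads[2];
    g_hPrinterMutex = CreateMutex(NULL, FALSE, NULL);   /* unnamed, not initially owned */
    hThreads[0] = CreateThread(NULL, 0, PrintJob, NULL, 0, NULL);
    hThreads[1] = CreateThread(NULL, 0, PrintJob, NULL, 0, NULL);
    WaitForMultipleObjects(2, hThreads, TRUE, INFINITE);
    CloseHandle(hThreads[0]);
    CloseHandle(hThreads[1]);
    CloseHandle(g_hPrinterMutex);
    return 0;
}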

 

Cooperation Among Processes by Sharing

 

In this case, processes interact with other processes without being explicitly aware of them. For example, multiple processes may have access to shared variables or to shared files or databases. Processes may use and update the shared data without reference to other processes, but know that other processes may have access to the same data. Thus, the processes must cooperate to ensure the integrity of the shared data. Because data are held on resources (devices, memory), the control problems of mutual exclusion, deadlock and starvation are again present; here, however, the data items may be accessed in two different modes, reading and writing, and only the writing operations must be mutually exclusive. Consider two processes, P1 and P2, sharing a data item A. At time t0, P1 updates A to B, and then at t1, P2 updates A to C. When P1 rereads the data it previously updated, well, the data is not what it wrote anymore (C instead of B). This is also called a race condition, and there is no data integrity for either process. A minimal sketch of protecting such a shared update is given below.
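
The following is a minimal sketch (illustrative names; not the tutorial's own code) that removes the race described above by protecting the shared update with a CRITICAL_SECTION, so each read-modify-write is executed by only one thread at a time.

/* Cooperation by sharing: a CRITICAL_SECTION serializes updates to shared data. */
#include <windows.h>
#include <stdio.h>

static CRITICAL_SECTION g_cs;        /* protects g_SharedValue */
static LONG g_SharedValue = 0;

DWORD WINAPI Updater(LPVOID lpParam)
{
    int i;
    for (i = 0; i < 1000000; i++) {
        EnterCriticalSection(&g_cs);  /* begin critical section */
        g_SharedValue++;              /* safe: only one thread in here at a time */
        LeaveCriticalSection(&g_cs);  /* end critical section */
    }
    return 0;
}

int main(void)
{
    HANDLE hThreads[2];
    InitializeCriticalSection(&g_cs);
    hThreads[0] = CreateThread(NULL, 0, Updater, NULL, 0, NULL);
    hThreads[1] = CreateThread(NULL, 0, Updater, NULL, 0, NULL);
    WaitForMultipleObjects(2, hThreads, TRUE, INFINITE);
    printf("Final value: %ld\n", g_SharedValue);   /* now exactly 2000000 */
    DeleteCriticalSection(&g_cs);
    CloseHandle(hThreads[0]);
    CloseHandle(hThreads[1]);
    return 0;
}

A critical section only works between threads of the same process; for mutual exclusion across processes, a named mutex would be used instead.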

 

Cooperation Among Processes by Communication

 

Typically, communication can be characterized as consisting of messages of some sort. Primitives for sending and receiving messages may be provided as part of the programming language or by the kernel of the operating system. Because nothing is shared between processes in the act of passing messages, mutual exclusion is not a control requirement for this sort of cooperation. However, the problems of deadlock and starvation are still present. As an example of deadlock, two processes may be blocked, each waiting for a communication from the other. For starvation, consider three processes, P1, P2 and P3, which exhibit the following behavior: P1 is repeatedly attempting to communicate with either P2 or P3, while P2 and P3 are both attempting to communicate with P1. A sequence could arise in which P1 and P2 exchange information repeatedly, while P3 is blocked waiting for a communication from P1. There is no deadlock, because P1 remains active, but P3 is starved. A minimal message-passing sketch between threads is given below.
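
The following is a minimal sketch (WM_APP + 1 and the retry loop are illustrative assumptions) of cooperation by communication between threads: a worker posts messages to the main thread's message queue with PostThreadMessage(), and the main thread retrieves them with GetMessage(), so nothing is shared and no mutual exclusion is needed.

/* Cooperation by communication: thread messages instead of shared data. */
#include <windows.h>
#include <stdio.h>

#define WM_WORK_DONE (WM_APP + 1)    /* arbitrary private message */

DWORD WINAPI Worker(LPVOID lpParam)
{
    DWORD dwMainThreadId = *(DWORD *)lpParam;
    int i;
    for (i = 1; i <= 3; i++) {
        /* PostThreadMessage() fails until the target queue exists, so retry briefly. */
        while (!PostThreadMessage(dwMainThreadId, WM_WORK_DONE, (WPARAM)i, 0))
            Sleep(10);
    }
    return 0;
}

int main(void)
{
    MSG msg;
    DWORD dwMainThreadId = GetCurrentThreadId();
    HANDLE hThread = CreateThread(NULL, 0, Worker, &dwMainThreadId, 0, NULL);
    int received = 0;

    /* GetMessage() with a NULL window handle also retrieves thread messages. */
    while (received < 3 && GetMessage(&msg, NULL, 0, 0)) {
        if (msg.message == WM_WORK_DONE) {
            printf("Received work item %u from worker\n", (unsigned)msg.wParam);
            received++;
        }
    }
    WaitForSingleObject(hThread, INFINITE);
    CloseHandle(hThread);
    return 0;
}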

 

Requirement for Mutual Exclusion

 

The successful implementation of concurrency among processes requires the ability to define critical sections and enforce mutual exclusion. This is fundamental for any concurrent processing scheme. Generally, any facility or capability that is to provide support for mutual exclusion should meet the following requirements:

 

  1. Mutual exclusion must be enforced: only one process at a time is allowed into its critical section, among all processes that have critical sections for the same resource or shared object.

  2. A process that halts in its non-critical section must do so without interfering with other processes.

  3. It must not be possible for a process requiring access to a critical section to be delayed indefinitely; no deadlock or starvation can be allowed.

  4. When no process is in a critical section, any process that requests entry to its critical section must be permitted to enter without delay.

  5. No assumptions are made about relative process speeds or number of processes.

  6. A process remains inside its critical section for a finite time only.

 

In Windows, mutual exclusion is achieved by using the operating system's synchronization APIs. We will examine the details later on.

 

Back To Windows OS: Synchronization

 

To avoid race conditions, deadlocks and the other related conditions discussed before, it is necessary to synchronize access by multiple threads to shared resources. Synchronization is also necessary to ensure that interdependent code is executed in the proper sequence. In Windows, to synchronize access to a resource, you can use one of the synchronization objects with one of the wait functions. Each synchronization object instance can be in either a signaled or a non-signaled state. A thread can be suspended on an object in the non-signaled state; the thread is released when the object enters the signaled state. The mechanism is: a thread issues a wait request to the NT executive by using the handle of the synchronization object. When an object enters the signaled state, the NT executive releases all thread objects that are waiting on that synchronization object. The wait functions allow a thread to block its own execution until a specified non-signaled object is set to the signaled state. The functions described in this section provide mechanisms that threads can use to synchronize access to a resource.

The following table lists, for each object type, an example of what causes it to enter the signaled state and the effect this has on waiting threads.

 

Object Type: Process
Basic definition: A program invocation, including the address space and resources required to run the program.
Set to signaled state when: The last thread terminates (the process ends).
Effect on waiting threads: All released.

Object Type: Thread
Basic definition: An executable entity within a process.
Set to signaled state when: The thread terminates.
Effect on waiting threads: All released.

Object Type: File
Basic definition: An instance of an opened file or I/O device.
Set to signaled state when: The I/O operation completes.
Effect on waiting threads: All released.

Object Type: Event
Basic definition: An announcement that a system event has occurred.
Set to signaled state when: A thread sets (signals) the event.
Effect on waiting threads: All released.

Object Type: Semaphore
Basic definition: A counter (variable) that regulates the number of threads that can use a resource.
Set to signaled state when: The semaphore count is greater than zero (it becomes non-signaled when the count drops to zero).
Effect on waiting threads: Waiting threads are released, each release decrementing the count, until the count reaches zero.

Object Type: Waitable timer
Basic definition: A counter that records the passage of time.
Set to signaled state when: The set time arrives or the time interval expires.
Effect on waiting threads: All released.

 

Table 2
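
The following is a minimal sketch (notepad.exe is used purely as a convenient child program) of the process row of Table 2: the process handle returned by CreateProcess() stays non-signaled while the child runs and becomes signaled when it terminates, releasing the waiting thread.

/* Waiting on a process object until it enters the signaled state. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    STARTUPINFO si;
    PROCESS_INFORMATION pi;
    TCHAR szCmd[] = TEXT("notepad.exe");   /* command line must be writable */

    ZeroMemory(&si, sizeof(si));
    si.cb = sizeof(si);
    ZeroMemory(&pi, sizeof(pi));

    if (!CreateProcess(NULL, szCmd, NULL, NULL, FALSE, 0, NULL, NULL, &si, &pi)) {
        printf("CreateProcess() failed, error %lu\n", GetLastError());
        return 1;
    }
    printf("Waiting for the child process to exit...\n");
    /* Blocks until the process object enters the signaled state (process exit). */
    WaitForSingleObject(pi.hProcess, INFINITE);
    printf("Child process has terminated; its object is now signaled.\n");

    CloseHandle(pi.hThread);
    CloseHandle(pi.hProcess);
    return 0;
}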

 

Synchronization Objects

 

A synchronization object is an object whose handle can be specified in one of the wait functions to coordinate the execution of multiple threads. More than one process can have a handle to the same synchronization object, making interprocess synchronization possible.  In Windows Win32 programming, the following object types are provided exclusively for synchronization.

 

Event - Notifies one or more waiting threads that an event has occurred.

Mutex - Can be owned by only one thread at a time, enabling threads to coordinate mutually exclusive access to a shared resource.

Semaphore - Maintains a count between zero and some maximum value, limiting the number of threads that are simultaneously accessing a shared resource.

Waitable timer - Notifies one or more waiting threads that a specified time has arrived.

 

Table 3
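
The following is a minimal sketch (the counts and the Sleep() call are arbitrary) of the semaphore object from Table 3: the semaphore is created with a maximum count of 2, so at most two of the five worker threads can be inside the guarded region at the same time.

/* Semaphore sketch: limiting concurrent access to a resource to two threads. */
#include <windows.h>
#include <stdio.h>

static HANDLE g_hSemaphore;

DWORD WINAPI Worker(LPVOID lpParam)
{
    /* Decrements the semaphore count; blocks while the count is zero. */
    WaitForSingleObject(g_hSemaphore, INFINITE);
    printf("Thread %lu is using the resource\n", GetCurrentThreadId());
    Sleep(500);                                   /* simulate work */
    /* Increments the count again, releasing one waiting thread if any. */
    ReleaseSemaphore(g_hSemaphore, 1, NULL);
    return 0;
}

int main(void)
{
    HANDLE hThreads[5];
    int i;
    /* Initial count 2, maximum count 2: at most two concurrent owners. */
    g_hSemaphore = CreateSemaphore(NULL, 2, 2, NULL);
    for (i = 0; i < 5; i++)
        hThreads[i] = CreateThread(NULL, 0, Worker, NULL, 0, NULL);
    WaitForMultipleObjects(5, hThreads, TRUE, INFINITE);
    for (i = 0; i < 5; i++)
        CloseHandle(hThreads[i]);
    CloseHandle(g_hSemaphore);
    return 0;
}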

 

Though available for other uses, the following objects can also be used for synchronization.

 

Change notification - Created by the FindFirstChangeNotification() function. Its state is set to signaled when a specified type of change occurs within a specified directory or directory tree.

Console input - Created when a console is created. The handle to console input is returned by the CreateFile() function when CONIN$ is specified, or by the GetStdHandle() function. Its state is set to signaled when there is unread input in the console's input buffer, and set to non-signaled when the input buffer is empty.

Job - Created by calling the CreateJobObject() function. The state of a job object is set to signaled when all its processes are terminated because the specified end-of-job time limit has been exceeded.

Memory resource notification - Created by the CreateMemoryResourceNotification() function. Its state is set to signaled when a specified type of change occurs within physical memory.

Process - Created by calling the CreateProcess() function. Its state is set to non-signaled while the process is running, and set to signaled when the process terminates.

Thread - Created when a new thread is created by calling the CreateProcess(), CreateThread(), or CreateRemoteThread() function. Its state is set to non-signaled while the thread is running, and set to signaled when the thread terminates.

 

Table 4

 

In some circumstances, you can also use a file, named pipe, or communications device as a synchronization object; however, their use for this purpose is discouraged. Instead, use asynchronous I/O and wait on the event object set in the OVERLAPPED structure. It is safer to use the event object because of the confusion that can occur when multiple simultaneous overlapped operations are performed on the same file, named pipe, or communications device. In this situation, there is no way to know which operation caused the object's state to be signaled.
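
The following is a minimal sketch (the file name is an assumption and error handling is reduced) of the recommendation above: issue an asynchronous ReadFile() with an event object stored in the OVERLAPPED structure, then wait on that event rather than on the file handle itself.

/* Overlapped (asynchronous) read with an event in the OVERLAPPED structure. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    BYTE buffer[4096];
    DWORD dwBytesRead = 0;
    OVERLAPPED ov;
    HANDLE hFile = CreateFile(TEXT("C:\\Temp\\data.bin"), GENERIC_READ, FILE_SHARE_READ,
                              NULL, OPEN_EXISTING, FILE_FLAG_OVERLAPPED, NULL);
    if (hFile == INVALID_HANDLE_VALUE) {
        printf("CreateFile() failed, error %lu\n", GetLastError());
        return 1;
    }

    ZeroMemory(&ov, sizeof(ov));
    /* Manual-reset event: signaled by the system when this I/O request completes. */
    ov.hEvent = CreateEvent(NULL, TRUE, FALSE, NULL);

    if (!ReadFile(hFile, buffer, sizeof(buffer), NULL, &ov) &&
        GetLastError() != ERROR_IO_PENDING) {
        printf("ReadFile() failed, error %lu\n", GetLastError());
    } else {
        /* Wait on the event object, not on the file handle. */
        WaitForSingleObject(ov.hEvent, INFINITE);
        GetOverlappedResult(hFile, &ov, &dwBytesRead, FALSE);
        printf("Read %lu bytes asynchronously\n", dwBytesRead);
    }
    CloseHandle(ov.hEvent);
    CloseHandle(hFile);
    return 0;
}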

 

Wait Functions

 

The wait functions allow a thread to block its own execution. The wait functions do not return until the specified criteria have been met. The type of wait function determines the set of criteria used. When a wait function is called, it checks whether the wait criteria have been met. If the criteria have not been met, the calling thread enters the wait state. It uses no processor time while waiting for the criteria to be met. There are four types of wait functions:

  1. single-object.

  2. multiple-object.

  3. alertable.

  4. registered.

 

Single-object Wait Functions

 

The SignalObjectAndWait(), WaitForSingleObject(), and WaitForSingleObjectEx() functions require a handle to one synchronization object. These functions return when one of the following occurs:

  1. The specified object is in the signaled state.

  2. The time-out interval elapses. The time-out interval can be set to INFINITE to specify that the wait will not time out.

 

The SignalObjectAndWait() function enables the calling thread to atomically set the state of an object to signaled and wait for the state of another object to be set to signaled.
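
The following is a minimal sketch (the 5-second time-out and the never-signaled event are arbitrary) of a single-object wait: WaitForSingleObject() with a finite time-out interval, distinguishing WAIT_OBJECT_0 from WAIT_TIMEOUT.

/* Single-object wait with a time-out interval. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Auto-reset event, initially non-signaled; nothing signals it here,
       so the wait is expected to time out. */
    HANDLE hEvent = CreateEvent(NULL, FALSE, FALSE, NULL);
    DWORD dwResult = WaitForSingleObject(hEvent, 5000);   /* 5000 ms, not INFINITE */

    switch (dwResult) {
    case WAIT_OBJECT_0:
        printf("The event was signaled.\n");
        break;
    case WAIT_TIMEOUT:
        printf("The wait timed out after 5 seconds.\n");
        break;
    default:
        printf("Wait failed, error %lu\n", GetLastError());
        break;
    }
    CloseHandle(hEvent);
    return 0;
}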

 

Multiple-object Wait Functions

 

The WaitForMultipleObjects(), WaitForMultipleObjectsEx(), MsgWaitForMultipleObjects(), and MsgWaitForMultipleObjectsEx() functions enable the calling thread to specify an array containing one or more synchronization object handles. These functions return when one of the following occurs:

  1. The state of any one of the specified objects is set to signaled or the states of all objects have been set to signaled. You control whether one or all of the states will be used in the function call.

  2. The time-out interval elapses. The time-out interval can be set to INFINITE to specify that the wait will not time out.

 

The MsgWaitForMultipleObjects() and MsgWaitForMultipleObjectsEx() functions also allow you to wait for input events in addition to the object handle array. This is done when you specify the type of input to wait for in the thread's input queue. For example, a thread could use MsgWaitForMultipleObjects() to block its execution until the state of a specified object has been set to signaled and there is mouse input available in the thread's input queue. The thread can use the GetMessage() or PeekMessage() function to retrieve the input.

When waiting for the states of all objects to be set to signaled, these multiple-object functions do not modify the states of the specified objects until the states of all objects have been set to signaled. For example, the state of a mutex object can be signaled, but the calling thread does not get ownership until the states of the other objects specified in the array have also been set to signaled. In the meantime, some other thread may get ownership of the mutex object, thereby setting its state to non-signaled.
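
The following is a minimal sketch (two worker threads are arbitrary) of a multiple-object wait: WaitForMultipleObjects() with bWaitAll set to TRUE blocks until the states of both thread handles have been set to signaled, i.e. both threads have terminated.

/* Multiple-object wait: waiting for all handles in an array to be signaled. */
#include <windows.h>
#include <stdio.h>

DWORD WINAPI Worker(LPVOID lpParam)
{
    Sleep(1000 * (DWORD)(ULONG_PTR)lpParam);   /* pretend to work */
    printf("Worker %u finished\n", (unsigned)(ULONG_PTR)lpParam);
    return 0;
}

int main(void)
{
    HANDLE hThreads[2];
    DWORD dwResult;
    hThreads[0] = CreateThread(NULL, 0, Worker, (LPVOID)1, 0, NULL);
    hThreads[1] = CreateThread(NULL, 0, Worker, (LPVOID)2, 0, NULL);

    /* bWaitAll = TRUE: return only when the states of ALL objects are signaled,
       i.e. both threads have terminated. Pass FALSE to return on the first one. */
    dwResult = WaitForMultipleObjects(2, hThreads, TRUE, INFINITE);
    if (dwResult == WAIT_OBJECT_0)
        printf("Both worker threads have terminated.\n");

    CloseHandle(hThreads[0]);
    CloseHandle(hThreads[1]);
    return 0;
}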

 

Alertable Wait Functions

 

The MsgWaitForMultipleObjectsEx(), SignalObjectAndWait(), WaitForMultipleObjectsEx(), and WaitForSingleObjectEx() functions differ from the other wait functions in that they can optionally perform an alertable wait operation. In an alertable wait operation, the function can return when the specified conditions are met, but it can also return if the system queues an I/O completion routine or an APC for execution by the waiting thread.
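
The following is a minimal sketch (the APC routine and its parameter are illustrative) of an alertable wait: the waiting thread calls WaitForSingleObjectEx() with bAlertable set to TRUE, and the wait returns WAIT_IO_COMPLETION when another thread queues an APC to it, even though the event is never signaled.

/* Alertable wait: a queued user-mode APC interrupts the wait. */
#include <windows.h>
#include <stdio.h>

VOID CALLBACK MyApcRoutine(ULONG_PTR dwParam)
{
    printf("APC executed in the waiting thread, parameter = %lu\n", (ULONG)dwParam);
}

DWORD WINAPI Waiter(LPVOID lpParam)
{
    HANDLE hEvent = (HANDLE)lpParam;
    /* Alertable wait: can be interrupted by user-mode APCs or I/O completion routines. */
    DWORD dwResult = WaitForSingleObjectEx(hEvent, INFINITE, TRUE);
    if (dwResult == WAIT_IO_COMPLETION)
        printf("Wait returned WAIT_IO_COMPLETION (an APC was delivered).\n");
    return 0;
}

int main(void)
{
    HANDLE hEvent = CreateEvent(NULL, TRUE, FALSE, NULL);   /* never signaled here */
    HANDLE hThread = CreateThread(NULL, 0, Waiter, hEvent, 0, NULL);

    Sleep(500);                                   /* let the waiter start waiting */
    QueueUserAPC(MyApcRoutine, hThread, 123);     /* queue an APC to the waiter */

    WaitForSingleObject(hThread, INFINITE);
    CloseHandle(hThread);
    CloseHandle(hEvent);
    return 0;
}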

 

Registered Wait Functions

 

The RegisterWaitForSingleObject() function differs from the other wait functions in that the wait operation is performed by a thread from the thread pool. When the specified conditions are met, the callback function is executed by a worker thread from the thread pool. By default, a registered wait operation is a multiple-wait operation. The system resets the timer every time the event is signaled (or the time-out interval elapses) until you call the UnregisterWaitEx() function to cancel the operation. To specify that a wait operation should be executed only once, set the dwFlags parameter of RegisterWaitForSingleObject() to WT_EXECUTEONLYONCE.
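
The following is a minimal sketch (the event and callback names are illustrative) of a registered wait: RegisterWaitForSingleObject() arranges for a thread-pool thread to run the callback when the event becomes signaled, and WT_EXECUTEONLYONCE limits it to a single execution before UnregisterWaitEx() cleans up.

/* Registered wait: the wait is performed by the thread pool, not by this thread. */
#include <windows.h>
#include <stdio.h>

VOID CALLBACK MyWaitCallback(PVOID lpParameter, BOOLEAN TimerOrWaitFired)
{
    /* TimerOrWaitFired is TRUE on time-out, FALSE when the object was signaled. */
    printf("Callback ran on a thread-pool thread; timed out = %d\n", (int)TimerOrWaitFired);
}

int main(void)
{
    HANDLE hWait = NULL;
    HANDLE hEvent = CreateEvent(NULL, FALSE, FALSE, NULL);

    RegisterWaitForSingleObject(&hWait, hEvent, MyWaitCallback, NULL,
                                INFINITE, WT_EXECUTEONLYONCE);

    SetEvent(hEvent);          /* signal the event; the pool runs the callback once */
    Sleep(1000);               /* crude pause so the callback can execute */

    /* INVALID_HANDLE_VALUE makes UnregisterWaitEx() block until callbacks finish. */
    UnregisterWaitEx(hWait, INVALID_HANDLE_VALUE);
    CloseHandle(hEvent);
    return 0;
}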

 

Wait Functions and Synchronization Objects

 

The wait functions can modify the states of some types of synchronization objects. Modification occurs only for the object or objects whose signaled state caused the function to return. Wait functions can modify the states of synchronization objects as follows:

  1. The count of a semaphore object decreases by one, and the state of the semaphore is set to non-signaled if its count is zero.

  2. The states of mutex, auto-reset event, and change-notification objects are set to non-signaled.

  3. The state of a synchronization timer is set to non-signaled.

  4. The states of manual-reset event, manual-reset timer, process, thread, and console input objects are not affected by a wait function.

 

Wait Functions and Creating Windows

 

You have to be careful when using the wait functions in code that directly or indirectly creates windows. If a thread creates any windows, it must process messages. Message broadcasts are sent to all windows in the system, so a thread that uses a wait function with no time-out interval may cause the system to deadlock. Two examples of code that indirectly create windows are DDE and the CoInitialize() function. Therefore, if you have a thread that creates windows, use MsgWaitForMultipleObjects() or MsgWaitForMultipleObjectsEx() rather than the other wait functions, as in the sketch below.
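
The following is a minimal sketch (the worker thread stands in for any background operation) of the advice above: instead of blocking in a plain wait function, a thread that owns windows waits with MsgWaitForMultipleObjects() and keeps pumping messages until the awaited handle is signaled.

/* Waiting while still pumping messages, so window-owning threads do not deadlock. */
#include <windows.h>
#include <stdio.h>

DWORD WINAPI Worker(LPVOID lpParam)
{
    Sleep(3000);               /* background work the UI thread is waiting for */
    return 0;
}

int main(void)
{
    MSG msg;
    HANDLE hThread = CreateThread(NULL, 0, Worker, NULL, 0, NULL);

    for (;;) {
        /* Wake up either when the worker terminates or when any message arrives. */
        DWORD dwResult = MsgWaitForMultipleObjects(1, &hThread, FALSE,
                                                   INFINITE, QS_ALLINPUT);
        if (dwResult == WAIT_OBJECT_0)
            break;                                  /* worker thread has finished */
        /* WAIT_OBJECT_0 + 1: pump all pending messages, then wait again. In a real
           GUI thread, DispatchMessage() routes them to the window procedures. */
        while (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE)) {
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }
    }
    printf("Worker finished; messages were still being processed while waiting.\n");
    CloseHandle(hThread);
    return 0;
}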

 
Further reading and digging:

 

  1. Microsoft C references, online MSDN.

  2. Microsoft Visual C++, online MSDN.

  3. ReactOS - Windows binary compatible OS - C/C++ source code repository, Doxygen.

  4. Linux Access Control Lists (ACL) info can be found at Access Control Lists.

  5. For Multi bytes, Unicode characters and Localization please refer to Locale, wide characters & Unicode (Story) and Windows users & groups programming tutorials (Implementation).

  6. Structure, enum, union and typedef story can be found in C/C++ struct, enum, union & typedef.

  7. Unicode version of the Win32 programming tutorials and more details on Windows processes, threads and synchronization.

  8. Check the best selling C / C++ and Windows books at Amazon.com.