|< Windows Processes & Threads: Synchronization 1 | Main | Windows Processes & Threads: Synchronization 3 >| Site Index | Download |

MODULE V1

PROCESSES AND THREADS: SYNCHRONIZATION

Part 2: STORY

What do we have in this Module?

  1. More Synchronization Mechanisms

  2. Synchronization and Overlapped Input and Output

  3. Asynchronous Procedure Calls

  4. Critical Section Objects

  5. Interlocked Variable Access

  6. Interlocked Singly Linked Lists

  7. Timer Queues

  8. Event Objects

  9. Waitable Timer Objects

  10. Mutex Objects

  11. Semaphore Objects

  12. Thread Pooling

 

 

My Training Period: xx hours. Before you begin, read the instructions here.

The expected abilities to be acquired in this session:

 

More Synchronization Mechanisms

 

The following are other synchronization mechanisms:

  1. Overlapped input and output.

  2. Asynchronous procedure calls (APC).

  3. Critical section objects.

  4. Interlocked variable access.

  5. Interlocked singly linked lists.

  6. Timer queues.

 

Keep in mind that you will find other methods or approaches used in Windows to provide synchronization, such as Monitor objects (used in the .NET Framework) and apartments (used in the Component Object Model, COM/COM+).

 

Synchronization and Overlapped Input and Output

 

You can perform either synchronous or asynchronous (or overlapped) I/O operations on files, named pipes, and serial communications devices. The WriteFile(), ReadFile(), DeviceIoControl(), WaitCommEvent(), ConnectNamedPipe(), and TransactNamedPipe() functions can be performed either synchronously or asynchronously. The ReadFileEx() and WriteFileEx() functions can be performed asynchronously only. When a function is executed synchronously, it does not return until the operation has been completed. This means that the execution of the calling thread can be blocked for an indefinite period while it waits for a time-consuming operation to finish. Functions called for overlapped operation can return immediately, even though the operation has not been completed. This enables a time-consuming I/O operation to be executed in the background while the calling thread is free to perform other tasks. For example, a single thread can perform simultaneous I/O operations on different handles, or even simultaneous read and write operations on the same handle.

To synchronize its execution with the completion of the overlapped operation, the calling thread uses the GetOverlappedResult() function or one of the wait functions to determine when the overlapped operation has been completed. You can also use the HasOverlappedIoCompleted() macro to poll for completion. To cancel all pending asynchronous I/O operations, use the CancelIo() function. This function only cancels operations issued by the calling thread for the specified file handle.

Overlapped operations require a file, named pipe, or communications device that was created with the FILE_FLAG_OVERLAPPED flag. To call a function to perform an overlapped operation, the calling thread must specify a pointer to an OVERLAPPED structure that has had all of its members initialized to zero; the hEvent member typically receives a handle to a manual-reset event object before the call. If this pointer is NULL, the function return value may incorrectly indicate that the operation completed. The system sets the state of the event object to non-signaled when a call to the I/O function returns before the operation has been completed.

The system sets the state of the event object to signaled when the operation has been completed. When a function is called to perform an overlapped operation, it is possible that the operation will be completed before the function returns. When this happens, the results are handled as if the operation had been performed synchronously. If the operation was not completed, however, the function's return value is FALSE, and the GetLastError() function returns ERROR_IO_PENDING.  A thread can manage overlapped operations by either of two methods:

  1. Use the GetOverlappedResult() function to wait for the overlapped operation to be completed.

  2. Specify a handle to the OVERLAPPED structure's manual-reset event object in one of the wait functions and then call GetOverlappedResult() after the wait function returns. The GetOverlappedResult() function returns the results of the completed overlapped operation, and for functions in which such information is appropriate, it reports the actual number of bytes that were transferred.

 

When performing multiple simultaneous overlapped operations, the calling thread must specify an OVERLAPPED structure with a different manual-reset event object for each operation. To wait for any one of the overlapped operations to be completed, the thread specifies all the manual-reset event handles as wait criteria in one of the multiple-object wait functions. The return value of the multiple-object wait function indicates which manual-reset event object was signaled, so the thread can determine which overlapped operation caused the wait operation to be completed. If no event object is specified in the OVERLAPPED structure, the system signals the state of the file, named pipe, or communications device when the overlapped operation has been completed. Thus, you can specify these handles as synchronization objects in a wait function, though their use for this purpose can be difficult to manage. When performing simultaneous overlapped operations on the same file, named pipe, or communications device, there is no way to know which operation caused the object's state to be signaled. It is safer to use a separate event object for each overlapped operation.
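The pattern described above can be sketched in code. The following is a minimal, Windows-only example of an overlapped read synchronized through the OVERLAPPED structure's manual-reset event; the file name input.dat is an assumption for illustration, and error handling is abbreviated:

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* FILE_FLAG_OVERLAPPED is required for overlapped operations. */
    HANDLE hFile = CreateFileA("input.dat", GENERIC_READ, FILE_SHARE_READ,
                               NULL, OPEN_EXISTING, FILE_FLAG_OVERLAPPED, NULL);
    if (hFile == INVALID_HANDLE_VALUE)
        return 1;

    char buf[512];
    OVERLAPPED ov = {0};                              /* all members zeroed */
    ov.hEvent = CreateEvent(NULL, TRUE, FALSE, NULL); /* manual-reset event */

    DWORD nRead = 0;
    if (!ReadFile(hFile, buf, sizeof(buf), NULL, &ov) &&
        GetLastError() != ERROR_IO_PENDING)
    {
        printf("ReadFile failed: %lu\n", GetLastError());
        return 1;
    }

    /* The thread is free to do other work here, then it synchronizes
       with the completion of the overlapped operation. */
    WaitForSingleObject(ov.hEvent, INFINITE);
    if (GetOverlappedResult(hFile, &ov, &nRead, FALSE))
        printf("%lu bytes read\n", nRead);

    CloseHandle(ov.hEvent);
    CloseHandle(hFile);
    return 0;
}
```

Using a separate event per operation, as here, is what allows several simultaneous operations to be distinguished in a multiple-object wait.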

 

Asynchronous Procedure Calls

 

An asynchronous procedure call (APC) is a function that executes asynchronously in the context of a particular thread. When an APC is queued to a thread, the system issues a software interrupt. The next time the thread is scheduled, it will run the APC function. APCs made by the system are called kernel-mode APCs. APCs made by an application are called user-mode APCs. A thread must be in an alertable state to run a user-mode APC. Each thread has its own APC queue. An application queues an APC to a thread by calling the QueueUserAPC() function. The calling thread specifies the address of an APC function in the call to QueueUserAPC(). The queuing of an APC is a request for the thread to call the APC function.

When a user-mode APC is queued, the thread to which it is queued is not directed to call the APC function unless it is in an alertable state. A thread enters an alertable state when it calls the SleepEx(), SignalObjectAndWait(), MsgWaitForMultipleObjectsEx(), WaitForMultipleObjectsEx(), or WaitForSingleObjectEx() function. Note that you cannot use WaitForSingleObjectEx() to wait on the handle to the object for which the APC is queued. Otherwise, when the asynchronous operation is completed, the handle is set to the signaled state and the thread is no longer in an alertable wait state, so the APC function will not be executed. However, the APC is still queued, so the APC function will be executed if you call another alertable wait function. Note that the ReadFileEx(), SetWaitableTimer(), and WriteFileEx() functions are implemented using an APC as the completion notification callback mechanism.
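As a minimal sketch of the mechanism just described, the following Windows-only example queues a user-mode APC to the current thread and then enters an alertable wait so the system can deliver it (the function name MyApc and the parameter value are illustrative):

```c
#include <windows.h>
#include <stdio.h>

/* The APC function runs in the context of the thread it was queued to. */
VOID CALLBACK MyApc(ULONG_PTR param)
{
    printf("APC ran with parameter %lu\n", (unsigned long)param);
}

int main(void)
{
    /* Queue an APC to the current thread; this is only a request. */
    QueueUserAPC(MyApc, GetCurrentThread(), 42);

    /* The APC is delivered only once the thread is alertable.
       SleepEx() returns WAIT_IO_COMPLETION after the APC executes. */
    SleepEx(INFINITE, TRUE);
    return 0;
}
```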

 

Critical Section Objects

 

A critical section object provides synchronization similar to that provided by a mutex object, except that a critical section can be used only by the threads of a single process. Event, mutex, and semaphore objects can also be used in a single-process application, but critical section objects provide a slightly faster, more efficient mechanism for mutual-exclusion synchronization (a processor-specific test-and-set instruction). Like a mutex object, a critical section object can be owned by only one thread at a time, which makes it useful for protecting a shared resource from simultaneous access. There is no guarantee about the order in which threads will obtain ownership of the critical section; however, the system will be fair to all threads. Unlike a mutex object, there is no way to tell whether a critical section has been abandoned.

The process is responsible for allocating the memory used by a critical section. Typically, this is done by simply declaring a variable of type CRITICAL_SECTION. Before the threads of the process can use it, initialize the critical section by using the InitializeCriticalSection() or InitializeCriticalSectionAndSpinCount() function. A thread uses the EnterCriticalSection() or TryEnterCriticalSection() function to request ownership of a critical section. It uses the LeaveCriticalSection() function to release ownership of a critical section. If the critical section object is currently owned by another thread, EnterCriticalSection() waits indefinitely for ownership. In contrast, when a mutex object is used for mutual exclusion, the wait functions accept a specified time-out interval. The TryEnterCriticalSection() function attempts to enter a critical section without blocking the calling thread. Once a thread owns a critical section, it can make additional calls to EnterCriticalSection() or TryEnterCriticalSection() without blocking its execution. This prevents a thread from deadlocking itself while waiting for a critical section that it already owns. To release its ownership, the thread must call LeaveCriticalSection() once for each time that it entered the critical section.

A thread uses the InitializeCriticalSectionAndSpinCount() or SetCriticalSectionSpinCount() function to specify a spin count for the critical section object. On single-processor systems, the spin count is ignored and the critical section spin count is set to 0. On multiprocessor systems, if the critical section is unavailable, the calling thread will spin dwSpinCount times before performing a wait operation on a semaphore associated with the critical section. If the critical section becomes free during the spin operation, the calling thread avoids the wait operation. Any thread of the process can use the DeleteCriticalSection() function to release the system resources that were allocated when the critical section object was initialized. After this function has been called, the critical section object can no longer be used for synchronization. When a critical section object is owned, the only other threads affected are those waiting for ownership in a call to EnterCriticalSection(). Threads that are not waiting are free to continue running.
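The lifecycle described above (initialize, enter, leave, delete) can be sketched as a minimal Windows-only example in which two threads increment a shared counter; the spin count of 4000 is an arbitrary illustrative value:

```c
#include <windows.h>
#include <stdio.h>

CRITICAL_SECTION g_cs;   /* the process allocates the memory itself */
long g_counter = 0;

DWORD WINAPI Worker(LPVOID arg)
{
    for (int i = 0; i < 100000; i++) {
        EnterCriticalSection(&g_cs);   /* request ownership (may block) */
        g_counter++;
        LeaveCriticalSection(&g_cs);   /* one Leave per Enter */
    }
    return 0;
}

int main(void)
{
    InitializeCriticalSectionAndSpinCount(&g_cs, 4000);

    HANDLE h[2];
    h[0] = CreateThread(NULL, 0, Worker, NULL, 0, NULL);
    h[1] = CreateThread(NULL, 0, Worker, NULL, 0, NULL);
    WaitForMultipleObjects(2, h, TRUE, INFINITE);

    printf("counter = %ld\n", g_counter);  /* both increments serialized */
    CloseHandle(h[0]); CloseHandle(h[1]);
    DeleteCriticalSection(&g_cs);          /* release system resources */
    return 0;
}
```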

 

Interlocked Variable Access

 

The interlocked functions provide a simple mechanism for synchronizing access to a variable that is shared by multiple threads. The threads of different processes can use this mechanism if the variable is in shared memory. Simple reads and writes to properly aligned 32-bit variables are atomic. In other words, when one thread is updating a 32-bit variable, you will not end up with only one portion of the variable updated; all 32 bits are updated in an atomic fashion. However, access is not guaranteed to be synchronized. If two threads are reading and writing from the same variable, you cannot determine if one thread will perform its read operation before the other performs its write operation. Simple reads and writes to properly aligned 64-bit variables are atomic on 64-bit Windows. Reads and writes to 64-bit values are not guaranteed to be atomic on 32-bit Windows. Reads and writes to variables of other sizes are not guaranteed to be atomic on any platform.

The interlocked functions should be used to perform complex operations in an atomic manner. The InterlockedIncrement() and InterlockedDecrement() functions combine the operations of incrementing or decrementing the variable and checking the resulting value. This atomic operation is useful in a multitasking operating system, in which the system can interrupt one thread's execution to grant a slice of processor time to another thread. Without such synchronization, one thread could increment a variable but be interrupted by the system before it can check the resulting value of the variable. A second thread could then increment the same variable. When the first thread receives its next time slice, it will check the value of the variable, which has now been incremented not once but twice. The interlocked variable-access functions protect against this kind of error.

The InterlockedExchangePointer() function atomically exchanges the values of the specified variables. The InterlockedExchangeAdd() function combines two operations: adding two variables together and storing the result in one of the variables. The InterlockedCompareExchangePointer() function combines two operations: comparing two values and storing a third value in one of the variables, based on the outcome of the comparison. There are functions that are specifically designed to perform interlocked variable access on 64-bit memory values and addresses, and are optimized for use on 64-bit Windows. Each of these functions contains "64" in the name; for example, InterlockedDecrement64() and InterlockedCompareExchangeAcquire64(). There are also functions that combine the basic interlocked variable access operations with acquire and release memory access semantics, defined for processors that can reorder memory operations, such as the Intel Itanium. Each of these functions contains the word "Acquire" or "Release" in its name; for example, InterlockedDecrementAcquire() and InterlockedDecrementRelease(). Acquire memory semantics specify that the memory operation being performed by the current thread will be visible before any other memory operations are attempted. Release memory semantics specify that the memory operation being performed by the current thread will be visible after all other memory operations have been completed. These semantics allow you to force memory operations to be performed in a specific order. Use acquire semantics when entering a protected region and release semantics when leaving it.
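The increment scenario described above can be sketched with a minimal Windows-only example: two threads increment a shared LONG with InterlockedIncrement(), so no update is lost even when the scheduler interleaves them:

```c
#include <windows.h>
#include <stdio.h>

volatile LONG g_count = 0;   /* shared, properly aligned 32-bit variable */

DWORD WINAPI Worker(LPVOID arg)
{
    for (int i = 0; i < 100000; i++)
        InterlockedIncrement(&g_count);  /* increment + result check, atomically */
    return 0;
}

int main(void)
{
    HANDLE h[2];
    h[0] = CreateThread(NULL, 0, Worker, NULL, 0, NULL);
    h[1] = CreateThread(NULL, 0, Worker, NULL, 0, NULL);
    WaitForMultipleObjects(2, h, TRUE, INFINITE);

    /* With a plain g_count++ the result could be less than 200000. */
    printf("count = %ld\n", g_count);
    CloseHandle(h[0]); CloseHandle(h[1]);
    return 0;
}
```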

 

Interlocked Singly Linked Lists

 

Interlocked singly linked lists (SLists) ease the task of insertion and deletion from a linked list. They are implemented using a non-blocking algorithm to provide atomic synchronization, increase system performance, and avoid problems such as priority inversion and lock convoys. SLists are straightforward to implement and use in 32-bit code. However, it is challenging to implement them in 64-bit code because the amount of data exchangeable by the native interlocked exchange primitives is not double the address size, as it is in 32-bit code. SLists therefore make it easier to port high-end scalable algorithms to Windows. Applications can use SLists by calling the InitializeSListHead() function to initialize the head of the list. To insert items at the front of the list, use the InterlockedPushEntrySList() function. To remove the item at the front of the list, use the InterlockedPopEntrySList() function.
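The three functions named above can be sketched as follows; this is a Windows-only, single-threaded illustration (the ITEM type is invented for the example, and _aligned_malloc() from the Microsoft C runtime is assumed, since the list head and every entry must be aligned on a MEMORY_ALLOCATION_ALIGNMENT boundary):

```c
#include <windows.h>
#include <malloc.h>    /* _aligned_malloc / _aligned_free (MSVC CRT) */
#include <stdio.h>

/* Each item must begin with an SLIST_ENTRY. */
typedef struct {
    SLIST_ENTRY entry;
    int value;
} ITEM;

int main(void)
{
    PSLIST_HEADER head =
        (PSLIST_HEADER)_aligned_malloc(sizeof(SLIST_HEADER),
                                       MEMORY_ALLOCATION_ALIGNMENT);
    InitializeSListHead(head);

    for (int i = 0; i < 3; i++) {
        ITEM *it = (ITEM *)_aligned_malloc(sizeof(ITEM),
                                           MEMORY_ALLOCATION_ALIGNMENT);
        it->value = i;
        InterlockedPushEntrySList(head, &it->entry);  /* push at front */
    }

    PSLIST_ENTRY e;
    while ((e = InterlockedPopEntrySList(head)) != NULL) {  /* LIFO order */
        printf("%d\n", ((ITEM *)e)->value);
        _aligned_free(e);
    }
    _aligned_free(head);
    return 0;
}
```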

 

Timer Queues

 

The CreateTimerQueue() function creates a queue for timers. Timers in this queue, known as timer-queue timers, are lightweight objects that enable you to specify a callback function to be called when the specified due time arrives. The wait operation is performed by a thread in the thread pool. To add a timer to the queue, call the CreateTimerQueueTimer() function. To update a timer-queue timer, call the ChangeTimerQueueTimer() function. You can specify a callback function to be executed by a worker thread from the thread pool when the timer expires. To cancel a pending timer, call the DeleteTimerQueueTimer() function. When you are finished with the queue of timers, call the DeleteTimerQueueEx() function to delete the timer queue. Any pending timers in the queue are canceled and deleted.
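A minimal Windows-only sketch of the sequence just described: create a queue, add a one-shot timer whose callback runs on a thread-pool worker thread, then delete the queue (the callback name TimerRoutine and the 500 ms due time are illustrative):

```c
#include <windows.h>
#include <stdio.h>

/* Executed by a worker thread from the thread pool when the timer expires. */
VOID CALLBACK TimerRoutine(PVOID lpParam, BOOLEAN timerOrWaitFired)
{
    SetEvent((HANDLE)lpParam);   /* tell the main thread we fired */
}

int main(void)
{
    HANDLE done  = CreateEvent(NULL, TRUE, FALSE, NULL);
    HANDLE queue = CreateTimerQueue();
    HANDLE timer = NULL;

    /* Due time 500 ms; period 0 makes it a one-shot timer. */
    CreateTimerQueueTimer(&timer, queue, TimerRoutine, done,
                          500, 0, WT_EXECUTEDEFAULT);

    WaitForSingleObject(done, INFINITE);
    printf("timer fired\n");

    /* INVALID_HANDLE_VALUE: wait for running callbacks before returning. */
    DeleteTimerQueueEx(queue, INVALID_HANDLE_VALUE);
    CloseHandle(done);
    return 0;
}
```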

 

Event Objects

 

Generally, an event is a notification that occurs in response to an action, such as a change in state, or as a result of the user clicking the mouse. An event handler is code, typically a function or routine, that receives control when the corresponding event occurs. An event object is a synchronization object whose state can be explicitly set to signaled by use of the SetEvent() function. Following are the two types of event object.

 

Manual-reset event: An event object whose state remains signaled until it is explicitly reset to non-signaled by the ResetEvent() function. While it is signaled, any number of waiting threads, or threads that subsequently specify the same event object in one of the wait functions, can be released.

Auto-reset event: An event object whose state remains signaled until a single waiting thread is released, at which time the system automatically sets the state to non-signaled. If no threads are waiting, the event object's state remains signaled. If more than one thread is waiting, a waiting thread is selected. Do not assume a first-in, first-out (FIFO) order. External events such as kernel-mode APCs can change the wait order.

 

Table 5

 

The event object is useful in sending a signal to a thread indicating that a particular event has occurred. For example, in overlapped input and output, the system sets a specified event object to the signaled state when the overlapped operation has been completed. A single thread can specify different event objects in several simultaneous overlapped operations, then use one of the multiple-object wait functions to wait for the state of any one of the event objects to be signaled. A thread uses the CreateEvent() function to create an event object. The creating thread specifies the initial state of the object and whether it is a manual-reset or auto-reset event object. The creating thread can also specify a name for the event object. Threads in other processes can open a handle to an existing event object by specifying its name in a call to the OpenEvent() function.
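The signaling pattern described above can be sketched in a minimal Windows-only example: a worker thread blocks on an auto-reset event until the main thread signals it (the event name "MyEventName" is an invented example; pass NULL for an unnamed event):

```c
#include <windows.h>
#include <stdio.h>

HANDLE g_ready;   /* auto-reset event shared by both threads */

DWORD WINAPI Worker(LPVOID arg)
{
    WaitForSingleObject(g_ready, INFINITE);  /* block until signaled */
    printf("worker released\n");
    return 0;
}

int main(void)
{
    /* bManualReset = FALSE (auto-reset), bInitialState = FALSE (non-signaled) */
    g_ready = CreateEventA(NULL, FALSE, FALSE, "MyEventName");

    HANDLE h = CreateThread(NULL, 0, Worker, NULL, 0, NULL);
    Sleep(100);           /* pretend to prepare some shared data */
    SetEvent(g_ready);    /* releases exactly one waiting thread */

    WaitForSingleObject(h, INFINITE);
    CloseHandle(h);
    CloseHandle(g_ready);
    return 0;
}
```

Because the event is auto-reset, its state returns to non-signaled automatically when the worker is released; with a manual-reset event, ResetEvent() would be needed.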

 

Waitable Timer Objects

 

A waitable timer object is a synchronization object whose state is set to signaled when the specified due time arrives. There are two types of waitable timers that can be created:

  1. Manual-reset.

  2. Synchronization.

A timer of either type can also be a periodic timer.

 

Manual-reset timer: A timer whose state remains signaled until SetWaitableTimer() is called to establish a new due time.

Synchronization timer: A timer whose state remains signaled until a thread completes a wait operation on the timer object.

Periodic timer: A timer that is reactivated each time the specified period expires, until the timer is reset or canceled. A periodic timer is either a periodic manual-reset timer or a periodic synchronization timer.

 

Table 6

 

A thread uses the CreateWaitableTimer() function to create a timer object. Specify TRUE for the bManualReset parameter to create a manual-reset timer and FALSE to create a synchronization timer. The creating thread can specify a name for the timer object in the lpTimerName parameter. Threads in other processes can open a handle to an existing timer by specifying its name in a call to the OpenWaitableTimer() function. Any thread with a handle to a timer object can use one of the wait functions to wait for the timer state to be set to signaled.

The thread calls the SetWaitableTimer() function to activate the timer. Note the use of the following parameters for SetWaitableTimer():

  1. Use the lpDueTime parameter to specify the time at which the timer is to be set to the signaled state. When a manual-reset timer is set to the signaled state, it remains in this state until SetWaitableTimer() establishes a new due time. When a synchronization timer is set to the signaled state, it remains in this state until a thread completes a wait operation on the timer object.

  2. Use the lPeriod parameter of the SetWaitableTimer() function to specify the timer period. If the period is not zero, the timer is a periodic timer; it is reactivated each time the period expires, until the timer is reset or canceled. If the period is zero, the timer is not a periodic timer; it is signaled once and then deactivated.

 

A thread can use the CancelWaitableTimer() function to set the timer to the inactive state. To reset the timer, call SetWaitableTimer(). When you are finished with the timer object, call CloseHandle() to close the handle to the timer object.
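Putting the steps above together, the following is a minimal Windows-only sketch that creates a manual-reset waitable timer, sets a one-shot relative due time of two seconds, and waits for it to become signaled (the two-second value is illustrative):

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* bManualReset = TRUE creates a manual-reset timer. */
    HANDLE timer = CreateWaitableTimer(NULL, TRUE, NULL);

    /* A negative due time is relative, in 100-nanosecond units:
       -20000000 = 2 seconds from now. */
    LARGE_INTEGER due;
    due.QuadPart = -20000000LL;

    /* lPeriod = 0: one-shot, not a periodic timer. */
    SetWaitableTimer(timer, &due, 0, NULL, NULL, FALSE);

    WaitForSingleObject(timer, INFINITE);
    printf("timer signaled\n");

    CancelWaitableTimer(timer);   /* set the timer to the inactive state */
    CloseHandle(timer);
    return 0;
}
```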

 

Mutex Objects

 

A mutex object is a synchronization object whose state is set to signaled when it is not owned by any thread, and non-signaled when it is owned. Only one thread at a time can own a mutex object, whose name comes from the fact that it is useful in coordinating MUTually EXclusive access to a shared resource. For example, to prevent two threads from writing to shared memory at the same time, each thread waits for ownership of a mutex object before executing the code that accesses the memory. After writing to the shared memory, the thread releases the mutex object. A thread uses the CreateMutex() function to create a mutex object. The creating thread can request immediate ownership of the mutex object, and it can either give the mutex a name or create an unnamed mutex. Threads in other processes can open a handle to an existing named mutex object by specifying its name in a call to the OpenMutex() function. To pass a handle to an unnamed mutex to another process, use the DuplicateHandle() function or parent-child handle inheritance. Any thread with a handle to a mutex object can use one of the wait functions to request ownership of the mutex object. If the mutex object is owned by another thread, the wait function blocks the requesting thread until the owning thread releases the mutex object using the ReleaseMutex() function. The return value of the wait function indicates whether the function returned for some reason other than the state of the mutex being set to signaled. If more than one thread is waiting on a mutex, a waiting thread is selected. Do not assume a first-in, first-out (FIFO) order. External events such as kernel-mode APCs can change the wait order.

Threads that are waiting for ownership of a mutex are queued, but do not rely on strict first in, first out (FIFO) ordering: kernel-mode APCs and events that suspend a thread will cause the system to remove the thread from the queue, and when the thread resumes its wait for the mutex, it is placed at the end of the queue. After a thread obtains ownership of a mutex, it can specify the same mutex in repeated calls to the wait functions without blocking its execution. This prevents a thread from deadlocking itself while waiting for a mutex that it already owns. To release its ownership under such circumstances, the thread must call ReleaseMutex() once for each time that the mutex satisfied the conditions of a wait function. If a thread terminates without releasing its ownership of a mutex object, the mutex object is considered to be abandoned. A waiting thread can acquire ownership of an abandoned mutex object, but the wait function will return WAIT_ABANDONED to indicate that the mutex object is abandoned. An abandoned mutex object indicates that an error has occurred and that any shared resource being protected by the mutex object is in an undefined state. If the thread proceeds as though the mutex object had not been abandoned, it is no longer considered abandoned after the thread releases its ownership. This restores normal behavior if a handle to the mutex object is subsequently specified in a wait function. Note that critical section objects provide synchronization similar to that provided by mutex objects, except that critical section objects can be used only by the threads of a single process.
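The ownership cycle described above (wait, use the resource, release) can be sketched as a minimal Windows-only example with an unnamed mutex protecting a shared counter:

```c
#include <windows.h>
#include <stdio.h>

HANDLE g_mutex;      /* unnamed mutex created in main() */
int g_shared = 0;    /* resource protected by the mutex */

DWORD WINAPI Worker(LPVOID arg)
{
    DWORD wait = WaitForSingleObject(g_mutex, INFINITE); /* request ownership */
    if (wait == WAIT_OBJECT_0 || wait == WAIT_ABANDONED) {
        /* WAIT_ABANDONED means a previous owner terminated without
           releasing; the shared state may be inconsistent. */
        g_shared++;
        ReleaseMutex(g_mutex);   /* release ownership */
    }
    return 0;
}

int main(void)
{
    g_mutex = CreateMutex(NULL, FALSE, NULL);  /* FALSE: no initial ownership */

    HANDLE h[2];
    h[0] = CreateThread(NULL, 0, Worker, NULL, 0, NULL);
    h[1] = CreateThread(NULL, 0, Worker, NULL, 0, NULL);
    WaitForMultipleObjects(2, h, TRUE, INFINITE);

    printf("g_shared = %d\n", g_shared);
    CloseHandle(h[0]); CloseHandle(h[1]);
    CloseHandle(g_mutex);
    return 0;
}
```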

 

Semaphore Objects

 

The fundamental principle is: two or more processes can cooperate by means of simple signals, such that a process can be forced to stop at a specified place until it has received a specific signal. Any complex coordination requirement can be satisfied by the appropriate structure of signals. For signaling, special variables called semaphores are used. To transmit a signal by a semaphore s, a process executes the primitive signal(s). To receive a signal by semaphore s, a process executes the primitive wait(s); if the corresponding signal has not yet been transmitted, the process is suspended until the transmission takes place. In Windows, a semaphore object is a synchronization object that maintains a count between zero and a specified maximum value. The count is:

 

  1. Decremented each time a thread completes a wait for the semaphore object and

  2. Incremented each time a thread releases the semaphore.

 

When the count reaches zero, no more threads can successfully wait for the semaphore object state to become signaled. The state of a semaphore is set to signaled when its count is greater than zero, and non-signaled when its count is zero. The semaphore object is useful in controlling a shared resource that can support a limited number of users. It acts as a gate that limits the number of threads sharing the resource to a specified maximum number. For example, an application might place a limit on the number of windows that it creates. It uses a semaphore with a maximum count equal to the window limit, decrementing the count whenever a window is created and incrementing it whenever a window is closed. The application specifies the semaphore object in a call to one of the wait functions before each window is created. When the count is zero, indicating that the window limit has been reached, the wait function blocks execution of the window-creation code.

A thread uses the CreateSemaphore() function to create a semaphore object. The creating thread specifies the initial count and the maximum value of the count for the object. The initial count must be neither less than zero nor greater than the maximum value. The creating thread can also specify a name for the semaphore object. Threads in other processes can open a handle to an existing semaphore object by specifying its name in a call to the OpenSemaphore() function. If more than one thread is waiting on a semaphore, a waiting thread is selected. Do not assume a first-in, first-out (FIFO) order. External events such as kernel-mode Asynchronous Procedure Calls (APCs) can change the wait order.

Each time one of the wait functions returns because the state of a semaphore was set to signaled, the count of the semaphore is decreased by one. The ReleaseSemaphore() function increases a semaphore's count by a specified amount. The count can never be less than zero or greater than the maximum value.

The initial count of a semaphore is typically set to the maximum value. The count is then decremented from that level as the protected resource is consumed. Alternatively, you can create a semaphore with an initial count of zero to block access to the protected resource while the application is being initialized. After initialization, you can use ReleaseSemaphore() to increment the count to the maximum value. A thread that owns a mutex object can wait repeatedly for the same mutex object to become signaled without its execution becoming blocked. A thread that waits repeatedly for the same semaphore object, however, decrements the semaphore's count each time a wait operation is completed; the thread is blocked when the count gets to zero. Similarly, only the thread that owns a mutex can successfully call the ReleaseMutex() function, though any thread can use ReleaseSemaphore() to increase the count of a semaphore object. A thread can decrement a semaphore's count more than once by repeatedly specifying the same semaphore object in calls to any of the wait functions. However, calling one of the multiple-object wait functions with an array that contains multiple handles of the same semaphore does not result in multiple decrements.
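The gate behavior described above can be sketched as a minimal Windows-only example: a semaphore with a maximum count of 2 lets at most two of five threads into the protected region at a time (the counts and sleep time are illustrative):

```c
#include <windows.h>
#include <stdio.h>

HANDLE g_sem;   /* limits concurrent users of the resource to two */

DWORD WINAPI Worker(LPVOID arg)
{
    WaitForSingleObject(g_sem, INFINITE);     /* decrements the count */
    printf("thread %lu inside the gate\n", GetCurrentThreadId());
    Sleep(200);                               /* use the limited resource */
    ReleaseSemaphore(g_sem, 1, NULL);         /* increments the count by 1 */
    return 0;
}

int main(void)
{
    /* initial count 2, maximum count 2: start with the gate fully open */
    g_sem = CreateSemaphore(NULL, 2, 2, NULL);

    HANDLE h[5];
    for (int i = 0; i < 5; i++)
        h[i] = CreateThread(NULL, 0, Worker, NULL, 0, NULL);
    WaitForMultipleObjects(5, h, TRUE, INFINITE);

    for (int i = 0; i < 5; i++)
        CloseHandle(h[i]);
    CloseHandle(g_sem);
    return 0;
}
```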

 


 

Thread Pooling

 

There are many applications that create threads that spend a great deal of time in the sleeping state waiting for an event to occur. Other threads may enter a sleeping state only to be awakened periodically to poll for a change or update status information. Thread pooling enables you to use threads more efficiently by providing your application with a pool of worker threads that are managed by the system. At least one thread monitors the status of all wait operations queued to the thread pool. When a wait operation has completed, a worker thread from the thread pool executes the corresponding callback function. You can also queue work items that are not related to a wait operation to the thread pool. To request that a work item be handled by a thread in the thread pool, call the QueueUserWorkItem() function. This function takes a pointer to the function that will be called by a thread selected from the thread pool, along with a parameter to pass to that function. There is no way to cancel a work item after it has been queued. Timer-queue timers and registered wait operations also use the thread pool. Their callback functions are queued to the thread pool. You can also use the BindIoCompletionCallback() function to post asynchronous I/O operations. On completion of the I/O, the callback is executed by a thread pool thread.

The thread pool is created the first time you call QueueUserWorkItem() or BindIoCompletionCallback(), or when a timer-queue timer or registered wait operation queues a callback function. By default, the number of threads that can be created in the thread pool is about 500. Each thread uses the default stack size and runs at the default priority. There are two types of worker threads in the thread pool: I/O and non-I/O. An I/O worker thread is a thread that waits in an alertable wait state. Work items are queued to I/O worker threads as asynchronous procedure calls (APC). You should queue a work item to an I/O worker thread if it should be executed in a thread that waits in an alertable state. A non-I/O worker thread waits on I/O completion ports. Using non-I/O worker threads is more efficient than using I/O worker threads. Therefore, you should use non-I/O worker threads whenever possible. Both I/O and non-I/O worker threads do not exit if there are pending asynchronous I/O requests. Both types of threads can be used by work items that initiate asynchronous I/O completion requests. However, avoid posting asynchronous I/O completion requests in non-I/O worker threads if they could take a long time to complete. To use thread pooling, the work items and all the functions they call must be thread-pool safe. A safe function does not assume that the thread executing it is a dedicated or persistent thread. In general, you should avoid using thread local storage and queuing asynchronous calls that require a persistent thread, such as the RegNotifyChangeKeyValue() function. However, such functions can be queued to a persistent worker thread using QueueUserWorkItem() with the WT_EXECUTEINPERSISTENTTHREAD option. Note that thread pooling is not compatible with the single-threaded apartment (STA) model.
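As a minimal Windows-only sketch of QueueUserWorkItem(), the following queues three work items to the system thread pool; the Sleep() at the end is a crude stand-in for real completion tracking, since a queued work item has no handle to wait on:

```c
#include <windows.h>
#include <stdio.h>

/* The work item uses the same signature as a thread function. */
DWORD WINAPI WorkItem(LPVOID param)
{
    printf("work item %d on pool thread %lu\n",
           (int)(INT_PTR)param, GetCurrentThreadId());
    return 0;
}

int main(void)
{
    for (int i = 0; i < 3; i++)
        QueueUserWorkItem(WorkItem, (LPVOID)(INT_PTR)i, WT_EXECUTEDEFAULT);

    /* Crude: give the pool threads time to run. A real application would
       signal an event from the work item and wait on it instead. */
    Sleep(1000);
    return 0;
}
```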

 


Further reading and digging:

  1. Unicode version of the Win32 programming tutorials and more details on Windows processes, threads, and synchronization.

  2. For multibyte and Unicode characters and localization, please refer to Locale, wide characters & Unicode (Story) and Windows users & groups programming tutorials (Implementation).

  3. The structure, enum, union, and typedef story can be found in C/C++ struct, enum, union & typedef.

  4. Check the best selling C / C++ and Windows books at Amazon.com.

  5. Microsoft Visual C++, online MSDN.