Sunday, December 31, 2006


Category of TeleStar's Notes in 2006

Tuesday, December 26, 2006


ProcessThread and Thread in C#



A process is a collection of virtual memory space, code, data, and system resources. A thread is code that is to be serially executed within a process. A processor executes threads, not processes, so each 32-bit application has at least one process, and a process always has at least one thread of execution, known as the primary thread. A process can have multiple threads in addition to the primary thread. Prior to the introduction of multiple threads of execution, applications were all designed to run on a single thread of execution.

Processes communicate with one another through messages, using Microsoft's Remote Procedure Call (RPC) technology to pass information to one another. There is no difference to the caller between a call coming from a process on a remote machine and a call coming from another process on the same machine.

When a thread begins to execute, it continues until it is killed or until it is interrupted by a thread with higher priority (by a user action or the kernel's thread scheduler). Each thread can run separate sections of code, or multiple threads can execute the same section of code. Threads executing the same block of code maintain separate stacks. Each thread in a process shares that process's global variables and resources.

The thread scheduler determines when and how often to execute a thread, according to a combination of the process's priority class attribute and the thread's base priority. You set a process's priority class attribute by calling the Win32® function SetPriorityClass, and you set a thread's base priority with a call to SetThreadPriority.
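In C#, the same two knobs surface as managed properties; here is a small sketch (the class name PrioritySketch is mine): Process.PriorityClass corresponds to SetPriorityClass, and Thread.Priority to the thread's base priority.

```csharp
using System;
using System.Diagnostics;
using System.Threading;

class PrioritySketch
{
    public static void Main()
    {
        // Process.PriorityClass is the managed face of SetPriorityClass.
        Process me = Process.GetCurrentProcess();
        Console.WriteLine("process priority class: " + me.PriorityClass);

        // Thread.Priority is the managed face of the base priority
        // that SetThreadPriority would set.
        Thread worker = new Thread(delegate()
        {
            Console.WriteLine("worker priority: " + Thread.CurrentThread.Priority);
        });
        worker.Priority = ThreadPriority.BelowNormal; // set before Start
        worker.Start();
        worker.Join();
    }
}
```

On non-Windows runtimes the setters may behave as hints rather than hard guarantees, so treat the priorities as advisory.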

Multithreaded applications must avoid two threading problems: deadlocks and races. A deadlock occurs when each thread is waiting for the other to do something. The COM call control helps prevent deadlocks in calls between objects. A race condition occurs when one thread finishes before another on which it depends, causing the former to use a bogus value because the latter has not yet supplied a valid one. COM supplies some functions specifically designed to help avoid race conditions in out-of-process servers. (See Out-of-Process Server Implementation Helpers.)
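To make the race concrete, here is a minimal sketch (the class and field names are mine, not from the text above): two threads do a read-modify-write on a shared counter, and the lock is what keeps one thread from overwriting the other's half-finished update.

```csharp
using System;
using System.Threading;

class RaceSketch
{
    public static int counter = 0;
    static readonly object gate = new object();

    static void Bump()
    {
        for (int i = 0; i < 100000; i++)
        {
            lock (gate)        // remove this lock and updates can be lost
            {
                counter++;     // read-modify-write: three steps, not one
            }
        }
    }

    public static void Main()
    {
        Thread a = new Thread(Bump);
        Thread b = new Thread(Bump);
        a.Start(); b.Start();
        a.Join(); b.Join();
        Console.WriteLine(counter); // 200000 with the lock held
    }
}
```

Remove the lock and, on a multi-core machine, the final count will usually fall short of 200000.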


In most multithreaded operating systems, a process gets its own memory address space; a thread doesn't. Threads typically share the heap belonging to their parent process. For instance, a JVM runs in a single process in the host O/S. Threads in the JVM share the heap belonging to that process; that's why several threads may access the same object. Typically, even though they share a common heap, threads have their own stack space. This is how one thread's invocation of a method is kept separate from another's. This is all a gross oversimplification, but it's accurate enough at a high level. Lots of details differ between operating systems.
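A short C# sketch of that split (the names are mine): the array lives on the process-wide heap and both threads see it, while each thread's slot local lives on that thread's own stack.

```csharp
using System;
using System.Threading;

class HeapStackSketch
{
    // One object on the shared heap, visible to every thread in the process.
    public static int[] shared = new int[2];

    static void Worker(object slotObj)
    {
        int slot = (int)slotObj;   // 'slot' lives on THIS thread's private stack
        for (int i = 0; i < 1000; i++)
            shared[slot]++;        // both threads touch the same heap object
    }

    public static void Main()
    {
        Thread a = new Thread(Worker);
        Thread b = new Thread(Worker);
        a.Start(0); b.Start(1);    // each thread gets its own 'slot' local
        a.Join(); b.Join();
        Console.WriteLine(shared[0] + "," + shared[1]); // 1000,1000
    }
}
```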


The Process class in C# lives in namespace System.Diagnostics; judging from the Rotor source, it is just a thin wrapper around an operating-system process.



ProcessThread in C# also lives in namespace System.Diagnostics. There is no Rotor source to read for it, but it is presumably a similarly thin wrapper around an OS thread, so we still get its extra properties to use.



However, we normally use threads from namespace System.Threading. That Thread class was created fresh for .NET and carries no timing information at all, so we cannot find an algorithm's bottleneck by comparing how much time each thread actually spends.
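The timing data lives on the OS side of the fence: ProcessThread exposes it (for example TotalProcessorTime), while System.Threading.Thread does not. A hedged sketch (per-thread times are not available on every platform, hence the try/catch):

```csharp
using System;
using System.Diagnostics;

class ThreadTimes
{
    public static void Main()
    {
        // ProcessThread (the OS view of a thread) carries timing data that
        // System.Threading.Thread (the managed view) simply does not have.
        Process me = Process.GetCurrentProcess();
        foreach (ProcessThread pt in me.Threads)
        {
            try
            {
                Console.WriteLine("OS thread " + pt.Id + ": CPU " + pt.TotalProcessorTime);
            }
            catch (Exception)
            {
                // some platforms do not expose per-thread times
                Console.WriteLine("OS thread " + pt.Id + ": CPU time unavailable");
            }
        }
    }
}
```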

So how exactly does the virtual machine schedule the threads in the thread pool? Can it really ignore how much time each thread spends? The answer I found online, from Chris Brumme, is this:

Threads, fibers, stacks & address space
by Chris Brumme

Every so often, someone tries to navigate from a managed System.Threading.Thread object to the corresponding ThreadId used by the operating system.

System.Diagnostics.ProcessThread exposes the Windows notion of threads. In other words, the OS threads active in the OS process.

System.Threading.Thread exposes the CLR’s notion of threads. These are logical managed threads, which may not have a strict correspondence to the OS threads. For example, if you create a new managed thread but don’t start it, there is no OS thread corresponding to it. The same is true if the thread stops running – the managed object might be GC-reachable, but the OS thread is long gone. Along the same lines, an OS thread might not have executed any managed code yet. When this is the case, there is no corresponding managed Thread object.
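Brumme's first example is easy to check from code; a small sketch (the class name is mine): a managed Thread that has not been started is not backed by any OS thread yet.

```csharp
using System;
using System.Threading;

class UnstartedThread
{
    public static void Main()
    {
        // The managed object exists as soon as we 'new' it...
        Thread t = new Thread(delegate() { });

        // ...but no OS thread backs it until Start() is called.
        Console.WriteLine(t.ThreadState == ThreadState.Unstarted); // True
        Console.WriteLine(t.IsAlive);                              // False

        t.Start();
        t.Join();

        // The OS thread is gone, yet the managed object is still reachable.
        Console.WriteLine(t.ThreadState == ThreadState.Stopped);   // True
    }
}
```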

A more serious mismatch between OS threads and managed threads occurs when the CLR is driven by a host which handles threading explicitly. Even in V1 of the CLR, our hosting interfaces reveal primitive support for fiber scheduling. Specifically, look at ICorRuntimeHost’s LogicalThreadState methods. But please don’t use those APIs – it turns out that they are inadequate for industrial-strength fiber support. We’re working to get them where they need to be.

In a future CLR, a host will be able to drive us to map managed threads to host fibers, rather than to OS threads. The CLR cooperates with the host’s fiber scheduler in such a way that many managed threads are multiplexed to a single OS thread, and so that the OS thread chosen for a particular managed thread may change over time.

When your managed code executes in such an environment, you will be glad that you didn’t confuse the notions of managed thread and OS thread.

When you are running on Windows, one key to good performance is to minimize the number of OS threads. Ideally, the number of OS threads is the same as the number of CPUs – or a small multiple thereof. But you may have to turn your application design on its head to achieve this. It’s so much more convenient to have a large number of (logical) threads, so you can keep the state associated with each task on a stack.

When faced with this dilemma, developers sometimes pick fibers as the solution. They can keep a large number of cooperatively scheduled light-weight fibers around, matching the number of server requests in flight. But at any one time only a small number of these fibers are actively scheduled on OS threads, so Windows can still perform well.

SQL Server supports fibers for this very reason.

However, it's hard to imagine that fibers are worth the incredible pain in any but the most extreme cases. If you already have a fiber-based system that wants to run managed code, or if you’re like SQL Server and must squeeze that last 10% from a machine with lots of CPUs, then the hosting interfaces will give you a way to do this. But if you are thinking of switching to fibers because you want lots of threads in your process, the work involved is enormous and the gain is slight.

Instead, consider techniques where you might keep most of your threads blocked. You can release some of those threads based on CPU utilization dropping, and then use various application-specific techniques to get them to re-block if you find you have released too many. This kind of approach avoids the rocket science of non-preemptive scheduling, while still allowing you to have a larger number of threads than could otherwise be efficiently scheduled by the OS.

Of course, the very best approach is to just have fewer threads. If you schedule your work against the thread pool, we'll try to achieve this on your behalf. Our threadpool will pay attention to CPU utilization, managed blocking, garbage collections, queue lengths and other factors – then make sensible dynamic decisions about how many work items to execute concurrently. If that’s what you need, stay away from fibers.
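Queuing work against the pool, as Brumme suggests, looks like this in C# (a minimal sketch; the event is only there so the sample can wait for the item to finish):

```csharp
using System;
using System.Threading;

class PoolSketch
{
    public static int poolThreadId;

    public static void Main()
    {
        ManualResetEvent done = new ManualResetEvent(false);

        // Hand the work to the pool; the CLR decides how many OS threads
        // actually service the queue, based on load.
        ThreadPool.QueueUserWorkItem(delegate(object state)
        {
            poolThreadId = Thread.CurrentThread.ManagedThreadId;
            Console.WriteLine("work item ran on pool thread " + poolThreadId);
            done.Set();
        });

        done.WaitOne(); // block until the pooled work item finishes
        done.Close();
    }
}
```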

If you have lots of threads or fibers, you may have to reduce your default stack size. On Windows, applications get 2 GB of address space. With a default stack size of 1 MB, you will run out of user address space just before 2000 threads. Clearly that’s an absurd number of threads. But it’s still the case that with a high number of threads, address space can quickly become a scarce resource.

On old versions of Windows, you controlled the stack sizes of all the threads in a process by bashing a value in the executable image. Starting with Windows XP and Windows Server 2003, you can control it on a per-thread basis. However, this isn’t exposed directly because:

1) It is a recent addition to Windows.

2) It’s not a high priority for non-EXE’s to control their stack reservation, since there are generally few threads and lots of address space.

3) There is a work-around.

The work-around is to PInvoke to CreateThread, passing a Delegate to a managed method as your LPTHREAD_START_ROUTINE. Be sure to specify STACK_SIZE_PARAM_IS_A_RESERVATION in the CreationFlags. This is clumsy compared to calling Thread.Start(), but it works.
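For what it's worth, Brumme's post predates .NET 2.0; since 2.0 the Thread constructor has an overload that takes maxStackSize directly, which makes the CreateThread P/Invoke work-around unnecessary for the common case. A sketch:

```csharp
using System;
using System.Threading;

class SmallStack
{
    public static bool ran;

    public static void Main()
    {
        // .NET 2.0's Thread(ThreadStart, int) overload reserves a smaller
        // stack (256 KB here) instead of the 1 MB default.
        Thread t = new Thread(delegate() { ran = true; }, 256 * 1024);
        t.Start();
        t.Join();
        Console.WriteLine("thread with small stack ran: " + ran);
    }
}
```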

Incidentally, there’s another way to deal with the scarce resource of 2 GB of user address space per process. You can boot the operating system with the /3GB switch and – starting with the version of the CLR we just released – any managed processes marked with IMAGE_FILE_LARGE_ADDRESS_AWARE can now take advantage of the increased user address space. Be aware that stealing all that address space from the kernel carries some real costs. You shouldn’t be running your process with 3 GB of user space unless you really need to.

The one piece of guidance from all of the above is to reduce the number of threads in your process by leveraging the threadpool. Even client applications should consider this, so they can work well in Terminal Server scenarios where a single machine supports many attached clients.

flier is quite right: Chris Brumme's blog is a must-read.





// Sketch of an event-based worker-thread class (MyThread, ThreadWork and
// WorkComplete are the author's own types, not BCL types):
MyThread thread = new MyThread();
thread.Work += new ThreadWork(Calculate);
thread.WorkComplete += new WorkComplete(DisplayResult);
thread.Start();

void Calculate(object sender, EventArgs e)
{
    // runs on the worker thread: do the long computation here
}

void DisplayResult(object sender, EventArgs e)
{
    // runs when the work finishes: show the result
}






_Task = new newasynchui();
_Task.TaskProgressChanged += new TaskEventHandler(OnTaskProgressChanged1);

private void OnTaskProgressChanged1(object sender, TaskEventArgs e)
{
    if (InvokeRequired) // not on the UI thread, so re-post the call asynchronously
    {
        TaskEventHandler TPChanged1 = new TaskEventHandler(OnTaskProgressChanged1);
        this.BeginInvoke(TPChanged1, new object[] { sender, e });
        return;
    }
    progressBar.Value = e.Progress;
}






This pattern keeps the method usable whether it runs single-threaded or multi-threaded, but it mixes threading logic into the UI logic: a task that used to be a single line (progressBar.Value = e.Progress;) becomes quite involved. If the thread class is shipped as a shared library, this puts a fairly high demand on whoever writes the event handlers. Is there a better way?


System.ComponentModel.BackgroundWorker bw = new System.ComponentModel.BackgroundWorker();

bw.DoWork += new System.ComponentModel.DoWorkEventHandler(bw_DoWork);

bw.RunWorkerCompleted += new System.ComponentModel.RunWorkerCompletedEventHandler(bw_RunWorkerCompleted);

bw.RunWorkerAsync();

static void bw_DoWork(object sender, System.ComponentModel.DoWorkEventArgs e)
{
    Console.WriteLine("DoWork thread id: " + System.Threading.Thread.CurrentThread.ManagedThreadId);
}

static void bw_RunWorkerCompleted(object sender, System.ComponentModel.RunWorkerCompletedEventArgs e)
{
    Console.WriteLine("RunWorkerCompleted thread id: " + System.Threading.Thread.CurrentThread.ManagedThreadId);
}

Note that I print the current thread ID in both handlers. When we run the code above in a Windows Forms application, we are surprised to find that the bw_RunWorkerCompleted callback actually runs on the UI thread; in other words, inside that method we no longer need Invoke or BeginInvoke to touch WinForms controls. Stranger still, when the same code runs in a console application, the thread ID printed by bw_RunWorkerCompleted is not the same as the main thread's ID.
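The behaviour difference hinges on SynchronizationContext: WinForms installs a WindowsFormsSynchronizationContext on the UI thread, while a console app's main thread has none, so the completion callback can only land on a pool thread. A sketch to observe this (console side only, since it needs no message pump):

```csharp
using System;
using System.Threading;

class ContextProbe
{
    public static void Main()
    {
        SynchronizationContext ctx = SynchronizationContext.Current;

        // In a plain console app no context is installed, which is why
        // BackgroundWorker's completion event fires on a pool thread there.
        if (ctx == null)
            Console.WriteLine("no SynchronizationContext: callbacks fall back to the thread pool");
        else
            Console.WriteLine("installed context: " + ctx.GetType().Name);
    }
}
```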


Reading the source of this class, we find that it relies on AsyncOperation.Post(SendOrPostCallback d, object arg):




public virtual void Post(SendOrPostCallback d, object state)
{
    // by default, simply queue the callback to the thread pool
    ThreadPool.QueueUserWorkItem(new WaitCallback(d.Invoke), state);
}




public override void Post(SendOrPostCallback d, object state)
{
    if (this.controlToSendTo != null)
    {
        // marshal the callback onto the thread that created the control
        this.controlToSendTo.BeginInvoke(d, new object[] { state });
    }
}


Summary: at the same time, this class also provides a progress-changed event and lets the user cancel the work. It is full-featured; internally it uses the thread pool, which to some extent avoids the resource drain of creating many threads, and it solves the marshalling problem through SynchronizationContext, leaving the logic of our callback code simple and clear. Recommended.

WinForms UI Thread Invokes: An In-Depth Review of Invoke/BeginInvoke/InvokeRequired

By Justin Rogers

Abstract: Marshalling the execution of your code onto the UI thread in the Windows Forms environment is critical to prevent cross-thread usage of UI code. Most people don't understand how or when they'll need to use the marshalling behavior, or under what circumstances it is required and when it is not. Other users don't understand what happens when you use the marshalling behavior but it isn't needed. In actuality it has no negative effects on stability; any negative side effects are limited to performance.
Understanding the semantics of when your callback methods will be called, in what order, and how might be very important to your application. In addition to the default marshalling behavior, I'll be covering special considerations for enhancing the marshalling behavior once we fully understand how it works. We'll also cover all of the normal scenarios and uses for code execution marshalling to make this a complete Windows Forms marshalling document.
UCS 1: Using InvokeRequired and Invoke for Synchronous Marshalling, the default scenario
UCS 2: Using BeginInvoke for Asynchronous Marshalling
InvokeRequired and how it works
Invoke operation on the UI thread and from a different thread
InvokeMarshaledCallbacks and how it handles the callback queue
BeginInvoke operation on the UI thread and from a different thread
UCS 3: Using BeginInvoke to change a property after other events are processed, and why it can fail
Public and Internal Methods covered with a short description of what they do
1. UCS 1: Using InvokeRequired and Invoke for Synchronous Marshalling, the default scenario

I call this the default scenario, because it identifies the most prominent use of UI thread marshalling. In this scenario the user is either on the UI thread or they are not, and most likely they aren't sure. This can occur when you use common helper methods for acting on the UI that are called from your main code (most likely on the UI thread), and in code running on worker threads.
You can always tell if an Invoke is going to be required by calling InvokeRequired. This property finds the thread the control's handle was created on and compares it to the current thread. In doing so it can tell you whether or not you'll need to marshal. This is extremely easy to use since it is a basic property on Control. Just be aware that there is some work going on inside the property, and it should possibly have been made a method instead.
Button b = new Button(); // Creates button on the current thread
if ( b.InvokeRequired ) {
  // This shouldn't happen since we are on the same thread
} else {
  // We should fall into here
}
If your code is running on a thread that the control was not created on then InvokeRequired will return true. In this case you should either call Invoke or BeginInvoke on the control before you execute any code. Invoke can either be called with just a delegate, or you can specify arguments in the form of an object[]. This part can be confusing for a lot of users, because they don't know what they should pass to the Invoke method in order to get their code to run. For instance, let's say you are trying to do something simple, like call a method like Focus(). Well, you could write a method that calls Focus() and then pass that to Invoke.
myControl.Invoke(new MethodInvoker(myControl.Focus));
Notice I used MethodInvoker. This is a special delegate that takes no parameters, so it can be used to call any method that takes 0 parameters. In this case Focus() takes no arguments, so things work. I'm telling the control to invoke the method right off of myControl, so I don't need any additional information. What happens if you need to call a bunch of methods on myControl? In that case you'll need to define a method that contains all of the code you need run and then Invoke it.
private void BunchOfCode() {
  myControl.Focus();
  myControl.SomethingElse();
}

myControl.Invoke(new MethodInvoker(this.BunchOfCode));
This solves one problem, but leaves another. We just wrote code that works only for myControl, because we hard coded the control instance into our method. We can overcome this by using an EventHandler syntax instead. We'll cover the semantics of this later, so I'll just write some code that works now.
private void BunchOfCode(object sender, EventArgs e) {
  Control c = sender as Control;
  if ( c != null ) {
    c.Focus();
    c.SomethingElse();
  }
}

myControl.Invoke(new EventHandler(BunchOfCode));
EventArgs is always going to be empty, while sender will always be the control that Invoke was called on. There is also a generic helper method syntax you can use to circumvent any of these issues that makes use of InvokeRequired. I'll give you a version that works with MethodInvoker and one that works with EventHandler for completeness.
private void DoFocusAndStuff() {
  if ( myControl.InvokeRequired ) {
    myControl.Invoke(new MethodInvoker(this.DoFocusAndStuff));
  } else {
    myControl.Focus();
    myControl.SomethingElse();
  }
}

private void DoFocusAndStuffGeneric(object sender, EventArgs e) {
  Control c = sender as Control;
  if ( c != null ) {
    if ( c.InvokeRequired ) {
      c.Invoke(new EventHandler(this.DoFocusAndStuffGeneric));
    } else {
      c.Focus();
      c.SomethingElse();
    }
  }
}
Once you've set up these helper functions, you can just call them and they handle cross thread marshalling for you if needed. Notice how each method simply calls back into itself as the target of the Invoke call. This lets you put all of the code in a single place. This is a great abstraction that you can add to your application to automatically handle marshalling for you. We haven't yet had to define any new delegates to handle strange method signatures, so these techniques have low impact on the complexity of your code. I'll wrap up the Invoke use case scenario there and move into the BeginInvoke scenario.
2. UCS 2: Using BeginInvoke for Asynchronous Marshalling

Whenever you call Invoke, you have to wait for the return call, so your current thread hangs until the remote operation completes. This can take some time since lots of things need to happen in order to schedule your code on the UI thread and have it execute. While you don't really have to worry that an Invoke might block indefinitely, you still can't determine exactly how long it will take (unless it really wasn't required in the first place, but we'll get to that later). In these cases you'll want to call Invoke asynchronously.
Calling your code asynchronously is similar to calling it through Invoke. The only difference is that BeginInvoke will return immediately. You can always check for the results of your operation by calling EndInvoke, but you don't have to. In general, you'll almost never use EndInvoke unless you actually want the return value from the method, which is fairly rare. The same plumbing is in the back-end for BeginInvoke as for Invoke, so all we'll be doing is changing our code from UCS 1 to use BeginInvoke.
private void DoFocusAndStuff() {
  if ( myControl.InvokeRequired ) {
    myControl.BeginInvoke(new MethodInvoker(this.DoFocusAndStuff));
  } else {
    myControl.Focus();
    myControl.SomethingElse();
  }
}

private void DoFocusAndStuffGeneric(object sender, EventArgs e) {
  Control c = sender as Control;
  if ( c != null ) {
    if ( c.InvokeRequired ) {
      c.BeginInvoke(new EventHandler(this.DoFocusAndStuffGeneric));
    } else {
      c.Focus();
      c.SomethingElse();
    }
  }
}
What happens if you do need the return value? Well, then the use case changes quite a bit. You'll need to wait until the IAsyncResult has been signalled complete and then call EndInvoke on this object to get your value. The following code will grab the IAsyncResult and then immediately call EndInvoke. Note that since the result is probably not ready yet, EndInvoke will hang. Using this combination of BeginInvoke/EndInvoke is the same as just calling Invoke.
IAsyncResult result = myControl.BeginInvoke(new MethodInvoker(myControl.Hide));
myControl.EndInvoke(result);
So we'll change our behavior to check for completion status. We'll need to find some way to poll the completion status value so we don't hang our current thread and can continue doing work while we wait. Normally you'll just put places in your code to check the result status and return. We don't have the time nor space to make up such an elaborate sample here, so we'll just pretend we are doing work.
IAsyncResult result = myControl.BeginInvoke(new MethodInvoker(myControl.Hide));
while ( !result.IsCompleted ) {
  // Do work somehow
}
myControl.EndInvoke(result);
The BeginInvoke use case scenario isn't much different from the Invoke scenario. The underlying reason behind using one over the other is simply how long you are willing to wait for the result. There is also the matter of whether you want the code to execute now or later. You see, if you are on the UI thread already and issue an Invoke the code runs immediately. If you instead issue a BeginInvoke you can continue executing your own code, and then only during the next set of activity on the message pump will the code be run. If you have some work to finish up before you yield execution then BeginInvoke is the answer for you.
You have to be careful when using BeginInvoke because you never know when your code will execute. The only thing you are assured is that your code will be placed on the queue and executed in the order it was placed there. This is the same guarantee you get for Invoke as well, though Invoke places your code on the queue and then exhausts it (running any queued operations). We'll examine this in more detail in later sections. For now, let's take a hard look at InvokeRequired.
3. InvokeRequired and how it works

This is a read-only property that does quite a bit of work. You could say it ran in determinate time in most cases, but there are degenerate cases where it can take much longer. In fact the only time it is determinate is if IsHandleCreated is true, meaning the control you are using is fully instantiated and has a windows handle associated with it.
If the handle is created then control falls into the check logic to see if the windows thread process id is the same as the current thread id. They use GetWindowThreadProcessID, a Win32 API call, to check the handle and find its thread and process ID (note the process ID doesn't appear to be used). Then they grab the current thread ID through none other than GetCurrentThreadID. The result of InvokeRequired is nothing more than (threadID != currentThreadID). Pretty basic, eh?
Things get more difficult when your control's handle is not created yet. In this case they have to find what they call a marshalling control for your control. This process can take some time. They walk the entire control hierarchy trying to find out if any of your parent controls have been instantiated yet and have a valid handle. Normally they'll find one. As soon as they do they fall out and return that control as your marshalling control. If they can't find any they have a fallback step: they get the parking window. They make one of these parking windows on every thread that has a message pump, apparently, so no matter what thread you create your controls on there should be at least one control that can be used as the marshalling control (unless maybe you are running in the designer ;-).
Application.GetParkingWindow is nasty. After all, this is the final fallback and the last ditch effort to find some control that can accept your windows message. The funny thing here is that GetParkingWindow is extremely deterministic if your control is already created. They have some code that basically gets the ThreadContext given the thread ID of your control. That is what we've been looking for this entire time, so that code-path must be used somewhere else (darn IL is getting muddied, thank god these are small methods).
Then they start doing the magic. They assume the control is on the current thread. This is just an assumption, and it might not be true, but they make it for the sake of running the method. They get the parking window off of this current ThreadContext and return that. If it hasn't been created yet, we are really screwed because that was our last chance to find a marshalling control. At this point, if we still don't have a marshalling control, they return the original control you passed in.
At the end of this entire process, if we find a marshalling control, that is used with GetWindowThreadProcessID. If not, we simply return false, indicating that an Invoke is not required. This is important. It basically means that if the handle isn't created, it doesn't matter WHAT thread you are on when you call into the control. The reason is that there isn't any Handle, which means no real control exists yet, and all of the method calls will probably fail anyway (some won't, but those that require an HWND or Windows Handle will). This also means you don't always have to call control methods on the UI thread, only those that aren't thread safe. With InvokeRequired to the side, it is time to talk about Invoke and what it goes through.
4. Invoke operation on the UI thread and from a different thread

Time to examine the Invoke operation and what is involved. To start with, we'll examine what happens when the Invoke operation is happening on the same thread as the UI thread for the control. This is a special case, since it means we don't have to marshal across a thread boundary in order to call the delegate in question.
All of the real work happens in MarshaledInvoke. This call is made on the marshalling control, so the first step is to get the marshaling control through FindMarshalingControl. The first Invoke method, without arguments, calls the Invoke method with a null argument set. The overridden Invoke in turn calls MarshaledInvoke on the marshaling control, passing in the current caller (note we need this because the marshalling control might be different from the control we called Invoke on), the delegate we are marshalling, the arguments, and whether or not we want synchronous marshaling. That last parameter is there so we can use the same method for asynchronous invokes later.
// The method looks something like this and it is where all of the action occurs
object MarshaledInvoke(Control invokeControl, Delegate method, object[] arguments, bool isSynchronous);
If the handle on the marshaling control is invalid, you get the classic exception telling you the handle isn't created and that the Invoke or whatnot failed. There is also some gook about ActiveX controls in there that I don't quite understand, but they appear to be demanding some permissions. Then comes the important part for calling Invoke on the UI thread. They again check the handle's thread id against the current thread id, and if we are running synchronously, they set a special bool indicating we are running synchronously and are operating on the same thread. This is the short-circuit code that gets run only when you call Invoke and are on the same thread.
Since the special case is enabled, we'll immediately call the InvokeMarshaledCallbacks method rather than posting a message to the queue. Note all other entries into this method, and all other conditions will cause a windows message to be posted and InvokeMarshaledCallbacks will later be called from the WndProc of the control once the message is received.
There is some more code before this point. Basically, they make a copy of the arguments you pass in. This is pretty smart, since I'm guessing you could try changing the arguments in the original array and thus the arguments to your delegate if they didn't make the copy. It also means, once Invoke or BeginInvoke is called, you can change your object array of parameters, aka you can reuse the array, which is pretty nice for some scenarios.
After they copy your parameters into a newly allocated array they take the liberty of grabbing the current stack so they can reattach it to the UI thread. This is for security purposes so you can't try to Invoke code on the UI thread that you wouldn't have been able to run on your own thread. They use CompressedStack for this operation and the GetCompressedStack method. While this is a public class inside of mscorlib.dll, there is NO documentation for it. It seems to me that this might be a very interesting security mechanism for API developers, but they don't give you any info on it. Maybe I'll write something about how to use it later.
With this in place, they construct a new ThreadMethodEntry. These guys are the work horse. They get queued into a collection, and are later used to execute your delegate. It appears the only additional parameter used to create this class over calling MarshaledInvoke is the CompressedStack. They also used the copied arguments array instead of the original.
They then grab the queue for these guys off of the property bag. You could never do this yourself, because they index the properties collection using object instances that you can't get access to. This is a very interesting concept, to create an object used to index a hashtable or other collection that nobody else has access to. They store all of the WinForms properties this way, as well as the events.
Finally, they queue the ThreadMethodEntry onto the queue and continue. They appear to do a bunch of locking to make all of this thread-safe. While the Invoke structure is a pain in the rear, I'm glad they reserve all of this locking to a few select methods that handle all of the thread safe operations.
Since this is an Invoke there is additional code required to make sure the operation happens synchronously. The ThreadMethodEntry implements IAsyncResult directly, so on Invoke calls, we check to make sure it isn't already completed (a call to IsCompleted), and if it isn't, we grab the AsyncWaitHandle and do a WaitOne call. This will block our thread until the operation completes and we can return our value. Why did we make a call to IsCompleted first? Well, remember that call we made to InvokeMarshaledCallbacks? Well, when we do that our operation will already be complete once we get to that portion of the code. If we didn't make this check and instead just started a WaitOne on the handle, we'd hang indefinitely.
Once the operation either completes or was already completed, we look for any exceptions. If there are exceptions, we throw them. Here have some exceptions they say ;-) If no exceptions were thrown then we return a special return value property stored on the ThreadMethodEntry. This value is set in InvokeMarshaledCallbacks when we invoke the delegate.
If you are running off the UI thread, how do things change? Well, we don't have the special same thread operation involved this time, so instead we post a message to the marshaling control. This is a special message that is constructed using some internal properties and then registered using RegisterWindowMessage. This ensures that all controls will use the same message for this callback, preventing us from registering a bunch of custom windows messages.
InvokeMarshaledCallbacks is an important method since it gets called both synchronously if we are on the same thread as the UI and from the WndProc in the case we aren't. This is where all of the action of calling our delegate happens and so it is where we'll be next.
5. InvokeMarshaledCallbacks and how it handles the callback queue

This method is deep. Since it has to be thread safe, we get lots of locking (even though we should only call this method from the UI thread, we have to make sure we don't step on others that are accessing the queue to add items, while we remove them). Note that this method will continue processing the entire queue of delegates, and not just one. Calling this method is very expensive, especially if you have a large number of delegates queued up. You can start to better understand the performance possibilities of asynchronous programming and how you should avoid queuing up multiple delegates that are going to do the same thing (hum, maybe that IAsyncResult will come in handy after all ;-)
We start by grabbing the delegate queue and grabbing a start entry. Then we start up a loop to process all of the entries. Each time through the loop the current delegate entry gets updated and as soon as we run out of elements, the loop exits. If you were to start an asynchronous delegate from inside of another asynchronous delegate, you could probably hang your system because of the way this queue works. So you should be careful.
The top of the loop does work with the stack. We grab the current stack so we can restore it later, then set the compressed stack that was saved onto the ThreadMethodEntry. That'll ensure our security model is in place. Then we run the delegate. There are some defaults. For instance, if the type is MethodInvoker, we cast it and call it using a method that yields better performance. If the method is of type EventHandler, then we automatically set the parameters used to call the EventHandler. In this case the sender will be the original caller, and the EventArgs will be EventArgs.Empty. This is pretty sweet, since it simplifies calling EventHandler definitions. It also means we can't change the sender or target of an EventHandler definition, so you have to be careful.
If the delegate isn't of one of the two special types then we do a DynamicInvoke on it. This is a special method on all delegates and we simply pass in our argument array. The return value is stored on our ThreadMethodEntry and we continue. The only special case is that of an exception. If an exception is thrown, we store the exception on the ThreadMethodEntry and continue.
Exiting our delegate calling code, we reset the stack frame to the saved stack frame. We then call Complete on our ThreadMethodEntry to signal anybody waiting for it to finish. If we are running asynchronously and there were exceptions we call Application.OnThreadException(). You may have noticed these exceptions happening in the background when you call BeginInvoke in your application, and this is where they come from. With all of that complete, we are done. That concludes all of the code required to understand an Invoke call, but we still have some other cases for BeginInvoke, so let's look at those.
6. BeginInvoke operation on the UI thread and from a different thread
How different is BeginInvoke from the basic Invoke paradigm? Not much, actually. There are only a couple of changes, so I won't take up your time redefining all of the logic we already discussed. The first change is how we call MarshaledInvoke: instead of specifying true for running synchronously, we specify false. There is also no special case for running synchronously on the UI thread; we always post a message to the Windows message pump. Finally, rather than running synchronization code against the ThreadMethodEntry, we return it immediately as an IAsyncResult that can be used later to determine when the method has completed, or handed to EndInvoke.
That is where all of the new logic lives: EndInvoke. We need additional logic for retrieving the result of the operation and making sure it has completed. EndInvoke can be a blocking operation if IsCompleted is not already true on the IAsyncResult. So we do a series of checks to make sure the IAsyncResult passed in really is a ThreadMethodEntry. If it is, and it hasn't completed, we run the same synchronization logic we used in the Invoke version, with some small changes. First, if we are on the UI thread, we call InvokeMarshaledCallbacks directly; this is similar to the same-thread synchronization we did in the first case. If we aren't on the same thread, we wait on the AsyncWaitHandle. There is some code here that comes dangerously close to looking like a race condition, but I think they've properly instrumented everything to prevent that scenario.
As we fall through all of the synchronization we again check for exceptions. Just like with Invoke we throw them if we have them. A lot of people don't catch these exceptions or assume they won't happen, so a lot of asynchronous code tends to fail. Catch your exceptions people ;-) If no exceptions were thrown then we return the value from the delegate and everything is done.
You see, not many changes are required in order to implement BeginInvoke over top of the same code we used in Invoke. We've already covered the changes in InvokeMarshaledCallbacks, so we appear to be complete. Time for a sample.
7. UCS 3: Using BeginInvoke to change a property after other events are processed, and why it can fail
Sometimes events in Windows Forms conspire against you. The classic example I use to explain this is the AfterSelect event of the TreeView control. I generally use this event to update a ListBox or another control somewhere on the form, and often you want to transfer focus to that new control, probably the ListBox. If you try to set focus within the event handler, then later, when the TreeView gets control back after the event, it sets focus right back to itself. It feels like nothing happened, even though it did.
You can easily fix this by using BeginInvoke to set focus instead. We'll call Focus directly, so we need to define a new delegate; we'll call it BoolMethodInvoker, because Focus() returns a bool and we therefore can't use the basic MethodInvoker delegate (what a shame, eh?).
// Declare the delegate outside of your class or as a nested class member
private delegate bool BoolMethodInvoker();

// Issue this call from your event instead of invoking it directly.
listPictures.BeginInvoke(new BoolMethodInvoker(listPictures.Focus));
Now, knowing a bit about how BeginInvoke works, there is still a way to trip yourself up. First, your method may get executed VERY soon. In fact, the next message on the pump might be the marshaling message, in which case other messages already in the pump that you wanted to run first will still execute after you. In many cases those calls generate even more messages of their own, which mitigates the problem somewhat, but not always.
There is a second issue as well. If another piece of code calls Invoke while you are on the UI thread, your method may get processed even before the event handlers finish executing and the TreeView gets control back to make its focus call. This is an edge case, but you can imagine scenarios where you mix asynchronous and synchronous operations. You need to be aware that any synchronous call can cause your queued asynchronous calls to be processed early.
8. Public and internal methods covered, with a short description of what they do
These are all of the public and internal methods that we covered and what they do -- a quick reference of sorts. I'll probably find this very helpful later when I'm trying to derive some new functionality and don't want to reread the entire article.
InvokeRequired - Finds the most appropriate control and uses that control's handle to get the id of the thread that created it. If that thread id differs from the current thread's id, an invoke is required; otherwise it is not. This property uses a number of internal methods to find the most appropriate control.
Invoke - This method sets up a brand new synchronous marshalled delegate. The delegate is marshalled to the UI thread while your thread waits for the return value.
BeginInvoke - This method sets up a brand new asynchronous marshalled delegate. The delegate is marshalled to the UI thread while your thread continues to operate. An extended usage of this method allows you to continue working on the UI thread and then yield execution to the message pump allowing the delegate to be called.
EndInvoke - This method lets you retrieve the return value of a delegate run by a BeginInvoke call. If the delegate hasn't returned yet, EndInvoke blocks until it does. If the delegate has already completed, the return value is retrieved immediately.
MarshaledInvoke - This method queues up marshaling actions for both the Invoke and BeginInvoke layers. Depending on the circumstances this method can either immediately execute the delegates (running on the same thread) or send a message into the message pump. It also handles wait actions during the Invoke process or returns an IAsyncResult for use in BeginInvoke.
InvokeMarshaledCallbacks - This method is where all of your delegates get run. It is called either from MarshaledInvoke or from WndProc, depending on the circumstances. Once inside this method, the entire queue of delegates is run through and all events are signalled, releasing any blocking calls (Invoke or EndInvoke) and setting all IAsyncResult objects to the IsCompleted = true state. This method also handles exception logic, allowing exceptions to be thrown back on the original thread for Invoke calls, or tossed into the application's thread-exception layer if you are using BeginInvoke and running asynchronous delegates.
FindMarshallingControl - Walks the control tree from current back up the control hierarchy until a valid control is found for purposes of finding the UI thread id. If the control hierarchy doesn't contain a control with a valid handle, then a special parking window is retrieved. This method is used by many of the other methods since a marshalling control is the first step in marshalling a delegate to the UI thread.
Application.GetParkingWindow - This method takes a control and finds the parking window for it. If the control has a valid handle, the thread id of the control is found, the ThreadContext for that thread is retrieved, and its parking window is returned. If the control does not have a valid handle, the ThreadContext of the current thread is retrieved and its parking window is returned. If no context is found (which really shouldn't happen), null is returned.
ThreadContext.FromId - This method takes a thread id and indexes a special hash to find the context for the given thread. If one doesn't exist, a new ThreadContext is created and returned in its place.
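The ThreadContext.FromId behavior described here (look up a per-thread context in a hash keyed by thread id, creating one on first use) can be sketched in Java; all class and method names below are invented for illustration, not the WinForms internals:

```java
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of a FromId-style registry: one context per thread id,
// created lazily on first lookup, always the same instance afterwards.
class ThreadContextRegistry {
    static final class Context {
        final long threadId;
        Context(long threadId) { this.threadId = threadId; }
    }

    private static final ConcurrentHashMap<Long, Context> contexts = new ConcurrentHashMap<>();

    // Return the context for the given thread id, creating one on first use.
    static Context fromId(long threadId) {
        return contexts.computeIfAbsent(threadId, Context::new);
    }

    public static void main(String[] args) {
        Context a = fromId(42L);
        Context b = fromId(42L);
        System.out.println(a == b);      // repeat lookups hit the same instance
        System.out.println(a.threadId);  // 42
    }
}
```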
ThreadContext.FromCurrent - This method grabs the current ThreadContext out of thread local storage. I'm guessing this must be faster than getting the current thread id and indexing the context hash, else why would they use thread local storage at all?
ThreadContext..ctor() - This is the most confusing IL to examine, but it appears the constructor does some self registration into a context hash that the other methods use to get the context for a given thread. They wind up using some of the Thread methods, namely SetData, to register things into thread local storage. Why they use thread local storage and a context hash indexed by thread ID, I'm just not sure.
9. Conclusion
You've learned quite a bit about the Windows Forms marshaling pump today and how it handles the various methods of cross-thread marshaling. You've also gotten a peek deeper into the Windows Forms source through a very detailed IL inspection. I've come up with some derived concepts based on this whole process, so maybe these will lead to some even more compelling articles. Even more importantly, we've learned how the process can break down when we expect a specific order of events.
I had never fully examined this code before, so even I was surprised at some of what I found. For instance, the performance implications of queuing the same method multiple times asynchronously are worth considering: knowing that all pending delegates are processed in one tight loop is a big deal, as is the fact that items can be queued while others are being dequeued (i.e., you can hang yourself). Finally, the realization that with an EventHandler-typed delegate you can't pass in the sender explicitly might confuse some folks. After all, if you mock up an arguments array and pass it to Invoke or BeginInvoke, you would expect it to be used.




My WinForm application has a worker thread that updates the main window, but the documentation says I must not call into the form from multiple threads (why?), and in practice my program often crashes when I do. How can I call methods on a form from another thread?


Every WinForm class derived from Control (including Control itself) relies on underlying Windows messages and a message pump loop to run. Every message loop must have a corresponding thread, because messages sent to a window are only ever delivered to the thread that created that window. As a consequence, even with synchronization you cannot call these message-processing methods from other threads. Most of this plumbing is hidden, because WinForms binds messages to event-handling methods with delegates: WinForms converts a Windows message into a delegate-based event, but you must still remember that, because of the underlying message loop, only the thread that created the form may call its event-handling methods. If you call these methods from your own thread, they will handle the event on your thread rather than on the intended one. Methods that are not part of message processing, by contrast, can be called from any thread.
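The thread-affinity rule above can be modeled outside of WinForms: treat the UI thread as a single-threaded executor and marshal work to it instead of touching shared UI state directly. A rough Java sketch (illustrative only, not a real UI toolkit):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Model the "UI thread" as a single-threaded executor. Work submitted from
// any thread always runs on that one thread -- the analog of marshaling a
// call to the thread that owns the window.
class UiMarshalDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService uiThread = Executors.newSingleThreadExecutor();

        // Capture the id of the "UI" thread once, from the UI thread itself.
        long uiId = uiThread.submit(() -> Thread.currentThread().getId()).get();

        // A worker must not touch UI state directly; it submits a task
        // (the analog of Control.Invoke) and the task runs on the UI thread.
        long ranOn = uiThread.submit(() -> Thread.currentThread().getId()).get();

        System.out.println(ranOn == uiId);                            // true
        System.out.println(ranOn == Thread.currentThread().getId());  // false
        uiThread.shutdown();
    }
}
```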

The Control class (and its derived classes) deals with the problem of calling message-processing methods across threads by implementing an interface defined in the System.ComponentModel namespace, ISynchronizeInvoke:

public interface ISynchronizeInvoke
{
    object Invoke(Delegate method, object[] args);
    IAsyncResult BeginInvoke(Delegate method, object[] args);
    object EndInvoke(IAsyncResult result);
    bool InvokeRequired { get; }
}

ISynchronizeInvoke provides a generic, standard mechanism for invoking methods on an object that lives on another thread. For example, if an object implements ISynchronizeInvoke, a client on thread T1 can call the object's ISynchronizeInvoke.Invoke() method. The Invoke() implementation blocks the calling thread, marshals the call to T2, executes the call on T2, marshals the return value back to T1, and then returns control to the client on T1. Invoke() takes a delegate identifying the method to call on T2, plus a plain object array for its arguments.



Form form;
/* obtain a reference to the form, then: */
ISynchronizeInvoke synchronizer;
synchronizer = form;

MethodInvoker invoker = new


C#: marshaling the call onto the correct thread

Listing A. The Calculator class's Add() method adds two numbers. If the client calls Add() directly, the call executes on the client's thread; alternatively, the client can marshal the call onto the correct thread via ISynchronizeInvoke.Invoke().

public class Calculator : ISynchronizeInvoke
{
    public int Add(int arg1, int arg2)
    {
        int threadID = Thread.CurrentThread.GetHashCode();
        Trace.WriteLine("Calculator thread ID is " + threadID.ToString());
        return arg1 + arg2;
    }

    // ISynchronizeInvoke implementation
    public object Invoke(Delegate method, object[] args) {...}
    public IAsyncResult BeginInvoke(Delegate method, object[] args) {...}
    public object EndInvoke(IAsyncResult result) {...}
    public bool InvokeRequired {...}
}

// Client-side code
public delegate int AddDelegate(int arg1, int arg2);

int threadID = Thread.CurrentThread.GetHashCode();
Trace.WriteLine("Client thread ID is " + threadID.ToString());

Calculator calc;
/* Some code to initialize calc */

AddDelegate addDelegate = new AddDelegate(calc.Add);

object[] arr = new object[2];
arr[0] = 3;
arr[1] = 4;

int sum = 0;
sum = (int) calc.Invoke(addDelegate, arr);
Debug.Assert(sum == 7);

/* Possible output:
Calculator thread ID is 29
Client thread ID is 30
*/

You may not want a synchronous call, since it is marshaled to another thread. You can get asynchronous behavior through BeginInvoke() and EndInvoke(), following the general .NET asynchronous programming model: use BeginInvoke() to dispatch the call, and use EndInvoke() to wait for completion (or be notified of it) and collect the return value.

It's also worth mentioning that the ISynchronizeInvoke methods are not type safe. A type mismatch produces an exception at run time rather than a compile-time error, so take extra care when using ISynchronizeInvoke: the compiler cannot catch these errors for you.

Implementing ISynchronizeInvoke requires you to use a delegate to invoke the target method dynamically, via late binding. Every delegate type provides the DynamicInvoke() method: public object DynamicInvoke(object[] args);

In principle, you must marshal the method's delegate to the actual thread the object requires, and have the Invoke() and BeginInvoke() implementations call DynamicInvoke() on the delegate there. Implementing ISynchronizeInvoke is a nontrivial programming exercise. The source files accompanying this article include a helper class named Synchronizer and a test program demonstrating how the Calculator class of Listing A uses Synchronizer to implement ISynchronizeInvoke. Synchronizer is a general-purpose implementation of ISynchronizeInvoke: you can derive from it, or use it as a standalone object and delegate your ISynchronizeInvoke implementation to it.

A key element of the Synchronizer implementation is a nested class named WorkerThread. WorkerThread owns a queue of work items; a WorkItem holds the method delegate and its parameters. Invoke() and BeginInvoke() add a work-item instance to the queue. WorkerThread creates a .NET worker thread that monitors the queue; when items arrive, the worker removes them and calls DynamicInvoke() on them.
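The WorkItem-queue-plus-worker-thread design described above can be sketched as follows; this is a hypothetical Java analogy of Synchronizer's synchronous Invoke path, with invented names, not the article's actual source:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;

// One worker thread drains a queue of work items and invokes each delegate
// on that single thread; Invoke() blocks the caller until its item completes.
class WorkerThreadDemo {
    static final class WorkItem {
        final Runnable delegate;
        final CountDownLatch done = new CountDownLatch(1);
        WorkItem(Runnable delegate) { this.delegate = delegate; }
    }

    private final BlockingQueue<WorkItem> queue = new LinkedBlockingQueue<>();
    private final Thread worker = new Thread(() -> {
        try {
            while (true) {
                WorkItem item = queue.take();  // block until work arrives
                item.delegate.run();           // the DynamicInvoke analog
                item.done.countDown();         // signal the blocked caller
            }
        } catch (InterruptedException ignored) { }
    });

    WorkerThreadDemo() { worker.setDaemon(true); worker.start(); }

    // Synchronous Invoke: queue the item and wait for its completion.
    void invoke(Runnable delegate) throws InterruptedException {
        WorkItem item = new WorkItem(delegate);
        queue.put(item);
        item.done.await();
    }

    public static void main(String[] args) throws Exception {
        WorkerThreadDemo sync = new WorkerThreadDemo();
        long[] ranOn = new long[1];
        sync.invoke(() -> ranOn[0] = Thread.currentThread().getId());
        // The delegate ran on the worker thread, not on the caller's thread.
        System.out.println(ranOn[0] != Thread.currentThread().getId());
    }
}
```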

Implementing responsive user interfaces for .NET-based applications with multithreading


Friday, December 22, 2006


Some notes on ThreadPool - 2


也谈大规模定时器的实时集中管理实现 [草稿] [原]

In a recent blog post, Wen Shao shared his experience with the DelayQueue delay queue in java.util.concurrent: "The exquisitely useful DelayQueue". I'll skip the praise -- that's not my strength; since we know each other well and he asked me directly, I'll offer a few frank opinions of my own, in the hope of a discussion that deepens our shared understanding of this class of problem.
Since we're discussing DelayQueue, let's scope the problem to the real-time, centralized management of large numbers of timer-like objects -- "large" meaning at least tens of thousands of timers alive at once. A large network application or a complex transactional system often has such needs: tracking timeouts across hundreds of thousands of concurrent user sessions, or expiry for a hundred thousand cached objects, and so on. The other premise is real-time centralized management: we assume these timers are normally active and need to be fired or rescheduled periodically, since most timers are not one-shot. For needs outside this scope, consider Java's built-in Timer/ScheduledExecutor or the more powerful Quartz scheduling library.


First, adding a timer implicitly involves creating internal structures and placing the timer where it can next be scheduled. The DelayQueue implementation internally uses a PriorityQueue that orders timers by firing time on insertion. It looks elegant, but personally I consider it premature optimization: whether a given timer will ever actually fire is an open question, and insertion and ordering require locking the whole queue, which bodes poorly for efficiency and parallelism.
Second, taking the next timer from a DelayQueue implicitly carries waiting semantics. I don't know the design's original requirements or precision targets, but for a high-performance network program handling thousands or tens of thousands of packets per second, any waiting semantics are expensive and unacceptable. Moreover, unless you're on a real-time or non-preemptive operating system, millisecond-level waits have little practical meaning; a difference check against a high-resolution counter is far more practical.

Despite these drawbacks, for the vast majority of applications DelayQueue really is an exquisite, practical implementation that can considerably improve our timeout mechanisms.

Having aired all those opinions, let me introduce an implementation approach I personally like: the Timer Wheel algorithm.

Redesigning the BSD Callout and Timer Facilities (1995)

This algorithm was originally designed to implement timers in the BSD kernel and has since been widely ported, for example into frameworks such as ACE. It is one of BSD's classic algorithms: it delivers near-constant-time responses to all common timer operations and is easy to extend as needed. A brief overview:

The whole algorithm is built on a large hash table. Picture the cylinder of a revolver: each timer, according to its expected firing time, is hashed into a different chamber on insertion. Insertion involves no sorting at all -- just a trivial hash operation plus a wait-free linked-list insert.

Since I don't have a development environment at hand, I can only sketch the algorithm in pseudocode; where I'm unclear, the paper above is authoritative. I'll complete the implementation when I have time :P


void insert(Callable callable, long expired)
{
    Node node = new Node();
    node._callable = callable;
    node._expired = expired - System.currentTimeMillis();
    node._next = _wheel[expired % _wheel_size]._head;
    node._next._prev = node;
    _wheel[expired % _wheel_size]._head = node;
}

void check(long time)
{
    for (Node node = _wheel[time % _wheel_size]._head;
         node != null; node = node._next)
        if (--node._expired < 0)
            node._prev._next = node._next;
}

void cancel(Callable callable)
{
    for (int i = 0; i < _wheel_size; i++)
        for (Node node = _wheel[i]._head;
             node != null; node = node._next)
            if (node._callable == callable)
                node._prev._next = node._next;
}

void update(Callable callable, long expired)
{
    cancel(callable);
    insert(callable, expired);
}
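For comparison, here is a compilable single-level wheel in Java, with ticks as the time unit. It is a simplified illustration of the same idea (no concurrency, no hierarchical wheels) and not the BSD implementation:

```java
import java.util.ArrayList;
import java.util.List;

// A toy timer wheel: O(1) insert by hashing the expiry tick into a slot,
// with a per-timer "rounds" counter for expiries more than one rotation away.
class TimerWheel {
    static final class Timer {
        final Runnable callback;
        long rounds;  // full wheel rotations remaining before firing
        Timer(Runnable callback, long rounds) { this.callback = callback; this.rounds = rounds; }
    }

    private final List<List<Timer>> slots = new ArrayList<>();
    private final int wheelSize;
    private long currentTick = 0;

    TimerWheel(int wheelSize) {
        this.wheelSize = wheelSize;
        for (int i = 0; i < wheelSize; i++) slots.add(new ArrayList<>());
    }

    // Insert: hash the expiry tick into a slot; no sorting involved.
    void schedule(Runnable callback, long delayTicks) {
        long expiry = currentTick + delayTicks;
        slots.get((int) (expiry % wheelSize)).add(new Timer(callback, delayTicks / wheelSize));
    }

    // Advance one tick: fire timers in the current slot whose rounds reach zero.
    void tick() {
        slots.get((int) (currentTick % wheelSize)).removeIf(t -> {
            if (t.rounds == 0) { t.callback.run(); return true; }
            t.rounds--;
            return false;
        });
        currentTick++;
    }

    public static void main(String[] args) {
        TimerWheel wheel = new TimerWheel(8);
        StringBuilder log = new StringBuilder();
        wheel.schedule(() -> log.append("A"), 3);   // due on tick 3
        wheel.schedule(() -> log.append("B"), 11);  // same slot, one rotation later
        for (int i = 0; i < 12; i++) wheel.tick();
        System.out.println(log);
    }
}
```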






ThreadStart threadStart = new ThreadStart(Calculate); // the ThreadStart delegate tells the child thread which method to run; here, a method that computes the circumference of a circle
Thread thread = new Thread(threadStart);
thread.Start(); // start the new thread

public void Calculate()
{
    double Diameter = 0.5;
    Console.Write("The perimeter of a circle with a diameter of {0} is {1}", Diameter, Diameter * Math.PI);
}


delegate double CalculateMethod(double Diameter); // declare a delegate matching the signature of the method to run on the child thread
static CalculateMethod calcMethod = new CalculateMethod(Calculate); // bind the delegate to the concrete method

static void Main(string[] args)
{
    calcMethod.BeginInvoke(5, new AsyncCallback(TaskFinished), null);
}

public static double Calculate(double Diameter)
{
    return Diameter * Math.PI;
}

public static void TaskFinished(IAsyncResult result)
{
    double re = 0;
    re = calcMethod.EndInvoke(result);
}


WaitCallback w = new WaitCallback(Calculate);
ThreadPool.QueueUserWorkItem(w, 1.0);
ThreadPool.QueueUserWorkItem(w, 2.0);
ThreadPool.QueueUserWorkItem(w, 3.0);
ThreadPool.QueueUserWorkItem(w, 4.0);

// WaitCallback takes a single object parameter and returns void
public static void Calculate(object state)
{
    double Diameter = (double)state;
    Console.WriteLine(Diameter * Math.PI);
}


Managed threads and Windows threads






A program that starts several threads ran into trouble at shutdown: if the threads aren't stopped when the program exits, they live on, but most of them were started as local variables and can't be closed one by one. Calling Thread.CurrentThread.Abort() to kill the main thread raises a ThreadAbortException, so that doesn't work either.
The solution I eventually found: set Thread.IsBackground to make the threads background threads.

MSDN's explanation of foreground and background threads: a managed thread is either a background thread or a foreground thread. Background threads are identical to foreground threads except that they do not keep the managed execution environment running. Once all foreground threads in a managed process (where the .exe is a managed assembly) have stopped, the system stops all background threads and shuts down. You designate a thread as background or foreground via the Thread.IsBackground property: setting IsBackground to true makes it a background thread, and setting it to false makes it a foreground thread. All threads that enter the managed execution environment from unmanaged code are marked as background threads; all threads created by constructing and starting a new Thread object are foreground threads. If you create a thread to monitor some activity, such as a socket connection, set Thread.IsBackground to true so that the thread does not prevent the process from terminating.
So the fix is, when each worker thread initializes, to set: Thread.CurrentThread.IsBackground = true;
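Java draws the same foreground/background distinction with daemon threads: the JVM exits once only daemon threads remain, much as the CLR stops background threads when the last foreground thread ends. A small sketch:

```java
// The equivalent of Thread.IsBackground in Java is Thread.setDaemon.
// A daemon thread does not keep the JVM alive; the process can exit
// even while the daemon is still looping.
class DaemonDemo {
    public static void main(String[] args) {
        Thread listener = new Thread(() -> {
            while (true) {
                try { Thread.sleep(1000); } catch (InterruptedException e) { return; }
            }
        });
        listener.setDaemon(true);  // must be set before start()
        listener.start();
        System.out.println(listener.isDaemon());
        // main returns here; the JVM exits even though 'listener' is running.
    }
}
```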





IAsyncResult BeginXXX(...);

<return type> EndXXX(IAsyncResult ar);


This pattern is a bit cumbersome in practice. In principle we can call EndInvoke at any time to obtain the return value, and it can synchronize multiple threads; but in most cases, when we don't need to synchronize many threads, a callback is the better choice. In that scenario the IAsyncResult of the three elements becomes redundant: we don't need its completion flag to tell whether the thread finished successfully (by the time the callback fires, it already has), and we don't need it to carry data (the data can live in any variable and is already populated when the callback runs). You can see that Microsoft has strengthened callback support in the newer .NET Framework; under that model, a typical callback program looks like this:

a.DoWork += new SomeEventHandler(Calculate);
a.CallBack += new SomeEventHandler(callback);







The .NET Framework itself contains an example of this. With a file stream opened for synchronous I/O, calling BeginRead reads on a child thread that invokes the synchronous Read method; with a stream opened for asynchronous I/O, the same call instead uses the so-called IOCP mechanism, which requires hardware and operating-system support.


My multithreaded WinForm program keeps throwing InvalidOperationException -- how do I fix it?


Cross-thread operation not valid: Control 'XXX' accessed from a thread other than the thread it was created on.


ThreadStart threadStart = new ThreadStart(Calculate); // the ThreadStart delegate tells the child thread which method to run
Thread thread = new Thread(threadStart);

public void Calculate()
{
    double Diameter = 0.5;
    double result = Diameter * Math.PI;
    CalcFinished(result); // wrong: this call happens on the child thread
}

public void CalcFinished(double result)
{
    // updates a text box -- must only run on the UI thread
}

delegate void changeText(double result);

public void Calculate()
{
    double Diameter = 0.5;
    double result = Diameter * Math.PI;
    this.BeginInvoke(new changeText(CalcFinished), result); // the result is shown in a text box once the calculation completes
}


delegate void changeText(double result);

public void CalcFinished(double result)
{
    if (this.InvokeRequired)
    {
        this.BeginInvoke(new changeText(CalcFinished), result);
        return;
    }
    // now on the UI thread; safe to touch the control here
}



UnsafeNativeMethods.PostMessage(new HandleRef(this, this.Handle), threadCallbackMessage, IntPtr.Zero, IntPtr.Zero);





When a thread is first created, the system assumes it won't be used for any user-interface work, which keeps the thread's resource requirements down. As soon as the thread calls a GUI-related function (for example, checking its message queue or creating a window), the system allocates additional resources so it can perform UI work. In particular, the system allocates a THREADINFO structure and associates it with the thread.

The THREADINFO structure contains a set of member variables through which the thread can behave as if it runs in its own exclusive environment. THREADINFO is an internal, undocumented structure that identifies the thread's posted-message queue, send-message queue, reply-message queue, virtualized-input queue, wake flags, and a number of variables describing the thread's local input state. Figure 26-1 shows the THREADINFO structure and the three threads associated with it.



"When you run code in the Visual Studio debugger, an InvalidOperationException is raised if you access a UI element from a thread other than the one that created it. The debugger raises the exception to warn you about a dangerous programming practice: UI elements are not thread safe and should only be accessed on the thread that created them."


But the question remains: if the main reason for this design is that control state is not thread safe, then the .NET Framework is full of non-thread-safe classes -- why does modifying a Control's properties across threads, of all things, get such a strict enforcement policy?









Each process has one thread pool: a Process holds a single instance, shared across all AppDomains. In .NET 2.0 the default pool size is 25 worker threads and 1000 I/O threads. A common misconception is that the pool keeps 1000 threads waiting to be taken; in fact, the ThreadPool keeps only a small number of threads in reserve, configurable via SetMinThreads. When some code needs a thread to do work and no idle pool thread exists, the pool creates one; after the call completes, the thread is not destroyed immediately but returned to the pool for later reuse. If a thread goes unused for some time, the pool reclaims it, so the number of threads actually in the pool is dynamic.
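The behavior described here (a small reserve, growth on demand, reclamation of idle threads) is easiest to see in Java's ThreadPoolExecutor, whose constructor exposes the same knobs; the core-thread count plays roughly the role of SetMinThreads:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Demonstrates dynamic pool sizing: 2 core threads are the "reserve",
// extra threads are created on demand up to the max, and idle threads
// above the core count are reclaimed after the keep-alive period.
class PoolSizingDemo {
    public static void main(String[] args) throws Exception {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2,                      // core threads kept alive (the reserve)
                8,                      // hard upper bound
                30, TimeUnit.SECONDS,   // idle threads above core are reclaimed
                new SynchronousQueue<Runnable>());

        CountDownLatch started = new CountDownLatch(4);
        CountDownLatch release = new CountDownLatch(1);
        for (int i = 0; i < 4; i++) {
            pool.execute(() -> {
                started.countDown();
                try { release.await(); } catch (InterruptedException ignored) { }
            });
        }
        started.await();
        System.out.println(pool.getPoolSize());  // grown past the core count to 4
        release.countDown();
        pool.shutdown();
    }
}
```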



In fact, FileStream's asynchronous reads and writes, asynchronous web requests, and even a delegate's BeginInvoke all use the ThreadPool by default. In other words, not only your own code but the framework internals may be using the pool, so changing its limits has wide-reaching consequences. This is especially true in IIS, where all WebApplications in an application pool share one thread pool, and changing the maximum can cause all sorts of unexpected trouble.


The thread pool offers a method for inspecting the number of available threads: GetAvailableThreads(out workerThreadCount, out ioCompletedThreadCount). The first time I saw this signature I was puzzled: I expected it to simply return an integer saying how many threads were left, but it returns two values at once.






FileStream outputfs=new FileStream(writepath, FileMode.Create, FileAccess.Write, FileShare.None,256,true);


FileStream outputfs = File.OpenWrite(writepath);


string readpath = "e:\\RHEL4-U4-i386-AS-disc1.iso";
string writepath = "e:\\kakakak.iso";
byte[] buffer = new byte[90000000];

//FileStream outputfs=new FileStream(writepath, FileMode.Create, FileAccess.Write, FileShare.None,256,true);

FileStream outputfs = File.OpenWrite(writepath);



FileStream fs = File.OpenRead(readpath);

fs.BeginRead(buffer, 0, 90000000, delegate(IAsyncResult o)
{
    ShowThreadDetail("BeginRead callback");

    outputfs.BeginWrite(buffer, 0, buffer.Length,
        delegate(IAsyncResult o1)
        {
            ShowThreadDetail("BeginWrite callback");
        }, null);

    Thread.Sleep(500); // this is important: without it, this thread and the one used for BeginRead may appear to be the same one
}, null);


public static void ShowThreadDetail(string caller)
{
    int IO;
    int Worker;
    ThreadPool.GetAvailableThreads(out Worker, out IO);
    Console.WriteLine("Worker: {0}; IO: {1}", Worker, IO);
}

Worker: 500; IO: 1000
Worker: 500; IO: 999
Worker: 500; IO: 1000
Worker: 499; IO: 1000


In fact, when the asynchronous attribute is not specified, .NET implements "asynchronous" I/O by having a child thread call the stream's synchronous Write method; that child thread blocks until the call completes. The child thread is simply a thread-pool worker thread, which is why the write callback for the synchronous stream shows one fewer available worker thread. With an asynchronous stream, the asynchronous write uses IOCP: roughly speaking, when BeginWrite executes, the request is handed to the hardware driver and execution continues immediately (note: no extra thread); when the hardware is ready, it notifies the thread pool, and an I/O thread handles the completion.



2) A process has only one ThreadPool instance, shared across all AppDomains. ThreadPool exposes only static methods; not only do our own queued work items use this pool, the BeginXXX/EndXXX methods throughout the .NET Framework use it as well.

Use your own thread if you need to place it in a single-threaded apartment (all ThreadPool threads are in the multithreaded apartment).



In principle, both lock and the Synchronized attribute are implemented with Monitor.Enter; for example, the following code:

object lockobj = new object();
lock (lockobj)
{
    // do things
}

// which the compiler expands to roughly:
Monitor.Enter(lockobj);
try
{
    // do things
}
finally
{
    Monitor.Exit(lockobj);
}






Mutex: think of it as a relay baton. Only the thread holding the baton may run, and the baton belongs to exactly one thread at a time (thread affinity). If that thread doesn't release the baton (Mutex.ReleaseMutex), then every other thread that needs it can do nothing but stand by and watch.
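Java's ReentrantLock shows the same baton semantics: the lock is owned by exactly one thread, and everyone else can only wait until it is released. A small sketch:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

// The "relay baton": while one thread owns the lock, another thread's
// acquisition attempt fails; after release, it succeeds.
class BatonDemo {
    public static void main(String[] args) throws Exception {
        ReentrantLock baton = new ReentrantLock();
        baton.lock();  // main thread takes the baton

        Thread runner = new Thread(() -> {
            try {
                // Cannot acquire while main holds it: prints false.
                System.out.println(baton.tryLock(100, TimeUnit.MILLISECONDS));
            } catch (InterruptedException ignored) { }
        });
        runner.start();
        runner.join();

        baton.unlock();                       // the analog of Mutex.ReleaseMutex
        System.out.println(baton.tryLock());  // available again: prints true
    }
}
```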





// Lazy initialization (not thread safe without additional locking):
public static MySingleton Instance
{
    get
    {
        if (_instance == null)
            _instance = new MySingleton();
        return _instance;
    }
}







// Eager initialization: the runtime guarantees the static field is initialized once.
private static readonly MySingleton _instance = new MySingleton();

public static MySingleton Instance
{
    get { return _instance; }
}
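For comparison, the eager form in Java relies on the same guarantee: the runtime initializes the static field exactly once, before first use, with no explicit locking:

```java
// Eager singleton: class initialization is thread safe by the language spec,
// so no synchronization is needed for the INSTANCE field.
class MySingleton {
    private static final MySingleton INSTANCE = new MySingleton();
    private MySingleton() { }
    public static MySingleton getInstance() { return INSTANCE; }

    public static void main(String[] args) {
        // Every call returns the same instance.
        System.out.println(MySingleton.getInstance() == MySingleton.getInstance());
    }
}
```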





















Thursday, December 21, 2006


Some notes on ThreadPool - 1

We may use different words for the concepts discussed below: thread pool, waiting queue, ...




a) Closing idle connections. A server holds many client connections, and a connection idle beyond some period needs to be closed.
b) Caching. Objects in a cache that exceed their idle time need to be evicted.
c) Task timeout handling. In a sliding-window, request/response network protocol, handling requests that time out without a response.



DelayQueue is a very interesting class in java.util.concurrent -- ingenious, excellent! But neither the Javadoc nor the Java SE 5.0 source provides a sample. I first noticed the clever use of DelayQueue while reading the ScheduledThreadPoolExecutor source, and have since applied it in real work for session timeout management and for request timeouts in a request/response network protocol.



DelayQueue = BlockingQueue + PriorityQueue + Delayed
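A minimal end-to-end use of DelayQueue itself shows all three ingredients at work: elements implement Delayed, the queue orders them by expiry, and take() blocks until the head element is due:

```java
import java.util.concurrent.DelayQueue;
import java.util.concurrent.Delayed;
import java.util.concurrent.TimeUnit;

// Two tasks with different delays: take() returns them in expiry order,
// blocking until each one's delay has elapsed.
class DelayQueueDemo {
    static final class Task implements Delayed {
        final String name;
        final long fireAt;  // absolute deadline in nanoseconds
        Task(String name, long delayMs) {
            this.name = name;
            this.fireAt = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(delayMs);
        }
        public long getDelay(TimeUnit unit) {
            return unit.convert(fireAt - System.nanoTime(), TimeUnit.NANOSECONDS);
        }
        public int compareTo(Delayed other) {
            long d = getDelay(TimeUnit.NANOSECONDS) - other.getDelay(TimeUnit.NANOSECONDS);
            return (d == 0) ? 0 : ((d < 0) ? -1 : 1);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        DelayQueue<Task> q = new DelayQueue<>();
        q.put(new Task("late", 200));
        q.put(new Task("early", 50));
        System.out.println(q.take().name);  // blocks ~50 ms, then "early"
        System.out.println(q.take().name);  // blocks until ~200 ms, then "late"
    }
}
```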


public interface Comparable<T> {
    public int compareTo(T o);
}

public interface Delayed extends Comparable<Delayed> {
    long getDelay(TimeUnit unit);
}

public class DelayQueue<E extends Delayed> extends AbstractQueue<E>
        implements BlockingQueue<E> {
    private final transient ReentrantLock lock = new ReentrantLock();
    private final Condition available = lock.newCondition();
    private final PriorityQueue<E> q = new PriorityQueue<E>();

    public boolean offer(E e) {
        final ReentrantLock lock = this.lock;
        lock.lock();
        try {
            E first = q.peek();
            q.offer(e);
            if (first == null || e.compareTo(first) < 0)
                available.signalAll();
            return true;
        } finally {
            lock.unlock();
        }
    }

    public E take() throws InterruptedException {
        final ReentrantLock lock = this.lock;
        lock.lockInterruptibly();
        try {
            for (;;) {
                E first = q.peek();
                if (first == null) {
                    available.await();
                } else {
                    long delay = first.getDelay(TimeUnit.NANOSECONDS);
                    if (delay > 0) {
                        long tl = available.awaitNanos(delay);
                    } else {
                        E x = q.poll();
                        assert x != null;
                        if (q.size() != 0)
                            available.signalAll(); // wake up other takers
                        return x;
                    }
                }
            }
        } finally {
            lock.unlock();
        }
    }
    // ... (other methods omitted)
}


public class Pair<K, V> {
    public K first;

    public V second;

    public Pair() {}

    public Pair(K first, V second) {
        this.first = first;
        this.second = second;
    }
}
import java.util.concurrent.Delayed;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

public class DelayItem<T> implements Delayed {
    /** Base of nanosecond timings, to avoid wrapping */
    private static final long NANO_ORIGIN = System.nanoTime();

    /** Returns nanosecond time offset by origin */
    final static long now() {
        return System.nanoTime() - NANO_ORIGIN;
    }

    /**
     * Sequence number to break scheduling ties, and in turn to guarantee FIFO order among tied
     * entries.
     */
    private static final AtomicLong sequencer = new AtomicLong(0);

    /** Sequence number to break ties FIFO */
    private final long sequenceNumber;

    /** The time the task is enabled to execute in nanoTime units */
    private final long time;

    private final T item;

    public DelayItem(T submit, long timeout) {
        this.time = now() + timeout;
        this.item = submit;
        this.sequenceNumber = sequencer.getAndIncrement();
    }

    public T getItem() {
        return this.item;
    }

    public long getDelay(TimeUnit unit) {
        long d = unit.convert(time - now(), TimeUnit.NANOSECONDS);
        return d;
    }

    public int compareTo(Delayed other) {
        if (other == this) // compare zero ONLY if same object
            return 0;
        if (other instanceof DelayItem) {
            DelayItem<?> x = (DelayItem<?>) other;
            long diff = time - x.time;
            if (diff < 0)
                return -1;
            else if (diff > 0)
                return 1;
            else if (sequenceNumber < x.sequenceNumber)
                return -1;
            else
                return 1;
        }
        long d = (getDelay(TimeUnit.NANOSECONDS) - other.getDelay(TimeUnit.NANOSECONDS));
        return (d == 0) ? 0 : ((d < 0) ? -1 : 1);
    }
}

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.DelayQueue;
import java.util.concurrent.TimeUnit;
import java.util.logging.Level;
import java.util.logging.Logger;

public class Cache<K, V> {
    private static final Logger LOG = Logger.getLogger(Cache.class.getName());

    private ConcurrentMap<K, V> cacheObjMap = new ConcurrentHashMap<K, V>();

    private DelayQueue<DelayItem<Pair<K, V>>> q = new DelayQueue<DelayItem<Pair<K, V>>>();

    private Thread daemonThread;

    public Cache() {
        Runnable daemonTask = new Runnable() {
            public void run() {
                daemonCheck();
            }
        };

        daemonThread = new Thread(daemonTask);
        daemonThread.setDaemon(true);
        daemonThread.setName("Cache Daemon");
        daemonThread.start();
    }

    private void daemonCheck() {
        if (LOG.isLoggable(Level.INFO))
            LOG.info("cache service started.");

        for (;;) {
            try {
                DelayItem<Pair<K, V>> delayItem = q.take();
                if (delayItem != null) {
                    // handle the expired object
                    Pair<K, V> pair = delayItem.getItem();
                    cacheObjMap.remove(pair.first, pair.second); // compare and remove
                }
            } catch (InterruptedException e) {
                if (LOG.isLoggable(Level.SEVERE))
                    LOG.log(Level.SEVERE, e.getMessage(), e);
                break;
            }
        }

        if (LOG.isLoggable(Level.INFO))
            LOG.info("cache service stopped.");
    }

    // add an object to the cache
    public void put(K key, V value, long time, TimeUnit unit) {
        V oldValue = cacheObjMap.put(key, value);
        if (oldValue != null)
            q.remove(key);

        long nanoTime = TimeUnit.NANOSECONDS.convert(time, unit);
        q.put(new DelayItem<Pair<K, V>>(new Pair<K, V>(key, value), nanoTime));
    }

    public V get(K key) {
        return cacheObjMap.get(key);
    }

    // test entry point
    public static void main(String[] args) throws Exception {
        Cache<Integer, String> cache = new Cache<Integer, String>();
        cache.put(1, "aaaa", 3, TimeUnit.SECONDS);

        Thread.sleep(1000 * 2);
        String str1 = cache.get(1); // still cached: "aaaa"

        Thread.sleep(1000 * 2);
        String str2 = cache.get(1); // expired: null
    }
}


I saw a requirement as blunt as const int MaxLimitCPThreadsPerCPU = 25. Personally, I still think it would be more scientific to determine this by measuring TLS timing. One step at a time.


.NET's ThreadPool Class - Behind The Scenes
By Marc Clifton (note: excellent, it's him again)







(by Cocia Lin)



For general-purpose, non-real-time systems, Doug Lea's util.concurrent concurrency library defines a very elegant structural model and provides a basic thread pool implementation. For details on util.concurrent itself, see its own documentation.

From a non-real-time perspective, Doug Lea's thread pool implementation is already quite complete. For a real-time system, however, it lacks the key real-time features, so my idea is to extend the non-real-time pool to give it real-time characteristics.









PooledExecutor pool = new PooledExecutor(20);
pool.execute(new Runnable() {
    public void run() {
        Runnable task;
        while ((task = getTaskFromQueue()) != null) {
            task.run();
            task = null;
        }
    }
});


while ((task = getTask()) != null) {
    // Change current thread priority to the priority of task.
    RTRunnable rttask = (RTRunnable) task;
    int oldprio = setCurrentThreadPriority(rttask.getPriority());
    // run the task.
    rttask.run();
    task = null;
    rttask = null;
    // change the priority back.
    setCurrentThreadPriority(oldprio);
}

The concept of thread pool lanes (Thread Lanes) comes from the RT-CORBA specification, but this form of thread pool is useful for any real-time system.


Consider the following example: when a request task with priority 15 arrives to be executed, priority matching hands it to Lane2 (priority 15) to run. What happens when a task with priority 13 arrives? The pool finds the lane with the closest priority, Lane2 (15), takes a ready thread from it, adjusts that thread's priority to the priority the task requested, and runs the task.







orbas is an open-source Java implementation of CORBA with RT-CORBA support.

Doug Lea's util.concurrent concurrency library

Real-Time Java Expert Group

ZEN, an RTSJ RT-ORB based on jRate

OK, now let's analyze what we need to add. First, we need a ThreadID. The .NET ThreadPool doesn't treat the thread id as important; Rotor contains:

threadCB->threadId = threadId; // may be useful for debugging otherwise not used


this->m_ID = InterlockedIncrement(&LastUsedID);

But this is still not entirely safe. See Raymond Chen's blog:
Interlocked operations don't solve everything

Interlocked operations are a high-performance way of updating DWORD-sized or pointer-sized values in an atomic manner. Note, however, that this doesn't mean that you can avoid the critical section.

For example, suppose you have a critical section that protects a variable, and in some other part of the code, you want to update the variable atomically. "Well," you say, "this is a simple increment, so I can skip the critical section and just do a direct InterlockedIncrement. Woo-hoo, I avoided the critical section bottleneck."

Well, except that the purpose of that critical section was to ensure that nobody changed the value of the variable while the protected section of code was running. You just ran in and changed the value behind that code's back.

Conversely, some people suggested emulating complex interlocked operations by having a critical section whose job it was to protect the variable. For example, you might have an InterlockedMultiply that goes like this:

// Wrong!
LONG InterlockedMultiply(volatile LONG *plMultiplicand, LONG lMultiplier)
{
    EnterCriticalSection(&SomeCriticalSection);
    LONG lResult = *plMultiplicand *= lMultiplier;
    LeaveCriticalSection(&SomeCriticalSection);
    return lResult;
}

While this code does protect against two threads performing an InterlockedMultiply against the same variable simultaneously, it fails to protect against other code performing a simple atomic write to the variable. Consider the following:

int x = 2;

// Thread 1:
InterlockedIncrement(&x);

// Thread 2:
InterlockedMultiply(&x, 5);

If the InterlockedMultiply were truly interlocked, the only valid results would be x=15 (if the interlocked increment beat the interlocked multiply) or x=11 (if the interlocked multiply beat the interlocked increment). But since it isn't truly interlocked, you can get other weird values:

Thread 1                        Thread 2
                                x = 2 at start
InterlockedMultiply(&x, 5)
  load x (loads 2)
                                InterlockedIncrement(&x)
                                x is now 3
  multiply by 5 (result: 10)
  store x (stores 10)
x = 10 at end

Oh no, our interlocked multiply isn't very interlocked after all! How can we fix it?

If the operation you want to perform is a function solely of the starting numerical value and the other function parameters (with no dependencies on any other memory locations), you can write your own interlocked-style operation with the help of InterlockedCompareExchange.

LONG InterlockedMultiply(volatile LONG *plMultiplicand, LONG lMultiplier)
{
    LONG lOriginal, lResult;
    do {
        lOriginal = *plMultiplicand;
        lResult = lOriginal * lMultiplier;
    } while (InterlockedCompareExchange(plMultiplicand,
                                        lResult, lOriginal) != lOriginal);
    return lResult;
}

[Typo in algorithm fixed 9:00am.]

To perform a complicated function on the multiplicand, we perform three steps.

First, capture the value from memory: lOriginal = *plMultiplicand;

Second, compute the desired result from the captured value: lResult = lOriginal * lMultiplier;

Third, store the result provided the value in memory has not changed: InterlockedCompareExchange(plMultiplicand, lResult, lOriginal)

If the value did change, then this means that the interlocked operation was unsuccessful because somebody else changed the value while we were busy doing our computation. In that case, loop back and try again.

If you walk through the scenario above with this new InterlockedMultiply function, you will see that after the interloping InterlockedIncrement, the loop will detect that the value of "x" has changed and restart. Since the final update of "x" is performed by an InterlockedCompareExchange operation, the result of the computation is trusted only if "x" did not change value.

Note that this technique works only if the operation being performed is a pure function of the memory value and the function parameters. If you have to access other memory as part of the computation, then this technique will not work! That's because those other memory locations might have changed during the computation and you would have no way of knowing, since InterlockedCompareExchange checks only the memory value being updated.

Failure to heed the above note results in problems such as the so-called "ABA Problem". I'll leave you to google on that term and read about it. Fortunately, everybody who talks about it also talks about how to solve the ABA Problem, so I'll leave you to read that, too.

Once you've read about the ABA Problem and its solution, you should be aware that the solution has already been implemented for you, via the Interlocked SList functions.

So in the end we still need a CriticalSection. Of course, the CriticalSection should belong to the Lane, or one level up, to the Pool; so let us define it as a friend function.

Saturday, December 16, 2006


Some notes on IE8 Programming - 1


Session cookies are widely used for user authentication. Compared with IE7, IE8's session management has changed significantly, and web developers need to be aware of this.

In IE7, a session is shared within a single window (one IE process).

In IE8, all open IE windows (IE processes) share one session, unless the user opens a new window via the menu File > New Session, or launches IE with the command-line switch iexplore.exe -nomerge. The session ends only when all IE windows have been closed.

Friday, December 15, 2006


Some notes on IE7 Programming - 1



Displays keep getting bigger (17", 19", even 20" monitors are common now) and resolutions keep rising; according to this site's statistics, 1280x1024 users now outnumber 800x600 users. The many pages designed for 800x600 with 9pt fonts, and some for 1024x768 with 9pt fonts, look badly dated. So a zoom feature has gradually become a must-have for browsers.



IE7 provides an Opera-style zoom feature, but for whatever reason, when IE zooms a page it also zooms the scrollbars on a large fraction of pages :(. Here is this blog's front page after zooming in to 400%:

// The sight of the scrollbars being magnified too is rather saddening...



The IE7 release build does not support the offsetHeight, clientHeight, and scrollHeight properties


A while ago the IE7 beta still handled these correctly.

I don't know whether this is a bug in the IE7 release, or whether these properties are no longer supported in favor of some new method.



<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "">
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
With either of these DOCTYPEs the offsetHeight value can be read.





Building Browser Helper Objects with Visual Studio 2005
Although BHOs are granted so much power that many anti-malware tools scrutinize them, many BHOs are genuinely useful, such as the Google Toolbar and the Internet Explorer Developer Toolbar. In Windows XP SP2, Microsoft added an Add-on Manager to IE to manage browser extensions, BHOs included.

In January 1999 Microsoft published an article titled Browser Helper Objects: The Browser the Way You Want It, and the Knowledge Base also provided a sample, IEHelper. This greatly lowered the difficulty of writing a BHO, but it also increased the number of buggy ones. Even the sample code in the recent article Building Browser Helper Objects with Visual Studio 2005 has a few flaws, but that article also spells out in detail what to watch out for when writing a BHO, and anyone writing one should read it.


Microsoft {
    Windows {
        CurrentVersion {
            Explorer {
                'Browser Helper Objects' {
                    ForceRemove '{D2F7E1E3-C9DC-4349-B72C-D5A708D6DD77}' = s 'HelloWorldBHO' {
                        val 'NoExplorer' = d '1'
                    }
                }
            }
        }
    }
}

The corrected script marks every parent key NoRemove, so that unregistering the BHO removes only its own key instead of deleting the shared parent keys:

NoRemove Microsoft {
    NoRemove Windows {
        NoRemove CurrentVersion {
            NoRemove Explorer {
                NoRemove 'Browser Helper Objects' {
                    ForceRemove '{D2F7E1E3-C9DC-4349-B72C-D5A708D6DD77}' = s 'HelloWorldBHO' {
                        val 'NoExplorer' = d '1'
                    }
                }
            }
        }
    }
}

public IDispEventImpl<1, CHelloWorldBHO, &DIID_DWebBrowserEvents2,
                      &LIBID_SHDocVw, 1, 0>

should specify typelib version 1.1 rather than 1.0:

public IDispEventImpl<1, CHelloWorldBHO, &DIID_DWebBrowserEvents2,
                      &LIBID_SHDocVw, 1, 1>

"If the page has no frames, the event is fired once after the page is ready, but before any script has run." This statement is wrong: the browser executes a script as soon as it downloads an inline script tag in the BODY.

"those that fire DownloadBegin will also fire a corresponding DocumentComplete" — here DocumentComplete should read DownloadComplete.

Thursday, December 07, 2006


Some notes on Visual Studio 2005 - 3

How to bypass the WinSxS for CRT/MFC/ATL DLLs


May 14, 2006
Starting with VC8, you have two options to distribute the DLL version of the CRT/MFC/ATL with your application:

You can redistribute the DLLs with your application in the same directory and also put a valid manifest for these DLLs into this directory
You can install the redist.exe and the DLL will be installed in the WinSxS folder (on XP and later)
So, if you want to be independent of the global DLLs, you might think you can simply put the DLLs into your application's directory. But this conclusion is false.
If a DLL is installed in the WinSxS folder, the local DLLs will be ignored. This may even be the case if a newer DLL was installed (for example, by a security hotfix). This is possible due to the policy redirections of these SxS DLLs.
In most cases this also makes sense, because you always get the latest (hopefully compatible) version of the DLL.

But there are situations in which you want full control over which DLLs are loaded, and from where. Andre Stille (another VC++ MVP) found a very simple solution: just remove the "publicKeyToken" attribute from the manifests!
So an application manifest looks like:


<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
  <dependency>
    <dependentAssembly>
      <assemblyIdentity type="win32" name="Microsoft.VC80.CRT"
                        version="8.0.50727.42" processorArchitecture="x86" />
    </dependentAssembly>
  </dependency>
</assembly>

You must also set the correct version number of the DLL! And remove the "publicKeyToken" attribute.
The manifest for the DLL looks like:


<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
  <assemblyIdentity type="win32" name="Microsoft.VC80.CRT"
                    version="8.0.50727.42" processorArchitecture="x86"></assemblyIdentity>
  <file name="msvcr80.dll"></file>
  <file name="msvcp80.dll"></file>
  <file name="msvcm80.dll"></file>
</assembly>

Now the CRT DLLs in the WinSxS will be ignored and only the local DLLs will be loaded.

Thanks again to Andre Stille!





A program built with VC++ 2005, when moved to a machine without the VC environment installed, sometimes fails to run with the error: "This application has failed to start because the application configuration is incorrect. Reinstalling the application may fix this problem."


<?xml version='1.0' encoding='UTF-8' standalone='yes'?>
<assembly xmlns='urn:schemas-microsoft-com:asm.v1' manifestVersion='1.0'>
  <assemblyIdentity type='win32' name='Microsoft.VC80.CRT' version='8.0.50727.762' processorArchitecture='x86' publicKeyToken='1fc8b3b9a1e18e3b' />
  <assemblyIdentity type='win32' name='Microsoft.VC80.MFC' version='8.0.50727.762' processorArchitecture='x86' publicKeyToken='1fc8b3b9a1e18e3b' />
  <assemblyIdentity type='win32' name='Microsoft.VC80.DebugCRT' version='8.0.50727.762' processorArchitecture='x86' publicKeyToken='1fc8b3b9a1e18e3b' />
</assembly>

Note the three key names in this file: Microsoft.VC80.CRT, Microsoft.VC80.MFC, and Microsoft.VC80.DebugCRT. Under ...\Program Files\Microsoft Visual Studio 8\VC\redist, find the subfolders with these names and copy all the files in them next to the EXE you want to ship, packaging them together. These files are mfc80.dll, msvcr80.dll, msvcp80.dll, Microsoft.VC80.CRT.manifest, and so on. The error occurs because the target machine needs these files.


Added 1/15/2008

If VS2005 complains that it cannot find some basic Afx functions, just add the following line to your project's stdafx.h:

#include "C:\Program Files\Microsoft Visual Studio 8\VC\atlmfc\src\mfc\afximpl.h"

Then rebuild; it should work.

Of course, the directory above depends on where you installed your VS2005.

Just my five cents
