JAVA MultiThread DesignPattern2

딜레이라마 2017. 2. 17. 22:28

System Calls

 A system call is basically a function that ends up trapping to routines in the kernel. These routines may do things as simple as looking up the user ID for the owner of the current process, or as complex as redefining the system's scheduling algorithm. For multithreaded programs, there is a serious issue surrounding how many threads can make system calls concurrently. For some operating systems, the answer is "one"; for others, it's "many." The most important point is that system calls run exactly as they did before, so all your old programs continue to run as they did before, with (almost) no degradation.

Signals

 Signals are the UNIX kernel's way of interrupting a running process and letting it know that something of interest has happened. (NT has something similar but doesn't expose it in the Win32 interface.) It could be that a timer has expired, or that some I/O has completed, or that some other process wants to communicate something. Happily, Java does not use UNIX signals, so we may conveniently ignore them entirely! The role that signals play in UNIX programs is handled in Java either by having a thread respond to a synchronous request or by the use of exceptions.
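The interruption mechanism mentioned above can be sketched in Java. In this illustrative example (the class and method names are not from the original), one thread interrupts another, and the blocked thread observes the request as an `InterruptedException` at a well-defined point, rather than through an asynchronous signal handler:

```java
// A minimal sketch of Java's alternative to UNIX signals: one thread
// interrupts another, and the target sees the request as an exception.
public class InterruptDemo {
    static String runWorker() throws InterruptedException {
        Thread worker = new Thread(() -> {
            try {
                Thread.sleep(60_000);      // blocks, like a long system call
            } catch (InterruptedException e) {
                // the "signal" arrives here as an exception,
                // at a well-defined point in the code
            }
        });
        worker.start();
        worker.interrupt();                // delivered as an interrupt request
        worker.join();
        return "worker finished";
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runWorker());
    }
}
```

Because the interruption surfaces as an exception at a blocking call, the thread can clean up its state before exiting, something that is awkward to do in a traditional signal handler.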

Synchronization

 Synchronization is the method of ensuring that multiple threads coordinate their activities so that one thread doesn't accidentally change data that another thread is working on. This is done by providing function calls that can limit the number of threads that can access some data concurrently. In the simplest case (a mutual exclusion lock—a mutex), only one thread at a time can execute a given piece of code. This code presumably alters some global data or performs reads or writes to a device. For example, thread T1 obtains a lock and starts to work on some global data. Thread T2 must now wait (typically, it goes to sleep) until thread T1 is done before T2 can execute the same code. By using the same lock around all code that changes the data, we can ensure that the data remains consistent.
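The T1/T2 scenario above can be sketched in Java with a `synchronized` block, which gives each object an implicit mutex (the class name and counter are illustrative, not from the original):

```java
// Two threads (playing T1 and T2) increment shared global data; the
// synchronized block ensures only one of them is in the critical
// section at a time, so the final count is always exact.
public class MutexDemo {
    private static final Object lock = new Object();
    private static int counter = 0;

    static int runTwoThreads() throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                synchronized (lock) {   // T2 sleeps here while T1 holds the lock
                    counter++;
                }
            }
        };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join();  t2.join();
        return counter;                 // 200_000 with the lock in place
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runTwoThreads());
    }
}
```

Without the `synchronized` block, the two unsynchronized increments could interleave and lose updates, which is exactly the inconsistency the lock prevents.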

Scheduling

 Scheduling is the act of placing threads onto CPUs so that they can execute, and of taking them off those CPUs so that others can run instead. In practice, scheduling is not generally an issue because "it all works" just about the way you'd expect.

The Value of Using Threads

There is really only one reason for writing MT programs—to get better programs more quickly. If you're an Independent Software Vendor (ISV), you sell more software. If you're developing software for your own in-house use, you simply have better programs to use. The reason you can write better programs is that MT gives your programs and your programmers a number of significant advantages over nonthreaded programs and programming paradigms. A point to keep in mind here is that you are not replacing simple, nonthreaded programs with fancy, complex, threaded programs. You are using threads only when you need them to replace complex or slow nonthreaded programs. Threads are just one more way to make your programming tasks easier. The main benefits of writing multithreaded programs are:

 • Performance gains from multiprocessing hardware (parallelism) 

• Increased application throughput 

• Increased application responsiveness 

• Replacing process-to-process communications 

• Efficient use of system resources 

• One binary that runs well on both uniprocessors and multiprocessors 

• The ability to create well-structured programs

The following sections elaborate further on these benefits. 

Parallelism

 Computers with more than one processor offer the potential for enormous application speedups (Figure 2-8). MT is an efficient way for application developers to exploit the parallelism of the hardware. Different threads can run on different processors simultaneously with no special input from the user and no effort on the part of the programmer.

A good example is a process that does matrix multiplication. A thread can be created for each available processor, allowing the program to use the entire machine. The threads can then compute distinct elements of the resulting matrix by performing the appropriate vector multiplication.
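The matrix-multiplication idea can be sketched as follows, assuming one worker thread per row of the result rather than strictly one per processor (the class name is illustrative):

```java
// One thread per row of the result matrix; each thread performs the
// row-by-column vector multiplications for its row. Rows are disjoint,
// so no locking is needed, and join() makes the results visible.
public class ParallelMatMul {
    static int[][] multiply(int[][] a, int[][] b) throws InterruptedException {
        int n = a.length, m = b[0].length, k = b.length;
        int[][] c = new int[n][m];
        Thread[] workers = new Thread[n];
        for (int row = 0; row < n; row++) {
            final int r = row;
            workers[r] = new Thread(() -> {
                for (int j = 0; j < m; j++)
                    for (int x = 0; x < k; x++)
                        c[r][j] += a[r][x] * b[x][j];
            });
            workers[r].start();
        }
        for (Thread t : workers) t.join();
        return c;
    }

    public static void main(String[] args) throws InterruptedException {
        int[][] c = multiply(new int[][]{{1, 2}, {3, 4}},
                             new int[][]{{5, 6}, {7, 8}});
        System.out.println(c[0][0] + " " + c[1][1]);
    }
}
```

On a multiprocessor, the JVM and operating system can schedule these row workers onto different CPUs with no further effort from the programmer.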

Throughput

 When a traditional, single-threaded program requests a service from the operating system, it must wait for that service to complete, often leaving the CPU idle. Even on a uniprocessor, multithreading allows a process to overlap computation with one or more blocking system calls (Figure 2-9). Threads provide this overlap even though each request is coded in the usual synchronous style. The thread making the request must wait, but another thread in the process can continue. Thus, a process can have numerous blocking requests outstanding, giving you the beneficial effects of doing asynchronous I/O while still writing code in the simpler synchronous fashion.
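The overlap can be demonstrated with a small sketch, using `Thread.sleep` as a stand-in for a blocking system call (the durations and class name are illustrative): two 200 ms "requests" run in separate threads, so the total elapsed time is roughly that of one request, not two.

```java
// Two simulated blocking requests, each coded in the usual synchronous
// style, run in separate threads; the elapsed time shows they overlap.
public class ThroughputDemo {
    static long timedOverlapMillis() throws InterruptedException {
        long start = System.nanoTime();
        Runnable blockingRequest = () -> {
            try {
                Thread.sleep(200);     // stand-in for a blocking system call
            } catch (InterruptedException ignored) { }
        };
        Thread a = new Thread(blockingRequest);
        Thread b = new Thread(blockingRequest);
        a.start(); b.start();
        a.join();  b.join();
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(timedOverlapMillis() + " ms elapsed");
    }
}
```

A single-threaded program issuing the same two requests back to back would take about 400 ms; the threaded version finishes in roughly 200 ms while each thread's code remains plainly synchronous.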

Responsiveness

Blocking one part of a process need not block the entire process. Single-threaded applications that do something lengthy when a button is pressed typically display a "please wait" cursor and freeze while the operation is in progress. If such applications were multithreaded, long operations could be done by independent threads, allowing the application to remain active and making the application more responsive to the user. In Figure 2-10, one thread is waiting for I/O from the buttons, and several threads are working on the calculations. 
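A GUI toolkit is not needed to sketch the idea; in this illustrative example (names and durations are assumptions), the long operation runs on its own thread while the "event loop" thread keeps handling pretend button events instead of freezing:

```java
// A non-GUI sketch of responsiveness: the long operation runs on a
// separate thread, so the event-loop thread keeps servicing events.
public class ResponsiveDemo {
    static int respondDuringLongOperation() throws InterruptedException {
        Thread longOperation = new Thread(() -> {
            try {
                Thread.sleep(300);         // the lengthy background work
            } catch (InterruptedException ignored) { }
        });
        longOperation.start();

        int responses = 0;
        while (longOperation.isAlive()) {  // the event loop stays active
            responses++;                   // handle a pretend button event
            Thread.sleep(10);
        }
        longOperation.join();
        return responses;                  // > 0: the app never froze
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(respondDuringLongOperation() + " events handled");
    }
}
```

In a single-threaded version, the event loop would be stuck inside the long operation and `responses` would stay at zero until it finished, which is exactly the "please wait" freeze described above.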

Communications

An application that uses multiple processes to accomplish its tasks can be replaced by an application that uses multiple threads to accomplish those same tasks. Where the old program communicated among its processes through traditional interprocess communication facilities (e.g., pipes or sockets), the threaded application can communicate via the inherently shared memory of the process. The threads in the MT process can maintain separate connections while sharing data in the same address space. A classic example is a server program, which can maintain one thread for each client connection, as in Figure 2-11. This design provides excellent performance, simpler programming, and effortless scalability.
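The thread-per-connection server pattern can be sketched as follows; the class name and the `ConcurrentHashMap` tally are illustrative choices, not from the original. Each "connection" is handled by its own thread, and all handlers share state directly through the process's memory rather than through pipes or sockets:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// One thread per client "connection"; all handlers share the same
// address space, so they communicate through a shared map instead of
// traditional interprocess communication facilities.
public class SharedStateServer {
    static Map<String, Integer> serveClients(String... clients)
            throws InterruptedException {
        Map<String, Integer> requestCounts = new ConcurrentHashMap<>();
        Thread[] handlers = new Thread[clients.length];
        for (int i = 0; i < clients.length; i++) {
            final String client = clients[i];
            handlers[i] = new Thread(() ->
                requestCounts.merge(client, 1, Integer::sum)); // shared memory
            handlers[i].start();
        }
        for (Thread t : handlers) t.join();
        return requestCounts;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(serveClients("alice", "bob", "alice"));
    }
}
```

A multi-process version of the same server would need to marshal each update through an IPC channel; here the `merge` call on the shared map is the entire communication mechanism.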

System Resources 

Programs that use two or more processes to access common data through shared memory are effectively applying more than one thread of control. However, each such process must maintain a complete process structure, including a full virtual memory space and kernel state. The cost of creating and maintaining this large amount of state makes each process much more expensive, in both time and space, than a thread. In addition, the inherent separation between processes may require a major effort by the programmer to communicate among the different processes or to synchronize their actions. By using threads for this communication instead of processes, the program will be easier to debug and can run much faster. An application can create hundreds or even thousands of threads, one for each synchronous task, with only minor impact on system resources. Threads use a fraction of the system resources needed by processes. 
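The "thousands of threads" claim can be exercised with a rough sketch (the count and class name are illustrative): spawning a thread per small synchronous task completes quickly, whereas forking a process per task would be far more expensive.

```java
// A rough demonstration that a process can create a large number of
// threads, one per small synchronous task, with modest overhead.
public class ManyThreadsDemo {
    static int runTasks(int n) throws InterruptedException {
        final int[] done = {0};
        Thread[] workers = new Thread[n];
        for (int i = 0; i < n; i++) {
            workers[i] = new Thread(() -> {
                synchronized (done) {      // one tiny synchronous task
                    done[0]++;
                }
            });
            workers[i].start();
        }
        for (Thread t : workers) t.join();
        return done[0];
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runTasks(1000) + " tasks completed");
    }
}
```

Each thread here needs only a stack and a small amount of bookkeeping, not the full virtual memory space and kernel state that a separate process would require.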

Distributed Objects 

With the first releases of standardized distributed objects and object request brokers now available, your ability to make use of them will become increasingly important. Distributed objects are inherently multithreaded: each time you request that an object perform some action, it executes that action in a separate thread (Figure 2-12). Object servers are an absolutely fundamental element of the distributed object paradigm, and those servers are inherently multithreaded. 

Although you can make a great deal of use of distributed objects without doing any MT programming, knowing what they are doing and being able to create objects that are threaded will increase the usefulness of the objects you do write.  
