IT이야기

JAVA MultiThread2

딜레이라마 2017. 2. 8. 21:15

Responsiveness 

Blocking one part of a process need not block the entire process. Single-threaded applications that do something lengthy when a button is pressed typically display a "please wait" cursor and freeze while the operation is in progress. If such applications were multithreaded, long operations could be done by independent threads, allowing the application to remain active and more responsive to the user: one thread waits for I/O from the buttons while several other threads work on the calculations.
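A minimal sketch of this idea in Java: the lengthy operation runs on a worker thread so the main ("UI") thread is never blocked. The longComputation() method here is a made-up stand-in for any slow, button-triggered operation.

```java
// Sketch: keep the main thread responsive by running long work elsewhere.
public class ResponsiveSketch {
    // Stand-in for a lengthy, button-triggered operation.
    static long longComputation() {
        long sum = 0;
        for (int i = 0; i < 1_000_000; i++) sum += i;  // simulated work
        return sum;
    }

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            long result = longComputation();
            System.out.println("result = " + result);
        });
        worker.start();  // the long operation runs on its own thread
        System.out.println("main thread still responsive");  // not frozen
        worker.join();   // wait for the worker before exiting
    }
}
```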

Communications 

An application that uses multiple processes to accomplish its tasks can be replaced by an application that uses multiple threads to accomplish those same tasks. Where the old program communicated among its processes through traditional interprocess communications facilities (e.g., pipes or sockets), the threaded application can communicate via the inherently shared memory of the process. The threads in the MT process can maintain separate connections while sharing data in the same address space. A classic example is a server program, which can maintain one thread for each client connection. Such a design provides excellent performance, simpler programming, and effortless scalability.
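A sketch of that communication style, with the actual sockets left out: several "client handler" threads exchange data through the process's shared memory (here an AtomicInteger) instead of through pipes or sockets. The thread-per-client loop and the handleClient() method are illustrative, not a real server.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: threads in one process communicating via shared memory.
public class SharedMemorySketch {
    // Shared state, visible to every thread in the process.
    static final AtomicInteger requestsServed = new AtomicInteger();

    // Stand-in for handling one client's stream of requests.
    static void handleClient(int nRequests) {
        for (int i = 0; i < nRequests; i++) requestsServed.incrementAndGet();
    }

    public static void main(String[] args) throws InterruptedException {
        Thread[] handlers = new Thread[4];  // one thread per client connection
        for (int c = 0; c < handlers.length; c++) {
            handlers[c] = new Thread(() -> handleClient(1000));
            handlers[c].start();
        }
        for (Thread t : handlers) t.join();
        System.out.println("served: " + requestsServed.get());  // 4 * 1000 = 4000
    }
}
```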


System Resources

Programs that use two or more processes to access common data through shared memory are effectively applying more than one thread of control. However, each such process must maintain a complete process structure, including a full virtual memory space and kernel state. The cost of creating and maintaining this large amount of state makes each process much more expensive, in both time and space, than a thread. In addition, the inherent separation between processes may require a major effort by the programmer to communicate among the different processes or to synchronize their actions. By using threads for this communication instead of processes, the program will be easier to debug and can run much faster. An application can create hundreds or even thousands of threads, one for each synchronous task, with only minor impact on system resources. Threads use a fraction of the system resources needed by processes.
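The "hundreds or even thousands of threads" claim is easy to try. This sketch creates 500 threads, one per task; spawning 500 full processes for the same work would cost far more in both time and memory. The task body is trivial on purpose, since only the thread count matters here.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: many cheap threads, one per task.
public class ManyThreadsSketch {
    // Start n threads, each performing one tiny task, and wait for all.
    static int countWithThreads(int n) throws InterruptedException {
        AtomicInteger done = new AtomicInteger();
        Thread[] threads = new Thread[n];
        for (int i = 0; i < n; i++) {
            threads[i] = new Thread(done::incrementAndGet);  // the "task"
            threads[i].start();
        }
        for (Thread t : threads) t.join();
        return done.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(countWithThreads(500) + " tasks completed");
    }
}
```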


Distributed Objects 

With the first releases of standardized distributed objects and object request brokers, your ability to make use of these will become increasingly important. Distributed objects are inherently multithreaded. Each time you request an object to perform some action, it executes that action in a separate thread. Object servers are an absolutely fundamental element in the distributed object paradigm, and those servers are inherently multithreaded. Although you can make a great deal of use of distributed objects without doing any MT programming, knowing what they are doing and being able to create threaded objects will increase the usefulness of the objects you do write.

Same Binary for Uniprocessors and Multiprocessors

In most older parallel processing schemes, it was necessary to tailor a program for the individual hardware configuration. With threads, this customization isn't required because the MT paradigm works well irrespective of the number of CPUs. A program can be compiled once, and it will run acceptably on a uniprocessor, whereas on a multiprocessor it will just run faster.

Program Structure

Many programs are structured more efficiently with threads because they are inherently concurrent. A traditional program that tries to do many different tasks is crowded with lots of complicated code to coordinate these tasks. A threaded program can do the same tasks with much less, far simpler code. Multithreaded programs can be more adaptive to variations in user demands than single-threaded programs can.

What Kinds of Programs to Thread

There is a spectrum of programs that one might wish to thread. On one end are those that are inherently "MT-ish": you look at the work to be done, and you think of it as several independent tasks. In the middle are programs where the division of work isn't obvious, but possible. On the far other end are those that cannot reasonably be threaded at all.

Inherently MT Programs

Inherently MT programs are those that are easily expressed as numerous threads doing numerous things. Such programs are easier to write using threads, because they are doing different things concurrently anyway. They are generally simpler to write and understand when threaded, easier to maintain, and more robust. The fact that they may run faster is a mere pleasant side effect. For these programs, the general rule is that the more complex the application, the greater the value of threading. Typical programs that are inherently MT include:

Independent tasks: A debugger needs to run and monitor a program, keep its GUI active, and display an interactive data inspector, dynamic call grapher, and performance monitor, all in the same address space, all at the same time.

Servers: A server needs to handle numerous overlapping requests simultaneously. NFS®, NIS, DBMSs, stock quotation servers, etc., all receive large numbers of requests that require the server to do some I/O, then process the results and return answers. Completing one request at a time would be very slow.

Repetitive tasks: A simulator needs to simulate the interactions of numerous different elements that operate simultaneously. CAD, structural analysis, weather prediction, etc., all model tiny pieces first, then combine the results to produce an overall picture.

Not Obviously MT Programs

Not obviously MT programs are those that are not inherently MT but for which threading is reasonable. Here you impose threads upon an algorithm that does not have an obvious decomposition, in order to achieve a speedup on an MP machine. Such a program is somewhat harder to write, a bit more difficult to maintain, etc., than its nonthreaded counterpart, but it runs faster. Because of these drawbacks, the (portions of) programs chosen are generally quite simple.


Automatic Threading

In a subset of cases, it is possible for a compiler to do the threading for you. If you have a program written in such a way that a compiler can analyze its structure, analyze the interdependencies of the data, and determine that parts of your program can run simultaneously without data conflicts, then the compiler can build the threads. With current technology, the capabilities above are limited largely to Fortran programs that have time-consuming loops in which the individual computations in those loops are obviously independent. The primary reason for this limitation is that Fortran programs tend to have very simple structuring, both for code and data, making the analysis viable. Languages like C, which have constructs such as pointers, make the analysis enormously more difficult. There are MP compilers for C, but far fewer programs can take advantage of such compiling techniques. With the different Fortran MP compilers,[3] it is possible to take vanilla Fortran 77 or 90 code, make no changes to it whatsoever, and have the compiler turn out threaded code. In some cases it works very well; in others, not. The cost of trying it out is very small, of course. A number of Ada compilers will map Ada tasks directly on top of threads, allowing existing Ada programs to take advantage of parallel machines with no changes to the code.
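Java has no auto-parallelizing compiler of this kind, but the same shape of loop, one whose iterations are obviously independent, can be parallelized explicitly. A sketch using Java's parallel streams, with sumOfSquares() as the example loop body:

```java
import java.util.stream.IntStream;

// Sketch: a loop with independent iterations, the kind an auto-threading
// Fortran compiler targets, parallelized by hand with a parallel stream.
public class ParallelLoopSketch {
    static long sumOfSquares(int n) {
        return IntStream.range(0, n)
                .parallel()                    // iterations may run on several threads
                .mapToLong(i -> (long) i * i)  // each computation is independent
                .sum();                        // partial results combined at the end
    }

    public static void main(String[] args) {
        System.out.println(sumOfSquares(1000));
    }
}
```

The key property is the same one the compiler must prove: no iteration reads data written by another, so the iterations can run in any order on any number of CPUs.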
