
JAVA MultiThread DesignPattern

딜레이라마 2017. 2. 17. 22:24

Chapter 1. Introduction 

In which we discuss the motivation for creating thread libraries, the advent of shared memory multiprocessors, and the interactions between threads and SMP machines.

Multithreading (MT) is a technique that allows one program to do multiple tasks concurrently. The basic concept of multithreaded programming has existed in research and development labs for several decades. Co-routine systems such as Concurrent Pascal and InterLisp's Spaghetti stacks were in use in the mid-70s and dealt with many of the same issues. Ada's tasks are a language-based construct that maps directly onto threads (so directly, in fact, that current Ada compilers implement tasks with threads). Burroughs shipped a commercial mainframe OS with co-routine-style threads as early as 1960. The emergence of this concept in industry as an accepted, standardized programming paradigm is a phenomenon of the 1990s. As with many other concepts, the research and experimental use of threads has been widespread in specific industries, universities, and research institutes, and threads are entering industry as a relatively well-formed whole on all fronts almost simultaneously. In 1991, no major commercial operating system contained a robust user-level threads library. By 1999, every major player in the computer industry had one.

Some of the motivation for this emergence can be ascribed to general good sense and the recognition of a technology whose time has come. Some can be related to the unification efforts surrounding UNIX. Probably the greatest push, especially when viewed from the point of view of the independent software vendor (ISV) and the end user, is the emergence of shared memory symmetric multiprocessors (SMPs). MT provides exactly the right programming paradigm to make maximum use of these new machines.

Java was designed from the very beginning with threads in mind, and some of its functionality is based very directly on having threads. The ability to have applets rests on allowing them to run in different threads within a browser. Because of Java's high-level approach to programming, it is much easier to build a threaded program in Java than in POSIX or Win32. At the same time, the fundamental issues do not change. This may well lure many programmers into writing threaded programs before they truly understand all of the intricacies. Oh, well.

The threading models we describe are strictly software models that can be implemented on any general-purpose hardware. Much research is directed toward creating better hardware that would be uniquely suited for threaded programming. We do not address that aspect in this book.

To those of us concerned with the theoretical underpinnings of programming paradigms and language design, the true value of multithreading is significant and obvious: it provides a far superior paradigm for constructing programs. For those concerned with the practical details of getting real tasks done using computers, the value is significant and obvious as well: multithreading makes it possible to obtain vastly greater performance than was ever before possible by taking advantage of multiprocessor machines. At whatever price point, the purchasers of workstations want maximum performance from their machines. The demands of computationally intensive users are always growing, and they invariably exceed the provisions of their wallets. They might want a "personal Cray," but they can't afford one. One of the solutions to this demand lies in the ever-increasing performance of CPUs.
Along with the obvious technique of increasing the clock speed, a wide range of other methods is used to increase the performance of individual CPUs. The use of long instruction pipelines or superscalar techniques has allowed us to produce multiple-instruction-issue machines that can do a lot more in a single clock tick. Finer compiler optimization techniques, out-of-order execution, predictive branching, VLIW, etc., allow us to obtain better and better performance from processors.

However good these methods are, they still have their limits. One of the major limiting factors is the problem of limited bus, memory, and peripheral speeds. We can build CPUs today that operate at 600 MHz, but we can't build communications buses that operate at the same speed. RAM speeds are also falling further behind the demands of the CPUs. It is expensive to build 600-MHz CPUs, but as there are only a few in a system, it is affordable. To build memory that can keep up with these speeds would be prohibitively expensive. A great many machines today implement two- and even three-level caches to deal with this problem (single-level caches weren't enough!). Multilevel caches work effectively with well-behaved programs, where sequential data and instruction references are likely to be physically adjacent in memory. But truly random-access programs wreak havoc on this scheme, and we can point to any number of programs that run faster on slower machines that lack that second-level cache.

None of the issues addressed above play favorites with any manufacturers. Sun, Intel, HP, IBM, SGI, DEC, etc., have all come up with techniques for dealing with them. Some techniques have proven to be more effective than others, but none of them avoids the fundamental limitations of physics. Nature is a harsh mistress.

This is where SMP comes into play. It is one more weapon in our arsenal for performance. Just as the foregoing techniques have allowed us to increase our single-CPU performance, SMP allows us to increase our overall system performance. And that's what we really care about: overall system performance. As one customer put it, "SMP, superscalar—buzzwords! I don't care if you have little green men inside the box! I want my program to run faster!"

We can build 64-processor machines today (e.g., the Cray CS6400) that will yield 64 times the performance of a single-processor machine (on some problems). The cost of that 64-CPU machine is a fraction of the cost of 64 single-processor machines. In a 64-way SMP machine, all 64 processors share the system costs: chassis, main memory, disks, software, etc. With 64 uniprocessors, each processor must have its own chassis, memory, etc. This fact makes SMP highly attractive for its price/performance ratio. An additional attraction of SMP is that it is also possible to purchase a machine with a small number of CPUs and add more CPUs as demands (and budgets) increase. These advantages of SMP are illustrated in Figure 1-1.

The economics of purchasing an SMP machine are pretty much the same as the economics of purchasing any machine. There are some extra unknowns ("I have 600 different applications that I run from time to time; how much faster will they all run? How much time will I save in a day?"), but if we focus on the primary applications in use, we can get reasonable data upon which to make our decisions. The basic question is, "If my applications run an average of N% faster on a dual-CPU machine that costs M% more, is it worth it?" Only you (or your customers) can answer this question, but we can give you some generalities.

Here is a typical situation: the customer's major application is MARC Analysis's MARC Solver (for circuit simulation). The MARC Solver runs about 80% faster on a dual-processor SPARCstation™ 20 than it does on a single-processor SPARCstation 20. The single-processor machine costs $16,000; the dual-processor unit costs $18,000 (about 12% more). If the designers (who cost at least $100,000/year) are constantly waiting for the solver to complete its runs, is it worth it? Obviously, yes. You will save a lot of money on a minor investment. Indeed, MARC sells very well on SMP machines.

If you are a program developer (either in-house or an ISV), your question is going to be, "Should I spend the time to write my program so that it will take advantage of SMP machines?" (This probably means threading, although there are other possibilities.) Your answer will be related to your anticipated sales. If your program runs 50% faster on a dual-processor machine, will your customers buy SMP machines and more of your software? Or, to pose the question differently, if you don't do it, will some competitor do it instead and steal your customers? The answer depends upon your program. If you write a simple text editor that is never CPU-bound, the answer is a clear "no." If you write a database that is always CPU-bound, it's "yes." If you write a page-layout program that is sometimes CPU-bound, the answer is "maybe." In general, if users ever have to wait for your program, you should be looking at threading and SMP.

But there is more value to threading than just SMP performance. In many instances, uniprocessors will also experience a significant performance improvement. And that bit about programming paradigms? It really does count. Being able to write simpler, more readable code helps you in almost all aspects of development. Your code can be less buggy, get out there faster, and be easier to maintain. Multithreading is not a magic bullet for all your ills,[1] and it does introduce a new set of programming issues that must be mastered, but it goes a long way toward making your work easier and your programs more efficient.


What Is a Thread? 

Just as multitasking operating systems can do more than one thing concurrently by running more than a single process, a process can do the same by running more than a single thread. Each thread is a different stream of control that can execute its instructions independently, allowing a multithreaded process to perform numerous tasks concurrently. One thread can run the GUI while a second thread does some I/O and a third performs calculations.

A thread is an abstract concept that comprises everything a computer does in executing a traditional program. It is the program state that gets scheduled on a CPU; it is the "thing" that does the work. If a process comprises data, code, kernel state, and a set of CPU registers, then a thread is embodied in the contents of those registers—the program counter, the general registers, the stack pointer, etc.—and the stack. A thread, viewed at an instant of time, is the state of the computation.

"Gee," you say, "That sounds like a process!" It should. They are conceptually related. But a process is a heavyweight, kernel-level entity and includes such things as a virtual memory map, file descriptors, user ID, etc., and each process has its own collection of these. The only way for your program to access data in the process structure, to query or change its state, is via a system call. All parts of the process structure are in kernel space (Figure 2-4). A user program cannot touch any of that data directly. By contrast, all of the user code (functions, procedures, etc.), along with the data, is in user space and can be accessed directly.
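To make the "different streams of control" concrete, here is a minimal Java sketch (not part of the original text; the class name and message strings are illustrative only) in which one process starts three threads that loosely mirror the GUI, I/O, and calculation example above.

public class ThreeStreams {
    public static void main(String[] args) throws InterruptedException {
        // Three independent streams of control inside a single process.
        Thread gui  = new Thread(() -> System.out.println("updating the GUI"));
        Thread io   = new Thread(() -> System.out.println("doing some I/O"));
        Thread calc = new Thread(() -> System.out.println("performing calculations"));

        gui.start();
        io.start();
        calc.start();

        // Wait for all three threads to finish before the process exits.
        gui.join();
        io.join();
        calc.join();
    }
}

The three messages may appear in any order, which is exactly the point: each thread executes its instructions independently of the others.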

A thread is a lightweight entity, comprising the registers, stack, and some other data. The rest of the process structure is shared by all threads: the address space, file descriptors, etc. Much (and sometimes all) of the thread structure is in user space, allowing for very fast access. The actual code (functions, routines, signal handlers, etc.) is global, and it can be executed on any thread. In Figure 2-4 we show three threads (T1, T2, and T3), along with their stacks, stack pointers (SP), and program counters (PC). T1 and T2 are executing the same function. This is a normal situation, just as two different people can read the same road sign at the same time.

All threads in a process share the state of that process (Figure 2-5[2]). They reside in exactly the same memory space, see the same functions, and see the same data. When one thread alters a process variable (say, the working directory), all the others will see the change when they next access it. If one thread opens a file to read it, all the other threads can also read from it.
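As a hedged illustration of the shared-state point above (the "working directory" example), the following sketch uses an ordinary static field that every thread in the process can see. The field name and values are hypothetical, and volatile is used only so the write is guaranteed to become visible to the reading thread.

public class SharedProcessState {
    // Plays the role of a process-wide variable such as the working directory.
    static volatile String workingDirectory = "/tmp";

    public static void main(String[] args) throws InterruptedException {
        Thread writer = new Thread(() -> workingDirectory = "/home/user");
        Thread reader = new Thread(() ->
                System.out.println("reader sees: " + workingDirectory));

        writer.start();
        writer.join();   // let the change happen first, purely for the demo
        reader.start();
        reader.join();   // prints "reader sees: /home/user"
    }
}

Both threads reside in the same address space, so no copying or message passing is needed for the reader to observe the writer's change.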

Let's consider a human analogy: a bank. A bank with one person working in it (a traditional process) has lots of "bank stuff," such as desks and chairs, a vault, and teller stations (process tables and variables). There are lots of services that a bank provides: checking accounts, loans, savings accounts, etc. (the functions). With one person to do all the work, that person would have to know how to do everything, and could do so, but it might take a bit of extra time to switch among the various tasks. With two or more people (threads), they would share all the same "bank stuff," but they could specialize in their different functions. And if they all came in and worked on the same day, lots of customers could get serviced quickly. To change the number of banks in town would be a big effort (creating new processes), but to hire one new employee (creating a new thread) would be very simple. Everything that happened inside the bank, including interactions among the employees there, would be fairly simple (user-space operations among threads), whereas anything that involved the bank down the road would be much more involved (kernel-space operations between processes).

When you write a multithreaded program, 99% of your programming is identical to what it was before—you spend your efforts in getting the program to do its real work. The other 1% is spent in creating threads, arranging for different threads to coordinate their activities, dealing with thread-specific data, etc. Perhaps 0.1% of your code consists of calls to thread functions.
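The "other 1%" mentioned above (creating threads, coordinating them, and keeping thread-specific data) can be sketched in a few lines of Java. This is an illustrative example rather than anything from the original text; ThreadLocal stands in for thread-specific data, and the names are hypothetical.

public class TellerThreads {
    // Each thread gets its own copy of this value (thread-specific data).
    static final ThreadLocal<String> tellerName =
            ThreadLocal.withInitial(() -> "unassigned");

    public static void main(String[] args) throws InterruptedException {
        Runnable teller = () -> {
            tellerName.set(Thread.currentThread().getName());
            System.out.println("serving customers as " + tellerName.get());
        };

        // "Hiring two new employees" is just two thread creations.
        Thread t1 = new Thread(teller, "teller-1");
        Thread t2 = new Thread(teller, "teller-2");
        t1.start();
        t2.start();

        // Coordinate: wait until both tellers have finished for the day.
        t1.join();
        t2.join();
    }
}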

Kernel Interaction

We've now covered the basic concept of threads at the user level. As noted, the concepts and most of the implementational aspects are valid for all thread models. What's missing is the definition of the relationship between threads and the operating systems. How do system calls work? How are threads scheduled on CPUs? It is at this level that the various implementations differ significantly. The operating systems provide different system calls, and even identical system calls can differ widely in efficiency and robustness. The kernels are constructed differently and provide different resources and services. Keep in mind as we go through this implementation aspect that 99% of your threads programming will be done above this level, and the major distinctions will be in the area of efficiency. 

Concurrency vs. Parallelism 

Concurrency means that two or more threads (or traditional processes) can be in the middle of executing code at the same time; it could be the same code or it could be different code (see Figure 2-6). The threads may or may not actually be executing at the same instant; rather, each is in the middle of its execution (i.e., one started executing, it was interrupted, and the other one started). Every multitasking operating system has always had numerous concurrent processes, even though only one could be on the CPU at any given time.

Parallelism means that two or more threads actually run at the same time on different CPUs (see Figure 2-7). On a multiprocessor machine, many different threads can run in parallel. They are, of course, also running concurrently.
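As a rough Java sketch of this distinction (not from the original text; the pool size and task count are arbitrary choices), the following fragment starts more concurrent tasks than there are CPUs: all eight tasks are concurrent, but at most availableProcessors() of them can ever run in parallel.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ConcurrencyVsParallelism {
    public static void main(String[] args) throws InterruptedException {
        int cpus = Runtime.getRuntime().availableProcessors();
        System.out.println("CPUs available for parallel execution: " + cpus);

        // Eight concurrent tasks; true parallelism is limited by 'cpus'.
        ExecutorService pool = Executors.newFixedThreadPool(cpus);
        for (int i = 0; i < 8; i++) {
            final int id = i;
            pool.submit(() -> System.out.println(
                    "task " + id + " running on " + Thread.currentThread().getName()));
        }

        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
    }
}

On a uniprocessor the same program is still correct; its tasks are merely concurrent rather than parallel.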

The vast majority of timing and synchronization issues in multithreading (MT) are those of concurrency, not parallelism. Indeed, the threads model was designed to avoid your ever having to be concerned with the details of parallelism. Running an MT program on a uniprocessor (UP) does not simplify your programming problems at all. Running on a multiprocessor (MP) doesn't complicate them. This is a good thing. Let us repeat this point. If your program is written correctly on a uniprocessor, it will run correctly on a multiprocessor. The probability of running into a race condition is the same on both a UP and an MP. If it deadlocks on one, it will deadlock on the other. (There are lots of weird little exceptions to the probability part, but you'd have to try hard to make them appear.) There is a small set of bugs, however, which may cause a program to run as (naively) expected on a UP, and show its problems only on an MP (see Bus Architectures). 
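The race-condition claim above can be demonstrated with a small, hedged Java example (class and field names are illustrative). The unsynchronized counter is equally wrong on a uniprocessor and on a multiprocessor; the synchronized counter is correct on both.

public class RaceConditionDemo {
    static int unsafeCount = 0;              // incremented without any locking
    static int safeCount = 0;                // incremented under a lock
    static final Object lock = new Object();

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                unsafeCount++;                        // racy read-modify-write
                synchronized (lock) { safeCount++; }  // correct on UP and MP
            }
        };

        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start();
        t2.start();
        t1.join();
        t2.join();

        System.out.println("unsafe count: " + unsafeCount + " (often less than 200000)");
        System.out.println("safe count:   " + safeCount + " (always 200000)");
    }
}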
