JAVA MultiThread

딜레이라마 2017. 2. 8. 21:13
What Is a Thread? 

Just as multitasking operating systems can do more than one thing concurrently by running more than a single process, a process can do the same by running more than a single thread. Each thread is a different stream of control that can execute its instructions independently, allowing a multithreaded process to perform numerous tasks concurrently. One thread can run the GUI while a second thread does some I/O and a third performs calculations.

A thread is an abstract concept that comprises everything a computer does in executing a traditional program. It is the program state that gets scheduled on a CPU; it is the "thing" that does the work. If a process comprises data, code, kernel state, and a set of CPU registers, then a thread is embodied in the contents of those registers (the program counter, the general registers, the stack pointer, etc.) and the stack. A thread, viewed at an instant of time, is the state of the computation.

"Gee," you say, "that sounds like a process!" It should. They are conceptually related. But a process is a heavyweight, kernel-level entity that includes such things as a virtual memory map, file descriptors, a user ID, etc., and each process has its own collection of these. The only way for your program to access data in the process structure, to query or change its state, is via a system call.

All parts of the process structure are in kernel space. A user program cannot touch any of that data directly. By contrast, all of the user code (functions, procedures, etc.), along with the data, is in user space and can be accessed directly. 

A thread is a lightweight entity, comprising the registers, stack, and some other data. The rest of the process structure is shared by all threads: the address space, file descriptors, etc. Much (and sometimes all) of the thread structure is in user space, allowing for very fast access. The actual code (functions, routines, signal handlers, etc.) is global, and it can be executed on any thread.

In Figure 2-4 we show three threads (T1, T2, and T3), along with their stacks, stack pointers (SP), and program counters (PC). T1 and T2 are executing the same function. This is a normal situation, just as two different people can read the same road sign at the same time. All threads in a process share the state of that process. They reside in exactly the same memory space, see the same functions, and see the same data.
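To make the sharing concrete in Java, here is a minimal sketch (the class and names are illustrative, not from the text): two threads execute the same run() code and read the same field, just as T1 and T2 above execute the same function over shared process state.

// Minimal sketch: two threads executing the same code over shared data.
// The names (SharedState, reader) are illustrative assumptions.
public class SharedState {
    static String sign = "Speed Limit 55";   // data visible to every thread

    public static void main(String[] args) throws InterruptedException {
        Runnable reader = () -> {
            // Both threads run this same code and see the same field.
            System.out.println(Thread.currentThread().getName()
                    + " reads: " + sign);
        };
        Thread t1 = new Thread(reader, "T1");
        Thread t2 = new Thread(reader, "T2");
        t1.start();
        t2.start();
        t1.join();   // wait for both threads to finish
        t2.join();
    }
}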

Let's consider a human analogy: a bank. A bank with one person working in it (a traditional process) has lots of "bank stuff," such as desks and chairs, a vault, and teller stations (process tables and variables). There are lots of services that a bank provides: checking accounts, loans, savings accounts, etc. (the functions). With one person to do all the work, that person would have to know how to do everything, and could do so, but it might take a bit of extra time to switch among the various tasks. With two or more people (threads), they would share all the same "bank stuff," but they could specialize in their different functions. And if they all came in and worked on the same day, lots of customers could get serviced quickly.

To change the number of banks in town would be a big effort (creating new processes), but to hire one new employee (creating a new thread) would be very simple. Everything that happened inside the bank, including interactions among the employees there, would be fairly simple (user-space operations among threads), whereas anything that involved the bank down the road would be much more involved (kernel-space operations between processes).

When you write a multithreaded program, 99% of your programming is identical to what it was before: you spend your effort getting the program to do its real work. The other 1% is spent creating threads, arranging for different threads to coordinate their activities, dealing with thread-specific data, etc. Perhaps 0.1% of your code consists of calls to thread functions.


Kernel Interaction 

We've now covered the basic concept of threads at the user level. As noted, the concepts and most of the implementational aspects are valid for all thread models. What's missing is the definition of the relationship between threads and the operating system. How do system calls work? How are threads scheduled on CPUs? It is at this level that the various implementations differ significantly. Operating systems provide different system calls, and even identical system calls can differ widely in efficiency and robustness. The kernels are constructed differently and provide different resources and services. Keep in mind as we go through this implementation aspect that 99% of your threads programming will be done above this level, and the major distinctions will be in the area of efficiency.

Concurrency vs. Parallelism 

Concurrency means that two or more threads (or traditional processes) can be in the middle of executing code at the same time; it could be the same code or it could be different code (see Figure 2-6). The threads may or may not actually be executing at the same instant, but each is in the middle of its work (i.e., one started executing, it was interrupted, and the other one started). Every multitasking operating system has always had numerous concurrent processes, even though only one could be on the CPU at any given time.

Figure 2-6. Three Threads Running Concurrently on One CPU

Parallelism means that two or more threads actually run at the same time on different CPUs (see Figure 2-7). On a multiprocessor machine, many different threads can run in parallel. They are, of course, also running concurrently.

Figure 2-7. Three Threads Running in Parallel on Three CPUs

The vast majority of timing and synchronization issues in multithreading (MT) are those of concurrency, not parallelism. Indeed, the threads model was designed to avoid your ever having to be concerned with the details of parallelism. Running an MT program on a uniprocessor (UP) does not simplify your programming problems at all; running on a multiprocessor (MP) doesn't complicate them. This is a good thing.

Let us repeat this point. If your program is written correctly on a uniprocessor, it will run correctly on a multiprocessor. The probability of running into a race condition is the same on both a UP and an MP. If it deadlocks on one, it will deadlock on the other. (There are lots of weird little exceptions to the probability part, but you'd have to try hard to make them appear.) There is a small set of bugs, however, which may cause a program to run as (naively) expected on a UP and show its problems only on an MP (see Bus Architectures).
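As a small illustration (a sketch, not from the original text), the Java program below starts three threads running the same loop. Whether they interleave on one CPU (concurrency) or run simultaneously on several (parallelism) is decided by the scheduler; the source code is identical either way, which is exactly the point made above.

// Sketch: the same binary exhibits concurrency on one CPU and
// parallelism on several; nothing in the code changes.
public class ConcurrencyDemo {
    public static void main(String[] args) throws InterruptedException {
        System.out.println("CPUs available: "
                + Runtime.getRuntime().availableProcessors());

        Runnable work = () -> {
            for (int i = 0; i < 3; i++) {
                System.out.println(Thread.currentThread().getName() + " step " + i);
            }
        };

        Thread[] threads = new Thread[3];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(work, "T" + (i + 1));
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join();   // wait for all three to finish
        }
    }
}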


System Calls

A system call is basically a function that ends up trapping to routines in the kernel. These routines may do things as simple as looking up the user ID for the owner of the current process, or as complex as redefining the system's scheduling algorithm. For multithreaded programs, there is a serious issue surrounding how many threads can make system calls concurrently. For some operating systems, the answer is "one"; for others, it's "many." The most important point is that system calls run exactly as they did before, so all your old programs continue to run as they did before, with (almost) no degradation.


Signals 

Signals are the UNIX kernel's way of interrupting a running process and letting it know that something of interest has happened. (NT has something similar but doesn't expose it in the Win32 interface.) It could be that a timer has expired, or that some I/O has completed, or that some other process wants to communicate something. Happily, Java does not use UNIX signals, so we may conveniently ignore them entirely! The role that signals play in UNIX programs is handled in Java either by having a thread respond to a synchronous request or by the use of exceptions. 
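As a sketch of the exception-based style mentioned above (the example itself is not from the text), the closest Java analogue to a signal delivered to a blocked thread is Thread.interrupt(), which surfaces as an InterruptedException:

// Sketch: Java's analogue of interrupting a running thread. Instead of
// a UNIX signal handler, the blocked thread receives an ordinary
// exception and decides for itself how to respond.
public class InterruptDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread sleeper = new Thread(() -> {
            try {
                Thread.sleep(60_000);            // a long blocking wait
            } catch (InterruptedException e) {
                System.out.println("Interrupted; cleaning up and exiting.");
            }
        });
        sleeper.start();
        Thread.sleep(100);     // give the sleeper time to block
        sleeper.interrupt();   // the "signal": a request to wake the thread
        sleeper.join();
    }
}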

Synchronization 

Synchronization is the method of ensuring that multiple threads coordinate their activities so that one thread doesn't accidentally change data that another thread is working on. This is done by providing function calls that can limit the number of threads that can access some data concurrently. In the simplest case (a mutual exclusion lock—a mutex), only one thread at a time can execute a given piece of code. This code presumably alters some global data or performs reads or writes to a device. For example, thread T1 obtains a lock and starts to work on some global data. Thread T2 must now wait (typically, it goes to sleep) until thread T1 is done before T2 can execute the same code. By using the same lock around all code that changes the data, we can ensure that the data remains consistent. 
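In Java, the simplest mutex is the monitor lock acquired by a synchronized block. Below is a minimal sketch of the T1/T2 scenario (the Counter class and its names are illustrative): whichever thread acquires the lock first runs the guarded code; the other sleeps until the lock is released.

// Sketch of a mutex in Java: the monitor lock on `lock` ensures only
// one thread at a time executes the code that updates shared data.
public class Counter {
    private static final Object lock = new Object();
    private static int balance = 0;   // shared global data

    static void deposit(int amount) {
        synchronized (lock) {         // T2 waits here while T1 holds the lock
            balance += amount;        // read-modify-write stays consistent
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Runnable worker = () -> {
            for (int i = 0; i < 100_000; i++) deposit(1);
        };
        Thread t1 = new Thread(worker, "T1");
        Thread t2 = new Thread(worker, "T2");
        t1.start(); t2.start();
        t1.join();  t2.join();
        System.out.println("balance = " + balance);   // always 200000
    }
}

Without the synchronized block, the two read-modify-write sequences could interleave and lose updates; with it, the final balance is always 200000.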

Scheduling

Scheduling is the act of placing threads onto CPUs so that they can execute, and of taking them off those CPUs so that others can run instead. In practice, scheduling is not generally an issue because "it all works" just about the way you'd expect.

The Value of Using Threads

There is really only one reason for writing MT programs: to get better programs more quickly. If you're an Independent Software Vendor (ISV), you sell more software. If you're developing software for your own in-house use, you simply have better programs to use. The reason you can write better programs is that MT gives your programs and your programmers a number of significant advantages over nonthreaded programs and programming paradigms.

A point to keep in mind here is that you are not replacing simple, nonthreaded programs with fancy, complex, threaded programs. You are using threads only when you need them, to replace complex or slow nonthreaded programs. Threads are just one more way to make your programming tasks easier. The main benefits of writing multithreaded programs are:

• Performance gains from multiprocessing hardware (parallelism)

• Increased application throughput

• Increased application responsiveness

• Replacing process-to-process communications

• Efficient use of system resources

• One binary that runs well on both uniprocessors and multiprocessors

Parallelism 

Computers with more than one processor offer the potential for enormous application speedups (Figure 2-8). MT is an efficient way for application developers to exploit the parallelism of the hardware. Different threads can run on different processors simultaneously with no special input from the user and no effort on the part of the programmer.

Figure 2-8. Different Threads Running on Different Processors

A good example is a process that does matrix multiplication. A thread can be created for each available processor, allowing the program to use the entire machine. The threads can then compute distinct elements of the resulting matrix by performing the appropriate vector multiplication.
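Here is a minimal Java sketch of that matrix example (the row-per-thread split and all names are illustrative; the text only specifies one thread per available processor):

// Sketch: one worker thread per processor, each computing a disjoint
// set of rows of C = A * B, so no locking is needed on the result.
public class ParallelMatrix {
    public static void main(String[] args) throws InterruptedException {
        int n = 4;
        double[][] a = new double[n][n], b = new double[n][n], c = new double[n][n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++) {
                a[i][j] = i + j;
                b[i][j] = (i == j) ? 1 : 0;   // identity, so C should equal A
            }

        int nThreads = Math.min(n, Runtime.getRuntime().availableProcessors());
        Thread[] workers = new Thread[nThreads];
        for (int t = 0; t < nThreads; t++) {
            final int id = t;
            workers[t] = new Thread(() -> {
                // Each thread takes every nThreads-th row.
                for (int i = id; i < n; i += nThreads)
                    for (int j = 0; j < n; j++) {
                        double sum = 0;
                        for (int k = 0; k < n; k++) sum += a[i][k] * b[k][j];
                        c[i][j] = sum;
                    }
            });
            workers[t].start();
        }
        for (Thread w : workers) w.join();
        System.out.println(java.util.Arrays.deepToString(c));
    }
}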


Throughput 

When a traditional, single-threaded program requests a service from the operating system, it must wait for that service to complete, often leaving the CPU idle. Even on a uniprocessor, multithreading allows a process to overlap computation with one or more blocking system calls (Figure 2-9). Threads provide this overlap even though each request is coded in the usual synchronous style. The thread making the request must wait, but another thread in the process can continue. Thus, a process can have numerous blocking requests outstanding, giving you the beneficial effects of doing asynchronous I/O while still writing code in the simpler synchronous fashion. 
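To see the overlap concretely, here is a sketch (not from the text) in which Thread.sleep stands in for a blocking system call. Three synchronous requests issued from three threads complete in roughly the time of one, not the sum of all three.

// Sketch: overlapping blocking requests. Each thread issues a
// synchronous "request" (simulated by sleep); the three waits overlap,
// so total elapsed time is about 1 second rather than 3.
public class ThroughputDemo {
    public static void main(String[] args) throws InterruptedException {
        long start = System.currentTimeMillis();

        Thread[] requests = new Thread[3];
        for (int i = 0; i < requests.length; i++) {
            requests[i] = new Thread(() -> {
                try {
                    Thread.sleep(1000);   // stand-in for a blocking system call
                } catch (InterruptedException ignored) { }
            });
            requests[i].start();
        }
        for (Thread t : requests) t.join();

        System.out.println("All three requests done in "
                + (System.currentTimeMillis() - start) + " ms");
    }
}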

