
    Operating Systems: Threads

    Threads

    References:

Abraham Silberschatz, Peter Baer Galvin, and Greg Gagne, "Operating System Concepts, Ninth Edition", Chapter 4

    4.1 Overview

A thread is a basic unit of CPU utilization, consisting of a program counter, a stack, a set of registers, and a thread ID.

    Traditional ( heavyweight ) processes have a single thread of control - There is one program counter, and one sequence of instructions that can be carried out at any given time.

    As shown in Figure 4.1, multi-threaded applications have multiple threads within a single process, each having their own program counter, stack and set of registers, but sharing common code, data, and certain structures such as open files.

    Figure 4.1 - Single-threaded and multithreaded processes

    4.1.1 Motivation

Threads are very useful in modern programming whenever a process has multiple tasks that can be performed independently of one another.

    This is particularly true when one of the tasks may block, and it is desired to allow the other tasks to proceed without blocking.

For example, in a word processor, a background thread may check spelling and grammar while a foreground thread processes user input ( keystrokes ), a third thread loads images from the hard drive, and a fourth does periodic automatic backups of the file being edited.

    Another example is a web server - Multiple threads allow for multiple requests to be satisfied simultaneously, without having to service requests sequentially or to fork off separate processes for every incoming request. ( The latter is how this sort of thing was done before the concept of threads was developed. A daemon would listen at a port, fork off a child for every incoming request to be processed, and then go back to listening to the port. )

    Figure 4.2 - Multithreaded server architecture

    4.1.2 Benefits

    There are four major categories of benefits to multi-threading:

    Responsiveness - One thread may provide rapid response while other threads are blocked or slowed down doing intensive calculations.

    Resource sharing - By default threads share common code, data, and other resources, which allows multiple tasks to be performed simultaneously in a single address space.

    Economy - Creating and managing threads ( and context switches between them ) is much faster than performing the same tasks for processes.

    Scalability, i.e. Utilization of multiprocessor architectures - A single threaded process can only run on one CPU, no matter how many may be available, whereas the execution of a multi-threaded application may be split amongst available processors. ( Note that single threaded processes can still benefit from multi-processor architectures when there are multiple processes contending for the CPU, i.e. when the load average is above some certain threshold. )

    4.2 Multicore Programming

A recent trend in computer architecture is to produce chips with multiple cores, i.e. multiple CPUs on a single chip.

    A multi-threaded application running on a traditional single-core chip would have to interleave the threads, as shown in Figure 4.3. On a multi-core chip, however, the threads could be spread across the available cores, allowing true parallel processing, as shown in Figure 4.4.

    Figure 4.3 - Concurrent execution on a single-core system.

    Figure 4.4 - Parallel execution on a multicore system

    For operating systems, multi-core chips require new scheduling algorithms to make better use of the multiple cores available.

    As multi-threading becomes more pervasive and more important ( thousands instead of tens of threads ), CPUs have been developed to support more simultaneous threads per core in hardware.

4.2.1 Programming Challenges

    For application programmers, there are five areas where multi-core chips present new challenges:

Identifying tasks - Examining applications to find activities that can be performed concurrently.

Balance - Finding tasks to run concurrently that provide equal value, i.e. not wasting a thread on trivial tasks.

Data splitting - Dividing the data so that the threads do not interfere with one another.

Data dependency - If one task depends on the results of another, then the tasks need to be synchronized to ensure access in the proper order.

Testing and debugging - Inherently more difficult in parallel processing situations, as race conditions become much more complex and difficult to identify.

4.2.2 Types of Parallelism

    In theory there are two different ways to parallelize the workload:

Data parallelism divides the data up amongst multiple cores ( threads ), and performs the same task on each subset of the data. For example, dividing a large image up into pieces and performing the same digital image processing on each piece on different cores.

Source : www.cs.uic.edu

    CL Interface Multithreading : NAG Library CL Interface, Mark 27

    NAG Library Manual, Mark 27


    Multithreading

Contents

1  Thread Safety
2  Parallelism
3  Multithreaded Functions
4  References

1 Thread Safety

In multithreaded applications, each thread in a team processes instructions independently while sharing the same memory address space. For these applications to operate correctly, any functions called from them must be thread safe: that is, any global variables they contain must be guaranteed not to be accessed simultaneously by different threads, as this can compromise results. This can be ensured through appropriate synchronization, such as that found in OpenMP.

    When a function is described as thread safe we are considering its behaviour when it is called by multiple threads. It is worth noting that a thread unsafe function can still, itself, be multithreaded. A team of threads can be created inside the function to share the workload as described in Section 2.

    The NAG CL Interface is thread safe by design: the functions do not use global variables and all communication between them is via argument lists, and thus can be safely called simultaneously by multiple threads in your program.

1.1 Functions with Function Arguments

    Some Library functions require you to supply a function and to pass the name of the function as an actual argument in the call to the Library function. For many of these Library functions, the supplied function interface includes an array parameter (called comm) specifically for you to pass information to the supplied function without the need for global variables.

    If you need to provide your supplied function with more information than can be given via the interface argument list, then you are advised to check, in the relevant Chapter Introduction, whether the Library function you intend to call has an equivalent reverse communication interface. These have been designed specifically for problems where user-supplied function interfaces are not flexible enough for a given problem, and their use should eliminate the need to provide data through global variables. Where reverse communication interfaces are not available, it is usual to use global variables containing the required data that is accessible from both the supplied function and from the calling program. It is thread safe to do this only if any global data referenced is made threadprivate by OpenMP or is updated using appropriate synchronisation, thus avoiding the possibility of simultaneous modification by different threads.

    Thread safety of user-supplied functions is also an issue with a number of functions in multi-threaded implementations of the NAG Library, which may internally parallelize around the calls to the user-supplied functions. This issue affects not just global variables but also how the comm array may be used. In these cases, synchronisation may be needed to ensure thread safety. Chapter X06 provides functions which can be used in your supplied function to determine whether it is being called from within an OpenMP parallel region. If you are in doubt over the thread safety of your program you are advised to contact NAG for assistance.

    1.2 Input/Output

When using the NAG CL Interface in multi-threaded applications, we recommend that error-message output from its error mechanism is switched off (by setting fail.print = Nag_FALSE).

1.3 Implementation Issues

    In very rare cases we are unable to guarantee the thread safety of a particular specific implementation. Note also that in some implementations, the Library is linked with one or more vendor libraries to provide, for example, efficient BLAS functions. NAG cannot guarantee that any such vendor library is thread safe. Please consult the Users' Note for your implementation for any additional implementation-specific information.

2 Parallelism

2.1 Introduction

The time taken to execute a function from the NAG Library has traditionally depended, to a large degree, on the serial performance capabilities of the processor being used. In an effort to go beyond the performance limitations of a single core, multithreaded implementations of the NAG Library are available. These implementations divide the computational workload of some functions between multiple cores and execute these tasks in parallel. Traditionally, computer systems consisted of a small number of processors, each with a single core, and improvements in their performance went hand in hand with increases in clock frequency. However, clock frequencies reached a limit, which meant that processor designers had to find another way to improve performance; this led to the development of multicore processors, which are now ubiquitous. Instead of a single compute core, multicore processors consist of two or more cores, each typically comprising at least a Central Processing Unit and a small cache. Making effective use of parallelism, wherever possible, has thus become imperative in order to maximize the performance potential of modern hardware, and this is what the multithreaded implementations of the Library aim to do.

The effectiveness of parallelism can be measured by how much faster a parallel program is compared to an equivalent serial program. This is called the parallel speedup. If a serial program has been parallelized, then the speedup of the parallel implementation of the program is defined by dividing the time taken by the original serial program on a given problem by the time taken by the parallel program using n cores to compute the same problem. Ideal speedup is obtained when this value is n (i.e., when the parallel program takes 1/n of the time of the serial program).
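The speedup definition above can be restated compactly (this is a restatement of the paragraph, not notation taken from the NAG text):

```latex
S(n) \;=\; \frac{T_{\mathrm{serial}}}{T_{\mathrm{parallel}}(n)},
\qquad
\text{ideal speedup: } S(n) = n
\;\Longleftrightarrow\;
T_{\mathrm{parallel}}(n) = \frac{T_{\mathrm{serial}}}{n}.
```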

Source : www.nag.com

    Linux Tutorial: POSIX Threads


    POSIX thread (pthread) libraries

The POSIX thread libraries are a standards-based thread API for C/C++ that allows one to spawn a new concurrent flow of control. It is most effective on multi-processor or multi-core systems, where the flow can be scheduled to run on another processor, gaining speed through parallel or distributed processing.

Threads require less overhead than "forking" or spawning a new process, because the system does not initialize a new virtual memory space and environment for them. While most effective on a multiprocessor system, gains are also found on uniprocessor systems that exploit latency in I/O and other system functions which may halt process execution: one thread may execute while another is waiting for I/O or some other system latency.

Parallel programming technologies such as MPI and PVM are used in distributed computing environments, while threads are limited to a single computer system. All threads within a process share the same address space. A thread is spawned by defining a function and its arguments, which will be processed in the thread. The purpose of using the POSIX thread library in your software is to execute software faster.

    Table of Contents:

    # Thread Basics

    # Thread Creation and Termination

    # Thread Synchronization

# Thread Scheduling

# Thread Pitfalls

# Thread Debugging

# Thread Man Pages

# Links

# Books

    Thread Basics:

Thread operations include thread creation, termination, synchronization (joins, blocking), scheduling, data management and process interaction.

    A thread does not maintain a list of created threads, nor does it know the thread that created it.

    All threads within a process share the same address space.

    Threads in the same process share:

    Process instructions

    Most data

    open files (descriptors)

    signals and signal handlers

    current working directory

    User and group id

    Each thread has a unique:

    Thread ID

    set of registers, stack pointer

    stack for local variables, return addresses

signal mask

priority

Return value: errno

    pthread functions return "0" if OK.

Thread Creation and Termination:

Example: pthread1.c

#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>

    void *print_message_function( void *ptr );

int main() {

    pthread_t thread1, thread2;

    char *message1 = "Thread 1";

    char *message2 = "Thread 2";

    int iret1, iret2;

    /* Create independent threads each of which will execute function */

    iret1 = pthread_create( &thread1, NULL, print_message_function, (void*) message1);

    iret2 = pthread_create( &thread2, NULL, print_message_function, (void*) message2);

    /* Wait till threads are complete before main continues. Unless we */

    /* wait we run the risk of executing an exit which will terminate */

    /* the process and all threads before the threads have completed. */

    pthread_join( thread1, NULL);

    pthread_join( thread2, NULL);

    printf("Thread 1 returns: %d\n",iret1);

    printf("Thread 2 returns: %d\n",iret2);

exit(0);

}

    void *print_message_function( void *ptr )

{

char *message;

message = (char *) ptr;

printf("%s \n", message);

return NULL;

}

Compile:

C compiler: cc pthread1.c -lpthread

    or

C++ compiler: g++ pthread1.c -lpthread

Run: ./a.out

Results:

Thread 1
Thread 2
Thread 1 returns: 0
Thread 2 returns: 0

( The order of the two thread messages may vary between runs, since the threads run concurrently. )

Details:

    In this example the same function is used in each thread. The arguments are different. The functions need not be the same.

    Threads terminate by explicitly calling pthread_exit, by letting the function return, or by a call to the function exit which will terminate the process including any threads.

    Function call: pthread_create

int pthread_create(pthread_t *thread,
                   const pthread_attr_t *attr,
                   void *(*start_routine)(void *),
                   void *arg);

Arguments:

    thread - returns the thread id. (unsigned long int defined in bits/pthreadtypes.h)

    attr - Set to NULL if default thread attributes are used. (else define members of the struct pthread_attr_t defined in bits/pthreadtypes.h) Attributes include:

    detached state (joinable? Default: PTHREAD_CREATE_JOINABLE. Other option: PTHREAD_CREATE_DETACHED)

scheduling policy (real-time? SCHED_OTHER, SCHED_FIFO, SCHED_RR)

    scheduling parameter

    inheritsched attribute (Default: PTHREAD_EXPLICIT_SCHED Inherit from parent thread: PTHREAD_INHERIT_SCHED)

    scope (Kernel threads: PTHREAD_SCOPE_SYSTEM User threads: PTHREAD_SCOPE_PROCESS Pick one or the other not both.)

    guard size

    stack address (See unistd.h and bits/posix_opt.h _POSIX_THREAD_ATTR_STACKADDR)

stack size (default minimum PTHREAD_STACK_MIN, defined in limits.h),

    void * (*start_routine) - pointer to the function to be threaded. Function has a single argument: pointer to void.

    *arg - pointer to argument of function. To pass multiple arguments, send a pointer to a structure.

    Function call: pthread_exit

    void pthread_exit(void *retval);

    Arguments:

    retval - Return value of thread.

This routine terminates the calling thread. The pthread_exit function never returns. If the thread is not detached, the thread id and return value may be examined from another thread by using pthread_join.

Source : www.cs.cmu.edu
