Concurrent Programming

    Concurrency is naturally divided into instruction level, statement level (executing two or more statements simultaneously), program unit level (executing two or more subprogram units simultaneously), and program level (executing two or more programs simultaneously). Here we discuss only unit-level and statement-level concurrency.

    Concurrent execution of program units can occur either physically on separate processors or logically, in some time-sliced fashion, on a single-processor computer system.

    A task is a process (or thread) running on a processor. A task can communicate with other tasks through shared variables or through message passing. Because tasks often work together to create simulations or solve problems, they must use some form of communication to synchronize their executions, to share data, or both.

    Synchronization is a mechanism that controls the order in which tasks execute. Two kinds of synchronization are required when tasks share data: cooperation and competition. Cooperation synchronization is required between tasks A and B when task A must wait for task B to complete some specific activity before task A can continue its execution. Competition synchronization is required between two tasks when both require the use of some resource that cannot be used simultaneously.
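    To see why competition synchronization matters, here is a minimal Java sketch (the class name, counter, and iteration count are purely illustrative) in which two tasks update a shared variable with no mutual exclusion, so increments are lost:

        public class RaceDemo {
            static int counter = 0;  // shared resource with no protection

            public static void main(String[] args) throws InterruptedException {
                Runnable body = () -> {
                    for (int i = 0; i < 100_000; i++) {
                        counter++;  // read-modify-write; not atomic
                    }
                };
                Thread a = new Thread(body), b = new Thread(body);
                a.start(); b.start();
                a.join(); b.join();
                System.out.println(counter);  // usually < 200000: updates were lost
            }
        }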

    The methods of providing for mutually exclusive access to a shared resource are semaphores, monitors and message passing.

Semaphore:

    A semaphore is a data structure consisting of an integer and a queue that stores task descriptors. The only two operations provided for semaphores are P (wait) and V (release): P decrements the counter, blocking the caller when it is already zero, and V increments the counter or wakes a waiting task.
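    Java's java.util.concurrent.Semaphore exposes the same pair of operations as acquire() (P) and release() (V). A sketch of competition synchronization that repairs the lost-update example above (names and counts are again illustrative):

        import java.util.concurrent.Semaphore;

        public class SemaphoreDemo {
            static int counter = 0;
            static final Semaphore mutex = new Semaphore(1);  // binary semaphore: one permit

            public static void main(String[] args) throws InterruptedException {
                Runnable body = () -> {
                    for (int i = 0; i < 100_000; i++) {
                        try {
                            mutex.acquire();      // P: take the permit or block until it is free
                            try {
                                counter++;        // critical section
                            } finally {
                                mutex.release();  // V: return the permit, possibly waking a waiter
                            }
                        } catch (InterruptedException e) {
                            Thread.currentThread().interrupt();
                            return;
                        }
                    }
                };
                Thread a = new Thread(body), b = new Thread(body);
                a.start(); b.start();
                a.join(); b.join();
                System.out.println(counter);  // always 200000 with the semaphore in place
            }
        }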

Monitors:

    Monitors use the concept of data abstraction: they encapsulate a shared data structure together with its operations and hide its representation, that is, they make shared data structures abstract data types. One of the most important features of monitors is that the shared data is resident in the monitor rather than in any of the client units. Because all access code is inside the monitor, the monitor implementation can guarantee synchronized access simply by allowing only one access at a time.
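    Java's synchronized methods turn a class into a monitor. Below is a sketch of a bounded-buffer monitor (the class name and capacity are illustrative): synchronized provides competition synchronization by admitting one task at a time, while wait and notifyAll provide cooperation synchronization between producers and consumers:

        import java.util.ArrayDeque;
        import java.util.Deque;

        // The shared data lives inside the monitor; clients see only put/take.
        class BoundedBuffer {
            private final Deque<Integer> items = new ArrayDeque<>();
            private final int capacity = 10;

            public synchronized void put(int x) throws InterruptedException {
                while (items.size() == capacity) {
                    wait();                 // cooperate: wait until there is room
                }
                items.addLast(x);
                notifyAll();                // wake consumers blocked in take()
            }

            public synchronized int take() throws InterruptedException {
                while (items.isEmpty()) {
                    wait();                 // cooperate: wait until an item arrives
                }
                int x = items.removeFirst();
                notifyAll();                // wake producers blocked in put()
                return x;
            }
        }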

Message Passing:

    Message passing is often used in distributed computing systems; two tasks can exchange information through send and receive operations. Message passing can also be used to synchronize tasks.
There are two types of message passing: synchronous, in which the sender blocks until the receiver accepts the message (a rendezvous), and asynchronous, in which the sender continues without waiting for the message to be received.
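    A Java sketch of both styles, approximating message channels with blocking queues (the message strings and thread bodies are illustrative): a SynchronousQueue behaves like a synchronous channel, since put() blocks until a matching take(), while a LinkedBlockingQueue behaves like an asynchronous mailbox, since put() deposits the message and returns at once:

        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.LinkedBlockingQueue;
        import java.util.concurrent.SynchronousQueue;

        public class MessageDemo {
            public static void main(String[] args) throws InterruptedException {
                // Synchronous: the sender's put() blocks until a receiver takes.
                BlockingQueue<String> rendezvous = new SynchronousQueue<>();
                // Asynchronous: put() deposits the message and returns at once.
                BlockingQueue<String> mailbox = new LinkedBlockingQueue<>();

                Thread receiver = new Thread(() -> {
                    try {
                        System.out.println("got: " + rendezvous.take());
                        System.out.println("got: " + mailbox.take());
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                });
                receiver.start();

                rendezvous.put("hello (synchronous)");  // blocks until the receiver's take()
                mailbox.put("hello (asynchronous)");    // returns immediately
                receiver.join();
            }
        }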

Evaluation

    Using semaphores to provide cooperation and competition synchronization creates an unsafe programming environment: there is no way to statically check for the correctness of their use, and a single misplaced or omitted P or V can deadlock the program or corrupt the shared data.
    Monitors are a better way to provide competition synchronization than semaphores, but using queues to provide cooperation synchronization through wait and notify is subject to the same problems as semaphores.
    Message passing is a slightly better model for both cooperation and competition synchronization than semaphores and monitors.

Statement level concurrency

    Statement level concurrency is largely a matter of specifying how data should be distributed over multiple memories and which statements can be executed concurrently.

    HPF (High Performance Fortran) is a collection of extensions to Fortran 90 that allow programmers to supply information to the compiler to help it optimize the execution of programs on multiprocessor computers. The programmer can specify, among other things, the number of processors and the distribution of data over the memories of those processors; the compiler uses this information to map the program onto a multiprocessor architecture.
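    A sketch of the flavor of these directives (array names and sizes are illustrative; the snippet is HPF rather than the Java used in the sketches above, since the directives themselves are the point here):

        ! Ask for eight logical processors and spread a over their memories.
        !HPF$ PROCESSORS procs(8)
              REAL a(1000), b(1000)
              INTEGER i
        !HPF$ DISTRIBUTE a(BLOCK) ONTO procs
        ! Keep each b(i) on the same processor as a(i).
        !HPF$ ALIGN b(:) WITH a(:)
        ! Element-wise assignment the compiler may execute in parallel.
              FORALL (i = 1:1000) a(i) = a(i) + b(i)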