Routh, in Matrix Algorithms in MATLAB, 2016

5.7.2 Parallel Computations

Because of the need to solve very large-scale sparse systems of linear equations, parallel computations are becoming increasingly important. Parallel computations intrinsically depend on the hardware configuration: shared memory, distributed memory, etc. Fortunately, the emergence of the Message Passing Interface (MPI) and a number of libraries handling data communication between different processors, computers, or clusters of computers hides the complexity of the hardware configuration from the implementation of parallel computations.

The efficiency of parallel computations depends on four factors: the distribution of computations among different processors, the processing speed of each processor, the amount of data that must be transmitted between different processors, and the speed of data communication between different processors. In an algorithm, parallelism can be extracted at different levels and in different areas of the algorithm. Sometimes the usual algorithm, suitable for sequential computation, must be reordered in order to reveal more parallelism. Usually the parallelism in an algorithm can be revealed by the standard topological sorting algorithm from graph theory. The parallel computation of many elementary matrix operations is supported directly by computer hardware.

In Ax = b, if A is sparse, several groups of x may be decoupled from each other. These groups of x can then be solved in parallel. To reveal the independence of the unknowns, the standard coloring algorithm from graph theory can be used. See the algorithms maximalIndependent_.m and maximalIndependents_.m presented in Section 1.5.

The Jacobi iteration can easily be made to run in parallel; in fact, it is similar to the matrix-vector product discussed above. However, after each Jacobi iteration, the most recent iterate x_k(i1:i2) held by each processor is broadcast to all the other processors. The Gauss-Seidel iteration requires more data communication: within each iteration it needs to send the data x(1:j) to other processors, which is used to solve x(k), k > j.
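To make the coloring idea concrete, the following is a minimal MATLAB sketch of a greedy coloring of the adjacency graph of a sparse matrix A. It is not the book's maximalIndependent_.m, and the function name greedyColoring is purely illustrative. Unknowns that receive the same color are not directly coupled in A and can therefore be updated simultaneously.

```matlab
% Minimal sketch (not the book's maximalIndependent_.m): greedy coloring of
% the adjacency graph of a sparse matrix A. Unknowns sharing a color have no
% direct coupling in A and can be updated in parallel.
function color = greedyColoring(A)
    n = size(A, 1);
    color = zeros(n, 1);                 % 0 means "not yet colored"
    for i = 1:n
        nbr = find(A(i, :));             % unknowns coupled to x(i)
        used = color(nbr);               % colors already taken by neighbors
        c = 1;
        while any(used == c)             % smallest color not used by a neighbor
            c = c + 1;
        end
        color(i) = c;
    end
end
```

A group of same-colored unknowns can then be gathered with find(color == c) and its components solved independently of one another.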
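The parallel structure of the Jacobi iteration can be made explicit with a serial sketch that partitions the rows into blocks. The partition into nproc equal blocks and the function name jacobiSweep are assumptions for illustration only. Each block update reads only the previous iterate, so the blocks could be computed on separate processors; the concatenation at the end stands in for the broadcast of x_k(i1:i2) to the other processors.

```matlab
% Minimal serial sketch of the block structure behind a parallel Jacobi sweep.
% A is n-by-n, b and x are column vectors. "Processor" p owns rows i1:i2 and
% updates x_new(i1:i2) from the previous iterate x only; gathering the pieces
% at the end plays the role of the broadcast of x_k(i1:i2).
function x = jacobiSweep(A, b, x, nproc)
    n = numel(b);
    edges = round(linspace(0, n, nproc + 1));    % block boundaries (illustrative)
    x_new = zeros(n, 1);
    for p = 1:nproc                              % each pass reads only the old x,
        i1 = edges(p) + 1;                       % so it could run on its own processor
        i2 = edges(p + 1);
        for i = i1:i2
            s = A(i, :) * x - A(i, i) * x(i);    % sum over j ~= i of A(i,j)*x(j)
            x_new(i) = (b(i) - s) / A(i, i);
        end
    end
    x = x_new;                                   % "broadcast": all see the new iterate
end
```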
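By contrast, a sketch of one Gauss-Seidel sweep (again only an illustrative implementation, not the book's code) shows the data dependence that forces the extra communication: the update of x(k) uses the values x(1:k-1) already updated in the current sweep, so a processor owning later rows must wait to receive them.

```matlab
% Minimal sketch of one Gauss-Seidel sweep. Because x is updated in place,
% A(k,:)*x picks up the new values of x(1:k-1) and the old values of
% x(k+1:n); this in-sweep dependence is what requires sending x(1:j) to the
% processors that solve x(k), k > j.
function x = gaussSeidelSweep(A, b, x)
    n = numel(b);
    for k = 1:n
        s = A(k, :) * x - A(k, k) * x(k);   % uses new x(1:k-1), old x(k+1:n)
        x(k) = (b(k) - s) / A(k, k);
    end
end
```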