ABSTRACT
Large, sparse linear systems of equations arise frequently when constructing mathematical models of natural phenomena. Most often, these linear systems are fully constrained and can be solved via direct or iterative techniques. However, one important problem class requires solutions to underconstrained linear systems that maximize some objective function. These linear optimization problems are natural formulations of many business planning problems and often contain hundreds of equations with thousands of variables. Historically, linear optimization problems have been solved via the simplex method. Despite the excellent performance of the simplex method, the size of the optimization problems and the frequency with which they must be solved make linear optimization a computationally taxing endeavor. This paper examines the performance of parallel variants of the simplex algorithm on the Intel iPSC, a message-based parallel system. Linear optimization test data are drawn from commercial sources and represent realistic problems. Analysis shows that the speedup obtained is sensitive to both the structure of the underlying data and the data partitioning.
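For reference, the class of problems described above is the standard linear programming formulation; the notation below is ours and is not drawn from the paper itself. A sketch of the canonical form, assuming an underconstrained system with more variables than equations:

\[
\begin{aligned}
\text{maximize}\quad   & c^{\mathsf{T}} x \\
\text{subject to}\quad & A x = b, \qquad A \in \mathbb{R}^{m \times n},\; m < n, \\
                       & x \ge 0,
\end{aligned}
\]

where the $m$ equality constraints leave $n - m$ degrees of freedom, and the simplex method searches the vertices of the resulting feasible region for an optimum of the objective $c^{\mathsf{T}} x$.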