Coppersmith–Winograd algorithm

In linear algebra, the Coppersmith–Winograd algorithm, named after Don Coppersmith and Shmuel Winograd, was the asymptotically fastest known algorithm for square matrix multiplication until 2010. It can multiply two \( n \times n \) matrices in \( O(n^{2.3755}) \) time (see Big O notation).[1] This is an improvement over the naïve \( O(n^3) \) time algorithm and the \( O(n^{2.807}) \) time Strassen algorithm. Algorithms with better asymptotic running time than the Strassen algorithm are rarely used in practice. It is possible to improve the exponent further; however, the exponent must be at least 2, because an \( n \times n \) matrix has \( n^2 \) entries, all of which must be read at least once to compute the exact result.
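For concreteness, the following is a minimal sketch (in Python, using NumPy) of the naïve cubic-time method and of Strassen's recursive scheme, which attains the \( O(n^{2.807}) \) bound mentioned above. The function names and the power-of-two size restriction are simplifying assumptions made for this illustration; the Coppersmith–Winograd algorithm itself rests on a far more intricate tensor construction and is not reproduced here.

import numpy as np

def naive_multiply(A, B):
    """Schoolbook matrix multiplication: n^3 scalar multiplications."""
    n = A.shape[0]
    C = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i, j] += A[i, k] * B[k, j]
    return C

def strassen_multiply(A, B):
    """Strassen's scheme: 7 recursive products per level instead of 8,
    giving O(n^{log2 7}) ~ O(n^{2.807}) arithmetic operations.
    Assumes n is a power of two for simplicity."""
    n = A.shape[0]
    if n == 1:
        return A * B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    M1 = strassen_multiply(A11 + A22, B11 + B22)
    M2 = strassen_multiply(A21 + A22, B11)
    M3 = strassen_multiply(A11, B12 - B22)
    M4 = strassen_multiply(A22, B21 - B11)
    M5 = strassen_multiply(A11 + A12, B22)
    M6 = strassen_multiply(A21 - A11, B11 + B12)
    M7 = strassen_multiply(A12 - A22, B21 + B22)
    C11 = M1 + M4 - M5 + M7
    C12 = M3 + M5
    C21 = M2 + M4
    C22 = M1 - M2 + M3 + M6
    return np.vstack((np.hstack((C11, C12)), np.hstack((C21, C22))))

# Quick check on a small power-of-two size.
A = np.random.rand(4, 4)
B = np.random.rand(4, 4)
assert np.allclose(naive_multiply(A, B), A @ B)
assert np.allclose(strassen_multiply(A, B), A @ B)

The point of the sketch is only to show how trading multiplications for additions in a recursive block decomposition lowers the exponent; the exponent improvements discussed below come from much larger and more delicate constructions of the same general kind.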

In 2010, Stothers gave an improvement to the algorithm, achieving \( O(n^{2.3736}) \).[2] Subsequently, Williams improved the bound to \( O(n^{2.3727}) \)[3] by combining a mathematical shortcut from Stothers' paper with her own insights and automated optimization on computers. These improvements involved packing the matrix multiplication structure into complicated tensor products; Williams's solution used a tensor power of 8.
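Expressed in the standard shorthand of the matrix multiplication exponent \( \omega \) (a conventional notation, not one introduced in the works cited here), the chain of bounds discussed above reads

\( 2 \le \omega \le 2.3727 \), with \( \omega \le 2.807 \) (Strassen, 1969), \( \omega \le 2.3755 \) (Coppersmith–Winograd, 1990), \( \omega \le 2.3736 \) (Stothers, 2010) and \( \omega \le 2.3727 \) (Williams, 2011),

where \( \omega \) denotes the infimum of all exponents \( \tau \) such that two \( n \times n \) matrices can be multiplied in \( O(n^{\tau}) \) arithmetic operations.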

The Coppersmith–Winograd algorithm is frequently used as a building block in other algorithms to prove theoretical time bounds. However, unlike the Strassen algorithm, it is not used in practice because it only provides an advantage for matrices so large that they cannot be processed by modern hardware.[4]

Henry Cohn, Robert Kleinberg, Balázs Szegedy and Christopher Umans have rederived the Coppersmith–Winograd algorithm using a group-theoretic construction. They also showed that either of two different conjectures would imply that the optimal exponent of matrix multiplication is 2, as has long been suspected. However, at the time they were unable to formulate a specific construction leading to a better running time than Coppersmith–Winograd.[5]

References

^ In Coppersmith and Winograd's original paper.
^ Stothers, Andrew (2010), On the Complexity of Matrix Multiplication.
^ Williams, Virginia (2011), Breaking the Coppersmith-Winograd barrier.
^ Robinson, Sara (2005), "Toward an Optimal Algorithm for Matrix Multiplication", SIAM News 38 (9)
^ Cohn, H.; Kleinberg, R.; Szegedy, B.; Umans, C. (2005). "Group-theoretic Algorithms for Matrix Multiplication". 46th Annual IEEE Symposium on Foundations of Computer Science (FOCS'05). pp. 379. doi:10.1109/SFCS.2005.39. ISBN 0-7695-2468-0.

Coppersmith, Don; Winograd, Shmuel (1990), "Matrix multiplication via arithmetic progressions", Journal of Symbolic Computation 9 (3): 251–280, doi:10.1016/S0747-7171(08)80013-2.
