The MPACK: Multiple precision arithmetic BLAS (MBLAS) and LAPACK (MLAPACK)
- What is the MPACK?
- Where to use?
- Current status
- Download packages and development
- Mailing list
- Available routines
- BLAS/LAPACK C header
- Slides, posters etc.
- Future plans
- Supported multiple precision arithmetic libraries
- Papers citing MPACK
What is the MPACK?
The MPACK is a free library for arbitrary/higher precision linear algebra. It consists of MBLAS and MLAPACK, multiple precision arithmetic versions of BLAS and LAPACK, respectively. It supports various multiple precision arithmetic libraries such as GMP, MPFR, and QD, as well as IEEE 754 binary128 (via gcc's __float128 extension).
Accuracy is very important in linear algebra. However, very few libraries existed for linear algebra calculations in multiple precision arithmetic.
This is the reason I developed the MPACK, the Multiple precision arithmetic BLAS (MBLAS) and LAPACK (MLAPACK).
The BLAS and LAPACK
are widely used FORTRAN77 routines, and many optimized implementations are available, such as ATLAS and GotoBLAS.
Therefore, the APIs of MBLAS and MLAPACK are very similar to those of BLAS and LAPACK.
Like BLAS and LAPACK, MPACK must be free software. Thus, the MPACK is distributed under the GNU Lesser General Public License.
The MLAPACK is a subset of the LAPACK; not all routines will be provided.
For multiple precision arithmetic, I use the GMP library and the QD library. The authors of the GMP library state that it is "the fastest bignum library on the planet!", and QD/DD is much faster than GMP if you do not need more precision.
Historically, I developed SDPA-GMP,
a multiple precision arithmetic version of a SemiDefinite Programming
solver, using the GMP library.
Where to use?
- To solve ill-conditioned problems, where the condition number of the matrices becomes large.
- Check your theorems! Does your new algorithm work as the theorems indicate?
The complete list of news can be found at the history page.
- 2012/12/25: MPACK 0.8.0! NVIDIA C2050 support for Rgemm in double-double precision, and preliminary support for Intel Xeon Phi. ANNOUNCE
- 2012/12/20: MPACK 0.8.0-RC2 Build fixes on various platforms.
- 2012/12/05: Our Rgemm dd paper "A Fast implementation of matrix-matrix product in double-double precision on NVIDIA C2050 and application to semidefinite programming" was selected as one of the Best Papers of The Third International Conference on Networking and Computing. The slides are here.
- 2012/11/29: MPACK 0.8.0-RC1 CUDA version of Rgemm in double-double precision is integrated.
- 2012/10/13: A CUDA 4.2 or later version of accelerated Rgemm in double-double precision on the NVIDIA C2050 GPU is now available. Note that it does not work on CUDA 4.1. The original release announcement is here, a preprint is available from here, and it will be presented at The Third International Conference on Networking and Computing, Okinawa, Japan, December 5-7, 2012.
- 2012/06/16: MPACK 0.7.0!
- 2012/06/16: Development has been migrated to SVN repository.
- 2011/10/28: Rgemm acceleration in double-double precision on the
NVIDIA C2050 GPU is now available. Even though the papers are not published, you can just try it by "make". Note that only CUDA 3.2 is supported. The original release announcement is here.
- 2011/08/24: Rgemm acceleration on the NVIDIA C2050 GPU is coming. Unfortunately the paper was rejected, so please wait... Here is a PDF slide.
- 2010/08/20: MPACK 0.6.7! Includes condition number estimators, Rgecon and Rpocon. Now 100 MLAPACK routines, and the license has been changed to a 2-clause BSD-style license. We no longer use LGPLv3.
- 2010/08/6: MPACK 0.6.6! Build fix release. Tested on various Linux distributions.
- 2010/05/31: A paper on MPACK (0.6.4) in Japanese has been uploaded.
- 2010/05/21: MPACK 0.6.5! MPFR support, and the MBLAS license has been changed to BSD style. The MLAPACK part is still LGPLv3; I will replace it, hopefully soon.
- 2010/01/13: MPACK 0.6.4! BUILD FIX RELEASE! PLEASE CHECK ON YOUR ENVIRONMENT! THANKS! ALSO WINDOWS IS SUPPORTED!
- 2009/12/18: POSTER AND SLIDES ARE UPLOADED: an MPACK (MBLAS/MLAPACK) poster at HPCS2010 in English and a slide in Japanese. I also gave two seminars about MPACK (MBLAS/MLAPACK) at NII and the University of Tokyo; here is the slide.
- 2009/11/24: MPACK 0.6.0!
- 2009/11/7: Add example page.
- 2009/10/9: MPACK 0.5.2, build fix on Intel Mac.
- 2009/10/6: The CVS version has a faster Raxpy using OpenMP parallelism.
- 2009/10/5: MPACK 0.5.1, just minor updates.
- 2009/9/24: MPACK 0.5.0!
- 2009/9/17: GMP/QD/DD integration is going very well; the mlapack part now builds as well.
- 2009/9/11: The CVS version of MBLAS now supports QD and DD. I abandoned "explicit generation of instances" due to bad performance.
- 2009/5/25: I have switched to template programming so that the library will be as portable as possible. We automatically generate the GMP, QD and DD versions via "explicit generation of instances". For real calculations, source-code-level compatibility is retained and we can also make optimized versions, whereas for complex code a few small changes are needed. Typically a 2-3 percent performance loss has been observed.
- 2009/3/21: The QD and DD versions of SDPA (SDPA-QD, SDPA-DD) have been uploaded. These packages include some parts of MPACK in QD and DD.
- 2009/2/10: mpack-devel ML has been launched.
- 2009/2/10: mpack-0.0.9.tar.gz.
- 2009/2/5: SDPA-GMP 7.1.2 has been released; it is now supported by MPACK (MBLAS/MLAPACK)!
- 2009/1/8: mpack-0.0.8.tar.gz. Moved to sourceforge.net.
- 2008/6/24: This page has been created.
Current status (0.7.0)
- 64-bit clean.
- Examples: matrix inversion, diagonalization, and matrix manipulations. You can invert a Hilbert matrix using MPFR, GMP, QD, DD, double, and IEEE 754 binary128.
- 75 BLAS routines in total; all are tested.
- 674 LAPACK routines in total; 666 are implemented, 100 are tested, and 14 will not be implemented.
- IEEE 754-2008 binary128 support! This is true "quadruple precision".
- MPFR (with the complex part supported by MPC), GMP, and QD are supported as multiple precision arithmetic libraries.
- Mac OS X Leopard, Mac OS X Tiger, Linux (CentOS, Ubuntu), Windows (MinGW), and FreeBSD are supported.
- Clean blas.h and lapack.h for C/C++.
Download packages and development
Currently, MPACK is distributed only as source code.
The mpack-devel mailing list is available.
BLAS/LAPACK C header
These "blas.h" and "lapack.h" headers, based on LAPACK 3.1, can be used when you want to call LAPACK and BLAS routines from your C/C++ programs.
You should also define the Fortran name-mangling macro. If you are using GNU FORTRAN (which appends an underscore to symbol names):
#define F77_FUNC(name,NAME) name ## _
or, for compilers that do not append an underscore:
#define F77_FUNC(name,NAME) name
etc. The easiest way to add this (implicitly) is by using "AC_F77_WRAPPERS" via configure.in.
Slides, posters etc.
Future plans
- Implement LAPACK routines as far as possible.
- Better documentation.
- Providing examples.
- Parallel computing and/or threaded algorithms.
- Including in a package system like FreeBSD ports.
- Linux (CentOS/Ubuntu; x86-64)
$ ./configure
$ make install
- MacOSX Leopard (PPC/Intel)
$ CXX="g++-4.2" ; export CXX
$ CC="gcc-4.2" ; export CC
$ ./configure
$ make install
- MacOSX Tiger (PPC/Intel)
$ ./configure
$ make install
- Windows with mingw cross compile on Ubuntu Linux.
$ ./configure --host=i686-pc-mingw32
$ make install
- On FreeBSD
$ ./configure
$ make install
Supported multiple precision arithmetic libraries
- GMP library: "GMP is a free library for arbitrary precision arithmetic, operating on signed integers, rational numbers, and floating point numbers."
- QD library: "This package supports both a double-double datatype (approx. 32 decimal digits) and a quad-double datatype (approx. 64 decimal digits)."
- MPFR: "The MPFR library is a C library for multiple-precision floating-point computations with correct rounding."
- MPC: "Mpc is a C library for the arithmetic of complex numbers with arbitrarily high precision and correct rounding of the result."
- MPFR C++: a C++ wrapper for MPFR.
- C/C++ "double"
MPACK is licensed under a 2-clause BSD-style license. Since version 0.6.7, we no longer use LGPLv3.
NAKATA Maho
NAKATA Maho's e-mail address is the following.
- author: Nakata, Maho
- title: The MPACK (MBLAS/MLAPACK); a multiple precision arithmetic version of BLAS and LAPACK
- version: 0.6.7
- date: 2010/08/24
- URL: http://mplapack.sourceforge.net/
Papers citing MPACK
M.N. has been supported by:
I would like to thank everyone for their encouragement, discussions, etc., in particular:
- Fujisawa, Katsuki
- Goto, Kazushige
- NAKASATO, Naoto
- IMAMURA, Toshiyuki
- Himeno, Ryutaro
$Id: index.html,v 1.61 2011/10/29 03:50:49 nakatamaho Exp $