Commit f89bf6f1 by Jason Rhinelander

### Use Cholesky decomposition to invert

```
The previous version was using a generic LU inverse, which ran into
numerical stability problems for large q values.  For example,

q=7, c=0, b=(1.40, 1.41, ..., 1.50) looked like:

0.1767019
0.1741197
0.1640931
0.1330334
0.2003187
0.1792207
0.1792029
0.175216
0.1836156
0.2131243
0.2235138

because the source data files have only slight variation in that range.
Using a Cholesky decomposition inverse fixes this, and gives numbers
agreeing exactly with JGM's fortran version (which also uses Cholesky
decomposition):

0.1744341
0.1780252
0.181262
0.1841298
0.1866257
0.1887341
0.1904548
0.1917758
0.1926928
0.1932153
0.1933418

which agrees precisely with JGM's (trimmed) fracdist output of:

P value = 0.174
P value = 0.178
P value = 0.181
P value = 0.184
P value = 0.187
P value = 0.189
P value = 0.190
P value = 0.192
P value = 0.193
P value = 0.193
P value = 0.193
```
parent 8f5942a7
```diff
@@ -3,6 +3,9 @@
 ## 1.0.2

 - Added versioning to libfracdist shared library.
+- Improved numerical stability of matrix inversion used for quadratic
+  approximations regressions.  This improves results significantly for large
+  critical values (which occur, for example, with larger q values q=7).

 ## 1.0.1
```
```diff
 #include
 #include
 #include
 #include
 #include
 using Eigen::MatrixX3d;
 using Eigen::Matrix3Xd;
 using Eigen::Matrix3d;
 using Eigen::VectorXd;
 using Eigen::RowVector3d;
 using Eigen::LLT;

 namespace fracdist {
@@ -122,7 +125,8 @@ const std::array quantiles(const unsigned int &q, const double
     // The interpolated F' is then the fitted value from the regression evaluted at the desired
     // b.
-    // The regressors don't change, so calculate the X matrix just once:
+    // The regressors don't change, so calculate the X and cholesky decomposition of XtX
+    // matrices just once:
     MatrixX3d X(blast-bfirst+1, 3);
     for (size_t i = bfirst; i <= blast; i++) {
@@ -131,16 +135,13 @@ const std::array quantiles(const unsigned int &q, const double
         X(i-bfirst, 2) = bweights[i] * bvalues[i] * bvalues[i];
     }
+    Matrix3Xd Xt = X.transpose();
+    LLT<Matrix3d> cholXtX(Xt * X);
     RowVector3d wantx;
     wantx(0) = 1.0;
     wantx(1) = b;
     wantx(2) = b*b;
-    auto Xt = X.transpose();
-    // We want to get the fitted value for wantx, in other words, wantx * beta.  Expanding beta,
-    // we get wantx * (X^T X)^-1 X^T y.  The only thing actually as we go through the data is y,
-    // so we can just precompute most of the above.  fitter here will actually just be a vector.
-    auto fitter = (wantx * (Xt * X).inverse() * Xt).eval();
     VectorXd y(blast-bfirst+1);
@@ -151,7 +152,7 @@ const std::array quantiles(const unsigned int &q, const double
         y(j-bfirst) = bweights[j] * bmap[j][i];
     }
-    result[i] = fitter * y;
+    result[i] = wantx * cholXtX.solve(Xt * y);
 }
 qcache_store(q, b, constant, interp, result);
```
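In the quantiles hunk above, only `y` changes from one quantile column to the next, so the factorization of XᵀX can be computed once outside the loop. In symbols, with the regressor row for the target b written as x₀ = (1, b, b²), the fitted value the code needs is:

```latex
% Fitted value from the weighted quadratic regression:
\hat F'(b) = x_0^\top \hat\beta, \qquad
\hat\beta = (X^\top X)^{-1} X^\top y .
% Instead of forming the inverse, factor once (X is fixed across the loop):
X^\top X = L L^\top \quad (\text{Cholesky, } L \text{ lower triangular}),
% then, for each new y, solve two triangular systems:
L z = X^\top y, \qquad L^\top \hat\beta = z .
```

Since X, and hence L, is fixed, each loop iteration costs only the Xᵀy product and two 3×3 triangular solves, and the solve exploits the symmetric positive-definite structure of XᵀX that a generic LU-based `inverse()` ignores.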
```diff
@@ -2,11 +2,14 @@
 #include
 #include
 #include
 #include
 #include
 using Eigen::MatrixX3d;
 using Eigen::Matrix3Xd;
 using Eigen::Matrix3d;
 using Eigen::VectorXd;
 using Eigen::RowVector3d;
 using Eigen::LLT;

 namespace fracdist {
@@ -70,8 +73,9 @@ double critical_advanced(double test_level, const unsigned int &q, const double
     data(2) = chisqinv_actual*chisqinv_actual;
     // Get the fitted value from the regression using the inverse of our actual test level
-    auto Xt = X.transpose();
-    double fitted = data * (((Xt * X).inverse()) * Xt * y);
+    Matrix3Xd Xt = X.transpose();
+    LLT<Matrix3d> cholXtX(Xt * X);
+    double fitted = data * cholXtX.solve(Xt * y);
     // Negative critical values are impossible; if we somehow got a negative prediction, truncate it
     if (fitted < 0) fitted = 0;
```
```diff
 #include
 #include
 #include
 #include
 #include
 using Eigen::Matrix3d;
 using Eigen::MatrixX3d;
 using Eigen::Matrix3Xd;
 using Eigen::VectorXd;
 using Eigen::RowVector3d;
 using Eigen::LLT;

 namespace fracdist {
@@ -62,14 +65,14 @@ double pvalue_advanced(const double &test_stat, const unsigned int &q, const dou
         y(i-ap.first) = chisq_inv_p_i(i, q);
     }
     RowVector3d data;
     data(0) = 1.0;
     data(1) = test_stat;
     data(2) = test_stat*test_stat;
-    auto Xt = X.transpose();
-    double fitted = data * (((Xt * X).inverse()) * Xt * y);
+    Matrix3Xd Xt = X.transpose();
+    LLT<Matrix3d> cholXtX(Xt * X);
+    double fitted = data * cholXtX.solve(Xt * y);
     // A negative isn't valid, so if we predicted one anyway, truncate it at 0 (which corresponds to
     // a pvalue of 1).
```