The implied ERP is very sensitive to assumptions, in particular G2
Underfitting implies low variance but high bias; overfitting implies low bias but high variance
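For reference, the standard bias-variance decomposition of expected squared prediction error (with irreducible noise variance $\sigma^2$) is:

$$\mathbb{E}\big[(y - \hat{f}(x))^2\big] = \mathrm{Bias}\big[\hat{f}(x)\big]^2 + \mathrm{Var}\big[\hat{f}(x)\big] + \sigma^2$$

Underfit models sit at the high-bias end of this tradeoff; overfit models at the high-variance end.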
Mapping a k-means factory
My progress in learning how to purrr
The sequel in the JH dataviz specialization
Map over list elements with elegance and power
How do we visualize what's missing? And the art of imputation
Product of JH's DataViz in R with ggplot2.
An Excel workbook (or selected ranges) can be embedded in the classic iframe
Seasonal dummy model, roots of characteristic equation, and transformation (difference versus detrend) of non-stationary process
The long-run mean of an MA(q) process is the intercept; for an AR(p) process it is delta/(1 - sum of the AR coefficients)
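In symbols (writing $\mu$ for the MA intercept, $\delta$ for the AR intercept, and $\phi_i$ for the AR coefficients), the standard results are:

$$\text{MA}(q):\ \mathbb{E}[y_t] = \mu, \qquad \text{AR}(p):\ \mathbb{E}[y_t] = \frac{\delta}{1 - \sum_{i=1}^{p} \phi_i}$$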
Penalized MSE measures are called information criteria (IC); two popular examples are the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC)
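One common penalized-MSE form of these criteria (T observations, k estimated parameters; equivalent likelihood-based versions also exist) is:

$$\text{AIC} = T \ln\!\left(\frac{SSE}{T}\right) + 2k, \qquad \text{BIC} = T \ln\!\left(\frac{SSE}{T}\right) + k \ln T$$

BIC penalizes extra parameters more heavily than AIC whenever $\ln T > 2$, i.e., for samples larger than about eight observations.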
The Box-Pierce statistic is a simplified version of the Ljung-Box statistic; both are joint tests of autocorrelation
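The standard forms of the two statistics, testing autocorrelations $\hat{\rho}_k$ jointly up to lag $h$ in a sample of size $T$, are:

$$Q_{BP} = T \sum_{k=1}^{h} \hat{\rho}_k^2, \qquad Q_{LB} = T(T+2) \sum_{k=1}^{h} \frac{\hat{\rho}_k^2}{T-k}$$

Under the null of no autocorrelation, both are approximately $\chi^2(h)$; the Ljung-Box weighting improves the small-sample approximation.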
ARMA(p,q) combines an AR(p) and MA(q)
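The combined model, in the same notation as above ($\delta$ intercept, $\phi_i$ AR coefficients, $\theta_j$ MA coefficients, $\varepsilon_t$ white noise), is:

$$y_t = \delta + \sum_{i=1}^{p} \phi_i\, y_{t-i} + \varepsilon_t + \sum_{j=1}^{q} \theta_j\, \varepsilon_{t-j}$$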
What's the difference between an AR and an MA process, when they appear to be similar?
White noise (WN) is the basic time series building block
The autocorrelation function (ACF; aka, correlogram) plots autocorrelation coefficients
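The coefficient plotted at lag $k$ is the usual definition:

$$\rho_k = \frac{\gamma_k}{\gamma_0} = \frac{\mathrm{Cov}(y_t, y_{t-k})}{\mathrm{Var}(y_t)}$$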
Standard lm() diagnostic plots: residuals vs fitted, normal Q-Q, scale-location, residuals vs leverage
m-fold cross validation is for model checking, not model building
Cook's distance measures the influence of a single observation (a candidate outlier) on the fitted regression
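A standard form of the statistic, where $e_i$ is the residual, $h_{ii}$ the leverage, $p$ the number of coefficients, and $s^2$ the residual variance:

$$D_i = \frac{e_i^2}{p\, s^2} \cdot \frac{h_{ii}}{(1-h_{ii})^2}$$

It combines the size of the residual with the leverage of the point, so a point can be influential without having an extreme residual.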
Diagnostics: omitted variable bias, heteroskedasticity, and multicollinearity
Fama-French three-factor model; House prices; and Medical costs
Coefficient confidence interval (CI); hypothesis test; interpretation of SE, t-stat and p-value
Monthly rent against feet^2, per a Kaggle dataset
Simulated portfolio & benchmark for purposes of testing basic features of univariate regression
With FRED data and applying gt_table
MV = PY illustrates the problem but is tautological
Distill is so much easier than blogdown