LAD computes least absolute deviations regression, also known as L1 regression or Laplace regression. This type of regression is optimal when the disturbances have the Laplace distribution and is better than least squares (L2) regression for many other leptokurtic (fat-tailed) distributions such as Cauchy or Student's t.
LAD (LOWER=<lower censoring limit>, METHOD=<iteration method>, NBOOT=<# replications>, QUANTILE=<value>, RESAMPLE=<method for computing s.e.s>, SILENT, TERSE, UPPER=<upper censoring limit>) <dependent variable> <list of independent variables> ;
Usage
To estimate by least absolute deviations in TSP, use the LAD command just like the OLSQ command. For example,
LAD CONS,C,GNP ;
estimates the consumption function using L1 (median) regression instead of L2 (least squares) regression.
Various options allow you to estimate any quantile (not just the median) and to perform censored L1 regression. Standard errors may be obtained using the bootstrap; see the options below.
Output
The usual regression output is printed and stored (see OLSQ for a table). The likelihood function (@LOGL) and standard error estimates are computed as though the true distribution of the disturbances were Laplace; this is by analogy to least squares, where the likelihood function and conventional standard error estimates assume that the true distribution is normal (with a small-sample correction to the standard errors). The additional statistics are shown in the table below.
@PHI contains the sum of the absolute values of the residuals. This quantity divided by the number of observations and squared is an estimate of the variance of the disturbances, and is proportional to the scaling factor used in computing the variances of the coefficient estimates.
The LM test for heteroskedasticity is a modified Glejser test due to Machado and Santos-Silva (2000). This test is the result of regressing weighted absolute values of the residuals on the independent variables.
variable | type | length | description
@PHI | scalar | 1 | sum of absolute values of the residuals
@IFCONV | scalar | 1 | =1 if there were no simplex iteration problems, 0 otherwise
@UNIQUE | scalar | 1 | =1 if the solution is unique, 0 otherwise
Method
The LAD estimator minimizes the sum of the absolute values of the residuals with respect to the coefficient vector b:

    S(b) = sum(i) |y(i) - x(i)'b|
The estimates are computed using the Barrodale-Roberts modified Simplex algorithm. A property of the LAD estimator is that there are K residuals that are exactly zero (for K right-hand-side variables); this is analogous to the least squares property that there are only N-K linearly independent residuals. In addition, the LAD estimator occasionally produces a non-unique estimate of the coefficient vector b; TSP issues a warning message in this case.
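TSP computes these estimates internally with the Barrodale-Roberts simplex code. Purely as an illustration, the same L1 problem can be posed as a generic linear program; the sketch below (Python with NumPy/SciPy, not TSP code) fits the intercept-only median-regression example from the end of this entry and recovers the minimized sum of absolute residuals (the quantity stored in @PHI).

```python
# Illustrative sketch (not TSP's internal algorithm): LAD as a linear program,
#   minimize sum(u_i + v_i)  subject to  X b + u - v = y,  u, v >= 0,
# where u and v are the positive and negative parts of the residuals.
import numpy as np
from scipy.optimize import linprog

def lad(X, y):
    n, k = X.shape
    # Decision vector: [b (free), u (>= 0), v (>= 0)]
    c = np.concatenate([np.zeros(k), np.ones(n), np.ones(n)])
    A_eq = np.hstack([X, np.eye(n), -np.eye(n)])
    bounds = [(None, None)] * k + [(0, None)] * (2 * n)
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
    return res.x[:k], res.fun   # coefficients and sum of |residuals| (@PHI)

# Median regression of T on a constant for T = 1..10:
X = np.ones((10, 1))
y = np.arange(1.0, 11.0)
b, phi = lad(X, y)
# Any b in [5, 6] is optimal here; a simplex solver returns a vertex of that
# set, and the objective value phi is 25 at every optimum.
```

At a vertex solution the K zero residuals mentioned above correspond to the K basic equality constraints that hold exactly.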
When a quantile other than 0.5 is requested, the formula above is modified slightly: positive residuals are weighted by tau and negative residuals by (1-tau), so the objective becomes sum(i) rho(e(i)), where rho(e) = tau*e for e >= 0 and rho(e) = (tau-1)*e for e < 0 (Koenker and Bassett, 1978).
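As a small numerical sketch of this modification (illustrative Python, not TSP code), the tilted absolute-value "check" function of Koenker and Bassett (1978) weights positive and negative residuals asymmetrically, and minimizing its sum over a constant reproduces the corresponding sample quantile:

```python
# Sketch of the quantile ("check") objective rho_tau; not TSP code.
import numpy as np

def rho(e, tau):
    # tau * e for non-negative residuals, (tau - 1) * e for negative ones
    return np.where(e >= 0, tau * e, (tau - 1.0) * e)

y = np.arange(1.0, 11.0)
tau = 0.25
# The minimizer of sum(rho(y - b, tau)) over b is a tau-th sample quantile;
# since the objective is piecewise linear, checking each data point suffices.
obj = [rho(y - b, tau).sum() for b in y]
best = y[int(np.argmin(obj))]   # -> 3.0 for these data
```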
When the number of observations is greater than 100 and the model is not censored, the estimated variance-covariance of the estimated coefficients is computed as though the true distribution were Laplace:

    V(b) = lambda^2 * (X'X)^(-1)
When the quantile tau is less than the median,

    V(b) = [(1-tau)/tau] * lambda^2 * (X'X)^(-1)

and when it is greater than the median,

    V(b) = [tau/(1-tau)] * lambda^2 * (X'X)^(-1)
The variance parameter lambda is estimated as though the data had the Laplace distribution,

    lambda = (1/N) * sum(i) |e(i)| = @PHI/N
This formula can also be derived as the BHHH estimate of the variance-covariance matrix if the first derivative of |e| is defined to be unity at zero, as it is everywhere else. The outer product of the gradients of the likelihood function will then yield the above estimate.
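The following sketch (illustrative Python, not TSP code) verifies this equivalence numerically: the Laplace covariance lambda^2 * (X'X)^(-1) coincides with the BHHH outer-product estimate once the derivative of |e| is taken to be +1 or -1 everywhere, including at zero.

```python
# Sketch: Laplace-based covariance for median-regression coefficients,
# and the equivalent BHHH (outer product of gradients) computation.
import numpy as np

def laplace_cov(X, e):
    n = len(e)
    lam = np.abs(e).sum() / n               # lambda-hat = @PHI / N
    V = lam**2 * np.linalg.inv(X.T @ X)     # lambda^2 * (X'X)^(-1)
    # BHHH: score of obs i is sign(e_i) * x_i / lambda, with the derivative
    # of |e| at zero defined to be unity (so every weight is +1 or -1).
    s = np.where(e >= 0, 1.0, -1.0)
    G = (s[:, None] * X) / lam
    V_bhhh = np.linalg.inv(G.T @ G)         # identical, since s^2 = 1
    return V, V_bhhh

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 3))
e = rng.normal(size=30)
V, V_bhhh = laplace_cov(X, e)
```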
The alternative to these Laplace standard errors is to use the NBOOT= option to obtain bootstrap standard errors based on the empirical density. This is the default when the model is censored, or the number of observations is less than 100. In the case of censored estimation, the Bilias et al (2000) resampling method is used to speed up the computations; this can be changed using the RESAMPLE option.
See Judge et al (1988) for details on the statistical properties of this method of estimation. See Davidson and MacKinnon (1993) on testing for normality of the residuals in least squares. The censored version of the estimator is computed using an algorithm due to Fitzenberger (1997).
Options

LOWER= the value below which the dependent variable is not observed. The default is no limit.
METHOD= BRCENS/SUBSET. The default method is the Barrodale-Roberts or BRCENS simplex method. The alternative SUBSET method evaluates the objective function for all possible subsets of K observations, where K is the number of RHS variables. (Each subset of K observations is fitted exactly to yield a vector of parameter estimates, and the objective function is then computed over all observations using this vector.) For censored LAD or quantile regression, this is a guaranteed way of finding the global optimum, but it may take a very long time if K is large.
For uncensored LAD or quantile regression, this is a way of investigating possible multiple optima. There may be multiple global optima (different parameter vectors which yield the same objective function value). If this occurs, the different solutions are stored in @COEFD. To provide a reproducible result, instead of choosing one of the multiple solutions at random, TSP uses the following rules to choose a solution:
1. Try an arithmetic average of the parameter vectors. If this yields the same objective function value, this is the reported solution. (This is equivalent to using the average of the two middle values for the median, when there is an even number of observations).
2. Choose the parameter vector with minimum L1 norm (sum of absolute values of the coefficients). If this yields a tie, use rule 3.
3. Choose the parameter vector with minimum absolute value of the first coefficient. If this yields a tie, pick the first vector of those tied.
Note that METHOD=SUBSETS is not used when doing bootstrapping of the SEs.
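A minimal sketch of the subset idea and the tie-breaking rules above (illustrative Python, not TSP's implementation; the helper name lad_subset is hypothetical). On the median-of-1..10 example discussed below, the single-observation subsets at 5 and 6 tie, and rule 1 reports their average:

```python
# Sketch of the SUBSET method: fit every K-observation subset exactly,
# evaluate the L1 objective, and break ties with rules 1-3 above.
from itertools import combinations
import numpy as np

def lad_subset(X, y):
    n, k = X.shape
    fits = []
    for idx in combinations(range(n), k):
        Xs = X[list(idx)]
        if abs(np.linalg.det(Xs)) < 1e-12:
            continue                           # skip singular subsets
        b = np.linalg.solve(Xs, y[list(idx)])  # exact fit to the K points
        fits.append((np.abs(y - X @ b).sum(), b))
    best = min(f for f, _ in fits)
    ties = [b for f, b in fits if np.isclose(f, best)]
    avg = np.mean(ties, axis=0)
    if np.isclose(np.abs(y - X @ avg).sum(), best):
        return avg                             # rule 1: average of tied vectors
    # Rules 2 and 3: smallest L1 norm, then smallest |first coefficient|;
    # Python's stable sort keeps the first of any remaining ties in front.
    ties.sort(key=lambda b: (np.abs(b).sum(), abs(b[0])))
    return ties[0]

X = np.ones((10, 1))
y = np.arange(1.0, 11.0)
b = lad_subset(X, y)   # -> 5.5, the average of the tied solutions 5 and 6
```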
NBOOT= number of replications for bootstrap standard errors. For the uncensored model, the default is zero if there are 100 or more observations (conventional standard errors are computed under the assumption that the disturbances are Laplace). Otherwise NBOOT=200. The coefficient values from the bootstrap are stored in @BOOT, an NBOOT by NCOEF matrix, for use in computing other statistics, such as a 95% confidence interval.
QUANTILE= quantile to fit. The default is 0.5 (the median).
RESAMPLE=BILIAS/DIRECT specifies the resampling method to be used for the bootstrap standard error estimates for the censored model. DIRECT resamples from the original data and runs the censored regression estimator to compute the SEs. BILIAS (the default) zeros the observations where the predicted dependent variable is censored, then resamples from the partially zeroed observations, and runs the uncensored LAD/quantile regression to compute the bootstrap SEs. This method is faster and avoids possible convergence or local optima problems with the DIRECT method. See the Bilias et al (2000) reference for details.
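A minimal sketch of the resample-and-refit bootstrap (illustrative Python, not TSP code), using an intercept-only median fit for simplicity; the boot matrix here plays the role of @BOOT, and the percentile interval illustrates the confidence-interval use mentioned under NBOOT=:

```python
# Sketch: bootstrap standard errors for an intercept-only L1 (median) fit.
import numpy as np

def boot_se(y, nboot=200, seed=0):
    rng = np.random.default_rng(seed)
    n = len(y)
    boot = np.empty((nboot, 1))            # nboot x ncoef, like @BOOT
    for r in range(nboot):
        sample = y[rng.integers(0, n, n)]  # resample observations with replacement
        boot[r, 0] = np.median(sample)     # refit the L1 estimator
    se = boot.std(axis=0, ddof=1)
    ci = np.percentile(boot, [2.5, 97.5], axis=0)  # 95% percentile interval
    return se, ci, boot

se, ci, boot = boot_se(np.arange(1.0, 11.0))
```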
SILENT/NOSILENT suppresses all printed output. The results are stored.
TERSE/NOTERSE suppresses all printed output except the table of coefficient estimates and the value of the likelihood function.
UPPER= the value above which the dependent variable is not observed. The default is no limit. This option cannot be used at the same time as the LOWER= option.
Example

Here is an example that computes the median of the numbers 1-10:
SMPL 1,10 ; TREND T ;
LAD T C ; ? reports solution=5, but any solution from 5 to 6 is valid
LAD (METHOD=SUBSETS) T C ; ? reports solution=5.5, average of 5 and 6
References

Barrodale, I., and F. D. K. Roberts, Algorithm #478, Collected Algorithms from ACM Volume II, Association for Computing Machinery, New York, NY, 1980.
Bilias, Y., S. Chen, and Z. Ying, "Simple Resampling Methods for Censored Regression Quantiles," Journal of Econometrics 99 (2000), pp. 373-386.
Davidson, Russell, and James G. MacKinnon, Estimation and Inference in Econometrics, Oxford University Press, New York, NY, 1993, Chapter 16.
Dodge, Y. et al, Computational Statistics and Data Analysis, August 1991, p. 97.
Fitzenberger, Bernd, "A Guide to Censored Quantile Regressions," in G. S. Maddala and C. R. Rao (eds.), Handbook of Statistics, Volume 15: Robust Inference, 1997, pp. 405-437.
Judge, George, R. Carter Hill, William E. Griffiths, Helmut Lutkepohl, and Tsoung-Chao Lee. Introduction to the Theory and Practice of Econometrics, John Wiley & Sons, New York, Second edition, 1988, Chapter 22.
Koenker, R. W., and G. W. Bassett, "Regression Quantiles," Econometrica 46 (1978), pp. 33-50.
Machado, J. A. F., and J. M. C. Santos-Silva, "Glejser's Test Revisited," Journal of Econometrics 97 (2000), pp. 189-202.