Fit the model using regularized maximum likelihood. Both the regularization method and the solver used are determined by the argument method.
Parameters:
    start_params : array-like, optional
    method : 'l1' or 'l1_cvxopt_cp'
    maxiter : int or 'defined_by_method'
    full_output : bool
    disp : bool
    fargs : tuple
    callback : callable callback(xk)
    retall : bool
    alpha : non-negative scalar or numpy array (same size as parameters)
    trim_mode : 'auto', 'size', or 'off'
    size_trim_tol : float or 'auto' (default = 'auto')
    auto_trim_tol : float
    qc_tol : float
    qc_verbose : bool
Notes
Optional arguments for the solvers (available in Results.mle_settings):
'l1'
    acc : float (default 1e-6)
        Requested accuracy as used by slsqp.
'l1_cvxopt_cp'
    abstol : float
        Absolute accuracy (default: 1e-7).
    reltol : float
        Relative accuracy (default: 1e-6).
    feastol : float
        Tolerance for feasibility conditions (default: 1e-7).
    refinement : int
        Number of iterative refinement steps when solving KKT
        equations (default: 1).
Optimization methodology

With L the negative log likelihood, we solve the convex but
non-smooth problem

    \min_\beta \; L(\beta) + \sum_k \alpha_k |\beta_k|

via the transformation to the smooth, convex, constrained problem
in twice as many variables (adding the "added variables" u_k)

    \min_{\beta, u} \; L(\beta) + \sum_k \alpha_k u_k,

subject to

    -u_k \leq \beta_k \leq u_k.

With \partial_k L the derivative of L in the k-th
parameter direction, theory dictates that, at the
minimum, exactly one of two conditions holds:

    (i)  |\partial_k L| = \alpha_k  and  \beta_k \neq 0
    (ii) |\partial_k L| \leq \alpha_k  and  \beta_k = 0
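The transformation and the optimality conditions above can be sketched with scipy.optimize (not part of this documentation); here a smooth convex quadratic stands in for the negative log likelihood L, and SLSQP plays the role of the 'l1' solver:

```python
import numpy as np
from scipy.optimize import minimize

# Stand-in for the negative log likelihood: a smooth convex quadratic
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 0.1])
L = lambda beta: 0.5 * beta @ A @ beta - b @ beta
grad_L = lambda beta: A @ beta - b

alpha = np.array([0.5, 0.5])  # per-parameter penalty weights
k = 2

# Smooth reformulation in doubled variables x = [beta, u]:
#   minimize L(beta) + sum_k alpha_k * u_k  subject to  -u_k <= beta_k <= u_k
objective = lambda x: L(x[:k]) + alpha @ x[k:]
constraints = [
    {'type': 'ineq', 'fun': lambda x: x[k:] - x[:k]},  # u_k - beta_k >= 0
    {'type': 'ineq', 'fun': lambda x: x[k:] + x[:k]},  # u_k + beta_k >= 0
]
res = minimize(objective, np.zeros(2 * k), method='SLSQP',
               constraints=constraints)
beta_hat = res.x[:k]

# Verify the two optimality conditions: for each k, either
# |dL/dbeta_k| == alpha_k (beta_k nonzero) or |dL/dbeta_k| <= alpha_k (beta_k == 0)
g = grad_L(beta_hat)
for i in range(k):
    nonzero = abs(beta_hat[i]) > 1e-6
    print(i, nonzero, abs(g[i]), alpha[i])
```

In this toy problem the penalty drives the second coefficient exactly to zero, where the subgradient condition |∂_2 L| ≤ α_2 holds, while the first coefficient stays active with |∂_1 L| = α_1.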