Optimality condition and complexity analysis for linearly-constrained optimization without differentiability on the boundary (2018)

  • Authors:
  • USP affiliated authors: HAESER, GABRIEL - IME
  • USP Schools: IME
  • DOI: 10.1007/s10107-018-1290-4
  • Subjects: MATHEMATICAL PROGRAMMING; NONLINEAR PROGRAMMING
  • Keywords: constrained optimization; nonconvex programming; interior point method; first order algorithm; nonsmooth problems
  • Funding agencies:
  • Language: English
  • Imprint:
  • Source:
  • Online access to the document

    DOI information: 10.1007/s10107-018-1290-4 (Source: oaDOI API)
    • This journal is subscription-based
    • This article is open access
    • Open access URL
    • Open access color: green
    Open access versions available for: 10.1007/s10107-018-1290-4 (Source: Unpaywall API)

    Journal title: Mathematical Programming

    ISSN: 0025-5610, 1436-4646

    • Best open access URL:
      • Article page
      • Link to PDF
      • Evidence: oa repository (via OAI-PMH title and first author match)
      • License:
      • Version: submittedVersion
      • Hosting type: repository


    • Other open access URL alternatives:
        • Article page
        • Link to PDF
        • Evidence: oa repository (via OAI-PMH title and first author match)
        • License:
        • Version: submittedVersion
        • Hosting type: repository
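The open-access fields above come from the public Unpaywall v2 API. As a minimal sketch of how such a lookup works, the snippet below builds the request URL for this record's DOI and extracts the best open-access PDF link from a response shaped like the fields shown here; the contact e-mail and the PDF URL are placeholders, not values from this record.

```python
from urllib.parse import quote

UNPAYWALL_BASE = "https://api.unpaywall.org/v2"

def unpaywall_url(doi: str, email: str) -> str:
    """Build the Unpaywall v2 request URL for a DOI (email is required by the API)."""
    return f"{UNPAYWALL_BASE}/{quote(doi)}?email={email}"

def best_oa_pdf(response: dict):
    """Return the best open-access PDF link from a parsed response, if any."""
    loc = response.get("best_oa_location") or {}
    return loc.get("url_for_pdf")

# A response with the fields this record displays (oa_status "green",
# a repository-hosted submittedVersion); the PDF URL is a placeholder.
sample = {
    "doi": "10.1007/s10107-018-1290-4",
    "is_oa": True,
    "oa_status": "green",
    "best_oa_location": {
        "host_type": "repository",
        "version": "submittedVersion",
        "url_for_pdf": "https://example.org/paper.pdf",
    },
}
```

Fetching `unpaywall_url("10.1007/s10107-018-1290-4", "you@example.com")` and passing the decoded JSON to `best_oa_pdf` would reproduce the "Link to PDF" entry above.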


    Citescore information
  • Title: Mathematical Programming, Series B

    ISSN: 0025-5610

    Citescore - 2017: 2.95

    SJR - 2017: 2.49

    SNIP - 2017: 2.586
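Both ISSNs listed in this record can be verified with the standard ISSN check digit: the first seven digits are weighted 8 down to 2, summed, and the check digit is (11 − sum mod 11) mod 11, with 10 written as "X". A small sketch:

```python
def issn_check_digit_ok(issn: str) -> bool:
    """Validate an ISSN of the form NNNN-NNNC using the ISO 3297 check digit."""
    digits = issn.replace("-", "")
    if len(digits) != 8:
        return False
    # Weighted sum of the first seven digits, weights 8, 7, ..., 2.
    total = sum(int(d) * w for d, w in zip(digits[:7], range(8, 1, -1)))
    check = (11 - total % 11) % 11
    expected = "X" if check == 10 else str(check)
    return digits[7].upper() == expected

# Both ISSNs from this record validate:
issn_check_digit_ok("0025-5610")  # → True
issn_check_digit_ok("1436-4646")  # → True
```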


  • How to cite
    The citation is generated automatically and may not fully conform to citation standards

    • ABNT

      HAESER, Gabriel; LIU, Hongcheng; YE, Yinyu. Optimality condition and complexity analysis for linearly-constrained optimization without differentiability on the boundary. Mathematical Programming, Heidelberg, Springer, 2018. Disponível em: <https://doi.org/10.1007/s10107-018-1290-4>. DOI: 10.1007/s10107-018-1290-4.
    • APA

      Haeser, G., Liu, H., & Ye, Y. (2018). Optimality condition and complexity analysis for linearly-constrained optimization without differentiability on the boundary. Mathematical Programming. doi:10.1007/s10107-018-1290-4
    • NLM

      Haeser G, Liu H, Ye Y. Optimality condition and complexity analysis for linearly-constrained optimization without differentiability on the boundary [Internet]. Mathematical Programming. 2018; Available from: https://doi.org/10.1007/s10107-018-1290-4
    • Vancouver

      Haeser G, Liu H, Ye Y. Optimality condition and complexity analysis for linearly-constrained optimization without differentiability on the boundary [Internet]. Mathematical Programming. 2018; Available from: https://doi.org/10.1007/s10107-018-1290-4
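The citations above are assembled automatically from the record's metadata. As a minimal sketch (not the catalog's actual generator), an APA-style string can be produced from the same fields like this:

```python
def apa_citation(authors, year, title, journal, doi):
    """Join author names in APA style ('A, B, & C'), then year, title,
    journal, and DOI, matching the layout of the APA citation above."""
    if len(authors) > 1:
        names = ", ".join(authors[:-1]) + ", & " + authors[-1]
    else:
        names = authors[0]
    return f"{names} ({year}). {title}. {journal}. doi:{doi}"

# Reproduces the APA entry shown in this record:
apa_citation(
    ["Haeser, G.", "Liu, H.", "Ye, Y."],
    2018,
    "Optimality condition and complexity analysis for linearly-constrained "
    "optimization without differentiability on the boundary",
    "Mathematical Programming",
    "10.1007/s10107-018-1290-4",
)
```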

    References cited in the work
    Audet, C., Dennis Jr., J.E.: Mesh adaptive direct search algorithms for constrained optimization. SIAM J. Optim. 17, 188–217 (2006)
    Agarwal, N., Allen-Zhu, Z., Bullins, B., Hazan, E., Ma, T.: Finding local minima for nonconvex optimization in linear time. arXiv:1611.01146 (2016)
    Andreani, R., Haeser, G., Martinez, J.M.: On sequential optimality conditions for smooth constrained optimization. Optimization 60(5), 627–641 (2011)
    Andreani, R., Haeser, G., Ramos, A., Silva, P.J.S.: A second-order sequential optimality condition associated to the convergence of optimization algorithms. IMA J. Numer. Anal. 37(4), 1902–1929 (2017)
    Andreani, R., Haeser, G., Schuverdt, M.L., Silva, P.J.S.: A relaxed constant positive linear dependence constraint qualification and applications. Math. Program. 135, 255–273 (2012)
    Andreani, R., Martínez, J.M., Haeser, G., Silva, P.J.S.: Two new weak constraint qualifications and applications. SIAM J. Optim. 22, 1109–1135 (2012)
    Andreani, R., Martínez, J.M., Ramos, A., Silva, P.J.S.: A cone-continuity constraint qualification and algorithmic consequences. SIAM J. Optim. 26(1), 96–110 (2016)
    Andreani, R., Martínez, J.M., Svaiter, B.F.: A new sequential optimality condition for constrained optimization and algorithmic consequences. SIAM J. Optim. 20(6), 3533–3554 (2010)
    Bian, W., Chen, X.: Optimality and complexity for constrained optimization problems with nonconvex regularization. Math. Oper. Res. 42(4), 1063–1084 (2017)
    Bian, W., Chen, X.: Worst-case complexity of smoothing quadratic regularization methods for non-Lipschitzian optimization. SIAM J. Optim. 23(3), 1718–1741 (2013)
    Bian, W., Chen, X.: Linearly constrained non-Lipschitz optimization for image restoration. SIAM J. Imaging Sci. 8(4), 2294–2322 (2015)
    Bian, W., Chen, X., Ye, Y.: Complexity analysis of interior point algorithms for non-Lipschitz and nonconvex minimization. Math. Program. 149(1), 301–327 (2015)
    Birgin, E.G., Gardenghi, J.L., Martínez, J.M., Santos, S.A., Toint, Ph.L.: Worst-case evaluation complexity for unconstrained nonlinear optimization using high-order regularized models. Math. Program. 163(1–2), 359–368 (2017)
    Birgin, E.G., Haeser, G., Ramos, A.: Augmented Lagrangians with constrained subproblems and convergence to second-order stationary points. Comput. Optim. Appl. 69(1), 51–75 (2018)
    Birgin, E.G., Martínez, J.M.: The use of quadratic regularization with a cubic descent condition for unconstrained optimization. SIAM J. Optim. 27(2), 1049–1074 (2017)
    Carmon, Y., Duchi, J.C., Hinder, O., Sidford, A.: Accelerated methods for nonconvex optimization. arXiv:1611.00756 (2016)
    Cartis, C., Gould, N.I.M., Toint, P.L.: Adaptive cubic regularisation methods for unconstrained optimization. Part ii: worst-case function- and derivative-evaluation complexity. Math. Program. 130(2), 295–319 (2011)
    Cartis, C., Gould, N.I.M., Toint, P.L.: Corrigendum: on the complexity of finding first-order critical points in constrained nonlinear optimization. Math. Program. 161(1–2), 611–626 (2017)
    Cartis, C., Gould, N.I.M., Toint, P.L.: On the evaluation complexity of composite function minimization with applications to nonconvex nonlinear programming. SIAM J. Optim. 21(4), 1721–1739 (2011)
    Cartis, C., Gould, N.I.M., Toint, P.L.: On the complexity of finding first-order critical points in constrained nonlinear optimization. Math. Program. 144(1), 93–106 (2014)
    Cartis, C., Gould, N.I.M., Toint, P.L.: On the evaluation complexity of constrained nonlinear least-squares and general constrained nonlinear optimization using second-order methods. SIAM J. Numer. Anal. 53(2), 836–851 (2015)
    Cartis, C., Gould, N.I.M., Toint, P.L.: Second-order optimality and beyond: characterization and evaluation complexity in convexly-constrained nonlinear optimization. Report naXys-06-2016, Department of Mathematics, UNamur, Namur (B) (2016)
    Cartis, C., Gould, N.I.M., Toint, P.L.: Evaluation complexity for smooth constrained optimization using scaled KKT conditions and high-order models http://perso.fundp.ac.be/~phtoint/pubs/NTR-11-2015-R1.pdf (2015)
    Cartis, C., Gould, N.I.M., Toint, P.L.: Complexity bounds for second-order optimality in unconstrained optimization. J. Complex. 28(1), 93–108 (2012)
    Cartis, C., Gould, N.I.M., Toint, P.L.: Worst-case evaluation complexity of regularization methods for smooth unconstrained optimization using Hölder continuous gradients. Optim. Methods Softw. 32(6), 1273–1298 (2017)
    Chen, X., Lu, Z., Pong, T.K.: Penalty methods for a class of non-Lipschitz optimization problems. SIAM J. Optim. 26(3), 1465–1492 (2016)
    Chen, X., Xu, F., Ye, Y.: Lower bound theory of nonzero entries in solutions of ℓ2-ℓp minimization. SIAM J. Sci. Comput. 32(5), 2832–2852 (2010)
    Curtis, F.E., Robinson, D.P., Samadi, M.: A trust region algorithm with a worst-case iteration complexity of O(ε^{-3/2}) for nonconvex optimization. Math. Program. 162(1–2), 1–32 (2017)
    Fan, J., Li, R.: Variable selection via nonconcave penalized likelihood and its oracle properties. J. Am. Stat. Assoc. 96, 1348–1360 (2001)
    Fan, J., Lv, J.: Nonconcave penalized likelihood with NP-dimensionality. IEEE Trans. Inf. Theory 57, 5467–5484 (2011)
    Fan, J., Lv, J., Qi, L.: Sparse high dimensional models in economics. Annu. Rev. Econ. 3, 291–317 (2011)
    Fan, J., Xue, L., Zou, H.: Strong oracle optimality of folded concave penalized estimation. Ann. Stat. 3(42), 819–849 (2014)
    Fiacco, A.V., McCormick, G.P.: Nonlinear Programming: Sequential Unconstrained Minimization Techniques. Wiley, Hoboken (1968)
    Gay, D.M.: Computing optimal locally constrained steps. SIAM J. Sci. Stat. Comput. 2(2), 186–197 (1981)
    Grapiglia, G.N., Nesterov, Y.: Regularized Newton methods for functions with Hölder continuous Hessians. SIAM J. Optim. 27(1), 478–506 (2017)
    Grapiglia, G.N., Yuan, J., Yuan, Y.: On the convergence and worst-case complexity of trust-region and regularization methods for unconstrained optimization. Math. Program. 152(1), 491–520 (2015)
    Grapiglia, G.N., Yuan, J., Yuan, Y.: Nonlinear stepsize control algorithms: complexity bounds for first- and second-order optimality. J. Optim. Theory Appl. 171(3), 980–997 (2016)
    Gratton, S., Sartenaer, A., Toint, P.L.: Recursive trust-region methods for multiscale nonlinear optimization. SIAM J. Optim. 19(1), 414–444 (2008)
    Haeser, G.: A second-order optimality condition with first- and second-order complementarity associated with global convergence of algorithms. Comput. Optim. Appl. 70(2), 615–639 (2018)
    Haeser, G., Ramos, A.: A survey of constraint qualifications with second-order properties in nonlinear optimization. Optimization Online (2018). http://www.optimization-online.org/DB_HTML/2018/01/6409.html
    Han, S., Pool, J., Tran, J., Dally, W.J.: Learning both weights and connections for efficient neural networks. arXiv:1506.02626 (2015)
    Jahn, J.: Introduction to the Theory of Nonlinear Optimization. Springer, Berlin (2007)
    Karmarkar, N.: A new polynomial-time algorithm for linear programming. Combinatorica 4, 373–395 (1984)
    Liu, H., Yao, T., Li, R., Ye, Y.: Folded concave penalized sparse linear regression: sparsity, statistical performance, and algorithmic theory for local solutions. Math. Program. 166(1–2), 207–240 (2017)
    Liu, Y.-F., Ma, S., Dai, Y.-H., Zhang, S.: A smoothing SQP framework for a class of composite lq minimization over polyhedron. Math. Program. 158(1), 467–500 (2016)
    Loh, P.-L., Wainwright, M.J.: Regularized M-estimators with nonconvexity: statistical and algorithmic theory for local optima. J. Mach. Learn. Res. 16, 559–616 (2015)
    Martínez, J.M.: On high-order model regularization for constrained optimization. SIAM J. Optim. 27(4), 2447–2458 (2017)
    Martínez, J.M., Raydan, M.: Cubic-regularization counterpart of a variable-norm trust-region method for unconstrained minimization. J. Glob. Optim. 68(2), 367–385 (2017)
    Negahban, S.N., Ravikumar, P., Wainwright, M.J., Yu, B.: A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers. Stat. Sci. 4(27), 538–557 (2012)
    Nesterov, Y.: Introductory Lectures on Convex Optimization. Springer, Berlin (2004)
    Nesterov, Y., Polyak, B.T.: Cubic regularization of Newton method and its global performance. Math. Program. 108(1), 177–205 (2006)
    Rockafellar, R.T., Wets, R.J.B.: Variational Analysis. Springer, Berlin (1998)
    Toint, P.L.: Nonlinear stepsize control, trust regions and regularizations for unconstrained optimization. Optim. Methods Softw. 28(1), 82–95 (2013)
    Sorensen, D.C.: Newton’s method with a model trust region modification. SIAM J. Numer. Anal. 19(2), 409–426 (1982)
    Vavasis, S.A., Zippel, R.: Proving polynomial time for sphere-constrained quadratic programming. Technical report, Department of Computer Science, Cornell University, 90-1182 (1990)
    Wang, L., Kim, Y., Li, R.: Calibrating nonconvex penalized regression in ultra-high dimension. Ann. Stat. 5(41), 2505–2536 (2013)
    Wang, Z., Liu, H., Zhang, T.: Optimal computational and statistical rates of convergence for sparse nonconvex learning problems. Ann. Stat. 6(42), 2164–2201 (2014)
    Ye, Y.: On affine scaling algorithms for nonconvex quadratic programming. Math. Program. 56(1–3), 285–300 (1992)
    Ye, Y.: On the complexity of approximating a KKT point of quadratic programming. Math. Program. 80(2), 195–211 (1998)