AUTHORS: Issam A. R. Moghrabi
ABSTRACT: Motivated by the success of Shanno’s memoryless Conjugate Gradient (CG) methods [28,29], this paper derives three new scaled quasi-Newton-like CG algorithms that utilize an update formula invariant to scaling of the objective function. The search direction at each iteration is computed in two steps. The aim of developing such self-scaling Variable Metric CG methods is to improve the quality of the generated search direction vectors. The computations involved are cheap, requiring only a few inner products and O(n) extra storage. This modest overhead is shown to pay off when the algorithms are numerically compared to Shanno’s method.
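For context, the memoryless CG approach of Shanno [28,29] that motivates this work computes the search direction as d = −Hg, where H is the BFGS update of the identity matrix built from the latest step s and gradient change y; H is never stored, so only inner products and O(n) vectors are needed. The sketch below illustrates that baseline idea only (the function name and the verification against an explicit H are illustrative; this is not the paper's new scaled algorithms):

```python
import numpy as np

def memoryless_bfgs_direction(g, s, y):
    """Memoryless BFGS (Shanno-style CG) direction d = -H g, where H is
    the BFGS update of the identity from step s = x_{k+1} - x_k and
    gradient change y = g_{k+1} - g_k.  H is never formed explicitly:
    the whole computation uses four inner products and O(n) storage."""
    sy = s @ y                  # curvature s^T y (assumed positive)
    gs = g @ s                  # g^T s
    gy = g @ y                  # g^T y
    yy = y @ y                  # y^T y
    # d = -H g expanded via H = I - (s y^T + y s^T)/sy + (1 + yy/sy) s s^T / sy
    return -g + (gy * s + gs * y) / sy - (1.0 + yy / sy) * (gs / sy) * s
```

For a quick sanity check, the direction can be compared against forming the memoryless BFGS matrix H explicitly and evaluating −Hg; both agree to rounding error, while the inner-product version avoids the O(n²) matrix entirely.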
KEYWORDS: Unconstrained Optimization, Conjugate Gradient Methods, Variable Metric methods, Inexact Line Search
REFERENCES:
[1] N. Andrei, “A scaled BFGS preconditioned conjugate gradient algorithm for unconstrained optimization”, Applied Mathematics Letters, vol. 20, pp. 645-650, 2007.
[2] E.M.L. Beale, “A Derivation of Conjugate Gradients”, in Numerical Methods for Nonlinear Optimization, ed. F.A. Lootsma, London: Academic Press, 1972, pp. 94-112.
[3] M.C. Biggs, “Minimization algorithms making use of non-quadratic properties of the objective function”, Journal of the Institute of Mathematics and its Applications, vol. 8, pp. 315-327, 1971.
[4] M.C. Biggs, “A note on minimization algorithms which make use of non-quadratic properties of the objective function”, Journal of the Institute of Mathematics and its Applications, vol. 12, pp. 337-338, 1973.
[5] C.G. Broyden, “The convergence of a class of double-rank minimization algorithms - Part 2: The new algorithm”, J. Inst. Math. Applic., vol. 6, pp. 222-231, 1970.
[6] R.H. Byrd, R.B. Schnabel, and G.A. Shultz, “Parallel quasi-Newton methods for unconstrained optimization”, Math. Programming, vol. 42, pp. 273-306, 1988.
[7] Y.H. Dai, Y. Yuan, “A nonlinear conjugate gradient method with a strong global convergence property”, SIAM J. Optim., vol. 10, pp. 177–182, 1999.
[8] J.E. Dennis and R.B. Schnabel, “Least change secant updates for quasi-Newton methods”, SIAM Review, vol. 21, pp. 443-459, 1979.
[9] R. Fletcher, Practical Methods of Optimization (second edition), Wiley, New York, 1987.
[10] R. Fletcher, “A new approach to variable metric algorithms”, Comput. J. vol. 13, pp. 317-322, 1970.
[11] R. Fletcher and M.J.D. Powell, “A rapidly convergent descent method for minimization”, Computer Journal, vol. 6, pp. 163-168, 1963.
[12] R. Fletcher, C. Reeves, “Function minimization by conjugate gradients”, Computer J., vol. 7, pp. 149–154, 1964.
[13] J.A. Ford and I.A. Moghrabi, “Using function-values in multi-step quasi-Newton methods”, J. Comput. Appl. Math., vol. 66, pp. 201-211, 1996.
[14] J.A. Ford and I.A. Moghrabi, “Multi-step quasi-Newton methods for optimization”, J. Comput. Appl. Math., vol. 50, pp. 305-323, 1994.
[15] J.A. Ford and I.A. Moghrabi, “Alternative parameter choices for multi-step quasi-Newton methods”, Optimization Methods and Software, vol. 2, pp. 357-370, 1993.
[16] W.W. Hager, H.C. Zhang, “A new conjugate gradient method with guaranteed descent and an efficient line search”, SIAM J. Optim., vol. 16, pp. 170–192, 2005.
[17] M.R. Hestenes, E. Stiefel, “Methods of conjugate gradients for solving linear systems”, J. Res. Nat. Bur. Stand., vol. 49, pp. 409–436, 1952.
[18] I.A.R. Moghrabi, “Numerical experience with multiple update quasi-Newton methods for unconstrained optimization”, Journal of Mathematical Modeling and Algorithms, vol. 6, pp. 231-238, 2007.
[19] I.A.R. Moghrabi, “Implicit extra-update multi-step quasi-Newton methods”, Int. J. Operational Research, vol. 28, pp. 69-81, 2017.
[20] L. Nazareth, “A relationship between BFGS and conjugate-gradient algorithms and its implementations for new algorithms”, SIAM Journal on Numerical Analysis, vol. 16, pp. 794-800, 1979.
[21] S.S. Oren, “Self-scaling variable metric algorithm without line search for unconstrained minimization”, Mathematics of Computation, vol. 27, pp. 873-885, 1973.
[22] S.S. Oren and E. Spedicato, “Optimal conditioning of self-scaling variable metric algorithms”, Mathematical Programming, vol. 10, pp. 70-90, 1976.
[23] A. Perry, “A modified conjugate gradient algorithm”, Oper. Res., vol. 26, pp. 1073-1078, 1978.
[24] E. Polak, G. Ribière, “Note sur la convergence de méthodes de directions conjuguées”, Rev. Française Informat. Recherche Opérationnelle, vol. 16, pp. 35–43, 1969.
[25] B.T. Polyak, “The conjugate gradient method in extremal problems”, USSR Comput. Math. Math. Phys., vol. 9, pp. 94–112, 1969.
[26] M.J.D. Powell, “Restart procedures for the conjugate gradient method”, Math. Program., vol. 12, pp. 241–254, 1977.
[27] D.F. Shanno, “Conditioning of quasi-Newton methods for function minimization”, Math. Comp., vol. 24, pp. 647-656, 1970.
[28] D.F. Shanno, “Conjugate gradient methods with inexact searches”, Math. Oper. Res., vol. 3, pp. 244–256, 1978.
[29] D.F. Shanno, “On the convergence of a new conjugate gradient algorithm”, SIAM J. Numer. Anal., vol. 15, pp. 1247–1257, 1978.
[30] Z. Wei, G. Li, and L. Qi, “New quasi-Newton methods for unconstrained optimization problems”, Appl. Math. Comput., vol. 175, pp. 1156-1188, 2006.