
Iterative optimizers : difficulty measures and benchmarks / Maurice Clerc.

By: Clerc, Maurice [author.].
Material type: Book
Series: Computer engineering series (London, England)
Publisher: London : ISTE Ltd. ; Hoboken : John Wiley & Sons, Inc., 2019
Description: 1 online resource : illustrations (some color)
Content type: text
Media type: computer
Carrier type: online resource
ISBN: 9781119612360; 1119612365; 9781119612476; 1119612470
Subject(s): Mathematical optimization | MATHEMATICS -- Applied | MATHEMATICS -- Probability & Statistics -- General
Genre/Form: Electronic books
Additional physical formats: Print version: Iterative optimizers
DDC classification: 519.6
Online resources: Wiley Online Library
Contents:
2.7.1. Deceptive vs disappointing; 2.7.2. Measure consistency; 2.8. Perceived difficulty; 3. Landscape Typology; 3.1. Reliable functions, misleading and neutral; 3.1.1. Dimension D = 1; 3.2. Plateaus; 3.2.1. Dimension D = 1; 3.2.2. Dimension D = 2; 3.3. Multimodal functions; 3.3.1. Functions with single global minimum; 3.3.2. Functions with several global minima; 3.4. Unimodal functions; 4. LandGener; 4.1. Examples; 4.2. Generated files; 4.3. Regular landscape; 5. Test Cases; 5.1. Structure of a representative test case; 5.2. CEC 2005; 5.3. CEC 2011; 6. Difficulty vs Dimension
6.1. Rosenbrock function; 6.2. Griewank function; 6.3. Example of the normalized paraboloid; 6.4. Normalized bi-paraboloid; 6.5. Difficulty d0 and dimension; 7. Exploitation and Exploration vs Difficulty; 7.1. Exploitation, an incomplete definition; 7.2. Rigorous definitions; 7.3. Balance profile; 8. The Explo2 Algorithm; 8.1. The algorithm; 8.1.1. Influence of the balance profile; 8.2. Subjective numerical summary of a distribution of results; 9. Balance and Perceived Difficulty; 9.1. Constant profile-based experiments; 9.2. Calculated difficulty vs perceived difficulty; Appendix
A.12.1. Random sampling in a D-sphere; A.12.2. SunnySpell: potential function; A.12.3. Valuex: evaluation for a LandGener landscape; A.12.4. Multiparaboloid generation; A.13. LandGener landscapes; A.13.1. T1 deceptive; A.13.2. T2 deceptive; A.13.3. T3 deceptive; A.13.4. T4 deceptive; A.13.5. T5 deceptive; References; Index; Other titles from iSTE in Computer Engineering; EULA
Summary: Almost every month, a new optimization algorithm is proposed, often accompanied by the claim that it is superior to all those that came before it. However, this claim is generally based on the algorithm's performance on a specific set of test cases, which are not necessarily representative of the types of problems the algorithm will face in real life. This book presents the theoretical analysis and practical methods (along with source codes) necessary to estimate the difficulty of problems in a test set, as well as to build bespoke test sets consisting of problems with varied difficulties. The book formally establishes a typology of optimization problems, from which a reliable test set can be deduced. At the same time, it highlights how classic test sets are skewed in favor of certain classes of problems, and how, as a result, optimizers that have performed well on test problems may perform poorly in real-life scenarios.
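
The contents above list the Rosenbrock (6.1) and Griewank (6.2) functions among the benchmark problems whose difficulty the book analyses. For reference, here is a minimal Python sketch of their standard textbook definitions; these are the commonly used formulas, not code taken from the book's own sources.

    import math

    def rosenbrock(x):
        # Standard Rosenbrock function; global minimum f = 0 at x = (1, ..., 1).
        return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (1.0 - x[i]) ** 2
                   for i in range(len(x) - 1))

    def griewank(x):
        # Standard Griewank function; global minimum f = 0 at x = (0, ..., 0).
        sum_term = sum(xi ** 2 for xi in x) / 4000.0
        prod_term = math.prod(math.cos(x[i] / math.sqrt(i + 1)) for i in range(len(x)))
        return 1.0 + sum_term - prod_term

    # Both evaluate to 0 at their respective global minima:
    print(rosenbrock([1.0, 1.0, 1.0]))   # 0.0
    print(griewank([0.0, 0.0, 0.0]))     # 0.0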

Online resource; title from PDF title page (EBSCO, viewed April 15, 2019).

Includes bibliographical references and index.

