Challenge 1 Evaluation
The Challenge 1 Evaluation detailed description is in Appendix F of the Problem Formulation.
The evaluation process will assess the provided solutions against the official challenge problem; however, Entrants are free to use any other formulation or modeling approach in their solution software. Entrants are required to calculate and report power flow solutions for the base case and all contingency cases (given their previously reported decision variables). With this information, all SCOPF solutions will be validated uniformly by forward constraint evaluation. To ensure fairness and to enable the use of alternative problem formulations, each competition algorithm has two sections, referred to as Solver1 and Solver2, each generating a different output file.
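Forward constraint evaluation means the reported operating point is checked directly against the problem's constraints rather than re-solved. As a minimal sketch, assuming hypothetical branch apparent-power flows and MVA ratings (the names and the simple limit form below are illustrative, not the competition's actual evaluation code):

```python
# Hypothetical sketch of forward constraint evaluation: given a reported
# operating point, limit violations are measured directly from the reported
# quantities. Names and data are illustrative assumptions only.

def flow_violation(flow_mva, rating_mva):
    """Amount by which an apparent-power flow exceeds its rating (0 if feasible)."""
    return max(0.0, abs(flow_mva) - rating_mva)

def total_violation(flows, ratings):
    """Sum of limit violations over all monitored branches."""
    return sum(flow_violation(f, r) for f, r in zip(flows, ratings))

# Example: two branches, the second overloaded by 15 MVA.
flows = [95.0, -115.0]
ratings = [100.0, 100.0]
print(total_violation(flows, ratings))  # 15.0
```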
The first solution section, Solver1, which has a strict time limit, solves the base SCOPF problem and reports the base case operating point as output. This information is recorded in the file solution1.[txt|hdf5] described in Challenge 1 Output Data and Files. This information, along with the cost function information taken from the Generator Dispatch Units, Active Power Dispatch Tables, and Piecewise Linear Cost Curve Tables in the Cost Function Data File (case.rop), is used to compute the Objective Function value that serves as the scenario score.
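Evaluating a piecewise linear cost curve of the kind defined in the case.rop tables amounts to linear interpolation between breakpoints. A hedged sketch, with a made-up curve (the real tables define (MW, $/hr) breakpoints per dispatchable unit):

```python
# Illustrative sketch: evaluating a piecewise linear generator cost curve.
# The breakpoints below are invented; the real curves come from the
# Piecewise Linear Cost Curve Tables in case.rop.

def piecewise_cost(p_mw, points):
    """Linearly interpolate cost at output p_mw from sorted (MW, cost) breakpoints."""
    if not points[0][0] <= p_mw <= points[-1][0]:
        raise ValueError("dispatch outside curve domain")
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if p_mw <= x1:
            return y0 + (y1 - y0) * (p_mw - x0) / (x1 - x0)

curve = [(0.0, 0.0), (100.0, 2000.0), (200.0, 5000.0)]
print(piecewise_cost(150.0, curve))  # 3500.0
```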
The second solution section, Solver2, computes the SCOPF power flow solutions for all contingency cases (given the decision variables previously reported in the solution1 file). This information is recorded in the file solution2.[txt|hdf5], also described in Challenge 1 Output Data and Files. It is used to assess the feasibility of the base case operating point with respect to the contingency case constraints.
In summary, the evaluation and scoring process consists of the following steps carried out by the Competition Platform for each scenario:
- Run Solver1, with a time limit specified as an input parameter, to produce solution1.[txt|hdf5] and, optionally, solution2.[txt|hdf5].
- In the absence of the solution2 file, run Solver2 to produce solution2.[txt|hdf5]. There will be a time limit, also specified as an input parameter, to stop runaway solvers, but it will be generous. Solver2 is expected to finish normally. If this is not the case, contact the GO Operations Team.
- Run evaluation.py (see below) with the 4 data input files and the 2 solution files as input to calculate the Objective Function value, the Cost, the Penalty (Objective minus Cost), and both base and contingency constraint violations, writing the results to the submissionID_scenario_n_DetailedSolution.csv file and a one-line summary to the submissionID_Score.csv file. Note: in the absence of solution files, or errors in processing them, the Objective Function value will be set to a penalty value described below as WorstCase (see the Scoring document). The Objective Function value is the Scenario Score.
- The geometric mean of all the Power System Network Model Scenario Scores is computed to generate the Power System Network Model Score, which is written to the submissionID_Score.csv file.
- The geometric mean of all the Power System Network Model Scores is computed to generate the Dataset Score, which is also written to the submissionID_Score.csv file.
- The Dataset Score is used in Division 1 and 2 objective function scoring.
- The Scenario scores are used in Division 3 and 4 performance profile scoring.
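The score roll-up described above can be sketched as follows. This is a minimal illustration of the geometric-mean aggregation, with invented score values; the variable names are not from the competition code:

```python
# Sketch of the aggregation steps above: scenario scores roll up into a
# network model score by geometric mean, and model scores roll up into the
# dataset score the same way. All values here are illustrative.
import math

def geometric_mean(scores):
    """Geometric mean of positive scores, computed in log space for stability."""
    return math.exp(sum(math.log(s) for s in scores) / len(scores))

scenario_scores = [1.0e5, 4.0e5]                 # one network model, two scenarios
model_score = geometric_mean(scenario_scores)    # 2.0e5
dataset_score = geometric_mean([model_score])    # single model in this example
print(model_score, dataset_score)
```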
The Python evaluation source code, evaluation.py, is available on GitHub at https://github.com/GOCompetition/Evaluation. The latest commit was December 6, 2018, which reduced the memory requirements. The update on November 29, 2018, made the code compatible with both Python 2 and 3. The penalty parameters (in data units) in evaluation.py are discussed in Appendix D.1 of the Problem Formulation document.
Entrants may find the input data parsing done by the evaluation code (specifically in data.py) a useful model for their own code. The data.py file is in the search path for all Python submissions, so it does not need to be included in a Team GitHub repository.
The Python code WorstCase, available on GitHub at https://github.com/GOCompetition/WorstCase, constructs a guaranteed feasible solution with no starting point information by ensuring that voltage magnitudes meet both normal and emergency bounds and are held fixed across the base and contingency cases. While feasible, the result is far from optimal and represents something of a worst case that any solver should be able to beat. The resulting objective value is used when no other objective value is available due to missing solution files or errors in processing the solution files.
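The fallback behavior described above can be sketched in a few lines. This is a hypothetical illustration only; the function name and values are not from the competition code:

```python
# Hedged sketch of the scoring fallback: if the solution files are missing
# or fail to process, the scenario score falls back to the precomputed
# WorstCase objective value. Names and values are illustrative assumptions.

def scenario_score(evaluated_objective, worst_case_objective):
    """Use the evaluated objective when available, else the WorstCase penalty value."""
    if evaluated_objective is None:
        return worst_case_objective
    return evaluated_objective

print(scenario_score(1.2e5, 9.9e6))  # 120000.0 -> evaluation succeeded
print(scenario_score(None, 9.9e6))   # 9900000.0 -> fallback to WorstCase
```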