Competing Teams



This section describes the evaluation process used in the Beta Phase, including a description of the required output files and the evaluation platform.

The evaluation process will assess solutions against the official competition problem formulation as described on this website; however, competitors are free to use any other formulation or modeling approach within their solution software. To ensure fairness and to enable the use of alternative problem formulations where appropriate, each competition algorithm is evaluated in two sections. The first section records the computation time required by competitors’ codes to solve the PSCOPF problem and reports the real power generation dispatch decisions at each bus. This information is recorded in the file solution1.txt, described in detail below. The timer records the time for the first section to execute, from invocation to completion.

In theory, the evaluation platform could use the decision variable solution provided by competitors to calculate power flow solutions and use that solution to assess constraint violations and objective function value; however, solving for power flows and checking for the existence of feasible power flow solutions are also non-trivial problems. Existing (commercial or open-source) power flow tools that could be used by the evaluation platform may find different power flow solutions given the same inputs or may not always converge to a feasible power flow solution even when one exists. The failure of the evaluation platform to find feasible power flow solutions could unfairly penalize the scores of individual competitors; therefore, we believe the best method for evaluating solutions is to require competitors to calculate and report power flow solutions for the base case and all contingency cases (given their previously reported decision variables). Given this additional information, all PSCOPF solutions will be validated in a uniform way by forward constraint evaluation.

The second section, which is not timed, will provide the additional solution details required for solution validation via calculation of feasibilities by the evaluation platform. This information is recorded in a second file, solution2.txt, also described in detail below. Allowing solution software to calculate these quantities only after the timer has been stopped is important as some competitors will utilize insights on the problem structure or inputs to quickly screen out some contingency cases. Therefore, the software that competitors submit to the competition may not need to calculate actual power flow solutions for every contingency prior to reporting generator and equipment control set points.

Solution data generated by algorithm evaluation will include objective function values, algorithmic run-time and constraint violation magnitudes for each power system model scenario tested. This data will be logged by the competition evaluation platform and associated with a specific competitor (i.e., team). These logs and the public names of the associated competitors will be released into the public domain after the conclusion of each trial or final event.

Remember, there are two parts to the evaluation of an algorithm on a particular power system model scenario. The first part, which is timed, establishes the base case solution used to calculate the objective function value. The second part, which is not timed, establishes the contingency solutions used to determine the feasibility of the solution.

As explained in the Scoring section, if the solution does not satisfy all constraints, a power system model scenario score is determined by multiplying a nominal objective value by a constraint violation penalty factor. Similarly, if a power system model scenario runtime is greater than the specified cutoff threshold, the scenario score is determined by multiplying the nominal objective value by a time violation penalty factor. A power system network model score is then calculated by taking the geometric mean across all scenario scores of that network model. A final dataset score is computed by taking the geometric mean of all power system network model scores in a given dataset.
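The penalty-and-geometric-mean scoring described above can be sketched in a few lines of Python; the penalty factor values and function names below are illustrative assumptions, not the official values from the Scoring section:

```python
import math

def penalized_score(nominal_objective, infeasible=False, overtime=False,
                    cv_penalty=10.0, time_penalty=10.0):
    """Apply penalty factors to a nominal objective value.

    The default factors of 10.0 are placeholders; the actual values are
    defined in the Scoring section.
    """
    score = nominal_objective
    if infeasible:
        score *= cv_penalty   # constraint violation penalty factor
    if overtime:
        score *= time_penalty  # time violation penalty factor
    return score

def geometric_mean(scores):
    """Geometric mean of positive scenario (or network model) scores."""
    return math.exp(sum(math.log(s) for s in scores) / len(scores))

# A network model score is the geometric mean of its scenario scores,
# and a dataset score is the geometric mean of its network model scores:
scenario_scores = [100.0, 400.0]                 # hypothetical values
network_score = geometric_mean(scenario_scores)  # sqrt(100 * 400) = 200
```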

Evaluation Procedure

The evaluation procedure used by the competition platform is designed to evaluate solution objective function values and constraint violations. Competitors’ codes will be required to output solutions in two specific standardized formats: one (solution1.txt) to evaluate the solution objective function value (timed); and the second (solution2.txt)  to evaluate constraint violations (not timed). The information that will be required in the solution output includes:

  • Real and reactive power generation at each generator (timed section)
  • Real power (injections or withdrawals), reactive power (injections or withdrawals), voltage magnitude and phase angle at each bus for the base case and each of the contingency cases (untimed section)
  • System-wide power imbalance magnitude for each contingency case (untimed section). 

The automated evaluation process will use the information in competitors’ two solution files to calculate the objective function value and to assess solution constraint violations.

Feasibility check for constraint violations

A feasibility check is performed by evaluating competitors' solutions against the constraints and limits of the standard formulations.

For an inequality constraint gi(x)≤bi and |bi|>1, a relative constraint violation (CVi) is calculated as CVi = max(gi(x)-bi, 0) / |bi|.

For an equality constraint gi(x)=bi and |bi|>1, a relative constraint violation (CVi) is calculated as CVi = |gi(x)-bi| / |bi|.

For an inequality constraint gi(x)≤bi and |bi|≤1, a relative constraint violation (CVi) is calculated as CVi = max(gi(x)-bi, 0).

For an equality constraint gi(x)=bi and |bi|≤1, a relative constraint violation (CVi) is calculated as CVi = |gi(x)-bi|.
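The four cases above collapse into a single rule: compute the raw violation, then scale by |bi| only when |bi| > 1. A minimal sketch (the function and argument names are ours, not the evaluation platform's):

```python
def relative_violation(g_of_x, b, equality=False):
    """Relative constraint violation CV_i as defined above.

    For an inequality g_i(x) <= b_i, the raw violation is max(g_i(x) - b_i, 0);
    for an equality g_i(x) = b_i, it is |g_i(x) - b_i|.  The raw violation is
    scaled by |b_i| when |b_i| > 1, and reported as-is otherwise.
    """
    if equality:
        raw = abs(g_of_x - b)
    else:
        raw = max(g_of_x - b, 0.0)
    return raw / abs(b) if abs(b) > 1 else raw

# Example: a line loaded to 105 MW against a 100 MW limit violates by 5%.
cv = relative_violation(105.0, 100.0)  # 0.05
```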


The solution1.txt file must be generated during the timed portion of the solution.

The solution2.txt file may be generated during either the timed or untimed portion of the solution.
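The two-section convention above might be structured as follows in a competitor's driver; all four callables are hypothetical stand-ins for competitor code, and only the first section falls inside the timer:

```python
import time

def run_two_sections(solve_base, write_solution1,
                     solve_contingencies, write_solution2):
    """Drive the two evaluation sections; only section 1 is timed.

    All four callables are competitor-supplied stand-ins (hypothetical names).
    Returns the base-case solution and the elapsed seconds for section 1.
    """
    # Section 1 (timed): solve the PSCOPF base case and write solution1.txt.
    start = time.monotonic()
    base = solve_base()
    write_solution1(base)          # solution1.txt must exist before the timer stops
    elapsed = time.monotonic() - start

    # Section 2 (untimed): base-case and contingency power flows, solution2.txt.
    flows = solve_contingencies(base)
    write_solution2(base, flows)
    return base, elapsed
```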

The detailed evaluation process can be seen in the flow chart below.


solution1.txt must contain generator dispatch information from the base case of each power system model scenario. This information is used, along with the provided cost function information, to calculate the Objective Function value that is the basis of the scenario score.

It contains the following information:

  1. begin generation dispatch segment delimiter (“--generation dispatch”)
  2. column headers for data
    1. bus ID (“bus id”)
    2. unit ID (“unit id”)
    3. real power in megawatts (“pg(MW)”)
    4. reactive power in megaVar (“qg(MVar)”)
  3. csv data for each dispatch unit, with fields in the order of the column headers listed in item 2.
  4. end generation dispatch segment delimiter (“--end of generation dispatch”)

When writing out numerical values, report the full precision used to perform the calculation so that an accurate Objective Function value can be computed.

The Phase 0 IEEE 14 bus (5 generators) contents of Scenario1/solution1.txt would look like the following (competitor’s real power and reactive power values may be different):

--generation dispatch
bus id,unit id,pg(MW),qg(MVar)
1,'1 ',37.9649606792,1.8583662976
6,'1 ',110.4266998728,-13.0957522832
8,'1 ',0.3198286705,6.9978519286
2,'1 ',84.7177624287,17.5456887762
3,'1 ',5.8017882658,26.8372933492
--end of generation dispatch

solution2.txt must contain solution information from the base and contingency cases of each power system model scenario needed to compute constraint violations. The base case has contingency id of 0.

It contains the following information:

  1. contingency generator dispatch (“--contingency generator”; “--end of contingency generator”)    [contingency case information only]
    1. contingency ID (“contingency id”)
    2. generator ID (“generator id”) -- something relevant to your code; not used in the evaluation process
    3. bus ID (“bus id”)
    4. unit ID (“unit id”)
    5. Reactive power in megaVar (“q(MW)”)
  2. contingency bus information (“--bus”; “--end of bus”)  [base and contingency case information]
    1. contingency ID (“contingency id”)
    2. bus ID (“bus id”)
    3. Voltage in per unit (“v(pu)”)
    4. Voltage angle in degrees (“theta(deg)”)
  3. contingency delta (“--Delta”; “--end of Delta”)   [contingency case information only]
    1. contingency ID (“contingency id”)
    2. Delta (“Delta(MW)”)
  4. contingency line flow information (“--line flow”; “--end of line flow”)   [base and contingency case information]
    1. contingency ID (“contingency id”)
    2. line ID (“line id”) -- something relevant to your code; not used in the evaluation process
    3. origin bus ID (“origin bus id”)
    4. destination bus ID (“destination bus id”)
    5. circuit ID (“circuit id”)
    6. real power in megawatts at origin (“p_origin(MW)”)
    7. reactive power in MVar at origin (“q_origin(MVar)”)
    8. real power in megawatts at destination (“p_destination(MW)”)
    9. reactive power in MVar at destination (“q_destination(MVar)”)

When writing out numerical values, use the full precision used to perform the calculation.

The Phase 0 IEEE 14 bus (1 contingency) contents of Scenario1/solution2.txt would look like the following (competitor’s values may be different):

--contingency generator
contingency id,genID,bus id,unit id,q(MW)
1,l_14,1,'1 ',1.8920439657
1,l_17,6,'1 ',-13.0424143005
1,l_18,8,'1 ',7.2486797834
1,l_15,2,'1 ',17.7006839160
1,l_16,3,'1 ',26.9101533072
--end of contingency generator
--bus
contingency id,bus id,v(pu),theta(deg)
--end of bus
--Delta
contingency id,Delta(MW)
--end of Delta
--line flow
contingency id,line id,origin bus id,destination bus id,circuit id,p_origin(MW),q_origin(MVar),p_destination(MW),q_destination(MVar)
--end of line flow
Evaluation Platform Environment

The Evaluation Platform hardware consists of a dedicated 4-node cluster with a Mellanox 4x FDR InfiniBand interconnect (54.54 Gigabits/second data bandwidth). Each node has:

  • 64 GiB of 4-channel 2133 MHz DDR4 SDRAM (68 Gigabytes/second memory bandwidth);
  • two Intel Xeon E5-2670 v3 (Haswell) CPUs, each with 12 cores (24 cores per node) and a clock speed of 2.30 GHz (peak floating point performance per node is 883.2 GFlops/sec); and
  • 480 GB Intel® SSD 530 Series disk drives with SATA 3.0 6 Gbit/sec interfaces (sequential read up to 540 MB/s, sequential write up to 490 MB/s).

One of the nodes manages the evaluations run on the other three "compute" nodes.

Results from each submission are gzipped and placed in a tar file on an externally-facing gateway for transferring large amounts of data efficiently. The gateway has multiple 10-gigabit-per-second links to the Pacific Northwest Gigapop and the Seattle Internet Exchange; transfer rates over 1 Gbps to sites around the world are not uncommon.

Currently each submission has exclusive access to only one of the compute nodes but the ability to use multiple nodes is under development. Please contact us if this is of interest to you.

The Evaluation Platform software uses the CentOS Linux release 7.3.1611 (Core) operating system. A list of software in /usr/bin and /opt/stack/bin is available for download. Additional software available includes MATPOWER 6.0 using MATLAB. In addition to MATLAB, the following programs and toolboxes from MathWorks are available: Simulink; Control System Toolbox; Global Optimization Toolbox; Optimization Toolbox; Parallel Computing Toolbox; Signal Processing Toolbox; Simscape; and Simscape Power Systems. The GAMS modeling language, with a full set of solvers, is available. Other solver libraries available include IBM® ILOG CPLEX, GUROBI, Artelys Knitro®, and FICO® Xpress.

Requests for additional software will be considered. Send them to the GO Operations Team.

Please use the Competition Forum or contact the GO Operations Team to comment about any aspect of the Evaluation Platform that might hurt performance.