The GO Competition is currently in the Beta Testing Phase, also known as Phase 0.

During this phase, we provide you the opportunity to get familiar with the competition platform: the problem to be solved, formats for input and output files, registration, algorithm submission and scoring. We encourage you, as a potential competitor, to try out the platform and raise questions through the Forum or Contact Us directly. Your participation in this phase will help improve the competition platform and process.

Participation at this point carries no obligation for the future, and team membership remains flexible.

The formal competition is subject to appropriation of funding.

The evaluation process will assess solutions against the official competition problem formulation as described on this website and in official competition documents; however, competitors are free to use any other formulation or modeling approach within their solution software. To ensure fairness and to enable the use of alternative problem formulations where appropriate, each competition algorithm will have two sections. The first section solves the OPF problem and reports the objective function value, the real and reactive power generation dispatch decisions at each bus, and the control set points for any other equipment that has been declared controllable by the problem statement. The timer records the time for the first section to execute, from invocation to completion.

In theory, the evaluation platform could use the decision variable solution provided by competitors to calculate power flow solutions and use that solution to assess constraint violations and objective function value; however, solving for power flows and checking for the existence of feasible power flow solutions are also non-trivial problems. Existing (commercial or open-source) power flow tools that could be used by the evaluation platform may find different power flow solutions given the same inputs or may not always converge to a feasible power flow solution even when one exists. The failure of the evaluation platform to find feasible power flow solutions could unfairly penalize the scores of individual competitors; therefore, we believe the best method for evaluating solutions is to require competitors to calculate and report power flow solutions for the base case and all contingency cases (given their previously reported decision variables). Given this additional information, all OPF solutions will be validated in a uniform way by forward constraint evaluation.

The second section, which is not timed, will provide the additional solution details required for solution validation by the evaluation platform. The time required to calculate the additional solution information beyond decision variables will not be counted in an algorithm’s computation time score. Allowing solution software to calculate these quantities only after the first timer has been stopped is important as some competitors will utilize insights on the problem structure or inputs to quickly screen out some contingency cases. Therefore, the software that competitors submit to the competition may not need to calculate actual power flow solutions for every contingency prior to reporting generator and equipment control set points.
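The split between the timed and untimed sections can be sketched as follows. Here `solve_base_case` and `compute_contingency_details` are hypothetical placeholders for a competitor's own routines, not part of any competition API:

```python
import time

def solve_base_case():
    # Hypothetical stand-in for a competitor's OPF solver: returns the
    # decision variables that would be reported in solution1.txt.
    return {"pg": [37.96, 110.43], "qg": [1.86, -13.10]}

def compute_contingency_details(decisions):
    # Hypothetical stand-in for the untimed post-processing that produces
    # the contingency power flow details for solution2.txt.
    return {"contingency_1": {"delta_mw": 0.0}}

# Timed section: only this part counts toward the computation time score.
start = time.perf_counter()
decisions = solve_base_case()
timed_seconds = time.perf_counter() - start

# Untimed section: contingency solutions for validation.
details = compute_contingency_details(decisions)
print(f"timed section took {timed_seconds:.6f} s")
```

The key point is that the timer stops before any contingency post-processing begins, so screening or reusing problem structure in the untimed section carries no runtime cost.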

Solution data generated by algorithm evaluation will include objective function values, algorithmic run-time and constraint violation magnitudes for each network and scenario pair tested. This data will be logged by the competition evaluation platform and associated with a specific competitor (i.e., team). These logs and the public names of the associated competitors will be released into the public domain after the conclusion of each trial or final event.

This section describes the evaluation process used in the competition, including a description of the required output files and the evaluation platform.

There are two parts to the evaluation of an algorithm on a particular power system network model/scenario pair. The first part, which is timed, establishes the base case solution used to calculate the objective function value. The second part, which is not timed, establishes the contingency solutions used to determine the feasibility of the solution.

As explained in the Scoring section, if the solution does not satisfy all constraints, a scenario score is determined by multiplying a nominal objective value by a constraint violation penalty factor. Similarly, if a scenario runtime is greater than the specified cutoff threshold a scenario score is determined by multiplying the nominal objective value by a time violation penalty factor.
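As an illustration of the multiplicative penalties described above (the numeric penalty factors below are placeholders for illustration only; the actual values are defined in the Scoring section):

```python
# Illustrative scenario-score calculation. The penalty factor values are
# hypothetical; see the Scoring section for the real definitions.
nominal_objective = 100_000.0   # nominal objective value
constraint_penalty = 1.5        # hypothetical constraint violation penalty factor
time_penalty = 2.0              # hypothetical time violation penalty factor

has_constraint_violation = True
runtime_exceeded_cutoff = False

score = nominal_objective
if has_constraint_violation:
    score *= constraint_penalty
if runtime_exceeded_cutoff:
    score *= time_penalty
print(score)  # 150000.0
```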

Evaluation Procedure

The evaluation procedure used by the competition platform is designed to check solution objective function values and constraint violations. Competitors’ codes will be required to output solutions in two standardized formats: one (solution1.txt) to evaluate the solution objective function value (timed); and the second (solution2.txt) to evaluate contingencies (not timed). The information that will be required in the solution output includes:

  • Real and reactive power generation at each generator, as well as other relevant control variables (timed section)
  • Real power (injections or withdrawals), reactive power (injections or withdrawals), voltage magnitude and phase angle at each bus for the base case and each of the contingency cases (untimed section)
  • System-wide power imbalance magnitude for each contingency case (untimed section). 

The automated evaluation process will use the information in competitors’ two solution files to calculate the objective function value and to assess solution constraint violations.

Feasibility check

A feasibility check is performed by evaluating competitors' solutions against the constraints and limits of the standard formulations.

For an inequality constraint gi(x) ≤ bi with |bi| > 1, the relative constraint violation (CVi) is calculated as CVi = max(gi(x) − bi, 0) / |bi|.

For an equality constraint gi(x) = bi with |bi| > 1, the relative constraint violation (CVi) is calculated as CVi = |gi(x) − bi| / |bi|.

For an inequality constraint gi(x) ≤ bi with |bi| ≤ 1, the constraint violation (CVi) is calculated without normalization: CVi = max(gi(x) − bi, 0).

For an equality constraint gi(x) = bi with |bi| ≤ 1, the constraint violation (CVi) is calculated without normalization: CVi = |gi(x) − bi|.
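The four cases above collapse into a single rule: normalize by |bi| only when |bi| > 1. A minimal Python sketch (the function name and signature are illustrative, not part of the competition tooling):

```python
def relative_violation(g_of_x: float, b: float, equality: bool) -> float:
    """Constraint violation CV_i for g_i(x) <= b_i or g_i(x) = b_i.

    The violation is normalized by |b_i| only when |b_i| > 1,
    matching the four cases described above.
    """
    raw = abs(g_of_x - b) if equality else max(g_of_x - b, 0.0)
    return raw / abs(b) if abs(b) > 1 else raw

# Inequality g(x) <= 100 violated by 10 MW: relative violation is 10/100.
print(relative_violation(110.0, 100.0, equality=False))  # 0.1
```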


The solution1.txt file must be generated during the timed portion of the solution.

The solution2.txt file may be generated during the untimed portion of the solution.

The detailed evaluation process can be seen in the flow chart below.


solution1.txt must contain generator dispatch information and other control variables (if any) from the base case of each scenario. It contains the following information:

  1. begin generation dispatch segment delimiter (“--generation dispatch”)
  2. column headers for data
    1. bus ID (“bus id”)
    2. unit ID (“unit id”)
    3. real power in megawatts (“pg(MW)”)
    4. reactive power in megaVar (“qg(MVar)”)
  3. CSV data for each dispatch unit, with columns in the order given by the headers in item 2
  4. end generation dispatch segment delimiter ("--end of generation dispatch”)

Instructions on how to input other control variables will be added at a later time.

The Phase 0 IEEE 14-bus contents of Scenario1/solution1.txt would look like the following (competitors' real and reactive power values may differ):

--generation dispatch
bus id,unit id,pg(MW),qg(MVar)
1,'1 ',37.9649606792,1.8583662976
6,'1 ',110.4266998728,-13.0957522832
8,'1 ',0.3198286705,6.9978519286
2,'1 ',84.7177624287,17.5456887762
3,'1 ',5.8017882658,26.8372933492
--end of generation dispatch
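
As a sketch, the dispatch segment above can be read with Python's standard csv module. The function `parse_solution1` and the truncated `SAMPLE` text are illustrative, not part of any official tooling:

```python
import csv
import io

# A truncated copy of the example solution1.txt contents shown above.
SAMPLE = """--generation dispatch
bus id,unit id,pg(MW),qg(MVar)
1,'1 ',37.9649606792,1.8583662976
6,'1 ',110.4266998728,-13.0957522832
--end of generation dispatch
"""

def parse_solution1(text: str):
    """Parse the generation dispatch segment of a solution1.txt file."""
    lines = text.splitlines()
    start = lines.index("--generation dispatch") + 2  # skip delimiter + header
    end = lines.index("--end of generation dispatch")
    rows = []
    for bus_id, unit_id, pg, qg in csv.reader(io.StringIO("\n".join(lines[start:end]))):
        rows.append({"bus": int(bus_id), "unit": unit_id.strip("' "),
                     "pg_mw": float(pg), "qg_mvar": float(qg)})
    return rows

dispatch = parse_solution1(SAMPLE)
print(dispatch[0]["pg_mw"])  # 37.9649606792
```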

solution2.txt must contain solution information from the contingency cases. It contains the following information:

  1. contingency generator dispatch (“--contingency generator”; “--end of contingency generator”)
    1. contingency ID (“conID”)
    2. generator ID (“genID”)
    3. bus ID (“busID”)
    4. unit ID (“unitID”)
    5. Reactive power in MVar (“q(MVar)”)
  2. contingency bus information (“--bus”; “--end of bus”)
    1. contingency ID (“contingency id”)
    2. bus ID (“bus id”)
    3. Voltage in per unit (“v(pu)”)
    4. Voltage angle in degree (“theta(deg)”)
  3. contingency delta (“--Delta”; “--end of Delta”)
    1. contingency ID (“contingency id”)
    2. Delta (“Delta(MW)”)
  4. contingency line flow information (“--line flow”; “--end of line flow”)
    1. contingency ID (“contingency id”)
    2. line ID (“line id”)
    3. origin bus ID (“origin bus id”)
    4. destination bus ID (“destination bus id”)
    5. circuit ID (“circuit id”)
    6. real power in megawatts at origin (“p_origin(MW)”)
    7. reactive power in MVar at origin (“q_origin(MVar)”)
    8. real power in megawatts at destination (“p_destination(MW)”)
    9. reactive power in MVar at destination (“q_destination(MVar)”)

The Phase 0 IEEE 14-bus contents of Scenario1/solution2.txt would look like the following (competitors' values may differ):

--contingency generator
1,l_14,1,'1 ',1.8920439657
1,l_17,6,'1 ',-13.0424143005
1,l_18,8,'1 ',7.2486797834
1,l_15,2,'1 ',17.7006839160
1,l_16,3,'1 ',26.9101533072
--end of contingency generator
--bus
contingency id,bus id,v(pu),theta(deg)
--end of bus
--Delta
contingency id,Delta(MW)
--end of Delta
--line flow
contingency id,line id,origin bus id,destination bus id,circuit id,p_origin(MW),q_origin(MVar),p_destination(MW),q_destination(MVar)
--end of line flow
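
Since every segment of solution2.txt opens with a “--name” delimiter and closes with a matching “--end of name” delimiter, a generic splitter can separate the sections before any column parsing. The following Python sketch (`split_segments` is a hypothetical helper, not official tooling) illustrates the idea:

```python
def split_segments(text: str):
    """Split a solution2.txt-style file into named segments.

    Assumes each segment opens with '--<name>' and closes with
    '--end of <name>', as in the format description above.
    """
    segments, current, body = {}, None, []
    for line in text.splitlines():
        if line.startswith("--end of "):
            segments[current] = body
            current, body = None, []
        elif line.startswith("--"):
            current = line[2:]
        elif current is not None:
            body.append(line)
    return segments

# A truncated copy of the example solution2.txt contents shown above.
sample = """--contingency generator
1,l_14,1,'1 ',1.8920439657
--end of contingency generator
--Delta
contingency id,Delta(MW)
--end of Delta
"""
segs = split_segments(sample)
print(list(segs))  # ['contingency generator', 'Delta']
```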
Evaluation Platform Environment

The Evaluation Platform hardware consists of a dedicated 4-node cluster with a Mellanox 4x FDR InfiniBand interconnect (54.54 Gigabits/second data bandwidth) where each node has 64 GiB of 4-channel 2133 MHz DDR4 SDRAM (68 Gigabytes/second) memory and two Intel Xeon E5-2670 v3 (Haswell) CPUs, each with 12 cores (24 cores per node) and a clock speed of 2.30 GHz.

Each submission will have exclusive access to only one of the four nodes.

The Evaluation Platform software uses the CentOS Linux release 7.3.1611 (Core) operating system. A list of software in /usr/bin and /opt/stack/bin is available for download. Additional software available includes MATPOWER 6.0 using MATLAB R2017a. In addition to MATLAB, the following programs and toolboxes from MathWorks are available: Simulink; Control System Toolbox; Global Optimization Toolbox; Optimization Toolbox; Parallel Computing Toolbox; Signal Processing Toolbox; Simscape; and Simscape Power Systems. The GAMS modeling language Distribution 24.8, with a full set of solvers, is available. Other solver libraries available include IBM® ILOG CPLEX, GUROBI, Artelys Knitro®, and FICO® Xpress.

Requests for additional software will be considered. Send them to the GO Operations Team.

Please use the Competition Forum or contact the GO Operations Team to comment about any aspect of the Evaluation Platform that might hurt performance.