Notice

The GO Competition is currently in the Beta Testing Phase, also known as Phase 0.

During this phase, we provide you with the opportunity to get familiar with the competition platform: the problem to be solved, the formats for input and output files, registration, algorithm submission, and scoring. We encourage you, as a potential competitor, to try out the platform and raise questions through the Forum or Contact Us directly. Your participation in this phase will help improve the competition platform and process.

Participation at this point carries no obligation for the future, and team membership remains flexible.

The formal competition is subject to appropriation of funding.

Evaluation

The evaluation process will assess solutions against the official competition problem formulation as described on this website and in the official competition documents; however, competitors are free to use any other formulation or modeling approach within their solution software. To ensure fairness and to enable the use of alternative problem formulations where appropriate, each competition algorithm will have two sections. The first section will record the computation time required by competitors’ codes to solve the OPF problem and report the objective function value, the real and reactive power generation dispatch decisions at each bus, and the control set points for any other equipment declared controllable by the problem statement. The timer records the time for the first section to execute, from invocation to completion.
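
For illustration only, the timed section can be thought of as a single call wrapped by a wall-clock timer. The following minimal Python sketch assumes a hypothetical competitor-supplied callable named solver; it is not part of the official evaluation harness:

import time

def time_first_section(solver, scenario_dir):
    # 'solver' stands in for the competitor's base-case OPF routine (hypothetical);
    # it is expected to write solution1.txt as a side effect. Only this call is
    # covered by the competition timer, from invocation to completion.
    start = time.perf_counter()
    solver(scenario_dir)
    return time.perf_counter() - start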

In theory, the evaluation platform could use the decision variable solution provided by competitors to calculate power flow solutions and use those solutions to assess constraint violations and the objective function value; however, solving for power flows and checking for the existence of feasible power flow solutions are also non-trivial problems. Existing (commercial or open-source) power flow tools that could be used by the evaluation platform may find different power flow solutions given the same inputs or may not always converge to a feasible power flow solution even when one exists. The failure of the evaluation platform to find feasible power flow solutions could unfairly penalize the scores of individual competitors; therefore, we believe the best method for evaluating solutions is to require competitors to calculate and report power flow solutions for the base case and all contingency cases (given their previously reported decision variables). Given this additional information, all OPF solutions will be validated in a uniform way by forward constraint evaluation.

The second section, which is not timed, will provide the additional solution details required for solution validation by the evaluation platform. The time required to calculate the additional solution information beyond decision variables will not be counted in an algorithm’s computation time score. Allowing solution software to calculate these quantities only after the first timer has been stopped is important as some competitors will utilize insights on the problem structure or inputs to quickly screen out some contingency cases. Therefore, the software that competitors submit to the competition may not need to calculate actual power flow solutions for every contingency prior to reporting generator and equipment control set points.

Solution data generated by algorithm evaluation will include objective function values, algorithmic run times, and constraint violation magnitudes for each power system model scenario tested. This data will be logged by the competition evaluation platform and associated with a specific competitor (i.e., team). These logs and the public names of the associated competitors will be released into the public domain after the conclusion of each trial or final event.

This section describes the evaluation process used in the competition, including a description of the required output files and the evaluation platform.

There are two parts to the evaluation of an algorithm on a particular power system model scenario. The first part, which is timed, establishes the base case solution used to calculate the objective function value. The second part, which is not timed, establishes the contingency solutions used to determine the feasibility of the solution.

As explained in the Scoring section, if the solution does not satisfy all constraints, the power system model scenario score is determined by multiplying a nominal objective value by a constraint violation penalty factor. Similarly, if a power system model scenario runtime is greater than the specified cutoff threshold, the scenario score is determined by multiplying the nominal objective value by a time violation penalty factor. A power system network model score is then calculated by taking the geometric mean across all scenario scores of that network model. A final dataset score is computed by taking the geometric mean of all power system network model scores in a given dataset.
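
To make the roll-up concrete, here is a minimal Python sketch of that aggregation. The penalty factor values and the handling of a solution that violates both the constraints and the time cutoff are illustrative assumptions, not the official scoring rules:

from math import exp, log

def scenario_score(objective, nominal, feasible, within_time,
                   cv_penalty=10.0, time_penalty=10.0):
    # Feasible, on-time solutions score their objective value; otherwise the
    # nominal objective is multiplied by the relevant penalty factor(s).
    # The factor values here are placeholders, not the official constants.
    if feasible and within_time:
        return objective
    score = nominal
    if not feasible:
        score *= cv_penalty
    if not within_time:
        score *= time_penalty
    return score

def geometric_mean(values):
    # Assumes all scores are positive.
    return exp(sum(log(v) for v in values) / len(values))

def dataset_score(scenario_scores_by_network):
    # Network score: geometric mean over that network's scenario scores.
    # Dataset score: geometric mean over the resulting network scores.
    network_scores = [geometric_mean(scores)
                      for scores in scenario_scores_by_network.values()]
    return geometric_mean(network_scores)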

Evaluation Procedure

The evaluation procedure used by the competition platform is designed to check solution objective function values and constraint violations. Competitors’ codes will be required to output solutions in two specific standardized formats: the first (solution1.txt) to evaluate the solution objective function value (timed), and the second (solution2.txt) to evaluate contingencies (not timed). The information that will be required in the solution output includes:

  • Real and reactive power generation at each generator, as well as other relevant control variables (timed section)
  • Real power (injections or withdrawals), reactive power (injections or withdrawals), voltage magnitude and phase angle at each bus for the base case and each of the contingency cases (untimed section)
  • System-wide power imbalance magnitude for each contingency case (untimed section). 

The automated evaluation process will use the information in competitors’ two solution files to calculate the objective function value and to assess solution constraint violations.

Feasibility check

A feasibility check is performed by evaluating competitors' solutions against the constraints and limits of the standard formulations.

For an inequality constraint gi(x) ≤ bi with |bi| > 1, a relative constraint violation (CVi) is calculated as CVi = max(gi(x) - bi, 0) / |bi|.

For an equality constraint gi(x) = bi with |bi| > 1, a relative constraint violation (CVi) is calculated as CVi = |gi(x) - bi| / |bi|.

For an inequality constraint gi(x) ≤ bi with |bi| ≤ 1, a relative constraint violation (CVi) is calculated as CVi = max(gi(x) - bi, 0).

For an equality constraint gi(x) = bi with |bi| ≤ 1, a relative constraint violation (CVi) is calculated as CVi = |gi(x) - bi|.
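
The four cases reduce to one rule: take the raw violation (one-sided for inequalities, absolute for equalities) and normalize it by |bi| only when |bi| > 1. A minimal Python sketch of that rule:

def relative_violation(g_of_x, b, is_equality):
    # g_of_x: evaluated left-hand side gi(x); b: right-hand side bi.
    raw = abs(g_of_x - b) if is_equality else max(g_of_x - b, 0.0)
    return raw / abs(b) if abs(b) > 1.0 else raw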

Summary

The solution1.txt file must be generated during the timed portion of the solution.

The solution2.txt file may be generated during the untimed portion of the solution.

The detailed evaluation process can be seen in the flow chart below.

Solution1.txt

solution1.txt must contain generator dispatch information and other control variables (if any) from the base case of each power system model scenario. It contains the following information:

  1. begin generation dispatch segment delimiter (“--generation dispatch”)
  2. column headers for data
    1. bus ID (“bus id”)
    2. unit ID (“unit id”)
    3. real power in megawatts (“pg(MW)”)
    4. reactive power in megavars (“qg(MVar)”)
  3. CSV data for each dispatch unit, with columns in the order given by the headers in item 2
  4. end generation dispatch segment delimiter (“--end of generation dispatch”)

Instructions on how to input other control variables will be added at a later time.

The Phase 0 IEEE 14 bus contents of Scenario1/solution1.txt would look like the following (competitor’s real power and reactive power values may be different):

--generation dispatch
bus id,unit id,pg(MW),qg(MVar)
1,'1 ',37.9649606792,1.8583662976
6,'1 ',110.4266998728,-13.0957522832
8,'1 ',0.3198286705,6.9978519286
2,'1 ',84.7177624287,17.5456887762
3,'1 ',5.8017882658,26.8372933492
--end of generation dispatch
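
As an informal illustration (not official competition code), a minimal Python sketch that writes this segment from a list of (bus id, unit id, pg, qg) tuples might look as follows; the function name and data structure are assumptions:

def write_solution1(path, dispatch):
    # 'dispatch' is an iterable of (bus_id, unit_id, pg_mw, qg_mvar) tuples,
    # e.g. (1, "'1 '", 37.9649606792, 1.8583662976), matching the example above.
    with open(path, "w") as f:
        f.write("--generation dispatch\n")
        f.write("bus id,unit id,pg(MW),qg(MVar)\n")
        for bus_id, unit_id, pg, qg in dispatch:
            f.write(f"{bus_id},{unit_id},{pg:.10f},{qg:.10f}\n")
        f.write("--end of generation dispatch\n")
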
Solution2.txt

solution2.txt must contain solution information from the contingency cases. It contains the following information:

  1. contingency generator dispatch (“--contingency generator”; “--end of contingency generator”)
    1. contingency ID (“contingency id”)
    2. generator ID (“generator id”) -- something relevant to your code; not used in the evaluation process
    3. bus ID (“bus id”)
    4. unit ID (“unit id”)
    5. reactive power in megavars (“q(MW)”)
  2. contingency bus information (“--bus”; “--end of bus”)
    1. contingency ID (“contingency id”)
    2. bus ID (“bus id”)
    3. voltage magnitude in per unit (“v(pu)”)
    4. voltage angle in degrees (“theta(deg)”)
  3. contingency delta (“--Delta”; “--end of Delta”)
    1. contingency ID (“contingency id”)
    2. Delta (“Delta(MW)”)
  4. contingency line flow information (“--line flow”; “--end of line flow”)
    1. contingency ID (“contingency id”)
    2. line ID (“line id”) -- something relevant to your code; not used in the evaluation process
    3. origin bus ID (“origin bus id”)
    4. destination bus ID (“destination bus id”)
    5. circuit ID (“circuit id”)
    6. real power in megawatts at origin (“p_origin(MW)”)
    7. reactive power in MVar at origin (“q_origin(MVar)”)
    8. real power in megawatts at destination (“p_destination(MW)”)
    9. reactive power in MVar at destination (“q_destination(MVar)”)

The Phase 0 IEEE 14 bus contents of Scenario1/solution2.txt would look like the following (competitor’s values may be different):

--contingency generator
contingency id,genID,bus id,unit id,q(MW)
1,l_14,1,'1 ',1.8920439657
1,l_17,6,'1 ',-13.0424143005
1,l_18,8,'1 ',7.2486797834
1,l_15,2,'1 ',17.7006839160
1,l_16,3,'1 ',26.9101533072
--end of contingency generator
--bus
contingency id,bus id,v(pu),theta(deg)
0,8,1.1000000000,5.5864770127
0,2,1.0914701645,8.1398994078
0,14,1.0628888503,6.4505375932
0,10,1.0741009382,6.1122501823
0,3,1.0654280495,3.4058096240
0,7,1.0887940120,5.5595254296
0,9,1.0791972126,5.3373738635
0,6,1.1000000000,11.3165871591
0,12,1.0865543223,10.2294798690
0,11,1.0816641774,8.5372192167
0,1,1.0956591539,8.7574065074
0,13,1.0801097385,9.7951050674
0,4,1.0697105064,5.9440132240
0,5,1.0732898620,7.2148842999
1,8,1.1000000000,-0.7532229238
1,2,1.0914701645,1.8538326988
1,14,1.0598341609,-0.0743928645
1,10,1.0733983766,-0.2261850840
1,3,1.0654280495,-2.8808294176
1,7,1.0883923500,-0.7806842204
1,9,1.0784377053,-1.0286520888
1,6,1.1000000000,5.1046567042
1,12,1.0587544948,2.7149743850
1,11,1.0812228268,2.2622242346
1,1,1.0956591539,2.4723694497
1,13,1.0738404326,3.0895688263
1,4,1.0695266224,-0.3486365532
1,5,1.0732096990,0.9333968779
--end of bus
--Delta
contingency id,Delta(MW)
1,0.0019761473
--end of Delta
--line flow
contingency id,line id,origin bus id,destination bus id,circuit id,p_origin(MW),q_origin(MVar),p_destination(MW),q_destination(MVar)
0,i_11,7,9,'BL',4.1413318422,9.5061994545,-4.1413318422,-9.4064239939
0,i_14,10,11,'BL',-22.9761326646,6.1272750623,23.3782748731,-5.1859045079
0,i_17,4,7,'BL',3.8215031717,2.6172876907,-3.8215031717,-2.5797860834
0,i_18,4,9,'BL',2.2679201960,4.9215656632,-2.2679201960,-4.7875472966
0,i_19,5,6,'BL',-35.9525204199,24.8652624433,35.9525204199,-21.2339571641
0,i_1,1,5,'BL',15.9641439290,4.3593613937,-15.8253740474,-9.5734621036
0,i_6,4,5,'BL',-57.3737425289,9.7671310853,57.7689110106,-8.5206483317
0,i_9,6,13,'BL',26.1622121170,3.8319397860,-25.7799935330,-3.0792320773
0,i_4,2,5,'BL',13.2243134554,5.1075898562,-13.1161456412,-8.8311681583
0,i_3,2,4,'BL',27.0351993991,3.0208504059,-26.6662570887,-5.8718879653
0,i_10,7,8,'BL',-0.3198286705,-6.9264133711,0.3198286705,6.9978519286
0,i_12,9,10,'BL',-14.0616138860,11.9276805523,14.1544761136,-11.6810015761
0,i_2,2,3,'BL',49.5701517682,1.9868853878,-48.5926008283,-2.9633522332
0,i_0,1,2,'BL',22.0008167502,-2.5009950960,-21.9226033984,-3.5744969015
0,i_15,12,13,'BL',3.7549512193,-0.6300386655,-3.7278244221,0.6545819582
0,i_8,6,12,'BL',9.4916104689,1.3053581664,-9.3983668501,-1.1112921822
0,i_16,13,14,'BL',17.8421446777,-2.8566525730,-17.3637689253,3.8306438523
0,i_13,9,14,'BL',-4.2129398200,8.5699707507,4.3124667999,-8.3582635299
0,i_5,3,4,'BL',-26.2675003800,7.5512088127,26.7152631642,-7.8672085362
0,i_7,6,11,'BL',27.8375735276,-2.4490101532,-27.2245772023,3.7327011314
1,i_11,7,9,'BL',4.6176371063,9.8586961607,-4.6176371063,-9.7486335316
1,i_14,10,11,'BL',-23.5600133212,6.2615917077,23.9832166878,-5.2709193053
1,i_17,4,7,'BL',4.2918799941,2.7319226632,-4.2918799941,-2.6866626481
1,i_18,4,9,'BL',2.5399988694,5.0368539304,-2.5399988694,-4.8915758208
1,i_19,5,6,'BL',-36.5584065652,24.8681177588,36.5584065652,-21.1524989951
1,i_1,1,5,'BL',15.9411089231,4.4034282171,-15.8023788808,-9.6172700944
1,i_6,4,5,'BL',-57.9146903473,9.6884187522,58.3170944534,-8.4191126168
1,i_9,6,13,'BL',34.4914896899,5.1353881864,-33.8266903513,-3.8261907587
1,i_4,2,5,'BL',13.1895860305,5.1684098512,-13.0814381054,-8.8917511979
1,i_3,2,4,'BL',27.1345130495,3.1047842675,-26.7625299406,-5.9459265667
1,i_10,7,8,'BL',-0.3257571123,-7.1720335125,0.3257571123,7.2486797834
1,i_12,9,10,'BL',-14.6398441010,12.0770070160,14.7383567702,-11.8153182215
1,i_2,2,3,'BL',49.5757963155,1.9860298956,-48.5980277346,-2.9615798140
1,i_0,1,2,'BL',22.0337324924,-2.5113842514,-21.9552873732,-3.5634001262
1,i_15,12,13,'BL',-5.6434156307,-1.7413308477,5.7121581782,1.8035264858
1,i_16,13,14,'BL',16.4488588956,-3.2586384193,-16.0320577524,4.1072614454
1,i_13,9,14,'BL',-2.8863256675,8.8357462988,2.9807556271,-8.6348811230
1,i_5,3,4,'BL',-26.1647482212,7.6222963516,26.6100283387,-7.9443808414
1,i_7,6,11,'BL',28.4705959846,-2.4752205733,-27.8295190169,3.8177159288
--end of line flow
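
For illustration, the four delimited segments could be read back with a small state-machine parser. This Python sketch is one reasonable approach assumed for this page, not the evaluation platform's actual code; it returns each segment as a list of rows keyed by its column headers:

def read_solution2(path):
    # Maps each segment name ('contingency generator', 'bus', 'Delta',
    # 'line flow') to a list of {column header: string value} rows.
    segments, current, headers = {}, None, None
    with open(path) as f:
        for raw in f:
            line = raw.strip()
            if not line:
                continue
            if line.startswith("--end of "):
                current, headers = None, None          # close the current segment
            elif line.startswith("--"):
                current, headers = line[2:], None      # open a new segment
                segments[current] = []
            elif current is not None and headers is None:
                headers = line.split(",")              # first row after delimiter: column headers
            elif current is not None:
                segments[current].append(dict(zip(headers, line.split(","))))
    return segments
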
Evaluation Platform Environment

The Evaluation Platform hardware consists of a dedicated 4-node cluster with a Mellanox 4x FDR InfiniBand interconnect (54.54 gigabits/second data bandwidth), where each node has 64 GiB of 4-channel 2133 MHz DDR4 SDRAM (68 gigabytes/second) memory and two Intel Xeon E5-2670 v3 (Haswell) CPUs, each with 12 cores (24 cores per node) and a clock speed of 2.30 GHz (peak floating-point performance per node is 883.2 GFLOPS). One of the nodes manages the evaluations run on the other three "compute" nodes.

Results from each submission are gzipped and placed in a tar file on dtn2.pnl.gov, an externally facing gateway for transferring large amounts of data efficiently. It is connected to the Internet via a 10 gigabit per second link, and transfer rates over 1 Gbps to various sites around the world are not uncommon.

Each submission will have exclusive access to only one of the three compute nodes.

The Evaluation Platform software uses the CentOS Linux release 7.3.1611 (Core) operating system. A list of software in /usr/bin and /opt/stack/bin is available for download. Additional software available includes MATPOWER 6.0 using MATLAB 2017a. In addition to MATLAB, the following programs and toolboxes from MathWorks are available: Simulink; Control System Toolbox; Global Optimization Toolbox; Optimization Toolbox; Parallel Computing Toolbox; Signal Processing Toolbox; Simscape; and Simscape Power Systems. The GAMS modeling language Distribution 24.8, with a full set of solvers, is available. Other solver libraries available include IBM® ILOG CPLEX, GUROBI, Artelys Knitro®, and FICO® Xpress.

Requests for additional software will be considered. Send them to the GO Operations Team.

Please use the Competition Forum or contact the GO Operations Team to comment on any aspect of the Evaluation Platform that might hurt performance.