
The GO Competition is currently in the Beta Testing Phase, also known as Phase 0.

During this phase, we provide you with the opportunity to become familiar with the competition platform: the problem to be solved, the formats for input and output files, registration, algorithm submission, and scoring. We encourage you, as a potential competitor, to try out the platform and raise questions through the Forum or Contact Us directly. Your participation in this phase will help improve the competition platform and process.

Participation at this point carries no obligation for the future, and team membership remains flexible.

The formal competition is subject to appropriation of funding.


Competition Overview

Reliable operation of electric power systems requires the real-time matching of instantaneous electricity generation and demand. Achieving a continuous match between supply and demand requires utilities, grid operators, and other stakeholders to use a variety of sophisticated optimization algorithms operating across a wide range of timescales. A number of emerging trends, including the integration of high penetrations of renewable electricity generation, changing electricity demand patterns, and the improving cost effectiveness of distributed energy resources (including storage), will substantially alter the operation and control of electric grids over the next several decades. This expected growth in system complexity will require the development of substantially improved software optimization and control tools to assist grid operators, and deliver the societal benefits of improved grid performance.

Many new grid optimization methods have been proposed in the research community in recent years. In addition, many claims have been made regarding the possible practical benefits that these new algorithms might offer utilities and grid system operators. Today, it is extremely difficult to compare strengths and weaknesses of different proposed approaches. The vast majority of reports only test new algorithms on relatively small-scale models that often must be heavily modified to satisfy the modeling requirements for each algorithm. Computational experiments are also typically conducted on a wide range of computational systems (ranging from commodity laptops to large-scale clusters with many thousands of nodes). Variations in modeling assumptions further complicate the comparability of algorithm testing results (for example, what types of contingency constraints are included and/or how normal versus emergency ratings are considered). Even small changes in how specific constraints are modeled or which constraints are considered can have significant implications for algorithm performance and solution quality. A new paradigm for the testing and evaluation of emerging grid optimization algorithms is needed to accelerate the adoption of these transformational techniques by industry.

This competition seeks to lay the foundation for that change. In particular, ARPA-E hopes this competition will accelerate the comprehensive evaluation of new power system optimization and control approaches. In turn, this will accelerate the industrial use and widespread adoption of these new algorithms. Success in this competition will require competitors to demonstrate the applicability and strength of new algorithms across a wide range of system operating conditions.

Competition Timeline

The GO competition will consist of two main phases of competition (subject to the appropriation of funding) and a Beta Testing Phase, also known as Phase 0.

The Beta Testing Phase / Phase 0 is now underway. In this phase, participants will solve a security constrained optimal power flow problem for relatively small networks. This gives participants a chance to become familiar with the competition process and allows the competition organizers to identify and fix any issues that might arise with the competition platform. There are no prizes for this phase, and participation carries no obligation for the future, including team membership. The formal competition will begin with Phase I (subject to appropriation of funding), which will use larger, more complex datasets created within the ARPA-E GRID DATA program.

Each phase of the competition will focus on a different variant of the Optimal Power Flow (OPF) problem. A specific problem formulation will be provided at the outset of each phase along with a detailed set of scoring criteria. The provided problem formulation and modeling approach will be used for solution evaluation. Competitors will be permitted and encouraged to use any alternative problem formulation, modeling conventions, and/or solution method within their own software.

Each of the two main competition phases will use four unique datasets. Each dataset will consist of a collection of power system network models of different sizes with associated operating scenarios (for more details on what constitutes a power system network model and a scenario, please see the GO Competition detailed modeling framework). The “Original Dataset (OD)” will be released at the start of Phase 1 in order to allow competitors to begin developing solution methods. Competitors will be able to download the dataset in order to test algorithms within their own development environment. Competitors can also submit software using this website to be scored against the OD at any time.

Approximately 6 months and 9 months after the initiation of each phase, two dry-run “trial” rounds for the GO competition will be held utilizing new power system datasets, indicated as Trial Dataset 1 (TD1) and Trial Dataset 2 (TD2) respectively. These datasets will be similar in complexity and scope to those in the OD, but they will not be publicly released until after the conclusion of each trial event.

A deadline for the submission of OPF solution software will be established at least one month prior to each trial event. Immediately following the deadline, the software from all competitors will be executed and scored against TD1 and TD2, respectively. After each trial event, aggregate scores (as well as individual scores for each power system network/scenario pair in TD1 and TD2) for each competitor's submission will be displayed on this website’s competition leaderboards. The objective of the trial events is to give competitors experience in using the portal for the competition and to troubleshoot any potential algorithm submission and evaluation problems in the context of a specified deadline, as will be required in the Final Event.

The network/scenario pairs used for scoring each trial event (TD1 and TD2) will be released via this website as soon as scoring and evaluation of all algorithms has been completed. TD1 and TD2 will remain available for scoring runs using the official competition platform throughout the remainder of the competition, and competitors will have the ability to submit new software/algorithms (to be tested against TD1 or TD2) at any time. An evolving, continuously updated leaderboard will be maintained corresponding to each individual competition dataset.

A Final Event will mark the conclusion of each competition phase. Conditions will be similar to those in each trial event, with a new Final Dataset (FD) used for evaluation and scoring. A deadline for the submission of OPF solution software will be established at least one month prior to the final event. Immediately following the deadline, the software from all competitors will be executed and scored against the FD. Aggregate scores (as well as individual scores for each power system network/scenario pair in FD) for each competitor's submission will be displayed on a series of competition leaderboards.

Competition winners will be determined based on the final aggregate scores subject to the winning criteria specified in the final competition rules.

Team memberships may change between the different phases.

Please navigate to the Timeline page for more information.

Getting Started
  1. Review the competition problem description.
  2. Review the evaluation and scoring procedures. Required formats for the output from all competition software are explained in the evaluation section.
  3. Read the competition rules.
  4. Once you are ready, Register as a contestant. Respond to the automated e-mail verification message to complete your registration.
  5. Create a Team GitHub account.
  6. Create a team.  All competition submissions must be submitted by a “team.” Teams can consist of an individual or many individuals.  Note: Team Creation requires a valid Team GitHub account.
  7. Return to GitHub to establish the SSH key (in My Team view).
  8. Download the training data provided through this platform.
  9. Develop and test your algorithms/software (on your system) using the provided training data.
  10. Commit (save/upload) your algorithms/optimization software to your GitHub repository.
  11. Submit your algorithms/software for evaluation and scoring.

A tutorial based on the reference GAMS code that explains the purpose of each component, how to make it accessible on GitHub, how to make a submission, how to download the results, and how to interpret the contents of the zip file can be downloaded from the Resources page.

Please contact the GO Operations Team if you have any questions regarding the steps described above.

Output Requirements

For each of the scenarios in a dataset, two versions of the competitor's solvers may be executed. The first, solver1, is timed as it solves the base case; its generator dispatch solution is written to the file solution1.txt, which is used to compute the objective value for scoring. The second, solver2, is not timed; it produces the contingency results in the file solution2.txt, which is used for feasibility scoring. If solver1 itself produces a file solution2.txt, solver2 will not be executed.
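The solver1/solver2 flow described above can be sketched as a small harness. This is a hypothetical illustration, not the actual evaluation code; run_solver1 and run_solver2 are stand-ins for executing a competitor's submitted programs.

```python
# Hypothetical sketch of the evaluation flow: solver1 is timed on the base
# case; solver2 runs untimed for contingencies, and is skipped when solver1
# already produced solution2.txt. The function names are illustrative only.
import os
import time

def evaluate(scenario_dir, run_solver1, run_solver2):
    """Run the two solver versions for one scenario; return solver1's time."""
    start = time.time()
    run_solver1(scenario_dir)        # timed: expected to write solution1.txt
    elapsed = time.time() - start
    # solver2 is executed only if solver1 did not already write solution2.txt
    if not os.path.exists(os.path.join(scenario_dir, "solution2.txt")):
        run_solver2(scenario_dir)    # untimed: expected to write solution2.txt
    return elapsed
```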

The required output formats for the files solution1.txt and solution2.txt are posted on the Evaluation page, along with sample files.

The files submission.log, solution1.txt, solution2.txt, and other language-dependent files will be returned for all scenarios, along with a results.csv file and a score.csv file summarizing all scenario results and scores. These files will be posted in a tar.gz file whose location is given in the Submission History section, reached via the submission ID received after submission.
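Retrieving your scores from the returned archive can be sketched as follows. The file names results.csv and score.csv follow the description above; the archive path and extraction directory are assumptions for illustration.

```python
# Hypothetical sketch: unpack a returned submission archive (tar.gz) and
# parse score.csv into rows. The score.csv column layout is not specified
# here, so rows are returned as plain lists of strings.
import csv
import os
import tarfile

def read_scores(archive_path, extract_dir="results"):
    """Extract the tar.gz returned for a submission and parse score.csv."""
    with tarfile.open(archive_path, "r:gz") as tar:
        tar.extractall(extract_dir)
    rows = []
    with open(os.path.join(extract_dir, "score.csv")) as f:
        for row in csv.reader(f):
            rows.append(row)
    return rows
```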

Available Solvers

ARPA-E intends to make a number of common software packages and solvers available by default in the competition platform. Initially, the solver libraries available include:

  • CPLEX (12.7.1) (from IBM)
  • GAMS (24.9) (note: GAMS has its own versions of CPLEX, Gurobi, Knitro, Xpress, etc. that may be different from the versions available directly)
  • Gurobi (7.5.1) (from GUROBI Optimization)
  • Knitro (10.3.0) (from Artelys)
  • MATLAB (R2017a) (from MathWorks; includes Simulink, Control System Toolbox, Global Optimization Toolbox, Optimization Toolbox, Parallel Computing Toolbox, Signal Processing Toolbox, Simscape, Simscape Power Systems, and Statistics and Machine Learning Toolbox)
  • MATPOWER (6.0)
  • Xpressmp (7.7) (from FICO)

Important: The competition platform is intended for algorithm evaluation only. You will need to purchase your own license for any solvers or modeling languages that are needed during the algorithm development process. If you wish to use a different solver or modeling language during execution of your code on the competition platform, please let us know through the competition forum and we will do our best to accommodate you; however, last-minute requests are unlikely to be satisfied. We will attempt to work with vendors and competitors on a case-by-case basis to acquire a license for the requested software. Competitors may be required to pay a license fee for non-standard software that they request to be run with their algorithms on the evaluation platform. ARPA-E reserves the right to make the additional software available to all competitors if the license allows. (The license fee will be paid for by ARPA-E in instances where ARPA-E makes the software available to all competitors.)


As explained above, a competitor's code may be executed twice against each of the power system models/scenarios of a dataset: once to determine the base case solution and timing, and once to determine contingency solutions (without timing). Consequently, there may need to be two versions of the code to satisfy these requirements: version1, which produces the file solution1.txt; and version2, which produces the file solution2.txt. Alternatively, version1 may produce both solution files, in which case version2 is not executed (but a file named version2, even if empty, must be available in the repository for download). The naming, invocation procedure, and other language dependencies are described below for each supported language. The language is chosen during the submission process, along with the dataset, Git repository name and branch, and submission name and notes.


Java

Java projects must be supplied with jar files for the main executable and dependent libraries. Be sure to use forward slashes (/) as path separators, which are system independent on the Java platform, when creating the solution files. All dependencies, except the preinstalled suite of commercial libraries (e.g., cplex, gurobi, knitro, xpressmp), must be included in the lib directory of your project (e.g., Ipopt). Projects are executed without network access, so managed dependencies will be unreachable. Your code must take command-line arguments specifying the input data for the case being evaluated. The following commands will be executed, with MyJava being the jar and package name and MyJava1 and MyJava2 being your submission codes packaged inside the provided jar file:

java -cp "MyJava.jar:lib/*" MyJava.MyJava1   powersystem.raw generator.csv contingency.csv >> submission1.log


java -cp "MyJava.jar:lib/*" MyJava.MyJava2   powersystem.raw generator.csv contingency.csv >> submission2.log


GAMS

Submitted code will be executed with the commands

gams MyGams1.gms --case=pscopf_data.gms >> submission1.log


gams MyGams2.gms --case=pscopf_data.gms >> submission2.log

where pscopf_data.gms is the input data for the scenario being evaluated. See the GO GAMS tutorial for example GAMS code, how to submit it, and how to obtain the result files.

The GO Competition is currently using GAMS version 24.8. All solvers supported by GAMS are available.

Note: There is a step-by-step tutorial that describes how to download the reference GAMS code, explains the purpose of each component, and shows how to make it accessible on GitHub, how to make a submission, how to download the results, and how to interpret the contents of the zip file. This tutorial can be downloaded from the Resources page.


Julia

Submitted code will be executed with the commands

julia MyJulia1.jl powersystem.raw generator.csv contingency.csv >> submission1.log


julia MyJulia2.jl powersystem.raw generator.csv contingency.csv >> submission2.log

where powersystem.raw, generator.csv, and contingency.csv are the three input datasets presented as arguments to the program.

The GO Competition is currently using julia version 0.6 with Ipopt version 3.12.1 and JuMP version 0.17.1.


Python

Submitted code will be executed with the commands

python MyPython1.py powersystem.raw generator.csv contingency.csv >> submission1.log


python MyPython2.py powersystem.raw generator.csv contingency.csv >> submission2.log

where powersystem.raw, generator.csv, and contingency.csv are the input files, described in Datasets, for the current scenario being evaluated. The first command is timed; the second command is not timed.

The Python version currently installed is 2.7.5. If you require a different version, please send your request to the GO Operations Team.
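A minimal skeleton matching the invocation above might look like the following. This is a hypothetical sketch: the actual contents of solution1.txt are defined by the format on the Evaluation page, so the write below is only a placeholder.

```python
# Hypothetical skeleton for MyPython1.py. It accepts the three input-file
# arguments shown in the command above and writes a placeholder
# solution1.txt; the real file must follow the format on the Evaluation page.
import sys

def main(argv):
    # argv[0] is the script name; the next three are the input files
    powersystem, generator, contingency = argv[1:4]
    # ... read the inputs and solve the base-case SCOPF here ...
    with open("solution1.txt", "w") as out:
        out.write("# generator dispatch results go here\n")

if __name__ == "__main__":
    main(sys.argv)
```

MyPython2.py would follow the same pattern but write solution2.txt with the contingency results.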


MATLAB

The MATLAB script code will be executed with the commands

matlab -nodesktop -nosplash -r "clear all; global pfotemp; pfotemp='$2/scenario_$i'; MyMatlab1; quit" -log


matlab -nodesktop -nosplash -r "clear all; global pfotemp; pfotemp='$2/scenario_$i'; MyMatlab2; quit" -log

where pfotemp is the name of the directory containing the scenario data.

The MATLAB version being used is R2017a and includes the following additional programs:

  • Simulink
  • Control System Toolbox
  • Global Optimization Toolbox
  • Optimization Toolbox
  • Parallel Computing Toolbox
  • Signal Processing Toolbox
  • Simscape
  • Simscape Power Systems
  • Statistics and Machine Learning Toolbox

The MATLAB PATH variable includes all the MATPOWER 6.0 code and links to Gurobi, CPLEX, Knitro, and xpressmp solvers.

CVX and other open source libraries may be included upon request to the GO Operations Team.


C/C++

Before execution, the command "make all" is issued; the code will then be executed with the commands

./MyCpp1 powersystem.raw generator.csv contingency.csv >> submission1.log


./MyCpp2 powersystem.raw generator.csv contingency.csv >> submission2.log

where powersystem.raw, generator.csv, and contingency.csv are the input files, described in Datasets, for the current scenario being evaluated.

Linux binary executables

A Docker container with the evaluation platform environment (less any licensed software) is available for download for building and testing your executable.

Some basic instructions:

  • Install docker; follow the installation instructions.
  • Download and unzip the zip file that contains the GO Docker Container .tar file; extract it to a directory
    • tar -xvzf arpa-go-comp.tar
  • Navigate to the extracted files
    • cd arpa-go-comp/
  • Build your executables and call them MyExe1 and MyExe2
  • Replace contents of the executable directory with your own executable named MyExe1
  • Run the following command from the arpa-go-comp directory:
    • docker run --rm -v `pwd`/data:/home/arpacomp/data:ro -v `pwd`/executable:/home/arpacomp/executable -w /home/arpacomp/executable centos:centos7 ./MyExe1 141982/scenario_1
  • Alternately, run the following command from the arpa-go-comp directory. In this example, the executables are MyCpp1 and MyCpp2:
    • docker run -ti --rm -v `pwd`/data:/home/arpacomp/data:ro -v `pwd`/executable:/home/arpacomp/executable -w /home/arpacomp/executable centos:centos7 ./MyCpp1 105
    • this will run MyCpp1 inside the container and create its solution file in the ./executable directory
  • The 141982 on the command line is the name of the single-scenario IEEE 14 directory. The other dataset directories are
    •    105  = Phase_0_RTS96
    •    106  = Phase_0_Modified_RTS96
    • 110128  = Phase_0_Feas179
    • 132521  = Phase_0_Infeas179
    • 141982  = Phase_0_IEEE14_1Scenario
    •    416  = Phase_0_Modified_IEEE14
    •    417  = Phase_0_IEEE14
  • Confirm that your executable ran and produced a valid solution1.txt file and, optionally, a solution2.txt file.

As above, MyExe1 will be timed and MyExe2 will not.
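The dataset-directory IDs listed above can be captured in a small lookup that assembles the docker command shown earlier. This is an illustrative sketch: the ID-to-dataset mapping comes from the list in this section, while the executable name, scenario number, and working directory are parameters you would supply.

```python
# Sketch: build the docker run command from a dataset directory ID.
# The mapping mirrors the dataset list above; docker_command is a
# hypothetical helper, not part of the competition platform.
import os

DATASET_DIRS = {
    "105": "Phase_0_RTS96",
    "106": "Phase_0_Modified_RTS96",
    "110128": "Phase_0_Feas179",
    "132521": "Phase_0_Infeas179",
    "141982": "Phase_0_IEEE14_1Scenario",
    "416": "Phase_0_Modified_IEEE14",
    "417": "Phase_0_IEEE14",
}

def docker_command(dataset_id, scenario=1, exe="./MyExe1", workdir="."):
    """Return the docker run command string for one dataset/scenario."""
    if dataset_id not in DATASET_DIRS:
        raise ValueError("unknown dataset directory: %s" % dataset_id)
    cwd = os.path.abspath(workdir)  # stands in for `pwd` in the shell version
    return ("docker run --rm -v %s/data:/home/arpacomp/data:ro "
            "-v %s/executable:/home/arpacomp/executable "
            "-w /home/arpacomp/executable centos:centos7 "
            "%s %s/scenario_%d" % (cwd, cwd, exe, dataset_id, scenario))
```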

(Currently under development)

If you have any issues, please use the competition forum or contact the GO Operations Team.

Final Scoring

A final event will mark the conclusion of each competition phase. Conditions will be similar to those in each trial event, with a new Final Dataset (FD) used for evaluation and scoring. A deadline for the submission of OPF solution software will be established at least one month prior to the final event.


Use the competition forum to submit questions and suggestions and to take part in discussions.