Frequently Asked Questions

Challenge 3

Please refer to the official rules, scoring, formulation, and data format documents for further information.

FAQ C3 1 was added October 3, 2022
FAQ C3 2 was added February 23, 2023


Consider the Input Attributes table at the beginning of section 2.3 Bus of the Data Format document where it shows con_loss_factor as Not Required.

Operations information Description(Type) Req Sec Sym
con_loss_factor Contingency participation loss factor (Float) N S α

How is an attribute, such as con_loss_factor, to be interpreted when it is not present?

If an attribute is present for one bus, will it also be present for all buses?


Optional attributes (Not Required) are not guaranteed to exist. If they are present, competitors are free to use them to guide their development process, in particular for debugging.

There are no guarantees that all the buses have the same set of optional attributes. 

Ultimately, optional fields are outside the formulation and competitors should not assume their presence for their code to work; they may be relevant for future Challenges or post-Challenge analysis.

Note: v.1.0.3 of the Data Format document has been updated to include these clarifications.


There is one particular instance where our solver solution was marked infeasible.  When I looked at the feasibility log, the infeasibility came from a very small violation:

infeasibility summary:

{'viol_pr_t_p_on_max': {'idx': {0: '123_STEAM_3', 1: 24},
                        'val': 1.166533966520342e-08}}

 However, my solver is claiming in this instance that all constraints are satisfied to a tolerance of at least 6e-9.  Can you tell me which constraint in the "Problem Formulation" document 'viol_pr_t_p_on_max' corresponds to so that I can try to resolve this discrepancy?


The constraint being checked is equation (109) of the formulation document.

The information needed to determine this is in Section 7.1 of the formulation document.

In the notation of the problem and solution files, this is:

p_on + p_reg_res_up + p_syn_res + p_ramp_res_up_online <= p_ub * on_status

For 123_STEAM_3 and t=24 (starting from t=0), the problem file has p_ub=3.5, and the solution file has on_status=1, p_on=3.499999999954076, p_reg_res_up=3.468853980166409e-12, p_syn_res=5.853069227668623e-09, p_ramp_res_up_online=5.854725736923645e-09.

So the right hand side of (109) is 3.5, and the left hand side comes out to 3.499999999954076 + 3.468853980166409e-12 + 5.853069227668623e-09 + 5.854725736923645e-09 = 3.5000000116653394, which is 1.16653394e-8 more than the right hand side and greater than the hard constraint feasibility tolerance defined on page 15 of the formulation document, whose value is specified as 1e-8. This is an absolute (not relative) tolerance, applied to each hard constraint in the units of that constraint. This value is supplied to the evaluation procedure through the "hard_constr_tol" field in the configuration file config.json.
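As a minimal sketch (using the values quoted above; the actual evaluator code may differ), this check can be reproduced as:

```python
# Reproduce the evaluator's check of constraint (109) for 123_STEAM_3, t=24.
# Values are taken from the problem and solution files quoted above;
# hard_constr_tol is the value given in config.json ("hard_constr_tol": 1e-8).
p_on = 3.499999999954076
p_reg_res_up = 3.468853980166409e-12
p_syn_res = 5.853069227668623e-09
p_ramp_res_up_online = 5.854725736923645e-09
p_ub = 3.5
on_status = 1
hard_constr_tol = 1e-8

lhs = p_on + p_reg_res_up + p_syn_res + p_ramp_res_up_online
rhs = p_ub * on_status
viol = lhs - rhs                      # absolute violation, in constraint units

print(f"violation = {viol:.6e}")      # ~1.1665e-08, just above the tolerance
print("infeasible" if viol > hard_constr_tol else "feasible")
```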

Typically, solvers employ one or more general purpose optimization solvers, which in turn employ a constraint feasibility tolerance. Setting a given tolerance in an optimization solver may not translate to the same absolute tolerance in the solution evaluation procedure. The following are some, but not necessarily all, of the reasons for this which may apply to a given optimization solver:

  • The tolerance in an optimization solver may be formulated as a relative tolerance with a normalization value constructed in a specified way.
  • The tolerance in an optimization solver may apply to a rescaled optimization model.
  • The tolerance in an optimization solver may apply to a presolved version of the optimization model, and when the solution of this model is substituted to create a solution of the original model, errors in the presolved model that were within the tolerance may create errors in the original model that exceed the tolerance.

In past challenges, some GO Competition solvers have implemented a postsolve step that cleans up violations of hard constraints in the solution before writing it to the solution file. Typically this takes the form of a projection operator, accepting possibly higher violations of the soft constraints, which do not affect feasibility, in exchange for lower violations of the hard constraints, which do. A slightly worse feasible solution is better than an infeasible solution, which counts as no solution in the Competition.
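A minimal sketch of such a projection for the reserve constraint discussed above (hypothetical function and variable names; a real postsolve would repair all hard constraints jointly):

```python
def project_reserves(p_on, p_reg_res_up, p_syn_res, p_ramp_res_up_online,
                     p_ub, on_status, margin=1e-9):
    """Scale the reserve variables down so that
    p_on + reserves <= p_ub * on_status - margin.
    Hypothetical sketch: assumes p_on itself is within its bound."""
    reserves = [p_reg_res_up, p_syn_res, p_ramp_res_up_online]
    headroom = p_ub * on_status - margin - p_on
    total = sum(reserves)
    if total <= headroom or total == 0.0:
        return p_on, *reserves        # already feasible with margin to spare
    scale = max(headroom, 0.0) / total
    return p_on, *(r * scale for r in reserves)

# Applying this to the instance from the FAQ shrinks the (already tiny)
# reserve values so the hard bound holds:
repaired = project_reserves(3.499999999954076, 3.468853980166409e-12,
                            5.853069227668623e-09, 5.854725736923645e-09,
                            p_ub=3.5, on_status=1)
print(sum(repaired))   # now <= p_ub
```

The trade is deliberate: reserve shortfalls are soft-constraint penalties, while the capacity bound is hard, so a small loss of reserve is preferable to an infeasible solution.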

Post facto note: The problem was not in the accuracy of the solver but an error in writing the solution file.

Challenge 2 Monarch of the Mountain (Ch2-MoM)

Please refer to the official rules and scoring documents for Ch2-MoM.

The Problem Formulation, Data Sets (synthetic only), Input File Formats, Output File Formats, and Solution Evaluation are all from Challenge 2.

FAQ MoM 1 was added March 2, 2022

From the Rules document:

"At the end of the competition, Eligible Entrants with the top score on a scenario leaderboard may receive a $5,000 award per scenario. To qualify for this award, the top score must also be at least 1% greater than the top score from the Challenge 2 Final Event. Additionally, one bonus prize of $5,000 will be awarded for each scenario to whichever Eligible Entrant held the top rank of that scenario’s leaderboard for the greatest number of competition weeks (need not be consecutive)."

Does the requirement that a new score must be at least 1% greater than the Challenge 2 Final Event score apply to be eligible for the bonus prize?

No. The requirement that a new score must be at least 1% greater than the Challenge 2 Final Event score to be eligible for a prize does not apply to the bonus prize awarded to teams which have held the top rank of a scenario for the greatest number of weeks.

Challenge 2

Please refer to the official rules, scoring, and formulation documents for further information.
You can download the original set of 8 Challenge 2 questions and answers released May 26, 2020.

FAQ 9 was added July 27, 2020.
FAQs 2, 6, 7 and 8 were updated August 21, 2020.
FAQ 1 was updated and FAQs 10 and 11 were added September 11, 2020.
FAQ 12 was added September 17, 2020.
FAQ 13 was added September 25, 2020.
FAQ 14 was added October 27, 2020.
FAQ 15 was added May 19, 2021
FAQ 16 was added June 2, 2021
FAQ 17 was added June 4, 2021
FAQ 18 was added June 15, 2021
FAQ 19 was added June 18, 2021
FAQ 20 was added July 21, 2021
FAQ 21 was added July 30, 2021

The Challenge 2 formulation expands upon the Challenge 1 formulation. Challenge 2 additionally considers: discrete transformer tap settings, switchable shunts, phase shifting transformers, price-responsive demand, generator ramp rates, limited “fast-start” unit commitment, and topology optimization (transmission switching). Challenge 2 also considers post-contingency response on a different timescale than the Challenge 1 approach. Challenge 2 places additional expectations on the competitors to perform a more detailed analysis as part of the Code2 model.

Similar to Challenge 1, Challenge 2 will have multiple divisions. These divisions will feature short and long time limits and will target certain potential use cases and/or new model features unique to Challenge 2. Unlike Challenge 1, we will no longer consider performance profile scoring -- the scoring method will be the same for all of Challenge 2’s divisions. Please see the Scoring Document for details.

Similar to Challenge 1, there will be a split between the “Code1” and “Code2” algorithms. Code1 will primarily handle the base case with some consideration of contingency response, and Code2 will focus only on the contingency response. Code1 and Code2 will each be timed separately. The Code1 time limit will be specified by each division -- we will include divisions (1 and 3) with short time limits (5 minutes) and divisions (2 and 4) with longer time limits (60 minutes). For Code2, there will be a dynamic time limit based on the dataset/problem dimensions rather than the division.

There are no current plans to change the hardware that was available to all teams by the end of Challenge 1. We will no longer support Matlab software on the GO Competition platform.

Similar to Challenge 1, Challenge 2 will include two formal trials with leaderboards and one final event. Only performance in the final event will impact the final ranking. Based on group performance, ARPA-E may add an additional, optional trial to provide more development time and support to the competitor teams.

Yes, we have released an updated Rules document for Challenge 2.

Please refer to the Challenge 2 Rules document for any questions pertaining to confidentiality or proprietary information.

Please refer to the Scoring document for any questions on Challenge 2 divisions and prizes and to the Rules document for any questions on prize eligibility.

For Challenge 2, there are no funding opportunities beyond the winnings from Challenge 1. The competition is open for new entrants and teams.

In Challenge 1, we were tasked with solving problems for some networks that we had seen previously, while other networks were given to us for the first time during the competition. Will that also be the case with Challenge 2? If so, will we have the name of the network provided as an input for our code, similar to the inputs provided in Challenge 1?

Similar to Challenge 1, we will not share many details about the upcoming datasets in advance. We expect that the average complexity of the datasets should increase through the trials up to the final event. We are making an effort in trial 1 to release at least some examples that represent the expected range (min/max) of scenario sizes (number of buses and branches) that will be used throughout the competition. As in Challenge 1, the network model name will be passed to code1 and code2 as explained in the Input Parameters section of the References/Languages page. As before, the scenario instances will be unique to each Event and new network models may be introduced as well.

For the 10-minute divisions in Challenge 1, we were provided with a reasonable initialization for the decision variables analogous to what a system operator would have via the solution from the previous time period. Will we similarly be provided with a reasonable initialization for Challenge 2? In particular, can we assume that the initialization of the discrete variables for generator statuses and line switching statuses admits a feasible solution?

Given our formulation, all potential commitment statuses are feasible, though they may not be feasible within reasonable variable relaxations. The formulation document makes all of the assurances that we plan to maintain as part of a legal dataset for the competition. Our goal is that the prior points should represent the operating point 5-10 minutes ago, but this may no longer be a reasonable commitment or dispatch in the “present” base case time interval. It will be up to the competitors to find the best solution available for the base case considering the prior point and the ramping/commitment constraints.


No fields are skipped or missing, with two specific exceptions. These exceptions are: unneeded switched shunt [Na, Ba] fields and unneeded transformer impedance correction table [Tm, Fm] fields. These unneeded fields occur at the end of their respective records, so the end of the record line indicates that no additional fields are to be read.

Other unneeded fields may be empty or they may contain data. Since the file is generally in CSV format with the comma character “,” as the separator, an empty field is indicated by two consecutive separator commas.

This policy on separator character and missing or empty fields is a restriction of the PSSE RAW file format as documented in the PSSE POM (v.33) and as implied by PSSE RAW files encountered in practice, read by PSSE software, or written by PSSE software. In particular, files that follow the format we have specified are valid PSSE RAW files and a parser that is able to parse valid PSSE RAW files is able to parse files in our format. While Challenge 1 files did have values instead of empty fields, we have concluded that empty fields are a more accurate representation of the data and are fully PSSE compliant.
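As an illustration (the record below is hypothetical, not taken from an actual file), a standard CSV reader handles empty fields in such records directly:

```python
import csv
import io

# Hypothetical RAW-style record: two fields are empty (back-to-back commas).
# PSSE RAW files quote strings with single quotes, hence quotechar="'".
line = "101,'BUS-101',138.0,,1,,1.05\n"
fields = next(csv.reader(io.StringIO(line), quotechar="'",
                         skipinitialspace=True))
parsed = [None if f == "" else f for f in fields]
print(parsed)   # ['101', 'BUS-101', '138.0', None, '1', None, '1.05']
```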

Section 3.6.4 of the Challenge 2 Problem Formulation describes the impedance correction factor ηfk of a transformer f with impedance correction in case k is used to modify the conductance gf and susceptance bf. Is this piece-wise correction function convex or non-convex?


The impedance correction function may or may not be convex. The properties that can be assumed of the impedance correction function are in equations 260 to 267 in Appendix E Data Properties of Challenge 2 Problem Formulation.


Please refer to the code in the GitHub repository PowerModelsSecurityConstrained.jl, which provides examples of how to extend the basic formulation in PowerModels.jl, a Julia/JuMP package for Steady-State Power Network Optimization. It is designed to enable computational evaluation of emerging power network formulations and algorithms in a common platform. The code is engineered to decouple problem specifications (e.g. Power Flow, Optimal Power Flow, ...) from the power network formulations (e.g. AC, DC-approximation, SOC-relaxation, ...). This enables the definition of a wide variety of power network formulations and their comparison on common problem specifications.

When is switching allowed?


No, transmission line and transformer switching is NOT ALLOWED in Divisions 1 and 2. It is only allowed in Divisions 3 and 4. This way we can see the impact switching has by subtracting the Division 1 score from Division 3, and the Division 2 score from Division 4.

This is explained in the Scoring document, last paragraph of page 3.

Example (there may be others): Transformer with I=388, J=363, CKT=1 in C2S7N00793/scenario_001 has R12=-0.00022


Negative R values are within the bounds of the formulation document.

An example use case for negative resistances is the star equivalent model of 3 winding transformers or power system equivalent lines. These standard equivalencing methods can produce negative resistances. Commercial power flows have long supported negative resistance values; the negative values in this example are typically close to zero so there should be no issues with overall negative losses.

Here are some references with examples:

1) J.B. Ward, “Equivalent Circuits for Power-Flow Studies,” AIEE Transactions, 1949, pp. 373-382 (mentions negative resistance throughout)

2) S. Deckmann, A. Pizzolante, A. Monticelli, B. Stott, O. Alsac, “Studies on Power System Load Flow Equivalencing,” IEEE Trans. Power Apparatus and Systems, Vol. PAS-99, Nov/Dec 1980, pp. 2301-2310. (Discussion mentions negative resistance)

3) Prabha Kundur, "Power System Stability and Control", McGraw-Hill, 1994; 979 pages. (pdf available online. Section 6.2, page 241: "In large transformers, Zs is small and may even be negative")

4) J. Duncan Glover, Mulukutla S. Sarma, Thomas Overbye, “Power System Analysis & Design, Fifth Edition, SI Version”, Cengage Learning, Aug 14, 2012; 782 pages. (Chapter 3.6, with the three-winding transformer equivalent portion is available here.  Note equations 3.6.8 to 3.6.10.)
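To make the star-equivalent point concrete, here is a small sketch using the standard three-winding conversion formulas (see the Glover et al. reference above, equations 3.6.8 to 3.6.10) with made-up pairwise impedances:

```python
# Star equivalent of a three-winding transformer (illustrative values, p.u.).
# Z12, Z13, Z23 are the pairwise short-circuit impedances; the standard
# formulas for the star branches are Z1 = (Z12 + Z13 - Z23) / 2, etc.
Z12 = 0.005 + 0.10j
Z13 = 0.004 + 0.20j
Z23 = 0.012 + 0.25j

Z1 = (Z12 + Z13 - Z23) / 2
Z2 = (Z12 + Z23 - Z13) / 2
Z3 = (Z13 + Z23 - Z12) / 2

print(Z1.real)   # small negative star-branch resistance (about -0.0015)
```

Whenever one pairwise resistance exceeds the sum of the other two, one star branch comes out with a negative real part, exactly the situation described above.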

I’ve been noticing voltage limits in the interval [0.2,2.2] in some datasets (e.g., C2T2N03022).  Are these voltage limits something to expect in actual industry datasets?


Bounds like this certainly are unusual, but they could occur for various reasons, including in industry cases as an indication that a particular bus voltage is not monitored. In the formulation we do not have a concept of a non-monitored bus, so if there are very loose voltage bounds on a particular bus, then those are bounds that need to be satisfied. Bounds this loose are uncommon, but they can happen, both in the synthetic cases and in the industry cases.

I noticed that some loads have negative benefit costs, e.g., load “LD” on bus1 in scenario 9 of model C2T2N08300.

Is this reasonable?


Load offer blocks with negative marginal benefit and generator offer blocks with negative marginal cost are not unusual and may occur in competition problem instances. This is a common industry practice.


We noticed that in the Trial 2 dataset file there are some instances (e.g. C2T2N03022, scenario 1) that are not counted in the final score. We got negative objective values for these instances in our tests. Should we consider them in our testing?


This was one of our problematic datasets, with an MSpp value of -98,449,005,090. A few teams got reasonable results (our Benchmark got an MStotal (objective) value of 261,877,192 for Division 1, and a few other teams were close to that value), but we followed our rule of not using datasets with negative MSpp values in the scoring. The same goes for the other datasets in the Trial 2 file that are not listed in the Leaderboard results: C2T2N10000 scenarios 3, 111, 401; C2T2N30000 scenarios 2, 116; C2T2N31777 scenarios 1, 38.

If lines and transformers are switched off during optimization, it is possible that contingencies from case.con can no longer occur, because the affected asset is already switched off.

How is this to be handled?

Should the results of these contingencies be equal to those of the base case in these cases?


In the case of a contingency K defined by the outage of a grid element J that was already disconnected in the base case, all of the general rules apply:

  • A contingency K solution file needs to be written.
  • Element J should not be included in the contingency K solution file.
  • The constraints on the contingency K solution are those listed in the formulation. In particular, the contingency K solution need not be the same as the base case solution.

I am confused regarding the distinction between Code1 and Code2, mentioned in languages and time limits but not in the Scoring document or Problem Formulation document.

I interpret the problem formulation as one big common optimization problem, containing the base case and all contingencies and they are all interconnected to each other (e.g. by the ramp rate constraints). If that is true, I don’t see the reason for executing Code2.


This question goes to the heart of the design and purpose of the competition.

The problem is just as you say, “one big common optimization problem, containing the base case and all contingencies and they are all interconnected to each other (e.g. by the ramp rate constraints).” 

The Competition, however, needs both a code1 and a code2 to properly evaluate your solution. The requirements on code1 and code2 are:

  • Code1 must produce a base case solution file X1 in a given (typically short, 5 to 60 minutes) time limit.
  • Code2 must produce a contingency solution file X2[k] for each contingency k in a given (much longer) time limit T2. This allows the Competition to evaluate the overall solution quality.
  • Code2 may not modify the base case solution file X1 produced by code1.

Code1 may consider only the base case, or the base case and all contingencies, or the base case and some selected contingencies, or the base case and some approximation of the contingencies, or really anything you like.

The reason for the code1/code2 design is:

  • The Competition envisions a hypothetical practical use case where, in a fairly short time (e.g. 5 minutes assuming a market environment, but maybe up to 1 hour in some kind of planning context) an optimal solution is needed.
  • Only the values of variables associated with the base case are needed in 5 minutes, as only these will be immediately acted on. These variables contain dispatch instructions for generators and loads.
  • The variables, constraints and objective terms associated with contingencies can be viewed as ensuring that the base case meets a certain concept of security. Since the contingencies have nontrivial objective terms, it is not a simple binary condition, either the base case solution is secure to contingencies or it is not. Instead it might be best to think of the total contingency objective as quantifying the degree to which the base case solution is secure to contingencies.
  • In the hypothetical use case, the user does not need to know the contingency variables. They only need to know that the base case solution is secure to contingencies. The contingency variables are needed to check this requirement in a quantitative fashion. This checking is done by the competition solution evaluator, but a user does not need to do this ever.
  • The requirements on code1 make it suitable for this hypothetical use case, while code2 is necessary for the competition to evaluate the solution quality.
  • If we allowed only a single code, in order to make it suitable for the hypothetical use case, it would have to have a short time limit (5 to 60 minutes). This code would need to produce a solution for the base case and one for each contingency. In order for the solution evaluation to accurately measure the quality of the base case solution, the contingency solutions need to be more or less optimized, conditional on the base case solution. We believe that the time limit imposed by the practical use case would make it nearly impossible to optimize the contingency solutions, and as a result, the evaluation of the base case solution quality could be highly inaccurate. Hence, we require code2 with a long time limit in order to allow solvers to optimize their contingency solutions.


No. Once you have registered, you cannot change the Team GitHub account username.

The competition uses GitHub to manage submitted solutions. All code submitted by a team must be stored in GitHub. You must grant read access to the GitHub account; this is accomplished with a public SSH key that you can find in your My Team summary.

You can view/download the full step-by-step instructions (with screen shots) found under Resources. A summary of the steps are:

1. Go to the ARPA-E Optimal Power Flow Competition login page and log in using the username and password associated with your account.
2. Once logged in, go to View Account under the Account menu.
3. Click on the My Team button to view My Team information.
4. In the "Team Information" section, copy the SSH Public Key information; this is the text beginning with "ssh-rsa ...". You can also click the Copy SSH Information button.
5. Go to the GitHub login page and log in using the username/e-mail address and password associated with your account.
6. Once logged in, go to your submission repository.
7. Click the “Settings” option from the tab bar along the top of the page.
8. Click the “Deploy keys” option from the pane on the left hand side of the page.
9. Click the “Add deploy key” button on the right hand side of the page.
10. Enter a “Title” (e.g. ARPA Competition) and paste the SSH Public Key information from step 4. Leave all other settings unchanged.
11. Click the “Add key” button to complete the process.

Challenge 1 update

Yes, you can download a set of questions and answers from ARPA-E.  Initial release on March 20, 2019; updated on April 3, 2019.

According to eq. 85 of the Problem Formulation document, a generator responding to a given contingency adjusts its real power output according to its predefined (offline) participation factor until it hits an operational bound (min or max capacity). The real power output of a responding generator g in contingency k is p + α ∆, if the generator follows its required participation factor.
Is the following statement therefore correct? According to eq. 207 of the same document, ∆ is not in the p.u. system, but in MW. In the output file solution2.txt, the delta section is also filled in MW.

The variables and parameters of the problem can be expressed in two unit conventions: a data unit convention; and a model unit convention.

The data unit convention is used for the input and output data files and is mostly MW, MVar, MVA, etc., except for voltage magnitude values which are p.u. This choice was motivated by the default unit convention of PSSE data files. We extended this data unit convention to the output data as well, i.e. solution1 and solution2.

The model unit convention is used to express the equations of the model and is exclusively p.u.

Conversions of data parameters from data units in input data files to parameters in the model and from model variables in model units to solution data in data units in solution files are given in the formulation document.

Delta is a quantity of real power; therefore, its units are MW in the data unit convention and p.u. in the model unit convention.

Consider the equation pk[g,k] = p[g] + alpha[g] * Delta[k]. This is not the same as the formulation, for it ignores pmin and pmax, but if generator g does not hit a bound in contingency k, then this equation should hold. In this equation pk, p, and Delta are quantities of real power expressed in p.u., and alpha is a dimensionless quantity. To write the solution files, pk, p, and Delta are converted to their representations pk’, p’, and Delta’ in the data unit convention, e.g. by p’[g] = sbase * p[g]. Then the numerical values of pk’, p’, and Delta’ are written in the solution files.
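A minimal sketch of these conversions, assuming a hypothetical system base sbase of 100 MVA:

```python
sbase = 100.0          # MVA system base (hypothetical value for illustration)

def pu_to_mw(x_pu):
    """Model unit convention (p.u.) -> data unit convention (MW)."""
    return sbase * x_pu

def mw_to_pu(x_mw):
    """Data unit convention (MW) -> model unit convention (p.u.)."""
    return x_mw / sbase

print(pu_to_mw(1.0))   # 100.0 MW
```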

Clarification about the dimension of the participation factor alpha and the governor permanent droop, from appendices A.4 and C.11 respectively. The participation factor is described as dimensionless, and the sum of all participation factors must be one to counteract ∆ exactly. But looking at Original_Dataset_Real-Time_Edition_1\Network_01R-10\case.inl as an example, the values are far from satisfying 0 <= alpha <= 1. Is there an error in this interpretation or in the documentation? To obtain valid participation factors, must each alpha be divided by the sum of all alphas?

This is not necessarily true. Delta[k] need not be equal to the generation lost in contingency k or the change of AC power losses or any such physically meaningful value. Delta[k] is simply any quantity of real power that ensures that values of pk satisfying the real power adjustment constraints pk[g,k] = Proj_[pmin[g], pmax[g]] (p[g] + alpha[g] * Delta[k]) also satisfy the power balance constraints. If the alpha values summed to 1, then it might be possible to interpret Delta[k] in some physically meaningful way. But this interpretation is not necessary.

E.g. suppose we have 2 generators and 1 contingency, with generator 2 going out of service in contingency 1. Suppose p’ = [100.0, 100.0] (MW), alpha = [10.0, 10.0] (dimensionless), pk’ = [[200.0, 0.0]] (MW). Then we can take Delta’ = [10.0] (MW). Delta’ is not equal to the amount of generation lost in the contingency, but it is a real power quantity expressed in MW. The alpha values do not sum to 1, but they are dimensionless.
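The two-generator example can be checked numerically; the sketch below uses hypothetical variable names and bounds, with the projection rule from the preceding answer:

```python
def respond(p, alpha, delta, pmin, pmax, out_of_service):
    """pk[g] = Proj_[pmin[g], pmax[g]](p[g] + alpha[g] * delta),
    with out-of-service generators forced to 0 (sketch)."""
    pk = []
    for g in range(len(p)):
        if out_of_service[g]:
            pk.append(0.0)
        else:
            pk.append(min(max(p[g] + alpha[g] * delta, pmin[g]), pmax[g]))
    return pk

# The example from the answer above (data units, MW); bounds are hypothetical.
p = [100.0, 100.0]
alpha = [10.0, 10.0]
pmin, pmax = [0.0, 0.0], [400.0, 400.0]
pk = respond(p, alpha, delta=10.0, pmin=pmin, pmax=pmax,
             out_of_service=[False, True])
print(pk)   # [200.0, 0.0]
```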

Does a generator that is out of service in a Generator-Out-of-Service contingency no longer adjust its real power (Generator Real Power Contingency Response) and no longer try to maintain the base case voltage magnitude at its bus by adjusting its reactive power output (Generator Reactive Power Contingency Response)?

A generator that is out of service in a Generator-Out-of-Service contingency must have real and reactive power output equal to 0 in that contingency, regardless of the voltage magnitude. This is contained in equations 60 and 62.

Challenge 1 General

The Department of Energy Advanced Research Projects Agency-Energy (ARPA-E) is challenging the research and industrial communities to successfully develop and test power system optimization and control algorithms on new, publicly available, large-scale, and high-fidelity power system network models. For more information, please read here.

The GO Competition consists of multiple stages summarized below:

Beta Testing: This is the predecessor to the actual competition and allows the public to understand and test the submission processes. The relatively small and simple datasets and solutions are publicly available for download. This stage began June 15, 2017, and ended October 30, 2018.

Challenge 1: The first round of the formal competition will solve a Security-Constrained OPF (SCOPF) problem. The focus will be on large-scale, industry-sized systems, up to 60,000 buses and thousands of contingencies, with an updated problem description. Datasets will come from the ARPA-E GRID DATA program and real-world industry datasets. See the Timeline document for information on dataset release.

Start Date: Fall 2018 (Projected; Subject to Appropriation of Funding)

End Date: One year after start (Projected)

Challenge 2: Challenge 2 is expected to build on the models used in Challenge 1 and may include complicating factors such as solving larger network models, optimizing power flows over both transmission and distribution systems, and/or including unit commitment.

Start Date: At the end of Challenge 1 (Projected; Subject to Appropriation of Funding)

End Date: One year after start (Projected)

All dates are approximate and subject to change.  Challenge 1 and 2 are also subject to appropriation of funding.

Final rankings for Challenge 1 will be announced following the Final Event one year after Challenge 1 starts (projected).

The handling of all submitted executable programs is subject to the Rules document for all Entrants and subject to the grant contracts established for the Proposal Entrants.

For further information or questions, please use the Competition Forum or contact the GO Competition Team.

Challenge 1 Eligibility

Please see the Rules document for all questions regarding eligibility.

Participation by minors requires authorization by a parent or guardian. Please contact the Competition Team for details.

Challenge 1 Registration

Yes, you need to register to participate in the competition. You must register as a team, which may consist of one individual or more than one individual.

You must register as a team. A team may consist of one or more individuals; thus, you can be the only member of a team.

First, you need an account on the GO Competition website. Go to the Registration Page to fill in your personal information. Second, you also need to create a GitHub account.

Once you have successfully registered and created a GitHub account, click on the Create Team button located on your Account View page. Follow the directions by entering a Team Name (which must be unique to the GO Competition), the GitHub username, and the ARPA-E Competition Identifier which ARPA-E has provided to you (if you don't have one, you can fill it in later from the My Team page). Skip the last step of choosing team members. Click the Save button and you should see the My Team page with yourself as the only team member.

First, you need an account on the GO Competition website. Go to the Registration Page to fill in your personal information. Second, you also need to create a Team GitHub account.

Once you have successfully registered and created a Team GitHub account, click on the Create Team button located on your Account View page. Follow the directions by entering a Team Name (which must be unique to the GO Competition), the Team GitHub username, and the ARPA-E Competition Identifier which ARPA-E has provided to you (if you don't have one, you can fill it in later after you have created your team), and choose your teammates. A team may consist of one or more individuals. If you are the only person on your team, you can skip the last part; otherwise, please make sure you choose the correct individuals, and they must match the team that you provided to ARPA-E on their registration form. Click the Save button and you should see the My Team page.

First, you need an account on the GO Competition website. Go to the Registration Page to fill in your personal information. Once you have successfully registered, notify your team leader, who will add you to the team and the GitHub Team repository.

You can only register with one team per challenge.

Yes, you may change teams, but you must fill out and return a change of entrant form to the Competition team.

Yes, all team members must sign the required form and return it following the directions on the back of the form.

Yes. Any teams created during the Beta Testing are only for our testing period and have no implications for the competition. New teams must be created for each challenge, but you may continue to use the same name even if the membership changes.

Challenge 1 Submission

Congratulations! Here is how you can enter your solution for evaluation:

1. Register with the GO Competition by going to the Registration Page.
2. Create a Team GitHub account.
3. Create a team.  All competition submissions must be submitted by a “team.” Teams can consist of an individual or many individuals.  
    Note: Team Creation requires a valid Team GitHub account.
4. Return to GitHub to establish the SSH key (in My Team view).
5. Commit (save/upload) your algorithms/optimization software to your GitHub repository.
6. Submit your algorithms/software for evaluation and scoring.
7. View your results from the Accounts Page, or click on the My Team button and view your "My Team Submissions" panel.

Ultimately, each team is allowed to determine and execute their own preferred approach to this problem; however, your results will be evaluated according to the formulation described in this competition. If your team believes that you have a better, or a more efficient, approach than the official formulation, you are free to develop such an approach. However, any deviations from the official formulation produced by your approach will result in penalties (on soft constraint violations) or infeasibilities (on hard constraint violations). ARPA-E recommends familiarity with the official formulation and evaluation procedure so you understand your final score.

If you have qualified for Challenge 1, submit for Trial 2 using the submission process when it is open (July 17-19, 2019). You can practice with the Original Datasets and Trial 1 datasets – Trial 2 will use the same process. Teams may no longer submit for scoring in Trial 1 (it is now closed), but the datasets are available to use for development/practice. You can re-submit with the Original Datasets and Trial 1 Datasets as many times as necessary to perfect the submission process. Each team can only submit one time for Trial 2 and the Final Event.

Unit commitments are not considered in the Challenge 1 formulation.

You are allowed to add new team members at any time until the Final Event. New team members should be added using the Team Change Form on the team leader’s account page.

These penalties are calculated from the absolute MVA mismatch.
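As an illustration only, the absolute MVA mismatch at a bus combines the real and reactive imbalances; the sketch below uses a single made-up coefficient, whereas the formulation document defines the actual piecewise-linear penalty blocks.

```python
import math

def mva_mismatch_penalty(p_mismatch_mw, q_mismatch_mvar, cost_per_mva=1000.0):
    """Sketch of an absolute-MVA-mismatch penalty.

    `cost_per_mva` is an illustrative coefficient, not the competition's;
    the real piecewise-linear penalties are in the formulation document.
    """
    # Absolute apparent-power mismatch in MVA
    mva = math.hypot(p_mismatch_mw, q_mismatch_mvar)
    return cost_per_mva * mva
```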

Unfortunately, we cannot share any tips or strategy to teams for this aspect of the competition. We encourage you to find novel solutions to balance the time and accuracy concerns for this problem.

This is related to the previous two questions. As stated in the response to question 23, it is possible to translate a minor error in a hard constraint to a soft constraint, such that the defined slack variables are used to gain feasibility.

This question is, in part, related to the preceding question. Your output files should conform to the guidelines posted on the website. Your results will be directly fed into the evaluation code. You are encouraged to download the evaluation code and to use it for your own testing purposes. We advise that you have consistent precision for your answers to avoid any issues with numerical accuracy and/or infeasibility.

All small violations can be pushed into the slack variables that are contained in the formulation; the formulation, given the relaxations, is feasible, which should allow teams to correct their violations by absorbing small violations through the slacks. With that said, we adjusted the formulation document on March 29th to reduce the importance of precision level differences in voltages. Please refer to the formulation document on the website for the most up-to-date information (appendix G.4).
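A minimal sketch of the idea (the single upper-bound form and names are assumptions; the formulation defines the actual slack variables and their penalty costs):

```python
def absorb_upper_violation(value, upper_bound):
    """Route an upper-bound violation into a penalized slack variable.

    Returns (feasible_value, slack) with feasible_value <= upper_bound and
    feasible_value + slack == value, so a tiny numerical overshoot becomes
    a small objective penalty instead of a hard infeasibility.
    """
    slack = max(0.0, value - upper_bound)
    return value - slack, slack
```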

There should be no islanding in any of the contingencies.

All controllable shunts in this competition are modeled with continuous variables; your model must dynamically choose the susceptance (b) across the full defined range of operation as it finds a solution.
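For example, a solver that tracks a desired susceptance can simply clamp it to the device's operating range (a sketch; the range limits come from the shunt's input data):

```python
def choose_shunt_susceptance(b_desired, b_min, b_max):
    """Continuous controllable shunt: any b in [b_min, b_max] is admissible."""
    return min(max(b_desired, b_min), b_max)
```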

Yes, it is possible – there is nothing in the formulation which expressly excludes this.

This differentiates between instances of the problem with similar topologies that represent different physical power grids. Each network model has multiple scenarios which reflect small topology changes (i.e., generation or transmission element availability) or operating conditions (i.e., load profiles, unit commitments). Refer to the scoring document for a detailed description of dataset terminology.

Yes, anything your code1 produces will be available for code2 to read, though code2 will not be able to make updates to sol1 after code1 completes.

Yes, the evaluation code will look for code1 and code2 before starting either program and will produce a “scripts not found” error if either is missing. If necessary, your code2 can write the version of sol2 that your code1 created.
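One defensive pattern, sketched below with assumed file names, is for code 2 to fall back to the sol2 file that code 1 already wrote, so a code 2 failure still leaves a valid solution in place:

```python
import os
import shutil

def ensure_sol2(code1_sol2="solution2.txt", sol2="solution2_final.txt"):
    """If code 2 produced nothing, reuse the sol2 file code 1 wrote.

    File names here are hypothetical; use whatever names the data format
    document specifies for your submission.
    """
    if not os.path.exists(sol2) and os.path.exists(code1_sol2):
        shutil.copy(code1_sol2, sol2)
```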

For divisions 1 and 3, the incumbent solutions should reflect an input that would be used in practice for an SCOPF application in real-time, with a short execution time limit. We anticipate that the majority of the incumbent solutions would be at least feasible across the base case and all contingencies (feasible without constraint relaxations). At best, the incumbent solution quality may reflect the quality of the last time period’s solution provided to an operator when solving for the next operating point in real time. The incumbent solutions will be available in the input data files. You are free to choose a different starting point for your program if desired.

The clock starts as soon as your code1 is launched. The time is measured as wall-clock time.

No additional cases will be released in the Original Dataset. However, following each Trial Event, the datasets from each trial will be released for algorithm development.

At each stage of the competition, the datasets will grow in complexity (network size and contingencies) and grow in terms of the number of scenarios. Trial 1 will be close to the Original Datasets that are released but there will be more scenarios in Trial 1 and there will be some that are larger in size. Trial 2 will grow from Trial 1 and the Final Event will grow from Trial 2.

We are not publicly releasing the exact contents of the upcoming trials, but teams should expect that the size and complexity of the networks should generally increase as we progress through the competition.

No, you will only get one submission for Trial 2.

Yes, you should be able to continue working with all currently available datasets throughout the competition. However, please be aware that we may have to temporarily suspend executing open submissions on the platform during the evaluation periods for Trial 1, Trial 2, and the Final Event.

If you have submitted using the Original Dataset but are not on the Original Datasets leaderboard, then there may have been an error with either your submission or the online platform. Please contact the GO Competition team so we can resolve the issue as quickly as possible. Resubmissions of the Trial 1 datasets will not affect your standing on the Trial 1 leaderboard.

The Original Dataset Leaderboards have been suspended in preparation for the Trial and Final Events.

Once we launch your code1, you are free to utilize the full extent of the hardware as best serves your program. Your submission.conf should specify the number of nodes your program will require.

Trial 2 datasets will be released soon after the Trial 2 results are posted.

We are not planning to release globally optimal solutions for our datasets, but the leaderboards and trial events should provide feedback on the relative strength of your approach.

We are considering this. If you are interested, we suggest you provide follow-on feedback to the GO Competition team.

We do not plan to provide an admittance matrix.

The website receives event messages from the evaluation platform and assigns a date/time as they are processed. It processes the most recent message first, giving the appearance of events being out of order.

Yes. If you have any compatibility issues, please contact the GO Competition team.

We are considering this, but as of now the evaluation process remains unchanged. Please contact the GO Competition team with your specific request so we can gauge interest and potentially make future improvements to our process.

The 2sec/contingency limit will not affect submissions during teams’ development windows, but the logs for successful submissions should have adequate information for teams to gauge their code 2 performance. Teams with only partially successful code 2 attempts may have to calculate their performance based on the log information.

Several teams have requested this already; we will make more information available on the website for teams interested in replicating our Singularity recipe.

No, these time limits are exclusive. The code 2 timer begins with code 2, not at the end of the code 1 timer. Any extra time remaining after code1 completes is lost.

Take the URL for the submission tar.gz and insert _output1 before .tar.gz.

Some submissions may have a different output number; check the main tar.gz file.
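The substitution is mechanical; a small helper (the example URL in the test is made up) might look like:

```python
def output_url(submission_url, output_number=1):
    """Insert `_output<N>` before the trailing `.tar.gz` of a submission URL."""
    suffix = ".tar.gz"
    if not submission_url.endswith(suffix):
        raise ValueError("expected a .tar.gz URL")
    return f"{submission_url[:-len(suffix)]}_output{output_number}{suffix}"
```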

The evaluation platform sets the environment variables based on information in submission.conf in the specified repository. Contact the GO Competition team regarding special requests.

The trial 2 submission window schedule has been reduced and slightly delayed in order to increase the development time for the competitors. Currently, all other submission windows and other events remain as originally scheduled. We reserve the right to alter the final event schedule.

As announced on the website, ARPA-E is releasing an open source version of code 2 that many teams may find helpful. ARPA-E does not guarantee that the open source code 2 release will work for every team or that it will satisfy the new time limit for Trial 2. Teams are free to use the open source release as presented as their code 2, or to otherwise modify and/or adapt the open source release to their own processes. Every team is still responsible for having a code 2 program in their GitHub submission for Trial 2 and the Final Event.

We will not likely provide any new data formats at this point in the competition, but if desired, please send a formal request to the GO Competition team. You can also check the forum to see if any other team has already prepared the data in this way and would be willing to share with the group.

 If your team has coded a different approach within their code 1 for divisions 3 and 4, please specify so on the submission page. Teams that select “Yes” to the question: “For Division 3 and 4: Does your code use a different approach for these divisions?” will have their code re-run for divisions 3 and 4. Teams that select “No” will have their runs from divisions 1 and 2 applied for scoring in division 3 and 4. Please remember that your code 1 must be a single program, so separate approaches must be handled internal to your code, not as separate code packages.

There is an input parameter passed to codes 1 and 2 that defines which division is being scored -- Your program may take that input and respond accordingly, but you must submit one code set that handles all four divisions, not separate codes. Only a single submission is allowed for Trial and Final Events but you may select which divisions will be scored at the time of submission.
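A sketch of internal branching on that parameter (the argument handling and approach labels are hypothetical; the actual input format is defined by the competition platform):

```python
def choose_approach(division):
    """Dispatch on the division parameter passed to code 1/code 2.

    One code set must cover all four divisions; internal branching like
    this is allowed, separate code packages are not.
    """
    if division in (1, 2):
        return "divisions-1-2 approach"   # hypothetical solver variant
    if division in (3, 4):
        return "divisions-3-4 approach"   # hypothetical solver variant
    raise ValueError(f"unknown division: {division}")
```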

Please document any suspected discrepancies in the datasets and contact the GO Competition team so that we can track these reports and continue to improve the realism of our test environment. We have already implemented data quality improvements for the Trial 2 datasets based on many competitors’ observations after Trial 1.

We have expanded our platform to accommodate more submissions from all of the competitors during the development windows. For the trial and final events, each team will have full use of the software and hardware described on the GO Competition website. We always welcome feedback; for additional software or hardware needs, please submit a formal request to the GO Competition team. When submitting requests, please be aware that one of our objectives is to keep the competition hardware within a reasonable realm of the means of a typical end-customer.

The network models used in Challenge 1 will include network names (i.e., “Network 3”) which identify networks that share base topologies. Competitors can expect a combination of previously encountered networks as well as new networks in Trial 1, Trial 2, and the Final Event.

The new time constraint on sol2 is not prohibitive; it is meant to encourage teams to innovate in a space that can directly compete with existing industry software. Competitors are encouraged to make full use of the platform hardware that is available to them to solve all of the contingencies within this new time limit.

The open source code 2 prepared by ARPA-E is parallelized across both cores and nodes. The developer has also recorded several webinar videos that describe his approach and methodology for code 2 and for core and node parallelization. These webinars are available online, and the code 2 is available through a public GitHub account. We recommend that all competitors make use of these materials.

Challenge 1 Scoring/Ranking

A computer algorithm will judge the submission.

The Python evaluation source code will be made publicly available through this site once the evaluation process has been finalized and approved and the code is deemed stable and reliable.

Yes, you will get some feedback on where your code failed via a log summary. This log summary can be viewed in the submission details. Your submissions can be found in the "My Team Submissions" panel, which you can reach by clicking on My Team in View Account. Click on an individual submission and you will find the link to download log files.

The initial score is based on your submission as it compares to other submissions at that time for the dataset indicated on the submission form. The ranking is based on all current submissions.

The final results will be based on unpublished datasets, so the final standings may be different. These datasets will be published after the competition and evaluation have been completed.