Event 1

Teams

Teams Participating in Challenge 3 Event 1
Team Name | Lead Affiliation | Team Lead | Members
ARPA-e Benchmark | Los Alamos National Laboratory # | Carleton Coffrin | Robert Parker
Argonauts | Argonne National Laboratory | Kibaek Kim | Michel Schanen, Anirudh Subramanyam, Weiqi Zhang, Francois Pacaud
Artelys_Columbia | Artelys $ | Richard A. Waltz | Daniel Bienstock
Electric-Stampede | University of Texas at Austin $ | Javad Mohammadi | Kyri Baker, Constance Crozier, Hussein Sharadga
Gatorgar | University of Florida $ | Yongpei Guan | Lei Fan, Mingze Li, Weihang Ren
GOT-BSI-OPF | Global Optimal Technology, Inc. $ | Hsiao-Dong Chiang | Pat Causgrove, Bin Wang, Lin Zeng
LLGoMax | Lawrence Livermore National Laboratory $ | Ignacio Aravena Solis | Rod Frowd, Shmuel Oren, Alex Papalexopoulos
PACE | IncSys $ | Robin Podmore | Roozbeh Khodadadeh, Chris Mosier
PGWOpt | University of Pittsburgh $ | Masoud Barati | Nina Fatehi, Santiago Grijalva, Masoud H. Nazari, Samuel Talkington, Jorge Fernandez
The Blackouts | University of Tennessee, Knoxville $ | James Ostrowski | Marzieh Bakhshi, Christopher Ginart, William Hart, Bernard Knueven, Jonathan Schrock, Jean-Paul Watson
TIM-GO | Massachusetts Institute of Technology $ | Xu Sun | Matthew Brun, Xin Chen, Dirk Lauinger, Thomas Lee
YongOptimization | Mississippi State University $ | Yong Fu | Lin Gong, Yehong Peng, Fasiha Zainab

# not competing for prizes
$ recipient of FOA award

The submission window was January 25-27, 2023.

Scoring

Division 1, 2, and 3 rankings are determined by the total score (Market Surplus, zms in the Formulation Document). Only zms values that are positive for feasible solutions are counted; all other results are treated as 0.0.
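As a sketch, the rule above can be expressed in a few lines of Python. The record layout (the `feasible` and `z_ms` fields) and the data are illustrative assumptions, not the official scoring code.

```python
def division_total(results):
    """Sum z_ms over scenarios, counting a scenario only when its
    solution is feasible and its score is positive; everything else
    contributes 0.0, as in the Division 1-3 ranking rule."""
    total = 0.0
    for r in results:
        if r["feasible"] and r["z_ms"] > 0.0:
            total += r["z_ms"]
    return total

# Illustrative data: one positive feasible score, one negative, one infeasible.
results = [
    {"feasible": True, "z_ms": 1_500_000.0},   # counted
    {"feasible": True, "z_ms": -250_000.0},    # negative -> treated as 0.0
    {"feasible": False, "z_ms": 900_000.0},    # infeasible -> treated as 0.0
]
print(division_total(results))  # 1500000.0
```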

Division 4, 5, and 6 rankings are based on the count of top scores for each scenario in Divisions 1, 2, and 3.

See the Scoring document for details. There are no prizes for Event 1.

Leaderboard Rankings

Ensemble is the set of best scores for the grouping; the Sum of all scores column is provided for comparison. The prize events have 6 categories (divisions).

These scores are in effect as of February 13, 2023.

Scores for all 281 Event 1 Scenarios
Rank | Team | Sum of all scores | Division 1 | Division 2 | Division 3
 | Ensemble | 108,982,884,958.94 | 3,930,096,139.78 | 25,827,974,957.23 | 79,224,813,861.93
1 | GOT-BSI-OPF | 94,936,208,869.55 | 3,311,325,687.67 | 22,972,204,891.28 | 68,652,678,290.60
2 | ARPA-e Benchmark | 91,721,540,164.44 | 2,209,161,756.22 | 22,874,425,223.49 | 66,637,953,184.72
3 | Electric-Stampede | 54,909,732,005.35 | 2,124,061,544.90 | 12,520,768,782.80 | 40,264,901,677.65
4 | TIM-GO | 46,513,549,046.83 | 1,398,880,526.44 | 10,515,572,351.86 | 34,599,096,168.53
5 | Artelys_Columbia | 33,513,534,402.14 | 176,421,497.02 | 8,345,361,628.81 | 24,991,751,276.31
6 | Gatorgar | 74,314,550.61 | 18,889,999.47 | 13,894,560.35 | 41,529,990.80
7 | Argonauts | 0.00 | 0.00 | 0.00 | 0.00
 | LLGoMax | 0.00 | 0.00 | 0.00 | 0.00
 | PACE | 0.00 | 0.00 | 0.00 | 0.00
 | PGWOpt | 0.00 | 0.00 | 0.00 | 0.00
 | The Blackouts | 0.00 | 0.00 | 0.00 | 0.00
 | YongOptimization | 0.00 | 0.00 | 0.00 | 0.00

The ranking of all firsts is provided for comparison.

Number of best scores for all 281 Event 1 Scenarios
Rank | Team | All firsts | Division 4 | Division 5 | Division 6
1 | Artelys_Columbia | 92 | 20 | 39 | 33
2 | TIM-GO | 74 | 16 | 28 | 30
3 | GOT-BSI-OPF | 72 | 17 | 26 | 29
4 | ARPA-e Benchmark | 38 | 17 | 12 | 9
5 | Electric-Stampede | 5 | 2 | 1 | 2
6 | Argonauts | 0 | 0 | 0 | 0
 | Gatorgar | 0 | 0 | 0 | 0
 | LLGoMax | 0 | 0 | 0 | 0
 | PACE | 0 | 0 | 0 | 0
 | PGWOpt | 0 | 0 | 0 | 0
 | The Blackouts | 0 | 0 | 0 | 0
 | YongOptimization | 0 | 0 | 0 | 0

Statistics

Other rankings are possible. In this next one, a scaled score is the scenario score as in Divisions 1-3, but divided by the best score for that scenario before summing. This gives more weight to the smaller problems. The ensemble score becomes the number of scenarios being summed.
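The scaled-score statistic above can be sketched in a few lines; the numbers below are made up and the function is illustrative, not the competition's scoring code.

```python
def scaled_total(team_scores, best_scores):
    """Sum of per-scenario scores, each divided by the best score
    achieved by any team on that scenario. A team that matches the
    best score on every scenario totals the number of scenarios."""
    return sum(
        score / best
        for score, best in zip(team_scores, best_scores)
        if best > 0.0
    )

# Made-up scores for three scenarios.
best = [2.0e9, 5.0e8, 1.0e7]   # best score per scenario
team = [1.0e9, 5.0e8, 0.0]     # one team's scores

print(scaled_total(team, best))  # 0.5 + 1.0 + 0.0 = 1.5
print(scaled_total(best, best))  # the ensemble totals 3.0, the scenario count
```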

Scaled Scores
Rank | Team | All scaled scores | Division 1 | Division 2 | Division 3
 | Ensemble | 281.00 | 72.00 | 106.00 | 103.00
1 | ARPA-e Benchmark | 249.00 | 51.25 | 100.89 | 96.86
2 | TIM-GO | 205.09 | 46.40 | 78.71 | 79.98
3 | Electric-Stampede | 200.26 | 52.66 | 72.91 | 74.69
4 | GOT-BSI-OPF | 175.27 | 50.83 | 64.81 | 59.63
5 | Artelys_Columbia | 172.80 | 27.96 | 71.90 | 72.95
6 | Gatorgar | 43.84 | 12.99 | 17.00 | 13.85
7 | Argonauts | 0.00 | 0.00 | 0.00 | 0.00
 | LLGoMax | 0.00 | 0.00 | 0.00 | 0.00
 | PACE | 0.00 | 0.00 | 0.00 | 0.00
 | PGWOpt | 0.00 | 0.00 | 0.00 | 0.00
 | The Blackouts | 0.00 | 0.00 | 0.00 | 0.00
 | YongOptimization | 0.00 | 0.00 | 0.00 | 0.00

Another interesting statistic is the number of feasible scores a team found. Here we count negative feasible scores as well as positive.

Number of Feasible Scores
Rank | Team | All Feasible | Div. 1 | Div. 2 | Div. 3
1 | ARPA-e Benchmark | 253 | 52 | 102 | 99
2 | TIM-GO | 239 | 68 | 85 | 86
3 | Electric-Stampede | 218 | 57 | 79 | 82
4 | GOT-BSI-OPF | 178 | 53 | 65 | 60
5 | Artelys_Columbia | 177 | 28 | 73 | 76
6 | Argonauts | 174 | 39 | 60 | 75
7 | The Blackouts | 123 | 41 | 42 | 40
8 | LLGoMax | 81 | 16 | 33 | 32
9 | Gatorgar | 44 | 13 | 17 | 14
10 | PACE | 1 | 1 | 0 | 0
11 | PGWOpt | 0 | 0 | 0 | 0
 | YongOptimization | 0 | 0 | 0 | 0

The number of infeasible scores is another metric; here, fewer is better. The feasible and infeasible counts rarely add up to the total number of scenarios (they do only for Gatorgar) because some scenarios have no solution files or invalid solution files. Having zero infeasible scores is not good unless all scenarios have valid solution files, which is not the case here.
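The accounting implied here can be sketched as follows; the per-team numbers come from the tables on this page, and the helper function is illustrative, not part of the official scorer.

```python
TOTAL_SCENARIOS = 281  # number of Event 1 scenarios

def missing_or_invalid(feasible, infeasible, total=TOTAL_SCENARIOS):
    """Scenarios not accounted for by a feasible or an infeasible
    score must have missing or invalid solution files."""
    return total - feasible - infeasible

print(missing_or_invalid(44, 237))  # Gatorgar: 0, every scenario accounted for
print(missing_or_invalid(253, 8))   # ARPA-e Benchmark: 20 missing or invalid
```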

Number of Infeasible Scores
Rank | Team | All Infeasible | Div. 1 | Div. 2 | Div. 3
 | PGWOpt | 0 | 0 | 0 | 0
1 | ARPA-e Benchmark | 8 | 1 | 3 | 4
2 | Argonauts | 13 | 0 | 13 | 0
3 | Electric-Stampede | 37 | 1 | 19 | 17
4 | TIM-GO | 39 | 4 | 19 | 16
5 | Artelys_Columbia | 44 | 9 | 21 | 14
6 | LLGoMax | 46 | 3 | 16 | 27
7 | The Blackouts | 50 | 13 | 13 | 24
8 | GOT-BSI-OPF | 93 | 10 | 40 | 43
9 | PACE | 125 | 21 | 58 | 46
10 | Gatorgar | 237 | 59 | 89 | 89
11 | YongOptimization | 268 | 71 | 100 | 97

The final solution file metric is the number that are missing or invalid. Solutions may be missing because the solver timed out or failed before creating a solution. We believe it is better to produce a poor but feasible solution than no solution. Again, a smaller number is better.

Number of Invalid or Missing Solution Files
Rank | Team | All invalid or no soln. | Div. 1 | Div. 2 | Div. 3
1 | Gatorgar | 0 | 0 | 0 | 0
2 | TIM-GO | 3 | 0 | 2 | 1
3 | GOT-BSI-OPF | 10 | 9 | 1 | 0
4 | YongOptimization | 13 | 1 | 6 | 6
5 | ARPA-e Benchmark | 20 | 19 | 1 | 0
6 | Electric-Stampede | 26 | 14 | 8 | 4
7 | Artelys_Columbia | 60 | 35 | 12 | 13
8 | Argonauts | 94 | 33 | 33 | 28
9 | The Blackouts | 108 | 18 | 51 | 39
10 | LLGoMax | 154 | 53 | 57 | 44
11 | PACE | 155 | 50 | 48 | 57
12 | PGWOpt | 281 | 72 | 106 | 103

Producing a feasible solution file while reaching the time limit is not only possible but desirable. A large number of TIMEOUTs is not necessarily an indication of a poor solver; it may indicate a solver under stress, pushing the optimization to the limit, or that a problem is difficult. Again, fewer is better. Not surprisingly, Division 1 has the most time outs; the divisions with longer time limits have many fewer. Perhaps the limits for Division 3 are too generous.

Number of TIMEOUTs
Rank | Team | All Time Outs | Div. 1 | Div. 2 | Div. 3
1 | LLGoMax | 0 | 0 | 0 | 0
 | PGWOpt | 0 | 0 | 0 | 0
 | YongOptimization | 0 | 0 | 0 | 0
2 | The Blackouts | 2 | 2 | 0 | 0
3 | GOT-BSI-OPF | 5 | 5 | 0 | 0
4 | ARPA-e Benchmark | 14 | 14 | 0 | 0
5 | Gatorgar | 16 | 16 | 0 | 0
6 | Electric-Stampede | 21 | 14 | 5 | 2
7 | PACE | 26 | 26 | 0 | 0
8 | Artelys_Columbia | 39 | 20 | 11 | 8
9 | TIM-GO | 47 | 13 | 19 | 15
10 | Argonauts | 73 | 22 | 28 | 23

Details 

A spreadsheet with the scoring for each of the 281 scenarios by each team is available for download (updated Feb. 20, 2023 to include additional statistics). Included are the Best Score for each scenario and which team accomplished this. Also shown are the z (objective), including negative and infeasible values, whether or not the result is feasible, the score, and a scaled score. Not a number (NaN) values result when the branch winding ratio, TAU, is zero in the solution file because the term 1/TAU appears in the equations for AC branch flows.
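As an illustration of the TAU issue, here is a hypothetical pre-check that flags branches whose winding ratio would make the 1/TAU term undefined. This validator, its field names, and its data are assumptions for illustration, not part of the official evaluation code.

```python
import math

def check_winding_ratios(branches):
    """Return ids of branches whose winding ratio TAU would make the
    1/TAU term in the AC branch-flow equations undefined (NaN/inf)."""
    return [
        branch_id
        for branch_id, tau in branches.items()
        if tau == 0.0 or not math.isfinite(tau)
    ]

# Hypothetical solution-file data: branch id -> winding ratio TAU.
branches = {"line-1": 1.0, "xfmr-7": 0.0, "xfmr-9": 0.98}
print(check_winding_ratios(branches))  # ['xfmr-7']
```

Running such a check on a solution file before submission would catch the zero-TAU entries that otherwise surface as NaN scores.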

Included in the list are 24 scenarios where we asked that you solve without making any switching changes. Any switching change made the solution infeasible. Only TIM-GO heeded our request and did not do switching when asked not to. The 24 pairs are highlighted in light green in the switching allowed column (1=yes, 0=no). Two teams (GOT-BSI-OPF and Argonauts) were conservative and did no switching at all. PGWOpt produced no valid solution files, so we do not know whether they were doing switching. You may now tell the Sandbox to test the use of the AllowSwitching in-line parameter with the submission.conf command "experiment=[SW0 | SW1]"; the default is SW1 (switching allowed). See the Languages page for details.

Each competing team will receive a more detailed summary of their results by e-mail, including where to download the results of individual runs; it should arrive by the end of the month.