
2016 NFL Consensus Board: Evaluating vs. Forecasting

Image Courtesy of Wikipedia

I’ve published our 2016 NFL Consensus Big Board, and it gives us some insight into what analysts broadly think about this year’s NFL prospects. Generally speaking, the consensus board is one of the most accurate boards you can find, finishing around tenth out of fifty competitors, on average, in The Huddle Report’s Top 100 scoring.

But there’s more to it than that. Many big boards aren’t trying to figure out which players will go in the top 100 picks of the draft; they’re trying to figure out which players are the best. An NFL team selecting a player early is not proof by itself that he’s one of the best players in the draft, especially because NFL teams disagree with one another and don’t fully understand other teams’ tendencies.

So, some boards purport to predict the draft, while others attempt to evaluate player talent. Because of this, it’s important to read rankings with caution and a critical eye. I’ve also found that some boards rely heavily on insider information (on character, injuries or the state of teams’ boards) to construct their rankings, or explicitly attempt to predict the draft, as PlayTheDraft does.

This year, the forecaster boards were Gil Brandt’s at NFL.com, Rob Rang’s at CBS, Nolan Nawrocki’s in his own guide, Lance Zierlein’s at NFL.com, Mike Mayock’s at NFL.com, Todd McShay’s at ESPN.com, PlayTheDraft, Tony Pauline’s at DraftInsider.net, Scout.com’s and Chris Burke’s at Sports Illustrated. When Mel Kiper releases his big board, it will be added to the constantly updating Google document linked below.

The evaluator boards were everyone else’s, which includes other folks at big media outlets, like Daniel Jeremiah at NFL.com or Matt Miller at Bleacher Report. It also includes those without the same access to those resources, including our own Luke Inman and independent draft guides, like Kyle Crabbs’ at NDT Scouting.

Who is better at predicting the draft? Well, the forecasters. That’s hardly surprising. In 2014, players with high difference scores between the two boards (a rank-adjusted measure of how much the boards disagree on a player) more often went where the forecasters projected them than where the evaluators ranked them.

Take a look at the results (a “win” means the player’s actual draft value, per the Jimmy Johnson trade chart, was closer to that board’s prediction):

Biggest Differences: 2014 Draft
Range | Evaluator Wins | Forecaster Wins | Ties
Top 32 | 2 | 7 | 0
Top 64 | 2 | 8 | 0
Top 75 | 1 | 1 | 0
Top 100 | 2 | 5 | 0
Top 256 | 12 | 27 | 11
All (78) | 19 | 48 | 11

The same was true of the 2015 draft:

Biggest Differences: 2015 Draft
Range | Evaluator Wins | Forecaster Wins | Ties
Top 32 | 4 | 6 | 0
Top 64 | 2 | 4 | 0
Top 75 | 0 | 2 | 0
Top 100 | 0 | 2 | 0
Top 256 | 10 | 12 | 8
All (56) | 16 | 26 | 8

Generally speaking, if the two boards disagree, there’s a 55% chance the forecasters are correct, a 30% chance the evaluators are correct and a 15% chance the truth is in the middle (splitting the ties evenly between the two, it works out to roughly 63% to 37% in favor of the forecasters).
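
To make the “win” rule behind the tables above concrete, here’s a minimal sketch; chart_value stands in for a Jimmy Johnson trade chart lookup and is an assumed helper, not something from the consensus board spreadsheet:

```python
def board_win(eval_rank, forecast_rank, actual_pick, chart_value):
    """Decide which board's ranking came closer, in trade chart points,
    to where the player was actually drafted."""
    eval_error = abs(chart_value(eval_rank) - chart_value(actual_pick))
    forecast_error = abs(chart_value(forecast_rank) - chart_value(actual_pick))
    if eval_error < forecast_error:
        return "Evaluator"
    if forecast_error < eval_error:
        return "Forecaster"
    return "Tie"
```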

Who is better at predicting performance? That’s a bit of an open question, given that the most recent board we can really judge was built in 2014, but we can still try with the metrics available to us.

Using both Pro-Football-Reference’s Approximate Value metric and Pro Football Focus’s player grades, we can rank the players from a draft class and compare that order to the two boards. Personally, I prefer PFF for this exercise, because offensive linemen get boosted in PFR’s approach simply by playing for a good offense, and defensive players get penalized for playing alongside other good players.

It doesn’t matter much, because the two metrics agree. We assign Jimmy Johnson trade chart value to each rookie based on the rank order of his performance (PFF’s top rookie gets 3000 points, the second-best gets 2600 points and so on) and then look at the difference between that performance value and the value implied by each board, weighted for degree of difference: if the two boards basically agreed on a player, it doesn’t make sense to record a big win or loss for him. By that measure, the forecasters were marginally better in 2014.
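
Here’s a minimal sketch of that scoring in Python, assuming a (truncated) trade chart lookup and leaving out the disagreement weighting for brevity; the names are illustrative rather than from the actual spreadsheet:

```python
# Top of the Jimmy Johnson trade chart (pick number -> points).
# Truncated for space; the real chart covers all seven rounds.
JJ_CHART = {1: 3000, 2: 2600, 3: 2200, 4: 1800, 5: 1700}

def performance_error(board_rank, performance_rank, chart=JJ_CHART):
    """Trade-value gap between where a board ranked a player and where
    his rookie performance (e.g., PFF grade order) ranked him."""
    return abs(chart[board_rank] - chart[performance_rank])

def average_error(board_ranks, performance_ranks):
    """Average gap across a draft class; a lower number means the board
    did a better job of predicting performance."""
    gaps = [performance_error(board_ranks[p], performance_ranks[p])
            for p in board_ranks]
    return sum(gaps) / len(gaps)
```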

The average difference between projected value and PFF-graded performance value for the 2014 draft class was 236 trade value points. For the forecasters it was 213 points, and for the evaluators it was 259 points. If we scored the actual NFL draft order as a board, it was way off, with an average difference of 323.

In 2015, the evaluators were far better at predicting performance than the forecasters and both were once again better than the NFL as a whole. The average difference in points for evaluators was 176, and for forecasters was 193. The NFL was at 202.

That may not be fair, however, given the number of early-round injuries—particularly to Dante Fowler, Kevin White and Breshad Perriman. We’ll see as the 2015 class continues on into this year.

In any case, the fact that the boards seem to be better than the NFL (so far) implies that, despite the league’s massive informational advantages, its ability to determine team fit and its ability to put players in positions to succeed, there are big inefficiencies in what teams do. To some extent, their dismissiveness toward outside evaluators should be reassessed.

Obviously, positional value and positions of need play a role, though need would play less of a role if the NFL were more open to trading picks around.

Aside from that, the fact that the evaluators can even come close to the forecasters in predicting performance, despite being far behind in predicting where players are picked, means that fans and teams can use the differences between the boards to take advantage of draft inefficiencies. If the evaluators like a player the forecasters don’t, there’s a good chance (about 65 percent) that he will be drafted later than the evaluators rank him, but an even chance that he will live up to the evaluator board’s higher expectation.

Here’s the Forecaster Board’s Top 50:

Forecaster Rk | Overall Rk | Player | School | Position
1 | 1 | Laremy Tunsil | Ole Miss | OT
2 | 2 | Jalen Ramsey | Florida State | S
3 | 3 | Joey Bosa | Ohio State | ED
4 | 4 | Myles Jack | UCLA | OB
5 | 5 | Ezekiel Elliott | Ohio State | RB
6 | 6 | DeForest Buckner | Oregon | ID
7 | 8 | Ronnie Stanley | Notre Dame | OT
8 | 12 | Carson Wentz | North Dakota State | QB
9 | 9 | Vernon Hargreaves III | Florida | CB
10 | 7 | Jared Goff | California | QB
11 | 17 | A’Shawn Robinson | Alabama | ID
12 | 11 | Laquon Treadwell | Ole Miss | WR
13 | 13 | Sheldon Rankins | Louisville | ID
14 | 19 | Jack Conklin | Michigan State | OT
15 | 15 | Reggie Ragland | Alabama | OB
16 | 14 | Darron Lee | Ohio State | OB
17 | 23 | Leonard Floyd | Georgia | ED
18 | 10 | Shaq Lawson | Clemson | ED
19 | 28 | Robert Nkemdiche | Ole Miss | ID
20 | 21 | Corey Coleman | Baylor | WR
21 | 18 | Jarran Reed | Alabama | ID
22 | 22 | Taylor Decker | Ohio State | OT
23 | 34 | Kevin Dodd | Clemson | ED
24 | 27 | Mackensie Alexander | Clemson | CB
25 | 26 | Paxton Lynch | Memphis | QB
26 | 16 | Josh Doctson | TCU | WR
27 | 29 | Vernon Butler | Louisiana Tech | ID
28 | 31 | Eli Apple | Ohio State | CB
29 | 24 | William Jackson III | Houston | CB
30 | 25 | Andrew Billings | Baylor | ID
31 | 33 | Emmanuel Ogbah | Oklahoma State | ED
32 | 42 | Will Fuller | Notre Dame | WR
33 | 39 | Derrick Henry | Alabama | RB
34 | 38 | Ryan Kelly | Alabama | C
35 | 20 | Noah Spence | Eastern Kentucky | ED
36 | 44 | Hunter Henry | Arkansas | TE
37 | 36 | Cody Whitehair | Kansas State | OG
38 | 35 | Kenny Clark | UCLA | ID
39 | 32 | Jonathan Bullard | Florida | ED
40 | 45 | Karl Joseph | West Virginia | S
41 | 37 | Jason Spriggs | Indiana | OT
42 | 41 | Chris Jones | Mississippi State | ID
43 | 40 | Michael Thomas | Ohio State | WR
44 | 30 | Jaylon Smith | Notre Dame | OB
45 | 48 | Germain Ifedi | Texas A&M | OT
46 | 53 | Austin Johnson | Penn State | ID
47 | 58 | Keanu Neal | Florida | S
48 | 49 | Su’a Cravens | Southern California | OB
49 | 55 | Kamalei Correa | Boise State | OB
50 | 43 | Sterling Shepard | Oklahoma | WR

Here’s the Evaluator Board’s Top 50:

Evaluator Rk | Overall Rk | Player | School | Position
1 | 1 | Laremy Tunsil | Ole Miss | OT
2 | 2 | Jalen Ramsey | Florida State | S
3 | 3 | Joey Bosa | Ohio State | ED
4 | 4 | Myles Jack | UCLA | OB
5 | 5 | Ezekiel Elliott | Ohio State | RB
6 | 6 | DeForest Buckner | Oregon | ID
7 | 7 | Jared Goff | California | QB
8 | 8 | Ronnie Stanley | Notre Dame | OT
9 | 10 | Shaq Lawson | Clemson | ED
10 | 9 | Vernon Hargreaves III | Florida | CB
11 | 13 | Sheldon Rankins | Louisville | ID
12 | 11 | Laquon Treadwell | Ole Miss | WR
13 | 12 | Carson Wentz | North Dakota State | QB
14 | 14 | Darron Lee | Ohio State | OB
15 | 16 | Josh Doctson | TCU | WR
16 | 30 | Jaylon Smith | Notre Dame | OB
17 | 15 | Reggie Ragland | Alabama | OB
18 | 20 | Noah Spence | Eastern Kentucky | ED
19 | 18 | Jarran Reed | Alabama | ID
20 | 17 | A’Shawn Robinson | Alabama | ID
21 | 22 | Taylor Decker | Ohio State | OT
22 | 21 | Corey Coleman | Baylor | WR
23 | 25 | Andrew Billings | Baylor | ID
24 | 24 | William Jackson III | Houston | CB
25 | 26 | Paxton Lynch | Memphis | QB
26 | 19 | Jack Conklin | Michigan State | OT
27 | 27 | Mackensie Alexander | Clemson | CB
28 | 23 | Leonard Floyd | Georgia | ED
29 | 29 | Vernon Butler | Louisiana Tech | ID
30 | 32 | Jonathan Bullard | Florida | ED
31 | 28 | Robert Nkemdiche | Ole Miss | ID
32 | 33 | Emmanuel Ogbah | Oklahoma State | ED
33 | 36 | Cody Whitehair | Kansas State | OG
34 | 35 | Kenny Clark | UCLA | ID
35 | 31 | Eli Apple | Ohio State | CB
36 | 37 | Jason Spriggs | Indiana | OT
37 | 40 | Michael Thomas | Ohio State | WR
38 | 39 | Derrick Henry | Alabama | RB
39 | 34 | Kevin Dodd | Clemson | ED
40 | 38 | Ryan Kelly | Alabama | C
41 | 43 | Sterling Shepard | Oklahoma | WR
42 | 41 | Chris Jones | Mississippi State | ID
43 | 42 | Will Fuller | Notre Dame | WR
44 | 45 | Karl Joseph | West Virginia | S
45 | 46 | Joshua Garnett | Stanford | OG
46 | 47 | Kendall Fuller | Virginia Tech | CB
47 | 49 | Su’a Cravens | Southern California | OB
48 | 44 | Hunter Henry | Arkansas | TE
49 | 51 | Vonn Bell | Ohio State | S
50 | 52 | Shilique Calhoun | Michigan State | ED

You can find a Google document of the top 300 of both boards here.

Most interesting to me are the differences between the two. The largest differences for individual players are below:

Player | Eval Rk | Forecast Rk | Difference Score | Preference
Jaylon Smith | 16 | 44 | 3.40 | Evaluator
Chris Brown | 406 | 244 | 2.51 | Forecaster
T.J. Green | 152 | 81 | 2.49 | Forecaster
B.J. Goodson | 238 | 142 | 2.08 | Forecaster
Evan Boehm | 171 | 102 | 1.83 | Forecaster
Mike Thomas (SM) | 145 | 233 | 1.76 | Evaluator
Peyton Barber | 380 | 251 | 1.67 | Forecaster
Alex Lewis | 244 | 155 | 1.66 | Forecaster
Noah Spence | 18 | 35 | 1.53 | Evaluator
Tyrone Holmes | 204 | 308 | 1.52 | Evaluator
Trevone Boykin | 252 | 374 | 1.51 | Evaluator
Landon Turner | 102 | 160 | 1.38 | Evaluator
Ricardo Louis | 307 | 210 | 1.30 | Forecaster
D.J. Reader | 122 | 186 | 1.30 | Evaluator
Shaq Lawson | 9 | 18 | 1.26 | Evaluator
Jack Conklin | 26 | 14 | 1.19 | Forecaster
Adam Gotsis | 182 | 122 | 1.16 | Forecaster
Kevin Dodd | 39 | 23 | 1.05 | Forecaster
Stephane Nembot | 333 | 239 | 1.04 | Forecaster
Vernon Adams Jr. | 186 | 263 | 1.03 | Evaluator
A’Shawn Robinson | 20 | 11 | 1.01 | Forecaster

Some of these differences are very easy to parse. Forecasters have much more information about Jaylon Smith’s injuries and how Noah Spence did in team interviews, and therefore downgraded both of them.

What’s surprising is that there’s only one safety up there, despite safety being the most contentious position to evaluate board-by-board. That contentiousness carried over into the individual forecaster boards as well, however, so safeties didn’t end up well represented here.

T.J. Green earned a Jalen Ramsey-ish evaluation from an area scout, and that seems to have filtered through to the forecasters.

In general, the Clemson defense has been a source of much disagreement, and the forecasters near-uniformly preferred the Clemson defenders, with the exception of Shaq Lawson, whose clouded scouting evaluation was nicely summed up by Matt Waldman recently.

Small-school players, like Mike Thomas of Southern Miss, Tyrone Holmes of Montana and Noah Spence of Eastern Kentucky, also drew disagreement, with the evaluators preferring the small-schoolers nearly every time, and that largely bears out in the table.

The one exception, just below a difference score of 1.0, was Devon Johnson of Marshall, whom the forecasters preferred. Still, the evaluators preferred Joe Haeg of NDSU, Tavon Young of Temple and Cre’von LeBlanc of Florida Atlantic.

For schools with at least three players in the top 300, the largest average difference scores went to Clemson (the only school with an average above 1.0, at 1.03), Nebraska, Missouri, Colorado and Notre Dame. The lowest belonged to Ohio State, whom everyone loved.
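
That school-level figure is just a grouped average of the player difference scores. Here’s a minimal sketch of the aggregation, assuming the input is a list of (school, difference score) pairs pulled from the top 300; the function name and cutoff parameter are illustrative, not from the actual spreadsheet:

```python
from collections import defaultdict

def school_difference_averages(players, min_players=3):
    """Average difference score by school, keeping only schools with at
    least min_players ranked players (three, as in the note above)."""
    by_school = defaultdict(list)
    for school, score in players:
        by_school[school].append(score)
    return {school: sum(scores) / len(scores)
            for school, scores in by_school.items()
            if len(scores) >= min_players}
```

By the way, the biggest agreements were the following players: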

Player | Eval Rk | Forecast Rk | Adj. Difference | Preference
Laremy Tunsil | 1 | 1 | 0.00 | No one
Jalen Ramsey | 2 | 2 | 0.00 | No one
Joey Bosa | 3 | 3 | 0.00 | No one
Myles Jack | 4 | 4 | 0.00 | No one
Ezekiel Elliott | 5 | 5 | 0.00 | No one
DeForest Buckner | 6 | 6 | 0.00 | No one
Laquon Treadwell | 12 | 12 | 0.00 | No one
Paxton Lynch | 25 | 25 | 0.00 | No one
Chris Jones | 42 | 42 | 0.00 | No one
Dak Prescott | 114 | 114 | 0.00 | No one
Keyarris Garrett | 139 | 139 | 0.00 | No one
Kevin Hogan | 178 | 178 | 0.00 | No one

It’s never too surprising to see the names at the top, but it’s always curious to see some of the bottom names. What makes Kevin Hogan so “178th-best”? Why is Keyarris Garrett, a big receiver in a gimmicky offense, so universally agreed upon?

It’s likely random chance: if you generate two random rankings of 300 players, some spots will match, and even more will match if the two rankings are weighted toward the same general area, which is essentially what these boards are. Still, it’s kind of fun to see.
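
If you want to convince yourself, here’s a quick, purely illustrative simulation; the expected number of exact matches between two uniformly random rankings is 1, no matter how long the list is:

```python
import random

def average_exact_matches(n=300, trials=10000):
    """Shuffle two rankings of n players many times and count how often
    the same player lands in the same spot on both lists."""
    ranking_a = list(range(n))
    ranking_b = list(range(n))
    total = 0
    for _ in range(trials):
        random.shuffle(ranking_a)
        random.shuffle(ranking_b)
        total += sum(a == b for a, b in zip(ranking_a, ranking_b))
    return total / trials

print(average_exact_matches())  # hovers around 1.0
```

Correlated boards, like the evaluator and forecaster boards here, will agree far more often than that baseline.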

Those are your boards. Make of them what you will.
