[h=1]2014 NFL Draft: “Consensus” Top 200 Big Board From 34 Experts—“Forecasters” vs. “Evaluators”[/h]
May 2, 2014, by Arif Hasan
With the NFL Draft finally approaching, getting a consensus on the talents of the prospects may give us the best possible understanding of who a team will select and whether they did a good job (as far as we can tell). Unfortunately, not every board is constructed with the same goals in mind or with the same amount of information available to its author.
NFL Draft Tracker, a great website for getting detailed reports on prospects as well as a general understanding of the theories driving the draft, made a good point not too long ago: consensus boards don’t make a lot of sense if we don’t discriminate between those who are purely evaluating player talent and those who are attempting to reflect the consensus of the league.
That’s fine—these draft resources generally answer two questions: 1) “Who’s Good?” and 2) “Who Are We Going to Pick?”
The good news is that we can easily do that. For the most part, while insiders like Rob Rang and Daniel Jeremiah do an excellent job pointing out how they differ from mainstream views, the major networks often reflect the consensus of the league, and their draft rankings do not differ much from one another. At the very least, their low variance suggests that their rankings are influenced by what they hear around the league, if not driven by it.
I was initially skeptical of this approach, but when I found unusual clusters of rankings among the mainstream draft services (especially those at the same networks) but not among the third-party services, I was fairly convinced they were either stealing evaluations or working from the same inside information. It should be extremely rare to find clusters when you’re ranking prospects around 70 or 100 (a flat talent distribution means small differences of opinion cause wide ranking divergence), but among the more established services, it was not.
For what it’s worth, I don’t think they’re stealing draft evaluations from each other, I just think that when they start hearing the same things from the same people over and over again, it affects the process. The fact that many of the NFL-influenced draft boards showed much more week-to-week volatility than the film-dependent draft boards further solidifies this approach.
As a result, we can construct two consensus boards. I ended up including 34 total boards (listed at the bottom of this post), with 22 of the boards categorized as coming from “evaluators” and the other 12 coming from people categorized as “projectors.”
Some folks (notably Bob McGinn of the Milwaukee Journal-Sentinel) have not released a Top 100 yet, but we’ll do our best regardless.
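To make the process concrete, here is a minimal sketch of how such a consensus board could be assembled, assuming a simple average of each prospect’s rank across the boards that list him (so an incomplete board only “votes” on the prospects it includes). The article doesn’t describe its exact aggregation method, and the function name and sample entries below are purely illustrative.

```python
from collections import defaultdict

def consensus_board(boards):
    """Average each prospect's rank across the boards that list him.

    boards: dict mapping a board's name to an ordered list of prospect names.
    Boards that stop at 100 simply don't vote on prospects they omit.
    """
    ranks = defaultdict(list)
    for ordering in boards.values():
        for rank, player in enumerate(ordering, start=1):
            ranks[player].append(rank)

    # Sort by average rank; break ties in favor of players listed on more boards.
    scored = sorted((sum(r) / len(r), -len(r), player) for player, r in ranks.items())
    return [player for _, _, player in scored]

# Illustrative usage with a tiny slice of two hypothetical evaluator boards.
evaluator_consensus = consensus_board({
    "Board A": ["Jadeveon Clowney", "Anthony Barr", "C.J. Mosley"],
    "Board B": ["Jadeveon Clowney", "C.J. Mosley", "Stephon Tuitt"],
})
print(evaluator_consensus)
```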
The distinction wasn’t entirely arbitrary, but I did make some judgment calls. A simple divergence test on the boards actually produced stark differences between the two groups, with all of the closely matching boards coming from those who have insider access, and the rest coming from a mix of those with high-level access who haven’t incorporated outside input and somewhat amateur evaluation boards (here, “amateur” refers to recognition, not evaluation talent).
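A divergence test along these lines could be as simple as a pairwise Spearman rank correlation between boards, computed over the prospects both boards rank: boards that correlate unusually strongly with one another fall on the insider/“projector” side of the split, while more idiosyncratic boards land with the “evaluators.” This is an assumed formulation, not necessarily the author’s exact test.

```python
def spearman_rho(board_a, board_b):
    """Spearman rank correlation over the prospects that appear on both boards."""
    b_set = set(board_b)
    common = [p for p in board_a if p in b_set]
    n = len(common)
    if n < 2:
        return float("nan")
    # Positions within the shared subset act as tie-free ranks for each board.
    rank_a = {p: i for i, p in enumerate(common)}  # common is already in board_a order
    rank_b = {p: i for i, p in enumerate(sorted(common, key=board_b.index))}
    d_sq = sum((rank_a[p] - rank_b[p]) ** 2 for p in common)
    return 1 - 6 * d_sq / (n * (n * n - 1))
```

Averaging this statistic within the network-affiliated group and within the third-party group is the kind of comparison that would reveal the clustering described above.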
There are drawbacks and benefits to either approach. Getting information from the NFL generally means getting information from a source that has more access to knowledge of injuries (they get to inspect all the top prospects at the combine and request medical information from teams, as well as do their own medical work with top doctors), off-field concerns (as Daniel Jeremiah points out, he left scouting because it has become 70 percent character background work), coaches’ film (invaluable for scouting safeties and quarterbacks) and years of high-pressure evaluation experience.
On the other hand, relying on information from those who have an enormous incentive to lie and those who are often stuck too far into tradition may cause serious issues in terms of accuracy and evaluation.
For these boards, I’ve used different positional names to get a clearer picture of a player’s most often projected role, so that we can compare like to like. For example, I think a rush linebacker and an edge-setting defensive end are more alike than different, and that to categorize one as an “LB” and the other as a “DE” causes too much confusion—Anthony Barr is not similar to C.J. Mosley, even though both are considered “LBs,” while Jadeveon Clowney and Stephon Tuitt hardly play the same position, despite both being “DEs.”
I’m hardly the first to come up with this concept; it’s been discussed on Twitter for quite some time, and the first time I saw it implemented in a ranking was from Josh Norris at Rotoworld. Positions below:
| Positions |
| --- |
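The positions table itself did not survive in this copy, but the idea can be sketched as a mapping from player to projected role rather than to the listed college position. The labels below are hypothetical, drawn only from the examples discussed above; the original article may use different names and groupings.

```python
# Hypothetical role labels for the players discussed above (not the article's
# actual table); the point is to compare like roles rather than listed positions.
PROJECTED_ROLES = {
    "Anthony Barr": "EDGE",      # rush linebacker, grouped with edge defenders
    "Jadeveon Clowney": "EDGE",
    "C.J. Mosley": "LB",         # off-ball linebacker
    "Stephon Tuitt": "DL",       # edge-setting end, closer to interior linemen
}
```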