Was Jets OC Mike LaFleur actually good at things he can control?

By: Michael Nania

Attempting to evaluate former New York Jets offensive coordinator Mike LaFleur independent of his surroundings

When people analyze an offensive play-caller in the NFL, they typically look at the overall numbers of the play-caller’s unit and judge him on those alone. If the play-caller’s offense was 30th in scoring, he stinks. If his offense was 5th in scoring, he’s a rock star.

However, evaluating coaches is a lot more complex than that. There are many factors involved in determining the performance of a unit. It’s never 100% on the coach. He could be let down by poor execution. He could also be made to look better than he actually is thanks to the phenomenal talent on the roster.

Much of a coach’s true performance quality comes down to their abilities as leaders and teachers off the field, which outsiders are mostly unable to judge (save for the occasional reports and rumors that leak out of a locker room).

But when looking solely at what happens on the field, there are ways to isolate the coach’s performance from that of his players.

The best way to do this is by grinding through the film, but I wanted to find a way to quantify some of the things you might notice on film as examples of a good or bad job by the offensive coordinator.

I tried my best to identify metrics that tell us specifically about how the offensive play-caller performed at putting his players in the best positions to succeed.

Using these metrics, we can get a somewhat better idea of how good or bad Mike LaFleur really was as the New York Jets’ offensive coordinator. Did the Jets make a mistake moving on from him? Or did he struggle as much as it seemed?

The metrics chosen

Before we get into the numbers, let’s break down the four metrics I chose to analyze. Each of these metrics is intended to focus on a part of the game that the play-caller has a high degree of control over and is largely independent of his players’ execution.

Wide-open passes created

Every time I watch a San Francisco 49ers game, I am in awe of how many wide-open passes Kyle Shanahan draws up. Players constantly break wide open solely because of his play designs.

In my opinion, the ability to scheme people open is the best trait an offensive play-caller can offer. It makes life so easy for everyone on the field. The quarterback is given a throw that is simple to execute and the skill-position players are not pressured to win their routes one-on-one.

Using data from NFL Next Gen Stats, I calculated the percentage of each team’s pass attempts that traveled at least 10 yards down the field and went to a receiver who was categorized as “open” or “wide-open” (3+ yards of separation from the nearest defender when the ball arrives).

This is an effective way of seeing how frequently the quarterback was gift-wrapped an easy completion down the field.
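As a rough sketch of the arithmetic behind this metric, the filter below applies the two thresholds described above (10+ air yards, 3+ yards of separation). The field names and sample records are hypothetical, not the actual Next Gen Stats data feed.

```python
# Sketch of the wide-open downfield pass rate described above.
# Field names (air_yards, separation) loosely mirror Next Gen Stats
# terminology; the records here are made up for illustration.

def wide_open_rate(passes, min_air_yards=10.0, min_separation=3.0):
    """Share of attempts thrown 10+ yards downfield to a receiver
    with 3+ yards of separation when the ball arrives."""
    if not passes:
        return 0.0
    wide_open = [
        p for p in passes
        if p["air_yards"] >= min_air_yards
        and p["separation"] >= min_separation
    ]
    return 100.0 * len(wide_open) / len(passes)

# Toy sample: only the first and last attempts clear both thresholds.
sample = [
    {"air_yards": 14.0, "separation": 4.2},  # deep and wide open
    {"air_yards": 3.0,  "separation": 5.0},  # open, but a short throw
    {"air_yards": 18.0, "separation": 1.1},  # deep, but contested
    {"air_yards": 11.0, "separation": 3.0},  # qualifies (exactly 3 yards)
]
print(round(wide_open_rate(sample), 1))  # 50.0
```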

Unaccounted-for pressures allowed

Few things make an offense look more discombobulated than allowing unaccounted-for defenders to run free at the quarterback. This metric allows us to get an idea of how often each team allowed this to happen.

Any time a defender pressures a quarterback, Pro Football Focus charts which offensive player is responsible for allowing it to happen. The pressure can be charged to any blocker, the quarterback himself, or nobody.

If the pressure is charged to nobody, it is likely an unblocked pressure for which no particular player is to blame. This typically means one of two things: either there was a miscommunication or the play call had no built-in way to account for the rusher. Either way, it is an indictment of the play-caller.

I calculated the percentage of each team’s dropbacks in which they allowed an unaccounted-for pressure. This gives us an idea of how frequently the offense’s protection breaks down in a way that can be blamed on the play-caller rather than the individual pass-blocking talents of the offensive line.
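The calculation itself is simple, as the sketch below shows. The responsibility labels imitate PFF-style charting (a blocker, the quarterback, or nobody); the data and field names are hypothetical.

```python
# Sketch of the unaccounted-for pressure rate: the share of dropbacks
# where a pressure occurred but no individual player was charged.

def unaccounted_pressure_rate(dropbacks):
    """Percentage of dropbacks with a pressure charged to nobody."""
    if not dropbacks:
        return 0.0
    unaccounted = sum(
        1 for db in dropbacks
        if db["pressured"] and db["charged_to"] is None
    )
    return 100.0 * unaccounted / len(dropbacks)

# Toy sample: one of four dropbacks has a free rusher nobody is charged for.
sample = [
    {"pressured": True,  "charged_to": "LT"},   # beaten block
    {"pressured": True,  "charged_to": None},   # free rusher, nobody charged
    {"pressured": False, "charged_to": None},   # clean pocket
    {"pressured": True,  "charged_to": "QB"},   # QB held the ball too long
]
print(unaccounted_pressure_rate(sample))  # 25.0
```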

Expected YAC on screens

The success of a screen play usually comes down to whether the play-caller won the chess match. Did he create a situation where the ball carrier is in a favorable position to succeed?

Yes, the players carry plenty of responsibility on screens: the blocking is key, and the ball-carrier’s job is to make people miss. More often than not, though, a screen’s success comes down to whether the play-caller called it at the right time and designed it to exploit the defense that was called.

On each completed pass, NFL Next Gen Stats uses player tracking data to calculate the amount of YAC (yards after catch) that the receiver is expected to gain, based on the positioning of all players on the field at the time the ball is caught. In other words, it estimates the amount of YAC that the league-average player would gain if they were given the ball in that exact scenario.

I calculated the average amount of expected YAC that each team generated per screen play. This gives us an idea of the quality of each team’s screen plays. If the number is high, it means the play-caller is dialing up screens that have a high likelihood of success, and vice versa.

This is a better way to evaluate the play-caller than by looking at the raw results on screen plays, as it isolates the play-caller from the performance of his players. We don’t want to give the play-caller too much credit if his receiver makes an amazing play to outperform what was presented to him. We also don’t want to knock the play-caller too harshly if his receiver misreads the play and fails to take advantage of a well-designed play.
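Put concretely, the comparison is between expected YAC per screen and actual YAC per screen; the gap between them (YAC over expected, or YACOE) is what gets credited to, or charged against, the players. The xyac values in a real study would come from Next Gen Stats tracking models; the numbers below are toy values.

```python
# Sketch of expected YAC per screen versus actual YAC per screen.
# All values here are made up, not real tracking data.

def per_screen_averages(screens):
    """Return (expected YAC/screen, actual YAC/screen, YACOE/screen)."""
    n = len(screens)
    xyac = sum(s["xyac"] for s in screens) / n
    yac = sum(s["yac"] for s in screens) / n
    return xyac, yac, yac - xyac  # negative YACOE = under-execution

screens = [
    {"xyac": 9.0, "yac": 7.0},   # well-designed, under-executed
    {"xyac": 8.0, "yac": 8.5},   # modest design, good run after catch
    {"xyac": 8.2, "yac": 7.3},
]
xyac, yac, yacoe = per_screen_averages(screens)
print(round(xyac, 1), round(yac, 1), round(yacoe, 1))  # 8.4 7.6 -0.8
```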

Play-calling predictability (Based on XPASS)

Every armchair fan loves to blast the play-caller for being predictable. But they usually don’t make the critique until after the play is called and fails.

That’s not how it works, unfortunately. If Joe Schmoe from Jersey City can show me his record at predicting LaFleur’s play calls before the ball is snapped, I’d love to see it. You can’t call a play predictable after it happens, which is what the MetLife faithful always seem to do.

What we can do is evaluate each play-caller’s predictability by using the pre-snap probabilities of what they are expected to call on each play.

Jets X-Factor’s Rivka Boord has frequently evaluated LaFleur’s predictability using a metric called XPASS, courtesy of nflfastR. I’m going to borrow the data she brought up in another LaFleur article.

Here is Rivka’s explanation of XPASS:

“XPASS provides the probability that any given play will be a pass play based on the game situation and historical play-by-play data. In some game situations, the play selection split between run and pass will be pretty even, resulting in an XPASS close to 50%. However, when XPASS is above 55% or lower than 45%, there is a clear expectation of what a team will do. What the team actually does in those situations gives you a good idea of how predictable the offensive coordinator is.”

Rivka calculated the percentage of plays in which each team called the opposite of the expected call in that situation (including only situations where there is a clear expectation – i.e. an XPASS above 55% or below 45%). In other words, if LaFleur called a pass when the XPASS was under 45% or a run when the XPASS was above 55%, it was an unexpected play call.

Rivka also included a win-probability filter, excluding plays in which the team’s win probability was below 20% or above 80%. This filters out plays in which the team is either milking the clock or in pass-only mode.

Overall, this metric is a good way to gauge the predictability of a team’s play-calling.
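The filtering logic above can be sketched as follows. The column names (xpass, wp, is_pass) loosely follow nflfastR conventions, but the plays below are hypothetical.

```python
# Sketch of the unexpected-call rate: among plays with a clear
# expectation (xpass outside 45-55%) in a competitive game state
# (win probability between 20% and 80%), how often did the
# play-caller go against the expected call?

def unexpected_call_rate(plays, lo=0.45, hi=0.55, wp_lo=0.20, wp_hi=0.80):
    qualified = [
        p for p in plays
        if (p["xpass"] > hi or p["xpass"] < lo)   # clear expectation
        and wp_lo <= p["wp"] <= wp_hi             # competitive game state
    ]
    unexpected = [
        p for p in qualified
        if (p["xpass"] > hi and not p["is_pass"])  # expected pass, ran
        or (p["xpass"] < lo and p["is_pass"])      # expected run, passed
    ]
    return 100.0 * len(unexpected) / len(qualified) if qualified else 0.0

plays = [
    {"xpass": 0.70, "wp": 0.55, "is_pass": False},  # ran in a passing spot
    {"xpass": 0.70, "wp": 0.55, "is_pass": True},   # expected pass, passed
    {"xpass": 0.30, "wp": 0.50, "is_pass": False},  # expected run, ran
    {"xpass": 0.50, "wp": 0.50, "is_pass": True},   # no clear expectation
    {"xpass": 0.80, "wp": 0.95, "is_pass": True},   # blowout, filtered out
]
print(round(unexpected_call_rate(plays), 1))  # 33.3 (1 of 3 qualified)
```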

The results

Time to hop into it. How good was Mike LaFleur in these four metrics that isolate the play-caller from his players?

Wide-open passes created

Here is a look at the percentage of each team’s pass attempts that went to a receiver who was at least 10 yards downfield and had at least 3 yards of separation from the nearest defender.

Rank Team Wide open passes Total passes %
1 MIA 91 584 15.6
2 BUF 64 574 11.1
3 KC 71 650 10.9
4 CHI 41 378 10.8
5 ATL 42 415 10.1
6 NO 51 512 10.0
7 PHI 52 536 9.7
8 SF 50 517 9.7
9 JAX 54 596 9.1
10 BAL 43 488 8.8
11 MIN 59 670 8.8
12 ARI 58 663 8.7
13 SEA 50 573 8.7
14 CLE 47 540 8.7
15 LV 51 586 8.7
16 NYJ 53 625 8.5
17 DEN 48 571 8.4
18 DET 48 587 8.2
19 LA 43 529 8.1
20 HOU 47 579 8.1
21 DAL 45 556 8.1
22 PIT 46 570 8.1
23 LAC 57 711 8.0
24 GB 44 563 7.8
25 WAS 43 554 7.8
26 CAR 35 452 7.7
27 CIN 47 610 7.7
28 IND 44 604 7.3
29 TB 53 751 7.1
30 NE 38 540 7.0
31 NYG 29 520 5.6
32 TEN 23 456 5.0

LaFleur did a decent job here. New York generated 53 wide-open downfield passes out of 625 total attempts, a rate of 8.5% that ranked 16th.

The quarterbacks just didn’t do a good job of executing these throws. New York completed only 63.0% of these passes, which ranked fifth-worst. The league-average completion percentage on these throws was 69.5%.

Mike White capitalized on these easy throws at around an average rate, completing 70.0% of them, but Zach Wilson was at 61.9% while Joe Flacco was even worse at 58.8%.

Still, LaFleur was not as good in this area as some of the other coaches from his tree. His former 49ers colleague, Mike McDaniel, had the Dolphins at No. 1 by an enormous margin, generating these throws 15.6% of the time. Shanahan and the 49ers ranked eighth at 9.7%.

Generating easy passes is supposed to be the bread and butter of LaFleur’s West Coast offense, but he was only average at it.

Unaccounted-for pressures allowed

Here is a look at the percentage of each team’s dropbacks in which they allowed an unaccounted-for pressure.

Rank Team Unaccounted Pressures Total Dropbacks %
1 NE 3 600 0.50
2 SF 3 558 0.54
3 DAL 4 601 0.67
4 DET 5 626 0.80
5 ARI 6 749 0.80
6 CIN 7 682 1.03
7 HOU 7 638 1.10
8 NYG 8 627 1.28
9 CLE 8 616 1.30
10 LA 8 611 1.31
11 GB 8 606 1.32
12 MIA 9 639 1.41
13 LAC 11 777 1.42
14 TEN 8 531 1.51
15 MIN 11 729 1.51
16 JAX 10 648 1.54
17 CAR 8 519 1.54
18 PIT 10 639 1.56
19 WAS 10 629 1.59
20 ATL 8 486 1.65
21 TB 13 776 1.68
22 LV 11 648 1.70
23 CHI 9 510 1.76
24 KC 13 725 1.79
25 BUF 12 665 1.80
26 SEA 12 652 1.84
27 BAL 11 562 1.96
28 NYJ 14 689 2.03
29 PHI 13 625 2.08
30 NO 12 563 2.13
31 IND 16 684 2.34
32 DEN 16 665 2.41

This was a major problem for the Jets. They allowed unaccounted-for pressures at the fifth-highest rate of any offense.

LaFleur deserves plenty of blame for the number of unblocked defenders that made their way to the quarterback. He failed to get his offense on the same page in pass protection. While the Jets’ offensive line was decimated by injuries and therefore lacked talent, there were still too many pressures on the quarterback that were simply a result of LaFleur being out-schemed.

Expected YAC on screens

Here is a look at the average amount of YAC each team was expected to gain on its screens.

Rank Team Screen Receptions XYAC on Screens XYAC/Screen
1 LA 46 483 10.5
2 DET 37 386 10.4
3 KC 68 629 9.3
4 LAC 65 596 9.2
5 JAX 65 593 9.1
6 SF 56 508 9.1
7 DEN 47 424 9.0
8 TEN 27 243 9.0
9 CHI 47 406 8.6
10 MIN 63 543 8.6
11 CLE 33 283 8.6
12 TB 61 512 8.4
13 NYJ 42 352 8.4
14 ARI 71 592 8.3
15 BAL 28 233 8.3
16 MIA 31 253 8.2
17 ATL 38 307 8.1
18 PHI 60 482 8.0
19 SEA 34 273 8.0
20 NO 32 254 7.9
21 WAS 48 380 7.9
22 BUF 31 240 7.7
23 LV 31 240 7.7
24 NYG 23 177 7.7
25 NE 72 551 7.7
26 HOU 63 477 7.6
27 CIN 40 302 7.6
28 CAR 34 254 7.5
29 IND 37 266 7.2
30 GB 46 329 7.2
31 PIT 23 146 6.4
32 DAL 33 186 5.6

LaFleur was surprisingly respectable here, generating an average of 8.4 expected YAC per screen reception, which ranked 13th-best. The players just did not cash it in: the Jets actually gained only 7.6 YAC per screen reception, which ranked 26th.

Basically, NFL Next Gen Stats’ data suggests the Jets’ screens were poorly executed by the players despite solid play-calling. The difference between actual and expected YAC is known as “YACOE” (YAC over expected), and New York’s -0.8 YACOE per screen reception ranked 32nd in the NFL.

This could be an indictment on the blocking, the ball-carriers’ lack of vision, the ball-carriers’ lack of elusiveness, or a combination of all three. Or, it could be a bogus metric that is telling the wrong story (I have seen some suspect conclusions from these tracking-based expectation metrics). Nevertheless, it’s an interesting piece of data to consider.

Play-calling predictability (Based on XPASS)

Here is a look at the percentage of qualified plays in which each team went against the expected play call. (Running with an XPASS over 55% or passing with an XPASS under 45% – only including situations with a win probability between 20% and 80%).

Rank Team Unexpected Calls Qualified Plays %
1 CHI 212 492 43.1
2 DET 161 450 35.8
3 CAR 155 445 34.8
4 TEN 165 494 33.4
5 PHI 145 450 32.2
6 ATL 145 450 32.2
7 CLE 161 503 32.0
8 WAS 160 510 31.4
9 NYG 162 525 30.9
10 BAL 158 517 30.6
11 DAL 166 552 30.1
12 BUF 170 567 30.0
13 HOU 144 498 28.9
14 GB 128 443 28.9
15 DEN 171 606 28.2
16 LA 116 421 27.6
17 NE 129 475 27.2
18 KC 137 508 27.0
19 NO 115 430 26.7
20 PIT 149 561 26.6
21 SEA 131 495 26.5
22 CIN 114 438 26.0
23 NYJ 135 521 25.9
24 SF 115 480 24.0
25 MIA 118 495 23.8
26 JAX 120 506 23.7
27 MIN 112 473 23.7
28 LV 111 504 22.0
29 LAC 128 598 21.4
30 ARI 109 515 21.2
31 IND 108 512 21.1
32 TB 100 512 19.5

Ranking 23rd as he went against the grain just 25.9% of the time, LaFleur leaned toward the predictable side. The league average was 27.9%.

Interestingly, the Jets ranked ahead of some great offenses, including LaFleur’s colleagues in Miami (23.8%) and San Francisco (24.0%). Perhaps predictability is an overrated trait.

Nevertheless, more unpredictability from LaFleur would have been useful considering his team’s lack of talent. You can afford to be predictable when the other team can’t stop you even when they know what’s coming, but when your team is not talented enough to out-execute the opponent, the play-caller needs to help his players out by putting them one step ahead of the defense.

Cumulative rankings

Here is a look at each team’s average ranking between the four categories we analyzed today.

Team Average Wide Open Unaccounted Screen YAC Predictability
DET 6.5 18 4 2 2
CHI 9.3 4 23 9 1
SF 10.0 8 2 6 24
CLE 10.3 14 9 11 7
LA 11.5 19 10 1 16
ATL 12.0 5 20 17 6
KC 12.0 3 24 3 18
MIA 13.5 1 12 16 25
JAX 14.0 9 16 5 26
TEN 14.5 32 14 8 4
PHI 14.8 7 29 18 5
ARI 15.3 12 5 14 30
BUF 15.3 2 25 22 12
BAL 15.5 10 27 15 10
MIN 15.8 11 15 10 27
HOU 16.5 20 7 26 13
DAL 16.8 21 3 32 11
LAC 17.3 23 13 4 29
DEN 17.8 17 32 7 15
NYG 18.0 31 8 24 9
NE 18.3 30 1 25 17
WAS 18.3 25 19 21 8
CAR 18.5 26 17 28 3
NO 18.8 6 30 20 19
GB 19.8 24 11 30 14
SEA 19.8 13 26 19 21
NYJ 20.0 16 28 13 23
CIN 20.5 27 6 27 22
LV 22.0 15 22 23 28
PIT 22.8 22 18 31 20
TB 23.5 29 21 12 32
IND 29.8 28 31 29 31
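The “Average” column above is simply the mean of the four per-metric ranks. A quick sketch, using the Jets’ and Lions’ rows from the table:

```python
# Reproduce the cumulative ranking: average of the four category ranks
# (wide open, unaccounted pressures, screen YAC, predictability).

def average_rank(ranks):
    return round(sum(ranks) / len(ranks), 1)

print(average_rank([16, 28, 13, 23]))  # NYJ -> 20.0
print(average_rank([18, 4, 2, 2]))     # DET -> 6.5
```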

Despite placing in the top half of two categories, LaFleur still ranked sixth-worst overall with an average ranking of 20.0 across the four categories. His lack of an elite skill in any of the four categories is what buried him.

With this ranking, it does seem that LaFleur was indeed a poor offensive coordinator even when isolating him from the players. He was fine in some areas, but he really struggled in others, and he wasn’t great at anything.

This study appears to support the case that New York did make the right decision by moving on from LaFleur.

Originally posted on Jets X-Factor
