The Value of Uncontested vs. Contested Drops

Dec 20, 2019

Hello, we are Prodigy Analytics, an esports statistics company, back again to bring you our second post. This time we will be covering something we touched on briefly in our first post: the impact of having a drop zone uncontested vs. contested (commonly abbreviated UC and C, respectively). With that, let's jump in.

Being uncontested at drop is a strong advantage over being contested: even teams that win their contested drop still average worse finishes than they do when uncontested. The notion of "free half pots" is outdated and inaccurate.

First, note that the data we will be presenting in this post applies specifically to NAE. Each region displays its own unique traits and tendencies (beyond just UC vs. C drops), but for the sake of brevity we will only cover NAE. Generally, though, EU displays similar trends regarding UC vs. C drops, whereas NAW is its own unique beast. In NAW, having a drop UC does not seem to be as important, possibly due to a variety of factors. Our current hypothesis boils down to a larger skill gap between the top teams and those in the next tier, as well as a shallower pool of top teams. This is based on additional data regarding consistency, elimination patterns, and performance relative to lobby difficulty, as well as other factors that are outside the scope of this post.

Moving on to the meat of this post, we observe a strong correlation between having drops uncontested and performing well. This may seem intuitive, even obvious, yet we feel it warrants sharing the data on just how powerful the effect is. We still see many teams and players cling to the wayward notion that being contested means “free 50 pots”, or openly challenge other teams to contest them in some sort of display of digital bravado. Moreover, it’s not simply that uncontested teams perform better: even the teams that win their contested drop still perform worse on average than they otherwise would.

The first data we would like to present is the average placement of teams when contested vs. uncontested for each week dating back to FNCS Trios Grand Finals. We measured each team's average placement when UC vs. C for each week/event individually, and only used teams that had some games UC and some C. In other words, teams that were C in 6/6 games were not evaluated (for this specific examination), as there were no UC games against which we could compare. Shown below is a graph that illustrates the average placement per team when UC (blue) and C (red).

chart1

Shown here are the average placements for teams when UC vs. C, dating back to FNCS Trios Grand Finals.

The numbers along the X-axis represent each team's actual final placement for the given week measured. Note: due to space limitations the graph was not able to fit all the teams; however, more detailed graphs and data are to follow, so don't get too hung up on this particular image.
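To make that methodology concrete, here is a minimal sketch in Python. The game records below are purely illustrative (our actual dataset and pipeline are more involved), but the filtering rule is the one just described: a team must have at least one UC and one C game to be included.

```python
from collections import defaultdict

# Illustrative per-game records: (team, was_contested, placement).
games = [
    ("Team A", False, 4), ("Team A", True, 18), ("Team A", False, 7),
    ("Team B", True, 21), ("Team B", False, 12), ("Team B", True, 15),
    ("Team C", True, 9),  ("Team C", True, 14),  # C in every game -> excluded
]

buckets = defaultdict(lambda: {"UC": [], "C": []})
for team, contested, placement in games:
    buckets[team]["C" if contested else "UC"].append(placement)

for team, b in sorted(buckets.items()):
    # Only evaluate teams with at least one UC and one C game.
    if b["UC"] and b["C"]:
        avg_uc = sum(b["UC"]) / len(b["UC"])
        avg_c = sum(b["C"]) / len(b["C"])
        print(f"{team}: avg placement {avg_uc:.2f} UC vs {avg_c:.2f} C")
```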

The data shows a total of 80 teams that fulfilled our requirement of having some amount of both UC and C games in the week measured. Of those 80 teams, 75% (60/80) averaged better placement when uncontested. While that may not seem surprising, note also that the difference in average placement was greater across the board for teams that averaged better placement when uncontested, with the average difference shown below.

chart1

When teams averaged better placement in contested games it was by 3.80 places, compared with 7.16 places for teams that averaged better placement in uncontested games.

Now some of you may be ready to point out an obvious weakness of considering the data in this light; namely, that when teams are contested, the team that loses the fight at drop will be one of the first teams eliminated, which will drag down their average placement in C drops considerably. That is true, but it is something we considered and accounted for in several ways, some of which we will present here.

First, we looked at the average placement for teams that “won” their drop. We then compared this with their average placement when uncontested, in order to ascertain whether their placement was impacted even when winning a contested drop. The data shows that out of 121 cases (limited by the number of teams for which we have the appropriate uncontested average placements to compare against), there were 38 in which a team placed better than their average UC placement. The graph below shows this in visual form.

chart1

Of 121 instances, only 38 times (31.4%) did a team average better placement in a contested game than their uncontested average placement.

This helped to further reinforce the benefit of being UC, but even so doesn’t quite tell the whole story. We then examined the placement for the same set of teams across the same weeks, and obtained an average placement of 12.73 when UC, vs. an average finish of 16.01 when they won their C drop.

Not only that, but being eliminated early (thus worsening average placement in C games) has ramifications beyond placement and placement points alone. Teams that are among the first to be eliminated have a lower “floor” than other teams in terms of potential points. Intuitively this hopefully makes sense: even if you don’t obtain top placement (and the corresponding placement points), simply being alive longer gives you additional opportunities for eliminations (and thus points), an opportunity removed once eliminated. In games that are high variance, one of the critical components of the success of top players/teams is the ability to minimize variance and/or its impact. When a team continually drops contested, they are doing the exact opposite--increasing the amount of variance present. If we look at the average points from UC teams' worst 3 games compared with the average points from C teams' worst 3 games, we notice a significant difference, as shown below.

chart1

On the left we have the average points from the bottom 3 (of 6) games from UC teams, and the bottom 3 games from C teams on the right.

Using the previous graph, UC teams average 3.41 pts per game for a total of 10.22 points in their bottom 3 games, compared with 1.24 pts per game and 3.71 total points for C teams. This means that over the course of a typical 6-game event, UC teams will, on average, score an additional 6.5 points from their worst three games alone. To put that in perspective, an additional 6.5 points would have meant an additional $15k in prize money for Nate Hill and team (8th->6th), or an additional $112k for Zexrow and his team (2nd->1st) in the FNCS Squads Finals.
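For anyone who wants to verify the arithmetic:

```python
uc_per_game, c_per_game = 3.41, 1.24        # avg pts/game in bottom 3 games

uc_total = uc_per_game * 3                  # ~10.22 points
c_total = c_per_game * 3                    # ~3.71 points
print(f"UC edge per event: {uc_total - c_total:.2f} points")  # ~6.5
```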

Another way we analyzed the impact of UC vs. C drops was rather simple: we looked at the final placement distribution of teams that were contested ≥2 times vs. those that were contested <2 times. We chose these ranges because teams that were contested <2 times displayed the lowest impact on their final placement. The results can be seen below.

chart1

Average final placement of teams contested <2 games (blue, of 6 games) and contested ≥2 games (red, of 6 games).

As you can see, the average placement was better across the board for those that were contested fewer than 2 times, compared with those who were contested 2 or more times. Of note: since Trios Finals had 32 teams (as opposed to 24/25 in Squads), we adjusted for this by converting the placements obtained there into their equivalents based on 25 teams.
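We won't detail the exact adjustment here, but a simple linear rescale (an assumption for illustration, not necessarily the formula we used) captures the idea:

```python
def rescale_placement(placement: int, from_teams: int = 32, to_teams: int = 25) -> float:
    # Map 1st -> 1st and last -> last, interpolating linearly in between.
    return 1 + (placement - 1) * (to_teams - 1) / (from_teams - 1)

print(rescale_placement(1))    # 1.0
print(rescale_placement(16))   # ~12.6
print(rescale_placement(32))   # 25.0
```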

Perhaps even more insightful is the distribution of these teams, especially towards the top of the final standings. Across the events tracked there were a total of 156 teams, 76 of which were never contested at their drop (no longer considering teams that were contested even once). Of those 76, 46 teams then finished in the top ten for the given event (60.53%). Of the teams that finished top ten in each event, 65.71% were teams that were not contested once in the 6 games played. In fact, on average, the Top 4 teams in final placement were never contested. Going even further, the data also shows that the higher the total number of games contested, the worse the final placement. This can be seen in the image below, which shows the average number of games contested for teams in the Top 10 vs. the bottom 10 teams (16th-25th).

chart1

Shown here are the average number of games contested for teams that finished Top 10 overall for each event (blue) and teams that finished in the bottom 10, 16th-25th (red).

The data also displays a positive correlation: more C drops = greater placement (greater in integer value, i.e. 24, 24th, is a greater value than 5, 5th). We obtained a correlation coefficient of 0.347, which is > 0.159, the critical value of r for our degrees of freedom, indicating a statistically significant relationship.
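For those curious, this kind of check is a standard Pearson correlation compared against the critical value of r for the sample size (equivalently, a p-value check). A sketch with illustrative numbers, using scipy:

```python
import numpy as np
from scipy import stats

# Illustrative data: total games contested per team vs. final placement.
games_contested = np.array([0, 0, 1, 2, 2, 3, 4, 4, 5, 6])
final_placement = np.array([3, 7, 5, 10, 14, 12, 18, 20, 17, 23])

r, p = stats.pearsonr(games_contested, final_placement)
# An r above the critical value for the degrees of freedom (0.159 in our
# data set) -- equivalently p < 0.05 -- indicates a significant correlation.
print(f"r = {r:.3f}, p = {p:.4f}")
```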

Moving on to some of the last data we will share on this topic, we first have the total wins from teams UC vs. C, seen below.

chart1

This table displays some of the composite values obtained for the series of events included in the focus of this work.

This table shows some additional information as well, such as the average number of C and UC teams per week, as well as the total drops for each. In 42 total games, only 4 were won by teams from a contested drop: 9.52% of the games. It would make sense that UC teams won a higher total percentage of games, since there are more games from teams at uncontested drops. However, there is still a disparity in that the 31.68% of drops that were contested accounted for only 9.52% of the wins. If we assume an equal chance of winning a game for every team, then we would expect roughly 31% of the games to have been won by contested teams, or roughly 13/42 games. There are other factors that could influence this; perhaps less skilled teams are responsible for a greater number of contested drops, thus skewing the win rate of contested teams. More research will need to be done on this, but it certainly is of interest.
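One way to formalize that disparity (an illustrative check, not part of our pipeline) is a one-sided binomial test: if every team were equally likely to win, how surprising would 4 contested wins in 42 games be?

```python
from scipy import stats

contested_wins = 4        # games won from a contested drop
total_games = 42
contested_share = 0.3168  # share of drops that were contested

result = stats.binomtest(contested_wins, total_games, contested_share,
                         alternative="less")
print(f"p = {result.pvalue:.4f}")  # small p -> contested teams underperform chance
```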

The last thing we would like to present is more specific data regarding the Squads Finalist teams. As mentioned, 18 of the FNCS Squads Finals teams had previously made a weekly final at least once. For those teams, we analyzed their data for a number of things, some of which are displayed in the graph below.

chart1

This graph displays the average placement when C (red) & UC (blue) for the 18 FNCS Squads finalists that had played in at least 1 weekly final prior.

The graph shows the average placement (per game) for each of those 18 teams. As you can see, it follows the trends we’ve discussed, with only 3 of the 18 teams averaging better placement when contested, and even in those cases by a smaller margin than their counterparts. Accompanying this, we also have data that shows the average final overall placement for the teams based on whether they were UC or C (minimum of 2 C games required to fall into the latter category). Worth noting here is that 4 teams had a better final placement in the weeks they were contested multiple times. For 3 of those 4 teams, despite having a worse average final overall placement, their best individual finishes occurred in the weeks that they were uncontested (2nd, 5th, and 10th, respectively).

To close, there is one additional matter we would like to bring up, and that is the challenge posed by teams that clearly agree beforehand to split a drop spot and not fight each other off spawn. In the truest sense it is a contested drop, as two teams are landing at one location. However, in our testing we were specifically looking for landing at the same POI as well as engagement with the other team. Yes, even if two teams don’t fight, they are still splitting the overall loot available, which is, in theory, a disadvantage. However, this is akin to two teams landing at separate POIs, one of which has more loot than the other. There are also some potential advantages to having a predetermined agreement to split a drop. Teams that switch drops often might be less likely to drop at your POI when they see two teams, and instead drop with a solo team at another POI. This gives the two teams in agreement an advantage, in that they know they are safe, and they ward off other potential teams for each other that otherwise might have landed and fought them. There is also the potential that if a third team does land (one not aware of the agreement), a fight breaks out that draws both teams' attention. The two teams in agreement would immediately understand that it is an outsider team firing at them, and could essentially “team” on that third team without it being "teaming" in the way traditionally thought of.

I certainly don’t mean to imply that teams that agree to split a drop are cheaters, and am not looking to tarnish anyone's name or reputation. I do believe that this falls into a gray area of sorts though, and is questionable at best. It bears monitoring to make sure that it never spills over into something more concerning. Cases such as this also serve to distort our data somewhat. We will continue to refine our working definition of “contested”, and can hopefully account for even these cases more accurately in the future.

With that, we will close. Thank you for reading the post; we hope you enjoyed what we had to share.


NAE FNCS Squads Finals Recap

Dec 16, 2019

We are an analytics company that provides data for esports. We wanted to share with the community some of our data and recap the NAE FNCS Finals from this past weekend. Numbers to follow.

Hello, we’re Prodigy Analytics, and we are the team that provided the statistics for this past weekend's broadcast of the FNCS Finals on the Practice Server Twitch channel. Up to this point we have, for the most part, only shared our data and work with select parties, but we wanted to take this opportunity to give you a sample of our work and provide some deeper insights into this past weekend's Finals for NAE. With that, let’s go ahead and jump into it!

First, there are several abbreviations/terms we should get out of the way:

  • EP% & PP% - Elimination Point & Placement Point percentage; the percentage of a team's points that were derived from each of those two sources.
  • Conversion rates - A catch-all term for the rate at which a team converts a game into a placement threshold; also used for the rate at which a team converts a given placement into a higher bracket (e.g. the Top 8-Top 4 conversion rate: the rate at which a team converts games in which they made Top 8 into Top 4 finishes).
  • TTFE- Stands for Time To First Elimination, which is generally the average time for a team/region/heat/particular week/etc to obtain their first elimination within a game. I swap in and out of the acronym often in my work, so forgive me for any inconsistency in its use.
  • Clutch Factor- One of our proprietary metrics used to evaluate how players perform in “Clutch moments”, i.e. when down one or more teammates. More information will be provided in the sections discussing this.
  • Relative Location to Zone- This refers to a designation 1-5 on how far a team is from zone; the higher the number, the further away. We have done a good deal of work regarding the impact that “getting zone” has on placement, and this is one of the ways we used to measure that.

I will try to keep this as “light” as possible to make it more accessible to all interested parties, but forgive me if I wander. Lastly, this is in no way meant to disparage or put down any players. All teams did an incredible job to make it to the Finals, and that alone should be celebrated. Most of this will be presenting our data, with a touch of insight into how the data can be applied. Including all of the application would double the length of this post, and so I will refrain from doing so at this time. This is also just a sample of the total data we collect and incorporate into our work, but hopefully proves insightful. With that, let's begin.

So first, let’s introduce you to the EP% & PP% for the four weekly qualifier Finals + the Warmups round, as seen below.

chart1

Cumulative EP% and PP%'s for all four NAE weekly qualifier finals

The percentages are broken down into separate placement ranges, so as to better illustrate the differences between placement thresholds. Typically the EP% & PP% are more insightful when considering the semis (i.e. the Saturday games during weekly quals), but we can still extract some valuable insight here.

Typically speaking, we would like to see a roughly 55/45 split (or vice versa), within ±5% or so. When teams veer further than that in either direction, it raises a red flag of sorts. It certainly isn’t a death knell for a team, but it does indicate an over-reliance on one source of points. Fortnite is an inherently high-variance game, and the teams that perform best are able to minimize that variance as much as possible. Teams that are over-reliant on one source of points are putting themselves at the mercy of the cruel mistress that is variance. Not only that but, generally speaking, as the level of competition rises, so too does the need for balance.
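As a rough sketch, the red-flag heuristic can be encoded directly; the 40-60 band below simply restates the 55/45 ± 5% rule of thumb.

```python
def point_split(elim_points: float, placement_points: float) -> tuple[float, float]:
    # Returns (EP%, PP%) for a team's point totals.
    total = elim_points + placement_points
    ep = 100 * elim_points / total
    return ep, 100 - ep

def red_flag(ep_pct: float) -> bool:
    # 55/45 either way, within +/- 5%, keeps EP% inside roughly 40-60.
    return not (40 <= ep_pct <= 60)

ep, pp = point_split(elim_points=38, placement_points=22)
print(f"EP% = {ep:.1f}, PP% = {pp:.1f}, red flag: {red_flag(ep)}")
```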

Of note is that it is typically more worrisome to see an over-reliance on PP% as opposed to EP%. If a team is regularly making placement thresholds, they should naturally be near the desired split, by virtue of the fact that placement drives eliminations, not vice versa. The eliminations obtained from achieving placement should naturally work to balance out the split, so if that isn’t the case, there is usually cause for concern.

The EP% and PP% for the NAE Finals also showed just how hyper-competitive the finals were. For comparison, AdonisFN’s team finished 22nd overall but still had a game in which they placed 5th, the highest single-game placement for a team that low across all 4 weekly finals plus Warmups. Additionally, only 3 teams finished without any placement points, tied (with Week 4) for the fewest of all qualifier weeks. One of those 3 teams also had the highest finish across the weekly finals of any team that did not score any PPs: A1 Eeasu’s team at 15th. These are some of the ways to illustrate how competitive and wide open these finals were, as almost every team had a realistic chance at placing Top 6.

One last thing to note on the EP% and PP%: we recognize that there are a finite number of points awarded in these games, and thus the EP% & PP%s are going to be subject to variance as you move down the placements; towards the bottom they are going to be more heavily skewed by one or two games. As mentioned, this is more useful when analyzing semis data, and is also why we break it down into separate ranges when doing our analysis in-house.

Next let's look at some of the conversion rates for NAE and specific teams. The image below shows what a typical conversion rate chart looks like.

chart1

Read left to right, it displays the rate at which teams convert total games into each placement threshold, and then each placement threshold into the subsequent ones.

The above image shows the cumulative conversion rates for the Top 6 and Top 10 teams from all 4 weekly qualifiers. Similar to EP% and PP%, it should be noted that the conversion rates are interpreted and applied differently when analyzed for semis vs. finals. In particular, given that in weekly finals (and heats/grand finals) there are only 48 possible slots to receive placement points, there is a limit to the maximum conversion rate each team can obtain. Said another way, it is not possible for every team to have a 100% Total Games-Top 8 conversion rate (for example), as only 8 of 24 teams each game will reach that threshold. This may seem obvious, but it is still important to note. I will save delving further into this point, as it isn’t necessary for the scope of this post, but we have additional data about these limits and the constraints they impose.
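To make the left-to-right reading concrete, here is a small sketch of how such rates can be computed for a single team's series. This is an illustrative reading of the chart's structure, not our exact computation.

```python
def conversion_rates(placements: list[int]) -> dict[str, float]:
    # Each stage filters the games that made the previous threshold.
    rates, pool, prev = {}, placements, "Total"
    for threshold, label in [(8, "Top 8"), (4, "Top 4"), (1, "Win")]:
        made = [p for p in pool if p <= threshold]
        rates[f"{prev} -> {label}"] = len(made) / len(pool) if pool else 0.0
        pool, prev = made, label
    return rates

# A 6-game week: four Top 8s, two of them Top 4s, one of those a win.
print(conversion_rates([1, 11, 6, 4, 19, 7]))
# -> Total->Top 8: ~0.67, Top 8->Top 4: 0.50, Top 4->Win: 0.50
```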

We will specifically be examining the conversion rates for weekly finals, heats, and grand finals. I’ve also chosen to focus specifically on the conversion rates for the Top 6 and Top 10 teams (Top 6 only for heats), as this is where we observe the strongest correlation with future performance. In addition to the previous image showing the cumulative average conversion rates for weekly qualifiers, below we have the rates for the Top 6 teams from each heat, and the cumulative heat rates. The values are fairly similar to those observed from qualifiers (small sample size and all, for the heats). One point I would like to highlight is the Total Games-Top 8 rate for each of the 4 heats.

chart1

The conversion rates for all four NAE Heats

chart1

The cumulative conversion rates for all four NAE Heats (for Top 6 teams)

Looking at the Total Games-Top 8 rate for each heat, we see that heats 1 & 2 come in right at the average rate for the 4 weekly qualifiers, but heats 3 & 4 come in roughly 8% & 10% higher, respectively. Again, the small sample size should be noted, but this lines up interestingly with the strength of the respective heats. Heats 1 & 2 had the highest average finals/team (of teams that had made a finals, as opposed to all 25 teams/heat; an important distinction, because teams that made finals previously are heavily favored to advance), headlined by Heat 2’s value of 2.50 finals/team. Heats 3 & 4 were the weaker of the 4 heats, with Heat 4 having the lowest value at 1.91 average finals/team. To further illustrate the strength of the two sets of heats, 7/10 teams in the finals came from heats 1 & 2, compared with 2 from Heat 3 and 1 from Heat 4. Tying back into the Total-Top 8 conversion rate, this is a possible reason for the higher rates in heats 3 & 4: those heats were easier, and thus allowed the top teams to convert at a higher than usual rate.

The last thing I would like to point out from the conversion rates are those from the finals, as seen below.

chart1

Conversion rates for NAE Grand Finals

As you can see, the rates line up similarly with the cumulative rates from the weekly finals. The finals rates are, however, slightly lower in several categories, because of a point I touched on earlier with EP% and PP%: the competitiveness throughout. As noted there, we saw the highest single-game finish from a team that finished as low as 22nd (a 5th), and we also observed the highest rate of teams (tied with Week 4) that had some points from placement (thus absorbing some of the limited placement slots that contribute to conversion rates--slots that otherwise might have gone to some of the top teams).

To help show how conversion rates can serve as a predictor of performance, let us look at two different cases, both involving commonly accepted “Top Teams” that made finals in all 4 weeks of qualifiers. In the first case we have FaZe Megga’s team: they boasted some of the best conversion rates across the board for NAE, coming in directly in line with the numbers observed for the Top 6 averages.

chart1

FaZe Megga Squad conversion rates

True to form, they placed 3rd overall in finals, tying with UnknownArmy’s team for the most games converted into at least a Top 8 (5/6). In the second case we have SEN Bugha’s team, whose numbers came in substantially lower than those of other “Top teams”, and were saved from being even more alarming by a solid performance in the Week 3 finals.

chart1

SEN Bugha Squad conversion rates

Bugha’s team finished a disappointing 21st overall in finals, only making Top 8 once in their 6 games.

Hopefully this gives you some insight into how the conversion rates can be applied. There are certainly many additional ways we use and incorporate them, but I’ll spare you that for the time being.

Next I would like to show some of the elimination data from the weekly qualifiers as well as the finals. Across the 4 weekly finals, the average time to first elimination was 0:12:49 for teams that placed in the Top 10 each game. Put another way: in a given game, teams that placed Top 10 didn’t obtain their first elimination until after the close of Circle #2. This value increases to 0:13:06 for the finals, a time higher than all but the Week 3 finals (0:13:36). This suggests that teams, in general, were playing tighter than average, a fact supported by the higher-than-average players-alive count at the close of each zone, shown below.

chart1

Average players alive at the close of each circle, for the weekly qualifier finals (left) and Grand Finals (right).
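As a quick aside on those TTFE figures: timestamps like 0:12:49 are easiest to compare once converted to seconds. A trivial helper:

```python
def to_seconds(ts: str) -> int:
    """Parse an H:MM:SS timestamp such as '0:12:49' into seconds."""
    h, m, s = (int(p) for p in ts.split(":"))
    return h * 3600 + m * 60 + s

weekly_avg = to_seconds("0:12:49")   # weekly-finals Top 10 TTFE
finals_avg = to_seconds("0:13:06")   # Grand Finals Top 10 TTFE
print(finals_avg - weekly_avg)       # 17 -> finals elims came ~17s later
```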

Those players-alive values don’t tell the whole story, though, as they are somewhat artificially suppressed due to storm surge. Teams began to play more aggressively as games went on, as shown by the image below, a trait typical of 6-game series.

chart1

The number of players alive at the close of circle one in each game of NAE Grand Finals

Of note is that in game 2, for the first time throughout all the weeks of NAE FNCS Squads finals (and Heats), there were 0 players eliminated during the first 2 zones (the game started with 95 players, missing Liquid 72hrs).

Going back to the TTFE data, what we’ve collected illustrates something we’ve touched on previously--that placement drives higher elimination totals--and this trait is seen consistently in the top performing teams. There are certainly some teams that veer slightly from the norm, with a lower TTFE and more eliminations in the early stages of games, but generally this trend holds. This segues nicely into the elimination distribution by time, per team.

Top teams derive a greater number of eliminations from the later stages of the game and, generally speaking, teams that veer from this are less consistent and don’t perform to the same level. Not only that, but there is also much greater variance in results for teams that rely on early-game kills. From the FNCS weekly finals through grand finals, only 8 of 50 total teams (10 teams each week + 10 teams in finals = 50 teams total) that finished in the Top 10 overall averaged more than 1 elimination before zone 2, with an average finish of 6th and a highest placement of 3rd. This also ties into data we’ve presented that illustrates just how important it is to have an uncontested drop, the premise being that uncontested teams are less likely to have early-game eliminations. I won’t present the uncontested vs. contested data at this time, but in essence it says that you are substantially more likely to perform better uncontested than contested (as intuitive as that may be), and that even teams that win their contested drop still perform much worse on average.

Shown below are the total eliminations by zone for each of the Top 10 teams from finals, showing the periods in which each team was most/least active.

chart1

The graph displays the distribution of eliminations for teams that placed in the Top 10 of NAE Grand Finals.

Next we can see the average percentage of total eliminations obtained during each zone for Top 10 teams across the weekly qualifier finals, and the same data for the Grand Finals, in the images below.

chart1

The graph displays the percentage of the total eliminations for teams that finished Top 10 overall during the four weekly qualifier finals.

chart1

This graph displays the percentage of total eliminations for the Top 10 teams from NAE Grand Finals.

Of note is that, due to some games ending earlier than others, not all games will have an 8th or 9th zone, which impacts those numbers slightly. These charts tie into much of what we’ve already discussed on the overall aggression level being slightly muted in finals, with lower values than average until zone 5. This also ties back into the point we’ve been harping on: that placement drives high elimination totals. Across the 4 weekly qualifier finals, Top 10 teams had a total of 600 eliminations through zone 5, typically 0:19:50 into a game, compared with 667 after the close of zone 5, a period at most 0:04:25 in length. This means that, for Top 10 teams each week, 52.64% of their total eliminations occurred during only 18.21% of the total game time, compared with 47.36% in the other 81.79%. This, again, helps to illustrate where exactly eliminations occur for top teams, and where teams should prioritize looking for them.
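The density comparison is easy to verify from the quoted totals and timings:

```python
pre_elims, post_elims = 600, 667   # elims through zone 5 vs. after zone 5
pre_time = 19 * 60 + 50            # ~0:19:50 of game time through zone 5
post_time = 4 * 60 + 25            # at most ~0:04:25 after zone 5

total_elims = pre_elims + post_elims
total_time = pre_time + post_time
print(f"{100 * post_elims / total_elims:.2f}% of elims "
      f"in {100 * post_time / total_time:.2f}% of the game time")
# -> 52.64% of elims in 18.21% of the game time
```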

Moving on to our next topic, we have “Clutch Factor”. Without going into (or revealing) exactly how we derive this value, it attempts to quantify which players perform best in “clutch moments”, which we define as playing down 1 or more teammates. We calculate Clutch Factor differently for weekly semis than for finals (and heats), and each round has its own distinct values and applications. As presented, it can be read as the total points contributed to the team total by the individual, compared with what the typical player would produce in the same situations, with higher values being “better”. It is also important to note that additional context is needed when analyzing individuals. For example, if a team had won 6/6 games in the finals with all players alive throughout, they would each have a Clutch Factor of 0, as there were no opportunities for them to accrue a score. So while some players/teams with lower values may not have performed well in clutch moments, others may have a low score simply from a lack of opportunities, due to a variety of factors. In the following images we have the rankings for players from finals alone, heats alone, and the combined totals from heats + finals.

chart1

NAE Grand Finals Clutch Factor Rankings

NAE Heats Clutch Factor Rankings

NAE Heats + Grand Finals Clutch Factor Rankings

The values shown are the (more) raw values, which are easier to understand than some of the additional manipulations we apply when incorporating the data elsewhere. This is why none of the players have a negative value, though some with a 0 value otherwise would. Note: due to Jamper’s squad being a replacement after scores were updated, we did not have the data for their performance from heats ready at this moment, thus their heats scores are shown as 0.

I will take a moment to highlight a select few players here, though otherwise will refrain from discussing individual and/or team performances in this post. Vanguard Kez actually generated the most additional value for his team in the finals, and had the highest composite value on his team from heats + finals (Ronaldo led the team in Clutch Factor in heats). FaZe Diggy and LZR kreo were the only players whose teams finished outside of the Top 10 to place within the Top 10 for Clutch Factor during finals (3rd and 4th respectively). The greatest disparity in Clutch Factor rank within a team was for TSM_Comadon and his teammates: he finished 6th overall, while his lowest ranked teammates (Highsky and Saf) were tied for 81st. This is a good example of opportunity playing a role in the scores generated, as Comadon’s team were generally all alive together, as can be seen in the table below, which shows the total additional time alive for each team (the sum of the total time down 1, 2, and 3 teammates).

This table shows the total additional time alive for all teams from NAE Grand Finals

The total time alive when down 1, 2, and 3 teammates is used individually for different things, but for the sake of brevity I will only show the total. On the other side, Clkzy and his teammates had the smallest disparity between the top and bottom players on their team in Clutch Factor, with Clkzy at 43rd and Tfue at 64th. UnknownArmy's and CizLucky's teams both had multiple players in the Top 10 for Clutch Factor: Vanguard Kez (1st), Avery (8th), and UnknownArmy (9th); and Liquid Vivid (5th) and Liquid Chap (10th).

Looking at the total additional time alive table above, there are a few things to note. The information that can be gleaned from this varies and, as is often the case, it is most effective when used in concert with additional data to create a more complete picture. For this particular table I did not include the additional time caused by a teammate not loading in (though we do use that data in our work), as it skews the data and I would rather keep this as simple as possible for the time being. One thing to note is that, however obvious, teams with a lower total time typically get there by having their deaths closely stacked together (or by winning with multiple teammates still alive). This is aided by the fact that deaths early in a game provide a greater upper limit on the time that could be played down one or more teammates, whereas later in the game that limit is lower (by virtue of games having a cap on possible length--the later in the game you are, the less potential time you could play down 1+ teammates). Having a lower time is certainly not a requirement for, or indicative of, team success, but it certainly does help. This can be seen in the higher concentration of our Top 10 teams toward the bottom of the table, yet despite this we still have Top 10 teams dispersed throughout.

Lastly on Clutch Factor, we have our adjusted sigma value. The Clutch Factor for each player is divided by the standard deviation, in order to express each score in terms of standard deviations. The order and ranking remain the same, but the values are adjusted. I won’t include those tables as well, but you can see the distribution in the pie graphs below, which show the percentage of players with adjusted sigma Clutch Factors within the specified ranges, for both Finals and Heats + Finals.

Orange represents players with an adjusted CF score of 0-1, Green 1-2, Yellow 2-3, Red 3-4, and Blue 4+ for the NAE Grand Finals. The orange and green comprise 93.75% of all finals players.

Orange represents players with an adjusted CF score of 0-1, Green 1-2, Yellow 2-3, Red 3-4, and Blue 4+ for the NAE Grand Finals + Heats. The orange and green comprise 91.67% of all finals players.
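Mechanically, the adjustment is a division by the standard deviation followed by bucketing into those ranges. Below is a sketch with made-up scores (the actual Clutch Factor derivation remains proprietary).

```python
import numpy as np

# Made-up raw Clutch Factor scores, for illustration only.
raw_cf = np.array([0.0, 1.2, 3.5, 0.8, 6.1, 2.4, 0.3, 9.7, 4.4, 1.9])

adjusted = raw_cf / raw_cf.std()        # express scores in standard deviations
bins = [0, 1, 2, 3, 4, np.inf]          # the 0-1, 1-2, 2-3, 3-4, 4+ ranges
counts, _ = np.histogram(adjusted, bins=bins)
for lo, hi, n in zip(bins[:-1], bins[1:], counts):
    print(f"{lo}-{hi}: {100 * n / len(adjusted):.1f}% of players")
```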

Lastly, I will share some of our data that should hopefully help to dismiss a commonly believed myth: the value of getting the zone ½ in-½ out. We’ve analyzed team performance from Week 1 through Grand Finals, and used it to generate the graph seen below.

Typically, if a data set shows correlation, the line of best fit will have a positive or negative slope, indicating positive or negative correlation. In the case seen here, the line of best fit is nearly horizontal, indicating a lack of correlation.

We measured the radius of the zone and segmented the surrounding area into specific distance bands to create criteria with which we could measure. The RL (relative location) scale is as follows: 1 - inside zone, in center; 2 - inside zone, on edge; 3 - outside zone, on edge; 4 - outside zone, 2nd-farthest region; 5 - outside zone, farthest region. The descriptions used are general, so as not to complicate the explanation. We then recorded the placement of each team as well as their distance, from the time the 5th zone first appears on the map. We also weighted 3 primary factors: the number of players alive, the number of teams between a team and zone, and the number of teams that fell into each of the 5 categorical distances.
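For illustration only, a simple version of this RL bucketing might look like the sketch below; the band widths are assumptions made for the example, not the exact distances used in our measurements.

```python
import math

def relative_location(team_xy, zone_center, zone_radius) -> int:
    # Bucket a team's distance from the zone center into the RL 1-5 scale.
    # The multipliers here are illustrative guesses, not our real cutoffs.
    d = math.dist(team_xy, zone_center)
    if d <= 0.5 * zone_radius:
        return 1          # inside zone, in center
    if d <= zone_radius:
        return 2          # inside zone, on edge
    if d <= 1.5 * zone_radius:
        return 3          # outside zone, on edge
    if d <= 2.5 * zone_radius:
        return 4          # outside zone, 2nd-farthest region
    return 5              # outside zone, farthest region

print(relative_location((120.0, 80.0), zone_center=(100.0, 100.0), zone_radius=60.0))  # 1
```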

As you can tell from the graph, the relatively horizontal distribution of the data points indicates a lack of correlation. In fact, the correlation coefficient obtained for the data set was 0.095, which is < 0.113, the critical value of r for our degrees of freedom (300), again indicating a lack of correlation between distance to zone and placement. This hopefully can begin to dispel the myths of “being marketable” and “didn’t get zone, go next”, and other similar outlooks on the importance of “getting zone”. This isn’t to say that you still shouldn’t want zone--you do--but rather that whether you actually get it or not has little correlation with your final placement. There is a great deal more information that I would love to share, but for a first course I hope this will satisfy. Please feel free to comment with any questions, and I will do my best to answer them.