(07-09-2020, 12:22 AM)iStegosauruz Wrote: Question on methodology: how did you factor in strength of schedule or each team's starting point? Maximizing your % chance to win is the obvious goal, but not every team starts every matchup with the ability to hit 50%. A team that starts at 15% and gains 27% to reach 42% is still down 8%, but it made substantial gains.
I know you based it off the previous game's numbers, but that excludes a few key variables: strength of schedule, home and away splits, and the ability to run variant strategies.
Adding up % purely for gains on the year also ignores average gain or loss, which would be another metric to gauge by. For an accurate statistical average you'd drop a high and a low to normalize somewhat and get rid of outliers, something that would change the outcomes drastically. For example, Austin loses 11% because of W2 at New Orleans, a major outlier since it's the second-largest change of any team on any week. This would also normalize the Austin matchup in W3 against Orange County, where the strategy that was dropped was obviously improved on in the other direction.
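The drop-a-high-and-a-low averaging idea can be sketched as below. This is a minimal illustration, not anything from the actual analysis: the weekly win% changes are made-up numbers, and `trimmed_mean` is a hypothetical helper name.

```python
def trimmed_mean(changes):
    """Average a list of weekly win% changes after discarding the single
    highest and single lowest value, blunting one-off outlier weeks."""
    if len(changes) < 3:
        raise ValueError("need at least 3 data points to trim both ends")
    trimmed = sorted(changes)[1:-1]  # drop one min and one max
    return sum(trimmed) / len(trimmed)

# Hypothetical season of weekly win% changes for one team; the -11.0 and
# 27.0 stand in for outlier weeks like the Austin W2 swing described above.
weekly_changes = [4.0, -11.0, 6.5, 2.0, -1.5, 27.0, 3.0]

raw_avg = sum(weekly_changes) / len(weekly_changes)
print(round(raw_avg, 2))                        # pulled around by outliers
print(round(trimmed_mean(weekly_changes), 2))   # outliers removed
```

With the two extreme weeks dropped, the average reflects the typical week much better than the raw mean does.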
Just food for thought. I do find this interesting, and I accept the comments that it's not a shot at anyone in particular, but the opening allusion to a meme, the other responses by Sarasota players, comments within the post, and the way the methodology was constructed to avoid outliers and normalize data all tend to contradict that to me, and they also skew the data.
I based it solely on what you have as a starting point from the previous week. I'm not sure how the teams start their testing round, but if you take the sim-file from the week before and run the test for the next game, without changing any strategy or tempo, you have a baseline for both teams facing each other the next week. I took this win% as the baseline, then took the final sim-file and ran the matchup again. Comparing the two values gives the final result.
The only thing I find questionable here is the single sim with 500 tests, which leaves plenty of room for variance. I could have run a matchup multiple times to get an average win%, but in the end that was more time than I wanted to spend running the sim. That's why I just did one run of 500 tests for the matchup with the sim-file from the week before, and one run of 500 tests with the final sim-file for the week.
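The variance point above can be sketched as follows. Everything here is assumed for illustration: `run_sim` is a stand-in for the actual simulator (modeled as 500 coin flips at a team's "true" win probability), and the `.sim` file names are made up.

```python
import random

def run_sim(sim_file, n_tests=500, true_win_pct=0.55):
    """Stand-in for one simulator batch: each of the n_tests games is a
    coin flip at the team's assumed 'true' win probability. Returns a
    win% for that batch, which wobbles from run to run."""
    wins = sum(random.random() < true_win_pct for _ in range(n_tests))
    return 100.0 * wins / n_tests

def averaged_win_pct(sim_file, batches=5, n_tests=500):
    """Average several independent batches; the variance of the average
    shrinks compared to a single 500-test run."""
    return sum(run_sim(sim_file, n_tests) for _ in range(batches)) / batches

random.seed(42)  # fixed seed just to make the example repeatable

# One 500-test run vs. an average of five: same methodology as the post
# (baseline sim-file vs. final sim-file), but with less run-to-run noise.
one_batch = run_sim("wk11_baseline.sim")         # illustrative file name
averaged  = averaged_win_pct("wk11_baseline.sim")
print(one_batch, averaged)
```

A single 500-test batch at a true 55% has a standard deviation of roughly 2.2 percentage points, so averaging five batches roughly halves the wobble; that is the trade-off against the extra sim time mentioned above.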
![[Image: 011p.png]](https://i.postimg.cc/0ytdxNZ0/011p.png)
![[Image: 11-win-WR.png]](https://i.postimg.cc/Qdh5wMgY/11-win-WR.png)