TL;DR: DDSPF16 randomizes its results every 3 minutes and 34.7 seconds, based on your system clock
Foreword
I have been wanting to write this article for a while now but have not, mainly due to laziness. Additionally, life has gotten much busier for me recently, as I started a new job. However, with some downtime during the holidays, media paying out twice the normal amount, and the death of DDSPF 16 (and sim testing in its current form) quickly approaching, I figured now is as good a time as any to publish this article. This information may already be known to some members of the league, but I have not seen it published anywhere, so I am taking the opportunity to do just that.
I have just two quick shoutouts I want to make before I begin. First, to the members of the London Royals who introduced me to the concept of sim testing as a rookie. While I had limited control over my player’s performance on the field, helping the team sim test off the field was a tangible way for me to contribute to the team’s success. My other shoutout goes to @Maglubiyet and @slate for being solid dudes who I could bounce my thinking off of. As I said to both of them, I probably sounded like a conspiracy theorist when I first approached them with this information. But they heard me out and helped me make sense of it.
Introduction
I would venture to say that if you have been in the ISFL for at least a full season, you have most likely heard the phrase “sim luck” thrown around. Sim luck is how some people explain upset victories or players with ridiculous stat lines. Really, the phrase is just a convenient way for us to summarize the multitude of factors within DDSPF that determine a team’s or player’s performance without actually diving into the nitty gritty. Obviously, there are certain factors teams have control over, the primary ones being strategy and depth chart; smaller factors include tempo, primary receiver designation, etc. All that being said, it is very difficult to forecast the score or an individual stat line of a single game. The exact results are seemingly random every time a game is simmed. But what if you could control this randomization? This article will take a look at how I accidentally discovered one of DDSPF16’s randomization mechanics, how you can control it, and its very limited practical applications. For those of you who use Slate’s sim-batcher program, the information here provides context for its “System Time Manipulation” section.
Discovery
I first started sim testing for London about halfway through S24, using Mag’s AutoHotkey script as a starting point. Over time, we modified the script in-house to be more automated and combined it with an Excel macro to analyze game data at a high level. Running batches of 500 exhibition games at a time, the final Excel file would look something like the image below.
![[Image: WoBxwX7.png]](https://i.imgur.com/WoBxwX7.png)
At first, I thought these results were great. The script was working, and I could change strategy and directly see how it affected the team’s win rate. But upon closer inspection, I realized several of the columns were duplicates. In the above picture, for example, columns B and C, E and F, and H and I are exact copies of each other. For a while, I did not pay much attention to this. I would simply delete the duplicate columns and recalculate the average of each metric. However, this became problematic in larger batches, where roughly 40% of the data was duplicated, wasting a significant amount of time. I thought maybe the Games.csv file was not exporting correctly every time and the macro was reading from the old file, so I added a line to the script to delete the old Games.csv file after running the macro. That did not fix the issue. Maybe the macro was somehow being run twice with the Games.csv file open? I ran the macro manually after pausing the script and still received duplicate results. I decided to investigate a little further to see if I could mitigate the occurrence of repeat data.
The first thing I tried was adjusting the timings in my script, specifically reducing the amount of time between running each exhibition game. I figured maybe if I just ran the games faster, there would be fewer duplicates. This unfortunately had the opposite effect.
![[Image: ItxmkIg.png]](https://i.imgur.com/ItxmkIg.png)
As you can see above, there is only one unique column in this set of data. The other columns are all duplicates, with columns D, E, and F actually being triplicates. Since this was clearly the wrong direction, I tried something else. If running the batches faster made the duplicates more frequent, would running them slower eliminate duplicates? Instead of intentionally slowing down the script, I increased the batch size from 500 to 800. Not only would this cause the script to run longer, but it would also provide a larger sample size to analyze. After 10 batches of 800 exhibition games each, I came up with the following.
![[Image: Re6SalL.png]](https://i.imgur.com/Re6SalL.png)
Now that looks much better. This data set has only one pair of duplicate results, seen in columns E and F. So, to summarize: decreasing the total time per batch generated more duplicate results, while increasing the total time per batch led to fewer duplicate results. By this logic, it seemed there was a “sweet spot” in batch duration that would produce little to no duplicates. Since the previous test produced by far the fewest repeat columns, I decided to time one. From the initial game launch to launching the game again after running the Excel analysis macro, it took a total of 3 minutes and 21 seconds.
Explanation
Now I had the “what”, but not the “why”. I thought some more about this and recalled some programming classes I took in college. Disclaimer: I am not a programmer by any means, so I apologize for any technical inaccuracies in what I am about to say. Essentially, computers cannot generate true randomness, only pseudorandomness derived from a seed value. If you were to write a program that dealt you a hand of cards without ever changing the seed of its random number generator, you would get the same 5 cards every time you ran the program. I started to suspect something similar was happening here, but at roughly 3.5-minute intervals.
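To see this in action, here is a minimal PowerShell sketch (PowerShell runs on the same .NET platform the sim is built on): two Random objects constructed with the same seed produce identical sequences, which is exactly what a pair of duplicate batch columns is.

```powershell
# Two .NET Random objects built from the same seed produce identical
# "random" sequences -- the same pseudorandomness the sim relies on.
$seed   = 12345                          # arbitrary example seed
$first  = [System.Random]::new($seed)
$second = [System.Random]::new($seed)

1..5 | ForEach-Object {
    # Both columns print the same numbers on every run.
    '{0,3} {1,3}' -f $first.Next(0, 100), $second.Next(0, 100)
}
```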
It was at this point that I sent Mag a message, hoping that maybe he had encountered this before with his script. After opening with my randomization idea, walking him through everything I had done above, and much patience on his part, I brought up the 3-minute, 21-second figure and the apparent “sweet spot” for minimizing the amount of duplicate data. Mag did a little digging and came back to me with the concept of a “tick”. A tick in C# – the language the sim is coded in – is the smallest unit of time, representing 1/10,000,000th of a second. Additionally, a random seed in C# is a 32-bit integer, whose maximum value is 2,147,483,647. If we assume the seed is derived directly from the tick count, the seed repeats every 2,147,483,647 ticks, which equals 214.7 seconds. Converted to minutes, that is 3 minutes, 34.7 seconds! There it was, the missing link. This meant that if I could somehow time my batches to be exactly 3 minutes and 34.7 seconds long between game launches, I theoretically should produce no duplicate results and not have to waste time manually removing them from the resulting data.
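You can check the arithmetic, and sketch the suspected seeding, directly in PowerShell. To be clear, the seed derivation below is my guess at a scheme consistent with the observed window, not anything pulled from the sim’s code:

```powershell
# 1 tick = 100 ns, so the 32-bit signed integer limit expressed as time is:
[TimeSpan]::FromTicks([int]::MaxValue)   # 00:03:34.7483647

# Hypothetical seeding consistent with the observed window: the current
# tick count folded into a 32-bit seed repeats every Int32.MaxValue ticks.
$seed = [int]([DateTime]::Now.Ticks % [int]::MaxValue)
$rng  = [System.Random]::new($seed)
```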
Manipulation
I started off by trying to achieve exactly that: timing my batches to be as close to 3 minutes and 34.7 seconds as possible. By increasing the number of sims in each batch to 900, I was able to achieve a total batch time of roughly 3 minutes, 36.5 seconds. Pretty close to the desired 3:34.7. Those results are below.
![[Image: V2IU4UD.png]](https://i.imgur.com/V2IU4UD.png)
And just like that, there was no duplicate data. Mission accomplished, right? For the most part, yes. But I was still curious about this 3-minute, 34.7-second “randomization window”. How had I been able to close the game entirely and still get duplicate data? At one point I even tried running a batch of 500, exporting the data, restarting my computer, and running another batch to see whether duplicates persisted. Sure enough, they did. This led me to believe that the randomization window was not being timed by something within DDSPF16’s code, but by something outside of it.
The only candidate that seemed intuitive to me at the time was the Windows system time, which DDSPF16, as a program running on the system, would have access to. So I started looking into ways to manipulate the system clock within Windows. Having some previous experience with PowerShell, I turned in that direction to find a script that would allow me to stop system time entirely. While there seemed to be no way to truly “freeze” system time, I found a script that resets the system time to a specified value at a fixed interval. In this case, I would be resetting the system time to 3:00 PM every second. As long as this script was running, the time would never advance past 3:00 PM.
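The core of that reset loop looks something like the minimal sketch below; Set-Date requires an elevated (Administrator) PowerShell session.

```powershell
# Pin the system clock: snap the time back to 3:00 PM once per second.
# Run from an elevated (Administrator) PowerShell session.
while ($true) {
    Set-Date -Date '15:00:00' | Out-Null   # resets to 3:00 PM today
    Start-Sleep -Seconds 1
}
```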
With that in mind, I started up the script and began testing. For the sake of brevity, I elected to only do batches of 1. Even a batch of 1 takes roughly 30 seconds from start to finish, so as long as I ran 8 or more, I would cover the entirety of the randomization window. The image below shows the results of that test.
![[Image: WVJT6NK.png]](https://i.imgur.com/WVJT6NK.png)
As you can see, all duplicates. This confirmed my theory that DDSPF16 was using the Windows system clock for its randomization. By manipulating the clock, we can control the roughly 3.5-minute randomization window and thus eliminate duplicates. The final step in this journey was to modify our testing script to take advantage of this. If we could use PowerShell to freeze system time and force duplicates, we could also use it to advance system time and guarantee zero duplicates. By adding a line to the testing script, we could run a PowerShell command that incremented system time by 5 minutes, entirely avoiding the chance of two batches running within the same window. After going through all of this with Slate, he made the same change within his sim-batcher program.
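That added line amounts to a single Set-Date call, something like:

```powershell
# Jump the system clock forward 5 minutes between batches so consecutive
# batches can never land in the same randomization window (elevated session).
Set-Date -Adjust (New-TimeSpan -Minutes 5) | Out-Null
```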
Practical Applications
Aside from the obvious benefit of eliminating duplicates during sim testing, what other practical applications does this discovery have? Well, one theoretical application would be to find a specific 3.5-minute window in which your team wins the first game simmed (since the sim team, if I understand correctly, just takes the first result of each game they sim). The main issue is that the sim team would have to sim the game in that exact same 3.5-minute window. Additionally, you would have to guess your opponent’s strategy and depth chart exactly as you tested them. If even one variable is different, the game will diverge from what was simmed (more on this later, with pictures).
Another application I thought of was build testing, or depth chart testing in general. If we can eliminate the randomness and focus on one set of, say, 500 games, we can directly see the impact of any changes we make. In theory, anyway. The issue is that we have no way of knowing whether our sample of 500 games is actually close to the true average of how that build/depth chart would perform, or whether it is several standard deviations away from the true mean (yay, statistics). That being said, I feel this application is a bit more practical than the one in the previous paragraph and could easily identify outliers in terms of performance.
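For a rough sense of scale – my own back-of-the-envelope math, not anything from the sim – the standard error of a win rate estimated from a single 500-game sample is about two percentage points:

```powershell
# Standard error of a win rate from a 500-game sample at p = 0.5:
# sqrt(p * (1 - p) / n) ~= 0.022, so one frozen sample can easily sit a
# couple of points away from the build's true win rate.
[math]::Sqrt(0.5 * 0.5 / 500)   # 0.0223606797749979
```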
The final application, which I have to credit Slate for coming up with, would be of use to the sim team. The time-freezing mechanic could be used in the event there is an error in the week’s league file and a game has to be re-simmed. Whatever machine is used for simming the games could have its system time frozen, the error within the league file corrected, and then all of the games re-simmed. For example, if Jamar Lackson’s speed were somehow set to 0 (F speed, if you will), the simmer could correct this error within the sim file and re-sim the week. I understand that re-sims are very rare, and I am sure they are a pain for the sim team to do. However, this method would ensure that nothing other than the error being corrected is modified, either intentionally or unintentionally.
One More Thing
I was not sure where else to put this, so I made it its own section. As I mentioned earlier, everything in this article only applies if everything is kept the same between batches. If I ran a batch of 500 sims and then, for example, changed the team’s 1st-and-10 defense from Nickel to 3-4, the game would play out the same up until the team is on defense. The pictures below illustrate this point more clearly.
![[Image: Test_1_Box_Score.png]](https://cdn.discordapp.com/attachments/735257031958593567/748626737889542154/Test_1_Box_Score.png)
![[Image: Test_1_Play_by_Play.png]](https://cdn.discordapp.com/attachments/735257031958593567/748626751843729480/Test_1_Play_by_Play.png)
The two images above will serve as the baseline. This was simply the first game simmed using whatever gameplan and depth chart I had set for London. After changing London’s first down defense from Nickel to 3-4, the game started to diverge from the moment London’s defense was in a first and ten situation.
![[Image: Test_3_Box_Score.png]](https://cdn.discordapp.com/attachments/735257031958593567/748627188265254973/Test_3_Box_Score.png)
![[Image: Test_3_Play_by_Play.png]](https://cdn.discordapp.com/attachments/735257031958593567/748627190450618498/Test_3_Play_by_Play.png)
The second image here, the play-by-play, is arguably the more important one, especially when compared to the original play-by-play image. Comparing the two, we can see that the two games start out exactly the same: London receives the opening kickoff, returns it 16 yards, goes three-and-out, and punts it back to Kansas City. Once London’s defense takes the field on first-and-ten, the two play-by-plays diverge, and so do the games from that point on. First-and-ten defense was used here because it is the only situation guaranteed to occur at the start of every drive. You could change something like 4th-and-long defense, and the effect would not be as drastic simply because there are fewer 4th-and-long situations. Additionally, if you were to modify a player attribute that never gets used, such as increasing a defensive end’s kicking power from 1 to 2, it would make no difference, as there would be no situation in which the defensive end actually kicks.
Conclusion
As the TL;DR says, DDSPF16 re-randomizes itself after 3 minutes and 34.7 seconds have elapsed on your system clock. I am sure this is outlined somewhere within the code of the sim itself; however, never having decompiled the sim engine, I thought the roundabout discovery of this mechanic warranted a write-up. The one question that remains to be tested is whether DDSPF21 uses this same randomization mechanic and, if so, whether its application to game re-sims remains valid. Thank you very much for reading all of this. I apologize if anything is not clear; I tried to explain everything as best as I possibly could with my current understanding. If you have any questions, please feel free to post them below or message me on the forums or Discord.
Code:
2662 words