# Dice roll stats+balance needed

I was thinking today about how stats pan out for dice rolls and I think I’ve made advantage too powerful.

here’s regular 2d6, which looks pretty acceptable:

however, here’s 3d6k2+7, which includes +1 for each match that happens:

feels a bit much!!

I’m gonna have to revisit how difficulty scaling works a little bit, I think.

it looks a little bit better when you’re only at +3, which is a middling stat:

I need to poke at this some more to understand the math better, but basically: the more advantage you have, the more impact stats have on hitting the “average” difficulty, so I need to think of a way to change stat scaling a little bit.
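Before poking further, it helps to pin down the raw averages. Here’s a quick brute-force sketch (separate from the main script below, and ignoring the match bonus) comparing plain 2d6 against 3d6 keep highest 2:

```python
from itertools import product

def expected_value(num_dice, keep):
    """Brute-force expected value of rolling num_dice d6 and keeping the highest `keep`."""
    total = 0
    count = 0
    for rolls in product(range(1, 7), repeat=num_dice):
        total += sum(sorted(rolls, reverse=True)[:keep])
        count += 1
    return total / count

print(expected_value(2, 2))  # plain 2d6 -> 7.0
print(expected_value(3, 2))  # 3d6 keep highest 2 -> about 8.46
```

So even before the match bonus, one advantage die shifts the average roll up by almost a point and a half.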

If you’re curious, here’s the Python code I used to calculate this:

```python
def calc_at_least_dice(bonus, adv, disadv):
    bonus_roll = 1           # +1 added for each matched (duplicate) die
    total_to_roll = 2 + adv  # advantage adds extra dice; disadv is unused here
    # maps the resulting dice roll to the number of times that value is rolled
    val_map = {}
    roll_values = [1 for i in range(total_to_roll)]
    total_rolls = 0
    expected_roll_values = 6 ** total_to_roll
    while total_rolls < expected_roll_values:
        accum = bonus
        seen = set()
        for roll in roll_values:
            if roll in seen:
                accum += bonus_roll
            seen.add(roll)
            accum += roll    # sums every die rolled
        val_map[accum] = val_map.get(accum, 0) + 1
        total_rolls += 1
        # odometer-style increment to the next dice combination
        next_die_plus_index = 0
        while next_die_plus_index < len(roll_values) and roll_values[next_die_plus_index] == 6:
            roll_values[next_die_plus_index] = 1
            next_die_plus_index += 1
        if next_die_plus_index >= len(roll_values):
            break
        roll_values[next_die_plus_index] += 1
    print("Total: % that value is at least this amount")
    percentage = 0.0
    probabilities = {}
    for roll in reversed(sorted(val_map.keys())):
        num_rolled = val_map[roll]
        percentage += num_rolled / total_rolls
        probabilities[roll] = percentage
    for roll in sorted(probabilities.keys()):
        print(f'{roll}\t{round(100.0 * probabilities[roll], 2)}')
```

In short, the likelihood of hitting the average difficulty (14) when given a single advantage increases from:

- 61% to 92% for +7 (+31 points)
- 41% to 88% for +6 (+47 points)
- 30% to 78% for +5 (+48 points)
- 16% to 65% for +4 (+49 points)
- 11% to 54% for +3 (+43 points)
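Those numbers can be double-checked with an independent itertools enumeration (my quick cross-check, summing all rolled dice the way the script above does, with +1 per match):

```python
from itertools import product

def at_least(target, num_dice, bonus, match_bonus=1):
    """P(sum of all dice + match bonuses + flat bonus >= target)."""
    hits = 0
    for rolls in product(range(1, 7), repeat=num_dice):
        matches = len(rolls) - len(set(rolls))  # one match per repeated value
        if sum(rolls) + matches * match_bonus + bonus >= target:
            hits += 1
    return hits / 6 ** num_dice

print(round(100 * at_least(14, 2, 7), 1))  # 2d6+7 vs difficulty 14 -> ~61
print(round(100 * at_least(14, 3, 7), 1))  # all three dice summed -> ~92
```

Both line up with the +7 bullet above.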

Advantage is stronger than I want it to be, so I need to figure out a way to fix that. Update: I 100% coded this wrong.

OK, here are the updated numbers, which look muuuuuuuuch better. I think the consequence is that we need to drop the max bonus a player can get right away from +8 down to +6 or +7, but I’m not 100% sure and am going to have to think about this some more; I could potentially keep it as-is!

Updated code:

```python
def calc_at_least_dice(bonus, adv, disadv):
    bonus_roll = 1           # +1 added for each matched (duplicate) die
    total_to_roll = 2 + adv  # advantage adds extra dice; disadv is unused here
    # maps the resulting dice roll to the number of times that value is rolled
    val_map = {}
    roll_values = [1 for i in range(total_to_roll)]
    total_rolls = 0
    expected_roll_values = 6 ** total_to_roll

    while total_rolls < expected_roll_values:
        dice1 = 0
        dice2 = 0
        accum = 0
        seen = set()
        for roll in roll_values:
            if roll in seen:
                accum += bonus_roll
            seen.add(roll)
            # keep only the two highest dice
            if roll >= dice1:
                dice2 = dice1
                dice1 = roll
            elif roll > dice2:
                dice2 = roll
        roll_total = accum + bonus + dice1 + dice2
        val_map[roll_total] = val_map.get(roll_total, 0) + 1
        total_rolls += 1
        # odometer-style increment to the next dice combination
        next_die_plus_index = 0
        while next_die_plus_index < len(roll_values) and roll_values[next_die_plus_index] == 6:
            roll_values[next_die_plus_index] = 1
            next_die_plus_index += 1
        if next_die_plus_index >= len(roll_values):
            break
        roll_values[next_die_plus_index] += 1
    # print(val_map)
    k2 = '' if adv == 0 else 'k2'
    print(f'{total_to_roll}d6{k2}+{bonus}')
    print("total\t% that roll is at least this")
    percentage = 0.0
    probabilities = {}
    for roll in reversed(sorted(val_map.keys())):
        num_rolled = val_map[roll]
        percentage += num_rolled / total_rolls
        probabilities[roll] = percentage
    for roll in sorted(probabilities.keys()):
        print(f'{roll}\t{round(100.0 * probabilities[roll], 2)}')
```

The likelihood of hitting the average difficulty (14) when given a single advantage now increases from:

- 61% to 86% for +7 (+25 points)
- 41% to 72% for +6 (+31 points)
- 30% to 59% for +5 (+29 points)
- 16% to 38% for +4 (+22 points)
- 11% to 27% for +3 (+16 points)
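Cross-checking the corrected keep-2 math the same way (matches counted across all rolled dice, as in the script above):

```python
from itertools import product

def at_least_keep2(target, num_dice, bonus, match_bonus=1):
    """P(two highest dice + match bonuses + flat bonus >= target)."""
    hits = 0
    for rolls in product(range(1, 7), repeat=num_dice):
        kept = sum(sorted(rolls, reverse=True)[:2])
        matches = len(rolls) - len(set(rolls))  # counted across all rolled dice
        if kept + matches * match_bonus + bonus >= target:
            hits += 1
    return hits / 6 ** num_dice

print(round(100 * at_least_keep2(14, 3, 7), 1))  # ~86.6, matching the +7 line
```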

I played with adding an extra die and the math doesn’t feel good.

So my next step is to poke at the base stats and proficiency level and see how it feels to make those stronger versus advantage.

If I’m targeting 3d6 as the baseline for dice rolls, then the rest of the stats have to bump up with it.

Given the baseline roll averages 10.5, I first tried estimating the same amount for stats, but the numbers just don’t feel right: advantage still feels a little too strong. So instead of moving to 3d6, I’m going to buff the baseline stats a little more and see where the numbers land, since buffing the baseline stats gives the roll itself a lesser share of the total.
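To put a number on “lesser share,” here’s a rough back-of-envelope (assuming a plain 3d6 roll with expected value 10.5 and ignoring the match bonus):

```python
# Share of the expected total contributed by the dice vs. the flat stat bonus.
# Assumes a plain 3d6 roll (expected value 10.5); the match bonus is ignored.
dice_ev = 10.5
for stat_bonus in (3, 5, 7, 10):
    share = dice_ev / (dice_ev + stat_bonus)
    print(f'+{stat_bonus}: dice are {share:.0%} of the expected total')
```

At +7 the dice are 60% of the expected total; buffing stats up to +10 drops that to about 51%.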

Here are the graphs for 3d6 advantage, still using the +1s per match. I looked into regular advantage and it still felt like too much, and I didn’t like how the numbers felt from a non-mathematical standpoint either: moving from 2d6 keep 2 to 3d6 keep 3 ends up making advantage crits stronger. I don’t want to remove the concept of crits from the game, though, because I still want a “WOW” factor that can happen on a roll and isn’t ignored just because you already rolled high on the dice you get to keep.

So we’ll see how this lines up when I think on it more next time.

While working on the baseline 3d6 I started thinking about the difference between 20 and 21 for the targeted “average”. I don’t like 21 as much as 20 from a “do the numbers feel natural” perspective, so a lot of my balance thinking centered on hitting 20 in the 3d6 system.

oh right. the code.

```python
def calc_at_least_dice(bonus, adv, disadv):
    bonus_roll = 1           # +1 added for each matched (duplicate) die
    total_to_roll = 3 + adv  # 3d6 base; advantage adds extra dice; disadv is unused here
    # maps the resulting dice roll to the number of times that value is rolled
    val_map = {}
    roll_values = [1 for i in range(total_to_roll)]
    total_rolls = 0
    expected_roll_values = 6 ** total_to_roll

    while total_rolls < expected_roll_values:
        dice1 = 0
        dice2 = 0
        dice3 = 0
        accum = 0
        seen = set()
        for roll in roll_values:
            if roll in seen:
                accum += bonus_roll
            seen.add(roll)
            # keep only the three highest dice
            if roll >= dice1:
                dice3 = dice2
                dice2 = dice1
                dice1 = roll
            elif roll >= dice2:
                dice3 = dice2
                dice2 = roll
            elif roll > dice3:
                dice3 = roll
        roll_total = accum + bonus + dice1 + dice2 + dice3
        val_map[roll_total] = val_map.get(roll_total, 0) + 1
        total_rolls += 1
        # odometer-style increment to the next dice combination
        next_die_plus_index = 0
        while next_die_plus_index < len(roll_values) and roll_values[next_die_plus_index] == 6:
            roll_values[next_die_plus_index] = 1
            next_die_plus_index += 1
        if next_die_plus_index >= len(roll_values):
            break
        roll_values[next_die_plus_index] += 1
    # print(val_map)
    percentage = 0.0
    probabilities = {}
    for roll in reversed(sorted(val_map.keys())):
        num_rolled = val_map[roll]
        percentage += num_rolled / total_rolls
        probabilities[roll] = percentage
    return {
        roll: round(100.0 * probabilities[roll], 2) for roll in sorted(probabilities.keys())
    }

probs = {}
roll_types = []
adv = 1  # fixed advantage level for this bonus comparison
for bonus in range(6, 13):
    roll_dist = calc_at_least_dice(bonus, adv, 0)
    k3 = '' if adv == 0 else 'k3'
    dice_amt = f'{3 + adv}d6{k3}+{bonus}'
    roll_types.append(dice_amt)

    for roll in sorted(roll_dist.keys()):
        pct = roll_dist[roll]
        probs[roll] = probs.get(roll, {})
        probs[roll][dice_amt] = str(pct)

print('roll total\t' + '\t'.join(roll_types))
for roll_total in sorted(probs.keys()):
    values = [probs[roll_total].get(dice_amt, str(0.0)) for dice_amt in roll_types]
    print(f'{roll_total}\t' + '\t'.join(values))
```

OK, I’ve learned my lesson.

1. Using advantage for crit mechanics makes advantage too strong. Advantage is already incredibly strong, and giving a bonus on top of base advantage is not working, so I’m nixing the crit mechanics. Instead, crits will be an optional rule that is either separate from the dice roll itself or, if it is part of the roll, done in a way that distinguishes itself from advantage somehow.
2. 2d6 still makes advantage numerically too strong, so I’m going to need to move the base dice up to 3d6. Oops!
3. Stat spread and PL mechanics will have to change to accommodate this.

I still want to balance the baseline game around the concept of “the average baseline roll is what you want to target for difficulty”, i.e., someone of average competence trained in a particular challenge should have a pretty high chance to succeed if given any advantage on top of that.

The average roll on 3d6 is 10.5, so I’m going to file off the .5 and say the skill needed to match the average likelihood is also 10, making the new target average roll for competence 20.
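A quick enumeration confirms that a +10 character against the target difficulty of 20 sits a bit above even odds (no match bonus):

```python
from itertools import product

# P(3d6 + 10 >= 20): the "average competence vs. average difficulty" check.
hits = sum(1 for rolls in product(range(1, 7), repeat=3) if sum(rolls) + 10 >= 20)
print(round(100 * hits / 6 ** 3, 2))  # 62.5
```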

So if you want experience to exactly equal baseline proficiency in terms of power, I can see PL going up to 10 in this case. I think I’m going to stick with PL +3 as the base level (+1 and +2 are still reserved for “youth” or “NPC” stat lines) so that your weakest physical and mental HP don’t suffer too much, which means the new stat spread ends up being something like 7, 6, 6, 5, 4, 3.

I’m gonna fiddle with the numbers some more and see if I come up with anything else that needs to change as a result of this, but hopefully with this change the game will feel a little less ridiculous.

Tomorrow I’m going to put the number crunching for a 3d6 system into tables to show you where I land.

Also, here’s the latest version of the code I built to play around with this. I rebuilt it to accommodate changing the base dice. Enjoy.

```python
def calc_at_least_dice(bonus, adv, disadv, base_dice=2):
    bonus_roll = 0                   # match/crit bonus nixed per lesson 1; set to 1 to restore it
    total_to_roll = base_dice + adv  # advantage adds extra dice; disadv is unused here
    # maps the resulting dice roll to the number of times that value is rolled
    val_map = {}
    roll_values = [1 for i in range(total_to_roll)]
    total_rolls = 0
    expected_roll_values = 6 ** total_to_roll

    while total_rolls < expected_roll_values:
        dice = [0 for i in range(base_dice)]
        accum = 0
        seen = set()
        for roll in roll_values:
            if roll in seen:
                accum += bonus_roll
            seen.add(roll)
            # insert the roll into its sorted spot among the kept (highest) dice
            for i in range(len(dice)):
                if roll > dice[i]:
                    dice = dice[0:i] + [roll] + dice[i:-1]
                    break
        roll_total = accum + bonus + sum(dice[0:base_dice])
        val_map[roll_total] = val_map.get(roll_total, 0) + 1
        total_rolls += 1
        # odometer-style increment to the next dice combination
        next_die_plus_index = 0
        while next_die_plus_index < len(roll_values) and roll_values[next_die_plus_index] == 6:
            roll_values[next_die_plus_index] = 1
            next_die_plus_index += 1
        if next_die_plus_index >= len(roll_values):
            break
        roll_values[next_die_plus_index] += 1
    # print(val_map)
    percentage = 0.0
    probabilities = {}
    for roll in reversed(sorted(val_map.keys())):
        num_rolled = val_map[roll]
        percentage += num_rolled / total_rolls
        probabilities[roll] = percentage
    return {
        roll: round(100.0 * probabilities[roll], 2) for roll in sorted(probabilities.keys())
    }

probs = {}
roll_types = []
base_dice = 3
for bonus in range(10, 11):
    for adv in range(0, 3):  # 0, 1, and 2 advantage dice
        roll_dist = calc_at_least_dice(bonus, adv, 0, base_dice)
        k2 = '' if adv == 0 else f'k{base_dice}'
        dice_amt = f'{base_dice + adv}d6{k2}+{bonus}'
        roll_types.append(dice_amt)

        for roll in sorted(roll_dist.keys()):
            pct = roll_dist[roll]
            probs[roll] = probs.get(roll, {})
            probs[roll][dice_amt] = str(pct)

print('roll total\t' + '\t'.join(roll_types))
for roll_total in sorted(probs.keys()):
    values = [probs[roll_total].get(dice_amt, str(0.0)) for dice_amt in roll_types]
    print(f'{roll_total}\t' + '\t'.join(values))
```

This prints the likelihood that your roll will be at least the listed roll total:

| roll total | 3d6+10 | 4d6k3+10 | 5d6k3+10 |
| --- | --- | --- | --- |
| 13 | 100.0 | 100.0 | 100.0 |
| 14 | 99.54 | 99.92 | 99.99 |
| 15 | 98.15 | 99.61 | 99.92 |
| 16 | 95.37 | 98.84 | 99.73 |
| 17 | 90.74 | 97.22 | 99.2 |
| 18 | 83.8 | 94.29 | 98.05 |
| 19 | 74.07 | 89.51 | 95.86 |
| 20 | 62.5 | 82.48 | 92.05 |
| 21 | 50.0 | 73.07 | 86.01 |
| 22 | 37.5 | 61.65 | 77.46 |
| 23 | 25.93 | 48.77 | 66.13 |
| 24 | 16.2 | 35.49 | 52.56 |
| 25 | 9.26 | 23.15 | 37.71 |
| 26 | 4.63 | 13.04 | 23.42 |
| 27 | 1.85 | 5.79 | 11.39 |
| 28 | 0.46 | 1.62 | 3.55 |

As you can see, your likelihood of rolling at least a 20 increases by about 20 percentage points with one advantage, and about 10 more with a second, which IMO makes it much more intuitive to think about the “average effectiveness of advantage”.