Results -9 to 0 of 38

Threaded View

  1. #35
    Player
    PyurBlue's Avatar
    Join Date
    May 2015
    Posts
    734
    Character
    Saphir Amariyo
    World
    Brynhildr
    Main Class
    Thaumaturge Lv 40
    Quote Originally Posted by Daeriion_Aeradiir View Post
    The top 1%/bottom % is basically irrelevant in this scenario.
    If that is not a case of interest, then I can understand the desire for split attacks a little better. However I still think forced crit is a better solution.

    Splitting single huge attacks into multiple instances of damage would in fact creating a far more concentrated, consistent damage profile around a single expected value - purely because outside of a blue moon, no one will be getting lucky enough to hit multiple god rolls for each individual part of the multi-hit on their big skills
    I'm not sure that the difference would be that much just because of the number of attacks in a given run of anything is fairly high.

    Out of curiosity I coded a simulator for the situation we've been discussing and let 1000 instances of 100 big vs 300 small attacks play out and took the max total damage from both. As it turns out, despite the odds, the small attacks gave the highest value. My code is a little sloppy because I did this quick, but if anyone is interested in playing with it or checking it for errors, this can be run in Python:

    Code:
    import random
    big = 1200
    sml = 400
    crit = 0.5
    rate = 0.25
    crit_set = 0
    big_sum = 0
    sml_sum = 0
    big_list = []
    sml_list = []
    atk_num = 100
    tst_num = 1000
    
    for k in range (tst_num):
        for i in range(atk_num):
         crit_check = random.random()
         #print(crit_check)
         if crit_check > rate:
                crit_set = 0
         else:
            crit_set = 1
            big_damage = big + big* crit * crit_set
            #print(big_damage)
            big_sum = big_sum + big_damage
            #big_average = big_sum/tst_num
            #big_list.append(big_average)
        i = i + 1
        big_average = big_sum/tst_num
        big_list.append(big_average)
        for j in range(atk_num*3):
         crit_check = random.random()
         #print(crit_check)
         if crit_check > rate:
                crit_set = 0
         else:
            crit_set = 1
            sml_damage = sml + sml* crit * crit_set
            #print(sml_damage)
            sml_sum = sml_sum + sml_damage
            sml_average = sml_sum/tst_num
            sml_list.append(sml_average)
        j = j + 1
        sml_average = sml_sum/tst_num
        sml_list.append(sml_average)
    #print("Sum of big attacks is ", big_sum)
    #print("Sum of small attacks is ", sml_sum)
        big_average = big_sum/tst_num
        big_list.append(big_average)
        sml_average = sml_sum/tst_num
        sml_list.append(sml_average)
        print("List of big attack logs is ", big_average)
        print("List of small attack logs is ", sml_average)
    print("Max of big attack logs is ", max(big_list))
    print("Max of small attack logs is ", max(sml_list))
    #print("big is:", len(big_list))
    Result:
    Max of big attack logs is 44962.2
    Max of small attack logs is 45021.6

    The fact that the small attacks ended up being larger is a small detail, what's more telling is that they're so close. You could argue that my approach is flawed because this is just the same attack over and over, not a rotation. That is a limitation with my method that might skew results.

    Edit
    Code needs fixing, I tested it again and noticed that it's not making a list of averages, so it's actually just printing the last values. I was hoping this would be quick, but code never is. Sorry.
    (0)
    Last edited by PyurBlue; 02-20-2024 at 04:29 AM.