I asked Llama 3.1 8B AI model to generate random numbers [OC]

Posted by campus735

7 comments
  1. It would be very interesting to observe the generation with token probabilities shown as well, because this is either the neural network itself generating (pseudo-)randomness, or it's assigning all number tokens roughly equal probability and letting the sampler (which is fed actual random numbers) do the picking.
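The two cases in comment 1 can be sketched with a toy example; the weights below are invented for illustration, not Llama 3.1's actual token probabilities:

```python
import random
from collections import Counter

# Toy sketch of the two cases in comment 1 (all weights are invented):
# Case A: the network itself is "choosing" -- probability mass is peaked
#         on one token, so the sampler's randomness barely matters.
# Case B: the network spreads mass evenly and the (truly random) sampler
#         does the picking.
peaked = {str(n): (0.7 if n == 7 else 0.3 / 9) for n in range(1, 11)}
uniform = {str(n): 0.1 for n in range(1, 11)}

def sample(dist, k, rng):
    """Draw k tokens from a token->probability dict, like an LLM sampler."""
    tokens = list(dist)
    weights = [dist[t] for t in tokens]
    return Counter(rng.choices(tokens, weights=weights, k=k))

rng = random.Random(0)
print("peaked: ", sample(peaked, 1000, rng))
print("uniform:", sample(uniform, 1000, rng))
```

Inspecting per-token probabilities (most inference APIs expose log-probs) would distinguish the two cases directly.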

  2. If you had asked humans, then, excluding 42 and 69, 37 and any number ending in 7 would probably be outliers.

  3. Since humans are bad at recognizing randomness, I doubt the value of this graph. Raw data would be more valuable here, as you could actually run distribution tests on it. Or crank up the sample size by 2 or 3 orders of magnitude and check whether the peaks and valleys become narrower.

  4. I am no stochastics expert, but I feel like 1000-1250 is too small a sample size for the 1-10 category to be that equal. It feels too perfect to actually be random. I would've thought you need at least 10 times more runs to have it that balanced. But maybe I'm wrong.
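The intuition in comment 4 can be checked with a quick simulation: even a genuinely uniform sampler shows visible per-bin fluctuation at n = 1000 (a sketch; the seed is arbitrary):

```python
import random

# How even do 1000 genuinely uniform draws over 1..10 look?
rng = random.Random(1)
counts = [0] * 10
for _ in range(1000):
    counts[rng.randrange(10)] += 1
print(counts, "spread:", max(counts) - min(counts))
# Per-bin standard deviation is sqrt(1000 * 0.1 * 0.9) ~ 9.5, so
# differences of 20-30 between bins are entirely normal here --
# near-perfect equality at this sample size would itself be suspicious.
```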

  5. Really needs a statistical test (e.g. a chi-squared goodness-of-fit test) to see whether the counts are consistent with a uniform distribution. You can't really infer much from the graphs alone.
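The test comment 5 asks for could be a chi-squared goodness-of-fit check; a sketch on invented counts, since the OP's raw data isn't in the post:

```python
# Chi-squared goodness-of-fit test against a uniform distribution.
# The observed counts are hypothetical, standing in for the OP's raw data.
observed = [101, 98, 120, 95, 87, 110, 140, 92, 83, 74]  # bins 1..10
n = sum(observed)
expected = n / len(observed)
chi2 = sum((o - expected) ** 2 / expected for o in observed)
# Critical value for df = 9 at the 5% level is about 16.92.
print(f"chi2 = {chi2:.2f}, reject uniformity at 5%: {chi2 > 16.92}")
```

With real data, `scipy.stats.chisquare(observed)` would give the p-value directly.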
