

Adding to what Demento said, the extent of randomness in the random number generation algorithm is the key issue. There are several design choices that can make an RNG weak.

I asked the trained classifier to predict the remaining numbers. In practice, I aimed at a function that, given N numbers, could predict the next one. Of course, the classifier obtained a winning score comparable to that of random guessing or of other techniques not based on neural networks (I compared the results with several classifiers available in the scikit-learn library). However, if I generate the pseudo-random lottery extractions with a specific distribution function, then the numbers predicted by the neural network roughly follow the same distribution curve (if you plot the occurrences of the random numbers and of the neural network's predictions, you can see that the two have the same trend, even though the prediction curve has many spikes). So maybe the neural network is able to learn something about pseudo-random number distributions? If I reduce the size of the training set below a certain limit, the classifier starts to always predict the same few numbers, which are among the most frequent in the pseudo-random generation. Strangely enough (or maybe not), this behaviour seems to slightly increase the winning score.
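As a rough, hypothetical sketch of the experiment described above (not the original poster's code - the window size, value range and choice of classifier are arbitrary assumptions): generate a pseudo-random sequence, train a scikit-learn classifier to predict the next number from the previous N, and compare its accuracy with plain random guessing.

    # Toy version of the lottery experiment: predict the next pseudo-random
    # number from the previous N numbers. All parameters are assumptions.
    import random
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    N = 5            # how many previous numbers the model sees
    VALUES = 90      # "lottery" numbers drawn from 1..90
    SAMPLES = 20000  # length of the pseudo-random sequence

    seq = [random.randint(1, VALUES) for _ in range(SAMPLES)]

    # Sliding windows: features are N consecutive numbers, label is the next one.
    X = np.array([seq[i:i + N] for i in range(SAMPLES - N)])
    y = np.array([seq[i + N] for i in range(SAMPLES - N)])

    split = int(0.8 * len(X))
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X[:split], y[:split])

    print("classifier accuracy:", clf.score(X[split:], y[split:]))
    print("random guessing    :", 1 / VALUES)
    # For a reasonable PRNG the two figures should be roughly the same.

If the sequence is drawn from a skewed distribution instead of randint, the classifier's predictions tend to concentrate on the most frequent values, which is the behaviour described above.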
Old question, but I thought it's worth one practical answer. I happened to stumble upon it right after looking at a guide on how to build such a neural network, demonstrating the echo of Python's randint as an example.
Here is the final code without detailed explanation, still quite simple and useful in case the link goes offline: from random import randint
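The rest of that snippet is missing here, so the following is only a stand-in, not the code from the linked guide: a minimal sketch of the "echo" idea, in which a small network (here scikit-learn's MLPRegressor, with layer sizes and a training-set size chosen arbitrarily) is trained to output the same random integer it receives.

    # Hypothetical stand-in for the "echo randint" demo (not the original code):
    # train a small neural network to reproduce the random integer it is given.
    from random import randint

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # Input and target are the same random number, so the network only has to
    # learn the identity function - it is not predicting future randint calls.
    X = np.array([[randint(0, 100)] for _ in range(5000)])
    y = X.ravel()

    net = MLPRegressor(hidden_layer_sizes=(8, 8), solver="lbfgs", max_iter=2000)
    net.fit(X, y)

    for value in (3, 42, 97):
        print(value, "->", float(net.predict([[value]])[0]))
    # Each printed prediction should come back close to its input value.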

If we are talking about a perfect RNG, the answer is a clear no. It is impossible to predict a truly random number, otherwise it wouldn't be truly random. When we talk about pseudo-RNGs, things change a little. Depending on the quality of the PRNG, the problem ranges from easy to almost impossible. A very weak PRNG like the one XKCD published could of course be easily predicted by a neural network with little training. But in the real world things look different.

The neural network could be trained to find certain patterns in the history of random numbers generated by a PRNG in order to predict the next bit. The stronger the PRNG gets, the more input neurons are required, assuming you are using one neuron for each bit of prior randomness generated by the PRNG. The less predictable the PRNG gets, the more data will be required to find some kind of pattern. On a positive note, it is helpful that you can generate an arbitrary amount of training patterns for the neural network, assuming that you have control over the PRNG and can produce as many random numbers as you want. Because modern PRNGs are a key component of cryptography, extensive research has been conducted to verify that they are "random enough" to withstand such prediction attacks. Therefore I am pretty sure that it is not possible, with currently available computational resources, to build a neural network that successfully attacks a PRNG considered secure for cryptography. It is also worth noting that it is not necessary to predict the output of a PRNG exactly in order to break cryptography - it might be enough to predict the next bit with a certainty of a little more than 50% to weaken an implementation significantly. So if you are able to build a neural network that predicts the next bit of a PRNG (considered secure for cryptography) with a 55% success rate, you'll probably make the security news headlines for quite a while.
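To make the next-bit idea concrete, here is a hedged sketch (my own toy setup, not code from this answer): a classifier is shown a window of previous output bits and asked to predict the bit that follows. The deliberately weak generator, the window size and the RandomForestClassifier are all arbitrary assumptions; the point is only that a biased or structured generator can be predicted well above 50%, while this naive attack should stay near coin-flip accuracy against a good PRNG.

    # Toy next-bit prediction experiment. The "weak" generator is deliberately
    # biased: each bit repeats the previous one with probability 0.7.
    import random

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def weak_bits(n, stickiness=0.7):
        bits = [random.getrandbits(1)]
        for _ in range(n - 1):
            same = random.random() < stickiness
            bits.append(bits[-1] if same else 1 - bits[-1])
        return bits

    def next_bit_accuracy(bits, window=16):
        # Features: the previous `window` bits; label: the bit that follows.
        X = np.array([bits[i:i + window] for i in range(len(bits) - window)])
        y = np.array([bits[i + window] for i in range(len(bits) - window)])
        split = int(0.8 * len(X))
        clf = RandomForestClassifier(n_estimators=50, random_state=0)
        clf.fit(X[:split], y[:split])
        return clf.score(X[split:], y[split:])

    weak = weak_bits(50_000)
    mt = [random.getrandbits(1) for _ in range(50_000)]  # Mersenne Twister bits

    print("biased toy generator:", next_bit_accuracy(weak))  # roughly 0.7 expected
    print("random.getrandbits  :", next_bit_accuracy(mt))    # near 0.5 for this naive attack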
