I did the following test:
For every prime $p$, take the gap $d_p = p - pp$ to the previous prime $pp$ and the gap $d_n = np - p$ to the next prime $np$, then calculate $a=(pp \bmod d_p)$ and $b=(np \bmod d_n)$. If $a$ or $b$ is in $\Bbb P$ or equal to $1$, the result is a success; if not, the result is an error.
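As a concrete illustration, here is a minimal sketch of that check for a single prime, using sympy (the helper name classify is introduced only for illustration). For $p=23$ the neighbours are $19$ and $29$, the gaps are $4$ and $6$, and $19 \bmod 4 = 3$ is prime, so $23$ succeeds; for $1409$ both residues are $9$, so it fails.

from sympy import prevprime, nextprime, isprime

def classify(p):
    """True (success) if pp mod d_p or np mod d_n is 1 or prime."""
    pp, np_ = prevprime(p), nextprime(p)    # neighbouring primes
    a, b = pp % (p - pp), np_ % (np_ - p)   # residues modulo the two gaps
    return a == 1 or isprime(a) or b == 1 or isprime(b)

print(classify(23))    # True: 19 mod 4 = 3 is prime
print(classify(1409))  # False: 1399 mod 10 = 9 and 1423 mod 14 = 9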
I have tested it with Python in the interval $[1,10^8]$ and the results are very interesting: the error percentage is very low, close to $1\%$:
Total errors: 62157 of 5761453 total primes.
Successful primes: 5758741 in the interval [1,100000000]
%Error vs Total primes: 1.0788424378364276%
Family mod 3 errors: 59445
Uncategorized errors: 2712
Let $E$ be the set of prime numbers that are the error results of the test. I would like to isolate them, at least partially, so I am trying to find a non-trivial commonality between them.
For instance, this is the error set $E$ for the interval $[1,57829]$; $1409$ is the first prime that does not succeed:
$E=\{1409, 2039, 3109, 10799, 13669, 16519, 20369, 27631, 31859, 42359, 42989, 44221, 46549, 47609, 54959, 55259, 55579, 55589, 57829\}$
When the interval is extended, other primes like $68863$ or $71647$ appear, so not all of them end in $9$ or $1$.
I have tried searching OEIS and doing some basic modularity checks, but without luck: I cannot see any special relationship between them, apart from being primes, hence of the form $6k+1$ or $6k+5$, and that most of them end in $9$ (this could be something important, but I cannot see the reason behind it). Maybe no such relationship exists at all, but maybe there is something I am not seeing.
Regarding the values of the residues, I have been able to verify that, at least in the tested interval, in most cases they are multiples of $3$ (I call these "family mod 3" errors), but I guess that as the interval gets bigger, multiples of other primes ("uncategorized errors") will appear more often.
The reason for this test is just that I wanted to learn whether prime gaps follow some kind of pattern, at least in some cases. The idea came from a very different test I did some days ago (link here) on a quite different topic.
From those results, I thought that if such a symmetry exists, maybe I could find similar patterns between primes, so I started doing the test described in the present question.
Originally I was using the minimum distance $d$ such that both $p-d$ and $p+d$ are prime. Just by chance I realized that in most cases $(p \bmod d)$ was also prime or $1$, so I ran a longer test. Finally, after some experiments, I arrived at the version of the test described in this question.
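For reference, a minimal sketch of that earlier check (the helper name symmetric_residue is mine; it assumes such a $d$ exists for the given prime, which was the case in the ranges I looked at):

from sympy import isprime

def symmetric_residue(p):
    """Smallest d with p-d and p+d both prime, and the residue p mod d."""
    d = 2   # gaps around an odd prime are even, so only even d can work
    while not (isprime(p - d) and isprime(p + d)):
        d += 2
    return d, p % d

print(symmetric_residue(23))  # (6, 5): 17 and 29 are prime, and 23 mod 6 = 5 is prime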
If somebody is interested in trying or modifying the test, here is the Python code:
def distance_primality():
    from sympy import nextprime, prevprime
    from gmpy2 import is_prime

    n = 5                        # start at 5: n=2 has no previous prime and n=3 gives a gap of 1
    errorcount = 0               # "uncategorized" errors
    family_mod_3_errorcount = 0  # errors where a residue is a multiple of 3
    totalpr = 0                  # primes tested
    test_limit = 100000000

    print("RESULT\tP\tPP\tD1\tMod\tpr/1?\tNP\tD2\tMod\tpr/1?")
    print("------\t-\t--\t--\t---\t-----\t--\t--\t---\t-----")

    while n < test_limit:
        totalpr += 1
        pp = prevprime(n)   # previous prime
        np = nextprime(n)   # next prime
        d1 = n - pp         # gap to the previous prime
        d2 = np - n         # gap to the next prime
        a = pp % d1         # residue of the previous prime modulo its gap
        b = np % d2         # residue of the next prime modulo its gap
        a_ok = (a == 1) or is_prime(a)
        b_ok = (b == 1) or is_prime(b)

        if a_ok and b_ok:
            print(f"Success\t{n}\t{pp}\t{d1}\t{a}\tYes\t{np}\t{d2}\t{b}\tYes")
        elif a_ok:
            print(f"Success\t{n}\t{pp}\t{d1}\t{a}\tYes\t{np}\t{d2}\t{b}\tNo")
        elif b_ok:
            print(f"Success\t{n}\t{pp}\t{d1}\t{a}\tNo\t{np}\t{d2}\t{b}\tYes")
        elif a % 3 == 0 or b % 3 == 0:
            print(f"ErrF3\t{n}\t{pp}\t{d1}\t{a}\tNo\t{np}\t{d2}\t{b}\tNo")
            family_mod_3_errorcount += 1
        else:
            print(f"Error\t{n}\t{pp}\t{d1}\t{a}\tNo\t{np}\t{d2}\t{b}\tNo")
            errorcount += 1

        n = nextprime(n)

    print()
    total_errors = errorcount + family_mod_3_errorcount  # both error categories
    print(f"Total errors: {total_errors} of {totalpr} total primes. "
          f"Successful primes: {totalpr - total_errors} in the interval [1,{test_limit}]")
    print(f"%Error vs Total primes: {total_errors / totalpr * 100}%")
    print(f"Family mod 3 errors: {family_mod_3_errorcount}")
    print(f"Uncategorized errors: {errorcount}")

distance_primality()
I would like to ask the following questions:
1. The error percentage seems very low (I have tested up to $10^8$), but could it just be a result of the Strong Law of Small Numbers?
2. Is there a way to isolate at least a subset of the primes in $E$?
3. Given these results, is there any hint of another type of test I could run to learn more about the observed behavior?
Thank you!
Answer
The initially low density of the "errors", and the fact that initially most of them end in $9$, can readily be explained. I don't know what you mean by "isolate" in question $2$, and I don't think there is any need for the further tests referred to in question $3$.
Except for the prime $2$ and the subsequent prime gap of $1$, primes are odd and prime gaps are even. Thus primes modulo prime gaps are odd. Of the odd numbers up to $13$, only $9$ is not either $1$ or a prime. Thus, until you get up to numbers that have prime gaps of at least $16$ on either side, the only way to get an "error" is to have a remainder of $9$ on both sides, and even that requires a prime gap of at least $10$ on either side, and even then there's at most a $1$ in $25$ chance for both remainders to be $9$. The typical prime gap of $\log x$ is $10$ at $\mathrm e^{10}\approx22026$. We can estimate the probability of two successive prime gaps of length $10$ at your first "error" value of $1409$ by considering the prime gaps as very roughly Poisson-distributed with parameter $\log x$; then the probability for a gap of size at least $10$ is
$$
\sum_{k=10}^\infty\frac{\log^kx\mathrm e^{-\log x}}{k!}=\sum_{k=10}^\infty\frac{\log^kx}{k!\cdot x}\;,
$$
which for $x=1409$ is about $0.2$, so the overall probability is about $0.2^4=0.0016$ (two factors of $0.2$ for each side: one for the probability of a prime gap of at least $10$, the other for a remainder of $9$ modulo $10$). So the low initial density of your numbers requires no further systematic explanation.
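A quick numerical check of that estimate (just a sketch of the rough Poisson model above, nothing more):

from math import exp, factorial, log

x = 1409
lam = log(x)  # Poisson parameter: the typical gap size near x

# P(gap >= 10) = 1 - P(gap <= 9) under the rough Poisson(log x) model
p_gap = 1 - sum(lam**k / factorial(k) for k in range(10)) * exp(-lam)
print(p_gap)             # roughly 0.2
print((0.2 * p_gap)**2)  # about 0.0015, in line with the 0.2^4 estimate above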
This also explains the frequent $9$s in the last decimal digit: the most likely prime gap to produce initial "errors" is $10$; the only problematic remainder in that case is $9$; and this constellation only has to occur on one of the two sides to force the prime in the middle to end in $9$. That is, for the prime in the middle not to end in $9$, we would need prime gaps of at least $12$ on both sides, which takes even higher primes to become likely.
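One way to see this concretely is to print the two gaps and residues for the listed error primes (a sketch using sympy; E is the set from the question), so the gap-$10$/remainder-$9$ cases can be inspected directly:

from sympy import prevprime, nextprime

E = [1409, 2039, 3109, 10799, 13669, 16519, 20369, 27631, 31859, 42359,
     42989, 44221, 46549, 47609, 54959, 55259, 55579, 55589, 57829]

for p in E:
    pp, np_ = prevprime(p), nextprime(p)
    d1, d2 = p - pp, np_ - p
    # A gap of 10 with remainder 9 on either side forces p to end in 9.
    print(p, (d1, pp % d1), (d2, np_ % d2))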
From an appropriate song:
I've looked at primes from both sides now
From norms and forms and still somehow
It's prime confusions I recall
I really don't know primes at all.