jzakiya Mar 16
I need to generate some constant arrays of numbers for a (selectable) prime generator (PG).
When the array size gets past some (?) value the compiler quits with this message:
This SERIOUSLY impedes my development process in Nim; the problem doesn't exist in C++.
I provide code for the case that will compile, then the version that the compiler squawks at.
The array residues for P13 (the prime generator for the prime 13) contains 5760 values. The array restwins contains 2970 values.
When I attempt to compile the array inverses, containing another 5760 values, the compiler squawks.
I ended up using Ruby to generate the inverses array values, then displayed to terminal, copied, pasted into an editor,
line formatted, copied, then pasted the formatted array values into my Nim source code so I could compile and run the
program.
I need to do the next higher prime generator P17, whose arrays residues|resinvrs each contain 92,160 values, but the compiler ends when trying to create just the residues array.
So somewhere between (5760 + 2970) = 8730 and 92,160 iterations, the compiler stops working.
Besides this being an arbitrary decision to have the compiler act like this, it's undocumented, and there is apparently no (known) mechanism for a user to extend this arbitrary limit. Is there some compiler flag to eliminate or extend it?
My development has stalled because of this. And it is totally not feasible for me to generate two 92,160-element arrays in Ruby (which it has no problem doing) and go through the print|format|copy|paste dance with two 1000+ line dumps of numbers again. And that's just for P17; P19 has 1,658,880 values for each array.
It would make me very, very Happy :-) to eliminate this barrier to my continued project development.
var pc = 1
var residues: seq[int] = @[]
while pc < modpg:
  pc += 2
  if gcd(modpg, pc) == 1: residues.add(pc)
let rescnt = residues.len
StasB Mar 16
As a temporary workaround, couldn't you just take your existing Nim function, use it to generate your stuff at runtime,
then dump it into a .nim file?
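That workaround can be sketched like this (a hedged sketch: genResidues mirrors the loop from the original post, while the tiny modulus 30 and the output filename residues_table.nim are made up here for illustration):

```nim
import math, strutils, sequtils

# Same sieve the original post builds inline: collect odd candidates
# up to modpg (plus modpg + 1) that are coprime to modpg.
proc genResidues(modpg: int): seq[int] =
  var pc = 1
  while pc < modpg:
    pc += 2
    if gcd(modpg, pc) == 1: result.add(pc)

when isMainModule:
  # Tiny modulus just for illustration; P13 would use its real modulus.
  let residues = genResidues(30)
  # Emit a Nim source file declaring the table as a const,
  # ready to be imported by the real program.
  let src = "const residues* = [" & residues.mapIt($it).join(", ") & "]\n"
  writeFile("residues_table.nim", src)
```

Because the table is computed by an ordinary compiled program rather than inside the compiler, no VM iteration limit applies.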
jzakiya Mar 16
I could use Ruby to create all the parameters and dump them into a file too, which would be easier.
I don't want to jump through extra HOOPS because the Nim compiler is deficient!!
The compiler should work for the human, not the other way around.
StasB Mar 16
I could use Ruby to create all the parameters and dump them into a file too, which would be easier.
You could, couldn't you? It does sound better than doing this:
displayed to terminal, copied, pasted into an editor, line formatted, copied, then pasted the formatted array
values into my Nim source code
So maybe you should give it some consideration. Or you could wait for someone to add your desired compiler flag, I
suppose.
The compiler should work for the human, not the other way around.
I don't know who the C++ compiler works for, but it's certainly not human kind.
Araq Mar 16
1. It is perfectly feasible to write a Nim program that outputs an array literal as a string. The compiler supports this out of
the box with staticExec and its caching mechanism. No need for Ruby.
2. You're the first person on earth to run into this limit of Nim's VM, afaict anyway. It exists to prevent endless programs at compile time. There is currently no switch to extend it; the limit is 1.5 million iterations. Edit compiler/vmdef.nim to increase the value and tell me at which value your programs start to compile.
3. It's a mystery to me in what sense C++ supports that out of the box but I never used its new constexpr features.
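Point 1 can be illustrated in isolation (a sketch; the gen_residues.nim helper named in the comment is hypothetical):

```nim
import strutils

# staticExec runs an external command during compilation and yields
# its stdout as a compile-time string (cached between builds).
const hello = staticExec("echo hello").strip()

# The idea for the tables: a helper program that prints the array
# literal, e.g.
#   const residuesLiteral = staticExec("nim r gen_residues.nim")
# so the heavy loop runs in an ordinary compiled binary, not the VM.

when isMainModule:
  echo hello
```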
StasB Mar 16
It's a mystery to me in what sense C++ supports that out of the box
C++ has supported the feature of throwing the compiler into an infinite loop at least since the introduction of an accidentally Turing-complete template system.
xomachine Mar 16
I've also reached this limit before, but avoided it by simplifying compile-time code. You may try to find a way to reduce the number of function calls per loop iteration to fit within the limit.
Araq Mar 16
I've also reached this limit before, but avoided it by simplifying compile-time code.
BTW, let instead of const for computed lookup tables is also an option.
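A sketch of that difference (genResidues stands in for whatever builds the table; the modulus 30 is just for illustration):

```nim
import math

proc genResidues(modpg: int): seq[int] =
  var pc = 1
  while pc < modpg:
    pc += 2
    if gcd(modpg, pc) == 1: result.add(pc)

# const forces the computation through the compile-time VM (and its
# iteration limit); let runs it once at program startup instead.
#   const residues = genResidues(30030)  # VM: subject to the limit
let residues = genResidues(30)           # runtime: no VM limit
```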
jzakiya Mar 16
I started changing the MaxLoopIterations* = 1500_000 # max iterations of all loops line in compiler/vmdef.nim in increments of 1 million, until it was at 20M, recompiling after each change, and still got the error. Then I went up in 100M increments until 1 trillion, and still got the error.
Do I need to recompile|rebuild Nim to make the change take effect? Is merely changing the file sufficient? It doesn't seem so.
jzakiya Mar 16
OK, I finally got it to compile with the original iteration count, but man, did I have to jump through hoops to do it. :-(
To reduce the number of loops, instead of doing brute-force trial and error to test each residue against the others to find its inverse, I found the code here
https://rosettacode.org/wiki/Modular_inverse#Nim
on Rosetta Code, which had a Nim version already done. [Note: the code there doesn't compile without changing proc mulInv(a0, b0): int = to proc mulInv(a0, b0: int): int =. In fact, quite a few of those old Rosetta Code Nim examples don't compile now, but I digress.]
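For reference, here is that Rosetta Code routine with the signature fix applied (the extended-Euclid modular inverse; variable names follow the Rosetta version):

```nim
proc mulInv(a0, b0: int): int =
  ## Modular multiplicative inverse of a0 modulo b0, via the
  ## extended Euclidean algorithm. Returns the x in [0, b0) with
  ## a0 * x mod b0 == 1, assuming gcd(a0, b0) == 1.
  var (a, b, x0) = (a0, b0, 0)
  result = 1
  if b == 1: return
  while a > 1:
    let q = a div b
    a = a mod b
    swap a, b
    result = result - q * x0
    swap x0, result
  if result < 0: result += b0
```

For example, mulInv(3, 7) yields 5, since 3 * 5 mod 7 == 1.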
Then I was able to eliminate the loop-in-a-loop construct, and just do this:
I was trying to avoid using another function just to do this, since I don't use it at runtime, but you do what you gotta do. :-(
Does Nim already have this function in its library? If not, can you please add it (math lib, etc.)?
Below is final code that compiles with original compiler iteration value, with some added diagnostic checks.
var pc = 1
var residues: seq[int] = @[]
while pc < modpg:
  pc += 2
  if gcd(modpg, pc) == 1: residues.add(pc)
let rescnt = residues.len
So what will it do with P17? And the answer is.......Nope! It bombs (again) when trying to compute the 92,160 values for the residues array, as it never gets to the inverse array computation. I ultimately tried changing the iteration variable up to 1 trillion again before giving up. I think I will need to rebuild Nim with a new value in that file to make it take. That's for tomorrow (maybe), unless someone can instruct me how to do it without rebuilding Nim.
So, at least partial success, and I have learned something new and useful.
jzakiya Mar 16
I changed this:
to this:
This reduces the number of candidates gcd is called on from half the numbers up to modpg (all the odds, modpg/2) to a third (modpg/3).
So while it shaves off some iterations of gcd, it's still not enough to get P17 to compile.
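One way to get that reduction can be sketched as follows (an assumed reconstruction, not the actual diff, which isn't shown above; it assumes modpg is divisible by 6, as the moduli for P5 and up are):

```nim
import math

proc genResiduesWheel(modpg: int): seq[int] =
  ## Collect the residues coprime to modpg, testing only candidates
  ## already coprime to 6 (5, 7, 11, 13, ...; steps of 2, 4, 2, 4).
  var pc = 5
  var step = 2
  # The original loop effectively tests odd candidates up to
  # modpg + 1, so the wheel keeps the same upper bound.
  while pc <= modpg + 1:
    if gcd(modpg, pc) == 1: result.add(pc)
    pc += step
    step = 6 - step
```

For modpg = 30 this tests only 5, 7, 11, 13, 17, 19, 23, 25, 29, 31 instead of every odd number, yet produces the same residue list as the original loop.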
A REQUEST: It would be really, really nice to be able to use compound bitwise-assignment operators like x ^= y; x &= y; etc.
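In the meantime, the operations themselves are available as word operators; only the compound-assignment spelling is missing. A minimal illustration:

```nim
var x = 0b1100
let y = 0b1010
x = x xor y     # what x ^= y would write; x is now 0b0110
x = x and y     # what x &= y would write; x is now 0b0010
```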
jzakiya Mar 16
Compiling again for P17, with 0.18.0 and 0.17.2, the error messages point to the gcd function in both cases:
In both cases it points to the same line in lib/pure/math.nim (line 452 in 0.18.0 or 439 in 0.17.2):
Changing var (x, y) = (x, y) to var x = x; var y = y, or changing variable names, in both cases creates the same results.
Araq Mar 17
Yup, koch boot -d:release creates the modified compiler, if you have an installation where koch boot works (pick the "installation from source" route: https://github.com/nim-lang/Nim#compiling).
Because that line triggered the 1_500_000th iteration limit. The line is nothing special; any line could trigger this limit.
StasB Mar 17
Why not add a flag to set/disable the iteration limit like he wants? I'm sure that as the user base grows, more people are
going to hit the limit and get annoyed.
Araq Mar 17
Why not add a flag to set/disable the iteration limit like he wants? I'm sure that as the user base grows, more
people are going to hit the limit and get annoyed.
Because I am curious whether it can be supported out of the box by a "reasonable" limit first.
StasB Mar 17
Such a limit would probably be around a few seconds, rather than a fixed number of iterations, because that's where most
people start to suspect that their computation is either too long or stuck in an infinite loop.
Araq Mar 17
Then the limit would depend on the CPU used. Bad idea IMO.
StasB Mar 17
You're right. It would cause code that compiles on one machine to fail on another. Still, if the goal was to prevent infinite loops, then a limit which causes the computation to fail almost immediately (time-wise) on most machines makes little sense, so even if it's fixed, it should at least be tuned to match a long-ish computation on modern machines.
jzakiya Mar 17
I think this value is way more reasonable, because as stated previously, on modern machines (or really any system with gigahertz clock speeds) we're only talking seconds for even a billion iterations.
I really hope you will up the value. Until then, I'll patch my own systems. Thanks for instructing me how to do this.
jyapayne Mar 17
Araq, maybe it would be more appropriate to have this as a warning instead of stopping compilation completely? Something like "Warning: Compiler has been through 1_000_000_000 iterations, compile-time code might be in an infinite loop".
Araq Mar 18
@jyapayne: Not sure; I like it to error out. Warn-and-continue doesn't seem right for what is ultimately a batch process.
@jzakiya : The limit is at one billion now.
jzakiya Mar 19
+1!! :-)