- cross-posted to:
  - hackernews@lemmy.smeargle.fans
New language promises to reduce compilation times by using all threads and GPU cores available on your machine. What are your opinions on it so far?
Skeptical. I wrote a compiler from scratch that does this. The biggest problem is not the execution itself but memory bandwidth, which quickly becomes the costly bottleneck.
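To make that concrete, here's a rough Rust sketch (my own illustration, nothing from this project; it assumes the rayon crate): a memory-bound reduction stops scaling once DRAM bandwidth is saturated, while a compute-bound pass over the same data keeps scaling with cores.

```rust
// Cargo.toml: rayon = "1" (assumed dependency for this sketch)
use rayon::prelude::*;
use std::time::Instant;

fn main() {
    // ~1 GiB of data: the plain sum below touches every byte once, so it is
    // limited by memory bandwidth, not by how many cores you throw at it.
    let data: Vec<u64> = (0..128_000_000u64).collect();

    let t = Instant::now();
    let serial: u64 = data.iter().sum();
    println!("serial   sum: {serial} in {:?}", t.elapsed());

    let t = Instant::now();
    let parallel: u64 = data.par_iter().sum();
    // Expect far less than an N-core speedup here: all cores share the same
    // DRAM bus, and a plain sum saturates it with only a core or two.
    println!("parallel sum: {parallel} in {:?}", t.elapsed());

    // A compute-heavy kernel over the same data scales much better, because
    // each element costs many ALU operations per byte loaded from memory.
    let t = Instant::now();
    let hashed: u64 = data
        .par_iter()
        .map(|&x| {
            let mut h = x;
            for _ in 0..256 {
                h = h.wrapping_mul(6364136223846793005).wrapping_add(1442695040888963407);
            }
            h
        })
        .sum();
    println!("parallel hash-sum: {hashed} in {:?}", t.elapsed());
}
```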
To me, automatic parallel computing is a pipe dream.
The concept also appears to downplay the importance of software architecture. You must design your program around parallelism; the compiler can't help you if you express your program in a serial fashion, and that is the fundamental problem that makes parallel computing hard.
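For example (again just a Rust sketch of the general point, assuming rayon, not anything this project ships): when the algorithm is written as a loop-carried recurrence, there is nothing for an auto-parallelizing compiler to exploit; the work has to be re-designed as an order-independent reduction before parallelism is even possible, and that is an architectural decision.

```rust
use rayon::prelude::*;

// Serially expressed: every iteration needs the previous iteration's state,
// so no compiler can split this loop across threads.
fn serial_state(xs: &[f64]) -> f64 {
    let mut state = 0.0_f64;
    for &x in xs {
        state = (0.9 * state + x).tanh(); // loop-carried dependency
    }
    state
}

// Contrast: an associative, order-independent reduction over the same data.
// This shape can be chunked across threads and the partial results combined,
// but getting the program into this shape is the programmer's job, not the
// compiler's.
fn parallel_friendly(xs: &[f64]) -> f64 {
    xs.par_iter().map(|&x| x.tanh()).sum::<f64>()
}

fn main() {
    let xs: Vec<f64> = (0..1_000_000).map(|i| i as f64 * 1e-6).collect();
    println!("serial state:      {}", serial_state(&xs));
    println!("parallel-friendly: {}", parallel_friendly(&xs));
}
```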
I don’t mean to be a downer. By all means, give it a shot. I’m just not seeing the special ingredient that makes this attempt successful where many others like it have failed.