@maegul @astrojuanlu @programming
BTW, I’m not talking about “product code” in Julia, necessarily. The point is that I can stay at the “research code” level and don’t have to rewrite anything; I can just move from a toy problem with a few parameters to a full-size model with hundreds of parameters.
In my case I had Python code solving an ODE system, and a global optimisation of all parameters took a week on 24 cores of a cluster. The same case runs over the lunch break on my laptop in Julia.
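To make that concrete, here is a minimal sketch of what that “research code” looks like in Julia, using OrdinaryDiffEq.jl with a toy Lotka–Volterra system (the system and parameter values are placeholders for illustration, not my actual model):

```julia
using OrdinaryDiffEq

# Toy ODE system (Lotka–Volterra), a stand-in for a real model.
# Scaling up just means a longer parameter vector and a bigger RHS;
# the surrounding code stays exactly the same.
function rhs!(du, u, p, t)
    du[1] =  p[1] * u[1] - p[2] * u[1] * u[2]
    du[2] = -p[3] * u[2] + p[4] * u[1] * u[2]
end

u0    = [1.0, 1.0]
tspan = (0.0, 10.0)
p     = [1.5, 1.0, 3.0, 1.0]

prob = ODEProblem(rhs!, u0, tspan, p)
sol  = solve(prob, Tsit5())
```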
For the most recent paper, a PhD student of ours used roughly 2000 GPU hours for a global sensitivity/uncertainty analysis. That would have been impossible in Python or Matlab: on a single GPU it would have taken 10 years in Matlab (300 years in Python) instead of 3 months in Julia*.
*Yes, you can just throw more GPUs at the problem, but then we can start the discussion about the CO2 footprint being 40 or 600 times higher, respectively.
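For the GPU ensemble runs, the shape of the code is roughly this. A sketch based on DiffEqGPU.jl’s ensemble interface; the backend argument and the uniform parameter sampling are my assumptions for illustration, not our actual analysis code:

```julia
using OrdinaryDiffEq, DiffEqGPU, CUDA

function rhs!(du, u, p, t)
    du[1] =  p[1] * u[1] - p[2] * u[1] * u[2]
    du[2] = -p[3] * u[2] + p[4] * u[1] * u[2]
end

u0    = Float32[1.0, 1.0]
tspan = (0.0f0, 10.0f0)
p     = Float32[1.5, 1.0, 3.0, 1.0]
prob  = ODEProblem(rhs!, u0, tspan, p)

# Draw a fresh parameter sample per trajectory, as one would for a
# global sensitivity/uncertainty analysis (this sampling rule is made up).
prob_func(prob, i, repeat) = remake(prob; p = p .* (0.5f0 .+ rand(Float32, 4)))
ensemble = EnsembleProblem(prob; prob_func)

# Solve many trajectories in one batch on the GPU.
sol = solve(ensemble, Tsit5(), EnsembleGPUArray(CUDA.CUDABackend());
            trajectories = 100_000, saveat = 1.0f0)
```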
Note: all our benchmarks are for solving ODE systems. For pure linear algebra there isn’t really a difference in speed between Matlab, NumPy, and Julia, because it all ends up running on the same BLAS/LAPACK libraries.
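You can check this directly: both Julia and NumPy hand dense matrix work to whatever BLAS they were built against, so a matmul benchmark mostly times the same underlying library. A quick sketch, assuming BenchmarkTools.jl is installed:

```julia
using LinearAlgebra, BenchmarkTools

BLAS.get_config()   # which BLAS Julia dispatches to (Julia >= 1.7);
                    # the NumPy counterpart is np.show_config()

A = rand(2000, 2000)
B = rand(2000, 2000)
@btime $A * $B;     # time is dominated by the underlying BLAS gemm call
```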
@hrefna @maegul @astrojuanlu @programming
Interesting. But it seems to be limited to arrays, and I’m wondering what the benefit is over regular NumPy. In my tests I did not find NumPy to be slower than any other BLAS or LAPACK implementation (if type-stable and immutable).
Do you have any comparisons?
In any case, I don’t think it would work for solving ODEs unless I implemented the RHS in a JIT-compiled function.
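For comparison, in Julia there is nothing extra to do: the RHS is an ordinary function that gets compiled to native code on first call, so the solver’s inner loop never pays per-call interpreter overhead. A tiny sketch with a made-up logistic-growth RHS:

```julia
using BenchmarkTools

# A plain Julia function as the RHS; compiled to native code on first call.
rhs(u, p, t) = p[1] * u * (1 - u / p[2])   # logistic growth, toy example

u, p, t = 0.5, (1.2, 10.0), 0.0
@btime rhs($u, $p, $t)   # a few nanoseconds per call, no interpreter in the loop
```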
I used to love Python, but now it looks like a collection of add-ons/hacks to fix the issues I have.