
performance for scientific computing



Hi,

I was wondering whether Dylan is fast (compared to C/C++) on operations such as:

define a three-dimensional array A(M,N,P) initialised with
        A(i,j,k) = i + j / k   (or any other function of i, j, k)
compute
        C = A*A + cos(A)   (to be understood elementwise: C(i,j,k) =
A(i,j,k)*A(i,j,k) + cos(A(i,j,k)))
or
        d = Tr(A)   (trace of the array = sum of A(i,i,i) for i = 0 to M-1,
assuming M = N = P)
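For reference, here is a minimal sketch of the three operations in plain C++ (no blitz++), using a hypothetical flat std::vector-backed container; as an assumption, the divisor is shifted to k + 1 so the initialisation formula avoids dividing by zero at k = 0:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Flat storage for an M x N x P array; (i,j,k) maps to (i*N + j)*P + k.
struct Array3 {
    std::size_t M, N, P;
    std::vector<double> data;
    Array3(std::size_t m, std::size_t n, std::size_t p)
        : M(m), N(n), P(p), data(m * n * p) {}
    double& operator()(std::size_t i, std::size_t j, std::size_t k) {
        return data[(i * N + j) * P + k];
    }
    double operator()(std::size_t i, std::size_t j, std::size_t k) const {
        return data[(i * N + j) * P + k];
    }
};

// A(i,j,k) = i + j / (k + 1); the +1 is an assumption to avoid k = 0.
Array3 make_A(std::size_t M, std::size_t N, std::size_t P) {
    Array3 A(M, N, P);
    for (std::size_t i = 0; i < M; ++i)
        for (std::size_t j = 0; j < N; ++j)
            for (std::size_t k = 0; k < P; ++k)
                A(i, j, k) = double(i) + double(j) / double(k + 1);
    return A;
}

// C(i,j,k) = A(i,j,k)*A(i,j,k) + cos(A(i,j,k)), elementwise.
Array3 compute_C(const Array3& A) {
    Array3 C(A.M, A.N, A.P);
    for (std::size_t n = 0; n < A.data.size(); ++n)
        C.data[n] = A.data[n] * A.data[n] + std::cos(A.data[n]);
    return C;
}

// d = Tr(A) = sum of A(i,i,i) for i = 0..M-1, assuming M == N == P.
double trace(const Array3& A) {
    double d = 0.0;
    for (std::size_t i = 0; i < A.M; ++i)
        d += A(i, i, i);
    return d;
}
```

This is roughly the baseline a Dylan version would have to match: three nested loops for initialisation, one linear pass for the elementwise compute, and one diagonal pass for the trace.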

More specifically:
- which implementation of Dylan is the fastest?
- is it possible to "allocate" an array and initialise its elements at the
same time?
- if a map function is used, is it as fast as hand-written C loops?
- are there unnecessary array creations? (for instance, if I compute
A = A*A + cos(A), no new array should be allocated, both for efficiency and
for low memory use)
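On the last point, the allocation-free version in C++ is just a single pass that overwrites the existing buffer; the open question is whether a Dylan map-style idiom compiles down to the same thing. A sketch, again on a hypothetical flat std::vector:

```cpp
#include <cmath>
#include <vector>

// In-place A = A*A + cos(A): one pass over the existing buffer,
// no temporary array is allocated.
void update_in_place(std::vector<double>& a) {
    for (double& x : a) {
        double v = x;
        x = v * v + std::cos(v);
    }
}
```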

In fact, I want to use Dylan for intensive scientific computing (Monte Carlo
simulations, dynamic programming, etc.). Currently this is done in C++ with
blitz++ array templates. However, it is not very flexible and I lose a lot of
time in compilation (template compilation is very slow with gcc). So maybe
Dylan, with its macros and functional features, could help me if it compiles
to a fast enough program.

Thank you in advance for your expertise!

Sébastien de Menten

BTW: it is very hard to find any comprehensive tutorial on Dylan or examples
of Dylan scientific code... so any reference on the topic is welcome.




