by khingebjerg on 3/29/10, 7:36 PM with 19 comments
by eric_t on 3/29/10, 9:35 PM
pure function diffusion(x0, a) result(x)
  ! x0 holds the interior points plus a one-cell ghost layer on each side;
  ! a is the stencil coefficient (passed in here so the example is self-contained)
  real, intent(in), dimension(0:,0:) :: x0
  real, intent(in) :: a
  real, dimension(size(x0,1)-2, size(x0,2)-2) :: x   ! result: interior only
  integer :: n
  n = size(x0, 1) - 2
  ! five-point stencil applied to the whole interior at once
  x =   x0(2:n+1, 1:n  ) &
      + x0(0:n-1, 1:n  ) &
      + x0(1:n  , 2:n+1) &
      + x0(1:n  , 0:n-1) &
      - a*x0(1:n  , 1:n  )
end function diffusion
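For context, here is a minimal driver (not part of the original comment) showing one way the function above could be called; the grid size, the seed value, and the coefficient 4.0 are illustrative assumptions:

  program demo
    implicit none
    integer, parameter :: n = 4
    real :: grid(0:n+1, 0:n+1)     ! interior plus a one-cell ghost layer
    real :: stepped(n, n)
    grid = 0.0
    grid(n/2, n/2) = 1.0           ! seed a single hot cell in the interior
    stepped = diffusion(grid, 4.0) ! one application of the stencil
    print *, stepped
  contains
    ! ... paste the pure function diffusion from above here ...
  end program demo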
Some things to note:
- The "pure" keyword guarantees that this function has no side effects.
- No do loops are needed! Fortran array slicing is very handy.
- The compiler can vectorize the array expression with SIMD instructions.
- Adding some OpenMP hints to make it run on all cores is also very easy (see the sketch below).
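As a concrete illustration of that last point, here is a minimal sketch (not from the original comment) of such a hint: wrapping the array assignment in a workshare construct asks the runtime to divide the assignment among the threads of the enclosing parallel region (compile with an OpenMP flag such as gfortran's -fopenmp):

  !$omp parallel workshare
  x =   x0(2:n+1, 1:n  ) &
      + x0(0:n-1, 1:n  ) &
      + x0(1:n  , 2:n+1) &
      + x0(1:n  , 0:n-1) &
      - a*x0(1:n  , 1:n  )
  !$omp end parallel workshare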
So this type of code in Fortran is short, very easy to understand, and it performs extremely well. Maybe functional programming has some benefits when you're dealing with more complex data structures (for instance, I'm working on a code right now which uses parallel octrees, kind of a pain in Fortran), but for simple things like this, I fail to see the point.
I want to believe, so perhaps someone here can enlighten me?
by ewjordan on 3/29/10, 11:32 PM
I feel (as I do so often when projects written in certain languages that may or may not start with "Haskell" show up here) that the main takeaway is "Look, we can write working apps, too, and they're Better-Because-They're-Functional!" And yes, there are a lot of things that are better when you code them functionally, but simple numerical algorithms like this one play specifically to the strengths of imperative languages, and I see no clear benefit.
It's the same code smell that I get when people start putting together utility libraries, macro tools, and bytecode processors just so they can pretend to do functional programming in Java: if you find yourself struggling to do things non-idiomatically in one language that would be trivially natural in another, why not just switch languages? Especially if they all live on the same JVM and play fairly well together...