Reduce RAM usage of nonlocal term #1088
Conversation
Here is an example showing the problem: if we have oxygen from PseudoDojo, there are 13 projectors. Now assume we perform a computation with roughly […]. If we have the same setting but 8 oxygen atoms, the memory usage goes to 16 gigabytes. With this PR, the goal is that it should remain around 2 gigabytes.
So this is a performance/memory tradeoff. I did it this way because then you use full BLAS3 operations. I remember something like this gave a huge boost in Abinit, but possibly it was BLAS3ifying the operation on all the bands at once that gave the performance boost (as opposed to band by band, as it was done before). So I'm not opposed to switching, but keep in mind the performance implications and benchmark it.
Nice reference, maybe that lets me find how Abinit does it. We also need to be careful not to destroy GPU performance. 😓
If you want to look, it's in the opernla routine (I worked on that code a long time ago; it still haunts my dreams).
Nice. Yeah, the BLAS2 versus BLAS3 tradeoff is something to be understood a little better through benchmarks before we commit to one way of doing this.
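To make the tradeoff concrete, here is a minimal sketch (all sizes and variable names invented for illustration) contrasting the band-by-band matrix-vector approach with the "blas3ified" single matrix-matrix product mentioned above; both compute the same projection coefficients:

```julia
using LinearAlgebra

# Hypothetical sizes: n_G plane waves, n_proj projectors, n_bands bands.
n_G, n_proj, n_bands = 1000, 13, 8
P  = randn(ComplexF64, n_G, n_proj)   # projector matrix
ψk = randn(ComplexF64, n_G, n_bands)  # wavefunctions at one k-point

# BLAS2-style: one matrix-vector product (gemv) per band.
C2 = similar(P, n_proj, n_bands)
for iband in 1:n_bands
    mul!(view(C2, :, iband), P', view(ψk, :, iband))
end

# BLAS3-style: one matrix-matrix product (gemm) over all bands at once.
C3 = P' * ψk

@assert C2 ≈ C3
```

The arithmetic cost is identical; the difference is that the BLAS3 form lets the library block and reuse cache, which is typically where the speedup comes from.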
src/terms/nonlocal.jl
Outdated
for proj in eachcol(p.projectors)
    # TODO: what allocates here? and does it use BLAS?
    ψ_scratch .= p.structure_factors .* proj
    C[iproj, :] .= dropdims(ψ_scratch' * ψk; dims=1)
The dropdims is weird. Is this not just a dot product?
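Indeed, each entry of that row is a dot product. A small sketch (sizes invented) showing that the `dropdims` expression from the diff equals band-by-band `dot` calls, which conjugate the first argument just as `ψ_scratch'` does:

```julia
using LinearAlgebra

n_G, n_bands = 200, 4
ψ_scratch = randn(ComplexF64, n_G)
ψk = randn(ComplexF64, n_G, n_bands)

# Row assignment via dropdims, as in the diff above:
row_a = dropdims(ψ_scratch' * ψk; dims=1)

# Equivalent per-band dot products (dot conjugates its first argument):
row_b = [dot(ψ_scratch, view(ψk, :, iband)) for iband in 1:n_bands]

@assert row_a ≈ row_b
```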
src/terms/nonlocal.jl
Outdated
@@ -28,6 +28,74 @@ struct TermAtomicNonlocal <: Term
    ops::Vector{NonlocalOperator}
end

struct AtomProjectors{T <: Real, |
Let's see first to what extent things show up in benchmarking, but we should perhaps rethink these structs (where to put scratch arrays, where to store them, how they can be reused across various data structures and tasks, etc.).
src/terms/nonlocal.jl
Outdated
-for iband in size(B, 2)
-    C[:, iband] .+= ψ_scratch .* (α * B[iproj, iband])
+for iband in axes(B, 2)
+    @views C[:, iband] .+= ψ_scratch .* (α * B[iproj, iband])
5-argument mul!?
It's not obvious, but this is actually an axpy-type call (level 1 :( ).
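For reference, the whole per-band axpy loop can be expressed as a single rank-1 accumulation using 5-argument `mul!` (which computes `C = α*A*B + β*C`). A minimal sketch with invented sizes; note `transpose` rather than `'`, since the loop uses `B[iproj, iband]` unconjugated:

```julia
using LinearAlgebra

n_G, n_proj, n_bands = 100, 3, 5
ψ_scratch = randn(ComplexF64, n_G)
B = randn(ComplexF64, n_proj, n_bands)
α = 2.0 + 1.0im
iproj = 1

# Level-1 variant: one axpy per band, as in the loop above.
C1 = zeros(ComplexF64, n_G, n_bands)
for iband in axes(B, 2)
    @views C1[:, iband] .+= ψ_scratch .* (α * B[iproj, iband])
end

# Same update as one rank-1 accumulation: C2 = α * ψ_scratch * Bᵗ + C2.
C2 = zeros(ComplexF64, n_G, n_bands)
mul!(C2, ψ_scratch, transpose(view(B, iproj, :)), α, true)

@assert C1 ≈ C2
```

This is still a rank-1 (level-2-ish) update per projector; batching over all projectors at once would be needed to reach a true level-3 call.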
@@ -250,3 +250,42 @@ end
    end
end
end

@testitem "Test nonlocal term operations" tags=[:psp] setup=[mPspUpf] begin |
Not sure about this test. The goal was just for me to be able to try my changes with <5s waiting time.
@@ -282,6 +381,7 @@ function PDPψk(basis, positions, psp_groups, kpt, kpt_minus_q, ψk)
    D = build_projection_coefficients(basis, psp_groups)
    P = build_projection_vectors(basis, kpt, psp_groups, positions)
    P_minus_q = build_projection_vectors(basis, kpt_minus_q, psp_groups, positions)
    # TODO: probably needs an extra parenthesis to first compute P'ψ
So... I noticed that Julia has custom `*` overloads for matrix multiplication with more than 2 operands, to select the chain of operations that minimizes the total cost. Presumably it will almost always compute `P_minus_q' * ψk` first. But if it doesn't, we are in trouble, so this should probably be changed to `(P_minus_q' * ψk)`.
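To illustrate the stakes, a small sketch (sizes and names invented) of the chain in question. Julia's n-ary `*` picks an association at runtime; forcing the parenthesization keeps every intermediate small, whereas the bad association would materialize an `n_G × n_G` matrix:

```julia
using LinearAlgebra

# Hypothetical sizes: projection matrices are tall (n_G ≫ n_proj).
n_G, n_proj, n_bands = 500, 10, 6
P         = randn(ComplexF64, n_G, n_proj)
D         = randn(ComplexF64, n_proj, n_proj)
P_minus_q = randn(ComplexF64, n_G, n_proj)
ψk        = randn(ComplexF64, n_G, n_bands)

# Left to Julia's 4-operand * overload: association chosen by a cost heuristic.
auto = P * D * P_minus_q' * ψk

# Forced cheap association: largest intermediate is only n_proj × n_bands.
explicit = P * (D * (P_minus_q' * ψk))

@assert auto ≈ explicit
```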
    ST <: AbstractVector{Complex{T}},
    PT <: AtomProjectors,
} <: AbstractMatrix{Complex{T}}
# TODO: this is a real problem wrt. thread-safety, no?
How bad is this? `DftHamiltonianBlock` should handle it fine, but `GenericHamiltonianBlock` seems to be parallelizing over bands, which will cause problems!
Still super super WIP.

The problem lies with how the `P` matrices are stored in the nonlocal term. Each column of the `P` matrix corresponds to a projector (and each psp usually has multiple projectors). Multiple atoms with the same psp have the same projectors except for a structure factor $e^{-i(G+k)\cdot r_{\text{atom}}}$.

The basic idea in this PR is to store the projectors without the phase factor, and apply the structure factor on-the-fly. The cleanest way to do it that I could find is to use a custom matrix-like type for `P` such that from the outside `P` still looks like a dense matrix.

Not sure yet what the best way to apply the structure factor on-the-fly would be. Either we use a temporary vector to hold `ψ[:, iband] .* structure_factor`, or we write a loop, but then the question is whether it will work on the GPU.

Partially fixes #1032.
TODO:
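The "custom matrix-like type" idea can be sketched in a few lines. This is a toy illustration only, with invented names and sizes, not the PR's actual types: a lazy `AbstractMatrix` that stores the shared projectors once and applies the per-atom structure factor element-wise on access, so from the outside it behaves like the dense phased matrix it replaces:

```julia
using LinearAlgebra

# Toy lazy matrix: projectors shared across atoms, phase applied on access.
struct PhasedProjectors{T} <: AbstractMatrix{Complex{T}}
    projectors::Matrix{Complex{T}}         # n_G × n_proj, shared across atoms
    structure_factors::Vector{Complex{T}}  # length n_G, e^{-i(G+k)·r_atom}
end
Base.size(P::PhasedProjectors) = size(P.projectors)
Base.getindex(P::PhasedProjectors, i::Int, j::Int) =
    P.structure_factors[i] * P.projectors[i, j]

n_G, n_proj = 50, 13
proj = randn(ComplexF64, n_G, n_proj)
sf   = cis.(-2π .* rand(n_G))        # unit-modulus phases
P    = PhasedProjectors(proj, sf)

# From the outside, P behaves like the dense phased matrix:
dense = sf .* proj
ψ = randn(ComplexF64, n_G)
@assert Matrix(P) ≈ dense
@assert P' * ψ ≈ dense' * ψ
```

Storage drops from one `n_G × n_proj` block per atom to one shared block plus an `n_G` phase vector per atom; the generic fallback `getindex` path shown here is slow, so fast `mul!` methods (and GPU-friendly kernels) would need to be added on top.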