
Reduce RAM usage of nonlocal term #1088


Draft: wants to merge 10 commits into master

Conversation

@Technici4n (Contributor) commented Apr 30, 2025:

Still super super WIP.

The problem lies in how the P matrices are stored in the nonlocal term. Each column of the P matrix corresponds to a projector (and each psp usually has multiple projectors). Multiple atoms with the same psp share the same projectors, up to a structure factor $e^{-i(G+k)\cdot r_{\text{atom}}}$.

The basic idea in this PR is to store the projectors without the phase factor, and apply the structure factor on-the-fly. The cleanest way to do it that I could find is to use a custom matrix-like type for P such that from the outside P still looks like a dense matrix.

Not sure yet what the best way to apply the structure factor on the fly would be. Either we use a temporary vector to hold `ψ[:, iband] .* structure_factor`, or we write a loop, but then the question is whether the loop will work on the GPU.
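To make that concrete, here is a minimal sketch of such a matrix-like type; all names (`LazyAtomProjectionMatrix`, `projection_coefficients`, the field names) are invented for illustration and need not match the PR's actual code:

```julia
using LinearAlgebra

# Hypothetical lazy P for a single atom: the phase-free projectors are stored
# once per species, the per-atom structure factor is stored separately.
struct LazyAtomProjectionMatrix{T <: Real} <: AbstractMatrix{Complex{T}}
    projectors::Matrix{Complex{T}}         # n_G × n_proj, shared by all atoms of the species
    structure_factors::Vector{Complex{T}}  # length n_G, e^{-i(G+k)·r_atom} for this atom
end

Base.size(P::LazyAtomProjectionMatrix) = size(P.projectors)
Base.getindex(P::LazyAtomProjectionMatrix, iG::Int, iproj::Int) =
    P.structure_factors[iG] * P.projectors[iG, iproj]

# P' * ψk without ever materializing the dense P: one scratch column at a time.
function projection_coefficients(P::LazyAtomProjectionMatrix, ψk::AbstractMatrix)
    C = similar(ψk, size(P.projectors, 2), size(ψk, 2))
    scratch = similar(P.structure_factors)
    for (iproj, proj) in enumerate(eachcol(P.projectors))
        scratch .= P.structure_factors .* proj   # apply the phase on the fly
        C[iproj, :] .= vec(scratch' * ψk)        # row of dot products against all bands
    end
    C
end
```

The dense `n_G × n_proj` block per atom is never materialized; only a single `n_G`-sized scratch column exists at any time.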

Partially fixes #1032.

TODO:

  • Actually fix the issue and make the code work.
  • Check that the CPU performance and memory allocations are still fine.
  • Check that the memory usage went down.
  • Check that the GPU performance is still fine.

@Technici4n (Contributor, Author):

Here is an example showing the problem:

If we take oxygen from PseudoDojo, there are 13 projectors. Now assume we perform a computation with roughly $10^4$ plane waves per $k$-point, and with $10^3$ $k$-points.

Each P matrix would have $13 \cdot 10^4$ entries. With $10^3$ such matrices (one per $k$-point) and considering that each entry is a complex number which requires 16 bytes, this gives a total memory usage of roughly 2 gigabytes.

If we have the same setting but 8 oxygen atoms the memory usage goes to 16 gigabytes. With this PR, the goal is that it should remain around 2 gigabytes.
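Spelled out (16 bytes per complex entry):

$$
\underbrace{13}_{\text{projectors}} \times \underbrace{10^4}_{\text{plane waves}} \times \underbrace{10^3}_{k\text{-points}} \times 16\,\text{B} \approx 2\,\text{GB},
\qquad
8 \text{ atoms} \;\Rightarrow\; 8 \times 2\,\text{GB} \approx 16\,\text{GB}.
$$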

@antoine-levitt (Member):

So this is a performance/memory tradeoff. I did it this way because you then get full BLAS 3 operations. I remember something like this gave a huge boost in abinit, but possibly it was BLAS-3-ifying the operation on all the bands at once that gave the performance boost (as opposed to band by band, as it was done before). So I'm not opposed to switching, but keep in mind the performance implications and benchmark it.
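For concreteness, a self-contained toy comparison of the two contraction strategies (random data, invented names, not DFTK code):

```julia
using LinearAlgebra

nG, nproj, nbands = 10_000, 13, 64
projectors = randn(ComplexF64, nG, nproj)   # phase-free projectors, shared per species
sf = cis.(-2π .* rand(nG))                  # stand-in for the structure-factor phases
P  = sf .* projectors                       # dense P with phases baked in (current storage)
ψk = randn(ComplexF64, nG, nbands)

# (a) Current approach: a single gemm over all bands and projectors (BLAS 3).
C_blas3 = P' * ψk

# (b) Structure factor applied on the fly: one pass per projector (BLAS 2/1 territory).
C_onthefly = similar(C_blas3)
scratch = similar(sf)
for (iproj, proj) in enumerate(eachcol(projectors))
    scratch .= sf .* proj
    C_onthefly[iproj, :] .= vec(scratch' * ψk)
end

C_blas3 ≈ C_onthefly   # same result, very different memory/BLAS characteristics
```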

@antoine-levitt (Member):

@Technici4n (Contributor, Author):

Nice reference, maybe that will let me find out how abinit does it. We also need to be careful not to destroy GPU performance. 😓

@antoine-levitt (Member):

If you want to look, it's in the opernla routine (I worked on that code a long time ago; it still haunts my dreams).

@mfherbst (Member) left a comment:

Nice. Yeah, the BLAS 2 versus BLAS 3 tradeoff is something to understand a little better via benchmarks before we commit to one way of doing this.

for proj in eachcol(p.projectors)
    # TODO: what allocates here? and does it use BLAS?
    ψ_scratch .= p.structure_factors .* proj
    C[iproj, :] .= dropdims(ψ_scratch' * ψk; dims=1)
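One way to answer the allocation TODO and get the benchmark numbers asked for above (sketch; assumes the BenchmarkTools package and toy data rather than DFTK structures):

```julia
using BenchmarkTools, LinearAlgebra

ψ_scratch = randn(ComplexF64, 10_000)
ψk        = randn(ComplexF64, 10_000, 64)
C         = zeros(ComplexF64, 13, 64)

@btime $C[1, :] .= vec($ψ_scratch' * $ψk);   # reports time and allocations of the kernel
@allocated C[1, :] .= vec(ψ_scratch' * ψk)   # bytes allocated by a single call
```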
Member:

The dropdims is weird. Is this not just a dot product?
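If it helps, two equivalent ways to write that line without `dropdims` (sketch reusing the names from the hunk above; `dot` is from LinearAlgebra):

```julia
# The adjoint-vector × matrix product is already the row of dot products:
C[iproj, :] .= vec(ψ_scratch' * ψk)

# Or, making the per-band dot product explicit:
for (iband, ψband) in enumerate(eachcol(ψk))
    C[iproj, iband] = dot(ψ_scratch, ψband)
end
```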

@@ -28,6 +28,74 @@ struct TermAtomicNonlocal <: Term
    ops::Vector{NonlocalOperator}
end

struct AtomProjectors{T <: Real,
Member:

Let's first see to what extent things show up in benchmarking, but we should perhaps rethink these structs (where to put scratch arrays, where to store them, how they can be reused across various data structures and tasks, etc.).

-for iband in size(B, 2)
-    C[:, iband] .+= ψ_scratch .* (α * B[iproj, iband])
+for iband in axes(B, 2)
+    @views C[:, iband] .+= ψ_scratch .* (α * B[iproj, iband])
Member:

5-argument mul! ?

Contributor (Author):

It's not obvious, but this is actually an axpy-type call (level 1 :( ).
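For reference, the 5-argument mul! form suggested above could fold the whole band loop into one rank-1 update (sketch reusing the hunk's names; whether it actually beats the per-band axpy would need benchmarking):

```julia
# The band loop is C .+= α * ψ_scratch * transpose(B[iproj, :]), i.e. a rank-1
# update, which 5-argument mul! can express in a single call:
mul!(C, reshape(ψ_scratch, :, 1), transpose(view(B, iproj, :)), α, true)
```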

@@ -250,3 +250,42 @@ end
end
end
end

@testitem "Test nonlocal term operations" tags=[:psp] setup=[mPspUpf] begin
Contributor (Author):

Not sure about this test. The goal was just for me to be able to try my changes with <5s waiting time.

@@ -282,6 +381,7 @@ function PDPψk(basis, positions, psp_groups, kpt, kpt_minus_q, ψk)
    D = build_projection_coefficients(basis, psp_groups)
    P = build_projection_vectors(basis, kpt, psp_groups, positions)
    P_minus_q = build_projection_vectors(basis, kpt_minus_q, psp_groups, positions)
    # TODO: probably needs an extra parenthesis to first compute P'ψ
Contributor (Author):

So... I noticed that Julia has custom `*` overloads for matrix multiplication with more than two operands, which pick the association order that minimizes the total cost. Presumably it will almost always compute `P_minus_q' * ψk` first, but if it doesn't we are in trouble, so this should probably be changed to an explicit `(P_minus_q' * ψk)`.
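In concrete terms, the explicit association would be (sketch reusing the hunk's names):

```julia
# Force the small n_proj × n_bands contraction before the large P is touched:
P * (D * (P_minus_q' * ψk))
```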

ST <: AbstractVector{Complex{T}},
PT <: AtomProjectors,
} <: AbstractMatrix{Complex{T}}
# TODO: this is a real problem wrt. thread-safety, no?
Contributor (Author):

How bad is this? DftHamiltonianBlock should handle it fine, but GenericHamiltonianBlock seems to parallelize over bands, which will cause problems!
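One common workaround is to give each parallel task its own scratch buffer rather than sharing the one stored in the struct. A self-contained toy sketch (all names invented for illustration, not DFTK code):

```julia
using LinearAlgebra
using Base.Threads: @spawn, nthreads

nG, nbands = 1000, 16
sf   = cis.(-2π .* rand(nG))            # structure-factor stand-in
proj = randn(ComplexF64, nG)            # one phase-free projector
ψk   = randn(ComplexF64, nG, nbands)
out  = zeros(ComplexF64, nbands)

# Chunk the bands and give every task its own scratch buffer, so no mutable
# state is shared between tasks.
chunks = collect(Iterators.partition(1:nbands, cld(nbands, nthreads())))
tasks = map(chunks) do bands
    @spawn begin
        scratch = sf .* proj            # task-local scratch, no data race
        for iband in bands
            out[iband] = dot(scratch, view(ψk, :, iband))
        end
    end
end
foreach(wait, tasks)
```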

Development

Successfully merging this pull request may close these issues.

Stress computation can use a lot of RAM