MeshDLR creation memory usage is linear in $\beta$ (should be $\log\beta$) #917
Comments
Hi Hugo. Thanks for pointing this out. I agree with your diagnosis, and my back-of-the-envelope memory estimate gives the same order of magnitude as your data. The unfortunate answer is that improving this is a small research problem, though I am extremely optimistic it is solvable. Here are my favorite possible approaches, in no particular order, though there are others as well.

1. Some kind of precomputation of DLR nodes.

2. If you find precomputing and storing things distasteful (I don't): figure out the correct method of treating Matsubara frequency space not as a set of discrete points, but as a continuum, by figuring out the correct interpolation of the kernel and convincing yourself that this is legal (I need to think more about this, but there's probably a way). Assuming this works, you can now use a composite Chebyshev grid as a fine grid in Matsubara frequency, just as for imaginary time. You will get points which aren't Matsubara frequencies, and you have two options. You could show that working with the interpolated kernel is allowed, and then just keep the points that come out of the algorithm, but this could cause compatibility issues with other codes. Or, you could snap each point to the nearest Matsubara frequency and check that this still works (I'm guessing it does).

3. Use some kind of pre-specified logarithmically-spaced grid as your fine grid, and pray. This isn't necessarily such a crazy idea, because one could check empirically for many values of $\Lambda$ and $\epsilon$ that it works. Plausibility argument: the choice of nodes tends to be somewhat forgiving, meaning that if you jiggle the nodes around, the quality of your interpolation doesn't seem to be negatively affected too much. So it might be that starting with any sufficiently dense grid with roughly the right clustering is good enough.

So this problem is probably solvable, but somebody needs to sit down and do a little bit of research work and thinking about the correct solution.
I'm happy to participate in those discussions and give my thoughts about what to try first, and once we know what to do, I'm happy to implement the solution in cppdlr. Or, it's a nice quick research project for anybody out there to pick up. Jason
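Approach (2) can be made concrete with a sketch. The following is a hypothetical illustration, not cppdlr code: build a composite Chebyshev grid on dyadically growing panels of the positive frequency axis, then snap each node to the nearest integer Matsubara index. The function names, the panel layout, and the Chebyshev order are all assumptions for illustration.

```python
import numpy as np

def chebyshev_nodes(a, b, order):
    """Chebyshev nodes of the first kind, mapped from (-1, 1) to (a, b)."""
    k = np.arange(order)
    x = np.cos((2 * k + 1) * np.pi / (2 * order))
    return 0.5 * (a + b) + 0.5 * (b - a) * x

def sparse_matsubara_indices(nmax, order=16):
    """Composite Chebyshev fine grid on dyadic panels [2^p, 2^(p+1)),
    with each node snapped to the nearest integer Matsubara index."""
    n_panels = max(int(np.ceil(np.log2(nmax))), 1)
    nodes = [np.array([0])]  # always keep the first positive index
    for p in range(n_panels):
        a, b = 2**p, min(2**(p + 1), nmax)
        nodes.append(np.rint(chebyshev_nodes(a, b, order)).astype(int))
    n = np.unique(np.concatenate(nodes))      # sorted positive indices
    return np.concatenate((-n[::-1] - 1, n))  # mirror via fermionic n <-> -n-1

grid = sparse_matsubara_indices(10**6)
print(len(grid))  # grows like order * log2(nmax), not like nmax
```

The number of fine-grid points here scales as the Chebyshev order times the number of dyadic panels, i.e. logarithmically in the cutoff, which is the whole point of the exercise.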
Dear Jason, I agree that there are many ways to handle this. I think that since we have an upper bound on how many imaginary frequency points the DLR construction will select (given by the epsilon rank $r$), we can start from a much sparser initial grid. The idea is to put out panels on the positive Matsubara frequency indices, where the first panel contains all of the first $r$ indices and each subsequent panel doubles the spacing. Here is a code example that constructs the range of indices, with the spacing doubled for each panel index, when the flag `dense_imfreq` is off:

```python
if dense_imfreq:
    n = np.arange(-nmax, nmax + 1)
else:
    # -- Sparse starting grid in imaginary frequency
    r = self.rank
    n_panels = int(np.ceil(np.log2(nmax / r))) + 1 if nmax > r else 2
    n = np.zeros(r * n_panels)
    idx = 1
    for p in range(n_panels):
        d_idx = r * 2**p
        nn = np.arange(idx, idx + d_idx, 2**p)
        n[r*p:r*(p+1)] = nn
        idx += d_idx
    n = np.concatenate((1 - n[::-1], n))
```

and here is the commit that implements this in […]. Can I go on and implement this in […]? Best, Hugo
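To make the scaling of this construction concrete, here is a self-contained version of the same loop, with `rank` and `nmax` passed as plain arguments in place of `self.rank` (the values below are illustrative, not taken from the issue):

```python
import numpy as np

def panel_grid(nmax, rank):
    """Mirrored Matsubara index grid: the first panel holds `rank`
    consecutive indices, and each later panel doubles the spacing."""
    n_panels = int(np.ceil(np.log2(nmax / rank))) + 1 if nmax > rank else 2
    n = np.zeros(rank * n_panels)
    idx = 1
    for p in range(n_panels):
        d_idx = rank * 2**p
        n[rank * p:rank * (p + 1)] = np.arange(idx, idx + d_idx, 2**p)
        idx += d_idx
    # Mirror to negative indices via the fermionic symmetry n <-> 1 - n
    return np.concatenate((1 - n[::-1], n))

dense_size = 2 * 10**6 + 1              # dense grid size for nmax = 10^6
sparse = panel_grid(nmax=10**6, rank=30)
print(len(sparse), dense_size)          # ~1000 points instead of ~2*10^6
```

The grid size is `2 * rank * n_panels`, and `n_panels` grows as `log2(nmax / rank)`, so the starting grid (and hence the memory footprint) scales logarithmically rather than linearly in the cutoff. Note that the last panel can overshoot `nmax` slightly, which is harmless for node selection.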
Sorry for the confusion: since this is really a cppdlr issue, I'm moving the discussion to the corresponding issue there. Let's get the cppdlr implementation settled, and then we can figure out what to do on the TRIQS end.
Description
This issue is related to the cppdlr library and its creation of the DLR basis. When creating a TRIQS DLR mesh, e.g. MeshDLR, the memory usage is linear in the inverse temperature $\beta$, preventing the use of DLR at low temperatures. This is probably caused by the current approach to selecting DLR Matsubara frequencies, since a dense Matsubara grid is used with an upper cutoff proportional to $\beta$.

Currently this limits $\beta$ to $10^4$ in practical use cases, see the appended plot showing the peak memory used during the initialization of a MeshDLR mesh in Python.

This issue was originally observed by @YannIntVeld, thank you for sharing 🥇.
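A back-of-the-envelope model shows why the dense grid costs linear memory. The constants below (a frequency cutoff `w_max`, a complex128 kernel tabulated on the full grid, and the rank value) are assumptions for illustration, not the exact cppdlr internals:

```python
def dense_grid_memory_gb(beta, w_max=10.0, rank=30):
    """Hypothetical estimate: kernel values (complex128, 16 bytes each)
    tabulated on a dense Matsubara grid with cutoff nmax ~ beta * w_max."""
    nmax = int(beta * w_max)
    n_points = 2 * nmax + 1      # all indices from -nmax to nmax
    return n_points * rank * 16 / 1e9

for beta in (1e3, 1e4, 1e5):
    print(beta, dense_grid_memory_gb(beta))  # grows linearly with beta
```

Doubling $\beta$ doubles the number of dense grid points and hence the tabulation cost, consistent with the linear scaling seen in the plot.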
Steps to Reproduce
Here is the benchmark script used to produce the plot above
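The original benchmark script is not reproduced here. A minimal stand-in for the measurement itself, using `tracemalloc` with a dummy allocation in place of the actual `MeshDLR` constructor (which would require a TRIQS installation), might look like:

```python
import tracemalloc
import numpy as np

def peak_alloc_mb(fn):
    """Peak Python-level allocation while running fn()."""
    tracemalloc.start()
    fn()
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return peak / 1e6

def fake_mesh_ctor(beta, w_max=10.0, rank=30):
    # Hypothetical stand-in for MeshDLR(...): allocates an array shaped
    # like a kernel on a dense Matsubara grid with nmax ~ beta * w_max.
    return np.zeros((2 * int(beta * w_max) + 1, rank), dtype=complex)

peaks = {beta: peak_alloc_mb(lambda b=beta: fake_mesh_ctor(b))
         for beta in (1e2, 1e3, 1e4)}
print(peaks)  # peak memory grows roughly linearly with beta
```

Swapping `fake_mesh_ctor` for the real mesh constructor would reproduce the shape of the plot; the stand-in just demonstrates the measurement technique.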
Expected behavior: the peak memory usage should scale as $\log \beta$, for the DLR meshes to be applicable in the low-temperature regime.
Actual behavior: peak memory usage is linear in $\beta$.
Versions