I am trying to get all of the particles (gas, dm, stars) within a certain volume (e.g., a cube of a specified size centered on some specified location).
According to the example scripts, the following should work:
dm_pos = il.snapshot.loadSubset(basePath,99,'dm',['Coordinates'])
However, I am getting a memory error for TNG300.
I could try to find a way to use loadHalo over a list of FoF halos, but this would miss the "inner fuzz" and "outer fuzz" (http://www.tng-project.org/data/docs/specifications/ ).
Can you suggest an algorithm to help? Is there a way to load the "fuzz" separately?
For TNG300-1, this load command requires exactly 2500^3 * 3 * 8 bytes (the 3 from the x,y,z coordinates, the 8 from float64 each). This is roughly 350 GB, which is much more than is available on the JupyterLab interface, for instance.
If you have an analysis node with this much memory, and have downloaded the data there, then this will certainly work. I do indeed often work this way, as it is the most efficient approach.
But if you don't, there really isn't any need to load all the particles at the same time. You could, e.g., load just 1% of the particles (requiring ~3.5 GB of memory), select those within the volume of interest, compute whatever you need (e.g. a radial profile?), then discard those particles, load the next 1%, and so on.
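A minimal sketch of that chunked approach, using synthetic NumPy arrays to stand in for successive partial loads (in practice each chunk would come from loadSubset or a direct h5py read of one snapshot file; the box size, center, and cube size here are just illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
box_size = 205000.0  # TNG300 box side length in ckpc/h
# Fake particle coordinates standing in for the full snapshot.
all_pos = rng.uniform(0.0, box_size, size=(100000, 3))

center = np.array([100000.0, 100000.0, 100000.0])  # cube center (illustrative)
half = 5000.0                                      # half the cube side length

def in_cube(pos, center, half, box_size):
    """Boolean mask of particles inside a cube, honoring the periodic box."""
    d = np.abs(pos - center)
    d = np.minimum(d, box_size - d)  # periodic distance along each axis
    return np.all(d < half, axis=1)

# Process the particles in small chunks: select, accumulate, discard.
n_selected = 0
for chunk in np.array_split(all_pos, 10):  # stand-in for ten partial loads
    mask = in_cube(chunk, center, half, box_size)
    n_selected += mask.sum()  # or accumulate a profile, etc., then drop the chunk
```

Peak memory is then set by one chunk rather than the full snapshot, while the accumulated result is identical to a single full load.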
If your computation requires all particles in the volume simultaneously, then you could do it in two phases: first, locate the indices of all particles in the volume with the chunked approach above and save this index set; then load only the desired properties of those particles.
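A sketch of that two-phase idea, again with synthetic arrays in place of actual snapshot reads (the property array and geometry are illustrative; in phase two you would read only the saved indices from disk rather than slice an in-memory array):

```python
import numpy as np

rng = np.random.default_rng(1)
box_size = 205000.0
all_pos = rng.uniform(0.0, box_size, size=(100000, 3))
all_mass = rng.uniform(0.1, 1.0, size=100000)  # fake per-particle property

center = np.array([50000.0, 50000.0, 50000.0])
half = 8000.0

def in_cube(pos, center, half, box_size):
    d = np.abs(pos - center)
    d = np.minimum(d, box_size - d)  # periodic distance along each axis
    return np.all(d < half, axis=1)

# Phase 1: chunked pass over Coordinates only, recording global indices.
index_parts = []
offset = 0
for chunk in np.array_split(all_pos, 10):
    mask = in_cube(chunk, center, half, box_size)
    index_parts.append(np.flatnonzero(mask) + offset)  # shift to global indices
    offset += len(chunk)
indices = np.concatenate(index_parts)
# (Save `indices` to disk here, e.g. np.save, so phase 2 can run separately.)

# Phase 2: load only the desired properties for those particles.
selected_mass = all_mass[indices]
```

Phase 1 only ever holds coordinates for one chunk, and phase 2 only holds the (much smaller) in-volume subset, so the full 350 GB array is never in memory.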