Loading all (global) particle data in subsets/chunks (avoid running out of memory)

Aniruddha Madhava
  • 20 Jul '22

Good evening,
I wanted to extract particle data from the TNG300-3 simulation for a couple of redshifts. Of course, due to memory limits in the JupyterLab Workspace, it is not possible to extract information for all the particles in the entire box. As such, I was looking into extracting information only for particles located within a smaller "field of view". I looked at the following threads:

I tried to implement the suggestions and code given in those threads, but it did not work, as I did not fully understand what I was doing. Would it be possible to clarify this? How exactly do I go about loading a smaller FOV from the TNG300 simulation? I would really appreciate it if someone could help.
Thank you so much.

Dylan Nelson
  • 1 Aug '22

Both of the code snippets in the second thread are good starting points for this.

The first example shows how you can re-use the existing functions: create a dictionary called subset containing the specific index range you want to load, then pass it to snapshot.loadSubset().
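As a rough sketch of that approach: the helper below splits a particle count into contiguous index ranges, and the comments show how such a range could be plugged into a subset dictionary for illustris_python (the 'offsetType'/'lenType' key names and the getSnapOffsets() call are my assumptions; check them against your illustris_python version and the threads above):

```python
import numpy as np

def chunk_ranges(n_total, n_chunks):
    """Split [0, n_total) into n_chunks contiguous (start, end) index ranges."""
    edges = np.linspace(0, n_total, n_chunks + 1, dtype=np.int64).tolist()
    return list(zip(edges[:-1], edges[1:]))

# Example: 10 particles split into 3 chunks
print(chunk_ranges(10, 3))  # -> [(0, 3), (3, 6), (6, 10)]

# Hypothetical use with illustris_python (not runnable without the data):
#   import illustris_python as il
#   ptNum = 1  # dark matter
#   subset = il.snapshot.getSnapOffsets(basePath, snapNum, 0, "Subhalo")
#   for start, end in chunk_ranges(nPartTotal, 100):
#       subset['offsetType'][ptNum] = start       # assumed key name
#       subset['lenType'][ptNum] = end - start    # assumed key name
#       dm = il.snapshot.loadSubset(basePath, snapNum, 'dm',
#                                   fields=['Coordinates'], subset=subset)
#       # ... process this chunk, then let it be garbage collected ...
```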

The second example shows how you can use the simulation.hdf5 file to bypass the illustris_python scripts entirely, and instead use h5py slice syntax to load particle data in sequential chunks.
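A minimal, self-contained sketch of that pattern: a small stand-in HDF5 file is created here so the code runs anywhere, but the same slicing loop applies to the real simulation.hdf5, where each field of the whole box appears as one large dataset (the exact dataset path below follows the usual 'PartType1/Coordinates' layout; verify against the data specifications):

```python
import os
import tempfile

import h5py
import numpy as np

# Build a small stand-in file; in simulation.hdf5 the dataset would instead
# span all particles of the box for that type and field.
path = os.path.join(tempfile.mkdtemp(), "demo.hdf5")
with h5py.File(path, "w") as f:
    f.create_dataset("PartType1/Coordinates", data=np.ones((1000, 3)))

chunk_size = 250  # tune so that one chunk comfortably fits in memory
total = 0.0
with h5py.File(path, "r") as f:
    dset = f["PartType1/Coordinates"]
    for start in range(0, dset.shape[0], chunk_size):
        end = min(start + chunk_size, dset.shape[0])
        coords = dset[start:end]      # only this slice is read from disk
        total += float(coords.sum())  # process the chunk, then discard it
```

The key point is that `dset[start:end]` reads only the requested rows from disk, so peak memory is set by chunk_size rather than by the size of the full dataset.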

Dylan Nelson
  • 1 Feb

A powerful alternative is to use scida, which can automatically chunk calculations.
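scida builds on dask, exposing snapshot fields as lazy, chunked arrays so that reductions are evaluated one chunk at a time. The same idea with plain dask, as a rough illustration (the array name and sizes are made up; see the scida documentation for its own load() interface):

```python
import dask.array as da
import numpy as np

# A stand-in for a large particle field; dask splits it into chunks and
# computes the reduction chunk-by-chunk, keeping peak memory low.
masses = da.from_array(np.ones(1_000_000, dtype=np.float64), chunks=100_000)
total_mass = masses.sum().compute()
print(total_mass)  # -> 1000000.0
```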
