Downloading data (in HDF5) for different snapshots

Shivani Thakur
  • 18 Jul '22

I have been trying to download the positions, velocities, and masses of the star particles (PartType4) for a given subfind ID, following your API documentation for Python. It worked perfectly the first time around, but running the same code for a different snapshot gives the error:
KeyError: "Unable to open object (object 'PartType4' doesn't exist)"

It doesn't seem to overwrite the previously created HDF5 file. Is there a way to create separate HDF5 files for the data from different snapshots, instead of needing to delete the HDF5 file each time it's created?

Thank you for your time!

~ Shivani

Dylan Nelson
  • 18 Jul '22

If you are using the get() function and it has downloaded and saved a file, then it returns the name of that file.

You should probably then rename (with python), or organize in some way, the resulting files. E.g. into directories for simulation, snapshot, and so on. Otherwise, you may start to lose track of what files came from where, and they may start to overwrite each other (as you have seen).
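As a minimal sketch of that renaming step (assuming, as described in the API docs, that get() returns the saved filename as a string; the helper name organize_cutout and the directory layout here are my own choices, not part of the API):

```python
import os
import shutil
import tempfile

def organize_cutout(saved_filename, sim, snapnum, subhalo_id, base_dir):
    """Move a downloaded cutout into base_dir/<sim>/snap_<NNN>/ and
    rename it so the subhalo id is part of the filename, so cutouts
    from different snapshots can never overwrite each other."""
    dest_dir = os.path.join(base_dir, sim, "snap_%03d" % snapnum)
    os.makedirs(dest_dir, exist_ok=True)
    dest = os.path.join(dest_dir, "cutout_%d.hdf5" % subhalo_id)
    shutil.move(saved_filename, dest)
    return dest

# demo: an empty placeholder file stands in for the file that get() saved
base = tempfile.mkdtemp()
src = os.path.join(base, "cutout.hdf5")
open(src, "wb").close()
new_path = organize_cutout(src, "TNG100-1", 67, 12345, base)
print(new_path)
```

Any scheme works as long as the simulation, snapshot, and subhalo ID all end up somewhere in the path.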

Shivani Thakur
  • 18 Jul '22

I have been trying to rename the files in this way:
url = "" + str(snapno) + "/subhalos/" + str(id)
sub = get(url) # get json response of subhalo properties
file_name = url + "/cutout.hdf5/{0}".format(snapno)
saved_filename = get(file_name,params) # get and save HDF5 cutout file

But it ends up giving me another error:

in <module>
    with h5py.File(saved_filename) as f:
in __init__
    name = filename_encode(name)
in filename_encode
    filename = fspath(filename)

TypeError: expected str, bytes or os.PathLike object, not Response

Am I naming them wrong?

Dylan Nelson
  • 18 Jul '22

Hi Shivani,

You will have to look into the Python code you are writing. Check that saved_filename is actually a string containing the name of a file, and not something else.
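A small guard before opening the file makes this kind of mistake fail with a clearer message (this is a sketch; the ensure_filename helper is my own, and it assumes the behavior of the get() example from the API docs, which returns a filename string only when the server sent back an actual file, and can return parsed JSON or a raw Response for other URLs):

```python
def ensure_filename(result):
    """Check that what get() returned is a filename (str) before
    handing it to h5py.File, which only accepts path-like objects."""
    if not isinstance(result, str):
        raise TypeError(
            "expected a saved filename (str), got %s" % type(result).__name__
        )
    return result

# a string passes through unchanged
print(ensure_filename("cutout_12345.hdf5"))

# anything else fails loudly, before h5py produces its own confusing error
try:
    ensure_filename({"snap": 67})
except TypeError as err:
    print("caught:", err)
```

Here the TypeError in the traceback ("not Response") means the second get() call returned the raw HTTP response rather than a saved filename, so the URL it was given did not point at a downloadable file.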
