As I was browsing the snapshot of Halo 501 in the Illustris-TNG-3-100-Dark simulation, I noticed that the particle IDs in said halo change between adjacent snapshots (for example, not one particle shares the same ID in snapshot 98 as in snapshot 99). Would you have an explanation for that? I am all the more confused since it is clearly stated that the IDs remain constant during the simulation. Does the whole set of particles in a halo change every snapshot?
A given dark matter particle will have the same ID for the entire simulation, and dark matter particles never appear or disappear.
But "Halo 501" at snapshot 99 is not Halo 501 at snapshot 98. You need to use a merger tree to find the progenitor Halo ID of that halo at an earlier snapshot.
If you mean TNG100-3-Dark snapshot 99, then the central subhalo of halo 501 is subhalo 15174 whose (main) progenitor at snapshot 98 is subhalo 15222, with parent halo 489. I am sure that a large fraction of the dark matter particle (IDs) are common between these two (sub)halos.
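The overlap between the two (sub)halos can be checked directly. A minimal sketch: the commented calls show how the two ID sets would be loaded with illustris_python (assuming that module and a local copy of the data; the paths and arguments are illustrative), while small stand-in arrays demonstrate the comparison itself:

```python
import numpy as np

# In practice the two ID sets would come from the group catalogs, e.g.
# (hypothetical calls, assuming illustris_python and local data):
#   ids_99 = il.snapshot.loadHalo(basePath, 99, 501, 'dm', fields=['ParticleIDs'])
#   ids_98 = il.snapshot.loadHalo(basePath, 98, 489, 'dm', fields=['ParticleIDs'])
# Stand-in arrays, to show the overlap check itself:
ids_99 = np.array([11, 12, 13, 14, 15, 16], dtype=np.uint64)
ids_98 = np.array([10, 12, 13, 14, 15, 99], dtype=np.uint64)

# DM IDs are constant in time, so shared IDs mean shared particles:
common = np.intersect1d(ids_99, ids_98)
frac = common.size / ids_99.size
print(f"{common.size} shared IDs ({frac:.0%} of the snapshot-99 halo)")
```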
Thanks for the answer. I still have a doubt about it.
Does the merger tree also record the history of the halos, or only of the subhalos? Is it really possible to access the progenitor of a FoF group? I don't see any field mentioning it in the FoF section. Moreover, I don't see a 'Coordinates' field in the subhalo section of the 'Data Specifications' page, only 'SubhaloPos', for the particle of minimum energy (there are still N elements though). Is it possible to track the particle positions within a (sub)halo?
All of the merger trees we have made are based on subhalos. There are no halo merger trees.
If you would like to find the progenitor/descendants of a halo, then it makes the most sense to go to the central subhalo of that halo, and then use the subhalo merger tree (as I did above).
All of the fields in the catalogs, such as SubhaloPos, are derived from the member particles/cells of a given (sub)halo. SubhaloPos is one definition of the position of the subhalo. You can also load the particles/cells which belong to a subhalo, and all of these have a Coordinates field (individual x,y,z position for each particle/cell).
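The distinction can be sketched as follows. The commented call shows how the member particles would be loaded (a hypothetical illustris_python call; `basePath` and `subhalo_id` are placeholders), while a small stand-in array illustrates that each particle carries its own (x, y, z), whereas the catalog stores a single derived position:

```python
import numpy as np

# Hypothetical real call (assuming illustris_python and a local data copy):
#   dm = il.snapshot.loadSubhalo(basePath, 99, subhalo_id, 'dm',
#                                fields=['Coordinates', 'ParticleIDs'])
# Stand-in: 5 member particles, one (x, y, z) row each.
coords = np.array([[1000.0, 2000.0, 3000.0],
                   [1001.5, 1999.0, 3002.0],
                   [ 998.0, 2001.0, 2999.0],
                   [1002.0, 2003.0, 3001.0],
                   [ 999.5, 1998.5, 3000.5]])

print(coords.shape)  # one row per particle: (5, 3)
# A catalog entry like SubhaloPos is one single position derived from the
# members (the minimum-potential particle); a plain mean is shown here
# only as an illustration of "one position from N particles":
print(coords.mean(axis=0))
```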
Thanks again for your answer.
I have tried what you suggested, that is, getting the progenitor of the subhalo in order to follow its evolution along the tree. Can you confirm the field 'FirstProgenitorID' allows me to do this? I ask because I get doubtful numbers that don't seem to match any subhalo ID. For example, I can get 200000093 after reading the 'MainLeafProgenitorID' of a tree variable. I got this tree variable via the loadTree command: I tried to load the tree of a central subhalo ID, which I had obtained with 'GroupFirstSub' as explained in the example script. Even if I read 'SubhaloID', I get numbers like 30000000300623854, which sounds absurd to me. Isn't it supposed to display the ID of the central subhalo I used to load the tree? The documentation specifies: "Unique identifier of this subhalo, assigned in a "depth-first" fashion (Lemson & Springel 2006). This value is contiguous within a single tree."
What am I missing?
The IDs in the tree are "internal" to the tree; you can use them to follow the nodes and branches of the tree.
But they do not refer to the subhalo IDs of the subhalo catalog at that snapshot. For that purpose, the SubfindID entry of the tree is used.
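A toy illustration of the difference, using made-up tree values shaped like the ones quoted above (the snapshot 99/98 SubfindIDs 15174 and 15222 are from this thread; the snapshot 97 entry and the SubhaloID values are invented, and in practice the arrays would come from the loadTree command):

```python
import numpy as np

# Toy main branch of a tree (values invented, but shaped like the real
# datasets; in practice: tree = il.sublink.loadTree(...)).
tree = {
    # depth-first IDs internal to the tree -- NOT catalog subhalo IDs:
    'SubhaloID': np.array([30000000300623854, 30000000300623855,
                           30000000300623856]),
    # the catalog index ("subhalo ID") valid at each snapshot:
    'SubfindID': np.array([15174, 15222, 15301]),
    'SnapNum':   np.array([99, 98, 97]),
}

# To find which catalog subhalo a tree node corresponds to, pair
# SnapNum with SubfindID, never with SubhaloID:
for snap, sfid in zip(tree['SnapNum'], tree['SubfindID']):
    print(f"snapshot {snap}: subhalo {sfid}")
```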
It clarifies a lot of things, thank you very much!
I would just like to confirm something, to be sure. SubfindID returns a list of IDs, the first element being the subhalo used to load the tree. The other elements of the list belong to the successive progenitors in the tree, right? It seems to make sense, since I checked that the second element of the list is the central subhalo of the halo at snapshot 98, i.e. one snapshot before 99, which is the one I used to load the tree. The same remark holds for MainLeafProgenitorID and FirstProgenitorID (but this time with the 'internal' IDs of the nodes). Moreover, since there is only one central subhalo per halo
The management and structure of the tree seem rather tricky, and I am not sure I can understand exactly how it works by myself.
The other elements of the "list" (i.e. the dataset named SubfindID) for a given tree belong, yes, to all the other progenitors, across different snapshots.
Most use cases of the tree need only the main progenitor branch, which (by construction) is the first N elements of the list. I imagine you only need to work with the MPB (which you can exclusively load with the loadMPB=True option).
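One way to see why the MPB is the first N elements: with the depth-first IDs quoted above, the main branch of the root node is contiguous, spanning internal IDs from SubhaloID up to MainLeafProgenitorID. A toy check of that property (the array values are invented; real arrays would come from loadTree or the loadMPB option):

```python
import numpy as np

# Toy tree rows in depth-first order (values invented; in practice these
# would come from il.sublink.loadTree, or directly via the MPB-only load).
subhalo_id        = np.array([100, 101, 102, 103, 104])  # internal IDs
main_leaf_prog_id = np.array([102, 102, 102, 104, 104])
subfind_id        = np.array([15174, 15222, 15301, 77, 80])
snap_num          = np.array([99, 98, 97, 98, 97])

# Under the depth-first convention, the root's main progenitor branch is
# rows 0 .. (MainLeafProgenitorID - SubhaloID) inclusive:
n_mpb = main_leaf_prog_id[0] - subhalo_id[0] + 1
print("MPB length:", n_mpb)
print("MPB SubfindIDs:", subfind_id[:n_mpb])
print("MPB snapshots:", snap_num[:n_mpb])
```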
On another subject: I haven't been able to connect to the server since this morning. After I click the 'Launch Now' button, the following message is displayed: "Your server appears to be down. Try restarting it from the hub". Is it a global problem, or does it only concern my account? (which is associated with the e-mail firstname.lastname@example.org)
I have another technical question. I'd like to track the coordinates of the DM particles which belong to a specific halo at a given snapshot. The problem is, as you mentioned in a previous post, there is no such thing as a merger tree for halos. Therefore, I do not really know how I am supposed to track these particles across the different snapshots, except by storing the particle IDs and finding them again within the list of DM particles obtained with the loadSubset command. This is a bit long; is there an easier way to do it?
If you want to track individual DM particles, since their IDs are constant and never changing (unlike all baryonic components), then yes you can simply load all DM IDs at snapshots back in time, find the specific ID(s) you are interested in, and then load the corresponding positions.
Or, if you just want to track the location of a halo, you can use the merger tree (by tracking the location of its central subhalo).
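The ID-based lookup can be sketched in a few lines. The commented calls show how the per-snapshot arrays would be loaded (hypothetical illustris_python calls; `basePath` and `snap` are placeholders); small stand-in arrays show the matching step:

```python
import numpy as np

# Stand-ins for one snapshot's DM arrays; in practice (hypothetical calls,
# assuming illustris_python and local data):
#   ids    = il.snapshot.loadSubset(basePath, snap, 'dm', fields=['ParticleIDs'])
#   coords = il.snapshot.loadSubset(basePath, snap, 'dm', fields=['Coordinates'])
ids    = np.array([501, 502, 503, 504], dtype=np.uint64)
coords = np.array([[1.0, 1.0, 1.0],
                   [2.0, 2.0, 2.0],
                   [3.0, 3.0, 3.0],
                   [4.0, 4.0, 4.0]])

target = np.uint64(503)
idx = np.nonzero(ids == target)[0]   # DM IDs are unique: 0 or 1 match
assert idx.size == 1
print("position of particle", target, "=", coords[idx[0]])
```

Repeating this at each earlier snapshot traces the particle's position back in time.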
Thanks for the answer.
Just out of curiosity, do the baryonic components have their IDs changed because they are not conserved (star formation, chemical processes, or whatever...)?
Yes, exactly. In addition, mass can flow between elements (e.g. mass return from stars to gas), so following an element with the same ID through time does not, in that case, actually follow mass. This is the role of the tracer particles.
I have tried tracking the 'dm' particles along the different snapshots, but my current program is excessively long... I wanted to know if, in your opinion, there is a better way to do it; maybe I missed a key data structure which would help me perform it.
I do the following:
It still takes hours; the main problem is that I do not know how to find the right halo to search for the DM particles...
Any idea by chance?
The code should be simple, but yes, it may be very slow/expensive to run.
I like your idea to first check if the DM particle is in the same halo (that is, in the progenitor). This is a quick check, requiring loading only this subset of particles. If it is there, you are done. If it isn't there, you have no choice but to load the entire snapshot. This will be a bit slow. You can do it in chunks to save memory, if needed.
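The progenitor-first strategy can be sketched as below. The arrays stand in for what a halo-subset load and a full-snapshot load would return; `locate_targets` is a hypothetical helper name, not part of any TNG tooling:

```python
import numpy as np

def locate_targets(target_ids, prog_ids, all_ids):
    """Check the progenitor (sub)halo first; fall back to the full
    snapshot only for IDs not found there. The arrays are stand-ins
    for what il.snapshot.loadHalo / il.snapshot.loadSubset return."""
    in_prog = np.isin(target_ids, prog_ids)
    found = {int(t): 'progenitor' for t in target_ids[in_prog]}
    missing = target_ids[~in_prog]
    if missing.size:                     # only now pay for the full load
        in_snap = np.isin(missing, all_ids)
        found.update({int(t): 'snapshot' for t in missing[in_snap]})
    return found

targets = np.array([7, 8, 9], dtype=np.uint64)
prog    = np.array([5, 7, 6], dtype=np.uint64)           # progenitor members
snap    = np.array([1, 2, 8, 7, 5, 6], dtype=np.uint64)  # whole snapshot
print(locate_targets(targets, prog, snap))
```

The cheap check resolves particle 7 without touching the full snapshot; only the leftovers (8, and the absent 9) trigger the expensive load.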
Ok, is the loading command what takes so much time? Because I do not actually load the whole subset with the loadSubset command, but load every halo one by one until I have all the particles. Maybe loading is the heaviest part, because just comparing two numbers should normally take less than a microsecond...
You can benchmark the speed of parts of your code, if you need to determine which takes the most time.
While I would suggest searching the progenitor halo (N), I would not search any other halos (N+1, N+2, ...). If the DM particle is not in the progenitor halo, it is likely in the IGM, or in a halo of a very different mass, so looking through halos one at a time will be slower than simply loading the entire snapshot at that point.
I still have too much trouble doing it; it takes too much time (several tens of minutes) and, in any case, the program 'crashes' before it has any chance to finish. Maybe the best option is to sort the list with a merge sort algorithm and then use a binary search... Beforehand, I store the old indices of the elements together with the associated particle IDs, and only then sort them with a merge sort (O(n log2 n)). Then I look up the particle IDs by binary search (in O(log2 N)), using the stored index to retrieve the coordinates. However, I am still stuck: the program crashes when I try to build the list with the indices and the IDs. There are around 150,000,000 particles, and it stops around the 35,000,000th iteration...
I really have no idea what else I can do. I think it theoretically works and improves the speed of the calculation, but it seems the Jupyter notebook cannot withstand that much. Or is it my computer?
If you are running on the Lab, there is a memory limit (10GB), so it will not be possible to load e.g. the Coordinates of an entire large snapshot at once.
A nice approach is to load the data "in chunks" as discussed here. Since you are just looking for your old ID(s), you can search chunk by chunk, and stop once you have found them all.
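A minimal sketch of that chunked search with early stopping, which also uses np.searchsorted (a vectorized binary search) against the sorted target IDs. `find_in_chunks` is a hypothetical helper, and the small (ids, coords) pairs stand in for per-chunk reads of the snapshot's ParticleIDs/Coordinates datasets:

```python
import numpy as np

def find_in_chunks(targets, chunks):
    """Search for target IDs chunk by chunk, stopping early once all
    are found. `chunks` yields (ids, coords) pairs, standing in for
    per-chunk reads of a snapshot's ParticleIDs / Coordinates."""
    targets = np.sort(np.asarray(targets))  # sorted -> binary search works
    positions = {}
    for ids, coords in chunks:
        # np.searchsorted is the binary search; then confirm real matches:
        j = np.searchsorted(targets, ids)
        hit = (j < targets.size) & (targets[np.minimum(j, targets.size - 1)] == ids)
        for k in np.nonzero(hit)[0]:
            positions[int(ids[k])] = coords[k]
        if len(positions) == targets.size:  # early stop: all IDs found
            break
    return positions

chunk1 = (np.array([1, 4, 9]),  np.array([[0., 0, 0], [1, 1, 1], [2, 2, 2]]))
chunk2 = (np.array([12, 42]),   np.array([[3., 3, 3], [4, 4, 4]]))
chunk3 = (np.array([77, 80]),   np.array([[5., 5, 5], [6, 6, 6]]))
found = find_in_chunks([42, 4], [chunk1, chunk2, chunk3])
print(sorted(found))  # [4, 42] -- chunk3 was never scanned
```

Because each chunk is loaded, searched, and discarded, peak memory stays at one chunk's worth rather than the full 150-million-particle arrays.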