Detailed debug information about stepping can be directed to standard output using the `LD_PRELOAD` env variable, which "injects" a special logging library (intercepting some calls) into the executable that follows on the command line.
The stepping logger information can also be directed to an output tree for more detailed investigations. The default file name is `MCStepLoggerOutput.root` (and can be changed by setting the `MCSTEPLOG_OUTFILE` env variable).
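For illustration, preloading the logger and redirecting its output file might look like the following; the library path and the simulation executable (`o2-sim-serial`) are placeholders for your actual setup:

```bash
# Sketch only: adjust the library path and the simulation command to your installation.
MCSTEPLOG_OUTFILE=myStepLog.root \
LD_PRELOAD=/path/to/libMCStepLogger.so o2-sim-serial -n 5
```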
Finally, the logger can use a map file to give names to some logical grouping of volumes, for instance to map all sensitive volumes from a given detector `DET` to a common label `DET`. That label can then be used to query information about the detector steps "as a whole" when using the `StepLoggerTree` output tree.
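The map file is typically passed via an environment variable as well; the variable name used below (`MCSTEPLOG_VOLMAPFILE`) is an assumption and should be checked against the MCStepLogger documentation:

```bash
# Assumed variable name; verify against your MCStepLogger version.
MCSTEPLOG_VOLMAPFILE=/path/to/volmap.txt \
LD_PRELOAD=/path/to/libMCStepLogger.so o2-sim-serial -n 5
```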
Note also the existence of the `LD_DEBUG` variable which can be used to see in detail what libraries are loaded (and much more if needed...).
On macOS, the corresponding dynamic-linker variables have different names:

- `LD_PRELOAD` must be replaced by `DYLD_INSERT_LIBRARIES`,
- `LD_DEBUG=libs` must be replaced by `DYLD_PRINT_LIBRARIES=1`,
- `LD_DEBUG=statistics` must be replaced by `DYLD_PRINT_STATISTICS=1`.
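For example, the preload line from above would then read as follows (again with placeholder paths, and note the `.dylib` suffix on macOS):

```bash
# macOS: DYLD_INSERT_LIBRARIES replaces LD_PRELOAD.
MCSTEPLOG_OUTFILE=myStepLog.root \
DYLD_INSERT_LIBRARIES=/path/to/libMCStepLogger.dylib o2-sim-serial -n 5
```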
Information collected and stored in `MCStepLoggerOutput.root` can be further investigated using the executable `mcStepAnalysis`. This executable is independent of the simulation itself and therefore produces no overhead when running a simulation. So far, two commands are available (`analyze`, `checkFile`), each coming with a useful help message.
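Assuming the usual `--help` convention (the exact invocation may differ), the help can be printed like this:

```bash
mcStepAnalysis --help
# or per command, e.g.
mcStepAnalysis analyze --help
```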
Two file formats with a standardised structure play a role. On the one hand, there are the files produced by the step logging, which are the input files for the analysis as explained in the following. On the other hand, each analysis produces an output file containing histograms along with some meta information. Sanity and type of a file can be checked via the `checkFile` command.
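For illustration (the `-f` option for `checkFile` is an assumption, mirroring the `analyze` command described below):

```bash
# Check the sanity and the type (step-logger file vs. analysis file) of a file.
mcStepAnalysis checkFile -f MCStepLoggerOutput.root
```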
The basic command containing all required parameters is sketched below.
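In this sketch the file name, output directory and label are placeholders, while the option names are those explained in the list that follows:

```bash
mcStepAnalysis analyze -f MCStepLoggerOutput.root -o parent/output/dir -l myLabel
```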
where

- `-f <MCStepLoggerOutputFile>` passes the input file produced with the MCStepLogger as explained above (default name is `MCStepLoggerOutput.root`),
- `-o <parent/output/dir>` provides the top directory for the analysis output (if this does not exist, it is created automatically),
- `-l <label>` adds a label, e.g. for plots produced later.

A ROOT file at `parent/output/dir/MetaAnalysis/Analysis.root` is produced containing all histograms as well as important meta information. Histogram objects are derived from ROOT's `TH1` classes.
Files produced as described before can be investigated further or used to plot the histograms therein. The interface to read these files is the class `AnalysisFile`, and histograms can be requested by their names.
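While `AnalysisFile` is the intended interface, the output is an ordinary ROOT file, so a quick look can also be taken directly with plain ROOT. The histogram name and the in-file layout in this sketch are assumptions:

```cpp
// readAnalysis.C -- quick look at the analysis output with plain ROOT.
// The histogram name "nSteps" and the assumption that histograms sit in the
// top-level directory of the file are placeholders; adapt to the actual content.
#include "TFile.h"
#include "TH1.h"
#include <memory>

void readAnalysis()
{
  std::unique_ptr<TFile> file(TFile::Open("parent/output/dir/MetaAnalysis/Analysis.root"));
  if (!file || file->IsZombie()) {
    return; // file missing or corrupted
  }
  // request a histogram by its name
  if (auto* hist = dynamic_cast<TH1*>(file->Get("nSteps"))) {
    hist->Print();
  }
}
```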
MCAnalysisManager
There is the static method `MCAnalysisManager::Instance()` which returns a reference to a static instance. So always make sure you don't copy it but get the reference in case you want to work with that instance in a global scope.
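That is, always take the reference as in this minimal sketch (the variable name is arbitrary):

```cpp
// Take a reference to the global instance; do NOT copy it.
MCAnalysisManager& analysisManager = MCAnalysisManager::Instance();
// "auto analysisManagerCopy = MCAnalysisManager::Instance();" would attempt a copy,
// and any changes made through it would not affect the global instance.
```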
Histograms which should be written to disk in an analysis are managed by `MCAnalysisFileWrapper` objects. These also make sure that no histogram is created twice. Therefore, all of these histograms should be created like `T* myHisto = MCAnalysis::getHistogram<T>(...)`, where the template parameter `T` must be a class deriving from ROOT's `TH1`. It then returns a pointer to the desired object. Managing histograms not at the level of a single analysis also makes it possible to request histograms from another analysis. In that way one can write a custom analysis for a specific use case but still ask, e.g., for a histogram from the `BasicMCAnalysis` to derive some additional and more generic information about a simulation run. Hence, never manually delete an object obtained like this.
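For instance, inside an analysis this might look as follows; the histogram name and the binning arguments are placeholders, since the exact parameter list of `getHistogram` is not spelled out here:

```cpp
// Book a managed TH1D; the MCAnalysisFileWrapper owns it, so never delete it.
TH1D* nStepsPerVolume = getHistogram<TH1D>("nStepsPerVolume", 200, 0., 200.);
```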
For the `BasicMCAnalysis` there is a small test suite to compare the values obtained from a simulation run to reference values contained in a JSON file. So far, this is a prototype covering only the total number of steps and the total number of tracks obtained in the simulation. The structure of the JSON file is sketched below.
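The key names in this sketch are hypothetical placeholders for the two observables mentioned above; the real keys may differ:

```json
{
  "nSteps": 1234567,
  "nTracks": 54321
}
```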
The test is steered via
Note that the test does not know anything about the settings of the simulation run, i.e. there is no information about the primary generator, the transport engine, etc. The user has to make sure to apply this coherently.
Although a number of different observables is already provided, users might want to add custom observables for their analysis. To do so, a directory for custom analyses has to be created where analysis macros can be provided and loaded at run-time. Note that only the basic analysis is actually contained in the compiled code; one of the main reasons for that is to enable a coherent comparison between different points in the git history. However, if you feel there is an important observable missing, feel free to report that.
The logic of adding a custom analysis is very similar to that of `Rivet`, and the general workflow should look familiar in any case. Say your analysis macro directory is `$ANALYSIS_MACROS/` where you have your macro `mySimulationAnalysis.C` (don't place any other files there since these cannot be read...). A skeleton of such a macro, with comments on how and why things are implemented the way they are, is outlined below.
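This is only a rough sketch: the base class `MCAnalysis` and the `getHistogram` call are taken from the description above, while the header path, constructor signature, the names of the virtual hooks (`initialize`, `analyze`, `finalize`) and the step-container types are assumptions that need to be checked against the actual MCStepLogger headers.

```cpp
// mySimulationAnalysis.C -- rough sketch of a custom analysis macro.
// Header path, hook names and step-container types are ASSUMPTIONS.
#include "MCStepLogger/MCAnalysis.h" // assumed header
#include "TH1D.h"
#include <vector>

class MySimulationAnalysis : public MCAnalysis
{
 public:
  // the analysis name; assumed to determine the output sub-directory
  MySimulationAnalysis() : MCAnalysis("mySimulationAnalysis") {}

 protected:
  // book managed histograms via getHistogram (never delete them manually);
  // histogram name and binning are placeholders
  void initialize() override
  {
    mNStepsPerEvent = getHistogram<TH1D>("nStepsPerEvent", 100, 0., 100000.);
  }

  // fill custom observables from the logged steps of one event;
  // the container types and this signature are assumptions
  void analyze(const std::vector<StepInfo>* const steps,
               const std::vector<MagCallInfo>* const magCalls) override
  {
    mNStepsPerEvent->Fill(steps->size());
  }

  // optional post-processing before histograms are written to disk
  void finalize() override {}

 private:
  TH1D* mNStepsPerEvent = nullptr;
};
```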
After having this, you are ready to include it in the analysis run by typing a command along the following lines.
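Here the input file, output directory and label are placeholders as before; the two new options are explained below:

```bash
mcStepAnalysis analyze -f MCStepLoggerOutput.root -o parent/output/dir -l myLabel \
                       -d $ANALYSIS_MACROS -a mySimulationAnalysis
```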
where now

- `-d $ANALYSIS_MACROS` points the executable to the directory where your macros are located,
- `-a mySimulationAnalysis` tells it which analysis to load. In case you have more analyses in that directory you want to load, just append the names of all analyses you want to run.

The output of the custom analysis is written to `parent/output/dir/mySimulationAnalysis/`
and that's it.