The TPC reconstruction workflow starts from the TPC digits. The clusterer reconstructs clusters in the ClusterHardware format and writes them directly in the RAW page format. The raw data are then passed on to the decoder, which provides the TPC native cluster format to the tracker.
Note: The format of the raw pages is preliminary and does not reflect what is currently implemented in the CRU.
The workflow consists of the following DPL processors:
- `tpc-digit-reader` -> using tool `o2::framework::RootTreeReader`
- `tpc-clusterer` -> interfaces `o2::tpc::HwClusterer`
- `tpc-cluster-decoder` -> interfaces `o2::tpc::HardwareClusterDecoder`
- `gpu-reconstruction` -> interfaces `o2::tpc::GPUCATracking`
- `tpc-track-writer` -> implements simple writing to ROOT file

Depending on the input and output types, the default workflow is extended by the following readers and writers:
- `tpc-raw-cluster-writer` writes the binary raw format data to binary branches in a ROOT file
- `tpc-raw-cluster-reader` reads data from binary branches of a ROOT file
- `tpc-cluster-writer` writes the binary native cluster data to binary branches in a ROOT file
- `tpc-cluster-reader` reads data from binary branches of a ROOT file

MC labels are passed through the workflow along with the data objects and are also written together with the output at the configured stages (see output types).
The input can be created by running the simulation (o2sim) and the digitizer workflow (digitizer-workflow). The digitizer workflow produces the file tpcdigits.root by default; the data are stored in separate branches per sector.
The workflow can be run starting from digits, raw clusters, or (native) clusters, or it can be directly attached to the o2-sim-digitizer-workflow; see the comment on input types below.
The workflow is implemented in the o2-tpc-reco-workflow executable.
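As an illustration, a minimal invocation starting from the digitizer output might look as follows. Only the executable and the file name are taken from this README; the option names and values are assumptions to be checked against `--help`:

```shell
# hypothetical sketch: reconstruct tracks starting from the digit file
# produced by o2-sim-digitizer-workflow (tpcdigits.root by default)
o2-tpc-reco-workflow --infile tpcdigits.root --input-type digits --output-type tracks
```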
Important options for the tpc-digit-reader as initial publisher
The tpc-raw-cluster-reader uses the same options, except for the branch name configuration:

- `--databranch arg` (default: TPCClusterHw): RAW cluster branch
- `--mcbranch arg` (default: TPCClusterHwMCTruth): MC label branch
Options for the tpc-track-writer process
Examples:
The input type digitizer creates the clusterers with dangling inputs; this is used to connect the reconstruction workflow directly to the digitizer workflow.
All other input types will create a publisher process reading data from branches of a ROOT file. File and branch names are configurable. The MC labels are always read from a parallel branch, the sequence of data and MC objects is assumed to be identical.
The output type selects up to which final product the workflow is executed. Multiple outputs are supported in order to write data at intermediate steps, e.g.
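A hedged sketch of such a multi-output invocation; the comma-separated value syntax for the output option is an assumption:

```shell
# hypothetical: write the native clusters in addition to the final tracks
o2-tpc-reco-workflow --input-type digits --output-type clusters,tracks
```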
MC label data are stored in corresponding branches per sector. The sequence of MC objects must match the sequence of data objects.
By default, all data is written to ROOT files, even the data in binary format such as the raw data and cluster data. This allows recording multiple sets (i.e. timeframes/events) in one file, along with the MC labels.
Parallel processing is controlled by the option --tpc-lanes n. The digit reader fans out to n processing lanes, each with a clusterer and a decoder; the tracker fans in from the multiple parallel lanes. For each sector, a dedicated DPL data channel is created, and the channels are distributed among the lanes. The default configuration processes sector data belonging together in the same time slice; earlier implementations distributed the sector data among multiple time slices (thus abusing the DPL time slice concept). The tracker spec implements optional buffering of the input data in case the set is not complete within one invocation of the processing function.
By default, all TPC sectors are processed by the workflow, option --tpc-sectors reduces this to a subset.
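A sketch combining both options; the values are purely illustrative:

```shell
# hypothetical: 4 parallel clusterer/decoder lanes, restricted to sectors 0-17
o2-tpc-reco-workflow --input-type digits --output-type tracks \
    --tpc-lanes 4 --tpc-sectors 0-17
```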
The tracker spec interfaces the o2::tpc::GPUCATracking worker class which can be initialized using an option string. The processor spec defines the option --tracker-option. Currently, the tracker should be run with options:
The most important tracker options are:
In one shell start the data distribution playback, e.g.
In another shell start the pedestal calibration
Create a raw-reader.cfg e.g.
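A minimal sketch of such a file, assuming the `o2-raw-file-reader-workflow` configuration format; the file path is a placeholder:

```ini
[defaults]
dataOrigin = TPC
dataDescription = RAWDATA

[input-0]
filePath = /data/tpc/raw/run123456_link0.raw
```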
Then run
Remove the --no-write-ccdb option and add
o2-tpc-laser-track-filter filters TPC/TRACKS looking for laser track candidates. The output is provided as TPC/LASERTRACKS. With the option --enable-writer, the filtered tracks can be written to file (tpc-laser-tracks.root).
By default, o2-tpc-calib-laser-tracks assumes unfiltered TPC/TRACKS as input. With the option --use-filtered-tracks, the filtered TPC/LASERTRACKS input is used instead.
running without laser track filter
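A hedged sketch of such an invocation; the upstream producer of TPC/TRACKS and all options not named in this README are assumptions:

```shell
# hypothetical: unfiltered TPC tracks are fed directly to the calibration
o2-tpc-reco-workflow --input-type clusters --output-type tracks \
  | o2-tpc-calib-laser-tracks
```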
running with laser track filter
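A hedged sketch with the filter inserted; again, the upstream producer and all options not named in this README are assumptions:

```shell
# hypothetical: filter laser track candidates first, then calibrate
# on the filtered TPC/LASERTRACKS input
o2-tpc-reco-workflow --input-type clusters --output-type tracks \
  | o2-tpc-laser-track-filter \
  | o2-tpc-calib-laser-tracks --use-filtered-tracks
```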
The time slot calibration takes as input prefiltered laser track candidates (from o2-tpc-laser-track-filter), published as TPC/LASERTRACKS.
- `--tf-per-slot arg` (default: 5000): number of TFs per calibration time slot
- `--max-delay arg` (default: 3): number of slots in the past to consider
- `--min-entries arg` (default: 100): minimum number of TFs with at least 50 tracks on each side to finalize a slot; 100 thus corresponds to 5000 matched laser tracks on each side
- `--write-debug`: write a debug output tree
Sending side zeromq
Receiving side zeromq
Sending side shmem
Receiving side shmem
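These sending/receiving pairs can be sketched with the generic DPL proxies; the data spec, addresses, channel names, and transport parameters below are all assumptions:

```shell
# sending side, zeromq transport (hypothetical)
o2-dpl-output-proxy --dataspec "tracks:TPC/LASERTRACKS" \
    --channel-config "name=downstream,method=bind,address=tcp://*:30453,type=push,transport=zeromq"

# receiving side, zeromq transport (hypothetical)
o2-dpl-raw-proxy --dataspec "tracks:TPC/LASERTRACKS" \
    --channel-config "name=readout-proxy,method=connect,address=tcp://localhost:30453,type=pull,transport=zeromq"

# for shared memory, switch the transport and use an ipc endpoint, e.g.
#   transport=shmem,address=ipc:///tmp/tpc-laser-tracks
```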
This requires zero suppression to be done in the first stage. For this, the DigiDump class is used, wrapped in an o2 workflow.
Use either the DD part or raw file playback from above and add as processor
To directly dump the digits to file for inspection, use the reco workflow with
The CMV workflows parse raw TPC data, buffer Common Mode Values (CMVs) per CRU on the FLPs, then merge and aggregate them on a calibration node before serializing the CMVContainer into a TTree. The resulting object can be uploaded to the CCDB or written to disk.
| Executable | Output | Description |
|---|---|---|
| o2-tpc-cmv-to-vector | TPC/CMVVECTOR | Parses raw TPC data and creates vectors of CMVs per CRU |
| o2-tpc-cmv-flp | TPC/CMVGROUP | Buffers N TFs per CRU on the FLP and groups them for forwarding |
| o2-tpc-cmv-distribute | TTree / CCDB payload | Merges CRUs over N TFs on the calibration node, serializes the CMVContainer into a TTree, and either writes it to disk (--dump-cmvs) or forwards it as a CCDB object (--enable-CCDB-output) |
**o2-tpc-cmv-to-vector**

| Option | Default | Description |
|---|---|---|
| --input-spec | A:TPC/RAWDATA | DPL input spec for raw TPC data |
| --crus | 0-359 | CRU range to process, comma-separated ranges |
| --write-debug | false | Write a debug output tree every TF |
| --write-debug-on-error | false | Write a debug output tree only when decoding errors occur |
| --debug-file-name | /tmp/cmv_vector_debug.{run}.root | Name of the debug output ROOT file |
| --write-raw-data-on-error | false | Dump raw data to file when decoding errors occur |
| --raw-file-name | /tmp/cmv_debug.{run}.{raw_type} | Name of the raw debug output file |
| --raw-data-type | 0 | Raw data format to dump on error: 0 = full TPC with DPL header, 1 = full TPC with DPL header (skip empty), 2 = full TPC no DPL header, 3 = full TPC no DPL header (skip empty), 4 = IDC raw only, 5 = CMV raw only |
| --check-incomplete-hbf | false | Check and report incomplete HBFs in the raw parser |
**o2-tpc-cmv-flp**

| Option | Default | Description |
|---|---|---|
| --crus | 0-359 | CRU range handled by this FLP |
| --lanes | hw_concurrency/2 | Parallel processing lanes (CRUs split per lane) |
| --time-lanes | 1 | Parallel lanes for time-frame splitting |
| --n-TFs-buffer | 1 | Number of TFs to buffer before forwarding |
| --dump-cmvs-flp | false | Dump raw CMV vectors per CRU to a ROOT file each TF (for debugging) |
**o2-tpc-cmv-distribute**

| Option | Default | Description |
|---|---|---|
| --crus | 0-359 | CRU range expected from upstream |
| --timeframes | 2000 | Number of TFs aggregated per calibration interval |
| --firstTF | -1 | First time frame index; -1 = auto-detect from first incoming TF; values < -1 set an offset of \|firstTF\|+1 TFs before the first interval begins |
| --lanes | 1 | Number of parallel lanes (CRUs are split evenly across lanes) |
| --n-TFs-buffer | 1 | Number of TFs buffered per group in the upstream o2-tpc-cmv-flp (must match that workflow's setting) |
| --enable-CCDB-output | false | Forward the CMVContainer TTree as a CCDB object to o2-calibration-ccdb-populator-workflow |
| --use-precise-timestamp | false | Fetch orbit-reset and GRPECS from CCDB to compute a precise CCDB validity timestamp |
| --dump-cmvs | false | Write the CMVContainer TTree to a local ROOT file on disk |
| --use-sparse | false | Sparse encoding: skip zero time bins (raw uint16 values; combine with --use-compression-varint or --use-compression-huffman for compressed sparse output) |
| --use-compression-varint | false | Delta + zigzag + varint compression over all values; combined with --use-sparse: varint-encoded exact values at non-zero positions |
| --use-compression-huffman | false | Huffman encoding over all values; combined with --use-sparse: Huffman-encoded exact values at non-zero positions |
| --cmv-zero-threshold | 0 | Zero out CMV values whose magnitude is below this threshold (ADC) after optional rounding and before compression; 0 disables |
| --cmv-round-integers-threshold | 0 | Round values to nearest integer ADC for \|v\| ≤ N ADC before compression; 0 disables |
| --cmv-dynamic-precision-mean | 1.0 | Gaussian centre in \|CMV\| (ADC) where the strongest fractional-bit trimming is applied |
| --cmv-dynamic-precision-sigma | 0 | Gaussian width (ADC) for smooth CMV fractional-bit trimming; 0 disables |
| --drop-data-after-nTFs | 0 | Drop data for a relative TF slot after this many TFs have passed without receiving all CRUs; 0 uses the default derived from --check-data-every-n |
| --check-data-every-n | 0 | Check for missing CRU data every N invocations of the run function; -1 disables checking, 0 uses the default (timeframes/2) |
| --nFactorTFs | 1000 | Number of TFs to skip before flushing the oldest incomplete aggregation interval |
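Putting the three executables together on a single machine might look like the following sketch; the raw-data source and all option values are illustrative assumptions:

```shell
# hypothetical local chain: raw file playback -> CMV extraction -> grouping -> aggregation
o2-raw-file-reader-workflow --input-conf raw-reader.cfg \
  | o2-tpc-cmv-to-vector --crus 0-359 \
  | o2-tpc-cmv-flp --crus 0-359 --n-TFs-buffer 1 \
  | o2-tpc-cmv-distribute --crus 0-359 --timeframes 2000 --n-TFs-buffer 1 --dump-cmvs
```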
In a real online setup, multiple FLPs each process their own CRU subset and forward compressed CMV groups to a central aggregator node via ZeroMQ.
**FLP side (Send.sh)** — run one instance per FLP (pass N_FLPs as first argument):
Each FLP connects to the aggregator's pull socket on port 30453 and pushes TPC/CMVGROUP and TPC/CMVORBITINFO messages. The CRU range is automatically split evenly across N_FLPs.
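A hedged sketch of what such a Send.sh could contain; beyond the executables, the port, and the data specs named in this README, everything (the CRU-splitting arithmetic, the output proxy, the aggregator hostname) is an assumption:

```shell
#!/bin/bash
# hypothetical Send.sh: run one instance per FLP
N_FLPS=$1        # total number of FLPs (first argument, as described above)
FLP_ID=${2:-0}   # index of this FLP, 0-based

# split the 360 CRUs evenly across the FLPs
CRUS_PER_FLP=$(( 360 / N_FLPS ))
FIRST=$(( FLP_ID * CRUS_PER_FLP ))
LAST=$(( FIRST + CRUS_PER_FLP - 1 ))

o2-tpc-cmv-to-vector --crus "${FIRST}-${LAST}" \
  | o2-tpc-cmv-flp --crus "${FIRST}-${LAST}" \
  | o2-dpl-output-proxy --dataspec "group:TPC/CMVGROUP;orbit:TPC/CMVORBITINFO" \
      --channel-config "name=downstream,method=connect,address=tcp://aggregator:30453,type=push"
```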
**Aggregator side (Receive.sh)**:
The aggregator binds the ZeroMQ pull socket and waits for all FLPs to connect. Once TPC/CMVGROUP and TPC/CMVORBITINFO data arrive, o2-tpc-cmv-distribute merges them, applies the configured compression, writes the object to disk, and uploads it to the CCDB.
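Correspondingly, a hedged sketch of the Receive.sh side; the proxy choice and channel details are assumptions:

```shell
# hypothetical Receive.sh: bind the pull socket and aggregate
o2-dpl-raw-proxy --dataspec "group:TPC/CMVGROUP;orbit:TPC/CMVORBITINFO" \
    --channel-config "name=readout-proxy,method=bind,address=tcp://*:30453,type=pull" \
  | o2-tpc-cmv-distribute --crus 0-359 --timeframes 2000 \
      --dump-cmvs --enable-CCDB-output \
  | o2-calibration-ccdb-populator-workflow
```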