Proposed Trigger Configuration file changes
for 2002-2003 run
Version 10.0
Jeff Landgraf: jml@bnl.gov
We need to support two major changes to the trigger for the 2002-2003 run. The first is that we need to support the “New TCU”, which from the configuration perspective is different mainly because the Trigger Word lookup table is split in two: the Physics Word LUT and the Trigger Word LUT. Ideally, we keep a similar user interface, in which the user defines bit patterns that correspond directly to a trigger condition. This is not entirely trivial to do. The second change is to support “Trigger accounting”. The main point here is that the higher level triggers are linked to the lower level triggers. Each L1 trigger will have a list of Trigger Words, and will only accept events if the trigger word is on that list. L2 & L3 triggers will only accept events accepted by a corresponding algorithm at the lower level.
Also, purely for my convenience, I want to clean up a few aspects of the configuration. First, L3 algorithms should go into the trigger setup structures. Second, algorithm structures should be modified so that L1, L2, & L3 algorithm are setup the same way. Finally, I want to expand the configurable DSM registers, to be more general and to reduce wasted space a bit.
The general run control scheme stays exactly the same. All commands are unchanged. The run control writes the configuration structure into binary files that are available to all nodes over NFS. Run control sends the message RTS_SEND_CONFIG to all nodes. Each node reads the configuration file as required and once finished returns RTS_SEND_CONFIG to the run control. The configuration file is not changed between the two messages, but after the RTS_SEND_CONFIG message has been sent back to Run Control, the file should not be read by any node.
There are a lot of changes needed for the Trigger setup & some for the L3 algorithm setup. The rest of the configuration remains unchanged.
The most current version of the configuration structures will be in
/RTS/include/RC_Config.10.0.h
When we make this the production version, I will link RC_Config.h to point to this file.
There are many points throughout this document where I have arrays of quantities that can have a variable number of arguments. I set the maximum index of these arrays to MAX for now, as I haven’t specified all of them yet. I assume that I know the number of elements in the arrays. In the final configuration file I will indicate this by delimiting the last entry with zeros if possible. If zero is a valid parameter, I will add a separate length field to the appropriate structure.
In the end to configure the TCU we need to create 4 LUTs for the TCU: the Physics Word LUT, Trigger Word LUT, Action Word LUT and Prescale table. Last year, run control sent a set of bit masks to L1CTL which completely specified the TW LUT and the AW LUT. We will do the same thing this year, although the rules for building the LUT’s from the bit masks will be more complicated. In addition, the GUI itself will be building these lookup tables locally according to the same rules as part of the process for generating the structures and verifying that they are consistent.
On the user side, the trigger setup will include one or more Triggers. Each Trigger is a collection of trigger conditions, where a trigger condition is the physics part of the TW_DEF table’s bit masks from last year. It also includes the detector LIVE requirement, the desired action word, pre-post counters, and detector request. Finally, it will include fields that the GUI will use to calculate desired pre-scales. (It will also include the info for the L1, L2 & L3 triggers, to be discussed later.) The structure will look like this:
struct Trigger
{
    UINT32 offlineBit;
    PwCondition L0conditions[MAX];
    UINT32 detectorLiveOnBits;    // required to be alive
    UINT32 detectorLiveOffBits;
    UINT32 detectorRequest;       // these fire!
    UINT32 desiredAW;
    UINT32 desiredPre;
    UINT32 desiredPost;
    float ZDC_rate;
    float expected_L0_fraction;
    float desired_L0_rate;
    //-------------------------------------------------------
    // The following are generated by the GUI from
    // the above parameters...
    UINT32 desiredL0PS;           // User doesn’t enter
    UINT32 PW_used[MAX];          // List of PW’s contributing
    UINT32 TW_used[MAX];          // List of TW’s contributing
    //-------------------------------------------------------
    ....more stuff not related to TCU configuration....
};
struct PwCondition
{
    UINT32 onbits;
    UINT32 offbits;
};
This is roughly the same as TW_DEF last year. There are several important differences:
The overall trigger setup is a collection of Triggers:
struct TrgSetup
{
    Trigger triggers[MAX];          // GUI only
    PwCondition contaminationDef;
    //----------------------
    PwCondition pwc[MAX];           //
    PwLink pwl[MAX];                // Define the TCU LUTs
    TwCondition twc[MAX];           //
    TwLink twl[MAX];                //
    AwCondition awc[MAX];
    //----------------------
    ...stuff unrelated to TCU...
};
So we start with a TrgSetup structure, containing n valid trigger entries that were entered by the user. The contaminationDef condition is also set by the user. All other entries in the TrgSetup structure start out zeroed.
The first thing I do is loop through each trigger and copy each L0Condition into the pwc[] array. It is possible that two triggers have conditions that are exactly the same. In that case, I only enter the condition into pwc[] once. I call the index in this array a condition’s pwcIdx.
Secondly I build a table that is closely analogous to the PW LUT, which I’ll call the PwDef table. This is constructed as follows:
UINT32 PwDef[1<<16];    // index is input bits for PW LUT

for(int input = 0; input < (1<<16); input++)
{
    for(int pwcIdx = 0; pwcIdx < nConditions; pwcIdx++)
    {
        if(ConditionSatisfied(pwc[pwcIdx], input))
            PwDef[input] |= 1 << pwcIdx;
    }
}
PwDef is logically just like a PW although it has nConditions bits rather than 6 bits. In fact, it is even better than an arbitrary PW because it explicitly contains the conditions that lead to the PW. Regions where two conditions overlap are obvious because the pwcIdx bit is set for both conditions. The regions defined by a given PwDef are mutually exclusive by construction, and the confusing order dependence of conditions is completely removed.
To get a 6 bit action word, we need to associate the PwDef’s to the PW’s. For this we use the pwl[] array:

struct PwLink
{
    UINT32 pwDef;
    UINT32 PW;
};
I fill this table by looping through the PwDef[] array I constructed above. For each unique value for PwDef I add an entry to PwLink. The PW value starts at 0 (for PwDef==0) and increments on each new PwDef.
Potentially, there could be as many as 2^nConditions PW’s created in this way, but in fact most conditions only overlap with one or two other conditions, and the number of PW’s turns out to be on the order of 2 * nConditions, which easily fits into the 6-8 bits allocated.
To construct the actual PW LUT, one uses the pwc[] and pwl[] in a similar loop:
UINT32 PWLUT[1<<16];    // index is input bits for PW LUT

for(int input = 0; input < (1<<16); input++)
{
    UINT32 PW = 0;
    UINT32 PwDef = 0;

    for(int pwcIdx = 0; pwcIdx < nConditions; pwcIdx++)
    {
        if(ConditionSatisfied(pwc[pwcIdx], input))
        {
            PwDef |= 1 << pwcIdx;
        }
    }

    for(int i = 0; i < pwlEntries; i++)
    {
        if(pwl[i].pwDef == PwDef)
        {
            PW = pwl[i].PW;
            break;
        }
    }

    PWLUT[input] = PW;

    if(ConditionSatisfied(contaminationDef, input))
    {
        PWLUT[input] |= 1 << 13;   // Assuming contamination bit is 13!
    }
}
There are only two hitches: (1) you need to find the PW corresponding to the PwDef, and (2) the contamination bit is effectively the 13th bit of the PW, from which it gets sent to the detector busy FPGA.
The construction for the TW LUT is highly analogous to that of the PW LUT. The first difference is that the twConditions are not defined by onbits and offbits. Instead the TW condition is defined by:
struct TwCondition
{
    UINT32 PW;
    UINT32 detectorLiveOnBits;
    UINT32 detectorLiveOffBits;
};
Each PW can require different detector bits. In addition, it is possible that a single PW comes from more than one trigger, resulting in more than one set of detector bits to be associated with a given PW.
For each trigger, I will maintain a list of PWs that satisfy the conditions for that trigger. I can produce this list at any time by looping through the pwl[] array to find conditions contributing to each PW. Then I loop through each trigger to see which triggers contain that condition.
To build the TwConditions, I loop through every trigger. For each PW involved, I add an entry into twc[] containing the PW and that trigger’s detector masks. Again, as for the PwConditions, I will suppress repeated conditions.
Now I follow exactly the same procedure as for the PW LUT to build the twl[] structure.
struct TwLink
{
    UINT32 twDef_hi;
    UINT32 twDef_lo;
    UINT32 TW;
};
I am assuming here that there may be more than 32 TwConditions, so I expand the number of bits for the TwDef.
In the same way as I obtain the list of PW for each Trigger, I will construct a list of TW for each Trigger.
There is an important implication here, which is that the TW and PW are generated by the computer without any information about the intended use. Therefore, TW and PW become featureless sequential integers. We drop any fixed assignments such as 0x1100 --> central. Their definition is not guaranteed to be the same for runs with different configurations. They become internal parameters and will only be used by us for debugging.
I treat the AW LUT & Prescale table together because they are both indexed by the TW. At this point, we have a set of Triggers. Each Trigger has a corresponding set of PW’s and a set of TW’s associated with it.
The GUI will provide the following structure to aid in the AW & Prescale table construction:
struct AwCondition
{
    UINT32 TW;
    UINT32 PS;
    UINT32 AW;
    UINT32 detectorRequest;
    UINT32 pre;
    UINT32 post;
};
To build this structure, for each TW I will examine all Triggers that contain that TW. I will set the values according to the following table:
desiredPS take the smallest value
desiredAW require exact match
desiredDetectorRequest require exact match
desiredPre take the biggest value
desiredPost take the biggest value
If the value requires an exact match, but two or more triggers have different values then the trigger setup is impossible and the GUI will not allow the run to start.
It is important to make the handling of High-Level triggers consistent. It is also important to be able to integrate the different levels together if we wish to analyze the data produced.
Here is the paradigm I propose. (Most elements should be familiar because this is not a radical departure from the current scheme in L3.):
The structure used to configure L1, L2 & L3 algorithms will look something like this:
struct L1Algorithm
{
    int id;
    int userInt[5];
    float userFloat[5];
    int specialProcessing;
    //-------------------------------------------
    int PS;              // generated by GUI
    int statusBit;
    UINT32 evtTW[MAX];
    float evtRS[MAX];
    //-------------------------------------------
};
struct L2Algorithm       // L2 & L3
{
    int id;              // algorithm id
    int userInt[5];      // user variables
    float userFloat[5];
    int specialProcessing;
    //-------------------------------------------
    int PS;              // generated by GUI
    int statusBit;
    UINT32 evtTrgBit;
    //-------------------------------------------
};
Here, the statusBit is the bit corresponding to this instance of the algorithm in the trigger summary flags. For L1 algorithms, the evtTW[] lists the TW’s that the algorithm should examine. If an event’s TW is not on this list, the algorithm must reject the event. Similarly, for L2 & L3 algorithms, the algorithm should examine the event’s trigger summary for the previous level. If the evtTrgBit bit is not set, then the algorithm must reject the event. The specialProcessing flag is for future expansion. Its use would be to set special processing for this event. (Write out raw data instead of clusters, etc...)
In the same way as for the L0 configuration, some of these values are entered by the user, and others are calculated by the GUI. The user will enter values into the Trigger structure:
struct Trigger
{
    .... L0 Configuration above ....
    Algorithm l1;
    Algorithm l2;
    Algorithm l3;
    float expected_L1_fraction;
    float desired_L1_rate;
    float expected_L2_fraction;
    float desired_L2_rate;
    float expected_L3_fraction;
    float desired_L3_rate;
    .... unrelated to high level algorithms ....
};
As for the L0 configuration, the subsystems will not examine the data from the Trigger structure. Instead, there will be corresponding entries in TrgSetup:
struct TrgSetup
{
    ... L0 config ...
    Algorithm l1_algorithms[MAX];
    Algorithm l2_algorithms[MAX];
    Algorithm l3_algorithms[MAX];
    ... other stuff ...
};
The GUI will generate the structures in TrgSetup from the information the user provided in the Trigger structures.
To be useful for analysis, the high-level triggers should uniformly sample events satisfying the lower-level conditions for that trigger. To do this, we must handle the complication that the L0 TW divides the event space into mutually exclusive sets, but the triggers we are interested in overlap. I have suggested several ways to do this in the past; at the trigger meeting I suggested doing this in a new system after DAQ but before OFFLINE. Here I suggest yet another method.
The goal is just to ensure that the effective prescale of every TW being fed into a L1 algorithm is the same. Each TW has a different PS at L0, so L1 needs to correct for this by maintaining different PS for each TW. The relationship we want is simply:
PS_0(TW) * PS_1(TW) = Constant (or)
PS_0(TW) * RS(TW) * PS_1 = Constant
where RS(TW) is the rescale. With this definition, PS_1 is the additional scaling done purely at L1, independent of TW. PS_1 can never be less than one, so assuming we don’t want L1 to add any scaling, the constant should be MAX(PS_0(TW)).
The upshot is that L1 must use a different prescale for each Trigger Word. The prescale factor should be given by: RS(TW) * PS.
This simple rescale combined with the requirements / restrictions placed on the L1,L2&L3 algorithms guarantees that the sample of events satisfying a given L3 algorithm defined in a Trigger is exactly the same sample no matter how many other Triggers are defined at the same time. The effective prescale of the trigger is:
PS_EFF = MAX(PS_0(TW)) * PS_1 * PS_2 * PS_3
where TW is restricted to the trigger words contributing to this trigger.
The scaler information is NOT required to ensure that the event sample is unbiased.
The deadtime for the Trigger can be easily calculated from the scalers because of the restriction that each TW contributing to the Trigger has the same detectorRequest.
The total cross-section can be calculated by using detector deadtime along with the effective prescale, and external information about the beam characteristics.
Special Triggers would also be implemented within the scheme of the Trigger structure. Generally the L1, L2 & L3 triggers would all be “Always accept”. The L0 setup is unchanged. The main addition is that the TCD controller needs to be given instructions to fire the special triggers with some frequency. These instructions are placed in the Trigger structure.
struct Trigger
{
    ... L0, L1, L2 & L3 stuff ...
    TcdSetup tcd;
};

struct TcdSetup
{
    int tcdId;
    int seconds;
};
This is the same as last year, except I will pick up the trigger command from the L0 part of the Trigger definition.
The L3 summary is already available in DATAP. Because there is a one-to-one correspondence between L3 algorithm instances and Trigger entries, the L3 summary becomes a bitmask showing which Triggers were satisfied by the event. (The difference from last year is that the L0, L1 & L2 requirements, and the requirement that the event be unbiased, are also now given by the L3 summary alone.) The only trick is that the bits in the L3 summary are arbitrary. They just refer to the index of the Trigger in the Trigger array. We need to map these bits to a larger bitmask that contains static bits for every Trigger used by STAR for all time. I do this by writing an entry to the database for every run.
struct TriggerInfo
{
    UINT32 statusBits[32];
    UINT32 PS[32];
    UINT32 PsL0[32];
    UINT32 PsL1[32];
    UINT32 PsL2[32];
    UINT32 PsL3[32];
    float fractionalCrossSection[32];
    float totalCrossSection[32];
    float deadTime[32];
    // etc...
};
The statusBits field is a lookup table linking the bit in the L3 summary to the static Trigger bit. This bit is defined in the original Trigger definition structure. PS is the effective total prescale by trigger. The additional fields are ideas for quantities that could be added to the database record when the run is stopped to make analysis easier. These could each be calculated from a scaler analysis.
The final step is that StEvent needs a function to query bits in the 2^32 bit virtual Trigger bitmask:

bool checkTriggerBit(int triggerBit)
{
    int statusBit = 0xffff;

    for(int i = 0; i < 32; i++)
    {
        if(TriggerInfo.statusBits[i] == triggerBit)
        {
            statusBit = i;
            break;
        }
    }

    if(statusBit == 0xffff)
        return false;   // trigger not in run

    return (L3_Summary & (1 << statusBit)) != 0;
}
Last year we had a bunch of huge arrays for the DSM register values:
struct TRG_SETUP
{
    .....
    int L1_DSM_Reg_Data_Values[32][32];
    int L2_DSM_Reg_Data_Values[32][32];
    .....
    int MWDC_DSM_Reg_Data_Values[32][32];
    .....
};
This led to a lot of unused space, as we only used a few registers per DSM. It also meant that the display had trigger parameters organized by DSM rather than by use. On the other hand, the dictionary entries that allowed easy naming of these parameters made them very useful as expansion parameters for unrelated systems such as the FPD.
To make things a little more clear this year I want to switch to a single array:
struct RegValue
{
    int object;
    int index;
    int reg;      // "register" is a reserved word in C
    int value;
};

struct TrgSetup
{
    ... L0, L1, L2, L3, special trigs, etc...
    RegValue registers[MAX];
};
We will agree on object numbers for the different DSM types. Index & register will map to the two indexes of last year’s DSM arrays. We add new object numbers for expansion.