Meeting Minutes from March 25
Stan Wojcicki's Talk on Off-Axis Experiment Beam Studies
http://home.fnal.gov/~wojcicki/Offaxis_final_0322.ppt.pdf
Stan has set up a stand-alone code that he used for optimization
studies of the NuMI beamline. The beauty of this code is that it
can be used to get neutrino fluxes with high statistical accuracy in
30 seconds or so, but the drawback is that it's a simplified model:
there are no secondary interactions of the pions once they are produced,
and the multiple scattering is applied when the particle leaves the horn
location, not "as the pion is going through the horn material". But
with it he can study which things it pays to worry about, and which
things it doesn't.
Minute-taker's comment: this is a really
interesting way to look at this problem, and I highly recommend
looking at this talk!
The main point here is that optimizing the off-axis flux is by no means
equivalent to optimizing the on-axis flux. By varying the target and
horn z positions, and slightly changing the off-axis angle, Stan
was able to arrive at an optimized beam, which is peaked at 2.5 GeV
instead of 2 GeV. He considered two figures of merit, one for the nue
background and one for the NC background: the signal (which carries the
sin^2(delta m^2 L/E) oscillation weighting) divided by sqrt(nue background)
or sqrt(NC background), where he evaluated the NC background assuming a
detector with perfect energy resolution but no particle ID. The NC
background really comes from events in the peak, not from events in the
high energy tail, once the high energy tail has been reduced to this
level. Thus the comment: "we have
met the enemy, and she is us".
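A minimal sketch of this kind of figure of merit (my own illustration, not
Stan's code): each signal energy bin is weighted by the standard two-flavor
oscillation probability sin^2(1.27 delta m^2 L / E) and the weighted sum is
divided by the square root of the total background. The baseline and
delta m^2 values used here are illustrative assumptions, not numbers from
the talk.

```python
import math

def osc_weight(E_GeV, L_km=735.0, dm2_eV2=3.0e-3):
    """Two-flavor oscillation weighting sin^2(1.27 * dm^2 * L / E).
    L and dm^2 defaults are illustrative assumptions."""
    return math.sin(1.27 * dm2_eV2 * L_km / E_GeV) ** 2

def figure_of_merit(signal, background, energies, L_km=735.0, dm2_eV2=3.0e-3):
    """Oscillation-weighted signal divided by sqrt(total background),
    with signal and background given as events per energy bin."""
    weighted = sum(s * osc_weight(E, L_km, dm2_eV2)
                   for s, E in zip(signal, energies))
    return weighted / math.sqrt(sum(background))
```

With perfect energy resolution one could instead take sqrt(background) bin
by bin; summing the background first matches the simpler counting-experiment
reading of signal/sqrt(background).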
The "optimal" beam he arrived at
by these considerations has about a factor of two higher figure of merit
than the LE beam, and about a 30-50% higher figure of merit
than the ME beam (page 31 gives the summary).
The biggest increase in figure of merit comes from
moving the target back 1 m from
the nominal low energy position, and moving horn 2 to 24 m, i.e. 14 m
downstream of its nominal low energy position. By changing the target
to be thinner and longer he gets about a 10% increase in neutrino flux,
and Mark Messier pointed out that the R&D for "medium" or "high energy"
targets has already been done, so that's not as painful a change as one might
imagine...
Stan also looked at collimation and at putting a third horn into the
beamline; neither of these increased the flux (or the two figures of merit,
which are related to signal/sqrt(background)) by more than a
few per cent.
In these slides delta p/p refers to the width of the neutrino beam (p is the
neutrino momentum). One interesting question is: what is the best
width for the neutrino beam, or equivalently, what is the goal for your
energy resolution in an off-axis detector?
These studies should be followed up with a more realistic simulation
(e.g. GEANT, which would include reinteractions and better multiple
scattering), although this study already shows which variables are the
interesting ones to tune.
Another thing Stan looked at was the use of a near off-axis detector.
Since the off-axis flux does not depend strongly on the momenta of the pions,
and the pion momenta are well measured by the on-axis neutrino
detector, that helps. Also, since the off-axis detector nue background
comes primarily from muon decays rather than kaon decays, it too is constrained
by the on-axis near detector measurements (or the muon monitors). Finally,
he suggests that one could determine the off-axis nc background by using
the far detector data itself--since the NC process has a well-defined y
(visible hadronic energy divided by total neutrino energy)
distribution, one could arguably extrapolate from low visible
hadronic energy events, and look at the "pi0 energy/visible energy"
distributions and extrapolate under the peak. (This has certainly
been done in high energy neutrino beams.) Kevin commented that it's
not clear that one would have the statistics to do this with the
far detector, and it's slightly worrisome with a near on-axis detector
since you certainly wouldn't have the same neutrino energy distribution,
and you could never get the underlying neutrino energy distribution
for the NC events you do see in the near detector. Finally, one other
concern Kevin pointed out about measuring the NC contamination is
that at 2 GeV there are lots of nuclear/final state effects going on, so the
functional form you use to
"extrapolate under the peak" may not be justified at the 5 to 10% level,
and could very well be the dominant systematic error in the analysis.
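The "extrapolate under the peak" idea can be illustrated with a toy
sideband fit (my own sketch, not from the talk): fit a smooth functional
form to the background-dominated bins at low visible energy, then
extrapolate that form into the peak region. The linear form used here is
an arbitrary illustrative choice, which is exactly the systematic Kevin
worried about.

```python
import numpy as np

def extrapolate_under_peak(bin_centers, counts, sideband_mask):
    """Fit a linear shape to the sideband bins (sideband_mask True) and
    predict the background counts in the remaining (peak) bins.
    The linear functional form is an arbitrary assumption."""
    coeffs = np.polyfit(bin_centers[sideband_mask], counts[sideband_mask], deg=1)
    predicted = np.polyval(coeffs, bin_centers[~sideband_mask])
    return predicted.sum()
```

If the true background really were linear the method is exact; nuclear and
final-state effects at 2 GeV are precisely what could make the true shape
deviate from any such simple form.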
Debbie Harris' Talk on Matter Effects Off Axis
http://home.fnal.gov/~dharris/meet318.ps.gz
These slides show the result of a back-of-the-envelope study, looking
at what baselines are interesting for matter effects, and what matters
and what doesn't for studying matter effects. What is plotted on
many axes is the chi2 difference between making the right and the wrong
choice for delta m^2, but no uncertainty was assumed for the value
of theta_13 itself, which of course must also be incorporated. The
plots might represent what you would consider doing given a precise measurement
of theta_13, say from JHF. You can see this chi2 difference
versus several different things, i.e. the background fraction, the
uncertainty on the background fraction, the beam energy, delta m^2,
etc., all as a function of baseline. This study shows that if one is
willing to take the hit in precision on theta_13 by itself, it's really
favorable to go to longer baselines--for a 2 GeV beam and a baseline of
900 km, the matter effects are as big as 80%!
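For scale, the size of the matter effect is governed by the ratio of the
matter term A = 2 sqrt(2) G_F n_e E to delta m^2. The sketch below (my own
back-of-the-envelope companion, not from the slides, with assumed crust-like
density 2.8 g/cm^3 and electron fraction 0.5) just evaluates that term, and
shows that it grows linearly with beam energy.

```python
import math

HBARC_GEV_CM = 1.97327e-14  # hbar*c in GeV*cm
G_F = 1.1664e-5             # Fermi constant in GeV^-2
N_A = 6.02214e23            # Avogadro's number

def matter_potential_eV(rho_g_cm3=2.8, Y_e=0.5):
    """CC matter potential V = sqrt(2) G_F n_e in eV.
    Density and electron fraction are assumed crust-like values."""
    n_e_GeV3 = Y_e * rho_g_cm3 * N_A * HBARC_GEV_CM**3  # electron density, natural units
    return math.sqrt(2.0) * G_F * n_e_GeV3 * 1.0e9      # convert GeV -> eV

def matter_term_eV2(E_GeV, rho_g_cm3=2.8, Y_e=0.5):
    """Matter term A = 2 E V (in eV^2), which competes with delta m^2."""
    return 2.0 * (E_GeV * 1.0e9) * matter_potential_eV(rho_g_cm3, Y_e)
```

At 2 GeV this gives A of a few times 10^-4 eV^2, a sizable fraction of a
delta m^2 of a few times 10^-3 eV^2, which is why a 2 GeV beam at long
baseline can see effects at the tens-of-percent level.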
The plots all assume (unless stated otherwise) that the error from
antineutrino running is as small as that from neutrino running, which
we saw earlier corresponds to a run time about 2.5 to 3 times longer.
One thing to check is what the optimal use of time would be...what
fraction would you want to run in nubar vs nu? Also, the plots all
assume the same efficiency and systematic error for nu and nubar, which
is certainly not going to be the case. The systematic errors in
nubar running are assumed to be uncorrelated with those in nu running.
But from the plots vs systematic errors you can see that there's not
much loss in going from 0 systematic error to a 10% systematic error.
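The optimal-use-of-time question can be posed in a toy way (my own sketch,
not from the slides): if the errors were purely statistical, with event
rates r_nu and r_nubar, minimizing the summed fractional variances
1/(r_nu t_nu) + 1/(r_nubar t_nubar) at fixed total time gives run times
proportional to 1/sqrt(rate), so the slower antineutrino mode gets the
larger share, but not the full factor of 2.5-3 that equalizing the errors
would require.

```python
import math

def optimal_time_split(rate_nu, rate_nubar, total_time=1.0):
    """Minimize 1/(rate_nu*t_nu) + 1/(rate_nubar*t_nubar) subject to
    t_nu + t_nubar = total_time. The optimum allocates time in
    proportion to 1/sqrt(rate). Purely statistical toy model."""
    w_nu = 1.0 / math.sqrt(rate_nu)
    w_nubar = 1.0 / math.sqrt(rate_nubar)
    t_nu = total_time * w_nu / (w_nu + w_nubar)
    return t_nu, total_time - t_nu
```

Different efficiencies and systematics for nu and nubar, which the plots
ignore, would of course shift this optimum.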
Deborah Harris
Last modified: Mon Apr 1 11:30:41 CST 2002