Comments on the D0 note 4330
"Flavor Oscillations in Bd mesons with opposite side muon tagging"

removes ~ 3.5% of signal

N(D0) = 99394.1 +/- 609.786
N(D*) before tag = 25718.4 +/- 167.748
N(D*) after tag = 1222.03 +/- 36.2998
N(D*) after tag & fiducial cuts = 1091.94 +/- 34.2717

New Asymmetry (y) vs. VPDL (x)
Double_t x[7] = {-0.0125,0.0125,0.0375,0.0625,0.0875,0.1125,0.1875};
Double_t ex[7] = {0.0125,0.0125,0.0125,0.0125,0.0125,0.0125,0.0625};
Double_t y[7] = {0.323095,0.431907,0.424368,0.387412,0.197254,0.0626717,-0.120423};
Double_t ey[7] = {0.139223,0.0628955,0.058983,0.0707609,0.0937261,0.11207,0.0839216};
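These points are presumably meant to be drawn as a TGraphErrors; a minimal ROOT sketch (macro name and styling are arbitrary):

void draw_asym() {
   // asymmetry vs. VPDL points copied from above
   Double_t x[7]  = {-0.0125,0.0125,0.0375,0.0625,0.0875,0.1125,0.1875};
   Double_t ex[7] = {0.0125,0.0125,0.0125,0.0125,0.0125,0.0125,0.0625};
   Double_t y[7]  = {0.323095,0.431907,0.424368,0.387412,0.197254,0.0626717,-0.120423};
   Double_t ey[7] = {0.139223,0.0628955,0.058983,0.0707609,0.0937261,0.11207,0.0839216};

   TGraphErrors *g = new TGraphErrors(7, x, y, ex, ey);
   g->SetTitle("B_{d} mixing asymmetry;VPDL;asymmetry");
   g->SetMarkerStyle(20);
   g->Draw("AP");   // axes + points with error bars
}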

New Minimization output
chi^2  =   2.12/5    
dM             0.49614   +/-    0.05060
pur(IST)     0.73610   +/-    0.02117
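(Schematically, ignoring resolution, the K-factor smearing and the charged-B contribution, the model behind dM and pur(IST) is the usual mixing asymmetry A_exp(x) ~ (2*pur - 1) * cos(dM * K * x / c), with x the VPDL; the actual fit of course uses the full expected distributions with those effects included.)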

the change is small.

bad runs & double counting removed

Bug: nseg > 2 was required for the wrong-sign tag combination - it should have been nseg > 1. Since there are almost no nseg = 2 cases, the change is small.

Results (bin & VPDL low & VPDL high & N1 & err & N2 & err & asymmetry A = (N1-N2)/(N1+N2) & err) :
0 & -0.025 & 0 & 36.1612 & 6.37292 & 18.4427 & 4.73711 & 0.324492 & 0.139352
1 & 0 & 0.025 & 161.298 & 13.2199 & 66.6762 & 8.56855 & 0.415054 & 0.0630816
2 & 0.025 & 0.05 & 183.417 & 13.925 & 77.3423 & 9.25024 & 0.406793 & 0.0591101
3 & 0.05 & 0.075 & 127.26 & 11.6175 & 59.1881 & 8.01713 & 0.365098 & 0.070785
4 & 0.075 & 0.1 & 71.2529 & 8.81503 & 49.6743 & 7.34371 & 0.178443 & 0.093317
5 & 0.1 & 0.125 & 45.6823 & 7.0466 & 40.2941 & 6.6016 & 0.0626717 & 0.11207
6 & 0.125 & 0.25 & 67.2523 & 8.64769 & 85.6674 & 9.56725 & -0.120423 & 0.0839216
N(D0) = 99394.1 +/- 609.786
N(D*) before tag = 25718.4 +/- 167.748
N(D*) after tag = 1222.03 +/- 36.2998
N(D*) after tag & fiducial cuts = 1091.94 +/- 34.2717
Double_t x[7] = {-0.0125,0.0125,0.0375,0.0625,0.0875,0.1125,0.1875};
Double_t ex[7] = {0.0125,0.0125,0.0125,0.0125,0.0125,0.0125,0.0625};
Double_t y[7] = {0.324492,0.415054,0.406793,0.365098,0.178443,0.0626717,-0.120423};
Double_t ey[7] = {0.139352,0.0630816,0.0591101,0.070785,0.093317,0.11207,0.0839216};

chi2 = 1.655572
dM             0.50192  +/-    0.05315
pur(IST)     0.72690  +/-    0.02127

Asymmetry (bin : measured value, error, fit expectation) :
Bin: 0 0.324492 0.139352 0.44307
Bin: 1 0.415054 0.0630816 0.434185
Bin: 2 0.406793 0.0591101 0.390509
Bin: 3 0.365098 0.070785 0.30904
Bin: 4 0.178443 0.093317 0.202149
Bin: 5 0.0626717 0.11207 0.0854351
Bin: 6 -0.120423 0.0839216 -0.105729
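As a quick cross-check (and assuming the last column above is the fit expectation), the quoted chi2 = 1.66 is just the straight sum of squared pulls over the seven bins; a minimal ROOT sketch:

// reproduce chi2 = 1.6556 from the measured and fitted asymmetries listed above
Double_t ameas[7] = {0.324492,0.415054,0.406793,0.365098,0.178443,0.0626717,-0.120423};
Double_t aerr[7]  = {0.139352,0.0630816,0.0591101,0.070785,0.093317,0.11207,0.0839216};
Double_t afit[7]  = {0.44307,0.434185,0.390509,0.30904,0.202149,0.0854351,-0.105729};
Double_t chi2 = 0.;
for (Int_t i = 0; i < 7; i++) {
   Double_t pull = (ameas[i] - afit[i]) / aerr[i];
   chi2 += pull * pull;
}
printf("chi2 = %f\n", chi2);   // -> 1.6556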

the change is small.

Bad runs & double counting removed

Changed the procedure to determine the number of D* events in the mass-difference peak: used the number of events in the window [0.141, 0.149] GeV.
The background was estimated as the number of wrong-sign events and subtracted. Results:

chi^2 = 1.15/5
dM             0.47595  +/-    0.05271
pur(IST)     0.73204  +/-    0.02194

Asymmetry :
Bin: 0 0.37037 0.161499 0.453449
Bin: 1 0.452991 0.0659028 0.445257
Bin: 2 0.3829 0.0595146 0.404907
Bin: 3 0.383784 0.0764083 0.329159
Bin: 4 0.209677 0.0947979 0.228584
Bin: 5 0.149425 0.119254 0.116679
Bin: 6 -0.109677 0.0886623 -0.0803413
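A minimal sketch of this counting-based signal estimate, assuming right-sign/wrong-sign histograms of the mass difference (names and details are placeholders, not the actual analysis code); the variation described next just scales the wrong-sign count by a sideband normalization factor:

// nDstar from simple counting: right-sign minus wrong-sign in the dm window
// (hRS, hWS = right-sign / wrong-sign dm = M(K pi pi_s) - M(K pi) histograms)
Double_t countDstar(TH1* hRS, TH1* hWS, Double_t& err,
                    Double_t dmLo = 0.141, Double_t dmHi = 0.149) {
   Int_t b1 = hRS->FindBin(dmLo), b2 = hRS->FindBin(dmHi);
   Double_t nRS = hRS->Integral(b1, b2);   // signal + background in the window
   Double_t nWS = hWS->Integral(b1, b2);   // background estimate from wrong sign
   err = TMath::Sqrt(nRS + nWS);           // simple counting error on the difference
   return nRS - nWS;
}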


Variation of the above with a different normalization for the bkg: the bkg is taken from the wrong slow-pion-sign distribution
as before, but a normalization factor is applied to it. The normalization factor is determined as the ratio of bkg (right-sign to wrong-sign)
off the mass-difference peak, in the window [0.16 - 0.18 GeV].
chi^2 =   2.025985/5
dM           0.46090   +/-    0.05358
pur(IST)   0.73262   +/-    0.02228

Asymmetry
Bin: 0 0.451524 0.201277 0.454791
Bin: 1 0.443057 0.0655189 0.447077
Bin: 2 0.382375 0.0599601 0.409042
Bin: 3 0.424946 0.0850637 0.337388
Bin: 4 0.189265 0.0952682 0.241627
Bin: 5 0.216802 0.138732 0.133975
Bin: 6 -0.0917272 0.0883765 -0.0629367

Changed D* fitting function to "gaus + (1-exp+p1)"  (originally "gaus+sqrt")
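For illustration only, one plausible ROOT reading of that fit function (the exact parameterization, starting values and range used in the analysis may differ):

// dm = M(D*) - M(D0) fit: Gaussian signal + threshold-like background;
// "[3]*(1. - exp(-(x-[4])/[5]))" is just one guess at the "(1-exp+p1)" term
TF1 *fdm = new TF1("fdm",
   "gaus(0) + [3]*(1. - exp(-(x-[4])/[5]))",
   0.135, 0.17);
fdm->SetParameters(100., 0.1455, 0.0007,   // signal: norm, mean, sigma
                   10.,  0.1395, 0.005);   // bkg: scale, threshold, slope
// hdm->Fit(fdm, "R");   // hdm = right-sign dm histogram (placeholder name)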

chi2 =  2.205614
dM             0.50611 +/-  0.05482
pur(IST)     0.73116 +/-  0.02179

Asymmetry
Bin: 0 0.304334 0.12838 0.45132
Bin: 1 0.44119 0.0683116 0.442121
Bin: 2 0.400782 0.0584471 0.396915
Bin: 3 0.368547 0.0667766 0.312683
Bin: 4 0.192028 0.0918918 0.202385
Bin: 5 0.0368326 0.122242 0.0823411
Bin: 6 -0.129667 0.0901646 -0.111856


Max variation from the nominal value 0.502 is declared the "fitting procedure" systematic error: 0.502 - 0.461 = 0.041 ps^-1.
0 & -0.025 & 0 & 36.1612 & 6.37292 & 18.4427 & 4.73711 & 0.324492 & 0.139352
1 & 0 & 0.025 & 161.298 & 13.2199 & 66.6762 & 8.56855 & 0.415054 & 0.0630816
2 & 0.025 & 0.05 & 183.417 & 13.925 & 77.3423 & 9.25024 & 0.406793 & 0.0591101
3 & 0.05 & 0.075 & 127.26 & 11.6175 & 59.1881 & 8.01713 & 0.365098 & 0.070785
4 & 0.075 & 0.1 & 71.2529 & 8.81503 & 49.6743 & 7.34371 & 0.178443 & 0.093317
5 & 0.1 & 0.125 & 45.6823 & 7.0466 & 40.2941 & 6.6016 & 0.0626717 & 0.11207
6 & 0.125 & 0.25 & 67.2523 & 8.64769 & 85.6674 & 9.56725 & -0.120423 & 0.0839216
N(D0) = 99394.1 +/- 609.786
N(D*) before tag = 25718.4 +/- 167.748
N(D*) after tag = 1222.03 +/- 36.2998
N(D*) after tag & fiducial cuts = 1091.94 +/- 34.2717
Double_t x[7] = {-0.0125,0.0125,0.0375,0.0625,0.0875,0.1125,0.1875};
Double_t ex[7] = {0.0125,0.0125,0.0125,0.0125,0.0125,0.0125,0.0625};
Double_t y[7] = {0.324492,0.415054,0.406793,0.365098,0.178443,0.0626717,-0.120423};
Double_t ey[7] = {0.139352,0.0630816,0.0591101,0.070785,0.093317,0.11207,0.0839216};

no change
muon+ case
0 & -0.025 & 0 & 14.3697 & 3.99822 & 6.55792 & 2.89528 & 0.373275 & 0.224572
1 & 0 & 0.025 & 90.8665 & 9.94031 & 31.3914 & 5.89613 & 0.486473 & 0.0829607
2 & 0.025 & 0.05 & 93.8083 & 9.94785 & 31.2276 & 5.9155 & 0.500502 & 0.0813557
3 & 0.05 & 0.075 & 69.0866 & 8.49243 & 24.9203 & 5.19967 & 0.46982 & 0.0943577
4 & 0.075 & 0.1 & 31.1442 & 5.87808 & 21.6321 & 4.91788 & 0.180235 & 0.142939
5 & 0.1 & 0.125 & 20.9815 & 4.86694 & 20.2419 & 4.65962 & 0.0179404 & 0.163347
6 & 0.125 & 0.25 & 32.0094 & 5.88752 & 41.0786 & 6.65545 & -0.124086 & 0.120669
N(D0) = 99394.1 +/- 609.786
N(D*) before tag = 25718.4 +/- 167.748
N(D*) after tag = 589.494 +/- 25.1657
N(D*) after tag & fiducial cuts = 529.339 +/- 23.8126
Double_t x[7] = {-0.0125,0.0125,0.0375,0.0625,0.0875,0.1125,0.1875};
Double_t ex[7] = {0.0125,0.0125,0.0125,0.0125,0.0125,0.0125,0.0625};
Double_t y[7] = {0.373275,0.486473,0.500502,0.46982,0.180235,0.0179404,-0.124086};
Double_t ey[7] = {0.224572,0.0829607,0.0813557,0.0943577,0.142939,0.163347,0.120669};
chi2 =   2.373372/5
dM           0.50176   +/-    0.06425
pur(IST)   0.77505   +/-    0.02920

muon- case
0 & -0.025 & 0 & 21.7907 & 4.9632 & 11.8455 & 3.73724 & 0.29567 & 0.177553
1 & 0 & 0.025 & 70.5269 & 8.72023 & 35.3065 & 6.22004 & 0.332791 & 0.0956974
2 & 0.025 & 0.05 & 89.6055 & 9.74431 & 46.2482 & 7.10697 & 0.319147 & 0.0845406
3 & 0.05 & 0.075 & 58.0137 & 7.91546 & 34.3346 & 6.11287 & 0.25641 & 0.10478
4 & 0.075 & 0.1 & 40.0559 & 6.56125 & 27.9521 & 5.45995 & 0.177976 & 0.123424
5 & 0.1 & 0.125 & 24.5102 & 5.08236 & 20.0732 & 4.67318 & 0.0995209 & 0.154337
6 & 0.125 & 0.25 & 35.5352 & 6.37075 & 44.6143 & 6.8739 & -0.113277 & 0.116678
N(D0) = 99394.1 +/- 609.786
N(D*) before tag = 25718.4 +/- 167.748
N(D*) after tag = 630.494 +/- 25.9885
N(D*) after tag & fiducial cuts = 559.975 +/- 24.4526
Double_t x[7] = {-0.0125,0.0125,0.0375,0.0625,0.0875,0.1125,0.1875};
Double_t ex[7] = {0.0125,0.0125,0.0125,0.0125,0.0125,0.0125,0.0625};
Double_t y[7] = {0.29567,0.332791,0.319147,0.25641,0.177976,0.0995209,-0.113277};
Double_t ey[7] = {0.177553,0.0956974,0.0845406,0.10478,0.123424,0.154337,0.116678};
chi2 =  0.2585747
dM           0.49550  +/-     0.09009
pur(IST)   0.67953  +/-     0.03058

The two cases are (very) consistent: dM(mu+) - dM(mu-) = 0.502 - 0.496 = 0.006, to be compared with a combined uncertainty of sqrt(0.064^2 + 0.090^2) ~ 0.11 ps^-1.
reference values :
dM             0.50192  +/-    0.05315
pur(IST)     0.72690  +/-    0.02127


worsened resolution by factor of 2 :    dM = 0.518724 (+0.017) ; IST eff = 0.730528
improved resolution by factor of 5 :    dM = 0.496095 (-0.006) ; IST eff = 0.725683
increased B lifetime by 10 um :         dM = 0.494735 (-0.007) ; IST eff = 0.725536
set all relative MC efficiencies to 1 : dM = 0.515511 (+0.014) ; IST eff = 0.730347
(shifts in parentheses are with respect to the nominal dM = 0.502)

the final table with systematic errors now looks as follows

  Here are the results of the combined (D0+D* samples) fit for opposite 
side muon tagging:

1) purities for B0 and B+ are different:

FCN = 7.261267

dM = 0.495362 +- 0.0453544
pur0 = 0.74696 +- 0.0239902
pur+ = 0.695649 +- 0.0185146






2) purities for B0 and B+ are the same:

FCN= 9.677110

dM = 0.498221 +- 0.0496028
pur0 = pur+ = 0.715805 +- 0.0131498



> thanks for the note (well in advance of the deadline!).
>
> a couple of quick comments.
>
> 1) Shouldn't eqns. 6 and 7 be derivable from (4) and (5)? If so, they
> seem to be backward.

thanks - typo fixed
> 
> 2) Your asymmetry is presented as a function of VPDL. But you've taken
> the K factor into account when you calculate
> A_expected. Shouldn't you be using the K factors for data too?

The K-factor is already in the data by nature; we can't change it.
All we can do is to describe it by introducing the K-factor in
the expected distributions.

Technical stuff:

(1) I really don't like the fact that the analysis does not use muon-id
certified muons. I agree 100% with the arguments previously made, that
the way the ID groups work forces everybody to go back and reprocess
their data & redo their analyses. A great inconvenience, to say the
least. We all had to do this at some point. Also, it is not true that
the muon quality definitions were optimized for high-pt physics. The
muon guys have spent considerable time for low-pt folks, like us (they
have asked us for feedback many times, by the way). I did see better
results with the certified muons.

These are the cuts on this muon from the note (a code sketch follows the list):

 \item{certified P14 muon with nseg $>$ 1}
 \item{Pt $>$ 2 GeV/c}
 \item{P$_{tot}$ $>$ 3 GeV/c}
 \item{$|\eta^{\mu}| < 2$} 
 \item{$\chi^2$ of local fit of muon $>$ 0}
 \item{N(SMT) hits $>$ 1}
 \item{N(CFT) hits $>$ 1}
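
In code, these cuts amount to something like the following sketch (variable names are placeholders, not the analysis code):

#include <cmath>

// opposite-side tagging muon selection as listed above
bool passTagMuonCuts(int nseg, double pt, double ptot, double eta,
                     double chi2loc, int nSMT, int nCFT) {
   return nseg      > 1      // certified P14 muon with nseg > 1
       && pt        > 2.0    // GeV/c
       && ptot      > 3.0    // GeV/c
       && fabs(eta) < 2.0
       && chi2loc   > 0.0    // chi2 of local muon fit
       && nSMT      > 1      // SMT hits
       && nCFT      > 1;     // CFT hits
}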

this is the definition of the tight muons:

This definition is unchanged since p10. Only nseg=3 muons can be tight. A muon is tight if it has:



Since the number of nseg=2 muons is very small, our muons are pretty close to the tight muons; this has been checked and documented here:
http://d0server1.fnal.gov/users/nomerot/Run2A/Muons/looser_muons.html (closer to the end of that page). We found that our definition and
the "tight" muon ID definition overlap by 95% for our sample.

However, we found that nseg=2 muons are useful for tagging (= good dilution), so we use them. nseg=2 muons cannot be "tight muons".

In any case, as I said, the overlap is large and the result will not be affected. In general this measurement is a relative one: it measures a ratio
and determines the efficiency from the data, and in this sense is "self-calibrated" - so this question is not really important for the measurement itself.
Perhaps for the paper we should switch to some linear combination of the muon ID definitions, but as it is now we would lose if we did it without extra studies.


(2) There is no mention of systematic effects due to the K-factor
(maybe you are already working on this). My studies show this to be the
most significant contribution. Also, another significant uncertainty
(which I think you are underestimating) is the charged B contribution.
This is NOT the uncertainty that one gets by varying by one sigma the BR
from the PDG book.

We are working on the systematics - this will include the K-factors, of course.
Regarding the B+ contribution, I did not quite understand what you mean - could you explain?
Do you mean sample composition, or purity, or both, or something else?


(3) The resolution function should be given either analytically or
numerically (e.g. 3 gaussians with relative weights X and Y, sigmas s1,
s2, s3 and means m1, m2, m3). This is not important for the Bd
measurement (as discussed many times before), but it is important for
the methodology. We do this only to prove that we can do Bs mixing.
After all, Bd mixing was discovered in 1985.

done



(4) I am curious how you feel about the very low chi2/Ndof = 1.7/5. Was
that "massaged"? I also noticed that Sergey in his talk yesterday gives
numbers like chi2/Ndof = 4/12, etc. Where do all these low chi2's come
from?

You'll be surprised, but the fit we have in the note was the only one we've done for those points.

Reg. the errors: indeed, it looks like we overestimate them, though the chi2 is not too small. We checked the math many times
and we are pretty sure it's correct. We suspect we introduce some correlation between the points and we are studying this -
we have some ideas but no conclusions yet. We should also compare our method of calculating errors with yours, because
you are getting better precision with comparable statistics and this puzzles me.

(5) Summary/conclusion: A Bd mixing measurement of 0.050 ps^-1 is not a
precise measurement. Precise measurements are done by Belle and Babar
(sigma < 0.010 ps^-1).

Well, then let's put it this way: it matches the best single CDF measurement in Run 1, so we are doing well for hadron colliders :)


Ok, now my opinion on how to proceed with the analysis.

You are mentioning in the note something along the lines of: we found out
that if we loosen up our D0/D* cuts, we can gain 50-100% more
statistics. However, keeping the conference deadlines in mind, we decided
not to change the cuts at this point.

I could not disagree more with this statement. You quote a statistical
uncertainty of 0.050 ps^-1 by using just soft-muon tagging. Compare this
with:

(a) An uncertainty of 0.033 ps^-1 I got by using soft muons and 0.052
ps^-1 by using jet-charge.
(b) An uncertainty of 0.055 ps^-1 Sergey got by using same-side tagging.

It can't be more obvious to me that our priority should be to get out
there a mixing measurement with three taggers (or three measurements).
Keep in mind that CDF has about 1/4 of our semileptonic statistics with
much worse soft muon AND SST tagging. What could be better for D-Zero
than a mixing machinery that *already* produces numbers better than the
CDF Run-I result by a factor of two? I don't understand why we should
give them the opportunity to think "oh, we suck, but so does D-Zero".

Using the tight sample was the only way for us, since doing otherwise would require all new inputs for the
minimization procedure, and we just did not have time to do it as carefully as we've done it for the
present sample, which we used for the lifetime ratio analysis.

The results can be combined - I have no problem with that. We indeed have bright prospects here.
I should note, though, that together with the combination of taggers we should start adding more decay channels -
not much is happening on this front (I mean K3pi).



> Regarding the B+ contribution, I did not quite understand what you mean - could you explain?
> Do you mean sample composition, or purity, or both, or something else?


I meant that there are many things to check to make sure that the B+- contribution is what one expects it to be, e.g.

- there are different contributions from all the D, D*, D** animals as Jan has been saying (side note: why is Jan NOT working for the B group?
we could use at least his expertise with electrons, if nothing else)

We take all these contributions into account and we believe that we do it correctly. This is discussed in the note in section 7.2.
We may discuss which particular experimental result should or should not be used and what uncertainty it has, but fundamentally
the procedure is correct. Hopefully Jan will agree with us in the end - we are working on this. If you have any specific comments
you are welcome to join this discussion.

See also Gennadi's responses to Jan's questions on the lifetime ratio analysis.


- there could be differences in the way the jet algorithm (which defines the set of tracks one does the analysis with)
deals with tracks from B+/- and B0

That may be a valid point for SST, or at a smaller level for the OS jet algorithm, but for the OS muon tagging the effect should be really small.
Let me think about whether we can quantify it.


I looked at your last Bd mixing talk on the web (the note has, I think, some inconsistencies: more & more data added, but result/uncertainties do not change)

the data sample did not change between the presentation and the note - we just updated the luminosity number.



So, you have:
# of D* events: 21'400, soft-muon dilution = 48.6%, efficiency: 4.7%, sigma(Delta m) = 0.049 ps^-1

I have:
# of D* events: 32'500, soft-muon dilution = 50.0%, efficiency: 4.5%, sigma(Delta m) = 0.033 ps^-1

If I do the math, I expect to have sigma_1/sigma_2 = 1.24. Instead, we have sigma_1/sigma_2 = 1.48.
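
(Presumably the standard statistical-sensitivity scaling sigma(dM) ~ 1/sqrt(eps * D^2 * N) is meant here, which gives
   sigma_1/sigma_2 = sqrt[ (eps_2 * D_2^2 * N_2) / (eps_1 * D_1^2 * N_1) ]
                   = sqrt[ (0.045 * 0.500^2 * 32500) / (0.047 * 0.486^2 * 21400) ] ~ 1.24 .)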

I can think of three explanations:

(1) You overestimate the error (on the event populations for different decay lengths, and therefore on Delta-m). I don't know why that would be,
but the low chi2's point in that direction.
(2) I underestimate the errors. One thing to check is, what happens if I change the way I determine the # of signal events
(as you pointed out at the meeting last Thursday). This may change not just the # of signal events, but more importantly, the uncertainty of that #.
(3) A combination of the above, and/or a statistical fluctuation. The estimation of the statistical sensitivity by using a set of input parameters
(# of events, tagger efficiency & dilution) works only on average.

What we were thinking is that the background subtraction (done implicitly by fitting the mass-difference peak) may introduce
a correlation between the points, because the background shape does not really change from point to point. We'll do a cross-check
of this by just counting events in a window and taking the bkg from the wrong-sign combination as the number of events in the window,
without any fitting. That should get rid of this possible correlation, so we can see if the errors differ.


Is your number consistent with what Sergey gets for SST? I am not sure how I can get from n_0, n_+ to dilution.

dilution = 2 \eta - 1 by definition, where \eta is the tagging purity (the fraction of correctly tagged events); e.g. pur(IST) = 0.727 corresponds to D ~ 0.45.




> (4) I am curious how you feel about the very low chi2/Ndof = 1.7/5. Was
> that "massaged"? I also noticed that Sergey in his talk yesterday gives
> numbers like chi2/Ndof = 4/12, etc. Where do all these low chi2's come
> from?

The probability of chi2/Ndof = 1.7/5 is 0.8889,
and the probability of chi2/Ndof = 4/12 is 0.983436.

So they aren't too unreasonable. The p-value distribution is basically
flat (uniform), so having a probability of ~50% is about as likely as having
~90%.
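
For reference, these p-values are just the standard chi2 upper-tail probability, e.g. in ROOT:

// upper-tail chi2 probabilities quoted above
printf("P(chi2 > 1.7 | ndf = 5)  = %f\n", TMath::Prob(1.7, 5));   // ~0.889
printf("P(chi2 > 4.0 | ndf = 12) = %f\n", TMath::Prob(4.0, 12));  // ~0.983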

> It can't be more obvious to me that our priority should be to get out
> there a mixing measurement with three taggers (or three measurements).

We should make 3 independent measurements and then add them
together to get our final number. If our error is ~0.02-0.03 we
aren't too far from the best individual measurements.
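
(For the three uncertainties quoted above - 0.050, 0.033 and 0.055 ps^-1 - the usual weighted combination, ignoring correlations between the taggers, would indeed give roughly that:
   1/sigma^2 = 1/0.050^2 + 1/0.033^2 + 1/0.055^2  ->  sigma ~ 0.025 ps^-1 .)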

Brad



We thank everybody for the comments - we think they will help to improve this analysis!