AFE Readout Replacement
Stefan Grünendahl, Jamieson Olsen, Stefano Rapisarda, Paul Rubinov
[http://d0server1.fnal.gov/users/stefan/www/AFE_II_readout.pdf/txt]
V2 (Nov. 17, 2004): Glink reuse

Motivation: Get rid of the grey Sequencer cables (unbalanced ground) and the Sequencer backplane (many known, and probably also many unknown, bad connectors).

Basic outline: Read the AFE II ADC output (500-750 Bytes/event/board) via LVDS into a new crate of concentrators/buffers. The new crate itself is a standard DFEB crate, including backplane, power supply, etc. The new LRC LVDS Receiver Cards receive LVDS inputs from the AFE, distribute SCL (over LVDS) output to the AFE, and send Glink fiber output to the VRB, i.e. they are a hybrid of the existing Mixer, DFEC and Sequencer cards(*). The SBC bandwidth to the L3 system is adapted as needed by adding Gbit or multiple 100 Mbit ethernet links.
----------------------------------------------------------------------------
(*): The LRC boards are very similar to the new VLSB boards being designed for DZERO AFE II testing and for MICE.
--------------page 2---------------------------------------------------------------------
LRC (LVDS Receiver Crate) + Glink into VRB

AFE II modifications:
- Output data format:
  - no timing: 16 bits = 9-bit address + 1-bit discriminator + 6-bit ADC
  - with timing: 22 bits = 9-bit address + 1-bit discriminator + 6-bit TDC + 6-bit ADC
- SCL input via a separate pair from the LVDS Concentrator
- Concentrator chip to concatenate 8 MCMs onto the LVDS output
- Optional: 16-event L2 buffering on the AFE to reduce output bandwidth requirements
- I/O: additional LVDS chips: 2 pairs * 16-bit DS92LV16 LVDS output and 1 pair SCL input through the existing AFE backplane (1 cable per board, in addition to the unchanged trigger cables)

LRC (LVDS Receiver/Concentrator):
- 6U DFEB crate (reuse the DFEB backplane, the power supply with 1553 interface, the crate hardware, and the DFEC controller)
- Slot 1 has a DFEC for download/monitoring/control and SCL distribution.
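The proposed output word can be sketched as a simple bit-packing exercise. A minimal Python sketch follows; note that the field order within the word (address in the high bits, ADC in the low bits, TDC between discriminator and ADC in the timing variant) is an assumption for illustration — the note fixes only the field widths, not their layout.

```python
# Sketch of the proposed AFE II output word packing.
# Field ORDER is an assumption; the note specifies only the widths:
#   no timing:   16 bits = 9-bit address + 1-bit discriminator + 6-bit ADC
#   with timing: 22 bits = 9-bit address + 1-bit disc + 6-bit TDC + 6-bit ADC

def pack_word(address, disc, adc, tdc=None):
    """Pack one hit into a 16-bit (or, with timing, 22-bit) word."""
    assert 0 <= address < 2**9 and disc in (0, 1) and 0 <= adc < 2**6
    if tdc is None:
        return (address << 7) | (disc << 6) | adc             # 16-bit word
    assert 0 <= tdc < 2**6
    return (address << 13) | (disc << 12) | (tdc << 6) | adc  # 22-bit word

def unpack_word(word, with_timing=False):
    """Recover the fields from a packed word."""
    if not with_timing:
        return {"address": word >> 7, "disc": (word >> 6) & 1,
                "adc": word & 0x3F}
    return {"address": word >> 13, "disc": (word >> 12) & 1,
            "tdc": (word >> 6) & 0x3F, "adc": word & 0x3F}
```

For example, `unpack_word(pack_word(300, 0, 33, tdc=17), with_timing=True)` returns the original fields, and the 22-bit timing variant still fits comfortably in the 28-bit LVDS link width discussed below.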
- 20 slots (not all needed for the AFE system) with receiver/buffer boards.
- Input: each board has 14 LVDS links with 5*3 pins:
  - 2 pairs for 2 parallel 16:1 DS92LV16 LVDS links from the AFE
  - 1 pair SCL output (over 16:1 DS92LV16?) to the AFE
- Buffer depth of 16 events (same as the VRB) = 14 * 16 * 500 Bytes = 112 kBytes. Easy to have more buffering. Optional: operate in AFE buffering mode; transfer from the AFE only on L2 accept.
- Output: 4 Glinks per board. The encoder is embedded in the FPGA; a compact optical driver replaces the Finisar part.

Receiver: existing VRB/SBC system, with modified operation: transfer into the VRB only on L2 accept (i.e. events with an L2 reject appear to the VRB/VRBC as having zero-length data).

Alternatively, keep the 'readout geometry' identical to the current system: 12 LVDS in and 6 Glinks out per LRC; 17 LRCs needed; reuse all existing Glinks. We can forego the bandwidth gain over the current system by preserving the current L1/L2 buffering scheme.
----------------page 3--------------------------------------------------------------------------------
LRC Bandwidth; DFEB crate; BLVDS

Bandwidth:
- L1 transfer, AFE to Concentrator: 250 channels (at 50% occupancy) with 2 (4 with timing) Bytes in parallel (1 or 2 LVDS pairs) at 53 MHz = 4.7 microseconds per event
- L2 transfer 1, LRC to VRB: 4 * 250 (500 with timing) 16-bit words per Glink = 38 (76) microseconds
- L2 transfer 2, VRB/SBC to L3: 50 * 500 (1000) Bytes = 25 (50) kBytes over a single Gbit ethernet link = 0.25 (0.5) milliseconds; currently we use 6 (of 8 possible) 100 Mbit ethernet links in parallel (4 readout crates with up to two ethernet links each). For a 3 kHz rate (fully derandomized) one would need 3-4 Gbit links.
--------------------------------------------------------------------
Comment from Jamieson, 2004-10-29:

Regarding this new concentrator board -- the new DFE backplane, power supplies and crate controller may be of use here.
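The buffer-size and transfer-time figures above follow from simple arithmetic and can be cross-checked with a few lines of Python. The 26.5 MHz Glink word rate used below is an inference (half the 53 MHz clock), chosen because it reproduces the quoted 38 microseconds; it is not stated in the note.

```python
# Cross-check of the LRC buffer and transfer numbers quoted above.

LINKS = 14          # LVDS links per LRC board
EVENTS = 16         # buffer depth, same as the VRB
EVENT_BYTES = 500   # bytes/event/board (no timing)
CLOCK_HZ = 53e6     # word clock on the AFE-to-LRC LVDS links

# Buffer: 14 * 16 * 500 = 112,000 bytes = 112 kBytes
buffer_bytes = LINKS * EVENTS * EVENT_BYTES

# L1 transfer: 250 16-bit words (250 channels at 50% occupancy),
# one word per clock over a single pair
l1_transfer_s = 250 / CLOCK_HZ                       # ~4.7 us

# L2 transfer, LRC to VRB: 4 * 250 = 1000 words per Glink.
# Assuming (not stated in the note) a Glink word rate of half
# the 53 MHz clock, this reproduces the quoted 38 us:
glink_transfer_s = (4 * 250) / 26.5e6                # ~37.7 us

print(buffer_bytes, l1_transfer_s * 1e6, glink_transfer_s * 1e6)
```

The same arithmetic doubles the Glink time to roughly 76 microseconds for the 22-bit timing variant (500 words per board), matching the figure in parentheses above.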
The new DFE backplane will allow for up to 14 28-bit LVDS inputs (or 20 21-bit LVDS inputs), and then this board would have some FPGAs and a gigabit optical transmitter or two on it. [We are thinking about using 16:1 (16 bits per pair) links, with two pairs out and one pair in.] Each LVDS input on this concentrator board would come from the AFEII. If the LVDS link is 28 bits, there is an unused twisted pair which could be used to send the RF clock and control bits to the AFEII. [cf. comment above.] The new crate controller provides a simple read/write bus and SCL timing signals to each card via the backplane. I've got extra crates, backplanes, controllers, power supplies and it's all working, so all you need is the new board. Here are the specs for the backplane, crate controller, power supplies, etc.: http://www-d0.fnal.gov/~jamieson/run2b/
----------------------------------------------------------------------
Bus LVDS (BLVDS) (from the National Semiconductor LVDS Owner's Manual, http://www.national.com/appinfo/lvds/files/ownersmanual.pdf):

“1.4 Bus LVDS (BLVDS)
Bus LVDS, sometimes called BLVDS, is a new family of bus interface circuits based on LVDS technology, specifically addressing multipoint cable or backplane applications. It differs from standard LVDS in providing increased drive current to handle double terminations that are required in multipoint applications. Bus LVDS addresses many of the challenges faced in a high-speed bus design.
• Bus LVDS eliminates the need for a special termination pull-up rail
• It eliminates the need for active termination devices
• Utilizes common power supply rails (3.3V or 5V)
• Employs a simple termination scheme
• Minimizes power dissipation in the interface devices
• Generates little noise
• Supports live insertion of cards
• Drives heavily loaded multi-point busses at 100’s of Mbps
The Bus LVDS products provide designers with new alternatives for solving high-speed, multi-point bus interface problems.
Bus LVDS has a wide application space ranging from telecom infrastructure and datacom applications, where card density demands high-performance backplanes, to industrial applications, where long cable length and noise immunity are useful. Refer to Chapter 5 for more details on Bus LVDS.”

(We would be using the BLVDS chipsets for point-to-point links.)
-----------------------page 4----------------------------------------------------------------------------
Implications

Need to test LVDS chip/cable combinations:
- order a DS92LV16 evaluation board
- order cable samples
Need to test the Glink implementation.

LRC design: tag onto Stefano Rapisarda’s VLSB design; start as soon as the VLSB design (not the board production!) is done.
AFE II-T design: incorporate the concentrator FPGA and the LVDS SERDES chips.

Adiabatic installation: need space for the LRC crate on the south side (to minimize LVDS cable length).
- Solution I: Install AFE II-T for x51 and x53 (the non-axial crates) adiabatically with the old readout; when complete, replace Sequencer 5 in PC19-1 with an LRC; then continue with the axial (north side) AFE replacement. Disadvantage: need to get the readout and the old SCL distribution over the grey cable/Sequencer working.
- Solution II: Find a temporary spot for the LRC until all AFEs are replaced (e.g. the laptop platform between the cryostats; the LRC rests on its side panel, backplane to the south; fan tray for cooling). Advantage: can drop the Sequencer connection from the AFE II-T design completely.
- Solution III: LRC on Platform West, if speed vs. cable length allows.

Other potential benefits:
- Factor 4 bandwidth gain off the AFE compared to the grey-cable Seq-AFE connection (2 * 16 bits @ 53 MHz vs. ½ * 16 bits @ 53 MHz).
- Could replace the 1553 with an optical Gbit/LVDS connection (through the DFEC controller and the SCL-LVDS pair from the LRC to each AFE): faster and more reliable AFE download and control.
- SCL availability on the AFE allows moving the buffering between L1 and L2 accepts to the AFE: lower bandwidth off the platform is needed for the same rate.
---------------------page 5------------------------------------------------------------------------------
Option 2: LRC + Gbit optical ethernet via PCs into L3

[This was the baseline option until Jamieson came up with the Glink reimplementation.]

Motivation: Bypass the Glink + VRB; go directly into L3.

Change to the basic outline: The LRCs have Gigabit ethernet fiber output instead of the Glink output. They also implement the L1/L2 buffering that is currently done by the VRBs. The LRCs feed their information to one or more PCs that implement the L3 buffering and routing functionality currently done by the SBCs. The PC(s) (MBC = multi-board computer?) have ethernet output (single Gbit or multiple 100 Mbit, as needed) to the L3 system.
------------page 6--------------------------------------------------------------------------------------
Option 2: LRC (LVDS Receiver Crate) + PCs

AFE II modifications: same as option I.

LRC (LVDS Receiver/Concentrator): same as option I, except:
- Replace the Glinks with one optical Gbit ethernet output.
- Buffer depth of 16 events (same as the VRB) = 14 * 16 * 500 Bytes = 112 kBytes. Easy to have more buffering. Optional: operate in AFE buffering mode; transfer from the AFE only on L2 accept.
- Output: raw ethernet to 1-4 PCs (same as the DFEC). No need to implement a) TCP/IP or b) SBC buffering and routing functionality.
Receiver PC/PCs ('MBC'): Receives up to 20 optical Gbit inputs, either through a switch (we know we can route raw ethernet through a switch) or directly via multiple Gbit cards, and sends one or more ethernet outputs to L3. Could be crate-mounted (i.e. an SBC), but doesn't need to be. Could be more than one PC if backplane bandwidth is a concern. Except for the special inputs it runs standard SBC software. (Doug Chapin: replace the custom kernel driver module for VME input with a new driver; the interface to the SBC 'user level' code is a 'BIGPHYS' memory array; the 'user code' = route manager and L3 interface, identical to the SBC.)

Bandwidth:
- L1 transfer, AFE to Concentrator: 250 channels (at 50% occupancy) with 4 Bytes in parallel (2 LVDS pairs) at 53 MHz = 4.7 microseconds per event
- L2 transfer 1, concentrator to PC: 14 * 500 (750) Bytes over Gbit fiber = 56 (84) microseconds
- L2 transfer 2, PC to L3: 200 * 500 (750) Bytes = 100 (150) kBytes over a single Gbit ethernet link = 1 (1.5) milliseconds; currently we use 6 (of 8 possible) 100 Mbit ethernet links in parallel (4 readout crates with up to two ethernet links each). For a 3 kHz rate (fully derandomized) one would need 3-4 Gbit links.
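The Option 2 ethernet transfer times can likewise be cross-checked. At the raw link rate the figures come out slightly below those quoted above; the note's 1 (1.5) ms for the PC-to-L3 transfer corresponds to an effective throughput of roughly 0.8 Gbit/s, i.e. it presumably allows for protocol overhead.

```python
# Cross-check of the Option 2 Gbit-ethernet transfer times quoted above.

GBIT = 1e9  # raw link rate in bits/s; protocol overhead ignored

# L2 transfer 1: one LRC board's worth of data (14 links) to a PC
for event_bytes in (500, 750):                  # without / with timing
    t = 14 * event_bytes * 8 / GBIT
    print(f"LRC->PC, {event_bytes} B/event: {t * 1e6:.0f} us")

# L2 transfer 2: 200 boards' worth of data to L3 over a single link
for event_bytes in (500, 750):
    t = 200 * event_bytes * 8 / GBIT
    print(f"PC->L3, {event_bytes} B/event: {t * 1e3:.1f} ms")

# Raw-rate results: 56 (84) us and 0.8 (1.2) ms; the note quotes
# 1 (1.5) ms, consistent with ~80% effective link utilization.
# At a 3 kHz accept rate one event must move every ~0.33 ms, so a
# single Gbit link saturates -- hence the quoted need for 3-4 links.
```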