AFE II readout via LVDS and Gigabit Ethernet          sg, 2004-10-28
----------------------------------------------
v0.2 2004-10-29: comments from Jamieson, Paul R. and Doug Chapin included
v0.3 2004-11-02: added timing; changed L2 accept target rate to 3 kHz
v0.4 2004-11-04: allow for L2 buffering on AFE; LVDS implementation example

Motivation: get rid of the grey Sequencer cables (unbalanced ground) and
the Sequencer backplane (many known and probably also many unknown bad
connectors). Bypasses the VRB to avoid having to buy/rework GLink
transmitters.

Basic outline: read AFE II ADC output (500 Byte/event/board) via LVDS
into a new crate of concentrators/buffers. These boards have LVDS
inputs, SCL (over LVDS?) outputs, and Gigabit fiber output, i.e. they
are a hybrid of Mixer and DFEC, and also implement the L1/L2 buffering
that is currently done by the VRBs. These LVDS concentrator boards feed
their information to one or more PCs that implement the L3 buffering
and routing functionality currently done by the SBCs. The PC(s)
(MBC = multi-board computer?) have ethernet output (single Gbit or
multiple 100 Mbit, as needed) to the L3 system.

Details
-------

AFE II modifications:
- output data format:
  no timing:   16 bit = 9 bit address + 1 bit discriminator + 6 bit ADC
  with timing: 22 bit = 9 bit address + 1 bit discriminator
                        + 6 bit TDC + 6 bit ADC
               (modified after talking to Paul R.)
- SCL input options:
  - separate pair from the LVDS Concentrator (preferred),
  - separate receiver and backplane bus,
  - separate receiver/fanout board and thin-wire SCL receivers on board,
    a la DFEC
- concentrator chip to concatenate 8 MCMs onto the LVDS output
- optional: 16-event L2 buffering on the AFE to reduce output bandwidth
  requirements
- 2 pairs * 16 bit DS92LV16 LVDS output and 1 pair SCL input through
  the existing AFE backplane

LVDS Concentrator:
- 6U DFEB crate (reuse DFEB backplane, power supply with 1553 interface,
  crate hardware, and DFEC controller)
- slot 1 has a DFEC for download/monitoring/control and SCL distribution.
- 20 slots (not all needed for the AFE system) with receiver/buffer
  boards. Each board has
  - 14 LVDS links with 5*3 pins:
    - 2 pairs for 2 parallel 16:1 DS92LV16 LVDS links from the AFE
    - 1 pair SCL output (potentially also over 16:1 DS92LV16?) to the AFE
  - one optical Gbit ethernet output.
- Buffer depth of 16 events (same as the VRB)
  = 14 * 16 * 500 = 112 kByte. Easy to have more buffering (?).
  Optional: operate in AFE buffering mode; transfer from the AFE only
  on L2 accept.
- Raw ethernet to the MBC (same as DFEC). No need to implement
  a) TCP/IP,
  b) SBC buffering and routing functionality.

Receiver PC/PCs ('MBC'):
Receives up to 20 optical Gbit inputs, either through a switch (we know
we can route raw ethernet through a switch) or directly via multiple
Gbit cards, and sends one or more ethernet outputs to L3. Could be
crate-mounted (i.e. an SBC), but doesn't need to be. Could be more than
one PC if backplane bandwidth is a concern. Except for the special
inputs it runs standard SBC software.
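The output word formats listed under 'AFE II modifications' above can be
sketched as simple bit packing. The field widths and the two-word split
over the 16-bit LVDS link (word 1: address + discriminator, word 2:
TDC + ADC) are from this note; the bit ordering within each word
(address in the most significant bits) is an assumption for
illustration only:

```python
def pack_no_timing(addr, disc, adc):
    """16-bit word: 9 bit address + 1 bit discriminator + 6 bit ADC.

    Bit layout (assumed): [addr 15:7][disc 6][adc 5:0]
    """
    assert addr < 2**9 and disc < 2 and adc < 2**6
    return (addr << 7) | (disc << 6) | adc

def pack_with_timing(addr, disc, tdc, adc):
    """22-bit word: 9 bit address + 1 bit discriminator
    + 6 bit TDC + 6 bit ADC.

    Bit layout (assumed): [addr 21:13][disc 12][tdc 11:6][adc 5:0]
    """
    assert addr < 2**9 and disc < 2 and tdc < 2**6 and adc < 2**6
    return (addr << 13) | (disc << 12) | (tdc << 6) | adc

def split_for_lvds(addr, disc, tdc, adc):
    """The 22-bit word exceeds the 16-bit LVDS width, hence the
    two-transfer scheme described in the Bandwidth section:
    word 1 carries address + discriminator, word 2 carries TDC + ADC."""
    word1 = (addr << 1) | disc     # 10 of 16 bits used
    word2 = (tdc << 6) | adc       # 12 of 16 bits used
    return word1, word2
```

With all fields at their maximum, `pack_no_timing(511, 1, 63)` fills all
16 bits and `pack_with_timing(511, 1, 63, 63)` fills all 22, confirming
the widths add up.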
(Doug Chapin: replace the custom kernel driver module for VME input
with a new driver; the interface to the SBC 'user level' code is the
'BIGPHYS' memory array; the 'user code' = route manager and L3
interface is identical to the SBC's.)

Bandwidth:

L1 transfer, AFE to Concentrator:
  500 Bytes (at 50% occupancy) with 2 Bytes in parallel (16 of 21 bits)
  over a single LVDS link at 53 MHz = 4.7 microseconds per event.
  With timing info: 2 transfers (word 1: address + discriminator bit /
  word 2: TDC + ADC) at 53 MHz = 9.4 microseconds per event.
  (Need to check whether two 16:1 LVDS links can be used in
  synchronized mode.)

L2 transfer 1, concentrator to PC:
  14 * 500 Bytes over Gbit fiber = 56 microseconds.

L2 transfer 2, PC to L3:
  200 * 500 Bytes = 100 kBytes over a single Gbit ethernet link
  = 1 millisecond; currently we use 6 (of 8 possible) 100 Mbit ethernet
  links in parallel (4 readout crates with up to two ethernet links
  each). For a 3 kHz rate (fully derandomized) one would need
  3-4 Gbit links.

-----------------------------------------------------------------------------------------------------------

comment from Jamieson 2004-10-29:

Hi guys,

Regarding this new concentrator board -- the new DFE backplane, power
supplies and crate controller may be of use here. The new DFE backplane
will allow for up to 14 28-bit LVDS inputs (or 20 21-bit LVDS inputs),
and then this board would have some FPGAs and a gigabit optical
transmitter or two on it. Each LVDS input on this concentrator board
would come from the AFE II. If the LVDS link is 28 bits, there is an
unused twisted pair which could be used to send the RF clock and
control bits to the AFE II.

The new crate controller provides a simple read/write bus and SCL
timing signals to each card via the backplane. I've got extra crates,
backplanes, controllers, power supplies and it's all working, so all
you need is the new board.

Here are the specs for the backplane, crate controller, power supplies,
etc.:
http://www-d0.fnal.gov/~jamieson/run2b/

Let me know if anything here is of interest.
cheers,
jamieson

----------------------------------------------------------------------------------------------------------

Short explanation of Bus LVDS (BLVDS), from the National Semiconductor
LVDS 'Owner's Manual':
http://www.national.com/appinfo/lvds/files/ownersmanual.pdf

1.4 Bus LVDS (BLVDS)

Bus LVDS, sometimes called BLVDS, is a new family of bus interface
circuits based on LVDS technology, specifically addressing multipoint
cable or backplane applications. It differs from standard LVDS in
providing increased drive current to handle the double terminations
that are required in multipoint applications. Bus LVDS addresses many
of the challenges faced in a high-speed bus design:

  • Bus LVDS eliminates the need for a special termination pull-up rail
  • It eliminates the need for active termination devices
  • Utilizes common power supply rails (3.3V or 5V)
  • Employs a simple termination scheme
  • Minimizes power dissipation in the interface devices
  • Generates little noise
  • Supports live insertion of cards
  • Drives heavily loaded multi-point busses at 100's of Mbps

The Bus LVDS products provide designers with new alternatives for
solving high-speed, multi-point bus interface problems. Bus LVDS has a
wide application space, ranging from telecom infrastructure and datacom
applications, where card density demands high-performance backplanes,
to industrial applications, where long cable length and noise immunity
are useful. Refer to Chapter 5 for more details on Bus LVDS.
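As a cross-check, the transfer-time estimates from the Bandwidth
section above can be reproduced with a short script. All numbers are
taken from this note; 'Gbit' is taken as a raw 1e9 bit/s payload rate
(i.e. ethernet framing overhead is ignored), which is an assumption:

```python
# Bandwidth cross-check for the AFE II / LVDS Concentrator readout note.

LVDS_CLOCK = 53e6    # 16-bit words per second over one LVDS link
EVENT_BYTES = 500    # per AFE board per event, at 50% occupancy
GBIT = 1e9           # 'Gbit' link, payload bits per second (assumed raw)

# L1 transfer, AFE -> Concentrator: 2 bytes per LVDS clock
t_l1 = (EVENT_BYTES / 2) / LVDS_CLOCK
print(f"L1 AFE -> Concentrator:      {t_l1 * 1e6:.1f} us")      # 4.7 us

# With timing info: two transfers per channel, so twice as long
print(f"L1 with timing info:         {2 * t_l1 * 1e6:.1f} us")  # 9.4 us

# L2 transfer 1, Concentrator -> PC: 14 boards over Gbit fiber
t_l2a = 14 * EVENT_BYTES * 8 / GBIT
print(f"L2 Concentrator -> PC:       {t_l2a * 1e6:.0f} us")     # 56 us

# L2 transfer 2, PC -> L3: 200 boards = 100 kBytes over one Gbit link
t_l2b = 200 * EVENT_BYTES * 8 / GBIT
print(f"L2 PC -> L3:                 {t_l2b * 1e3:.1f} ms")     # 0.8 ms

# Aggregate output rate at a fully derandomized 3 kHz L2 accept rate
rate_bits = 3e3 * 200 * EVENT_BYTES * 8
print(f"Output at 3 kHz:             {rate_bits / GBIT:.1f} Gbit/s")
```

The 3 kHz case comes out at 2.4 Gbit/s of payload, consistent with the
note's estimate of 3-4 Gbit links once per-link headroom and framing
overhead are allowed for.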