Attached below is an answer from Yasuyuki Akiba for RICH.
In addition to specific data volumes in bytes, I think it contains some
interesting suggestions and open questions.

	On the other hand, I am afraid we will not hear from the TOF
group by tomorrow, as Kazu Kurita is flying back from CERN via Tokyo to
New York right now.  Thank you.

				Yours Sincerely,  Kenta
======================================================================
    Kenta Shigaki     PHENIX Group, Brookhaven National Laboratory
    shigaki1@bnl.gov            http://sgc4.rhic.bnl.gov/~shigaki/
    phone : +1-516-344-7801                  fax : +1-516-344-7841
======================================================================

----- begin message from Akiba -----

>  1 - WHAT KINDS OF DATA AND WHAT DATA VOLUMES ARE PRODUCED?
>  ----------------------------------------------------------
>  For each type of data, please note its format, if it is produced/collected
>  by small standalone systems ...
>
>  1.1 - ONE-TIME DATA
>  -------------------
>  This would be for example data collected during construction of a detector.
>
> - Detector component characterizations, identification, history ...
> - Detector mechanical data. Geometry, surveys, wire tensions ...
> - other?

We have 5120 PMTs in RICH. We have
   A) data sheets from the manufacturer (Hamamatsu)
   B) test data from INS/Tokyo, taken before the tubes are shipped to the US
   C) test data from SUNY-SB, taken in the "supermodule"
for each of the tubes. We are setting up an ORACLE database for those data.
S. Salomone at SUNY-SB is working on it. The data will be organized in separate
tables (relations) with a common key (serial #). Each table has about 10
columns. Tables (A) and (B) have one entry for each tube, and (C) will have
at least two entries per tube before installation.
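
As a purely illustrative sketch (the actual schema is up to the SUNY-SB work;
the table and column names below are my own assumptions, and only a few of the
~10 columns are shown), the three tables keyed by the serial number might look
like:

    CREATE TABLE pmt_datasheet (        -- (A) Hamamatsu data sheets
        serial_no     NUMBER PRIMARY KEY,
        cathode_sens  NUMBER,
        anode_sens    NUMBER,
        gain          NUMBER
    );

    CREATE TABLE pmt_test_ins (         -- (B) INS/Tokyo tests before shipping
        serial_no     NUMBER PRIMARY KEY,
        gain          NUMBER,
        noise_rate    NUMBER,
        test_date     DATE
    );

    CREATE TABLE pmt_test_sb (          -- (C) SUNY-SB supermodule tests
        serial_no     NUMBER,
        test_seq      NUMBER,           -- 1st, 2nd, ... measurement per tube
        one_pe_res    NUMBER,
        test_date     DATE,
        PRIMARY KEY (serial_no, test_seq)
    );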

We are planning to store "histograms" of the measurements at SB in the
database. We are currently studying how to do this.
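
One possible layout (only a sketch, not a decision) would be one row per
histogram bin, keyed by tube, measurement, and histogram type:

    CREATE TABLE pmt_histogram (
        serial_no  NUMBER,
        test_seq   NUMBER,              -- which SB measurement
        hist_id    NUMBER,              -- e.g. 1 = ADC spectrum
        bin_no     NUMBER,
        content    NUMBER,
        PRIMARY KEY (serial_no, test_seq, hist_id, bin_no)
    );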

We will also have a table of (location <--> serial #), where location is
the location of the tube in the PHENIX detector.
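
For example (the channel naming is only a guess here):

    CREATE TABLE pmt_location (
        serial_no  NUMBER PRIMARY KEY,  -- a tube sits in one place at a time
        arm        NUMBER,
        panel      NUMBER,
        row_no     NUMBER,
        col_no     NUMBER
    );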

We will also have tables for the electronics (FEE modules, HV modules, DCMs).
The plan for these is not established yet.

>  1.2  - RUN-TIME DATA
>  --------------------
>  These would be data that are produced on a regular basis once we are running.
>
> - Data returned from slow controls: temperatures, voltages ...
> - Data produced by calibration programs
> - Component swap/repair/replacement histories
> - other?

We plan to monitor
 - temperature, Cherenkov radiator gas purity, flow rate, etc. from the vessel
 - HV for the tubes (one HV channel supplies 8 to 16 tubes)
These data will be sampled regularly during the run and stored in the database.
The sampling frequency will be at least once per 8-hour shift, but should be
more frequent initially.
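
A simple possibility (names illustrative only) is one row per reading, keyed
by channel and sample time:

    CREATE TABLE rich_monitor (
        channel_id     VARCHAR2(32),    -- e.g. 'VESSEL_TEMP_1', 'HV_SECTOR_12'
        sample_time    DATE,
        reading_value  NUMBER,
        PRIMARY KEY (channel_id, sample_time)
    );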

We will also record the calibration data. I imagine at least two kinds of data
here:
(1) calibration data (pedestal, gain, noise level, etc.) determined from the
data
(2) calibration data that is loaded into the FEE and DCM during the run. (We
have a variable-gain amplifier in the FEE; the VGA settings should also be
recorded.)

Component history should also be recorded. This will be handled naturally by
the (location <--> serial #) table, with a history entry for each tube. For
FEE and DCM cards, we may have a "repair history" entry. (A tube is not
repaired if it fails, so there will be no repair history for tubes.)
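
Concretely, the location table could be given validity times, so that the full
history of each tube falls out of the same table (again, just a sketch with
assumed names):

    CREATE TABLE pmt_location_history (
        serial_no     NUMBER,
        arm           NUMBER,
        panel         NUMBER,
        install_time  DATE,
        remove_time   DATE,             -- NULL while the tube is installed
        PRIMARY KEY (serial_no, install_time)
    );

A similar table with a free-text remark column would cover the repair history
of FEE/DCM cards.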

The largest demand on the database will come from the calibration data. Since
we have 5120 tubes, and each tube produces at least 8 words (ADC pedestal, ADC
gain, ADC noise, one-photon resolution, TDC clock, TDC slewing, TDC t0, HV
setting) per calibration, we will have > 40K words = 160K bytes per
calibration. With several calibrations per day, this means we need at least
1 MB per day of database storage just for the calibration data.
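
Spelled out (assuming 4-byte words, and several calibration passes per day):

    5120 tubes x 8 words/tube   =  40,960 words  (> 40K words)
    40,960 words x 4 bytes/word = 163,840 bytes  (~160 KB per calibration)
    ~160 KB x several calibrations/day  =>  > 1 MB/day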

>  1.3 - DOCUMENTATION
>  -------------------
> - Writeups, technical drawings, minutes, photos ...
> - other?

It would be nice to have those documents in a database, too. Is anyone
studying how to do this? If someone in PHENIX sets up a good database for
documents, we will use it.

>  2 - ACCESS
>  ----------
>  This is more difficult to define, but also more important in that the
>requirements and wishes expressed here will drive decisions on what the actual
>implementations will be. Imagine an ideal situation, where we could dream up
>the nicest software tools and interfaces. If we get lucky, those things may
>already be out there...
>
>  2.1 - INPUT
>  -----------
>  How are the data generated, and how would you like to see - from a users
> point of view - the data transported to the 'central' data base.
>
> - are they/will they be in some existing data base program? If so, which?
> - if they are/will be in some other computer form, what is it? Machine type,
>   software application, format ....
> - do you want your program/application to write directly into the data bases?
> - other?

There should be many ways to put the data into the database. The phototube
data sheets from Hamamatsu are supplied as ASCII text, and S. Salomone has
already put them into ORACLE. (I think she used SQL*LOAD.) The test results
from INS and SUNY-SB will be put into ORACLE in a similar way.
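
For reference, a SQL*Loader control file for ASCII data of this kind looks
roughly like this (the file name and columns are invented here; the real ones
are whatever was actually used):

    LOAD DATA
    INFILE 'hamamatsu_datasheet.txt'
    INTO TABLE pmt_datasheet
    FIELDS TERMINATED BY WHITESPACE
    (serial_no, cathode_sens, anode_sens, gain)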

I think direct write access to the database from an application is a policy
issue. We need a method to store the calibration data determined by the
"calibration modules" of the analysis program. However, this does not
necessarily mean that the calibration modules write the results directly to
the DB. That is a convenient way, but it introduces the possibility that
'wrong' results are put in the DB. We need to make a policy decision here. I
think the issues to be considered are:
- how to 'certify' the calibration results before they are put in the DB
- will all 'calibration data' be kept in the DB? (Is wrong data removed?)
- how to control "write permission" to the DB
- I think we need some "standard keys" for calibrations. What should they be:
  run number, time stamp, something else, or a combination?
If we decide that the calibration modules write directly to the DB, we
need a "standard database API". The questions are

- How is the API defined?
- Who implements it?

Coupled to these questions is the design of the calibration DB itself. Namely:
- Who defines the tables (relations) for the DB?
- Who designs/implements/maintains the DB access API?
- How does the design of the DB affect its performance (i.e. query speed,
  storage space)?
- How do we modify a table definition if that becomes necessary? When a table
  is modified, how are the application programs that access it modified? How
  do we keep compatibility between old data and new data?
A sketch of one possible calibration table layout is given below.
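
To make these questions concrete, here is one possible layout (purely
illustrative; the keys, the "certified" flag, and all names are assumptions),
keyed by run number and channel, with a flag so that results can be certified
after they are written:

    CREATE TABLE rich_calibration (
        run_no      NUMBER,             -- or a (begin_run, end_run) validity range
        serial_no   NUMBER,
        adc_ped     NUMBER,
        adc_gain    NUMBER,
        adc_noise   NUMBER,
        one_pe_res  NUMBER,
        tdc_clock   NUMBER,
        tdc_slew    NUMBER,
        tdc_t0      NUMBER,
        hv_setting  NUMBER,
        certified   CHAR(1) DEFAULT 'N',  -- set to 'Y' only after approval
        entry_time  DATE,
        PRIMARY KEY (run_no, serial_no, entry_time)
    );

Write permission could then be restricted to a dedicated account, with the
calibration modules writing uncertified rows and a separate step setting the
flag.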


>  2.2 - OUTPUT
>  ------------
>  How would you like to access this information. In the ideal world, what
> would the user interfaces look and feel like? Some of you might have good/bad
> experiences from previous experiments - lets hear them.
>
> - organization, searchability ...
> - graphing, manipulation, display ...
> - direct access from analysis programs
> - for documents: searching by title/keywords/content/date/author ....
> - are all documents accessible from the web?
> - other?
>
As long as the data are in the database, there are many ways to access them.
One can use SQL, for example, to get the information (a sample query is shown
below). The questions are
- do we need some "standard GUI" for the DB?
- who is going to take care of it?
- what is the highest priority item?
Personally, I do not care how fancy the GUI of the database is.
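
For example, an ad hoc query like the following (using the assumed table and
column names from the sketches above) already answers many questions without
any special GUI:

    SELECT d.serial_no, d.gain, t.one_pe_res
      FROM pmt_datasheet d, pmt_test_sb t
     WHERE d.serial_no = t.serial_no
       AND t.test_seq  = 2;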

In my view, the more important thing is the "interface" to the analysis
program and for the initialization of the FEE/DCM. If we cannot get the data
loaded into the FEE/DCM, we cannot run the experiment. If we cannot get the
calibration data from the DB, we cannot analyze the data.
I have a few specific requirements for the OUTPUT of the calibration database.
(1) There must be a standard interface to the analysis program for calibration
data.
(2) The DB must be capable of initializing the calibration data structures in
a few seconds.
(3) The user can change the source of the calibration data from the database
to some external file.
(4) It is possible to get the history of each tube/channel as a function of
time (or run number); a sample query is sketched below.
Requirement (2) could have a strong implication for the design of the DB
(i.e., how the data are organized). In an ideal world, one would want to
organize the database in "normalized form", but this could make queries
impractically slow and may require too much data storage. If this is the
case, we have to make a compromise. I think someone should make a performance
study of the database so that we can make decisions on the database
organization from quantitative data.
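
For requirement (4), with a calibration table keyed by run number as in the
sketch above, the history of a single channel is a one-line query, e.g.:

    SELECT run_no, adc_gain, hv_setting
      FROM rich_calibration
     WHERE serial_no = 12345              -- hypothetical tube
       AND certified = 'Y'
     ORDER BY run_no;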

----- end message from Akiba -----