ID061: Acquisition and Data Logging for the ANTARES Project

J. F. Gournay, R. Azoulay, N. de Botton, P. Lamare, J. Poinsignon

CEA, DSM/DAPNIA, Saclay, France

A. Le Van Suu

IN2P3 - CPPM, France

The goal of the ANTARES R&D project (Astronomy with a Neutrino Telescope and Abyss environmental RESearch) is to study the feasibility of deploying and operating a very large deep-sea high-energy cosmic neutrino detector. The programme began in 1996 in the Mediterranean Sea along the French coast, at depths down to 2500 m. Several lines have been deployed repeatedly for site parameter measurements such as optical background, bacteria and sediment fouling, and undersea currents. These lines include a small autonomous intelligent controller which performs the acquisition and storage of environmental parameters, the interfacing with photomultiplier tubes and the control of some specific devices (acoustic modem, current meter, compass, motor). This paper presents these tests and the different configurations of the controller used to drive them. The controller has to work autonomously, with limited resources and with high reliability; the technical options chosen to meet these stringent requirements are described. The next step of the programme is also presented: the deployment of a complex mooring line including 8 photomultiplier tubes and several instrumentation subsystems. This line will be connected to the shore by a 40 km electro-optical cable which will supply power and allow data acquisition and supervision. The same controller will be used for data concentration in a WorldFIP fieldbus architecture.

Submitted by : Jean-François GOURNAY
Full address: CE Saclay, DAPNIA/SIG, 91191 Gif/Yvette CEDEX, FRANCE
E-Mail : jgournay@cea.fr
Fax : 33-1-69-08-63-01
Keywords : ANTARES, Deep-sea, Acquisition, Data-logging, Microcontroller


ID062: Allen-Bradley SLC 504 versus Sixtrak PLC Controls Integration

C. Briegel,

Fermilab

The Allen-Bradley SLC 504 is implemented in Fermilab's Main Injector for power supply monitoring and basic control; Allen-Bradley's DH+ is used for communication to a VME front-end. The Sixtrak PLC is implemented in the Main Injector for LCW (low conductivity water) controls; the Sixtrak Gateway utilizes Ethernet to communicate with a VME front-end. The two implementations are compared with respect to communications, flexibility, functionality, and integration into a global control system.

Operated by the Universities Research Association, Inc. under contract with the U.S. Department of Energy.

Submitted by: Charlie Briegel
Full address: M.S. 347, Fermilab, P.O. Box 500, Batavia, IL 60510
Fax: (630) 840-4510, (630) 840-3093
Keywords: PLC, DH+, Ethernet


ID063: Instrument Lift, Counter-Weight, and Telescope Motion
Controls for SDSS

C. Briegel

Fermilab

SDSS (the Sloan Digital Sky Survey) has three distinct motion systems implemented in a VME162 with various Industry Packs and an intelligent motion controller, utilizing VxWorks and C. The instrument lift inserts or removes a camera, spectrograph, or fiber cartridges from the telescope. The software uses two finite state machines: one for the motion itself and the other for controlling the progress of the motion. The motion is implemented with a digital controller with built-in dithering for pneumatic control. The counter-weight motion moves four 350-pound weights to a specified position for a given instrument; it is controlled by a simple trapezoidal movement with two different slew rates. The telescope has three axes and is implemented with an intelligent 6-axis controller utilizing a PID loop with extensions.
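
The counter-weight move described above amounts to ramping between two slew rates depending on the distance to go. A minimal sketch of such a profile follows (illustrative only, not the SDSS code; all names and numbers are invented):

    // Illustrative sketch (not the SDSS code): a trapezoidal move that slews at a
    // fast rate until close to the target, then finishes at a slow rate.
    #include <algorithm>
    #include <cmath>
    #include <cstdio>

    // Returns the commanded velocity for the current position error (counts).
    // 'fastSlew' and 'slowSlew' are the two slew rates; 'accel' limits ramping.
    double commandVelocity(double error, double currentVel,
                           double fastSlew, double slowSlew,
                           double accel, double slowZone, double dt)
    {
        double target = (std::fabs(error) > slowZone) ? fastSlew : slowSlew;
        target = (error >= 0.0) ? target : -target;           // move toward the target
        double dv = std::clamp(target - currentVel, -accel * dt, accel * dt);
        return currentVel + dv;                                // ramp up, coast, ramp down
    }

    int main()
    {
        double pos = 0.0, vel = 0.0, goal = 10000.0, dt = 0.01;
        while (std::fabs(goal - pos) > 1.0) {
            vel = commandVelocity(goal - pos, vel, 2000.0, 200.0, 4000.0, 500.0, dt);
            pos += vel * dt;
        }
        std::printf("final position %.1f\n", pos);
        return 0;
    }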

Operated by the Universities Research Association, Inc. under contract with the U.S. Department of Energy.

Submitted by: Charlie Briegel
Full address: M.S. 347, Fermilab, P.O. Box 500, Batavia, IL 60510
Fax number: (630) 840-4510, (630) 840-3093
Keywords: SDSS Telescope Motion


ID064: Construction of the Central Control System (COCOS) for the Large Helical Device (LHD) Fusion Experiment

K. Yamazaki, H. Yamada, K.Y. Watanabe, K. Nishimura, S. Yamaguchi, M. Shoji, S. Sakakibara, O. Motojima and the LHD Control Group

National Institute for Fusion Science, 322-6 Oroshi-cho, Toki-shi, Gifu-ken 509-52, Japan

The Large Helical Device (LHD) is the world's largest superconducting helical fusion experimental machine, with a magnetic energy of 1.6 GJ, and is now under construction in Toki City, Japan. All superconducting coils have already been completed, and the plasma vacuum vessel and the upper cryostat are in the final construction stage. As for the LHD control system, we started construction of the main unit of the Central (Chu-Oh) COntrol System (COCOS) in April, based on the design philosophy of (1) flexibility for the physics experiment, (2) reliability for the large engineering machine and (3) extensibility for the central control system. Requirement (1) calls for a human-friendly man-machine interface and advanced real-time plasma control systems, item (2) requires reliable protective interlock systems with hard wiring, and requirement (3) leads to distributed and modularized control instrumentation systems. COCOS is composed of the central control unit (central console, central control board, central control computer and timing board), the torus instrumentation unit (torus instrumentation computer board and protective interlock board), the LHD Man-machine System (LMS), the control data acquisition system, the LHD experimental LAN and the uninterruptible power supply (UPS) systems. These systems use a variety of computers such as UNIX engineering workstations, Windows NT personal computers, VME computer boards with a real-time OS (VxWorks) and programmable logic controllers. The design of COCOS was started almost 10 years ago; at that time a large mainframe computer was considered as the main control computer, later changed to several engineering workstations, and now some Windows NT client-server systems have been added for control and data acquisition. These central systems and more than 50 sub-systems are connected by an FDDI network. The present mission of the LHD project is to produce a first plasma as soon as possible. The COCOS central console and central board, directly connected to programmable logic controllers by hard wiring, will be used for this initial purpose. In particular, the protective interlock system requires hard wiring for simplicity and reliability. In addition, the flexible man-machine system LMS in COCOS can be used. The LHD superconducting magnet will be operated for about 10 hours per day, and the number of short-pulsed plasma operations with 10 second duration will typically be 50-100 shots per day. Unlike present conventional pulsed fusion machines, the LHD machine is going to be operated in steady state (pulse lengths of more than 1 hour) and requires interactive control of the machine and the plasma, especially in the plasma control system. The LHD Control Building with the main control room was completed in November 1996, and the first plasma is expected at the end of March 1998.

Submitted by: Kozo Yamazaki, Professor
Full address: National Institute for Fusion Science 322-6 Oroshi-cho, Toki-shi, Gifu-ken 509-52, Japan (Our Institute has been moved to this New Address !!!)
E-mail address: yamazaki@nifs.ac.jp
Fax number: +81-572-58-2618 (New Number !!!)
Keywords: fusion machine, Large Helical Device, central control system, computer network, man-machine system


ID065: INDUS-2 CONTROL SYSTEM

J.S. Adhiakri, B.J. Vaidya

Control systems, Accelerator Programme, Centre for Advanced Technology, Indore, INDIA 452013

A 2-GeV Synchrotron Radiation Source is being set up at the Centre for Advanced Technology, Indore, India. The machine parameters are finalised and the different subsystems are in the design and fabrication stage. This paper describes the control system architecture of INDUS-2 along with past experience from the INDUS-1 system. The control system is being designed around a three-layer architecture, namely a user interface layer, a supervisory layer and a machine interface layer. Pentium PCs running Windows NT will be used as the user interface, with Visual C++ as the main development environment. These systems will be networked via Ethernet, and users will connect to the network via bridges or gateways. A database server connected to the network will provide periodic data logging. The supervisory layer consists of VME-based systems built around MC68040 microprocessors running the OS-9 RTOS. Each supervisory system is dedicated to one subsystem: magnet power supplies, RF, vacuum, radiation monitoring, system interlock and status, and timing. These are connected to the user interface via Ethernet and to the machine interface via Profibus. The machine interfaces are based on VME systems with MC68000 microprocessors. This is a dedicated layer and proper isolation is maintained at every stage. The control room may also have facilities for direct probing of the beam, such as a video frame grabber, video monitor, CRO, etc.

Submitted by: J.S. Adhiakri, B.J. Vaidya
Full address: Control systems, Accelerator Programme, Centre for Advanced Technology, Indore, INDIA 452013
E-mail: jsa@cat.ernet.in
FAX: 91 731 488000


ID066: Control System for the VEPP-5 Electron-Positron Complex

D.Yu. Bolkhovityanov, Yu.I. Eidelman

The Budker Institute of Nuclear Physics, Novosibirsk, Russia

A new complex, VEPP-5, is being built at the Budker Institute of Nuclear Physics. Electron and positron linacs, a damping ring, a Phi-factory and a C-Tau factory are included in this complex. The control system being designed for VEPP-5 consists of three main levels. First, CAMAC hardware in crates with intelligent transputer controllers. Second, a server program running on an x86 Unix computer directly connected to the transputers. Third, a set of control programs running on a number of UNIX computers which form a control system local network via Ethernet. The server provides communication between third-level control programs and processes in the controllers. The main working mode of the system is control and measurement at a frequency of 1 Hz, but faster interaction between control programs and transputers is also allowed. All the control programs send requests for the hardware to the server, which redirects them to the transputer controllers after special preliminary processing, so the transputer controllers get commands only from the server. Because of this, the control programs do not even know with which type of block and in which crate they are interacting. Such an approach also enables easy resolution of conflicts when two or more programs want to access the same block simultaneously. The transputer controller runs a dispatcher communicating with the server, and a number of drivers (one for each CAMAC block) which take requests from the dispatcher. The server program consists of the server proper; a manager, which is used to administer the system; and a porter, which is responsible for access control. The server is a special application-oriented program which uses its own protocol to communicate with the transputer controllers. All the logic of control, i.e. the algorithms used to control the system or some of its subsystems, is contained in the third-level programs. They interact with the server via the client library, which implements a special server communication protocol over TCP/IP. These programs can be written not only by professional programmers but also by the physicists and engineers employed at the complex. Information about the hardware configuration is part of the complex database, so when a block is added or moved, or anything else changes in the system, all the modifications are reflected in both the server and the control programs automatically. The database also contains information about the logical structure of the system: channel properties (such as minimum/maximum permissible values or formulae for calculating composite channels) and information about grouping the channels into logical elements. In order to provide extensibility, portability and ease of editing, the database is textual. To avoid input mistakes, a special procedure is used for making changes, which prevents errors from getting into the working copy. Currently the first variant of this system is used to control the RF system of the damping ring. An evaluation of the system performance was made and confirmed the validity of the solutions used. Design of the full version of the system is in progress.
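
As an illustration of the textual database idea, the sketch below parses a hypothetical channel-description format carrying minimum/maximum permissible values and range-checks a set request; the actual VEPP-5 file format is not described in the abstract, so the layout here is invented:

    // Illustrative sketch only: parsing a hypothetical textual channel-description
    // line of the form "name min max" and using it to range-check a set request.
    #include <iostream>
    #include <map>
    #include <sstream>
    #include <string>

    struct ChannelLimits { double min = 0.0, max = 0.0; };

    std::map<std::string, ChannelLimits> loadChannels(std::istream& in)
    {
        std::map<std::string, ChannelLimits> db;
        std::string line;
        while (std::getline(in, line)) {
            std::istringstream ls(line);
            std::string name; ChannelLimits lim;
            if (ls >> name >> lim.min >> lim.max)    // skip blank / malformed lines
                db[name] = lim;
        }
        return db;
    }

    bool checkSet(const std::map<std::string, ChannelLimits>& db,
                  const std::string& name, double value)
    {
        auto it = db.find(name);
        return it != db.end() && value >= it->second.min && value <= it->second.max;
    }

    int main()
    {
        std::istringstream text("rf.cavity1.voltage 0 35\nrf.cavity1.phase -180 180\n");
        auto db = loadChannels(text);
        std::cout << std::boolalpha << checkSet(db, "rf.cavity1.voltage", 20.0) << '\n';  // true
        return 0;
    }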

Submitted by: Yuri I. Eidelman
Full address: The Budker Institute of Nuclear Physics, 11 Academician Lavrentiev prospect, Novosibirsk, 630090, Russia
E-mail address: eidelyur@inp.nsk.su
Fax number: 383-2-352-163
Keywords: BINP, VEPP-5, control system, transputers


ID067: R.F. Control System For INDUS-1

Pravin S. Fatnani

Centre For Advanced Technology (CAT), Indore, India

The control system for the RF subsystem of the 450 MeV Synchrotron Radiation Source INDUS-1 was completed about two years ago and has been operating quite satisfactorily. It controls the RF systems of the 700 MeV booster synchrotron and the 450 MeV storage ring. The equipment under control includes RF amplifiers of 4 kW and 10 kW and associated equipment such as two RF cavities, high-voltage power supplies, preamplifiers, phase shifters, phase detectors, stepper motor control units, cooling water pumps, chillers, flow controllers, etc. Additionally, it controls the RF amplifiers and associated equipment for the bending magnet and straight section ion-clearing electrode systems. The system controls and monitors the crucial RF parameters such as forward and reflected power (input and output), RF phase and tuning error, besides a host of other parameters. The operator interface is user-friendly and completely menu-driven, requiring no direct data entry from the operator. Coarse and fine up/down controls facilitate easy setting of output levels. Fast interlock validation at the EIU (VME) level ensures safe, timely tripping in case of RF system faults. A local/remote mode of operation for all RF equipment provides flexibility and safety during maintenance on those systems. This system integrates into the overall control system for INDUS-1, which follows a two-layer architecture. PCs in the main control room are used as operator consoles; these form the top layer and are hooked up to a NetWare file server using Ethernet. The bottom layer contains Equipment Interface Units (EIUs), which are VME crates based on 68000 microprocessors. Parallel serial links exist between the two layers, one for each subsystem.

Submitted by: Pravin S. Fatnani
Full address: #114, Accelerator Development Lab Centre For Advanced Technology (CAT) P.O. - CAT, Indore, MP, INDIA 452 013
Email: fatnani@cat.cat.ernet.in
Fax No.: 91 731 488000, 91 731 481525
Keywords : PC, VME, 68000


ID068: Java Application for Creating a Shared Object Cache

Igor Mejuev and Isamu Abe

High Energy Physics Accelerator Research Organization (KEK), 1-1 OHO, Tsukuba, Ibaraki 305, Japan

The Java language is used to create thin GUI clients connected to a server implemented as a Java application. The server contains an object cache which is updated by calls to the underlying system layer. Using object connection technology we establish connections between objects in the server cache and the GUI clients' objects. The states of connected objects are synchronized so that all changes in object state are transferred from client to server and vice versa. Since only changes are transferred, control network traffic is reduced and performance is increased. Software development is also simplified, as neither client nor server has to take care of object state synchronization. Sockets or distributed Java object systems such as RMI or HORB can be used for state transfer. The system is implemented entirely in Java, so it is multi-platform and control clients can run in any Java-enabled browser with minimum system requirements.
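
The key idea is that only changed fields cross the network between mirrored client and server objects. A minimal sketch of that delta-propagation idea follows (written in C++ purely for illustration, although the system described is implemented in Java; all class names are invented):

    // Minimal sketch of propagating only changed fields between mirrored objects
    // (C++ used only for illustration of the idea; all names are invented).
    #include <functional>
    #include <iostream>
    #include <map>
    #include <string>

    class MirroredObject {
    public:
        // The send hook stands in for RMI/HORB/socket transport in the real system.
        explicit MirroredObject(std::function<void(const std::string&, double)> send)
            : send_(std::move(send)) {}

        void set(const std::string& field, double value)
        {
            auto it = state_.find(field);
            if (it != state_.end() && it->second == value)
                return;                        // unchanged: nothing crosses the network
            state_[field] = value;
            send_(field, value);               // only the delta is transferred
        }

        void applyRemote(const std::string& field, double value) { state_[field] = value; }
        double get(const std::string& field) const { return state_.at(field); }

    private:
        std::map<std::string, double> state_;
        std::function<void(const std::string&, double)> send_;
    };

    int main()
    {
        MirroredObject server([](const std::string&, double) {});
        MirroredObject client([&](const std::string& f, double v) { server.applyRemote(f, v); });

        client.set("magnet.current", 12.5);    // transferred: value changed
        client.set("magnet.current", 12.5);    // suppressed: no change
        std::cout << server.get("magnet.current") << '\n';    // 12.5
        return 0;
    }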

Submitted by: Igor Mejuev
Full address: High Energy Physics Accelerator Research Organization (KEK), 1-1 OHO, Tsukuba, Ibaraki 305, Japan
E-mail address: mejuev@kekvax.kek.jp
Fax number: +81-298-64-7529
Keywords: Java, data push, distributed objects


ID069: An Object-Oriented Framework for Client/Server Applications

Walt Akers

Thomas Jefferson National Accelerator Facility

When developing high-level accelerator applications it is often necessary to perform extensive calculations to generate a data set that will be used as an input for other applications. Depending on the size and complexity of these computations, regenerating the interim data sets can introduce errors or otherwise negatively impact system performance. If these computational data sets could be generated in advance and updated continuously from changes in the accelerator, the time and effort required for subsequent calculations could be substantially reduced. UNIX server applications are well suited to accommodate this need by providing a centralized repository for data or computational power. Because of the inherent difficulty in writing a robust server application, the development of the network communications software is often more burdensome than the computational engine. To simplify the task of building a client/server application, we have developed an object-oriented server shell which hides the complexity of the network software development from the programmer. This document will discuss how to implement a complete client/server application using this C++ class library with a minimal understanding of network communications mechanisms.
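
A hypothetical sketch of how such a server shell can hide the communications layer behind a single overridable handler is shown below; this is not the actual Jefferson Lab class library interface, just the general shape of the idea:

    // Hypothetical sketch of the idea only: a server shell base class that hides the
    // socket plumbing and lets the application override a single request handler.
    #include <iostream>
    #include <string>

    class ServerShell {
    public:
        virtual ~ServerShell() = default;

        // In a real shell this would accept connections and dispatch requests;
        // here the event loop is reduced to a single in-process call.
        void run(const std::string& request)
        {
            std::cout << handleRequest(request) << '\n';
        }

    protected:
        // The only method an application must supply.
        virtual std::string handleRequest(const std::string& request) = 0;
    };

    // Application code: a server that keeps a precomputed data set available.
    class OpticsServer : public ServerShell {
    protected:
        std::string handleRequest(const std::string& request) override
        {
            if (request == "GET twiss") return "beta_x=12.3 beta_y=4.5";   // illustrative values
            return "ERROR unknown request";
        }
    };

    int main()
    {
        OpticsServer server;
        server.run("GET twiss");
        return 0;
    }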

Submitted by: Walt Akers
Full Address: Thomas Jefferson National Accelerator Facility Mail Stop 16A 12000 Jefferson Avenue Newport News, Virginia 23606
E-Mail Address: akers@jlab.org
Keywords: Networks, CDEV, Object-Oriented


ID070: Applying the Knowledge-Discovery in DataBases (KDD) Process to Fermilab Accelerator Machine Data

K. Yacoben and L. Carmichael

Fermi National Accelerator Laboratory

This paper describes the steps needed to apply KDD techniques to accelerator machine data in order to improve accelerator machine performance and understanding. Fermilab collects a substantial amount of accelerator machine data during its day-to-day operations, and this data is an ideal basis for a deeper understanding of the accelerators. The objective of this work is to develop an infrastructure for accelerator machine data that utilizes the KDD process in order to facilitate the tracking of trends in machine data and the analysis of the inherent correlations that exist between accelerator components. The initial phase of this process involves the creation of a data warehouse, in Sybase, which serves as a repository of clean, high-quality machine data. The next step involves the development of a Knowledge-Discovery Support Environment (KDSE) which provides a facility for viewing accelerator data by linking the data warehouse to commercial packages such as Excel. Additionally, the KDSE provides a facility for the autonomous tracking of data attributes, which are defined by user-specified functions of warehouse data. These data attributes are stored in the warehouse as meta-data, thus providing a level of abstraction over the data collected. An initial application of the KDSE infrastructure automates some of the preliminary analysis of shot data. This Knowledge-Discovery Application (KDA) allows users to define exception conditions, represented by finite-state machines, that use the warehouse data and data attributes to perform a variety of tests at specified machine states. Any exceptions detected during a shot are automatically reported to selected users. This KDA, in conjunction with the KDSE, will serve as the building blocks upon which true knowledge-discovery engines, such as ones that perform trend and correlation analysis of accelerator machine data, are developed.
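
As a sketch of how an exception condition might be expressed as a finite-state machine over machine states and warehouse attributes (all names and thresholds below are invented, not taken from the Fermilab system):

    // Illustrative sketch (names invented): an exception condition expressed as a
    // small finite-state machine driven by machine states and a warehouse attribute.
    #include <iostream>
    #include <string>

    enum class State { Waiting, Checking, Flagged };

    class LossException {
    public:
        // Called for each (machine state, attribute value) pair retrieved from the
        // warehouse; reports once if the attribute exceeds its limit at "FlatTop".
        void update(const std::string& machineState, double beamLoss)
        {
            switch (state_) {
            case State::Waiting:
                if (machineState == "FlatTop") state_ = State::Checking;
                break;
            case State::Checking:
                if (beamLoss > limit_) {
                    std::cout << "exception: beam loss " << beamLoss
                              << " above " << limit_ << " at FlatTop\n";
                    state_ = State::Flagged;
                } else if (machineState != "FlatTop") {
                    state_ = State::Waiting;
                }
                break;
            case State::Flagged:
                break;                        // already reported for this shot
            }
        }

    private:
        State state_ = State::Waiting;
        double limit_ = 5.0;                  // illustrative threshold
    };

    int main()
    {
        LossException check;
        check.update("Ramp", 1.0);
        check.update("FlatTop", 2.0);
        check.update("FlatTop", 7.5);         // triggers the report
        return 0;
    }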

Submitted by: Kevin Yacoben
Full address: Fermi National Accelerator Laboratory PO Box 500, MS 347 Batavia, IL 60510-0500
E-mail address: yacoben@fnal.gov
Fax number: (630) 840-3093
Keywords: DataBase, Automation, FSM


ID071: Event Handling in TRIUMF's Central Control System

B. Davison, S.G. Kadantsev, E. Klassen, K.S. Lee, M.M. Mouat, J.E. Richards, T.M. Tateyama, P.W. Wilmshurst, P.J. Yogendran

TRIUMF

In TRIUMF's Central Control System, alarm handling and other types of events are dealt with in a software "scans" package. Many changes that must be monitored are not considered "alarms", because there is no error or hazard associated with the various values of the control variable; the package was therefore named to reflect the action of scanning the system for defined state changes. The scan package can issue messages to different logs and take actions as determined by the user. The initial requirements, design, implementation, and user interface are described.
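
A minimal sketch of what a user-defined scan could look like, watching a control variable for a defined state change and then logging and acting on it (illustrative only; the TRIUMF package's actual interface is not shown in the abstract):

    // Illustrative sketch only (not the TRIUMF code): a "scan" that watches a control
    // variable for a defined state change, writes to a log, and runs a user action.
    #include <functional>
    #include <iostream>
    #include <string>

    struct Scan {
        std::string name;
        std::function<bool(double)> changed;    // defines the state change of interest
        std::function<void(double)> action;     // user-specified action
        bool triggered = false;

        void evaluate(double value)
        {
            bool now = changed(value);
            if (now && !triggered) {            // act once per state change
                std::cout << "[scan log] " << name << " state change, value=" << value << '\n';
                action(value);
            }
            triggered = now;
        }
    };

    int main()
    {
        Scan vaultDoor{"vault_door",
                       [](double v) { return v > 0.5; },                      // door open
                       [](double)   { std::cout << "notify operator\n"; }};
        vaultDoor.evaluate(0.0);   // closed: nothing logged
        vaultDoor.evaluate(1.0);   // open: logged and action taken
        vaultDoor.evaluate(1.0);   // still open: no repeat
        return 0;
    }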

Submitted by: Brenda Davison
Full Address: 4004 Wesbrook Mall Vancouver B. C. Canada V6T 2A3
Email Address: DAVISON@TRIUMF.CA
Fax Number: 604-222-1074
Keywords: Event handling, alarms, scans, monitoring


ID072: Handling CAMAC Interrupts in Alpha OpenVMS/PCI

K.S. Lee, S.G. Kadantsev, E. Klassen, M.M. Mouat, P.W. Wilmshurst

TRIUMF

Software for Alpha/OpenVMS systems has been developed to support CAMAC interrupts (LAMs) via the PCI bus. A number of devices in TRIUMF's Central Control System generate interrupts that are delivered via CAMAC systems. These interrupts arrive using previously existing CAMAC executive crates and system crate interfaces. Until this development, these interrupts were serviced only by VAX/OpenVMS computers using Qbus, but the tendency to replace VAXes with Alphas has required that this LAM handling software be developed. The initial requirements, hardware and software configuration, driver structure, and performance are described.

Submitted by: Sing Lee
Full Address: 4004 Wesbrook Mall Vancouver B. C. Canada V6T 2A3
Email Address: SING@TRIUMF.CA
Fax Number: 604-222-1074
Keywords: Interrupt, OpenVMS, Alpha, PCI, LAM


ID073: Status Report on the TRIUMF Central Control System

M.M. Mouat, B. Davison, S.G. Kadantsev, E. Klassen, K.S. Lee, J.E. Richards, T.M. Tateyama, P.W. Wilmshurst, P.J. Yogendran

TRIUMF

This paper presents the current status of the Central Control System (CCS) of the TRIUMF 500 MeV cyclotron. The original TRIUMF CCS employed Data General Nova computers, and for more than 20 years Nova CPUs were at the heart of the CCS. A modest upgrade project was begun in earnest in 1993 to replace these Novas and accomplish several other goals. The status of the CCS is described now that the Novas have been removed and the other goals have largely been met. The current hardware and software configurations are discussed and the experience gained during the upgrade is examined. Reliability, performance, error diagnosis, and the development environment are also described.

Submitted by: Mike Mouat
Full Address: 4004 Wesbrook Mall Vancouver B. C. Canada V6T 2A3
Email Address: MOUAT@TRIUMF.CA
Fax Number: 604-222-1074
Keywords: Status Report, Controls, TRIUMF, Upgrade


ID074: Human Computer Interface for the Computerised Control of Electron-Cyclotron Resonance Ion Source (ECRIS) at Calcutta

Tapas Samanta, C. D. Datta and D. Sarkar

Variable Energy Cyclotron Centre, Department of Atomic Energy, 1/AF, Bidhan Nagar, Calcutta 700 064, India

The Variable Energy Cyclotron Centre (VECC) at Calcutta has developed a compact room-temperature Electron Cyclotron Resonance Ion Source (ECRIS) [1] for a variety of heavy ion beams, viz. N5+, O7+, Ne7+, etc. A PC-based control system for the ECR ion source is under development. This paper describes the Human-Computer Interface (HCI) of the control system. A user-friendly HCI is a vital requirement of the control system of any modern and sophisticated experimental physics setup. An ECR source of this kind is quite complicated, consisting of no fewer than 12 pumps, 11 valves, 20 highly stable power supplies of various specifications, 8 gauges and other equipment. Some of the equipment controls are on-off, some are continuous in nature, and so on. The HCI has been developed in Visual Basic 3.0 on a 100 MHz Pentium PC. The starting form displays the overall ECRIS system at a glance. A detailed mimic of each of the subsystems, with all necessary process parameters, is displayed for monitoring and control purposes at the click of a mouse button. The interlocking parameters of the various subsystems have been implemented. Special security arrangements prevent unauthorised entry of operators into the control system. An online alarm system has been incorporated to alert the operator well before a possible breakdown.

Submitted by: Tapas Samanta
Full address: Variable Energy Cyclotron Centre, Department of Atomic Energy, 1/AF, Bidhan Nagar, Calcutta 700 064, India
E-mail:


ID075: CMLOG: A Common Message Logging System

Jie Chen, Walt Akers and William Watson III

Control Software Group, Thomas Jefferson National Accelerator Facility, 12000 Jefferson Avenue, Newport News, VA 23606, U.S.A.

Danjin Wu

Integrated Technologies International, Inc., 42 East Rahn Road, Kettering, OH 45429, U.S.A.

The Common Message Logging (CMLOG) System is a distributed and object-oriented system that not only allows any application or system to log data of any type into a centralized database but also lets applications view incoming messages in real-time or retrieve stored data from the database according to selection rules. It serves as an error reporting system for the CDEV package or a general logging system for any control system. It consists of a UNIX server that handles incoming logging or searching messages, a Motif browser that can view incoming messages in real-time or display stored data in the database, a client daemon that buffers and sends all logging messages on a host to the server, a client library that is used by applications to log messages, and a browser library that may be used by applications to search messages previously stored in the database. The CMLOG server is a concurrent server running on a UNIX host. It has been implemented in C++ using multi-threading or multi-processing where applicable to improve network responsiveness and concurrency. It supports the notion of a callback mechanism to applications, and has a set of parameters to which one may assign values to suit different sites. To allow applications or systems to log data of any type, a dedicated C++ data type (cdevData) that has multiple tagged fields of any type has been used as a vehicle to transport messages among all CMLOG parties. At run time a server thread or process time-stamps and writes all incoming logging messages into the database, meanwhile other threads or processes can search the database according to selection rules from applications and send back results to the applications. The database contains multiple UNIX files that contain time stamped logging messages in binary form. Each file is indexed by time and is organized in a B+ tree structure. All logging messages from applications on a host are sent to a CMLOG client daemon that buffers and sends the logging messages to the server. Each logging client is assigned a unique number by the client daemon. The combination of the client daemon and logging clients on a host reduces the number of connections to the server and improves the scalability of the CMLOG system. The Motif browser can be used to view incoming messages in real-time or to fetch messages logged inside the database. In comparison to logging clients, a browser is connected to the server directly. This paper will present the design and implementation of the CMLOG system and several object-oriented design patterns used in the network programming, and demonstrate that the object-oriented technology can be easily applied to a distributed and concurrent programming environment without sacrificing efficiency. Finally, CMLOG has been compiled and tested on Solaris, HP-UX and Linux, and in addition the client APIs have been tested on targets running vxWorks. A Java browser will be added soon.
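
A hypothetical sketch of the client-side usage pattern, with a tagged-field message standing in for cdevData, is shown below; the real CMLOG client API is not reproduced here and all names are invented:

    // Hypothetical sketch of the usage pattern only; the real CMLOG client API and
    // cdevData calls are not reproduced here, and all names below are invented.
    #include <iostream>
    #include <map>
    #include <string>

    // Stand-in for a tagged-field container such as cdevData: each message carries
    // arbitrary named fields.
    using TaggedData = std::map<std::string, std::string>;

    class LogClient {
    public:
        // In the real system this would hand the message to the per-host client
        // daemon, which buffers and forwards it to the CMLOG server.
        void log(const TaggedData& msg)
        {
            std::cout << "LOG";
            for (const auto& [tag, value] : msg) std::cout << ' ' << tag << '=' << value;
            std::cout << '\n';
        }
    };

    int main()
    {
        LogClient client;
        client.log({{"host", "ioc3"}, {"severity", "warning"},
                    {"text", "RF cavity 2 reflected power high"}});
        return 0;
    }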

Names: Jie Chen and William Watson III
Address: Thomas Jefferson National Accelerator Facility, MS 12H, 12000 Jefferson Avenue, Newport News, VA 23606, U.S.A.
E-mail: chen@cebaf.gov watson@cebaf.gov
FAX: (757)269-5800
Keywords: CDEV, Threads, Object-oriented, Distributed


ID076: Meta Object Facilities and their Role in Distributed Information Management Systems

Nigel L. Baker(1) and Jean-Marie Le Goff(2)

The Centre for Complex Cooperative Systems, Faculty of Computer Studies & Mathematics, University of the West of England, Coldharbour Lane, Frenchay, Bristol BS16 1QY, United Kingdom

The rapid convergence of the communications and information systems industries has motivated the movement towards the decentralization of computer systems and computer applications. Organizations, people and information are naturally distributed, and as such the global market demands, and users expect, distributed computer systems that are integrated and interoperable. It is further expected that these systems be adaptable and available, and that they can evolve to meet new demands. The fundamental objective of any distributed application or information system is for the separate components to co-operate and co-ordinate computation in order to do useful work and/or achieve some overall common system goal. However, as systems integrate and grow, so does complexity, and with it the difficulty of finding and managing information. Although some progress has been made using distributed object-based technology towards reducing the complexity of systems interaction and making systems more interoperable, there are still many issues to be resolved. One aspect of interoperability is that systems to be integrated should have common ways of handling such things as events, security, systems management, transactions, faults and location queries. Software components must be able to plug into these common distributed services and facilities. Another critical aspect of interoperability, and a comparatively new area of development, concerns ways of making components and systems self-describing. That is, we want our systems to be able to retain knowledge about their dynamic structure (meta-data), and for this knowledge to be available to the rest of the infrastructure through the way that the system is plugged together. This is absolutely critical and necessary for the next generation of distributed systems to be able to cope with the size and complexity explosion. Parts manufactured for LHC experiments will be in operational use for many years, well into the next millennium; thus a huge quantity of data will accumulate which must be easily accessible to future projects. As the HEP experiment production process evolves, this data and the relationships between different aspects of the data must be permanently recorded. HEP groups, projects and systems will in the future require flexible ways to find, access and share this production data. The actual information required will depend very much on the viewpoint and the role of the user in the organization. The production system, for example, must provide support for the "as built" view of the manufacturing process and production data, whereas future HEP user groups and systems may well require a calibration, maintenance or systems management viewpoint. For a system to support a particular viewpoint requires a corresponding underlying object schema normalized to the viewpoint of the user or system attempting to navigate and find relevant data. The underlying system framework should be capable of supporting self-navigation and self-indexing services for each viewpoint. Also, over time, new systems will need to inter-operate with existing systems in unforeseeable ways. In order to inter-operate in an environment of future systems and users, and in order to adapt to reconfigurations and versions of itself, these systems must be self-describing. Self-describing information is termed meta-information or meta-data. In general, meta-information requires the descriptive power of types, containment and relationships.
What is required for universal interoperability and universally self-describing systems is a meta schema to describe all types of meta-information. The information in the meta schema therefore describes information that describes information, in other words meta-meta-information. This meta schema facility must have the expressive power to describe such diverse models as object-oriented analysis and design diagrams, SQL database schemas, IDL types, product data management models and workflow facility models. This paper discusses the issues surrounding interoperability and self-describing information systems. Work in progress at the OMG on the Meta Object Facility (MOF), the Object Oriented Analysis and Design (OA&D) Task Force meta-model, the Workflow Management meta-model and the Manufacturing SIG's Product Data Management Enablers is also presented. The paper concludes with a discussion of how this work might impact the design of HEP engineering data management and production management systems in the future.

Authors: Nigel L. Baker(1) and Jean-Marie Le Goff(2)
Affiliations:
(1)The Centre for Complex Cooperative Systems Faculty of Computer Studies & Mathematics, University of the West of England, Coldharbour Lane, Frenchay, Bristol, United Kingdom BS16 1QY
Phone: +44 117 965 6261
Email: Nigel.Baker@csm.uwe.ac.uk
(2) Electronic and Computing for Physics Division, CERN, Geneva, Switzerland
Phone: +41 22 767 6559
Email: Jean-Marie.Le.Goff@cern.ch
Keywords: CORBA, Product Data Management (PDM), OMG, Meta-Object Facility (MOF)


ID077: Performance Evaluation of EPICS on PowerPC

J.Odagiri, A.Akiyama, N.Yamamoto, T.Katoh

KEK, High Energy Accelerator Research Organization

EPICS core software has been ported to a VME single-board computer (IOC) based on a PowerPC microprocessor. We have studied (1) the performance of the EPICS core software, (2) Channel Access performance and (3) the interrupt response of the PowerPC-based CPU board. The performance of the EPICS core software was measured using the standard benchmark database developed by APS/ANL; the benchmark measures the CPU load of the IOC arising from database scanning. The transaction processing time for putting and getting a value through Channel Access is studied as a performance indicator of Channel Access. The response time to an event notified by a bus interrupt on the VME backplane was measured to study the overall interrupt response of EPICS on the PowerPC board. The results of the measurements are compared with the results on MC68060/MC68040-based CPU boards.
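
A rough sketch of the kind of Channel Access put/get timing loop such a measurement implies is given below; the exact client calls and headers vary between EPICS releases, and the record name is invented:

    // Rough sketch of a Channel Access put/get timing loop of the kind the
    // measurement implies; exact calls differ between EPICS releases, and the
    // record name below is invented.
    #include <cadef.h>
    #include <cstdio>
    #include <ctime>

    int main()
    {
        const int N = 1000;
        chid channel;
        double value = 0.0;

        SEVCHK(ca_task_initialize(), "ca_task_initialize");
        SEVCHK(ca_search("TEST:AO1", &channel), "ca_search");       // hypothetical record
        SEVCHK(ca_pend_io(5.0), "connect");

        std::clock_t t0 = std::clock();
        for (int i = 0; i < N; ++i) {
            value = i;
            SEVCHK(ca_put(DBR_DOUBLE, channel, &value), "ca_put");
            SEVCHK(ca_get(DBR_DOUBLE, channel, &value), "ca_get");
            SEVCHK(ca_pend_io(1.0), "round trip");                  // force the transaction
        }
        double perOp = double(std::clock() - t0) / CLOCKS_PER_SEC / N;
        std::printf("average put+get transaction: %.3f ms\n", perOp * 1e3);
        return 0;
    }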

Submitted by: Jun-ichi Odagiri
Full address: Accelerator Lab, KEK, High Energy Accelerator Research Organization, 1-1 Oho, Tsukuba, Ibaraki 305, JAPAN
E-mail address: odagiri@kekvax.kek.jp
Fax number: +81-298-64-0321
Keywords: Control System, EPICS, VME, IOC, PowerPC


ID078: KEKB Power Supply Interface Controller Module

A. Akiyama, T. Nakamura, M. Yoshida, T. Katoh and T. Kubo

KEK

There are more than 2,600 magnet power supplies for the KEKB storage rings. An important problem was how to control such a large number of power supplies distributed around the rings. It is sometimes required to change the magnetic fields of several magnets synchronously with each other, and the cost of the interfaces was also a major concern. After discussion, we decided to develop an interface controller module that is mounted inside the power supply controller or the power supply itself. It has a 16-bit microprocessor, an ARCnet interface, a trigger pulse input interface, and a general-purpose parallel interface to the power supply. The microprocessor receives commands from the control system via ARCnet, analyzes them and sends signals to the power supply. For simplicity it has no DAC or ADC; the DAC is located in the power supply controller, and the output currents are monitored by an analog scanning sub-system. We have also developed a VME-bus ARCnet driver module with four channels of ARCnet interface. On each ARCnet segment there will be at most twenty power supply interface modules.

Submitted by: Tadahiko Katoh
Full address: Accelerator Laboratory, KEK, High Energy Accelerator Research Organization 1-1 Oho, Tsukuba 305, JAPAN
E-mail address: Tadahiko.KATOH@kek.jp
Fax number: +81-298-64-0321
Keywords: Power Supply, Network, VME, Field-bus, ARCnet


ID079: Present Status of the KEKB Control System

T. Katoh, A. Akiyama, T. Kawamoto, I. Komada, K. Kudo, T. Naito, T. Nakamura, J. Odagiri and N. Yamamoto

KEK

M. Kaji,

Mitsubishi Electric Co. Ltd.

S. Yoshida

Kanto Information Service

Construction of the KEKB storage rings is now in its last phase, and construction of the control system for the two rings is under way. The main server workstation, a MELCOM ME-RK460 with two CPUs, is connected to an FDDI switch (DEC GIGAswitch). From the FDDI switch, 26 optical fibre links run to 26 sub-control rooms around the rings and the injector linac. At present, 15 VME-bus based Input/Output Computers (IOCs) are installed in the local control rooms; about 60 IOCs will be added to the system this year. The software system is based on the EPICS toolkit distributed through the EPICS Collaboration. A relational database system, ORACLE, is used to store all the data about the rings. By using this database, EPICS channel database records can be generated automatically.
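
As a sketch of the record-generation idea, the fragment below turns rows from a relational table into EPICS record definitions; the table layout, record names and fields are invented for illustration and are not the KEKB schema:

    // Sketch of the idea only: turning rows from a relational database into EPICS
    // record definitions. The table layout and record fields here are invented.
    #include <iostream>
    #include <string>
    #include <vector>

    struct MagnetRow { std::string name; double maxCurrent; };   // stand-in for an ORACLE row

    void emitRecord(const MagnetRow& row, std::ostream& out)
    {
        out << "record(ao, \"" << row.name << ":CUR\") {\n"
            << "    field(DRVH, \"" << row.maxCurrent << "\")\n"
            << "    field(DRVL, \"0\")\n"
            << "    field(EGU,  \"A\")\n"
            << "}\n";
    }

    int main()
    {
        std::vector<MagnetRow> rows = {{"BM_H1", 500.0}, {"QF_R2", 120.0}};   // invented rows
        for (const auto& r : rows) emitRecord(r, std::cout);
        return 0;
    }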

Submitted by: Tadahiko Katoh
Full address: Accelerator Laboratory, KEK, High Energy Accelerator Research Organization 1-1 Oho, Tsukuba 305, JAPAN
E-mail address: Tadahiko.KATOH@kek.jp
Fax number: +81-298-64-0321
Keywords: EPICS, Control System, Network, VME


ID080: Software for Control of the Betatron Parameters of the DESY HERA Proton Ring

S. Herb, C. Luttge (DESY, Germany)

L. Kopylov, M. Mikheev, S. Merker (IHEP, Russia)

Betatron parameters are critical for superconducting ring operation due to strong eddy current effects at the injection level. A set of applications has been developed to measure and control the betatron tune, chromaticity and coupling. A DSP card used for signal processing allows tune refresh rates of up to 8 Hz. For a realistic estimation of eddy current effects, data from on-line magnet measurements is used in the chromaticity calculation. The physical model and implementation aspects are described.
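
For reference, the standard textbook relations behind such tune and chromaticity control (not taken from the paper itself) are:

    \[
      \xi = \frac{\Delta Q}{\Delta p / p}, \qquad
      \Delta Q = \frac{1}{4\pi} \oint \beta(s)\, \Delta k(s)\, \mathrm{d}s ,
    \]

where Q is the betatron tune, \xi the chromaticity, \beta(s) the betatron function and \Delta k(s) a gradient perturbation such as that induced by eddy currents at injection.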

Submitted by: Mikhail Mikheev
Full address: Institute for High Energy Physics, Pobeda 1, Protvino, Moscow reg, Russia 142284
E-mail address: mms@oea.ihep.su
Fax number: +7 096 779 0811
Keywords: betatron parameters, superconducting collider, software