Publications
Selected Articles
Links to Computing and Software Engineering Journals
Links to Specification, Modeling and Analysis Resources


Last modified by Sheldon January 27, 2009

Jump to journal articles below:
64, 58, 54, 53, 51, 48, 46, 41, 40, 35, 30, 22, 21, 12, 02.
Bibliometrics:
IEEE Xplore
ACM Digital Library
DBLP Computer Science Bibliography
CiteSeerX

ORNL Reports Repository:
Publications Tracking System PTS 2006+

Comprehensive Publications and Presentations Registry 2003-2005
PTS internal link (requires login)

75 Methodology for Evaluating Security Controls Based on Key Performance Indicators and Stakeholder Mission

Sheldon, F.T., Abercrombie, R.K., and Mili, A., IEEE Proc. Hawaii Int'l Conf. on System Sciences (HICSS-42 CSIIRM), Waikoloa, Big Island, Hawaii, Jan. 5-8, 2009.

ABSTRACT (pdf of full paper): Information security continues to evolve in response to disruptive changes with a persistent focus on information-centric controls and a healthy debate about balancing endpoint and network protection, with a goal of improved enterprise/business risk management. Economic uncertainty, intensively collaborative styles of work, virtualization, increased outsourcing and ongoing compliance pressures require careful consideration and adaptation. This paper proposes a Cyberspace Security Econometrics System (CSES) that provides a measure (i.e., a quantitative indication) of reliability, performance and/or safety of a system that accounts for the criticality of each requirement as a function of one or more stakeholders’ interests in that requirement. For a given stakeholder, CSES reflects the variance that may exist among the stakes she/he attaches to meeting each requirement. This paper introduces the basis, objectives and capabilities for the CSES including inputs/outputs as well as the structural and mathematical underpinnings.
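
A minimal numerical sketch (in Python) of the general shape of such a stake-weighted measure: a stakes matrix (stakeholders × requirements) combined with per-requirement violation probabilities yields an expected cost per stakeholder. All names and numbers below are illustrative assumptions, not values or notation taken from the paper.

    # Illustrative sketch only: a stake-weighted measure in the spirit of CSES.
    # Stakes and probabilities are hypothetical placeholders.
    stakes = {  # stakes[stakeholder][requirement] = assumed cost ($/h) if the requirement is violated
        "operator": {"confidentiality": 100.0, "availability": 500.0},
        "customer": {"confidentiality": 800.0, "availability": 50.0},
    }
    violation_prob = {"confidentiality": 0.001, "availability": 0.01}  # assumed per-hour probabilities

    def mean_failure_cost(stakeholder):
        """Expected loss per unit time for one stakeholder (illustrative)."""
        return sum(stake * violation_prob[req]
                   for req, stake in stakes[stakeholder].items())

    for s in stakes:
        print(s, round(mean_failure_cost(s), 3))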

74 Challenging the Mean Time to Failure: Measuring Dependability as a Mean Failure Cost

Mili, A. and Sheldon, F.T., IEEE Proc. Hawaii Int'l Conf. on System Sciences (HICSS-42 CSIIRM), Waikoloa, Big Island, Hawaii, Jan. 5-8, 2009.

ABSTRACT (pdf of full paper): As a measure of system reliability, the mean time to failure falls short on many fronts: it ignores the variance in stakes among stakeholders; it fails to recognize the structure of complex specifications as the aggregate of overlapping requirements; it fails to recognize that different components of the specification carry different stakes, even for the same stakeholder; it fails to recognize that V&V actions have different impacts with respect to the different components of the specification. Similar metrics of security, such as MTTD (Mean Time to Detection) and MTTE (Mean Time to Exploitation) suffer from the same shortcomings. In this paper we advocate a measure of dependability that acknowledges the aggregate structure of complex system specifications, and takes into account variations by stakeholder, by specification components, and by V&V impact.
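
To make the contrast with the mean time to failure concrete, the idea can be written compactly; the notation below is a hedged paraphrase of the abstract, not a formula quoted from the paper. Rather than a single expected time to any failure, dependability is expressed as an expected cost per stakeholder i:

    \[
      \mathrm{MFC}_i \;=\; \sum_{j} \mathrm{ST}(i,j)\;\Pr[\text{requirement } j \text{ is violated}],
    \]

where ST(i,j) denotes the stake that stakeholder i attaches to requirement j, so the same system yields different dependability figures for different stakeholders and for different components of the specification.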

73 Error Reduction in Portable, Low-Speed Weigh-In-Motion (WIM)

L. M. Hively, R. K. Abercrombie, F.T. Sheldon and M. B. Scudiere, ORNL/TM-2008/004, November 2008.

ABSTRACT (Full report pdf available soon): Federal and State agencies need certifiable vehicle weights for various applications, such as highway inspections, border security, check points, and port entries. Oak Ridge National Laboratory (ORNL) weigh-in-motion (WIM) technology was previously unable to provide certifiable weights, due to natural oscillations, such as vehicle bouncing and rocking. Recent ORNL work demonstrated a novel filter to remove these oscillations. This report details the filtering algorithms that enable certifiable weight measurements (error < 0.1%) for a higher traffic volume with less effort (elimination of redundant weighing). The system automatically obtains data from a vehicle that is driven slowly (≤5 MPH) over multiple weigh-pads on smooth asphalt or concrete surfaces. Keywords: Portable Weigh-in-Motion, Vehicle Oscillation Error Characterization, Timeserial Error Filtration, WIM Data Management Methodology
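
The ORNL filter itself is not described in this abstract, but the underlying idea of removing bounce and rocking oscillations from a slow roll-over weighing can be illustrated with a simple stand-in in Python: average the force samples from a weigh pad over whole oscillation periods so the sinusoidal component cancels. All numbers and the filter choice below are assumptions for illustration, not the ORNL algorithm.

    import math

    # Hypothetical stand-in (NOT the ORNL filter): an axle's true weight plus a bounce
    # oscillation, sampled on one weigh pad while the vehicle rolls over it slowly.
    true_weight = 10000.0                              # pounds (assumed)
    bounce_hz, amplitude, rate = 2.5, 300.0, 1000.0    # oscillation freq (Hz), amplitude (lb), sample rate (Hz)
    samples = [true_weight + amplitude * math.sin(2 * math.pi * bounce_hz * k / rate)
               for k in range(int(rate / bounce_hz) * 4)]    # four full bounce periods

    # Averaging over an integer number of oscillation periods cancels the sinusoid.
    estimate = sum(samples) / len(samples)
    print(abs(estimate - true_weight) / true_weight * 100, "% error")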

72 Synopsis of Evaluating Security Controls Based on Key Performance Indicators and Stakeholder Mission Value

Abercrombie, R.K., Sheldon, F.T. and Mili, A., Proceedings 11th IEEE High Assurance Systems Engineering Symposium, Nanjing, China, December 3-5, 2008.

ABSTRACT (pdf of full paper): Information security continues to evolve in response to disruptive changes with a persistent focus on information-centric controls and a healthy debate about balancing endpoint and network protection, with the goal of improved enterprise and business risk management. Economic uncertainty, intensively collaborative work styles, virtualization, increased outsourcing and ongoing compliance pressures require careful consideration and adaptation of a balanced approach. The Cyberspace Security Econometrics System (CSES) provides a measure of reliability, security and safety of a system that accounts for the criticality of each requirement as a function of one or more stakeholders’ interests in that requirement. For a given stakeholder, CSES reflects the variance that may exist among the stakes one attaches to meeting each requirement. This paper summarizes the basis, objectives and capabilities for the CSES including inputs/outputs as well as the structural underpinnings. Keywords: Cyber Security Metrics, Cybersecurity Investments, Dependability

71 Breakthrough Error Reduction in Portable, Low-Speed Weigh-in-Motion (Sub-0.1 percent error)

Abercrombie, R.K., Hively, L.M., Scudiere, M.B. and Sheldon, F.T., Proceedings International Conference on Heavy Vehicles Incorporating Heavy Vehicle Transport Technology (HVTT 10) and Weigh-In-Motion (ICWIM 5), Paris (J. Wiley), May 19-22, 2008.

ABSTRACT (pdf of full paper, poster bibliographic citation): We present breakthrough findings via significant modifications to the Weigh-in-Motion (WIM) Gen II approach, the so-called modified Gen II. The revisions enable slow-speed weight measurements at least as precise as in-ground static scales, which are certified to 0.1% error. Concomitant software and hardware revisions reflect a philosophical and practical change that enables an order-of-magnitude improvement to sub-0.1% error in low-speed weighing precision. This error reduction breakthrough is presented within the context of the complete host of commercial and governmental application rationale, including the flexibility to extend information and communication technology for future needs. Keywords: Portable Weigh-in-Motion, Vehicle Oscillation Error Characterization, Timeserial Error Filtration, WIM Data Management Methodology

70 Proceedings Fourth Annual 2008 Cyber Security and Information Intelligence Research Workshop

Edited by F.T. Sheldon, A. Krings, R. K. Abercrombie and A. Mili, Developing Strategies to Meet the Cyber Security and Information Intelligence Challenges Ahead, May 12-14, 2008.

ABSTRACT (ACM Digital Library): As our dependence on the cyber infrastructure grows ever larger, more complex and more distributed, the systems that compose it become more prone to failures and/or exploitation. Intelligence is information valued for its currency and relevance rather than its detail or accuracy. Information explosion describes the pervasive abundance of (public/private) information and the effects of such. Gathering, analyzing, and making use of information constitutes a business-/sociopolitical-/military-intelligence gathering activity and ultimately poses significant advantages and liabilities to the survivability of "our" society. The combination of increased vulnerability, increased stakes and increased threats makes cyber security and information intelligence (CSII) one of the most important emerging challenges in the evolution of modern cyberspace "mechanization." The goal of the workshop was to challenge, establish and debate a far-reaching agenda that broadly and comprehensively outlined a strategy for cyber security and information intelligence that is founded on sound principles and technologies. We aimed to discuss novel theoretical and applied research focused on different aspects of software security/dependability, as software is at the heart of the cyber infrastructure. The workshop scope covered a wide range of methodologies, techniques, and tools to (1) assure, measure, estimate and predict software security/dependability and (2) analyze and evaluate the impact of such applications on software security/dependability. We encouraged researchers and practitioners from a wide swath of professional areas (not only the programmers, designers, testers, and methodologists but also the users and risk managers) to participate. In this way, we can all understand the needs, stakes and context of the ever-evolving cyber world. We looked to software engineering to help provide us the products and methods to accomplish these goals, including better precision in understanding existing and emerging vulnerabilities and threats (e.g., the insider threat): (1) advances in insider threat detection, deterrence, mitigation and elimination; (2) game-changing ventures, innovations and conundrums (e.g., quantum computing, QKD, phishing, malware markets, botnets/DOS); (3) assuring security, survivability and dependability of our critical infrastructures; (4) assuring the availability of time-critical, scalably secure systems, information provenance and security with privacy; (5) observable/measurable/certifiable security claims, rather than hypothesized causes; (6) methods that enable us to specify security requirements, formulate security claims, and certify security properties; (7) assurance against known and unknown (though perhaps pre-modeled) threats; and (8) mission fulfillment, whether or not security violations have taken place (rather than chasing all violations indiscriminately).

69 Oak Ridge National Laboratory's Experiences with Multiple Uses of Weigh-In-Motion (WIM) Data

R. K. Abercrombie, F. T. Sheldon, and R. M. Walker, Proceedings of North American Travel Monitoring Exhibition & Conference (2008 NATMEC), Washington, D.C., August 6-8, 2008.

ABSTRACT (pdf of presentation): The Oak Ridge National Laboratory (ORNL) involvement in Weigh-in-Motion (WIM) research with both government agencies and private companies dates back to 1989. The discussion here will focus on the US Army's current need for an automated WIM system to weigh and determine the center-of-balance for military wheeled vehicles and cargo, and on the expanded uses of WIM data with Federal Agencies and State Safety and Enforcement agencies. ORNL is addressing not only configuration and data management issues as they relate to multiple uses of WIM data, but also the dissemination of this information as it relates to the collection, management, and use of monitored traffic data.

68 Breakthrough Error Reduction in Portable, Low-Speed Weigh-In-Motion (Sub-0.1 Percent Error)

Abercrombie, R.K., Hively, L.M., Scudiere, M.B. and Sheldon, F.T., Proceedings International Conference on Heavy Vehicles Incorporating Heavy Vehicle Transport Technology (HVTT 10) and Weigh-In-Motion (ICWIM 5), Paris (J. Wiley), May 19-22, 2008.

ABSTRACT (pdf of full paper, citation): We present breakthrough findings via significant modifications to the Weigh-in-Motion (WIM) Gen II approach, the so-called modified Gen II. The revisions enable slow-speed weight measurements at least as precise as in-ground static scales, which are certified to 0.1% error. Concomitant software and hardware revisions reflect a philosophical and practical change that enables an order-of-magnitude improvement to sub-0.1% error in low-speed weighing precision. This error reduction breakthrough is presented within the context of the complete host of commercial and governmental application rationale, including the flexibility to extend information and communication technology for future needs. Keywords: Portable Weigh-in-Motion, Vehicle Oscillation Error Characterization, Timeserial Error Filtration, WIM Data Management Methodology

67 Multi-Modal Integrated Safety, Security & Environmental Program Strategy

Walker, R.M., Omitaomu, O.A., Ganguly, A.R., Abercrombie, R.K. and Sheldon, F.T., Proceedings 87th Transportation Research Board Annual Meeting, Washington, D.C., Jan. 13-17, 2008.

ABSTRACT (pdf of full paper, bibliographic citation): This paper describes an approach to assessing and protecting the surface transportation infrastructure from a network science viewpoint. We address transportation security from a human behavior-dynamics perspective under both normal and emergency conditions for the purpose of measuring, managing and mitigating risks. The key factor for the planning and design of a robust transportation network solution is to ensure accountability for safety, security and environmental risks. Keywords: Multi-Modal Integrated Safety Security Environmental Transportation Network

66 Tracking and Monitoring of Radioactive Materials in the Commercial Hazardous Materials Supply Chain

Walker, R.M., Kopsick, D.A., Warren, T.A., Abercrombie, R.K., Sheldon, F.T., Hill, D.E., Gross, I.G. and Smith, C.M., Proc. 15th Int'l Symposium on the Packaging and Transportation of Radioactive Materials (PATRAM 2007), Miami, Oct. 20-26, 2007.

ABSTRACT (pdf of full paper, poster, bibliographic citation): One of the main components of the Environmental Protection Agency's (EPA) Clean Materials Program is to prevent the loss of radioactive materials through the use of tracking technologies. If a source is inadvertently lost or purposely abandoned or stolen, it is critical that the source be recovered before harm to the public or the environment occurs. Radio frequency identification (RFID) tagging on radioactive sources is a technology that can be operated in the active or passive mode, has a variety of frequencies available allowing for flexibility in use, is able to transmit detailed data and is discreet. The purpose of the joint DOE and EPA Radiological Source Tracking and Monitoring (RadSTraM) project is to evaluate the viability, effectiveness and scalability of RFID technology under a variety of transportation scenarios. The goal of Phase II was to continue testing integrated RFID tag systems from various vendors for feasibility in tracking radioactive sealed sources, which included the following performance objectives: 1. Validate the performance of RFID intelligent systems to monitor express air shipments of medical radioisotopes in the nationwide supply chain, 2. Quantify the reliability of these tracking systems with regard to probability of tag detection and operational reliability, 3. Determine if the implementation of these systems improves manpower effectiveness, and 4. Demonstrate that RFID tracking and monitoring of radioactive materials is ready for large-scale deployment at the national level. Keywords: Tracking Monitoring Radioactive Materials Commerce Commercial Hazardous Materials HAZMAT Supply Chain

65 Authentication Protocol using Quantum Superposition States

Yoshito Kanamori, Seong-Moo Yoo, Don A. Gregory and Frederick T. Sheldon, Int'l Journal of Network Security (accepted June 2007).

ABSTRACT (pdf of full paper): When it became known that quantum computers could break the RSA (named for its creators – Rivest, Shamir, and Adleman) encryption algorithm within polynomial time, quantum cryptography began to be actively studied. Other classical cryptographic algorithms are only secure when malicious users do not have sufficient computational power to break security within a practical amount of time. Recently, many quantum authentication protocols sharing quantum entangled particles between communicators have been proposed, providing unconditional security. An issue caused by sharing quantum entangled particles is that it may not be simple to apply these protocols to authenticate a specific user in a group of many users. An authentication protocol using quantum superposition states instead of quantum entangled particles is proposed. The random number shared between a sender and a receiver can be used for classical encryption after the authentication has succeeded. The proposed protocol can be implemented with the current technologies we introduce in this paper. Keywords: Authentication, Encryption, Photon, Polarization, Quantum cryptography, Superposition states
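
The protocol itself is not reproduced here, but the physical resource it relies on, a single photon prepared in a polarization superposition and measured in a chosen basis, behaves statistically in a way that is easy to simulate classically. The Python sketch below illustrates that behavior under assumed angles; it is not the paper's protocol.

    import math, random

    def measure(photon_deg, basis_deg):
        """Single-photon polarization measurement: the photon 'passes' (outcome 1) with
        probability cos^2 of the angle between its polarization and the analyzer basis."""
        diff = math.radians(photon_deg - basis_deg)
        return 1 if random.random() < math.cos(diff) ** 2 else 0

    # A photon in the superposition (|0> + |1>)/sqrt(2), i.e. polarized at 45 degrees,
    # gives a deterministic outcome in the diagonal basis but a random one in the rectilinear basis.
    print(sum(measure(45, 45) for _ in range(1000)))   # ~1000: always passes
    print(sum(measure(45, 0) for _ in range(1000)))    # ~500: unpredictable outcomes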

64 Prototype Weigh-In-Motion Performance

R. K. Abercrombie, D. L. Beshears, L. M. Hively, M. B. Scudiere, F. T. Sheldon, J. L. Schmidhammer, J. Vanvactor, ORNL/TM-2007/039 (ORNL/TM-2005/164: revision update from June 2005), October 2006

ABSTRACT (pdf of full report, Bibliographic citation): This report details the results of weight measurements performed in May 2005 at two sites using different types of vehicles at each site. In addition to the weight measurements, the testing enabled refinements to the test methodology and facilitated an assessment of the influence of vehicle speed on the dynamic-mode measurements. The initial test at the National Transportation Research Center in Knoxville, TN, involved measurements of passenger and light-duty commercial vehicles. A subsequent test at the Arrival/Departure Airfield Control Group (A/DACG) facility in Ft. Bragg, NC, involved military vehicles with gross weights between 3,000 and 75,000 pounds (1,356 to 33,900 kilograms) with a 20,000-pound (9,040 kilograms) limit per axle. For each vehicle, four or more separate measurements were done using each weighing mode. WIM dynamic, WIM stop-and-go, and static-mode scale measurements were compared for total vehicle weight and the weight of the individual axles. We made WIM dynamic mode measurements with three assemblages of weight-transducer pads to assess the performance with varying numbers (2, 4, and 6) of weigh pads. Percent error in the WIM dynamic mode was 0.51%, 0.37%, and 0.37% for total vehicle weight and 0.77%, 0.50%, and 0.47% for single-axle weight for the two-, four-, and six-pad systems, respectively. Errors in the WIM stop-and-go mode were 0.55% for total vehicle weight and 0.62% for single-axle weights. In-ground scales weighed these vehicles with an error of 0.04% (within Army specifications) for total vehicle weight, and an error of 0.86% for single-axle weight. These results show that (1) the WIM error in single-axle weight was less than that obtained from in-ground static scales; (2) the WIM system eliminates time-consuming manual procedures, human errors, and safety concerns; and (3) measurement error for the WIM prototype was less than 1% (within Army requirements for this project). All the tests were performed on smooth, dry, level, concrete surfaces. Tests under non-ideal surface conditions are needed (e.g., rough but level, sun-baked asphalt, wet pavement).

63 Oak Ridge National Laboratory's (ORNL) Weigh-In-Motion Configuration and Data Management Activities

R. K. Abercrombie, F. T. Sheldon, and R. G. Schlicher, Improving Traffic Monitoring Data for Transportation Decision-Making, Proc. North American Travel Monitoring Exhibition and Conference (NATMEC), Minneapolis, MN, June 7, 2006 (with addendum update of data collected Oct. 3-4, 2006)

ABSTRACT (presentation): This presentation covers the WIM Time and Motion Study, the Generation II vision and conceptual overview, system architecture, process flow and (dis)assembly, as well as results from test and evaluation studies conducted at Ft. Lewis, Ft. Eustis and Ft. Bragg.

62 Proceedings Third Annual 2007 Cyber Security and Information Infrastructure Research Workshop

Edited by F.T. Sheldon, Seong-Moo Yoo, A. Mili and A. Krings, Towards Comprehensive Strategies that Meet the Cyber Security Challenges of the 21st Century, published at Lulu.com by Oak Ridge National Laboratory, May 14-15, 2007.

ABSTRACT (eBook published at lulu.com): The workshop's goal was to challenge, establish and debate a far-reaching discussion that broadly and comprehensively outlines a strategy for cyber security founded on sound technologies for: better understanding of existing and emerging threats; advances in insider threat detection, deterrence, mitigation and threat elimination; ensuring security, survivability and dependability of critical infrastructures; guaranteeing availability of time-critical, scalably secure systems; observable, measurable, certifiable security effects, rather than hypothesized causes; quantitative metrics that enable us to specify security requirements, formulate security claims, and certify security properties; solutions that provide a measure of assurance against known and unknown (though perhaps pre-modeled) threats; and mission fulfillment, whether or not security violations have taken place, rather than mitigating all violations indiscriminately.

61 A Methodology to Evaluate Agent Oriented Software Engineering Techniques

Chia-En Lin, K.M. Kavi, F.T. Sheldon and R.K. Abercrombie, IEEE Proc. HICSS-40, Big Island, HI (Software Agents and Semantic Web Technologies Minitrack [nominated best paper]), Jan. 3-6, 2007.

ABSTRACT (pdf of full paper | presentation | paul-lin-dissertation): In this paper, we explore the various applications of Agent-based systems categorized into different application domains. We describe what properties are necessary to form an Agent society with the express purpose of achieving system-wide goals in multi-agent systems (MAS). A baseline is developed herein to help us focus on the core Agent concepts throughout the comparative study and to investigate both the Object-Oriented and Agent-Oriented techniques that are available for constructing Agent-based systems. In each respect, we address the conceptual background associated with these methodologies and how available tools can be applied to specific domains.

60 Towards an Engineering Discipline of Computational Security

Mili, A., Vinokurov, A., Jilani, L.L., Sheldon, F.T. and Ayed, R.B., IEEE Proc. HICSS-40, Big Island, HI (Next Generation Software-Engineering Minitrack), Jan. 3-6, 2007.

ABSTRACT (pdf of full paper): George Boole ushered in the era of modern logic by arguing that logical reasoning does not fall in the realm of philosophy, as it was considered up to his time, but in the realm of mathematics. As such, logical propositions and logical arguments are modeled using algebraic structures. Likewise, we submit that security attributes must be modeled as formal mathematical propositions that are subject to mathematical analysis. In this paper, we approach this problem by attempting to model security attributes in a refinement-like framework that has traditionally been used to represent reliability and safety claims. Keywords: Computable security attributes, survivability, integrity, dependability, reliability, safety, security, verification, testing, fault tolerance.
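
For context, one common relational reading of refinement, stated here from memory as an assumption about the kind of framework the abstract refers to rather than a definition quoted from the paper: a specification R refines a specification R' when it is defined on at least as many inputs and is at least as constraining on the inputs R' covers,

    \[
      R \sqsupseteq R' \;\iff\; \mathrm{dom}(R') \subseteq \mathrm{dom}(R)
      \;\wedge\; \forall s \in \mathrm{dom}(R'):\; R(s) \subseteq R'(s).
    \]

Casting a security attribute as a specification then lets "the system satisfies security claim R'" be phrased as a refinement statement, which is, roughly, the kind of statement such a framework is built to support.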

59 Bank Transfer Over Quantum Channel with Digital Checks

Yoshito Kanamori, Seong-Moo Yoo and F.T. Sheldon, IEEE Proc. Global Telecom. Conf. (GlobeCom), San Francisco, CA, 27 Nov. – 1 Dec. 2006.

ABSTRACT (pdf of full paper) : In recent years, many quantum cryptographic schemes have been proposed.  However, it seems that there are many technical difficulties to realize them (except Quantum Key Distributions) as practical applications. In this paper, we propose a bank transfer  (i.e., Electronic Funds Transfer or EFT) system utilizing both classical and quantum cryptography to provide theoretically unbreakable security. This system can be realized using current technologies (e.g., linear polarizers and Faraday rotators) and requires no additional authentication and no key distribution scheme. However, a trusted third party must keep all member banks’ private keys for encryption, authentication and also for functions to generate classical digital signatures. Keywords: Digital signature, encryption, photon, polarization, quantum cryptography.

58 Qualitative and Quantitative Models of Redundancy

A. Mili, L. Wu, K. Pickard, F.T. Sheldon and A. Salem, Science of Computer Programming (ISSN: 0167-6423) Elsevier, New York (Submitted June 2006).

ABSTRACT: Redundancy is a system property that generally refers to duplication of state information or system function. While redundancy is usually investigated in the context of fault tolerance, one can argue that it is in fact an intrinsic feature of a system that can be analyzed on its own without reference to fault tolerance. Redundancy may arise by design, generally to support fault tolerance, or as a natural byproduct of design, and is usually unexploited. In this paper, we tentatively explore observable forms of redundancy, as well as mathematical models that capture them.
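
As one hedged illustration of what a quantitative model of state redundancy can look like (an entropy-based measure is a common choice; the formula and Python code below are assumptions for illustration, not necessarily the models developed in the paper): if a state is stored in W bits but carries only H bits of information, the fraction (W - H)/W of the representation is redundant.

    import math
    from collections import Counter

    def state_redundancy(observed_states, width_bits):
        """Illustrative entropy-based redundancy: the fraction of the representation
        width not accounted for by the empirical entropy of the observed states."""
        n = len(observed_states)
        entropy = -sum((c / n) * math.log2(c / n) for c in Counter(observed_states).values())
        return (width_bits - entropy) / width_bits

    # A 3-bit field that only ever takes two equally likely values carries 1 bit of
    # information, so 2/3 of its representation is redundant (and usable for error detection).
    print(state_redundancy([0b000, 0b111] * 500, width_bits=3))   # ~0.667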

57 Proceedings Second Annual 2006 Cyber Security and Information Infrastructure Research Workshop

Edited by Frederick Sheldon, Axel Krings, Seong-Moo Yoo, Ali Mili and Joseph Trien, Beyond the Maginot Line, published at Lulu.com by Oak Ridge National Laboratory, May 10-11, 2006

ABSTRACT (eBook published at lulu.com): Cyber Security: Beyond the Maginot Line. Recently the FBI reported that computer crime has skyrocketed, costing over $67B in 2005 and affecting 2.8M+ businesses and organizations. Attack sophistication is unprecedented, along with the availability of concomitant open-source tools. Private, academic, and public sectors invest significant resources in cyber security. Industry primarily performs cyber security research as an investment in future products and services. While the public sector also funds cyber security R&D, the majority of this activity focuses on the specific agency mission. Broad areas of cyber security remain neglected or underdeveloped. Consequently, this workshop endeavors to explore issues involving cyber security and related technologies toward strengthening such areas and enabling the development of new tools and methods for securing our information infrastructure's critical assets. We aim to assemble new ideas and proposals about robust models on which we can architect a secure cyberspace.

56 eCGE: A Multi-Platform Petri Net Editor

David Dugan (Major Advisor: F.T. Sheldon, defended Apr. 2005 [Prof. A. Andrews represented the WSU Graduate School])

ABSTRACT (pdf of full thesis | presentation | eCGE tool is available after signing the GNU public license): This thesis describes the design and application of the enhanced (CSPL) Petri Net Graphical Editor (eCGE) application. The CSPL (C-based Stochastic Petri net Language) was developed at Duke University by G. Ciardo and K. Trivedi for their tool called SPNP (Stochastic Petri Net Package). The eCGE project endeavored to provide some core features of a Stochastic Petri net modeling tool environment with a GUI for developing and visualizing Petri net models. The eCGE is implemented in Java for portability, with highly interactive layout and editing features designed to detect syntactic errors in a Petri net model on-the-fly. The eCGE contains a number of improvements over the original application, such as an improved design, which provides the ability to read and write a file in the CSPL format and features to help organize a model. The eCGE was originally conceived as an add-on to the CSPN tool (see Sheldon, F.T., Specification and Analysis of Stochastic Properties of Concurrent Systems Expressed Using CSP), which is a CSP to Stochastic Petri Net translator (specifically from CSP to CSPL) and was originated by K.M. Kavi and F.T. Sheldon (including various student contributors from the University of Colorado), but this task was not completed. The CSPN tool is also available (version 3.6 delivered in 1998 to NASA ARC) after signing the GNU public license.

55 Challenges in Computational Software Engineering

A. Mili and F.T. Sheldon, Next Generation Software Engineering, Co-located Workshop at IEEE HICSS-39, Kauai, HI, presented Jan. 4-7, 2006.

ABSTRACT (pdf of full paper): Broadly speaking, it is possible to characterize next generation software by the following features: size, complexity, distribution, heterogeneity, etc. In the face of such complexity, it is natural to turn to the tool of choice that scholars have always used to maintain intellectual control, viz. mathematics; yet paradoxically, mathematics has remained of limited use in dealing with software engineering in the large. Automated tools, built on computational models of software engineering, are required to help fill the wide gap between human capabilities and the daunting task of designing, analyzing, and evolving modern software systems [Heavner et al. 2006]. In this position paper, we briefly discuss some computational issues pertaining to this gap.

54 Measuring the Relations Among Class Diagrams to Assess Complexity

F.T. Sheldon and Hong Chung, Journal of Software Maintenance and Evolution: Research & Practice (John Wiley & Sons), 18:5, pp. 333-350, Sept/Oct 2006.

ABSTRACT (pdf of full paper | purchase): Complexity metrics for Object-Oriented systems are plentiful. Numerous studies have been undertaken to establish valid and meaningful measures of maintainability as they relate to the static structural characteristics of software. In general, these studies have lacked empirical validation of their meaning and/or have succeeded in evaluating only partial aspects of the system. In this study, we have determined, through limited empirical means, a practical and holistic view by analyzing and comparing the structural characteristics of UML class diagrams as those characteristics relate to and impact maintainability. The class diagram is composed of three kinds of relations: association, generalization and aggregation, which together make the overall structure difficult to understand. We combined these relations in a way that enables a comprehensive and valid measure of complexity. To validate our metric, we measured the level of understandability of the system by determining the time needed to reverse engineer the source code for a given class diagram, including the number of errors produced while creating the diagram, as one indicator of maintainability. The results as compared to other complexity metrics indicate our metric shows promise, especially if proven to be scalable. Keywords: perfective/corrective maintenance, object-oriented metrics, complexity metrics, and class diagram.
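
As a hedged sketch of the kind of combined measure the abstract describes, with illustrative weights and a deliberately simple aggregation (neither of which is the metric validated in the paper), a complexity score over a class diagram's relations might be computed as follows in Python:

    # Illustrative only: combine counts of the three UML relation kinds into one score.
    # The weights are hypothetical placeholders, not the coefficients from the paper.
    WEIGHTS = {"association": 1.0, "aggregation": 2.0, "generalization": 3.0}

    def diagram_complexity(relations):
        """relations: list of (kind, source_class, target_class) tuples from a class diagram."""
        return sum(WEIGHTS[kind] for kind, _, _ in relations)

    example = [("association", "Order", "Customer"),
               ("aggregation", "Order", "LineItem"),
               ("generalization", "ExpressOrder", "Order")]
    print(diagram_complexity(example))   # 6.0 under the assumed weights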

53 Modeling Security as a Dependability Attribute: A Refinement Based Approach

A. Mili, F.T. Sheldon, L.L. Jilani, A. Vinokurov, A. Thomasian and R.B. Ayed, Innovations in Systems and Software Engineering Journal (Springer-Verlag London Ltd), Vol. 2, No. 1, pp. 9-48, March 2006.

ABSTRACT (pdf of full paper): As distributed, networked computing systems become the dominant computing platform in a growing range of applications, they increase opportunities for security violations by opening heretofore unknown vulnerabilities. Also, as systems take on more and more critical functions, they increase the stakes of security by acting as custodians of assets that have great economic or social value. Finally, as perpetrators grow increasingly sophisticated, they increase the threats on system security. Combined, these premises place system security at the forefront of engineering concerns. In this paper, we introduce and discuss a refinement-based model for one dimension of system security, namely survivability.

52 On Quantum Authentication Protocols

Y. Kanamori, S-M Yoo, D.A. Gregory and F.T. Sheldon, Proc. IEEE GlobeCom, St. Louis, MO, Vol. 3, pp. 1650-54, 28 Nov. – 2 Dec. 2005.

ABSTRACT (pdf of full paper): When it became known that quantum computers could break the RSA (named for its creators – Rivest, Shamir, and Adleman) encryption algorithm within polynomial time, quantum cryptography began to be actively studied. Other classical cryptographic algorithms are only secure when malicious users do not have enough computational power to break security within a practical amount of time. Recently, many quantum authentication protocols sharing quantum entangled particles between communicators have been proposed, providing unconditional security. An issue caused by sharing quantum entangled particles is that it may not be simple to apply these protocols to authenticate a specific user in a group of many users. We propose an authentication protocol using quantum superposition states instead of quantum entangled particles. Our protocol can be implemented with the current technologies we introduce in this paper. Keywords: Authentication, Encryption, Photon, Polarization, Quantum cryptography, Superposition states.

51 A Short Survey on Quantum Computers

Y. Kanamori, S-M Yoo and F.T. Sheldon, Int'l Jr. of Computers and Applications, ACTA Press Calgary 28:3, pp. 227-233, 2006.

ABSTRACT (pdf of full paper | purchase): Quantum computing is an emerging technology. The clock frequency of current computer processor systems may reach about 40 GHz within the next 10 years. By then, one atom may represent one bit. Electrons under such conditions are no longer described by classical physics, and a new model of the computer may be necessary by then. The quantum computer is one proposal that may have merit in dealing with the fact that certain important, computationally intense problems exist that current (classical) computers cannot solve because they require too much processing time. For example, Shor's algorithm factors a large integer in polynomial time, while classical factoring algorithms require exponential time. In this paper we briefly survey the current status of quantum computers, quantum computer systems, and quantum simulators. Keywords: Classical computers, quantum computers, quantum computer systems, quantum simulators, Shor's algorithm.
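
The speedup attributed to Shor's algorithm comes from quantum order finding; the classical pre- and post-processing around that step is simple and can be sketched directly in Python. Here the order is found by brute force, which is exactly the part a quantum computer would replace with an efficient subroutine.

    from math import gcd

    def find_order(a, n):
        """Smallest r > 0 with a**r == 1 (mod n). Brute force here: this is the step
        Shor's algorithm performs efficiently on a quantum computer."""
        r, x = 1, a % n
        while x != 1:
            x = (x * a) % n
            r += 1
        return r

    def shor_factor(n, a=2):
        """Classical post-processing for an odd composite n with gcd(a, n) == 1;
        may fail for an unlucky choice of a, in which case another a is tried."""
        r = find_order(a, n)
        if r % 2 == 1:
            return None                       # odd order: pick a different a
        y = pow(a, r // 2, n)
        if y == n - 1:
            return None                       # trivial square root: pick a different a
        return gcd(y - 1, n), gcd(y + 1, n)

    print(shor_factor(15))   # (3, 5): the order of 2 mod 15 is 4, and gcd(3, 15), gcd(5, 15) yield the factors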

50 Modeling Redundancy: Quantitative and Qualitative Models

A. Mili, Lan Wu, F.T. Sheldon, M. Shereshevsky and J. Desharnais, Proc. ACS/IEEE AICCSA-06 Conf., Dubai/Sharjah, Mar. 8-11, 2006.

ABSTRACT(pdf of full paper): Redundancy is a system property that generally refers to duplication of state information or system function. While redundancy is usually investigated in the context of fault tolerance, one can argue that it is in fact an intrinsic feature of a system that can be analyzed on its own without reference to fault tolerance. Redundancy may arise by design, generally to support fault tolerance, or as a natural byproduct of design, and is usually unexploited. In this paper, we tentatively explore observable forms of redundancy, as well as mathematical models that capture them. Keywords Redundancy, Quantifying Redundancy, Qualifying Redundancy, Error Detection, Error Recovery, Fault Tolerance, Fault Tolerant Design, Redundancy as a Feature of State Representation, Redundancy as a Feature of System Function.

49 Challenges in Cyber Security

F.T. Sheldon, A. Mili, D. Neergaard and R. Abercrombie (STV06, unpublished Dec 2005).

ABSTRACT (pdf of full paper): In this position paper, we argue that a quantitative, formal approach is needed for modeling system security, and briefly discuss the outline of a refinement-based approach that integrates security with other dimensions of dependability. We describe the significance of the cyber security gap in terms of three dimensions: (1) criticality, (2) threat and (3) vulnerability. We have increased criticality due to the emerging economic dependence on the Internet; increased threat, as a consequence of emerging global tensions coupled with an increased sophistication of perpetrators; and increased vulnerability because of the increased pervasiveness of computing. Cyber security countermeasures, on the other hand, are primarily defensive, qualitative and ad hoc. Therefore, it is necessary to bring discipline to security management by providing: a logic for specifying security requirements and verifying secure systems against such requirements; a model for managing system security by quantifying costs, risks, measures and countermeasures; and automated tools that support security management according to the proposed models. We propose a mathematical framework for modeling system security using refinement-like calculi, which allows us to seamlessly integrate security attributes with other attributes of dependability. A tool that records quantified security measures and deploys an inference mechanism to answer queries about overarching properties of the system is under development.

48 Modeling Dependability Measures

A. Mili, A. Thomasian, A. Vinokurov, and F. Sheldon, (Unpublished Fall 2005).

ABSTRACT (pdf of full paper): In past work, we have discussed a uniform model to represent different verification techniques, and have shown how this model can be used to support two divide-and-conquer strategies: how to compose eclectic verification claims, and how to decompose composite verification goals. In this paper, we broaden the original model, most notably by integrating cost considerations, and by encompassing multiple dimensions of dependability (reliability, security, and safety). We briefly illustrate our approach with a very simple demo, which we run on a very elementary, tentative prototype. This paper does not offer conclusive research results as much as it offers motivated premises, concepts and approaches for further research. Keywords: Dependability, reliability, safety, security, verification, testing and fault tolerance.

47 Weigh-In-Motion (WIM) and Measurement Reach Back Capability (WIM-RBC) - The Configuration and Data Management Tool for Validation, Verification, Testing and Certification Activities

R. K. Abercrombie, F. T. Sheldon, R. G. Schlicher and K. M. Daley, 40th Annual International Logistics Conference 2005, Logistics: Product and Process for Capacity, Orlando, FL, August 16, 2005.

ABSTRACT (pdf of presentation): Overview of the configuration and data management tools used for testing and certification activities for the WIM Gen II system.

46 Recovery Preservation: A Measure of Last Resort

A. Mili, F. Sheldon, F. Mili and J. Desharnais, Innovations in Systems and Software Engineering Journal (Springer-Verlag London Ltd), 2005:1:54-61.

Preliminary version published in: Proceedings Int'l Conf. Principles of Software Engineering, Buenos Aires, Argentina, pp. 121-130, Nov. 22-27, 2004.

ABSTRACT (pdf of full paper | published | PRISE presentation | PRISE stream [7mb zipped]): Traditionally, it is common to distinguish between three broad families of methods for dealing with the presence and manifestation of faults in digital (hardware or software) systems: Fault Avoidance, Fault Removal and Fault Tolerance. We focus on fault tolerance and submit that current techniques of fault tolerance would benefit from a better understanding of recoverability preservation, i.e., a system's ability to preserve recoverability even when/if it does not preserve correctness. In this extended abstract, we briefly introduce the concept of recoverability preservation, discuss some preliminary characterizations of it, then explore possible applications thereof. Keywords: Programming Calculi, Relational Mathematics, System Fault Tolerance, Fault, Error, Failure, Recoverability Preservation, Recovery Routine.

45 Weigh-In-Motion Research and Development Activities at the Oak Ridge National Laboratory

R. K. Abercrombie, D. L. Beshears, M. B. Scudiere, J. E. Coats, Jr., F. T. Sheldon, C. Brumbaugh, E. Hart, and R. McKay, Proceedings of the Fourth International Conference on Weigh-In-Motion (ICWIM4), National Science Council, National Taiwan University Publications (ISBN 986-00-0417-X), pp. 139-149, Taipei, Taiwan, Feb. 21, 2005

ABSTRACT(pdf of full paper | presentation): The Oak Ridge National Laboratory (ORNL) has been involved in Weigh-in-Motion (WIM) Research with both government agencies and private companies since 1989. The discussion here will focus on the United States Army’s need for an automated system to weigh and determine the center-of-balance for military wheeled vehicles as it relates to deployments for both military and humanitarian activities. A demonstration test at Fort Bragg/Pope AFB of ORNL’s first generation portable Weigh-in-Motion (WIM Gen I) will be discussed as well as the development and fielding activities for a WIM Gen II system. Keywords Weigh-in-Motion, WIM, center-of-balance, defense deployments, aircraft load planning.

44 Perspectives on Redundancy: Applications to Software Certification

A. Mili, F. Sheldon, F. Mili, M. Shereshevsky and J. Desharnais, IEEE Proc. HICSS-38, (Testing and Certification of Trustworthy Systems Minitrack), Big Island, Hawaii, Jan. 3-6, 2005.

ABSTRACT(pdf of full paper | presentation): Redundancy is a feature of systems that arises by design or as an accidental byproduct of design, and can be used to detect, diagnose or correct errors that occur in systems operations. While it is usually investigated in the context of fault tolerance, one can argue that it is in fact an intrinsic feature of a system that can be analyzed on its own without reference to any fault tolerance capability. In this paper, we submit three alternative views of redundancy, which we propose to analyze to gain a better understanding of redundancy; we also explore means to use this understanding to enhance the design of fault tolerant systems. Keywords Redundancy, Quantifying Redundancy, Qualifying Redundancy, Error Detection, Error Recovery, Fault Tolerance, Fault Tolerant Design.

43 Methodology to Support Dependable Survivable Cyber-Secure Infrastructures

Frederick T. Sheldon, Stephen G. Batsell, Stacy J. Prowell and Michael A. Langston, IEEE Proc. HICSS-38, (Security & Survivability of Networked Systems Minitrack), Big Island, Hawaii, Jan. 3-6, 2005.

ABSTRACT(pdf of full paper | presentation): Information systems now form the backbone of nearly every government and private system. Increasingly these systems are networked together allowing for distributed operations, sharing of databases, and redundant capability. Ensuring these networks are secure, robust, and reliable is critical for the strategic and economic well being of the Nation. This paper argues in favor of a biologically inspired approach to creating survivable cyber-secure infrastructures (SCI). Our discussion employs the power transmission grid. Keywords Infrastructure Vulnerability, Reliability, Cyber-Security, Software Agents, Autonomic Computing Paradigm

42 Characterizing Software Quality Assurance Methods: Impact on the Verification of Learning Systems

Frederick T. Sheldon and Ali Mili, Workshop on Verification, Validation and Testing of Learning Systems in conjunction with the Eighteenth Annual Conference on Neural Information Processing Systems Conference, Whistler, BC, CA, Dec. 16-18, 2004.

ABSTRACT (pdf of full paper | presentation): While learning systems offer great promise in reducing cost and improving quality of control applications, they also raise thorny issues in terms of the mismatch between the quality standards that these systems must achieve [10] and the available technology. There is widespread agreement that current verification technology does not apply to online learning (i.e., adaptive) systems, whose function evolves over time and cannot be inferred from static analysis. Yet, we claim that one can still use insights from traditional verification technology to better develop verification techniques for adaptive systems. In this short paper, we wish to explore this possibility by (1) characterizing traditional verification techniques, and using further dimensions of these to propose a classification scheme for assurance (i.e., verification) methods, (2) using the proposed classification scheme to characterize methods that have been or are being developed for online learning systems, and (3) perhaps most importantly, showing how the classification/characterization scheme can be used as a tool to formulate coherent conclusions from an eclectic verification effort (i.e., a verification effort that uses more than one method).

41 Development of the Joint Weigh-In-Motion (WIM) and Measurement Reach Back Capability – The Configuration and Data Management Tool

R.K. Abercrombie, F.T. Sheldon, R.B. Schlicher and K.M. Daley, SOLE Logistics Spectrum Magazine, 38:4, pp. 4-9, Dec. 2004.

ABSTRACT (pdf of full paper, scan-pdf): The development of the Joint Weigh-In-Motion (WIM) and Measurement Reach Back Capability (WIM-RBC) embodied in the current WIM Gen II system demonstrates a configuration and data management strategy that ensures data integrity, coherence and cost effectiveness during the WIM and Measurement systems validation, verification, testing and certification activities. Using integrated Commercial-off-the-shelf (COTS) products, the WIM-RBC is based on a Web services architecture implemented through the best practices of software design with the Unified Modeling Language (UML) and eXtensible Markup Language (XML) schema. Fielded WIM and measurement systems engage the WIM-RBC through XML-compliant messages to store collected data in the WIM-RBC information repository. Through a Web browser, authorized users can securely access this repository, generate reports, and obtain separate tabular data for follow-on, custom analysis. It is the intent of the WIM-RBC to store all collected measurement data that will ultimately be used to determine the life-cycle cost of the WIM and measurement systems. Keywords: Aircraft Load Planning

 
40 Testing Software Requirements with Zed and Statecharts Applied to an Embedded Control System

Hye Yeon Kim and Frederick Sheldon, Software Quality Journal, Kluwer, Dordrecht, Netherlands, Vol. 12, Issue 3, pp. 232-266, August 2004.

ABSTRACT (pdf of full paper): Software development starts from specifying the requirements. A Software Requirements Specification (SRS) describes what the software must do. Naturally, the SRS takes the core role as the descriptive documentation at every phase of the development cycle. To avoid problems in the later development phases and reduce life-cycle costs, it is crucial to ensure that the specification be reliable. In this paper, we describe how to model and test (i.e., check, examine, verify and prove) the SRS using two formalisms (Zed and Statecharts). Moreover, these formalisms were used to determine strategies for avoiding design defects and system failures. We introduce a case study performed to validate the integrity of a Guidance Control Software SRS in terms of completeness, consistency, and fault-tolerance.

39 Critical Energy Infrastructure Survivability, Inherent Limitations, Obstacles and Mitigation Strategies

Frederick T. Sheldon, Tom Potok, Axel Krings and Paul Oman, Int'l Jr. of Power and Energy Systems – Special Theme: Blackout, ACTA Press, Calgary, Canada, Issue 2, pp. 86-92, 2004. An earlier version of this work was published at PowerCon03. See item 37 below.

ABSTRACT (pdf of full paper | purchase): Information systems now form the backbone of nearly every government and private system, from targeting weapons to conducting financial transactions. Increasingly these systems are networked together allowing for distributed operations, sharing of databases, and redundant capability. Ensuring these networks are secure, robust, and reliable is critical for the strategic and economic well being of the Nation. The blackout of August 14, 2003 affected 8 states and fifty million people and could cost up to $5 billion. The DOE/NERC interim reports indicate the outage progressed as a chain of relatively minor events consistent with previous cascading outages caused by a domino reaction. The increasing use of embedded distributed systems to manage and control our technologically complex society makes knowing the vulnerability of such systems essential to improving their intrinsic reliability/survivability. Our discussion employs the power transmission grid. Keywords: Infrastructure Vulnerability, Reliability, Cyber-Security, Software Agent Petri net Models

38 Managing Secure Survivable Critical Infrastructures To Avoid Vulnerabilities

Frederick T. Sheldon, Tom Potok, Andy Loebl, Axel Krings and Paul Oman, Eighth IEEE Int'l Symp. on High Assurance Systems Engineering, Tampa Florida, pp. 293-296, March 25-26, 2004.

ABSTRACT (pdf of full paper | presentation): Information systems now form the backbone of nearly every government and private system, from targeting weapons to conducting financial transactions. Increasingly these systems are networked together allowing for distributed operations, sharing of databases, and redundant capability. Ensuring these networks are secure, robust, and reliable is critical for the strategic and economic well being of the Nation. The blackout of August 14, 2003 affected 8 states and fifty million people and could cost up to $5 billion. The DOE/NERC interim reports indicate the outage progressed as a chain of relatively minor events consistent with previous cascading outages caused by a domino reaction. The increasing use of embedded distributed systems to manage and control our technologically complex society makes knowing the vulnerability of such systems essential to improving their intrinsic reliability/survivability. Our discussion employs the power transmission grid.

37 Energy Infrastructure Survivability, Inherent Limitations, Obstacles and Mitigation Strategies (Preliminary)

Frederick T. Sheldon, Tom Potok, Andy Loebl, Axel Krings and Paul Oman, IASTED Int'l Power Conference – Special Theme: Blackout, New York, NY, pp. 49-53, Dec. 10-12, 2003 (ACTA Press).

ABSTRACT (pdf of full paper): The blackout of August 14, 2003 affected eight states and fifty million people. It is too early to know what exactly triggered the outage, and it is highly likely that a chain of events, not a single cause, will be shown to have instigated what turned out to be a domino reaction. Nevertheless, the increasingly ubiquitous use of embedded systems to manage and control our technologically complex society makes us even more vulnerable. Knowing just how reliable and survivable such systems are, as well as their vulnerabilities, is essential to improving their intrinsic reliability/survivability (in a deregulated environment, knowing these important properties is equally essential to the providers). This paper presents a structured compositional modeling method for assessing reliability and survivability based on characteristic data and stochastic models. Key Words - Stochastic Modeling, Reliability, Coincident Failures, Usage-Profiles.

36 Assessing the Effect of Failure Severity, Coincident Failures and Usage-Profiles on the Reliability of Embedded Control Systems

Frederick T. Sheldon, and Kshamta Jerath, ACM Symp. on Applied Computing, Nicosia Cyprus, pp. 826-833, Mar. 14-17, 2004

ABSTRACT (pdf of full paper | pdf of presentation): The increasingly ubiquitous use of embedded systems to manage and control our ever more technologically complex lives makes us more vulnerable than ever before. Knowing how reliable such systems are is absolutely necessary, especially for safety-, mission- and infrastructure-critical applications. This paper presents a structured compositional modeling method for assessing reliability based on characteristic data and stochastic models. We illustrate this using a classic embedded control system (sensor-inputs | processing | actuator-outputs), an Anti-lock Braking System (ABS), and empirical data. Special emphasis is laid on modeling extra-functional characteristics of severity of failures, coincident failures and usage-profiles, with the goal of developing a modeling strategy that is realistic, generic and extensible. The validation approach compares the results from the two separate models. The results are comparable and indicate the effect of coincident failures, failure severity and usage-profiles is predictable. Key Words - Design, Measurement, Performance, Reliability.
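
The stochastic models themselves are not reproduced in this abstract, but the flavor of a usage-profile-weighted reliability estimate can be shown with a deliberately simplified Python sketch; the exponential failure assumption and every number below are hypothetical, not data from the paper.

    import math

    # Hypothetical per-mode failure rates (failures/hour) for an ABS-like pipeline
    # (sensor inputs -> processing -> actuator outputs). Purely illustrative numbers.
    failure_rates = {"normal_braking": 1e-6, "hard_braking": 5e-6, "icy_surface": 2e-5}
    usage_profile = {"normal_braking": 0.90, "hard_braking": 0.08, "icy_surface": 0.02}

    def reliability(t_hours):
        """Probability of surviving t_hours under the assumed usage mix, treating the
        effective failure rate as the usage-weighted average of the per-mode rates."""
        lam = sum(usage_profile[m] * failure_rates[m] for m in failure_rates)
        return math.exp(-lam * t_hours)

    print(reliability(10_000))   # survival probability over 10,000 operating hours (~0.98 here)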

35 Multi-Agent System Case Studies in Command and Control, Information Fusion and Data Management

Frederick T. Sheldon, Thomas E. Potok and Krishna M. Kavi, Informatica Journal (ISSN 0350-5596) Vol.28, pp. 79-89, 2004.

ABSTRACT (pdf of full paper): On the basis of three different agent-based development projects (one feasibility study, one prototype, one fully fielded), we assess the fitness of software (SW) agent-based systems (ABS) in various application settings: (1) distributed command and control (DCC) in fault-tolerant, safety-critical responsive decision networks, (2) agents discovering knowledge in an open and changing environment, and (3) lightweight distributed data management (DM) for analyzing massive scientific data sets. We characterize the fundamental commonalities and benefits of ABSs in light of our experiences in deploying the different applications. Keywords: Intelligent software agents, ontology, information fusion, collaborative decision support.

34 Suitability of Agent Technology for Command and Control in Fault-tolerant, Safety-critical Responsive Decision Networks

Thomas E. Potok, Laurence Phillips, Robert Pollock, Andy Loebl and Frederick T. Sheldon, Proc.16th Int'l Conf. Parallel and Distributed Computing Systems, Reno NV, pp. 283-290, Aug. 13-15, 2003

ABSTRACT(pdf of full paper): We assess the novelty and maturity of software (SW) agent-based systems (ABS) for the Future Combat System (FCS) concept. The concept consists of troops, vehicles, communications, and weapon systems viewed as a system of systems [including net-centric command and control (C2) capabilities]. In contrast to a centralized, or platform-based architecture, FCS avoids decision-making/execution bottlenecks by combining intelligence gathering and analysis available at lower levels in the military hierarchy. ABS are particularly suitable in satisfying battle-space scalability, mobility, and security (SMS) expectations. A set of FCS SW requirements (SRs) was developed based on needs aligned with current computer science technology and inherent limitations. ABS advantages (i.e., SMS) are enabled mainly through a stronger messaging/coordination (MC) model. Such capabilities in an FCS environment do not currently exist, though a number of strong (analogous) agent-based systems have been deployed due to the lack of information fusion and decision support. Nevertheless, ABS can support most networked FCS C2 requirements despite the lack of current empirical and theoretical validation. Keywords: Intelligent SW Agents, Fusion and Decision Support

33 VIPAR: Advanced Information Agents discovering knowledge in an open and changing environment

Thomas E. Potok, Mark Elmore, Joel Reed and Frederick T. Sheldon, Proc. 7th World Multiconference on Systemics, Cybernetics and Informatics Special Session on Agent-Based Computing, Orlando FL, pp. 28-33, July 27-30, 2003.

Awarded Best Paper

ABSTRACT(pdf of full paper): Given the rapid evolution of information technology, most people on a daily basis are confronted by more information than they can reasonably process. The challenge to organize, classify and comprehend immense amounts of information is vitally important to the scientific, business, and defense/security communities (particularly when projecting the future evolution of information technology). For example, the defense/security community is faced with the daunting challenge of gathering and summarizing information so that military and political leaders can make informed decisions and recommendations. One such group, the Virtual Information Center (VIC) at US Pacific Command, gathers, analyzes, and summarizes information from Internet-based newspapers on a daily basis (a manual, time and resource intensive process). The VIPAR project has addressed this need. Intelligent agent technology was chosen 1) to utilize the ability for broadcast as well as peer-to-peer communication among agents, 2) to follow rules outlined in an ontology, and 3) because of the ability for agents to suspend processing on one machine, move to another, and resume processing (persistence). These strengths are well suited to address the challenges of gathering Internet-based information automatically. This multi-agent system has demonstrated 1) the ability to self-organize newspaper articles in a manner comparable to humans, 2) how a flexible Resource Description Framework (RDF) ontology is employed to monitor and manage Internet-based newspaper information, and 3) the capability to dynamically add and cluster new information entering the system. VIPAR includes thirteen information agents that manage thirteen different newspaper sites. Results show that VIPAR output is comparable to human output and validates the agent approach taken. Keywords: Intelligent Persistent Agents, Ontology, Mobile Agent Community, Vector-Space Model
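
The clustering machinery is not detailed in the abstract; as a hedged illustration of the vector-space model it names (the simple whitespace tokenization and the absence of TF-IDF weighting are simplifying assumptions), two articles can be compared by the cosine of their term-frequency vectors in Python:

    import math
    from collections import Counter

    def cosine_similarity(doc_a, doc_b):
        """Cosine of the angle between plain term-frequency vectors (vector-space model)."""
        a, b = Counter(doc_a.lower().split()), Counter(doc_b.lower().split())
        dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
        norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
        return dot / norm if norm else 0.0

    score = cosine_similarity("port security exercise announced in pacific region",
                              "pacific region announces port security exercise")
    print(score)   # ~0.77: similar articles score high and would fall into the same cluster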

32

Modeling with Stochastic Message Sequence Charts

Zhihe Zhou, Frederick T. Sheldon, and Thomas E. Potok, IIIS Proc. Int'l. Conf. on Computer, Communication and Control Technology (CCCT'03), Orlando FL, July 31 - Aug. 2, 2003

ABSTRACT (pdf of full paper): Message Sequence Chart (MSC) is a formal language widely used in industry for requirement specifications. In this paper, we propose a stochastic extension to MSC; this extended version of the MSC language is called Stochastic Message Sequence Charts (SMSC). Compared with MSC, SMSC is suitable for performance modeling and analysis. We integrated the SMSC language into the Möbius framework, which has a well-defined interface that facilitates interactions and solutions to hybrid models from different modeling formalisms, to enable the use of the Möbius built-in solvers for evaluating the models' stochastic properties. Keywords: Message Sequence Charts, Stochastic Modeling, Formal Specification, and Performance Analysis

31

An Ontology-Based Software Agent System Case Study

Frederick T. Sheldon, Mark T. Elmore, and Thomas E. Potok, IEEE Proc. International Conf. on Information Technology: Coding and Computing, Las Vegas, Nevada, pp. 500-506, April 28-30, 2003

ABSTRACT (pdf of full paper): Developing a knowledge-sharing capability across distributed heterogeneous data sources remains a significant challenge. Ontology-based approaches to this problem show promise by resolving heterogeneity, if the participating data owners agree to use a common ontology (i.e., a set of common attributes). Such common ontologies offer the capability to work with distributed data as if it were located in a central repository. This knowledge sharing may be achieved by determining the intersection of similar concepts from across various heterogeneous systems. However, if information is sought from a subset of the participating data sources, there may be concepts common to the subset that are not included in the full common ontology, and therefore are unavailable for knowledge sharing. One way to solve this problem is to construct a series of ontologies, one for each possible combination of data sources. In this way, no concepts are lost, but the number of possible subsets is prohibitively large. We offer a novel software agent approach as an alternative that provides a flexible and dynamic fusion of data across any combination of the participating heterogeneous data sources to maximize knowledge sharing. The software agents generate the largest intersection of shared data across any selected subset of data sources. This ontology-based agent approach maximizes knowledge sharing by dynamically generating common ontologies over the data sources of interest. The approach was validated using data provided by five (disparate) national laboratories by defining a local ontology for each laboratory (i.e., data source). In this experiment, the ontologies are used to specify how to format the data using XML to make it suitable for query. Consequently, software agents are empowered to provide the ability to dynamically form local ontologies from the data sources. In this way, the cost of developing these ontologies is reduced while providing the broadest possible access to available data sources.
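The dynamic-intersection idea is easy to see in miniature. The sketch below is not the agents' actual implementation; it uses hypothetical laboratory names and attribute sets to show how the largest shared ontology for any selected subset of sources can be derived on demand rather than pre-built for every combination.

    # Illustrative sketch only: per-source "local ontologies" as attribute sets,
    # with the shared ontology for any chosen subset computed by intersection.
    local_ontologies = {
        "lab_a": {"sample_id", "material", "density", "operator"},
        "lab_b": {"sample_id", "material", "tensile_strength"},
        "lab_c": {"sample_id", "material", "density", "batch_date"},
    }

    def shared_ontology(selected):
        """Largest set of attributes common to all selected data sources."""
        return set.intersection(*(local_ontologies[name] for name in selected))

    print(shared_ontology(["lab_a", "lab_b", "lab_c"]))  # {'sample_id', 'material'}
    print(shared_ontology(["lab_a", "lab_c"]))           # 'density' is preserved as well

Querying only lab_a and lab_c retains the shared 'density' attribute that a single three-way common ontology would have dropped, which is the knowledge-sharing gain the agent approach aims for.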

30

Assessment of High Assurance Software Components for Completeness, Consistency, Fault-Tolerance, and Reliability

Hye Yeon Kim, Kshamta Jerath and Frederick T. Sheldon, Book Chapter in Component-Based Software Quality: Methods and Techniques, Eds. Alejandra Cechich, Mario Piattini and Antonio Vallecillo, Springer LNCS Vol. 2693, Heidelberg (2003), pp. 259-86.

CHAPTER ABSTRACT (pdf of full paper): In an attempt to manage increasing complexity and to maximize code reuse, the software engineering community has, in recent years, put considerable effort into the design and development of component-based software development systems and methodologies (Cox & Song, 2001). The concept of building software from existing components arose by analogy with the way that hardware is now designed and built, using cheap, reliable standard off-the-shelf modules. Therefore, the success of component-based software technology depends on the effort needed to build component-based software systems being significantly lower than that of traditional custom software development. Consequently, component producers have to ensure that their commercial components possess trusted quality (Wallin, 2002). To achieve a predictable, repeatable process for engineering high-quality component-based software systems, it is clear that quality must be introduced and evaluated at the earliest phases of the life cycle. The development of component-based software (CBS) systems is motivated by component reusability. The development process for CBS is very similar to the conventional software development process. In CBS development, however, the requirements specification is examined for possible composition from existing components rather than direct construction. The components can be functional units, a service provider (i.e., application programs, Web-based agent and/or enterprise system (Griss & Pour, 2001)), or components of an application ranging in size from a subsystem to a single object. To ensure the quality of the final product, assessment of such components is obligatory. Some form of component qualification at the earliest possible phase of system development is therefore necessary to avoid problems in later phases and reduce life-cycle costs. Evaluation of the software system must take into consideration how the components behave, communicate, interact and coordinate with each other (Clements, Bass, Kazman, & Abowd, 1995). Reliability, a vital attribute of the broader quality concept, is defined as the degree to which a software system both satisfies its requirements and delivers usable services (Glass, 1979). Quality software, in addition to being reliable, is also robust (and fault tolerant), complete, consistent, efficient, maintainable/extensible, portable, and understandable. In this chapter, we discuss how one can evaluate the quality of the components using formal model based (FMB) methods (i.e., Zed, Statecharts, Stochastic Petri Nets and/or Stochastic Activity Networks). We present an FMB framework for assessing component properties like completeness and consistency of requirement specifications using Zed and Statecharts; and approaches for verifying properties like reliability using two different stochastic modeling formalisms. Two case studies are discussed in this context based on both a mission critical (guidance control) software requirements specification and a vehicular system with various interacting components (possibly) provided by different vendors. The assessment of quality (i.e., reliability) for components such as anti-lock brakes, steer-by-wire and traction control is considered based on empirical data.


BOOK OVERVIEW: During recent years, new software engineering paradigms like component-based software engineering and COTS-based development have emerged. Both paradigms are concerned with reuse and customising existing components. The use of components has become more and more important in state-of-the-art and state-of-the-practice software and system development. Developing component-based software promises faster time-to-market, which can yield substantial advantages over competitors with regards to earlier placement of a new product on a market. At the same time, components introduce risks such as unknown quality properties that can inject harmful side effects into the final product. The main objective of this book is to give a global vision of Component-based Software Quality, exposing the main techniques and methods, and analysing several aspects (component selection, COTS assessment, etc.) related to component quality. This book provides direction, based on working experience, on methods, techniques and guidelines to deal with both component selection and composition evaluation.
29

PCX: From Model Checking to Stochastic Analysis

Shuren Wang and Frederick T. Sheldon – Whitepaper

ABSTRACT (pdf of full paper): Stochastic Petri Nets (SPNs) are a graphical tool for the formal description of systems with the features of concurrency, synchronization, mutual exclusion and conflict. SPN models can be described with an input language called CSPL (C-based SPN language). Spin is a generic verification system that supports the design and verification of software systems. PROMELA (Protocol or Process Meta Language) is Spin's input language. This work provides the translation rules from a subset of PROMELA constructs to CSPL, and also offers an experimental tool, PCX (PROMELA to CSPL Translator), and an approach to explore the specification and analysis of stochastic properties for systems. The PCX tool translates the formal description, written in PROMELA, into an SPN, represented by CSPL. The approach requires users to add stochastic property information during (or after) the translation. Translation of the PROMELA model to a CSPL specification will allow the analysis of non-functional requirements such as reliability, availability, and performance through SPNP (Stochastic Petri Net Package), a stochastic analysis tool. This is useful in the design and validation of performance where parameters such as failure rate or throughput are available. Moreover, certain structural and architectural features of software can be evaluated and considered within the context of Spin-verifiable properties. This approach provides additional flexibility to the PROMELA specification-modeling paradigm to include stochastic analysis of structural and non-functional properties. Thus, PCX provides a practical bridge between system verification and system validation.
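To make the translation idea concrete, here is a deliberately simplified sketch, not the PCX tool's actual rules and not CSPL syntax: a PROMELA-style channel send and receive rendered as a tiny stochastic Petri net held in plain Python data, with firing rates of the kind a user would supply during or after translation.

    # Conceptual sketch only (hypothetical rates; not PCX output or CSPL code).
    # A PROMELA-like "ch ! msg" becomes a timed transition that deposits a token
    # into a place modeling the channel; "ch ? msg" consumes it.
    spn = {
        "places": {"sender_ready": 1, "channel": 0, "receiver_ready": 1, "received": 0},
        "transitions": {
            "send":    {"rate": 2.0, "inputs": ["sender_ready"],              "outputs": ["channel"]},
            "receive": {"rate": 5.0, "inputs": ["channel", "receiver_ready"], "outputs": ["received"]},
        },
    }

    def enabled(net, name):
        """A transition is enabled when every input place holds at least one token."""
        return all(net["places"][p] > 0 for p in net["transitions"][name]["inputs"])

    print([t for t in spn["transitions"] if enabled(spn, t)])  # ['send']

A real CSPL model would hand a net like this, together with its rate annotations, to SPNP for reliability, availability, or performance analysis.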

28

Case Study: B2B E-Commerce System Specification and Implementation Employing Use-Case Diagrams, Digital Signatures and XML

Frederick T. Sheldon, Kshamta Jerath, Orest Pilskalns, Young-Jik Kwon, Woo-Hun Kim and Hong Chung, IEEE Proc. International Symp. Multimedia Software Engineering (MSE 2002), Newport Beach, CA, pp. 106-113, Dec. 11-13, 2002

ABSTRACT (pdf of full paper): A case study highlighting the best practices for designing and developing a B2B e-commerce system is presented. We developed a remote order-and-delivery web-based system for an auto-parts manufacturing company. The system requirements were determined by interviewing employee stakeholders. An initial scenario of the system was prototyped and refined until the users and developers were satisfied. A formalized specification of the requirements employing Use-Case Diagrams and based on event flow was developed and coded using XML. This helped keep the documentation simple and clear. Testing was performed at the component level, allowing for feedback to previous steps when errors appeared. Digital signatures were employed for implementing security. The end product reduced transaction processing time and processing cost, and improved the accuracy, efficiency, reliability, and security of transmitted data; our strategy also shortened the System Development Life Cycle.
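The digital-signature step can be illustrated independently of the paper's system. The fragment below is only a sketch using the Python cryptography package on a made-up XML order; the actual B2B implementation's signing mechanism and key management are not described here.

    # Illustrative sketch only: sign an XML order and verify it on receipt.
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    order_xml = b"<order><part id='A-100' qty='25'/></order>"  # hypothetical order document
    signature = private_key.sign(order_xml, padding.PKCS1v15(), hashes.SHA256())

    # verify() raises InvalidSignature if the document was altered in transit.
    public_key.verify(signature, order_xml, padding.PKCS1v15(), hashes.SHA256())

In a B2B exchange, the buyer would transmit the order together with the signature, allowing the supplier to confirm both the sender's identity and the integrity of the transmitted data.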

27

Case Study: Implementing a Web Based Auction System using UML and Component-Based Programming

Frederick T. Sheldon, Kshamta Jerath, Young-Jik Kwon and Young-Wook Baik, IEEE Proc. COMPSAC 2002 (26th Ann. Computer Software and Applications Conference), Oxford, England, pp. 211-16, August 26-29, 2002

ABSTRACT (pdf of full paper): This paper presents a case study highlighting the best practices for designing and building a web-based auction system using UML (Unified Modeling Language) and component-based programming. We use the Use Case, Class, Sequence, and Component Diagrams offered by UML for designing the system. This enables new functions to be added and updated easily. Our implementation, with its basis in component-based programming, enabled us to develop a highly maintainable system with a number of reusable components: the MethodofBidding (the bidder can bid at three different frequencies - fast, medium or leisurely), the Certification (Identity verification function), and the RegistrationGood (Product entry function) Components. Further, the system uses intelligent agents that provide fair help to bidders participating in auctions while achieving maximum profit for the seller. The design and implementation environment, along with the tools used, provide excellent support for the successful development of the system.

26

Extending MSCs for Multi-level and Multi-formalism Modeling in Möbius

Zhihe Zhou
Master's Thesis (Defended December 6, 2002)

ABSTRACT (pdf of thesis | zipped presentation): Message Sequence Chart (MSC) is a formal language for describing the communication behavior of a system. Möbius is an extensible multi-level multi-formalism modeling tool that facilitates interactions of models from different formalisms. We propose a new version of MSC, Stochastic MSC (SMSC), which is a stochastic extension to the traditional MSC. SMSC is suitable for performability analysis. Mappings from SMSC to Möbius entities are defined so that it can be integrated into the Möbius framework. Together with other formalisms of Möbius, SMSC can be used as a building block for large hybrid models. Users will have additional flexibility in choosing modeling languages in Möbius. Unlike other formalisms so far included in Möbius, SMSC has both textual and graphical representations. Modeling with a text editor is the same as writing a traditional program, while the graphical representation gives users a direct view of the system.

25

Modeling and Stochastic Analysis of Embedded Systems Emphasizing Coincident Failures, Failure Severity and Usage-Profiles

Kshamta Jerath
Thesis (Defended August 2002)

ABSTRACT (pdf of thesis | zipped defense presentation): The increasingly ubiquitous use of software systems has created the need to depend on them more than ever before, and to measure just how dependable they are. Knowing that the system is reliable is absolutely necessary for safety-critical systems, where any kind of failure may result in an unacceptable loss of human life. This study models and analyzes the Anti-lock Braking System of a passenger vehicle. Special emphasis is laid on modeling extra-functional characteristics of coincident failures, severity of failures and usage-profiles - the goal is to develop an approach that is realistic, generic and extensible for this application domain. Components in a system generally interact with each other during operation, and a faulty component can affect the probability of failure of other correlated components. The severity of a failure is the impact it has on the operation of the system. This is closely related to the notion of hazard, which defines what undesirable consequence will potentially result from incorrect system operation (i.e., a threat). Usage profile characterizes how the system is used for the purpose of modeling and reliability analysis. Only those failures that occur during active use are considered in reliability calculations. The strategy of modeling these characteristics (using real empirical data) is innovative in terms of the approach used to integrate them into the Stochastic Petri Net and Stochastic Activity Network formalisms. The validation approach compares the results from the two separate models using the two different modeling formalisms. The results were found to be comparable and confirm that the effect of modeling coincident failures, failure severity and usage-profiles is noticeable in determining overall system reliability. The contribution of this research to the automotive industry is substantial as it offers a greater insight into the strategy for developing realistic models. This work also provides a solid basis for modeling more complex systems and carrying out further analyses.

24

Examining Coincident Failures and Usage-Profiles in Reliability Analysis of an Embedded Vehicle Sub-system

Frederick T. Sheldon, Kshamta Jerath and Stefan Greiner, SCS Proc. ESM'2002 / ASMT (16th Euro Simulation Multiconf: Analytical and Stochastic Modelling Techniques) Darmstadt, Germany, pp. 558-563, June 3-5, 2002

ABSTRACT (pdf of full paper | zipped presentation): Structured models of systems allow us to determine their reliability, yet there are numerous challenges that need to be overcome to obtain meaningful results. This paper reports the results and approach used to model and analyze the Anti-lock Braking System of a passenger vehicle using Stochastic Petri Nets. Special emphasis is laid on modeling extra-functional characteristics like coincident failures among components, severity of failure and usage-profiles of the system. Components generally interact with each other during operation, and a faulty component can affect the probability of failure of other components. The severity of a failure also has an impact on the operation of the system, as does the usage profile - failures which occur during active use of the system are the only failures considered (i.e., in reliability calculations).
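The reliability and availability figures behind such Petri net models ultimately come from solving an underlying Markov chain. The sketch below is not the ABS model from the paper; it solves a toy three-state continuous-time Markov chain with made-up rates, using NumPy, to show the kind of steady-state computation involved.

    # Illustrative sketch only: steady-state solution of a toy CTMC
    # (states: 0 = operational, 1 = degraded, 2 = failed; rates are invented).
    import numpy as np

    Q = np.array([
        [-0.02,  0.02,  0.00],   # operational -> degraded
        [ 0.10, -0.15,  0.05],   # degraded -> repaired or failed
        [ 0.50,  0.00, -0.50],   # failed -> repaired
    ])

    # Solve pi Q = 0 with sum(pi) = 1 by replacing one balance equation
    # with the normalization constraint.
    A = Q.T.copy()
    A[-1, :] = 1.0
    b = np.zeros(3)
    b[-1] = 1.0
    pi = np.linalg.solve(A, b)

    print("steady-state probabilities:", pi)
    print("availability (operational or degraded):", pi[0] + pi[1])

Coincident failures and usage profiles enter by changing the structure and rates of the chain, which is exactly what the Stochastic Petri Net models generate automatically.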

23

Validation of Guidance Control Software Requirements Specification for Reliability and Fault-Tolerance

Hye Yeon Kim
Thesis (Defended May 14, 2002)

ABSTRACT (pdf of thesis | zipped defense presentation): Many critical control systems are developed using CASE tools. Validation for such systems is largely based on simulation and testing. Current software engineering research has sought to develop theory, methods, and tools based on mechanized formal methods that will provide increased assurance for such applications. In addition, current software engineering research focuses on earlier detection of overlooked cases, more complete testing using model checking to examine all reachable states, and full verification of critical properties using an automated theorem prover. This case study was performed for validating the integrity of a software requirements specification (SRS) for Guidance Control Software (GCS) in terms of reliability and fault-tolerance. A verification of the extracted parts of the GCS Specification is provided as a result. Two modeling formalisms were used to evaluate the SRS and to determine strategies for avoiding design defects and system failures. Z was used first to detect and remove ambiguity from a portion of the Natural Language based (NL-based) GCS SRS. Next, Statecharts, Activity-charts, and Module charts were constructed to visualize the Z description and make it executable. Using executable models, the system behavior was assessed under normal and abnormal conditions. Faults were seeded into the model (an executable specification) to probe how the system would perform. Missing or incorrectly specified requirements were found during the process. In this way, the integrity of the SRS was assessed. The significance of this approach is discussed by comparing it with similar studies and with possible approaches for achieving fault tolerance. This approach is envisioned to be useful in a more general sense as a means to avoid incompleteness and inconsistencies along with dynamic behavioral analysis useful in avoiding major design flaws. The iteration between these two formalisms gives pertinent analysis of a problem - i.e., operational errors between states, functional defects, etc.

22

Metrics for Maintainability of Class Inheritance Hierarchies

Frederick Sheldon, Kshamta Jerath and Hong Chung, Jr of Software Maintenance and Evolution: Research & Practice (John Wiley & Sons) Vol. 14 Issue 3, pp 147-160, May/June 2002.

ABSTRACT (pdf of full paper | purchase): Since the proposal for the six object-oriented metrics by CK (Chidamber and Kemerer), several studies have been conducted to validate these metrics, and some deficiencies have been discovered. Consequently, many new metrics for object-oriented systems have been proposed. Among the various measurements of object-oriented characteristics, we focus on the metrics of class inheritance hierarchies in design and maintenance. As such, we propose two simple and heuristic metrics for the class inheritance hierarchy for the maintenance of object-oriented software. In this paper we investigate the work of CK and Li, and extend their work to apply specifically to the maintenance of a class inheritance hierarchy. In doing so, we suggest new metrics for the understandability and modifiability of a class inheritance hierarchy. The main contribution here includes the various comparisons that have been made. We discuss the advantages over CK's metrics and Henderson-Sellers's metrics in the context of maintaining class inheritance hierarchies.
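For readers unfamiliar with inheritance-hierarchy metrics, two of the classic CK measures are easy to compute from a child-to-parent map. The toy classes below are hypothetical, and the paper's own understandability and modifiability metrics are not reproduced here; this only shows the kind of structural measurement involved.

    # Illustrative sketch only: depth of inheritance tree (DIT) and number of
    # children (NOC) over a made-up hierarchy.
    parent = {
        "Vehicle": None,
        "Car": "Vehicle",
        "Truck": "Vehicle",
        "SportsCar": "Car",
    }

    def dit(cls):
        """Number of ancestors between cls and the root of the hierarchy."""
        depth = 0
        while parent[cls] is not None:
            cls = parent[cls]
            depth += 1
        return depth

    def noc(cls):
        """Number of immediate subclasses of cls."""
        return sum(1 for child, p in parent.items() if p == cls)

    for c in parent:
        print(c, "DIT =", dit(c), "NOC =", noc(c))

Deeper and wider hierarchies tend to be harder to understand and modify, which is the intuition the maintenance-oriented metrics in the paper build on.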

21

A Review of Some Rigorous Software Design and Analysis Tools

Frederick Sheldon, Gaoyan Xie, Orest Pilskalns, and Zhihe Zhou, Software Focus (John Wiley & Sons) Vol. 2 Issue 4, pp. 140-150, Winter 2001.

ABSTRACT (pdf of full paper): The increasing maturity of formal methods cannot be attributed only to the formal notations and methodologies that are accessible to system designers. The development of powerful software tools that apply and facilitate the use of these notations and methodologies effectively has been crucial. In this paper, we survey some well-known software tools that have been deployed and used by both the academic and industrial sectors for rigorous design and analysis. The software tools are categorized by the notations and methodologies upon which they are based. We mainly discuss the tools' underlying formal methods, achievements and scope of applicability. We finish with future trends in the development of such software tools.

20

Validation of Guidance Control Software Requirements Specification for Reliability and Fault-Tolerance

Frederick Sheldon and Hye Yeon Kim, IEEE Proc. Reliability and Maintainability Symp. RAMS'2002, Seattle, WA, pp. 312-318, Jan. 28-31, 2002

ABSTRACT (pdf of full paper): A case study was performed to validate the integrity of a software requirements specification (SRS) for Guidance Control Software (GCS) in terms of reliability and fault-tolerance. A partial verification of the GCS specification resulted. Two modeling formalisms were used to evaluate the SRS and to determine strategies for avoiding design defects and system failures. Z was applied first to detect and remove ambiguity from a part of the Natural Language based (NL-based) GCS SRS. Next, Statecharts and Activity-charts were constructed to visualize the Z description and make it executable. Using this formalism, the system behavior was assessed under normal and abnormal conditions. Faults were seeded into the model (i.e., an executable specification) to probe how the system would perform. The result of our analysis revealed that it is beneficial to construct a complete and consistent specification using this method (Z-to-Statecharts). We discuss the significance of this approach, compare our work with similar studies, and propose approaches for improving fault tolerance. Our findings indicate that one can better understand the implications of the system requirements using the Z-Statecharts approach to facilitate their specification and analysis. Consequently, this approach can help to avoid the problems that result when incorrectly specified artifacts (i.e., in this case requirements) force corrective rework.

19

A Case Study: Validation of Guidance Control Software Requirements for Completeness, Consistency and Fault Tolerance

Frederick Sheldon, Hye Yeon Kim and Zhihe Zhou, IEEE Proc. of the Pacific Rim Dependability Conf. (PRDC'2001), Seoul, Korea, pp. 311-318, Dec. 17-20, 2001

ABSTRACT (pdf of full paper): In this paper, we discuss a case study performed for validating a Natural Language (NL) based software requirements specification (SRS) in terms of completeness, consistency, and fault-tolerance. A partial verification of the Guidance and Control Software (GCS) Specification is provided as a result of analysis using three modeling formalisms. Zed was applied first to detect and remove ambiguity from the GCS partial SRS. Next, Statecharts and Activity-charts were constructed to visualize the Zed description and make it executable. The executable model was used for specification testing and fault injection to probe how the system would perform under normal and abnormal conditions. Finally, a Stochastic Activity Networks (SANs) model was built to analyze how fault coverage impacts the overall performability of the system. In this way, the integrity of the SRS was assessed. We discuss the significance of this approach and propose approaches for improving performability/fault tolerance.

16-18
Proceedings 5th Int'l Workshop on Performability Modeling of Computer and Communication Systems (PMCCS-5), Erlangen, Germany, Arbeitsberichte des Instituts für Informatik (Vol. 34, No. 13, ISSN 0344-3515), Sept. 15-16, 2001
Next three articles:


Reliability Analysis of an Anti-lock Braking System using Stochastic Petri Nets (pp. 56-60)

Kshamta Jerath and Frederick Sheldon

ABSTRACT (Zipped paper and presentation): The "Reliability Analysis of an Anti-lock Braking System using Stochastic Petri Nets" is a work in progress and an extension to the work presented in the paper "Specification, Safety and Reliability Analysis Using Stochastic Petri Net Models"[9]. The current work attempts to model the Anti-lock braking sub-system of a vehicle system using Stochastic Petri Nets. The reliability analysis is undertaken with particular focus on coincident failures of components. The model is specified in the C-based Stochastic Petri Net language (CSPL), the input language for SPNP.

Integrating the CSP Formalism into the Möbius Framework for Performability Analysis (pp. 86-89)

Zhihe Zhou and Frederick Sheldon

ABSTRACT (Zipped paper and presentation): In the past two decades, a great deal of research has been conducted in the area of formal methods. Various formalisms have been studied and the corresponding tools developed. Formal methods have evolved as a means of making software and hardware systems, which are of ever-growing complexity, more dependable and of higher performance. However, except for some costly mission/safety critical systems, formal methods are seldom used. Factors that hamper the use of formal methods include initial cost, lack of expertise, etc. One major problem that system engineers face is how to choose an appropriate tool and formalism from a vast array when they decide to adopt formal method(s). Naturally, good tools will facilitate the popularity of formal methods. The Möbius framework provides mechanisms that accommodate different formalisms. Models from different formalisms can interact with each other. The Stochastic Activity Network (SAN) formalism has been successfully built in. We are attempting to integrate the Communicating Sequential Processes (CSP) formalism into the Möbius framework so as to fortify the usefulness of Möbius.

PCX: A Translation Tool from PROMELA/Spin to the C-Based Stochastic Petri Net Language (pp. 116-120)

Frederick Sheldon and Shuren Wang

ABSTRACT (Zipped paper and presentation): Stochastic Petri Nets (SPNs) are a graphical tool for the formal description of systems with features of concurrency, synchronization, mutual exclusion and conflict. SPN models can be described with an input language called CSPL (C-based SPN language). Spin is a generic verification system that supports the design and verification of distributed software systems. PROMELA (Protocol or Process Meta Language) is Spin's input language. This work provides the translation rules from PROMELA (a subset of PROMELA constructs) to CSPL, and also offers an experimental tool (PCX: PROMELA to CSPL Translator) and an approach to explore the specification and analysis of stochastic properties for distributed systems. The PCX tool translates the formal description of systems, written in PROMELA, into an SPN, represented by CSPL. The tool allows users to add stochastic properties, during or after the translation, which the PROMELA language does not provide. Translation of the PROMELA model to a CSPL specification allows the analysis of non-functional requirements (i.e., performability) through SPNP (Stochastic Petri Net Package). This is useful in the design and validation of performance where parameters such as failure rate or throughput are available. Moreover, certain structural and architectural features of software can be evaluated and considered within the context of Spin-verifiable properties. This approach provides additional flexibility to the PROMELA specification-modeling paradigm to include stochastic analysis of structural, functional and non-functional properties. Thus, PCX provides a practical bridge between verification and validation for system architects/software engineers.

15 Software Requirements Specification and Analysis Using Zed and Statecharts

Sheldon, F.T., and Kim, H. Y., Proc. Third Workshop on Formal Descriptions and Software Reliability, October 7, 2000

ABSTRACT (pdf of full paper | presentation): This paper presents a prototypical study of an embedded system requirement specification, used to establish the basis for a complete case study. We are interested in comparing different specification methods that accommodate the notion of state. A partial model of a NASA-provided Guidance and Control Software (GCS) development specification was employed. The GCS describes, in natural language, how software is used to control a planetary landing vehicle during the terminal phases of descent. Our ultimate goal is to develop a complete software requirement specification based on the IEEE Standard 830-1998.

The first step in the study was to derive a Zed description for a small portion of the system (Altitude Radar Sensor Processing [ARSP]). The ARSP module reads the altimeter counter provided by the radar and converts the data into a measure of distance to the planet surface. In the second step, Statecharts were developed to model and graphically visualize the Zed specified ARSP. Using Statemate we analyzed the specification for completeness and consistency. This was accomplished through the generation of activity-charts and simulations.

We present the results of this work and discuss the issues associated with comparing the two methods in terms of their ability to ascertain consistency and completeness of the final products. A more comprehensive assessment of tools publicly available for the specification, modeling and analysis of embedded systems is envisioned.

14 Stochastic Petri Nets and Discrete Event Simulation: A Comparative Study of Two Formal Description Methods

Sheldon, F.T. and Dugan, D., Third Wkshp on Formal Descriptions and Software Reliability, October 7, 2000

ABSTRACT: Two methods of obtaining quantitative information about a system are analytic modeling and Discrete Event Simulation (DES). The purpose of this paper is to compare and contrast these two modeling formalisms. Background information about each method is given to provide a basic understanding of each; references are provided for more information. This can be useful in deciding which modeling technique is the better method of describing a given system. One area that is briefly explored is how to construct a DES model from a Petri Net (PN) model. This conversion may be useful in those cases where direct analysis is intractable.
13 Specification, Safety and Reliability Analysis Using Stochastic Petri Net Models

Sheldon, F.T., Greiner, S.A. and Benzinger, M., IEEE Proc. Int'l Wkshp on Software Specification and Design, pp. 123-132, Nov. 5-7, 2000
ABSTRACT (pdf of full paper): In this study we focus on the specification and assessment of Stochastic Petri net (SPN) models to evaluate the design of an embedded system for reliability and availability. The system provides dynamic driving regulation (DDR) to improve vehicle drivability (anti-skid, -slip and steering assist). A functional SPN abstraction was developed for each of three subsystems that incorporate mechanics, failure modes/effects and model parameters. The models are solved in terms of the subsystem and overall system reliability and availability. Four sets of models were developed. The first three sets include subsystem representations for the TC (Traction Control), AB (Antilock Braking) and ESA (Electronic Steering Assistance) systems. The last set combines these systems into one large model. We summarize the general approach and provide sample Petri net graphs and reliability charts that were used to evaluate the design of the DDR in parts and as a whole.
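For intuition only: if the three subsystems were assumed to fail independently, each with a constant failure rate \lambda_i, the overall reliability would factor as a simple product,

    R_{sys}(t) = R_{TC}(t) \cdot R_{AB}(t) \cdot R_{ESA}(t), \qquad R_i(t) = e^{-\lambda_i t}.

The SPN models summarized above capture behavior, such as failure modes/effects and the combined-system interactions, that this independence assumption leaves out.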
12
Composing, Analyzing and Validating Software Models to Assess the Performability of Competing Design Candidates

Sheldon, F.T. and Greiner, S.A., Annals of Software Engineering, Spec. Issue on Software Reliability, Testing and Maturity, Vol. 8, pp. 239-287, 1999

ABSTRACT (html version of full paper | pdf of full paper): Modern high-assurance systems share five key attributes: (1) reliable, meaning they are correct, (2) available, meaning they remain operational, (3) safe, meaning they are impervious to catastrophe (fail-safe), (4) secure, meaning they will never enter a hazardous state, and (5) timely, meaning their results will be produced on time and satisfy deadlines (i.e., timing correctness). The correctness, safety and robustness of a critical system specification are generally assessed through a combination of rigorous specification capture and inspection; formal modeling and analysis of the specification; and execution and/or simulation of the specification (or possibly a model of such). In a perfect world, verification and validation of a software design specification could be possible before any code was written (or generated). Indeed, in a perfect world we would know that the implementation was correct because we could trust the class libraries, the development tools, verification tools and simulations etc., to provide the confidence needed to know that all aspects (including complexity, logical and timing correctness) of the design were satisfied fully and correctly (i.e., everything was right). Right in the sense that we built it right (it is correct with respect to its specification) and it solves the right problem. In our view of the world, we constrain those classic notions first to verifying that at least all of the possibly bad things we could think of cannot happen, or at least that the chances of them happening are well bounded. Second, that the performability of the models of what we plan (or propose) to build is determined to be adequate with respect to function, structure and behavior and the assumptions of the operating environment and potential hazards. Therefore, it is useful to develop and validate methods and tools for the creation of safe and correct software based on the premise that it is not a perfect world. Moreover, our goal is to continue to develop and refine our approach as an open framework coupled with useful formal representation and analysis of software components and architectures that relate specifications to programs and programs to behavior.

This paper considers the modeling and analysis of systems expressed using formal notations. We motivate the need for tool-supported rigorous methods for reasoning about software and systems, and introduce a framework codified by the modeling cycle. We introduce some systematic formal techniques for the creation and composition of software models through a process of abstraction and refinement, and enumerate several formal modeling techniques within this context (i.e., reliability and availability models, performance and functional models, performability models etc.). This discussion includes a more precise discourse on stochastic methods (i.e., DTMC and CTMC) and their formulation. In addition, we briefly review the underlying theories and assumptions that are used to solve these models for the measure of interest (i.e., simulation, numerical and analytical techniques). Finally, we present a small example that employs generalized stochastic Petri nets.
11 Tool-Based Approach to Distributed Database Design: Includes Web-Based Forms Design for Access to Academic Affairs Data

Owens, David A. and Sheldon, F.T., ACM Proc. Symp. on Applied Computing, San Antonio, TX, pp. 227-231, Feb. 28 - Mar. 2, 1999

ABSTRACT (pdf of full paper): This paper describes a tool-based approach for designing and prototyping a distributed database application. This approach is demonstrated for an Academic Affairs Information System (AAIS) to assist the Webster University main campus and its 70+ remote sites in managing the information required to admit students, approve programs, schedule courses, assign faculty, register students, and generate the required queries and reports. ORACLE® Relational Database Management (RDBMS) tools and products for Windows NT® were used to support AAIS requirements analysis, design, and prototype implementation. The Designer/2000® Process Modeler tool was used to document the top-level business functions, and the Data Modeler tool was used to develop a third normal form data model. The Developer/2000® Forms tool was used to prototype several user interface forms for main campus staff, remote staff, and students to enter and update student and program data. A Web Server was also installed, along with the Java software and AppletViewer, to test the prototype forms from a Web Browser. Keywords: Distributed, Database, ORACLE®, Web, and Academic Affairs.
10
Analysis of Real-Time Concurrent Systems Models Based on CSP Using Stochastic Petri Nets
Sheldon, F.T., SCS Proc. 12th European Simulation Multiconference, pp. 776-783, June 16-19, 1998

ABSTRACT (pdf of full paper): Theoretical models like CSP (Communicating Sequential Processes) and CCS (Calculus of Communicating Systems) describe concurrent computations that synchronize. Such models define independent system entities or processes that cooperate by explicit communication. In safety critical systems these communications represent visible actions which, if they do not occur or are delayed beyond their deadline, will cause a failure to occur. This paper addresses the real-time and reliability analysis of specifications for concurrent systems. We provide a basic example to illustrate how to link failure behavior to specification characteristics. The approach converts a formal system description into the information needed to predict its behavior as a function of observable parameters (i.e., topology, fault-tolerance, deadline and resource allocation, communications and failure categories). The CSP-based specifications are automatically translated into Petri nets (PNs) using a tool we have developed. The tool uses a set of algorithms which codify the translations between essential CSP constructs and their PN counterparts. The PNs are represented as coincidence matrices. In the Petri net form we are able to perform various analyses. We give a simple example to show the analytical derivation of timing failure probability and reliability predictions for a candidate railroad specification. The term "CSP-based" is used here to distinguish between the exact notation of Hoare's original CSP and our textual representations, which are similar to Occam 2. Our CSP-based grammar is sufficient to preserve the structural properties of the original specification. Consideration of other CSP properties (e.g., traces, refusal sets, livelock, etc.) is not precluded but is also not considered here. Keywords: Formal specification, CSP, Stochastic Petri Nets, Reliability analysis, Markov models.
9
Specification and Analysis of Real-Time Systems Using CSP and Petri Nets
Kavi, K.M., Sheldon, F.T. and Reed, S.C., Int'l Jr of Software Engineering and Knowledge Engineering, pp. 229-248, June 1996.

ABSTRACT (pdf of full paper): Formal methods such as CSP (Communicating Sequential Processes) are widely used for reasoning about concurrency, communication, safety and liveness issues. Some of these models have been extended to permit reasoning about real-time constraints. Yet, the research in formal specification and verification of complex systems has often ignored the specification of stochastic properties of the system under study. We are developing methods and tools to permit stochastic analyses of CSP-based specifications. Our basic objective is to evaluate candidate design specifications by converting formal systems descriptions into the information needed for analysis. In doing so, we translate a CSP-based specification into a Petri net which is analyzed to predict system behavior in terms of reliability and performability as a function of observable parameters (e.g., topology, fault-tolerance, deadlines, communications and failure categories). This process can give insight into further refinements of the original specification (i.e., identify potential failure processes and recovery actions). Relating the parameters needed for performability analysis to user level specifications is essential for realizing systems that meet user needs in terms of cost, functionality, and other non-functional requirements.

The translation from CSP-based specifications is currently manual (while a tool to permit automatic translation is under development). An example translation is shown (in addition, some general examples of CSP -> Petri net translations are given in Appendix A). Based on this translation, we report both the discrete and continuous time Markovian analysis which provides reliability predictions for the candidate specification. The term "CSP-based" is used here to distinguish between the notation of Hoare's original CSP and our textual representations, which are similar to Occam. Our CSP-based grammar does not restrict consideration of the properties of CSP (traces, refusal sets, livelock, etc.), but we do not consider those properties here; we are only interested in ensuring that the structural properties are preserved. We define performability as a measure of the system's ability to meet deadlines in the presence of failures and variance in task execution times. Keywords: Real-Time Systems, CSP, Stochastic Petri Nets, Performability and Reliability
8
Stochastic Analysis of CSP Specifications Using a CSP-to-Petri Net Translation Tool: CSPN
Sheldon, F.T., IEEE Proc. MetroCon, Arlington, TX, Feb. 1996.

ABSTRACT (pdf of full paper): An experimental tool and approach has been developed to explore the specification and analysis of stochastic properties for concurrent systems expressed using CSP (communicating sequential processes). The approach is to translate a formal system description into the information needed to predict its behavior as a function of observable parameters. The idea uses a theory based on proven translations between CSP and Petri nets (PNs). In particular, the tool translates the design specification, written in a textual based CSP dialect named P-CSP, into stochastic Petri nets for analysis based on the structural and stochastic properties of the specification. The grammar and CSP-to-Petri net (CSPN) tool enable service and failure rate annotations to be related back into the original CSP specification. The annotations are then incorporated in the next round of translations and stochastic analysis. The tool therefore automates the analysis and iterative refinement of the design and specification process. Within this setting, the designer can investigate whether functional and non-functional requirements can be satisfied. Keywords: Specification modeling, dependable systems, process algebras, Petri net translation tool.
7
Linking Software Failure Behavior to Specification Characteristics II
Sheldon, F.T., and Kavi, K.M., IEEE Proc. 4th Int'l Wkshp on Evaluation Techniques for Dependable Systems, San Antonio, Oct. 1995.

ABSTRACT: This research addresses the specification and analysis of stochastic properties for concurrent real-time systems expressed using CSP (Communicating Sequential Processes). The main interest is in improving the dependability and fault-tolerance of computing systems by devising techniques to evaluate, prevent, detect and compensate for anomalies. In our current effort, the goal is to (1) translate the formal system description into the information needed to predict its behavior as a function of observable parameters (topology, fault-tolerance, timeliness, communications and failure categories), (2) relate stochastic parameters back to the user CSP specification level, and (3) enable the designer to consider the costs relative to possible compromises to other factors in the design equation.
6
Reliability Analysis of CSP Specifications: A New Method Using Petri Nets
Sheldon, F.T., Kavi, K.M., and Kamangar, F.A., AIAA Proc. Computing in Aerospace 10, pp. 317-326, March 1995.

ABSTRACT (pdf of full paper and presentation): Theoretical models like CSP and CCS describe computation using synchronization. Such models define independent system entities or processes that cooperate by explicit communication. In safety critical systems these communications represent visible actions which, if they do not occur or are delayed beyond their deadline, will cause a failure to occur. This paper describes the basic methodology for converting a formal description of a system into the information needed to predict system behavior as a function of observable parameters. Currently under development is a tool to permit stochastic analyses of CSP-based system specifications. The CSP-based grammar used by this tool is presented and isomorphisms between CSP-based specifications and Petri net-based stochastic models are shown. A brief example of the translation between these two formalisms is given along with (1) an analytical derivation of timing failure probability and cost minimization, and (2) discrete and continuous time Markovian analysis which provide reliability predictions for candidate designs. The translation process is currently being automated. Keywords: Formal specification, CSP, Stochastic Petri Nets, Reliability analysis, Markov models.
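As a simple illustration of the kind of timing-failure derivation mentioned above (under a common textbook assumption, not taken from the paper, that the time T to complete a critical communication is exponentially distributed with rate \mu), a deadline d is missed with probability

    \Pr[T > d] = e^{-\mu d},

so an expected penalty of c \cdot e^{-\mu d} for a miss cost c can be weighed against the cost of a faster or more redundant design.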
5
Reliability Analysis of CSP Specifications Using Petri Nets and Markov Processes
Kavi, K.M., Sheldon, F.T., Shirazi, B. and Hurson, A.R., IEEE Proc. Hawaii Int'l Conf. on System Sciences, pp. 516-524, March 1995.

ABSTRACT (pdf of full paper): In our research we are developing methodologies and tools to permit stochastic analyses of CSP-based system specifications. In this regard, we have been developing morphisms between CSP-based models and Petri net-based stochastic models. This process has given us insight for further refinements to the original CSP specifications (i.e., identify potential failure processes and recovery actions). In order to create systems that meet user needs in terms of cost, functionality, performance and reliability, it is essential to relate the parameters needed for reliability analysis to the user level specification. Keywords: Formal specification, CSP, Petri Nets, Reliability analysis, Markov models.
4
Specification of Stochastic Properties with CSP
Kavi, K.M. and Sheldon, F.T., IEEE Proc. 4th Int'l Conf. on Parallel and Distributed Systems, Taiwan, ROC, pp. 288-293, Dec. 1994.

ABSTRACT (pdf of full paper): Formal methods such as CSP (Communicating Sequential Processes), CCS (Calculus of Communicating Systems) and Dataflow based process models are widely used for formal reasoning in the areas of concurrency, communication, and distributed systems. The research in formal specification and verification of complex systems has often ignored the specification of stochastic properties of the system. We are exploring new methodologies and tools to permit stochastic analysis of CSP-based system specifications. In doing so, we have investigated the relationship between specification models and stochastic models by translating the specification into another form that is amenable to such analyses (e.g., from CSP to stochastic Petri Nets). This process can give insight for further refinements of the original specification (i.e., identify potential failure processes and recovery actions). It does this by relating the parameters needed for reliability analysis to user-level specifications, which is essential for realizing systems that meet users' needs in terms of cost, functionality, performance and reliability. Keywords: Formal Specification, CSP, Petri Nets, Reliability Analysis, Markov Models.
3
Reliability Prediction of Distributed Embedded Fault-Tolerant Systems
Sheldon, F.T., Mei, Hsing, and Yang, S.M., IEEE Proc. 4th Int'l Symp. on Software Reliability Engineering, pp. 92-102, Nov. 1993.

Reprinted in Proc. 4th Int'l Conf. on Applications of Software Measurement, 10 pages, 27 Refs., 0.5 hrs., Nov. 1993 (Awarded Runners-Up Best Paper). Invited paper in cooperation with ASQC and Centre for Software Reliability (City University, London).

ABSTRACT: A new reliability model is introduced for selecting the best software fault-tolerant (FT) design. This model uses a task graph technique that allows different candidate FT configurations to be analyzed based on the structure and organization of different distributed embedded systems. Reliability prediction with this approach can be useful for addressing system dependability issues (i.e., fault detection/recovery processes and steady-state availability) in addition to ascertaining fault coverage (i.e., the likelihood of missing and/or false alarms). The results of analyzing three different Simplified Unmanned Vehicle Systems FT configurations are presented. This work is described within the framework of the Conservative and Do-Best FT design policies and is consistent with a software development model for real-time control systems introduced in earlier work by the authors. Keywords: Reliability modeling and prediction, distributed embedded systems, fault-tolerant software.
2
Reliability Measurement: From Theory to Practice

Sheldon, F.T., Kavi, K.M., Everett, W.W., Brettschneider, R., Yu, J.T., and Tausworthe, R.C., IEEE Software - Spec Issue on Applications of Software Reliability Models, pp. 13-20, July 1992.

ABSTRACT (pdf of full paper): Pressure on software engineers to produce high-quality software and meet increasingly stringent schedules and budgets is growing. In response, reliability measurement has become a significant factor in quantitatively characterizing quality and determining when to release software on the basis of predetermined reliability objectives. Keywords: Operational profile, software defects and classification, software life-cycle.
Dissertation
1 Specification and Analysis of Stochastic Properties for Concurrent Systems Expressed Using CSP
Ph.D. Dissertation, Computer Science and Engineering Dept., The University of Texas at Arlington, 260 Refs., May 1996.
Awarded Outstanding Dissertation ('96-97) UTA Chapter of the Sigma Xi Scientific and Research Society

ABSTRACT: This work offers an innovative approach to predicting system behavior (in terms of reliability and performance) based primarily on the structural characteristics of a formal functional specification. This work extends parts of the work by E-R. Olderog, by developing a CSP-based grammar and canonical CSP-to-Petri net translation rules for process composition and decomposition. The mechanism for process composition is codified in the CSP-to-Stochastic Petri net (CSPN) tool and consists of expanding the process description, represented as a series of small Petri nets, into larger and larger nets while preserving structural relationships and functional nomenclature. In the last phase, the tool reconciles synchronization points (for communicating processes) and stochastic annotations, and generates an executable "spnp.c" file used for stochastic analysis. Numerous command line options provide a high degree of versatility and control to the user, including the ability to generate and view the Petri net graph. CSPN supports systematic specification, automatic translation and subsequent augmentation (e.g., failure rates, service rates, and transition probabilities) of the resultant Petri nets for assessing stochastic properties of different candidate implementations and relating those properties back to the specification level.

The CSPN tool and methodology are based on the sound formalism of CSP. The approach abstracts the critical information necessary for performance analysis and translates it to a Petri net for exploring feasible and critical markings and subsequent analysis of the Markov state space. The CSP-based language, P-CSP, is used for system specification. The CSPN tool parses the P-CSP specification and, using the set of canonical translation rules, produces equivalent Petri nets represented as coincidence matrices.

A pdf version of this work is available here:

Click here for the complete dissertation.
Chapters 0-3,
Chapters 4-7,
Appendix, and
Bibliography.