Booklet: Operations
Section: Risk Monitoring and Reporting

Regular risk monitoring provides management and the board with assurance that established controls are functioning properly. Comprehensive MIS reports are important tools for validating that IT operations are performing within established parameters. Examples of MIS include reports on hardware and telecommunications capacity utilization, system availability, user access, system response times, on-time processing, and transaction processing accuracy. Periodic control self-assessments allow management to gauge performance as well as the criticality of systems and emerging risks. Control self-assessments, however, do not eliminate the need for internal and external audits. Audits provide independent assessments conducted by qualified individuals regarding the effective functioning of operational controls. For additional detailed information on the IT audit function, refer to the IT Handbook’s “Audit Booklet.”

Management should regularly monitor technology systems—whether centralized or decentralized at business lines, support functions, affiliates, or business partners—to ensure resources are operating properly, used efficiently, and achieving the desired results predictably. Effective monitoring and reporting help identify insufficient resources, inefficient use of resources, and substandard performance that detract from customer service and product delivery. Monitoring and reporting also support proactive systems management that can help the institution position itself to meet its current needs and plan for periods of growth, mergers, or expansion of product lines.

Management should conduct performance monitoring for outsourced technology solutions as part of a comprehensive vendor management program. Reports from service providers should include performance metrics and identify the root causes of problems. Where service providers are subject to SLAs, management should ensure the provider complies with identified action plans, remediation requirements, or performance penalties. Vendor performance results should be considered in combination with internal performance as part of sound capacity planning.

PERFORMANCE MONITORING
Performance monitoring and management involve measuring operational activities, analyzing the resulting metrics, and comparing them with internally established standards and industry benchmarks to assess the effectiveness and efficiency of existing operations. Measurable performance factors include resource usage, operations problems, capacity, response time, and personnel activity. Management should also review metrics that assess business unit and external customer satisfaction. Diminished system or personnel performance not only affects customer satisfaction but can also result in noncompliance with contractual SLAs, which could trigger monetary penalties. Refer to the IT Handbook’s “Outsourcing Technology Services Booklet” for more detailed information.

Where economically practicable, management should automate monitoring and reporting processes. Large mainframe systems offer numerous automated tools at the application and operating system levels for generating technology- and process-related metrics. Mid-range systems also typically possess native capabilities for capturing and reporting performance data, and after-market reporting tools and vendor-supplied performance analysis tools are available for them. Client-server systems are not always equipped with analysis and reporting tools; management often must decide between purchasing expensive after-market tools to automate data gathering and reporting or generating the reports manually.

Much of IT operations can and should be subject to measurement, scaled to the size and complexity of the institution. The information gained from analysis not only supports daily management of operations and early diagnosis of impending problems, but also provides the baseline and trend data used in capacity planning.

Examples of technology-related metrics include:

- Central processing unit (CPU) utilization by application or time of day;
- Network availability;
- On-line performance measurements, including availability, response time, distribution of access types by service, or average connect time;
- Voice response unit (VRU) performance (e.g., calls answered, average talk time, average wait time, call distribution by time of day, distribution of information access requests); and
- Abnormal program endings.
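As an illustration, the availability and response-time metrics listed above reduce to straightforward calculations over raw monitoring records. The following is a minimal sketch only; the record layout, sample values, and field meanings are hypothetical and not prescribed by this booklet:

```python
# Sketch: derive system availability and average response time from
# hypothetical polling records. All sample data are illustrative.

from statistics import mean

# Each record: (system up during the polling interval?, response time in ms)
samples = [
    (True, 180), (True, 210), (False, None), (True, 195),
    (True, 250), (True, 175), (True, 205), (False, None),
]

up_count = sum(1 for up, _ in samples if up)
availability_pct = 100.0 * up_count / len(samples)

response_times = [ms for up, ms in samples if up and ms is not None]
avg_response_ms = mean(response_times)

print(f"Availability: {availability_pct:.1f}%")          # 75.0%
print(f"Average response time: {avg_response_ms} ms")
```

In practice such figures would be produced by the automated tools discussed earlier rather than ad hoc scripts, but the underlying arithmetic is the same.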

Examples of operations performance metrics include the following:

- Check processing – statement processing:
  - Percent of statements mailed by internal guideline;
  - Percent of exception statements mailed by internal guideline; and
  - Percent and volume of mismatched debit and credit items.

- Item processing – proof of deposit:
  - Average fields encoded per hour;
  - Ratio of errors per number of items encoded; and
  - Percentage of overall rejects for items captured.

- Operations:
  - Total debit and credit transactions for the month;
  - Percent of unposted items resolved the same day received; and
  - Percent of unposted items versus total posted debits and credits.

- Imaging operations:
  - Volume and percentage by document type (e.g., new account documents, loan documents, item processing) scanned and processed within internal guidelines;
  - Volume of transactions; and
  - Number of errors reported and percent of total maintenance volume.

- Electronic funds transfer and electronic banking:
  - Number of wire processing errors caused by the department and percent of total volume;
  - Number of wires not processed due to failure to execute; and
  - Number of incidents reported and compensation paid due to department processing errors.

- Technology services – IT help desk:
  - Volume of calls received;
  - Percent of calls dropped compared to internal guidelines;
  - Average incoming call wait time compared to internal guidelines; and
  - Average duration of incoming calls.

- Human resources:
  - Actual versus budgeted staff size;
  - Department or group staffing compared with averages for the two previous years and budgeted headcount; and
  - Percent of staff with required training or certification.
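For example, the unposted-item ratios in the "Operations" group above are simple percentages over monthly volumes. The figures below are invented solely for illustration:

```python
# Illustrative calculation of two operations metrics from the list above.
# All volumes are hypothetical.

total_posted = 1_250_000        # debits and credits posted during the month
unposted = 340                  # items that failed to post
resolved_same_day = 306         # unposted items resolved the day received

pct_resolved_same_day = 100.0 * resolved_same_day / unposted
pct_unposted_vs_posted = 100.0 * unposted / total_posted

print(f"Unposted items resolved same day: {pct_resolved_same_day:.1f}%")  # 90.0%
print(f"Unposted vs. total posted: {pct_unposted_vs_posted:.3f}%")        # 0.027%
```

Tracking these ratios against internal guidelines over successive months is what turns the raw counts into the trend data referenced throughout this section.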


CAPACITY PLANNING

Capacity planning involves the use of baseline performance data to model and project future needs. Capacity planning should address internal factors (growth, mergers, acquisitions, new product lines, and the implementation of new technologies) and external factors (shifts in customer preferences, competitor capability, or regulatory or market requirements). Management should monitor technology resources for capacity planning, including platform processing speed, core storage for each platform’s central processing unit, data storage, and voice and data communication bandwidth.
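The projection step described above can be illustrated with a simple least-squares trend line fitted to baseline utilization data. This is a sketch only; the quarterly figures and the 80 percent planning threshold are assumptions made for illustration:

```python
# Sketch: fit a linear trend to baseline CPU-utilization observations and
# project when an assumed planning threshold is crossed. Sample data and
# the 80% threshold are hypothetical.

def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

# Average CPU utilization (%) over the past six quarters.
quarters = [1, 2, 3, 4, 5, 6]
cpu_util = [52.0, 55.5, 58.0, 61.5, 64.0, 67.5]

slope, intercept = linear_fit(quarters, cpu_util)

# Step forward until the assumed 80% planning threshold is reached.
q = 7
while slope * q + intercept < 80.0:
    q += 1

print(f"Trend: {slope:.2f}% per quarter; threshold reached around quarter {q}")
```

A real capacity plan would weigh such a trend alongside the internal and external factors noted above (growth, mergers, new products) rather than rely on extrapolation alone.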

Capacity planning should be closely integrated with the budgeting and strategic planning processes. It also should address personnel issues including staff size, appropriate training, and staff succession plans.

CONTROL SELF-ASSESSMENTS
Control self-assessments validate the adequacy and effectiveness of the control environment and facilitate early identification of emerging or changing risks. Management should base the frequency of control self-assessments on the risk assessment process and should coordinate them with the internal audit plan. Control self-assessments are not a substitute for a sound internal audit program. The audit function should review the self-assessments for quality and accuracy. Internal audit also may reference the self-assessments as part of the audit risk assessment process and may use them to plan the scope of audit work.

Depending on the size and complexity of the institution, the content and format of the control self-assessment may be standardized and comprehensive or highly customized, focusing on a specific process, system, or functional area. IT operations management should collaborate with the internal audit function in creating the templates used. Typically, the self-assessment form combines narrative responses with a checklist. The form should identify the system, process, or functional area reviewed and the person(s) completing and reviewing the form. In general, the form should address the broad control topics in this booklet, including policies, standards, and procedures, as well as the specific controls implemented. Management review and analysis of reported events is an important supplement to the control self-assessment process. Forensic review of events and their resolution provides valuable insight into the effectiveness of the control environment and any need for additional controls.