Improving the Performance of Technical Systems, Sustaining the Productivity of a Company

Information Systems Evaluation (ISE) is the process of systematically assessing the performance, value, and success of a system to determine whether it meets organizational goals. The evaluation analyzes tangible and intangible benefits, costs, risks, and technical performance, and can be conducted using methods such as goal-based, goal-free, or criteria-based evaluation. The aim is to ensure the system delivers value, supports decision-making, and can be improved over time.

Information Systems Evaluation (ISE)

Key Components of IS Evaluation

  • Tangible and intangible benefits: Evaluate both the measurable, financial advantages and the less quantifiable, non-financial benefits the system provides.

  • Costs: Account for all costs, including hardware, software, development, integration, and training.

  • Risk analysis: Assess the risks associated with the investment in the system.

  • Technical performance: Evaluate the performance of the hardware, software, networks, and data (a latency-measurement sketch follows this list).

  • User satisfaction and effectiveness: Measure how well the system is used and how satisfied users are with its performance.
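
To make the technical-performance item concrete, here is a minimal measurement sketch: it times repeated calls to a system and reports latency percentiles. The `check_health` stub and the sample count are illustrative assumptions; a real evaluation would call the actual service and also track throughput and error rates.

```python
import statistics
import time

def check_health() -> None:
    """Hypothetical stand-in for one request to the system under evaluation."""
    time.sleep(0.01)  # simulate ~10 ms of work

def measure_latency(samples: int = 200) -> dict:
    """Time repeated calls and summarize response times in milliseconds."""
    latencies = []
    for _ in range(samples):
        start = time.perf_counter()
        check_health()
        latencies.append((time.perf_counter() - start) * 1000)
    cuts = statistics.quantiles(latencies, n=100)  # 99 percentile cut points
    return {
        "p50_ms": round(cuts[49], 1),  # median response time
        "p95_ms": round(cuts[94], 1),  # tail latency
        "max_ms": round(max(latencies), 1),
    }

print(measure_latency())
```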

Key Considerations for Evaluation

  • Context matters: The specific methodology and focus of an evaluation will depend on the type of system and the goals of the evaluation.

  • Integrated process: Evaluation should be an ongoing process, integrated into the planning, development, and acquisition phases of the system lifecycle.

  • Organizational learning: The results of the evaluation should be used to facilitate organizational learning and improve future IS projects.

  • Complexity: Evaluating information systems is a complex task because of the variety of benefits, costs, and risks involved and the interplay between technical and human elements.

Purposes of IS Evaluation

Organizations conduct IS evaluations for a variety of reasons, including:

  • Decision Making: To make informed decisions about whether to develop, modify, continue, or abandon an information system.

  • Performance Improvement: To identify areas for improvement in quality, efficiency, and maintenance of current systems or future projects.

  • Accountability: To justify the often significant investments in IT and demonstrate a return on investment (ROI), including both tangible and intangible benefits.

  • Organizational Learning: To learn from past experiences and improve future system development practices.

Types and Timing of Evaluation

Evaluations can be classified based on when they occur and their objectives:

  • Formative Evaluation: Conducted during the design and development process to provide feedback for immediate improvement.

  • Summative Evaluation: Performed after the system has been completed and implemented to assess the overall performance and success of the final product.

  • Process Evaluation: Examines how well the project and its processes were planned and managed, as distinct from what the finished system delivers.

  • Outcome Evaluation: Assesses the final impact and results of the system.

Key Evaluation Criteria and Models

Evaluating an IS requires a comprehensive set of criteria that goes beyond purely technical aspects. The widely recognized DeLone and McLean IS Success Model proposes six key dimensions for evaluation (a composite-scoring sketch follows the list):

  • System Quality: The technical quality of the information system itself (e.g., reliability, response time, ease of use).

  • Information Quality: The quality of the output generated by the system (e.g., accuracy, relevance, timeliness).

  • Service Quality: The quality of the support provided by the IT department or vendors.

  • Use: The extent to which users interact with the system.

  • User Satisfaction: The level of contentment users have with the system and its outputs.

  • Individual & Organizational Impact: The effects of the system on individual performance and, ultimately, on the organization's performance, profitability, and strategic goals (combined as "net benefits" in the updated model).
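
As a rough illustration of how these six dimensions might be rolled up into a single score, the sketch below combines survey-style 1-5 ratings with weights. Both the ratings and the weights are invented for illustration; the DeLone and McLean model itself does not prescribe a weighting scheme.

```python
# Hypothetical 1-5 ratings per dimension, e.g. gathered from surveys and audits.
ratings = {
    "system_quality": 4.2,
    "information_quality": 3.8,
    "service_quality": 4.0,
    "use": 3.5,
    "user_satisfaction": 3.9,
    "net_impact": 3.6,  # individual & organizational impact
}

# Illustrative weights that sum to 1; the model does not prescribe these.
weights = {
    "system_quality": 0.15,
    "information_quality": 0.15,
    "service_quality": 0.10,
    "use": 0.15,
    "user_satisfaction": 0.20,
    "net_impact": 0.25,
}

score = sum(ratings[d] * weights[d] for d in ratings)
print(f"Composite IS success score: {score:.2f} / 5")
```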

Other commonly applied criteria include economic effectiveness (cost-benefit analysis), risk exposure, and alignment with strategic objectives.
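
A minimal cost-benefit sketch, assuming invented figures: it discounts the yearly net benefits of a hypothetical IS investment to compute net present value (NPV) and a simple ROI. The cash flows and the 8% discount rate are placeholders, not data from any real project.

```python
def npv(rate: float, cash_flows: list[float]) -> float:
    """Net present value; cash_flows[0] is the upfront (negative) outlay."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical project: $120k upfront, then four years of net benefits.
flows = [-120_000, 45_000, 50_000, 55_000, 55_000]

value = npv(0.08, flows)  # 8% discount rate is an assumption
roi = (sum(flows[1:]) + flows[0]) / -flows[0]  # simple, undiscounted ROI
print(f"NPV: ${value:,.0f}  simple ROI: {roi:.0%}")
```

A positive NPV at the chosen discount rate is one common tangible-benefit signal; intangible benefits still require the qualitative criteria above.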

Methodological Approaches

The method chosen for evaluation often depends on the objectives and context:

  • Goal-Based Evaluation: Measures the extent to which the system meets predefined, specific objectives.

  • Goal-Free Evaluation: An inductive approach that looks for all actual outcomes, both intended and unintended, without being constrained by initial goals.

  • Criteria-Based Evaluation: Uses explicit general standards or checklists (e.g., usability heuristics, industry standards like ISO) as a yardstick for assessment; a checklist sketch follows this list.

  • Interpretive/Social Approaches: View IS as social systems with technical components, focusing on the subjective experiences, perceptions, and negotiations among stakeholders.
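
To illustrate the criteria-based approach, the sketch below scores a system against a small pass/fail checklist loosely inspired by usability heuristics. The criteria and results are made up; a real assessment would use an agreed standard (for example, an ISO-based checklist) with documented evidence for each item.

```python
# Hypothetical checklist results: criterion -> passed?
checklist = {
    "visibility of system status": True,
    "error messages are actionable": False,
    "response time under 2 s for key tasks": True,
    "help and documentation available": True,
    "consistent terminology across screens": False,
}

passed = sum(checklist.values())
print(f"Criteria met: {passed}/{len(checklist)} ({passed / len(checklist):.0%})")
for criterion, ok in checklist.items():
    print(f"  [{'x' if ok else ' '}] {criterion}")
```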

Overall, effective IS evaluation requires considering diverse perspectives from all stakeholders (users, developers, management, customers), using a mix of quantitative and qualitative methods, and integrating the evaluation process throughout the system's entire lifecycle.