Methodology
Simplifying decisions, not decision logic
Advanced Decision Methodology

SEAS captures complex human decision logic and reapplies it to selection decisions. Our proprietary evaluation methodology expresses the needs of users precisely and completely. With hundreds of preference functions to capture buyer requirements and the power to handle tradeoffs and exceptions, the LSP methodology offers a proven advantage over other methods in use today. Whereas widespread scoring methods such as the weighted average can lead to unreliable results, SEAS delivers accurate and fast evaluations. Our goal is to let decision makers focus on budget and business decisions, while our consultants handle all the complexities of a comparison and selection project.

The LSP Method

Logic Scoring of Preference (LSP) is a cutting-edge, proprietary evaluation methodology with theoretical foundations in continuous logic and advanced optimization techniques. LSP has been used in over 50 professional projects involving evaluation, optimization, comparison, and selection of products and services.

LSP: Eight Basic Steps in a Selection Process

Evaluating a complex system involves specifying criteria for all system attributes, aggregating preferences and data, and finally, quantifying the fit between the system and the requirements. The LSP method includes eight basic steps:

  1. Feasibility study
  2. Specification of performance variables
  3. Definition of elementary criteria
  4. Specification of the preference aggregation structure
  5. Request for proposals
  6. Preparation of proposals
  7. System evaluation, comparison, and selection using cost-preference analysis
  8. Contracting, equipment installation, and acceptance test

Performance Variables

Performance variables are the individually evaluated characteristics of the analyzed systems. For example, the majority of computer systems can be evaluated using 40 to 120 performance variables drawn from four main groups:

  • hardware
  • software
  • performance and availability
  • vendor support

The performance variables are derived using a hierarchical decomposition of the above groups.
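As a minimal sketch of this hierarchical decomposition, the four groups can be refined into subgroups whose leaves are the individual performance variables. The tree below is purely illustrative; the group names come from the list above, but every subgroup and variable name is an assumption, not part of the SEAS/LSP specification.

```python
# Hypothetical decomposition tree: the four main groups are from the text;
# all subgroup and leaf names below are illustrative assumptions.
TREE = {
    "hardware": {"cpu": ["clock_rate", "cache_size"],
                 "memory": ["capacity", "bandwidth"]},
    "software": {"os": ["reliability", "security"]},
    "performance_and_availability": {"throughput": ["tps"],
                                     "uptime": ["availability_pct"]},
    "vendor_support": {"maintenance": ["response_time_hours"]},
}

def leaves(node):
    """Flatten the decomposition tree into its leaf performance variables."""
    if isinstance(node, dict):
        return [v for child in node.values() for v in leaves(child)]
    return list(node)

performance_variables = leaves(TREE)
```

A real project would carry this decomposition two or three levels deep until each leaf is a directly measurable or directly assessable attribute.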

Elementary Criteria

Elementary criteria define how to evaluate performance variables. The result of an evaluation is a value called the elementary preference, which can be interpreted as the percentage of satisfied requirements. For example, let R be the response time of an interactive terminal, and let E be the corresponding elementary preference. If we consider that R<1 sec perfectly satisfies our requirements and R>6 sec is fully unacceptable, then the corresponding elementary criterion can be expressed as a function E(R).

Using E(R) we can compute the elementary preference E for any value of the response time R.
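The response-time example can be sketched as a small function. The two breakpoints (1 sec fully satisfactory, 6 sec fully unacceptable) are from the text; the linear interpolation between them is an assumption, since the original figure defining E(R) is not reproduced here.

```python
def elementary_preference(r_seconds):
    """Elementary criterion E(R) for terminal response time.

    Per the example in the text: R <= 1 sec fully satisfies the
    requirement (E = 100), and R >= 6 sec is fully unacceptable (E = 0).
    The linear interpolation between the breakpoints is an assumed
    shape, not taken from the original criterion function.
    """
    if r_seconds <= 1.0:
        return 100.0
    if r_seconds >= 6.0:
        return 0.0
    return 100.0 * (6.0 - r_seconds) / 5.0
```

For instance, a measured response time of 3.5 sec would yield an elementary preference of 50% under this assumed linear shape.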

Preference Aggregation

For each competitive system we use elementary criteria to compute n elementary preferences. Using a stepwise aggregation technique, the elementary preferences are combined to yield the global preference. The aggregation process is based on a set of sophisticated continuous preference logic operators with high expressive power, capable of modeling the most complex logic relationships and exactly reflecting all user-specific requirements.

Complex Criterion

Combining elementary criteria with a preference aggregation structure yields the model of a complex criterion.


The end user participates in the creation of the criterion function. This approach gives the ultimate expressive power for exact modeling of the user's needs.

The global preference E denotes the global percentage of fulfilled requirements.

Cost-Preference Analysis

At the end of the evaluation process each competitive system is described by two global indicators: the global preference E and the global cost C. The goal of the cost-preference analysis is to aggregate E and C into a global quality indicator Q(E,C), which is used for the final system ranking. This indicator is crucial for final financial negotiations: it is used to compute exactly the cost reduction a competitor needs in order to outperform another competitor. The resulting procedure minimizes the total cost of the selected system and yields substantial savings.
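The breakeven calculation above can be illustrated with a simple quality indicator. The choice Q(E, C) = E/C (preference per unit cost) is an assumption for illustration; the text does not specify the actual form of Q used by SEAS.

```python
def quality(E, C):
    """Assumed global quality indicator: preference per unit cost.
    (The actual Q(E, C) used by the LSP method is not given in the text.)"""
    return E / C

def required_cost_reduction(E_a, C_a, E_b, C_b):
    """Cost reduction competitor A needs so that Q(E_a, C_a') matches
    Q(E_b, C_b), under the assumed Q = E / C.

    Solving E_a / C_a' = E_b / C_b gives the breakeven cost
    C_a' = E_a * C_b / E_b; the required reduction is C_a - C_a'
    (zero if A already ties or beats B).
    """
    breakeven = E_a * C_b / E_b
    return max(0.0, C_a - breakeven)
```

For example, if system A scores E = 80 at cost 100 and system B scores E = 90 at the same cost, A would need (under this assumed Q) a price reduction of roughly 11.1 to match B's quality indicator.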

Performance Measurements

The LSP method aggregates the results of computer performance measurements with all other components for the evaluation of competitive systems. This includes industry-standard benchmarks (SPEC, TPC, GPC, etc.), users' natural workloads, and/or the LSP Package Benchmarks measuring the following:

  • Monoprogramming performance (scientific CPU benchmark PSCI, commercial CPU benchmark PCOM, disk benchmarks DSEQ and DRAN, and a tape performance benchmark TSEQ).
  • Multiprogramming performance (a compound workload consisting of multiple copies of PSCI, PCOM, DRAN, DSEQ, and TSEQ).
  • File I/O system performance (disk subsystem saturation measurement, using file I/O intensive workloads based on multiple copies of DRAN).
  • Interactive system performance (Interactive Transaction Processing Benchmark ITPB, Interactive Program Development Benchmark, IPDB).
  • Network performance (based on the DCSP benchmark and the Network Performance Measurement Environment, NPME).

LSP: Software Support

The LSP methodology is supported by proprietary software.