
Metrics

IT ecosystems and their project teams rely on performance metrics and dashboards for ongoing process control and improvement. Several metrics related to cost, time, and quality were discussed in previous chapters, such as actual versus budgeted cost, actual versus planned schedule, and project completion time. Software projects have specific metrics that help evaluate the creation of code, features, and functions. These fall into the general categories of project management, problem resolution, unplanned design iterations, and the types of quality problems found and resolved by the team. Metrics are created and evaluated from different perspectives, and because questions differ across stakeholder groups, metrics dashboards take a variety of formats.
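To make the cost and schedule comparisons concrete, here is a minimal Python sketch of the two variances mentioned above. The function names and sample figures are illustrative assumptions, not taken from the text.

```python
# Minimal sketch of two project-control metrics mentioned above:
# actual versus budgeted cost and actual versus planned schedule.
# Field names and sample figures are illustrative, not from the text.

def cost_variance_pct(actual_cost: float, budgeted_cost: float) -> float:
    """Percent over (positive) or under (negative) budget."""
    return 100.0 * (actual_cost - budgeted_cost) / budgeted_cost

def schedule_variance_days(actual_days: float, planned_days: float) -> float:
    """Days behind (positive) or ahead of (negative) plan."""
    return actual_days - planned_days

print(f"Cost variance: {cost_variance_pct(110_000, 100_000):+.1f}%")     # +10.0%
print(f"Schedule variance: {schedule_variance_days(95, 90):+.0f} days")  # +5 days
```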

Metrics are designed to answer questions from customers and internal stakeholders. Customers are interested in product and service availability, cost, quality, performance, and time to delivery, as well as ease of use, maintainability, upgradability, reuse, disposability, and sustainability. Marketing and sales need to know how products and services help grow market share, that is, which unique features and functions excite customers and increase profitability. Finance needs to know revenue and profitability as well as compatibility with current products and services. From an operational perspective, questions focus on global producibility, ease of use, and ease of maintenance, as well as disposability and sustainability. Other stakeholders pose different questions, and metrics and dashboards are created to answer them.
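As one illustration of how a dashboard might be organized around these stakeholder questions, the sketch below maps each group to the metrics it cares about. The dictionary structure and metric names are assumptions for illustration, not a prescribed format.

```python
# Illustrative only: one way a dashboard might group metrics by the
# stakeholder questions they answer. The groupings paraphrase the text;
# the structure itself is an assumption, not a prescribed format.

DASHBOARD_VIEWS = {
    "customer":   ["availability", "cost", "quality", "performance",
                   "time_to_delivery", "ease_of_use", "maintainability"],
    "marketing":  ["market_share_growth", "differentiating_features",
                   "profitability"],
    "finance":    ["revenue", "profitability", "portfolio_compatibility"],
    "operations": ["global_producibility", "ease_of_use",
                   "ease_of_maintenance", "disposability", "sustainability"],
}

def metrics_for(stakeholder: str) -> list[str]:
    """Return the metric set shown on that stakeholder's dashboard view."""
    return DASHBOARD_VIEWS.get(stakeholder, [])

print(metrics_for("finance"))
```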

Work-activity metrics measure the creation and deployment of hardware and software systems, including ongoing maintenance and upgrades as well as equipment refresh and disposal. Others are stratified by software modules, features, functions, lines of code, and similar product attributes. Software is periodically released in versions that build on previous versions. Two types of metrics are associated with releases: resource measurements (e.g., total labor hours and lead time per release) and counts of errors and customer complaints from early test releases. Software and configuration metrics for the features and functions delivered by a team include the cost per configuration, the time to resolve issues, and the percentage of design changes. From a customer perspective, measures include the cost per service issue, the time to resolve a service issue, actual versus planned service level by feature and function, the percentage of service-related issues, and the accuracy of the measured service-related information.
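A hedged sketch of how the release-level measurements above might be computed from a release history; the Release record and the sample data are hypothetical.

```python
# Sketch of release-level metrics described above: labor hours and lead
# time per release, plus error and complaint counts from early test
# releases. The Release record and sample data are hypothetical.

from dataclasses import dataclass

@dataclass
class Release:
    version: str
    labor_hours: float
    lead_time_days: float
    test_errors: int   # errors found in early test releases
    complaints: int    # customer complaints attributed to the release

def summarize(releases: list[Release]) -> dict:
    n = len(releases)
    return {
        "avg_labor_hours_per_release": sum(r.labor_hours for r in releases) / n,
        "avg_lead_time_days": sum(r.lead_time_days for r in releases) / n,
        "total_test_errors": sum(r.test_errors for r in releases),
        "total_complaints": sum(r.complaints for r in releases),
    }

history = [Release("2.0", 420, 30, 12, 3), Release("2.1", 380, 25, 7, 1)]
print(summarize(history))
```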

In addition to these metrics, others are used to evaluate the software itself: efficiency, ease of use, reliability, maintainability, and reusability. Efficiency measures the software's performance relative to the time needed to accurately execute lines of code and calculations. Ease of use measures how readily users can operate the software through its interfaces; the software should be intuitive and should guide users with tips, drop-down menus, and other aids that provide clear instructions. Reliability measures the failure rates of hardware and software components. Overall system reliability depends on these failure rates as well as on how the components are organized, that is, their architecture and especially the design of parallel and serial paths. Reliability, combined with proactive and preventive maintenance, determines a system's availability for use. Maintainability measures the ease of repairing a system's hardware or software. Reusability measures the degree to which hardware or software can be repurposed for new solutions.
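The series-and-parallel point can be made precise with the standard reliability relationships: components in series must all work, so reliabilities multiply, while redundant parallel components fail only if all of them fail. The sketch below, with illustrative component values, also shows the common steady-state availability ratio MTBF / (MTBF + MTTR); these formulas are standard reliability engineering, not drawn from this text.

```python
# Standard series/parallel reliability relationships; component values
# are illustrative. For components with reliability r_i over the same
# mission time:
#   series:   R = product(r_i)            (all must work)
#   parallel: R = 1 - product(1 - r_i)    (any one may work)

from math import prod

def series_reliability(rs: list[float]) -> float:
    return prod(rs)

def parallel_reliability(rs: list[float]) -> float:
    return 1.0 - prod(1.0 - r for r in rs)

def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability: uptime fraction given mean time
    between failures (MTBF) and mean time to repair (MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

print(series_reliability([0.99, 0.98, 0.97]))       # ~0.941: series paths weaken
print(parallel_reliability([0.90, 0.90]))           # 0.99: redundancy helps
print(availability(mtbf_hours=900, mttr_hours=10))  # ~0.989
```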

Metrics are also classified as lagging, coincident, or leading. Lagging metrics measure historical performance. Examples that affect customers include lost revenue, percentage of complaints, returned goods, and warranty expenses; examples that affect projects include actual versus budgeted cost, actual versus planned schedule attainment, and the average time to resolve a problem. Process improvement teams use this historical information to create projects that eliminate chronic problems so that, over time, the customer experience improves. Coincident metrics measure current performance. Examples that affect customers include recent late deliveries and quality issues by type; these events occur within the current reporting cycle and can be corrected immediately. Examples that affect projects include total completed work activities, person-days required to complete scrum sprints, forecast versus actual expenses, the cycle time per software release, the cost per service issue, and planned versus actual service levels by software feature and function. Leading metrics predict future performance and enable preventive actions that save time, reduce costs, and prevent deterioration of the customer experience. Project examples include accurate estimates of a project's remaining work activities and the time to complete them, the person-days required to complete remaining scrum sprints, and the amount of budget remaining.
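As one concrete leading metric of the kind described above, the sketch below projects the sprints needed to finish remaining work from the team's recent velocity and checks whether the remaining budget covers them. All names and figures are hypothetical.

```python
# A hedged sketch of a leading project metric: projecting the sprints
# and budget needed to finish remaining work from recent throughput.
# All figures and names are hypothetical.

def sprints_remaining(remaining_points: float, velocity_per_sprint: float) -> float:
    """Sprints needed at the team's current velocity."""
    return remaining_points / velocity_per_sprint

def budget_sufficient(budget_left: float, cost_per_sprint: float,
                      sprints_left: float) -> bool:
    """Leading signal: will the remaining budget cover the forecast work?"""
    return budget_left >= cost_per_sprint * sprints_left

left = sprints_remaining(remaining_points=120, velocity_per_sprint=30)  # 4.0
print(f"Forecast: {left:.1f} sprints remaining")
print("Budget sufficient:", budget_sufficient(90_000, 20_000, left))    # True
```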

 