
Two Categorical Endpoints

Hannah Ensor

Biomathematics and Statistics Scotland, United Kingdom

Christopher J. Weir

University of Edinburgh, United Kingdom


In this chapter, we extend the information-theoretic approaches introduced in Chapters 9 and 10 to the general categorical outcome context, where one or both of the surrogate and true outcomes are ordinal. With categorical outcomes, computational issues come to the fore when implementing the information-theoretic approach to surrogate evaluation. We address these at the individual and trial levels in turn.

Section 11.2 covers the scenario of binary surrogate S and ordinal true endpoint T, considers relevant computational aspects, and illustrates it using a case study. Section 11.3 addresses, in a similar format, the situation where S is ordinal and T is either binary or ordinal. Concluding remarks are offered in Section 11.4.

S and T: Binary-Ordinal

Information-Theoretic Approach

Individual-Level Surrogacy: Binary-Ordinal

At the individual level we apply the likelihood reduction factor (LRF) in the same manner as in the continuous case in Chapter 9, but now $G_i^2$ is based on the difference in $-2 \times$ log-likelihood of two proportional odds models for trial $i$, modeling the effect of treatment on $T$ with and without adjustment for $S$:

$$\text{logit}\{P(T_{ij} \le w)\} = \theta_{0wi} + \theta_{1i} Z_{ij}, \qquad (11.1)$$

$$\text{logit}\{P(T_{ij} \le w)\} = \tilde{\theta}_{0wi} + \tilde{\theta}_{1i} Z_{ij} + \theta_{2i} S_{ij}. \qquad (11.2)$$

Here $w = 1, \ldots, W-1$, where $W$ is the number of categories of the ordinal true outcome $T_{ij}$ measured on patient $j$ in trial $i$. $\theta_{0wi}$ and $\tilde{\theta}_{0wi}$ are intercept parameters in each trial for each cut point of the ordinal true outcome; $\theta_{1i}$ and $\tilde{\theta}_{1i}$ are treatment effect parameters (where $Z_{ij}$ is the treatment group indicator) and $\theta_{2i}$ is the surrogate parameter. The LRF represents the amount of information gained on the true outcome in each trial with knowledge of the surrogate.
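The per-trial computation can be sketched as follows. This is a minimal illustration, not the chapter's implementation: the two proportional odds models are fitted by direct maximum likelihood, and all function and variable names (`neg2loglik`, `lrf_components`, the log-increment cut-point parametrization) are assumptions of this sketch.

```python
# Individual-level LRF ingredients for one trial: fit models (11.1) and (11.2)
# and take the difference in -2 x log-likelihood. T is coded 1..W, Z and S are
# numeric covariate vectors. Names and parametrization are illustrative.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit  # logistic CDF

def neg2loglik(params, T, X, W):
    """-2 x log-likelihood of a proportional odds model logit P(T<=w) = a_w + X b.
    Cut points a_w are kept increasing via a log-increment parametrization."""
    k = W - 1
    cuts = params[0] + np.concatenate(([0.0], np.cumsum(np.exp(params[1:k]))))
    beta = params[k:]
    eta = X @ beta
    cum = expit(cuts[None, :] + eta[:, None])            # P(T <= w) for w = 1..W-1
    cum = np.hstack([np.zeros((len(T), 1)), cum, np.ones((len(T), 1))])
    p = cum[np.arange(len(T)), T] - cum[np.arange(len(T)), T - 1]
    return -2.0 * np.sum(np.log(np.clip(p, 1e-12, None)))

def fitted_neg2loglik(T, X, W):
    """Minimized -2 log-likelihood, starting from equally spaced cut points."""
    x0 = np.zeros(W - 1 + X.shape[1])
    return minimize(neg2loglik, x0, args=(T, X, W), method="BFGS").fun

def lrf_components(T, Z, S, W):
    """G_i^2 for trial i and the per-trial quantity 1 - exp(-G_i^2 / n_i)."""
    n = len(T)
    g2 = (fitted_neg2loglik(T, Z[:, None], W)
          - fitted_neg2loglik(T, np.column_stack([Z, S]), W))
    return g2, 1.0 - np.exp(-g2 / n)
```

Because model (11.2) nests model (11.1) (set $\theta_{2i} = 0$), $G_i^2$ is nonnegative up to numerical tolerance, and it grows with the information the surrogate carries about the true outcome.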

Since (11.1) and (11.2) are based on discrete outcomes and are conditional on $Z$, the information-theoretic measure of individual-level surrogacy is bounded above by $R_h^2 \le 1 - e^{-2H(T)}$, where $H(T)$ is the entropy of $T$, calculated as $H(T) = E_T[-\log P(T)] = -\sum_i p_i \log p_i$. Hence, $R_h^2$ is usually rescaled (Alonso and Molenberghs, 2007):

$$\tilde{R}_h^2 = \frac{R_h^2}{1 - e^{-2H(T)}}.$$
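The entropy bound and the rescaling can be sketched numerically as follows; `p` holds the observed category proportions of $T$, and the function names are illustrative rather than taken from the chapter.

```python
# Entropy of the categorical true endpoint and the rescaled R^2_h.
# p: category proportions of T; r2h: the unscaled individual-level measure.
import numpy as np

def entropy(p):
    """H(T) = -sum_i p_i log p_i, skipping empty categories (0 log 0 = 0)."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def rescaled_r2h(r2h, p):
    """Divide R^2_h by its upper bound 1 - exp(-2 H(T))."""
    return r2h / (1.0 - np.exp(-2.0 * entropy(p)))
```

For a binary $T$ with balanced categories, $H(T) = \log 2$, so the bound is $1 - e^{-2\log 2} = 0.75$ and an unscaled value of 0.3 rescales to 0.4.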
