Table of Contents
Acknowledgments
Abstract
Resumo
Scientific and Financial Results
Table of Contents
List of Figures
List of Tables
Acronyms
Introduction
	Objectives
	Outline
Non-Linear Methods
	Heart Rate
	Poincaré Plot
	Recurrence Plot Analysis
	Fractal Dimension
	Detrended Fluctuation Analysis
	Hurst Exponent
	Correlation Dimension
	Entropies
	Lyapunov Exponent
	Numerical Noise Titration
	Symbolic Dynamics
	Mutual Information
Generalized Measure for Observer Agreement
	Information-Based Measure of Disagreement for more than two observers: a useful tool to compare the degree of observer disagreement
	Facilitating the evaluation of agreement between measurements
Complexity Measures Applied to Heart Rate Signal
	Entropy And Compression: Two Measures Of Complexity
	Multiscale Compression: an Effective Measure of Individual Complexity
	Compression vs Entropy: on the characterization of the nonlinear features of physiological signal
Non-Linear Models To Assess Continuous Glucose Monitoring Data
	"Glucose-at-a-Glance": New Method to Visualize the Dynamics of Continuous Glucose Monitoring Data
	Dynamical Glucometry: Use of Multiscale Entropy Analysis in Diabetes
General Discussion and Conclusions
                        

3.2 Facilitating the evaluation of agreement between measurements

Abstract

Background: Several measures have been applied to assess agreement in epidemiologic studies.
However, problems arise when comparing the degree of observer agreement among different methods,
populations or circumstances.

Objective: This project endeavors to create an intuitive Web-based software system, available to mathematicians, epidemiologists and physicians, that facilitates easy application of several statistical agreement measures to their own data. An R package, obs.agree, was also developed for easy computation of observer agreement measures.

Methods: In order to make the website easily usable on several platforms (from mobile devices to regular laptops), we adopted a responsive design approach. To implement this design, we used HTML5 and the Bootstrap library, which includes both CSS and JavaScript modules for responsive layout. We used the programming language R to implement two recent functions for measuring agreement, the raw agreement indices (RAI) for categorical data and the information-based measure of disagreement (IBMD) for continuous data, and made them available as an R package.

Results: A website is now widely available where mathematicians, epidemiologists and physicians can easily evaluate and compare the results of several measures of agreement. A new R package, obs.agree, is also available from the Comprehensive R Archive Network to assess the agreement among multiple measurements of the same subject by different observers.

Conclusions: Comparison of the degree of observer disagreement is often required in clinical and epidemiologic studies. However, the statistical strategies for comparative analyses are not straightforward, and software for RAI and IBMD assessment is lacking. The website and package have the potential to help health care professionals and biostatisticians performing observer agreement studies, as they provide an easy way to calculate the raw agreement indices for categorical data and the information-based measure of disagreement for continuous variables.

Introduction

Assessment of agreement among multiple measurements for the same subject by different observers under similar circumstances remains an important problem in medicine and epidemiology. Several measures have been applied to assess observer agreement, depending on the data type.

When assessing agreement on discrete data, Cohen's Kappa coefficient is one of the most widely used measures (more than one R package includes it). It was proposed by Cohen [1] as "the proportion of chance-expected disagreements which do not occur, or alternatively, it is the proportion of agreement after chance agreement is removed from consideration." However, it has some inherent limitations; in particular, very low values of Kappa do not always reflect low rates of overall agreement, since the Kappa coefficient is affected by prevalence and by imbalance in the table's marginal totals [2, 3]. Uebersax presented the generalized raw agreement indices (RAI) [4, 5], overcoming these concerns with the Kappa measure.
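
To make the contrast concrete, the following is a minimal base-R sketch of Cohen's Kappa and the raw agreement indices computed from a single 2x2 table. The counts are invented for illustration, not taken from any study data; they simply show how an unbalanced table can yield high overall agreement but a low Kappa.

```r
# Minimal base-R sketch: Cohen's Kappa versus the raw agreement indices on a
# single illustrative 2x2 table (invented counts, not study data).
tab <- matrix(c(90, 4,
                 4, 2), nrow = 2, byrow = TRUE)  # rows: observer A (1/0), cols: observer B (1/0)

n  <- sum(tab)
po <- sum(diag(tab)) / n                      # observed overall agreement
pe <- sum(rowSums(tab) * colSums(tab)) / n^2  # chance-expected agreement
kappa <- (po - pe) / (1 - pe)                 # Cohen's Kappa [1]

# Raw agreement indices in the sense of Uebersax [4, 5]:
p_pos <- 2 * tab[1, 1] / (2 * tab[1, 1] + tab[1, 2] + tab[2, 1])  # positive agreement
p_neg <- 2 * tab[2, 2] / (2 * tab[2, 2] + tab[1, 2] + tab[2, 1])  # negative agreement

round(c(overall = po, kappa = kappa, positive = p_pos, negative = p_neg), 3)
# Here overall agreement is 0.92 but Kappa is only about 0.29, because the
# marginal totals are highly unbalanced; the RAI instead report positive and
# negative agreement (about 0.96 and 0.33) separately.
```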

Considering continuous data, Bland and Altman [6] proposed a technique to compare the agreement between two methods of medical measurement allowing multiple observations per subject, and later Schluter [7] proposed a Bayesian approach. However, problems arise when comparing the degree of observer agreement among different methods, populations or circumstances. The intraclass correlation coefficient (ICC) is a measure of reliability, not agreement. Nevertheless, it is often used for the assessment of observer agreement in situations with multiple observers [8]. The ICC is strongly influenced by variations in the trait within the population in which it is assessed; consequently, its value cannot be compared across different populations [9]. Lin objected to the use of the ICC as a way of assessing agreement between methods of measurement and developed the concordance correlation coefficient (CCC). However, a number of the limitations of the ICC, such as the comparability of populations, are also present in the CCC [10]. Recently, a new approach was introduced, the information-based measure of disagreement (IBMD) [11, 12], which assesses disagreement while allowing a different number of observers for different cases. The IBMD allows comparison of the disagreement on non-negative ratio scales across different populations.
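
For illustration, the sketch below implements the two-observer form of the IBMD as we read it from [11] (the generalized form discussed in the previous section averages the same pairwise term over all available observer pairs per subject). The readings are invented for the example.

```r
# Sketch of the two-observer IBMD following our reading of [11]:
#   IBMD = (1/N) * sum_i log2( |x_i - y_i| / max(x_i, y_i) + 1 )
# for non-negative ratio-scale readings; a pair of zeros contributes 0.
ibmd2 <- function(x, y) {
  stopifnot(length(x) == length(y), all(x >= 0), all(y >= 0))
  m <- pmax(x, y)
  term <- ifelse(m == 0, 0, log2(abs(x - y) / m + 1))
  mean(term)
}

# Invented readings of the same quantity by two observers:
obs1 <- c(10.2, 8.4, 9.9, 11.0, 7.6)
obs2 <- c(10.0, 8.9, 9.5, 11.4, 7.6)
ibmd2(obs1, obs2)  # 0 means perfect agreement; larger values mean more disagreement
```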

We developed a website [13] to assist with the calculation of several agreement measures, namely the IBMD with its respective 95% confidence interval; the ICC, the CCC and the limits of agreement can also be computed. The website contains a description of these methods.
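
The comparison measures offered by the calculator can also be sketched directly from their textbook definitions. The functions below follow the standard formulas for Lin's CCC [10] and the Bland and Altman limits of agreement [6]; they are illustrative only and are not the website's own code.

```r
# Sketches of two of the comparison measures offered by the calculator, written
# from their standard definitions (not from the website's code).

# Lin's concordance correlation coefficient [10] for paired readings,
# using n-denominator (population-style) moments as in the original paper:
ccc <- function(x, y) {
  sx  <- mean((x - mean(x))^2)
  sy  <- mean((y - mean(y))^2)
  sxy <- mean((x - mean(x)) * (y - mean(y)))
  2 * sxy / (sx + sy + (mean(x) - mean(y))^2)
}

# Bland and Altman 95% limits of agreement [6] for the same pairs:
limits_of_agreement <- function(x, y) {
  d <- x - y
  mean(d) + c(lower = -1.96, upper = 1.96) * sd(d)
}

x <- c(10.2, 8.4, 9.9, 11.0, 7.6)   # invented example readings
y <- c(10.0, 8.9, 9.5, 11.4, 7.6)
ccc(x, y)
limits_of_agreement(x, y)
```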

During the development of the site we built R implementations of the RAI and IBMD measures, now available in the R package obs.agree. These two measures are adequate for assessing agreement among multiple observers while allowing comparability across populations.
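
As a usage illustration only, the sketch below shows how the package might be called on a subjects-by-observers matrix. The function names RAI and IBMD and the expected input layout are assumptions inferred from the description above, not taken from the package manual; the CRAN documentation should be consulted for the actual interface.

```r
# Usage sketch for the obs.agree package. The function names RAI() and IBMD()
# and the subjects-by-observers matrix layout are assumptions made for this
# illustration; the package documentation on CRAN is authoritative.
# install.packages("obs.agree")
library(obs.agree)

# Continuous readings: one row per subject, one column per observer.
cont <- matrix(c(10.2, 10.0,  9.8,
                  8.4,  8.9,  8.6,
                  9.9,  9.5,  9.7), ncol = 3, byrow = TRUE)
IBMD(cont)  # information-based measure of disagreement

# Binary ratings (1/0), same layout:
bin <- matrix(c(1, 1, 1,
                0, 1, 0,
                1, 1, 0), ncol = 3, byrow = TRUE)
RAI(bin)    # raw agreement indices
```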

Methods

Disagreement website

The purpose of the website [13] is to make several statistical agreement measures widely available to mathematicians, epidemiologists and physicians, and to facilitate their easy application to users' own data. To achieve this goal, one of the central points of the website was to provide a calculator, so that users can actually see how the measures work and even compare their results with other well-known measures such as the CCC or the ICC. One of the requirements for the website was that it should be usable from mobile devices as well as regular laptops; this was the main reason why we adopted a responsive design approach. To implement this design, we used HTML5 and the Bootstrap library [14], which includes both CSS and JavaScript modules for responsive layout. The website's backend is implemented in PHP5. Since the site is composed of a series of static content pages and a single interactive page where the calculator itself resides, the PHP template engine Smarty [15] was used to reduce the amount of duplicate code. In Illustration 1 we present the responsive design for various display aspects.

Illustration 1: Responsive design for various display aspects. Top left: 320x480, top right: 786x1024, bottom: 1280x600.

One of the concerns in any website with user input is the validation of the data entered. Given its importance, we decided to implement validation on both the frontend and the backend. The reason for this double validation is that the backend functionalities could potentially be used directly
