NBS TECHNICAL NOTE 394

Characterization of Frequency Stability
UNITED STATES DEPARTMENT OF COMMERCE
Maurice H. Stans, Secretary
NATIONAL BUREAU OF STANDARDS • Lewis M. Branscomb, Director
NBS TECHNICAL NOTE 394
ISSUED OCTOBER 1970
Nat. Bur. Stand. (U.S.), Tech. Note 394, 50 pages (Oct. 1970)
CODEN: NBTNA
Characterization of Frequency Stability
J. A. Barnes
A. R. Chi
L. S. Cutler
D. J. Healey
D. B. Leeson
T. E. McGunigal
J. A. Mullen
W. L. Smith
R. Sydnor
R. F. C. Vessot
G. M. R. Winkler

*The authors of this paper are members of the Subcommittee on Frequency Stability of the Technical Committee on Frequency and Time of the IEEE Group on Instrumentation & Measurement. See page ii.
NBS Technical Notes are designed to supplement the
Bureau's regular publications program. They provide
a means for making available scientific data that are
of transient or limited interest. Technical Notes may
be listed or referred to in the open literature.
For sale by the Superintendent of Documents, U.S. Government Printing Office, Washington, D.C. 20402
(Order by SD Catalog No. 013.46:394), Price 60 cents
Members
Subcommittee on Frequency Stability
of the Technical Committee on Frequency and Time
of the
Institute of Electrical & Electronics Engineers
Group on Instrumentation & Measurement
J. A. Barnes, Chairman
Time and Frequency Division
Institute for Basic Standards
National Bureau of Standards
Boulder, Colorado 80302
A. R. Chi
National Aeronautics and Space Administration
Greenbelt, Maryland 20771
L. S. Cutler
Hewlett-Packard Company
Palo Alto, California 94304
D. J. Healey
Westinghouse Electric Corporation
Baltimore, Maryland 21203
D. B. Leeson
California Microwave
Sunnyvale, California 94086
T. E. McGunigal
National Aeronautics and Space Administration
Greenbelt, Maryland 20771
J. A. Mullen
Raytheon Company
Waltham, Massachusetts 02154
W. L. Smith
Bell Telephone Laboratories
Allentown, Pennsylvania 18103
R. Sydnor
Jet Propulsion Laboratory
Pasadena, California 91103
R. F. C. Vessot
Smithsonian Astrophysical Observatory
Cambridge, Massachusetts 01922
G. M. R. Winkler
Time Service Division
U. S. Naval Observatory
Washington, D. C. 20390
TABLE OF CONTENTS

Glossary of Symbols
Abstract
I. Introduction
II. Statement of the Problem
III. Background and Definitions
IV. The Definition of Measures of Frequency Stability (Second Moment Type)
V. Translations Among Frequency Stability Measures
VI. Applications of Stability Measures
VII. Measurement Techniques for Frequency Stability
VIII. Conclusions
Appendix A
Appendix B
GLOSSARY OF SYMBOLS

B₁(N, r, μ), B₂(r, μ)   Bias functions for variances based on finite samples of a process with a power-law spectral density. (See [13].)
a   A real constant defined by (A15).
c₀, c₁   Real constants.
c(t)   A real, deterministic function of time.
D_x²(τ)   Expected value of the squared second difference of x(t) with lag time τ. See (B8).
f = ω/2π   Fourier frequency variable.
f_h   High frequency cutoff of an idealized infinitely sharp cutoff, low pass filter.
f_l   Low frequency cutoff of an idealized infinitely sharp cutoff, high pass filter.
g(t)   A real function of time.
h_α   Positive, real coefficient of f^α in a power series expansion of the spectral density of the function y(t).
i, j, k, m, n   Integers, often a dummy index of summation.
M   Positive integer giving the number of cycles averaged.
N   Positive integer giving the number of data points used in obtaining a sample variance.
n(t)   A nondeterministic function of time.
R_y(τ)   Autocovariance function of y(t). See (A3).
r   Positive, real number defined by r = T/τ.
S   An intermediate term used in deriving (23). The definition of S is given by (A9).
S_g(f)   One-sided (power) spectral density on a per hertz basis of the pure real function g(t). The dimensions of S_g(f) are the dimensions of g²(t)/f.
S_y(f)   A definition for the measure of frequency stability. One-sided (power) spectral density of y(t) on a per hertz basis. The dimensions of S_y(f) are Hz⁻¹.
T   Time interval between the beginnings of two successive measurements of average frequency.
t   Time variable.
t₀   An arbitrary, fixed instant of time.
t_k   The time coordinate of the beginning of the kth measurement of average frequency. By definition, t_{k+1} = t_k + T, k = 0, 1, 2, ...
u   Dummy variable of integration; u = πfτ.
V(t)   Instantaneous output voltage of signal generator. See (2).
V₀   Nominal peak amplitude of signal generator output. See (2).
V_r(t)   Instantaneous voltage of reference signal. See (40).
V_or   Peak amplitude of reference signal. See (40).
v(t)   Voltage output of ideal product detector.
v₀(t)   Low pass filtered output of product detector.
x(t)   Real function of time related to the phase of the signal V(t) by the equation x(t) = φ(t)/(2πν₀).
x̂(t)   A predicted value for x(t).
y(t)   Fractional frequency offset of V(t) from the nominal frequency. See (7).
ȳ_k   Average fractional frequency offset during the kth measurement interval. See (9).
⟨ȳ⟩_N   The sample average of N successive values of ȳ_k. See (B4).
z_n(t)   Nondeterministic (noise) function with (power) spectral density given by (25).
α   Exponent of f for a power law spectral density.
γ   Positive, real constant.
δ_K(r-1)   The Kronecker δ-function, defined by δ_K(r-1) = 1 if r = 1, and 0 otherwise.
ε(t)   Amplitude fluctuations of signal. See (2).
μ   Exponent of τ. See (29).
ν(t)   Instantaneous frequency of V(t). Defined by (1).
ν₀   Nominal (constant) frequency of V(t).
X(f)   The Fourier transform of n(t).
σ_y²(N, T, τ)   Sample variance of N averages of y(t), each of duration τ, and spaced every T units of time. See (10).
⟨σ_y²(N, T, τ)⟩   Average value of the sample variance σ_y²(N, T, τ).
σ_y²(τ)   A second choice of the definition for the measure of frequency stability. Defined by σ_y²(τ) = ⟨σ_y²(N = 2, T = τ, τ)⟩.
σ_x(τ)   Time stability measure defined by σ_x(τ) = τ σ_y(τ).
τ   Duration of averaging period of y(t) to obtain ȳ_k. See (9).
Φ(t)   Instantaneous phase of V(t). Defined by Φ(t) = 2πν₀t + φ(t).
φ(t)   Instantaneous phase fluctuations about the ideal phase, 2πν₀t. See (2).
ψ(T, τ)   Mean square time error for Doppler radar. See (B10).
ω = 2πf   Angular Fourier frequency variable.
CHARACTERIZATION OF FREQUENCY STABILITY
by
J. A. Barnes, A. R. Chi, L. S. Cutler,
D. J. Healey, D. B. Leeson, T. E. McGunigal,
J. A. Mullen, W. L. Smith, R. Sydnor,
R. F. C. Vessot, and G. M. R. Winkler
ABSTRACT
Consider a signal generator whose instantaneous output voltage V(t) may be written as

V(t) = [V₀ + ε(t)] sin[2πν₀t + φ(t)],

where V₀ and ν₀ are the nominal amplitude and frequency, respectively, of the output. Provided that ε(t) and φ̇(t) = dφ/dt are sufficiently small for all time t, one may define the fractional instantaneous frequency deviation from nominal by the relation

y(t) = φ̇(t)/(2πν₀).

A proposed definition for the measure of frequency stability is the spectral density S_y(f) of the function y(t), where the spectrum is considered to be one-sided on a per hertz basis.

An alternative definition for the measure of stability is the infinite time average of the sample variance of two adjacent averages of y(t); that is, if

ȳ_k = (1/τ) ∫_{t_k}^{t_k+τ} y(t) dt,

where τ is the averaging period, t_{k+1} = t_k + T, k = 0, 1, 2, ..., t₀ is arbitrary, and T is the time interval between the beginnings of two successive measurements of average frequency, then the second measure of stability is

σ_y²(τ) ≡ ⟨(ȳ_{k+1} - ȳ_k)²/2⟩,

where ⟨ ⟩ denotes infinite time average and where T = τ.

In practice, data records are of finite length and the infinite time averages implied in the definitions are normally not available; thus estimates of the two measures must be used. Estimates of S_y(f) would be obtained from suitable averages either in the time domain or the frequency domain. An obvious estimate for σ_y²(τ) is

σ_y²(τ) ≈ (1/m) Σ_{k=1}^{m} (ȳ_{k+1} - ȳ_k)²/2.

Parameters of the measuring system and estimating procedure are of critical importance in the specification of frequency stability. In practice, one should experimentally establish confidence limits for an estimate of frequency stability by repeated trials.
Key words: Allan variance; frequency; frequency stability; sample variance;
spectral density; variance.
CHARACTERIZATION OF FREQUENCY STABILITY
I. Introduction
The measurement of frequency and fluctuations in frequency has received such great attention for so many years that it is surprising that the concept of frequency stability does not have a universally accepted definition. At least part of the reason has been that some uses are most readily described in the frequency domain and other uses in the time domain, as well as in combinations of the two. This situation is further complicated by the fact that only recently have noise models been presented which both adequately describe performance and allow a translation between the time and frequency domains. Indeed, only recently has it been recognized that there can be a wide discrepancy between commonly-used time domain measures themselves. Following the NASA-IEEE Symposium on Short-Term Stability in 1964 and the Special Issue on Frequency Stability of the Proc. IEEE of February 1966, it now seems reasonable to propose a definition of frequency stability. The present paper is presented as technical background for an eventual IEEE standard definition.
This paper attempts to present (as concisely as practical) adequate, self-consistent definitions of frequency stability. Since more than one definition of frequency stability is presented, an important part of this paper (perhaps the most important part) deals with translations among the suggested definitions of frequency stability. The applicability of these definitions to the more common noise models is demonstrated.

Consistent with an attempt to be concise, the references cited have been selected on the basis of being of most value to the reader rather than on the basis of being exhaustive. An exhaustive reference list covering the subject of frequency stability would itself be a voluminous publication.
Almost any signal generator is influenced to some extent by its environment. Thus observed frequency instabilities may be traced, for example, to changes in ambient temperature, supply voltages, magnetic field, barometric pressure, humidity, physical vibration, or even output loading, to mention the more obvious. While these environmental influences may be extremely important for many applications, the definition of frequency stability presented here is independent of these causal factors. In effect, we cannot hope to present an exhaustive list of environmental factors and a prescription for handling each even though, in some cases, these environmental factors may be by far the most important. Given a particular signal generator in a particular environment, one can obtain its frequency stability with the measures presented below, but one should not then expect an accurate prediction of frequency stability in a new environment.
It is natural to expect any definition of stability to involve various
statistical considerations such as stationarity, ergodicity, average,
variance, spectral density, etc. There often exist fundamental difficulties
in rigorous attempts to bring these concepts into the laboratory. It is
worth considering, specifically, the concept of stationarity since it is
a concept at the root of many statistical discussions.
A random process is mathematically defined as stationary if every translation of the time coordinate maps the ensemble onto itself. As a necessary condition, if one looks at the ensemble at one instant of time, t, the distribution in values within the ensemble is exactly the same as at any other instant of time, t'. This is not to imply that the elements of the ensemble are constant in time, but, as one element changes value with time, other elements of the ensemble assume the previous values. Looking at it in another way, by observing the ensemble at some instant of time, one can deduce no information as to when the particular instant was chosen. This same sort of invariance of the joint distribution holds for any set of times t₁, t₂, ..., t_n and its translation t₁ + τ, t₂ + τ, ..., t_n + τ.
It is apparent that any ensemble that has a finite past as well as a finite future cannot be stationary, and this neatly excludes the real world and anything of practical interest. The concept of stationarity does violence to concepts of causality since we implicitly feel that current performance (i.e., the applicability of stationary statistics) cannot be logically dependent upon future events (i.e., if the process is terminated sometime in the distant future). Also, the verification of stationarity would involve hypothetical measurements which are not experimentally feasible, and therefore the concept of stationarity is not directly relevant to experimentation.
Actually the utility of statistics is in the formation of idealized models which reasonably describe significant observables of real systems. One may, for example, consider a hypothetical ensemble of noises with certain properties (such as stationarity) as a model for a particular real device. If a model is to be acceptable, it should have at least two properties: First, the model should be tractable; that is, one should be able to easily arrive at estimates for the elements of the model; and, second, the model should be consistent with observables derived from the real device which it is simulating.

Notice that one does not need to know that the device was selected from a stationary ensemble, but only that the observables derived from the device are consistent with, say, elements of a hypothetically stationary ensemble. Notice also that the actual model used may depend upon how clever the experimenter-theorist is in generating models.
It is worth noting, however, that while some texts on statistics give "tests for stationarity," these "tests" are almost always inadequate. Typically, these "tests" determine only if there is a substantial fraction of the noise power in Fourier frequencies whose periods are of the same order as the data length or longer. While this may be very important, it is not logically essential to the concept of stationarity. If a nonstationary model actually becomes common, it will almost surely be because it is useful or convenient and not because the process is "actually nonstationary." Indeed, the phrase "actually nonstationary" appears to have no meaning in an operational sense. In short, stationarity (or nonstationarity) is a property of models, not a property of data [1].
Fortunately, many statistical models exist which adequately describe most present-day signal generators; many of these models are considered below. It is obvious that one cannot guarantee that all signal generators are adequately described by these models, but the authors do feel they are adequate for the description of most signal generators presently encountered.
II. Statement of the Problem
To be useful, a measure of frequency stability must allow one to predict performance of signal generators used in a wide variety of situations as well as allow one to make meaningful relative comparisons among signal generators. One must be able to predict performance in devices which may most easily be described either in the time domain, or in the frequency domain, or in a combination of the two. This prediction of performance may involve actual distribution functions, and thus second moment measures (such as power spectra and variances) are not totally adequate.

Two common types of equipment used to evaluate the performance of a frequency source are (analog) spectrum analyzers (frequency domain) and digital, electronic counters (time domain). On occasion the digital counter data are converted to power spectra by computers. One must realize that any piece of equipment simultaneously has certain aspects most easily described in the time domain and other aspects most easily described in the frequency domain. For example, an electronic counter has a high frequency limitation, and experimental spectra are determined with finite time averages.
Research has established that ordinary oscillators demonstrate noise which appears to be a superposition of causally generated signals and random, nondeterministic noises. The random noises include thermal noise, shot noise, noises of undetermined origin (such as flicker noise), and integrals of these noises.

One might well expect that for the more general cases one would need to use a nonstationary model (not stationary even in the wide sense, i.e., the covariance sense). Nonstationarity would, however, introduce significant difficulties in the passage between the frequency and time domains. It is interesting to note that, so far, experimenters have seldom found a non-(covariance)-stationary model useful in describing actual oscillators.

In what follows, an attempt has been made to separate general statements which hold for any noise or perturbation from the statements which apply only to specific models. It is important that these distinctions be kept in mind.
III. Background and Definitions
To discuss the concept of frequency stability immediately implies that frequency can change with time, and thus one is not considering Fourier frequencies (at least at this point). The conventional definition of instantaneous (angular) frequency is the time rate of change of phase; that is,

2πν(t) = dΦ(t)/dt ≡ Φ̇(t),   (1)

where Φ(t) is the instantaneous phase of the oscillator. This paper uses the convention that time dependent frequencies of oscillators are denoted by ν(t) (cycle frequency, hertz), and Fourier frequencies are denoted by ω (angular frequency) or f (cycle frequency, hertz), where ω = 2πf.
In order for (1) to have meaning, the phase Φ(t) must be a well-defined function. This restriction immediately eliminates some "non-sinusoidal" signals such as a pure, random, uncorrelated ("white") noise. For most real signal generators, the concept of phase is reasonably amenable to an operational definition and this restriction is not serious.
Of great importance to this paper is the concept of spectral density, S_g(f). The notation S_g(f) is to represent the one-sided spectral density of the (pure real) function g(t) on a per hertz basis; that is, the total "power" or mean square value of g(t) is given by

∫₀^∞ S_g(f) df.

Since the spectral density is such an important concept to what follows, it is worthwhile to present some important references on spectrum estimation. There are many references on the estimation of spectra from data records, but worthy of special note are [2-5].
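For illustration only (the text defers to [2-5] for rigorous methods), a one-sided per-hertz spectral density can be estimated from a sampled record by averaging periodograms of segments. The sampling interval, segment count, and white-noise sanity check below are assumptions of this sketch, not material from the text.

```python
import numpy as np

def one_sided_psd(g, dt, nseg=8):
    # Average periodograms of nseg non-overlapping segments of g(t),
    # sampled every dt seconds, to estimate S_g(f) on a per-hertz basis.
    g = np.asarray(g, dtype=float)
    seg_len = len(g) // nseg
    psds = []
    for i in range(nseg):
        seg = g[i * seg_len:(i + 1) * seg_len]
        spec = np.fft.rfft(seg)
        # One-sided normalization: the area under the estimate
        # approximates the mean square value of g
        psds.append(2.0 * dt * np.abs(spec) ** 2 / seg_len)
    f = np.fft.rfftfreq(seg_len, d=dt)
    return f, np.mean(psds, axis=0)

# Sanity check with white noise: integral of S_g(f) df ~ mean square of g
rng = np.random.default_rng(1)
dt = 1.0e-3
g = rng.normal(size=16000)
f, S = one_sided_psd(g, dt)
total_power = np.sum(S) * (f[1] - f[0])
mean_square = np.mean(g ** 2)
```

The check mirrors the defining property above: the total "power" of g(t) equals the integral of S_g(f) over all Fourier frequencies.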
IV. The Definition of Measures of Frequency Stability (Second Moment Type)

A. General. Consider a signal generator whose instantaneous output voltage, V(t), may be written as

V(t) = [V₀ + ε(t)] sin[2πν₀t + φ(t)],   (2)

where V₀ and ν₀ are the nominal amplitude and frequency, respectively, of the output and it is assumed that

|ε(t)/V₀| << 1   (3)

and

|φ̇(t)/(2πν₀)| << 1   (4)

for substantially all time t. Making use of (1) and (2) one sees that

Φ(t) = 2πν₀t + φ(t),   (5)

and

ν(t) = ν₀ + (1/2π)φ̇(t).   (6)
Equations (3) and (4) are essential in order that φ(t) may be defined conveniently and unambiguously (see measurement section).

Since (4) must be valid even to speak of an instantaneous frequency, there is no real need to distinguish stability measures from instability measures. That is, any fractional frequency stability measure will be far from unity, and the chance of confusion is slight. It is true that in a very strict sense people usually measure instability and speak of stability. Because the chances of confusion are so slight, the authors have chosen to continue in the custom of measuring "instability" and speaking of stability (a number always much less than unity).
Of significant interest to many people is the rf (radio frequency) spectral density, S_V(f). This is of direct concern in spectroscopy and radar. However, this is not a good primary measure of frequency stability for two reasons: First, fluctuations in the amplitude, ε(t), contribute directly to S_V(f); and second, for many cases when ε(t) is insignificant, the rf spectrum, S_V(f), is not uniquely related to the frequency fluctuations [6].
B. General: First definition of the measure of frequency stability - frequency domain.

By definition, let

y(t) ≡ φ̇(t)/(2πν₀),   (7)

where φ(t) and ν₀ are as in (2). Thus y(t) is the instantaneous fractional frequency deviation from the nominal frequency ν₀. A proposed definition of frequency stability is the spectral density S_y(f) of the instantaneous fractional frequency fluctuations y(t). The function S_y(f) has the dimensions of Hz⁻¹.

One can show [7] that if S_φ(f) is the spectral density of the phase fluctuations, then

S_y(f) = (f/ν₀)² S_φ(f).   (8)

Thus a knowledge of the spectral density of the phase fluctuations, S_φ(f), allows a knowledge of the spectral density of the frequency fluctuations, S_y(f), the first definition of frequency stability. Of course, S_y(f) cannot be perfectly measured; this is the case for any physical quantity; useful estimates of S_y(f) are, however, easily obtainable.
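Equation (8) is straightforward to apply numerically. The oscillator frequency and the flat phase-noise level below are invented numbers for illustration, not values from the text.

```python
import numpy as np

nu0 = 5.0e6        # assumed nominal frequency, Hz
S_phi = 1.0e-12    # assumed flat phase-noise density, rad^2/Hz

f = np.logspace(0.0, 4.0, 5)       # Fourier frequencies 1 Hz ... 10 kHz
S_y = (f / nu0) ** 2 * S_phi       # eq. (8): S_y(f) = (f / nu0)^2 * S_phi(f)
```

A flat (white) phase noise thus appears as frequency noise whose spectral density grows as f².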
C. General: Second definition of the measure of frequency stability - time domain.

The second definition is based on the sample variance of the fractional frequency fluctuations. In order to present this measure of frequency stability, define ȳ_k by the relation

ȳ_k ≡ (1/τ) ∫_{t_k}^{t_k+τ} y(t) dt = [φ(t_k + τ) - φ(t_k)]/(2πν₀τ),   (9)

where t_{k+1} = t_k + T, k = 0, 1, 2, ..., T is the repetition interval for measurements of duration τ, and t₀ is arbitrary. Conventional frequency counters measure the number of cycles in a period τ; that is, they measure ν₀τ(1 + ȳ_k). When τ is one second they count the number ν₀(1 + ȳ_k). The second measure of frequency stability, then, is defined in analogy to the sample variance by the relation

⟨σ_y²(N, T, τ)⟩ ≡ ⟨(1/(N-1)) Σ_{n=1}^{N} [ȳ_n - (1/N) Σ_{k=1}^{N} ȳ_k]²⟩,   (10)

where ⟨g⟩ denotes the infinite time average of g. This measure of frequency stability is dimensionless.
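The quantities of (9) and (10) can be sketched with invented numbers (the helper names are ours, not the text's): a pure constant frequency offset gives identical ȳ_k and hence zero sample variance.

```python
import numpy as np

def ybar_from_phase(phi, nu0, tau):
    # eq. (9): ybar_k = [phi(t_k + tau) - phi(t_k)] / (2 pi nu0 tau),
    # assuming phi is sampled once per tau seconds (no dead time, T = tau).
    return np.diff(np.asarray(phi, dtype=float)) / (2.0 * np.pi * nu0 * tau)

def sample_variance(ybar):
    # The bracketed quantity of eq. (10): an N-point sample variance
    # with the 1/(N - 1) normalization.
    ybar = np.asarray(ybar, dtype=float)
    return np.sum((ybar - ybar.mean()) ** 2) / (len(ybar) - 1)

nu0, tau = 1.0e7, 1.0                      # assumed nominal frequency and tau
t = np.arange(6) * tau
phi = 2.0 * np.pi * nu0 * 1.0e-9 * t       # phase from a constant 1e-9 offset
yb = ybar_from_phase(phi, nu0, tau)
var = sample_variance(yb)
```

Each ȳ_k comes out as the offset 1e-9, so the sample variance vanishes (to rounding), as it should for a perfectly steady frequency.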
In many situations it would be wrong to assume that (10) converges to a meaningful limit as N → ∞. First, of course, one cannot practically let N approach infinity and, second, it is known that some actual noise processes contain substantial fractions of the total noise power in the Fourier frequency range below one cycle per year. In order to improve comparability of data, it is important to specify particular N and T. For the preferred definition we recommend choosing N = 2 and T = τ (i.e., no dead time between measurements). Writing ⟨σ_y²(N = 2, T = τ, τ)⟩ as σ_y²(τ), the Allan variance [8], the proposed measure of frequency stability in the time domain may be written as

σ_y²(τ) ≡ ⟨(ȳ_{k+1} - ȳ_k)²/2⟩,   for T = τ.   (11)
Of course, the experimental estimate of σ_y²(τ) must be obtained from finite samples of data, and one can never obtain perfect confidence in the estimate; the true time average is not realizable in a real situation. One estimates σ_y²(τ) from a finite number (say, m) of values of σ_y²(2, τ, τ) and averages to obtain an estimate of σ_y²(τ). Appendix A shows that the ensemble average of σ_y²(2, τ, τ) is convergent (i.e., as m → ∞) even for noise processes that do not have convergent ⟨σ_y²(N, τ, τ)⟩ as N → ∞. Therefore, σ_y²(τ) has greater utility as an idealization than does ⟨σ_y²(∞, τ, τ)⟩ even though both involve assumptions of infinite averages. In effect, increasing N causes σ_y²(N, T, τ) to be more sensitive to the low frequency components of S_y(f). In practice, one must distinguish between an experimental estimate of a quantity (say, of σ_y²(τ)) and its idealized value. It is reasonable to believe that extensions to the concept of statistical ("quality") control [9] may prove useful here. One should, of course, specify the actual number m of independent samples used for an estimate of σ_y²(τ).
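A finite-sample estimate of σ_y²(τ) from m adjacent pairs, as described above, can be sketched as follows. The white-frequency-noise input, for which the adjacent-pair average converges to the ordinary variance of the ȳ_k, is our convenient test case, not a model endorsed for any particular device.

```python
import numpy as np

def allan_variance_estimate(ybar):
    # Average of (ybar_{k+1} - ybar_k)^2 / 2 over the m available
    # adjacent pairs: an obvious finite-sample estimate.
    d = np.diff(np.asarray(ybar, dtype=float))
    return np.mean(d ** 2) / 2.0

# White frequency noise: independent ybar_k with standard deviation s,
# so the estimate should converge to s^2 as m grows.
rng = np.random.default_rng(0)
s = 1.0e-11
ybar = rng.normal(scale=s, size=200000)
avar = allan_variance_estimate(ybar)
```

As the text recommends, the number m of samples entering the estimate (here 199999 pairs) should be reported along with the result.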
In summary, therefore, S_y(f) is the proposed measure of (instantaneous) frequency stability in the (Fourier) frequency domain and σ_y²(τ) is the proposed measure of frequency stability in the time domain.
D. Distributions. It is natural that people first become involved with second moment measures of statistical quantities and only later with actual distributions. This is certainly true with frequency stability. While one can specify the argument of a distribution function to be, say, (ȳ_{k+1} - ȳ_k), it makes sense to postpone such a specification until a real use has materialized for a particular distribution function. This paper does not attempt to specify a preferred distribution function for frequency fluctuations.
E. Treatment of Systematic Variations.

1. General. The definition of frequency stability σ_y²(τ) in the time domain is useful for many situations. However, some oscillators, for example, exhibit an aging or almost linear drift of frequency with time. For some applications, this trend may be calculated and should be removed [8] before estimating σ_y²(τ).

In general, a systematic trend is perfectly deterministic (i.e., predictable) while the noise is nondeterministic. Consider a function, g(t), which may be written in the form

g(t) = c(t) + n(t),   (12)

where c(t) is some deterministic function of time and n(t), the noise, is a nondeterministic function of time. We will define c(t) to be the systematic trend to the function g(t). A problem of significance here is to determine when and in what sense c(t) is measurable.
2. Specific Case - Linear Drift. As an example, if we consider a typical quartz crystal oscillator whose fractional frequency deviation is y(t), we may let

g(t) = (d/dt) y(t).   (13)

With these conditions, c(t) is the drift rate of the oscillator (e.g., 10⁻¹⁰ per day) and n(t) is related to the frequency "noise" of the oscillator by a time derivative. One sees that the time average of g(t) becomes

(1/T) ∫_{t₀}^{t₀+T} g(t) dt = c₁ + (1/T) ∫_{t₀}^{t₀+T} n(t) dt,   (14)

where c(t) = c₁ is assumed to be the constant drift rate of the oscillator. In order for c₁ to be an observable, it is natural to expect the average of the noise term to vanish, that is, converge to zero.
It is instructive to assume [8, 10] that in addition to a linear drift, the oscillator is perturbed by a flicker noise, i.e.,

S_y(f) = h₋₁ f⁻¹ for 0 ≤ f ≤ f_h, and 0 for f > f_h,   (15)

where h₋₁ is a constant (see Sec. V.A.2) and thus,

S_n(f) = (2πf)² h₋₁ f⁻¹ for 0 ≤ f ≤ f_h, and 0 for f > f_h,   (16)

for the oscillator we are considering. With these assumptions, it is seen that

lim_{T→∞} (1/T) ∫_{t_k}^{t_k+T} n(t) dt = X(0) = 0,   (17)

and that

lim_{T→∞} Variance[(1/T) ∫_{t_k}^{t_k+T} n(t) dt] = 0,   (18)

where X(f) is the Fourier transform of n(t). Since S_n(0) = 0, X(0) must also vanish both in probability and in mean square. Thus, not only does n(t) average to zero, but one may obtain arbitrarily good confidence on the result by longer averages.
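The drift-observability argument can be illustrated numerically. For simplicity this sketch perturbs y(t) with white noise rather than the flicker noise assumed above, and all numbers are invented.

```python
import numpy as np

# y(t) = c1 * t + noise; the time average of g(t) = dy/dt over [0, T]
# telescopes to [y(T) - y(0)] / T, so the noise contribution to the
# drift-rate estimate shrinks as the averaging time T grows.
rng = np.random.default_rng(2)
c1 = 1.0e-10                                   # assumed true drift rate
t = np.arange(100001, dtype=float)
y = c1 * t + 1.0e-9 * rng.normal(size=t.size)  # white y-noise, sigma 1e-9

def drift_rate_estimate(y, T):
    # (1/T) * integral of dy/dt over [0, T] = [y(T) - y(0)] / T
    return (y[int(T)] - y[0]) / T

est_short = drift_rate_estimate(y, 1000)
est_long = drift_rate_estimate(y, 100000)
```

The longer average pins down the drift rate far more tightly, which is the point of the passage above: the average of the noise term converges to zero.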
Having shown that one can reliably estimate the drift rate, c₁, of this (common) oscillator, it is instructive to attempt to fit a straight line to the frequency aging. That is, let

g(t) = y(t),   (19)

and, thus,

g(t) = c₀ + c₁(t - t₀) + n'(t),   (20)

where c₀ is the frequency intercept at t = t₀ and c₁ is the drift rate previously determined. A problem arises here because

S_n'(f) = S_y(f),   (21)

and

lim_{T→∞} Variance[(1/T) ∫_{t_k}^{t_k+T} n'(t) dt] = ∞,   (22)

for the noise model we have assumed. This follows from the fact that the (infinite N) variance of a flicker noise process is infinite [7, 8, 10]. Thus, c₀ cannot be measured with any realistic precision, at least in an absolute sense.
We may interpret these results as follows: After experimenting
with the oscillator for a period of time one can fit an empirical equation
to y(t) of the form

    y(t) = c₀ + c₁ t + n′(t),

where n′(t) is nondeterministic. At some later time it is possible to
reevaluate the coefficients c₀ and c₁. According to what has been said,
the drift rate c₁ should be reproducible to within the confidence estimates
of the experiment regardless of when it is reevaluated. For c₀, however,
this is not true. In fact, the more one attempts to evaluate c₀, the larger
the fluctuations are in the result.
Depending on the spectral density of the noise term, it may be
possible to predict future measurements of c₀ and to place realistic
confidence limits on the prediction [11]. For the case considered here,
however, these confidence limits tend to infinity when the prediction
interval is increased. Thus, in a certain sense, c₀ is "measurable"
but it is not in statistical control (to use the language of the quality
control engineer [9]).
V. Translations Among Frequency Stability Measures

A. Frequency Domain to Time Domain .

1. General . It is of value to define r = T/τ; that is, r is the
ratio of the time interval between successive measurements to the duration
of the averaging period. Cutler has shown (see Appendix A) that

    ⟨σ_y²(N, T, τ)⟩ = (N/(N−1)) ∫₀^∞ S_y(f) [sin²(πfτ)/(πfτ)²] [1 − sin²(Nπrfτ)/(N² sin²(πrfτ))] df.    (23)

Equation (23) in principle allows one to calculate the time domain stability
⟨σ_y²(N, T, τ)⟩ from the frequency domain stability S_y(f).
2. Specific model . A model which has been found useful [7, 9,
10, 11, 12] consists of a set of five independent noise processes, z_n(t),
n = −2, −1, 0, 1, 2, such that

    y(t) = z₋₂(t) + z₋₁(t) + z₀(t) + z₁(t) + z₂(t),    (24)

and the spectral density of z_n is given by

    S_{z_n}(f) = { h_n f^n,  0 ≤ f ≤ f_h
                 { 0,        f > f_h,     n = −2, −1, 0, 1, 2,    (25)

where the h_n are constants. Thus, S_y(f) becomes

    S_y(f) = h₋₂ f⁻² + h₋₁ f⁻¹ + h₀ + h₁ f + h₂ f²    (26)
for 0 ≤ f ≤ f_h, and S_y(f) is assumed to be negligible beyond this range.
In effect, each z_n contributes to both S_y(f) and ⟨σ_y²(N, T, τ)⟩ independently
of the other z_n. The contributions of the z_n to ⟨σ_y²(N, T, τ)⟩ are tabulated
in Appendix B.

Any electronic device has a finite bandwidth and this certainly
applies to frequency measuring equipment also. For fractional frequency
fluctuations, y(t), whose spectral density varies as

    S_y(f) ~ f^α,  α ≥ 1,    (27)

for the higher Fourier components, one sees (from Appendix A) that
⟨σ_y²(N, T, τ)⟩ may depend on the exact shape of the frequency cutoff. This
is true because a substantial fraction of the noise "power" may be in
these higher Fourier components. As a simplifying assumption, this
paper assumes a sharp cutoff in noise "power" at the frequency f_h for
the noise models. It is apparent from the tables of Appendix B that the
time domain measure of frequency stability may depend on f_h in a very
important way, and, in some practical cases, the actual shape of the
frequency cutoff may be very important [?]. On the other hand, there are
many practical measurements where the value of f_h has little or no effect.
Good practice, however, dictates that the system noise bandwidth, f_h,
should be specified with any results.
In actual practice, the model of (24), (25), and (26) seems to fit
almost all real frequency sources. Typically, only two or three of the
h-coefficients are actually significant for a real device and the others can
be neglected. Because of its applicability, this model is used in much of
what follows. Since the z_n are assumed to be independent noises, it is
normally sufficient to compute the effects for a general z_n and recognize
that the superposition can be accomplished by simple additions for their
contributions to S_y(f) or ⟨σ_y²(N, T, τ)⟩.
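Because the z_n add independently, a time-domain stability figure for the model of (26) can be assembled term by term. The sketch below (Python; the function name is ours) sums standard ⟨σ_y²(2, τ, τ)⟩ contributions quoted in the frequency-stability literature; the h₋₂, h₋₁, and h₀ entries agree with the Appendix B results, while the h₁ and h₂ terms require the cutoff f_h with 2πf_hτ ≫ 1:

```python
import math

def allan_variance_power_law(tau, f_h, h=(0.0, 0.0, 0.0, 0.0, 0.0)):
    """Sum the standard sigma_y^2(tau) contributions of the five noise types
    in (26).  h = (h_minus2, h_minus1, h0, h1, h2); needs 2*pi*f_h*tau >> 1."""
    h_m2, h_m1, h0, h1, h2 = h
    return ((2 * math.pi ** 2 / 3) * h_m2 * tau            # random walk FM
            + 2 * math.log(2) * h_m1                       # flicker FM
            + h0 / (2 * tau)                               # white FM
            + (1.038 + 3 * math.log(2 * math.pi * f_h * tau))
              * h1 / (4 * math.pi ** 2 * tau ** 2)         # flicker PM
            + 3 * f_h * h2 / (4 * math.pi ** 2 * tau ** 2))  # white PM

# Example: a source dominated by white FM plus flicker FM.
print(allan_variance_power_law(tau=1.0, f_h=1e4, h=(0.0, 1e-26, 1e-24, 0.0, 0.0)))
```

Typically only the two or three significant h-coefficients of a real device are nonzero, and the rest are simply left at zero.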
B. Time Domain to Frequency Domain

1. General . For general ⟨σ_y²(N, T, τ)⟩ no simple prescription
is available for translation into the frequency domain. For this reason,
one might prefer S_y(f) as a general measure of frequency stability. This
is especially true for theoretical work.

2. Specific model . Equations (24), (25), and (26) form a
realistic model which fits the random, nondeterministic noises found on
most signal generators. Obviously, if this is a good model, then the
tables in Appendix B may be used (in reverse) to translate into the
frequency domain.

Allan [8] and Vessot [12] showed that if

    S_y(f) = { h_α f^α,  0 ≤ f ≤ f_h
             { 0,        f > f_h,    (28)

where α is a constant, then

    ⟨σ_y²(N, T, τ)⟩ ~ |τ|^μ,  2πτf_h ≫ 1,    (29)

for N and r = T/τ held constant. The constant μ is related to α by
the mapping shown† in Fig. 1. If (28) and (29) hold over a reasonable range
for a signal generator, then (28) can be substituted into (23) and evaluated
to determine the constant h_α from measurements of ⟨σ_y²(N, T, τ)⟩. It
should be noted that the model of (28) and (29) may be easily extended to a
superposition of similar noises as in (26).

† It should be noted that in Allan [8], the exponent corresponds to the
spectrum of phase fluctuations while variances are taken over average
frequency fluctuations. In the present paper, α is identical to the exponent
α + 2 in [8].
FIG. 1 THE μ VS. α MAPPING
C. Translations Among the Time Domain Measures

1. General. Since ⟨σ_y²(N, T, τ)⟩ is a function of N, T, and τ
(for some types of noise f_h is also important), it is very desirable to be
able to translate among different sets of N, T, and τ (f_h held constant);
this is, however, not possible in general.

2. Specific model . It is useful to restrict consideration to a
case described by (28) and (29). Superpositions of independent noises
with different power law types of spectral densities (i.e., different α's)
can also be treated by this technique, e.g., (26). One may define two
"bias functions," B₁ and B₂, by the relations [13]:

    B₁(N, r, μ) = ⟨σ_y²(N, T, τ)⟩ / ⟨σ_y²(2, T, τ)⟩    (30)

and

    B₂(r, μ) = ⟨σ_y²(2, T, τ)⟩ / ⟨σ_y²(2, τ, τ)⟩,    (31)

where r = T/τ and μ is related to α by the mapping of Fig. 1. In words,
B₁ is the ratio of the average variance for N samples to the average
variance for 2 samples (everything else held constant); while B₂ is the
ratio of the average variance with dead time between measurements
(r ≠ 1) to that of no dead time (r = 1 and with N = 2 and τ held constant).
These functions are tabulated in [13]; Figs. 2 and 3 show a computer plot
of B₁(N, r=1, μ) and B₂(r, μ).
Suppose one has an experimental estimate of ⟨σ_y²(N₁, T₁, τ₁)⟩ and
its spectral type is known; that is, (28) and (29) form a good model and μ
is known. Suppose also that one wishes to know the variance at some other
set of measurement parameters, N₂, T₂, τ₂. An unbiased estimate of
⟨σ_y²(N₂, T₂, τ₂)⟩ may be calculated by the equation:

    ⟨σ_y²(N₂, T₂, τ₂)⟩ = (τ₂/τ₁)^μ · [B₁(N₂, r₂, μ) B₂(r₂, μ)] / [B₁(N₁, r₁, μ) B₂(r₁, μ)] · ⟨σ_y²(N₁, T₁, τ₁)⟩,    (32)

where r₁ = T₁/τ₁ and r₂ = T₂/τ₂.

Fig. 2 THE BIAS FUNCTION, B₁(N, r=1, μ)

Fig. 3 THE BIAS FUNCTION, B₂(r, μ)
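To make (32) concrete, the sketch below (Python; names are ours) carries out the translation for the random-walk-FM case μ = 1, for which the Appendix B entry for ⟨σ_y²(N, T, τ)⟩ gives closed forms for the bias functions when r ≥ 1: B₁(N, r, 1) = [r(N+1) − 1]/(3r − 1) and B₂(r, 1) = (3r − 1)/2. Other noise types would need the tabulated values from [13]:

```python
def B1(N, r, mu=1):
    # Bias function B1, Eq. (30), for random-walk FM (mu = 1) and r >= 1.
    return (r * (N + 1) - 1.0) / (3.0 * r - 1.0)

def B2(r, mu=1):
    # Bias function B2, Eq. (31), for random-walk FM (mu = 1) and r >= 1.
    return (3.0 * r - 1.0) / 2.0

def translate(var1, N1, r1, tau1, N2, r2, tau2, mu=1):
    """Eq. (32): translate <sigma_y^2(N1, T1, tau1)> to parameters (N2, T2, tau2)."""
    return ((tau2 / tau1) ** mu
            * (B1(N2, r2, mu) * B2(r2, mu)) / (B1(N1, r1, mu) * B2(r1, mu))
            * var1)

# Doubling tau at N = 2, r = 1 doubles a random-walk-FM Allan variance (mu = 1):
print(translate(1.0, 2, 1.0, 1.0, 2, 1.0, 2.0))  # -> 2.0
```

The same skeleton applies to any μ once B₁ and B₂ are supplied for that noise type.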
3. General . While it is true that the concept of the bias
functions, B₁ and B₂, could be extended to other processes besides those
with the power-law types of spectral densities, this generalization has not
been done. Indeed, spectra of the form given in (28) (or superpositions
of such spectra as in (26)) seem to be the most common types of non-
deterministic noises encountered in signal generators and associated
equipment. For other types of fluctuations (such as causally generated
perturbations), translations must be handled on an individual basis.
VI. Applications of Stability Measures

Obviously, if one of the stability measures is exactly the important
parameter in the use of a signal generator, the stability measure's
application is trivial. Some nontrivial applications arise when one is
interested in a different parameter, such as in the use of an oscillator in
Doppler radar measurements or in clocks.
A. Doppler Radar .

1. General . From its transmitted signal, a Doppler radar
receives from a moving target a frequency-shifted return signal in the
presence of other large signals. These large signals can include clutter
(ground return) and transmitter leakage into the receiver (spillover).
Instabilities of radar signals result in noise energy on the clutter return,
on spillover, and on local oscillators in the equipment.

The limitations of subclutter visibility (SCV) rejections due to
the radar signals themselves are related to the rf power spectral density,
S_V(f). The quantity typically referred to is the carrier-to-noise ratio and
can be mathematically approximated by the quantity

    S_V(f) / ∫₀^∞ S_V(f′) df′.    (33)

The effects of coherence of target return and other radar parameters are
amply considered in the literature [14-17].
2. Special Case . Because FM effects generally predominate
over AM effects, this carrier-to-noise ratio is approximately given by [6]

    S_V(f) / ∫₀^∞ S_V(f′) df′ ≈ (1/2) S_φ(|f − ν₀|)

for many signal sources provided |f − ν₀| is sufficiently greater than zero.
(The factor of 1/2 arises from the fact that S_φ(f) is a one-sided spectrum.)
Thus, if f − ν₀ is a frequency separation from the carrier, the carrier-
to-noise ratio at that point is approximately

    (1/2) S_φ(|f − ν₀|) = [ν₀² / (2(f − ν₀)²)] S_y(|f − ν₀|).    (34)
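Numerically, (34) is a one-line conversion from the fractional-frequency spectral density to the carrier-to-noise level at a given offset from the carrier. A minimal sketch (Python; the function name and the numerical values are illustrative only):

```python
def carrier_to_noise(S_y, f, nu0):
    """Eq. (34): half the one-sided phase spectral density at offset |f - nu0|,
    obtained from the fractional-frequency spectral density S_y."""
    offset = abs(f - nu0)
    return nu0 ** 2 / (2.0 * offset ** 2) * S_y(offset)

# White FM with S_y(f) = 1e-24 viewed 100 Hz from a 10-GHz carrier:
print(carrier_to_noise(lambda f: 1e-24, 1e10 + 100.0, 1e10))  # -> 5e-09
```

Note the ν₀²/(f − ν₀)² weighting: the same S_y produces far more close-in noise power near the carrier than far from it.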
B. Clock Errors .

1. General . A clock is a device which counts the cycles of a
periodic phenomenon. Thus, the reading error x(t) of a clock run from
the signal given by (2) is

    x(t) = φ(t) / (2πν₀),    (35)

and the dimensions of x(t) are seconds.

If this clock is a secondary standard, then one could have
available some past history of x(t), the time error relative to the standard
clock. It often occurs that one is interested in predicting the clock error
x(t) for some future date, say t₀ + τ, where t₀ is the present date. Obviously,
this is a problem in pure prediction and can be handled by conventional
methods [3].
2. Special Case . Although one could handle the prediction of
clock errors by the rigorous methods of prediction theory, it is more
common to use simpler prediction methods [10, 11]. In particular, one
often predicts a clock error for the future by adding to the present error
a correction which is derived from the current rate of gain (or loss) of
time. That is, the predicted error x̂(t₀ + τ) is related to the past history
of x(t) by the equation

    x̂(t₀ + τ) = x(t₀) + τ [x(t₀) − x(t₀ − T)] / T.    (36)

It is typical to let T = τ. Thus, the mean square error of prediction for
T = τ becomes

    ⟨[x(t₀ + τ) − x̂(t₀ + τ)]²⟩ = ⟨[x(t₀ + τ) − 2x(t₀) + x(t₀ − τ)]²⟩,    (37)

which, with the aid of (11), can be written in the form

    ⟨[x(t₀ + τ) − x̂(t₀ + τ)]²⟩ = 2τ² σ_y²(τ).    (38)

One can define a time stability measure, σ_x(τ), by the equation

    σ_x²(τ) = τ² σ_y²(τ).    (39)

Clearly, however, the actual errors of prediction of clock readings are
dependent on the prediction algorithm used and the utility of such a definition
as σ_x(τ) is not great. Caution should be used in employing this definition.
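The linear predictor (36) with T = τ amounts to x̂(t₀ + τ) = 2x(t₀) − x(t₀ − τ), so its error is exactly the second difference appearing in (37). A small sketch (Python; the clock model and all values are illustrative) demonstrates this on deterministic clocks: a purely linear time error is predicted perfectly, while a quadratic term a·t² (a linear frequency drift) is missed by exactly 2aτ²:

```python
def predict(x, t0, tau):
    """Eq. (36) with T = tau: extrapolate the current rate of time gain/loss."""
    return 2.0 * x(t0) - x(t0 - tau)

# A clock with time offset x0, rate y0 and frequency drift: x(t) = x0 + y0*t + a*t^2
x0, y0, a = 3.0e-6, 1.0e-9, 5.0e-15
clock = lambda t: x0 + y0 * t + a * t ** 2

t0, tau = 1000.0, 100.0
err = clock(t0 + tau) - predict(clock, t0, tau)
print(err)  # second-difference error: exactly 2*a*tau**2
```

For a noisy clock the same second difference has mean square 2τ²σ_y²(τ), which is the content of (38).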
VII. Measurement Techniques for Frequency Stability

A. Heterodyne Techniques (general) . It is possible for oscillators
to be very stable and values of σ_y(τ) can be exceedingly small in some
state-of-the-art equipment. Thus, one often needs measuring techniques
capable of resolving very small fluctuations in y(t). One of the most
common techniques is a heterodyne or beat frequency technique. In this
method, the signal from the oscillator to be tested is mixed with a reference
signal of almost the same frequency as the test oscillator in order that one
is left with a lower average frequency for analysis without reducing the
frequency (or phase) fluctuations themselves. Following Vessot et al.
[18], consider an ideal reference oscillator whose output signal is

    V_r(t) = V_or sin(2πν₀t),    (40)

and a second oscillator whose output voltage V(t) is given by (2):
V(t) = [V₀ + ε(t)] sin[2πν₀t + φ(t)]. Let these two signals be mixed in a
product detector; that is, the output of the product detector v(t) is equal
to the product γV(t) × V_r(t), where γ is a constant (see Fig. 4).

Let v(t), in turn, be processed by a sharp, low-pass filter with
cutoff frequency f_h′ such that

    0 < f_h ≤ f_h′ ≪ ν₀.    (41)

One may write

    γ V(t) · V_r(t) = γ V_or (V₀ + ε) [sin 2πν₀t] [sin(2πν₀t + φ)],

    v(t) = γ (V_or V₀ / 2) (1 + ε/V₀) [cos φ − cos(4πν₀t + φ)].    (42)

Assume that cos[φ(t)] has essentially no power in Fourier frequencies f
in the region f ≥ f_h′. The effect of the low-pass filter then is to remove
the second term on the extreme right of (42); that is,

    v′(t) ≈ γ (V_or V₀ / 2) (1 + ε/V₀) cos φ(t).    (43)

This separation of terms by the filter is correct only if |φ̇(t)/(2πν₀)| ≪ 1
for all t (see (4)).

The following two cases are of interest:

Case I

The relative phase of the oscillators is adjusted so that |φ(t)| ≪ 1
(in-phase condition) during the period of measurement. Under these conditions,
    v′(t) ≈ γ (V_or V₀ / 2) (1 + ε/V₀),    (44)

since cos φ(t) ≈ 1. That is to say, one detects the amplitude noise ε(t)
of the signal.

Fig. 4 Block diagram of the heterodyne measurement: a reference oscillator and the test oscillator drive a product detector whose output is low-pass filtered.
Case II

The relative phase of the oscillators is adjusted to be in
approximate quadrature; that is,

    φ′(t) = φ(t) + π/2,    (45)

where |φ′(t)| ≪ 1. Under these conditions,

    cos φ(t) = sin φ′(t) ≈ φ′(t)    (46)

and

    v′(t) ≈ (γ/2) V_or V₀ φ′(t) + (γ/2) V_or φ′(t) ε(t).    (47)

If it is true that |ε(t)/V₀| ≪ 1 for all t (see (3)), then (47) becomes

    v′(t) ≈ (γ/2) V_or V₀ φ′(t);    (48)
that is, v′(t) is proportional to the phase fluctuations. Thus, in order to
observe φ′(t) by this method, (3) and (4) must be valid. For different
average phase values, mixtures of amplitude and phase noise are observed.

In order to maintain the two signals in quadrature for long
observational periods, the reference oscillator can be a voltage controlled
oscillator (VCO) and one may feed back the phase error voltage (as defined
in (48)) to control the frequency of the VCO [19]. In this condition of the
phase-locked oscillator, the voltage v′(t) is the analog of the phase
fluctuations for Fourier frequencies above the loop cutoff frequency of the
locked loop. For Fourier frequencies below the loop cutoff frequency of
the loop, v′(t) is the analog of frequency fluctuations. In practice, one
should measure the complete servo loop response.
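The quadrature case is easy to check in simulation: multiply the two signals, low-pass the product, and compare the result with the phase fluctuation. In the sketch below (Python with NumPy; all parameter values and the crude moving-average filter are our own choices), the reference is sin 2πν₀t and quadrature places the test signal at 2πν₀t + π/2 + φ′, so with γ = V_or = V₀ = 1 the filtered product should track −φ′(t)/2:

```python
import numpy as np

fs, nu0 = 50_000.0, 500.0                    # sample rate (Hz), carrier (Hz)
t = np.arange(int(fs)) / fs                  # 1 s of data
phi = 0.05 * np.sin(2 * np.pi * 2.0 * t)     # small, slow phase fluctuation phi'(t)

v_ref = np.sin(2 * np.pi * nu0 * t)                    # reference oscillator, (40)
v_sig = np.sin(2 * np.pi * nu0 * t + np.pi / 2 + phi)  # test signal in quadrature
v = v_ref * v_sig                                      # product detector output

# Low-pass: average over one period of the 2*nu0 beat (fs/(2*nu0) = 50 samples).
w = int(fs / (2 * nu0))
v_lp = np.convolve(v, np.ones(w) / w, mode="same")

# Away from the edges the filtered output tracks -phi'(t)/2, as in (48).
core = slice(1000, -1000)
print(np.max(np.abs(v_lp[core] + phi[core] / 2)))  # small residual
```

A real measurement replaces the moving average with the sharp filter of (41) and, for long runs, closes the phase-lock loop around the VCO as described above.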
B. Period Measurement . Assume one has an oscillator whose
voltage output may be represented by (2). If |ε(t)/V₀| ≪ 1 for all t and
the total phase, Φ(t) = 2πν₀t + φ(t) (see (5)), is a monotonic function of
time (that is, |φ̇(t)/(2πν₀)| < 1), then the time τ between successive
positive going zero crossings of V(t) is related to the average frequency
during the interval τ; specifically,

    1/τ = ν₀(1 + ȳ_n).    (49)

If one lets τ be the time between a positive going zero crossing of V(t)
and the M-th successive positive going zero crossing, then

    M/τ = ν₀(1 + ȳ_n).    (50)

If the variations Δτ of the period are small compared to the average
period τ₀, Cutler and Searle [?] have shown that one may make a reasonable
approximation to ⟨σ_y²(N, T, τ₀)⟩ using period measurements.
C. Period Measurement with Heterodyning . Suppose that φ(t) is
a monotonic function of time. The output of the filter of Sec. VII.A, (43),
becomes

    v′(t) ≈ γ (V_or V₀ / 2) cos φ(t)    (51)

if |ε(t)/V₀| ≪ 1. Then one may measure the period τ of two successive
positive zero crossings of v′(t). Thus

    1/τ = ν₀ |ȳ_n|,    (52)

and for the M-th positive crossover

    M/τ = ν₀ |ȳ_n|.    (53)
The magnitude bars appear because cos φ(t) is an even function
of φ(t). It is impossible to determine by this method alone whether φ is
increasing with time or decreasing with time. Since ȳ_n may be very
small for very good oscillators, τ may be quite long and thus measurable
with a good relative precision.

If the phase, φ(t), is not monotonic, the true ȳ may be near
zero but one could still have many zeros of cos φ(t) and thus (52) and (53)
would not be valid.
D. Frequency Counters . Assume the phase (either Φ or φ) is a
monotonic function of time. If one counts the number M of positive
going zero crossings in a period of time τ, then the average frequency
of the signal is M/τ. If we assume that the signal is V(t) (as defined in
(2)), then

    M/τ = ν₀(1 + ȳ_n).    (54)

If we assume that the signal is v′(t) (as defined in (48)), then

    M/τ = ν₀ |ȳ_n|.    (55)

Again, one measures only positive frequencies.
E. Frequency Discriminators . A frequency discriminator is a
device which converts frequency fluctuations into an analog voltage by
means of a dispersive element. For example, by slightly detuning a resonant
circuit from the signal V(t), the frequency fluctuations φ̇(t)/(2π) are con-
verted to amplitude fluctuations of the output signal. Provided the input
amplitude fluctuations ε(t)/V₀ are insignificant, the output amplitude
fluctuations can be a good measure of the frequency fluctuations. Obviously,
more sophisticated frequency discriminators exist (e.g., the cesium beam).

From the analog voltage one may use analog spectrum analyzers
to determine S_y(f), the frequency stability. By converting to digital data,
other analyses are possible on a computer.
F. Common Hazards .

1. Errors caused by signal processing equipment . The intent
of most frequency stability measurements is to evaluate the source and not
the measuring equipment. Thus, one must know the performance of the
measuring system. Of obvious importance are such aspects of the measuring
equipment as noise level, dynamic range, resolution (dead time), and fre-
quency range.

It has been pointed out that the noise bandwidth f_h is very
essential for the mathematical convergence of certain expressions. Insofar
as one wants to measure the signal source, one must know that the measuring
system is not limiting the frequency response. At the very least, one must
recognize that the frequency limit of the measuring system may be a very
important, implicit parameter for either σ_y(τ) or S_y(f). Indeed, one
must account for any deviations of the measuring system from ideality,
such as a "non-flat" frequency response of the spectrum analyzer itself.

Almost any electronic circuit which processes a signal will,
to some extent, convert amplitude fluctuations at the input terminals into
phase fluctuations at the output. Thus, AM noise at the input will cause
a time varying phase (or FM noise) at the output. This can impose important
constraints on limiters and automatic gain control (AGC) circuits when good
frequency stability is needed. Similarly, this imposes constraints on equip-
ment used for frequency stability measurements.

2. Analog spectrum analyzers (Frequency Domain) . Typical
analog spectrum analyzers are very similar in design to radio receivers
of the superheterodyne type, and thus certain design features are quite
similar. For example, image rejection (related to predetection bandwidth)
is very important. Similarly, the actual shape of the analyzer's frequency
window is important since this affects spectral resolution. As with receivers,
dynamic range can be critical for the analysis of weak signals in the presence
of substantial power in relatively narrow bandwidths (e.g., 60 Hz).
The slewing rate of the analyzer must be consistent with the
analyzer's frequency window and the post-detection bandwidth. If one has
a frequency window of 1 hertz, one cannot reliably estimate the intensity
of a bright line unless the slewing rate is much slower than 1 hertz/second.
Additional post-detection filtering will further reduce the maximum usable
slewing rate.

3. Spectral density estimation from time domain data . It is
beyond the scope of this paper to present a comprehensive list of hazards
for spectral density estimation; one should consult the literature [2-5].
There are a few points, however, which are worthy of special notice:

a. Data aliasing (similar to predetection bandwidth problems).

b. Spectral resolution.

c. Confidence of the estimate.
4. Variances of frequency fluctuations, σ_y²(τ) . It is not un-
common to have discrete frequency modulation of a source such as that
associated with the power supply frequencies. The existence of discrete
frequencies in S_y(f) can cause σ_y(τ) to be a very rapidly changing function
of τ. An interesting situation results when τ is an exact multiple of the
period of the modulation frequency (e.g., one makes τ = 1 second and
there exists 60 Hz frequency modulation on the signal). In this situation,
σ_y(τ = 1 second) can be very optimistic relative to values with slightly
different values of τ.

One also must be concerned with the convergence properties of
σ_y(τ) since not all noise processes will have finite limits to the estimates
of σ_y(τ) (see Appendix A). One must be as critically aware of any "dead
time" in the measurement process as of the system bandwidth.
5. Signal source and loading . In measuring frequency stability,
one should specify the exact location in the circuit from which the signal
is obtained and the nature of the load used. It is obvious that the transfer
characteristics of the device being specified will depend on the load and
that the measured frequency stability might be affected. If the load itself
is not constant during the measurements, one expects large effects on
frequency stability.
6. Confidence of the estimate . As with any measurement in
science, one wants to know the confidence to assign to numerical results.
Thus, when one measures S_y(f) or σ_y(τ), it is important to know the
accuracies of these estimates.

a. The Allan Variance . It is apparent that a single sample
variance, σ_y²(2, T, τ), does not have good confidence, but, by averaging
many independent samples, one can improve the accuracy of the estimate
greatly. There is a key point in this statement: "independent samples."
For this argument to be true, it is important that one sample variance be
independent of the next. Since σ_y²(2, τ, τ) is related to the first difference
of the frequency (see (11)), it is sufficient that the noise perturbing y(t)
have "independent increments," i.e., that y(t) be a random walk. In
other words, it is sufficient that S_y(f) ~ f⁻² for low frequencies. One
can show that for noise processes which are more divergent at low fre-
quencies than f⁻², it is difficult (or impossible) to gain good confidence
on estimates of σ_y²(τ). For noise processes which are less divergent
than f⁻², no problem exists.

It is worth noting that if we were interested in σ_y²(N = ∞, τ, τ),
then the limit noise would become S_y(f) ~ f⁻¹ instead of f⁻² as it is for
σ_y²(2, τ, τ). Since most real signal generators possess low frequency
divergent noises, ⟨σ_y²(2, τ, τ)⟩ is more useful than σ_y²(N = ∞, τ, τ).
Although the sample variances, σ_y²(2, τ, τ), will not be normally
distributed, the variance of the average of m independent (non-overlapping)
samples of σ_y²(2, τ, τ) (i.e., the variance of the Allan Variance) will decrease
as 1/m provided the conditions on low frequency divergence are met. For
sufficiently large m, the distribution of the m-sample averages of
σ_y²(2, τ, τ) will tend toward normal (central limit theorem). It is, thus,
possible to estimate confidence intervals based on the normal distribution.

As always, one may be interested in τ values approaching the
limits of available data. Clearly, when one is interested in τ-values of
the order of a year, one is severely limited in the size of m, the number
of samples of σ_y²(2, τ, τ). Unfortunately, there seems to be no substitute
for many samples and one extends τ at the expense of confidence in the
results. "Truth in packaging" dictates that the sample size m be stated
with the results.
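The m-sample averaging just described is straightforward to implement. The sketch below (Python with NumPy; it assumes the input is a series of adjacent fractional-frequency averages ȳ_k with no dead time) averages non-overlapping two-sample variances; for white-FM-like data the average approaches ⟨(ȳ₂ − ȳ₁)²⟩/2 with a spread that shrinks roughly as 1/√m:

```python
import numpy as np

def allan_variance(ybar):
    """Average of m non-overlapping two-sample variances (y2 - y1)^2 / 2."""
    pairs = ybar[1::2]
    d = pairs - ybar[0::2][: len(pairs)]
    return 0.5 * np.mean(d ** 2), len(d)

rng = np.random.default_rng(0)
y = rng.standard_normal(100_000)   # white FM: uncorrelated frequency averages
avar, m = allan_variance(y)
print(avar, m)                     # avar near 1.0; m = 50000 samples averaged
```

Reporting m along with the result is exactly the "truth in packaging" requirement stated above.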
b. Spectral Density . As before, one is referred to the
literature for discussions of spectrum estimation [2-5]. It is worth pointing
out, however, that for S_y(f) there are basically two different types of
averaging which can be employed: sample averaging of independent estimates
of S_y(f), and frequency averaging where the resolution bandwidth is made
much greater than the reciprocal data length.
VIII. Conclusions

A good measure of frequency stability is the spectral density, S_y(f),
of fractional frequency fluctuations, y(t). An alternative is the expected
variance of N sample averages of y(t) taken over a duration τ. With
the beginning of successive sample periods spaced every T units of time,
the variance is denoted by σ_y²(N, T, τ). The stability measure, then, is the
expected value of many measurements of σ_y²(N, T, τ) with N = 2 and T = τ;
that is, σ_y²(τ). For all real experiments one has a finite bandwidth. In
general, the time domain measure of frequency stability, σ_y²(τ), is
dependent on the noise bandwidth of the system. Thus, there are four
important parameters to the time domain measure of frequency stability:

N, the number of sample averages (N = 2 for preferred measure);
T, the repetition time for successive sample averages (T = τ for
preferred measure);
τ, the duration of each sample average; and
f_h, the system noise bandwidth.

Translations among the various stability measures for common
noise types are possible, but there are significant reasons for choosing
N = 2 and T = τ for the preferred measure of frequency stability in the
time domain. This measure, the Allan Variance (N = 2), has been
referenced by [12, 20-22], and more.

Although S_y(f) appears to be a function of the single variable f,
actual experimental estimation procedures for the spectral density involve
a great many parameters. Indeed, its experimental estimation can be at
least as involved as the estimation of σ_y(τ).
APPENDIX A

We want to derive (23) in the text. Starting from (10) in the text,
we have

    ⟨σ_y²(N, T, τ)⟩ = ⟨ (1/(N−1)) Σ_{n=1}^N [ȳ_n − (1/N) Σ_{j=1}^N ȳ_j]² ⟩

    = (1/((N−1)τ²)) { Σ_{n=1}^N ∫_{t_n}^{t_n+τ} dt″ ∫_{t_n}^{t_n+τ} dt′ ⟨y(t′) y(t″)⟩
          − (1/N) Σ_{i=1}^N Σ_{j=1}^N ∫_{t_i}^{t_i+τ} dt″ ∫_{t_j}^{t_j+τ} dt′ ⟨y(t′) y(t″)⟩ },    (A1)

where (9) has been used. Now

    ⟨y(t′) y(t″)⟩ = R_y(t′ − t″),    (A2)
where R_y(τ) is the autocorrelation function of y(t) and is the Fourier
transform of S_y(f), the power spectral density of y(t). Equation (A2) is
true provided that y(t) is stationary (at least in the wide or covariance
sense), and that the average exists. If we assume the power spectral
density of y(t), S_y(f), has low and high frequency cutoffs f_ℓ and f_h (if
necessary) so that

    ∫₀^∞ S_y(f) df  exists,

then if y is a random variable, the average does exist and we may safely
assume stationarity.
In practice, the high frequency cutoff, f_h, is always present either
in the device being measured or in the measuring equipment itself. When
the high frequency cutoff is necessary for convergence of integrals of
S_y(f) (or is too low in frequency), the stability measure will depend on f_h.
The latter case can occur when the measuring equipment is too narrow
band. In fact, a useful type of spectral analysis may be done by varying
f_h purposefully [18].

The low frequency cutoff f_ℓ may be taken to be much smaller than
the reciprocal of the longest time of interest. The results of calculations
as well as measurements will be meaningful if they are independent of f_ℓ
as f_ℓ approaches zero. The range of exponents in power law spectral
densities for which this is true will be discussed and are given in Fig. 1.
To continue, the derivation requires the Fourier transform relation-
ships between the autocorrelation function and the power spectral density:

    S_y(f) = 4 ∫₀^∞ R_y(τ) cos(2πfτ) dτ,

    R_y(τ) = ∫₀^∞ S_y(f) cos(2πfτ) df.    (A3)
Using (A3) and (A2) in (A1) gives

    ⟨σ_y²(N, T, τ)⟩ = (1/((N−1)τ²)) { Σ_{n=1}^N ∫₀^∞ df S_y(f) ∫_{t_n}^{t_n+τ} dt″ ∫_{t_n}^{t_n+τ} dt′ cos 2πf(t′ − t″)
          − (1/N) Σ_{i=1}^N Σ_{j=1}^N ∫₀^∞ df S_y(f) ∫_{t_i}^{t_i+τ} dt″ ∫_{t_j}^{t_j+τ} dt′ cos 2πf(t′ − t″) }

    = (1/((N−1)τ²)) { Σ_{n=1}^N ∫₀^∞ df S_y(f) [sin²(πfτ)/(πf)²]
          − (1/N) Σ_{i=1}^N Σ_{j=1}^N ∫₀^∞ df [S_y(f)/(2πf)²] [2 cos 2πfT(j−i) − cos 2πf(T(j−i) + τ) − cos 2πf(T(j−i) − τ)] }.    (A4)

(The interchanges in order of integration are permissible here since the
integrals are uniformly convergent with the given restrictions on S_y(f).)
The first summation in the curly brackets is independent of the summation
index n and thus gives just

    N ∫₀^∞ df S_y(f) [sin²(πfτ)/(πf)²].    (A5)

The kernel in the second term in the curly brackets may be further
simplified:

    2 cos 2πfT(j−i) − cos 2πf[T(j−i) + τ] − cos 2πf[T(j−i) − τ] = 4 sin²(πfτ) cos 2πfT(j−i).    (A6)

The second term is then

    −(1/N) ∫₀^∞ df [S_y(f)/(πf)²] sin²(πfτ) Σ_{i=1}^N Σ_{j=1}^N cos 2πfT(j−i).    (A7)
(The interchange of summation and integration is justified.) We must
now do the double sum. Let

    j − i = k,    2πfT = x.    (A8)

Changing summation indices from i and j to i and k gives for the
sum

    S = Σ_{i=1}^N Σ_{j=1}^N cos x(j−i) = Σ_{i=1}^N Σ_{k=1−i}^{N−i} cos kx.    (A9)

The region of summation over the discrete variables i and k is
shown in Fig. 5 for N = 4.
The summand is independent of i so one may interchange the
order of summation and sum over i first. The summand is even in
k and the contributions for k < 0 are equal to those for k > 0, and so
we may pull out the term for k = 0 separately and write:

    S = 2 Σ_{k=1}^{N−1} cos kx Σ_{i=1}^{N−k} 1 + Σ_{i=1}^N 1 = 2 Σ_{k=1}^{N−1} (N−k) cos kx + N.    (A10)

This may be written as

    S = N + 2 Re{ [N − (1/i)(d/dx)] Σ_{k=1}^{N−1} e^{ikx} },    (A11)

where Re[U] means the real part of U and d/dx is the differential
operator. The series is a simple geometric series and may be summed
easily, giving
Fig. 5 Region of Summation for i and k for N = 4
    S = N + 2 Re{ [N − (1/i)(d/dx)] (e^{ix} − e^{iNx})/(1 − e^{ix}) }

      = N + 2 Re{ [1 − e^{iNx} − N(1 − e^{ix})] / (4 sin²(x/2)) }

      = sin²(Nx/2) / sin²(x/2).    (A12)
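The closed form (A12) is easy to check directly against the double sum (A9). A few lines of Python (standard library only; the function names are ours):

```python
import math

def double_sum(N, x):
    # S from (A9): sum over i, j = 1..N of cos(x * (j - i))
    return sum(math.cos(x * (j - i))
               for i in range(1, N + 1) for j in range(1, N + 1))

def closed_form(N, x):
    # S from (A12): sin^2(N x / 2) / sin^2(x / 2)
    return (math.sin(N * x / 2) / math.sin(x / 2)) ** 2

for N in (2, 3, 5, 8):
    for x in (0.3, 1.1, 2.7):
        assert abs(double_sum(N, x) - closed_form(N, x)) < 1e-9
print("A12 identity verified")
```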
Combining everything we get, after some rearrangement,

    ⟨σ_y²(N, T, τ)⟩ = (N/(N−1)) ∫₀^∞ df S_y(f) [sin²(πfτ)/(πfτ)²] [1 − sin²(Nπrfτ)/(N² sin²(πrfτ))],    (A13)

where r = T/τ. This is the result given in (23).
We can determine a number of things very easily from this
equation. First let us change variables. Let πfτ = u; then

    ⟨σ_y²(N, T, τ)⟩ = (N/((N−1)πτ)) ∫₀^∞ du S_y(u/(πτ)) [sin²u/u²] [1 − sin²(Nru)/(N² sin²(ru))].    (A14)

The kernel behaves like u² as u → 0 and like u⁻² as u → ∞.
Therefore ⟨σ_y²(N, T, τ)⟩ is convergent for power law spectral densities,
S_y(f) = h_α f^α, without any low or high frequency cutoffs, for −3 < α < 1.
Using (A14) for power law spectral densities we find

    ⟨σ_y²(N, T, τ)⟩ = τ^{−α−1} h_α C_α,  for −3 < α < 1,  where μ = −α − 1,

and

    C_α = (N/((N−1)π^{α+1})) ∫₀^∞ du u^{α−2} sin²u [1 − sin²(Nru)/(N² sin²(ru))].    (A15)

This is the basis for the plot in Fig. 1 in the text of μ vs. α. For
α ≥ 1 we must include the high frequency cutoff f_h.
For N = 2 and r = 1 the results are particularly simple. We
have

    ⟨σ_y²(2, τ, τ)⟩ = τ^{−α−1} h_α (2/π^{α+1}) ∫₀^∞ du u^{α−2} sin⁴u    (A16)

for power law spectral densities. For N = 2 and general r we get

    ⟨σ_y²(2, T, τ)⟩ = (1/(2πτ)) ∫₀^∞ du S_y(u/(πτ)) (1/u²) [1 − cos 2u − cos 2ru + (1/2) cos 2u(r+1) + (1/2) cos 2u(r−1)]

      = (2/(πτ)) ∫₀^∞ du S_y(u/(πτ)) [sin²u sin²(ru) / u²].    (A17)

The first form in (A17) is particularly simple and is also useful for r = 1
in place of (A16).
Let us discuss the case for α ≥ 1 in a little more detail. As
mentioned above, we must include the high frequency cutoff, f_h, for
convergence. The general behavior can be seen most easily from (A13).
After placing the factor τ⁻² outside the integral and combining the factor
f⁻² with S_y(f), we find that the remaining part of the kernel consists of
some constants and some oscillatory terms. If 2πf_hτ ≫ 1 it is apparent
that the rapidly oscillating terms contribute very little to the integral.
Most of the contribution comes from the integral over the constant term,
causing the major portion of the τ dependence to be the τ⁻² factor
outside the integral. This is the reason for the vertical slope at μ = −2
in the μ vs. α plot in Fig. 1 in the text.
One other point deserves some mention. The constant term of
the kernel discussed in the preceding paragraph is different for r = 1
from the value for r 7^ 1 . This is readily seen from (A17) for N = 2;
for r = 1 the constant term is 3/2 while for r 7^ 1 it is 1. This is
the reason that 6 (r1) which appears in some of the results of
Appendix B. In practice, 6 (r1) does not have zero width but is
39
smeared out over a width of approximately (ZTT^f^ t) . If there must be
dead time, r ?^ 1, it is wise to choose (r1) >> ZTTL t) '' or
h
(r1) <<(2 7Tf, T) • but M'ith ZTTf T >> 1. In the latter case, one may
h h
assume r » 1
.
40
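A short numerical check of (A16), added here in Python (the note itself contains no code). For α = 0 (white frequency noise) the integral ∫₀^∞ u^(α−2) sin⁴u du = ∫₀^∞ sin⁴u/u² du = π/4, so (A16) gives ⟨σ_y²(2, τ, τ)⟩ = h₀/(2τ). The truncation limit U and the grid are our choices; the truncated tail contributes at most about (3/8)/U.

```python
import numpy as np

# Midpoint-rule evaluation of the (A16) integral for alpha = 0:
#   I = integral_0^inf sin^4(u) / u^2 du = pi/4.
U, n = 2000.0, 2_000_000
du = U / n
u = (np.arange(n) + 0.5) * du              # midpoints avoid u = 0
I = np.sum(np.sin(u) ** 4 / u ** 2) * du   # truncation error ~ (3/8)/U

# (A16) with h_0 = 1, tau = 1, alpha = 0: <sigma_y^2(2, tau, tau)> = (2/pi) I
avar = (2.0 / np.pi) * I
print(I, np.pi / 4)                        # both close to 0.7854
print(avar)                                # close to h_0/(2 tau) = 0.5
```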
APPENDIX B

Let y(t) be a sample function of a random noise process with a spectral density S_y(f). The function y(t) is assumed to be pure real and S_y(f) is a one-sided spectral density relative to a cycle frequency (i.e., the dimensions of S_y(f) are those of y² per hertz). (For additional information see Appendix A, [7, 8, 18].)

Let x(t) be defined by the equation

    dx(t)/dt = y(t).                                                  (B1)

Define: t₀ is an arbitrary instant of time and

    t_{n+1} = t_n + T,   n = 0, 1, 2, ...,                            (B2)

    ȳ_n = (1/τ) ∫_{t_n}^{t_n+τ} y(t) dt = [x(t_n+τ) − x(t_n)]/τ,      (B3)

    ⟨ȳ⟩_N = (1/N) Σ_{n=1}^{N} ȳ_n,                                    (B4)

    σ_y²(N, T, τ) = [1/(N − 1)] Σ_{n=1}^{N} [ȳ_n − ⟨ȳ⟩_N]²,           (B5)

    σ_y²(τ) ≡ ⟨σ_y²(N = 2, T = τ, τ)⟩,                                (B6)

and let f_h be a high frequency cutoff (infinitely sharp) with 2πf_hτ >> 1.

Special Case:

    σ_y²(τ) = ⟨(ȳ₂ − ȳ₁)²/2⟩ = ⟨[x(t₀+2τ) − 2x(t₀+τ) + x(t₀)]²/(2τ²)⟩.   (B7)
Definition:

    D_x²(τ) ≡ ⟨[x(t₀+2τ) − 2x(t₀+τ) + x(t₀)]²⟩.                       (B8)

Consequence of Definitions:

    D_x²(τ) = 2τ² σ_y²(τ) = 2σ_x²(τ).                                 (B9)

Definition:

    ψ²(T, τ) ≡ ⟨[x(t₀+T+τ) − x(t₀+T) − x(t₀+τ) + x(t₀)]²⟩.            (B10)

Consequence of Definitions:

    ψ²(T, τ) = 2τ² ⟨σ_y²(2, T, τ)⟩.                                   (B11)

Special Case:

    ψ²(τ, τ) = D_x²(τ).                                               (B12)
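Definitions (B3) and (B7) translate directly into a time-domain estimator: difference the phase record twice at lag τ and average. The sketch below is in Python and is not part of the original note; the function name, the simulated white-FM record, and the correspondence h₀ = 2σ²τ₀ for samples already averaged over τ₀ are our illustrative assumptions. It checks the estimator against σ_y²(τ) = h₀/(2τ) from the white-y table below.

```python
import numpy as np

def allan_variance(x, tau0, m):
    """Estimate sigma_y^2(tau), tau = m*tau0, from phase samples x
    (spacing tau0), via the second difference of x as in (B7)/(B8):
        sigma_y^2(tau) = <[x(t+2 tau) - 2 x(t+tau) + x(t)]^2> / (2 tau^2)."""
    tau = m * tau0
    xk = x[::m]                               # phase at multiples of tau
    d2 = xk[2:] - 2.0 * xk[1:-1] + xk[:-2]    # second differences at lag tau
    return np.mean(d2 ** 2) / (2.0 * tau ** 2)

# White FM check: i.i.d. y_k ~ N(0, sigma^2), interpreted as averages over
# tau0, correspond to h_0 = 2 sigma^2 tau0, so sigma_y^2(m tau0) = sigma^2/m.
rng = np.random.default_rng(1)
tau0, sigma = 1.0, 1.0
y = rng.normal(0.0, sigma, 200_000)
x = np.concatenate(([0.0], np.cumsum(y))) * tau0   # x(t) = integral of y

for m in (1, 4, 16):
    print(m, allan_variance(x, tau0, m), sigma ** 2 / m)
```

The estimate should track σ²/m to within the sampling scatter of the finite record.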
RANDOM WALK y

    S_y(f) = h₋₂ f⁻²,    S_x(f) = h₋₂ (2πf)⁻² f⁻²,    r = T/τ

    ⟨σ_y²(N, T, τ)⟩ = h₋₂ · (2π)²τ/12 · [r(N+1) − 1],   r ≥ 1          (B13)

    ⟨σ_y²(N, τ, τ)⟩ = h₋₂ · (2π)²τ N/12,   r = 1                       (B14)

    σ_y²(τ) = h₋₂ · (2π)²τ/6,   N = 2, r = 1                           (B15)

    D_x²(τ) = 2σ_x²(τ) = h₋₂ · (2π)²τ³/3                               (B16)

    ψ²(T, τ) = h₋₂ · (2π)²τ³/6 · (3r − 1),    for r ≥ 1
             = h₋₂ · (2π)²τ³/6 · r²(3 − r),   for r ≤ 1                (B17)
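A quick simulation (Python; ours, not the paper's; the discrete random-walk model and the seed are illustrative assumptions) consistent with (B13)-(B15): for random-walk FM the two-sample variance grows linearly with τ, so doubling the averaging time should double the estimate.

```python
import numpy as np

# Random-walk FM: y(t) is itself a random walk. By (B15),
# sigma_y^2(tau) = h_{-2} (2 pi)^2 tau / 6, i.e. proportional to tau.
rng = np.random.default_rng(2)
tau0 = 1.0
y = np.cumsum(rng.normal(0.0, 1.0, 400_000))        # random-walk frequency
x = np.concatenate(([0.0], np.cumsum(y))) * tau0    # phase record

def avar(x, m):
    """Two-sample variance at tau = m*tau0 via second differences (B7)."""
    xk = x[::m]
    d2 = xk[2:] - 2.0 * xk[1:-1] + xk[:-2]
    return np.mean(d2 ** 2) / (2.0 * (m * tau0) ** 2)

for m in (8, 16, 32):
    print(m, avar(x, m))
print(avar(x, 32) / avar(x, 16))   # doubling tau should roughly double it
```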
FLICKER y

    S_y(f) = h₋₁ f⁻¹,    S_x(f) = h₋₁ (2πf)⁻² f⁻¹,    r = T/τ,  0 ≤ f ≤ f_h

    ⟨σ_y²(N, T, τ)⟩ = h₋₁ · [1/(N(N−1))] Σ_{n=1}^{N−1} (N − n) [−2(nr)² ln(nr)
                          + (nr+1)² ln(nr+1) + (nr−1)² ln|nr−1|]               (B18)

    ⟨σ_y²(N, τ, τ)⟩ = h₋₁ · N ln N/(N − 1),   r = 1                            (B19)

    σ_y²(τ) = h₋₁ · 2 ln 2,   N = 2, r = 1                                     (B20)

    D_x²(τ) = 2σ_x²(τ) = h₋₁ · 4τ² ln 2                                        (B21)

    ψ²(T, τ) = h₋₁ · τ² [−2r² ln r + (r+1)² ln(r+1) + (r−1)² ln|r−1|]          (B22)

    ψ²(T, τ) ≈ h₋₁ · 2τ² (3/2 + ln r),   for r >> 1
    ψ²(T, τ) ≈ h₋₁ · 2T² (3/2 − ln r),   for r << 1                            (B23)
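The r = 1 special case (B19) can be checked against the general sum (B18) by direct arithmetic. A small Python verification (ours, not part of the original; the helper name is an assumption) evaluates the bracketed sum at r = 1 for several N and compares it with N ln N/(N − 1):

```python
import numpy as np

def b18_sum(N, r):
    """Bracketed factor of h_{-1} in (B18); the nr = 1 term uses 0*ln 0 = 0."""
    total = 0.0
    for n in range(1, N):
        nr = n * r
        term = (nr + 1) ** 2 * np.log(nr + 1) - 2.0 * nr ** 2 * np.log(nr)
        if nr != 1.0:
            term += (nr - 1) ** 2 * np.log(abs(nr - 1))
        total += (N - n) * term
    return total / (N * (N - 1))

# At r = 1 the general flicker-y result (B18) should reduce to (B19):
for N in range(2, 11):
    print(N, b18_sum(N, 1.0), N * np.log(N) / (N - 1))
```

For N = 2 both give 2 ln 2, which is (B20).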
44
WHITE y (Random Walk x)
S (f) = h
y o X (Zirfr
r=T/T, ^i <£
Quantity Relation
<CT^(N, T, T))
h
Y"
• ^ "^ for r ^ 1
h • 7 r(N+l) T "^, for Nr ^ 1
6
<(J^(N, T, T))
,
h
; • T \ r = 1
y
h
^ • T "\ N = 2, r = 1
D^(T) = 2CT^(T)
X X
h • T
0^(T, T)
A.
h • T , for r ^ 1
h • T, for r ^ 1
(B24)
B25)
(B26)
(B27)
(B28)
FLICKER x

    S_y(f) = h₁ f,    S_x(f) = h₁ (2π)⁻² f⁻¹,    r = T/τ,  2πf_hτ >> 1,  2πf_hT >> 1,  0 ≤ f ≤ f_h

    ⟨σ_y²(N, T, τ)⟩ = h₁ · [1/((2π)²τ²)] { 2[γ + ln(2πf_hτ)]
            + [2/(N(N−1))] Σ_{n=1}^{N−1} (N − n) ln[n²r²/(n²r² − 1)] },   r > 1      (B29)

    ⟨σ_y²(N, τ, τ)⟩ = h₁ · [2/((2π)²τ² N(N−1))] { (N² − 1)[γ + ln(2πf_hτ)]
            − (N − 1) ln 2 + Σ_{n=2}^{N−1} (N − n) ln[n²/(n² − 1)] },   r = 1        (B30)

    σ_y²(τ) = h₁ · [1/((2π)²τ²)] { 3[γ + ln(2πf_hτ)] − ln 2 },   N = 2, r = 1        (B31)

    D_x²(τ) = 2σ_x²(τ) = h₁ · [2/(2π)²] { 3[γ + ln(2πf_hτ)] − ln 2 }                 (B32)

    ψ²(T, τ) = h₁ · [4/(2π)²] [γ + ln(2πf_hτ)],                for r >> 1
             = h₁ · [2/(2π)²] { 3[γ + ln(2πf_hτ)] − ln 2 },    r = 1
             = h₁ · [4/(2π)²] [γ + ln(2πf_hT)],                for r << 1            (B33)

Here γ = 0.5772... is Euler's constant.
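The constant in (B31) can be verified numerically from the spectral integral. With u = πfτ, σ_y²(τ) = (2h₁/(π²τ²)) ∫₀^X sin⁴u/u du with X = πf_hτ. The Python check below (ours, not part of the original; X and h₁ = τ = 1 are illustrative choices) compares this with the closed form of (B31):

```python
import numpy as np

gamma = 0.5772156649015329           # Euler's constant

# Flicker-x check of (B31): the spectral integral (midpoint rule) vs.
# (1/(2 pi)^2) * {3 [gamma + ln(2 pi f_h tau)] - ln 2}, h_1 = tau = 1.
X, n = 1000.0, 2_000_000             # X = pi f_h tau, so 2 pi f_h tau = 2X
du = X / n
u = (np.arange(n) + 0.5) * du
integral = np.sum(np.sin(u) ** 4 / u) * du

numeric = (2.0 / np.pi ** 2) * integral
closed = (3.0 * (gamma + np.log(2.0 * X)) - np.log(2.0)) / (2.0 * np.pi) ** 2
print(numeric, closed)               # agree to a few parts in 10^4
```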
46
WHITE X
s {£) = h r
y 2 V^X^^^ " (ZTlfy
r = T/T; 6 (r1) =
k
1 if r = 1
otherwise
27Tf T>> 1, ^ f ^ f^
h n
Quantity Relation
<a^ (N, T, T))
y
N + 6 (r1) "h
"2 N(277)^ T^
<CT^(N, T, T)>
y
h •
2
N + 1
N(27T)^ ,3
;' = i
/! /tX ^
'^h
; N = 2, r = 1
y (2 77)^ T^
D^ (T) :r 2cr^(T)
^^2
(277)=
J/)^(T, T)
'^2[^^\'^'],a.>
:B34)
(B35)
(B36)
(B37;
(B38)
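As with the flicker-x case, (B36) follows from the general spectral integral once 2πf_hτ >> 1. A short Python check (ours, not part of the original; the grid and the choices h₂ = τ = 1, f_h = 100 Hz are assumptions):

```python
import numpy as np

# White-x check of (B36): integrate the general spectral formula
#   sigma_y^2(tau) = 2 * integral_0^{f_h} S_y(f) sin^4(pi f tau)/(pi f tau)^2 df
# with S_y(f) = h_2 f^2 and compare with h_2 * 3 f_h / ((2 pi)^2 tau^2).
h2, tau, fh = 1.0, 1.0, 100.0        # 2 pi f_h tau ~ 628 >> 1
n = 1_000_000
df = fh / n
f = (np.arange(n) + 0.5) * df        # midpoint grid on (0, f_h)
integrand = 2.0 * h2 * f ** 2 * np.sin(np.pi * f * tau) ** 4 / (np.pi * f * tau) ** 2
numeric = np.sum(integrand) * df

closed = 3.0 * fh * h2 / ((2.0 * np.pi) ** 2 * tau ** 2)
print(numeric, closed)               # nearly identical at this f_h * tau
```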
REFERENCES

[1] E. T. Jaynes, "Information theory and statistical mechanics," Phys. Rev., vol. 108, sec. 15, pp. 171-190, October 1957.

[2] C. Bingham, M. D. Godfrey, and J. W. Tukey, "Modern techniques of power spectrum estimation," IEEE Trans. AU, vol. 15, pp. 56-66, June 1967.

[3] R. B. Blackman, Linear data smoothing and prediction in theory and practice, Addison-Wesley Publishing Co., 1965.

[4] R. B. Blackman and J. W. Tukey, The measurement of power spectra, New York: Dover, 1958.

[5] E. O. Brigham and R. E. Morrow, "The fast Fourier transform," IEEE Spectrum, vol. 4, pp. 63-70, December 1967.

[6] E. J. Baghdady, R. D. Lincoln, and B. D. Nelin, "Short-term frequency stability: theory, measurement, and status," Proc. IEEE-NASA Symp. on Short-term Frequency Stability (NASA SP-80), pp. 65-87, November 1964; also Proc. IEEE, vol. 53, pp. 704-722, 2110-2111, 1965.

[7] L. Cutler and C. Searle, "Some aspects of the theory and measurement of frequency fluctuations in frequency standards," Proc. IEEE, vol. 54, pp. 136-154, February 1966.

[8] D. W. Allan, "Statistics of atomic frequency standards," Proc. IEEE, vol. 54, pp. 221-230, February 1966.

[9] W. A. Shewhart, Economic control of quality of manufactured product, Van Nostrand Co., 1931, p. 146.

[10] J. A. Barnes, "Atomic timekeeping and the statistics of precision signal generators," Proc. IEEE, vol. 54, pp. 207-220, February 1966.

[11] J. A. Barnes and D. W. Allan, "An approach to the prediction of coordinated universal time," Frequency, vol. 5, pp. 15-20, Nov.-Dec. 1967.

[12] R. F. C. Vessot et al., "An intercomparison of hydrogen and cesium frequency standards," IEEE Trans. I&M, vol. 15, pp. 165-176, December 1966.

[13] J. A. Barnes, "Tables of bias functions, B₁ and B₂, for variances based on finite samples of processes with power law spectral densities," NBS Technical Note 375, January 1969.

[14] D. B. Leeson and G. F. Johnson, "Short-term stability for a Doppler radar: requirements, measurements, and techniques," Proc. IEEE, vol. 54, pp. 244-248, February 1966.

[15] Radar Handbook, ed. M. I. Skolnik, McGraw-Hill, 1970, chapter 16 (W. K. Saunders).

[16] R. S. Raven, "Requirements on master oscillators for coherent radar," Proc. IEEE, vol. 54, pp. 237-243, February 1966.

[17] D. B. Leeson, "A simple model of feedback oscillator noise spectrum," Proc. IEEE, vol. 54, pp. 329-330, February 1966.

[18] R. F. C. Vessot, L. Mueller, and J. Vanier, "The specification of oscillator characteristics from measurements made in the frequency domain," Proc. IEEE, vol. 54, pp. 199-207, February 1966.

[19] Floyd M. Gardner, Phaselock techniques, New York: Wiley, 1966.

[20] C. Menoud, J. Racine, and P. Kartaschoff, "Atomic hydrogen maser work at L.S.R.H., Neuchatel, Switzerland," Proc. 21st Annual Symp. on Frequency Control, pp. 543-567, April 1967.

[21] A. G. Mungall, D. Morris, H. Daams, and R. Bailey, "Atomic hydrogen maser development at the National Research Council of Canada," Metrologia, vol. 4, pp. 87-94, July 1968.

[22] R. F. C. Vessot, "Atomic hydrogen masers, an introduction and progress report," Hewlett-Packard J., vol. 20, pp. 15-20, October 1968.
ACKNOWLEDGMENT

During the process of writing this manuscript and its many revisions, numerous people have added comments and encouragement for the authors. The authors are particularly indebted to Mr. D. W. Allan, Drs. D. Halford, S. Jarvis, and J. J. Filliben of the National Bureau of Standards. The authors are also indebted to Mrs. Carol Wright for her secretarial skills and patience in preparing the many revised copies of this paper.
designed for the industryoriented individual whose
daily work involves intimate contact with science and
technology
—
for engineers, chemists, physicists, re
search managers, productdevelopment managers, and
company executives. Annual subscription: Domestic,
$3.00; foreign, $4.00*.
• Difference in price is due to extra cost of foreign mailing.
Applied Mathematics Series. Mathematical tables,
manuals, and studies.
Building Science Series. Research results, test
methods, and performance criteria of building ma
terials, components, systems, and structures.
Handbooks. Recommended codes of engineering
and industrial practice (including safety codes) de
veloped in cooperation with interested industries,
professional organizations, and regulatory bodies.
Special Publications. Proceedings of NBS confer
ences, bibliographies, annual reports, wall charts,
pamphlets, etc.
Monographs. Major contributions to the technical
literature on various subjects related to the Bureau's
scientific and technical activities.
National Standard Reference Data Series.
NSRDS provides quantitive data on the physical
and chemical properties of materials, compiled from
the world's literature and critically evaluated.
Product Standards. Provide requirements for sizes,
types, quality and methods for testing various indus
trial products. These standards are developed coopera
tively with interested Government and industry groups
and provide the basis for common understanding of
product characteristics for both buyers and sellers.
Their use is voluntary.
Technical Notes. This series consists of communi
cations and reports (covering both other agency and
NBSsponsored work) of limited or transitory interest.
Federal Information Processing Standards Pub
lications. This series is the official publication within
the Federal Government for information on standards
adopted and promulgated under the Public Law
89306, and Bureau of the Budget Circular A86
entitled, Standardization of Data Elements and Codes
in Data Systems.
CLEARINGHOUSE
The Clearinghouse for Federal Scientific and
Technical Information, operated by NBS, supplies
unclassified information related to Governmentgen
erated science and technology in defense, space,
atomic energy, and other national programs. For
further information on Clearinghouse services, write:
Clearinghouse
U.S. Department of Commerce
Springfield, Virginia 22151
Order NBS publications from: Superintendent of Documents
Government Printing Office
Washington, D.C. 20402
UNITED STATES DEPARTMENT OF COMMERCE
Maurice H. Stans, Secretary
NATIONAL BUREAU OF STANDARDS • Lewis M. Branscomb, Director
NBS TECHNICAL NOTE 394
ISSUED OCTOBER 1970
Nat. Bur. Stand. (U.S.), Tech. Note 394, 50 pages (Oct. 1970)
CODEN: NBTNA
Chorocterization of Frequancy Sfability
J. A. Barnes
A. R. Chi
L. S. Cutler
D.J. Healey
D. B. Leeson
T. E. McGunigai
J. A. Mullen
W. L.Smith
R. Sydnor
R. F. C. Vessot
G. M. R.Winkler
*The authors of this paper are members of the Subcom
mittee on Frequency Stability of the Technical Committee
on Frequency and Time of the IEEE Group on Instrumen
tation & Measurement. See page ii.
NBS Technical Notes are designed to supplement the
Bureau's regular publications program. They provide
a means for making available scientific data that are
of transient or limited interest. Technical Notes may
be listed or referred to in the open literature.
For sale by the Superintendent of Documents, U.S. Government Printing Office, Washington, D.C. 20402
(Order by SD Catalog No. 013.46:394), Price 60 cents
Members
Subcorainittee on Frequency Stability
of the Technical Comraittee on Frequency and Time
of the
Institute of Electrical & Electronics Engineers
Group on Instrumentation & Measurement
J. A. Barnes, Chairman
Time and Frequency Division
Institute for Basic Standards
National Bureau of Standards
Boulder, Colorado 80302
A. R. Chi
National Aeronautical and Space
Administration
Greenbelt, Maryland 20771
L. S. Cutler
HewlettPackard Company
Palo Alto, California 94304
D. J. Healey
Westinghouse Electric Corporation
Baltimore, Maryland 2 1203
D. B. Leeson
California Microwave
Sunnyvale, California 94086
T. E. McGunigal
National Aeronautical and Space
Administration
Greenbelt, Maryland 20771
J. A. Mullen
Raytheon Company
Waltham, Massachusetts 02154
W. L. Smith
Bell Telephone Laboratories
Allentown, Pennsylvania 18103
R. Sydnor
Jet Propulsion Laboratory
Pasadena, California 91103
R. F.C. Vessot
Smithsonian Astrophysical Observatory
Cambridge, Massachusetts 01922
G.M.R. Winkler
Time Service Division
U. S. Naval Observatory
Washington, D. C. 20390
11
TABLE OF CONTENTS
Page
Glossary of Symbols v
Abstract ix
I. Introduction 1
II. Stateinent of the Problem . .^ 4
IIIj Background and Definitions 5
TV. The Definition of Measures of Frequency
Stability (Second Moment Type) 6
V. Translations Among Frequency Stability Measures .... 13
VI. Applications of Stability Measures 20
VII. Measurement Techniques for Frequency Stability .... 22
VIII. Conclusions 31
Appendix A 33
Appendix B 41
111
GLOSSARY OF SYMBOLS
B^(N, r, ^), B^{r,ll)
a
c , c
o 1
c(t)
D^(T)
X
_ w
277
g(t)
h
i, j, k, m, n
M
N
n(t)
R (t;
y
Bias function for variances based on finite
saraples of a process with a power law
spectral density. (See [13].)
A real constant defined by (A15).
Real constants.
A real, deterministic function of time.
Expected value of the squared second
difference of x(t) with lag time t. See
(B8).
Fourier frequency variable.
High frequency cutoff of an idealized infinitely
sharp cutoff, low pass filter.
Low frequency cutoff of an idealized infinitely
sharp cutoff, high pass filter.
A real function of time.
rOi .
Positive, real coefficient of f in a power
series expansion of the spectral density of
the function y(t).
Integers, often a dumray index of summation.
Positive integer giving the number of cycles
averaged.
Positive integer giving the number of data
points used in obtaining a sample variance.
A nondeterministic function of time.
Autocovariance function of y(t). See (A3).
Positive, real number defined by r = T/t.
S An intermediate term, used in deriving (23).
The definition of S is given by (A9).
S (f) Onesided (power) spectral density on a per
hertz basis of the pure real function g(t).
The dimensions of S (f) are the diraensions
ofg^(t)/f. §
S (f) A definition for the raea sure of frequency stability,
Onesided (power) spectral density of y(t)
on a per hertz basis. The dimensions of
S (f) are Hz"^.
y
T Tirae interval between the beginnings of
two successive measurements of average
frequency.
t Time variable.
t An arbitrary, fixed instant of time.
o
t The time coordinate of the beginning of the
kth measurement of average frequency.
By definition, t, = t, + T, k = 0, 1, 2* • • .
k+1 k
u Dummy variable of integration; u = 77 f t.
V(t) Instantaneous output voltage of signal
generator. See (2).
V . Nominal peak amplitude of signal generator
output. See (2).
V (t) Instantaneous voltage of reference signal.
^
. See (40).
V Peak amplitude of reference signal. See (40).
or
v(t) Voltage output of ideal product detector.
v (t) Low pass filtered output of product detector.
x(t) Real function of time related to the phase of
the signal V(t) by the equation
x(t) = .
o
vi
x{t) . A predicted value for x(t).
y(t) Fractional frequency offset of V(t) from
the norainal frequency. See (7).
y Average fractional frequency offset during
the kth measureraent interval. See (9),
(y) The saraple average of N successive values
of y . See (B4).
z (t) Nondeterministic (noise) function with
n
y
(power) spectral density given by (2 5).
0! Exponent of f for a power law spectral
density.
y Positive, real constant.
6 (r1) The Kronecker 6function defined by
K,
6^(rl) .
1, if r = 1
0, if otherwise
.
C(t) Amplitude fluctuations of signal. See (2).
/LI Exponent of t. See (2 9).
V{t) Instantaneous frequency of V(t). Defined by
V Norainal (constant) frequency of V(t).
o
X(t) The Fourier transforra of n(t).
CT (N, T, T) Sample variance of N averages of y(t).
each of duration t, and spaced every T
units of time. See (10).
(a (N, T, T)) Average value of the
saraple variance <7 (N, T, t).
la (T) A second choice of the definition for the measure
of frequency stability. Defined by
O^(T) = <cr^(N =2, T = T, T)> .
vii
U (T) Time stability measure defined by
X y
T Duration of averaging period of y(t) to
obtain y . See (9).
<^(t) Instantaneous phase of V(t). Defined by
*(t) = 2TTV t + (Pit).
o
(p(t) Instantaneous phase fluctuations about the
ideal phase, 2771^ t. See (2).
ij) (T, T) Mean square time error for Doppler radar,
^
See (BIO) .
CO = 2'n'f Angular Fourier frequency variable.
vm
CHARACTERIZATION OF FREQUENCY STABILITY
by
.
.
J. A. Barnes, A. R. Chi, L. S. Cutler,
D. J. Healey, D. B. Leeson, T. E. McGunigal,
J. A. Mullen, W. L. Smith, R. Sydnor,
'
R.F.C. Vessot, andG.M.R. Winkler
ABSTRACT
Consider a signal generator whose instantaneous output voltage V(t)
may be written as
V(t) = [V + €(t)] sin [27ri^ t + (p(t)]
where V and V are the nominal amplitude and frequency respectively
o o
T
d(p
of the output. Provided that €(t) and (p (t) = 7— are sufficiently small
for all time t, one raay define the fractional instantaneous frequency
deviation from nominal by the relation
O
A proposed definition for the measure of frequency stability is the
spectral density S (f) of the function y(t) where the spectrum is con
sidered to be onesided on a per hertz basis.
An alternative definition for the measure of stability is the infinite
time average of the sample variance of two adjacent averages of y(t);
that is, if
(.
'.!
k+T
where t is the averaging period, t , = t + T, k = 0, 1, 2* * ' , t is
arbitrary, and T is the time interval between the beginnings of two
successive measurements of average frequency; then the second measure
IX
of stability is
where ( ) denotes infinite time average and where T = T .
In practice, data records are of finite length and the infinite time
averages implied in the definitions are normally not available; thus estimates f
the two measures must be used. Estimates of S (f) would be obtained
y
from suitable averages either in the time domain or the frequency domain.
2An obvious estimate for 0" (t) is
y
m (v,  y, )
Parameters of the nn.easuring system and estimating procedure are
of critical importance in the specification of frequency stability. In
practice, one should experimentally establish confidence limits for an
estimate of frequency stability by repeated trials.
Key words: Allan variance; frequency; frequency stability; sample variance;
spectral density; variance.
CHARACTERIZATION OF FREQUENCY STABILITY
I. Introduction
The measurement of frequency and fluctuations in frequency has
received such great attention for so raany years that it is surprising that
the concept of frequency stability does not have a universally accepte<^
definition. At least part of the reason has been that some uses are most
readily described in the frequency domain and other uses in the time
doinain, as well as in combinations of the two. This situation is further
complicated by the fact that only recently have noise raodels been presented
which both adequately describe performance and allow a translation between
the time and frequency donaains. Indeed, only recently has it been recog
nized that there can be a wide discrepancy between commonlyused time
domain measures themselves. Following the NASAIEEE Symposium on
ShortTerm Stability in 1964 and the Special Issue on Frequency Stability
of the Proc. IEEE of February 1966, it now seems reasonable to propose
a definition of frequency stability. The present paper is presented as
technical background for an eventual IEEE standard definition.
This paper attempts to present (as concisely as practical) adequate,
self consistent definitions of frequency stability. Since more than one
definition of frequency stability is presented, an important part of this
paper (perhaps the most iraportant part) deals with translations among the
suggested definitions of frequency stability. The applicability of these
definitions to the more comraon noise models is demonstrated.
Consistent with an attempt to be concise, the references cited have
been selected on the basis of being of most value to the reader rather than
on the basis of being exhaustive. An exhaustive reference list covering
the subject of frequency stability would itself be a voluminous publication. .
Almost any signal generator is influenced to some extent by its
environment. Thus observed frequency instabilities raay be traced, for
exaraple, to changes in arabient temperature, supply voltages, magnetic
field, baroraetric pressure, humidity, physical vibration, or even ouput
loading to mention the more obvious. While these environmental influences
may be extremely important for many applications, the definition of fre
quency stability presented here is independent of these causal factors.
In effect, we cannot hope to present an exhaustive list of environraental
factors and a prescription for handling each even though, in some cases,
these environmental factors may be by far the most important. Given a
particular signal generator in a particular environment, one can obtain its
frequency stability with the measures presented below, but one should
not then expect an accurate prediction of frequency stability in a new
environraent.
It is natural to expect any definition of stability to involve various
statistical considerations such as stationarity, ergodicity, average,
variance, spectral density, etc. There often exist fundamental difficulties
in rigorous attempts to bring these concepts into the laboratory. It is
worth considering, specifically, the concept of stationarity since it is
a concept at the root of many statistical discussions.
A random, process is mathematically defined as stationary if every
translation of the time coordinate maps the ensemble onto itself. As a
necessary condition, if one looks at the ensemble at one instant of tirae,
t, the distribution in values within the ensemble is exactly the same as
at any other instant of time, t . This is not to imply that the elements
of the ensemble are constant in time, but, as one element changes value
with time, other elements of the ensemble assume the previous values.
Looking at it in another way, by observing the ensemble at some instant
of time, one can deduce no information as to when the particular instant
was chosen. This same sort of invariance of the joint distribution holds
for any set of times t, , t^, . . . , t and its translation t, + t, t^ + t, . . . ,
1 2 n 12
t + T.
n
It is apparent that any ensemble that has a finite past as well as a
finite future cannot be stationary, and this neatly excludes the real world
and anything of practical interest. The concept of stationarity does
violence to concepts of causality since we iraplicitly feel that current
performance (i.e., the applicability of stationary statistics) cannot be
logically dependent upon future events (i.e., if the process is terminated
sometime in the distant future). Also, the verification of stationarity
would involve hypothetical measurements which are not experimentally
feasible, and therefore the concept of stationarity is not directly relevant
to experimentation. '
Actually the utility of statistics is in the formation of idealized
models which reasonably describe significant observables of real systems.
One may, for example, consider a hypothetical ensemble of noises with
certain properties (such as stationarity) as a model for a particular real
device. If a model is to be acceptable, it should have at least two properties
First, the raodel should be tractable; that is, one should be able to easily
arrive at estimates for the elements of the model; and, second, the model
should be consistent with observables derived frora the real device which
it is simulating.
Notice that one does not need to know that the device was selected
from a stationary ensemble, but only that the observables derived from
the device are consistent with, say, elements of a hypothetically stationary
ensemble. Notice also that the actual model used may depend upon how
clever the experimenter theorist is in generating models.
It is worth noting, however, that while some texts on statistics give
"tests for stationarity, " these "tests" are almost always inadequate.
Typically, these "tests"' determine only if there is a substantial fraction
of the noise power in Fourier frequencies whose periods are of the same
order as the data length or longer. While this may be very iraportant, it
is not logically essential to the concept of stationarity. If a nonstationary
raodel actually becomes common, it will almost surely be because it is
useful or convenient and not because the process is "actually nonstationary.
Indeed, the phrase "actually non stationary" appears to have no meaning
in an operational sense. In short, stationarity (or nonstationarity) is a
property of models not a property of data [l].
Fortunately, many statistical raodels exist which adequately describe
most presentday signal generators; many of these models are considered
below. It is obvious that one cannot guarantee that all signal generators
are adequately described by these models, but the authors do feel they
are adequate for the description of most signal generators presently
encountered.
II. Statement of the Problem
To be useful, a measure of frequency stability must allow one to
predict performance of signal generators used in a wide variety of situations
as well as allow one to make meaningful relative comparisons among signal
generators. One must be able to predict perforinance in devices which
may iTiost easily be described either in the time domain, or in the frequency
domain, or in a combination of the two. This prediction of performance
raay involve actual distribution functions, and thus second raonaent measures
(such as power spectra and variances) are not totally adequate.
Two common types of equipment used to evaluate the performance
of a frequency source are (analog) spectrum, analyzers (frequency domain)
and digital, electronic counters (time domain). On occasion the digital
counter data are converted to power spectra by computers. One must
realize that any piece of equipraent siraultaneously has certain aspects
raost easily described in the time domain and other aspects most easily
described in the frequency doinain. For example, an electronic counter
has a high frequency limitation, and experimental spectra are determined
with finite time averages.
Research has established that ordinary oscillators demonstrate
noise which appears to be a superposition of causally generated signals
and random, nondeterrainistic noises. The random noises include
thermal noise, shot noise, noises of undeternjined origin (such as flicker
noise), and integrals of these noises.
One might well expect that for the more general cases one would)
need to use a nonstationary model (not stationary even in the wide sense,
i.e., the covariance sense). Nonstationarity would, however, introduce
significant difficulties in the passage between the frequency and time
domains. It is interesting to note that, so far, experimenters have seldom
found a non (covariance) stationary model useful in describing actual
oscillators.
In what follows, an attempt has been made to separate general
statements which hold for any noise or perturbation frora the stateraents
which apply only to specific models. It is important that these distinctions
be kept in mind.
III. Background and Definitions
To discuss the concept of frequency stability immediately implies
that frequency can change with time and thus one is not considering
Fourier frequencies (at least at this point). The conventional definition of
instantaneous (angular) frequency is the time rate of change of phase;
that is,
ZTTVit) = ^^^ H i(t) , • (1)
where ^ (t) is the instantaneous phase of the oscillator. This paper uses
the convention that time dependent frequencies of oscillators are denoted
by T^{t) (cycle frequency, hertz), and Fourier frequencies are denoted by
60 (angular frequency) or f (cycle frequency, hertz) where
to = ZTTf .
In order for (1) to have meaning, the phase ^(t) must be a well
defined function. This restriction immediately eliminates some "non
sinusoidal" signals such as a pure, random, uncorrelated ("white") noise.
For most real signal generators, the concept of phase is reasonably
amenable to an operational definition and this restriction is not serious.
Of great importance to this paper is the concept of spectral density,
S (f). The notation S (f) is to represent the onesided spectral density
g g
of the (pure real) function g(t) on a per hertz basis; that is, the total
"power" or mean square value of g(t) is given by
/
S (f)df .
Since the spectral density is such an important concept to what
follows, it is worthwhile to present some important references on spectrum
estiraation. There are raany references on the estimation of spectra
from data records, but worthy of special note are [2  5].
IV. The Definition of Measures of Frequency Stability (Second Moment Type) !
A. General . Consider a signal generator whose instantaneous
output voltage, V(t), may be written as
'
'
 I
V(t) = [V + e(t)] sin [2T71/ t + (p(t)], (2)
where V and V are the nominal amplitude and frequency respectively
o o
of the output and it is assuraed that
€(t)
V
< < 1
and
(P(t)
27TI/'
< < 1
(3)
(4)
for substantially all time t. Making use of (1) and (2) one sees that
^(t) = ZTTV t + (Pit) ,
o
(5)
and
l^(t) =
^o
+ ^ ^(t) . (6)
Equations (3) and (4) are essential in order that (p(t) may be defined
conveniently and unambiguously (see measurement section).
Since (4) must be valid even to speak of an instantaneous frequency,
there is no real need to distinguish stability measures from instability
measures. That is, any fractional frequency stability measure will be
far from unity, and the chance of confusion is slight. It is true that in
a very strict sense people usually measure instability and speak of stability.
Because the chances of confusion are so slight, the authors have chosen
to continue in the custom of measuring "instability" and speaking of stability
(a number always much less than unity).
Of significant interest to many people is the rf (radio frequency)
spectral density, S (f). This is of direct concern in spectroscopy and
radar. However, this is not a good primary measure of frequency stability
for two reasons: First, fluctuations in the amplitude, €(t), contribute
directly to S (f); and second, for many cases when €(t) is insignificant,
the rf spectrum, S (f), is not uniquely related to the frequency fluctuations
[6].
B. General: First definition of the measure of frequency stability -
frequency domain.

By definition, let

$$y(t) \equiv \frac{\dot\varphi(t)}{2\pi\nu_0}, \qquad (7)$$

where $\varphi(t)$ and $\nu_0$ are as in (2). Thus y(t) is the instantaneous fractional
frequency deviation from the nominal frequency $\nu_0$. A proposed definition
of frequency stability is the spectral density $S_y(f)$ of the instantaneous
fractional frequency fluctuations y(t). The function $S_y(f)$ has the
dimensions of Hz$^{-1}$.
One can show [7] that if $S_\varphi(f)$ is the spectral density of the phase
fluctuations, then

$$S_y(f) = \left(\frac{f}{\nu_0}\right)^2 S_\varphi(f). \qquad (8)$$

Thus a knowledge of the spectral density of the phase fluctuations, $S_\varphi(f)$,
allows a knowledge of the spectral density of the frequency fluctuations,
$S_y(f)$, the first definition of frequency stability. Of course, $S_y(f)$ cannot
be perfectly measured; this is the case for any physical quantity. Useful
estimates of $S_y(f)$ are, however, easily obtainable.
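As a purely illustrative numerical application of (8), the Python sketch below converts an assumed phase-noise spectral density into $S_y(f)$. The 5 MHz carrier and the flicker-phase level are hypothetical values chosen only for the example:

```python
import numpy as np

# Hypothetical one-sided phase-noise spectral density S_phi(f), rad^2/Hz.
# The 1e-12/f flicker-phase level is an arbitrary illustrative choice.
def S_phi(f):
    return 1e-12 / f

nu_0 = 5e6  # assumed nominal carrier frequency, Hz

# Equation (8): S_y(f) = (f / nu_0)^2 * S_phi(f)
def S_y(f):
    return (f / nu_0) ** 2 * S_phi(f)

f = np.logspace(0, 4, 5)   # Fourier frequencies from 1 Hz to 10 kHz
print(S_y(f))              # fractional-frequency spectral density, 1/Hz
```

Note that the $f^2$ weighting in (8) emphasizes the high Fourier frequencies of the phase noise.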
C. General: Second definition of the measure of frequency stability -
time domain.

The second definition is based on the sample variance of the fractional
frequency fluctuations. In order to present this measure of frequency
stability, define $\bar y_k$ by the relation

$$\bar y_k \equiv \frac{1}{\tau}\int_{t_k}^{t_k+\tau} y(t)\,dt
= \frac{\varphi(t_k+\tau) - \varphi(t_k)}{2\pi\nu_0\tau}, \qquad (9)$$

where $t_{k+1} = t_k + T$, $k = 0, 1, 2, \ldots$, $T$ is the repetition interval for
measurements of duration $\tau$, and $t_0$ is arbitrary. Conventional frequency
counters measure the number of cycles in a period $\tau$; that is, they measure
$\nu_0\tau(1 + \bar y_k)$. When $\tau$ is one second they count the number $\nu_0(1 + \bar y_k)$.
The second measure of frequency stability, then, is defined in analogy to
the sample variance by the relation

$$\left\langle\sigma_y^2(N, T, \tau)\right\rangle \equiv
\left\langle\frac{1}{N-1}\sum_{n=1}^{N}\left(\bar y_n - \frac{1}{N}\sum_{k=1}^{N}\bar y_k\right)^2\right\rangle, \qquad (10)$$

where $\langle g\rangle$ denotes the infinite time average of g. This measure of
frequency stability is dimensionless.
In many situations it would be wrong to assume that (10) converges
to a meaningful limit as $N \to \infty$. First, of course, one cannot practically
let N approach infinity and, second, it is known that some actual noise
processes contain substantial fractions of the total noise power in the
Fourier frequency range below one cycle per year. In order to improve
comparability of data, it is important to specify particular N and T.
For the preferred definition we recommend choosing N = 2 and T = $\tau$
(i.e., no dead time between measurements). Writing $\langle\sigma_y^2(N{=}2, T{=}\tau, \tau)\rangle$
as $\sigma_y^2(\tau)$, the Allan variance [8], the proposed measure of frequency
stability in the time domain may be written as

$$\sigma_y^2(\tau) \equiv \left\langle\frac{(\bar y_{k+1} - \bar y_k)^2}{2}\right\rangle \qquad (11)$$

for $T = \tau$.
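Given a finite record of adjacent frequency averages $\bar y_k$ (no dead time), (11) suggests the simple estimator sketched below in Python; the data here are synthetic white-frequency noise at an arbitrary level, purely for illustration:

```python
import numpy as np

def allan_variance(ybar):
    """Estimate sigma_y^2(tau) per (11): half the mean squared first
    difference of adjacent fractional-frequency averages."""
    d = np.diff(ybar)
    return 0.5 * np.mean(d ** 2)

rng = np.random.default_rng(42)
ybar = 1e-11 * rng.standard_normal(100_000)  # synthetic white-FM averages

avar = allan_variance(ybar)
# For white, uncorrelated ybar of variance s^2, <(y_{k+1}-y_k)^2>/2 = s^2.
print(avar)
```

For white frequency noise the estimate should approach the variance of the $\bar y_k$ themselves, here $(10^{-11})^2 = 10^{-22}$.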
Of course, the experimental estimate of $\sigma_y^2(\tau)$ must be obtained
from finite samples of data, and one can never obtain perfect confidence
in the estimate; the true time average is not realizable in a real
situation. One estimates $\sigma_y^2(\tau)$ from a finite number (say, m) of values
of $\sigma_y^2(2, \tau, \tau)$ and averages to obtain an estimate of $\sigma_y^2(\tau)$. Appendix A
shows that the ensemble average of $\sigma_y^2(2, \tau, \tau)$ is convergent (i.e., as
$m \to \infty$) even for noise processes that do not have convergent $\langle\sigma_y^2(N, \tau, \tau)\rangle$
as $N \to \infty$. Therefore, $\sigma_y^2(\tau)$ has greater utility as an idealization than
does $\langle\sigma_y^2(\infty, \tau, \tau)\rangle$ even though both involve assumptions of infinite averages.
In effect, increasing N causes $\sigma_y^2(N, T, \tau)$ to be more sensitive to the low
frequency components of $S_y(f)$. In practice, one must distinguish between
an experimental estimate of a quantity (say, of $\sigma_y^2(\tau)$) and its idealized
value. It is reasonable to believe that extensions to the concept of statistical
("quality") control [9] may prove useful here. One should, of course,
specify the actual number m of independent samples used for an estimate
of $\sigma_y^2(\tau)$.
In summary, therefore, $S_y(f)$ is the proposed measure of (instantaneous)
frequency stability in the (Fourier) frequency domain and $\sigma_y^2(\tau)$
is the proposed measure of frequency stability in the time domain.
D. Distributions. It is natural that people first become involved
with second moment measures of statistical quantities and only later with
actual distributions. This is certainly true with frequency stability.
While one can specify the argument of a distribution function to be, say,
$(\bar y_{k+1} - \bar y_k)$, it makes sense to postpone such a specification until a real
use has materialized for a particular distribution function. This paper
does not attempt to specify a preferred distribution function for frequency
fluctuations.
E. Treatment of Systematic Variations.

1. General. The definition of frequency stability $\sigma_y^2(\tau)$ in the
time domain is useful for many situations. However, some oscillators,
for example, exhibit an aging or almost linear drift of frequency with
time. For some applications, this trend may be calculated and should be
removed [8] before estimating $\sigma_y^2(\tau)$.

In general, a systematic trend is perfectly deterministic (i.e.,
predictable) while the noise is nondeterministic. Consider a function,
g(t), which may be written in the form

$$g(t) = c(t) + n(t), \qquad (12)$$

where c(t) is some deterministic function of time and n(t), the noise, is
a nondeterministic function of time. We will define c(t) to be the
systematic trend to the function g(t). A problem of significance here is
to determine when and in what sense c(t) is measurable.
2. Specific Case - Linear Drift. As an example, if we consider
a typical quartz crystal oscillator whose fractional frequency deviation is
y(t), we may let

$$g(t) = \dot y(t). \qquad (13)$$

With these conditions, c(t) is the drift rate of the oscillator and n(t) is
related to the frequency "noise" of the oscillator by a time derivative.
One sees that the time average of g(t) becomes

$$\frac{1}{T}\int_{t_0}^{t_0+T} g(t)\,dt = c_1 + \frac{1}{T}\int_{t_0}^{t_0+T} n(t)\,dt, \qquad (14)$$

where $c(t) = c_1$ is assumed to be the constant drift rate of the oscillator.
In order for $c_1$ to be an observable, it is natural to expect the average
of the noise term to vanish, that is, converge to zero.
It is instructive to assume [8, 10] that in addition to a linear
drift, the oscillator is perturbed by a flicker noise, i.e.,

$$S_y(f) = \begin{cases} h_{-1}|f|^{-1}, & 0 \le f \le f_h \\ 0, & f > f_h, \end{cases} \qquad (15)$$

where $h_{-1}$ is a constant (see Sec. V.A.2) and thus,

$$S_n(f) = \begin{cases} (2\pi f)^2\, h_{-1}|f|^{-1}, & 0 \le f \le f_h \\ 0, & f > f_h, \end{cases} \qquad (16)$$

for the oscillator we are considering. With these assumptions, it is seen
that

$$\lim_{T\to\infty}\frac{1}{T}\int_{t_k}^{t_k+T} n(t)\,dt = X(0) = 0, \qquad (17)$$

and that

$$\lim_{T\to\infty}\mathrm{Variance}\left[\frac{1}{T}\int_{t_k}^{t_k+T} n(t)\,dt\right] = 0, \qquad (18)$$

where X(f) is the Fourier transform of n(t). Since $S_n(0) = 0$, X(0) must
also vanish both in probability and in mean square. Thus, not only does
n(t) average to zero, but one may obtain arbitrarily good confidence on
the result by longer averages.
Having shown that one can reliably estimate the drift rate, $c_1$,
of this (common) oscillator, it is instructive to attempt to fit a straight
line to the frequency aging. That is, let

$$g(t) = y(t) \qquad (19)$$

and, thus,

$$g(t) = c_0 + c_1(t - t_0) + n'(t), \qquad (20)$$

where $c_0$ is the frequency intercept at $t = t_0$ and $c_1$ is the drift rate
previously determined. A problem arises here because
$$S_{n'}(f) = S_y(f) \qquad (21)$$

and

$$\lim_{T\to\infty}\mathrm{Variance}\left[\frac{1}{T}\int_{t_k}^{t_k+T} n'(t)\,dt\right] = \infty \qquad (22)$$

for the noise model we have assumed. This follows from the fact that the
(infinite N) variance of a flicker noise process is infinite [7, 8, 10]. Thus,
$c_0$ cannot be measured with any realistic precision, at least in an absolute
sense.
We may interpret these results as follows: After experimenting
with the oscillator for a period of time one can fit an empirical equation
to y(t) of the form

$$y(t) = c_0 + t c_1 + n'(t),$$

where n'(t) is nondeterministic. At some later time it is possible to
reevaluate the coefficients $c_0$ and $c_1$. According to what has been said,
the drift rate $c_1$ should be reproducible to within the confidence estimates
of the experiment regardless of when it is reevaluated. For $c_0$, however,
this is not true. In fact, the more one attempts to evaluate $c_0$, the larger
the fluctuations are in the result.
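The mechanics of separating the trend from the noise, per (20), can be sketched with a least-squares straight-line fit. The Python example below uses synthetic data with white (not flicker) noise so that both coefficients are well behaved; the drift rate, intercept, and noise level are arbitrary illustrative values:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(1000.0)          # measurement epochs, s
c0, c1 = 2e-10, 1e-13          # assumed intercept and drift rate
# y(t) = c0 + c1*t + n'(t), with white n'(t) for this illustration only;
# with flicker noise, as the text notes, c0 would not stabilize.
y = c0 + c1 * t + 5e-12 * rng.standard_normal(t.size)

# Least-squares straight-line fit, per (20) with t0 = 0.
c1_hat, c0_hat = np.polyfit(t, y, 1)

y_detrended = y - (c0_hat + c1_hat * t)  # residual noise, drift removed
print(c1_hat, c0_hat)
```

The detrended residual is what one would pass to a stability estimate such as $\sigma_y^2(\tau)$ after removing the systematic drift.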
Depending on the spectral density of the noise term, it may be
possible to predict future measurements of $c_0$ and to place realistic
confidence limits on the prediction [11]. For the case considered here,
however, these confidence limits tend to infinity when the prediction
interval is increased. Thus, in a certain sense, $c_0$ is "measurable"
but it is not in statistical control (to use the language of the quality
control engineer [9]).
V. Translations Among Frequency Stability Measures

A. Frequency Domain to Time Domain.

1. General. It is of value to define $r = T/\tau$; that is, r is the
ratio of the time interval between successive measurements to the duration
of the averaging period. Cutler has shown (see Appendix A) that

$$\left\langle\sigma_y^2(N, T, \tau)\right\rangle = \frac{N}{N-1}\int_0^\infty df\, S_y(f)\,
\frac{\sin^2(\pi f\tau)}{(\pi f\tau)^2}
\left[1 - \frac{\sin^2(\pi r f N\tau)}{N^2\sin^2(\pi r f\tau)}\right]. \qquad (23)$$

Equation (23) in principle allows one to calculate the time domain stability
$\langle\sigma_y^2(N, T, \tau)\rangle$ from the frequency domain stability $S_y(f)$.
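Equation (23) can be checked numerically. The Python sketch below integrates (23) on a midpoint grid for white frequency noise, $S_y(f) = h_0$ (a flat spectrum assumed for the check), with N = 2 and r = 1, where the closed-form result is $\sigma_y^2(\tau) = h_0/(2\tau)$. This is an illustrative check, not part of the original text:

```python
import numpy as np

def avar_from_Sy(Sy, N, r, tau, f_max=500.0, n=1_000_000):
    """Numerically evaluate (23) on a midpoint grid over 0 < f < f_max.
    Midpoints avoid f = 0 and the zeros of sin(pi*r*f*tau)."""
    df = f_max / n
    f = (np.arange(n) + 0.5) * df
    w = (np.sin(np.pi * f * tau) / (np.pi * f * tau)) ** 2
    bracket = 1.0 - (np.sin(np.pi * r * f * N * tau) ** 2
                     / (N ** 2 * np.sin(np.pi * r * f * tau) ** 2))
    return (N / (N - 1)) * np.sum(Sy(f) * w * bracket) * df

h0 = 1.0
result = avar_from_Sy(lambda f: h0 * np.ones_like(f), N=2, r=1.0, tau=1.0)
print(result)   # expect about h0 / (2 tau) = 0.5
```

The small residual discrepancy comes from truncating the integral at `f_max`, which plays the role of the high-frequency cutoff $f_h$.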
2. Specific model. A model which has been found useful [7, 9,
10, 11, 12] consists of a set of five independent noise processes, $z_n(t)$,
$n = -2, -1, 0, 1, 2$, such that

$$y(t) = z_{-2}(t) + z_{-1}(t) + z_0(t) + z_1(t) + z_2(t), \qquad (24)$$

and the spectral density of $z_n$ is given by

$$S_{z_n}(f) = \begin{cases} h_n f^n, & 0 \le f \le f_h \\ 0, & f > f_h, \end{cases}
\qquad n = -2, -1, 0, 1, 2, \qquad (25)$$

where the $h_n$ are constants. Thus, $S_y(f)$ becomes

$$S_y(f) = h_{-2}f^{-2} + h_{-1}f^{-1} + h_0 + h_1 f + h_2 f^2 \qquad (26)$$
for $0 \le f \le f_h$, and $S_y(f)$ is assumed to be negligible beyond this range.
In effect, each $z_n$ contributes to both $S_y(f)$ and $\langle\sigma_y^2(N, T, \tau)\rangle$ independently
of the other $z_n$. The contributions of the $z_n$ to $\langle\sigma_y^2(N, T, \tau)\rangle$ are tabulated
in Appendix B.
Any electronic device has a finite bandwidth and this certainly
applies to frequency measuring equipment also. For fractional frequency
fluctuations, y(t), whose spectral density varies as

$$S_y(f) \sim f^\alpha, \qquad \alpha \ge 1, \qquad (27)$$

for the higher Fourier components, one sees (from Appendix A) that
$\langle\sigma_y^2(N, T, \tau)\rangle$ may depend on the exact shape of the frequency cutoff. This
is true because a substantial fraction of the noise "power" may be in
these higher Fourier components. As a simplifying assumption, this
paper assumes a sharp cutoff in noise "power" at the frequency $f_h$ for
the noise models. It is apparent from the tables of Appendix B that the
time domain measure of frequency stability may depend on $f_h$ in a very
important way, and, in some practical cases, the actual shape of the
frequency cutoff may be very important [?]. On the other hand, there are
many practical measurements where the value of $f_h$ has little or no effect.
Good practice, however, dictates that the system noise bandwidth, $f_h$,
should be specified with any results.
In actual practice, the model of (24), (25), and (26) seems to fit
almost all real frequency sources. Typically, only two or three of the
h-coefficients are actually significant for a real device and the others can
be neglected. Because of its applicability, this model is used in much of
what follows. Since the $z_n$ are assumed to be independent noises, it is
normally sufficient to compute the effects for a general $z_n$ and recognize
that the superposition can be accomplished by simple additions for their
contributions to $S_y(f)$ or $\langle\sigma_y^2(N, T, \tau)\rangle$.
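The five-term power-law model of (25) and (26) is straightforward to encode. In the Python sketch below the h-coefficients are arbitrary placeholders (a two-term white FM plus flicker FM source), purely for illustration:

```python
import numpy as np

def S_y_powerlaw(f, h, f_h):
    """Evaluate (26): S_y(f) = sum over n of h_n * f^n for 0 <= f <= f_h,
    and 0 above the cutoff f_h. h maps exponent n (-2..2) to h_n."""
    f = np.asarray(f, dtype=float)
    S = sum(h_n * f ** n for n, h_n in h.items())
    return np.where(f <= f_h, S, 0.0)

# Hypothetical coefficients: flicker FM plus white FM; other terms negligible.
h = {-1: 1e-24, 0: 2e-26}
vals = S_y_powerlaw([0.1, 1.0, 10.0, 1e5], h, f_h=1e4)
print(vals)
```

Only the terms actually significant for a given device need be carried, consistent with the remark above that two or three coefficients usually suffice.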
B. Time Domain to Frequency Domain.

1. General. For general $\langle\sigma_y^2(N, T, \tau)\rangle$ no simple prescription
is available for translation into the frequency domain. For this reason,
one might prefer $S_y(f)$ as a general measure of frequency stability. This
is especially true for theoretical work.

2. Specific model. Equations (24), (25), and (26) form a
realistic model which fits the random, nondeterministic noises found on
most signal generators. Obviously, if this is a good model, then the
tables in Appendix B may be used (in reverse) to translate into the
frequency domain.
Allan [8] and Vessot [12] showed that if

$$S_y(f) = \begin{cases} h_\alpha f^\alpha, & 0 \le f \le f_h \\ 0, & f > f_h, \end{cases} \qquad (28)$$

where $\alpha$ is a constant, then

$$\left\langle\sigma_y^2(N, T, \tau)\right\rangle \sim |\tau|^\mu, \qquad 2\pi\tau f_h \gg 1, \qquad (29)$$

for N and $r = T/\tau$ held constant. The constant $\mu$ is related to $\alpha$ by
the mapping shown* in Fig. 1. If (28) and (29) hold over a reasonable range
for a signal generator, then (28) can be substituted into (23) and evaluated
to determine the constant $h_\alpha$ from measurements of $\langle\sigma_y^2(N, T, \tau)\rangle$. It
should be noted that the model of (28) and (29) may be easily extended to a
superposition of similar noises as in (26).

*It should be noted that in Allan [8], the exponent corresponds to the
spectrum of phase fluctuations while variances are taken over average
frequency fluctuations. In the present paper, $\alpha$ is identical to the exponent
$\alpha + 2$ in [8].
[FIG. 1: the $\mu$ vs. $\alpha$ mapping]
C. Translations Among the Time Domain Measures.

1. General. Since $\langle\sigma_y^2(N, T, \tau)\rangle$ is a function of N, T, and $\tau$
(for some types of noise $f_h$ is also important), it is very desirable to be
able to translate among different sets of N, T, and $\tau$ ($f_h$ held constant);
this is, however, not possible in general.

2. Specific model. It is useful to restrict consideration to a
case described by (28) and (29). Superpositions of independent noises
with different power-law types of spectral densities (i.e., different $\alpha$'s)
can also be treated by this technique, e.g., (26). One may define two
"bias functions," $B_1$ and $B_2$, by the relations [13]:

$$B_1(N, r, \mu) = \frac{\left\langle\sigma_y^2(N, T, \tau)\right\rangle}{\left\langle\sigma_y^2(2, T, \tau)\right\rangle} \qquad (30)$$

and

$$B_2(r, \mu) = \frac{\left\langle\sigma_y^2(2, T, \tau)\right\rangle}{\left\langle\sigma_y^2(2, \tau, \tau)\right\rangle}, \qquad (31)$$

where $r = T/\tau$ and $\mu$ is related to $\alpha$ by the mapping of Fig. 1. In words,
$B_1$ is the ratio of the average variance for N samples to the average
variance for 2 samples (everything else held constant); while $B_2$ is the
ratio of the average variance with dead time between measurements
($r \ne 1$) to that with no dead time ($r = 1$, and with N = 2 and $\tau$ held constant).
These functions are tabulated in [13]; Figs. 2 and 3 show a computer plot
of $B_1(N, r{=}1, \mu)$ and $B_2(r, \mu)$.
[FIGS. 2 and 3: computer plots of the bias functions $B_1(N, r{=}1, \mu)$ and $B_2(r, \mu)$]

Suppose one has an experimental estimate of $\langle\sigma_y^2(N_1, T_1, \tau_1)\rangle$ and
its spectral type is known; that is, (28) and (29) form a good model and $\mu$
is known. Suppose also that one wishes to know the variance at some other
set of measurement parameters, $N_2$, $T_2$, $\tau_2$. An unbiased estimate of
$\langle\sigma_y^2(N_2, T_2, \tau_2)\rangle$ may be calculated by the equation:

$$\left\langle\sigma_y^2(N_2, T_2, \tau_2)\right\rangle =
\left(\frac{\tau_2}{\tau_1}\right)^{\mu}
\frac{B_1(N_2, r_2, \mu)\,B_2(r_2, \mu)}{B_1(N_1, r_1, \mu)\,B_2(r_1, \mu)}
\left\langle\sigma_y^2(N_1, T_1, \tau_1)\right\rangle, \qquad (32)$$

where $r_1 = T_1/\tau_1$ and $r_2 = T_2/\tau_2$.
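A sketch of (32) in Python. Real bias-function values come from the tables in [13]; here, purely for illustration, we take white frequency noise ($\alpha = 0$, $\mu = -1$) with no dead time, for which $B_1 = B_2 = 1$ and only the $(\tau_2/\tau_1)^\mu$ factor acts:

```python
def translate_avar(avar_1, tau_1, tau_2, mu,
                   B1_1=1.0, B2_1=1.0, B1_2=1.0, B2_2=1.0):
    """Equation (32): scale a measured <sigma_y^2(N1, T1, tau_1)> to a
    new parameter set. The bias-function arguments default to 1
    (appropriate for white FM with no dead time); in general they must
    be looked up in the tables of [13]."""
    return (tau_2 / tau_1) ** mu * (B1_2 * B2_2) / (B1_1 * B2_1) * avar_1

# White FM: sigma_y^2(tau) ~ tau^-1, so 10x longer tau gives 10x smaller variance.
avar_10s = translate_avar(avar_1=1e-22, tau_1=1.0, tau_2=10.0, mu=-1)
print(avar_10s)   # about 1e-23
```

For noises with different $\mu$, or with dead time, the defaulted bias values must be replaced by tabulated ones; the structure of the computation is unchanged.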
3. General. While it is true that the concept of the bias
functions, $B_1$ and $B_2$, could be extended to other processes besides those
with the power-law types of spectral densities, this generalization has not
been done. Indeed, spectra of the form given in (28) (or superpositions
of such spectra as in (26)) seem to be the most common types of nondeterministic
noises encountered in signal generators and associated
equipment. For other types of fluctuations (such as causally generated
perturbations), translations must be handled on an individual basis.
VI. Applications of Stability Measures

Obviously, if one of the stability measures is exactly the important
parameter in the use of a signal generator, the stability measure's
application is trivial. Some nontrivial applications arise when one is
interested in a different parameter, such as in the use of an oscillator in
Doppler radar measurements or in clocks.

A. Doppler Radar.

1. General. From its transmitted signal, a Doppler radar
receives from a moving target a frequency-shifted return signal in the
presence of other large signals. These large signals can include clutter
(ground return) and transmitter leakage into the receiver (spillover).
Instabilities of radar signals result in noise energy on the clutter return,
on spillover, and on local oscillators in the equipment.
The limitations of subclutter visibility (SCV) rejections due to
the radar signals themselves are related to the rf power spectral density,
$S_V(f)$. The quantity typically referred to is the carrier-to-noise ratio and
can be mathematically approximated by the quantity

$$\frac{S_V(f)}{\displaystyle\int_0^\infty S_V(f')\,df'}. \qquad (33)$$

The effects of coherence of target return and other radar parameters are
amply considered in the literature [14-17].
2. Special Case. Because FM effects generally predominate
over AM effects, this carrier-to-noise ratio is approximately given by [6]

$$\frac{S_V(f)}{\displaystyle\int_0^\infty S_V(f')\,df'} \approx \frac{1}{2}\,S_\varphi(|f - \nu_0|)$$

for many signal sources, provided $|f - \nu_0|$ is sufficiently greater than zero.
(The factor of $\frac{1}{2}$ arises from the fact that $S_\varphi(f)$ is a one-sided spectrum.)
Thus, if $|f - \nu_0|$ is a frequency separation from the carrier, the carrier-to-noise
ratio at that point is approximately

$$\frac{1}{2}\,S_\varphi(|f - \nu_0|) = \frac{1}{2}\left(\frac{\nu_0}{|f - \nu_0|}\right)^2 S_y(|f - \nu_0|). \qquad (34)$$
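For instance, (34) lets one estimate the carrier-to-noise density at a given offset from the carrier directly from $S_y$. In the Python sketch below the 10 GHz carrier, the offset, and the $S_y$ level are assumed values chosen only for illustration:

```python
def carrier_to_noise(f_offset, S_y_val, nu_0):
    """Equation (34): (1/2) S_phi(|f - nu_0|) expressed through S_y,
    evaluated at a frequency separation f_offset from the carrier."""
    return 0.5 * (nu_0 / f_offset) ** 2 * S_y_val

nu_0 = 10e9      # assumed 10 GHz radar carrier
f_offset = 1e3   # 1 kHz from the carrier
S_y_val = 1e-22  # assumed S_y at 1 kHz offset, 1/Hz
print(carrier_to_noise(f_offset, S_y_val, nu_0))  # carrier-to-noise density, 1/Hz
```

The $(\nu_0/|f-\nu_0|)^2$ factor shows why close-in offsets and high carrier frequencies are the demanding cases for Doppler radar.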
B. Clock Errors.

1. General. A clock is a device which counts the cycles of a
periodic phenomenon. Thus, the reading error x(t) of a clock run from
the signal given by (2) is

$$x(t) = \frac{\varphi(t)}{2\pi\nu_0}, \qquad (35)$$

and the dimensions of x(t) are seconds.

If this clock is a secondary standard, then one could have
available some past history of x(t), the time error relative to the standard
clock. It often occurs that one is interested in predicting the clock error
x(t) for some future date, say $t_0 + \tau$, where $t_0$ is the present date. Obviously,
this is a problem in pure prediction and can be handled by conventional
methods [3].
2. Special Case. Although one could handle the prediction of
clock errors by the rigorous methods of prediction theory, it is more
common to use simpler prediction methods [10, 11]. In particular, one
often predicts a clock error for the future by adding to the present error
a correction which is derived from the current rate of gain (or loss) of
time. That is, the predicted error $\hat x(t_0 + T)$ is related to the past history
of x(t) by the equation

$$\hat x(t_0 + T) = x(t_0) + T\left[\frac{x(t_0) - x(t_0 - \tau)}{\tau}\right]. \qquad (36)$$

It is typical to let $T = \tau$.

Thus, the mean square error of prediction for $T = \tau$ becomes

$$\left\langle[x(t_0 + \tau) - \hat x(t_0 + \tau)]^2\right\rangle =
\left\langle[x(t_0 + \tau) - 2x(t_0) + x(t_0 - \tau)]^2\right\rangle, \qquad (37)$$

which, with the aid of (11), can be written in the form

$$\left\langle[x(t_0 + \tau) - \hat x(t_0 + \tau)]^2\right\rangle = 2\tau^2\sigma_y^2(\tau). \qquad (38)$$

One can define a time stability measure, $\sigma_x^2(\tau)$, by the equation

$$\sigma_x^2(\tau) \equiv \tau^2\,\sigma_y^2(\tau). \qquad (39)$$

Clearly, however, the actual errors of prediction of clock readings are
dependent on the prediction algorithm used and the utility of such a definition
as $\sigma_x^2(\tau)$ is not great. Caution should be used in employing this definition.
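The chain from (36) to (38) can be verified numerically: for a record of clock readings, the mean square error of the linear prediction with $T = \tau$ equals the mean square second difference of x, which is $2\tau^2$ times the two-sample variance estimate of (11). A Python sketch with synthetic white-FM clock data (arbitrary noise level, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(7)
tau = 1.0
ybar = 1e-11 * rng.standard_normal(10_000)          # adjacent frequency averages
x = np.concatenate(([0.0], np.cumsum(ybar) * tau))  # clock reading error x(t_k)

# Linear prediction (36) with T = tau: xhat(t + tau) = 2 x(t) - x(t - tau)
xhat = 2 * x[1:-1] - x[:-2]
pred_err = x[2:] - xhat           # equals tau * (ybar_{k+1} - ybar_k), per (37)

avar = 0.5 * np.mean(np.diff(ybar) ** 2)   # two-sample variance estimate, (11)
mse = np.mean(pred_err ** 2)
print(mse, 2 * tau**2 * avar)              # (38): these agree
```

The agreement here is an identity of the estimators, independent of the noise type; what does depend on the noise type is whether $\sigma_y^2(\tau)$ itself converges.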
VII. Measurement Techniques for Frequency Stability

A. Heterodyne Techniques (general). It is possible for oscillators
to be very stable, and values of $\sigma_y(\tau)$ can be as small as $10^{-14}$ in some
state-of-the-art equipment. Thus, one often needs measuring techniques
capable of resolving very small fluctuations in y(t). One of the most
common techniques is a heterodyne or beat frequency technique. In this
method, the signal from the oscillator to be tested is mixed with a reference
signal of almost the same frequency as the test oscillator in order that one
is left with a lower average frequency for analysis without reducing the
frequency (or phase) fluctuations themselves. Following Vessot et al.
[18], consider an ideal reference oscillator whose output signal is
$$V_r(t) = V_{0r}\sin 2\pi\nu_0 t, \qquad (40)$$

and a second oscillator whose output voltage V(t) is given by (2):
$V(t) = [V_0 + \epsilon(t)]\sin[2\pi\nu_0 t + \varphi(t)]$. Let these two signals be mixed in a
product detector; that is, the output of the product detector v(t) is equal
to the product $\gamma V(t)\,V_r(t)$, where $\gamma$ is a constant (see Fig. 4).

Let v(t), in turn, be processed by a sharp, low-pass filter with
cutoff frequency $f'_h$ such that

$$0 < f_h \le f'_h < \nu_0. \qquad (41)$$

One may write

$$v(t) = \gamma V(t)\,V_r(t) = \gamma V_{0r}(V_0 + \epsilon)[\sin 2\pi\nu_0 t][\sin(2\pi\nu_0 t + \varphi)]$$
$$= \gamma\left(\frac{V_{0r}V_0}{2}\right)\left(1 + \frac{\epsilon}{V_0}\right)[\cos\varphi - \cos(4\pi\nu_0 t + \varphi)]. \qquad (42)$$
Assume that $\cos[\varphi(t)]$ has essentially no power in Fourier frequencies f
in the region $f \ge f'_h$. The effect of the low-pass filter then is to remove
the second term on the extreme right of (42); that is,

$$v'(t) = \gamma\frac{V_{0r}V_0}{2}\left(1 + \frac{\epsilon}{V_0}\right)\cos\varphi(t). \qquad (43)$$

This separation of terms by the filter is correct only if
$\left|\dot\varphi(t)/(2\pi\nu_0)\right| \ll 1$ for all t (see (4)).
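The mixing arithmetic of (42) and (43) can be checked numerically: averaging the product of the two signals over an integer number of carrier cycles removes the $2\nu_0$ term and leaves $(\gamma V_{0r}V_0/2)\cos\varphi$. The carrier frequency, gain, and phase below are arbitrary illustrative values:

```python
import numpy as np

nu_0, gamma = 1.0e3, 1.0      # assumed carrier (Hz) and mixer constant
V0r, V0, phi = 1.0, 1.0, 0.1  # amplitudes and a small, constant phase

fs = 1.0e6                                 # sample rate well above 2*nu_0
t = np.arange(int(fs / nu_0) * 100) / fs   # exactly 100 carrier cycles

# Product detector output, per (42):
v = gamma * (V0 * np.sin(2*np.pi*nu_0*t + phi)) * (V0r * np.sin(2*np.pi*nu_0*t))

# Averaging over whole cycles mimics the low-pass filter of (43).
v_filtered = v.mean()
expected = gamma * V0r * V0 / 2 * np.cos(phi)
print(v_filtered, expected)
```

The residual difference is only numerical round-off, since the $\cos(4\pi\nu_0 t + \varphi)$ term averages exactly to zero over complete cycles.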
The following two cases are of interest:

Case I. The relative phase of the oscillators is adjusted so that
$|\varphi(t)| \ll 1$ (in-phase condition) during the period of measurement. Under
these conditions,
[FIG. 4: block diagram of the heterodyne measurement system (reference oscillator, product detector, low-pass filter)]

$$v'(t) \approx \gamma\frac{V_{0r}}{2}\left[V_0 + \epsilon(t)\right] \qquad (44)$$

since $\cos\varphi(t) \approx 1$. That is to say, one detects the amplitude noise $\epsilon(t)$
of the signal.
Case II. The relative phase of the oscillators is adjusted to be in
approximate quadrature; that is,

$$\varphi'(t) = \varphi(t) + \frac{\pi}{2}, \qquad (45)$$

where $|\varphi'(t)| \ll 1$. Under these conditions,

$$\cos\varphi(t) = \sin\varphi'(t) \approx \varphi'(t) \qquad (46)$$

and

$$v'(t) \approx \frac{\gamma}{2}V_{0r}V_0\,\varphi'(t) + \frac{\gamma}{2}V_{0r}\,\varphi'(t)\,\epsilon(t). \qquad (47)$$

If it is true that $|\epsilon(t)/V_0| \ll 1$ for all t (see (3)), then (47) becomes

$$v'(t) \approx \frac{\gamma}{2}V_{0r}V_0\,\varphi'(t); \qquad (48)$$
that is, v'(t) is proportional to the phase fluctuations. Thus, in order to
observe $\varphi'(t)$ by this method, (3) and (4) must be valid. For different
average phase values, mixtures of amplitude and phase noise are observed.

In order to maintain the two signals in quadrature for long
observational periods, the reference oscillator can be a voltage-controlled
oscillator (VCO) and one may feed back the phase error voltage (as defined
in (48)) to control the frequency of the VCO [19]. In this condition of the
phase-locked oscillator, the voltage v'(t) is the analog of the phase
fluctuations for Fourier frequencies above the loop cutoff frequency of the
locked loop. For Fourier frequencies below the loop cutoff frequency of
the loop, v'(t) is the analog of frequency fluctuations. In practice, one
should measure the complete servo loop response.
B. Period Measurement. Assume one has an oscillator whose
voltage output may be represented by (2). If $|\epsilon(t)/V_0| \ll 1$ for all t and
the total phase

$$\Phi(t) = 2\pi\nu_0 t + \varphi(t) \qquad (5)$$

is a monotonic function of time (that is, $|\dot\varphi(t)/(2\pi\nu_0)| < 1$), then the
time $\tau_n$ between successive positive-going zero crossings of V(t) is related
to the average frequency during the interval $\tau_n$; specifically,

$$\frac{1}{\tau_n} = \nu_0(1 + \bar y_n). \qquad (49)$$

If one lets $\tau$ be the time between a positive-going zero crossing of V(t)
and the Mth successive positive-going zero crossing, then

$$\frac{M}{\tau} = \nu_0(1 + \bar y_n). \qquad (50)$$

If the variations $\delta\tau$ of the period are small compared to the average
period $\tau_0$, Cutler and Searle [?] have shown that one may make a reasonable
approximation to $\langle\sigma_y^2(N, T, \tau_0)\rangle$ using period measurements.
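Equation (49) inverts directly: measured zero-crossing periods give fractional frequency averages. A Python sketch, in which the nominal frequency and the measured periods are made-up values for illustration:

```python
import numpy as np

nu_0 = 5.0e6   # assumed nominal frequency, Hz (period 1/nu_0 = 2e-7 s)

# Hypothetical measured periods between successive positive-going zero
# crossings, each slightly longer or shorter than the nominal period:
tau_n = np.array([2.0e-7 * (1 + d) for d in (1e-9, -2e-9, 0.5e-9)])

# From (49): 1/tau_n = nu_0 (1 + ybar_n)  =>  ybar_n = 1/(nu_0 tau_n) - 1
ybar = 1.0 / (nu_0 * tau_n) - 1.0
print(ybar)   # approximately -1e-9, +2e-9, -0.5e-9
```

A longer period corresponds to a negative $\bar y_n$ (the oscillator running low), and conversely.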
C. Period Measurement with Heterodyning. Suppose that $\varphi(t)$ is
a monotonic function of time. If $|\epsilon(t)/V_0| \ll 1$, the output of the filter
of Sec. VII.A, (43), becomes

$$v'(t) \approx \gamma\frac{V_{0r}V_0}{2}\cos\varphi(t). \qquad (51)$$

Then one may measure the period $\tau_n$ of two successive
positive zero crossings of v'(t). Thus,

$$\frac{1}{\tau_n} = \nu_0|\bar y_n|, \qquad (52)$$

and for the Mth positive crossover,

$$\frac{M}{\tau} = \nu_0|\bar y_n|. \qquad (53)$$
The magnitude bars appear because $\cos\varphi(t)$ is an even function
of $\varphi(t)$. It is impossible to determine by this method alone whether $\varphi$ is
increasing with time or decreasing with time. Since $\bar y_n$ may be very
small for very good oscillators, $\tau_n$ may be quite long
and thus measurable with a good relative precision.

If the phase, $\varphi(t)$, is not monotonic, the true $\bar y_n$ may be near
zero but one could still have many zeros of $\cos\varphi(t)$ and thus (52) and (53)
would not be valid.
D. Frequency Counters. Assume the phase (either $\Phi$ or $\varphi$) is a
monotonic function of time. If one counts the number M of positive-going
zero crossings in a period of time $\tau$, then the average frequency
of the signal is $M/\tau$. If we assume that the signal is V(t) (as defined in
(2)), then

$$\frac{M}{\tau} = \nu_0(1 + \bar y_n). \qquad (54)$$

If we assume that the signal is v'(t) (as defined in (48)), then

$$\frac{M}{\tau} = \nu_0|\bar y_n|. \qquad (55)$$

Again, one measures only positive frequencies.
E. Frequency Discriminators. A frequency discriminator is a
device which converts frequency fluctuations into an analog voltage by
means of a dispersive element. For example, by slightly detuning a resonant
circuit from the signal V(t), the frequency fluctuations $\dot\varphi(t)/2\pi$ are
converted to amplitude fluctuations of the output signal. Provided the input
amplitude fluctuations $\epsilon(t)/V_0$ are insignificant, the output amplitude
fluctuations can be a good measure of the frequency fluctuations. Obviously,
more sophisticated frequency discriminators exist (e.g., the cesium beam).

From the analog voltage one may use analog spectrum analyzers
to determine $S_y(f)$, the frequency stability. By converting to digital data,
other analyses are possible on a computer.
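Once the discriminator output is digitized to samples of y(t), one may estimate $S_y(f)$ with an averaged periodogram, i.e., sample averaging of independent spectral estimates. The Python sketch below is a minimal unwindowed version with synthetic white noise, whose one-sided density is known to be $2\sigma^2/f_s$; all parameter values are illustrative:

```python
import numpy as np

def avg_periodogram(y, fs, nseg):
    """One-sided spectral density estimate obtained by averaging the
    periodograms of nseg non-overlapping segments (no window, for
    simplicity)."""
    seg = len(y) // nseg
    psd = np.zeros(seg // 2 + 1)
    for i in range(nseg):
        X = np.fft.rfft(y[i*seg:(i+1)*seg])
        psd += 2.0 * np.abs(X) ** 2 / (fs * seg)   # one-sided scaling
    return psd / nseg, np.fft.rfftfreq(seg, 1.0 / fs)

rng = np.random.default_rng(3)
fs, sigma = 100.0, 1e-11
y = sigma * rng.standard_normal(200_000)   # synthetic white y(t) samples

psd, freqs = avg_periodogram(y, fs, nseg=100)
print(psd[1:].mean())   # expect about 2*sigma^2/fs = 2e-24 per Hz
```

Averaging over segments trades spectral resolution for confidence, the same trade-off discussed for spectrum estimation in the hazards below.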
F. Common Hazards.

1. Errors caused by signal processing equipment. The intent
of most frequency stability measurements is to evaluate the source and not
the measuring equipment. Thus, one must know the performance of the
measuring system. Of obvious importance are such aspects of the measuring
equipment as noise level, dynamic range, resolution (dead time), and frequency
range.

It has been pointed out that the noise bandwidth $f_h$ is very
essential for the mathematical convergence of certain expressions. Insofar
as one wants to measure the signal source, one must know that the measuring
system is not limiting the frequency response. At the very least, one must
recognize that the frequency limit of the measuring system may be a very
important, implicit parameter for either $\sigma_y^2(\tau)$ or $S_y(f)$. Indeed, one
must account for any deviations of the measuring system from ideality,
such as a "non-flat" frequency response of the spectrum analyzer itself.

Almost any electronic circuit which processes a signal will,
to some extent, convert amplitude fluctuations at the input terminals into
phase fluctuations at the output. Thus, AM noise at the input will cause
a time-varying phase (or FM noise) at the output. This can impose important
constraints on limiters and automatic gain control (AGC) circuits when good
frequency stability is needed. Similarly, this imposes constraints on
equipment used for frequency stability measurements.
2. Analog spectrum analyzers (frequency domain). Typical
analog spectrum analyzers are very similar in design to radio receivers
of the superheterodyne type, and thus certain design features are quite
similar. For example, image rejection (related to predetection bandwidth)
is very important. Similarly, the actual shape of the analyzer's frequency
window is important since this affects spectral resolution. As with receivers,
dynamic range can be critical for the analysis of weak signals in the presence
of substantial power in relatively narrow bandwidths (e.g., 60 Hz).

The slewing rate of the analyzer must be consistent with the
analyzer's frequency window and the post-detection bandwidth. If one has
a frequency window of 1 hertz, one cannot reliably estimate the intensity
of a bright line unless the slewing rate is much slower than 1 hertz/second.
Additional post-detection filtering will further reduce the maximum usable
slewing rate.
3. Spectral density estimation from time domain data. It is
beyond the scope of this paper to present a comprehensive list of hazards
for spectral density estimation; one should consult the literature [2-5].
There are a few points, however, which are worthy of special notice:

a. Data aliasing (similar to predetection bandwidth problems).
b. Spectral resolution.
c. Confidence of the estimate.
4. Variances of frequency fluctuations, $\sigma_y^2(\tau)$. It is not uncommon
to have discrete frequency modulation of a source, such as that
associated with the power supply frequencies. The existence of discrete
frequencies in $S_y(f)$ can cause $\sigma_y^2(\tau)$ to be a very rapidly changing function
of $\tau$. An interesting situation results when $\tau$ is an exact multiple of the
period of the modulation frequency (e.g., one makes $\tau$ = 1 second and
there exists 60 Hz frequency modulation on the signal). In this situation,
$\sigma_y^2(\tau = 1\ \mathrm{second})$ can be very optimistic relative to values with slightly
different values of $\tau$.

One also must be concerned with the convergence properties of
$\sigma_y^2(\tau)$ since not all noise processes will have finite limits to the estimates
of $\sigma_y^2(\tau)$ (see Appendix A). One must be as critically aware of any "dead
time" in the measurement process as of the system bandwidth.
5. Signal source and loading. In measuring frequency stability
one should specify the exact location in the circuit from which the signal
is obtained and the nature of the load used. It is obvious that the transfer
characteristics of the device being specified will depend on the load and
that the measured frequency stability might be affected. If the load itself
is not constant during the measurements, one expects large effects on
frequency stability.
6. Confidence of the estimate. As with any measurement in
science, one wants to know the confidence to assign to numerical results.
Thus, when one measures $S_y(f)$ or $\sigma_y^2(\tau)$, it is important to know the
accuracies of these estimates.

a. The Allan Variance. It is apparent that a single sample
variance, $\sigma_y^2(2, \tau, \tau)$, does not have good confidence, but, by averaging
many independent samples, one can improve the accuracy of the estimate
greatly. There is a key point in this statement: "independent samples."
For this argument to be true, it is important that one sample variance be
independent of the next. Since $\sigma_y^2(2, \tau, \tau)$ is related to the first difference
of the frequency (see (11)), it is sufficient that the noise perturbing y(t)
have "independent increments," i.e., that y(t) be a random walk. In
other words, it is sufficient that $S_y(f) \sim f^{-2}$ for low frequencies. One
can show that for noise processes which are more divergent at low
frequencies than $f^{-2}$, it is difficult (or impossible) to gain good confidence
on estimates of $\sigma_y^2(\tau)$. For noise processes which are less divergent
than $f^{-2}$, no problem exists.

It is worth noting that if we were interested in $\sigma_y^2(N = \infty, \tau, \tau)$,
then the limit noise would become $S_y(f) \sim f^0$ instead of $f^{-2}$ as it is for
$\sigma_y^2(2, \tau, \tau)$. Since most real signal generators possess low frequency
divergent noises, $\langle\sigma_y^2(2, \tau, \tau)\rangle$ is more useful than $\sigma_y^2(N = \infty, \tau, \tau)$.

Although the sample variances, $\sigma_y^2(2, \tau, \tau)$, will not be normally
distributed, the variance of the average of m independent (nonoverlapping)
samples of $\sigma_y^2(2, \tau, \tau)$ (i.e., the variance of the Allan Variance) will decrease
as 1/m provided the conditions on low frequency divergence are met. For
sufficiently large m, the distribution of the m-sample averages of
$\sigma_y^2(2, \tau, \tau)$ will tend toward normal (central limit theorem). It is, thus,
possible to estimate confidence intervals based on the normal distribution.
As always, one may be interested in $\tau$ values approaching the
limits of available data. Clearly, when one is interested in $\tau$-values of
the order of a year, one is severely limited in the size of m, the number
of samples of $\sigma_y^2(2, \tau, \tau)$. Unfortunately, there seems to be no substitute
for many samples and one extends $\tau$ at the expense of confidence in the
results. "Truth in packaging" dictates that the sample size m be stated
with the results.
b. Spectral Density. As before, one is referred to the
literature for discussions of spectrum estimation [2-5]. It is worth pointing
out, however, that for $S_y(f)$ there are basically two different types of
averaging which can be employed: sample averaging of independent estimates
of $S_y(f)$, and frequency averaging where the resolution bandwidth is made
much greater than the reciprocal data length.
VIII. Conclusions

A good measure of frequency stability is the spectral density, S_y(f),
of fractional frequency fluctuations, y(t). An alternative is the expected
variance of N sample averages of y(t) taken over a duration τ. With
the beginning of successive sample periods spaced every T units of time,
the variance is denoted by σ_y²(N, T, τ). The stability measure, then, is the
expected value of many measurements of σ_y²(N, T, τ) with N = 2 and T = τ;
that is, σ_y²(τ). For all real experiments one has a finite bandwidth. In
general, the time domain measure of frequency stability, σ_y²(τ), is
dependent on the noise bandwidth of the system. Thus, there are four
important parameters to the time domain measure of frequency stability:

N, the number of sample averages (N = 2 for preferred measure);
T, the repetition time for successive sample averages (T = τ for
preferred measure);
τ, the duration of each sample average; and
f_h, the system noise bandwidth.
Translations among the various stability measures for common
noise types are possible, but there are significant reasons for choosing
N = 2 and T = τ for the preferred measure of frequency stability in the
time domain. This measure, the Allan variance (N = 2), has been
referenced by [12, 20-22], and more.

Although S_y(f) appears to be a function of the single variable f,
actual experimental estimation procedures for the spectral density involve
a great many parameters. Indeed, its experimental estimation can be at
least as involved as the estimation of σ_y²(τ).
APPENDIX A
We want to derive (23) in the text. Starting from (10) in the text,
we have

⟨σ_y²(N, T, τ)⟩ = (1/(N−1)) ⟨ Σ_{n=1}^{N} ȳ_n² − (1/N) ( Σ_{n=1}^{N} ȳ_n )² ⟩

= (1/((N−1)τ²)) { Σ_{n=1}^{N} ∫_{t_n}^{t_n+τ} dt″ ∫_{t_n}^{t_n+τ} dt′ ⟨y(t′) y(t″)⟩

    − (1/N) Σ_{i=1}^{N} Σ_{j=1}^{N} ∫_{t_i}^{t_i+τ} dt″ ∫_{t_j}^{t_j+τ} dt′ ⟨y(t′) y(t″)⟩ }     (A1)
where (9) has been used. Now

⟨y(t′) y(t″)⟩ = R_y(t′ − t″)     (A2)

where R_y(τ) is the autocorrelation function of y(t) and is the Fourier
transform of S_y(f), the power spectral density of y(t). Equation (A2) is
true provided that y(t) is stationary (at least in the wide or covariance
sense), and that the average exists. If we assume the power spectral
density of y(t), S_y(f), has low and high frequency cutoffs f_ℓ and f_h (if
necessary) so that

∫_0^∞ S_y(f) df exists,

then if y is a random variable, the average does exist and we may safely
assume stationarity.
In practice, the high frequency cutoff, f_h, is always present either
in the device being measured or in the measuring equipment itself. When
the high frequency cutoff is necessary for convergence of integrals of
S_y(f) (or is too low in frequency), the stability measure will depend on f_h.
The latter case can occur when the measuring equipment is too narrow
band. In fact, a useful type of spectral analysis may be done by varying
f_h purposefully [18].

The low frequency cutoff f_ℓ may be taken to be much smaller than
the reciprocal of the longest time of interest. The results of calculations
as well as measurements will be meaningful if they are independent of f_ℓ
as f_ℓ approaches zero. The range of exponents in power law spectral
densities for which this is true will be discussed and are given in Fig. 1.
To continue, the derivation requires the Fourier transform relationships
between the autocorrelation function and the power spectral density:

S_y(f) = 4 ∫_0^∞ R_y(τ) cos 2πfτ dτ,

R_y(τ) = ∫_0^∞ S_y(f) cos 2πfτ df.     (A3)
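The transform pair (A3) can be checked numerically for a simple band-limited case (an illustrative S_y of our own choosing, not from the text):

```python
import numpy as np

# S_y(f) = 1 for 0 <= f <= f_h; its cosine transform is sin(2*pi*f_h*t)/(2*pi*t).
f_h = 10.0
f = np.linspace(0.0, f_h, 200_001)
S_y = np.ones_like(f)

def R_y(t):
    """Second line of (A3): R_y(t) = integral from 0 to f_h of S_y(f) cos(2 pi f t) df."""
    return np.trapz(S_y * np.cos(2.0 * np.pi * f * t), f)

t = 0.03
analytic = np.sin(2.0 * np.pi * f_h * t) / (2.0 * np.pi * t)
print(R_y(t), analytic)   # the two values agree closely
```

Note that R_y(0) = ∫_0^{f_h} S_y(f) df, the total power, consistent with the existence condition above.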
Using (A3) and (A2) in (A1) gives

⟨σ_y²(N, T, τ)⟩ = (1/((N−1)τ²)) { Σ_{n=1}^{N} ∫_0^∞ df S_y(f) ∫_{t_n}^{t_n+τ} dt″ ∫_{t_n}^{t_n+τ} dt′ cos 2πf(t′ − t″)

    − (1/N) Σ_{i=1}^{N} Σ_{j=1}^{N} ∫_0^∞ df S_y(f) ∫_{t_i}^{t_i+τ} dt″ ∫_{t_j}^{t_j+τ} dt′ cos 2πf(t′ − t″) }

= (1/((N−1)τ²)) { N ∫_0^∞ df S_y(f) sin²(πfτ)/(πf)²

    − (1/N) Σ_{i=1}^{N} Σ_{j=1}^{N} ∫_0^∞ df [S_y(f)/(2πf)²] [ 2 cos 2πfT(j−i) − cos 2πf(T(j−i) + τ) − cos 2πf(T(j−i) − τ) ] }     (A4)

(The interchanges in order of integration are permissible here since the
integrals are uniformly convergent with the given restrictions on S_y(f).)
The first summation in the curly brackets is independent of the summation
index n and thus gives just

N ∫_0^∞ df S_y(f) sin²(πfτ)/(πf)².     (A5)
The kernel in the second term in the curly brackets may be further
simplified:

2 cos 2πfT(j−i) − cos 2πf[T(j−i) + τ] − cos 2πf[T(j−i) − τ]
    = 4 sin²(πfτ) cos 2πfT(j−i).     (A6)
The second term is then

− (1/N) ∫_0^∞ df [S_y(f)/(πf)²] sin²(πfτ) Σ_{i=1}^{N} Σ_{j=1}^{N} cos 2πfT(j−i).     (A7)

(The interchange of summation and integration is justified.) We must
now do the double sum. Let

j − i = k,    2πfT = x.     (A8)

Changing summation indices from i and j to i and k gives for the
sum

S = Σ_{i=1}^{N} Σ_{j=1}^{N} cos x(j−i) = Σ_{i=1}^{N} Σ_{k=1−i}^{N−i} cos kx.     (A9)

The region of summation over the discrete variables i and k is
shown in Fig. 5 for N = 4.
The summand is independent of i so one may interchange the
order of summation and sum over i first. The summand is even in
k and the contributions for k < 0 are equal to those for k > 0, and so
we may pull out the term for k = 0 separately and write:

S = 2 ( Σ_{k=1}^{N−1} cos kx Σ_{i=1}^{N−k} 1 ) + Σ_{i=1}^{N} 1
  = 2 Σ_{k=1}^{N−1} (N−k) cos kx + N.     (A10)
This may be written as

S = N + 2 Re { [ N − (1/i)(d/dx) ] Σ_{k=1}^{N−1} e^{ikx} }     (A11)

where Re[U] means the real part of U and d/dx is the differential
operator. The series is a simple geometric series and may be summed
easily, giving
[Fig. 5. Region of summation for i and k for N = 4]
S = N + 2 Re { [ N − (1/i)(d/dx) ] (e^{ix} − e^{iNx}) / (1 − e^{ix}) }

which, after carrying out the differentiation and some algebra, reduces to

S = sin²(Nx/2) / sin²(x/2).     (A12)
Combining everything we get, after some rearrangement,

⟨σ_y²(N, T, τ)⟩ = (N/(N−1)) ∫_0^∞ df S_y(f) [ sin²(πfτ) / (πfτ)² ] [ 1 − sin²(Nπrfτ) / (N² sin²(πrfτ)) ]     (A13)

where r = T/τ. This is the result given in (23).
We can determine a number of things very easily from this
equation. First let us change variables. Let πfτ = u; then

⟨σ_y²(N, T, τ)⟩ = (N / ((N−1)πτ)) ∫_0^∞ du S_y(u/πτ) [ sin²u / u² ] [ 1 − sin²(Nru) / (N² sin²(ru)) ].     (A14)
The kernel behaves like u² as u → 0 and like u⁻² as u → ∞.
Therefore ⟨σ_y²(N, T, τ)⟩ is convergent for power law spectral densities,
S_y(f) = h_α f^α, without any low or high frequency cutoffs, for −3 < α < 1.
Using (A14) for power law spectral densities we find

⟨σ_y²(N, T, τ)⟩ = τ^(−α−1) h_α C_α    for −3 < α < 1,

that is, μ = −α − 1, where

C_α = (N / ((N−1) π^(α+1))) ∫_0^∞ du u^(α−2) sin²u [ 1 − sin²(Nru) / (N² sin²(ru)) ].     (A15)

This is the basis for the plot in Fig. 1 in the text of μ vs. α. For
α ≥ 1 we must include the high frequency cutoff f_h.
For N = 2 and r = 1 the results are particularly simple. We
have

⟨σ_y²(2, τ, τ)⟩ = τ^(−α−1) h_α (2/π^(α+1)) ∫_0^∞ du u^(α−2) sin⁴u     (A16)

for power law spectral densities. For N = 2 and general r we get

⟨σ_y²(2, T, τ)⟩ = (1/(2πτ)) ∫_0^∞ du S_y(u/πτ) u⁻² [ 1 − cos 2u − cos 2ru + ½ cos 2u(r+1) + ½ cos 2u(r−1) ]

= (2/(πτ)) ∫_0^∞ du S_y(u/πτ) u⁻² sin²u sin²(ru).     (A17)

The first form in (A17) is particularly simple and is also useful for r = 1
in place of (A16).
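As a numerical sanity check of (A16) (our own sketch, not part of the original derivation), the white FM case α = 0 can be evaluated directly: ∫_0^∞ u⁻² sin⁴u du = π/4, so C₀ = 1/2 and σ_y²(τ) = h₀/(2τ).

```python
import numpy as np

# Evaluate C_0 = (2/pi) * integral of sin^4(u)/u^2 from 0 to infinity,
# truncating the tail at u = 2000 (tail contribution ~ (3/8)/2000).
u = np.linspace(1e-6, 2000.0, 4_000_000)
integrand = np.sin(u) ** 4 / u ** 2
integral = np.trapz(integrand, u)      # analytic value is pi/4
C0 = (2.0 / np.pi) * integral
print(C0)   # close to 0.5
```

The same numerical approach works for any α in (−3, 1), and reproduces the μ = −α − 1 law of (A15).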
Let us discuss the case for α ≥ 1 in a little more detail. As
mentioned above, we must include the high frequency cutoff, f_h, for
convergence. The general behavior can be seen most easily from (A13).
After placing the factor τ⁻² outside the integral and combining the factor
f⁻² with S_y(f), we find that the remaining part of the kernel consists of
some constants and some oscillatory terms. If 2πf_hτ ≫ 1, it is apparent
that the rapidly oscillating terms contribute very little to the integral.
Most of the contribution comes from the integral over the constant term,
causing the major portion of the τ dependence to be the τ⁻² factor
outside the integral. This is the reason for the vertical slope at μ = −2
in the μ vs. α plot in Fig. 1 in the text.

One other point deserves some mention. The constant term of
the kernel discussed in the preceding paragraph is different for r = 1
from the value for r ≠ 1. This is readily seen from (A17) for N = 2;
for r = 1 the constant term is 3/2 while for r ≠ 1 it is 1. This is
the reason that the term δ(r−1) appears in some of the results of
Appendix B. In practice, δ(r−1) does not have zero width but is
smeared out over a width of approximately (2πf_hτ)⁻¹. If there must be
dead time, r ≠ 1, it is wise to choose (r−1) ≫ (2πf_hτ)⁻¹ or
(r−1) ≪ (2πf_hτ)⁻¹ but with 2πf_hτ ≫ 1. In the latter case, one may
assume r ≈ 1.
APPENDIX B
Let y(t) be a sample function of a random noise process with a
spectral density S_y(f). The function y(t) is assumed to be pure real,
and S_y(f) is a one-sided spectral density relative to a cycle frequency
(i.e., the dimensions of S_y(f) are those of y² per hertz). (For additional
information see Appendix A, [7, 8, 18].)

Let x(t) be defined by the equation

dx(t)/dt = y(t).     (B1)

Define: t₀ is an arbitrary instant of time and

t_{n+1} = t_n + T,    n = 0, 1, 2, ...,     (B2)

ȳ_n = (1/τ) ∫_{t_n}^{t_n+τ} y(t) dt = [x(t_n + τ) − x(t_n)] / τ,     (B3)

and let f_h be a high frequency cutoff (infinitely sharp) with
2πf_hτ ≫ 1.

Definition:

⟨σ_y²(N, T, τ)⟩ = ⟨ (1/(N−1)) Σ_{n=1}^{N} [ ȳ_n − (1/N) Σ_{k=1}^{N} ȳ_k ]² ⟩.     (B4)

Special case:

⟨σ_y²(2, T, τ)⟩ = ⟨ (ȳ_1 − ȳ_0)² / 2 ⟩.     (B5)

Definition:

σ_y²(τ) ≡ ⟨σ_y²(2, τ, τ)⟩.     (B6)

Special case:

σ_y²(τ) = ⟨ [x(t₀ + 2τ) − 2x(t₀ + τ) + x(t₀)]² / (2τ²) ⟩.     (B7)
Definition:

D_x²(τ) ≡ ⟨ [x(t₀ + 2τ) − 2x(t₀ + τ) + x(t₀)]² ⟩.     (B8)

Consequence of definitions:

D_x²(τ) = 2τ² σ_y²(τ) = 2σ_x²(τ).     (B9)

Definition:

𝒟_x²(T, τ) ≡ ⟨ [x(t₀ + T + τ) − x(t₀ + T) − x(t₀ + τ) + x(t₀)]² ⟩.     (B10)

Consequence of definitions:

𝒟_x²(T, τ) = 2τ² ⟨σ_y²(2, T, τ)⟩.     (B11)

Special case:

𝒟_x²(τ, τ) = D_x²(τ).     (B12)
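The special case (B12) is an algebraic identity in the data, which a short sketch can confirm on an arbitrary sampled x(t) (the noise type, sample spacing, and names here are our own choices):

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.cumsum(rng.normal(size=200_000))   # a random-walk x(t), unit sample spacing

def second_diff_var(x, m):
    """D_x^2(tau): mean of [x(t+2*tau) - 2x(t+tau) + x(t)]^2, tau = m samples (B8)."""
    d = x[2 * m:] - 2 * x[m:-m] + x[:-2 * m]
    return np.mean(d ** 2)

def script_D(x, mT, mtau):
    """script-D_x^2(T, tau): mean of [x(t+T+tau) - x(t+T) - x(t+tau) + x(t)]^2 (B10)."""
    n = len(x) - mT - mtau
    d = x[mT + mtau: mT + mtau + n] - x[mT: mT + n] - x[mtau: mtau + n] + x[:n]
    return np.mean(d ** 2)

m = 10
# (B12): script-D_x^2(tau, tau) equals D_x^2(tau), term by term.
print(np.isclose(script_D(x, m, m), second_diff_var(x, m)))   # True
```

The averages ⟨·⟩ of (B8) and (B10) are replaced here by sample means over many starting instants, which is how these quantities are estimated in practice.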
RANDOM WALK y  (α = −2)

S_y(f) = h_{−2} / f²,    S_x(f) = h_{−2} / [(2πf)² f²],    r = T/τ,    0 ≤ f ≤ f_h

⟨σ_y²(N, T, τ)⟩ = h_{−2} · ((2π)² τ / 12) [ r(N+1) − 1 ],    for r ≥ 1     (B13)

σ_y²(τ) = h_{−2} · (2π)² τ / 6,    (N = 2, r = 1)     (B14)

D_x²(τ) = 2σ_x²(τ) = h_{−2} · (2π)² τ³ / 3     (B15)

𝒟_x²(T, τ) = h_{−2} · ((2π)² τ³ / 6) (3r − 1),    for r ≥ 1     (B16)

𝒟_x²(T, τ) = h_{−2} · ((2π)² τ³ / 6) r² (3 − r),    for r ≤ 1     (B17)
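For random walk y, (B14) gives σ_y²(τ) growing linearly with τ (μ = +1), which can be checked by simulation (a sketch of our own, assuming unit sample spacing):

```python
import numpy as np

rng = np.random.default_rng(7)
y = np.cumsum(rng.normal(size=1_000_000))   # random-walk fractional frequency y(t)

def avar(y, m):
    """sigma_y^2(tau) at tau = m samples: adjacent averages, N = 2, T = tau."""
    ybar = y[: len(y) // m * m].reshape(-1, m).mean(axis=1)
    return 0.5 * np.mean(np.diff(ybar) ** 2)

# sigma_y^2(tau) proportional to tau, so doubling tau should roughly double it.
ratio = avar(y, 200) / avar(y, 100)
print(ratio)   # roughly 2
```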
FLICKER y  (α = −1)

S_y(f) = h_{−1} / f,    S_x(f) = h_{−1} / [(2πf)² f],    r = T/τ,    0 ≤ f ≤ f_h

⟨σ_y²(N, T, τ)⟩ = h_{−1} · (1/(N(N−1))) Σ_{n=1}^{N−1} (N−n) [ −2(nr)² ln(nr)
    + (nr+1)² ln(nr+1) + (nr−1)² ln|nr−1| ]     (B18)

⟨σ_y²(N, τ, τ)⟩ = h_{−1} · N ln N / (N−1),    (r = 1)     (B19)

σ_y²(τ) = h_{−1} · 2 ln 2,    (N = 2, r = 1)     (B20)

D_x²(τ) = 2σ_x²(τ) = h_{−1} · 4τ² ln 2     (B21)

𝒟_x²(T, τ) = h_{−1} · τ² [ −2r² ln r + (r+1)² ln(r+1) + (r−1)² ln|r−1| ]     (B22)

𝒟_x²(T, τ) ≈ h_{−1} · 2τ² (3/2 + ln r),    for r ≫ 1
           ≈ h_{−1} · 2T² (3/2 − ln r),    for r ≪ 1     (B23)
WHITE y  (RANDOM WALK x)  (α = 0)

S_y(f) = h₀,    S_x(f) = h₀ / (2πf)²,    r = T/τ,    0 ≤ f ≤ f_h

⟨σ_y²(N, T, τ)⟩ = h₀ · (1/2) τ⁻¹,    for r ≥ 1
               = h₀ · (1/6) r (N+1) τ⁻¹,    for Nr ≤ 1     (B24)

⟨σ_y²(N, τ, τ)⟩ = h₀ · (1/2) τ⁻¹,    (r = 1)     (B25)

σ_y²(τ) = h₀ · (1/2) τ⁻¹,    (N = 2, r = 1)     (B26)

D_x²(τ) = 2σ_x²(τ) = h₀ · τ     (B27)

𝒟_x²(T, τ) = h₀ · τ,    for r ≥ 1
           = h₀ · T,    for r ≤ 1     (B28)
FLICKER x  (α = 1)

S_y(f) = h₁ f,    S_x(f) = h₁ / [(2π)² f],    r = T/τ,    2πf_hτ ≫ 1,    2πf_hT ≫ 1,    0 ≤ f ≤ f_h

(γ = 0.5772... is Euler's constant.)

⟨σ_y²(N, T, τ)⟩ = h₁ · (2 / ((2π)² τ²)) [ γ + ln(2πf_hτ)
    + (1/(N(N−1))) Σ_{n=1}^{N−1} (N−n) ln( n²r² / (n²r² − 1) ) ],    for r ≫ 1     (B29)

⟨σ_y²(N, τ, τ)⟩ = h₁ · (2(N+1) / (N (2π)² τ²)) [ γ + ln(2πf_hτ) − ln N / (N² − 1) ],    (r = 1)     (B30)

σ_y²(τ) = h₁ · (1 / ((2π)² τ²)) { 3[γ + ln(2πf_hτ)] − ln 2 },    (N = 2, r = 1)     (B31)

D_x²(τ) = 2σ_x²(τ) = h₁ · (2 / (2π)²) { 3[γ + ln(2πf_hτ)] − ln 2 }     (B32)

𝒟_x²(T, τ) = h₁ · (4 / (2π)²) [ γ + ln(2πf_hτ) ],    for r ≫ 1
           = h₁ · (2 / (2π)²) { 3[γ + ln(2πf_hτ)] − ln 2 },    for r = 1
           = h₁ · (4 / (2π)²) [ γ + ln(2πf_hT) ],    for r ≪ 1     (B33)
WHITE x  (α = 2)

S_y(f) = h₂ f²,    S_x(f) = h₂ / (2π)²,    r = T/τ,    2πf_hτ ≫ 1,    0 ≤ f ≤ f_h

δ(r−1) = 1 if r = 1, and 0 otherwise.

⟨σ_y²(N, T, τ)⟩ = h₂ · 2f_h [N + δ(r−1)] / (N (2π)² τ²)     (B34)

⟨σ_y²(N, τ, τ)⟩ = h₂ · 2f_h (N+1) / (N (2π)² τ²),    (r = 1)     (B35)

σ_y²(τ) = h₂ · 3f_h / ((2π)² τ²),    (N = 2, r = 1)     (B36)

D_x²(τ) = 2σ_x²(τ) = h₂ · 6f_h / (2π)²     (B37)

𝒟_x²(T, τ) = h₂ · 2f_h [2 + δ(r−1)] / (2π)²     (B38)
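For white x, (B36) gives σ_y²(τ) ∝ τ⁻², which can be checked by simulation (a sketch of our own, assuming unit sample spacing and phase data x):

```python
import numpy as np

rng = np.random.default_rng(11)
x = rng.normal(size=1_000_000)          # white phase noise x(t)

def avar_from_phase(x, m):
    """sigma_y^2(tau) from phase data via second differences at spacing tau = m (B7)."""
    d2 = x[2 * m:] - 2 * x[m:-m] + x[:-2 * m]
    return np.mean(d2 ** 2) / (2.0 * m ** 2)

# sigma_y^2(tau) proportional to tau^{-2}: halving tau should quadruple it.
ratio = avar_from_phase(x, 50) / avar_from_phase(x, 100)
print(ratio)   # roughly 4
```

Note that for sampled data the effective noise bandwidth f_h is set by the sampling rate, consistent with the f_h dependence of (B34)-(B38).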
REFERENCES

[1] E. T. Jaynes, "Information theory and statistical mechanics,"
Phys. Rev., vol. 108, pp. 171-190, October 1957.

[2] C. Bingham, M. D. Godfrey, and J. W. Tukey, "Modern techniques
of power spectrum estimation," IEEE Trans. Audio and Electroacoustics,
vol. AU-15, pp. 56-66, June 1967.

[3] R. B. Blackman, Linear Data Smoothing and Prediction in Theory
and Practice, Addison-Wesley Publishing Co., 1965.

[4] R. B. Blackman and J. W. Tukey, The Measurement of Power
Spectra, New York: Dover, 1958.

[5] E. O. Brigham and R. E. Morrow, "The fast Fourier transform,"
IEEE Spectrum, vol. 4, pp. 63-70, December 1967.

[6] E. J. Baghdady, R. D. Lincoln, and B. D. Nelin, "Short-term
frequency stability: theory, measurement, and status," Proc. IEEE-
NASA Symp. on Short-term Frequency Stability (NASA SP-80),
pp. 65-87, November 1964; also Proc. IEEE, vol. 53, pp. 704-722,
2110-2111, 1965.

[7] L. Cutler and C. Searle, "Some aspects of the theory and measurement
of frequency fluctuations in frequency standards," Proc. IEEE,
vol. 54, pp. 136-154, February 1966.

[8] D. W. Allan, "Statistics of atomic frequency standards," Proc. IEEE,
vol. 54, pp. 221-230, February 1966.

[9] W. A. Shewhart, Economic Control of Quality of Manufactured Product,
Van Nostrand Co., 1931, p. 146.

[10] J. A. Barnes, "Atomic timekeeping and the statistics of precision
signal generators," Proc. IEEE, vol. 54, pp. 207-220, February 1966.

[11] J. A. Barnes and D. W. Allan, "An approach to the prediction of
coordinated universal time," Frequency, vol. 5, pp. 15-20, Nov.-Dec. 1967.

[12] R. F. C. Vessot et al., "An intercomparison of hydrogen and cesium
frequency standards," IEEE Trans. on Instrumentation and Measurement,
vol. IM-15, pp. 165-176, December 1966.

[13] J. A. Barnes, "Tables of bias functions, B₁ and B₂, for variances
based on finite samples of processes with power law spectral
densities," NBS Technical Note 375, January 1969.

[14] D. B. Leeson and G. F. Johnson, "Short-term stability for a Doppler
radar: requirements, measurements, and techniques," Proc. IEEE,
vol. 54, pp. 244-248, February 1966.

[15] Radar Handbook, M. I. Skolnik, Ed., McGraw-Hill, 1970, chapter 16
(W. K. Saunders).

[16] R. S. Raven, "Requirements on master oscillators for coherent radar,"
Proc. IEEE, vol. 54, pp. 237-243, February 1966.

[17] D. B. Leeson, "A simple model of feedback oscillator noise spectrum,"
Proc. IEEE, vol. 54, pp. 329-330, February 1966.

[18] R. F. C. Vessot, L. Mueller, and J. Vanier, "The specification of
oscillator characteristics from measurements made in the frequency
domain," Proc. IEEE, vol. 54, pp. 199-207, February 1966.

[19] Floyd M. Gardner, Phaselock Techniques, New York: Wiley, 1966.

[20] C. Menoud, J. Racine, and P. Kartaschoff, "Atomic hydrogen
maser work at L.S.R.H., Neuchatel, Switzerland," Proc. 21st
Annual Symp. on Frequency Control, pp. 543-567, April 1967.

[21] A. G. Mungall, D. Morris, H. Daams, and R. Bailey, "Atomic
hydrogen maser development at the National Research Council of
Canada," Metrologia, vol. 4, pp. 87-94, July 1968.

[22] R. F. C. Vessot, "Atomic hydrogen masers, an introduction and
progress report," Hewlett-Packard J., vol. 20, pp. 15-20,
October 1968.
ACKNOWLEDGMENT

During the process of writing this manuscript and its many revisions,
numerous people have added comments and encouragement for the authors.
The authors are particularly indebted to Mr. D. W. Allan, Drs. D. Halford,
S. Jarvis, and J. J. Filliben of the National Bureau of Standards. The
authors are also indebted to Mrs. Carol Wright for her secretarial skills
and patience in preparing the many revised copies of this paper.