Title: INS/GPS Integration Using Neural Networks for Land Vehicular Navigation Applications
Language: English
File Size: 14.2 MB
Total Pages: 307

Table of Contents
	20209.pdf
		UCGE Reports
			Number 20209
			Department of Geomatics Engineering
				Kai-Wei Chiang
					November 2004
	Thesis_kai-wei_nov_2004.pdf
		chapter2.pdf
			Figure 2.9: Performance comparisons of various augmented pos
			Figure 2.10: Core components of a modern land vehicular navi
Document Text Contents
Page 1

UCGE Reports
Number 20209

Department of Geomatics Engineering

INS/GPS Integration Using Neural Networks for Land Vehicular Navigation Applications
(URL: http://www.geomatics.ucalgary.ca/links/GradTheses.html)

by

Kai-Wei Chiang

November 2004

Page 2

UNIVERSITY OF CALGARY

INS/GPS Integration Using Neural Networks for Land Vehicular Navigation Applications

by

Kai-Wei Chiang

A DISSERTATION
SUBMITTED TO THE FACULTY OF GRADUATE STUDIES
IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE
DEGREE OF DOCTOR OF PHILOSOPHY

DEPARTMENT OF GEOMATICS ENGINEERING

CALGARY, ALBERTA, CANADA
NOVEMBER, 2004

© Kai-Wei Chiang 2004

Page 153

minimize a scalar index of performance. In general, a reinforcement learning

system is built around a critic that converts a primary reinforcement signal to a

heuristic reinforcement signal [Barto et al., 1983]. The goal of such a learning

procedure is to minimize a cost function, which is defined as the expectation of

the cumulative cost of actions taken over a sequence of steps instead of simply

immediate cost. See Haykin [1999] for more detail.

• Hybrid learning: A purely supervised learning procedure is sometimes inefficient on its own, and combining supervised with unsupervised learning is required to solve certain problems. For example, an appropriate unsupervised NN can first be applied to reduce the training data set of a classification problem by clustering the original data, and a supervised NN architecture can then be applied to categorize the clustered data. As a result, the training time of the supervised learning stage can be reduced significantly.
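The clustering-then-classification idea above can be sketched as follows. This is an illustrative toy, not code from this thesis: a naive one-dimensional two-centroid k-means plays the unsupervised role, and a nearest-centroid rule plays the supervised role; the sample values are arbitrary assumptions.

```python
# Illustrative sketch of hybrid learning: an unsupervised step
# (naive 1-D k-means with two centroids) reduces the training data
# to two prototypes, and a supervised step then classifies new
# samples against those prototypes instead of the full data set.
# All numeric values below are arbitrary, for demonstration only.

def kmeans2_1d(xs, iters=20):
    """Naive 1-D k-means with k = 2; returns the two centroids."""
    cents = [min(xs), max(xs)]          # simple initialisation
    for _ in range(iters):
        groups = [[], []]
        for x in xs:
            j = 0 if abs(x - cents[0]) <= abs(x - cents[1]) else 1
            groups[j].append(x)
        cents = [sum(g) / len(g) if g else cents[i]
                 for i, g in enumerate(groups)]
    return cents

# Unsupervised step: cluster the raw samples into two prototypes.
samples = [0.10, 0.20, 0.15, 0.90, 1.00, 0.95]
centroids = sorted(kmeans2_1d(samples))

# Supervised step: a nearest-centroid classifier over the reduced set.
def classify(x):
    return 0 if abs(x - centroids[0]) <= abs(x - centroids[1]) else 1

print(centroids, classify(0.05), classify(0.85))
```

Because the supervised stage only ever sees the two prototypes rather than all six samples, its training cost shrinks accordingly, which is the point made above.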






[Figure: two block diagrams. Supervised learning: an unknown model and the learning system both receive the vector describing the state of the model; the learning system's actual response is subtracted from the desired response to form the error signal that drives learning. Unsupervised learning: the learning system receives the state vector and produces an actual response, with no desired response and no error signal.]












Figure 5.6: Learning procedures

5.2 Multi-Layered Feed-Forward Neural Networks


A supervised neural network with either a static or a dynamic network architecture can be applied to nonlinear input-output mapping applications such as pattern recognition, function approximation and estimation. A dynamic network architecture is presented in Section 5.3. According to Ham and Kostanic [2001], four of the most common static


Page 154

neural networks are: (1) associative memory networks, (2) radial basis function networks, (3) counter-propagation networks, and (4) multi-layered feed-forward networks. The scope of this research is limited to multi-layered feed-forward networks; see Ham and Kostanic [2001] for more details on the other networks.

The multi-layered feed-forward neural network (MFNN) trained by the backpropagation algorithm is the best-known and most commonly used neural network today. The standard backpropagation algorithm is based on gradient descent: the synaptic weights are updated proportionally to the computed error between the actual response and the desired response. The result after training is a specific nonlinear mapping from input to output. The advantages of the backpropagation learning algorithm include its parallel computation structure and its ability to acquire a complex nonlinear mapping. As it is the most widely applied static neural network, the following sections present the static MFNN and its associated learning algorithms in more detail.
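As a concrete illustration of the update rule just described, in which weights are adjusted proportionally to the error between actual and desired response, the following is a minimal gradient-descent backpropagation sketch for a one-hidden-layer MFNN. The network size, learning rate and toy sine-fitting task are assumptions chosen for demonstration, not values from this thesis.

```python
# Minimal sketch: a 1-4-1 MFNN trained by backpropagation (gradient
# descent) to approximate y = sin(x). All hyperparameters are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

X = np.linspace(-np.pi, np.pi, 40).reshape(-1, 1)
Y = np.sin(X)

W1 = rng.normal(0, 0.5, (1, 4)); b1 = np.zeros(4)   # input -> hidden
W2 = rng.normal(0, 0.5, (4, 1)); b2 = np.zeros(1)   # hidden -> output
lr = 0.05                                           # learning rate

def forward(X):
    H = np.tanh(X @ W1 + b1)       # hidden layer: nonlinear activation
    return H, H @ W2 + b2          # linear output layer

_, out0 = forward(X)
loss0 = np.mean((out0 - Y) ** 2)   # MSE before training

for _ in range(2000):
    H, out = forward(X)
    err = out - Y                  # actual minus desired response
    # Backpropagate the error; updates are proportional to it.
    dW2 = H.T @ err / len(X);  db2 = err.mean(0)
    dH = (err @ W2.T) * (1 - H ** 2)        # tanh derivative
    dW1 = X.T @ dH / len(X);   db1 = dH.mean(0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

_, out1 = forward(X)
loss1 = np.mean((out1 - Y) ** 2)   # MSE after training
print(f"MSE before: {loss0:.4f}  after: {loss1:.4f}")
```

After training, the weights encode a specific nonlinear input-output mapping, exactly in the sense used in the text: the mean squared error between actual and desired response is driven down by the repeated proportional updates.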

5.2.1 Nonlinear Mapping and MFNNs

As previously stated, the topology of an MFNN consists of an input layer, at least one hidden layer and an output layer. In general, an individual neuron aggregates its weighted inputs (synaptic operation) and yields an output through a linear or nonlinear activation function (somatic operation). As these neurons form layered network configurations through feedforward interlayer synaptic connections only, the neural signal flows strictly from the lower to the higher layers. As a result, an MFNN is a static neural model in the sense that its input-output relationship can be described by a nonlinear mapping function. In other words, an MFNN can implement a nonlinear mapping from many inputs to many outputs. Indeed, MFNNs have been widely applied to provide alternative solutions to various engineering and science applications that cannot be solved by conventional methods.
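The two per-neuron operations mentioned above, synaptic aggregation of weighted inputs followed by a somatic activation, can be sketched for a single neuron. The weights, bias and inputs here are arbitrary illustrative values, not parameters from this thesis.

```python
# A single neuron's two operations: the synaptic operation (weighted
# sum of the inputs plus a bias) and the somatic operation (here a
# tanh activation). All numeric values are arbitrary illustrations.
import math

def neuron(inputs, weights, bias):
    s = sum(w * x for w, x in zip(weights, inputs)) + bias  # synaptic op
    return math.tanh(s)                                     # somatic op

y = neuron([1.0, -2.0, 0.5], [0.4, 0.1, -0.6], bias=0.2)
print(round(y, 4))
```

Stacking layers of such neurons, with each layer's outputs feeding only the next layer, gives exactly the feedforward many-input, many-output mapping described above.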

Since the complexity of the problem varies from one application to another, the complexity of the applied MFNN varies accordingly. In general, the complexity of an MFNN depends on its topology, which consists of the


Page 306

C.3 Specifications of MEMS IMUs

The specifications of the three low-cost IMUs used in this research are shown in Figure (C.6), Figure (C.7), and Figure (C.8), respectively.




Technical Specification

MEMS Sensor Triad (MST)
• MMSS Research Group
• Size: 100 x 100 x 50 mm
• Weight: nominal weight is less than 0.5 kg
• Cost: $300 USD
• Sampling rate: 100 Hz

Parameter     | Accelerometer     | Gyro
--------------|-------------------|------------------
Range         | ±5 (g)            | ±150 (deg/s)
Bias          | +2.5 V ± 0.625 V  | +2.5 V ± 0.3 V
Noise         | 0.225 (mg/√Hz)    | 0.05 (deg/s/√Hz)
Non-linearity | 0.2% full scale   | 0.1% full scale
Bandwidth     | 32 Hz             | 40 Hz
Scale factor  | 250 mV/g          | 12.5 mV/(deg/s)


















Figure C.6: The specifications of MST












Technical Specification

IMU400CC-100
• Crossbow Technology Inc. (www.xbow.com)
• Size: 76.2 x 95.3 x 81.3 mm
• Weight: nominal weight is less than 0.6 kg
• Cost: $3,000 USD to $5,000 USD
• Sampling rate: >100 Hz

Parameter             | Accelerometer   | Gyro
----------------------|-----------------|-----------------
Range                 | ±2 (g)          | ±100 (deg/s)
Bias                  | <±8.5 (mg)      | <±1.0 (deg/s)
Non-linearity         | 1% full scale   | 0.3% full scale
Random walk           | <0.1 (m/s/√h)   | <0.1 (deg/√h)
Bandwidth             | >75 Hz          | >25 Hz
Data resolution       | 0.25 (mg)       | 2 (arcsec/sec)
Scale factor accuracy | <1%             | <1%




Figure C.7: The specifications of IMU400CC-100






Page 307

Technical Specification

ISI IMU
• Inertial Science Inc. (www.inertialscience.com)
• Size: 72 x 76 x 58 mm
• Weight: nominal weight is less than 0.36 kg
• Cost: $3,000 USD to $5,000 USD
• Sampling rate: 200 Hz

Parameter                 | Accelerometer    | Gyro
--------------------------|------------------|-----------------
Range                     | ±20 (g)          | ±90 (deg/s)
Short-term bias stability | ±2 (mg)          | <0.01 (deg/s)
Long-term bias stability  | N/A              | <1 (deg/s)
Non-linearity             | 0.5% full scale  | 0.2% full scale
Random walk               | <0.1 (m/s/√h)    | <0.5 (deg/√h)
Bandwidth                 | ≥30 Hz           | ≥30 Hz
Data resolution           | 1 (mg)           | 2 (arcsec/sec)
Scale factor accuracy     | 0.2%             | 0.2%


Figure C.8: The specifications of ISI IMU






