Title: Intelligent Techniques for Web Personalization: IJCAI 2003 Workshop, ITWP 2003, Acapulco, Mexico, August 11, 2003, Revised Selected Papers
File Size: 7.2 MB
Total Pages: 331
Table of Contents
                            Front matter
Chapter 1
	The Personalization Process
	Classifications of Approaches to Personalization
		Individual Vs Collaborative
		Reactive Vs Proactive
		User Vs Item Information
		Memory Based Vs Model Based
		Client Side Vs Server Side
	Personalization Techniques
		Content-Based Filtering
		Traditional Collaborative Filtering
		Model Based Techniques
		Hybrid Techniques
		The Cold Start and Latency Problem
		Data Sparseness
		Recommendation List Diversity
		Adapting to User Context
		Using Domain Knowledge
		Managing the Dynamics in User Interests
	Evaluation of Personalization Systems
	Conclusions and Future Directions
Chapter 2
	Performing Web Navigation
		Visually Scanning a Web Page
		Link Assessment
		Link Selection Strategies
		Backtracking Strategies
	Modeling Approaches
	The MESA Model
		Modeling the Web Site and Web Browser
		MESA Strategies
		Assessing Link Relevance
	Future Directions for Assessing Relevance
		Co-occurrence of Label and Target
		Latent Semantic Analysis
		Modeling Variation Among Users
	Implications for Intelligence Systems That Infer User Intent
	Closing Comments
Chapter 3
	Personalizing Interaction
		Representational Approach
	More Choices of Representations
		Using EBG
		Operationality Considerations
		Domain Theories for Information-Seeking Interactions
	Personable Traits
Chapter 4
	Problems for Personalization That Arise from Users’ Privacy Concerns and Sites’ Current Policies of Dealing with Privacy Issues
		A Categorization of Data Used for Personalization
		Privacy Concerns and Perceptions of the Privacy-Personalization Tradeoff
		Variables That Influence Users’ Privacy Concerns
		Site Communication Design
	Analyzing Users: An Experimental Evaluation of the Influence of Communication Design on Data Disclosure
	Analyzing Algorithms: Experimental Evaluations of the Influence of Data Availability on Personalization Quality
		Quantifying Level of Identity Disclosure and Recommendation Quality
		Level of Identity Disclosure: From Low to High
		Level of Identity Disclosure: From High to Maximal
		Reconstructing Data Revisited: From Minimal to Low (or Higher)
	Conclusions and Future Work
Chapter 5
	Recommender Systems
	Case-Based Reasoning
	Methodology for the CBR Recommender Systems
	CBR Recommendation Techniques and Systems
		Entree - EN
		Interest Confidence Value - ICV
		DieToRecs - DTR
		Order-Based Retrieval - OBR
		First Case - CDR
		Comparison-Based Retrieval - COB
		ExpertClerk - EC
Chapter 6
		User Feedback in Personalized Recommender Systems
		The Promises and Pitfalls of Critiquing
	Critiquing in Comparison-Based Recommendation
		A Review
		Adapting Comparison-Based Recommendation for Critiquing
		Adaptive Selection
	Experimental Evaluation
		Basic Assumption
		Recommendation Efficiency
		Preference Tolerance
		Preference Noise
Chapter 7
		Collaborative Recommendation
		Content-Based Recommendation
		Knowledge-Based Recommendation
		Strengths and Weaknesses
		Hybrid Recommendation
	Hybrid Recommender Systems
	Experiments with Hybrid Recommendation
		Evaluation Metrics
		Cascade Hybrids
		Feature Combination Hybrids
		Feature Augmentation Hybrids
Chapter 8
		Collaborative Filtering
		Linear Associative Memory
	Collaborative-Filtering by Linear Associative Memory (CLAM)
		Effect of Training Set Size
		User-Based Interpretation of CLAM
	Related Work
	Future Work
Chapter 9
		Content-Based Filtering Systems
		Collaborative Filtering Systems
		Hybrid Approach
	Our Proposed Approach
		A Motivational Example
		Our Proposed Hybrid Recommender Mechanism
		Related Work
		Data Set
		Evaluation Metrics
		Experiment Results
	Conclusions and Future Work
Chapter 10
	Introduction and Background
	Sequence Alignment Method (SAM)
	Interestingness Measure
		Baldwin’s Support Logic
		Support Logic Framework for Web Usage Mining
	Sequence Alignment Method Extended with Interestingness (SAMI)
	Empirical Analysis
		Proposed Approach
		Deploying the Results
	Conclusions and Future Research
Chapter 11
	Web Information Retrieval
	Profiles for Personalisation
	Approaches to Personalisation
		Relevance Feedback and Query Modification
		Personalisation by Content Analysis
		Recommender Systems
		Personalisation by Link Analysis
		Social Search Engines
		Mobile and Context-Aware Searching
		Requirements for Personalised Search
	Final Remarks
Chapter 12
	Recent Approaches to Personalization
	Description of the Method
		Step 1 (Preprocessing): Expand the Communities
		Step 2: Calculate the Community Weights of the User
		Step 3: Reorder the Result Set
	Experimental Set-Up and Evaluation Metric
	Experimental Results
Chapter 13
	Related Work
	Web Browsing Behavior Model
		User Studies
		Empirical Results
		AIE: Annotation Internet Explorer
		Overview of WebIC
	Conclusion and Future Work
Chapter 14
	The Mobile Internet
		Mobile Internet Devices
		Mobile Information Access
		Mobile Portal Navigation
	A Probabilistic Model of Personalized Navigation
		Profiling and Personalization
		Deployment Experiences
	Distance-Biased Promotion
		Expected Click-Distance
	Experimental Evaluation
		Comparative Click-Distance Profiles
		Further Analysis
Chapter 15
	Content Contextualization Servers
		Content Management
	Related Work
	System Architecture
		Content Classification
		Sequence Tree Based Recommendation
		Using Multiple Recommenders
		Integration with a Content Management System
		Conformance to Standards
	Experimental Evaluation
	Conclusions and Future Work
Chapter 16
	A Privacy Framework for Web Services
		Declaration of Privacy Policy
		Specifying Data Requests of Web Services
		Describing User Privacy Preferences
		Architecture of the System
	Negotiation Component
		Extraction of Preference Rules
		Negotiation of Data Elements
	An Example Scenario
		Rule Extraction
	Related Work
	Conclusions and Future Work
Chapter 17
	The Problem
		Issues with the Racist Web
		Examples of Criteria
		Relying on Weak Criteria
	Related Work
	The Multi-agent Model
		The Multi-agent Architecture
		Dynamic Pyramidal Coordination
Back matter
Document Text Contents
Page 1

Lecture Notes in Artificial Intelligence 3169
Edited by J. G. Carbonell and J. Siekmann

Subseries of Lecture Notes in Computer Science

Page 2

Bamshad Mobasher, Sarabjot Singh Anand (Eds.)

Intelligent Techniques
for Web Personalization

IJCAI 2003 Workshop, ITWP 2003
Acapulco, Mexico, August 11, 2003
Revised Selected Papers


Page 165

158 C.P. Lam

We then recommend the top N items based on the elements of p_a, excluding those
items the active user has already rated.

We will simply refer to collaborative filtering using linear associative memory as
CLAM from now on. The first observation we make is that W completely represents
the underlying model and is an n × n symmetric matrix, in which n is the number of
items. Its complexity is independent of m, the number of users. In general, m ≫ n, so
the difference can be significant.
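
As a hedged sketch of the construction and top-N step just described (NumPy, the toy ratings, and the helper names are my own illustration, not from the chapter):

```python
import numpy as np

def build_clam(R, alpha=1.0):
    """Batch CLAM model: W = sum_i x_i x_i^T / ||x_i||^alpha (n x n, symmetric)."""
    n = R.shape[1]
    W = np.zeros((n, n))
    for x in R:                       # one outer product per user's rating row
        W += np.outer(x, x) / np.linalg.norm(x) ** alpha
    return W

def recommend(W, x_a, top_n=3):
    """Rank items by p_a = W x_a, excluding items the active user has rated."""
    p = W @ x_a
    p[x_a > 0] = -np.inf              # never re-recommend rated items
    return np.argsort(p)[::-1][:top_n]

# Toy data: 4 users x 5 items, 0 = unrated.
R = np.array([[5, 3, 0, 1, 0],
              [4, 0, 0, 1, 0],
              [1, 1, 0, 5, 4],
              [0, 1, 5, 4, 0]], dtype=float)
W = build_clam(R)
x_a = np.array([5.0, 3.0, 0.0, 0.0, 0.0])
print(recommend(W, x_a, top_n=2))
```

Note that W is 5 × 5 here however many users contribute, matching the observation that the model size depends only on n.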

Another notable property of CLAM is its support for incremental learning. Other
model-based CF algorithms tend to support only batch learning, and their models have
to be completely rebuilt when more data become available. It is a simple observation that
if W(m) stands for the memory matrix given m users, and the data for a new, (m +
1)th, user become available, then the memory matrix can be updated by W(m + 1) =
W(m) + x_{m+1} · (x_{m+1})^T / ‖x_{m+1}‖^α. Other simple updating rules can also be devised
for the cases where existing users in the training set decide to add, modify, or delete
their ratings.
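
A minimal check of the incremental rule (NumPy and the random toy data are my own, not from the chapter): folding one new user's outer product into W(m) reproduces the batch model built from all m + 1 users.

```python
import numpy as np

def memory_matrix(R, alpha=1.0):
    # Batch construction: W = sum_i x_i x_i^T / ||x_i||^alpha
    return sum(np.outer(x, x) / np.linalg.norm(x) ** alpha for x in R)

def update(W, x_new, alpha=1.0):
    # Incremental rule: W(m+1) = W(m) + x_{m+1} x_{m+1}^T / ||x_{m+1}||^alpha
    return W + np.outer(x_new, x_new) / np.linalg.norm(x_new) ** alpha

rng = np.random.default_rng(0)
R = rng.integers(1, 6, size=(4, 6)).astype(float)   # 4 users, 6 items
W_incremental = update(memory_matrix(R[:3]), R[3])  # fold in the 4th user
W_batch = memory_matrix(R)                          # rebuild from scratch
print(np.allclose(W_incremental, W_batch))          # the two paths agree
```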

3.1 Effect of Training Set Size

The fact that each user's information is additive in creating the memory model also
enables some theoretical analysis. One can imagine there to be a universe of all users
that one can get ratings from (e.g., the population of all shoppers). Say the size of this
universe is M. If one could actually collect the ratings from all M users, then one could
create the ideal CLAM model,

W* = (1/M) Σ_{i=1}^{M} x_i · (x_i)^T

Here we assume α = 0, and the 1/M multiplier has no impact on the recommendation
but will add clarity to our analysis.

Of course, one does not have the ratings from all possible users, so we assume what
one has is a random subset of size m. The CLAM model built from this subset is

W(m) = (1/m) Σ_{i=1}^{m} X_i · (X_i)^T    (6)

(Again the multiplicative constant has no effect on recommendation, and we capitalize
X just to emphasize it as a random sample.) Under this view, one is effectively trying to
estimate W* from a random sample of size m drawn from a population of size M. It is
well known that this is an unbiased estimator and the expectation of W(m) is in fact W*.
Another known result [12] for this estimator is that the variance of each element of
W(m) is proportional to

(1/m) · (1 − (m − 1)/(M − 1))

For m ≪ M, as is usually the case, the variance of the elements of W(m) is approximately
proportional to 1/m, and the standard error, the more useful measure, is approximately
proportional to 1/√m. Thus a larger sample of users will make a better

Page 166

Collaborative Filtering Using Associative Neural Memory 159

model with lower standard error, although the rate of improvement will be decreasing.
This fits the intuition of many practitioners, but to our knowledge this is the first formal
analysis of such a phenomenon for a collaborative filtering algorithm.
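
The 1/√m behaviour is easy to check empirically. In this Monte Carlo sketch (my own illustration; the synthetic ratings and sample sizes are arbitrary), the standard error of an element of W(m) roughly halves when the sample size quadruples:

```python
import numpy as np

rng = np.random.default_rng(42)
M, n = 10_000, 4
universe = rng.integers(0, 6, size=(M, n)).astype(float)  # all M users' ratings

def element_std(m, trials=2000):
    """Std. dev. of one entry of W(m) = (1/m) sum_i X_i X_i^T over resamples."""
    vals = []
    for _ in range(trials):
        S = universe[rng.choice(M, size=m, replace=False)]
        vals.append((S.T @ S / m)[0, 1])
    return np.std(vals)

# Standard error ~ 1/sqrt(m): quadrupling m should roughly halve it.
print(element_std(25) / element_std(100))
```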

3.2 User-Based Interpretation of CLAM

Earlier we described the user-based collaborative filtering algorithm for ranking in
Equation 2. Let x_i be a column vector with its elements as defined in Equation 4. Let
p_a be a column vector whose jth element is p_{a,j} of Equation 2. The user-based ranking
algorithm can be written in vector form as

(p_a)^T = Σ_i w(a, i) · (x_i)^T

Several forms for the weighting function w(a, i) exist [2]. One popular form takes the
vector similarity of user i's and the active user's rating vectors. It is defined as

w(a, i) = (x_a)^T · x_i / (‖x_a‖ · ‖x_i‖)

Plugging into the user-based ranking algorithm, we get

(p_a)^T = Σ_i [(x_a)^T · x_i / (‖x_a‖ · ‖x_i‖)] · (x_i)^T
        = (1/‖x_a‖) · (x_a)^T · Σ_i x_i · (x_i)^T / ‖x_i‖

The division by ‖x_a‖ can be dropped because it affects the predicted value of every
item equally and thus has no effect on the ranking of items. The summation is simply
W defined in Equation 5 with α set to 1. Noting the symmetry of W, we have

(p_a)^T = (x_a)^T · W, and hence p_a = W · x_a,

which is exactly our CLAM algorithm.
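
This equivalence can also be verified numerically. A small sketch (NumPy and the random data are mine): cosine-weighted user-based scoring and p_a = W x_a with α = 1 agree up to the constant 1/‖x_a‖, so they induce the same ranking.

```python
import numpy as np

rng = np.random.default_rng(1)
R = rng.integers(1, 6, size=(8, 5)).astype(float)   # 8 training users, 5 items
x_a = rng.integers(1, 6, size=5).astype(float)      # active user's ratings

# CLAM with alpha = 1: W = sum_i x_i x_i^T / ||x_i||
W = sum(np.outer(x, x) / np.linalg.norm(x) for x in R)
p_clam = W @ x_a

# User-based CF with cosine weights w(a, i)
p_user = sum((x_a @ x) / (np.linalg.norm(x_a) * np.linalg.norm(x)) * x for x in R)

# p_user equals p_clam up to the constant factor 1/||x_a||.
print(np.allclose(p_user * np.linalg.norm(x_a), p_clam))
```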

4 Experiment

4.1 Methodology

To test the effectiveness of the CLAM algorithm we ran some experiments on the
MovieLens dataset [13], collected by the GroupLens Research Project at the University
of Minnesota. The dataset consists of 100,000 ratings, in the range of one to five, from
943 users on 1682 movies. Each user has rated at least 20 movies, and each movie has
been rated at least once. The ratings were gathered at a Web site during a seven-month
period from 1997 to 1998.
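
As a hedged sketch of preparing this dataset (the helper and the inline sample lines are my own; the real ratings ship as tab-separated lines of user id, item id, rating, and timestamp), the 943 × 1682 rating matrix can be assembled like this:

```python
import numpy as np

# A few fake lines standing in for the MovieLens tab-separated ratings file.
sample = "196\t242\t3\t881250949\n186\t302\t3\t891717742\n196\t377\t1\t878887116\n"

def ratings_matrix(text, n_users=943, n_items=1682):
    """Build the user-item matrix CLAM trains on; ids in the file are 1-based."""
    R = np.zeros((n_users, n_items))
    for line in text.strip().splitlines():
        user, item, rating, _timestamp = map(int, line.split("\t"))
        R[user - 1, item - 1] = rating
    return R

R = ratings_matrix(sample)
print(R.shape, int(R[195, 241]))   # user 196's rating of item 242
```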

Page 330

Web Personalisation for Users Protection: A Multi-agent Method 323

67%). This evaluation also has important consequences for the applicability
of the system in an industrial context.



Page 331

Author Index

Aknine, Samir, 306
Anand, Sarabjot Singh, 1, 272

Berendt, Bettina, 69
Burke, Robin, 133

Chan, Keith C.C., 169

Dogac, Asuman, 289

Eirinaki, Magdalini, 272

Greiner, Russ, 241

Häubl, Gerald, 241
Hay, Birgit, 187

Keenoy, Kevin, 201
Kritikopoulos, Apostolos, 229

Lam, Chuck P., 153
Levene, Mark, 201
Lorenzi, Fabiana, 89

McCarthy, Kevin, 255
McGinty, Lorraine, 114
Miller, Craig S., 37

Mobasher, Bamshad, 1

Price, Bob, 241

Quenum, Ghislain, 306

Ramakrishnan, Naren, 53
Reilly, James, 255
Ricci, Francesco, 89

Sideri, Martha, 229
Slodzian, Aurélien, 306
Smyth, Barry, 114, 255

Tang, Tiffany Ya, 169
Teltzrow, Maximilian, 69
Toroslu, I. Hakki, 289
Tumer, Arif, 289

Vanhoof, Koen, 187
Vlachakis, Joannis, 272

Wets, Geert, 187
Winoto, Pinata, 169

Zhu, Tingshao, 241
