
Title: Artificial Cognition Architectures
Author: James A. Crowder, John N. Carbone, Shelli A. Friess
Language: English
File Size: 4.1 MB
Total Pages: 273
Table of Contents
Preface
Contents
List of Figures
Chapter 1: Introduction
	1.1 Striving for Artificial Intelligence
	1.2 Historical Concepts of Intelligent Robots
		1.2.1 Ancient Automatons
			1.2.1.1 Isaac Asimov’s Laws of Robotics
	1.3 Hollywood’s Views on Robots and Artificial Intelligence
	1.4 What Are Artificial Cognitive Systems and Why Do We Need Them?
	1.5 Layout of the Book
Chapter 2: The Information Continuum
	2.1 Information Flow Within a Synthetic Continuum
	2.2 Information Processing Models
	2.3 Discussion
	Reference
Chapter 3: The Psychology of Artificial Intelligence
	3.1 Artificial Psychology
	3.2 Artificial Cognition: What Does It Mean to Be Cognitive?
	3.3 Artificial Intuition: What Does It Mean to Be Intuitive?
	3.4 Human Versus Machine Emotions
		3.4.1 Basic Emotions
	3.5 Human Perception of Artificial Intelligence
	3.6 Human Acceptance of Artificial Intelligence
	3.7 Artificial Intelligence Perception Design
	3.8 The Psychology of Human-Robot Collaboration
	3.9 Discussion
Chapter 4: Cognitive Intelligence and the Brain: Synthesizing Human Brain Functions
	4.1 The Artificial Cognitive Neural Framework (ACNF) Architecture
		4.1.1 Cognitrons
	4.2 The Artificial Prefrontal Cortex (The Mediator)
		4.2.1 Artificial Prefrontal Cortex and Cognitive Control
		4.2.2 Artificial Prefrontal Cortex Framework
		4.2.3 Artificial Prefrontal Cortex Architecture
		4.2.4 Artificial Prefrontal Cortex Processing
	4.3 Self-Evolving Cognitrons: The Heart of the SELF
		4.3.1 Self-Adapting Cognitrons
		4.3.2 Cognitron Tasking
		4.3.3 The Cognitron Dialectic Search Argument (DSA)
		4.3.4 The Cognitron Software Architecture
		4.3.5 Teaching Cognitrons to Learn and Reason
	4.4 Continuously Recombinant Neural Fiber Threads
		4.4.1 Self-Adaptive Cognitive Neural Fibers
		4.4.2 Stochasto-Chaotic Differential Constraints
		4.4.3 Continuously Recombinant Neural Fiber Topology
	4.5 Discussion
Chapter 5: Artificial Memory Systems
	5.1 Artificial Context in Memory Systems
	5.2 Sensory Memories
	5.3 Short-Term Artificial Memories
		5.3.1 Short-Term Memory Attention Processing
	5.4 Long-Term Artificial Memories
		5.4.1 Explicit or Declarative Long-Term Memories
		5.4.2 Long-Term Spatio-temporal Memories
		5.4.3 Long-Term Semantic Memories
		5.4.4 Long-Term Implicit Memories
			5.4.4.1 Priming Implicit Memory
			5.4.4.2 Procedural Implicit Memory
		5.4.5 Procedural Memory Description
			5.4.5.1 Creation of Artificial Procedural Memory Scripts
	5.5 Group Consciousness and Memory Sharing
	5.6 Emotional Memory
		5.6.1 SELF Artificial Autonomic Nervous System States and Emotional Memories
		5.6.2 SELF Artificial Autonomic Nervous System States
	5.7 Memory Recall in the SELF: Memory Reconstruction
		5.7.1 Constructivist Memory Theory
		5.7.2 Artificial Memory Reconstruction
	5.8 Discussion
Chapter 6: Artificial Consciousness
	6.1 Artificial Neural Cognitrons
	6.2 The SELF Mixture of Experts Architecture
		6.2.1 Dynamic Cognitron Growing and Pruning
	6.3 Artificial Metacognition: Cognitive Regulation
		6.3.1 Artificial Cognition with Metacognition
		6.3.2 Metacognition: Cognitive Self-Awareness and Assessment
	6.4 Artificial Metamemory: Cognitive Understanding and Learning
		6.4.1 Cognitive Visibility and Governance
	6.5 Metacognitive and Metamemory Structures
	6.6 Extended Metacognition: Artificial Locus of Control Within the SELF
		6.6.1 Artificial Locus of Control
		6.6.2 Constructivist Learning
		6.6.3 Bounded Conceptual Reality (Cognitive Economy)
	6.7 Cognitive System Management
		6.7.1 SELF Memory Management
		6.7.2 SELF Learning Management
		6.7.3 SELF Decision Management
		6.7.4 SELF Rules Management
		6.7.5 SELF Cognitron Management
	6.8 Discussion
Chapter 7: Learning in an Artificial Cognitive System
	7.1 Autonomous Heterogeneous Level Learning Environment
	7.2 Autonomous Genetic Learning Environments
	7.3 SELF Emotional Learning
	7.4 Decision Analytics in Real-Time (DART)
		7.4.1 Case-Based DART
	7.5 Cognitronic Learning
		7.5.1 Cognitron Autonomy
		7.5.2 Cognitronic Cognition
		7.5.3 Conscious Cognitrons
		7.5.4 Autonomous Learning Mechanisms
		7.5.5 Autonomous Behavior Learning
		7.5.6 Behavior Learning and Human Interaction
	7.6 DART Occam Learning
		7.6.1 DART Pattern Discovery
		7.6.2 DART Pattern Discovery Concepts
		7.6.3 DART Computational Mechanics and Occam Learning
	7.7 DART Constructivist Learning Concepts
		7.7.1 Adaptation of Constructivist Learning Concepts to the SELF
	7.8 Discussion
Chapter 8: Synthetic Reasoning
	8.1 Human Reasoning Concepts
		8.1.1 Human Thinking
		8.1.2 Modular Reasoning
		8.1.3 Distributed Reasoning
		8.1.4 Collaborative Reasoning
	8.2 Types of Reasoning
		8.2.1 Logical Reasoning
		8.2.2 Humans and Inductive/Deductive Reasoning
		8.2.3 Moral and Ethical Reasoning
	8.3 SELF Reasoning
	8.4 Abductive Reasoning: Possibilistic, Neural Networks
		8.4.1 Artificial Creativity
		8.4.2 Creativity Through Problem Solving
		8.4.3 Dialectic Reasoning Framework
		8.4.4 FuNN Creating DAS
		8.4.5 DAS Reasoning Approximation
		8.4.6 Cognitron Archetype Descriptions
		8.4.7 The Fuzzy, Unsupervised, Active Resonance Theory, Neural Network (FUNN)
	8.5 Cognitron Theory
		8.5.1 Intelligent Software Agent Definition
		8.5.2 Weak Intelligent Software Agents
		8.5.3 Intelligent Software Agents
		8.5.4 Software Agents and Intelligence
		8.5.5 The Cognitron
	8.6 Knowledge Relativity and Reasoning
		8.6.1 Knowledge Relativity
		8.6.2 Knowledge Relativity Threads
		8.6.3 Frameworks for Contextual Knowledge Refinement
	8.7 Knowledge Density Mapping Within a SELF
		8.7.1 Knowledge Density Mapping: Pathway to SELF Metacognition
		8.7.2 Analytical Competency
	8.8 Discussion
Chapter 9: Artificial Cognitive System Architectures
	9.1 Cognitronic Artificial Consciousness Architecture
		9.1.1 Synthetic Neocortex Adaptation
		9.1.2 Cognitronic Information Flow
		9.1.3 Artificial Abductive Reasoning
		9.1.4 Elementary Artificial Occam Abductivity
		9.1.5 Synthesis of Artificial Occam Abduction
		9.1.6 Artificial Occam Abductive Hypothesis Evaluation Logic
		9.1.7 SELF’s Overall Cognitive Cycle
		9.1.8 SELF Sensory Environment
		9.1.9 ISAAC’s Lower Brain Function Executives
		9.1.10 ISAAC as an Artificial Central Nervous System
	9.2 The Cognitive, Interactive Training Environment (CITE)
		9.2.1 SELF Cognitive Resiliency
		9.2.2 SELF Cognitive Resiliency and Memory Development
		9.2.3 SELF Procedural Memory Development and Resiliency
	9.3 Discussion
Chapter 10: Artificial Cognitive Software Architectures
	10.1 Artificial Prefrontal Cortex Genesis
	10.2 Cognitron Service Instantiation
	10.3 Cognitron Personalities
	10.4 Cognitron Flexibility
		10.4.1 Mediator Service
		10.4.2 Data Acquisition Service
		10.4.3 Signal Processing Service
		10.4.4 The Data Flow Service
		10.4.5 Alerts and Alarms Service
		10.4.6 Health Assessment Service
		10.4.7 Inference Engine Service
		10.4.8 Prognostic Service
		10.4.9 Decision Reasoning Service
		10.4.10 Histories Service
		10.4.11 Configuration Service
		10.4.12 Human Systems Interface Service
		10.4.13 Proxy Service
	10.5 SELF Service Node Strategies
	10.6 Discussion
Chapter 11: SELF Physical Architectures
	11.1 The Reconfigurable Advanced Rapid-Prototyping Environment (RARE)
	11.2 Physical Modularity and Scalability
	11.3 Discussion
Chapter 12: Cyber Security Within a Cognitive Architecture
	12.1 SELF Cognitive Security Architecture
	12.2 SELF Cognitive Security Architecture: Threat
	12.3 SELF Cognitive Security Architecture: Vulnerability
	12.4 SELF PENLPE Security Management Ontology
	12.5 SELF Security Management: Self-Diagnostics and Prognostics
	12.6 PENLPE Prognostic Security Management (PSM)
	12.7 Abductive Logic and Emotional Reasoners
	12.8 Self-Soothing Mechanisms
		12.8.1 SELF Self-Soothing: Acupressure
		12.8.2 SELF Self-Soothing: Deep Breathing
		12.8.3 SELF Self-Soothing: Amplification of the Feeling
		12.8.4 SELF Self-Soothing: Imagery
		12.8.5 SELF Self-Soothing: Mindfulness
		12.8.6 SELF Self-Soothing: Positive Psychology
	12.9 SELF Internal Information Encryption
	12.10 Discussion
Chapter 13: Conclusions and Next Steps
	13.1 The Future SELF
	13.2 Zeus: A Self-Evolving Artificial Life Form
	13.3 Early Research into Cognitrons: Adventures in Cyberspace
	13.4 What’s Next?
Acronyms
References
Index
                        
Document Text Contents
Page 1

James A. Crowder · John N. Carbone
Shelli A. Friess

Artificial Cognition Architectures

Page 2

Artificial Cognition Architectures

Page 136


enabling environmental adaptation, as described in Fig. 7.5, which illustrates the DART finite state machine for genetic learning, and Fig. 7.6, which describes the learned behavior selection process.

The DART Finite State Machine is utilized to manage the genetic learning processes within the DART. It accepts input from the Behavioral Learning Model and accesses the genetic hypothesis generation and testing processes to help manage the DART learning process. Each state drives different actions from DART. Termination of the learning process occurs either when the system has determined it has hypotheses that adequately explain the observations/data/information, or when the system cannot make a determination, either because it has taken too long (timed out) or because there is enough rebuttal evidence against the hypotheses that they are not worth pursuing (lost focus).

The process of abduction (hypothesis-based learning) makes use of genetically generated populations of hypotheses (described in Chap. 9) to create potential explanations for the observations/data/information being processed. Finite state conditions and resulting actions are as follows:

• Start: this state determines whether the input from the DART Learning Behavioral Model is adequate, based on goals, needs, mission constraints, etc. If the input is adequate, learning has occurred, so the process terminates and passes the result on to memory processes. If it determines that the learning models from the DART Behavioral Model are not adequate, further reasoning, analysis, and

Fig. 7.5 DART genetic learning finite state machine (states: Start Learning, Determine Learning Approach, Interact With SELF, Search for Answers, Terminate Learning; transitions: Already Learned / Not Learned, Close / Not Close, Possible Solution Detected, Lost Focus for Learning, Timed Out, Success, Failure)



Page 137


learning processing are required. Depending on the "state of learning," an approach is determined and the information is sent to the "Search" process to look for relevant information and possibly create more abductive hypotheses to increase the level of explanation (i.e., increase the knowledge density for that topic).

• Approach: in this state the learning approach is evaluated and the information is passed on to the Search state. Many learning approaches are possible, as explained throughout Chap. 7, and input from the other states helps determine which is used. A "detected" input tells the Approach state that information and/or hypotheses that may be useful are available for evaluation. An input of "not close" tells the Approach state that the learning is still valid but that much work is still needed for an adequate explanation; DART hasn't learned much yet.

• Interact: this state determines the level of interaction DART requires from the rest of the SELF, including the level of resources that will be required to continue the learning process. This requires interaction with the Artificial Prefrontal Cortex to request resources, and possibly with interface Cognitrons to request outside information (depending on the SELF's Locus of Control determination).

• Search: this is the main "learning" state for DART, using the Abductive, Hypothesis-Driven, Occam Learning system described next in Sect. 7.6. Once DART determines that something has been adequately learned, the Search state deems the learning a "success" and sends notification to the rest of the SELF. If the success criteria are not reached in the given time frame (which is determined on a case-by-case basis), a "time out" signal is sent to the Terminate state.
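The states and events above can be read as a transition table. The sketch below is illustrative only: the class name, event names, and the exact transition set are assumptions inferred from the figure labels (for example, the event that moves Interact to Search is not named in the figure), not the book's implementation.

```python
class DARTLearningFSM:
    """Hypothetical sketch of the DART genetic-learning finite state machine."""

    TRANSITIONS = {
        # (current state, event) -> next state
        ("start",    "already_learned"):   "terminate",  # adequate input: done
        ("start",    "not_learned"):       "approach",   # more learning needed
        ("approach", "close"):             "search",     # near a solution; refine
        ("approach", "not_close"):         "interact",   # request SELF resources
        ("interact", "resources_granted"): "search",     # assumed event name
        ("interact", "failure"):           "terminate",  # resources unavailable
        ("search",   "possible_solution"): "approach",   # re-evaluate approach
        ("search",   "success"):           "terminate",  # hypotheses explain data
        ("search",   "timed_out"):         "terminate",
        ("search",   "lost_focus"):        "terminate",  # rebuttal evidence won
    }

    def __init__(self):
        self.state = "start"
        self.outcome = None

    def step(self, event):
        key = (self.state, event)
        if key not in self.TRANSITIONS:
            raise ValueError(f"no transition for {key}")
        self.state = self.TRANSITIONS[key]
        if self.state == "terminate":
            # Only 'already_learned' and 'success' count as learning having occurred.
            self.outcome = ("learned" if event in ("already_learned", "success")
                            else "abandoned")
        return self.state
```

A run through one learning cycle would then look like `step("not_learned")`, `step("not_close")`, `step("resources_granted")`, `step("success")`, ending in the terminate state with an outcome of `"learned"`.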

Fig. 7.6 DART behavior selection process (states: Wait for Conscious Input, Check Applicability, Load Behavior, Refer to Introspection, Think about Finite State Machine Behavior, Execute Finite State Machine Behavior; inputs: Active Behavior Suggestion Received from an Active Cognitron, Introspective Suggestions Received from an Introspective Cognitron, Possible Behaviors Loaded, input from and output to the finite state machine)
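The behavior selection loop of Fig. 7.6 can be sketched as a simple filter-then-execute step. All names here are assumptions for illustration; the figure does not specify how applicability is checked or how a winning behavior is chosen among those loaded.

```python
def select_behavior(fsm_input, suggestions, applicable):
    """Pick a behavior for the current finite-state-machine input.

    fsm_input   -- current input from the learning finite state machine
    suggestions -- behavior suggestions from active/introspective Cognitrons
    applicable  -- predicate(behavior, fsm_input) -> bool (assumed interface)

    Returns the chosen behavior, or None to signal that DART should refer the
    decision to introspection and wait for further conscious input.
    """
    # "Check Applicability" / "Load Behavior": keep only suggestions that
    # apply to the current FSM input.
    loaded = [b for b in suggestions if applicable(b, fsm_input)]
    if loaded:
        # "Execute Finite State Machine Behavior": take the first applicable
        # behavior (a priority ordering is one possible policy).
        return loaded[0]
    # "Refer to Introspection" / "Wait for Conscious Input"
    return None
```

For example, `select_behavior(state, ["retreat", "explore"], lambda b, s: b == "explore")` would load and return `"explore"`, while an empty applicable set returns `None`.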


Page 272

E
Emotional learning, 71, 93, 102, 110, 111, 113, 236, 240
Emotional memory, 23, 28, 33–37, 60, 70–75, 81–83, 92, 97, 131, 132, 164, 168–170, 187, 195, 196, 200, 236–240
Evolution, 9, 38, 48, 49, 51, 74, 91, 94, 104, 106, 131, 175, 199, 211, 212, 224

F
Field-programmable gate array (FPGA), 223, 224, 226
Fusion, 46, 54, 160, 165
Fuzzy inference, 40, 75, 92, 97, 131, 143, 146, 155, 167, 186, 218, 219, 239

G
Granularity, 2, 79, 87, 165, 232

H
Humanistic, 1, 2, 8, 9, 12, 14, 21, 23, 31, 116, 140, 141, 143, 144
Humanoid, 3, 4, 22
Human reasoning, 1, 2, 8, 29, 37, 38, 40, 43, 53, 60, 68, 83, 101, 133, 135–139, 142, 143, 147, 219, 247

I
Inductive reasoning, 9, 80, 138, 139
Information continuum, 8–9, 11–15
Information fragment, 9, 10, 12–14, 42, 56, 58–61, 64, 69, 70, 73, 75, 100, 102, 127, 133, 143, 162, 163, 166, 167, 230
Intuition, 9, 19–21, 239, 247
ISAAC, 10, 123, 132, 133, 171, 174–176, 178, 179, 183, 185, 187–198, 201, 208, 210, 218–221, 224, 227, 230, 232, 234, 236, 238

J
Java, 38, 142, 157

L
Locus of control, 93–99, 122, 130, 131, 155, 156, 165, 176, 189, 195, 204
Long-term memory (LTM), 9, 28, 29, 47, 57–70, 75–77, 80–82, 97, 100, 103, 131, 133, 164, 169, 174, 204, 207, 212, 217

M
Markov, 32, 49, 50
Memory reconstruction, 62, 63, 75–78
Metacognition, 85–89, 93–99, 116, 166–168
Metamemory, 69, 81, 85, 87, 89–92, 100, 131, 204
Mindfulness, 32, 240
Modular reasoning, 136, 137

N
Neocortex, 171, 174–175, 179, 193, 195, 216
Neural fiber, 46, 48, 50, 51, 61, 111, 142, 158, 182, 232
Neural network, 27, 39, 48, 102, 141–151, 244, 245
Neuroscience, 1, 2, 141, 238

O
Occam Abduction, 179–181, 183, 185, 186, 219
Occam Learning, 111, 113, 114, 122–129, 132, 184, 193
Ontology, 29, 36, 41, 42, 45, 69, 75, 97, 99, 124, 144, 230–233

P
Polymorphic, evolving, neural learning and processing environment (PENLPE), 10, 102–104, 132, 133, 147–151, 187–189, 191, 193, 194, 197, 199, 204, 210, 212, 215, 232–237
Possibilistic, 32–34, 52, 90, 137, 140–151, 167, 185, 218, 233, 236, 239
Procedural memory, 65–68, 78, 82, 117, 132, 133, 189, 200–201, 245
Prognostics, 39, 42, 149, 157, 214, 219–220, 235–236, 240

R
Robot, 1, 3–7, 22, 25, 26, 153, 244

S
Self-assessment, 2, 8, 14, 87–89, 91, 92, 102, 110, 130, 131, 179, 189, 191, 193, 196, 199–201, 227, 236
Self-healing, 8, 14, 196, 227, 236
Sensory memory, 29, 56, 75, 196, 200
Short-term memory (STM), 9, 28–30, 46, 47, 56–61, 74–77, 97, 100, 103, 119, 131, 164, 174, 175, 204, 213, 217

Index

Page 273

Social behavior, 22, 31
STM. See Short-term memory (STM)
Synthetic reasoning, 2, 9, 135–171

T
Topical map, 9, 13, 32, 35, 36, 42, 43, 50, 66, 75–79, 89–91, 133, 144, 146, 147, 177, 189
Toulmin, 40, 143
