Download Designing Data-Intensive Applications: The Big Ideas Behind Reliable, Scalable, and Maintainable Systems PDF

Title: Designing Data-Intensive Applications: The Big Ideas Behind Reliable, Scalable, and Maintainable Systems
File size: 4.2 MB
Total pages: 491
Table of Contents
About this Book
	Who Should Read this Book?
	Scope of this Book
	Outline of this Book
	Early Release Status and Feedback
Part I. Foundations of Data Systems
Chapter 1. Reliable, Scalable and Maintainable Applications
	Thinking About Data Systems
		Hardware faults
		Software errors
		Human errors
		How important is reliability?
		Describing load
		Describing performance
		Approaches for coping with load
		Operability: making life easy for operations
		Simplicity: managing complexity
		Evolvability: making change easy
Chapter 2. Data Models and Query Languages
	Relational Model vs. Document Model
		The birth of NoSQL
		The object-relational mismatch
		Many-to-one and many-to-many relationships
		Are document databases repeating history?
		Relational vs. document databases today
	Query Languages for Data
		Declarative queries on the web
		MapReduce querying
	Graph-like Data Models
		Property graphs
		The Cypher query language
		Graph queries in SQL
		Triple-stores and SPARQL
		The foundation: Datalog
Chapter 3. Storage and Retrieval
	Data Structures that Power Your Database
		Hash indexes
		SSTables and LSM-trees
		Other indexing structures
		Keeping everything in memory
	Transaction Processing or Analytics?
		Data warehousing
		Stars and snowflakes: schemas for analytics
	Column-oriented storage
		Column compression
		Sort order in column storage
		Writing to column-oriented storage
		Aggregation: Data cubes and materialized views
Chapter 4. Encoding and Evolution
	Formats for Encoding Data
		Language-specific formats
		JSON, XML and binary variants
		Thrift and Protocol Buffers
		The merits of schemas
	Modes of Data Flow
		Data flow through databases
		Data flow through services: REST and RPC
		Message passing data flow
Part II. Distributed Data
Chapter 5. Replication
	Leaders and Followers
		Synchronous vs. asynchronous replication
		Setting up new followers
		Handling node outages
		Implementation of replication logs
	Problems With Replication Lag
		Reading your own writes
		Monotonic reads
		Consistent prefix reads
		Solutions for replication lag
	Multi-leader replication
		Use cases for multi-leader replication
		Handling write conflicts
		Multi-leader replication topologies
	Leaderless replication
		Writing to the database when a node is down
		Limitations of quorum consistency
		Sloppy quorums and hinted handoff
		Detecting concurrent writes
Chapter 6. Partitioning
	Partitioning and replication
	Partitioning of key-value data
		Partitioning by key range
		Partitioning by hash of key
		Skewed workloads and relieving hot spots
	Partitioning and secondary indexes
		Partitioning secondary indexes by document
		Partitioning secondary indexes by term
	Rebalancing partitions
		Strategies for rebalancing
		Operations: automatic or manual rebalancing
	Request routing
		Parallel query execution
Chapter 7. Transactions
	The slippery concept of a transaction
		The meaning of ACID
		Single-object and multi-object operations
	Weak isolation levels
		Read committed
		Snapshot isolation and repeatable read
		Preventing lost updates
		Preventing write skew and phantoms
		Actual serial execution
		Two-phase locking (2PL)
		Serializable snapshot isolation (SSI)
Chapter 8. The Trouble with Distributed Systems
	Faults and Partial Failures
		Cloud computing and supercomputing
	Unreliable Networks
		Network faults in practice
		Detecting faults
		Timeouts and unbounded delays
		Synchronous vs. asynchronous networks
	Unreliable Clocks
		Monotonic vs. time-of-day clocks
		Clock synchronization and accuracy
		Relying on synchronized clocks
		Process pauses
	Knowledge, Truth and Lies
		The truth is defined by the majority
		Byzantine faults
		System model and reality
Chapter 9. Consistency and Consensus
	Consistency Guarantees
		What makes a system linearizable?
		Relying on linearizability
		Implementing linearizable systems
		The cost of linearizability
	Ordering Guarantees
		Ordering and causality
		Sequence number ordering
		Total order broadcast
	Distributed Transactions and Consensus
		Atomic commit and two-phase commit (2PC)
		Distributed transactions in practice
		Fault-tolerant consensus
		Membership and coordination services
Part III. Derived Data
Chapter 10. Batch Processing
	Batch Processing with Unix Tools
		Simple log analysis
		The Unix philosophy
	MapReduce and Distributed Filesystems
		MapReduce job execution
		Reduce-side joins and grouping
		Map-side joins
		The output of batch workflows
		Comparing MapReduce to distributed databases
	Beyond MapReduce
		Materialization of intermediate state
		Graphs and iterative processing
		High-level APIs and languages
Chapter 11. Stream Processing
	Transmitting Event Streams
		Messaging systems
		Partitioned logs
	Databases and streams
		Keeping systems in sync
		Change data capture
		Event sourcing
		State, streams, and immutability
	Processing Streams
		Uses of stream processing
		Reasoning about time
		Stream joins
		Fault tolerance
Document Text Contents
Page 245

transactions are needed when updating a single document. However, document databases lacking join functionality also encourage denormalization (see "Relational vs. document databases today" on page 38). When denormalized information needs to be updated, as in the example of Figure 7-2, you need to update several documents in one go. Transactions are very useful in this situation, to prevent denormalized data from going out of sync.

• In a database with secondary indexes (almost everything except pure key-value stores), the indexes also need to be updated every time you change a value. These indexes are different database objects from a transaction point of view: for example, without transaction isolation, it's possible for a record to appear in one index but not another, because the update to the second index hasn't happened yet.

Such applications can still be implemented without transactions. However, error handling becomes much more complicated without atomicity, and the lack of isolation can cause concurrency problems. We will discuss those in "Weak isolation levels" on page 224.

Handling errors and aborts
A key feature of a transaction is that in the case of a problem, it can be aborted and retried. ACID databases are based on this philosophy: if the database is in danger of violating its guarantee of atomicity, isolation, or durability, it would rather abandon the transaction entirely than allow it to continue.

Not all systems follow that philosophy: in particular, datastores with leaderless replication (see "Leaderless replication" on page 171) work much more on a "best effort" basis, which could be summarized as "the database will do as much as it can, and if it runs into an error, it won't undo something it has already done" — so it's the application's responsibility to recover from errors.

Errors will inevitably happen, but many software developers prefer to think only about the happy path rather than the intricacies of error handling. For example, popular object-relational mapping (ORM) frameworks such as Rails' ActiveRecord and Django don't retry aborted transactions — the error usually results in an exception bubbling up the stack, so any user input is thrown away and the user gets an error message. This is a shame, because the whole point of aborts is to enable safe retries.

Although retrying an aborted transaction is a simple and effective error handling
mechanism, it isn’t perfect:

• If the transaction actually succeeded, but the network failed while the server tried
to acknowledge the successful commit to the client (so the client thinks it failed),
then retrying the transaction causes it to be performed twice — unless you have
an additional application-level deduplication mechanism in place.
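One common shape for such a deduplication mechanism is an idempotency key: the client attaches a unique request ID to each logical operation, and the server refuses to apply the same ID twice. A minimal sketch, with illustrative names (the in-memory `applied` dict stands in for a durable table of processed request IDs, which in a real system would be written in the same transaction as the data change):

```python
import uuid

applied = {}  # request_id -> result; stands in for a durable dedup table


def apply_once(request_id, operation):
    """Apply `operation` at most once per request_id.

    If the client retries because the commit acknowledgement was lost,
    the second call finds the ID already recorded and returns the stored
    result instead of performing the operation again.
    """
    if request_id in applied:
        return applied[request_id]
    result = operation()
    applied[request_id] = result
    return result


req = str(uuid.uuid4())
balance = [100]

def debit():
    balance[0] -= 10
    return balance[0]

apply_once(req, debit)  # performs the debit
apply_once(req, debit)  # retry after a lost ack: no second debit
print(balance[0])       # 90
```

The key point is that recording the request ID and applying the change must be atomic with respect to each other; otherwise a crash between the two steps reintroduces exactly the duplicate-execution problem the mechanism is meant to solve.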


