CouchDB Cluster

Let's look at how one can layer a cluster on top of CouchDB.

Couch Cluster

A “Couch Cluster” is composed of multiple “partitions”. Each partition is composed of multiple replicated DB instances. We call each replica a “virtual node”: a DB instance hosted inside a "physical node", which is a CouchDB process running on a machine. A virtual node can migrate across physical nodes for various reasons, such as …
  • when physical node crashes
  • when more physical nodes are provisioned
  • when the workload across physical nodes is unbalanced
  • when there is a need to reduce latency by migrating closer to the client

Proxy


The "Couch Cluster" is fronted by a "Proxy", which intercepts all client calls and forwards them to the corresponding "virtual node". To do so, the proxy keeps a "configuration DB" which stores the topology and tracks how the virtual nodes are distributed across physical nodes. The configuration DB is updated as more DBs are created or destroyed. Changes to the configuration DB are replicated among the proxies, so each of them eventually shares the same picture of the cluster topology.


The diagram shows a single DB split into 2 partitions (the blue and orange partitions). Each partition has 3 replicas, where one of them is the primary and the other two are secondaries.
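To make the proxy's bookkeeping concrete, here is a minimal sketch (in Python, with purely illustrative names) of what a topology record in the configuration DB might look like, using the layout from the "Create DB" walkthrough below:

# Hypothetical shape of a topology record in the configuration DB.
# Each virtual node maps to the physical node that hosts it; names
# follow the running example (2 partitions x 3 replicas).
topology = {
    "dbname": {
        "p1": {"primary": "v1-1",
               "replicas": {"v1-1": "M1", "v1-2": "M2", "v1-3": "M3"}},
        "p2": {"primary": "v2-1",
               "replicas": {"v2-1": "M3", "v2-2": "M1", "v2-3": "M2"}},
    }
}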

Create DB
  1. Client calls the Proxy with URL=http://proxy/dbname; HTTP_Command = PUT /dbname
  2. The Proxy determines how many partitions and how many replicas are needed. Let's say we have 2 partitions and each partition has 3 copies, so there will be 6 virtual nodes: v1-1, v1-2, v1-3, v2-1, v2-2, v2-3.
  3. The Proxy also determines which virtual node is the primary of its partition. Let's say v1-1 and v2-1 are primaries and the rest are secondaries.
  4. The Proxy then determines which physical node hosts each virtual node, say M1 (v1-1, v2-2), M2 (v1-2, v2-3), M3 (v1-3, v2-1).
  5. The Proxy records its decision in the configuration DB.
  6. The Proxy calls M1 with URL=http://M1/dbname_p1; HTTP_Command = PUT /dbname_p1, and then calls M1 again with URL=http://M1/dbname_p2; HTTP_Command = PUT /dbname_p2.
  7. The Proxy repeats step 6 for M2 and M3 (a sketch of this flow follows the list).
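A minimal sketch of steps 2-7, assuming the requests HTTP library and a simple round-robin placement policy; a real proxy would also weigh load and failure domains when placing virtual nodes:

import requests  # assumed HTTP client library

def create_db(dbname, nodes, num_partitions=2, num_replicas=3):
    # Steps 2-4: lay out virtual nodes round-robin across physical nodes;
    # replica 1 of each partition is taken as the primary.
    layout, vnode = {}, 0
    for p in range(1, num_partitions + 1):
        for r in range(1, num_replicas + 1):
            node = nodes[vnode % len(nodes)]
            layout.setdefault(node, set()).add(p)
            vnode += 1
    # Step 5: record the layout in the configuration DB (not shown here).
    # Steps 6-7: create one physical DB per hosted partition replica.
    for node, partitions in layout.items():
        for p in sorted(partitions):
            requests.put("http://%s/%s_p%d" % (node, dbname, p))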

List all DBs
  1. Client calls the Proxy with URL=http://proxy/_all_dbs; HTTP_Command = GET /_all_dbs
  2. The Proxy looks up the configuration DB to determine all the DBs
  3. The Proxy returns the list to the client

Get DB info
  1. Client calls the Proxy with URL=http://proxy/dbname; HTTP_Command = GET /dbname
  2. The Proxy looks up all the DB's partitions in the configuration DB. For each partition, it locates the virtual node that hosts the primary copy (v1-1, v2-1). It also identifies the physical nodes that host these virtual nodes (M1, M3).
  3. For each physical node, say M1, the Proxy calls it with URL=http://M1/dbname_p1; HTTP_Command = GET /dbname_p1
  4. The Proxy does the same to M3 (for dbname_p2)
  5. The Proxy combines the results of M1 and M3 and forwards them to the client

Delete DB
  1. Client calls the Proxy with URL=http://proxy/dbname; HTTP_Command = DELETE /dbname
  2. The Proxy looks up which machines host the clustered DB and finds M1, M2, M3.
  3. The Proxy calls M1 with URL=http://M1/dbname_p1; HTTP_Command = DELETE /dbname_p1, then calls M1 again with URL=http://M1/dbname_p2; HTTP_Command = DELETE /dbname_p2.
  4. The Proxy does the same to M2, M3

Get all documents of a DB
  1. Client calls the Proxy with URL=http://proxy/dbname/_all_docs; HTTP_Command = GET /dbname/_all_docs
  2. The Proxy looks up all the DB's partitions in the configuration DB. For each partition, it randomly picks a virtual node that hosts a copy (say v1-1, v2-3). It also identifies the physical nodes that host these virtual nodes (M1, M2).
  3. The Proxy calls M1 with URL=http://M1/dbname_p1/_all_docs; HTTP_Command = GET /dbname_p1/_all_docs.
  4. The Proxy does the same to M2 (for dbname_p2)
  5. The Proxy combines the results of M1 and M2 and forwards them to the client

Create / Update a document
  1. Client calls the Proxy with URL=http://proxy/dbname/docid; HTTP_Command = PUT /dbname/docid
  2. The Proxy invokes "select_partition(docid)" to determine the partition (a possible implementation is sketched after this list), and then looks up the primary copy of that partition (e.g. v1-1). It also identifies the physical node (e.g. M1) that hosts this virtual node.
  3. The Proxy calls M1 with URL=http://M1/dbname_p1/docid; HTTP_Command = PUT /dbname_p1/docid
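One plausible implementation of select_partition(), a sketch assuming a simple hash-based scheme; the real cluster might prefer consistent hashing so that partitions can be rebalanced more cheaply:

import hashlib

NUM_PARTITIONS = 2  # from the running example

def select_partition(docid, num_partitions=NUM_PARTITIONS):
    # Hash the docid so documents spread evenly and every proxy
    # maps the same docid to the same partition.
    digest = hashlib.md5(docid.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_partitions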

Get a document
  1. Client calls the Proxy with URL=http://proxy/dbname/docid; HTTP_Command = GET /dbname/docid
  2. The Proxy invokes "select_partition(docid)" to determine the partition, and then randomly picks a copy of that partition (e.g. v1-3). It also identifies the physical node (e.g. M3) that hosts this virtual node.
  3. The Proxy calls M3 with URL=http://M3/dbname_p1/docid; HTTP_Command = GET /dbname_p1/docid

Delete a document
  1. Client calls the Proxy with URL=http://proxy/dbname/docid?rev=1234; HTTP_Command = DELETE /dbname/docid?rev=1234
  2. The Proxy invokes "select_partition(docid)" to determine the partition, and then looks up the primary copy of that partition (e.g. v1-1). It also identifies the physical node (e.g. M1) that hosts this virtual node.
  3. The Proxy calls M1 with URL=http://M1/dbname_p1/docid?rev=1234; HTTP_Command = DELETE /dbname_p1/docid?rev=1234

Create a View design doc
  1. Client calls the Proxy with URL=http://proxy/dbname/_design/viewid; HTTP_Command = PUT /dbname/_design/viewid
  2. The Proxy determines all the virtual nodes of this DB, and identifies all the physical nodes (e.g. M1, M2, M3) that host these virtual nodes.
  3. The Proxy calls M1 with URL=http://M1/dbname_p1/_design/viewid; HTTP_Command = PUT /dbname_p1/_design/viewid, then calls M1 again with URL=http://M1/dbname_p2/_design/viewid; HTTP_Command = PUT /dbname_p2/_design/viewid.
  4. The Proxy does the same to M2, M3

Query a View
  1. Client calls the Proxy with URL=http://proxy/dbname/_view/viewid/attrid; HTTP_Command = GET /dbname/_view/viewid/attrid
  2. The Proxy determines all the partitions of "dbname", and for each partition, it randomly picks a copy (e.g. v1-3, v2-2). It also identifies the physical nodes (e.g. M3, M1) that host these virtual nodes.
  3. The Proxy calls M3 with URL=http://M3/dbname_p1/_view/viewid/attrid; HTTP_Command = GET /dbname_p1/_view/viewid/attrid
  4. The Proxy does the same to M1 (for dbname_p2)
  5. The Proxy combines the results from M1 and M3. If the view "attrid" has only a map function, the Proxy simply concatenates all the results together. But if "attrid" also has a reduce function defined, the Proxy invokes the view engine's reduce() function with rereduce = true (see the sketch after this list). Then the Proxy returns the combined result to the client.

Replication within the Cluster
  1. Periodically, the proxies replicate the changes of the configuration DB among themselves. This ensures all the proxies eventually have the same picture of the topology.
  2. Periodically, each Proxy picks a DB, picks one of its partitions, and replicates the changes from the primary to all the secondaries. This makes sure all the copies of each partition of a DB are in sync.

Client data sync

Let's say the client also has a local DB, which is replicated from the cluster. This is important for occasionally connected scenarios, where the client may disconnect from the cluster for a period and work with the local DB for a while. Later on, when the client connects back to the cluster, the data between the local DB and the cluster need to be synchronized.

To replicate changes from the local DB to the cluster ...
  1. Client starts a replicator, and sends POST /_replicate with {source : "http://localhost/localdb", target: "http://proxy/dbname"}
  2. The replicator, which remembers the last seq_num of the source from the previous replication, extracts all the changes of the local DB since then.
  3. The replicator pushes these changes to the proxy.
  4. The proxy examines the list of changes. For each changed document, it calls "select_partition(docid)" to determine the partition, then looks up the primary copy of that partition and the physical node that hosts this virtual node.
  5. The proxy pushes the changed document to that physical node. In other words, the primary copy in the cluster receives the changes from the local DB first. These changes will be replicated to the secondary copies at a later time.
  6. When complete, the replicator updates the seq_num for the next replication.
To replicate changes from the cluster to the local DB ...
  1. Client starts the replicator, which remembers the last "seq_num" array of the cluster. The seq_num array contains the seq_num of each virtual node of the cluster; it is an opaque data structure whose contents the replicator doesn't care about.
  2. The replicator sends a request to the proxy to extract the latest changes, along with the seq_num array.
  3. The proxy first looks up the primary of each partition, then extracts changes from each primary using the appropriate seq_num from the seq_num array (a sketch follows this list).
  4. The proxy consolidates the changes from each partition's primary copy and sends them back to the replicator, along with the updated array of seq_nums.
  5. The replicator applies these changes to the local DB, and then updates the seq_num array for the next replication.
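A sketch of the proxy side of steps 3 and 4, with primary_of() and get_changes_since() as hypothetical helpers over the configuration DB and the virtual nodes' change feeds:

def pull_cluster_changes(seq_num_array):
    all_changes, new_seq_nums = [], {}
    for partition, seq in seq_num_array.items():
        node, vnode = primary_of(partition)           # hypothetical lookup
        changes, last_seq = get_changes_since(node, vnode, seq)
        all_changes.extend(changes)
        new_seq_nums[partition] = last_seq
    # Return the consolidated changes plus the updated (opaque) array.
    return all_changes, new_seq_nums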

Consistent Multi-Master DB Replication

As explained in my CouchDB implementation notes, the current replication mechanism doesn't provide consistency guarantees. This means that if the client connects to different replicas at different times, she may see weird results, including ...
  • Client reads a document X and later reads the same document X again, but the 2nd read returns an earlier revision of X than the 1st read.
  • Client updates a document X and, after some time, reads the document X again, but doesn't see the previous update.
  • Client reads a document X and, based on its value, updates document Y. Another client may see the update on document Y but not the update on document X that document Y's update is based on.
  • Even if a client first updates document X and later updates document X a 2nd time, CouchDB may wrongly perceive a conflict between the two updates (if they land on different replicas) and resort to a user-provided resolution strategy to resolve the conflict.
To prevent the above situations from happening, here is a possible extension of CouchDB that provides a "causal consistency guarantee" based on the Vector Clock Gossiping technique. The target environment is a cluster of machines.

Here are a few definitions ...

Causal Consistency
  • It is not possible to see an effect before seeing its causes. In other words, when different replicas propagate their updates, they always apply the updates of the causes before applying the updates of the "effect".
  • "Effects" and "causes" are related by a "happens-before" relationship, ie: a cause happens-before its effect.

Logical Clock
  • A monotonically increasing sequence number that is atomically increased by one whenever an "event" occurs.
Event
  • Update a state locally
  • Sending a message
  • Receiving a message

Vector Clock
  • An array of logical clocks where each entry represents the logical clock of a different process
  • VC1 >= VC2 if for every i, VC1[i] >= VC2[i]
  • VC3 = merge(VC1, VC2) where for every i, VC3[i] = max(VC1[i], VC2[i])
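These definitions translate directly into code. A minimal sketch in Python, with entries defaulting to 0 for processes not yet seen:

class VectorClock(dict):
    """Maps process id -> logical clock; missing entries count as 0."""
    def dominates(self, other):
        # VC1 >= VC2 iff for every i, VC1[i] >= VC2[i]
        return all(self.get(i, 0) >= c for i, c in other.items())
    def merge(self, other):
        # merge(VC1, VC2)[i] = max(VC1[i], VC2[i])
        out = VectorClock(self)
        for i, c in other.items():
            out[i] = max(out.get(i, 0), c)
        return out
    def tick(self, i):
        # advance process i's logical clock whenever an event occurs
        self[i] = self.get(i, 0) + 1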

Architecture

The basic idea is ...
  • When the client issues a GET, the replica should only reply when it is sure that it has a value later than what the client has seen before. Otherwise, it delays its response until that happens.
  • When the client issues a PUT/POST/DELETE, the replica immediately acknowledges the client, but instead of applying the update immediately, it puts the request into a queue. After all the other updates that this update depends on have been applied to the DB state, this update will be applied.
  • Replicas exchange their update logs in the background so that every update propagates to all copies.

Each replica maintains ...
  • A "replica-VC" associated with the whole replica, which is updated when an update request is received from a proxy, or when a gossip message is sent or received.
  • A "state-VC" associated with the state, which is updated when a pending update from the queue is applied to the local DB.
  • A set of the other replicas' VCs: the vector clocks obtained from the other replicas in the last gossip message received from each of them.

The client always talks to the same proxy, which maintains the client's vector clock. This vector clock is important for filtering out inconsistent data when the proxy talks to the replicas, which the proxy can choose randomly.

Read (GET) Processing
  1. When the client issues a READ, the proxy can choose any replica to forward the GET to (along with the client's vector clock).
  2. The chosen replica returns the GET result only when it is sure its DB has reached a state "more updated" than what the client has seen (ie: stateVC >= clientVC). Otherwise, it delays its response until this condition holds (a sketch follows this list).
  3. The proxy may time out and contact another replica.
  4. The replica's response contains its replicaVC. The proxy refreshes its clientVC = merge(clientVC, replicaVC).
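A sketch of step 2 on the replica, assuming state_vc, replica_vc and a db_read() helper as hypothetical module state; the VectorClock class above provides dominates() and merge():

import threading

state_changed = threading.Condition()  # notified whenever state_vc advances

def handle_get(docid, client_vc):
    with state_changed:
        # Step 2: wait until stateVC >= clientVC before answering.
        while not state_vc.dominates(client_vc):
            state_changed.wait()   # the proxy may time out and retry elsewhere
        doc = db_read(docid)       # hypothetical local read
    # Step 4: the proxy merges this replicaVC into its clientVC.
    return doc, replica_vc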

Update (PUT/POST/DELETE) Processing
  1. When the client issues an UPDATE, the proxy can choose any replica to forward the UPDATE to (which contains a uniqueId, the client's vector clock and the operation's data).
  2. For fault-tolerance reasons, the proxy may pick multiple replicas to forward its updates to (e.g. it may pick M replicas to forward the request to and return "success" to the client when N replicas ACK back).
  3. The chosen replica(s) first advance their logical clock and their replicaVC.
  4. The replica computes a vector timestamp by copying the clientVC and setting its own entry to its logical clock (ie: ts = clientVC; ts[myReplicaNo] = logicalClock).
  5. The replica attaches this timestamp to the update request and puts the UPDATE request into the queue. The update record "u" = (uniqueId, clientVC, ts, operation data).
  6. The replica sends an ACK message containing its replicaVC to the proxy. The proxy refreshes its clientVC = merge(clientVC, replicaVC) (a sketch of steps 3-6 follows this list).
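A sketch of steps 3 to 6 on a chosen replica; MY_REPLICA, logical_clock, replica_vc and the pending queue are assumed module state:

def handle_update(uid, client_vc, op):
    global logical_clock
    logical_clock += 1                    # step 3: advance the clocks
    replica_vc.tick(MY_REPLICA)
    ts = VectorClock(client_vc)           # step 4: ts = copy of clientVC ...
    ts[MY_REPLICA] = logical_clock        # ... with our own entry bumped
    pending.append({"id": uid, "client_vc": VectorClock(client_vc),
                    "ts": ts, "op": op})  # step 5: queue, don't apply yet
    return replica_vc                     # step 6: ACK carries replicaVC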
Applying Pending Updates
  1. A pending update "u" can be applied to the state when all the updates that it depends on have been applied (ie: stateVC >= u.clientVC).
  2. Periodically, the update log is scanned for the above criterion.
  3. When it holds, the replica applies the update "u" to the DB and then updates the stateVC = merge(stateVC, u.ts).
  4. Note that while this mechanism guarantees that updates happen in "causal order" (ie: an "effect" will not be applied before its "causes"), it doesn't guarantee "total order". Because independent (concurrent) updates can happen in arbitrary order, the order in which they are applied may differ across replicas (a sketch of the scan follows this list).
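A sketch of the periodic scan, with db_apply() as a hypothetical state mutation, reusing the state_changed condition from the read sketch above; one application may unblock others, hence the outer loop:

def apply_ready_updates():
    global state_vc
    progressed = True
    while progressed:
        progressed = False
        for u in list(pending):
            # Apply once everything u depends on is in: stateVC >= u.clientVC
            if state_vc.dominates(u["client_vc"]):
                db_apply(u["op"])                  # hypothetical state change
                state_vc = state_vc.merge(u["ts"])
                pending.remove(u)
                with state_changed:
                    state_changed.notify_all()     # wake blocked readers
                progressed = True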
Processing Gossip Messages

It is important that replicas exchange the request log among themselves so that eventually everyone has a complete picture of all the update requests, regardless of where they happened.

Periodically, each replica picks some other replica to send its update log to. The strategy for picking whom to communicate with can be based on random selection, on topology (only talk to neighbors), or on degree of outdatedness (the one we haven't talked to for the longest time). Once the target replica is selected, the complete update log together with the sender's current replicaVC is sent to the target replica.

On the other hand, when a replica receives a gossip message from another replica ...
  • It merges the update log of the message with its own update log, ie: for each update record u in the message's update log, it adds u to its own update log unless its replicaVC >= u.ts (which means it has already received a later update that supersedes u).
  • It checks whether some of the pending updates are ready to be applied to the database, and adjusts the stateVC accordingly.
  • It deletes entries from the log after they have been applied to the DB and all the other replicas are known to have received them. In other words, let c be the replicaId where "u" was created; then "u" is removable if for every replica i, otherReplicasVC[i][c] > u.ts[c].
  • It updates the replicaVC = merge(replicaVC, message.replicaVC). A sketch of this receive-side processing follows.
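A sketch of the receive side, where pending, applied_log, other_replicas_vc and each update's "creator" field are assumed bookkeeping kept by the replica:

def on_gossip(msg_log, msg_replica_vc):
    global replica_vc
    for u in msg_log:
        # First bullet: keep u unless we have already seen it (replicaVC >= u.ts).
        if not replica_vc.dominates(u["ts"]):
            pending.append(u)
    apply_ready_updates()                 # second bullet
    # Third bullet: drop applied entries everyone is known to have,
    # ie: for every replica i, otherReplicasVC[i][c] > u.ts[c]
    for u in list(applied_log):
        c = u["creator"]
        if all(vc.get(c, 0) > u["ts"].get(c, 0)
               for vc in other_replicas_vc.values()):
            applied_log.remove(u)
    replica_vc = replica_vc.merge(msg_replica_vc)  # fourth bullet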

Basic “Custom Tags” Parsing Script

Today we are going to create a basic Custom Tags parsing script that will parse special symbols (tags) in text for formatting purposes. Just as a web browser parses <b>BOLD</b> and renders “BOLD” in bold letters, our script will parse tags created by us. One very popular example of custom tag parsing for formatting is BBCode, which most bulletin boards use to let users format their posts.


This will be a basic example of parsing custom tags, so we will only be parsing two tags. One will convert the enclosed text into bold and the other will be used for italics. After understanding the basic idea, you can easily add more tags according to your needs and use the technique wherever necessary. One good use for it is in the Shout Boxes we designed a few months back.


Though many would like to use Regular Expressions for parsing, we will not be using them here. For the sake of simplicity, we will use only the basic string manipulation functions available in PHP.


If you look at the code below, you can see a 2D array holding our custom tags. We keep four pieces of information for each tag: the start tag, the end tag (both defined by us), the HTML start tag and the HTML end tag. To make this clearer, suppose we want to parse the text “[b]Text[/b]” so that it's displayed as “Text” in bold. Our (custom) start tag will be [b], the end tag will be [/b], the HTML start tag will be <b> and the HTML end tag will be </b>.


As we will be parsing two different custom tags, we have eight elements in the array. If you want to add more tags, add four elements for each tag, just like the others. There is no need to change anything else.


The code:



<form name="form1" method="get" action="">
  <p>
    <!-- textarea should display previously written text -->
    <textarea name="content" cols="35" rows="12" id="content"><?php
if (isset($_GET['content'])) echo $_GET['content']; ?></textarea>
  </p>
  <p>
    <input name="parse" type="submit" id="parse" value="Parse">
  </p>
</form>

<?php

if (isset($_GET['parse']))
{
    $content = $_GET['content'];

    //convert newlines in the text to HTML "<br />"
    //required to keep formatting (newlines)
    $content = nl2br($content);

    /* CUSTOM TAGS
    -----------
    */

    //For Tag 1
    $tag[0][0] = '[b]';
    $tag[0][1] = '[/b]';
    $tag[0][2] = '<strong>';
    $tag[0][3] = '</strong>';

    //For Tag 2
    $tag[1][0] = '[i]';
    $tag[1][1] = '[/i]';
    $tag[1][2] = '<i>';
    $tag[1][3] = '</i>';

    //count total no. of tags to parse
    $total_tags = count($tag); //2 for now

    //parse our custom tags, replacing them with HTML tags
    //which a browser can understand
    for ($i = 0; $i < $total_tags; $i++)
    {
        $content = str_replace($tag[$i][0], $tag[$i][2], $content);
        $content = str_replace($tag[$i][1], $tag[$i][3], $content);
    }

    //now the variable $content contains HTML formatted text
    //display it
    echo '<hr />';
    echo $content;
}

?>


The code is pretty straightforward, isn't it?



CouchDB Implementation

CouchDB is an Apache OpenSource project. It is Damien Katz's brainchild and has a number of very attractive features based on very cool technologies, such as ...
  • RESTful API
  • Schema-less document store (document in JSON format)
  • Multi-Version-Concurrency-Control model
  • User-defined query structured as map/reduce
  • Incremental Index Update mechanism
  • Multi-Master Replication model
  • Written in Erlang (Erlang is good)
There is a wide range of application scenarios where CouchDB can be a good solution fit, from occasionally connected laptop-based applications, to high-performance data clusters, all the way up to virtual data storage in the cloud.

To understand the CouchDB design more deeply, I was very fortunate to have a conversation with Damien, who was so kind as to share many details with me. Here I want to capture what I have learnt from this conversation.

Underlying Storage Structure

CouchDB is a “document-oriented” database where a document is a JSON string (with an optional binary attachment). The underlying structure is composed of a “storage” as well as multiple “view indexes”. The “storage” is used to store the documents and the “view indexes” are used for query processing.

Within a storage file, there are “contiguous” regions which are used to store documents. There are 2 B+Tree indexes to speed up certain accesses to the documents.
  • by_id_index (which uses the document id as the key). It is mainly used to look up a document by its document id. It points to a list of revisions (or a tree of revisions in case of conflicts in the replication scenario) since the last compaction. It also keeps the revision history (which won't be affected by compaction).
  • by_seqnum_index (which uses a monotonically increasing number as the key). A seqnum is generated whenever a document is updated. (Note that all updates happen in a serial fashion, so the seqnums reflect a sequence of non-concurrent updates.) It is mainly used to keep track of the last point of replication synchronization and the last point of view index update.


Append Only Operation

All updates (creating documents, modifying documents and deleting documents) happen in an append-only mechanism. Instead of modifying the existing documents, a new copy is created and appended to the current region. After that, the B+Tree nodes are also modified to point to the new document location. Modification of the B+Tree nodes is also done in an append-only fashion, which means a new copy of the B+Tree node is tail-appended to the end of the file. This in turn triggers a modification to the parent of that B+Tree node, which causes a new copy of the parent node … all the way back to the root B+Tree node. Finally, the file header is modified to point to the new root node.

That means every update triggers 1 write of the document (except delete) plus a write of each B+Tree node page along the path from leaf to root, i.e. logN writes. So it is O(logN) complexity.

The append-only operation provides an interesting MVCC (Multi-Version Concurrency Control) model because the file keeps a history of all the previous document states. As long as a client holds on to a previous root node of the B+Tree index, it can get a snapshot view. While updates continuously happen, the client won't see any of the latest changes. Such a consistent snapshot is very useful for online backup as well as online compaction.

Note that while read operations run concurrently with other reads and writes, write operations are performed in a serial order across documents. In other words, at any time only one document update can be in progress (however, writes of attachments within a document can happen in parallel).

GET document

When a client issues an HTTP REST GET call to CouchDB, the DBServer …
  • Look at the file header to find the root node of the by_id B+Tree index
  • Traverse down the B+tree to figure out the document location
  • Read the document and return back to client

PUT document (modification)

When a client issues an HTTP REST PUT call to CouchDB to modify an existing document, the DBServer …
  • Look at the file header to find the root node of the by_id B+Tree index
  • Traverse down the B+Tree to figure out the leaf node as well as the document location
  • Read the document. Compare the revisions; throw an error if they don't match.
  • If they match, figure out the old seqnum of the current revision.
  • Generate a new (monotonically increasing) seqnum as well as a new revision
  • Find the last region to see if this document can fit in. If not, allocate another contiguous region.
  • Write the document (with the new revision) into the new region
  • Modify the by_id B+Tree to point to the new document location
  • Modify the by_seqnum B+Tree to add the new entry (of the new seqnum) and remove the old entry (of the old seqnum).
Note that the by_seqnum B+Tree index always points to the latest revision; previous revisions are automatically forgotten.

PUT / POST document (creation)

When a client issues an HTTP REST POST (or PUT) call to CouchDB to create a new document, the DBServer …
  • Generate a new (monotonically increasing) seqnum as well as a new document id and revision
  • Find the last region to see if this document can fit in. If not, allocate another contiguous region.
  • Write the document (with the new revision) into the new region
  • Modify the by_id B+Tree to point to the new document location
  • Modify the by_seqnum B+Tree to add the new entry (of the new seqnum)

DELETE document (modify)

When a client issues an HTTP REST DELETE call to CouchDB, the DBServer …
  • Look at the file header to find the root node of the by_id B+Tree index
  • Traverse down the B+Tree to figure out the leaf node as well as the document location
  • Read the document. Compare the revisions; throw an error if they don't match.
  • If they match, figure out the old seqnum of the current revision.
  • Generate a new (monotonically increasing) seqnum as well as a new revision
  • Modify the by_id B+Tree revision history to show this revision path is deleted
  • Modify the by_seqnum B+Tree to add the new entry (of the new seqnum) and remove the old entry (of the old seqnum).
Online Compaction

Because of the append-only operation, the storage file will grow over time, so we need to compact the file regularly.
  • Open a new storage file
  • Walk the by_seqnum B+Tree index (which only points to the latest revisions) and locate each document
  • Copy the document to the new storage file (which automatically updates the corresponding B+Tree indexes in the new storage file).
Note that because of the MVCC characteristic, the compaction gets a consistent snapshot and can happen concurrently, without being interfered with by the updates that continue after the start of compaction. However, if the rate of updates is too high, the compaction process may never catch up with the updates that keep appending to the file. There is a throttling mechanism under development to slow down the client update rate.

View Indexes

CouchDB supports a concept of “views” on the database. A view is effectively the result of user-defined processing over the underlying document repository. The user-defined processing has to be organized as a two-step computation, “map” and “reduce”. (Note that the reduce semantics is very different from Google's Map/Reduce model.) Map() is a user-defined function which transforms each document into zero, one or multiple intermediate objects, while reduce() is another user-defined function that consolidates the intermediate objects into the final result.

The intermediate objects of map() and the results of reduce() are stored in the view indexes. As the storage gets updated, the previous results stored in the view indexes are no longer valid and have to be updated. CouchDB uses an incremental update mechanism so that refreshing the view indexes is highly efficient.

View definitions are grouped into a design document.

Each view is defined by one “map” function and an optional “reduce” function.

map = function(doc) {
    emit(k1, v1);
    emit(k2, v2);
}

reduce = function(keys, values) {
    return result;
}
The reduce() function needs to be commutative and associative so that the order of reduction can be arbitrary.
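For example, a sum is commutative and associative, so reducing partial results (a rereduce) gives the same answer as reducing all the values at once. A quick illustration of that contract, sketched in Python:

def reduce_sum(keys, values, rereduce=False):
    # The same function serves both the reduce and rereduce passes.
    return sum(values)

direct = reduce_sum(None, [1, 2, 3, 4])
partials = [reduce_sum(None, [1, 2]), reduce_sum(None, [3, 4])]
assert reduce_sum(None, partials, rereduce=True) == direct == 10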

Views defined within each design document are materialized in a view file.


Initially, the view file is empty (no index has been built yet). A view is built lazily when the first query is made.
  1. CouchDB walks the by_seqnum B+Tree index of the storage file.
  2. Based on that, CouchDB gets the latest revisions of all existing documents.
  3. CouchDB remembers the last seqnum and then feeds each document to the View Server using “map_doc”.
  4. The View Server invokes the map(doc) function; for each emit(key, value) call, an entry is created.
  5. Finally, a set of entries is computed and returned back to CouchDB.
  6. CouchDB adds those entries into the view's B+Tree index, with key = emit_key + doc_id.
  7. For each of the B+Tree leaf nodes, CouchDB sends all its contained map entries back to the View Server using “reduce”.
  8. The View Server invokes the reduce(keys, values) function.
  9. The reduce result is computed and returned back to CouchDB.
  10. CouchDB updates the leaf B+Tree node to point to the reduce value of its contained map results.
  11. After that, CouchDB moves up one level to the parents of the leaf B+Tree nodes. For each of these parent nodes, CouchDB sends the corresponding reduce results of its children nodes to the View Server using “rereduce”.
  12. The View Server invokes the reduce(keys, values) function again.
  13. Finally a rereduce result is computed and returned back to CouchDB.
  14. CouchDB updates the parent B+Tree node to point to the rereduce value.
CouchDB continues to move up one level and repeats the calculation of rereduce results, until finally the rereduce result of the root node is also updated.


When done, the view index will look something like this …



Incremental View Update

CouchDB updates the view indexes lazily and incrementally. That means, when the documents are updated, CouchDB will not refresh the view index until the next query reaches CouchDB.

Then CouchDB refreshes the index in the following way.
  • CouchDB walks the by_seqnum B+Tree index of the storage file, starting from the last seqnum.
  • CouchDB extracts all the changed documents since the last view query and feeds them to the view server's map function, getting back a set of map results.
  • CouchDB updates the map results in the B+Tree index; some of the leaf B+Tree nodes will be updated.
  • For each updated leaf B+Tree node, CouchDB resends all its contained map entries back to the view server to recompute the reduce value, then stores the reduced value inside the B+Tree node.
  • For all the parents of an updated leaf B+Tree node, CouchDB needs to recompute the rereduce value and store it inside the B+Tree node, all the way up to the root node.
Because of the consistent snapshot characteristic, a long-running view query can run concurrently (without interference) with the ongoing updates of the DB. However, the query needs to wait for the completion of the view index update before seeing the consistent result. There is also an option (under development) to immediately return a stale copy of the view in case the client can tolerate that.

Query processing

When a client retrieves the result of a view, there are the following scenarios.

Query on Map-only view
In this case, there is no reduce phase in the view index update. To perform the query processing, CouchDB simply searches the B+Tree to locate the corresponding starting point of the key (note that the key is prefixed by the emit_key) and then returns all the map results of that key.

Query on Map with reduce
There are 2 cases. If the query is on the final reduce value over the whole view, then CouchDB will just return the rereduce value pointed to by the root of the B+Tree of the view index.

If the query is on the reduce value of each key (group_by_key = true), then CouchDB tries to locate the boundary of each key. Since this range probably does not align exactly with the B+Tree nodes, CouchDB needs to figure out the edges at both ends, locate the partially matched leaf B+Tree nodes, and resend their map results (with that key) to the View Server. This reduce result is then merged with the existing rereduce results to compute the final reduce result for this key.


e.g. If the key spans between leaf node A and F, then the key's entries that fall partially in node A and node F need to be sent to reduce() again. The result will be rereduced with node E's existing reduce value and node P's existing rereduce value.


DB Replication

CouchDB supports multiple DB replicas running on different machines and provides a mechanism to synchronize their data. This is useful in 2 common scenarios
  • Occasionally connected applications (e.g. PDAs). In this case, the user can work in a disconnected mode for a period and store his data changes locally. Later on, when he connects back to his corporate network, he can synchronize his changes back to his corporate DB.
  • Mission-critical apps (e.g. clusters). In this case, the DB is replicated across multiple machines so that reliability can be achieved through redundancy and high performance can be achieved through load balancing.
Underneath there is a replicator process which accepts replication commands. The command specifies the source DB and the target DB. The replicator then asks the source DB for all the documents updated after a particular seq_num; in other words, the replicator needs to keep track of the last seq_num. Then it sends a request to the target DB to pull the current revision histories of all these documents and checks whether the revision history of the target is older than that of the source. If so, it pushes the changed documents to the target; otherwise, it skips sending them.
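A sketch of one replication pass as described above, with changes_since(), revs_diff() and push_docs() as hypothetical wrappers around the source and target DB APIs:

def replicate(source, target, last_seq):
    # Ask the source for everything updated after the checkpoint.
    changes, new_seq = changes_since(source, last_seq)
    # Ask the target which of those revisions it is missing (ie: where
    # the target's revision history is older than the source's).
    missing = revs_diff(target, changes)
    # Push only the documents whose revision history is newer.
    push_docs(source, target, missing)
    return new_seq  # remember as the checkpoint for the next pass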

At the target DB, a conflict can be detected when the document has also been updated in the target DB. The conflict is then flagged in the revision tree pointed to by the by_id index.

Before this conflict is resolved, CouchDB considers the revision with the longest path to be the winner and shows that in the views. However, CouchDB expects a separate process (maybe manual) to fix the conflict.

Now, building a multi-master replica model based on bi-directional data synchronization on top of the replicator is pretty straightforward.

For example, we can have a pair-wise "gossip" process that runs periodically (or is triggered by certain events). The process will do the following ...
  1. Copy the changes from source = replica A to target = replica B
  2. Reverse the direction, copy the changes from source = replica B to target = replica A
  3. Pick randomly between replica A and replica B, and call it the winner.
  4. Call a user-provided merge(doc_revA) function to fix the revision tree, basically running app-specific logic to bring the tree back to a list.
  5. Copy the changes back from the winner to the loser. This will replicate the fixes.


Data Consistency Considerations

CouchDB doesn’t have a transaction concept, nor does it keep track of the inter-dependencies between documents. It is important to make sure that data integrity doesn’t span across more than one document.

For example, data integrity may become an issue if your application reads document-X and, based on what it reads, updates document-Y. It is possible that after you read document-X, some other application changed document-X into something else that you are not aware of, and you updated document-Y based on a stale value. CouchDB cannot detect this kind of conflict because it happens across two different documents.

Additional data consistency issues arise in the data replication setup. Since the data synchronization happens in the background, there will be latency before changes made on other replicas become visible. If the client connects to the replicas in a nondeterministic way, the following scenarios can happen …
  • Client reads a document and later reads the same document again, but the 2nd read returns an earlier revision than the 1st read.
  • Client updates a document and later reads the document again, but doesn't see his own update.

Unable to load dynamic library php_curl.dll: Error message in PHP

In Windows, while trying to use the "curl" extension with PHP 5.2/Apache 2, you may encounter a blank page. If you open up the Apache error log file (%APACHE_HOME%\logs\error.log), you may see an error message as follows.

"PHP Warning: PHP Startup: Unable to load dynamic library '%PHP_HOME\\ext\\php_curl.dll' - The application has failed to start because its side-by-side configuration is incorrect. Please see the application event log for more detail."

This error message can be caused by version conflicts with a set of DLLs that are already available inside the %windows%\system32 directory.

First check the %windows%\system32 directory to see whether the following DLL files are already available.
  1. libeay32.dll
  2. ssleay32.dll
If those are available, then you would be able to resolve the issue as follows. (However if they are not available, you would need to find some alternative solution).

First you must rename the above two DLL files. We did so by adding an .old extension.
  1. libeay32.dll.old
  2. ssleay32.dll.old
Both the libeay32.dll and ssleay32.dll files are also available inside your %PHP_HOME% directory. Now copy both of those DLL files into the %windows%\system32 folder.

Now try to use the "curl" extension, and it will start to function properly. (Sometimes you may need to restart Windows).

However if this tip could not resolve your issue, make sure to restore the previous DLLs.

Manage HTTP headers with Java Servlets: Quick Notes

In the Java Servlets API, both the HttpServletRequest and HttpServletResponse interfaces (in the javax.servlet.http package) provide methods to programmatically manipulate HTTP headers. There are a number of standard HTTP headers exchanged between a web server and a client (eg: a browser). "Content-Type" is a commonly used header (which is used to specify the MIME type) in Servlets. In this article we discuss how headers are read/written with Servlet classes.

Reading Headers

A servlet can read HTTP headers sent in a client request using the HttpServletRequest interface. This interface provides two methods for this.

String getHeader(String headerName)
int getIntHeader(String headerName)

Both these methods are similar, except that the getIntHeader() method is used to return the value of headers with int type values. The code below shows how the value of the "Content-Type" header is read from the user request. (The HttpServlet.doGet() method is used in the example.)


import javax.servlet.*;
import javax.servlet.http.*;
import java.io.*;

public class MyServlet extends HttpServlet {
    public void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        String contentType = request.getHeader("Content-Type");
        // ...
    }
}

Creating/Writing Headers

A servlet can create a header and send it back to the client using the HttpServletResponse interface, via the following setter methods.

void setHeader(String headerName, String headerValue)
void setIntHeader(String headerName, int headerValue)

The code below shows how a new header named "My_Header" is created and set on the response.


import javax.servlet.*;
import javax.servlet.http.*;
import java.io.*;

public class MyServlet extends HttpServlet {
    public void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        response.setHeader("My_Header", "new Header value");
        // ...
    }
}

Now the client (generally the browser) will receive this new header.

Flight Simulator in BASIC

I remember years ago that there was an instrument-only flight simulator written in BASIC and ported to different computers.  It wasn't much to look at because it was all character based.  Tom Nally has just released an open source flight simulator written in Liberty BASIC.  This one blows the doors off that old classic.  Check it out here.

Google redirect to incorrect country domain: Fix on Firefox

Google servers identify the country of the googler and redirect to the relevant country domain. They may be using the IP address of the request to locate the country before the redirect. However, there are situations where googlers get redirected to an incorrect country domain. For example, even though our requests originate from Sri Lanka, most of the time we are redirected to another country's (Taiwan's) domain. Some misconfiguration at the routers or at the ISP may cause this issue, which makes it hard for a general browser user to get it resolved. This automatic redirect mechanism does not depend on the browser used.

Following is a quick fix for this issue. However, this will only work on the Firefox browser; sorry, friends, if you are on another browser.

Install Redirector extension

Redirector is a small extension available for Firefox, and it supports most of the versions ranging from 2.0 to 3.0.* (we are using the latest Firefox, 3.0.3). You can freely install it from here.
If you are a newbie to Firefox extensions, just click the "Add to Firefox" button on the web page and click "Install now" in the pop-up box. You must restart Firefox to complete the installation.

Now you will see a small icon in the status bar (bottom right corner). This is the Redirector extension we installed above. You can enable/disable this extension just by clicking on this icon.

Configure a redirect

Right-clicking on that Redirector icon will display a settings panel. Click the "Add..." button to add a new redirect.

Now we have to fill in 3 options in this new redirect box.

1). Include Pattern: Here you must enter the incorrect Google country domain URL. As our incorrect country domain is Taiwan, we set the following as the "Include Pattern".
http://www.google.com.tw/*

2). Redirect to: This is where you specify the correct country domain URL. As we live in Sri Lanka, the following is the redirect URL for us.
http://www.google.lk/$1

3). Pattern Type: Set this to "Wildcard".

It's done, quite simple. The Redirector settings box will now list the new redirect.

Now try to load the Google home page, and it will display the country domain home page you set in the redirect settings.

BASIC for the web a killer app?

If you ask the average geek worth his salt what the killer app that launched the microcomputer revolution was, the answer would probably be "Visicalc, of course!"

I take issue with this answer. Long before the spreadsheet application Visicalc was a gleam in Dan Bricklin's eye, the most important and powerful application for small computers was the BASIC programming language. Without Microsoft BASIC (and this is Microsoft's real and lasting legacy, if you ask me) very few people would have been able to do anything useful with computers. Most versions of BASIC back then were variations on Bill Gates' original BASIC interpreter.

Without BASIC we would not have seen so many kids grow up to be programmers, myself included. This is the very reason I work on BASIC language products: I believe there's no good reason why anyone with a little desire and time shouldn't be able to create software.

Run BASIC is very much in the same spirit as the early BASIC, but for the web. Now anyone can create web applications. :-)

Run BASIC - Killer app for the Internet age!

Check out this stream