Riak Tutorial
         Sean Cribbs & Ian Plosker
           Basho Technologies




What is Riak?
• Key-Value store (plus extras)
• Distributed, horizontally scalable
• Fault-tolerant
• Highly-available
• Built for the Web
• Inspired by Amazon’s Dynamo
Key-Value
• Simple operations - get, put, delete
• Value is mostly opaque (some metadata)
• Extras
 • MapReduce
 • Links
 • Full-text search (new, optional)
 • Secondary Indexes
Distributed & Horizontally Scalable
• Default configuration is in a cluster
• Load and data are spread evenly
• Add more nodes to get more X
Fault-Tolerant
• All nodes participate equally - no
  SPOF
• All data is replicated
• Cluster transparently survives...
 • node failure
 • network partitions
• Built on Erlang/OTP (designed for FT)
Highly-Available
• Any node can serve client requests
• Fallbacks are used when nodes are
  down

• Always accepts read and write
  requests

• Per-request quorums
Built for the Web
• HTTP is default (but not only) interface
• Does HTTP/REST well (see
  Webmachine)

• Plays well with HTTP infrastructure -
  reverse proxy caches, load balancers,
  web servers
• Suitable to many web applications
Inspired by
Amazon’s Dynamo
• Masterless, peer-coordinated
  replication

• Consistent hashing
• Eventually consistent
• Quorum reads and writes
• Anti-entropy: read repair, hinted
  handoff
Lab: Riak Basics
Installing Riak
• Download from our local server
• Download a package:
  http://downloads.basho.com/riak/CURRENT

• Clone & build (need git and Erlang):
  $ git clone git://github.com/basho/riak.git
  $ make all devrel
You need `curl`
    apt-get install curl
     yum install curl
Start the Cluster

$ surge-start.sh
dev1/bin/riak start
dev2/bin/riak start
dev3/bin/riak start
dev4/bin/riak start
dev2/bin/riak-admin join dev1
Sent join request to dev1
dev3/bin/riak-admin join dev1
Sent join request to dev1
dev4/bin/riak-admin join dev1
Sent join request to dev1
Check its status


$ dev1/bin/riak-admin member_status

$ dev1/bin/riak-admin ring_status

$ dev1/bin/riak-admin status
PUT
$ surge-put.sh
Enter a key: your-key
Enter a value: your-value
Enter a write quorum value:

* About to connect() to 127.0.0.1 port 8091 (#0)
*   Trying 127.0.0.1... connected
* Connected to 127.0.0.1 (127.0.0.1) port 8091 (#0)
> PUT /riak/surge/your-key?returnbody=true&w=quorum HTTP/1.1
> User-Agent: curl/7.21.4 (universal-apple-darwin11.0) libcurl/7.21.4 OpenSSL/0.9.8r
zlib/1.2.5
> Host: 127.0.0.1:8091
> Accept: */*
> Content-Type: text/plain
> Content-Length: 10
>
< HTTP/1.1 200 OK
< X-Riak-Vclock: a85hYGBgzGDKBVIcR4M2cvs1qb3LYEpkzGNleL/k23G+LAA=
< Vary: Accept-Encoding
< Server: MochiWeb/1.1 WebMachine/1.9.0 (participate in the frantic)
< Link: </riak/surge>; rel="up"
< Date: Tue, 27 Sep 2011 19:21:08 GMT
< Content-Type: text/plain
< Content-Length: 10
<
* Connection #0 to host 127.0.0.1 left intact
* Closing connection #0
your-value
GET
$ surge-get.sh
Enter a key: your-key
Enter a read quorum value (default: quorum):

* About to connect() to 127.0.0.1 port 8091 (#0)
*   Trying 127.0.0.1... connected
* Connected to 127.0.0.1 (127.0.0.1) port 8091 (#0)
> GET /riak/surge/your-key?r=quorum HTTP/1.1
> User-Agent: curl/7.21.4 (universal-apple-darwin11.0) libcurl/7.21.4 OpenSSL/0.9.8r
zlib/1.2.5
> Host: 127.0.0.1:8091
> Accept: */*
>
< HTTP/1.1 200 OK
< X-Riak-Vclock: a85hYGBgzGDKBVIcR4M2cvs1qb3LYEpkzGNleL/k23G+LAA=
< Vary: Accept-Encoding
< Server: MochiWeb/1.1 WebMachine/1.9.0 (participate in the frantic)
< Link: </riak/surge>; rel="up"
< Last-Modified: Tue, 27 Sep 2011 19:01:45 GMT
< ETag: "51h3q7RjTNaHWYpO4P0MJj"
< Date: Tue, 27 Sep 2011 19:31:01 GMT
< Content-Type: text/plain
< Content-Length: 10
<
* Connection #0 to host 127.0.0.1 left intact
* Closing connection #0
your-value
(Callouts on the slide: X-Riak-Vclock carries the object's vector clock, Last-Modified and ETag are caching headers, and Content-Type is the stored content type.)
DELETE
$ surge-delete.sh
Type a key: your-key

* About to connect() to 127.0.0.1 port 8091 (#0)
*   Trying 127.0.0.1... connected
* Connected to 127.0.0.1 (127.0.0.1) port 8091 (#0)
> DELETE /riak/surge/your-key HTTP/1.1
> User-Agent: curl/7.21.4 (universal-apple-darwin11.0) libcurl/7.21.4 OpenSSL/0.9.8r
zlib/1.2.5
> Host: 127.0.0.1:8091
> Accept: */*
>
< HTTP/1.1 204 No Content
< Vary: Accept-Encoding
< Server: MochiWeb/1.1 WebMachine/1.9.0 (participate in the frantic)
< Date: Tue, 27 Sep 2011 19:33:59 GMT
< Content-Type: text/plain
< Content-Length: 0
<
* Connection #0 to host 127.0.0.1 left intact
* Closing connection #0
Kill two nodes



$ dev4/bin/riak stop
ok

$ dev3/bin/riak stop
ok
Try GETting your key
$ . ~/Desktop/surge/surge-get.sh
Enter a key: your-key
Enter a read quorum value (default: quorum):

* About to connect() to 127.0.0.1 port 8091 (#0)
*   Trying 127.0.0.1... connected
* Connected to 127.0.0.1 (127.0.0.1) port 8091 (#0)
> GET /riak/surge/your-key?r=quorum HTTP/1.1
> User-Agent: curl/7.21.4 (universal-apple-darwin11.0) libcurl/7.21.4 OpenSSL/0.9.8r
zlib/1.2.5
> Host: 127.0.0.1:8091
> Accept: */*
>
< HTTP/1.1 200 OK
< X-Riak-Vclock: a85hYGBgzGDKBVIcR4M2cvs1qb3LYEpkzGNleL/k23G+LAA=
< Vary: Accept-Encoding
< Server: MochiWeb/1.1 WebMachine/1.9.0 (participate in the frantic)
< Link: </riak/surge>; rel="up"
< Last-Modified: Tue, 27 Sep 2011 19:50:07 GMT
< ETag: "51h3q7RjTNaHWYpO4P0MJj"
< Date: Tue, 27 Sep 2011 19:51:10 GMT
< Content-Type: text/plain
< Content-Length: 10
<
* Connection #0 to host 127.0.0.1 left intact
* Closing connection #0
your-value
Riak Architecture
Key-Value Data Model
• All data (objects) are referenced by keys
• Keys are grouped into buckets
• Simple operations: get, put, delete
• Object is composed of metadata and value
[Diagram: a "people" bucket holding the keys mark, beth, and sean, with bucket properties n_val, allow_mult, and quora; the object people/beth is shown with its metadata (vclock, content-type, last-mod, links) and an HTML value.]
Consistent Hashing & The Ring
• 160-bit integer keyspace
• Divided into fixed number of evenly-sized partitions
• Partitions are claimed by nodes in the cluster
• Replicas go to the N partitions following the key
[Diagram: a ring labeled 0, 2^160/4, and 2^160/2, split into 32 partitions claimed in turn by node 0 through node 3; with N=3, hash("conferences/surge") lands on the ring and the replicas occupy the next three partitions.]
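A minimal Python sketch of the idea (illustration only, not Riak's riak_core implementation; the 32-partition ring, four-node list, and round-robin claim strategy are assumptions):

# Minimal consistent-hashing sketch: a 160-bit SHA-1 keyspace split into Q
# equal partitions, claimed round-robin by the nodes, with replicas on the
# N partitions following the key's hash.
import hashlib

Q = 32                       # number of partitions on the ring
N = 3                        # replication factor
NODES = ["node0", "node1", "node2", "node3"]
RING_SIZE = 2 ** 160
PARTITION_SIZE = RING_SIZE // Q

# Partition i is claimed by NODES[i % len(NODES)] (simplified claim strategy)
owners = {i: NODES[i % len(NODES)] for i in range(Q)}

def key_hash(bucket, key):
    return int(hashlib.sha1(f"{bucket}/{key}".encode()).hexdigest(), 16)

def preference_list(bucket, key, n=N):
    """Return the n (partition, node) pairs following the key's hash."""
    first = key_hash(bucket, key) // PARTITION_SIZE
    return [((first + i) % Q, owners[(first + i) % Q]) for i in range(1, n + 1)]

print(preference_list("conferences", "surge"))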
Request Quorums

• Every request contacts all replicas of
  key

• N - number of replicas (default 3)
• R - read quorum
• W - write quorum
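A toy sketch of how an N/R/W quorum behaves (illustration only; Riak's real get/put handlers are Erlang FSMs):

# Toy N/R/W quorum: the coordinator contacts all N replicas and succeeds
# as soon as R (for reads) or W (for writes) of them answer.
N = 3

def quorum_request(replies, required):
    """replies: per-replica results, with None standing in for a failure."""
    oks = [r for r in replies if r is not None]
    if len(oks) >= required:
        return oks[:required]          # enough replicas answered
    raise RuntimeError(f"only {len(oks)} of {N} replicas answered, needed {required}")

# R=2 read still succeeds with one replica down:
print(quorum_request(["v1", None, "v2"], required=2))
# A W=3 write with a replica down would raise:
# quorum_request(["ok", None, "ok"], required=3)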
Vector Clocks
http://blog.basho.com/2010/01/29/why-vector-clocks-are-easy/

• Every node has an ID (changed in 1.0)
• Send last-seen vector clock in every
   “put” or “delete” request

• Riak tracks history of updates
 • Auto-resolves stale versions
 • Lets you decide conflicts
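A minimal vector-clock sketch in Python (illustration only; Riak's clocks also carry timestamps and get pruned), showing how stale versions are auto-resolved and concurrent ones surface as conflicts:

# A clock is a dict of actor -> counter.
def increment(clock, actor):
    clock = dict(clock)
    clock[actor] = clock.get(actor, 0) + 1
    return clock

def descends(a, b):
    """True if clock a has seen everything clock b has (a is newer or equal)."""
    return all(a.get(actor, 0) >= n for actor, n in b.items())

def merge(a, b):
    """Pairwise maximum: the smallest clock that descends both."""
    return {actor: max(a.get(actor, 0), b.get(actor, 0)) for actor in set(a) | set(b)}

v1 = increment({}, "node_a")             # {'node_a': 1}
v2 = increment(v1, "node_b")             # updates v1 -> descends it
print(descends(v2, v1))                  # True  -> stale version auto-resolved
conflict = increment(v1, "node_c")       # concurrent update of v1
print(descends(v2, conflict), descends(conflict, v2))  # False False -> siblings
print(merge(v2, conflict))               # clock for the resolved value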
Hinted Handoff
• Node fails
• Requests go to fallback
• Node comes back
• “Handoff” - data returns to recovered node
• Normal operations resume
[Diagram: the ring around hash("conferences/surge"), with the failed node's partitions crossed out and its requests redirected to fallback partitions.]
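A toy sketch of the handoff flow (illustration only; the node names and the fallback mapping are assumptions):

# When a primary is down, the write goes to a fallback node along with a
# "hint" naming the primary; when the primary recovers, the fallback hands
# the data back.
fallback_store = {}   # (hinted_primary, key) -> value

def put(key, value, primaries, up_nodes, fallbacks):
    for node in primaries:
        if node in up_nodes:
            print(f"stored {key} on primary {node}")
        else:
            fb = fallbacks[node]
            fallback_store[(node, key)] = value
            print(f"{node} down: stored {key} on fallback {fb} with hint")

def handoff(recovered_node):
    for (primary, key), value in list(fallback_store.items()):
        if primary == recovered_node:
            print(f"handing {key} back to {recovered_node}")
            del fallback_store[(primary, key)]

put("conferences/surge", "v2",
    primaries=["node1", "node2", "node3"],
    up_nodes={"node1", "node3"},
    fallbacks={"node2": "node4"})
handoff("node2")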
Anatomy of a Request
[Diagram sequence: the client calls get("conferences/surge") against any Riak node. That coordinating node starts a Get Handler (FSM), which hashes the key to three partitions on the ring (10, 11, 12 in the diagram) and sends the get to the vnodes that own them. With R=2, as soon as two vnodes reply (v1, v2) the handler resolves the responses and returns v2 to the client.]
Read Repair
[Diagram sequence: the same get finds one replica still holding the stale v1 while the others hold v2. The Get Handler returns v2 to the client, then writes v2 back to the stale replica so all partitions converge on v2.]
Riak Architecture
[Diagram, built up layer by layer on the Erlang/OTP Runtime: Client APIs (HTTP, Protocol Buffers, Erlang local client); Request Coordination (get, put, delete, map-reduce); Riak Core (consistent hashing, handoff, gossip, membership, node-liveness, buckets); then the vnode master, vnodes, and workers over the storage backend. The outer box is labeled Riak KV.]
Modeling &
 Querying
Application
        Design
• No intrinsic schema
• Your application defines the
  structure and semantics

• Your application resolves conflicts
  (if you care)
Modeling Tools

• Key-Value
• Links
• Full-text Search
• Secondary Indexes (2I)
• MapReduce
Key-Value
• Content-Types
• Denormalize
• Meaningful or “guessable” keys
 • Composites
 • Time-boxing
• References (value is a key or list)
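A small sketch of what composite, time-boxed, and reference keys might look like (the naming scheme here is an assumption for illustration, not a Riak convention):

# "Meaningful or guessable" keys: composites join stable parts, time-boxing
# groups writes by period, and a reference value is just a list of other keys.
import json
from datetime import datetime, timezone

def composite_key(*parts):
    # e.g. user id + resource -> one guessable key
    return "_".join(parts)

def timeboxed_key(prefix, when, box="hour"):
    fmt = {"day": "%Y-%m-%d", "hour": "%Y-%m-%dT%H"}[box]
    return f"{prefix}_{when.strftime(fmt)}"

now = datetime(2011, 9, 27, 19, 21, tzinfo=timezone.utc)
print(composite_key("sean", "settings"))   # sean_settings
print(timeboxed_key("pageviews", now))     # pageviews_2011-09-27T19

# A reference value: the object stores the keys to fetch next.
print(json.dumps({"coworkers": ["ian", "mark", "beth"]}))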
Links

• Lightweight relationships, like <a>
• Includes a “tag”
• Built-in traversal op (“walking”)
  GET /riak/b/k/[bucket],[tag],[keep]

• Limited in number (part of meta)
Full-text Search

• Designed for searching prose
• Lucene/Solr-like query interface
• Automatically index K/V values
• Input to MapReduce
• Customizable index schemas
Secondary Indexes

• Defined as metadata
• Two index types: _int and _bin
• Two query types: equal and range
• Input to MapReduce
MapReduce
• For more involved queries
 • Specify input keys
 • Process data in “map” and
   “reduce” functions

   • JavaScript or Erlang
• Not tuned for batch processing
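A sketch of submitting a job to the HTTP /mapred endpoint from Python (the host/port and the bucket/keys are assumptions from the lab setup; Riak.mapValuesJson is the built-in JavaScript map function used in the lab below):

# Post a MapReduce job over HTTP with explicit [bucket, key] inputs.
import json
import urllib.request

job = {
    "inputs": [["surge", "surgeobj1"], ["surge", "surgeobj2"]],
    "query": [
        {"map": {"language": "javascript",
                 "name": "Riak.mapValuesJson",
                 "keep": True}}
    ],
}

req = urllib.request.Request(
    "http://127.0.0.1:8091/mapred",
    data=json.dumps(job).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read()))   # e.g. [{"surgeobj": 1}, {"surgeobj": 2}]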
Lab: Querying
Lab: Walk Links

• Store an object with a Link
• Store the target of the Link
• Walk from one to the other
Lab: Store Links
$ ./store-linked.sh
Enter the origin key: sean
Enter the target key: ian
Enter the link's tag: coworker

*** Storing the origin ***
* About to connect() to localhost port 8091 (#0)
*   Trying 127.0.0.1... connected
* Connected to localhost (127.0.0.1) port 8091 (#0)
> PUT /riak/surge/sean HTTP/1.1
> Link: </riak/surge/ian>;riaktag="coworker"
...
< HTTP/1.1 204 No Content
...
*** Storing the target ***
* About to connect() to localhost port 8091 (#0)
*   Trying 127.0.0.1... connected
* Connected to localhost (127.0.0.1) port 8091 (#0)
> PUT /riak/surge/ian HTTP/1.1
...
< HTTP/1.1 204 No Content
Lab: Walk Links (1/3)
$ ./walk-linked.sh
Enter the origin key: sean
Enter the link spec (default: _,_,_):
...
> GET /riak/surge/sean/_,_,_ HTTP/1.1
...
< HTTP/1.1 200 OK
< Server: MochiWeb/1.1 WebMachine/1.9.0 (participate in the frantic)
< Expires: Tue, 27 Sep 2011 20:51:37 GMT
< Date: Tue, 27 Sep 2011 20:41:37 GMT
< Content-Type: multipart/mixed; boundary=J6O2SZfLfSEdAv5dv3mttR7AVOF
< Content-Length: 438
<

--J6O2SZfLfSEdAv5dv3mttR7AVOF
Content-Type: multipart/mixed; boundary=C1NUMpwbmSvb7CbqEAy1KdYRiUH

--C1NUMpwbmSvb7CbqEAy1KdYRiUH
X-Riak-Vclock: a85hYGBgzGDKBVIcMRuuc/k1GU/KYEpkymNl0Nzw7ThfFgA=
Location: /riak/surge/ian
Content-Type: text/plain
Link: </riak/surge>; rel="up"
Etag: 51h3q7RjTNaHWYpO4P0MJj
Last-Modified: Tue, 27 Sep 2011 20:38:01 GMT

target
--C1NUMpwbmSvb7CbqEAy1KdYRiUH--

--J6O2SZfLfSEdAv5dv3mttR7AVOF--
Lab: Walk Links (2/3)
Enter the origin key: sean
Enter the link spec (default: _,_,_): surge,_,_

> GET /riak/surge/sean/surge,_,_ HTTP/1.1
...
>
< HTTP/1.1 200 OK
< Server: MochiWeb/1.1 WebMachine/1.9.0 (participate in the frantic)
< Expires: Tue, 27 Sep 2011 20:52:53 GMT
< Date: Tue, 27 Sep 2011 20:42:53 GMT
< Content-Type: multipart/mixed; boundary=CsxMIsO9tfadrRJ7EQ3XL2ivQ4f
< Content-Length: 438
<

--CsxMIsO9tfadrRJ7EQ3XL2ivQ4f
Content-Type: multipart/mixed; boundary=I3U33LkzkqJi6HtbwsU0qRd4k4y

--I3U33LkzkqJi6HtbwsU0qRd4k4y
X-Riak-Vclock: a85hYGBgzGDKBVIcMRuuc/k1GU/KYEpkymNl0Nzw7ThfFgA=
Location: /riak/surge/ian
Content-Type: text/plain
Link: </riak/surge>; rel="up"
Etag: 51h3q7RjTNaHWYpO4P0MJj
Last-Modified: Tue, 27 Sep 2011 20:38:01 GMT

target
--I3U33LkzkqJi6HtbwsU0qRd4k4y--

--CsxMIsO9tfadrRJ7EQ3XL2ivQ4f--
Lab: Walk Links (3/3)
Enter the origin key: sean
Enter the link spec (default: _,_,_): foo,_,_

...
> GET /riak/surge/sean/foo,_,_ HTTP/1.1
< HTTP/1.1 200 OK
< Server: MochiWeb/1.1 WebMachine/1.9.0 (participate in the frantic)
< Expires: Tue, 27 Sep 2011 20:53:08 GMT
< Date: Tue, 27 Sep 2011 20:43:08 GMT
< Content-Type: multipart/mixed; boundary=3AsxaHhVMlDEQCakLSmNqQUTS4Y
< Content-Length: 172
<

--3AsxaHhVMlDEQCakLSmNqQUTS4Y
Content-Type: multipart/mixed; boundary=Ivtq9RgHpydECNEZOnHpiqHcFYl

--Ivtq9RgHpydECNEZOnHpiqHcFYl--

--3AsxaHhVMlDEQCakLSmNqQUTS4Y--
Lab: Secondary
      Indexes

• Create some objects with indexes
• Query the index for their keys
• Query input to MapReduce
Lab: Create 2I Objects
$ ./store-indexed.sh
Enter the number of objects to create: 40
* About to connect() to localhost port 8091 (#0)
*   Trying 127.0.0.1... connected
* Connected to localhost (127.0.0.1) port 8091 (#0)
> PUT /riak/surge/surgeobj1?returnbody=true HTTP/1.1
> User-Agent: curl/7.19.7 (universal-apple-darwin10.0) libcurl/7.19.7 OpenSSL/0.9.8r
zlib/1.2.3
> Host: localhost:8091
> Accept: */*
> Content-Type: application/json
> X-Riak-Index-surgeobj_int: 1
> Content-Length: 14
>
< HTTP/1.1 200 OK
< X-Riak-Vclock: a85hYGBgzGDKBVIcR4M2cvs1uZVkMCUy5rEyHDn57ThfFgA=
< x-riak-index-surgeobj_int: 1
< Vary: Accept-Encoding
< Server: MochiWeb/1.1 WebMachine/1.9.0 (participate in the frantic)
< Link: </riak/surge>; rel="up"
< Date: Tue, 27 Sep 2011 22:27:16 GMT
< Content-Type: application/json
< Content-Length: 14
<
* Connection #0 to host localhost left intact
* Closing connection #0
{"surgeobj":1} ...
(Callouts on the slide: the object key appears in the PUT URL, e.g. surgeobj1, and the X-Riak-Index-surgeobj_int header is the secondary-index entry sent with the request and echoed in the response.)
Lab: Query 2I (1/2)
$ ./query-indexed.sh
Query on range or equality? (r/e) e
Index value: 10

* About to connect() to localhost port 8091 (#0)
*   Trying 127.0.0.1... connected
* Connected to localhost (127.0.0.1) port 8091 (#0)
> GET /buckets/surge/index/surgeobj_int/10 HTTP/1.1
> User-Agent: curl/7.19.7 (universal-apple-darwin10.0) libcurl/7.19.7
OpenSSL/0.9.8r zlib/1.2.3
> Host: localhost:8091
> Accept: */*
>
< HTTP/1.1 200 OK
< Vary: Accept-Encoding
< Server: MochiWeb/1.1 WebMachine/1.9.0 (participate in the frantic)
< Date: Tue, 27 Sep 2011 22:27:27 GMT
< Content-Type: application/json
< Content-Length: 23
<
* Connection #0 to host localhost left intact
* Closing connection #0
{"keys":["surgeobj10"]}
Lab: Query 2I (2/2)
Query on range or equality? (r/e) r
Range start: 5
Range end: 15

* About to connect() to localhost port 8091 (#0)
*   Trying 127.0.0.1... connected
* Connected to localhost (127.0.0.1) port 8091 (#0)
> GET /buckets/surge/index/surgeobj_int/5/15 HTTP/1.1
> User-Agent: curl/7.19.7 (universal-apple-darwin10.0) libcurl/7.19.7
OpenSSL/0.9.8r zlib/1.2.3
> Host: localhost:8091
> Accept: */*
>
< HTTP/1.1 200 OK
< Vary: Accept-Encoding
< Server: MochiWeb/1.1 WebMachine/1.9.0 (participate in the frantic)
< Date: Tue, 27 Sep 2011 22:27:36 GMT
< Content-Type: application/json
< Content-Length: 148
<
* Connection #0 to host localhost left intact
* Closing connection #0
{"keys":
["surgeobj12","surgeobj15","surgeobj6","surgeobj5","surgeobj13","surgeob
j8","surgeobj11","surgeobj14","surgeobj9","surgeobj10","surgeobj7"]}
Lab: 2I MapReduce
$ ./mapred-indexed.sh
Query on range or equality? (r/e) r
Range start: 35
Range end: 40

Executing MapReduce:
{ "inputs":{ "bucket":"surge", "index":"surgeobj_int" ,"start":35,"end":40},
"query":[ {"map":
{"name":"Riak.mapValuesJson","language":"javascript","keep":true}} ]}

> POST /mapred HTTP/1.1
> User-Agent: curl/7.19.7 (universal-apple-darwin10.0) libcurl/7.19.7 OpenSSL/
0.9.8r zlib/1.2.3
> Host: localhost:8091
> Accept: */*
> Content-Type: application/json
> Content-Length: 164
>
< HTTP/1.1 200 OK
< Server: MochiWeb/1.1 WebMachine/1.9.0 (participate in the frantic)
< Date: Wed, 28 Sep 2011 01:41:53 GMT
< Content-Type: application/json
< Content-Length: 97
<
* Connection #0 to host localhost left intact
* Closing connection #0
[{"surgeobj":35},{"surgeobj":40},{"surgeobj":36},{"surgeobj":37},{"surgeobj":
39},{"surgeobj":38}]
Riak Operations
Configuration
File Locations
• Configuration
  •   /etc/riak
• Binaries
  •   /usr/sbin/riak
  •   /usr/sbin/riak-admin
  •   /usr/[lib,lib64]/riak
• Logs
  •   /var/log/riak
• Data
  •   /var/lib/riak
• Handles / Temp Files
  •   /tmp/riak
/etc/riak/vm.args
• Overview
 • Erlang VM configuration settings.
• Important Settings
 • Node name.
 • Security cookie.
• Documentation
 • http://www.erlang.org/doc/man/erl.html
/etc/riak/vm.args

• Node Name & Security Cookie
  ## Name of the riak node
  -name riak@127.0.0.1
  ## Cookie for distributed erlang
  -setcookie riak
/etc/riak/app.config
• Overview
  •   Riak (and dependency) configuration settings.
• Important Settings
  •   Ports
  •   Directories
  •   Number of Partitions
  •   Storage Backend
• Documentation
  •   http://wiki.basho.com/Configuration-Files.html
  •   Inline comments
/etc/riak/app.config
• Storage Engine
   %% Storage_backend specifies the Erlang module defining the storage
   %% mechanism that will be used on this node.
   {storage_backend, riak_kv_bitcask_backend},
• About Storage Engines
  • Riak has pluggable storage engines. (Bitcask, Innostore, LevelDB)
  • Different engines have different tradeoffs.
  • Engine selection depends on shape of data.
  • More on this later.
/etc/riak/app.config
• File Locations
%% Bitcask Config
{bitcask, [
            {data_root, "data/bitcask"}
          ]},

{eleveldb, [
             {data_root, "data/leveldb"}
           ]},
/etc/riak/app.config
• HTTP & Protocol Buffers
%% http is a list of IP addresses and TCP ports that the Riak
%% HTTP interface will bind.
{http, [ {"127.0.0.1", 8091 } ]},

%% pb_ip is the IP address that the Riak Protocol Buffers interface
%% will bind to. If this is undefined, the interface will not run.
{pb_ip,   "127.0.0.1" },

%% pb_port is the TCP port that the Riak Protocol Buffers interface
%% will bind to
{pb_port, 8081 },
Securing Riak
•   Overview

    •   Erlang has a “cookie” for “security.”
        •   Do not rely on it. Plain text.

    •   Riak assumes internal environment is trusted.

•   Securing the HTTP Interface

    •   HTTP Auth via Proxy

    •   Does support SSL
•   Securing Protocol Buffers

    •   No security.
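To put the two HTTP points into practice, the usual pattern is to keep Riak bound to a private address, add authentication at a reverse proxy, and terminate SSL either at that proxy or in Riak itself. Riak of this vintage can serve HTTPS directly via the riak_core https/ssl settings; a hedged sketch (the address, port, and certificate paths are placeholders, and option names should be checked against your version's app.config):

%% In the riak_core section of app.config:
{https, [ {"10.0.0.5", 8443} ]},
{ssl,   [ {certfile, "/etc/riak/cert.pem"},
          {keyfile,  "/etc/riak/key.pem"} ]},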
Load Balancing
•   Overview
    •   Any node can handle any request.
    •   Makes load balancing easy.
•   Load Balancing HTTP
    •   HAProxy, nginx, etc.
•   Load Balancing Protocol Buffers
    •   Any TCP load balancer
•   In General
    •   Use a “least connected” strategy.
Storage Backends
There are 5 to choose from:
• Bitcask
• Innostore
• LevelDB
• Memory
• Multi
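The last of these, the Multi backend, delegates to different engines on a per-bucket basis. A hedged sketch of what that looks like in app.config (the backend names such as <<"cache_mult">> are placeholders; exact option names follow the riak_kv_multi_backend documentation for your version):

{storage_backend, riak_kv_multi_backend},
{multi_backend_default, <<"bitcask_mult">>},
{multi_backend, [
  %% {Name, Module, ModuleConfig}
  {<<"bitcask_mult">>,  riak_kv_bitcask_backend,  [{data_root, "data/bitcask"}]},
  {<<"eleveldb_mult">>, riak_kv_eleveldb_backend, [{data_root, "data/leveldb"}]},
  {<<"cache_mult">>,    riak_kv_memory_backend,   []}
]},

A bucket then opts into a non-default engine by setting its backend bucket property to one of the names above.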
Bitcask
• A fast, append-only key-value store
• In memory key lookup table (key_dir)
• Closed files are immutable
• Merging cleans up old data
• Apache 2 license
Innostore
• Based on Embedded InnoDB
• Write-ahead log plus B-tree storage
• Similar characteristics to the MySQL InnoDB plugin
• Manages which pages of the B-trees are in memory
• GPL license
LevelDB
• A Google-developed key-value store
• Append-only
• Multiple levels of SSTable-like structures
• BSD license
Cluster Capacity Planning
•   Overview
    •   There are MANY variables that affect performance.
    •   Safest route is to simulate load and see what happens.
    •   basho_bench can help.
•   Documentation
    •   http://wiki.basho.com/Cluster-Capacity-Planning.html
    •   http://wiki.basho.com/System-Requirements.html
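One quick back-of-envelope check worth doing before benchmarking is whether the Bitcask keydir will fit in RAM. A rough sketch in the Erlang shell, assuming roughly 40 bytes of keydir overhead per key plus the key itself (an assumed figure; the Bitcask Capacity Planning page above has the exact per-key numbers for your version):

%% Estimated keydir RAM across the whole cluster.
Keys = 100000000,        % 100 million distinct keys (assumption)
KeySize = 36,            % average key size in bytes (assumption)
Overhead = 40,           % assumed per-key keydir overhead in bytes
NVal = 3,                % replicas per key
Keys * (KeySize + Overhead) * NVal / (1024 * 1024 * 1024).
%% ≈ 21 GiB cluster-wide; divide by the node count for per-node RAM.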
Things to Consider (1/3)
•   Virtualization
    •   Riak’s workload is I/O bound.
    •   Run on bare metal for best performance.
    •   Virtualize for other factors (agility or cost).
•   Object size
    •   Too small means unnecessary disk and network overhead.
    •   Too large means chunky reads and writes.
•   Read/Write Ratio
    •   Riak is not meant for dumping many tiny datapoints.
Things to Consider (2/3)
•   “Randomness” of Reads / Writes
    •   Recently accessed objects are cached in memory.
    •   A small working set of objects will be fast.
•   Amount of RAM
    •   Expands the size of the cached working set, lowers latency.
•   Disk Speed
    •   Faster disk means lower read/write latency.
•   Number of Processors
    •   More processors == more Map/Reduce throughput.
    •   Faster processors == lower Map/Reduce latency.
Things to Consider (3/3)
•   Features
    •   Riak KV vs. Riak 2I vs. Riak Search
•   Backends
    •   Bitcask vs. Innostore vs. eLevelDB
    •   Space/time tradeoff
•   Partitions
    •   Depends on the eventual number of boxes.
•   Protocol
    •   HTTP vs. Protocol Buffers
    •   All clients support HTTP; some support PB.
Performance Tuning
•   General Tips
    •   Start with a “DB”-like machine profile:
        •   Multi-core 64-bit CPUs
        •   The more RAM the better
        •   Fast disk
    •   Use noatime mounts.
    •   Limit list-keys and full-bucket MapReduce.
    •   Benchmark in advance.
    •   Graph everything.
Performance Tuning
•   Bitcask Backend
    •   KeyDir is in Memory
    •   Filesystem Cache (Access Profile Considerations)
    •   Merge Trigger Settings
    •   Scheduling Bitcask Merges
    •   Key Expiration
•   Documentation
    •   http://wiki.basho.com/Bitcask.html
    •   http://wiki.basho.com/Bitcask-Capacity-Planning.html
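A hedged sketch of what the merge and expiration knobs above look like in the bitcask section of app.config (the values are illustrative, and option names should be checked against the Bitcask docs for your release):

{bitcask, [
            {data_root, "data/bitcask"},
            %% Merge a file once it is 60% fragmented or carries 512MB of dead data.
            {frag_merge_trigger, 60},
            {dead_bytes_merge_trigger, 536870912},
            %% Only run merges between 02:00 and 05:00, off peak hours.
            {merge_window, {2, 5}},
            %% Expire keys after one day; -1 (the default) disables expiration.
            {expiry_secs, 86400}
          ]},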
Performance Tuning
•   Innostore Backend
    •   Similar to MySQL + InnoDB
    •   buffer pool size
    •   o_direct
    •   log_files_in_groups
    •   Separate Spindles for Log and Data
•   Documentation
    •   http://wiki.basho.com/Setting-Up-Innostore.html
    •   http://wiki.basho.com/Innostore-Configuration-and-Tuning.html
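A hedged sketch of those settings in the innostore section of app.config (paths and sizes are placeholders, and the exact option names should be taken from the Innostore tuning page above rather than from this sketch):

{innostore, [
              %% Give InnoDB most of the RAM not needed elsewhere (in bytes).
              {buffer_pool_size, 2147483648},
              %% Bypass the OS page cache for data files.
              {flush_method, "O_DIRECT"},
              %% Keep the transaction log and data on separate spindles.
              {log_group_home_dir, "/mnt/log-disk/innodb-log"},
              {data_home_dir,      "/mnt/data-disk/innodb-data"},
              {log_files_in_group, 6}
            ]},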
basho_bench
•   What Does It Do?
    •   Generates load against a Riak cluster. Logs and graphs results.
    •   Intended for cluster capacity planning.
•   Major Parameters
    •   Test harness / key generator / value generator
    •   Read / write / update mix
    •   Number of workers
    •   Worker rate (X per second vs. as fast as possible)
•   Documentation
    •   http://wiki.basho.com/Benchmarking-with-Basho-Bench.html
basho_bench: Configuration
{mode, max}.
{duration, 20}.
{concurrent, 3}.
{driver, basho_bench_driver_http_raw}.
{code_paths, ["deps/ibrowse"]}.
{key_generator, {int_to_bin, {uniform_int, 50000000}}}.
{value_generator, {fixed_bin, 5000}}.
{http_raw_port, 8098}.
{operations, [{insert, 1}]}.
{source_dir, "foo"}.
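The example above is write-only and unthrottled. A hedged variation that exercises the read/write mix and worker-rate parameters via the Protocol Buffers driver (option values are illustrative; the examples/ directory shipped with basho_bench is the authoritative reference):

{mode, {rate, 50}}.                          % throttle each worker to ~50 ops/sec
{duration, 20}.
{concurrent, 10}.                            % number of workers
{driver, basho_bench_driver_riakc_pb}.       % Protocol Buffers client driver
{code_paths, ["deps/riakc", "deps/protobuffs"]}.  % adjust to where the client libs live
{key_generator, {pareto_int, 10000000}}.     % skewed key access, like real traffic
{value_generator, {fixed_bin, 1000}}.
{operations, [{get, 4}, {update, 1}]}.       % 80% reads, 20% read-modify-writes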
basho_bench: Results
elapsed, window, total, successful, failed
10, 10, 10747, 10747, 0
20, 10, 13295, 13295, 0
30, 10, 11840, 11840, 0
40, 10, 1688, 1688, 0
50, 10, 10675, 10675, 0
60, 10, 13599, 13599, 0
70, 10, 7747, 7747, 0
80, 10, 4089, 4089, 0
90, 10, 13851, 13851, 0
100, 10, 13374, 13374, 0
110, 10, 2293, 2293, 0
120, 10, 9813, 9813, 0
130, 10, 13860, 13860, 0
140, 10, 8333, 8333, 0
150, 10, 3592, 3592, 0
160, 10, 14104, 14104, 0
170, 10, 13834, 13834, 0
...snip...
basho_bench: Results (summary graph slide; image not reproduced here)
Monitoring
Lager
•   Notes
    •   Basho’s replacement for Erlang logging tools
    •   It plays nice with your tools
•   /var/log/riak/console.log
Riak Tutorial: Key-Value Store for the Web

  • 1. Riak Tutorial Sean Cribbs & Ian Plosker Basho Technologies basho
  • 3.
  • 4. • Key-Value store (plus extras)
  • 5. • Key-Value store (plus extras) • Distributed, horizontally scalable
  • 6. • Key-Value store (plus extras) • Distributed, horizontally scalable • Fault-tolerant
  • 7. • Key-Value store (plus extras) • Distributed, horizontally scalable • Fault-tolerant • Highly-available
  • 8. • Key-Value store (plus extras) • Distributed, horizontally scalable • Fault-tolerant • Highly-available • Built for the Web
  • 9. • Key-Value store (plus extras) • Distributed, horizontally scalable • Fault-tolerant • Highly-available • Built for the Web • Inspired by Amazon’s Dynamo
  • 11. Key-Value • Simple operations - get, put, delete
  • 12. Key-Value • Simple operations - get, put, delete • Value is mostly opaque (some metadata)
  • 13. Key-Value • Simple operations - get, put, delete • Value is mostly opaque (some metadata) • Extras
  • 14. Key-Value • Simple operations - get, put, delete • Value is mostly opaque (some metadata) • Extras • MapReduce
  • 15. Key-Value • Simple operations - get, put, delete • Value is mostly opaque (some metadata) • Extras • MapReduce • Links
  • 16. Key-Value • Simple operations - get, put, delete • Value is mostly opaque (some metadata) • Extras • MapReduce • Links • Full-text search (new, optional)
  • 17. Key-Value • Simple operations - get, put, delete • Value is mostly opaque (some metadata) • Extras • MapReduce • Links • Full-text search (new, optional) • Secondary Indexes
  • 19. Distributed & Horizontally • Default configuration is in a cluster
  • 20. Distributed & Horizontally • Default configuration is in a cluster • Load and data are spread evenly
  • 21. Distributed & Horizontally • Default configuration is in a cluster • Load and data are spread evenly • Add more nodes to get more X
  • 23. Fault-Tolerant • All nodes participate equally - no SPOF
  • 24. Fault-Tolerant • All nodes participate equally - no SPOF • All data is replicated
  • 25. Fault-Tolerant • All nodes participate equally - no SPOF • All data is replicated • Cluster transparently survives...
  • 26. Fault-Tolerant • All nodes participate equally - no SPOF • All data is replicated • Cluster transparently survives... • node failure
  • 27. Fault-Tolerant • All nodes participate equally - no SPOF • All data is replicated • Cluster transparently survives... • node failure • network partitions
  • 28. Fault-Tolerant • All nodes participate equally - no SPOF • All data is replicated • Cluster transparently survives... • node failure • network partitions • Built on Erlang/OTP (designed for FT)
  • 30. Highly-Available • Any node can serve client requests
  • 31. Highly-Available • Any node can serve client requests • Fallbacks are used when nodes are down
  • 32. Highly-Available • Any node can serve client requests • Fallbacks are used when nodes are down • Always accepts read and write requests
  • 33. Highly-Available • Any node can serve client requests • Fallbacks are used when nodes are down • Always accepts read and write requests • Per-request quorums
  • 35. Built for the Web • HTTP is default (but not only) interface
  • 36. Built for the Web • HTTP is default (but not only) interface • Does HTTP/REST well (see Webmachine)
  • 37. Built for the Web • HTTP is default (but not only) interface • Does HTTP/REST well (see Webmachine) • Plays well with HTTP infrastructure - reverse proxy caches, load balancers, web servers
  • 38. Built for the Web • HTTP is default (but not only) interface • Does HTTP/REST well (see Webmachine) • Plays well with HTTP infrastructure - reverse proxy caches, load balancers, web servers • Suitable to many web applications
  • 40. Inspired by Amazon’s Dynamo • Masterless, peer-coordinated replication
  • 41. Inspired by Amazon’s Dynamo • Masterless, peer-coordinated replication • Consistent hashing
  • 42. Inspired by Amazon’s Dynamo • Masterless, peer-coordinated replication • Consistent hashing • Eventually consistent
  • 43. Inspired by Amazon’s Dynamo • Masterless, peer-coordinated replication • Consistent hashing • Eventually consistent • Quorum reads and writes
  • 44. Inspired by Amazon’s Dynamo • Masterless, peer-coordinated replication • Consistent hashing • Eventually consistent • Quorum reads and writes • Anti-entropy: read repair, hinted handoff
  • 47. Installing Riak • Download from our local server
  • 48. Installing Riak • Download from our local server • Download a package: http://downloads.basho.com/riak/ CURRENT
  • 49. Installing Riak • Download from our local server • Download a package: http://downloads.basho.com/riak/ CURRENT • Clone & build (need git and Erlang) : $ git clone git://github.com/basho/riak.git $ make all devrel
  • 50. You need `curl` apt-get install curl yum install curl
  • 52. Start the Cluster $ surge-start.sh
  • 53. Start the Cluster $ surge-start.sh dev1/bin/riak start dev2/bin/riak start dev3/bin/riak start dev4/bin/riak start
  • 54. Start the Cluster $ surge-start.sh dev1/bin/riak start dev2/bin/riak start dev3/bin/riak start dev4/bin/riak start dev2/bin/riak-admin join dev1 Sent join request to dev1 dev3/bin/riak-admin join dev1 Sent join request to dev1 dev4/bin/riak-admin join dev1 Sent join request to dev1
  • 56. Check its status $ dev1/bin/riak-admin member_status $ dev1/bin/riak-admin ring_status $ dev1/bin/riak-admin status
  • 57. PUT
  • 59. PUT $ surge-put.sh Enter a key: your-key
  • 60. PUT $ surge-put.sh Enter a key: your-key Enter a value: your-value
  • 61. PUT $ surge-put.sh Enter a key: your-key Enter a value: your-value Enter a write quorum value:
  • 62. PUT $ surge-put.sh Enter a key: your-key Enter a value: your-value Enter a write quorum value: * About to connect() to 127.0.0.1 port 8091 (#0) * Trying 127.0.0.1... connected * Connected to 127.0.0.1 (127.0.0.1) port 8091 (#0) > PUT /riak/surge/your-key?returnbody=true&w=quorum HTTP/1.1 > User-Agent: curl/7.21.4 (universal-apple-darwin11.0) libcurl/7.21.4 OpenSSL/0.9.8r zlib/1.2.5 > Host: 127.0.0.1:8091 > Accept: */* > Content-Type: text/plain > Content-Length: 10 >
  • 63. PUT $ surge-put.sh Enter a key: your-key Enter a value: your-value Enter a write quorum value: * About to connect() to 127.0.0.1 port 8091 (#0) * Trying 127.0.0.1... connected * Connected to 127.0.0.1 (127.0.0.1) port 8091 (#0) > PUT /riak/surge/your-key?returnbody=true&w=quorum HTTP/1.1 > User-Agent: curl/7.21.4 (universal-apple-darwin11.0) libcurl/7.21.4 OpenSSL/0.9.8r zlib/1.2.5 > Host: 127.0.0.1:8091 > Accept: */* > Content-Type: text/plain > Content-Length: 10 > < HTTP/1.1 200 OK < X-Riak-Vclock: a85hYGBgzGDKBVIcR4M2cvs1qb3LYEpkzGNleL/k23G+LAA= < Vary: Accept-Encoding < Server: MochiWeb/1.1 WebMachine/1.9.0 (participate in the frantic) < Link: </riak/surge>; rel="up" < Date: Tue, 27 Sep 2011 19:21:08 GMT < Content-Type: text/plain < Content-Length: 10 < * Connection #0 to host 127.0.0.1 left intact * Closing connection #0 your-value
  • 64. GET
  • 66. GET $ surge-get.sh Enter a key: your-key
  • 67. GET $ surge-get.sh Enter a key: your-key Enter a read quorum value (default: quorum):
  • 68. GET $ surge-get.sh Enter a key: your-key Enter a read quorum value (default: quorum): * About to connect() to 127.0.0.1 port 8091 (#0) * Trying 127.0.0.1... connected * Connected to 127.0.0.1 (127.0.0.1) port 8091 (#0) > GET /riak/surge/your-key?r=quorum HTTP/1.1 > User-Agent: curl/7.21.4 (universal-apple-darwin11.0) libcurl/7.21.4 OpenSSL/0.9.8r zlib/1.2.5 > Host: 127.0.0.1:8091 > Accept: */* >
  • 69. GET $ surge-get.sh Enter a key: your-key Enter a read quorum value (default: quorum): * About to connect() to 127.0.0.1 port 8091 (#0) * Trying 127.0.0.1... connected * Connected to 127.0.0.1 (127.0.0.1) port 8091 (#0) > GET /riak/surge/your-key?r=quorum HTTP/1.1 > User-Agent: curl/7.21.4 (universal-apple-darwin11.0) libcurl/7.21.4 OpenSSL/0.9.8r zlib/1.2.5 > Host: 127.0.0.1:8091 > Accept: */* > < HTTP/1.1 200 OK < X-Riak-Vclock: a85hYGBgzGDKBVIcR4M2cvs1qb3LYEpkzGNleL/k23G+LAA= < Vary: Accept-Encoding < Server: MochiWeb/1.1 WebMachine/1.9.0 (participate in the frantic) < Link: </riak/surge>; rel="up" < Last-Modified: Tue, 27 Sep 2011 19:01:45 GMT < ETag: "51h3q7RjTNaHWYpO4P0MJj" < Date: Tue, 27 Sep 2011 19:31:01 GMT < Content-Type: text/plain < Content-Length: 10 < * Connection #0 to host 127.0.0.1 left intact * Closing connection #0 your-value
• 70-72. GET (continued) • Callouts on the response above highlight the vector clock (X-Riak-Vclock), the caching headers (Last-Modified, ETag) and the content type (Content-Type)
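As with the PUT, the script most likely reduces to a single curl against the same resource; r=quorum is the per-request read quorum visible in the trace:

curl -v "http://127.0.0.1:8091/riak/surge/your-key?r=quorum"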
  • 76. DELETE $ surge-delete.sh Type a key: your-key * About to connect() to 127.0.0.1 port 8091 (#0) * Trying 127.0.0.1... connected * Connected to 127.0.0.1 (127.0.0.1) port 8091 (#0) > DELETE /riak/surge/your-key HTTP/1.1 > User-Agent: curl/7.21.4 (universal-apple-darwin11.0) libcurl/7.21.4 OpenSSL/0.9.8r zlib/1.2.5 > Host: 127.0.0.1:8091 > Accept: */* >
  • 77. DELETE $ surge-delete.sh Type a key: your-key * About to connect() to 127.0.0.1 port 8091 (#0) * Trying 127.0.0.1... connected * Connected to 127.0.0.1 (127.0.0.1) port 8091 (#0) > DELETE /riak/surge/your-key HTTP/1.1 > User-Agent: curl/7.21.4 (universal-apple-darwin11.0) libcurl/7.21.4 OpenSSL/0.9.8r zlib/1.2.5 > Host: 127.0.0.1:8091 > Accept: */* > < HTTP/1.1 204 No Content < Vary: Accept-Encoding < Server: MochiWeb/1.1 WebMachine/1.9.0 (participate in the frantic) < Date: Tue, 27 Sep 2011 19:33:59 GMT < Content-Type: text/plain < Content-Length: 0 < * Connection #0 to host 127.0.0.1 left intact * Closing connection #0
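The equivalent raw request is simply an HTTP DELETE on the key; a 204 No Content response, as above, means the delete was accepted:

curl -v -X DELETE "http://127.0.0.1:8091/riak/surge/your-key"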
  • 79. Kill two nodes $ dev4/bin/riak stop
  • 80. Kill two nodes $ dev4/bin/riak stop ok
  • 81. Kill two nodes $ dev4/bin/riak stop ok $ dev3/bin/riak stop
  • 82. Kill two nodes $ dev4/bin/riak stop ok $ dev3/bin/riak stop ok
  • 84. Try GETting your key $ . ~/Desktop/surge/surge-get.sh
  • 85. Try GETting your key $ . ~/Desktop/surge/surge-get.sh Enter a key: your-key
  • 86. Try GETting your key $ . ~/Desktop/surge/surge-get.sh Enter a key: your-key Enter a read quorum value (default: quorum):
  • 87. Try GETting your key $ . ~/Desktop/surge/surge-get.sh Enter a key: your-key Enter a read quorum value (default: quorum): * About to connect() to 127.0.0.1 port 8091 (#0) * Trying 127.0.0.1... connected * Connected to 127.0.0.1 (127.0.0.1) port 8091 (#0) > GET /riak/surge/your-key?r=quorum HTTP/1.1 > User-Agent: curl/7.21.4 (universal-apple-darwin11.0) libcurl/7.21.4 OpenSSL/0.9.8r zlib/1.2.5 > Host: 127.0.0.1:8091 > Accept: */* >
  • 88. Try GETting your key $ . ~/Desktop/surge/surge-get.sh Enter a key: your-key Enter a read quorum value (default: quorum): * About to connect() to 127.0.0.1 port 8091 (#0) * Trying 127.0.0.1... connected * Connected to 127.0.0.1 (127.0.0.1) port 8091 (#0) > GET /riak/surge/your-key?r=quorum HTTP/1.1 > User-Agent: curl/7.21.4 (universal-apple-darwin11.0) libcurl/7.21.4 OpenSSL/0.9.8r zlib/1.2.5 > Host: 127.0.0.1:8091 > Accept: */* > < HTTP/1.1 200 OK < X-Riak-Vclock: a85hYGBgzGDKBVIcR4M2cvs1qb3LYEpkzGNleL/k23G+LAA= < Vary: Accept-Encoding < Server: MochiWeb/1.1 WebMachine/1.9.0 (participate in the frantic) < Link: </riak/surge>; rel="up" < Last-Modified: Tue, 27 Sep 2011 19:50:07 GMT < ETag: "51h3q7RjTNaHWYpO4P0MJj" < Date: Tue, 27 Sep 2011 19:51:10 GMT < Content-Type: text/plain < Content-Length: 10 < * Connection #0 to host 127.0.0.1 left intact * Closing connection #0 your-value
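The read still succeeds because fallback vnodes answer on behalf of the two stopped nodes. If a request ever cannot assemble its quorum, R can also be relaxed per request; r=1, for example, returns as soon as any single replica responds:

curl -v "http://127.0.0.1:8091/riak/surge/your-key?r=1"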
  • 90. Key-Value Data Model
• 91-93. Key-Value Data Model • All data (objects) are referenced by keys • Keys are grouped into buckets • Simple operations: get, put, delete (diagram: a people bucket containing the keys mark, beth and sean, with bucket properties n_val, allow_mult and quora)
• 94. Key-Value Data Model • All data (objects) are referenced by keys • Keys are grouped into buckets • Simple operations: get, put, delete • Object is composed of metadata and value (diagram: the object people/beth, with metadata vclock, content-type: text/html, last-mod: 20101108T..., links, and an HTML value <html><head>...)
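As a concrete (hypothetical) example of the model in the diagram, the people/beth object could be written over HTTP like this; the bucket and key form the URL, the value is the request body, and Riak stores the content type and any links as metadata alongside it:

curl -X PUT \
     -H "Content-Type: text/html" \
     -d "<html><head>...</head></html>" \
     http://127.0.0.1:8091/riak/people/beth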
  • 95. Consistent Hashing & The Ring
• 96-98. Consistent Hashing & The Ring • 160-bit integer keyspace • Divided into a fixed number of evenly-sized partitions (32 in the diagram, with ring positions labeled 0, 2^160/4 and 2^160/2) • Partitions are claimed by nodes in the cluster (node 0 through node 3)
• 99-100. Consistent Hashing & The Ring • 160-bit integer keyspace • Divided into a fixed number of evenly-sized partitions • Partitions are claimed by nodes in the cluster • Replicas go to the N partitions following the key (diagram: N=3, with hash("conferences/surge") landing on the ring)
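A rough way to see the idea (an illustration only, not Riak's actual hashing code, which hashes an Erlang term of the bucket and key rather than this literal string): take the 160-bit SHA-1 of the bucket/key, and with 32 evenly-sized partitions the partition index is just the top 5 bits of that hash.

KEY="conferences/surge"
# 160-bit SHA-1 digest as hex
HASH_HEX=$(printf '%s' "$KEY" | openssl dgst -sha1 | awk '{print $NF}')
# 2^160 / 32 partitions => the top 5 bits pick the partition (0..31)
echo "partition index: $(( 0x${HASH_HEX:0:2} >> 3 ))"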
  • 102. Request Quorums • Every request contacts all replicas of key
  • 103. Request Quorums • Every request contacts all replicas of key • N - number of replicas (default 3)
  • 104. Request Quorums • Every request contacts all replicas of key • N - number of replicas (default 3) • R - read quorum
  • 105. Request Quorums • Every request contacts all replicas of key • N - number of replicas (default 3) • R - read quorum • W - write quorum
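These tunables are query parameters on each request, so different operations on the same bucket can trade consistency against latency independently; r and w accept a number or the symbolic values one, quorum and all, as seen in the earlier traces. For example:

# read that waits for all N replicas
curl "http://127.0.0.1:8091/riak/surge/your-key?r=all"

# write that only needs one replica to acknowledge
curl -X PUT -H "Content-Type: text/plain" -d "your-value" \
     "http://127.0.0.1:8091/riak/surge/your-key?w=1"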
  • 107. Vector Clocks • Every node has an ID (changed in 1.0)
  • 108. Vector Clocks • Every node has an ID (changed in 1.0) • Send last-seen vector clock in every “put” or “delete” request
  • 109. Vector Clocks • Every node has an ID (changed in 1.0) • Send last-seen vector clock in every “put” or “delete” request • Riak tracks history of updates
  • 110. Vector Clocks • Every node has an ID (changed in 1.0) • Send last-seen vector clock in every “put” or “delete” request • Riak tracks history of updates • Auto-resolves stale versions
  • 111. Vector Clocks • Every node has an ID (changed in 1.0) • Send last-seen vector clock in every “put” or “delete” request • Riak tracks history of updates • Auto-resolves stale versions • Lets you decide conflicts
  • 112. Vector Clocks http://blog.basho.com/2010/01/29/why-vector-clocks-are-easy/ • Every node has an ID (changed in 1.0) • Send last-seen vector clock in every “put” or “delete” request • Riak tracks history of updates • Auto-resolves stale versions • Lets you decide conflicts
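In practice this means a client should read before it writes and echo the X-Riak-Vclock header back on the update, so Riak can tell that the new value descends from the one the client saw. A minimal sketch with curl (the header name is taken from the GET trace earlier):

# 1. read the object and capture its vector clock
VCLOCK=$(curl -si "http://127.0.0.1:8091/riak/surge/your-key" \
         | awk '/^X-Riak-Vclock:/ {print $2}' | tr -d '\r')

# 2. write the modified value back, supplying the same vector clock
curl -X PUT -H "Content-Type: text/plain" \
     -H "X-Riak-Vclock: $VCLOCK" \
     -d "your-new-value" \
     "http://127.0.0.1:8091/riak/surge/your-key"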
• 114-115. Hinted Handoff • Node fails • Requests go to fallback (diagram: the failed node's partitions marked with X; hash("conferences/surge") is routed to a fallback vnode)
  • 116. Hinted Handoff • Node fails • Requests go to fallback • Node comes back hash(“conferences/surge”)
  • 117. Hinted Handoff • Node fails • Requests go to fallback • Node comes back • “Handoff” - data returns to recovered node hash(“conferences/surge”)
  • 118. Hinted Handoff • Node fails • Requests go to fallback • Node comes back • “Handoff” - data returns to recovered node • Normal operations resume hash(“conferences/surge”)
• 119-129. Anatomy of a Request: get("conferences/surge") • The client sends the request to any Riak node, which starts a Get Handler (FSM) • The handler hashes the bucket/key onto the ring to find the partitions holding the replicas (e.g. 10, 11, 12) • The coordinating node sends the get to those vnodes with R=2 • Replies come back (v1, v2); once R replies have arrived the handler resolves them and returns the winning value, v2, to the client
• 130-133. Read Repair • While answering get("conferences/surge"), the Get Handler notices that one replica on the ring still holds the stale v1 • After replying to the client with v2, the coordinating node writes v2 back to the out-of-date vnode, so all replicas converge on v2
  • 136. Riak Architecture Erlang/OTP Runtime Client APIs Riak KV
  • 137. Riak Architecture Erlang/OTP Runtime Client APIs HTTP Riak KV
  • 138. Riak Architecture Erlang/OTP Runtime Client APIs HTTP Protocol Buffers Riak KV
  • 139. Riak Architecture Erlang/OTP Runtime Client APIs HTTP Protocol Buffers Erlang local client Riak KV
  • 140. Riak Architecture Erlang/OTP Runtime Client APIs HTTP Protocol Buffers Erlang local client Request Coordination Riak KV
  • 141. Riak Architecture Erlang/OTP Runtime Client APIs HTTP Protocol Buffers Erlang local client Request Coordination get put delete map-reduce Riak KV
  • 142. Riak Architecture Erlang/OTP Runtime Client APIs HTTP Protocol Buffers Erlang local client Request Coordination get put delete map-reduce Riak Core Riak KV
  • 143. Riak Architecture Erlang/OTP Runtime Client APIs HTTP Protocol Buffers Erlang local client Request Coordination get put delete map-reduce consistent hashing Riak Core Riak KV
  • 144. Riak Architecture Erlang/OTP Runtime Client APIs HTTP Protocol Buffers Erlang local client Request Coordination get put delete map-reduce consistent hashing membership Riak Core Riak KV
  • 145. Riak Architecture Erlang/OTP Runtime Client APIs HTTP Protocol Buffers Erlang local client Request Coordination get put delete map-reduce consistent hashing handoff membership Riak Core Riak KV
  • 146. Riak Architecture Erlang/OTP Runtime Client APIs HTTP Protocol Buffers Erlang local client Request Coordination get put delete map-reduce consistent hashing handoff membership node-liveness Riak Core Riak KV
  • 147. Riak Architecture Erlang/OTP Runtime Client APIs HTTP Protocol Buffers Erlang local client Request Coordination get put delete map-reduce consistent hashing handoff gossip membership node-liveness Riak Core Riak KV
  • 148. Riak Architecture Erlang/OTP Runtime Client APIs HTTP Protocol Buffers Erlang local client Request Coordination get put delete map-reduce consistent hashing handoff gossip membership node-liveness buckets Riak Core Riak KV
  • 149. Riak Architecture Erlang/OTP Runtime Client APIs HTTP Protocol Buffers Erlang local client Request Coordination get put delete map-reduce consistent hashing handoff gossip membership node-liveness buckets Riak Core vnode master Riak KV
  • 150. Riak Architecture Erlang/OTP Runtime Client APIs HTTP Protocol Buffers Erlang local client Request Coordination get put delete map-reduce consistent hashing handoff gossip membership node-liveness buckets Riak Core vnode master vnodes Riak KV
  • 151. Riak Architecture Erlang/OTP Runtime Client APIs HTTP Protocol Buffers Erlang local client Request Coordination get put delete map-reduce consistent hashing handoff gossip membership node-liveness buckets Riak Core vnode master vnodes storage backend Riak KV
  • 152. Riak Architecture Erlang/OTP Runtime Client APIs HTTP Protocol Buffers Erlang local client Request Coordination get put delete map-reduce consistent hashing handoff gossip membership node-liveness buckets Riak Core vnode master vnodes Workers storage backend Riak KV
  • 155. Application Design • No intrinsic schema
  • 156. Application Design • No intrinsic schema • Your application defines the structure and semantics
  • 157. Application Design • No intrinsic schema • Your application defines the structure and semantics • Your application resolves conflicts (if you care)
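If you do want to see and resolve conflicts yourself, allow_mult must be turned on for the bucket (it is one of the bucket properties pictured in the data-model diagram earlier); with it enabled, concurrent writes are kept as siblings and handed back to the client to reconcile. Setting it over the HTTP interface looks roughly like this:

curl -X PUT -H "Content-Type: application/json" \
     -d '{"props":{"allow_mult":true}}' \
     http://127.0.0.1:8091/riak/surge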
  • 161. Modeling Tools • Key-Value • Links • Full-text Search
  • 162. Modeling Tools • Key-Value • Links • Full-text Search • Secondary Indexes (2I)
  • 163. Modeling Tools • Key-Value • Links • Full-text Search • Secondary Indexes (2I) • MapReduce
  • 167. Key-Value • Content-Types • Denormalize • Meaningful or “guessable” keys
  • 168. Key-Value • Content-Types • Denormalize • Meaningful or “guessable” keys • Composites
  • 169. Key-Value • Content-Types • Denormalize • Meaningful or “guessable” keys • Composites • Time-boxing
  • 170. Key-Value • Content-Types • Denormalize • Meaningful or “guessable” keys • Composites • Time-boxing • References (value is a key or list)
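For instance (key and bucket names here are purely illustrative), a composite, time-boxed key makes an object trivially "guessable" from data the application already has, with no lookup step:

# one object per user per month; the key can be reconstructed at read time
curl -X PUT -H "Content-Type: application/json" \
     -d '{"logins": 42}' \
     http://127.0.0.1:8091/riak/login_counts/sean-2011-09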
  • 171. Links
  • 173. Links • Lightweight relationships, like <a> • Includes a “tag”
  • 174. Links • Lightweight relationships, like <a> • Includes a “tag” • Built-in traversal op (“walking”) GET /riak/b/k/[bucket],[tag],[keep]
  • 175. Links • Lightweight relationships, like <a> • Includes a “tag” • Built-in traversal op (“walking”) GET /riak/b/k/[bucket],[tag],[keep] • Limited in number (part of meta)
  • 177. Full-text Search • Designed for searching prose
  • 178. Full-text Search • Designed for searching prose • Lucene/Solr-like query interface
  • 179. Full-text Search • Designed for searching prose • Lucene/Solr-like query interface • Automatically index K/V values
  • 180. Full-text Search • Designed for searching prose • Lucene/Solr-like query interface • Automatically index K/V values • Input to MapReduce
  • 181. Full-text Search • Designed for searching prose • Lucene/Solr-like query interface • Automatically index K/V values • Input to MapReduce • Customizable index schemas
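Assuming Riak Search is enabled in app.config, the workflow is roughly: install the indexing hook on a bucket so values are indexed as they are written, then query through the Solr-compatible HTTP endpoint. Treat the paths and options below as a sketch of the 1.0-era interface rather than a definitive reference; the field name in the query depends on your schema and content type:

# index everything subsequently written to the surge bucket
dev1/bin/search-cmd install surge

# Lucene-style query over the indexed values
curl "http://127.0.0.1:8091/solr/surge/select?q=field:word&wt=json"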
  • 184. Secondary Indexes • Defined as metadata • Two index types: _int and _bin
  • 185. Secondary Indexes • Defined as metadata • Two index types: _int and _bin • Two query types: equal and range
  • 186. Secondary Indexes • Defined as metadata • Two index types: _int and _bin • Two query types: equal and range • Input to MapReduce
  • 188. MapReduce • For more involved queries
  • 189. MapReduce • For more involved queries • Specify input keys
  • 190. MapReduce • For more involved queries • Specify input keys • Process data in “map” and “reduce” functions
  • 191. MapReduce • For more involved queries • Specify input keys • Process data in “map” and “reduce” functions • JavaScript or Erlang
  • 192. MapReduce • For more involved queries • Specify input keys • Process data in “map” and “reduce” functions • JavaScript or Erlang • Not tuned for batch processing
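To show the shape of a MapReduce request over HTTP: inputs can be explicit bucket/key pairs (as below) as well as 2I or search results, and each phase names a function plus the language it is written in. This sketch reuses the built-in Riak.mapValuesJson function from the 2I lab later in these slides, run against one of the JSON objects that lab creates:

curl -X POST -H "Content-Type: application/json" \
  -d '{"inputs": [["surge", "surgeobj1"]],
       "query": [{"map": {"name": "Riak.mapValuesJson",
                          "language": "javascript",
                          "keep": true}}]}' \
  http://127.0.0.1:8091/mapred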
  • 195. Lab: Walk Links • Store an object with a Link
  • 196. Lab: Walk Links • Store an object with a Link • Store the target of the Link
  • 197. Lab: Walk Links • Store an object with a Link • Store the target of the Link • Walk from one to the other
  • 199. Lab: Store Links $ ./store-linked.sh
  • 200. Lab: Store Links $ ./store-linked.sh Enter the origin key: sean Enter the target key: ian Enter the link's tag: coworker
  • 201. Lab: Store Links $ ./store-linked.sh Enter the origin key: sean Enter the target key: ian Enter the link's tag: coworker *** Storing the origin *** * About to connect() to localhost port 8091 (#0) * Trying 127.0.0.1... connected * Connected to localhost (127.0.0.1) port 8091 (#0) > PUT /riak/surge/sean HTTP/1.1 > Link: </riak/surge/ian>;riaktag="coworker" ... < HTTP/1.1 204 No Content ...
  • 205. Lab: Store Links $ ./store-linked.sh Enter the origin key: sean Enter the target key: ian Enter the link's tag: coworker *** Storing the origin *** * About to connect() to localhost port 8091 (#0) * Trying 127.0.0.1... connected * Connected to localhost (127.0.0.1) port 8091 (#0) > PUT /riak/surge/sean HTTP/1.1 > Link: </riak/surge/ian>;riaktag="coworker" ... < HTTP/1.1 204 No Content ... *** Storing the target *** * About to connect() to localhost port 8091 (#0) * Trying 127.0.0.1... connected * Connected to localhost (127.0.0.1) port 8091 (#0) > PUT /riak/surge/ian HTTP/1.1 ... < HTTP/1.1 204 No Content
  • 207. Lab: Walk Links (1/3)
  • 208. Lab: Walk Links (1/3) $ ./walk-linked.sh
  • 209. Lab: Walk Links (1/3) $ ./walk-linked.sh Enter the origin key: sean Enter the link spec (default: _,_,_):
  • 210. Lab: Walk Links (1/3) $ ./walk-linked.sh Enter the origin key: sean Enter the link spec (default: _,_,_): ... > GET /riak/surge/sean/_,_,_ HTTP/1.1 ...
  • 213. Lab: Walk Links (1/3) $ ./walk-linked.sh Enter the origin key: sean Enter the link spec (default: _,_,_): ... > GET /riak/surge/sean/_,_,_ HTTP/1.1 ... < HTTP/1.1 200 OK < Server: MochiWeb/1.1 WebMachine/1.9.0 (participate in the frantic) < Expires: Tue, 27 Sep 2011 20:51:37 GMT < Date: Tue, 27 Sep 2011 20:41:37 GMT < Content-Type: multipart/mixed; boundary=J6O2SZfLfSEdAv5dv3mttR7AVOF < Content-Length: 438 <
  • 215. Lab: Walk Links (1/3) $ ./walk-linked.sh Enter the origin key: sean Enter the link spec (default: _,_,_): ... > GET /riak/surge/sean/_,_,_ HTTP/1.1 ... < HTTP/1.1 200 OK < Server: MochiWeb/1.1 WebMachine/1.9.0 (participate in the frantic) < Expires: Tue, 27 Sep 2011 20:51:37 GMT < Date: Tue, 27 Sep 2011 20:41:37 GMT < Content-Type: multipart/mixed; boundary=J6O2SZfLfSEdAv5dv3mttR7AVOF < Content-Length: 438 < --J6O2SZfLfSEdAv5dv3mttR7AVOF Content-Type: multipart/mixed; boundary=C1NUMpwbmSvb7CbqEAy1KdYRiUH --C1NUMpwbmSvb7CbqEAy1KdYRiUH X-Riak-Vclock: a85hYGBgzGDKBVIcMRuuc/k1GU/KYEpkymNl0Nzw7ThfFgA= Location: /riak/surge/ian Content-Type: text/plain Link: </riak/surge>; rel="up" Etag: 51h3q7RjTNaHWYpO4P0MJj Last-Modified: Tue, 27 Sep 2011 20:38:01 GMT target --C1NUMpwbmSvb7CbqEAy1KdYRiUH-- --J6O2SZfLfSEdAv5dv3mttR7AVOF--
  • 217. Lab: Walk Links (2/3)
  • 218. Lab: Walk Links (2/3) Enter the origin key: sean Enter the link spec (default: _,_,_): surge,_,_
  • 219. Lab: Walk Links (2/3) Enter the origin key: sean Enter the link spec (default: _,_,_): surge,_,_ > GET /riak/surge/sean/surge,_,_ HTTP/1.1 ...
  • 220. Lab: Walk Links (2/3) Enter the origin key: sean Enter the link spec (default: _,_,_): surge,_,_ > GET /riak/surge/sean/surge,_,_ HTTP/1.1 ... > < HTTP/1.1 200 OK < Server: MochiWeb/1.1 WebMachine/1.9.0 (participate in the frantic) < Expires: Tue, 27 Sep 2011 20:52:53 GMT < Date: Tue, 27 Sep 2011 20:42:53 GMT < Content-Type: multipart/mixed; boundary=CsxMIsO9tfadrRJ7EQ3XL2ivQ4f < Content-Length: 438 < --CsxMIsO9tfadrRJ7EQ3XL2ivQ4f Content-Type: multipart/mixed; boundary=I3U33LkzkqJi6HtbwsU0qRd4k4y --I3U33LkzkqJi6HtbwsU0qRd4k4y X-Riak-Vclock: a85hYGBgzGDKBVIcMRuuc/k1GU/KYEpkymNl0Nzw7ThfFgA= Location: /riak/surge/ian Content-Type: text/plain Link: </riak/surge>; rel="up" Etag: 51h3q7RjTNaHWYpO4P0MJj Last-Modified: Tue, 27 Sep 2011 20:38:01 GMT target --I3U33LkzkqJi6HtbwsU0qRd4k4y-- --CsxMIsO9tfadrRJ7EQ3XL2ivQ4f--
  • 221. Lab: Walk Links (3/3)
  • 222. Lab: Walk Links (3/3) Enter the origin key: sean Enter the link spec (default: _,_,_): foo,_,_
  • 223. Lab: Walk Links (3/3) Enter the origin key: sean Enter the link spec (default: _,_,_): foo,_,_ ... > GET /riak/surge/sean/foo,_,_ HTTP/1.1 < HTTP/1.1 200 OK < Server: MochiWeb/1.1 WebMachine/1.9.0 (participate in the frantic) < Expires: Tue, 27 Sep 2011 20:53:08 GMT < Date: Tue, 27 Sep 2011 20:43:08 GMT < Content-Type: multipart/mixed; boundary=3AsxaHhVMlDEQCakLSmNqQUTS4Y < Content-Length: 172 < --3AsxaHhVMlDEQCakLSmNqQUTS4Y Content-Type: multipart/mixed; boundary=Ivtq9RgHpydECNEZOnHpiqHcFYl --Ivtq9RgHpydECNEZOnHpiqHcFYl-- --3AsxaHhVMlDEQCakLSmNqQUTS4Y--
  • 224. Lab: Secondary Indexes
  • 225. Lab: Secondary Indexes • Create some objects with indexes
  • 226. Lab: Secondary Indexes • Create some objects with indexes • Query the index for their keys
  • 227. Lab: Secondary Indexes • Create some objects with indexes • Query the index for their keys • Query input to MapReduce
  • 228. Lab: Create 2I Objects
  • 229. Lab: Create 2I Objects $ ./store-indexed.sh
  • 230. Lab: Create 2I Objects $ ./store-indexed.sh Enter the number of objects to create: 40
  • 231. Lab: Create 2I Objects $ ./store-indexed.sh Enter the number of objects to create: 40 * About to connect() to localhost port 8091 (#0) * Trying 127.0.0.1... connected * Connected to localhost (127.0.0.1) port 8091 (#0) > PUT /riak/surge/surgeobj1?returnbody=true HTTP/1.1 > User-Agent: curl/7.19.7 (universal-apple-darwin10.0) libcurl/7.19.7 OpenSSL/0.9.8r zlib/1.2.3 > Host: localhost:8091 > Accept: */* > Content-Type: application/json > X-Riak-Index-surgeobj_int: 1 > Content-Length: 14 > < HTTP/1.1 200 OK < X-Riak-Vclock: a85hYGBgzGDKBVIcR4M2cvs1uZVkMCUy5rEyHDn57ThfFgA= < x-riak-index-surgeobj_int: 1 < Vary: Accept-Encoding < Server: MochiWeb/1.1 WebMachine/1.9.0 (participate in the frantic) < Link: </riak/surge>; rel="up" < Date: Tue, 27 Sep 2011 22:27:16 GMT < Content-Type: application/json < Content-Length: 14 < * Connection #0 to host localhost left intact * Closing connection #0 {"surgeobj":1} ...
• 232-233. Lab: Create 2I Objects (continued) • Callouts on the request above highlight the key (surgeobj1 in the PUT URL) and the index header (X-Riak-Index-surgeobj_int: 1)
  • 234. Lab: Query 2I (1/2)
  • 235. Lab: Query 2I (1/2) $ ./query-indexed.sh
  • 236. Lab: Query 2I (1/2) $ ./query-indexed.sh Query on range or equality? (r/e) e Index value: 10
  • 237. Lab: Query 2I (1/2) $ ./query-indexed.sh Query on range or equality? (r/e) e Index value: 10 * About to connect() to localhost port 8091 (#0) * Trying 127.0.0.1... connected * Connected to localhost (127.0.0.1) port 8091 (#0) > GET /buckets/surge/index/surgeobj_int/10 HTTP/1.1 > User-Agent: curl/7.19.7 (universal-apple-darwin10.0) libcurl/7.19.7 OpenSSL/0.9.8r zlib/1.2.3 > Host: localhost:8091 > Accept: */* > < HTTP/1.1 200 OK < Vary: Accept-Encoding < Server: MochiWeb/1.1 WebMachine/1.9.0 (participate in the frantic) < Date: Tue, 27 Sep 2011 22:27:27 GMT < Content-Type: application/json < Content-Length: 23 < * Connection #0 to host localhost left intact * Closing connection #0 {"keys":["surgeobj10"]}
  • 238. Lab: Query 2I (2/2)
  • 239. Lab: Query 2I (2/2) Query on range or equality? (r/e) r Range start: 5 Range end: 15
  • 240. Lab: Query 2I (2/2) Query on range or equality? (r/e) r Range start: 5 Range end: 15 * About to connect() to localhost port 8091 (#0) * Trying 127.0.0.1... connected * Connected to localhost (127.0.0.1) port 8091 (#0) > GET /buckets/surge/index/surgeobj_int/5/15 HTTP/1.1 > User-Agent: curl/7.19.7 (universal-apple-darwin10.0) libcurl/7.19.7 OpenSSL/0.9.8r zlib/1.2.3 > Host: localhost:8091 > Accept: */* > < HTTP/1.1 200 OK < Vary: Accept-Encoding < Server: MochiWeb/1.1 WebMachine/1.9.0 (participate in the frantic) < Date: Tue, 27 Sep 2011 22:27:36 GMT < Content-Type: application/json < Content-Length: 148 < * Connection #0 to host localhost left intact * Closing connection #0 {"keys": ["surgeobj12","surgeobj15","surgeobj6","surgeobj5","surgeobj13","surgeob j8","surgeobj11","surgeobj14","surgeobj9","surgeobj10","surgeobj7"]}
  • 242. Lab: 2I MapReduce $ ./mapred-indexed.sh
  • 243. Lab: 2I MapReduce $ ./mapred-indexed.sh Query on range or equality? (r/e) r Range start: 35 Range end: 40
  • 244. Lab: 2I MapReduce $ ./mapred-indexed.sh Query on range or equality? (r/e) r Range start: 35 Range end: 40 Executing MapReduce: { "inputs":{ "bucket":"surge", "index":"surgeobj_int" ,"start":35,"end":40}, "query":[ {"map": {"name":"Riak.mapValuesJson","language":"javascript","keep":true}} ]}
  • 245. Lab: 2I MapReduce $ ./mapred-indexed.sh Query on range or equality? (r/e) r Range start: 35 Range end: 40 Executing MapReduce: { "inputs":{ "bucket":"surge", "index":"surgeobj_int" ,"start":35,"end":40}, "query":[ {"map": {"name":"Riak.mapValuesJson","language":"javascript","keep":true}} ]} > POST /mapred HTTP/1.1 > User-Agent: curl/7.19.7 (universal-apple-darwin10.0) libcurl/7.19.7 OpenSSL/ 0.9.8r zlib/1.2.3 > Host: localhost:8091 > Accept: */* > Content-Type: application/json > Content-Length: 164 > < HTTP/1.1 200 OK < Server: MochiWeb/1.1 WebMachine/1.9.0 (participate in the frantic) < Date: Wed, 28 Sep 2011 01:41:53 GMT < Content-Type: application/json < Content-Length: 97 < * Connection #0 to host localhost left intact * Closing connection #0 [{"surgeobj":35},{"surgeobj":40},{"surgeobj":36},{"surgeobj":37},{"surgeobj": 39},{"surgeobj":38}]
• 251-260. File Locations • Configuration: /etc/riak • Binaries: /usr/sbin/riak, /usr/sbin/riak-admin, /usr/[lib,lib64]/riak • Logs: /var/log/riak • Data: /var/lib/riak • Handles / Temp Files: /tmp/riak
  • 263. /etc/riak/vm.args • Overview • Erlang VM configuration settings.
  • 264. /etc/riak/vm.args • Overview • Erlang VM configuration settings. • Important Settings
  • 265. /etc/riak/vm.args • Overview • Erlang VM configuration settings. • Important Settings • Node name.
  • 266. /etc/riak/vm.args • Overview • Erlang VM configuration settings. • Important Settings • Node name. • Security cookie.
  • 267. /etc/riak/vm.args • Overview • Erlang VM configuration settings. • Important Settings • Node name. • Security cookie. • Documentation
  • 268. /etc/riak/vm.args • Overview • Erlang VM configuration settings. • Important Settings • Node name. • Security cookie. • Documentation • http://www.erlang.org/doc/man/erl.html
  • 270. /etc/riak/vm.args • Node Name & Security Cookie ## Name of the riak node
  • 271. /etc/riak/vm.args • Node Name & Security Cookie ## Name of the riak node -name riak@127.0.0.1
  • 272. /etc/riak/vm.args • Node Name & Security Cookie ## Name of the riak node -name riak@127.0.0.1 ## Cookie for distributed erlang
  • 273. /etc/riak/vm.args • Node Name & Security Cookie ## Name of the riak node -name riak@127.0.0.1 ## Cookie for distributed erlang -setcookie riak
  • 275. /etc/riak/ app.config • Overview
  • 276. /etc/riak/ app.config • Overview • Riak (and dependency) configuration settings.
  • 277. /etc/riak/ app.config • Overview • Riak (and dependency) configuration settings. • Important Settings
  • 278. /etc/riak/ app.config • Overview • Riak (and dependency) configuration settings. • Important Settings • Ports
  • 279. /etc/riak/ app.config • Overview • Riak (and dependency) configuration settings. • Important Settings • Ports • Directories
  • 280. /etc/riak/ app.config • Overview • Riak (and dependency) configuration settings. • Important Settings • Ports • Directories • Number of Partitions
  • 281. /etc/riak/ app.config • Overview • Riak (and dependency) configuration settings. • Important Settings • Ports • Directories • Number of Partitions • Storage Backend
  • 282. /etc/riak/ app.config • Overview • Riak (and dependency) configuration settings. • Important Settings • Ports • Directories • Number of Partitions • Storage Backend • Documentation
  • 283. /etc/riak/ app.config • Overview • Riak (and dependency) configuration settings. • Important Settings • Ports • Directories • Number of Partitions • Storage Backend • Documentation • http://wiki.basho.com/Configuration-Files.html
  • 284. /etc/riak/ app.config • Overview • Riak (and dependency) configuration settings. • Important Settings • Ports • Directories • Number of Partitions • Storage Backend • Documentation • http://wiki.basho.com/Configuration-Files.html • Inline comments
  • 286. /etc/riak/ app.config • Storage Engine %% Storage_backend specifies the Erlang module defining the storage
  • 287. /etc/riak/ app.config • Storage Engine %% Storage_backend specifies the Erlang module defining the storage %% mechanism that will be used on this node.
  • 288. /etc/riak/ app.config • Storage Engine %% Storage_backend specifies the Erlang module defining the storage %% mechanism that will be used on this node. {storage_backend, riak_kv_bitcask_backend},
  • 289. /etc/riak/ app.config • Storage Engine %% Storage_backend specifies the Erlang module defining the storage %% mechanism that will be used on this node. {storage_backend, riak_kv_bitcask_backend}, • About Storage Engines
  • 290. /etc/riak/ app.config • Storage Engine %% Storage_backend specifies the Erlang module defining the storage %% mechanism that will be used on this node. {storage_backend, riak_kv_bitcask_backend}, • About Storage Engines • Riak has pluggable storage engines. (Bitcask, Innostore, LevelDB)
  • 291. /etc/riak/ app.config • Storage Engine %% Storage_backend specifies the Erlang module defining the storage %% mechanism that will be used on this node. {storage_backend, riak_kv_bitcask_backend}, • About Storage Engines • Riak has pluggable storage engines. (Bitcask, Innostore, LevelDB) • Different engines have different tradeoffs.
  • 292. /etc/riak/ app.config • Storage Engine %% Storage_backend specifies the Erlang module defining the storage %% mechanism that will be used on this node. {storage_backend, riak_kv_bitcask_backend}, • About Storage Engines • Riak has pluggable storage engines. (Bitcask, Innostore, LevelDB) • Different engines have different tradeoffs. • Engine selection depends on shape of data.
  • 293. /etc/riak/ app.config • Storage Engine %% Storage_backend specifies the Erlang module defining the storage %% mechanism that will be used on this node. {storage_backend, riak_kv_bitcask_backend}, • About Storage Engines • Riak has pluggable storage engines. (Bitcask, Innostore, LevelDB) • Different engines have different tradeoffs. • Engine selection depends on shape of data. • More on this later.
• 294. /etc/riak/ app.config %% Bitcask Config {bitcask, [ {data_root, "data/bitcask"} ]}, {eleveldb, [ {data_root, "data/leveldb"} ]},
• 295. /etc/riak/ app.config • File Locations %% Bitcask Config {bitcask, [ {data_root, "data/bitcask"} ]}, {eleveldb, [ {data_root, "data/leveldb"} ]},
  • 300. /etc/riak/ app.config %% http is a list of IP addresses and TCP ports that the Riak %% HTTP interface will bind. {http, [ {"127.0.0.1", 8091 } ]}, %% pb_ip is the IP address that the Riak Protocol Buffers interface %% will bind to. If this is undefined, the interface will not run. {pb_ip, "127.0.0.1" }, %% pb_port is the TCP port that the Riak Protocol Buffers interface %% will bind to {pb_port, 8081 },
  • 301. /etc/riak/ app.config • HTTP & Protocol Buffers %% http is a list of IP addresses and TCP ports that the Riak %% HTTP interface will bind. {http, [ {"127.0.0.1", 8091 } ]}, %% pb_ip is the IP address that the Riak Protocol Buffers interface %% will bind to. If this is undefined, the interface will not run. {pb_ip, "127.0.0.1" }, %% pb_port is the TCP port that the Riak Protocol Buffers interface %% will bind to {pb_port, 8081 },
  • 303. Securing Riak • Overview
  • 304. Securing Riak • Overview • Erlang has a “cookie” for “security.”
  • 305. Securing Riak • Overview • Erlang has a “cookie” for “security.” • Do not rely on it. Plain text.
  • 306. Securing Riak • Overview • Erlang has a “cookie” for “security.” • Do not rely on it. Plain text. • Riak assumes internal environment is trusted.
  • 307. Securing Riak • Overview • Erlang has a “cookie” for “security.” • Do not rely on it. Plain text. • Riak assumes internal environment is trusted. • Securing the HTTP Interface
  • 308. Securing Riak • Overview • Erlang has a “cookie” for “security.” • Do not rely on it. Plain text. • Riak assumes internal environment is trusted. • Securing the HTTP Interface • HTTP Auth via Proxy
  • 309. Securing Riak • Overview • Erlang has a “cookie” for “security.” • Do not rely on it. Plain text. • Riak assumes internal environment is trusted. • Securing the HTTP Interface • HTTP Auth via Proxy • Does support SSL
  • 310. Securing Riak • Overview • Erlang has a “cookie” for “security.” • Do not rely on it. Plain text. • Riak assumes internal environment is trusted. • Securing the HTTP Interface • HTTP Auth via Proxy • Does support SSL • Securing Protocol Buffers
  • 311. Securing Riak • Overview • Erlang has a “cookie” for “security.” • Do not rely on it. Plain text. • Riak assumes internal environment is trusted. • Securing the HTTP Interface • HTTP Auth via Proxy • Does support SSL • Securing Protocol Buffers • No security.
  • 313. Load Balancing • Overview
  • 314. Load Balancing • Overview • Any node can handle any request.
  • 315. Load Balancing • Overview • Any node can handle any request. • Makes load balancing easy.
  • 316. Load Balancing • Overview • Any node can handle any request. • Makes load balancing easy. • Load Balancing HTTP
  • 317. Load Balancing • Overview • Any node can handle any request. • Makes load balancing easy. • Load Balancing HTTP • HAProxy, nginx, etc...
  • 318. Load Balancing • Overview • Any node can handle any request. • Makes load balancing easy. • Load Balancing HTTP • HAProxy, nginx, etc... • Load Balancing Protocol Buffers
  • 319. Load Balancing • Overview • Any node can handle any request. • Makes load balancing easy. • Load Balancing HTTP • HAProxy, nginx, etc... • Load Balancing Protocol Buffers • Any TCP Load Balancer
  • 320. Load Balancing • Overview • Any node can handle any request. • Makes load balancing easy. • Load Balancing HTTP • HAProxy, nginx, etc... • Load Balancing Protocol Buffers • Any TCP Load Balancer • In General
  • 321. Load Balancing • Overview • Any node can handle any request. • Makes load balancing easy. • Load Balancing HTTP • HAProxy, nginx, etc... • Load Balancing Protocol Buffers • Any TCP Load Balancer • In General • Use “least connected” strategy.
  • 323. There are 5 to choose from
  • 324. There are 5 to choose from • Bitcask
  • 325. There are 5 to choose from • Bitcask • Innostore
  • 326. There are 5 to choose from • Bitcask • Innostore • LevelDB
  • 327. There are 5 to choose from • Bitcask • Innostore • LevelDB • Memory
  • 328. There are 5 to choose from • Bitcask • Innostore • LevelDB • Memory • Multi
  • 330. Bitcask • A fast, append-only key-value store
  • 331. Bitcask • A fast, append-only key-value store • In memory key lookup table (key_dir)
  • 332. Bitcask • A fast, append-only key-value store • In memory key lookup table (key_dir) • Closed files are immutable
  • 333. Bitcask • A fast, append-only key-value store • In memory key lookup table (key_dir) • Closed files are immutable • Merging cleans up old data
  • 334. Bitcask • A fast, append-only key-value store • In memory key lookup table (key_dir) • Closed files are immutable • Merging cleans up old data • Apache 2 license
  • 336. Innostore • Based on Embedded InnoDB
  • 337. Innostore • Based on Embedded InnoDB • Write ahead log plus B-tree storage
• 338. Innostore • Based on Embedded InnoDB • Write ahead log plus B-tree storage • Similar characteristics to the MySQL InnoDB plugin
• 339. Innostore • Based on Embedded InnoDB • Write ahead log plus B-tree storage • Similar characteristics to the MySQL InnoDB plugin • Manages which pages of the B-trees are in memory
• 340. Innostore • Based on Embedded InnoDB • Write ahead log plus B-tree storage • Similar characteristics to the MySQL InnoDB plugin • Manages which pages of the B-trees are in memory • GPL license
  • 342. LevelDB • A Google developed key-value store
  • 343. LevelDB • A Google developed key-value store • Append-only
  • 344. LevelDB • A Google developed key-value store • Append-only • Multiple levels of SSTable-like structures
  • 345. LevelDB • A Google developed key-value store • Append-only • Multiple levels of SSTable-like structures • BSD license
  • 346. Cluster Capacity Planning
  • 348. Cluster Capacity Planning • Overview
  • 349. Cluster Capacity Planning • Overview • There are MANY variables that affect performance.
  • 350. Cluster Capacity Planning • Overview • There are MANY variables that affect performance. • Safest route is to simulate load and see what happens.
  • 351. Cluster Capacity Planning • Overview • There are MANY variables that affect performance. • Safest route is to simulate load and see what happens. • basho_bench can help
  • 352. Cluster Capacity Planning • Overview • There are MANY variables that affect performance. • Safest route is to simulate load and see what happens. • basho_bench can help • Documentation
• 353. Cluster Capacity Planning • Overview • There are MANY variables that affect performance. • Safest route is to simulate load and see what happens. • basho_bench can help • Documentation • http://wiki.basho.com/Cluster-Capacity-Planning.html
• 354. Cluster Capacity Planning • Overview • There are MANY variables that affect performance. • Safest route is to simulate load and see what happens. • basho_bench can help • Documentation • http://wiki.basho.com/Cluster-Capacity-Planning.html • http://wiki.basho.com/System-Requirements.html
  • 356. Things to Consider (1/3) • Virtualization
  • 357. Things to Consider (1/3) • Virtualization • Riak’s workload is I/O bound
  • 358. Things to Consider (1/3) • Virtualization • Riak’s workload is I/O bound • Run on bare metal for best performance.
  • 359. Things to Consider (1/3) • Virtualization • Riak’s workload is I/O bound • Run on bare metal for best performance. • Virtualize for other factors (agility or cost)
  • 360. Things to Consider (1/3) • Virtualization • Riak’s workload is I/O bound • Run on bare metal for best performance. • Virtualize for other factors (agility or cost) • Object size
  • 361. Things to Consider (1/3) • Virtualization • Riak’s workload is I/O bound • Run on bare metal for best performance. • Virtualize for other factors (agility or cost) • Object size • Too small means unnecessary disk and network overhead
  • 362. Things to Consider (1/3) • Virtualization • Riak’s workload is I/O bound • Run on bare metal for best performance. • Virtualize for other factors (agility or cost) • Object size • Too small means unnecessary disk and network overhead • Too large means chunky reads and writes.
  • 363. Things to Consider (1/3) • Virtualization • Riak’s workload is I/O bound • Run on bare metal for best performance. • Virtualize for other factors (agility or cost) • Object size • Too small means unnecessary disk and network overhead • Too large means chunky reads and writes. • Read/Write Ratio
  • 364. Things to Consider (1/3) • Virtualization • Riak’s workload is I/O bound • Run on bare metal for best performance. • Virtualize for other factors (agility or cost) • Object size • Too small means unnecessary disk and network overhead • Too large means chunky reads and writes. • Read/Write Ratio • Riak is not meant for dumping many tiny datapoints.
  • 366. Things to Consider (2/3) • “Randomness” of Reads / Writes
  • 367. Things to Consider (2/3) • “Randomness” of Reads / Writes • Recently accessed objects are cached in memory.
  • 368. Things to Consider (2/3) • “Randomness” of Reads / Writes • Recently accessed objects are cached in memory. • A small working set of objects will be fast.
  • 369. Things to Consider (2/3) • “Randomness” of Reads / Writes • Recently accessed objects are cached in memory. • A small working set of objects will be fast. • Amount of RAM
  • 370. Things to Consider (2/3) • “Randomness” of Reads / Writes • Recently accessed objects are cached in memory. • A small working set of objects will be fast. • Amount of RAM • Expands the size of cached working set, lowers latency.
• 371-375. Things to Consider (2/3) • "Randomness" of Reads / Writes • Recently accessed objects are cached in memory. • A small working set of objects will be fast. • Amount of RAM • Expands the size of the cached working set, lowers latency. • Disk Speed • Faster disk means lower read/write latency. • Number of Processors • More processors == more Map/Reduce throughput. • Faster processors == lower Map/Reduce latency.
  • 377. Things to Consider (3/3) • Features
  • 378. Things to Consider (3/3) • Features • RiakKV vs. Riak 2I vs. Riak Search
  • 379. Things to Consider (3/3) • Features • RiakKV vs. Riak 2I vs. Riak Search • Backends
  • 380. Things to Consider (3/3) • Features • RiakKV vs. Riak 2I vs. Riak Search • Backends • Bitcask vs. Innostore vs. eLevelDB
  • 381. Things to Consider (3/3) • Features • RiakKV vs. Riak 2I vs. Riak Search • Backends • Bitcask vs. Innostore vs. eLevelDB • Space/Time tradeoff
  • 382. Things to Consider (3/3) • Features • RiakKV vs. Riak 2I vs. Riak Search • Backends • Bitcask vs. Innostore vs. eLevelDB • Space/Time tradeoff • Partitions
  • 383. Things to Consider (3/3) • Features • RiakKV vs. Riak 2I vs. Riak Search • Backends • Bitcask vs. Innostore vs. eLevelDB • Space/Time tradeoff • Partitions • Depends on eventual number of boxes.
  • 384. Things to Consider (3/3) • Features • RiakKV vs. Riak 2I vs. Riak Search • Backends • Bitcask vs. Innostore vs. eLevelDB • Space/Time tradeoff • Partitions • Depends on eventual number of boxes. • Protocol
  • 385. Things to Consider (3/3) • Features • RiakKV vs. Riak 2I vs. Riak Search • Backends • Bitcask vs. Innostore vs. eLevelDB • Space/Time tradeoff • Partitions • Depends on eventual number of boxes. • Protocol • HTTP vs. Protocol Buffers
  • 386. Things to Consider (3/3) • Features • RiakKV vs. Riak 2I vs. Riak Search • Backends • Bitcask vs. Innostore vs. eLevelDB • Space/Time tradeoff • Partitions • Depends on eventual number of boxes. • Protocol • HTTP vs. Protocol Buffers • All clients support HTTP, some support PB.
  • 388. Performance Tuning • General Tips
  • 389. Performance Tuning • General Tips • Start with a “DB” Like Machine Profile
  • 390. Performance Tuning • General Tips • Start with a “DB” Like Machine Profile • Multi-core 64-bit CPUs
  • 391. Performance Tuning • General Tips • Start with a “DB” Like Machine Profile • Multi-core 64-bit CPUs • The More RAM the Better
  • 392. Performance Tuning • General Tips • Start with a “DB” Like Machine Profile • Multi-core 64-bit CPUs • The More RAM the Better • Fast Disk
  • 393. Performance Tuning • General Tips • Start with a “DB” Like Machine Profile • Multi-core 64-bit CPUs • The More RAM the Better • Fast Disk • Use noatime mounts
  • 394. Performance Tuning • General Tips • Start with a “DB” Like Machine Profile • Multi-core 64-bit CPUs • The More RAM the Better • Fast Disk • Use noatime mounts • Limit List Keys and Full Bucket MapReduce
  • 395. Performance Tuning • General Tips • Start with a “DB” Like Machine Profile • Multi-core 64-bit CPUs • The More RAM the Better • Fast Disk • Use noatime mounts • Limit List Keys and Full Bucket MapReduce • Benchmark in Advance
  • 396. Performance Tuning • General Tips • Start with a “DB” Like Machine Profile • Multi-core 64-bit CPUs • The More RAM the Better • Fast Disk • Use noatime mounts • Limit List Keys and Full Bucket MapReduce • Benchmark in Advance • Graph Everything
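For the noatime point specifically, a typical change (device and mount point below are placeholders for wherever your Riak data partition actually lives) is to add noatime to the mount options in /etc/fstab and remount:

# example /etc/fstab entry for the data volume
# /dev/sdb1  /var/lib/riak  ext4  defaults,noatime  0 0

mount -o remount,noatime /var/lib/riak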
  • 398. Performance Tuning • Bitcask Backend
  • 399. Performance Tuning • Bitcask Backend • KeyDir is in Memory
  • 400. Performance Tuning • Bitcask Backend • KeyDir is in Memory • Filesystem Cache (Access Profile Considerations)
  • 401. Performance Tuning • Bitcask Backend • KeyDir is in Memory • Filesystem Cache (Access Profile Considerations) • Merge Trigger Settings
  • 402. Performance Tuning • Bitcask Backend • KeyDir is in Memory • Filesystem Cache (Access Profile Considerations) • Merge Trigger Settings • Scheduling Bitcask Merges
  • 403. Performance Tuning • Bitcask Backend • KeyDir is in Memory • Filesystem Cache (Access Profile Considerations) • Merge Trigger Settings • Scheduling Bitcask Merges • Key Expiration
  • 404. Performance Tuning • Bitcask Backend • KeyDir is in Memory • Filesystem Cache (Access Profile Considerations) • Merge Trigger Settings • Scheduling Bitcask Merges • Key Expiration • Documentation
  • 405. Performance Tuning • Bitcask Backend • KeyDir is in Memory • Filesystem Cache (Access Profile Considerations) • Merge Trigger Settings • Scheduling Bitcask Merges • Key Expiration • Documentation • http://wiki.basho.com/Bitcask.html
  • 406. Performance Tuning • Bitcask Backend • KeyDir is in Memory • Filesystem Cache (Access Profile Considerations) • Merge Trigger Settings • Scheduling Bitcask Merges • Key Expiration • Documentation • http://wiki.basho.com/Bitcask.html • http://wiki.basho.com/Bitcask-Capacity-Planning.html
  • 408. Performance Tuning • Innostore Backend
  • 409. Performance Tuning • Innostore Backend • Similar to MySQL + InnoDB
  • 410. Performance Tuning • Innostore Backend • Similar to MySQL + InnoDB • buffer pool size
  • 411. Performance Tuning • Innostore Backend • Similar to MySQL + InnoDB • buffer pool size • o_direct
  • 412. Performance Tuning • Innostore Backend • Similar to MySQL + InnoDB • buffer pool size • o_direct • log_files_in_groups
  • 413. Performance Tuning • Innostore Backend • Similar to MySQL + InnoDB • buffer pool size • o_direct • log_files_in_groups • Separate Spindles for Log and Data
  • 414. Performance Tuning • Innostore Backend • Similar to MySQL + InnoDB • buffer pool size • o_direct • log_files_in_groups • Separate Spindles for Log and Data • Documentation
  • 415. Performance Tuning • Innostore Backend • Similar to MySQL + InnoDB • buffer pool size • o_direct • log_files_in_groups • Separate Spindles for Log and Data • Documentation • http://wiki.basho.com/Setting-Up-Innostore.html
• 416. Performance Tuning • Innostore Backend • Similar to MySQL + InnoDB • buffer pool size • o_direct • log_files_in_groups • Separate Spindles for Log and Data • Documentation • http://wiki.basho.com/Setting-Up-Innostore.html • http://wiki.basho.com/Innostore-Configuration-and-Tuning.html
basho_bench
• What Does It Do?
 • Generates load against a Riak cluster. Logs and graphs results.
 • Intended for cluster capacity planning.
• Major Parameters
 • Test Harness / Key Generator / Value Generator (a minimal driver skeleton is sketched after the results)
 • Read / write / update mix.
 • Number of workers.
 • Worker rate. (X per second vs. as fast as possible)
• Documentation
 • http://wiki.basho.com/Benchmarking-with-Basho-Bench.html
basho_bench: Configuration

{mode, max}.
{duration, 20}.
{concurrent, 3}.
{driver, basho_bench_driver_http_raw}.
{code_paths, ["deps/ibrowse"]}.
{key_generator, {int_to_bin, {uniform_int, 50000000}}}.
{value_generator, {fixed_bin, 5000}}.
{http_raw_port, 8098}.
{operations, [{insert, 1}]}.
{source_dir, "foo"}.
basho_bench: Results

elapsed, window, total, successful, failed
10, 10, 10747, 10747, 0
20, 10, 13295, 13295, 0
30, 10, 11840, 11840, 0
40, 10, 1688, 1688, 0
50, 10, 10675, 10675, 0
60, 10, 13599, 13599, 0
70, 10, 7747, 7747, 0
80, 10, 4089, 4089, 0
90, 10, 13851, 13851, 0
100, 10, 13374, 13374, 0
110, 10, 2293, 2293, 0
120, 10, 9813, 9813, 0
130, 10, 13860, 13860, 0
140, 10, 8333, 8333, 0
150, 10, 3592, 3592, 0
160, 10, 14104, 14104, 0
170, 10, 13834, 13834, 0
...snip...
basho_bench: Results (graph)
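The "Test Harness" parameter above is just a driver module, like the basho_bench_driver_http_raw named in the configuration slide. As a rough illustration of the shape drivers take (as I recall the convention: new/1 sets up per-worker state, run/4 performs one named operation), here is a do-nothing skeleton; the module name and body are hypothetical:

-module(basho_bench_driver_noop).
-export([new/1, run/4]).

%% Called once per concurrent worker; the returned term is that worker's state.
new(_Id) ->
    {ok, undefined}.

%% Called for each operation named in the {operations, [...]} list.
%% KeyGen and ValueGen are zero-arity funs built from the config's
%% key_generator / value_generator settings.
run(insert, KeyGen, ValueGen, State) ->
    _Key = KeyGen(),
    _Value = ValueGen(),
    %% A real driver would issue an HTTP or protocol buffers request to Riak here.
    {ok, State};
run(_Other, _KeyGen, _ValueGen, State) ->
    {ok, State}.

Pointing {driver, basho_bench_driver_noop} at this module in a config like the one above would exercise the harness itself without touching a Riak cluster.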
Lager
• Notes
 • Basho’s replacement for Erlang logging tools
 • It plays nice with your tools (a config sketch follows)
 • /var/log/riak/console.log
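To make "plays nice with your tools" concrete: lager writes ordinary log files through pluggable backends, so grep, logrotate, and friends all work. A minimal sketch of an app.config lager section and a log call, assuming lager's usual console/file handler format (illustrative, not Riak's shipped configuration):

{lager, [
  {handlers, [
    {lager_console_backend, info},
    {lager_file_backend, [
      {"/var/log/riak/error.log", error},
      {"/var/log/riak/console.log", info}
    ]}
  ]}
]}

%% In application code, with {parse_transform, lager_transform} enabled:
%% lager:info("waiting for ~p partitions to hand off", [Count]).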

Editor's notes

9-15. Think of it like a big hash-table
16-18. X = throughput, compute power for MapReduce, storage, lower latency
33-37. Consistent hashing means: 1) a large, fixed-size key-space; 2) no rehashing of keys - always hash the same way (sketched below)
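A tiny Erlang sketch of that idea - a fixed 2^160 ring, keys always hashed the same way, ownership found by integer division. The module is hypothetical and only stands in for riak_core's real chash:

-module(ring_sketch).
-export([partition_index/3]).

%% Hash a bucket/key pair onto the fixed 2^160 key-space.
ring_position(Bucket, Key) ->
    <<Position:160/integer>> = crypto:hash(sha, term_to_binary({Bucket, Key})),
    Position.

%% With Q equal-sized partitions, a key's ring position never changes when
%% nodes join or leave; only ownership of partitions moves between nodes.
partition_index(Bucket, Key, Q) ->
    ring_position(Bucket, Key) div ((1 bsl 160) div Q).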
215-241.
1) Client requests a key
2) Get handler starts up to service the request
3) Hashes the key to its owner partitions (N=3)
4) Sends a similar “get” request to those partitions
5) Waits for R replies that concur (R=2)
6) Resolves the object, replies to client
7) The third reply may come back at any time, but the FSM replies as soon as the quorum is satisfied/violated
(A simplified sketch of the quorum wait follows.)
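A minimal sketch of step 5, with the replica reads stubbed out. This is purely illustrative - Riak's actual get handler is riak_kv_get_fsm and also does vector-clock resolution and read repair, none of which appears here:

-module(quorum_get_sketch).
-export([get/3]).

%% Ask N (stubbed) replicas for Key and answer as soon as R replies concur,
%% e.g. quorum_get_sketch:get(<<"k">>, 3, 2).
get(Key, N, R) ->
    Parent = self(),
    _Readers = [spawn(fun() -> Parent ! {reply, read_replica(I, Key)} end)
                || I <- lists:seq(1, N)],
    wait_for_quorum(R, N, []).

wait_for_quorum(R, Outstanding, Replies) ->
    case concurring(R, Replies) of
        {ok, Value} ->
            %% Quorum satisfied: reply now; late replies are simply ignored here.
            {ok, Value};
        none when Outstanding =:= 0 ->
            {error, quorum_not_met};
        none ->
            receive
                {reply, Value} ->
                    wait_for_quorum(R, Outstanding - 1, [Value | Replies])
            after 5000 ->
                {error, timeout}
            end
    end.

%% Does any single value appear in at least R of the replies so far?
concurring(R, Replies) ->
    Concurring = [V || V <- lists:usort(Replies),
                       length([X || X <- Replies, X =:= V]) >= R],
    case Concurring of
        [Value | _] -> {ok, Value};
        []          -> none
    end.

%% Stub standing in for a vnode read; every "replica" returns the same value.
read_replica(_Index, Key) ->
    {Key, <<"stored-value">>}.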