Add vendor folder to git

Lucas Käldström 2017-06-26 19:23:05 +03:00
parent 66cf5eaafb
commit 183585f56f
No known key found for this signature in database
GPG key ID: 600FEFBBD0D40D21
6916 changed files with 2629581 additions and 1 deletion

vendor/github.com/coreos/etcd/contrib/README.md generated vendored Normal file

@ -0,0 +1,7 @@
## Contrib
Scripts and files which may be useful but aren't part of the core etcd project.
* [systemd](systemd) - an example unit file for deploying etcd on systemd-based distributions
* [raftexample](raftexample) - an example distributed key-value store using raft
* [systemd/etcd2-backup-coreos](systemd/etcd2-backup-coreos) - remote backup and restore procedures for etcd2 clusters on CoreOS Linux


@ -0,0 +1,4 @@
# Use goreman to run this Procfile; install goreman with `go get github.com/mattn/goreman`
raftexample1: ./raftexample --id 1 --cluster http://127.0.0.1:12379,http://127.0.0.1:22379,http://127.0.0.1:32379 --port 12380
raftexample2: ./raftexample --id 2 --cluster http://127.0.0.1:12379,http://127.0.0.1:22379,http://127.0.0.1:32379 --port 22380
raftexample3: ./raftexample --id 3 --cluster http://127.0.0.1:12379,http://127.0.0.1:22379,http://127.0.0.1:32379 --port 32380


@ -0,0 +1,115 @@
# raftexample
raftexample is an example usage of etcd's [raft library](../../raft). It provides a simple REST API for a key-value store cluster backed by the [Raft][raft] consensus algorithm.
[raft]: http://raftconsensus.github.io/
## Getting Started
### Running single node raftexample
First start a single-member cluster of raftexample:
```sh
raftexample --id 1 --cluster http://127.0.0.1:12379 --port 12380
```
Each raftexample process maintains a single raft instance and a key-value server.
The process's comma-separated peer list (--cluster), its raft ID as an index into that peer list (--id), and the HTTP key-value server port (--port) are passed on the command line.
Next, store a value ("hello") to a key ("my-key"):
```
curl -L http://127.0.0.1:12380/my-key -XPUT -d hello
```
Finally, retrieve the stored key:
```
curl -L http://127.0.0.1:12380/my-key
```
### Running a local cluster
First install [goreman](https://github.com/mattn/goreman), which manages Procfile-based applications.
The [Procfile script](./Procfile) will set up a local example cluster. You can start it with:
```sh
goreman start
```
This will bring up three raftexample instances.
You can write a key-value pair to any member of the cluster and likewise retrieve it from any member.
### Fault Tolerance
To test cluster recovery, first start a cluster and write a value "foo":
```sh
goreman start
curl -L http://127.0.0.1:12380/my-key -XPUT -d foo
```
Next, remove a node and replace the value with "bar" to check cluster availability:
```sh
goreman run stop raftexample2
curl -L http://127.0.0.1:12380/my-key -XPUT -d bar
curl -L http://127.0.0.1:32380/my-key
```
Finally, bring the node back up and verify it recovers with the updated value "bar":
```sh
goreman run start raftexample2
curl -L http://127.0.0.1:22380/my-key
```
### Dynamic cluster reconfiguration
Nodes can be added to or removed from a running cluster using requests to the REST API.
For example, suppose we have a 3-node cluster that was started with the commands:
```sh
raftexample --id 1 --cluster http://127.0.0.1:12379,http://127.0.0.1:22379,http://127.0.0.1:32379 --port 12380
raftexample --id 2 --cluster http://127.0.0.1:12379,http://127.0.0.1:22379,http://127.0.0.1:32379 --port 22380
raftexample --id 3 --cluster http://127.0.0.1:12379,http://127.0.0.1:22379,http://127.0.0.1:32379 --port 32380
```
A fourth node with ID 4 can be added by issuing a POST:
```sh
curl -L http://127.0.0.1:12380/4 -XPOST -d http://127.0.0.1:42379
```
Then the new node can be started as the others were, using the --join option:
```sh
raftexample --id 4 --cluster http://127.0.0.1:12379,http://127.0.0.1:22379,http://127.0.0.1:32379,http://127.0.0.1:42379 --port 42380 --join
```
The new node should join the cluster and be able to service key/value requests.
We can remove a node using a DELETE request:
```sh
curl -L http://127.0.0.1:12380/3 -XDELETE
```
Node 3 should shut itself down once the cluster has processed this request.
## Design
The raftexample consists of three components: a raft-backed key-value store, a REST API server, and a raft consensus server based on etcd's raft implementation.
The raft-backed key-value store is a key-value map that holds all committed key-values.
The store bridges communication between the raft server and the REST server.
Key-value updates are issued through the store to the raft server.
The store updates its map once raft reports the updates are committed.
The REST server exposes the current raft consensus by accessing the raft-backed key-value store.
A GET command looks up a key in the store and returns the value, if any.
A key-value PUT command issues an update proposal to the store.
The raft server participates in consensus with its cluster peers.
When the REST server submits a proposal, the raft server transmits the proposal to its peers.
When raft reaches a consensus, the server publishes all committed updates over a commit channel.
For raftexample, this commit channel is consumed by the key-value store.
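The channel wiring described above can be boiled down to a small, self-contained sketch. The snippet below is only an illustration of the propose → commit → apply flow; the "consensus" goroutine is a stand-in that commits every proposal immediately, not the real raft node:

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	proposeC := make(chan string)
	commitC := make(chan string)

	// Stand-in for the raft consensus server: it "commits" every proposal
	// immediately. The real raftNode only commits once a quorum agrees.
	go func() {
		for p := range proposeC {
			commitC <- p
		}
		close(commitC)
	}()

	// Stand-in for the raft-backed key-value store: it applies committed
	// entries to an in-memory map, much like kvstore.readCommits does.
	store := make(map[string]string)
	var mu sync.RWMutex
	done := make(chan struct{})
	go func() {
		defer close(done)
		for entry := range commitC {
			mu.Lock()
			store[entry] = entry // a real store decodes a key/value pair here
			mu.Unlock()
		}
	}()

	// Stand-in for the REST server: it issues an update proposal and never
	// touches the map directly.
	proposeC <- "my-key=hello"
	close(proposeC)
	<-done

	mu.RLock()
	fmt.Println(store)
	mu.RUnlock()
}
```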


@ -0,0 +1,16 @@
// Copyright 2016 The etcd Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// raftexample is a simple KV store using the raft and rafthttp libraries.
package main


@ -0,0 +1,122 @@
// Copyright 2015 The etcd Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package main
import (
"io/ioutil"
"log"
"net/http"
"strconv"
"github.com/coreos/etcd/raft/raftpb"
)
// Handler for an HTTP-based key-value store backed by raft
type httpKVAPI struct {
store *kvstore
confChangeC chan<- raftpb.ConfChange
}
func (h *httpKVAPI) ServeHTTP(w http.ResponseWriter, r *http.Request) {
key := r.RequestURI
switch {
case r.Method == "PUT":
v, err := ioutil.ReadAll(r.Body)
if err != nil {
log.Printf("Failed to read on PUT (%v)\n", err)
http.Error(w, "Failed on PUT", http.StatusBadRequest)
return
}
h.store.Propose(key, string(v))
// Optimistic-- no waiting for ack from raft. Value is not yet
// committed so a subsequent GET on the key may return old value
w.WriteHeader(http.StatusNoContent)
case r.Method == "GET":
if v, ok := h.store.Lookup(key); ok {
w.Write([]byte(v))
} else {
http.Error(w, "Failed to GET", http.StatusNotFound)
}
case r.Method == "POST":
url, err := ioutil.ReadAll(r.Body)
if err != nil {
log.Printf("Failed to read on POST (%v)\n", err)
http.Error(w, "Failed on POST", http.StatusBadRequest)
return
}
nodeId, err := strconv.ParseUint(key[1:], 0, 64)
if err != nil {
log.Printf("Failed to convert ID for conf change (%v)\n", err)
http.Error(w, "Failed on POST", http.StatusBadRequest)
return
}
cc := raftpb.ConfChange{
Type: raftpb.ConfChangeAddNode,
NodeID: nodeId,
Context: url,
}
h.confChangeC <- cc
// As above, optimistic that raft will apply the conf change
w.WriteHeader(http.StatusNoContent)
case r.Method == "DELETE":
nodeId, err := strconv.ParseUint(key[1:], 0, 64)
if err != nil {
log.Printf("Failed to convert ID for conf change (%v)\n", err)
http.Error(w, "Failed on DELETE", http.StatusBadRequest)
return
}
cc := raftpb.ConfChange{
Type: raftpb.ConfChangeRemoveNode,
NodeID: nodeId,
}
h.confChangeC <- cc
// As above, optimistic that raft will apply the conf change
w.WriteHeader(http.StatusNoContent)
default:
w.Header().Set("Allow", "PUT")
w.Header().Add("Allow", "GET")
w.Header().Add("Allow", "POST")
w.Header().Add("Allow", "DELETE")
http.Error(w, "Method not allowed", http.StatusMethodNotAllowed)
}
}
// serveHttpKVAPI starts a key-value server with a GET/PUT API (plus POST/DELETE for cluster membership changes) and listens.
func serveHttpKVAPI(kv *kvstore, port int, confChangeC chan<- raftpb.ConfChange, errorC <-chan error) {
srv := http.Server{
Addr: ":" + strconv.Itoa(port),
Handler: &httpKVAPI{
store: kv,
confChangeC: confChangeC,
},
}
go func() {
if err := srv.ListenAndServe(); err != nil {
log.Fatal(err)
}
}()
// exit when raft goes down
if err, ok := <-errorC; ok {
log.Fatal(err)
}
}


@ -0,0 +1,112 @@
// Copyright 2015 The etcd Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package main
import (
"bytes"
"encoding/gob"
"encoding/json"
"log"
"sync"
"github.com/coreos/etcd/snap"
)
// a key-value store backed by raft
type kvstore struct {
proposeC chan<- string // channel for proposing updates
mu sync.RWMutex
kvStore map[string]string // current committed key-value pairs
snapshotter *snap.Snapshotter
}
type kv struct {
Key string
Val string
}
func newKVStore(snapshotter *snap.Snapshotter, proposeC chan<- string, commitC <-chan *string, errorC <-chan error) *kvstore {
s := &kvstore{proposeC: proposeC, kvStore: make(map[string]string), snapshotter: snapshotter}
// replay log into key-value map
s.readCommits(commitC, errorC)
// read commits from raft into kvStore map until error
go s.readCommits(commitC, errorC)
return s
}
func (s *kvstore) Lookup(key string) (string, bool) {
s.mu.RLock()
v, ok := s.kvStore[key]
s.mu.RUnlock()
return v, ok
}
func (s *kvstore) Propose(k string, v string) {
var buf bytes.Buffer
if err := gob.NewEncoder(&buf).Encode(kv{k, v}); err != nil {
log.Fatal(err)
}
s.proposeC <- string(buf.Bytes())
}
func (s *kvstore) readCommits(commitC <-chan *string, errorC <-chan error) {
for data := range commitC {
if data == nil {
// done replaying log; new data incoming
// OR signaled to load snapshot
snapshot, err := s.snapshotter.Load()
if err == snap.ErrNoSnapshot {
return
}
if err != nil && err != snap.ErrNoSnapshot {
log.Panic(err)
}
log.Printf("loading snapshot at term %d and index %d", snapshot.Metadata.Term, snapshot.Metadata.Index)
if err := s.recoverFromSnapshot(snapshot.Data); err != nil {
log.Panic(err)
}
continue
}
var dataKv kv
dec := gob.NewDecoder(bytes.NewBufferString(*data))
if err := dec.Decode(&dataKv); err != nil {
log.Fatalf("raftexample: could not decode message (%v)", err)
}
s.mu.Lock()
s.kvStore[dataKv.Key] = dataKv.Val
s.mu.Unlock()
}
if err, ok := <-errorC; ok {
log.Fatal(err)
}
}
func (s *kvstore) getSnapshot() ([]byte, error) {
s.mu.Lock()
defer s.mu.Unlock()
return json.Marshal(s.kvStore)
}
func (s *kvstore) recoverFromSnapshot(snapshot []byte) error {
var store map[string]string
if err := json.Unmarshal(snapshot, &store); err != nil {
return err
}
s.mu.Lock()
s.kvStore = store
s.mu.Unlock()
return nil
}


@ -0,0 +1,47 @@
// Copyright 2016 The etcd Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package main
import (
"reflect"
"testing"
)
func Test_kvstore_snapshot(t *testing.T) {
tm := map[string]string{"foo": "bar"}
s := &kvstore{kvStore: tm}
v, _ := s.Lookup("foo")
if v != "bar" {
t.Fatalf("foo has unexpected value, got %s", v)
}
data, err := s.getSnapshot()
if err != nil {
t.Fatal(err)
}
s.kvStore = nil
if err := s.recoverFromSnapshot(data); err != nil {
t.Fatal(err)
}
v, _ = s.Lookup("foo")
if v != "bar" {
t.Fatalf("foo has unexpected value, got %s", v)
}
if !reflect.DeepEqual(s.kvStore, tm) {
t.Fatalf("store expected %+v, got %+v", tm, s.kvStore)
}
}


@ -0,0 +1,59 @@
// Copyright 2015 The etcd Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package main
import (
"errors"
"net"
"time"
)
// stoppableListener sets TCP keep-alive timeouts on accepted
// connections and waits on stopc message
type stoppableListener struct {
*net.TCPListener
stopc <-chan struct{}
}
func newStoppableListener(addr string, stopc <-chan struct{}) (*stoppableListener, error) {
ln, err := net.Listen("tcp", addr)
if err != nil {
return nil, err
}
return &stoppableListener{ln.(*net.TCPListener), stopc}, nil
}
func (ln stoppableListener) Accept() (c net.Conn, err error) {
connc := make(chan *net.TCPConn, 1)
errc := make(chan error, 1)
go func() {
tc, err := ln.AcceptTCP()
if err != nil {
errc <- err
return
}
connc <- tc
}()
select {
case <-ln.stopc:
return nil, errors.New("server stopped")
case err := <-errc:
return nil, err
case tc := <-connc:
tc.SetKeepAlive(true)
tc.SetKeepAlivePeriod(3 * time.Minute)
return tc, nil
}
}


@ -0,0 +1,45 @@
// Copyright 2015 The etcd Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package main
import (
"flag"
"strings"
"github.com/coreos/etcd/raft/raftpb"
)
func main() {
cluster := flag.String("cluster", "http://127.0.0.1:9021", "comma separated cluster peers")
id := flag.Int("id", 1, "node ID")
kvport := flag.Int("port", 9121, "key-value server port")
join := flag.Bool("join", false, "join an existing cluster")
flag.Parse()
proposeC := make(chan string)
defer close(proposeC)
confChangeC := make(chan raftpb.ConfChange)
defer close(confChangeC)
// raft provides a commit stream for the proposals from the http api
var kvs *kvstore
getSnapshot := func() ([]byte, error) { return kvs.getSnapshot() }
commitC, errorC, snapshotterReady := newRaftNode(*id, strings.Split(*cluster, ","), *join, getSnapshot, proposeC, confChangeC)
kvs = newKVStore(<-snapshotterReady, proposeC, commitC, errorC)
// the key-value http handler will propose updates to raft
serveHttpKVAPI(kvs, *kvport, confChangeC, errorC)
}


@ -0,0 +1,479 @@
// Copyright 2015 The etcd Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package main
import (
"fmt"
"log"
"os"
"strconv"
"time"
"net/http"
"net/url"
"github.com/coreos/etcd/etcdserver/stats"
"github.com/coreos/etcd/pkg/fileutil"
"github.com/coreos/etcd/pkg/types"
"github.com/coreos/etcd/raft"
"github.com/coreos/etcd/raft/raftpb"
"github.com/coreos/etcd/rafthttp"
"github.com/coreos/etcd/snap"
"github.com/coreos/etcd/wal"
"github.com/coreos/etcd/wal/walpb"
"golang.org/x/net/context"
)
// A key-value stream backed by raft
type raftNode struct {
proposeC <-chan string // proposed messages (k,v)
confChangeC <-chan raftpb.ConfChange // proposed cluster config changes
commitC chan<- *string // entries committed to log (k,v)
errorC chan<- error // errors from raft session
id int // client ID for raft session
peers []string // raft peer URLs
join bool // node is joining an existing cluster
waldir string // path to WAL directory
snapdir string // path to snapshot directory
getSnapshot func() ([]byte, error)
lastIndex uint64 // index of log at start
confState raftpb.ConfState
snapshotIndex uint64
appliedIndex uint64
// raft backing for the commit/error channel
node raft.Node
raftStorage *raft.MemoryStorage
wal *wal.WAL
snapshotter *snap.Snapshotter
snapshotterReady chan *snap.Snapshotter // signals when snapshotter is ready
snapCount uint64
transport *rafthttp.Transport
stopc chan struct{} // signals proposal channel closed
httpstopc chan struct{} // signals http server to shutdown
httpdonec chan struct{} // signals http server shutdown complete
}
var defaultSnapCount uint64 = 10000
// newRaftNode initiates a raft instance and returns a committed log entry
// channel and error channel. Proposals for log updates are sent over the
// provided proposal channel. All log entries are replayed over the
// commit channel, followed by a nil message (to indicate the channel is
// current), then new log entries. To shut down, close proposeC and read errorC.
func newRaftNode(id int, peers []string, join bool, getSnapshot func() ([]byte, error), proposeC <-chan string,
confChangeC <-chan raftpb.ConfChange) (<-chan *string, <-chan error, <-chan *snap.Snapshotter) {
commitC := make(chan *string)
errorC := make(chan error)
rc := &raftNode{
proposeC: proposeC,
confChangeC: confChangeC,
commitC: commitC,
errorC: errorC,
id: id,
peers: peers,
join: join,
waldir: fmt.Sprintf("raftexample-%d", id),
snapdir: fmt.Sprintf("raftexample-%d-snap", id),
getSnapshot: getSnapshot,
snapCount: defaultSnapCount,
stopc: make(chan struct{}),
httpstopc: make(chan struct{}),
httpdonec: make(chan struct{}),
snapshotterReady: make(chan *snap.Snapshotter, 1),
// rest of structure populated after WAL replay
}
go rc.startRaft()
return commitC, errorC, rc.snapshotterReady
}
func (rc *raftNode) saveSnap(snap raftpb.Snapshot) error {
if err := rc.snapshotter.SaveSnap(snap); err != nil {
return err
}
walSnap := walpb.Snapshot{
Index: snap.Metadata.Index,
Term: snap.Metadata.Term,
}
if err := rc.wal.SaveSnapshot(walSnap); err != nil {
return err
}
return rc.wal.ReleaseLockTo(snap.Metadata.Index)
}
func (rc *raftNode) entriesToApply(ents []raftpb.Entry) (nents []raftpb.Entry) {
if len(ents) == 0 {
return
}
firstIdx := ents[0].Index
if firstIdx > rc.appliedIndex+1 {
log.Fatalf("first index of committed entry[%d] should <= progress.appliedIndex[%d] 1", firstIdx, rc.appliedIndex)
}
if rc.appliedIndex-firstIdx+1 < uint64(len(ents)) {
nents = ents[rc.appliedIndex-firstIdx+1:]
}
return
}
// publishEntries writes committed log entries to commit channel and returns
// whether all entries could be published.
func (rc *raftNode) publishEntries(ents []raftpb.Entry) bool {
for i := range ents {
switch ents[i].Type {
case raftpb.EntryNormal:
if len(ents[i].Data) == 0 {
// ignore empty messages
break
}
s := string(ents[i].Data)
select {
case rc.commitC <- &s:
case <-rc.stopc:
return false
}
case raftpb.EntryConfChange:
var cc raftpb.ConfChange
cc.Unmarshal(ents[i].Data)
rc.confState = *rc.node.ApplyConfChange(cc)
switch cc.Type {
case raftpb.ConfChangeAddNode:
if len(cc.Context) > 0 {
rc.transport.AddPeer(types.ID(cc.NodeID), []string{string(cc.Context)})
}
case raftpb.ConfChangeRemoveNode:
if cc.NodeID == uint64(rc.id) {
log.Println("I've been removed from the cluster! Shutting down.")
return false
}
rc.transport.RemovePeer(types.ID(cc.NodeID))
}
}
// after commit, update appliedIndex
rc.appliedIndex = ents[i].Index
// special nil commit to signal replay has finished
if ents[i].Index == rc.lastIndex {
select {
case rc.commitC <- nil:
case <-rc.stopc:
return false
}
}
}
return true
}
func (rc *raftNode) loadSnapshot() *raftpb.Snapshot {
snapshot, err := rc.snapshotter.Load()
if err != nil && err != snap.ErrNoSnapshot {
log.Fatalf("raftexample: error loading snapshot (%v)", err)
}
return snapshot
}
// openWAL returns a WAL ready for reading.
func (rc *raftNode) openWAL(snapshot *raftpb.Snapshot) *wal.WAL {
if !wal.Exist(rc.waldir) {
if err := os.Mkdir(rc.waldir, 0750); err != nil {
log.Fatalf("raftexample: cannot create dir for wal (%v)", err)
}
w, err := wal.Create(rc.waldir, nil)
if err != nil {
log.Fatalf("raftexample: create wal error (%v)", err)
}
w.Close()
}
walsnap := walpb.Snapshot{}
if snapshot != nil {
walsnap.Index, walsnap.Term = snapshot.Metadata.Index, snapshot.Metadata.Term
}
log.Printf("loading WAL at term %d and index %d", walsnap.Term, walsnap.Index)
w, err := wal.Open(rc.waldir, walsnap)
if err != nil {
log.Fatalf("raftexample: error loading wal (%v)", err)
}
return w
}
// replayWAL replays WAL entries into the raft instance.
func (rc *raftNode) replayWAL() *wal.WAL {
log.Printf("replaying WAL of member %d", rc.id)
snapshot := rc.loadSnapshot()
w := rc.openWAL(snapshot)
_, st, ents, err := w.ReadAll()
if err != nil {
log.Fatalf("raftexample: failed to read WAL (%v)", err)
}
rc.raftStorage = raft.NewMemoryStorage()
if snapshot != nil {
rc.raftStorage.ApplySnapshot(*snapshot)
}
rc.raftStorage.SetHardState(st)
// append to storage so raft starts at the right place in log
rc.raftStorage.Append(ents)
// send nil once lastIndex is published so client knows commit channel is current
if len(ents) > 0 {
rc.lastIndex = ents[len(ents)-1].Index
} else {
rc.commitC <- nil
}
return w
}
func (rc *raftNode) writeError(err error) {
rc.stopHTTP()
close(rc.commitC)
rc.errorC <- err
close(rc.errorC)
rc.node.Stop()
}
func (rc *raftNode) startRaft() {
if !fileutil.Exist(rc.snapdir) {
if err := os.Mkdir(rc.snapdir, 0750); err != nil {
log.Fatalf("raftexample: cannot create dir for snapshot (%v)", err)
}
}
rc.snapshotter = snap.New(rc.snapdir)
rc.snapshotterReady <- rc.snapshotter
oldwal := wal.Exist(rc.waldir)
rc.wal = rc.replayWAL()
rpeers := make([]raft.Peer, len(rc.peers))
for i := range rpeers {
rpeers[i] = raft.Peer{ID: uint64(i + 1)}
}
c := &raft.Config{
ID: uint64(rc.id),
ElectionTick: 10,
HeartbeatTick: 1,
Storage: rc.raftStorage,
MaxSizePerMsg: 1024 * 1024,
MaxInflightMsgs: 256,
}
if oldwal {
rc.node = raft.RestartNode(c)
} else {
startPeers := rpeers
if rc.join {
startPeers = nil
}
rc.node = raft.StartNode(c, startPeers)
}
ss := &stats.ServerStats{}
ss.Initialize()
rc.transport = &rafthttp.Transport{
ID: types.ID(rc.id),
ClusterID: 0x1000,
Raft: rc,
ServerStats: ss,
LeaderStats: stats.NewLeaderStats(strconv.Itoa(rc.id)),
ErrorC: make(chan error),
}
rc.transport.Start()
for i := range rc.peers {
if i+1 != rc.id {
rc.transport.AddPeer(types.ID(i+1), []string{rc.peers[i]})
}
}
go rc.serveRaft()
go rc.serveChannels()
}
// stop closes http, closes all channels, and stops raft.
func (rc *raftNode) stop() {
rc.stopHTTP()
close(rc.commitC)
close(rc.errorC)
rc.node.Stop()
}
func (rc *raftNode) stopHTTP() {
rc.transport.Stop()
close(rc.httpstopc)
<-rc.httpdonec
}
func (rc *raftNode) publishSnapshot(snapshotToSave raftpb.Snapshot) {
if raft.IsEmptySnap(snapshotToSave) {
return
}
log.Printf("publishing snapshot at index %d", rc.snapshotIndex)
defer log.Printf("finished publishing snapshot at index %d", rc.snapshotIndex)
if snapshotToSave.Metadata.Index <= rc.appliedIndex {
log.Fatalf("snapshot index [%d] should > progress.appliedIndex [%d] + 1", snapshotToSave.Metadata.Index, rc.appliedIndex)
}
rc.commitC <- nil // trigger kvstore to load snapshot
rc.confState = snapshotToSave.Metadata.ConfState
rc.snapshotIndex = snapshotToSave.Metadata.Index
rc.appliedIndex = snapshotToSave.Metadata.Index
}
var snapshotCatchUpEntriesN uint64 = 10000
func (rc *raftNode) maybeTriggerSnapshot() {
if rc.appliedIndex-rc.snapshotIndex <= rc.snapCount {
return
}
log.Printf("start snapshot [applied index: %d | last snapshot index: %d]", rc.appliedIndex, rc.snapshotIndex)
data, err := rc.getSnapshot()
if err != nil {
log.Panic(err)
}
snap, err := rc.raftStorage.CreateSnapshot(rc.appliedIndex, &rc.confState, data)
if err != nil {
panic(err)
}
if err := rc.saveSnap(snap); err != nil {
panic(err)
}
compactIndex := uint64(1)
if rc.appliedIndex > snapshotCatchUpEntriesN {
compactIndex = rc.appliedIndex - snapshotCatchUpEntriesN
}
if err := rc.raftStorage.Compact(compactIndex); err != nil {
panic(err)
}
log.Printf("compacted log at index %d", compactIndex)
rc.snapshotIndex = rc.appliedIndex
}
func (rc *raftNode) serveChannels() {
snap, err := rc.raftStorage.Snapshot()
if err != nil {
panic(err)
}
rc.confState = snap.Metadata.ConfState
rc.snapshotIndex = snap.Metadata.Index
rc.appliedIndex = snap.Metadata.Index
defer rc.wal.Close()
ticker := time.NewTicker(100 * time.Millisecond)
defer ticker.Stop()
// send proposals over raft
go func() {
var confChangeCount uint64 = 0
for rc.proposeC != nil && rc.confChangeC != nil {
select {
case prop, ok := <-rc.proposeC:
if !ok {
rc.proposeC = nil
} else {
// blocks until accepted by raft state machine
rc.node.Propose(context.TODO(), []byte(prop))
}
case cc, ok := <-rc.confChangeC:
if !ok {
rc.confChangeC = nil
} else {
confChangeCount += 1
cc.ID = confChangeCount
rc.node.ProposeConfChange(context.TODO(), cc)
}
}
}
// client closed channel; shutdown raft if not already
close(rc.stopc)
}()
// event loop on raft state machine updates
for {
select {
case <-ticker.C:
rc.node.Tick()
// store raft entries to wal, then publish over commit channel
case rd := <-rc.node.Ready():
rc.wal.Save(rd.HardState, rd.Entries)
if !raft.IsEmptySnap(rd.Snapshot) {
rc.saveSnap(rd.Snapshot)
rc.raftStorage.ApplySnapshot(rd.Snapshot)
rc.publishSnapshot(rd.Snapshot)
}
rc.raftStorage.Append(rd.Entries)
rc.transport.Send(rd.Messages)
if ok := rc.publishEntries(rc.entriesToApply(rd.CommittedEntries)); !ok {
rc.stop()
return
}
rc.maybeTriggerSnapshot()
rc.node.Advance()
case err := <-rc.transport.ErrorC:
rc.writeError(err)
return
case <-rc.stopc:
rc.stop()
return
}
}
}
func (rc *raftNode) serveRaft() {
url, err := url.Parse(rc.peers[rc.id-1])
if err != nil {
log.Fatalf("raftexample: Failed parsing URL (%v)", err)
}
ln, err := newStoppableListener(url.Host, rc.httpstopc)
if err != nil {
log.Fatalf("raftexample: Failed to listen rafthttp (%v)", err)
}
err = (&http.Server{Handler: rc.transport.Handler()}).Serve(ln)
select {
case <-rc.httpstopc:
default:
log.Fatalf("raftexample: Failed to serve rafthttp (%v)", err)
}
close(rc.httpdonec)
}
func (rc *raftNode) Process(ctx context.Context, m raftpb.Message) error {
return rc.node.Step(ctx, m)
}
func (rc *raftNode) IsIDRemoved(id uint64) bool { return false }
func (rc *raftNode) ReportUnreachable(id uint64) {}
func (rc *raftNode) ReportSnapshot(id uint64, status raft.SnapshotStatus) {}


@ -0,0 +1,159 @@
// Copyright 2015 The etcd Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package main
import (
"fmt"
"os"
"testing"
"github.com/coreos/etcd/raft/raftpb"
)
type cluster struct {
peers []string
commitC []<-chan *string
errorC []<-chan error
proposeC []chan string
confChangeC []chan raftpb.ConfChange
}
// newCluster creates a cluster of n nodes
func newCluster(n int) *cluster {
peers := make([]string, n)
for i := range peers {
peers[i] = fmt.Sprintf("http://127.0.0.1:%d", 10000+i)
}
clus := &cluster{
peers: peers,
commitC: make([]<-chan *string, len(peers)),
errorC: make([]<-chan error, len(peers)),
proposeC: make([]chan string, len(peers)),
confChangeC: make([]chan raftpb.ConfChange, len(peers)),
}
for i := range clus.peers {
os.RemoveAll(fmt.Sprintf("raftexample-%d", i+1))
os.RemoveAll(fmt.Sprintf("raftexample-%d-snap", i+1))
clus.proposeC[i] = make(chan string, 1)
clus.confChangeC[i] = make(chan raftpb.ConfChange, 1)
clus.commitC[i], clus.errorC[i], _ = newRaftNode(i+1, clus.peers, false, nil, clus.proposeC[i], clus.confChangeC[i])
}
return clus
}
// sinkReplay reads all commits in each node's local log.
func (clus *cluster) sinkReplay() {
for i := range clus.peers {
for s := range clus.commitC[i] {
if s == nil {
break
}
}
}
}
// Close closes all cluster nodes and returns an error if any failed.
func (clus *cluster) Close() (err error) {
for i := range clus.peers {
close(clus.proposeC[i])
for range clus.commitC[i] {
// drain pending commits
}
// wait for channel to close
if erri, _ := <-clus.errorC[i]; erri != nil {
err = erri
}
// clean intermediates
os.RemoveAll(fmt.Sprintf("raftexample-%d", i+1))
os.RemoveAll(fmt.Sprintf("raftexample-%d-snap", i+1))
}
return err
}
func (clus *cluster) closeNoErrors(t *testing.T) {
if err := clus.Close(); err != nil {
t.Fatal(err)
}
}
// TestProposeOnCommit starts three nodes and feeds commits back into the proposal
// channel. The intent is to ensure blocking on a proposal won't block raft progress.
func TestProposeOnCommit(t *testing.T) {
clus := newCluster(3)
defer clus.closeNoErrors(t)
clus.sinkReplay()
donec := make(chan struct{})
for i := range clus.peers {
// feedback for "n" committed entries, then update donec
go func(pC chan<- string, cC <-chan *string, eC <-chan error) {
for n := 0; n < 100; n++ {
s, ok := <-cC
if !ok {
pC = nil
}
select {
case pC <- *s:
continue
case err, _ := <-eC:
t.Fatalf("eC message (%v)", err)
}
}
donec <- struct{}{}
for range cC {
// acknowledge the commits from other nodes so
// raft continues to make progress
}
}(clus.proposeC[i], clus.commitC[i], clus.errorC[i])
// one message feedback per node
go func(i int) { clus.proposeC[i] <- "foo" }(i)
}
for range clus.peers {
<-donec
}
}
// TestCloseProposerBeforeReplay tests closing the producer before raft starts.
func TestCloseProposerBeforeReplay(t *testing.T) {
clus := newCluster(1)
// close before replay so raft never starts
defer clus.closeNoErrors(t)
}
// TestCloseProposerInflight tests closing the producer while
// committed messages are being published to the client.
func TestCloseProposerInflight(t *testing.T) {
clus := newCluster(1)
defer clus.closeNoErrors(t)
clus.sinkReplay()
// some inflight ops
go func() {
clus.proposeC[0] <- "foo"
clus.proposeC[0] <- "bar"
}()
// wait for one message
if c, ok := <-clus.commitC[0]; *c != "foo" || !ok {
t.Fatalf("Commit failed")
}
}


@ -0,0 +1,65 @@
// Copyright 2016 The etcd Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package recipe
import (
v3 "github.com/coreos/etcd/clientv3"
"github.com/coreos/etcd/mvcc/mvccpb"
"golang.org/x/net/context"
)
// Barrier creates a key in etcd to block processes, then deletes the key to
// release all blocked processes.
type Barrier struct {
client *v3.Client
ctx context.Context
key string
}
func NewBarrier(client *v3.Client, key string) *Barrier {
return &Barrier{client, context.TODO(), key}
}
// Hold creates the barrier key causing processes to block on Wait.
func (b *Barrier) Hold() error {
_, err := newKey(b.client, b.key, 0)
return err
}
// Release deletes the barrier key to unblock all waiting processes.
func (b *Barrier) Release() error {
_, err := b.client.Delete(b.ctx, b.key)
return err
}
// Wait blocks on the barrier key until it is deleted. If there is no key, Wait
// assumes Release has already been called and returns immediately.
func (b *Barrier) Wait() error {
resp, err := b.client.Get(b.ctx, b.key, v3.WithFirstKey()...)
if err != nil {
return err
}
if len(resp.Kvs) == 0 {
// key already removed
return nil
}
_, err = WaitEvents(
b.client,
b.key,
resp.Header.Revision,
[]mvccpb.Event_EventType{mvccpb.PUT, mvccpb.DELETE})
return err
}
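
A hypothetical usage sketch for the Barrier recipe above; it assumes an etcd v3 server reachable at localhost:2379 and imports this package as `recipe` (the endpoint and key name are illustrative):

```go
package main

import (
	"log"

	v3 "github.com/coreos/etcd/clientv3"
	recipe "github.com/coreos/etcd/contrib/recipes"
)

func main() {
	// Hypothetical endpoint; adjust to your cluster.
	cli, err := v3.New(v3.Config{Endpoints: []string{"localhost:2379"}})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	b := recipe.NewBarrier(cli, "/my-barrier") // key name is illustrative
	if err := b.Hold(); err != nil {           // create the barrier key
		log.Fatal(err)
	}
	// Other processes calling b.Wait() now block until the key is deleted.
	if err := b.Release(); err != nil { // delete the key, unblocking waiters
		log.Fatal(err)
	}
}
```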


@ -0,0 +1,55 @@
// Copyright 2016 The etcd Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package recipe
import (
"errors"
v3 "github.com/coreos/etcd/clientv3"
spb "github.com/coreos/etcd/mvcc/mvccpb"
"golang.org/x/net/context"
)
var (
ErrKeyExists = errors.New("key already exists")
ErrWaitMismatch = errors.New("unexpected wait result")
ErrTooManyClients = errors.New("too many clients")
ErrNoWatcher = errors.New("no watcher channel")
)
// deleteRevKey deletes a key by revision, returning false if key is missing
func deleteRevKey(kv v3.KV, key string, rev int64) (bool, error) {
cmp := v3.Compare(v3.ModRevision(key), "=", rev)
req := v3.OpDelete(key)
txnresp, err := kv.Txn(context.TODO()).If(cmp).Then(req).Commit()
if err != nil {
return false, err
} else if !txnresp.Succeeded {
return false, nil
}
return true, nil
}
func claimFirstKey(kv v3.KV, kvs []*spb.KeyValue) (*spb.KeyValue, error) {
for _, k := range kvs {
ok, err := deleteRevKey(kv, string(k.Key), k.ModRevision)
if err != nil {
return nil, err
} else if ok {
return k, nil
}
}
return nil, nil
}


@ -0,0 +1,137 @@
// Copyright 2016 The etcd Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package recipe
import (
"github.com/coreos/etcd/clientv3"
"github.com/coreos/etcd/clientv3/concurrency"
"github.com/coreos/etcd/mvcc/mvccpb"
"golang.org/x/net/context"
)
// DoubleBarrier blocks processes on Enter until an expected count enters, then
// blocks again on Leave until all processes have left.
type DoubleBarrier struct {
s *concurrency.Session
ctx context.Context
key string // key for the collective barrier
count int
myKey *EphemeralKV // current key for this process on the barrier
}
func NewDoubleBarrier(s *concurrency.Session, key string, count int) *DoubleBarrier {
return &DoubleBarrier{
s: s,
ctx: context.TODO(),
key: key,
count: count,
}
}
// Enter waits for "count" processes to enter the barrier then returns
func (b *DoubleBarrier) Enter() error {
client := b.s.Client()
ek, err := newUniqueEphemeralKey(b.s, b.key+"/waiters")
if err != nil {
return err
}
b.myKey = ek
resp, err := client.Get(b.ctx, b.key+"/waiters", clientv3.WithPrefix())
if err != nil {
return err
}
if len(resp.Kvs) > b.count {
return ErrTooManyClients
}
if len(resp.Kvs) == b.count {
// unblock waiters
_, err = client.Put(b.ctx, b.key+"/ready", "")
return err
}
_, err = WaitEvents(
client,
b.key+"/ready",
ek.Revision(),
[]mvccpb.Event_EventType{mvccpb.PUT})
return err
}
// Leave waits for "count" processes to leave the barrier then returns
func (b *DoubleBarrier) Leave() error {
client := b.s.Client()
resp, err := client.Get(b.ctx, b.key+"/waiters", clientv3.WithPrefix())
if err != nil {
return err
}
if len(resp.Kvs) == 0 {
return nil
}
lowest, highest := resp.Kvs[0], resp.Kvs[0]
for _, k := range resp.Kvs {
if k.ModRevision < lowest.ModRevision {
lowest = k
}
if k.ModRevision > highest.ModRevision {
highest = k
}
}
isLowest := string(lowest.Key) == b.myKey.Key()
if len(resp.Kvs) == 1 {
// this is the only node in the barrier; finish up
if _, err = client.Delete(b.ctx, b.key+"/ready"); err != nil {
return err
}
return b.myKey.Delete()
}
// this ensures that if a process fails, the ephemeral lease will be
// revoked, its barrier key is removed, and the barrier can resume
// lowest process in node => wait on highest process
if isLowest {
_, err = WaitEvents(
client,
string(highest.Key),
highest.ModRevision,
[]mvccpb.Event_EventType{mvccpb.DELETE})
if err != nil {
return err
}
return b.Leave()
}
// delete self and wait on lowest process
if err = b.myKey.Delete(); err != nil {
return err
}
key := string(lowest.Key)
_, err = WaitEvents(
client,
key,
lowest.ModRevision,
[]mvccpb.Event_EventType{mvccpb.DELETE})
if err != nil {
return err
}
return b.Leave()
}
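
A hypothetical usage sketch for DoubleBarrier; it assumes an etcd v3 server at localhost:2379 and three cooperating processes each running the same code (all names are illustrative):

```go
package main

import (
	"log"

	v3 "github.com/coreos/etcd/clientv3"
	"github.com/coreos/etcd/clientv3/concurrency"
	recipe "github.com/coreos/etcd/contrib/recipes"
)

func main() {
	cli, err := v3.New(v3.Config{Endpoints: []string{"localhost:2379"}})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	// A session provides the lease that makes the waiter keys ephemeral.
	s, err := concurrency.NewSession(cli)
	if err != nil {
		log.Fatal(err)
	}
	defer s.Close()

	db := recipe.NewDoubleBarrier(s, "/my-double-barrier", 3)
	if err := db.Enter(); err != nil { // blocks until 3 processes have entered
		log.Fatal(err)
	}
	// ... coordinated work happens here ...
	if err := db.Leave(); err != nil { // blocks until all 3 have left
		log.Fatal(err)
	}
}
```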

vendor/github.com/coreos/etcd/contrib/recipes/key.go generated vendored Normal file

@ -0,0 +1,163 @@
// Copyright 2016 The etcd Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package recipe
import (
"fmt"
"strings"
"time"
v3 "github.com/coreos/etcd/clientv3"
"github.com/coreos/etcd/clientv3/concurrency"
"golang.org/x/net/context"
)
// RemoteKV is a key/revision pair created by the client and stored on etcd
type RemoteKV struct {
kv v3.KV
key string
rev int64
val string
}
func newKey(kv v3.KV, key string, leaseID v3.LeaseID) (*RemoteKV, error) {
return newKV(kv, key, "", leaseID)
}
func newKV(kv v3.KV, key, val string, leaseID v3.LeaseID) (*RemoteKV, error) {
rev, err := putNewKV(kv, key, val, leaseID)
if err != nil {
return nil, err
}
return &RemoteKV{kv, key, rev, val}, nil
}
func newUniqueKV(kv v3.KV, prefix string, val string) (*RemoteKV, error) {
for {
newKey := fmt.Sprintf("%s/%v", prefix, time.Now().UnixNano())
rev, err := putNewKV(kv, newKey, val, 0)
if err == nil {
return &RemoteKV{kv, newKey, rev, val}, nil
}
if err != ErrKeyExists {
return nil, err
}
}
}
// putNewKV attempts to create the given key, only succeeding if the key did
// not yet exist.
func putNewKV(kv v3.KV, key, val string, leaseID v3.LeaseID) (int64, error) {
cmp := v3.Compare(v3.Version(key), "=", 0)
req := v3.OpPut(key, val, v3.WithLease(leaseID))
txnresp, err := kv.Txn(context.TODO()).If(cmp).Then(req).Commit()
if err != nil {
return 0, err
}
if !txnresp.Succeeded {
return 0, ErrKeyExists
}
return txnresp.Header.Revision, nil
}
// newSequentialKV allocates a new sequential key <prefix>/nnnnn with a given
// value and lease. Note: a bookkeeping node __<prefix> is also allocated.
func newSequentialKV(kv v3.KV, prefix, val string) (*RemoteKV, error) {
resp, err := kv.Get(context.TODO(), prefix, v3.WithLastKey()...)
if err != nil {
return nil, err
}
// add 1 to last key, if any
newSeqNum := 0
if len(resp.Kvs) != 0 {
fields := strings.Split(string(resp.Kvs[0].Key), "/")
_, serr := fmt.Sscanf(fields[len(fields)-1], "%d", &newSeqNum)
if serr != nil {
return nil, serr
}
newSeqNum++
}
newKey := fmt.Sprintf("%s/%016d", prefix, newSeqNum)
// base prefix key must be current (i.e., <=) with the server update;
// the base key is important to avoid the following:
// N1: LastKey() == 1, start txn.
// N2: new Key 2, new Key 3, Delete Key 2
// N1: txn succeeds allocating key 2 when it shouldn't
baseKey := "__" + prefix
// current revision might contain modification so +1
cmp := v3.Compare(v3.ModRevision(baseKey), "<", resp.Header.Revision+1)
reqPrefix := v3.OpPut(baseKey, "")
reqnewKey := v3.OpPut(newKey, val)
txn := kv.Txn(context.TODO())
txnresp, err := txn.If(cmp).Then(reqPrefix, reqnewKey).Commit()
if err != nil {
return nil, err
}
if !txnresp.Succeeded {
return newSequentialKV(kv, prefix, val)
}
return &RemoteKV{kv, newKey, txnresp.Header.Revision, val}, nil
}
func (rk *RemoteKV) Key() string { return rk.key }
func (rk *RemoteKV) Revision() int64 { return rk.rev }
func (rk *RemoteKV) Value() string { return rk.val }
func (rk *RemoteKV) Delete() error {
if rk.kv == nil {
return nil
}
_, err := rk.kv.Delete(context.TODO(), rk.key)
rk.kv = nil
return err
}
func (rk *RemoteKV) Put(val string) error {
_, err := rk.kv.Put(context.TODO(), rk.key, val)
return err
}
// EphemeralKV is a new key associated with a session lease
type EphemeralKV struct{ RemoteKV }
// newEphemeralKV creates a new key/value pair associated with a session lease
func newEphemeralKV(s *concurrency.Session, key, val string) (*EphemeralKV, error) {
k, err := newKV(s.Client(), key, val, s.Lease())
if err != nil {
return nil, err
}
return &EphemeralKV{*k}, nil
}
// newUniqueEphemeralKey creates a new unique valueless key associated with a session lease
func newUniqueEphemeralKey(s *concurrency.Session, prefix string) (*EphemeralKV, error) {
return newUniqueEphemeralKV(s, prefix, "")
}
// newUniqueEphemeralKV creates a new unique key/value pair associated with a session lease
func newUniqueEphemeralKV(s *concurrency.Session, prefix, val string) (ek *EphemeralKV, err error) {
for {
newKey := fmt.Sprintf("%s/%v", prefix, time.Now().UnixNano())
ek, err = newEphemeralKV(s, newKey, val)
if err == nil || err != ErrKeyExists {
break
}
}
return ek, err
}


@ -0,0 +1,80 @@
// Copyright 2016 The etcd Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package recipe
import (
"fmt"
v3 "github.com/coreos/etcd/clientv3"
"github.com/coreos/etcd/mvcc/mvccpb"
"golang.org/x/net/context"
)
// PriorityQueue implements a multi-reader, multi-writer distributed queue.
type PriorityQueue struct {
client *v3.Client
ctx context.Context
key string
}
// NewPriorityQueue creates an etcd priority queue.
func NewPriorityQueue(client *v3.Client, key string) *PriorityQueue {
return &PriorityQueue{client, context.TODO(), key + "/"}
}
// Enqueue puts a value into a queue with a given priority.
func (q *PriorityQueue) Enqueue(val string, pr uint16) error {
prefix := fmt.Sprintf("%s%05d", q.key, pr)
_, err := newSequentialKV(q.client, prefix, val)
return err
}
// Dequeue returns Enqueue()'d items in FIFO order. If the
// queue is empty, Dequeue blocks until items are available.
func (q *PriorityQueue) Dequeue() (string, error) {
// TODO: fewer round trips by fetching more than one key
resp, err := q.client.Get(q.ctx, q.key, v3.WithFirstKey()...)
if err != nil {
return "", err
}
kv, err := claimFirstKey(q.client, resp.Kvs)
if err != nil {
return "", err
} else if kv != nil {
return string(kv.Value), nil
} else if resp.More {
// missed some items, retry to read in more
return q.Dequeue()
}
// nothing to dequeue; wait on items
ev, err := WaitPrefixEvents(
q.client,
q.key,
resp.Header.Revision,
[]mvccpb.Event_EventType{mvccpb.PUT})
if err != nil {
return "", err
}
ok, err := deleteRevKey(q.client, string(ev.Kv.Key), ev.Kv.ModRevision)
if err != nil {
return "", err
} else if !ok {
return q.Dequeue()
}
return string(ev.Kv.Value), err
}
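
A hypothetical usage sketch for PriorityQueue, assuming an etcd v3 server at localhost:2379; lower `pr` values are dequeued first because of the zero-padded priority prefix:

```go
package main

import (
	"fmt"
	"log"

	v3 "github.com/coreos/etcd/clientv3"
	recipe "github.com/coreos/etcd/contrib/recipes"
)

func main() {
	cli, err := v3.New(v3.Config{Endpoints: []string{"localhost:2379"}})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	q := recipe.NewPriorityQueue(cli, "/my-pq") // key prefix is illustrative
	if err := q.Enqueue("routine", 100); err != nil {
		log.Fatal(err)
	}
	if err := q.Enqueue("urgent", 1); err != nil {
		log.Fatal(err)
	}
	v, err := q.Dequeue() // returns "urgent" first: lower priority value wins
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(v)
}
```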

vendor/github.com/coreos/etcd/contrib/recipes/queue.go generated vendored Normal file

@ -0,0 +1,76 @@
// Copyright 2016 The etcd Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package recipe
import (
v3 "github.com/coreos/etcd/clientv3"
"github.com/coreos/etcd/mvcc/mvccpb"
"golang.org/x/net/context"
)
// Queue implements a multi-reader, multi-writer distributed queue.
type Queue struct {
client *v3.Client
ctx context.Context
keyPrefix string
}
func NewQueue(client *v3.Client, keyPrefix string) *Queue {
return &Queue{client, context.TODO(), keyPrefix}
}
func (q *Queue) Enqueue(val string) error {
_, err := newUniqueKV(q.client, q.keyPrefix, val)
return err
}
// Dequeue returns Enqueue()'d elements in FIFO order. If the
// queue is empty, Dequeue blocks until elements are available.
func (q *Queue) Dequeue() (string, error) {
// TODO: fewer round trips by fetching more than one key
resp, err := q.client.Get(q.ctx, q.keyPrefix, v3.WithFirstRev()...)
if err != nil {
return "", err
}
kv, err := claimFirstKey(q.client, resp.Kvs)
if err != nil {
return "", err
} else if kv != nil {
return string(kv.Value), nil
} else if resp.More {
// missed some items, retry to read in more
return q.Dequeue()
}
// nothing yet; wait on elements
ev, err := WaitPrefixEvents(
q.client,
q.keyPrefix,
resp.Header.Revision,
[]mvccpb.Event_EventType{mvccpb.PUT})
if err != nil {
return "", err
}
ok, err := deleteRevKey(q.client, string(ev.Kv.Key), ev.Kv.ModRevision)
if err != nil {
return "", err
} else if !ok {
return q.Dequeue()
}
return string(ev.Kv.Value), err
}
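
A hypothetical usage sketch for Queue, assuming an etcd v3 server at localhost:2379; Dequeue blocks while the queue is empty, so the consumer below simply loops:

```go
package main

import (
	"fmt"
	"log"

	v3 "github.com/coreos/etcd/clientv3"
	recipe "github.com/coreos/etcd/contrib/recipes"
)

func main() {
	cli, err := v3.New(v3.Config{Endpoints: []string{"localhost:2379"}})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	q := recipe.NewQueue(cli, "/my-queue") // key prefix is illustrative

	// Producer: enqueue a few items from another goroutine (or process).
	go func() {
		for _, item := range []string{"a", "b", "c"} {
			if err := q.Enqueue(item); err != nil {
				log.Println(err)
			}
		}
	}()

	// Consumer: Dequeue blocks until an element is available.
	for i := 0; i < 3; i++ {
		v, err := q.Dequeue()
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(v) // items come back in FIFO order
	}
}
```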


@ -0,0 +1,88 @@
// Copyright 2016 The etcd Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package recipe
import (
v3 "github.com/coreos/etcd/clientv3"
"github.com/coreos/etcd/clientv3/concurrency"
"github.com/coreos/etcd/mvcc/mvccpb"
"golang.org/x/net/context"
)
type RWMutex struct {
s *concurrency.Session
ctx context.Context
pfx string
myKey *EphemeralKV
}
func NewRWMutex(s *concurrency.Session, prefix string) *RWMutex {
return &RWMutex{s, context.TODO(), prefix + "/", nil}
}
func (rwm *RWMutex) RLock() error {
rk, err := newUniqueEphemeralKey(rwm.s, rwm.pfx+"read")
if err != nil {
return err
}
rwm.myKey = rk
// wait until nodes with "write-" and a lower revision number than myKey are gone
for {
if done, werr := rwm.waitOnLastRev(rwm.pfx + "write"); done || werr != nil {
return werr
}
}
}
func (rwm *RWMutex) Lock() error {
rk, err := newUniqueEphemeralKey(rwm.s, rwm.pfx+"write")
if err != nil {
return err
}
rwm.myKey = rk
// wait until all keys of lower revision than myKey are gone
for {
if done, werr := rwm.waitOnLastRev(rwm.pfx); done || werr != nil {
return werr
}
// get the new lowest key until this is the only one left
}
}
// waitOnLastRev waits on the last key with a revision < rwm.myKey.Revision() with a
// given prefix. If there are no keys left to wait on, return true.
func (rwm *RWMutex) waitOnLastRev(pfx string) (bool, error) {
client := rwm.s.Client()
// get key that's blocking myKey
opts := append(v3.WithLastRev(), v3.WithMaxModRev(rwm.myKey.Revision()-1))
lastKey, err := client.Get(rwm.ctx, pfx, opts...)
if err != nil {
return false, err
}
if len(lastKey.Kvs) == 0 {
return true, nil
}
// wait for release on blocking key
_, err = WaitEvents(
client,
string(lastKey.Kvs[0].Key),
rwm.myKey.Revision(),
[]mvccpb.Event_EventType{mvccpb.DELETE})
return false, err
}
func (rwm *RWMutex) RUnlock() error { return rwm.myKey.Delete() }
func (rwm *RWMutex) Unlock() error { return rwm.myKey.Delete() }
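
A hypothetical usage sketch for RWMutex, assuming an etcd v3 server at localhost:2379 and a concurrency session that backs the ephemeral lock keys:

```go
package main

import (
	"log"

	v3 "github.com/coreos/etcd/clientv3"
	"github.com/coreos/etcd/clientv3/concurrency"
	recipe "github.com/coreos/etcd/contrib/recipes"
)

func main() {
	cli, err := v3.New(v3.Config{Endpoints: []string{"localhost:2379"}})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	s, err := concurrency.NewSession(cli) // lease for the ephemeral lock keys
	if err != nil {
		log.Fatal(err)
	}
	defer s.Close()

	m := recipe.NewRWMutex(s, "/my-rwlock") // prefix is illustrative

	if err := m.RLock(); err != nil { // shared: many readers may hold it
		log.Fatal(err)
	}
	// ... read shared state ...
	if err := m.RUnlock(); err != nil {
		log.Fatal(err)
	}

	if err := m.Lock(); err != nil { // exclusive: waits for readers and writers
		log.Fatal(err)
	}
	// ... modify shared state ...
	if err := m.Unlock(); err != nil {
		log.Fatal(err)
	}
}
```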

vendor/github.com/coreos/etcd/contrib/recipes/watch.go generated vendored Normal file

@ -0,0 +1,53 @@
// Copyright 2016 The etcd Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package recipe
import (
"github.com/coreos/etcd/clientv3"
"github.com/coreos/etcd/mvcc/mvccpb"
"golang.org/x/net/context"
)
// WaitEvents waits on a key until it observes the given events and returns the final one.
func WaitEvents(c *clientv3.Client, key string, rev int64, evs []mvccpb.Event_EventType) (*clientv3.Event, error) {
wc := c.Watch(context.Background(), key, clientv3.WithRev(rev))
if wc == nil {
return nil, ErrNoWatcher
}
return waitEvents(wc, evs), nil
}
func WaitPrefixEvents(c *clientv3.Client, prefix string, rev int64, evs []mvccpb.Event_EventType) (*clientv3.Event, error) {
wc := c.Watch(context.Background(), prefix, clientv3.WithPrefix(), clientv3.WithRev(rev))
if wc == nil {
return nil, ErrNoWatcher
}
return waitEvents(wc, evs), nil
}
func waitEvents(wc clientv3.WatchChan, evs []mvccpb.Event_EventType) *clientv3.Event {
i := 0
for wresp := range wc {
for _, ev := range wresp.Events {
if ev.Type == evs[i] {
i++
if i == len(evs) {
return ev
}
}
}
}
return nil
}


@ -0,0 +1,17 @@
[Unit]
Description=etcd key-value store
Documentation=https://github.com/coreos/etcd
After=network.target
[Service]
User=etcd
Type=notify
Environment=ETCD_DATA_DIR=/var/lib/etcd
Environment=ETCD_NAME=%m
ExecStart=/usr/bin/etcd
Restart=always
RestartSec=10s
LimitNOFILE=40000
[Install]
WantedBy=multi-user.target


@ -0,0 +1,4 @@
rclone.conf
bin
etcd2-backup.tgz
*~


@ -0,0 +1,9 @@
[Service]
Environment="ETCD_RESTORE_MASTER_ADV_PEER_URLS=http://172.17.4.51:2379"
Environment="RCLONE_ENDPOINT=s3-chom-testing-backups:chom-testing-backups/mytest"
Environment="RCLONE_CONFIG_PATH=/etc/rclone.conf"
Environment="ETCD_DATA_DIR=/var/lib/etcd2"
Environment="ETCD_BACKUP_DIR=/var/lib/etcd2-backup"
Environment="ETCD_RESTORE_DIR=/var/lib/etcd2-restore"
Environment="RCLONE_CHECKSUM=true"


@ -0,0 +1,251 @@
# etcd2-backup-coreos
Remote backup and multi-node restore services for etcd2 clusters on CoreOS Linux.
**Warning:** This package is only intended for use on CoreOS Linux.
## Terminology
**Founding member**: The node that is the first member of the new, recovered cluster. Only this node's rclone backup data is used to restore the cluster; the remaining nodes join with no data and simply catch up with the **founding member**.
## Configuration
Before installing etcd2-backup, you need to configure `30-etcd2-backup-restore.conf`.
```
[Service]
Environment="ETCD_RESTORE_MASTER_ADV_PEER_URLS=<http://host:port>"
Environment="RCLONE_ENDPOINT=remote-name:path/to/backups"
```
Assuming you're deploying to CoreOS with etcd2, you should only need to change
* `ETCD_RESTORE_MASTER_ADV_PEER_URLS`
This is the advertised peer URL of the etcd2 node that will be the founding member of the restored cluster. We will call this node the **founding member**.
* `RCLONE_ENDPOINT`
The rclone endpoint to which backups will be stored.
Feel free to point any number of machines at the same RCLONE_ENDPOINT, path and all. Backups for each machine are stored in a sub-folder named with the machine ID (%m in systemd parlance).
* `./rclone.conf`
The rclone configuration file which will be installed. Must list a `[section]` which matches `RCLONE_ENDPOINT`'s remote-name component.
An easy way to generate this config file is to [install rclone](http://rclone.org/install/) on your local machine. Then follow the [configuration instructions](http://rclone.org/docs/) to generate an `rclone.conf` file.
If you want to adjust backup frequency, edit `./etcd2-backup.timer`
## Installation
Once you've got those things configured, you can run `./build`.
The `build` script generates a tarball for copying to CoreOS instances. The tarball contains the `etcd2-backup-install` script.
After extracting the contents of the tar file and running the install script, three new systemd services are added. One service, `etcd2-backup`, performs periodic etcd backups, while the other two services, `etcd2-restore` and `etcd2-join`, handle restore procedures.
* `etcd2-backup.service`
A oneshot service which calls `etcdctl backup` and syncs the backups to the rclone endpoint (using an rclone container, of course). `etcd2-backup.timer` is responsible for periodically running this service.
* `etcd2-restore.service`
A oneshot service which wipes all etcd2 data and restores a single-node cluster from the rclone backup. This is for restoring the **founding member** only.
* `etcd2-join.service`
A oneshot service which wipes all etcd2 data and re-joins the new cluster. This is for adding members **after** the **founding member** has successfully established the new cluster via `etcd2-restore.service`.
## Recovery
This assumes that your cluster has lost quorum, and is not recoverable. Otherwise you should probably try to heal your cluster first.
### Backup Freshness
Two factors contribute to the relative freshness or staleness of a backup. The `etcd2-backup.timer` takes a backup every 30 seconds by default, and the etcd `snapshot-count` option controls how many transactions are committed between each write of the snapshot to permanent storage. Given those parameters, we can compute the upper bound on the outdatedness of a backup.
Assumptions:
* transaction rate is a constant `1000 transactions / second`
* `etcd2-backup.timer` is configured for a 30 second interval
* `etcd2 snapshot-count=10000`
```
max-missed-seconds = (10000 transactions / (1000 transactions/second)) + 30 seconds = 40 seconds
```
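The same bound can be expressed as a tiny helper function; this is illustrative only and not part of this package:

```go
package main

import "fmt"

// maxMissedSeconds is an illustrative helper (not part of etcd2-backup): the
// worst case loses the transactions not yet flushed by snapshot-count, plus
// one full backup interval.
func maxMissedSeconds(snapshotCount, txPerSecond, backupIntervalSec float64) float64 {
	return snapshotCount/txPerSecond + backupIntervalSec
}

func main() {
	// 10000 txns / (1000 txns/s) + 30 s = 40 s, matching the example above.
	fmt.Println(maxMissedSeconds(10000, 1000, 30))
}
```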
### Recovery Procedure
1. Make sure `etcd2.service` and `etcd2-backup.timer` are stopped on all nodes in the cluster
2. Restore the **founding member** by starting `etcd2-restore.service` and then, if successful, `etcd2.service`
3. Restore the rest of the cluster **one at a time**. Start `etcd2-join.service`, and then, if successful, `etcd2.service`. Please verify with `etcdctl cluster-health` that the expected set of nodes is present and healthy after each node joins.
4. Verify that your data is sane (enough). If so, kick off `etcd2-backup.timer` on all nodes and, hopefully, go back to bed.
## Retroactively change the founding member
To restore the cluster from any other node's backup, you must change the cluster's founding member.
Change the value of `ETCD_RESTORE_MASTER_ADV_PEER_URLS` in `30-etcd2-backup-restore.conf` to the advertised peer URL of the new founding member, then repeat the install process above on all nodes in the cluster and proceed with the [recovery procedure](README.md#recovery-procedure); a sketch of the edit follows.
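As a rough sketch, assuming the drop-in is edited in your working copy of the package and `e2` (from the example below) is to become the new founding member:
```sh
# Illustrative: make e2 the founding member, then rebuild and re-install on every node
sed -i 's#ETCD_RESTORE_MASTER_ADV_PEER_URLS=.*#ETCD_RESTORE_MASTER_ADV_PEER_URLS=http://172.17.4.52:2379"#' 30-etcd2-backup-restore.conf
./build
```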
## Example
Let's pretend that we have an initial 3-node CoreOS cluster that we want to back up to S3.
| ETCD_NAME | ETCD_ADVERTISED_PEER_URL |
| ------------- |:-------------:|
| e1 | http://172.17.4.51:2379 |
| e2 | http://172.17.4.52:2379 |
| e3 | http://172.17.4.53:2379 |
In the event that the cluster fails, we want to restore it from `e1`'s backup.
## Configuration
```
[Service]
Environment="ETCD_RESTORE_MASTER_ADV_PEER_URLS=http://172.17.4.51:2379"
Environment="RCLONE_ENDPOINT=s3-testing-conf:s3://etcd2-backup-bucket/backups"
```
The `./rclone.conf` file must contain a `[section]` matching `RCLONE_ENDPOINT`'s remote-name component.
```
[s3-testing-conf]
type = s3
access_key_id = xxxxxxxx
secret_access_key = xxxxxx
region = us-west-1
endpoint =
location_constraint =
```
## Installation
```sh
cd etcd2-backup
./build
scp etcd2-backup.tgz core@e1:~/
ssh core@e1
e1 $ mkdir -p ~/etcd2-backup
e1 $ mv etcd2-backup.tgz etcd2-backup/
e1 $ cd etcd2-backup
e1 $ tar zxvf etcd2-backup.tgz
e1 $ ./etcd2-backup-install
# Only do the following two commands if this node should generate backups
e1 $ sudo systemctl enable etcd2-backup.timer
e1 $ sudo systemctl start etcd2-backup.timer
e1 $ exit
```
Now `e1`'s etcd data will be backed up to `s3://etcd2-backup-bucket/backups/<e1-machine-id>/` according to the schedule described in `etcd2-backup.timer`.
You should repeat the process for `e2` and `e3`. If you do not want a node to generate backups, omit enabling and starting `etcd2-backup.timer`.
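To confirm that backups are actually landing where you expect, something along these lines can help (this assumes the aws CLI is configured on your workstation; the machine ID in the path is simply what `%m` expands to):
```sh
# The machine ID used in the backup path is the node's /etc/machine-id
ssh core@e1 "cat /etc/machine-id"
# List that node's backups (substitute the ID printed above)
aws s3 ls --recursive s3://etcd2-backup-bucket/backups/<e1-machine-id>/
```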
## Restore the cluster
Let's assume that a mischievous friend decided it would be a good idea to corrupt the etcd2 data-dir on ALL of your nodes (`e1`, `e2`, `e3`). You simply want to restore the cluster from `e1`'s backup.
Here's how you would recover:
```sh
# First, ENSURE etcd2 and etcd2-backup are not running on any nodes
for node in e{1..3};do
ssh core@$node "sudo systemctl stop etcd2.service etcd2-backup.{timer,service}"
done
ssh core@e1 "sudo systemctl start etcd2-restore.service && sudo systemctl start etcd2.service"
for node in e{2..3};do
ssh core@$node "sudo systemctl start etcd2-join.service && sudo systemctl start etcd2.service"
sleep 10
done
```
After e2 and e3 finish catching up, your cluster should be back to normal.
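A quick way to check, for example from your workstation:
```sh
# Verify quorum and membership after the restore
ssh core@e1 "etcdctl cluster-health && etcdctl member list"
```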
## Migrate the cluster
The same friend who corrupted your etcd2 data-dirs decided that you have not had enough fun. This time, your friend dumps coffee on the machines hosting `e1`, `e2` and `e3`. There is a horrible smell, and the machines are dead.
Luckily, you have a new 3-node etcd2 cluster ready to go, along with the S3 backup for `e1` from your old cluster.
The new cluster configuration looks like this. Assume that etcd2-backup is not yet installed (if it is, make sure it is not running on any node).
| ETCD_NAME | ETCD_ADVERTISED_PEER_URL |
| ------------- |:-------------:|
| q1 | http://172.17.8.201:2379 |
| q2 | http://172.17.8.202:2379 |
| q3 | http://172.17.8.203:2379 |
We will assume `q1` is the chosen founding member, though you can pick any node you like.
## Migrate the remote backup
First, you need to copy the backup from `e1`'s backup folder to `q1`'s backup folder. Here is the S3 version:
```sh
# Make sure to remove q1's backup directory, if it exists already
aws s3 rm --recursive s3://etcd2-backup-bucket/backups/<q1-machine-id>
aws s3 cp --recursive s3://etcd2-backup-bucket/backups/<e1-machine-id> s3://etcd2-backup-bucket/backups/<q1-machine-id>
```
## Configure the New Cluster
```
[Service]
Environment="ETCD_RESTORE_MASTER_ADV_PEER_URLS=http://172.17.8.201:2379"
Environment="RCLONE_ENDPOINT=s3-testing-conf:s3://etcd2-backup-bucket/backups"
```
Since this is a new cluster, each node has a new `machine-id`, so the old cluster's backups will not be clobbered even though `RCLONE_ENDPOINT` is the same for both the old `e` cluster and the new `q` cluster.
## Installation
We first want to install the configured etcd2-backup package on all nodes, but not start any services yet.
```sh
cd etcd2-backup
./build
# Each ssh invocation starts a fresh shell, so chain dependent commands in one session
for node in q{1..3};do
    scp etcd2-backup.tgz core@$node:~/
    ssh core@$node "mkdir -p ~/etcd2-backup && mv etcd2-backup.tgz etcd2-backup/"
    ssh core@$node "cd etcd2-backup && tar zxvf etcd2-backup.tgz && ./etcd2-backup-install"
done
```
## Migrate the Cluster
With `q1` as the founding member:
```sh
# First, make SURE etcd2 and etcd2-backup are not running on any nodes
for node in q{1..3};do
ssh core@$node "sudo systemctl stop etcd2.service"
done
ssh core@q1 "sudo systemctl start etcd2-restore.service && sudo systemctl start etcd2.service"
for node in q{2..3};do
ssh core@$node "sudo systemctl start etcd2-join.service && sudo systemctl start etcd2.service"
sleep 10
done
```
Once you've verified the cluster has migrated properly, start and enable `etcd2-backup.timer` on at least one node.
```sh
ssh core@q1 "sudo systemctl enable etcd2-backup.service && sudo systemctl start etcd2-backup.service"
```
You should now have periodic backups going to: `s3://etcd2-backup-bucket/backups/<q1-machine-id>`
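To check that the timer is firing and that the backup unit is succeeding, something like this (illustrative) can be run against `q1`:
```sh
ssh core@q1 "systemctl list-timers etcd2-backup.timer && sudo journalctl -u etcd2-backup.service -n 20 --no-pager"
```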
## Words of caution
1. Notice the `sleep 10` commands that follow starting `etcd2-join.service` and then `etcd2.service`. The sleep gives the member that just joined time to catch up on the cluster state before we attempt to add the next member, which involves sending the entire snapshot over the network. If your dataset is large, the network between nodes is slow, or your disks are already bogged down, you may need to increase the sleep time.
For large data sets, it is recommended to copy the data directory produced by `etcd2-restore` on the founding member to the other nodes before running `etcd2-join` on them. This avoids etcd transferring the entire snapshot to every node after it joins the cluster.
2. It is not recommended to let clients access the etcd2 cluster **until** all members have been added and have finished catching up.
View file
@ -0,0 +1,25 @@
#!/bin/bash -e
SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
cd "${SCRIPT_DIR}"
if [ ! -f "./rclone.conf" ];then
echo "Could not find $(pwd)/rclone.conf"
exit 1
fi
mkdir -p ./bin
GOPATH=$(pwd) go build -o ./bin/etcd2-restore etcd2-restore.go
tar cfz ./etcd2-backup.tgz \
*.{service,timer,conf} \
etcd2-join \
bin/etcd2-restore \
rclone.conf \
etcd2-backup-install
printf "Install package saved at\n\t -> $(pwd)/etcd2-backup.tgz\n\n"
printf "Copy to target machine and deploy.\n $> tar zxvf etcd2-backup.tgz && ./etcd2-backup-install\n\n"
echo "WARNING: this tarball contains your rclone secrets. Be careful!"
View file
@ -0,0 +1,33 @@
#!/bin/bash -e
if [ ! -f /etc/os-release ];then
echo "Could not find /etc/os-release. This is not CoreOS Linux"
exit 1
fi
. /etc/os-release
if [ ! "$ID" == "coreos" ];then
echo "os-release error: Detected ID=$ID: this is not CoreOS Linux"
exit 1
fi
SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
cd "${SCRIPT_DIR}"
sudo cp ./rclone.conf /etc/
sudo mkdir -p /opt/bin
sudo mv etcd2-join bin/etcd2-restore /opt/bin
sudo mv *.{service,timer} /etc/systemd/system
sudo systemctl daemon-reload
for jobtype in restore backup join;do
sudo mkdir -p /var/run/systemd/system/etcd2-${jobtype}.service.d
sudo cp 30-etcd2-backup-restore.conf /var/run/systemd/system/etcd2-${jobtype}.service.d/
sudo ln -sf /var/run/systemd/system/etcd2{,-${jobtype}}.service.d/20-cloudinit.conf
done
sudo systemctl daemon-reload
echo "etcd2-backup install complete!"
View file
@ -0,0 +1,29 @@
[Unit]
Description=rclone powered etcd2 backup service
After=etcd2.service
[Service]
Type=oneshot
ExecStartPre=/usr/bin/rm -rf ${ETCD_BACKUP_DIR}
ExecStartPre=/usr/bin/mkdir -p ${ETCD_BACKUP_DIR}/member/snap
ExecStartPre=/usr/bin/echo ETCD_DATA_DIR: ${ETCD_DATA_DIR}
ExecStartPre=/usr/bin/echo ETCD_BACKUP_DIR: ${ETCD_BACKUP_DIR}
ExecStartPre=/usr/bin/etcdctl backup --data-dir=${ETCD_DATA_DIR} --backup-dir=${ETCD_BACKUP_DIR}
ExecStartPre=/usr/bin/touch ${ETCD_BACKUP_DIR}/member/snap/iamhere.txt
# Copy the last backup, in case the new upload gets corrupted
ExecStartPre=-/usr/bin/docker run --rm \
-v ${RCLONE_CONFIG_PATH}:/etc/rclone.conf \
quay.io/coreos/rclone:latest --config /etc/rclone.conf --checksum=${RCLONE_CHECKSUM} \
copy ${RCLONE_ENDPOINT}/%m ${RCLONE_ENDPOINT}/%m_backup
# Upload new backup
ExecStart=/usr/bin/docker run --rm \
-v ${ETCD_BACKUP_DIR}:/etcd2backup \
-v ${RCLONE_CONFIG_PATH}:/etc/rclone.conf \
quay.io/coreos/rclone:latest --config ${RCLONE_CONFIG_PATH} --checksum=${RCLONE_CHECKSUM} \
copy /etcd2backup/ ${RCLONE_ENDPOINT}/%m/
[Install]
WantedBy=multi-user.target
View file
@ -0,0 +1,9 @@
[Unit]
Description=etcd2-backup service timer
[Timer]
OnBootSec=1min
OnUnitActiveSec=30sec
[Install]
WantedBy=timers.target
View file
@ -0,0 +1,39 @@
#!/bin/bash -e
# Copyright 2015 CoreOS, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
if [ $# -lt 3 ];then
echo "USAGE: $0 <master_advertise_peer_urls> <target_name> <target_peer_url>"
exit 1
fi
function convertDropin {
sed -e 's/^Added.*$/[Service]/g' -e 's/="/=/g' -e 's/^ETCD_/Environment="ETCD_/g'
}
masterAdvUrl=$1
targetName=$2
targetUrl=$3
cmd="etcdctl --peers ${masterAdvUrl} member add ${targetName} ${targetUrl}"
ENV_VARS=`$cmd`
echo "${ENV_VARS}" | convertDropin > 40-boostrap-cluster.conf
sudo mv 40-boostrap-cluster.conf /var/run/systemd/system/etcd2.service.d/
sudo systemctl daemon-reload
sudo systemctl cat etcd2.service
echo "You can now start etcd2"
View file
@ -0,0 +1,13 @@
[Unit]
Description=Add etcd2 node to existing cluster
Conflicts=etcd2.service etcd2-backup.service
Before=etcd2.service etcd2-backup.service
[Service]
Type=oneshot
ExecStartPre=/usr/bin/rm -rf ${ETCD_DATA_DIR}/member
ExecStartPre=/usr/bin/chown -R etcd:etcd ${ETCD_DATA_DIR}
ExecStart=/opt/bin/etcd2-join ${ETCD_RESTORE_MASTER_ADV_PEER_URLS} ${ETCD_NAME} ${ETCD_INITIAL_ADVERTISE_PEER_URLS}
[Install]
WantedBy=multi-user.target
View file
@ -0,0 +1,126 @@
// Copyright 2015 The etcd Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package main
import (
"flag"
"fmt"
"os"
"os/exec"
"path"
"regexp"
"time"
)
var (
etcdctlPath string
etcdPath string
etcdRestoreDir string
etcdName string
etcdPeerUrls string
)
func main() {
flag.StringVar(&etcdctlPath, "etcdctl-path", "/usr/bin/etcdctl", "absolute path to etcdctl executable")
flag.StringVar(&etcdPath, "etcd-path", "/usr/bin/etcd2", "absolute path to etcd2 executable")
flag.StringVar(&etcdRestoreDir, "etcd-restore-dir", "/var/lib/etcd2-restore", "absolute path to etcd2 restore dir")
flag.StringVar(&etcdName, "etcd-name", "default", "name of etcd2 node")
flag.StringVar(&etcdPeerUrls, "etcd-peer-urls", "", "advertise peer urls")
flag.Parse()
if etcdPeerUrls == "" {
panic("must set -etcd-peer-urls")
}
if finfo, err := os.Stat(etcdRestoreDir); err != nil {
panic(err)
} else {
if !finfo.IsDir() {
panic(fmt.Errorf("%s is not a directory", etcdRestoreDir))
}
}
if !path.IsAbs(etcdctlPath) {
panic(fmt.Sprintf("etcdctl-path %s is not absolute", etcdctlPath))
}
if !path.IsAbs(etcdPath) {
panic(fmt.Sprintf("etcd-path %s is not absolute", etcdPath))
}
if err := restoreEtcd(); err != nil {
panic(err)
}
}
func restoreEtcd() error {
etcdCmd := exec.Command(etcdPath, "--force-new-cluster", "--data-dir", etcdRestoreDir)
etcdCmd.Stdout = os.Stdout
etcdCmd.Stderr = os.Stderr
if err := etcdCmd.Start(); err != nil {
return fmt.Errorf("Could not start etcd2: %s", err)
}
defer etcdCmd.Wait()
defer etcdCmd.Process.Kill()
return runCommands(10, 2*time.Second)
}
var (
clusterHealthRegex = regexp.MustCompile(".*cluster is healthy.*")
lineSplit = regexp.MustCompile("\n+")
colonSplit = regexp.MustCompile(`\:`)
)
func runCommands(maxRetry int, interval time.Duration) error {
var retryCnt int
for retryCnt = 1; retryCnt <= maxRetry; retryCnt++ {
out, err := exec.Command(etcdctlPath, "cluster-health").CombinedOutput()
if err == nil && clusterHealthRegex.Match(out) {
break
}
fmt.Printf("Error: %s: %s\n", err, string(out))
time.Sleep(interval)
}
if retryCnt > maxRetry {
return fmt.Errorf("Timed out waiting for healthy cluster\n")
}
var (
memberID string
out []byte
err error
)
if out, err = exec.Command(etcdctlPath, "member", "list").CombinedOutput(); err != nil {
return fmt.Errorf("Error calling member list: %s", err)
}
members := lineSplit.Split(string(out), 2)
if len(members) < 1 {
return fmt.Errorf("Could not find a cluster member from: \"%s\"", members)
}
parts := colonSplit.Split(members[0], 2)
if len(parts) < 2 {
return fmt.Errorf("Could not parse member id from: \"%s\"", members[0])
}
memberID = parts[0]
out, err = exec.Command(etcdctlPath, "member", "update", memberID, etcdPeerUrls).CombinedOutput()
fmt.Printf("member update result: %s\n", string(out))
return err
}
View file
@ -0,0 +1,26 @@
[Unit]
Description=Restore single-node etcd2 node from rclone endpoint
Conflicts=etcd2.service etcd2-backup.service
Before=etcd2.service etcd2-backup.service
[Service]
Type=oneshot
ExecStartPre=/usr/bin/rm -rf ${ETCD_DATA_DIR}/member
ExecStartPre=/usr/bin/mkdir -p ${ETCD_RESTORE_DIR}
ExecStartPre=/usr/bin/rm -rf ${ETCD_RESTORE_DIR}/member
# Copy the last backup from rclone endpoint
ExecStartPre=/usr/bin/docker run --rm \
-v ${RCLONE_CONFIG_PATH}:/etc/rclone.conf \
-v ${ETCD_RESTORE_DIR}:/etcd2backup \
quay.io/coreos/rclone:latest \
--config /etc/rclone.conf --checksum=${RCLONE_CHECKSUM} \
copy ${RCLONE_ENDPOINT}/%m /etcd2backup
ExecStartPre=/usr/bin/ls -R ${ETCD_RESTORE_DIR}
ExecStartPre=/opt/bin/etcd2-restore -etcd-name ${ETCD_NAME} -etcd-peer-urls ${ETCD_INITIAL_ADVERTISE_PEER_URLS}
ExecStartPre=/usr/bin/cp -r ${ETCD_RESTORE_DIR}/member ${ETCD_DATA_DIR}/member
ExecStart=/usr/bin/chown -R etcd:etcd ${ETCD_DATA_DIR}/member
[Install]
WantedBy=multi-user.target