The user command-line tool

The first of two tools we will build allows the user to add, list, and remove paths for the backup daemon tool (which we will write later). You could expose a web interface, or even use binding packages for desktop user interface integration, but we are going to keep things simple and build ourselves a command-line tool.

Create a new folder called cmds inside the backup folder and create another backup folder inside that so you have backup/cmds/backup.

Inside our new backup folder, add the following code to main.go:

package main

import (
  "errors"
  "flag"
  "log"
)

func main() {
  var fatalErr error
  defer func() {
    if fatalErr != nil {
      flag.PrintDefaults()
      log.Fatalln(fatalErr)
    }
  }()
  var (
    dbpath = flag.String("db", "./backupdata", "path to database directory")
  )
  flag.Parse()
  args := flag.Args()
  if len(args) < 1 {
    fatalErr = errors.New("invalid usage; must specify command")
    return
  }
}

We first define our fatalErr variable and defer a function that checks whether that value is nil. If it is not, the function prints the flag defaults along with the error and exits with a nonzero status code. We then define a flag called db that expects the path to the filedb database directory, before parsing the flags, capturing the remaining arguments, and ensuring that there is at least one.

Persisting small data

In order to keep track of the paths and the hashes that we generate, we will need some kind of data storage mechanism that ideally works even when we stop and start our programs. We have lots of choices here: everything from a text file to a full horizontally scalable database solution. The Go ethos of simplicity tells us that building a database dependency into our little backup program would not be a great idea; rather, we should ask what the simplest way to solve this problem is.

The github.com/matryer/filedb package is an experimental solution for just this kind of problem. It lets you interact with the filesystem as though it were a very simple, schemaless database. It takes its design lead from packages such as mgo and can be used in cases where data querying needs are very simple. In filedb, a database is a folder, and a collection is a file where each line represents a different record. Of course, this could all change as the filedb project evolves, but the interface, hopefully, won't.
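
To make that concrete, a filedb database on disk is simply a folder (backupdata, in our case) containing one file per collection, with each line of a collection file holding a single record. A rough sketch of the layout (the exact file naming is an implementation detail of filedb and may differ):

backupdata/
  paths        <- the "paths" collection; one record per line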

Note

Adding dependencies such as this to a Go project should be done very carefully because over time, dependencies go stale, change beyond their initial scope, or disappear altogether in some cases. While it sounds counterintuitive, you should consider whether copying and pasting a few files into your project is a better solution than relying on an external dependency. Alternatively, consider vendoring the dependency by copying the entire package into the vendor folder of your command. This is akin to storing a snapshot of the dependency that you know works for your tool.
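
As a minimal sketch of that second option, assuming our tool lives in backup/cmds/backup, the vendored copy would sit under a vendor folder that mirrors the package's import path:

backup/cmds/backup/vendor/github.com/matryer/filedb/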

Add the following code to the end of the main function:

db, err := filedb.Dial(*dbpath) 
if err != nil { 
  fatalErr = err 
  return 
} 
defer db.Close() 
col, err := db.C("paths") 
if err != nil { 
  fatalErr = err 
  return 
} 

Here, we use the filedb.Dial function to connect with the filedb database. In actuality, nothing much happens here except specifying where the database is, since there are no real database servers to connect to (although this might change in the future, which is why such provisions exist in the interface). If that was successful, we defer the closing of the database. Closing the database does actually do something, since files may be open that need to be cleaned up.

Following the mgo pattern, next we specify a collection using the C method and keep a reference to it in the col variable. If an error occurs at any point, we assign it to the fatalErr variable and return.

To store data, we are going to define a type called path, which will hold the full path and the last hash value; we will use JSON encoding to store this in our filedb database. Add the following struct definition above the main function:

type path struct { 
  Path string 
  Hash string 
} 
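
Because both fields are exported, the standard encoding/json package can marshal a path value for us. As a quick illustration (a throwaway snippet, not part of our tool), a record stored this way is just a single JSON object:

b, _ := json.Marshal(path{Path: "./test", Hash: "Not yet archived"})
fmt.Println(string(b)) // {"Path":"./test","Hash":"Not yet archived"}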

Parsing arguments

When we call flag.Args (as opposed to os.Args), we receive a slice of arguments excluding the flags. This allows us to mix flag arguments and non-flag arguments in the same tool.
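
As a quick illustration (a throwaway program, not part of our tool), if a binary built as backup is invoked with backup -db=./backupdata add ./one ./two, os.Args includes the program name and the flag, while flag.Args returns only what is left over:

package main

import (
  "flag"
  "fmt"
  "os"
)

func main() {
  flag.String("db", "./backupdata", "path to database directory")
  flag.Parse()
  fmt.Println(os.Args)     // [backup -db=./backupdata add ./one ./two]
  fmt.Println(flag.Args()) // [add ./one ./two]
}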

We want our tool to be able to be used in the following ways:

  • To add a path:
backup -db=/path/to/db add {path} [paths...]
  • To remove a path:
backup -db=/path/to/db remove {path} [paths...]
  • To list all paths:
backup -db=/path/to/db list

To achieve this, since we have already dealt with the flags, we only need to check the first non-flag argument.

Add the following code to the main function:

switch strings.ToLower(args[0]) { 
case "list": 
case "add": 
case "remove": 
} 

Here, we simply switch on the first argument after setting it to lowercase (if the user types backup LIST, we still want it to work).
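
If you would also like the tool to reject commands it does not recognize, one option (our suggestion, not part of the original listing) is to add a default case that sets fatalErr in the same way as our other error paths:

default: 
  fatalErr = fmt.Errorf("unknown command: %s", args[0]) 
  return 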

Listing the paths

To list the paths in the database, we are going to use the ForEach method on our paths collection (the col variable). Add the following code to the list case:

var path path 
col.ForEach(func(i int, data []byte) bool { 
  err := json.Unmarshal(data, &path) 
  if err != nil { 
    fatalErr = err 
    return true 
  } 
  fmt.Printf("= %s
", path) 
  return false 
}) 

We pass a callback function into ForEach, which will be called for every item in the collection. We unmarshal each record from JSON into our path type and print it out using fmt.Printf. Per the filedb interface, returning true stops the iteration, so we return true only when unmarshalling fails; otherwise we return false to make sure we list every path.

String representations for your own types

If you print structs in Go in this way, using the %s format verb, you can get some messy results that are difficult for users to read. If, however, the type implements a String() string method, that method will be used instead, and we can use this to control what gets printed. Below the path struct, add the following method:

func (p path) String() string { 
  return fmt.Sprintf("%s [%s]", p.Path, p.Hash) 
} 

This tells the path type how it should represent itself as a string.
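
To see the effect, compare what fmt prints for the same value with and without the method (a quick sketch; the default struct formatting comes from the fmt package):

p := path{Path: "./test", Hash: "Not yet archived"} 
fmt.Printf("%s\n", p) 
// without the String method: {./test Not yet archived} 
// with the String method:    ./test [Not yet archived] 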

Adding paths

To add a path, or many paths, we are going to iterate over the remaining arguments and call the InsertJSON method for each one. Add the following code to the add case:

if len(args[1:]) == 0 { 
  fatalErr = errors.New("must specify path to add") 
  return 
} 
for _, p := range args[1:] { 
  path := &path{Path: p, Hash: "Not yet archived"} 
  if err := col.InsertJSON(path); err != nil { 
    fatalErr = err 
    return 
  } 
  fmt.Printf("+ %s
", path) 
} 

If the user hasn't specified any additional arguments, for example if they just called backup add without typing any paths, we return a fatal error. Otherwise, we do the work and print out the path string (prefixed with a + symbol) to indicate that it was successfully added. By default, we set the hash to the Not yet archived string literal. This is not a valid hash, but it serves the dual purpose of letting the user know that the path hasn't yet been archived and of indicating as much to our code (given that a hash of the folder will never equal that string).

Removing paths

To remove a path, or many paths, we use the RemoveEach method on the paths collection. Add the following code to the remove case:

var path path 
col.RemoveEach(func(i int, data []byte) (bool, bool) { 
  err := json.Unmarshal(data, &path) 
  if err != nil { 
    fatalErr = err 
    return false, true 
  } 
  for _, p := range args[1:] { 
    if path.Path == p { 
      fmt.Printf("- %s
", path) 
      return true, false 
    } 
  } 
  return false, false 
}) 

The callback function we provide to RemoveEach expects us to return two bool types: the first one indicates whether the item should be removed or not, and the second one indicates whether we should stop iterating or not.

Using our new tool

We have completed our simple backup command-line tool. Let's look at it in action. Create a folder called backupdata inside backup/cmds/backup; this will become the filedb database.

Build the tool in a terminal by navigating to the folder containing main.go and running this:

go build -o backup

If all is well, we can now add some paths:

./backup -db=./backupdata add ./test ./test2

You should see the expected output:

+ ./test [Not yet archived]
+ ./test2 [Not yet archived]

Now let's add another path:

./backup -db=./backupdata add ./test3

Then ask for the complete list of paths:

./backup -db=./backupdata list

Our program should yield the following:

= ./test [Not yet archived]
= ./test2 [Not yet archived]
= ./test3 [Not yet archived]

Let's remove test3 in order to make sure the remove functionality is working:

./backup -db=./backupdata remove ./test3
./backup -db=./backupdata list

The remove command prints - ./test3 [Not yet archived] to confirm the removal, and the list takes us back to this:

= ./test [Not yet archived]
= ./test2 [Not yet archived]

We are now able to interact with the filedb database in a way that makes sense for our use case. Next, we build the daemon program that will actually use our backup package to do the work.
