LukasKorsikaDesignStudy


The Problem

The project I am designing in this study is an application to help me manage my files. I tend to have a number of copies of the same file scattered throughout my various computers and hard drives. I'm sure some of you also have this problem. The reasons this occurs include:

  • Some partitions/disks are only accessible under Linux, so I copy files over to Windows-accessible partitions.
  • I often copy videos to my laptop to watch away from my desk.
  • As above, it's useful to have music on both my laptop and my desktop.

While there are clever solutions to this at the filesystem level and the like, these are far too complicated and unstable for me to use on an everyday basis. This program performs a much simpler task: it will go through my filesystem and tell me which files are the same. I can then use this information to decide how to deal with the situation.

The input to the program is a list of files to be compared for duplication, and the output is a set of groups of identical files, with each group separated by a new line. Originally I wanted to pass in a list of paths, but this method allows me much more flexibility. Given that this program is intended to be used primarily under Linux, I can use a tool like find(1) to generate the list of files I am interested in. This gives me the power to limit my comparisons to, for example, files with the .avi extension. Adding support for such advanced operations to this program would duplicate existing functionality, and violate the Unix philosophy of Do One Thing, And Do It Well (in OO terms, the Single responsibility principle).

It is based on an existing program I wrote in C many years ago (well, about three). The goal of this exercise is to improve the design to a level at which I am happy with it, and then implement it in C# (running under Mono).

For interest's sake, the original was around 1200 lines of dense C code. I really needed to improve the design, and this seemed like a good opportunity.

Requirements

A basic description of requirements the solution must fulfill.

  • Must take a list of files as input, and output groups of identical files
  • Must support a variety of approaches for determining equality -- for instance, file size, hashes, modification time, and the raw file contents.
  • Must use reasonable amounts of memory, I/O bandwidth, and time.
  • Should be file-system agnostic (that is, it shouldn't be limited to one particular file system)
  • Should be extensible, to ease implementation of possible future features (the original does not fulfill this requirement).

Initial Design

(Converted to an object-oriented form from the original C source, so some liberties have been taken in turning it into classes, but this is essentially its original form.)

Also note that this diagram does not contain all the methods and functions, only the ones I mention in the design criticisms.

Lko15-OldUML.png

Design Description

As this program was originally written in C, most of the code is in a variety of functions which don't really belong to any class. That is represented by the obvious God Class in this diagram. There are a few classes which this main program uses to perform its task. The helper classes are:

  • File -- This represents a file on the file system, and has methods to find its size and its SHA-1 hash (the only supported pieces of data used to check for uniqueness).
  • Tree -- This is a simple class representing a tree: basically a binary tree used to store the set of files corresponding to each value (each filesize, etc). A tree is composed of a set of TreeNodes, and stores a reference to the root. It also has a prune method, which uses a recursive algorithm to remove all TreeNodes with one or fewer (<= 1) files in them.
  • TreeNode -- A tree node represents a node in a binary tree. It stores its key (which may be size or hash depending on the tree) and a list of all files which have that value. TreeNode has a number of recursive methods to iterate over the tree, get the list of files at a node, and insert a new file under a key. (Both classes are sketched below.)
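
To make the structure concrete, here is a minimal C# sketch of how the Tree/TreeNode pair described above might look. This is a reconstruction rather than the original code: the member names are my own guesses, and plain path strings stand in for File objects to keep the fragment self-contained.

```csharp
using System.Collections.Generic;

// Sketch of the original TreeNode: a node in a binary tree, keyed by size or hash,
// holding every file (here represented by its path) that shares that key.
class TreeNode
{
    public string Key;
    public List<string> Files = new List<string>();
    public TreeNode Left, Right;

    // Recursive insert: find or create the node for this key, then attach the file to it.
    public void Insert(string key, string file)
    {
        int cmp = string.CompareOrdinal(key, Key);
        if (cmp == 0) { Files.Add(file); return; }
        if (cmp < 0)
        {
            if (Left == null) Left = new TreeNode { Key = key };
            Left.Insert(key, file);
        }
        else
        {
            if (Right == null) Right = new TreeNode { Key = key };
            Right.Insert(key, file);
        }
    }

    // Recursive in-order traversal, collecting the list of files stored at each node.
    public void CollectGroups(List<List<string>> groups)
    {
        if (Left != null) Left.CollectGroups(groups);
        groups.Add(Files);
        if (Right != null) Right.CollectGroups(groups);
    }
}

// Sketch of the original Tree: holds the root and exposes insert, traversal and prune.
class Tree
{
    TreeNode root;

    public void Insert(string key, string file)
    {
        if (root == null) root = new TreeNode { Key = key };
        root.Insert(key, file);
    }

    public List<List<string>> Groups()
    {
        var groups = new List<List<string>>();
        if (root != null) root.CollectGroups(groups);
        return groups;
    }

    // Recursively remove every node holding one or fewer files, i.e. every unique key.
    public void Prune() { root = Prune(root); }

    static TreeNode Prune(TreeNode node)
    {
        if (node == null) return null;
        node.Left = Prune(node.Left);
        node.Right = Prune(node.Right);
        if (node.Files.Count > 1) return node;       // keep nodes with possible duplicates
        if (node.Left == null) return node.Right;    // otherwise splice this node out,
        if (node.Right == null) return node.Left;    // reattaching any surviving children
        TreeNode max = node.Left;
        while (max.Right != null) max = max.Right;   // hang the right subtree under the
        max.Right = node.Right;                      // largest node of the left subtree
        return node.Left;
    }
}
```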

Most of the functionality occurs in global functions outside of classes. These are shown in the God class in the diagram. This god class handles the parsing of arguments, the reading of the file list from stdin, building the groups of identical files, and outputting the result.

Grouping is performed by calculating the relevant statistic for each file (in this version either size or hash). This is used as a key for the file when it is inserted into the tree. Note that our Tree class supports having multiple values for a single key. The program then calls the tree's prune() method, which removes all nodes with only one value. In real terms, this represents getting rid of all files with a unique size/hash.

This process is first performed for size (as it is quickly obtainable). Each node now contains a set of files with the same size, which may be identical. To discover whether they are (most likely) identical we repeat the process using the hash of each file in a node. We do this on a node-by-node basis, as files with different sizes are never equal. Any node that is left in the tree of hashes holds a list of files with the same size and the same hash. These are most likely identical, and are output as a group to the standard output.
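
Putting the two passes together, the main program's grouping step might look roughly like the following. Again this is a hedged sketch, not the original source: it reuses the Tree sketch above, and for brevity it computes sizes and SHA-1 hashes directly with .NET's FileInfo and SHA1 classes rather than going through the File class.

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Security.Cryptography;

// Hypothetical reconstruction of the original main program's grouping logic.
static class DuplicateFinder
{
    static void Main()
    {
        // Read the candidate file list from standard input, one path per line.
        var paths = new List<string>();
        string line;
        while ((line = Console.ReadLine()) != null)
            if (line.Length > 0) paths.Add(line);

        // Pass 1: group by size, then prune away files with a unique size.
        // (String keys are fine here: only equality matters for grouping.)
        var sizeTree = new Tree();
        foreach (string path in paths)
            sizeTree.Insert(new FileInfo(path).Length.ToString(), path);
        sizeTree.Prune();

        // Pass 2: within each same-size group, group again by SHA-1 hash and prune.
        foreach (List<string> sameSize in sizeTree.Groups())
        {
            var hashTree = new Tree();
            foreach (string path in sameSize)
                hashTree.Insert(Sha1Of(path), path);
            hashTree.Prune();

            // Whatever survives shares both size and hash: report it as a duplicate group.
            foreach (List<string> duplicates in hashTree.Groups())
                Console.WriteLine(string.Join(" ", duplicates.ToArray()));
        }
    }

    static string Sha1Of(string path)
    {
        using (var sha1 = SHA1.Create())
        using (var stream = new FileInfo(path).OpenRead())
            return BitConverter.ToString(sha1.ComputeHash(stream));
    }
}
```

In use, the file list arrives on standard input, for example piped in from find(1) as described earlier.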

Criticisms

Along with the proposed solutions:

  • Uses a God class -- many sub-issues to do with this.
    • => Separate functionality into appropriate classes.
  • TreeNode deals with both maintaining collections of files, as well as implementing a binary tree. This violates the Single responsibility principle.
    • => Remove TreeNode, and instead make a Tree class that uses a MultiMap (which we probably have to implement ourselves -- *sigh*, C#) to store the files. A sketch of such a MultiMap appears after this list.
  • The prune method (which removes nodes with only one file from the tree) should perhaps be in TreeNode rather than Tree, to Keep related data and behavior in one place.
  • The God class shouldn't have to ask the File for its size/hash. It should instead tell the collection to insert it using whatever hashing method that collection uses -- Tell, don't ask.
    • => What files are being sorted by really depends on the collection, so perhaps a collection should know this information. This way one can simply .Insert(File) into the tree. This could use a Classifier interface, which would be implemented by various concrete classifiers such as SizeClassifier, HashClassifier, etc, and used by the Tree to resolve a file into a sorting key.
    • In the end I decided to create a Grouper that takes one file group and returns a list of file groups for each classification value. (see below)
  • The tree should know what it's grouping by, rather than that information being implied by the variable storing the tree (e.g. Tree sizeTree) -- Keep related data and behavior in one place.
    • (See above)
  • The File class shouldn't calculate hashes -- Single responsibility principle.
    • => Create a HashAlgorithm class, which can be instantiated by the HashClassifier. Neither HashAlgorithm nor File knows about the existence of the other. As they are somewhat unrelated this keeps coupling low, making it much easier to reuse these classes.
  • It should be possible to add new key types to group by without modifying existing classes. This is in keeping with the Open closed principle, and hints strongly at Beware type switches, as that is essentially what hard-coded key types are.
    • => The Classifier approach suggested above nicely deals with that problem as creating a new key type is as simple as creating a new concrete Classifier.
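
As an aside, the MultiMap suggested above doesn't need to be large. Here is a minimal sketch, assuming a Dictionary of HashSets underneath; all the names are my own, but something along these lines becomes the MapSetMultiMap of the new design below.

```csharp
using System.Collections.Generic;

// Minimal multimap: maps each key to the set of values filed under it.
class MultiMap<TKey, TValue>
{
    readonly Dictionary<TKey, HashSet<TValue>> map = new Dictionary<TKey, HashSet<TValue>>();

    public void Add(TKey key, TValue value)
    {
        HashSet<TValue> set;
        if (!map.TryGetValue(key, out set))
        {
            set = new HashSet<TValue>();
            map[key] = set;
        }
        set.Add(value);
    }

    public IEnumerable<TValue> Get(TKey key)
    {
        HashSet<TValue> set;
        if (map.TryGetValue(key, out set)) return set;
        return new TValue[0];
    }

    // Every value set, e.g. every group of files that shares a key.
    public IEnumerable<HashSet<TValue>> ValueSets { get { return map.Values; } }
}
```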

New Version

The improved version of the design, taking into account the above criticisms and suggestions.

Lko15-R1UML.png

Design Description

This new design has moved most of the functionality out of the God class.

  • The Program class is responsible for parsing the command line arguments and standard input. It also maintains a list of file groups, and outputs them at the end.
  • The FileGroup class maintains a list of files which we suppose to be identical, as well as a textual description of the properties which lead us to suppose they are identical. In the current version of the program, this means each FileGroup stores a string of the format "Size: xKB" or "Size: xKB, Hash: ab2234989797b...".
  • The File class is the same as in the original design, except that the hash method has been moved to the HashAlgorithm interface, which takes merely an array of bytes to operate on. These can be read from the file using getContents(). This is because the File class shouldn't be responsible for calculating hashes, as per the Single responsibility principle. (A sketch of this split appears after this list.)
  • The Tree class has been renamed to Grouper. The actual tree algorithm has been separated out into the (generic) MultiMap interface, and its concrete implementation MapSetMultiMap -- Program to the interface not the implementation.
  • Grouper is now an abstract class responsible for grouping the files by size/hash/etc. It has concrete implementations for Size and Hash in this design. The addFile method adds a file to the group, and categorises it based on the concrete type of the grouper. addFile is a Template_Method, with classify and describe as its primitive operations. This deviates from the normal pattern a little as makeGroups also uses one of the primitive operations, but that's a minor deviation. makeGroups returns a set of all file groups with more than one member, in FileGroup form. It also uses the describe method to set a user-friendly description for the FileGroup (such as "Size: 103KB"). A sketch of this structure appears after this list.
  • The Program runs by first initialising the groups member of Program to be a single FileGroup representing all the files passed into the program. It then repeatedly calls groupBy for each grouper we are using. This is hard-coded in this design, but which Groupers to use could easily be passed in on the command line in future versions.
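
To illustrate the File/HashAlgorithm split, here is a hedged sketch. The type names come from the diagram, but the method signatures and the SHA-1 wrapper are my own assumptions. Note that this HashAlgorithm is the design's own interface, not .NET's System.Security.Cryptography.HashAlgorithm, which is why the cryptography types are fully qualified below.

```csharp
// The design's HashAlgorithm interface: it operates on raw bytes only,
// so it knows nothing about File (and vice versa).
interface HashAlgorithm
{
    string Hash(byte[] contents);   // the method name is an assumption
}

// One concrete implementation, wrapping .NET's SHA-1.
class Sha1HashAlgorithm : HashAlgorithm
{
    public string Hash(byte[] contents)
    {
        using (var sha1 = System.Security.Cryptography.SHA1.Create())
            return System.BitConverter.ToString(sha1.ComputeHash(contents)).Replace("-", "");
    }
}

// File now exposes its raw contents; it no longer does any hashing itself.
class File
{
    public readonly string Path;
    public File(string path) { Path = path; }

    public long getSize() { return new System.IO.FileInfo(Path).Length; }
    public byte[] getContents() { return System.IO.File.ReadAllBytes(Path); }
}
```

A hash-based Grouper can then combine the two with algorithm.Hash(file.getContents()), without either class knowing about the other.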
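
Finally, a sketch of the Grouper Template Method and the way Program drives it. The overall shape follows the diagram, building on the MultiMap and File/HashAlgorithm sketches above, but details such as how groupBy replaces the group list and how the descriptions accumulate are my own assumptions rather than the actual implementation.

```csharp
using System.Collections.Generic;

// A group of files believed to be identical, plus the evidence for that belief.
class FileGroup
{
    public readonly string Description;
    public readonly List<File> Files;
    public FileGroup(string desc, List<File> files) { Description = desc; Files = files; }
}

// Template Method: addFile is fixed; classify and describe are the primitive
// operations each concrete Grouper supplies.
abstract class Grouper
{
    MultiMap<string, File> buckets = new MultiMap<string, File>();

    public void addFile(File file) { buckets.Add(classify(file), file); }

    // Only buckets with more than one member can contain duplicates.
    public List<FileGroup> makeGroups()
    {
        var result = new List<FileGroup>();
        foreach (HashSet<File> set in buckets.ValueSets)
        {
            if (set.Count <= 1) continue;
            var members = new List<File>(set);
            result.Add(new FileGroup(describe(members[0]), members));
        }
        buckets = new MultiMap<string, File>();   // reset so the same Grouper can be reused
        return result;
    }

    protected abstract string classify(File file);   // sorting key for this grouper
    protected abstract string describe(File file);   // user-friendly label, e.g. "Size: 103KB"
}

class SizeGrouper : Grouper
{
    protected override string classify(File f) { return f.getSize().ToString(); }
    protected override string describe(File f) { return "Size: " + (f.getSize() / 1024) + "KB"; }
}

class HashGrouper : Grouper
{
    readonly HashAlgorithm algorithm = new Sha1HashAlgorithm();
    protected override string classify(File f) { return algorithm.Hash(f.getContents()); }
    protected override string describe(File f) { return "Hash: " + classify(f); }
}

class Program
{
    List<FileGroup> groups;   // starts as a single group holding every input file

    // Refine every current group with the given grouper, keeping only real duplicates.
    void groupBy(Grouper grouper)
    {
        var refined = new List<FileGroup>();
        foreach (FileGroup group in groups)
        {
            foreach (File file in group.Files) grouper.addFile(file);
            foreach (FileGroup sub in grouper.makeGroups())
                refined.Add(new FileGroup(group.Description.Length == 0 ? sub.Description
                    : group.Description + ", " + sub.Description, sub.Files));
        }
        groups = refined;
    }

    static void Main()
    {
        // Read the file list from standard input into a single initial FileGroup.
        var all = new List<File>();
        string line;
        while ((line = System.Console.ReadLine()) != null)
            if (line.Length > 0) all.Add(new File(line));
        var program = new Program();
        program.groups = new List<FileGroup> { new FileGroup("", all) };

        program.groupBy(new SizeGrouper());   // hard-coded grouper sequence, as noted above
        program.groupBy(new HashGrouper());

        foreach (FileGroup group in program.groups)
        {
            System.Console.WriteLine(group.Description);
            foreach (File file in group.Files) System.Console.WriteLine("  " + file.Path);
        }
    }
}
```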

Tradeoffs

Along with the reasons why things haven't been done in certain ways:

  • Grouper doesn't Avoid verb classes
    • This could perhaps be named better, but the task this class performs is important, and cannot easily be merged into any other class, especially as Grouper is essentially an interface with multiple implementations (technically an abstract class with multiple instantiable subclasses, but it's effectively the same).