LukasKorsikaDesignStudy

The Problem

The project I am designing in this study is an application to help me manage my file systems. I tend to have a number of copies of the same file scattered across my various file systems, for reasons such as:

  • Some partitions are only accessible under Linux
  • I often copy videos to my laptop to watch away from my desk.

Requirements

  • Must take a list of files as input and output identical files in groups (i.e. cluster all identical files together; don't output unique files)
  • Must support a variety of approaches for determining equality -- based either on raw data or on file-type-specific comparisons.
  • Must use reasonable amounts of memory and I/O bandwidth.
  • Should be file-system agnostic (and support NFS, etc)
  • Should be extensible

Initial Design

(converted to Java from C, so some liberties have been taken with classes, but this is essentially its original form)

Lko15-OldUML.png

Design Description

As this was originally a program in C, the design is essentially a God class with a few helper classes and methods hanging off it. The helper classes are:

  • File -- This represents a file on the file system, and has methods to find its size, and its SHA-1 hash.
  • Tree -- This is a simple class representing a tree. A tree is composed of a set of TreeNodes and stores a reference to the root. It also has a prune method, which uses a recursive algorithm to remove all TreeNodes containing one or fewer files.
  • TreeNode -- A tree node represents a node in a binary tree. It stores its key (which may be a size or a hash, depending on the tree) and a list of all files which have that key value. TreeNode has a number of recursive methods to iterate over the tree, get the list of files at a node, and insert a new file under a given key. (A rough Java sketch of these helpers follows this list.)
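
A minimal sketch of how these helpers might look after the conversion to Java. The method names, the generic key type, and the hex-encoded SHA-1 string are assumptions based on the description above rather than the original signatures, and prune() is stubbed out.

  import java.nio.file.Files;
  import java.nio.file.Paths;
  import java.security.MessageDigest;
  import java.util.ArrayList;
  import java.util.List;

  class File {
      private final String path;

      File(String path) { this.path = path; }

      long getSize() {
          return new java.io.File(path).length();    // size in bytes
      }

      String getHash() throws Exception {
          // Hex-encoded SHA-1 of the file's contents.
          byte[] digest = MessageDigest.getInstance("SHA-1")
                  .digest(Files.readAllBytes(Paths.get(path)));
          StringBuilder hex = new StringBuilder();
          for (byte b : digest) hex.append(String.format("%02x", b));
          return hex.toString();
      }

      public String toString() { return path; }
  }

  class TreeNode<K extends Comparable<K>> {
      K key;                                   // size or hash, depending on the tree
      List<File> files = new ArrayList<>();    // every file sharing this key value
      TreeNode<K> left, right;

      TreeNode(K key, File file) {
          this.key = key;
          files.add(file);
      }

      // Recursively insert a file under the given key.
      void insert(K key, File file) {
          int cmp = key.compareTo(this.key);
          if (cmp == 0) {
              files.add(file);
          } else if (cmp < 0) {
              if (left == null) left = new TreeNode<>(key, file);
              else left.insert(key, file);
          } else {
              if (right == null) right = new TreeNode<>(key, file);
              else right.insert(key, file);
          }
      }
  }

  class Tree<K extends Comparable<K>> {
      private TreeNode<K> root;

      void insert(K key, File file) {
          if (root == null) root = new TreeNode<>(key, file);
          else root.insert(key, file);
      }

      // Remove every node holding fewer than two files (recursive removal, omitted).
      void prune() { /* ... */ }

      // In-order walk collecting each remaining node's file list.
      List<List<File>> groups() {
          List<List<File>> out = new ArrayList<>();
          collect(root, out);
          return out;
      }

      private void collect(TreeNode<K> node, List<List<File>> out) {
          if (node == null) return;
          collect(node.left, out);
          out.add(node.files);
          collect(node.right, out);
      }
  }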

Most of the functionality lives in functions placed in the God class to represent their originally being global functions. This God class handles parsing the arguments, reading the file list from stdin, building groups of identical files, and outputting the result.

Grouping is performed by first building a grouping tree using the size as the key (the God class calls Tree.insert(myFile.getsize(), myFile);) and then pruning the tree. The program then iterates over the resulting tree (consisting of pairs of sizes and file lists) to find files that might be identical, since they have the same size.

For each of these groups the program then constructs a new tree using the hash values as keys, prunes it, and outputs the files grouped by hash. Thus, the files are grouped first by size and then by hash.
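
Building on the sketch above, the two-pass grouping described here might look roughly like the following; the DuplicateFinder class, the findDuplicates() method, and the output line are illustrative stand-ins for the God class's code, not taken from the original.

  import java.util.List;

  class DuplicateFinder {
      static void findDuplicates(List<File> allFiles) throws Exception {
          // Pass 1: group candidate duplicates by size.
          Tree<Long> sizeTree = new Tree<>();
          for (File f : allFiles)
              sizeTree.insert(f.getSize(), f);
          sizeTree.prune();                               // drop sizes seen only once

          // Pass 2: within each same-size group, group by SHA-1 hash and output.
          for (List<File> sameSize : sizeTree.groups()) {
              Tree<String> hashTree = new Tree<>();
              for (File f : sameSize)
                  hashTree.insert(f.getHash(), f);
              hashTree.prune();
              for (List<File> identical : hashTree.groups())
                  System.out.println(identical);          // one group of identical files
          }
      }
  }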

I realise that this is a terrible design. This design study will iteratively improve the design, as well as create a Java implementation of the program.

Criticisms

  • Uses a God class -- many sub-issues to do with this.
    • Separate functionality into appropriate classes.
  • TreeNode deals both with maintaining collections of files and with implementing a binary tree. This violates the Single responsibility principle.
    • Remove TreeNode, and instead make a Tree class that uses a MultiMap (which we probably have to implement ourselves. *sigh*, Java) to store the files.
  • The prune method should be in TreeNode, to Keep related data and behavior in one place.
  • The God class shouldn't have to ask the File for its size/hash. It should instead tell the Tree to insert it using whatever hashing method that tree uses (Tell, don't ask).
    • A Tree could have a Classifier object which can be used for finding a given File's sorting key, allowing one to simply .Insert(File) into the tree. (In this case, Classifier is an interface, which would be implemented by various concrete classifiers such as SizeClassifier, HashClassifier, etc. A sketch of this idea appears after this list.)
  • The tree should know what it's grouping by, rather than that information being implied by the variable storing the tree (e.g. Tree sizeTree) -- Keep related data and behavior in one place.
    • (See above)
  • The File class shouldn't calculate hashes (Single responsibility principle).
    • Create a FileHasher, which can be instantiated by the HashClassifier. FileHasher knows about File, but File doesn't know about FileHasher (Dependency inversion principle).
  • It should be possible to add new key types to group by without modifying existing classes (Open closed principle; Beware type switches).
    • Create a new concrete Classifier
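
A rough sketch of the Classifier suggestion above. The class names come from the criticisms, but every signature here is illustrative rather than part of the existing design.

  // Minimal stand-in for the existing File class.
  class File {
      long getSize() { /* file size in bytes, omitted */ return 0; }
  }

  // FileHasher knows about File; File knows nothing about hashing
  // (Dependency inversion principle).
  class FileHasher {
      String hash(File file) { /* SHA-1 over the file's contents, omitted */ return ""; }
  }

  // A Classifier maps a File to the key it should be grouped under.
  interface Classifier {
      Comparable<?> classify(File file);
  }

  class SizeClassifier implements Classifier {
      public Comparable<?> classify(File file) { return file.getSize(); }
  }

  class HashClassifier implements Classifier {
      private final FileHasher hasher = new FileHasher();
      public Comparable<?> classify(File file) { return hasher.hash(file); }
  }

  // The tree now knows what it groups by: callers simply insert(File)
  // and the tree asks its Classifier for the key (Tell, don't ask).
  class Tree {
      private final Classifier classifier;

      Tree(Classifier classifier) { this.classifier = classifier; }

      void insert(File file) {
          Comparable<?> key = classifier.classify(file);
          // ... insert (key, file) into the underlying structure ...
      }
  }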

Revision One

The first modified version of the design, taking into account the above criticisms and suggestions.

Lko15-R1UML.png

This new design has moved most of the functionality out of the God class.

  • The Program class is responsible for parsing the command line arguments and standard input. It also maintains a list of file groups, and outputs them at the end.
  • The FileGroup class maintains a list of files which are supposed to be identical, as well as a textual description of the properties which lead us to suppose they are identical. In the current version of the program, each group stores a string of the form "Size: xKB" or "Size: xKB, Hash: ab2234989797b...".
  • The File class is the same as above, except that the hash method has been moved to the MD5 class, as the File class should have a single responsibility.
  • The actual data storage component of a Tree has been separated out into a generic, reusable MultiMap, and Tree has been renamed to Grouper, as this is a more accurate description.
  • Grouper is an abstract class which implements the grouping logic. It has an abstract classify method which is overridden by subclasses. Any Files which have the same return value for classify() are grouped together. describe() can be called on the return value of classify() to generate a human-readable representation of the classification (such as "Size: 103KB").
  • Program begins by initialising its groups member to the list of files read from stdin. Each call to groupBy(Grouper) then iterates over each group and further subdivides it according to the given Grouper. (A sketch of how these pieces might fit together follows this list.)
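
A rough sketch of how the Revision One classes might fit together. Only classify(), describe(), groupBy(Grouper), and the class names appear in the description above; the subdivide() helper, the exact signatures, the SizeGrouper example, and the use of a HashMap of lists in place of the design's MultiMap are assumptions.

  import java.util.ArrayList;
  import java.util.HashMap;
  import java.util.List;
  import java.util.Map;

  class File {
      long getSize() { /* file size in bytes, omitted */ return 0; }
  }

  class FileGroup {
      private final List<File> files;
      private final String description;   // e.g. "Size: 103KB, Hash: ab2234..."

      FileGroup(List<File> files, String description) {
          this.files = files;
          this.description = description;
      }

      List<File> getFiles()      { return files; }
      String getDescription()    { return description; }
  }

  abstract class Grouper {
      // Files with equal classify() results belong in the same group.
      abstract Object classify(File file);

      // Human-readable form of a classification result, e.g. "Size: 103KB".
      abstract String describe(Object key);

      // Subdivide one group, dropping files that end up on their own.
      // A HashMap of lists stands in for the design's MultiMap here.
      List<FileGroup> subdivide(FileGroup input) {
          Map<Object, List<File>> buckets = new HashMap<>();
          for (File f : input.getFiles())
              buckets.computeIfAbsent(classify(f), k -> new ArrayList<>()).add(f);

          List<FileGroup> result = new ArrayList<>();
          for (Map.Entry<Object, List<File>> e : buckets.entrySet()) {
              if (e.getValue().size() < 2)
                  continue;                              // prune unique files
              String prefix = input.getDescription();
              String label = prefix.isEmpty() ? describe(e.getKey())
                                              : prefix + ", " + describe(e.getKey());
              result.add(new FileGroup(e.getValue(), label));
          }
          return result;
      }
  }

  class SizeGrouper extends Grouper {
      Object classify(File file)  { return file.getSize(); }
      String describe(Object key) { return "Size: " + key + "B"; }
  }

  class Program {
      // Presumably starts as a single group holding every file read from stdin.
      private List<FileGroup> groups = new ArrayList<>();

      // Each call refines the current groups using the given Grouper.
      void groupBy(Grouper grouper) {
          List<FileGroup> refined = new ArrayList<>();
          for (FileGroup g : groups)
              refined.addAll(grouper.subdivide(g));
          groups = refined;
      }
  }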