// Copyright © 2015-2016 Pierre Neidhardt <ambrevar@gmail.com>
// Use of this file is governed by the license that can be found in LICENSE.
/*
A filesystem hierarchy synchronizer

Rename files in TARGET so that identical files found in SOURCE and TARGET have
the same relative path.

The main goal of the program is to make folder synchronization faster by
sparing big file transfers when a simple rename suffices. It complements other
synchronization programs that lack this capability.

See http://ambrevar.bitbucket.io/hsync and 'hsync -h' for more details.

Usage:

    hsync [OPTIONS] SOURCE TARGET

For usage options, see:

    hsync -h
Implementation details

We store the file entries in the following structure:

    entries := map[partialHash struct{size int64, pos int64, hash string}]fileMatch struct{
        sourceID *fileID{path string, h hash.Hash},
        targetID *fileID{path string, h hash.Hash}
    }

This 'entries' map indexes the possible file matches by content ('partialHash').
A file match references the paths that will be used to rename the file in TARGET
from 'oldpath' to 'newpath'. Note that 'newpath' is given by 'sourceID.path',
and 'oldpath' by 'targetID.path'.
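
The above is shorthand, not valid Go. As plain declarations, it roughly
corresponds to the following sketch (the field comments are our reading of
this text, not taken from the code):

    type partialHash struct {
        size int64  // file size in bytes
        pos  int64  // number of checksum rolls applied so far
        hash string // digest of the first pos*BLOCKSIZE bytes, "" initially
    }

    type fileID struct {
        path string    // path relative to the walked root
        h    hash.Hash // running digest, kept so hashing can be resumed
    }

    type fileMatch struct {
        sourceID *fileID // file in SOURCE: provides 'newpath'
        targetID *fileID // file in TARGET: provides 'oldpath'
    }

    var entries map[partialHash]fileMatch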
The algorithm is centered around one main optimization: rolling checksums. We
assume that two files match if they have the same partial hash.

The initial partial hash is just the size with an empty hash. This speeds up
the process since it saves an open/close of the file: we only need a 'stat'.
Files will not be read unless a rolling checksum is required. As a consequence,
unreadable files with a unique size will be stored in 'entries', while
unreadable conflicting files will be discarded. Note that this makes it
possible to rename files that cannot be read.
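
A minimal sketch of the stat-only initial entry (the helper name is
hypothetical):

    // initialHash builds the cheapest possible key: size only, no file read.
    func initialHash(info os.FileInfo) partialHash {
        return partialHash{size: info.Size(), pos: 0, hash: ""}
    }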
One checksum roll increments 'pos' and updates the hash by hashing the next
BLOCKSIZE bytes of the file. BLOCKSIZE is set to a value that is commonly
believed to be optimal in most cases. The optimal value would be the block size
of the device where the file resides, but querying it for each file would be
more complex and consume more memory.

We choose md5 (128 bits) as the checksum algorithm. Adler-32, CRC-32 and CRC-64
are only marginally faster while suffering from more collisions. This choice
should be backed by a proper benchmark.
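
A sketch of one checksum roll, assuming the types above and some BLOCKSIZE
constant (the function name is hypothetical):

    // rollChecksum hashes the next BLOCKSIZE bytes of the file and
    // advances the partial hash by one position.
    func rollChecksum(id *fileID, key *partialHash) error {
        if id.h == nil {
            id.h = md5.New()
        }
        f, err := os.Open(id.path)
        if err != nil {
            return err
        }
        defer f.Close()
        buf := make([]byte, BLOCKSIZE)
        n, err := f.ReadAt(buf, key.pos*BLOCKSIZE)
        if err != nil && err != io.EOF {
            return err
        }
        id.h.Write(buf[:n])
        key.pos++
        key.hash = string(id.h.Sum(nil))
        return nil
    }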
A conflict arises when two files in either SOURCE or TARGET have the same
partial hash. We solve the conflict by updating the partial hashes until they
differ. If the partial hashes cannot be updated any further (i.e. we have
reached end-of-file), it means that the files are duplicates.
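
A sketch of this resolution for two conflicting files (hypothetical names;
'complete' tests the relation given in the notes below):

    // resolve rolls both checksums until the keys differ, or reports
    // the files as duplicates when end-of-file is reached first.
    func resolve(a, b *fileID, keyA, keyB *partialHash) (duplicate bool) {
        for keyA.hash == keyB.hash && !complete(*keyA) {
            rollChecksum(a, keyA)
            rollChecksum(b, keyB)
        }
        return keyA.hash == keyB.hash
    }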
Notes:

- Partial hashes of conflicting files will be complete at the same roll since
they have the same size.

- When a partial hash is complete, we have the following relation:

    (pos-1)*BLOCKSIZE < filesize <= pos*BLOCKSIZE

- There is only one possible conflicting file at a time.
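
For instance, with a hypothetical BLOCKSIZE of 4096 bytes, a 5000-byte file is
fully hashed after pos=2 rolls: (2-1)*4096 = 4096 < 5000 <= 2*4096 = 8192. The
completeness test used in the sketches here can thus be written as:

    // complete reports whether the whole file content has been hashed.
    func complete(key partialHash) bool {
        return key.pos*BLOCKSIZE >= key.size
    }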
A file match may be erroneous if the partial hash is not complete. The most
obvious case is when two different files are the only ones of size N in SOURCE
and TARGET. This downside is a consequence of the design choice, i.e. the focus
on speed. Erroneous matches can be corrected in the preview file. If we wanted
no ambiguity, we would have to compute the full hashes, which would take
approximately as much time as copying the files from SOURCE to TARGET, as a
regular synchronization tool would do.

We store the digest 'hash.Hash' together with the file path so that we can
resume hashing when we update a partial hash.
Process:

1. We walk SOURCE completely. Only regular files are processed. The 'sourceID'
are stored. If two entries conflict (they have the same partial hash), we
update the partial hashes until they do not conflict anymore. If the conflict
is not resolvable, i.e. the partial hash is complete and the files are
identical, we drop both files from 'entries'.

Future files can have the same partial hash that led to a former conflict. To
distinguish their content from that of former conflicts when adding a new file,
we must compute the partial hash up to the 'pos' of the last conflict (the
number of checksum rolls). To keep track of this 'pos' when there is a
conflict, we mark all computed partial hashes as dummy values. When the next
entry is added, we compute its partial hash until it does not match a dummy
value in 'entries', as sketched below.

Duplicates are not processed, but a warning is displayed. Usually the user does
not want duplicates, so she is better off fixing them before proceeding with
the renames. Handling duplicates properly would add a lot of complexity.
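
A sketch of this dummy-skipping insertion (the helper names and the 'isDummy'
marker are hypothetical; dummies stand for keys consumed by former conflicts):

    // addSourceFile rolls the partial hash of a new file past all dummy
    // entries left behind by former conflicts, then inserts the file or
    // resolves a genuine conflict as described above.
    func addSourceFile(id *fileID, key partialHash) {
        entry, exists := entries[key]
        for exists && entry.isDummy() && !complete(key) {
            rollChecksum(id, &key)
            entry, exists = entries[key]
        }
        // ... insert 'id' at 'key', or resolve the conflict.
    }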
2. We walk TARGET completely. We skip all dummies as in the SOURCE walk. We
need to analyze SOURCE completely before we can check for matches. The case
analysis, sketched after this list, goes as follows:

- If there are only dummy entries, there was an unsolvable conflict in SOURCE.
We drop the file.

- If we end on a non-empty entry with an 'unsolvable' targetID, it means that an
unsolvable conflict with target files happened with this partial hash. This is
only possible at end-of-file. We drop the file.

- If we end on an empty entry, there is no match with SOURCE and we drop the
file.

- If we end on a non-empty entry without previous matches, we store the match.

- Else we end on a non-empty entry with one match already present. This is a
conflict. We solve the conflict as in the SOURCE walk, except that we need to
update the partial hashes of three files: the SOURCE file, the first TARGET
match and the new TARGET match.
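
A sketch of that decision, assuming a sentinel 'unsolvable' fileID (names
beyond those introduced above are hypothetical):

    // 'key' has already been rolled past all dummy entries.
    entry, exists := entries[key]
    switch {
    case !exists:
        // Empty entry: no match with SOURCE. Drop the file.
    case entry.targetID == unsolvable:
        // Former unsolvable conflict between TARGET files. Drop.
    case entry.targetID == nil:
        entry.targetID = id // First match: store it.
        entries[key] = entry
    default:
        // Conflict: roll the SOURCE file, the first TARGET match and
        // this new TARGET match until the keys differ.
    }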
3. We generate the 'renameOps' and 'reverseOps' maps. They map 'oldpath' to
'newpath' and 'newpath' to 'oldpath' respectively. We drop entries where
'oldpath == newpath' to spare a lot of noise.

Note that file names are not used to compute a match since they could be
identical while the content would be different.
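
A sketch of this generation step (the map names are those used above):

    renameOps := make(map[string]string)  // oldpath -> newpath
    reverseOps := make(map[string]string) // newpath -> oldpath
    for _, match := range entries {
        if match.sourceID == nil || match.targetID == nil {
            continue // no usable match for this content
        }
        oldpath, newpath := match.targetID.path, match.sourceID.path
        if oldpath == newpath {
            continue // already in place: spare the noise
        }
        renameOps[oldpath] = newpath
        reverseOps[newpath] = oldpath
    }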
4. We proceed with the renames. Chains and cycles may occur.

- Example of a chain of renames: a->b, b->c, c->d.

- Example of a cycle of renames: a->b, b->c, c->a.

TARGET must be fully analyzed before proceeding with the renames so that we can
detect chains.

We always traverse a chain until we reach its end, then rename the elements
while going backward to the beginning. The beginning can be before the entry
point; 'reverseOps' is used for going backward.

When a cycle is detected, we break it down to a chain: we rename one file to a
temporary name, then we add this new file to the other end of the chain so that
it gets renamed to its original new name once all files have been processed.
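
A sketch of this last step (the temporary-name suffix is hypothetical and
error handling is elided):

    for len(renameOps) > 0 {
        var start string
        for o := range renameOps { // pick any pending rename
            start = o
            break
        }
        // Walk the chain forward from 'start' to find its end.
        end := start
        for {
            next, ok := renameOps[end]
            if !ok {
                break // 'end' names a free slot: plain chain
            }
            if next == start { // cycle: break it into a chain
                tmp := start + ".hsync-tmp"
                os.Rename(start, tmp)
                newpath := renameOps[start]
                delete(renameOps, start)
                renameOps[tmp] = newpath
                reverseOps[newpath] = tmp
                end = start // the freed name ends the chain
                break
            }
            end = next
        }
        // Rename backward from the end, following reverseOps.
        for {
            oldpath, ok := reverseOps[end]
            if !ok {
                break // reached the beginning of the chain
            }
            os.Rename(oldpath, end)
            delete(renameOps, oldpath)
            delete(reverseOps, end)
            end = oldpath
        }
    }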
*/
package main