The problem has not been solved...
I am trying this on version 101; it's easier to compare and read the results.
14 files in the folder (7 repeats).
Compare method: Content only; Experimental enabled.
Tags slider at 0.00.
Sensitivity slider:
1.00 down to 0.96 finds 0
0.95 down to 0.92 finds 1
0.91 finds 2
0.90 down to 0.88 finds 3
0.87 down to 0.74 finds 4
0.73 down to 0.72 finds 5
0.71 down to 0.66 finds 6
0.65 down to 0.49 finds 7
Below 0.49 finds too many non-repeats
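Just to illustrate how I picture the slider working (a minimal sketch in Python with made-up scores, not the program's actual algorithm): each pair of files gets a similarity score, the slider is a cutoff, and anything scoring at or above the cutoff is reported, so lowering it can only add matches until the non-repeats start to pass too.

# Sketch only: hypothetical pairwise similarity scores for the 7 repeated
# songs, chosen to roughly mirror the thresholds where each one appeared.
pair_scores = [0.95, 0.91, 0.90, 0.87, 0.73, 0.71, 0.65]

def duplicates_found(scores, threshold):
    # Count the pairs whose similarity is at or above the cutoff.
    return sum(score >= threshold for score in scores)

for threshold in (1.00, 0.95, 0.91, 0.90, 0.87, 0.73, 0.71, 0.65, 0.49):
    print(f"threshold {threshold:.2f}: {duplicates_found(pair_scores, threshold)} found")

Run like that it prints 0, 1, 2, 3, 4, 5, 6, 7, 7, which matches the staircase in the list above.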
Do not get me wrong, your program has helped me find a lot of duplicate files.
I do not expect it to find all the duplicates for me, because I understand there are too many variables, especially with files that come from different sources, files that have been poorly ripped, files truncated at the beginning or the end, etc. But this sample has 7 songs with duplicates (experimental results over 70%), and the sensitivity has to be lowered over seven runs to find them all.
Dealing with 14 files is not that bad, but with over 70,000 the results are overwhelming, as I wrote to you before, and besides, every run at each setting of the sensitivity slider takes more than 3 days.
I had written that almost two months ago.
Today (05/24/10), I downloaded the new version 120 and tried again.
The results with the 14 files are about the same; only the settings for the slider now called Content are a little off, BUT...
...the speed on the other folders is remarkable, and it keeps finding duplicates at the regular Content setting of 90%.
At this pace I can rescan all the folders with different settings in a couple of hours.
I kept Content Precise OFF; I still do not understand the philosophy behind needing a network connection for it to work.
As for the presentation, I will have to live with it; the old one (101) was easier to read.
Congratulations,
Keep up the good work
ferbaena