I am not sure about the technical details, so it might not be possible. But theoretically speaking, there are performance issues that could be solved by implementing suggestion #2, unless the search algorithm and caching work differently than I assume.
Comparing 1 song against 10,000 could be relatively fast, but comparing all 10,000 songs against each other will be pretty slow. Sometimes I don't want to find all duplicates in one folder; I just want to check one file against all the files I have. If caching is necessary, so be it. It would still be faster than comparing every file against every other file, no? See the rough sketch below of what I mean by the difference in comparison counts.
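Just to illustrate the point (this is only a rough sketch of the comparison counts, not how Similarity actually works internally; the compare() function and file names are made-up placeholders):

```python
# Rough illustration of single-file check vs. full pairwise scan.
# compare() and the file list are hypothetical, not Similarity's real API.

def compare(a, b):
    # Pretend this returns a similarity score between two audio files.
    return 0.0

library = [f"song_{i}.mp3" for i in range(10_000)]

# Suggestion #2: check ONE file against the whole library -> 10,000 comparisons.
target = "new_song.mp3"
single_checks = [compare(target, other) for other in library]

# Full duplicate scan: every pair in the library -> ~50 million comparisons
# (10,000 * 9,999 / 2), which is why it feels so much slower.
pair_count = len(library) * (len(library) - 1) // 2
print(len(single_checks), pair_count)  # 10000 vs 49995000
```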
P.S. Similarity's search algorithm is far better than that of the other apps I have tried so far. I would even go as far as to say that most duplicate finders are pretty broken. Similarity does a nice job, though.