Posted by: muse2u
« on: October 25, 2012, 21:45:27 »
1. So that means the total files counter includes images, which explains the large number. I didn't think I had that many audio files.
2. Good suggestion to back up the cache file. The only scan that added entries to the cache accumulator count was a single short analysis. All other long analysis runs either failed during execution (not necessarily caused by Similarity) or lost their results when the program was shut down after a long analysis (including the last one I mentioned in the post above).
3. I don't quite understand the use of groups but will investigate. Are you suggesting that if the files/folders of a Similarity analysis run are not assigned to a group, no file comparisons take place? That is exactly what I am trying to accomplish: I do not want an automated duplicate-file check to run during analysis. I have no problem with Similarity taking a fingerprint of a few seconds of each song during analysis and storing the results in the db, but I don't want the added overhead of the actual dup-check processing during analysis. Those are things I would want the option of turning off. If I want a duplicate check at a later date, I want Similarity to use the initial analysis results to perform it.
4. I will look forward to any improvement in stability.
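The two-phase workflow requested in point 3 — fingerprint everything during analysis, then compare cached fingerprints only on demand — could be sketched like this. This is purely illustrative: the function and variable names are hypothetical, and this is not Similarity's actual API or fingerprinting method (a real tool would use acoustic fingerprints, not a byte hash).

```python
import hashlib
from itertools import combinations

# Hypothetical cache: path -> fingerprint string.
fingerprint_db: dict[str, str] = {}

def fingerprint(sample: bytes) -> str:
    # Stand-in for an acoustic fingerprint of a few seconds of audio.
    return hashlib.sha256(sample).hexdigest()

def analyze(files: dict[str, bytes]) -> None:
    # Phase 1: analysis only fingerprints and caches; no pairwise
    # comparison happens here, so there is no dup-check overhead.
    for path, sample in files.items():
        fingerprint_db[path] = fingerprint(sample)

def find_duplicates() -> list[tuple[str, str]]:
    # Phase 2, run later on request: compare cached fingerprints only,
    # without re-reading or re-analyzing any audio.
    return [(a, b) for a, b in combinations(fingerprint_db, 2)
            if fingerprint_db[a] == fingerprint_db[b]]
```

The point of the split is that phase 2 touches only the cache, so a deferred duplicate check costs far less than repeating the analysis.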
You did not answer two very important questions.
First: during the analysis (in this case, if the files/folders are not assigned to a group), are the results held in memory to perform immediate dup checks, or are they periodically written to disk to make recovery easier, use less memory, and perform faster? The latter is what I would prefer, and I would want options to set those preferences.
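The periodic-write behavior asked about above amounts to checkpointing: flushing partial results to disk every N files so a crash or shutdown loses at most one batch. A minimal sketch of the idea, with entirely hypothetical names (this does not describe how Similarity actually persists its cache):

```python
import json

def analyze_with_checkpoints(files, db_path, flush_every=100):
    """Accumulate (path, fingerprint) pairs, flushing the partial
    result set to disk every `flush_every` files."""
    results = {}
    for i, (path, fp) in enumerate(files, start=1):
        results[path] = fp
        if i % flush_every == 0:
            # Checkpoint: a crash after this point loses at most
            # the files processed since the last flush.
            with open(db_path, "w") as f:
                json.dump(results, f)
    # Final flush so the last partial batch is not lost.
    with open(db_path, "w") as f:
        json.dump(results, f)
    return results
```

A configurable `flush_every` is exactly the kind of preference option the question asks for: small values favor crash recovery, large values favor throughput.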
The other: since the music db is constantly being maintained (e.g. audio tags changed, file and folder names changed, one track replaced by another, tracks deleted, etc.), does Similarity recognize these changes (at startup? on request?) and reanalyze only the files that have changed, or does it have to reanalyze the whole database, which, as I indicated, could take a significant amount of time?
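The incremental rescan described above is commonly done by comparing each file's size and modification time against a cached record, re-fingerprinting only the mismatches. A sketch of that approach, with hypothetical names (whether Similarity actually does this is exactly the open question):

```python
import os

def changed_files(paths, cache):
    """Return only the files whose size or mtime differs from the
    cached record, updating the cache as we go. A rescan would then
    re-fingerprint just these files instead of the whole library."""
    stale = []
    for path in paths:
        st = os.stat(path)
        key = (st.st_size, st.st_mtime)
        if cache.get(path) != key:
            stale.append(path)
            cache[path] = key
    return stale
```

Note that size/mtime checks catch edits and replacements but not renames; detecting a renamed track as the same file would need an additional key, such as a content hash.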