Duplicate files are a familiar nuisance: the same file may sit in directory a and again in directory b, and worse, the two copies may not even share a name. With only a handful of files this is manageable; at worst you compare them one by one by hand, though even then it is hard to trust your own eyes. With many files, it becomes mission impossible. I have recently been reading "Python for Unix and Linux System Administration", which covers data comparison; building on that material and on practical needs, I put together the following.

The script consists of four modules: diskwalk, checksum, find_dupes, and delete. The diskwalk module walks a directory tree: given a path, it returns every file underneath it. The checksum module computes a file's MD5 digest. The find_dupes module imports diskwalk and checksum and uses the MD5 values to decide whether two files are identical. The delete module removes files.
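
Before going into each module, here is a minimal sketch of how the four pieces are meant to chain together (a sketch only; the /tmp/photos directory is hypothetical, and the modules as written below target Python 2):

from find_dupes import finddupes
from delete import deletefile

# every key of the returned dict is a redundant copy of some original
for dup_file in finddupes('/tmp/photos'):
    deletefile(dup_file).interactive()   # confirm each removal by hand

Each module in detail: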

1. diskwalk.py

import os, sys

class diskwalk(object):
    """Walk a directory tree and collect the full path of every file in it."""
    def __init__(self, path):
        self.path = path
    def paths(self):
        path_collection = []
        for dirpath, dirnames, filenames in os.walk(self.path):
            for file in filenames:
                fullpath = os.path.join(dirpath, file)
                path_collection.append(fullpath)
        return path_collection

if __name__ == '__main__':
    for file in diskwalk(sys.argv[1]).paths():
        print file

2. checksum.py

import hashlib, sys

def create_checksum(path):
    """Return the MD5 digest of the file at path."""
    fp = open(path, 'rb')        # binary mode, so any file content hashes correctly
    checksum = hashlib.md5()
    while True:
        buffer = fp.read(8192)   # chunked reads keep memory use constant
        if not buffer:
            break
        checksum.update(buffer)
    fp.close()
    return checksum.digest()

if __name__ == '__main__':
    print create_checksum(sys.argv[1])
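
Note that digest() returns the raw 16-byte digest rather than a hex string; for the equality tests find_dupes performs, that is all we need. A quick sanity check (a sketch, with hypothetical paths):

from checksum import create_checksum

# two files are byte-identical exactly when their MD5 digests match
print(create_checksum('/tmp/a/report.pdf') == create_checksum('/tmp/b/copy.pdf'))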

3. find_dupes.py

from checksum import create_checksum
from diskwalk import diskwalk
from os.path import getsize
import sys

def finddupes(path):
    record = {}   # (size, md5 digest) -> first file seen with that signature
    dup = {}      # duplicate file -> the original it duplicates
    for file in diskwalk(path).paths():
        compound_key = (getsize(file), create_checksum(file))
        if compound_key in record:
            dup[file] = record[compound_key]
        else:
            record[compound_key] = file
    return dup

if __name__ == '__main__':
    for dup_file, orig_file in finddupes(sys.argv[1]).items():
        print "the duplicate file is %s" % dup_file
        print "the original file is %s\n" % orig_file

The finddupes function returns the dictionary dup, whose keys are the duplicate files and whose values are the originals. This answers a common question: how do you know which of two matching files gets reported as the duplicate? The first file encountered with a given (size, MD5) signature is recorded as the original; every later file with the same signature is reported as a duplicate of it.
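
For example, a quick interactive check (the directory and file names below are made up for illustration):

from find_dupes import finddupes

dups = finddupes('/tmp/photos')          # hypothetical directory
for dup_file, orig_file in dups.items():
    print("%s duplicates %s" % (dup_file, orig_file))
# possible output:
#   /tmp/photos/copy_of_cat.jpg duplicates /tmp/photos/cat.jpg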

4. delete.py

import os, sys

class deletefile(object):
    def __init__(self, file):
        self.file = file
    def delete(self):
        # remove the file immediately, no questions asked
        print "deleting %s" % self.file
        os.remove(self.file)
    def dryrun(self):
        # trial run: report only, delete nothing
        print "dry run: %s [not deleted]" % self.file
    def interactive(self):
        # ask the user to confirm each deletion
        answer = raw_input("do you really want to delete: %s [y/n] " % self.file)
        if answer.upper() == 'Y':   # upper() yields 'Y', so compare against 'Y'
            os.remove(self.file)
        else:
            print "skipping: %s" % self.file

if __name__ == '__main__':
    from find_dupes import finddupes
    dup = finddupes(sys.argv[1])
    for file in dup.iterkeys():
        delete = deletefile(file)
        #delete.dryrun()
        delete.interactive()
        #delete.delete()

The deletefile class provides three methods, all built around file removal: delete removes the file outright, dryrun is a trial run that deletes nothing, and interactive prompts the user to confirm each deletion. Between them they cover the common use cases.
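
For instance, to preview what would be removed without touching anything, combine finddupes with dryrun (a sketch under Python 2, which these modules target; the path is hypothetical):

from find_dupes import finddupes
from delete import deletefile

for dup_file in finddupes('/tmp/photos'):
    deletefile(dup_file).dryrun()   # report only; nothing is deleted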

To sum up: these four modules are self-contained, and each can be used on its own. Combined, they batch-delete duplicate files given nothing but a path.
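
Because each module guards its demo code with if __name__ == '__main__', every one of them can also be run straight from the shell (the arguments below are hypothetical):

python diskwalk.py /tmp/photos     # list every file under the path
python checksum.py report.pdf      # print the file's MD5 digest
python find_dupes.py /tmp/photos   # report duplicate/original pairs
python delete.py /tmp/photos       # interactively delete duplicates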

Finally, here is a complete single-file version, compatible with both Python 2 and Python 3.

#!/usr/bin/python
# -*- coding: utf-8 -*-
from __future__ import print_function
import os, sys, hashlib
class diskwalk(object):
    def __init__(self, path):
        self.path = path
    def paths(self):
        path = self.path
        files_in_path = []
        for dirpath, dirnames, filenames in os.walk(path):
            for each_file in filenames:
                fullpath = os.path.join(dirpath, each_file)
                files_in_path.append(fullpath)
        return files_in_path
def create_checksum(path):
    fp = open(path,'rb')
    checksum = hashlib.md5()
    while True:
        buffer = fp.read(8192)
        if not buffer: break
        checksum.update(buffer)
    fp.close()
    checksum = checksum.digest()
    return checksum
def finddupes(path):
    record = {}
    dup = {}
    d = diskwalk(path)
    files = d.paths()
    for each_file in files:
        compound_key = (os.path.getsize(each_file), create_checksum(each_file))
        if compound_key in record:
            dup[each_file] = record[compound_key]
        else:
            record[compound_key] = each_file
    return dup
class deletefile(object):
    def __init__(self, file_name):
        self.file_name = file_name
    def delete(self):
        print("deleting %s" % self.file_name)
        os.remove(self.file_name)
    def dryrun(self):
        print("dry run: %s [not deleted]" % self.file_name)
    def interactive(self):
        try:
            answer = raw_input("do you really want to delete: %s [y/n]" % self.file_name)
        except NameError:   # Python 3 has no raw_input; fall back to input
            answer = input("do you really want to delete: %s [y/n]" % self.file_name)
        if answer.upper() == 'Y':
            os.remove(self.file_name)
        else:
            print("skiping: %s" % self.file_name)
        return
def main():
    directory_to_check = sys.argv[1]
    duplicate_file = finddupes(directory_to_check)
    for each_file in duplicate_file:
        delete = deletefile(each_file)
        delete.interactive()
if __name__ == '__main__':
    main()

Here, the first command-line argument is the directory to check.
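
For instance, if the combined script is saved as dedupe.py (my name for it, not the author's), a session might look like this (path and file hypothetical):

python dedupe.py /tmp/photos
do you really want to delete: /tmp/photos/copy_of_cat.jpg [y/n] y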
