How to Remove Duplicate Domains from a Large List of URLs? RegEx or Otherwise
I originally asked this question: Regular Expression in gVim to Remove Duplicate Domains from a List
However, I realize I may be more likely to find a working solution if I "broaden my scope" in terms of what solution I'm willing to accept.
So, I'll rephrase my question & maybe I'll get a better solution...here goes:
I have a large list of URLs in a .txt file (I'm running Windows Vista 32-bit) and I need to remove duplicate DOMAINS (and the entire corresponding URL for each duplicate) while leaving behind the first occurrence of each domain. There are roughly 6,000,000 URLs in this particular file, in the following format (the example domains are placeholders, since I can't post that many "live" URLs here):
http://www.exampleurl.com/something.php
http://exampleurl.com/somethingelse.htm
http://exampleurl2.com/another-url
http://www.exampleurl2.com/a-url.htm
http://exampleurl2.com/yet-another-url.html
http://exampleurl.com/
http://www.exampleurl3.com/here_is_a_url
http://www.exampleurl5.com/something
Whatever the solution is, the output file (using the above as input) should be this:
http://www.exampleurl.com/something.php
http://exampleurl2.com/another-url
http://www.exampleurl3.com/here_is_a_url
http://www.exampleurl5.com/something
You'll notice there are no duplicate domains now, and that the first occurrence of each domain is the one left behind.
If anybody can help me out, whether it be using regular expressions or some program I'm not aware of, that would be great.
I'll say this though: I have NO experience using anything other than a Windows OS, so a solution entailing something other than a Windows program would take a little "baby stepping," so to speak (if anybody is kind enough to do so).
Regular expressions in Python; very raw, and it does not work with subdomains. The basic concept is to use a dictionary keyed by domain name: a URL is only stored the first time its domain shows up, so the first occurrence of each domain is the one kept.
import re

# crude pattern: optional "www", then the second-level domain name
pattern = re.compile(r'(http://?)(w*)(\.*)(\w*)(\.)(\w*)')

urlsFile = open("urlsin.txt", "r")
outFile = open("outurls.txt", "w")
urlsDict = {}

for linein in urlsFile.readlines():
    match = pattern.search(linein)
    if match is None:              # skip lines the pattern cannot parse
        continue
    domain = match.groups()[3]     # the second-level domain, e.g. "exampleurl"
    if domain not in urlsDict:     # keep only the first URL seen for each domain
        urlsDict[domain] = linein

outFile.write("".join(urlsDict.values()))
urlsFile.close()
outFile.close()
You can extend it to filter out subdomains, but the basic idea is there, I think. And 6 million URLs might take quite a while in Python...
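For example, a minimal sketch of that extension (reusing the urlsin.txt / outurls.txt filenames from above, and swapping the hand-rolled regex for Python's urllib.parse) might strip a leading "www." from the hostname so that www.exampleurl.com and exampleurl.com count as the same domain, still keeping only the first occurrence:

from urllib.parse import urlparse

seen = set()
with open("urlsin.txt", "r") as urls_file, open("outurls.txt", "w") as out_file:
    for line in urls_file:
        host = urlparse(line.strip()).hostname   # e.g. "www.exampleurl.com"
        if host is None:
            continue                             # skip blank or malformed lines
        if host.startswith("www."):              # treat www.foo.com and foo.com alike
            host = host[4:]
        if host not in seen:                     # keep only the first URL per domain
            seen.add(host)
            out_file.write(line)

A set lookup per line keeps this a single pass over the file, so memory grows only with the number of distinct domains, which should be manageable even for 6,000,000 input lines.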
Some people, when confronted with a problem, think "I know, I'll use regular expressions." Now they have two problems. --Jamie Zawinski, in comp.emacs.xemacs
For this particular situation I would not use a regex. URLs are a well-defined format, and there is an easy-to-use parser for that format in the BCL: the Uri type. It can be used to easily parse the URL and get out the domain information you seek.
Here is a quick example:
public List<string> GetUrlWithUniqueDomain(string file) {
  // Declare the results outside the using block so they are still in scope for the return.
  var list = new List<string>();
  var found = new HashSet<string>();
  using ( var reader = new StreamReader(file) ) {
    var line = reader.ReadLine();
    while (line != null) {
      Uri uri;
      // Keep the line only if it parses as an absolute URI and its host has not
      // been seen before (HashSet<T>.Add returns false for duplicates).
      // Note: this keys on the full host, so www.example.com and example.com differ.
      if ( Uri.TryCreate(line, UriKind.Absolute, out uri) && found.Add(uri.Host) ) {
        list.Add(line);
      }
      line = reader.ReadLine();
    }
  }
  return list;
}
I would use a combination of Perl and regexps. My first version is:
use warnings ;
use strict ;

my %seen ;
while (<>) {
    if ( m{ // ( .*? ) / }x ) {
        my $dom = $1 ;
        print unless $seen{$dom}++ ;   # print the URL only the first time its domain is seen
        # print "$dom\n" ;             # debugging only: show the extracted domain
    } else {
        print "Unrecognised line: $_" ;
    }
}
But this treats www.exampleurl.com and exampleurl.com as different. My 2nd version has
if ( m{ // (?:www\.)? ( .*? ) / }x )
which ignores a leading "www.". You could probably refine the regexp a bit, but that is left to the reader.
Finally, you could comment the regexp a bit (the /x modifier allows this). It rather depends on who is going to be reading it - it could be regarded as too verbose.
if ( m{
       //           # match double slash
       (?:www\.)?   # ignore www
       (            # start capture
         .*?        # anything, but not greedy
       )            # end capture
       /            # match /
     }x ) {
I use m{} rather than // as the delimiters, to avoid having to write /\/\/.
- Find a Unix box if you don't have one, or get Cygwin
- Use tr to convert '.' to TAB for convenience.
- Use sort(1) to sort the lines by the domain name part. This might be made a little easier by writing an awk program to normalize the www part.
And there you go, you have the dups together. Then perhaps use uniq(1) to find the duplicates (a rough Python translation of this idea is sketched below for Windows users without Unix tools).
(Extra credit: why can't a regular expression alone do this? Computer science students should think about the pumping lemmas.)
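Here is that rough Python sketch of the same pipeline idea (the urlsin.txt / outurls.txt filenames are just placeholders): normalize the domain by stripping a leading "www.", sort the lines by that key so duplicates end up adjacent (like sort(1)), then keep one line per group (like uniq(1)). Note that, as with a real sort/uniq pipeline, this does not preserve the original first-occurrence order.

from itertools import groupby
from urllib.parse import urlparse

def domain_key(line):
    # Domain used for grouping, with any leading "www." stripped.
    host = urlparse(line.strip()).hostname or ""
    return host[4:] if host.startswith("www.") else host

with open("urlsin.txt") as f:
    lines = [line for line in f if domain_key(line)]    # drop blank/unparseable lines

lines.sort(key=domain_key)                              # duplicates become adjacent, like sort(1)

with open("outurls.txt", "w") as out:
    for _, group in groupby(lines, key=domain_key):     # one group per domain, like uniq(1)
        out.write(next(group))                          # keep a single URL per domain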
This can be achieved using the following code, which will extract all URLs with unique domains from the text file. It's not an efficient solution, though, so it's only really practical for lists of up to roughly 100k URLs.
from urllib.parse import urlparse
import codecs

all_urls = open('all-urls.txt', encoding='utf-8', errors='ignore').readlines()
print('all urls count = ', len(all_urls))

unique_urls = []
for url in all_urls:
    url = url.strip()
    root_url = urlparse(url).hostname
    # a URL counts as a duplicate if its hostname already appears as a substring of any kept URL
    is_duplicate = any(str(root_url) in unique_url for unique_url in unique_urls)
    if not is_duplicate:
        unique_urls.append(url)

unique_urls_file = codecs.open('unique-urls.txt', 'w', encoding='utf8')
for unique_url in unique_urls:
    unique_urls_file.write(unique_url + '\n')
unique_urls_file.close()

print('all unique urls count = ', len(unique_urls))
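For larger lists (such as the 6,000,000 URLs in the question), one way to speed this up is to replace the substring scan with a set of exact hostnames, making each duplicate check a constant-time lookup instead of a scan over every kept URL. This is only a sketch using the same all-urls.txt / unique-urls.txt filenames as above; note that exact comparison treats www.exampleurl.com and exampleurl.com as different hosts, so strip a leading "www." (as in the earlier answers) if they should be merged.

from urllib.parse import urlparse

seen_hosts = set()
unique_urls = []

with open('all-urls.txt', encoding='utf-8', errors='ignore') as in_file:
    for url in in_file:
        url = url.strip()
        host = urlparse(url).hostname
        if host is None:
            continue                    # skip blank or malformed lines
        if host not in seen_hosts:      # O(1) membership test per URL
            seen_hosts.add(host)
            unique_urls.append(url)

with open('unique-urls.txt', 'w', encoding='utf8') as out_file:
    for unique_url in unique_urls:
        out_file.write(unique_url + '\n')

print('all unique urls count = ', len(unique_urls))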