How to optimize my PageRank calculation?
In the book Programming Collective Intelligence I found the following function to compute the PageRank:
def calculatepagerank(self,iterations=20):
    # clear out the current PageRank tables
    self.con.execute("drop table if exists pagerank")
    self.con.execute("create table pagerank(urlid primary key,score)")
    self.con.execute("create index prankidx on pagerank(urlid)")

    # initialize every url with a PageRank of 1.0
    self.con.execute("insert into pagerank select rowid,1.0 from urllist")
    self.dbcommit()

    for i in range(iterations):
        print "Iteration %d" % i
        for (urlid,) in self.con.execute("select rowid from urllist"):
            pr=0.15
            # Loop through all the pages that link to this one
            for (linker,) in self.con.execute("select distinct fromid from link where toid=%d" % urlid):
                # Get the PageRank of the linker
                linkingpr=self.con.execute("select score from pagerank where urlid=%d" % linker).fetchone()[0]
                # Get the total number of links from the linker
                linkingcount=self.con.execute("select count(*) from link where fromid=%d" % linker).fetchone()[0]
                pr+=0.85*(linkingpr/linkingcount)
            self.con.execute("update pagerank set score=%f where urlid=%d" % (pr,urlid))
        self.dbcommit()
However, this function is very slow because of all the SQL queries in every iteration:
>>> import cProfile
>>> cProfile.run("crawler.calculatepagerank()")
2262510 function calls in 136.006 CPU seconds
Ordered by: standard name
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 136.006 136.006 <string>:1(<module>)
1 20.826 20.826 136.006 136.006 searchengine.py:179(calculatepagerank)
21 0.000 0.000 0.528 0.025 searchengine.py:27(dbcommit)
21 0.528 0.025 0.528 0.025 {method 'commit' of 'sqlite3.Connecti
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler
1339864 112.602 0.000 112.602 0.000 {method 'execute' of 'sqlite3.Connec
922600 2.050 0.000 2.050 0.000 {method 'fetchone' of 'sqlite3.Cursor'
1 0.000 0.000 0.000 0.000 {range}
So I optimized the function and came up with this:
def calculatepagerank2(self,iterations=20):
    # clear out the current PageRank tables
    self.con.execute("drop table if exists pagerank")
    self.con.execute("create table pagerank(urlid primary key,score)")
    self.con.execute("create index prankidx on pagerank(urlid)")

    # initialize every url with a PageRank of 1.0
    self.con.execute("insert into pagerank select rowid,1.0 from urllist")
    self.dbcommit()

    inlinks={}
    numoutlinks={}
    pagerank={}

    for (urlid,) in self.con.execute("select rowid from urllist"):
        inlinks[urlid]=[]
        numoutlinks[urlid]=0
        # Initialize pagerank vector with 1.0
        pagerank[urlid]=1.0
        # Loop through all the pages that link to this one
        for (inlink,) in self.con.execute("select distinct fromid from link where toid=%d" % urlid):
            inlinks[urlid].append(inlink)
        # get number of outgoing links from a page
        numoutlinks[urlid]=self.con.execute("select count(*) from link where fromid=%d" % urlid).fetchone()[0]

    for i in range(iterations):
        print "Iteration %d" % i
        for urlid in pagerank:
            pr=0.15
            for link in inlinks[urlid]:
                linkpr=pagerank[link]
                linkcount=numoutlinks[link]
                pr+=0.85*(linkpr/linkcount)
            pagerank[urlid]=pr

    for urlid in pagerank:
        self.con.execute("update pagerank set score=%f where urlid=%d" % (pagerank[urlid],urlid))
    self.dbcommit()
This function is many times faster (but uses a lot more memory for all the temporary dictionaries) because it avoids the unnecessary SQL queries in every iteration:
>>> cProfile.run("crawler.calculatepagerank2()")
90070 function calls in 3.527 CPU seconds
Ordered by: standard name
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.004 0.004 3.527 3.527 <string>:1(<module>)
1 1.154 1.154 3.523 3.523 searchengine.py:207(calculatepagerank2
2 0.000 0.000 0.058 0.029 searchengine.py:27(dbcommit)
23065 0.013 0.000 0.013 0.000 {method 'append' of 'list' objects}
2 0.058 0.029 0.058 0.029 {method 'commit' of 'sqlite3.Connectio
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler
43932 2.261 0.000 2.261 0.000 {method 'execute' of 'sqlite3.Connecti
23065 0.037 0.000 0.037 0.000 {method 'fetchone' of 'sqlite3.Cursor'
1 0.000 0.000 0.000 0.000 {range}
But is it possible to further reduce the number of SQL queries to speed up the function even more? Update: Fixed Indentation in calculatepagerank2().
If you have a very large database (e.g. # records ~ # pages in the WWW) using the database in a manner similar to what's suggested in the book makes sense, because you're not going to be able to keep all that data in memory.
If your dataset is small enough, you can (probably) improve your second version by not doing so many queries. Try replacing your first loop with something like this:
for urlid, in self.con.execute('select rowid from urllist'):
    inlinks[urlid] = []
    numoutlinks[urlid] = 0
    pagerank[urlid] = 1.0
for src, dest in self.con.execute('select fromid, toid from link'):
    inlinks[dest].append(src)
    numoutlinks[src] += 1
This version does exactly 2 queries instead of O(n) queries (your first loop issues one inlink query and one count(*) query per URL).
I believe the majority of the time is being spent on these SQL queries:
for (urlid,) in self.con.execute("select rowid from urllist"):
    ...
    for (inlink,) in self.con.execute("select distinct fromid from link where toid=%d" % urlid):
        ...
    numoutlinks[urlid]=self.con.execute("select count(*) from link where fromid=%d" % urlid).fetchone()[0]
Assuming you have enough memory, you may be able to reduce this to just two queries:
SELECT fromid,toid FROM link WHERE toid IN (SELECT rowid FROM urllist)
and
SELECT fromid,count(*) FROM link WHERE fromid IN (SELECT rowid FROM urllist) GROUP BY fromid
Then you could loop through the results and build inlinks, numoutlinks and pagerank.
You may also benefit from using collections.defaultdict:
import collections
import itertools
def constant_factory(value):
    return itertools.repeat(value).next
The following then makes inlinks a dict of sets. Sets are appropriate since you only want distinct URLs:
inlinks=collections.defaultdict(set)
And this makes pagerank a dict whose default value is 1.0:
pagerank=collections.defaultdict(constant_factory(1.0))
The advantage of using collections.defaultdict is that you do not need to pre-initialize the dicts.
So, put together, what I'm suggesting would look something like this:
import collections
import itertools

def constant_factory(value):
    return itertools.repeat(value).next

def calculatepagerank2(self,iterations=20):
    # clear out the current PageRank tables
    self.con.execute("DROP TABLE IF EXISTS pagerank")
    self.con.execute("CREATE TABLE pagerank(urlid primary key,score)")
    self.con.execute("CREATE INDEX prankidx ON pagerank(urlid)")

    # initialize every url with a PageRank of 1.0
    self.con.execute("INSERT INTO pagerank SELECT rowid,1.0 FROM urllist")
    self.dbcommit()

    inlinks=collections.defaultdict(set)
    sql='''SELECT fromid,toid FROM link WHERE toid IN (SELECT rowid FROM urllist)'''
    for f,t in self.con.execute(sql):
        inlinks[t].add(f)

    numoutlinks={}
    sql='''SELECT fromid,count(*) FROM link WHERE fromid IN (SELECT rowid FROM urllist) GROUP BY fromid'''
    for f,c in self.con.execute(sql):
        numoutlinks[f]=c

    pagerank=collections.defaultdict(constant_factory(1.0))
    for i in range(iterations):
        print "Iteration %d" % i
        for urlid in inlinks:
            pr=0.15
            for link in inlinks[urlid]:
                linkpr=pagerank[link]
                linkcount=numoutlinks[link]
                pr+=0.85*(linkpr/linkcount)
            pagerank[urlid]=pr

    sql="UPDATE pagerank SET score=? WHERE urlid=?"
    args=((pagerank[urlid],urlid) for urlid in pagerank)
    self.con.executemany(sql, args)
    self.dbcommit()
Do you have enough RAM to hold the sparse matrix (fromid, toid) in some form? That would allow big optimizations (with big algorithmic changes). At least, caching in memory the (fromid, numlinks) that you now compute with a select count(*) in your innermost loop should help (I'd imagine that cache, being O(N) in space if you're dealing with N URLs, would be more likely to fit in memory).
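A minimal sketch of what such a cache could look like (hypothetical variable names; it assumes the same self.con connection and link table used in the question's code):

# Hypothetical sketch: build the (fromid, numlinks) cache once, up front,
# instead of running "select count(*)" for every linker in the innermost loop.
numlinks_cache = {}
for (fromid, cnt) in self.con.execute("select fromid, count(*) from link group by fromid"):
    numlinks_cache[fromid] = cnt

# In the inner loop of calculatepagerank, the count(*) query then becomes a dict lookup:
# linkingcount = numlinks_cache[linker]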
I'm answering my own question, since in the end it came out that a mixture of all answers worked best for me:
def calculatepagerank4(self,iterations=20):
    # clear out the current PageRank tables
    self.con.execute("drop table if exists pagerank")
    self.con.execute("create table pagerank(urlid primary key,score)")
    self.con.execute("create index prankidx on pagerank(urlid)")

    # initialize every url with a PageRank of 1.0
    self.con.execute("insert into pagerank select rowid,1.0 from urllist")
    self.dbcommit()

    inlinks={}
    numoutlinks={}
    pagerank={}

    for (urlid,) in self.con.execute("select rowid from urllist"):
        inlinks[urlid]=[]
        numoutlinks[urlid]=0
        # Initialize pagerank vector with 1.0
        pagerank[urlid]=1.0

    for src,dest in self.con.execute("select distinct fromid, toid from link"):
        inlinks[dest].append(src)
        numoutlinks[src]+=1

    for i in range(iterations):
        print "Iteration %d" % i
        for urlid in pagerank:
            pr=0.15
            for link in inlinks[urlid]:
                linkpr=pagerank[link]
                linkcount=numoutlinks[link]
                pr+=0.85*(linkpr/linkcount)
            pagerank[urlid]=pr

    args=((pagerank[urlid],urlid) for urlid in pagerank)
    self.con.executemany("update pagerank set score=? where urlid=?", args)
    self.dbcommit()
So I replaced the first two loops as suggested by allyourcode, but in addition also used executemany() as in the solution from ~unutbu. But unlike ~unutbu, I use a generator expression for args, to not waste too much memory, although using a list comprehension was a little bit faster. In the end the routine was 100 times faster than the routine suggested in the book:
>>> cProfile.run("crawler.calculatepagerank4()")
33512 function calls in 1.377 CPU seconds
Ordered by: standard name
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.004 0.004 1.377 1.377 <string>:1(<module>)
2 0.000 0.000 0.073 0.036 searchengine.py:27(dbcommit)
1 0.693 0.693 1.373 1.373 searchengine.py:286(calculatepagerank4
10432 0.011 0.000 0.011 0.000 searchengine.py:321(<genexpr>)
23065 0.009 0.000 0.009 0.000 {method 'append' of 'list' objects}
2 0.073 0.036 0.073 0.036 {method 'commit' of 'sqlite3.Connectio
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler
6 0.379 0.063 0.379 0.063 {method 'execute' of 'sqlite3.Connecti
1 0.209 0.209 0.220 0.220 {method 'executemany' of 'sqlite3.Conn
1 0.000 0.000 0.000 0.000 {range}
One should also be aware of the following problems:
- If you use string formatting with %f instead of a ? placeholder for constructing the SQL statement, you will lose precision (e.g. I got 2.9796095721920315 using ? but 2.9796100000000001 using %f).
- Duplicate links from one page to another are treated as only one link in the default PageRank algorithm. However, the solution from the book didn't take that into account.
- The whole algorithm from the book is flawed: in each iteration the PageRank score is not stored in a second table, so the outcome of an iteration depends on the order in which the pages are looped over, and this can change the result after several iterations quite drastically. To fix this, one either has to use an additional table/dictionary that stores the PageRank for the next iteration, or switch to a completely different algorithm like power iteration. A sketch of the additional-dictionary fix follows this list.
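A minimal sketch of the additional-dictionary fix (hypothetical names; it assumes inlinks, numoutlinks and pagerank have been built as in calculatepagerank4 above): each iteration reads only the scores from the previous iteration and writes into a separate dictionary, so the result no longer depends on the order in which the pages are visited.

# Hypothetical sketch of a synchronous update: read only from pagerank
# (previous iteration), write into newrank, then swap.
for i in range(iterations):
    newrank = {}
    for urlid in pagerank:
        pr = 0.15
        for link in inlinks[urlid]:
            # use the score from the previous iteration, never a half-updated one
            pr += 0.85 * (pagerank[link] / numoutlinks[link])
        newrank[urlid] = pr
    pagerank = newrank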