
Converting graph traversal to multiprocessing in Python

I've been working on a graph traversal algorithm over a simple network, and I'd like to run it using multiprocessing since it is going to require a lot of I/O-bound calls when I scale it up to the full network. The simple version runs pretty fast:

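# Assumed context: GH is a networkx graph, view is the db view being
# queried, and logger and max_depth are set up elsewhere in the module.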
already_seen = {}
already_seen_get = already_seen.get

GH_add_node = GH.add_node
GH_add_edge = GH.add_edge
GH_has_node = GH.has_node
GH_has_edge = GH.has_edge


def graph_user(user, depth=0):
    logger.debug("Searching for %s", user)
    logger.debug("At depth %d", depth)
    users_to_read = followers = following = []

    if already_seen_get(user):
        logger.debug("Already seen %s", user)
        return None
    already_seen[user] = True

    result = [x.value for x in list(view[user])]

    if result:
        result = result[0]
        following = result['following']
        followers = result['followers']
        users_to_read = set().union(following, followers)

    if not GH_has_node(user):
        logger.debug("Adding %s to graph", user)
        GH_add_node(user)

    for follower in users_to_read:
        if not GH_has_node(follower):
            GH_add_node(follower)
            logger.debug("Adding %s to graph", follower)
            if depth < max_depth:
                graph_user(follower, depth + 1)

        if GH_has_edge(follower, user):
            GH[follower][user]['weight'] += 1
        else:
            GH_add_edge(user, follower, weight=1)

It's actually significantly faster than my multiprocessing version:

import logging
from multiprocessing import Pool, Queue
from Queue import Empty  # on Python 3: from queue import Empty

to_write = Queue()
to_read = Queue()
to_edge = Queue()
already_seen = Queue()


def fetch_user():
    seen = {}
    read_get = to_read.get
    read_put = to_read.put
    write_put = to_write.put
    edge_put = to_edge.put
    seen_get = seen.get

    while True:
        try:
            logging.debug("Begging for a user")

            user = read_get(timeout=1)
            if seen_get(user):
                continue

            logging.debug("Adding %s", user)
            seen[user] = True
            result = [x.value for x in list(view[user])]
            write_put(user, timeout=1)

            if result:
                result = result.pop()
                logging.debug("Got user %s and result %s", user, result)
                following = result['following']
                followers = result['followers']
                users_to_read = list(set().union(following, followers))

                for x in users_to_read:
                    edge_put((user, x, 1))

                for y in users_to_read:
                    if not seen_get(y):
                        read_put(y, timeout=1)

        except Empty:
            logging.debug("Fetches complete")
            return


def write_node():
    users = []
    users_app = users.append
    write_get = to_write.get

    while True:
        try:
            user = write_get(timeout=1)
            logging.debug("Writing user %s", user)
            users_app(user)
        except Empty:
            logging.debug("Users complete")
            return users


def write_edge():
    edges = []
    edges_app = edges.append
    edge_get = to_edge.get

    while True:
        try:
            edge = edge_get(timeout=1)
            logging.debug("Writing edge %s", edge)
            edges_app(edge)
        except Empty:
            logging.debug("Edges Complete")
            return edges


if __name__ == '__main__':
    pool = Pool(processes=1)
    to_read.put(me)

    pool.apply_async(fetch_user)
    users = pool.apply_async(write_node)
    edges = pool.apply_async(write_edge)

    GH.add_weighted_edges_from(edges.get())
    GH.add_nodes_from(users.get())

    pool.close()
    pool.join()

What I can't figure out is why the single-process version is so much faster. In theory, the multiprocessing version should be reading and writing simultaneously. I suspect there is lock contention on the queues and that this is the cause of the slowdown, but I don't really have any evidence of it. When I scale the number of fetch_user processes it seems to run faster, but then I have issues synchronizing the seen data across them. So some thoughts I've had are:

  • Is this even a good application for multiprocessing? I was originally using it because I wanted to be able to fetch from the db in parallel.
  • How can I avoid resource contention when reading and writing from the same queue?
  • Did I miss some obvious caveat for the design?
  • What can I do to share a lookup table between the readers so I don't keep fetching the same user twice? (One idea I've had is sketched after this list.)
  • When increasing the number of fetching processes, the writers eventually lock up. It looks like the write queue is not being written to while the read queue is full. Is there a better way to handle this situation than with timeouts and exception handling?
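
For the lookup-table question, one idea I've been toying with is a multiprocessing.Manager dict shared between the fetchers. A minimal sketch of the idea (try_claim is just an illustrative helper, not code I have running):

from multiprocessing import Manager

manager = Manager()
shared_seen = manager.dict()  # proxy dict, visible to every worker process


def try_claim(user):
    # Each proxy method call is one round trip to the manager process,
    # so this check-and-set is not atomic across processes. A user may
    # occasionally be fetched twice, but the fetch is idempotent, so
    # that only costs a duplicate db call.
    if user in shared_seen:
        return False
    shared_seen[user] = True
    return True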


Queues in Python are synchronized. This means that only one thread at a time can read or write, which will definitely create a bottleneck in your app.

A better solution is to distribute the work based on a hash function and assign it to the threads with a simple modulo operation. For instance, if you have 4 threads you could have 4 queues:

thread_queues = []
for i in range(4):
    thread_queues.append(Queue())

for user in user_list:
    user_hash = hash(user.user_id)  # hash() here is just a shortcut to some standard hash utility
    thread_id = user_hash % 4
    thread_queues[thread_id].put(user)

# From here on, your pool of threads reads from thread_queues, but each
# thread ONLY accesses one queue, based on a numeric id given to it.

Most hash functions will distribute your data evenly. I normally use UMAC, but you could just try Python's built-in string hash.
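
To make that concrete, here is a minimal sketch of the whole pattern, where process_user and user_list are placeholders for your fetch logic and input; each worker owns exactly one queue, and a sentinel value replaces the timeout-based shutdown:

from multiprocessing import Process, Queue

NUM_WORKERS = 4
SENTINEL = None  # tells a worker to shut down


def worker(q):
    while True:
        user = q.get()
        if user is SENTINEL:
            return
        process_user(user)  # placeholder: your fetch/graph logic goes here


queues = [Queue() for _ in range(NUM_WORKERS)]
workers = [Process(target=worker, args=(q,)) for q in queues]
for w in workers:
    w.start()

# shard users across queues so no two workers contend on the same one
for user in user_list:
    queues[hash(user.user_id) % NUM_WORKERS].put(user)

for q in queues:
    q.put(SENTINEL)
for w in workers:
    w.join()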

Another improvement would be to avoid the use of Queues and use a non-synchronized object, such as a list.
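
For example, each worker can accumulate its edges in an ordinary local list and simply return it, and Pool.map will collect the per-worker lists for you with no shared queues at all. A rough sketch, where fetch_followers and user_list are placeholders for your db view lookup and input:

from multiprocessing import Pool


def build_edges(user):
    # fetch_followers is a placeholder: it stands in for the view[user]
    # lookup and returns the ids connected to this user
    edges = []
    for other in fetch_followers(user):
        edges.append((user, other, 1))
    return edges


if __name__ == '__main__':
    pool = Pool(processes=4)
    results = pool.map(build_edges, user_list)
    pool.close()
    pool.join()

    # flatten the per-worker lists and load the graph in one shot
    GH.add_weighted_edges_from(edge for chunk in results for edge in chunk)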
