A good approximation algorithm for the maximum weight perfect matching in non-bipartite graphs?

Drake and Hougardy give a simple approximation algorithm for the maximum weighted matching problem. Understanding the academic paper is beyond my capabilities, so I'm looking for an easy implementation, preferably in PHP, C, or JavaScript?


Problem Definition and References

Given a simple graph (undirected, no self-edges, no multi-edges) a matching is a subset of edges such that no two of them are incident to the same vertex.

A perfect matching is one in which all vertices are incident to an edge of the matching, something not possible if there are an odd number of vertices. More generally we can ask for a maximum matching (largest possible number of edges in a matching) or for a maximal matching (a matching to which no more edges can be added); every maximum matching is maximal, but not conversely.

If positive real "weights" are assigned to the edges, we can generalize the problem to ask for a maximum-weighted matching, one that maximizes the sum of edges' weights. The exact maximum-weighted matching problem can be solved in O(nm log n) time, where n is the number of vertices and m the number of edges.

Note that a maximum-weighted matching need not be a perfect matching. For example:

*--1--*--3--*--1--*

has only one perfect matching, whose total weight is 2, and a maximum weighted matching with total weight 3.

Discussion and further references for exact and approximate solutions of these, and of the minimum-weighted perfect matching problem, may be found in these papers:

"A Simple Approximation Algorithm for the Weighted Matching Problem" Drake, Doratha E. and Hougardy, Stefan (2002)

Implementation of O(nm log n) Weighted Matchings The Power of Data Structures Melhorn, Kurt and Schäfer, Guido (2000)

Computing Minimum-Weight Perfect Matchings Cook, William and Rohe, André (1997)

Approximating Maximum Weight Matching in Near-linear Time Duan, Ran and Pettie, Seth (2010)

Drake and Hougardy's Simple Approximation Algorithm

The first approximation algorithm of Drake and Hougardy grows paths, at each step taking the locally heaviest edge at the vertex it has reached. It has a "performance ratio" of 1/2, like the greedy algorithm, but runs in time linear in the number of edges (the greedy algorithm repeatedly takes the globally heaviest remaining edge, which requires sorting the edges and thus costs O(m log m) time).

The main implementation task is to identify data structures that support the steps of their algorithm efficiently.

The idea of the PathGrowing algorithm:

Given: a simple undirected graph G with weighted edges

(0) Define two sets of edges L and R, initially empty.
(1) While the set of edges of G is not empty, do:
(2)    Choose arbitrary vertex v to which an edge is incident.
(3)    While v has incident edges, do:
(4)        Choose heaviest edge {u,v} incident to v.
(5)        Add edge {u,v} to L or R in alternating fashion.
(6)        Remove vertex v (and its incident edges) from G.
(7)        Let u take the role of v.
(8)    Repeat 3.
(9) Repeat 1.

Return L or R, whichever has the greater total weight.
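
To make these steps concrete before settling the data structures, here is a minimal C sketch of PathGrowing that simply stores the graph as an adjacency matrix of weights (0 meaning no edge). This is a quadratic-time toy, not the linear-time implementation the rest of this answer works toward, and the function and variable names are mine, not the paper's:

#include <stdio.h>

#define N 4                         /* vertices in the example graph */

/* Run PathGrowing on an adjacency matrix of weights (0.0 = no edge)
   and return the total weight of the heavier of the two sets L and R. */
double path_growing(double w[N][N])
{
    double total[2] = { 0.0, 0.0 }; /* running weights of L and R        */
    int side = 0;                   /* which set receives the next edge  */
    int removed[N] = { 0 };         /* 1 once a vertex is deleted from G */

    for (int start = 0; start < N; start++) {
        if (removed[start])
            continue;               /* (2) start a path at a live vertex */
        int v = start;
        for (;;) {
            /* (4) find the heaviest remaining edge {u,v} incident to v */
            int u = -1;
            for (int j = 0; j < N; j++)
                if (!removed[j] && w[v][j] > 0.0 &&
                    (u < 0 || w[v][j] > w[v][u]))
                    u = j;
            if (u < 0)
                break;              /* (3) v has no incident edges left  */
            total[side] += w[v][u]; /* (5) add {u,v} to L or R ...       */
            side = 1 - side;        /*     ... in alternating fashion    */
            removed[v] = 1;         /* (6) remove v and its edges from G */
            v = u;                  /* (7) u takes the role of v         */
        }
    }
    return total[0] > total[1] ? total[0] : total[1];
}

int main(void)
{
    /* the  *--1--*--3--*--1--*  example from above */
    double w[N][N] = {
        { 0, 1, 0, 0 },
        { 1, 0, 3, 0 },
        { 0, 3, 0, 1 },
        { 0, 0, 1, 0 },
    };
    printf("matching weight found: %g\n", path_growing(w)); /* prints 3 */
    return 0;
}

On the example path it happens to find the optimal weight 3; in general the guarantee is only that the returned weight is at least half the optimum.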

Data structures to represent the graph and the output

As a "set" is not in any immediate sense a data structure of C, we need to decide what kinds of container for edges and vertices will suit this algorithm. The critical operations are removing vertices and incident edges in a way that allows us to find if any edges are left and to compare weights of the remaining edges incident to a given vertex.

The edges need to be searchable, but only to see if any is still left. One thinks first of a simple linked list of edges, without any special ordering. But this list also needs to be maintained through essentially random deletions. This suggests a doubly-linked list (back links as well as forward at each node), so that deletion of an edge may be done by fixing up the links to skip over any "removed" node. Edge weights can also be stored in this same structure.
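
As a sketch of what such an edge node might look like in C (the field names are my own, and the removed flag is an assumption I add so that the per-vertex lists described next can recognize dead edges):

typedef struct edge {
    int          u, v;          /* endpoint vertex indices (0-based)    */
    double       weight;        /* persists even after the edge is cut  */
    int          removed;       /* assumed flag: marks deleted edges    */
    struct edge *prev, *next;   /* global doubly-linked list of edges   */
} edge;

/* Unlink e from the global edge list in O(1); *head is the list head. */
static void unlink_edge(edge **head, edge *e)
{
    e->removed = 1;
    if (e->prev) e->prev->next = e->next; else *head = e->next;
    if (e->next) e->next->prev = e->prev;
}

Checking whether any edges are left is then just a test of whether the list head is NULL.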

Further we need the ability to scan all (remaining) edges incident to a given vertex. We can do this by creating a linked list for each vertex of (pointers to) incident edges. I will assume that the vertices have been preprocessed to ordinal values that can be used as an index into an array of pointers to these linked lists.
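
Under that assumption the incidence lists might be sketched like this (names again illustrative):

struct edge;                    /* the doubly-linked edge node above */

typedef struct inc_node {
    struct edge     *e;         /* pointer into the global edge list */
    struct inc_node *next;
} inc_node;

/* incident[v] heads the list for vertex v; with n vertices it can be
   allocated once as:  incident = calloc(n, sizeof(inc_node *));     */
inc_node **incident;

Scanning the remaining edges at a vertex v then means walking incident[v] and skipping (or lazily unlinking) entries whose edge carries the removed mark.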

Finally we need to represent the edge sets L and R, one of which is to be returned as the approximate maximum matching. Our requirements are to be able to add edges to either set and to total the edge weights of each. Linked lists with dynamically allocated nodes can serve this purpose, storing pointers to the edge nodes in the original doubly-linked list, since the weight attribute persists there even after an edge has been "removed" by link manipulation.
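
A sketch of those two sets in C, again with names of my own choosing; each is a singly linked list of pointers back into the edge store plus a running weight total:

#include <stdlib.h>

struct edge;                        /* edge node from the sketch above */

typedef struct match_node {
    const struct edge *e;           /* points back into the edge store */
    struct match_node *next;
} match_node;

typedef struct {
    match_node *head;
    double      total;              /* running sum of member weights   */
} match_set;

/* Prepend edge e, of weight w, to set s in O(1). */
static void match_add(match_set *s, const struct edge *e, double w)
{
    match_node *n = malloc(sizeof *n);
    n->e = e;
    n->next = s->head;
    s->head = n;
    s->total += w;
}

Step (5) of the algorithm then alternates calls like match_add between L and R, and the final choice between them is a single comparison of the two total fields.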

Such linked and doubly-linked lists can be created in time proportional to the number of edges, since each doubly-linked list entry can be allocated and hooked into its two endpoints' vertex-specific lists as the input is read; a sketch follows.
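
Assuming the edge and inc_node types above (again only a sketch, with invented names), each input edge is allocated once and spliced onto the global list and both endpoints' incidence lists in constant time, for O(m) overall:

#include <stdlib.h>

/* Allocate edge {u,v} with weight wt, push it onto the global list
   *head, and record it in both endpoints' incidence lists.          */
edge *add_edge(edge **head, inc_node **incident, int u, int v, double wt)
{
    edge *e = malloc(sizeof *e);
    e->u = u; e->v = v; e->weight = wt;
    e->removed = 0;
    e->prev = NULL;
    e->next = *head;
    if (*head) (*head)->prev = e;
    *head = e;

    for (int side = 0; side < 2; side++) {   /* once for u, once for v */
        inc_node *n = malloc(sizeof *n);
        n->e = e;
        n->next = incident[side ? v : u];
        incident[side ? v : u] = n;
    }
    return e;
}

With such a design in mind we can analyze the effort required by each step of the algorithm.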

(to be continued)
