multithreaded linked list in C++

First a little explanation of what I am trying to do:

My plan is to write a program with a socket stream implemented using the boost::asio library, which feeds data to a parser implemented using boost::spirit::qi. The parser will take the packets, fill a packet object, and then append that object to the end of a linked list of packet objects. The packet processor will read the first object in the list, do its processing, then move on to the next item and delete the first.

I decided to use a linked list because if I used a std::queue I would have to lock the entire container every time the stream added a packet or the processor removed one, which would make the two threads run more or less serially, which I would like to avoid. Plus the queue class has a tendency to copy the entire objects, whereas the linked list idea has the benefit of creating the object once and then just pointing to it.

To avoid serializing this whole business I intend to place boost::mutex mutexes in each node and lock them from there. The idea is to have the socket stream create the list and immediately lock the first node, populate the node from the parser, create the next node and lock it, then unlock the first node and move on to the next node to do work. This way there is never an unlocked node dangling at the end that the packet processor might jump to and delete under the socket stream's nose. The packet processor will check the first node and try to lock it; if it locks it, it will do its processing, unlock it, get the next node, and then delete the first node. This way serialization is limited to those times when the packet processor has caught up to the socket stream class.
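To make the idea concrete, the node I have in mind would look roughly like this (just a sketch with placeholder names, not the final code):

```cpp
// Rough sketch of the per-node locking idea: each node carries its own
// boost::mutex so the stream and the processor can hold different nodes
// at the same time. Packet and Node are placeholder names.
#include <boost/thread/mutex.hpp>

struct Packet { /* fields filled in by the spirit::qi parser */ };

struct Node {
    Node() : next(0) {}
    Packet       packet;
    Node*        next;   // null until the stream has created the next node
    boost::mutex guard;  // stream locks while filling, processor while processing
};
```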

So now my question is: before I do the work of actually implementing this, does this sound like a good idea? I've tried it on a trivial test and it seems to work all right, and I can't think of any serious issues with it as long as I implement exception handling and take care to free any memory I allocate. But if anyone can think of any problems with this idea that I've overlooked, I would appreciate the input. I would also appreciate any other suggestions, either as an alternative or to make this idea work better.

Thanks in advance!


Check out this article; it's about multiple consumers, but still brilliant:

Measuring Parallel Performance: Optimizing a Concurrent Queue


This implementation is screaming three things at me:

  • Way too easy to get deadlock, because insert and delete will need to lock multiple mutexes simultaneously, and you can't do that atomically. Well, you can, but you would need to put mutexes around your mutexes.
  • Way too easy to get corruption. The problems that lead to deadlock can also lead to corruption.
  • And slow, slow, slow. Think of your poor list walker: each step involves an unlock, a lock, another unlock, and another lock. It is going to have to be very careful in stepping, and man, will it be expensive. Lock and unlock each item, and do so in the correct order? Ouch.

This looks like a case of premature optimization and premature pessimization operating in parallel. What drove this architecture?

I suggest starting with the simple solution first. Lock the whole thing whenever you want to touch any part of it. See if that gets you in trouble. If it doesn't, problem solved. If it does, the next step up is to switch from mutexes to read-write locks. Just one, not a gazillion. Now you have to think a bit about whether you want a shared or exclusive lock. If you have been diligent about const-correctness, use a shared lock (read lock) for your const methods and an exclusive lock (write lock) for your non-const methods.
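For instance, a minimal sketch of the single-lock version using Boost's shared_mutex could look like this (LockedList is just an illustrative name, not a specific library class):

```cpp
#include <list>
#include <boost/thread/shared_mutex.hpp>
#include <boost/thread/locks.hpp>

struct Packet { /* parsed packet fields */ };

// One read-write lock guarding the whole container: exclusive for writers,
// shared for read-only (const) access.
class LockedList {
public:
    void push_back(const Packet& p) {            // non-const: exclusive lock
        boost::unique_lock<boost::shared_mutex> lock(mutex_);
        items_.push_back(p);
    }

    bool pop_front(Packet& out) {                // non-const: exclusive lock
        boost::unique_lock<boost::shared_mutex> lock(mutex_);
        if (items_.empty()) return false;
        out = items_.front();
        items_.pop_front();
        return true;
    }

    bool empty() const {                         // const: shared (read) lock
        boost::shared_lock<boost::shared_mutex> lock(mutex_);
        return items_.empty();
    }

private:
    std::list<Packet> items_;
    mutable boost::shared_mutex mutex_;
};
```

If this simple version keeps up with your packet rate, you are done.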


I don't think what you suggest will work. Remember that when you delete a node from a linked list, you need to update the other nodes that point to the deleted node. Similarly, when you add a node you also need to update other nodes to point at the new node. So just locking the node being deleted or added isn't enough.
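To illustrate: removing a node also modifies its predecessor, so a lock held only on the node being deleted does not cover everything that changes (Node here is just a hypothetical singly-linked node with a next pointer):

```cpp
// Unlinking 'victim' writes to prev->next as well, so the predecessor needs
// protection too, not just the node being removed.
void erase_after(Node* prev) {
    Node* victim = prev->next;   // the node being removed
    prev->next = victim->next;   // the *previous* node is modified here
    delete victim;
}
```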

There are lock-free queues, but they are fiendishly difficult to get correct. For instance, look at the initial comments to the article here describing the extra work required to get a published algorithm to work.


Even though you are calling this a linked list, this is, in effect, a queue.

It is possible to implement a single-producer, single-consumer lock-free queue if you are willing to use a fixed-size buffer. This lets you control the memory usage, at the cost of making the Producer wait if the Consumer is not fast enough (a rough sketch of such a queue follows at the end of this answer).

Apart from this slight point, your design looks fine; it will probably be easier to get it right than the lock-free alternative.

Do remember to have a termination condition (a null pointer in the next field, for example) so that the Producer can signal to the Consumer that there is nothing more to process.
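A minimal sketch of such a fixed-size single-producer/single-consumer queue, together with the termination signal, might look like this (SpscQueue and the commented process() call are illustrative names, assuming C++11 atomics):

```cpp
#include <atomic>
#include <cstddef>

struct Packet { /* parsed packet fields */ };

// Fixed-size ring buffer shared by exactly one producer thread and one
// consumer thread. One slot is deliberately left unused so that "full"
// can be told apart from "empty".
template <std::size_t Capacity>
class SpscQueue {
public:
    // Producer thread only.
    bool push(Packet* p) {
        const std::size_t head = head_.load(std::memory_order_relaxed);
        const std::size_t next = (head + 1) % Capacity;
        if (next == tail_.load(std::memory_order_acquire))
            return false;                         // full: producer must wait
        buffer_[head] = p;
        head_.store(next, std::memory_order_release);
        return true;
    }

    // Consumer thread only.
    bool pop(Packet*& out) {
        const std::size_t tail = tail_.load(std::memory_order_relaxed);
        if (tail == head_.load(std::memory_order_acquire))
            return false;                         // empty: nothing to do yet
        out = buffer_[tail];
        tail_.store((tail + 1) % Capacity, std::memory_order_release);
        return true;
    }

private:
    Packet* buffer_[Capacity];
    std::atomic<std::size_t> head_{0};            // next slot to write
    std::atomic<std::size_t> tail_{0};            // next slot to read
};

// Consumer loop: the producer pushes a null pointer as the "no more packets"
// sentinel once the stream has ended.
void consume(SpscQueue<1024>& queue) {
    Packet* p = 0;
    for (;;) {
        if (!queue.pop(p)) continue;              // empty: spin/yield and retry
        if (p == 0) break;                        // sentinel: we are done
        // process(*p);                           // application-specific work
        delete p;
    }
}
```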


Hmm... why such a complex solution to a common problem? There are plenty of producer-consumer queue classes out there; pick one whose elements are reference/pointer/int sized (i.e. no data copying).

Get bytes from your stream until you have assembled a 'packet object' according to your protocol. Shove the packet object reference onto the queue and immediately new() another one for the next packet. The packet processor consumes packet objects from the queue.

The queue is only locked for the time taken to push/pop an object reference to/from the queue. The packet object being assembled by the socket/stream callback and the packet object being processed by the packet processor are always different, so no locking required.
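Something along these lines is all that is needed; a rough sketch using a Boost mutex and condition variable (PacketQueue is just an illustrative name):

```cpp
#include <queue>
#include <boost/thread/mutex.hpp>
#include <boost/thread/condition_variable.hpp>

struct Packet { /* assembled from the socket stream */ };

// Unbounded queue of packet pointers: only the pointer is copied, and the
// lock is held just long enough to push or pop it.
class PacketQueue {
public:
    void push(Packet* p) {
        boost::mutex::scoped_lock lock(mutex_);
        queue_.push(p);
        ready_.notify_one();
    }

    Packet* pop() {                    // blocks until a packet is available
        boost::mutex::scoped_lock lock(mutex_);
        while (queue_.empty())
            ready_.wait(lock);
        Packet* p = queue_.front();
        queue_.pop();
        return p;
    }

private:
    std::queue<Packet*> queue_;
    boost::mutex mutex_;
    boost::condition_variable ready_;
};
```

The stream thread calls push() with each finished packet and immediately starts assembling a new one; the processor thread just loops on pop().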

Trying to operate on objects that are already in the queue, (or queue-like linked-list), sounds like a nightmare to me, (and the other posters seem to agree). Is there some other reason why you need to do this?

Rgds, Martin
