How is this memoized DP table too slow for SPOJ?

SPOILERS: I'm working on http://www.spoj.pl/problems/KNAPSACK/ so don't peek if you don't want a possible solution spoiled for you.

The boilerplate:

import Data.Sequence (index, fromList)
import Data.MemoCombinators (memo2, integral)

main = interact knapsackStr

knapsackStr :: String -> String
knapsackStr str = show $ knapsack items capacity numItems
  where [capacity, numItems] = map read . words $ head ls
        ls = lines str
        items = map (makeItem . words) $ take numItems $ tail ls

Some types and helpers to set the stage:

type Item = (Weight, Value)
type Weight = Int
type Value = Int

weight :: Item -> Weight
weight = fst

value :: Item -> Value
value = snd

makeItem :: [String] -> Item
makeItem [w, v] = (read w, read v)

And the primary function:

knapsack :: [Item] -> Weight -> Int -> Value
knapsack itemsList = go
  where go = memo2 integral integral knapsack'
        items = fromList $ (0,0):itemsList
        knapsack' 0 _ = 0
        knapsack' _ 0 = 0
        knapsack' w i | wi > w    = exclude
                      | otherwise = max exclude include
          where wi = weight item
                vi = value item
                item = items `index` i
                exclude = go w (i-1)
                include = go (w-wi) (i-1) + vi

And this code works; I've tried plugging in the SPOJ sample test case and it produces the correct result. But when I submit this solution to SPOJ (instead of importing Luke Palmer's MemoCombinators, I simply copy and paste the necessary parts into the submitted source), it exceeds the time limit. =/

I don't understand why; I asked earlier about an efficient way to perform 0-1 knapsack, and I'm fairly convinced that this is about as fast as it gets: a memoized function that will only recursively calculate the sub-entries that it absolutely needs in order to produce the correct result. Did I mess up the memoization somehow? Is there a slow point in this code that I am missing? Is SPOJ just biased against Haskell?

I even put {-# OPTIONS_GHC -O2 #-} at the top of the submission, but alas, it didn't help. I have tried a similar solution that uses a 2D array of Sequences, but it was also rejected as too slow.


There's one major problem that really slows this down: it's too polymorphic. Type-specialized versions of functions can be much faster than polymorphic ones, and for whatever reason GHC isn't inlining this code to the point where it can determine the exact types in use. When I change the definition of integral to:

integral :: Memo Int
integral = wrap id id bits

I get an approximately 5-fold speedup; I think it's fast enough to be accepted on SPOJ.
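
A minimal sketch of the same specialization done from the caller's side, without touching the library's own integral; memoInt is a hypothetical local name, and this assumes the Memo type, wrap and bits exported by Data.MemoCombinators:

import Data.MemoCombinators (Memo, memo2, wrap, bits)

-- A memo combinator pinned to Int keys instead of the polymorphic integral.
memoInt :: Memo Int
memoInt = wrap id id bits

-- knapsack's go would then read:
--   go = memo2 memoInt memoInt knapsack'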

This is still significantly slower than gorlum0's solution, however. I suspect the reason is that he's using arrays while you use a custom trie type. A trie takes much more memory and also makes lookups slower due to extra indirection, cache misses, etc. You might be able to make up a lot of the difference if you strictify and unbox fields in IntMap, but I'm not sure that's possible. Trying to strictify fields in BitTrie creates runtime crashes for me.
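
For comparison, here is a minimal sketch (not gorlum0's actual code) of the array-based tabulation alluded to above: the same recurrence filled into a lazy Data.Array, so every lookup is a flat O(1) index with no trie indirection. It reuses the Item, Weight and Value definitions from the question.

import Data.Array

knapsackArr :: [Item] -> Weight -> Int -> Value
knapsackArr itemsList capacity numItems = table ! (capacity, numItems)
  where items = listArray (1, numItems) itemsList
        -- table is a lazy array, so cells are computed on demand and shared
        table = array ((0, 0), (capacity, numItems))
                      [ ((w, i), cell w i) | w <- [0 .. capacity], i <- [0 .. numItems] ]
        cell 0 _ = 0
        cell _ 0 = 0
        cell w i | wi > w    = exclude
                 | otherwise = max exclude include
          where (wi, vi) = items ! i
                exclude  = table ! (w, i - 1)
                include  = table ! (w - wi, i - 1) + vi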

Pure Haskell memoizing code can be good, but I don't think it's as fast as doing unsafe things (at least under the hood). You might apply Lennart Augustsson's technique to see if it fares better at memoization.


The one thing that slows down Haskell is I/O. The String type is a lazy linked list of Unicode Chars, which gives flexibility we don't need for SPOJ. ByteStrings are blazing fast, so you might want to consider using them instead.
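
A minimal sketch of what that ByteString I/O could look like as a replacement for the interact-based main above, reusing the question's knapsack function; B.readInt returns Maybe (Int, ByteString), and the maybe 0 fst fallback assumes well-formed SPOJ input.

import qualified Data.ByteString.Char8 as B

main :: IO ()
main = do
  contents <- B.getContents
  -- Treat the whole input as a flat stream of whitespace-separated integers.
  let ints = map (maybe 0 fst . B.readInt) (B.words contents)
      (capacity : numItems : rest) = ints
      pairUp (w : v : xs) = (w, v) : pairUp xs
      pairUp _            = []
      items = pairUp (take (2 * numItems) rest)
  print (knapsack items capacity numItems)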
