
What is the fastest setup for computing large string datasets?

For my day job, I have been tasked with setting up a computer system to run calculations on a large database of strings. I have established a proof of concept, but I don't have the low-level knowledge to optimize the hardware and software environment. I was hoping for some guidance on this aspect.

Setup:

  • 100,000 records in a database containing strings
  • I will be performing string similarity calculations to look for approximate duplicates
    • i.e. each string against every other string, so ~5 billion calculations
  • I wrote the proof of concept in Ruby, using SQLite3 as the database and 1,000 sample rows (the core loop is roughly the sketch after this list)
  • The total job should run in under a few days - the faster the better, but with diminishing returns. This is a one-time pass, so I don't need a supercomputer if a desktop setup can do it within a few days
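
For a sense of the shape of the computation, the pairwise pass looks roughly like the sketch below (simplified: the Levenshtein implementation and the threshold are stand-ins for whatever metric ends up being used, and the real code pulls the rows from SQLite3 rather than a literal array):

```ruby
# Simplified sketch of the O(n^2) pairwise pass; the real job reads the
# 100,000 rows from SQLite3 instead of using a literal array.

# Plain dynamic-programming Levenshtein distance. A C-extension gem would
# be far faster, but this shows the shape of the work.
def levenshtein(a, b)
  return b.length if a.empty?
  return a.length if b.empty?

  prev = (0..b.length).to_a
  a.each_char.with_index(1) do |ca, i|
    curr = [i]
    b.each_char.with_index(1) do |cb, j|
      cost = ca == cb ? 0 : 1
      curr << [curr[j - 1] + 1, prev[j] + 1, prev[j - 1] + cost].min
    end
    prev = curr
  end
  prev.last
end

strings = %w[kitten sitten sitting mitten house]  # stand-in for the DB rows

near_duplicates = []
strings.combination(2) do |a, b|
  near_duplicates << [a, b] if levenshtein(a, b) <= 2  # threshold is arbitrary here
end

p near_duplicates
```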

What I'm Looking For:

  • If I'm building a custom box to run this job (and potentially future jobs of a similar nature), what hardware should I focus on optimizing? I.e., should I spend my limited budget on a very fast GPU? A CPU? Large amounts of RAM? I don't know Ruby at a low enough level to know where the bottlenecks for this type of operation are.
  • Am I missing a better approach? I won't get approval for any major purchases of software or expensive hardware, at least until I can prove this method works with this run-through. But can anyone suggest a more efficient method of detecting inexact duplicates?


First off, 100,000 strings don't really qualify as a large dataset nowadays, so don't worry too much about the hardware. Here are some suggestions from my previous job (related to search and machine translation) and the current one, where I deal with several hundred thousand to millions of XML records all the time:

  • You want RAM. Lots of it.
  • As Soren said, you want to make sure your algorithm is good.
  • Choose your DB wisely. Postgres, for example, has excellent string functions, and doing certain things directly in the DB can be very fast (see the pg_trgm sketch after this list). Have I said you want a lot of RAM?
  • Your job sounds like it would be fairly easy to partition into smaller subtasks which can be tackled in parallel (there is a single-box sketch after this list). If that's really the case, you might want to look at MapReduce. In the previous job we had pretty good workstations (4 cores, 8 GB of RAM) which were never turned off, so we turned some of them into a Hadoop cluster that would do useful work. Since the machines were quite overpowered for everyday use anyway, the users didn't even notice. It's usually not that difficult to turn something into a MapReduce job, and the other advantage is that you can keep the setup around for similar tasks in the future.
  • As for Ruby-specific bottlenecks, the biggest one in MRI is usually garbage collection, which, thanks to its stop-the-world nature, is super slow. When we profile, this regularly turns out to be a problem. See why's article "The Fully Upturned Bin" for details on Ruby GC. If you are set on using Ruby, you might want to compare MRI to JRuby; from my experience with the latter and profilers like JVisualVM, I wouldn't be surprised if JRuby fared better.
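
To make the Postgres point concrete, here is a rough sketch of what "doing it in the DB" could look like with the pg_trgm extension (trigram similarity). The table strings(id, body), the database name, and reliance on the default similarity threshold are all assumptions for illustration:

```ruby
require 'pg'  # the "pg" gem; the connection details are placeholders

conn = PG.connect(dbname: 'dedup')  # hypothetical database

# pg_trgm provides similarity() and the % operator, and % can be backed by
# a GIN trigram index, which prunes most of the ~5 billion candidate pairs.
conn.exec('CREATE EXTENSION IF NOT EXISTS pg_trgm')
conn.exec('CREATE INDEX IF NOT EXISTS strings_body_trgm
             ON strings USING gin (body gin_trgm_ops)')

# Self-join keeping only pairs whose trigram similarity clears the threshold;
# a.id < b.id avoids self-pairs and mirrored duplicates.
rows = conn.exec(<<~SQL)
  SELECT a.id, b.id AS other_id, similarity(a.body, b.body) AS score
  FROM   strings AS a
  JOIN   strings AS b ON a.id < b.id AND a.body % b.body
  ORDER  BY score DESC
SQL

rows.each { |r| puts "#{r['id']} ~ #{r['other_id']} (score #{r['score']})" }
```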
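
And for the partitioning point, a minimal single-box sketch of the "map" side using the parallel gem (the sample strings, slice size, worker count, and the similar? placeholder are all made up; a Hadoop job would spread the same slices across machines instead of local processes):

```ruby
require 'parallel'  # the "parallel" gem

# Placeholder predicate: an anagram check stands in for a real similarity metric.
def similar?(a, b)
  a.chars.sort == b.chars.sort
end

strings = %w[alpha alpah beta betas gamma]  # stand-in for the DB rows

# "Map": each worker gets a slice of left-hand indexes and compares them
# against everything to their right, so every pair is examined exactly once.
slices = (0...strings.size).each_slice(2).to_a

partials = Parallel.map(slices, in_processes: 4) do |slice|
  slice.flat_map do |i|
    ((i + 1)...strings.size)
      .select { |j| similar?(strings[i], strings[j]) }
      .map    { |j| [strings[i], strings[j]] }
  end
end

# "Reduce": merge the per-worker partial results.
p partials.flatten(1)  # => [["alpha", "alpah"]]
```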


The total job should run in under a few days...
This is a one-time pass...
Am I missing a better approach...

If this is a one-off task, you should really just run this on Amazon: get an Extra Large (4 cores, 15 GB RAM) instance for a few hours and just run it there.
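
As a rough sanity check on "a few hours": 100,000 strings give 100,000 × 99,999 / 2 ≈ 5 × 10⁹ pairs. If each comparison costs on the order of 1 µs (plausible for short strings with a C-extension Levenshtein), that is about 5,000 seconds, well under two hours on a single core and less spread over four. At 100 µs per comparison, which is closer to what naive pure-Ruby string code can cost, the same job stretches toward a week, so the implementation of the metric matters more than the instance size.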


Your algo for string similarity is much more important than your hardware spec.

The key question on algos for string similarity is "when do you expect strings to be similar?" Do you consider substrings, spelling errors, phonetics, typing errors?
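
To make that concrete, here is a small sketch that runs a few notions of "similar" over the same pairs: edit distance and Soundex from the text gem (assuming it is installed), plus a hand-rolled trigram overlap. The example pairs are made up:

```ruby
require 'text'  # the "text" gem: Text::Levenshtein, Text::Soundex, etc.

# Jaccard overlap of character trigrams -- crude, but insensitive to ordering.
def trigram_jaccard(a, b)
  ta = a.chars.each_cons(3).to_a
  tb = b.chars.each_cons(3).to_a
  return 0.0 if ta.empty? || tb.empty?
  (ta & tb).size.to_f / (ta | tb).size
end

pairs = [
  %w[organize organise],   # spelling variant: small edit distance
  %w[smith smythe],        # phonetic match: same Soundex code, larger relative edit distance
  %w[foobarbaz bazfoobar], # shared substrings in a different order: trigram overlap catches it
]

pairs.each do |a, b|
  puts format('%-10s %-10s  levenshtein=%d  soundex=%s/%s  trigrams=%.2f',
              a, b,
              Text::Levenshtein.distance(a, b),
              Text::Soundex.soundex(a.upcase), Text::Soundex.soundex(b.upcase),
              trigram_jaccard(a, b))
end
```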

This SO link has a great discussion of algos. 100,000 records is really very little data (in my world), but for ease of implementation, once you have a good algo, you should try to get as much RAM as possible. Doing it in Ruby may not be the best choice from a performance perspective either.
