Financial applications on GPGPU

I want to know what sort of financial applications can be implemented using a GPGPU. I'm aware of option pricing / stock price estimation using Monte Carlo simulation on a GPGPU with CUDA. Can someone enumerate the various possibilities of utilizing a GPGPU for any application in the finance domain?


There are many financial applications that can be run on the GPU in various fields, including pricing and risk. There are some links on NVIDIA's Computational Finance page.

It's true that Monte Carlo is the most obvious starting point for many people. Monte Carlo is a very broad class of applications, many of which are amenable to the GPU. Many lattice-based problems can also be run on the GPU. Explicit finite difference methods run well and are simple to implement; there are many examples on NVIDIA's site as well as in the SDK, and since the same techniques are used heavily in oil & gas codes there is plenty of material around. Implicit finite difference methods can also work well depending on the exact nature of the problem; Mike Giles has a 3D ADI solver on his site, which also has other useful finance material.
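
To make the Monte Carlo case concrete, here is a minimal CUDA sketch (my own illustration, not code from the answer above): it prices a European call under geometric Brownian motion with one thread per path, using cuRAND for the normal draws. The parameter values (spot 100, strike 100, 5% rate, 20% volatility, one year) are purely illustrative, and the single-float atomicAdd accumulation is a shortcut that a real implementation would replace with a proper reduction.

    // Minimal Monte Carlo sketch: European call under Black-Scholes dynamics.
    // Build with: nvcc -o mc mc.cu
    #include <cstdio>
    #include <cmath>
    #include <cuda_runtime.h>
    #include <curand_kernel.h>

    __global__ void mcEuropeanCall(float S0, float K, float r, float sigma, float T,
                                   int nPaths, unsigned long long seed, float *sumPayoff)
    {
        int tid = blockIdx.x * blockDim.x + threadIdx.x;
        if (tid >= nPaths) return;

        // One RNG stream per path; the subsequence index keeps streams independent.
        curandState state;
        curand_init(seed, tid, 0, &state);

        // Terminal stock price under geometric Brownian motion.
        float z  = curand_normal(&state);
        float ST = S0 * expf((r - 0.5f * sigma * sigma) * T + sigma * sqrtf(T) * z);

        // Accumulate the call payoff; atomicAdd is fine for a sketch,
        // a real implementation would use a block-level reduction instead.
        atomicAdd(sumPayoff, fmaxf(ST - K, 0.0f));
    }

    int main()
    {
        const int nPaths = 1 << 20;
        float *dSum = nullptr, hSum = 0.0f;
        cudaMalloc(&dSum, sizeof(float));
        cudaMemset(dSum, 0, sizeof(float));

        int block = 256;
        int grid  = (nPaths + block - 1) / block;
        mcEuropeanCall<<<grid, block>>>(100.0f, 100.0f, 0.05f, 0.2f, 1.0f,
                                        nPaths, 1234ULL, dSum);
        cudaMemcpy(&hSum, dSum, sizeof(float), cudaMemcpyDeviceToHost);
        cudaFree(dSum);

        // The discounted average payoff is the Monte Carlo price estimate.
        printf("European call estimate: %f\n", expf(-0.05f * 1.0f) * hSum / nPaths);
        return 0;
    }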

GPUs are also good for linear algebra type problems, especially where you can leave the data on the GPU long enough to do a reasonable amount of work on it. NVIDIA provides cuBLAS with the CUDA Toolkit, and you can get cuLAPACK too.
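
As a rough sketch of the "leave the data on the GPU" point, the snippet below does a single cuBLAS SGEMM (C = alpha*A*B + beta*C) on device-resident matrices; the matrix size and fill values are placeholders of my own choosing, and in a real workload you would keep the matrices on the device across many such calls rather than copying back after one.

    // Single SGEMM on device-resident matrices.
    // Build with: nvcc -o gemm gemm.cu -lcublas
    #include <cstdio>
    #include <vector>
    #include <cuda_runtime.h>
    #include <cublas_v2.h>

    int main()
    {
        const int n = 1024;   // square matrices for simplicity
        std::vector<float> hA(n * n, 1.0f), hB(n * n, 2.0f), hC(n * n, 0.0f);

        float *dA, *dB, *dC;
        cudaMalloc(&dA, n * n * sizeof(float));
        cudaMalloc(&dB, n * n * sizeof(float));
        cudaMalloc(&dC, n * n * sizeof(float));
        cudaMemcpy(dA, hA.data(), n * n * sizeof(float), cudaMemcpyHostToDevice);
        cudaMemcpy(dB, hB.data(), n * n * sizeof(float), cudaMemcpyHostToDevice);

        cublasHandle_t handle;
        cublasCreate(&handle);

        // cuBLAS assumes column-major storage; with square, untransposed
        // matrices that detail does not change the call itself.
        const float alpha = 1.0f, beta = 0.0f;
        cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                    n, n, n, &alpha, dA, n, dB, n, &beta, dC, n);

        cudaMemcpy(hC.data(), dC, n * n * sizeof(float), cudaMemcpyDeviceToHost);
        printf("C[0] = %f (expected %f)\n", hC[0], 2.0f * n);

        cublasDestroy(handle);
        cudaFree(dA); cudaFree(dB); cudaFree(dC);
        return 0;
    }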


Basically, anything that requires a lot of parallel mathematics to run. As you originally stated, Monte Carlo simulation of options that cannot be priced with closed-form solutions is an excellent candidate. Anything that involves large matrices and operations upon them will be ideal; after all, 3D graphics use a lot of matrix mathematics.

Given that many trader desktops have 'workstation'-class GPUs in order to drive several monitors, possibly with video feeds and limited 3D graphics (volatility surfaces, etc.), it would make sense to run some of the pricing analytics on the GPU rather than pushing the responsibility onto a compute grid; in my experience the compute grids are frequently struggling under the weight of everyone in the bank trying to use them, and some of the grid computing products leave a lot to be desired.

Outside of this particular problem, there's not a great deal more that can be easily achieved with GPUs, because the instruction set and pipelines are more limited in their functional scope compared to a regular CISC CPU.

The problem with adoption has been one of standardisation; NVIDIA had CUDA, ATI had Stream. Most banks have enough vendor lock-in to deal with without hooking their derivative analytics (which many regard as extremely sensitive IP) into a graphics card vendor's acceleration technology. I suppose with the availability of OpenCL as an open standard this may change.


F# is used a lot in finance, so you might check out these links:

http://blogs.msdn.com/satnam_singh/archive/2009/12/15/gpgpu-and-x64-multicore-programming-with-accelerator-from-f.aspx

http://tomasp.net/blog/accelerator-intro.aspx


High-end GPUs are starting to offer ECC memory (a serious consideration for financial and, eh, military applications) and high-precision types.

But it really is all about Monte Carlo at the moment.

You can go to workshops on it, and their descriptions make clear that the focus will be on Monte Carlo.


A good start would probably be to check NVIDIA's website:

  • CUDA's Finance Showcases
  • CUDA's Finance Tutorials


Using a GPU introduces limitations on the architecture, deployment and maintenance of your app. Think twice before you invest effort in such a solution. For example, if you're running in a virtual environment, it would require every physical machine to have GPU hardware installed, plus special vGPU hardware and software support and licenses. What if you decide to host your service in the cloud (e.g. Azure, Amazon)? In many cases it is worth designing your architecture in advance to scale out and stay flexible (with some overhead, of course) rather than scaling up and squeezing as much as you can from your hardware.


Answering the complement of your question: anything that involves accounting can't be done on a GPGPU (or in binary floating point, for that matter).
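
To make the floating-point objection concrete, here is a tiny host-side C++ illustration of my own (it compiles unchanged as CUDA host code): repeatedly adding 0.10 in binary floating point drifts away from the exact total, because 0.10 has no exact binary representation, whereas integer cents stay exact.

    #include <cstdio>

    int main()
    {
        double    total = 0.0;
        long long cents = 0;
        for (int i = 0; i < 1000; ++i) {
            total += 0.10;   // accumulates binary representation error
            cents += 10;     // exact integer arithmetic
        }
        printf("floating-point total: %.17f\n", total);               // not exactly 100
        printf("integer cents total: %lld (= %.2f)\n", cents, cents / 100.0);
        return 0;
    }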
