
Simulation and synthetic video generation for evaluation of computer vision algorithms

I am looking for an easy way to generate synthetic videos to test computer vision software.

Currently I am only aware of one tool that targets this need: ObjectVideo Virtual Video (OVVV). It is a Half-Life 2 mod that lets you simulate cameras in a virtual world.

But I am looking for a more open (as in open source) and perhaps portable solution. One way would be to implement the needed functionality on top of one of the dozens of open-source 3D engines. Still, it would be great if somebody knows a library or tool that already implements something like OVVV does.

Also, if you do not know of a ready-to-use solution: how would you tackle the problem?

PS: The reason I ask here is that I want to minimize the effort I spend on this issue. It's not that I have no idea how to do it, but my own solutions would require me to invest too much time. So I am looking for concrete tips here ... :-)


If I were in your situation, I'd probably use POV-Ray, since you can write code in any language to produce .pov files to feed it. This is great where precise geometry, lighting, textures, and exact, complex motions are important. POV-Ray can be run entirely from the command line, or invoked programmatically with a system() call or equivalent.
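As a minimal sketch of that workflow in Python (the povray binary name, render options, and scene values here are illustrative assumptions; adjust for your install):

    import subprocess

    # Per-frame POV-Ray scene: a red sphere sliding along x over a
    # checkered ground plane. Double braces are literal braces in
    # str.format.
    SCENE = """
    camera {{ location <0, 2, -6> look_at <0, 0, 0> }}
    light_source {{ <10, 10, -10>, rgb <1, 1, 1> }}
    plane {{ y, -1 pigment {{ checker color rgb 1, color rgb 0.2 }} }}
    sphere {{ <{x:.3f}, 0, 0>, 1 pigment {{ color rgb <1, 0, 0> }} }}
    """

    for frame in range(100):
        x = -4.0 + 0.08 * frame  # ground-truth trajectory you control exactly
        with open("frame.pov", "w") as f:
            f.write(SCENE.format(x=x))
        subprocess.run(
            ["povray", "+Iframe.pov", "+Oframe_%04d.png" % frame,
             "+W640", "+H480", "+FN", "-D"],  # PNG output, no preview window
            check=True)

A nice side effect: because you generate the scene, you know the exact ground truth (object positions, camera pose) for every frame, which is exactly what you need when scoring a vision algorithm. The PNG frames can then be assembled into a video with ffmpeg or similar.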

Although POV-Ray isn't open source in the usual sense, it is free and you can get the source for it.


What about using one of the open source game engines? If I recall correctly, the Quake engine source has been released under the GPL, and it may be sufficient for your needs.

Most of the engines provide scripting features (often Lua) intended for AI and object behaviors, which could easily provide the programmability you need.

Edit: Tricks for applying noise/distortion and other post-processing effects programmatically to video

A short script written in AviSynth will provide blur, distortion, contrast/frame-rate changes, noise addition, and a host of other possible effects. These effects are provided on the fly on a frame-by-frame basis, so you don't need to "render" the output to a huge video file for testing. Video programs will treat the script files like a normal video, albeit with more CPU needs during playback. So, you can feed your computer vision package a bunch of AviSynth scripts for testing, which may all feed from the same video source, but apply different levels of noise, blur, etc. Could save a LOT of time and disk space in testing!
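For instance, you could emit a whole family of .avs variants from Python. This is only a sketch: "source.avi" and the parameter values are made up, Blur, Tweak, and ChangeFPS are built-in AviSynth filters, and film-grain noise typically requires a plugin such as AddGrainC.

    # Emit AviSynth scripts that apply different degradation levels to
    # the same source clip; each .avs opens like a normal video, so
    # nothing has to be rendered to disk beforehand.
    TEMPLATE = """AviSource("source.avi")
    Blur({blur})            # softening, valid up to 1.58
    Tweak(cont={contrast})  # contrast multiplier, 1.0 = unchanged
    ChangeFPS({fps})        # drop/duplicate frames to hit the target rate
    """

    variants = [
        (0.0, 1.0, 25),  # pristine baseline
        (0.5, 0.8, 25),  # mild blur, slightly washed out
        (1.0, 0.6, 12),  # heavy blur, low contrast, roughly half frame rate
    ]
    for i, (blur, contrast, fps) in enumerate(variants):
        with open("variant_%d.avs" % i, "w") as f:
            f.write(TEMPLATE.format(blur=blur, contrast=contrast, fps=fps))

You then point your vision software at variant_0.avs, variant_1.avs, and so on, and compare its output against the results on the clean source.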

The AviSynth site seems to be down at the moment, but since it is open source and widely used, you can find the packages for download elsewhere.


I've seen Ogre used for this exact purpose.
