
How do you performance test a Rails app as part of the continuous integration build cycle?

We currently use CruiseControl (ruby version) for CI, and it runs our unit and integration tests (primarily rspec).

That's great: it gives us instant feedback on any functional issues or regressions (I use instant in the approximate sense;-).

What it doesn't tell us is whether our commits have introduced a performance regression.

I'd like our build to go RED if the tests measure a performance degradation of, say, 5%, and of course point us to the issue, be it a poorly responding database query, time spent in a Ruby method, or a slow controller response.
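As a minimal sketch of the kind of check I have in mind (the `critical_path` method, the baseline number, and the 5% threshold are all illustrative assumptions, not anything we have running):

```ruby
require 'benchmark'

# Hypothetical: baseline seconds recorded from a known-good build.
BASELINE  = 0.50
THRESHOLD = 1.05 # fail the build on a > 5% regression

def critical_path
  # Stand-in for the code path under test (controller action, query, etc.)
  100_000.times { |i| i * i }
end

elapsed = Benchmark.realtime { critical_path }

if elapsed > BASELINE * THRESHOLD
  puts format('FAIL: %.3fs exceeds baseline %.2fs by more than 5%%', elapsed, BASELINE)
  exit 1 # non-zero exit turns the CI build RED
else
  puts format('OK: %.3fs is within 5%% of baseline', elapsed)
end
```

Because the script exits non-zero on a regression, any CI server that watches exit codes (CruiseControl.rb included) can turn the build RED without extra plumbing.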

Continuous Performance Testing is a topic that has had a little discussion over the past few years, but aside from a few vendor offerings (mainly aimed at the Java and .NET world), I don't see much on the Rails side. I think we are like most: performance, load and volume testing is a separate activity, usually done before a major update, but otherwise frequently forgotten during the routine iterations and releases. And we only keep ourselves out of major trouble because of NewRelic's awesomeness at monitoring our live instances, and a dash of luck.

CI is essential for an agile development practice, and the lack of continuous performance testing during the build seems to be one of the few remaining big gaps in our tooling.

I would love some answers that point to tools that can help, or even experience in how you may have hacked this yourself. NB: we are not wed to CruiseControl, and not averse to including other products in the build cycle, even commercial ones, if they can do the job.


Assuming you've already seen this: http://guides.rubyonrails.org/performance_testing.html
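For reference, the simplest starting point from that guide (Rails 3.x vintage, which matches the CruiseControl.rb era) is a benchmark test under `test/performance/`, run with `rake test:benchmark`; the timings it records per run are exactly the numbers you'd track build-to-build. A minimal example in the guide's style, assuming a standard Rails app with a root route:

```ruby
require 'test_helper'
require 'rails/performance_test_help'

class HomepageTest < ActionDispatch::PerformanceTest
  # Options such as run count and metrics can be tuned via profile_options;
  # see the linked guide for the full list.

  test "homepage" do
    get '/'
  end
end
```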

On a different note, we run a nightly Grinder-based performance test through our Jenkins-based CI framework. Note that it's not run against every build; it runs automatically each night against the latest build (so maybe every 4 builds or so). It takes about 4 hours to run through tests at various concurrent-user levels, though we could shorten that significantly by running at a single user level. Because it's run through Jenkins, we could return an error (make it go RED) if the metrics were bad, but we're not doing that just yet.

If you're really just looking for a performance degradation, running a single test (that measures what you care about) on the latest code and tracking those historical response times from build to build should do it for you.
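One way to hack that yourself: have the build append each measurement to a small history file and compare the new number against the average of the last few builds. A sketch, where the file name, window size, 5% threshold, and `measured_request` stand-in are all assumptions:

```ruby
require 'benchmark'
require 'json'

HISTORY_FILE = 'perf_history.json' # hypothetical location, one entry per build
WINDOW       = 5                   # compare against the last 5 builds
THRESHOLD    = 1.05                # > 5% slower than the recent average = regression

def measured_request
  # Stand-in for the single request or query you care about.
  50_000.times { |i| Math.sqrt(i) }
end

elapsed = Benchmark.realtime { measured_request }

history = File.exist?(HISTORY_FILE) ? JSON.parse(File.read(HISTORY_FILE)) : []
recent  = history.last(WINDOW)

if recent.any?
  average = recent.sum / recent.size
  if elapsed > average * THRESHOLD
    warn format('regression: %.3fs vs recent average of %.3fs', elapsed, average)
    exit 1 # turn the build RED
  end
end

# Only record the new measurement once it has passed the check.
File.write(HISTORY_FILE, JSON.generate(history << elapsed))
```

Comparing against a rolling average rather than the immediately previous build smooths out machine noise, which matters if your CI agents aren't dedicated hardware.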
