
Generating py.test tests in Python

Question first, then an explanation if you're interested.

In the context of py.test, how do I generate a large set of test functions from a small set of test-function templates?

Something like:

models = [model1, model2, model3]
data_sets = [data1, data2, data3]

def generate_test_learn_parameter_function(model, data):
    # the inner test closes over model and data, so it takes no arguments
    def this_test():
        param = model.learn_parameters(data)
        assert abs(param - model.param) < 0.1
    return this_test

for model, data in zip(models, data_sets):
    # how can py.test see the result of this call?
    generate_test_learn_parameter_function(model, data)

Explanation:

The code I'm writing takes a model structure, some data, and learns the parameters of the model. So my unit testing consists of a bunch of model structures and pre-generated data sets, and then a set of about 5 machine learning tasks to complete on each structure+data.

So if I hand code this I need one test per model per task. Every time I come up with a new model I need to then copy and paste the 5 tasks, changing which pickled structure+data I'm pointing at. This feels like bad practice to me. Ideally what I'd like is 5 template functions that define each of my 5 tasks and then to just spit out test functions for a list of structures that I specify.

Googling about brings me to either a) factories or b) closures, both of which addle my brain and suggest to me that there must be an easier way, as this problem must be faced regularly by proper programmers. So is there?


EDIT: So here's how to solve this problem!

def pytest_generate_tests(metafunc):
    if "model" in metafunc.funcargnames:
        models = [model1, model2, model3]
        for model in models:
            metafunc.addcall(funcargs=dict(model=model))

def test_awesome(model):
    assert model == "awesome"

This will apply the test_awesome test to each model in my list of models! Thanks @dfichter!

(NOTE: that assert always passes, btw)
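
For the original model-and-data pairing, the same hook works with one generated test per pair. Here's a minimal runnable sketch, where FakeModel and the toy data are illustrative stand-ins for the real pickled structures; it uses the newer metafunc.parametrize API, which replaced addcall in later py.test versions:

class FakeModel:
    """Hypothetical stand-in for a real pickled model structure."""
    def __init__(self, param):
        self.param = param

    def learn_parameters(self, data):
        # toy "learner": estimate the parameter as the mean of the data
        return sum(data) / len(data)

models = [FakeModel(1.0), FakeModel(2.0)]
data_sets = [[0.95, 1.05], [1.98, 2.02]]

def pytest_generate_tests(metafunc):
    # pair each model with its data set; one generated test per pair
    if {"model", "data"} <= set(metafunc.fixturenames):
        metafunc.parametrize("model,data", list(zip(models, data_sets)))

def test_learn_parameters(model, data):
    param = model.learn_parameters(data)
    assert abs(param - model.param) < 0.1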


Good instincts. py.test supports exactly what you're talking about with its pytest_generate_tests() hook. It's explained in the py.test documentation.


You could also do this with parametrized fixtures. While hooks are an API for building py.test plugins, a parametrized fixture is a generalized way to make a fixture that yields multiple values and generates an additional test case for each of them.

Plugins are meant for project-wide (or package-wide) features, not test-case-specific ones; parametrized fixtures are exactly what's needed to parametrize a resource for particular test cases.

So your solution could be rewritten like this:

import pytest

@pytest.fixture(params=[model1, model2, model3])
def model(request):
    return request.param

def test_awesome(model):
    assert model == "awesome"
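
Parametrized fixtures also compose: a test that requests two of them runs once for every combination of their values. A small illustrative sketch (the string values are placeholders):

import pytest

@pytest.fixture(params=["model1", "model2", "model3"])
def model(request):
    return request.param

@pytest.fixture(params=["data1", "data2"])
def data(request):
    return request.param

# py.test generates the cross product: 3 models x 2 data sets = 6 test runs
def test_combination(model, data):
    assert model.startswith("model") and data.startswith("data")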


The simplest solution today is to use pytest.mark.parametrize:

import pytest

@pytest.mark.parametrize('model', ["awesome1", "awesome2", "awesome3"])
def test_awesome(model):
    assert model.startswith("awesome")
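
parametrize also accepts multiple argument names with tuples of values, so the model/data pairing from the original question can be written directly. A sketch with placeholder pairs:

import pytest

# each tuple supplies one (model, data) test case, mirroring zip(models, data_sets)
@pytest.mark.parametrize("model,data", [
    ("model1", "data1"),
    ("model2", "data2"),
    ("model3", "data3"),
])
def test_pairing(model, data):
    # each model is checked against its own data set
    assert model[-1] == data[-1]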
