devork

E pur si muove

py.test test generators and cached setup

Sunday, June 13, 2010

Recently I've been enjoying py.test's test function arguments. They take a little getting used to, but soon you find they are quite likely a better way than the xUnit style of setup/teardown. One slightly more advanced usage, however, was combining cached setup with test generators. While not difficult, that took me some figuring out, so let me document it here.

Since I haven't been a fan of generative tests before, I'll explain why I think I can make use of them now. I was writing a wrapper around pysnmp to handle SNMP-GET requests transparently across the different protocol versions. For this I wrote a number of test functions which perform some GET requests and check the results; the basic outline of such a test is:

def test_some_get(wrapper_v1):
    oids = ...
    result = wrapper_v1.get(oids)
    assert ...

Here wrapper_v1 is a funcarg which returns an instance of my wrapper class configured for SNMPv1. The extra catch is that this funcarg uses a function which tries to find an available SNMP agent: it checks whether one is running on the local host (for the developer) or whether a well-known test host is reachable (for lazy developers on our dev network and for buildbots), skipping the test otherwise. But to avoid the relatively long timeouts involved for each individual test, the result of this function needs to be cached. Here's the outline of this funcarg:

def pytest_funcarg__wrapper_v1(request):
    cfg = request.cached_setup(setup=check_snmp_v1_avail, scope='session')
    if not cfg:
        py.test.skip('No SNMPv1 agent available')
    return SnmpWrapper(cfg)
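The agent lookup inside check_snmp_v1_avail can be sketched roughly as follows. The helper name, the candidate hostnames, and the probe callable are all my own illustration, not the real code; in practice the probe would perform an actual SNMP GET (e.g. via pysnmp) with a short timeout:

```python
def find_snmp_agent(probe, candidates=('localhost', 'snmp-testhost.example.net')):
    """Return the first candidate host where ``probe`` succeeds, or None.

    ``probe`` is a callable taking a hostname and returning True when an
    SNMP agent answers there; both hostnames above are placeholders.
    """
    for host in candidates:
        if probe(host):
            return host
    return None
```

The funcarg then skips the test when this returns None; caching the result means the probe timeouts are paid only once per session instead of once per test.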

Once all the tests use this wrapper_v1 funcarg, I obviously want exactly the same tests for SNMPv2, since that's the whole point of the wrapper. For this I'd need a wrapper_v2 funcarg configured for SNMPv2, but that would mean duplicating all the tests! Enter test generators.

The trick to combining test generators with cached setup is not to use the funcargs argument to metafunc.addcall(), but rather the param argument in combination with a normal funcarg. The normal funcarg can then use request.cached_setup() and inspect request.param to decide how to configure the wrapper object it returns. This is what that looks like:

def pytest_generate_tests(metafunc):
    if 'snmpwrapper' in metafunc.funcargnames:
        metafunc.addcall(id='SNMPv1', param='v1')
        metafunc.addcall(id='SNMPv2', param='v2')

def pytest_funcarg__snmpwrapper(request):
    cfg = request.cached_setup(setup=lambda: check_snmp_avail(request.param),
                               scope='session', extrakey=request.param)
    if not cfg:
        py.test.skip('No SNMP%s agent available' % request.param)
    return SnmpWrapper(cfg)

Don't forget the extrakey argument to cached_setup(). The cache key combines the name of the requested object ("snmpwrapper" in this case) with the extrakey value to decide when a cached result can be reused. If you forget extrakey, both calls will return the same cfg.
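To see why extrakey matters, here is a tiny stand-in for the caching behaviour, assuming (purely as an illustration, this is not py.test's actual implementation) that results are cached under the pair of funcarg name and extrakey:

```python
_cache = {}

def cached_setup(name, setup, extrakey=None):
    """Illustrative stand-in for request.cached_setup(): run setup()
    once per (name, extrakey) pair and reuse the result afterwards."""
    key = (name, extrakey)
    if key not in _cache:
        _cache[key] = setup()
    return _cache[key]

# Without extrakey both versions share one cache slot: the second call
# reuses the 'cfg-v1' result instead of running its setup at all.
first = cached_setup('snmpwrapper', lambda: 'cfg-v1')
second = cached_setup('snmpwrapper', lambda: 'cfg-v2')
assert first == second == 'cfg-v1'

# With extrakey each version gets its own slot.
v1 = cached_setup('snmpwrapper', lambda: 'cfg-v1', extrakey='v1')
v2 = cached_setup('snmpwrapper', lambda: 'cfg-v2', extrakey='v2')
assert (v1, v2) == ('cfg-v1', 'cfg-v2')
```

This is exactly the failure mode described in the update below: without extrakey the SNMPv2 tests silently ran against the SNMPv1 configuration.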

And that's all that's needed! Tests now simply ask for the snmpwrapper funcarg and get run twice, once configured for SNMPv1 and once for SNMPv2. Running the tests now looks like this:

flub@signy:...$ py.test -v snmp_test.py
============================= test session starts ==============================
python: platform linux2 -- Python 2.6.5 -- pytest-1.1.1 -- /usr/bin/python
test object 1: /home/flub/.../snmp_test.py

snmp_test.py:57: test_get_one.test_get_one[SNMPv1] PASS
snmp_test.py:57: test_get_one.test_get_one[SNMPv2] PASS
snmp_test.py:63: test_get_two.test_get_two[SNMPv1] PASS
snmp_test.py:63: test_get_two.test_get_two[SNMPv2] PASS
snmp_test.py:71: test_get_two_bad.test_get_two_bad[SNMPv1] PASS
snmp_test.py:71: test_get_two_bad.test_get_two_bad[SNMPv2] PASS
snmp_test.py:79: test_get_many.test_get_many[SNMPv1] PASS
snmp_test.py:79: test_get_many.test_get_many[SNMPv2] PASS

=========================== 8 passed in 1.31 seconds ===========================

This wasn't very complicated, but having an example of using the param argument to metafunc.addcall() would have made figuring it out a little easier. So I hope this helps someone else, or at least me at some point in the future.

Update: Originally I forgot the extrakey argument to cached_setup(), and thus the funcarg was returning the same object in both cases. Somehow I had assumed the caching was keyed on the identity of the setup function. Oops.

3 comments:

holger krekel said...

Hi Florian. nice post - i'd like to link it from the documentation. To be honest, i hadn't thought about using the setup/teardown function combination as a key. I wonder if this would practically make "extrakey" obsolete - in my examples i think it would. With the next major (1.4) release of py.test i'd like to streamline some issues with parametrized testing so such suggestions are very welcome, thanks.

One typo: it is "pytest_funcarg__", not "pytest_funcargs__" in the code.

best,
holger

Floris Bruynooghe said...

Feel free to link to it from the docs or even incorporate it; I'm glad you find it informative.

Having (setup, teardown) as the key means they can't take arguments, as you have to wrap a function that needs arguments in a lambda. Or at least not as easily; it's not that hard to work around. Having a syntax like py.test.raises for the function and its arguments wouldn't feel alien either, and would allow the arguments to be part of the key too. I'm pretty much 50/50 on "extrakey" vs (setup, teardown); they both have their up- and downsides (even if I intuitively expected the setup/teardown way).

And thanks for pointing out the typo, I've corrected it.

