E pur si muove

Iterative learning

Monday, July 11, 2005
Understanding Code

It's amazing how little you (well, at least I!) understand the first time you read some (complex) code. Only after a few passes do you start to get the big picture and see how it all fits together. It's mostly a matter of understanding the underlying principles, I guess. Anyway, the upshot of that is that by now I mostly understand how hotshot and profile do their work! And nothing stops me anymore from finding(!) and checking out that little detail.

hprofile

That's the name I gave to the wrapper module for profile. I've written the basic outline of this file. That means I've got the class skeleton (full of raise NotImplementedError's) and the little functions that provide an alternative entry point to this class. No unittests yet though. I also wrote the part that lets the module run as a script as well as a module. This led me to another question: how do you write unittests for that? Any suggestions?
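One way to make the script entry point testable (a minimal sketch of my own, not the actual hprofile code; the main() function and its argument handling are assumptions for illustration) is to put the script body in a function that takes argv as a parameter, so unittests can call it directly instead of spawning a subprocess:

```python
# Sketch of the "script as well as module" pattern with a testable entry
# point. main() and its return codes are illustrative, not hprofile's API.
import sys


def main(argv=None):
    """Entry point; taking argv as a parameter keeps it unit-testable."""
    if argv is None:
        argv = sys.argv[1:]
    if not argv:
        return 2  # no script to profile was given
    # ... set up the profiler and run argv[0] here ...
    return 0


if __name__ == '__main__':
    sys.exit(main())
```

A test can then simply call main(['somescript.py']) and assert on the return code, without the `if __name__` block ever getting in the way.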

When writing the skeleton of the class I had to decide which methods to wrap. I decided (and please let me know if I'm wrong!) that wrapping all of them is not useful (and actually impossible). Basically I think they didn't use enough leading underscores when choosing the method names. There's no point in having all the internals exposed to users, is there? Only a hack like hotshot.stats.load() uses them...

So the methods I decided to keep, after looking at what's documented and what's used in other software, are these:

  • __init__(self, timer=None, bias=None)
  • print_stats(self)
  • dump_stats(self, file)
  • run(self, cmd)
  • runctx(self, cmd, globals, locals)
  • runcall(self, func, *args, **kw)
  • calibrate(self, m, verbose=0)

Additionally I'll be keeping the bias attribute. This means I'm dropping quite a lot, but it looks like the most sensible way. Any feedback on this is welcome!
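Putting the list above together, the skeleton might look roughly like this (a sketch under my own naming assumptions, not the actual hprofile source; every kept method starts life as a stub):

```python
# Skeleton of a profile-compatible wrapper class: the methods listed
# above, each raising NotImplementedError until it gets a real body.
class Profile:
    """Profile-compatible wrapper around hotshot (skeleton sketch)."""

    def __init__(self, timer=None, bias=None):
        self.bias = bias  # keeping the bias attribute, as discussed

    def print_stats(self):
        raise NotImplementedError

    def dump_stats(self, file):
        raise NotImplementedError

    def run(self, cmd):
        raise NotImplementedError

    def runctx(self, cmd, globals, locals):
        raise NotImplementedError

    def runcall(self, func, *args, **kw):
        raise NotImplementedError

    def calibrate(self, m, verbose=0):
        raise NotImplementedError
```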


Calibration is not the same in hotshot as in profile. In hotshot it tries to find the smallest possible time step; this is then stored in the file but entirely unused after that. In profile it tries to find the delay of executing code under the profiler, to compensate for the invisible time spent between the calling of the profiler callback and the taking of the time. So I'll have to mimic the latter. I guess I'll do so in the wrapper and then just save it to the file with hotshot's .add_info() method. It will then be the duty of the stats module to compensate for this.
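For the profile-style part, the stdlib already does the measurement: profile.Profile.calibrate() returns the mean overhead per profiler event. A sketch of what the wrapper could do (the 'profile-bias' key name is my own invention, and the commented add_info call only mirrors how the blog describes hotshot's API):

```python
# Mimic profile's calibration: measure the mean per-event overhead so the
# stats module can later subtract it. m=1000 keeps the measurement quick.
import profile

bias = profile.Profile().calibrate(1000)  # seconds of overhead per event

# With a hotshot profiler the wrapper could then record it in the log,
# roughly like:  prof.add_info('profile-bias', str(bias))
print(bias)
```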

Changing development round?

While devising concrete plans for how to write these wrappers I'm thinking more and more of starting with the statistics module. That module would only have to depend on hotshot. Trying to implement hprofile first will just never work, as it will be missing crucial bits. I think it's rather inconvenient to work with something when you don't yet know what it will look like. Hence my urge to swap them round.



Brett said...

For unit tests, I would try to test everything in as much isolation as possible. And as for testing the profiler running, just turn on profiling for a single function call, and then turn it off. Not much else you can do.

As for the API, only wrap the public functions. Anything people use that is not in the documented API in the official docs is not officially supported.
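Brett's single-call suggestion can be sketched like this (using the stdlib profile module for illustration; work() is a stand-in function of my own, not from the project):

```python
# Profile exactly one function call, then check it was recorded:
# a sketch of the isolated-test approach described in the comment.
import io
import profile
import pstats


def work():
    return sum(range(100))


prof = profile.Profile()
prof.runcall(work)  # profiling is on only for this single call

stream = io.StringIO()
pstats.Stats(prof, stream=stream).print_stats()
assert 'work' in stream.getvalue()  # the call shows up in the stats
```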
