E pur si muove: is faster than re.match()

Sunday, December 19, 2010

This is a counter-intuitive discovery, in IPython:

In [18]: expr = re.compile('foo')

In [19]: %timeit'foobar')
1000000 loops, best of 3: 453 ns per loop

In [20]: %timeit expr.match('foobar')
1000000 loops, best of 3: 638 ns per loop

So now I'm left wondering why .match() exists at all. Is matching only at the start of the string really such a common operation that it's worth an extra function/method?

Just to be complete: if anchoring at the start is actually what you want, there is no performance gap:

In [25]: expr = re.compile('^foo')

In [26]: %timeit'foobar')
1000000 loops, best of 3: 617 ns per loop

In [27]: %timeit expr.match('foobar')
1000000 loops, best of 3: 612 ns per loop

Storm and SQLite in-memory databases

Tuesday, November 02, 2010

When using an SQLite in-memory database in Storm, different stores created from the same database object do not refer to the same underlying SQLite database. E.g.

db = storm.locals.create_database('sqlite:')
store1 = storm.locals.Store(db)
store2 = storm.locals.Store(db)

Here store1 and store2 will not refer to the same database, even though that would be the natural interpretation. The reason is that SQLite in-memory databases are specific to their connection object, and the connection object is part of the store object, not the database object.
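The underlying behaviour is easy to demonstrate with the stdlib sqlite3 module directly: every connection to :memory: is its own private database, which is exactly why two stores, each owning their own connection, can't see each other's data.

```python
import sqlite3

# Two connections to ':memory:' are two completely separate databases
conn1 = sqlite3.connect(':memory:')
conn2 = sqlite3.connect(':memory:')

conn1.execute('CREATE TABLE t (x)')
conn1.execute('INSERT INTO t VALUES (1)')

try:
    conn2.execute('SELECT * FROM t')
    shared = True
except sqlite3.OperationalError:   # no such table: t
    shared = False
```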

The upshot is that I can't easily use in-memory databases in my unit tests, because the code under test assumes creating stores is cheap (not caring too much about caching them). Which all kind of sucks.

PS: A whole different rant is about libraries designed for "from foo import *", e.g. storm.locals, fabric.api. At least for the latter you can do "import fabric.api as fab"; "import storm.locals as storm" has its limitations...

Finding the linux thread ID from within python using ctypes

Thursday, September 02, 2010

So I've got a multi-threaded application and suddenly I notice one thread running away and using all CPU. Not good; probably a loop gone wrong. But where? One way to find this is to walk back through the VCS history until you find the bad commit. Another way is to find out which thread is doing this, which is of course much more fun!

Using ps -p PID -f -L you'll see the thread ID which is causing the problems. To relate this to a Python thread I subclass threading.Thread and override its .start() method to wrap the .run() method, so that the thread ID gets logged before the original .run() is called. Since I was already doing all of this apart from the logging of the thread ID, this was less work than it sounds. But the hard part is finding the thread ID.

Python knows of a threading.get_ident() function, but this is merely a unique long integer and does not correspond to the actual thread ID of the OS. The kernel does allow you to get the thread ID: gettid(2). But this must be invoked as a system call using the constant SYS_gettid. Because it's hard to get at C constants from ctypes (at least I don't know how to do this), and this is not portable anyway, I used this trivial C program to find out the constant's value:

#include <stdio.h>
#include <sys/syscall.h>

int main(void)
{
    printf("%d\n", SYS_gettid);
    return 0;
}
In my case the constant to use is 186. Now all that is left is using ctypes to do the system call:

import ctypes

SYS_gettid = 186
libc = ctypes.cdll.LoadLibrary('')
tid = libc.syscall(SYS_gettid)

That's it! Now you have the matching thread ID!

Going back to the original problem you can now associate this thread ID with the thread name and you should be able to find the problematic thread.

Return inside with statement (updated)

Saturday, August 14, 2010

Somehow my brain seems to think there's a reason not to return inside a with statement, so rather than doing this:

def foo():
    with ctx_manager:
        return bar()

I always do:

def foo():
    with ctx_manager:
         result = bar()
    return result

No idea why, nor where I think I heard or read this. Searching for it brings up absolutely no rationale. So if you know why this is so, or know that the first version is perfectly fine, please enlighten me!


Update: Seems it's only relevant if you're reading a file using the with statement. This seems to have come from the Python documentation itself:

The last version is not very good either — due to implementation details, the file would not be closed when an exception is raised until the handler finishes, and perhaps not at all in non-C implementations (e.g., Jython).

def get_status(file):
    with open(file) as fp:
        return fp.readline()

Sadly it doesn't say what the implementation details are nor how to do this correctly. I can have several more or less educated guesses at why and which way to do this better. But I'd love to get a more detailed description of what happens in the implementations when doing this, because the worst-case way of interpreting that example is that using open() or file() as a context manager is completely useless. Which I would hate.
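For what it's worth, in CPython the context manager's __exit__ does run as part of the return, which is easy to check (the file names here are just for the demo):

```python
import os
import tempfile

def get_status(path):
    with open(path) as fp:
        # also hand back fp so the caller can inspect it after the return
        return fp, fp.readline()

fd, path = tempfile.mkstemp()
os.write(fd, b'ok\n')
os.close(fd)

fp, line = get_status(path)
assert fp.closed  # CPython ran __exit__, closing the file, before we got here
os.unlink(path)
```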

Templating engine in python stdlib?

Monday, August 09, 2010

I am a great proponent of the Python standard library: I love having lots of tools in there and hate having to resort to third-party libs. This is why I was wondering if there could be a real templating engine in the stdlib some day. I'm not a great user of templates, but sometimes string.Template is just not good enough and real templating would make things a lot more readable.
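To illustrate where string.Template runs out of steam (a made-up example):

```python
from string import Template

t = Template('Hello $name, you have $count new messages')
print(t.substitute(name='Bob', count=3))

# This is where string.Template stops: no loops, no conditionals, no
# expressions, so anything like rendering a list of items is back to
# manual string building.
```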

Not being a great templating user I don't know how the world of templating looks. Is there a template engine out there which would be a candidate to move to the stdlib? Or do most people think this is a stupid idea? Or maybe this has been discussed before and I didn't find the discussion?

Using Debian source format 3.0 (quilt) and svn-buildpackage

Monday, July 26, 2010

Searching the svn-buildpackage manpage for the 3.0 (quilt) format, I thought it wasn't able to apply the patches in debian/patches at build time. Instead I was doing a horrible dance which looked something like "svn-buildpackage --svn-export; cd ../build-area/...; debuild". Turns out I was completely wrong.

svn-buildpackage doesn't need to know about the source format. Instead it simply invokes dpkg-buildpackage which will automatically notice that the patches are not applied and apply them before building. That simple!

Thanks to Niels Thykier for pointing this out to me on IRC.

Europython, threading and virtualenv

Friday, July 23, 2010

I use threads

I do not use virtualenv
and don't want to

Just needed to get that out of my system after europython, now mock me.

PS: I should probably have done this as a lightning talk, but that occurred to me too late.

py.test test generators and cached setup

Sunday, June 13, 2010

Recently I've been enjoying py.test's test function arguments (funcargs); it takes a little getting used to, but soon you find it's quite likely a better way than the xUnit style of setup/teardown. One slightly more advanced usage, however, was combining cached setup with test generators. While not difficult, that took me some figuring out, so let me document it here.

Since I haven't been a fan of generative tests before, I'll explain why I think I can make use of them now. I was writing a wrapper around pysnmp to handle SNMP-GET requests transparently across the different protocol versions. For this I wrote a number of test functions which do some GET requests and check the results; the basic outline of such a test is:

def test_some_get(wrapper_v1):
    oids = ...
    result = wrapper_v1.get(oids)
    assert ...

Here wrapper_v1 is a funcarg which returns an instance of my wrapper class configured for SNMPv1. The extra catch is that this funcarg uses a function which tries to find an available SNMP agent: checking if one is running on the local host (for the developer) or if a well-known test host is reachable (for lazy developers on our dev network and for buildbots), skipping the test otherwise. But to avoid the relatively long timeouts involved for each individual test, the result of this function needs to be cached. Here's the outline of this funcarg:

def pytest_funcarg__wrapper_v1(request):
    cfg = request.cached_setup(setup=check_snmp_v1_avail, scope='session')
    if not cfg:
        py.test.skip('No SNMPv1 agent available')
    return SnmpWrapper(cfg)

Once all the tests use this wrapper_v1 funcarg I obviously want exactly the same tests for SNMPv2, since that's the whole point of the wrapper. For this I'd need a wrapper_v2 funcarg configured for SNMPv2, but that would mean duplicating all the tests! Enter test generators.

The trick to combine test generators with cached setup is not to use the funcargs argument to metafunc.addcall() but rather use the param argument in combination with a normal funcarg. The normal funcarg can then use request.cached_setup() and use the request.param to decide how to configure the wrapper object returned. This is what that looks like:

def pytest_generate_tests(metafunc):
    if 'snmpwrapper' in metafunc.funcargnames:
        metafunc.addcall(id='SNMPv1', param='v1')
        metafunc.addcall(id='SNMPv2', param='v2')

def pytest_funcarg__snmpwrapper(request):
    cfg = request.cached_setup(setup=lambda: check_snmp_avail(request.param),
                               scope='session', extrakey=request.param)
    if not cfg:
        py.test.skip('No SNMP%s agent available' % request.param)
    return SnmpWrapper(cfg)

Don't forget the extrakey argument to cached_setup. The caching uses the name of the requested object, "snmpwrapper" in this case, and the extrakey value to decide when to re-use the caching. If you forget extrakey both calls will return the same cfg.

And that's all that's needed! Tests now simply ask for the snmpwrapper funcarg and get run twice, once configured for SNMPv1 and once for SNMPv2. Running the tests now looks like this:

flub@signy:...$ py.test -v
============================= test session starts ==============================
python: platform linux2 -- Python 2.6.5 -- pytest-1.1.1 -- /usr/bin/python
test object 1: /home/flub/.../

test_get_one.test_get_one[SNMPv1] PASS
test_get_one.test_get_one[SNMPv2] PASS
test_get_two.test_get_two[SNMPv1] PASS
test_get_two.test_get_two[SNMPv2] PASS
test_get_two_bad.test_get_two_bad[SNMPv1] PASS
test_get_two_bad.test_get_two_bad[SNMPv2] PASS
test_get_many.test_get_many[SNMPv1] PASS
test_get_many.test_get_many[SNMPv2] PASS

=========================== 8 passed in 1.31 seconds ===========================

This wasn't very complicated, but having an example of using the param argument to metafunc.addcall() would have made figuring this out a little easier. So I hope this helps someone else, or at least me at some time in the future.

Update: Originally I forgot the extrakey argument to cached_setup() and thus the funcarg was returning the same in both cases. Somehow I assumed the caching was done on function identity of the setup function. Oops.

Selectable queue

Saturday, May 29, 2010

Sometimes you'd want to use something like select() on Queues. If you do some searching it turns out this question has been answered for multiprocessing Queues, where you can simply use the real select() on the underlying socket (IIRC), but for a good old-fashioned Queue you're stuck.

Now it's easy to argue that the need for this isn't that high: when I wanted it a while ago it turned out to be surprisingly simple to re-structure the design a little so that I no longer desired a selectable queue. But it's still something that hung around the back of my mind, so I've kept thinking about it. My conclusion for now (which I haven't bothered implementing) is that simply cloning the normal queues but replacing the not_empty and not_full Conditions with Events gives you selectable queues. It changes the semantics slightly, but that doesn't seem harmful. This alone isn't enough, so the second change to the queue is that you should be allowed to pass in the events to use. Once you can share the not_full event between two queues you can simply wait on that event and you have your select.
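A sketch of that idea for the reading side, using Python 3 names (the class and function names are invented): the "item available" signal is a single Event shared between the queues, and waiters re-check all queues after waking so a racing put is never lost.

```python
import queue
import threading

class SelectableQueue(queue.Queue):
    """Queue that sets a shared Event whenever an item is put."""

    def __init__(self, event, maxsize=0):
        queue.Queue.__init__(self, maxsize)
        self.event = event

    def _put(self, item):
        queue.Queue._put(self, item)
        self.event.set()

def select_get(event, queues, timeout=None):
    """Wait until any of the queues has an item; return (queue, item)."""
    while True:
        event.clear()
        # Check after clearing: a put racing with clear() re-sets the event
        for q in queues:
            try:
                return q, q.get_nowait()
            except queue.Empty:
                pass
        if not event.wait(timeout):
            raise queue.Empty

ev = threading.Event()
q1, q2 = SelectableQueue(ev), SelectableQueue(ev)
q2.put('hello')
source, item = select_get(ev, [q1, q2])
```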

Next time I want this I might actually implement this idea rather than re-designing so that I don't want selectable queues anymore.

weakref and circular references: should I really care?

Wednesday, May 12, 2010

While Python has a garbage collector, pretty much whenever circular references are touched upon it is advised to use weak references or otherwise break the cycle. But should we really care? I'd like not to; it seems like something the platform (the Python VM) should just provide for me. Are all those mentions really just for the cases where you can't (or don't want to, e.g. embedded) use the normal garbage collector?
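For the common case the collector does already handle cycles, which is exactly what prompts the question; a quick check:

```python
import gc

class Node(object):
    def __init__(self):
        self.other = None

a, b = Node(), Node()
a.other, b.other = b, a   # a reference cycle refcounting alone can't free
del a, b
unreachable = gc.collect()  # the cycle detector finds and frees them
assert unreachable >= 2
```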

Update: By some strange brain failure I seem to have written "imports" rather than "references" in the title originally. Those are obviously a bad thing.

Setting the process name from Python

Saturday, May 08, 2010

There is sometimes a need to set the process name from within Python; this would allow you to use something like "pkill myapp" rather than the process name of your application being just yet another "python". Bugs (which I'm now failing to find in Python's tracker) have been filed about this and many wannabe implementations made, but all of them seemed to have problems. Aiming for UNIX cross-compatibility they all messed up, ending up trying to make the abstraction work everywhere, and usually the code was less than beautiful.

But I've just discovered the python-prctl module by Dennis Kaarsemaker, which does something far more sensible: rather than trying to be cross-platform it just wraps the Linux prctl system call (as well as libcap). The code looks well written and the API, while a little bit overloaded on the get/set names, seems nice. It even includes what is probably the most sensible implementation of clobbering argv that I've ever seen (but don't use that, no normal person should ever clobber argv!).

If someone writes a nice module like this to cater for the MacOS X guys (the only other system I know of that has a system call to set the process name), then I may never have to worry about getting something like this into PSI at some distant point in the future. (And speaking of PSI, yes, the Windows port is still slowly under way. I'm just busy with lots of other things at the same time.)

Storm and sqlite locking

Saturday, May 08, 2010

The Storm ORM struggles with sqlite3's transaction behaviour, as they explain in their source code. Looking at the implementation of .raw_execute(), the side effect of their solution is that they start an explicit transaction for every statement that gets executed. Including SELECT.

This, in turn, sucks big time. If you look at sqlite's locking behaviour you will find that it should be possible to read from the database using concurrent connections (i.e. from multiple processes and/or threads at the same time). However, since Storm explicitly starts a transaction for a SELECT, the connection doing the read now holds a SHARED lock until you end the transaction (by doing a commit or rollback). And while it's holding this shared lock, for no good reason, no other connection can acquire the EXCLUSIVE lock.
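The effect is reproducible with the stdlib sqlite3 module alone (a sketch, not Storm's code: an on-disk database in the default rollback-journal mode; the short timeout just keeps the demo fast):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), 'demo.db')
reader = sqlite3.connect(path, timeout=0.2, isolation_level=None)
reader.execute('CREATE TABLE t (x)')

# Like Storm, start an explicit transaction and then only SELECT:
# the reader now holds a SHARED lock until COMMIT/ROLLBACK
reader.execute('BEGIN')
reader.execute('SELECT * FROM t').fetchall()

writer = sqlite3.connect(path, timeout=0.2, isolation_level=None)
writer.execute('BEGIN')
writer.execute('INSERT INTO t VALUES (1)')  # RESERVED lock: still fine
try:
    writer.execute('COMMIT')      # needs EXCLUSIVE, blocked by the SHARED lock
    blocked = False
except sqlite3.OperationalError:  # 'database is locked'
    blocked = True
    writer.execute('ROLLBACK')

reader.execute('COMMIT')          # drop the SHARED lock; writes work again
```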

The upshot of this seems to be that you need to call .commit() even after just reading the database, thus ensuring you let go of the shared lock. Can't say I like that.

Python optimisation has surprising side effects

Tuesday, April 27, 2010

Here's something that surprised me:

a = None
def f():
    b = a
    return b
def g():
    b = a
    a = 'foo'
    return b

While f() is perfectly fine, g() raises an UnboundLocalError. This is because Python optimises access to local variables using the LOAD_FAST/STORE_FAST opcodes; you can easily see why by looking at the code objects of those functions:

>>> f.__code__.co_names
('a',)
>>> f.__code__.co_varnames
('b',)
>>> g.__code__.co_names
()
>>> g.__code__.co_varnames
('a', 'b')
I actually found out about this difference thanks to finally watching the Optimizations And Micro-Optimizations In CPython talk by Larry Hastings from PyCon 2010. I never realised you could create a situation where the surrounding (global) scope would not be searched at all.

Judging performance of python code

Saturday, April 03, 2010

Recently I've been messing around with Python bytecode; as a result I now know approximately how the Python virtual machine works at the bytecode level. I've found this quite interesting, but other than satisfying my own curiosity had no benefit from this knowledge. Until today, when I was wondering which code would be more efficient:

a += 1
if b is not None:
    a += 2

versus:

if b is None:
    a += 1
else:
    a += 3

From a style point of view I'd prefer to write the first: I find it slightly more readable since it's got fewer indented blocks, and it uses one line less on my screen. But my gut feeling tells me the second is faster, particularly when b is not None (because in the first sample we'd do two add operations instead of one when b is not None). But now I can verify my gut feeling! All I need to do is compile both fragments and use the disassembler to investigate:

>>> co1 = compile(sample1, '', 'exec')
>>> dis.dis(co1)
  2           0 LOAD_NAME                0 (a) 
              3 LOAD_CONST               0 (1) 
              6 INPLACE_ADD          
              7 STORE_NAME               0 (a) 

  3          10 LOAD_NAME                1 (b) 
             13 LOAD_CONST               2 (None) 
             16 COMPARE_OP               9 (is not) 
             19 POP_JUMP_IF_FALSE       35 

  4          22 LOAD_NAME                0 (a) 
             25 LOAD_CONST               1 (2) 
             28 INPLACE_ADD          
             29 STORE_NAME               0 (a) 
             32 JUMP_FORWARD             0 (to 35) 
        >>   35 LOAD_CONST               2 (None) 
             38 RETURN_VALUE
>>> co2 = compile(sample2, '', 'exec')
>>> dis.dis(co2)
  2           0 LOAD_NAME                0 (b) 
              3 LOAD_CONST               2 (None) 
              6 COMPARE_OP               8 (is) 
              9 POP_JUMP_IF_FALSE       25 

  3          12 LOAD_NAME                2 (a) 
             15 LOAD_CONST               0 (1) 
             18 INPLACE_ADD          
             19 STORE_NAME               2 (a) 
             22 JUMP_FORWARD            10 (to 35) 

  5     >>   25 LOAD_NAME                2 (a) 
             28 LOAD_CONST               1 (3) 
             31 INPLACE_ADD          
             32 STORE_NAME               2 (a) 
        >>   35 LOAD_CONST               2 (None) 
             38 RETURN_VALUE

My analysis of this is pretty simple: count the number of instructions for when b is None and for when b is not None.

           b is None    b is not None
sample 1   10           14
sample 2   11           10

(Counts taken from the disassembly above; sample 1's JUMP_FORWARD 0, a jump to the very next instruction, is not counted.)

So ultimately the best performance depends on whether b will be None or not. However the difference in the best case is only one instruction, while the difference in the worst case is a whole mighty four instructions! This would seem to confirm my gut feeling: the ugly code is better. It also makes me wonder whether this is not the sort of optimisation a compiler should be doing: create the bytecode of sample 2 regardless of the source code (I'm not a compiler guy and do realise it might not be as simple as that, since Python is a dynamic language which allows you to change stuff, including executed code, at runtime, yada yada).

There's one more catch though: I seriously doubt each Python instruction executes in the same amount of time! So let's use timeit to actually verify this. I'm omitting the trivial code, but this is the result:

$ python3 
sample1, b is None: 0.128307104111
sample2, b is None: 0.128338098526
sample1, b is not None: 0.244062900543
sample2, b is not None: 0.12109208107
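The omitted driver is little more than this (a sketch; the loop count is arbitrary):

```python
import timeit

SAMPLE1 = """\
a += 1
if b is not None:
    a += 2
"""
SAMPLE2 = """\
if b is None:
    a += 1
else:
    a += 3
"""

# Time each sample with b bound to None and to something else
for name, sample in [('sample1', SAMPLE1), ('sample2', SAMPLE2)]:
    for desc, setup in [('is None', 'a = 0; b = None'),
                        ('is not None', 'a = 0; b = 1')]:
        t = timeit.timeit(sample, setup=setup, number=1000000)
        print('%s, b %s: %s' % (name, desc, t))
```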

To be honest, the result is exactly as speculated: the first sample is slower when b is not None; all other cases are pretty much the same. The one odd thing is the near doubling of time in the bad case, which suggests the Python virtual machine spends most of its time doing the INPLACE_ADD instructions while all the others are probably very quick.

Anyway, in conclusion: I guess you can speculate about performance and get an idea by knowing what bytecode will be generated. But at the end of the day you'll get a simpler and better answer by just using timeit. So knowing something about Python bytecode still hasn't gained me any benefit. It was an interesting exercise though.

Hacking mock: Mock.assert_api(...)

Thursday, April 01, 2010

Mock is a great module to use in testing; I use it pretty much all the time. But one thing I have never felt great about is the syntax of its call_args (and call_args_list): it is a 2-tuple of the positional arguments and the keyword arguments, e.g. (('arg1', 'arg2'), {'kw1': None, 'kw2': None}). This does show you exactly how the mock object was called, but the problem I have is that it's more restrictive than the signature in Python:

def func(foo, bar, baz=None):
    pass

func(0, 1, 2)
func(0, 1, baz=2)
func(0, bar=1, baz=2)
func(foo=0, bar=1, baz=2)

In this example all the calls to func() are exactly the same from Python's point of view. But they will all be different in mock's call_args attribute. To me this means my test will be too tightly coupled to the exact implementation of the code under test. This made me wonder how it could be done better.

Firstly I started out writing a function which would take the call_args tuples and know the function signature. This obviously isn't very nice, as you need a new function for each signature. But this led me on to adding the .assert_api() method to the mock object itself. I've been using this method for a while and am still not disappointed with it, so thought I should write about it. Here's how to use it:

mobj = mock.Mock(api='foo,bar,baz')
mobj.assert_api(foo=0, bar=1, baz=2)

It seems to me this is a fairly good compromise to an extra method on the mock object (and attribute, not shown) and nice concise way of asserting if a function was called correctly.

There are a number of side effects, mainly due to my implementation. The major one is that it consumes .call_args_list! This means each time you call .assert_api() the first item disappears from .call_args_list. Again, I like this as it allows me easily to check multiple calls by just doing multiple .assert_api() calls.

Another side effect, of less importance, is a new attribute "api" on the mock object, I can imagine people disliking that one. But it's never been in my way and means you can just assign to it instead of using the keyword argument when creating the mock. I find this handy in combination with patch decorators where the syntax to fine-tune the mock is rather heavy to my liking.

The last strange thing is .assert_api_lazy(), which is just a horribly bad name. It ignores any arguments which are present in the call on the mock object, but where not passed in as part of the api='...' parameter. It's effectively saying you only want to check a few of the arguments and don't care about the others.

Finally here is the code, for simplicity here implemented (and unchecked) as a subclass of Mock:

class MyMock(Mock):
    def __init__(self, api=None, **kwargs):
        Mock.__init__(self, **kwargs)
        self.api = api

    def _assert_api_base(self, **kwargs):
        """Return call_kwargs dict for assert_api and assert_api_lazy

        WARNING, this consumes self.call_args_list
        """
        if self.call_args_list:
            call_args, call_kwargs = self.call_args_list.pop(0)
        else:
            raise AssertionError('No call_args left over')
        call_args = list(call_args)
        if self.api is None:
            raise AssertionError('self.api is not initialised')
        # Map positional arguments onto their parameter names
        for p in self.api.split(','):
            if call_args:
                call_kwargs[p] = call_args.pop(0)
        return call_kwargs

    def assert_api(self, **kwargs):
        """WARNING, this consumes self.call_args_list"""
        call_kwargs = self._assert_api_base(**kwargs)
        assert kwargs == call_kwargs, \
            'Expected: %s\nCalled with: %s' % (kwargs, call_kwargs)

    def assert_api_lazy(self, **kwargs):
        """WARNING, this consumes self.call_args_list"""
        call_kwargs = self._assert_api_base(**kwargs)
        for k in call_kwargs.copy().iterkeys():
            if k not in kwargs:
                del call_kwargs[k]
        assert kwargs == call_kwargs, \
            'Expected: %s\nCalled with (truncated): %s' % (kwargs, call_kwargs)

I'd be interested in feedback! It's definitely not perfect, but I do like the syntax of m = Mock(api='...'); ...; m.assert_api(...). One problem I can think of is that it won't deal gracefully with default arguments in the api yet, e.g.:

def func(foo, bar=42):
    pass

func(1)
func(1, 42)

These two calls are identical, but .assert_api() won't see them as the same. This hasn't bothered me yet, which is why I haven't looked into it. But I guess it should be considered for a general-purpose implementation of this idea.

Using lib2to3 in

Friday, March 26, 2010

It seems that many people think you need distribute if you want to run 2to3 automatically from your But personally I don't like setuptools (a.k.a. distribute) and hence don't like forcing it on users. No worries, since plain old distutils supports this as well; it simply appears to be less well known.

All you need to do is use the build_py_2to3 command supplied by distutils.command.build_py in python3 instead of the normal build_py command. This is how you can do this:

import sys
from distutils.core import setup

COMMANDS = {}
    from distutils.command.build_py import build_py_2to3
except ImportError:
    build_py_2to3 = None  # Python 2: not available, but not needed either

if sys.version_info[0] == 3:
    COMMANDS['build_py'] = build_py_2to3

setup(..., cmdclass=COMMANDS, ...)

That's it! Ok, you have to do slightly more than just add a keyword argument to the setup() call. But the 2to3 feature alone is not worth using distribute for.

Discovering basestring

Wednesday, February 10, 2010

This may seem simple and trivial, but today I discovered the basestring type in Python. It is essentially a base type for str and unicode, which comes in handy when writing isinstance(foo, basestring) in test code for example.

Strangely, despite PEP 237 mentioning an equivalent for int and long called "integer", I can't actually find what has become of that, so I'm still writing isinstance(foo, (int, long)) for now.

Note that this is Python 2.x only, in 3.x it's all unicode so there is no longer a need for a common base type. int and long are unified as well in Python 3.x obviously.

Registering callbacks: to quack like a duck or tell which bit will quack the duck way?

Friday, February 05, 2010

When you have someplace where you register a set of callbacks you have a few options:

  1. Use an interface-style API. I.e. you have a Mixin style class defining the methods expected to be present, probably just raising NotImplementedError. You then just pass in the entire object to the .register() function.
  2. Just try if it quacks like a duck. IMHO this is just the pythonic equivalent of the above, simply without the explicit mixin class.
  3. Explicitly pass in the callback functions. This would be something like: .register(func1, ...). Secret benefit here is using the result of functools.partial() and things like it.

What I don't like about the first two options is that you force the method name of the callback onto the user. The third option allows you to achieve the same, at the cost of a more complicated register function. But I tend to prefer it because it's more explicit, and it avoids "hiding" things in the object (via the mixin), thus keeping the interface between caller and callee cleaner.
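A tiny sketch of option 3 (all names invented), including the functools.partial benefit mentioned above:

```python
import functools

class Downloader(object):
    """Option 3: register explicit callables, not a whole object."""

    def __init__(self):
        self._on_data = []

    def register(self, on_data):
        self._on_data.append(on_data)

    def _dispatch(self, chunk):
        for cb in self._on_data:
            cb(chunk)

chunks = []
dl = Downloader()
dl.register(chunks.append)                       # any callable fits
dl.register(functools.partial(print, '[data]'))  # partials too
dl._dispatch(b'hello')
```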

Are there other design considerations when making this choice? Maybe there's already an essay somewhere exploring these ideas in more detail than I'm doing here?

Pointer arithmetic in C

Monday, January 25, 2010

Pointer arithmetic in C is great. The only downside is that it's only defined when pointing to an array (or an item just after the array). That is one giant downside.

Allocate a chunk of memory and you can't use pointer arithmetic. So if you are building a linked list this means you will have to allocate each item in the list separately, even if you know the total length the list will ever have.

Even K&R must have realised this was an annoying limitation. They point this out when they show how to write malloc (K&R2 8.7 Example -- A Storage Allocator)

There is still one assumption, however, that pointers to different blocks returned by sbrk can be meaningfully compared. This is not guaranteed by the standard, which permits pointer comparisons only within an array. Thus this version of malloc is portable only among machines for which general pointer comparison is meaningful.

If only the standard had relaxed this restriction slightly (e.g. to contiguously allocated regions), life would be so much better.

The master plan

Monday, January 18, 2010

Catherine Devlin describes it very nicely:

I finally understand Al-Qaeda's master plan, and it's freaking brilliant. [...] I'm just surprised that we're choosing to participate in the plan. I thought we were on opposite sides?

Packages, modules and distributions

Sunday, January 10, 2010

Why was it ever considered desirable to call a directory containing an file a "package" rather than just a "module"? They are after all simply "modules containing other modules". It's not like it solved some sort of problem, was it? But, as sad as that sometimes might be, we can't change the past.

Anyway, people are continuously confused when talking about Python and all the module distributions made available on the cheeseshop (PyPI) by various projects. And now yet another discussion rages over at distutils-sig, where they want to re-work this confusion by using unambiguous terms. The only possible outcome I can think of is more confusion.

What is so hard about accepting the current status quo? The thing that would help most is documenting something like the following in distutils documentation:

The word "package" is often used to refer to a module distribution; usually it is perfectly clear from the context whether it is talking about a module distribution or a module containing other modules.

Installing Debian on a Sun Fire X2200 M2

Saturday, January 09, 2010

Most important thing first: the eLOM console lives on ttyS1 (9600n8). If that gets you going you do not need to read any more.

This works pretty easily, but there isn't much information about how it works if you don't like using a keyboard and a screen (having these things appears to be the norm for x86 hardware). But the X2200 comes with this "Embedded Lights Out Manager" which, although not as good as the LOM on a T1000, does the job nicely. It allows you to connect via the serial console, IPMI and ssh. So initially I used the serial console to set up the IP configuration of the eLOM, after which I could ssh to it (and, more importantly, sit in a quieter room far away from the rack).

Once inside the eLOM you can connect to the "serial port" of the machine (with the rather obscure "start /SP/AgentInfo/console" command; disconnect by hitting Esc + Shift-9). This allows you to see the BIOS settings etc. Now is the time to insert the Debian CD (or netboot or whatever); if using the CD you'll just get a blank screen since it's displaying pretty graphics, but hitting escape gives you the boot prompt as explained in the Debian installation manual. The serial console you're seeing will appear as the second serial port to the OS (this is configurable in the BIOS I think, but I didn't play with those settings). So the correct way to start the installation is:

expert fb=false console=ttyS1,9600n8

(or "install" instead of "expert" if you prefer that)

This will start the normal install process you're used to (and which is described in other guides), with the output appearing in your terminal emulator. One tip though: enable the installation via ssh when the installer lets you select extra components to load. The serial console is 9600 baud and painting the dialog windows over that is just painfully slow; it's a lot nicer over ssh.

The last thing to note is that debian-installer is smart enough to enable a getty on the serial console you used for the installation (in fact it doesn't enable any gettys on the normal ttys). But it doesn't append console=ttyS1,9600n8 to the grub menu, so you'll want to do that yourself if you like seeing kernel output on the console.
