Recently I said I had created an incredibly stupid, dumb and boring GTK music player. I also offered to make the source code available in the very unlikely case someone else would find this thing interesting. I should have known to just do that right away, so here it is.
Yes, I've contributed to the masses of music players out there. It is pleasantly easy using Python with the PyGTK and PyGST modules.
But why yet another music player? Well, it is, intentionally, incredibly simple.
- No library. Just a simple list of files, played sequentially.
- No metadata. If you just want to play a few random files, why would you care about the metadata?
- Simple interface. It's very boring but fits in with GTK and GNOME applications quite nicely.
- No bells or whistles. Seriously, I want to listen to a few files, not have a visualisation, equaliser, unreadable skin, OMG-such-a-cool gadget, etc.
Basically, anything that the player doesn't do is done better by other applications (for me that is Ex Falso / Quod Libet).
Here is the obligatory screenshot, so you can see its boringness in all its glory. Really, there is nothing more to it than what you can see.
This doesn't mean there is no room for improvement, obviously; mostly usability. It could do with a slider, pause functionality, a column with file duration, a context menu, a delete-key binding, ... There is an endless list (writing GUI apps is so troublesome!). Functionality-wise, though, it will stay very limited.
On the off chance that someone is also interested in this thing, just tell me. I'll gladly make the source available under some free license; it's rather tiny right now, with one glade file and one Python file of 270 lines of code. I just don't expect anyone else will be interested in this ;-).
PS: Yes, that screenshot contains a typo, some things you just don't notice until you create a screenshot...
There are of course about a million other guides on how to use encrypted disks out there. However, I did run into some trouble when trying this, so here is mine. Specifically, I address the issue of getting an encrypted root disk with the root filesystem on Logical Volume Management (LVM), as most other guides only seem to describe how to set up a random disk or partition encrypted. I'm not going to duplicate the other guides too much, so read a reasonable one (like this one) first. The last special case is that I copy all the data across once the disk is ready; otherwise I could have just used the debian-installer, which does a great job.
This entire operation was done while booted from a grml live CD, with the backups of the data on a USB disk.
Firstly you need to partition your disk. Create two partitions: one small one for /boot, which will stay unencrypted, and the other as large as you fancy. The boot partition should be a normal Linux partition (0x83), while the other one I set to Linux LVM (0x8e), though I don't think that matters. The boot partition is simple: format it (e.g. mkfs -t ext3 /dev/hda1) and copy the data onto it. The other partition is going to be a LUKS volume, on which we will create an LVM Physical Volume (PV) with a Volume Group (VG) containing several Logical Volumes (LVs), say, / and /home. Let's do this:
~# cryptsetup luksFormat /dev/hda2
<asks for password>
~# cryptsetup luksOpen /dev/hda2 luksvolume
<asks for password>
The luksvolume part is the name of the volume for the device mapper, the disk will now appear in /dev/mapper/luksvolume. Great! Let's create our LVM setup on it:
~# pvcreate /dev/mapper/luksvolume
~# vgcreate mygroup /dev/mapper/luksvolume
~# lvcreate -L 10G -n root mygroup
~# lvcreate -L 10G -n home mygroup
The volumes are now available as /dev/mapper/mygroup-root and /dev/mapper/mygroup-home, or via the symlinks /dev/mygroup/root and /dev/mygroup/home. Again, create your favourite filesystems on them and copy the data across.
We're almost there, but not quite. The disk needs to be bootable, so mount the root partition somewhere, mount the boot partition inside it, and then install grub on it: grub-install --root-directory=/mnt/newroot. Now is the time to double check /mnt/newroot/boot/grub/menu.lst and make sure all is fine in there.
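For reference, a menu.lst entry for this layout might look like the sketch below; the kernel version and the (hd0,0) device are assumptions for illustration, the important part is that root= points at the device-mapper name of the root LV:

```
title  Debian GNU/Linux (encrypted root)
root   (hd0,0)
kernel /vmlinuz-2.6.18-4-686 root=/dev/mapper/mygroup-root ro
initrd /initrd.img-2.6.18-4-686
```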
Now make sure the encrypted disk will work when booting. For the following it is easiest to chroot /mnt/newroot, as the command doesn't deal with alternative roots yet. So, in the chroot, write /etc/crypttab:
# <target name> <source device> <key file> <options>
luksvolume /dev/hda2 none luks
Hopefully one day that will be enough; in this setup, however, it was completely irrelevant (this file is currently only used for non-root encrypted disks). So you need to create another file, /etc/initramfs-tools/conf.d/cryptroot:
target=luksvolume,source=/dev/hda2,lvm=mygroup-root
Now recreate the initrd using update-initramfs -u and you should be all set. Get out of the chroot and boot the disk.
This should work on both Debian and Ubuntu. However, when you're using Ubuntu you may get some funny results when it needs the password while usplash is running: it will quit usplash but not tell you it is waiting for a password. Check out this bug report for some possible solutions.
Q: How long does it take to write a 120 GB disk full with random data?
A: 15 hours, 2 minutes and 27 seconds.
Obviously this depends on the machine. For me it was going at just under 8 minutes per GB; others report around 5 minutes per GB. Also, this was using /dev/urandom as input for dd, which is obviously not really random. I don't even want to think about how long it would take using /dev/random.
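For the curious, a sketch of the measurement; this times a small sample written to a scratch file rather than the real disk (the actual fill was along the lines of dd if=/dev/urandom of=/dev/hda bs=1M, so the device name and sizes here are just for illustration):

```shell
# Time writing 64 MiB of pseudo-random data to a scratch file,
# then extrapolate: at roughly 8 min/GB, 120 GB takes about 15 hours.
time dd if=/dev/urandom of=/tmp/urandom-sample bs=1M count=64
rm /tmp/urandom-sample
```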
Happy birthday!
And thanks to everyone involved all those years.
If you read my last post about using Gazpacho you should read the comment by Johan Dahlin too. He's one of the authors of Gazpacho and explains the libglade and save format issues^Wthings in Gazpacho nicely.
Earlier I made some simple GUI applications using PyGTK and Glade, which was surprisingly easy. Now I have another small itch to scratch and am having another go at a GUI app. Only this time I decided that the coolness of Gazpacho looked slightly nicer to me, so I gave that a try.
Creating the UI is easy enough, and after some messing around I had something that would do. Gazpacho claims to create glade files compatible with libglade, so I just went about it as I did last time:
import gtk
import gtk.glade

class MainWindow:
    def __init__(self):
        self.wtree = gtk.glade.XML('ddmp.glade')
        self.wtree.signal_autoconnect(self)

    def on_mainwindow_destroy(self, *args):
        gtk.main_quit()

    def main(self):
        gtk.main()

if __name__ == '__main__':
    app = MainWindow()
    app.main()
However this didn't quite work: I got libglade warnings about an unexpected element <ui> and an unknown attribute constructor. Furthermore, gtk.glade gave tracebacks about assertions of GTK_IS_WIDGET. After a quick search on the great internet that didn't turn up anything (I was wondering if my libglade was too old or so), I had a look at the examples supplied (stupid me, why didn't I look there first?) and sure enough, they don't use gtk.glade. So the above code changes into:
import gtk
from gazpacho.loader.loader import ObjectBuilder

class MainWindow:
    def __init__(self):
        self.wtree = ObjectBuilder('ddmp.glade')
        self.wtree.signal_autoconnect(self)

    def on_mainwindow_destroy(self, *args):
        gtk.main_quit()

    def main(self):
        mainwindow = self.wtree.get_widget('mainwindow')
        mainwindow.show()
        gtk.main()

if __name__ == '__main__':
    app = MainWindow()
    app.main()
So Gazpacho needs a different loader for the XML. The returned object appears to behave like the gtk.glade.XML widget tree, which is nice (since Gazpacho documentation seems to be non-existent). I suppose libglade doesn't cope with the gtk.UIManager code created by Gazpacho yet (the FAQ seems to suggest there are patches pending) and that their custom loader translates it into something libglade understands. This does make me wonder whether you can use Gazpacho with any language other than Python; the examples only contained Python code. Surely they'll want to support any language that has libglade?
Lastly, it seems to hide windows by default, which I actually quite like. I remember that in Glade you had to explicitly hide dialog windows or they would show up at startup; this seems slightly more logical.
Overall I quite like Gazpacho so far; I'm glad I chose it and would recommend it. It still has some rough edges but is very nice already.
Obviously no real optimisation, just the optimisation that python -O [-O] would create, but in "normal" byte-compiled files, i.e. in .pyc files instead of .pyo files.
Why would I want to do that? Well, we really don't feel like shipping code with assert statements or if __debug__: ... bits in it. They are only useful during development and should not appear in shipped code. And while we're at it, stripping the docstrings can't hurt either. Still no reason not to just use .pyo files, though; but there is if the code also needs to run as a Windows service. The Python for Windows Extensions provide an excellent framework for making your Python code behave like a service; unfortunately it does not seem to support optimised code. I only found an old (but interesting) email thread discussing this, but other than that no one seems to talk about these issues. So I started thinking: if we can just modify our code during the build to strip out all the things we don't want in it, we effectively have .pyo code inside a .pyc. Try it for yourself:
$ echo pass > test.py
$ python -m py_compile test.py
$ python -OO -m py_compile test.py
$ cmp test.pyc test.pyo && echo equal || echo unequal
equal
It appears to me that modifying the code would be as sane as trying to get some of the things suggested in that thread to work, and in my eyes it seems cleaner for the moment. Python 2.5 comes with a parser module that allows you to parse Python source code into Abstract Syntax Trees (ASTs). Once you have the AST objects you can convert them to lists and tuples and then convert them back to an AST (here you get the opportunity to change the list form of the AST). Lastly, it provides functions to compile these ASTs into a code object just like the builtin compile() function does. From there it is not far to creating a .pyc file; the py_compile module shows us that, with the help of the imp and marshal modules, this is only a few lines of code.
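A minimal sketch of that idea in today's Python, where the ast module plays the role of the parser module described above; the stripping rules here (drop assert statements, drop function docstrings) are my assumptions about what -O and -OO remove, and the .pyc header layout is the current 16-byte one rather than Python 2.5's shorter magic-plus-mtime header:

```python
import ast
import importlib.util
import marshal

SOURCE = '''
def double(x):
    """Docstring that -OO would strip."""
    assert x > 0, "x must be positive"
    return x * 2
'''

class Strip(ast.NodeTransformer):
    def visit_Assert(self, node):
        # Drop assert statements, like python -O does.
        return None

    def visit_FunctionDef(self, node):
        self.generic_visit(node)
        # Drop a leading docstring, like python -OO does.
        body = node.body
        if (body and isinstance(body[0], ast.Expr)
                and isinstance(body[0].value, ast.Constant)
                and isinstance(body[0].value.value, str)):
            body.pop(0)
        if not body:
            body.append(ast.Pass())
        return node

tree = Strip().visit(ast.parse(SOURCE))
ast.fix_missing_locations(tree)
code = compile(tree, '<stripped>', 'exec')

# A .pyc is just a small header followed by the marshalled code object.
pyc = importlib.util.MAGIC_NUMBER + bytes(12) + marshal.dumps(code)

ns = {}
exec(code, ns)
print(ns['double'](-3))        # -6: the assert is gone
print(ns['double'].__doc__)    # None: the docstring is gone
```

The result behaves exactly like code byte-compiled under -OO, but nothing stops you from writing the pyc bytes to a .pyc file instead of a .pyo.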
Writing all of this down and looking at the py_compile code made me realise that the same might actually be achieved by simply renaming the .pyo files to .pyc files! Or there could be something in the marshal module that behaves differently when running optimised. I'll have to try that out.
All of this, however, raises the point mentioned in that python-dev thread linked above: why bother encoding how optimised the compiled code is in the file extension? Guido's idea of storing which optimisations have been applied in the .pyc is not bad (although I personally don't like the automatic writing of .pyc files, but that's another discussion). I'm not sure the extra bytecode that Brett and Philip propose later is that great, though; personally I'd get rid of -O completely and just run the .pyc if it's the same age as the .py, and if it's not, not bother re-creating it. Modules are mostly compiled on installation anyway (except for developers).
So, am I going insane? Or is there really no need for the behaviour with -O and .pyo? If the optimisations can be performed by some AST transformations anyway, then I think the (mostly annoying) -O behaviour is obsolete.
(And not writing .pyc files by default is another story; in the meantime the .pyc files can be re-created using the same optimisations the old one was created with, or with the "current settings" of the Python interpreter (which would mean none, as I want to drop -O). I don't think that's too important right now.)
Update: Obviously the magic number in .pyc files differs from the one in .pyo files, so just renaming the files won't work. I should have known...
So I have a bit of organically growing information that is nice to present on internal web pages in a semi-organised fashion. The first iteration just runs a few Python scripts from some cron jobs that use SimpleTAL to create statically served pages. SimpleTAL as the templating language simply because we also use roundup as our issue tracker, and keeping the templating language the same seems like a sensible thing to do for a multitude of reasons. But as the number of pages grows, and more and more features wanting CGI creep up (e.g. triggering a software build instantaneously instead of waiting for the next nightly cron job), this seems like the right point to move to a proper framework that will make maintenance, organisation and code reuse a lot easier.
Only, how to choose? It seems to me that the current Python web framework world changes drastically every six months. And really, that's just plain annoying. I don't want to worry about an upgrade of the codebase in a few months' time, whether because there's a newer version of the framework or because it's simply no longer the hip way to do stuff in the flashy web 2.0 world and hence is left to die.
Django has been in the 0.9x releases for as long as I can remember; every time I looked at some of the docs they seemed to say "Don't use the latest release, as the svn repo is a lot cooler." Not very inspiring. It also seems a pretty steep jump from the very lightweight infrastructure currently in use; it is a big (but seemingly beautiful, I admit) beast.
The turbogears approach of taking best-of-breed applications has always attracted me. Alas, that does not seem to go hand in hand with the requirement to remain stable, as the 2.0 announcements seem to prove. Although they do seem to promise 1.x continuity and an easy 2.0 upgrade path.
I have formed no thoughts yet about pylons, other than that it can't be that bad since turbogears 2 is going to use it. But I can say that paste has a nice paragraph right on its front page:
There's really no advantage to putting new development or major rewrites in Paste, as opposed to putting them in new packages. Because of this it is planned that major new development will happen outside of Paste. This makes Paste a very stable and conservative piece of infrastructure for building on. This is the intention.
Maybe I should check out their do-it-yourself-framework...
Finally, did I mention zope just has way too much overhead? (I considered using a short script to compare all the dependencies of these packages in Debian, but it's too late at night...)
So, who in this web2.0-happy world is a stable building block for your web requirements? It seems a scary world.