sab39

... indistinguishable from magic
effing the ineffable since 1977

"Wa Eebots Peas!"

5/19/2005
I finally got around to bundling up the last few months' worth of work on NRobot, checking it in and calling it a release. There are a bunch of new features, but the most important is that if you're running under .NET, robots are run in a sandbox - they are no longer given complete control over your system. This means that it's actually reasonable to download a robot written by someone you don't know or trust, and attempt to shoot the crap out of it - as long as you don't pass the "-insecure" option on the command line.

Another significant improvement in 0.20 is that it now ships with a sample robot written in Java. This isn't actually an improvement in NRobot at all, but an improvement in IKVM. As of IKVM 0.14 it is now possible to apply custom attributes to Java code, which was the only remaining feature required for use in NRobot. So at last I'm posting something on my blog that's actually vaguely on topic for Planet Classpath ;) All you Java programmers, go download it and start writing robots ;)

Unfortunately there are a couple of catches.

Currently Mono's security support is not complete enough for me to confidently guarantee your safety if you are running untrusted robot code in it. Work is ongoing on adding this support and I'm confident that it will eventually be safe, but if you're running NRobot under Mono today, I recommend using a user account with restricted permissions.

The other big catch is that IKVM can't currently generate code that will run inside a sandbox. Jeroen has done some work proving that this isn't impossible in theory, but it will require very careful auditing of the IKVM code to be certain that no security holes have been introduced. So for now, you must pass the -insecure option in order to run robots written in Java - in which case you again ought to run NRobot under a user account with restricted permissions.

Oh, the other big improvement in 0.20 is a feature that was actually requested - the ability to fine-tune the strengths and weaknesses of your robot. You can adjust 8 different properties (roughly categorized into 4 offensive and 4 defensive) to be either "better", "worse" or "neutral", but you must be sure that your choices balance out. That is to say, for every property you make better, you must make another one worse.
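
If you want to sanity-check a set of choices, the balance rule boils down to simple arithmetic: call "worse" -1, "neutral" 0 and "better" +1, and the eight values have to add up to zero. Here's a little C# sketch of that check (the names are mine, for illustration only - this isn't NRobot's actual API):

using System.Collections.Generic;

// Hypothetical sketch only - NRobot's real tuning API may differ.
// Illustrates the balance rule: every "better" must be matched by a "worse".
public enum Tuning { Worse = -1, Neutral = 0, Better = 1 }

public static class TuningCheck
{
    // The eight adjustable properties, expressed as a name -> Tuning map.
    public static bool IsBalanced(IDictionary<string, Tuning> choices)
    {
        if (choices.Count != 8)
            return false;

        int total = 0;
        foreach (Tuning t in choices.Values)
            total += (int) t;

        // Balanced means the betters and worses cancel out exactly.
        return total == 0;
    }
}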

Here's my own robot DLL. Unzip it into the Bots/Win or Bots/Mono directory (as appropriate) of your NRobot installation. Naturally, unless you trust me implicitly or have taken other precautions to safeguard your system, you shouldn't do this if you're running under Mono or with the -insecure flag.

So what the heck does "Wa Eebots Peas!" mean? Well, Alexa loves watching games of NRobot play themselves. If she sees me working on my laptop, she'll come over, stare at the screen, and give her best impression of "Want Robots Please!".

 

Performance Victory

5/10/2005
I can confidently declare victory in the battle with the performance of cmScribe's permissions code. A page hit that was taking "about a minute" every time (the logging that enabled me to determine exactly how long things were taking wasn't added until later) is now taking 14 seconds on the first hit, and is no different from other pages within the site on subsequent hits.

While working on this I re-learnt another obvious lesson about coding for performance, which can be summed up as: DON'T trust your instincts, MEASURE. This doesn't necessarily require complicated profiling tools (although I'm sure if you know how to use them they can be very useful). All I did was add code to log every permission lookup and every database hit to a file. But running a few "grep | wc" operations across the resulting log files gave me exceptionally useful information about which tables were being accessed excessively, and proved my gut instincts to be sorely lacking.

My initial feeling was that my best bet would be to try to avoid excessive hits to two tables, which we'll call TC and V. I spent a while working on TC and was disappointed to find that I'd only improved from 43 to 42 seconds. That was when I fired up grep and produced a little shell script which ran over my initial log, giving output like this:

P: 9115
TC: 735
V: 4408
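
If grep isn't your thing, the same tally is only a dozen lines of C#. The one-table-name-per-line log format here is just for illustration, but the idea is identical:

using System;
using System.Collections.Generic;
using System.IO;

// Sketch only: counts how many times each table shows up in a log file
// that records one "TABLE: details" line per database hit.
class LogTally
{
    static void Main(string[] args)
    {
        Dictionary<string, int> hits = new Dictionary<string, int>();

        foreach (string line in File.ReadAllLines(args[0]))
        {
            int colon = line.IndexOf(':');
            if (colon <= 0)
                continue;

            string table = line.Substring(0, colon);
            int count;
            hits.TryGetValue(table, out count);
            hits[table] = count + 1;
        }

        foreach (KeyValuePair<string, int> entry in hits)
            Console.WriteLine("{0}: {1}", entry.Key, entry.Value);
    }
}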

Turned out I was right about V being critical, entirely wrong that TC mattered at all (I'd reduced it to 234, which naturally made very little difference), and horrifyingly wrong to have entirely ignored P, which was the worst offender by an order of magnitude.

The really scary thing about these numbers is that V contains 33 records and never changes at all (except with new builds of the software), while P contains about 250 records and changes rarely (only on certain administrative actions).

Armed with this knowledge it was an absolute no-brainer to bring the entire contents of V and P into memory once and leave them there (with some code to re-fetch the contents of P when those administrative actions happen).
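
The cache itself is about as dumb as caching gets. As a rough sketch (names and shapes are placeholders, not cmScribe's actual code, and locking is left out for brevity):

using System.Collections.Generic;

// Rough sketch of the caching approach, not cmScribe's actual code.
// V never changes at runtime, so it is loaded once and kept forever.
// P changes only on certain administrative actions, so those actions
// call InvalidateP() and the next lookup reloads it.
public class TableCache
{
    private static IList<string> v;   // stand-in for the 33-row V table
    private static IList<string> p;   // stand-in for the ~250-row P table

    public static IList<string> V
    {
        get
        {
            if (v == null)
                v = LoadTable("V");
            return v;
        }
    }

    public static IList<string> P
    {
        get
        {
            if (p == null)
                p = LoadTable("P");
            return p;
        }
    }

    // Called from the administrative actions that modify P.
    public static void InvalidateP()
    {
        p = null;
    }

    private static IList<string> LoadTable(string name)
    {
        // In the real system this is a database query; here it's a stub.
        return new List<string>();
    }
}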

Performance improved by a factor of four, DB hits reduced by a factor of more than ten (from nearly 20,000 to under 1,700), and all without any need to fundamentally change the architecture of the system.

But I never would have got there if I'd only gone with my instincts about what could be improved. It was only by producing directly measurable information about what was really going on that I was able to spot the evil 9,000-hit table :)

 

Programming and Performance

5/8/2005
The approach I take to performance while coding is to keep it in the back of my mind at all times: not ignoring it, but resisting the temptation to focus on performance too much during the design and initial implementation of a feature, and planning to revisit the issue if performance problems become apparent later.

This philosophy has both strengths and weaknesses, and recent events have showcased both.

cmScribe uses a complex and flexible fine-grained permissioning mechanism where permissions can be granted to all kinds of actions on all kinds of objects. Having certain permissions can cause others to be granted implicitly, and the rules for this kind of implication can be any arbitrary C# code. Since the permissions are so fine-grained, any given page hit can require a large number of permissions to be evaluated. Furthermore, the implication rules mean that evaluating one permission may require a number of others to be evaluated as well.
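
To give a flavour of the shape of the thing - and this is purely an illustrative sketch, not the actual cmScribe interfaces - an implication rule is essentially a piece of code that gets to answer "does holding permission X imply permission Y on this object?", and evaluating a permission means walking those rules:

using System.Collections.Generic;

// Illustrative sketch only; the real cmScribe interfaces differ.
// An implication rule is arbitrary code that can grant one permission
// on the strength of another.
public interface IImplicationRule
{
    // Returns true if holding 'held' implies 'requested' for this user/object.
    bool Implies(string user, string held, string requested, object target);
}

public class PermissionEvaluator
{
    private readonly IList<IImplicationRule> rules;
    private readonly IDictionary<string, ICollection<string>> directGrants;

    public PermissionEvaluator(IList<IImplicationRule> rules,
                               IDictionary<string, ICollection<string>> directGrants)
    {
        this.rules = rules;
        this.directGrants = directGrants;
    }

    public bool HasPermission(string user, string requested, object target)
    {
        ICollection<string> held;
        if (!directGrants.TryGetValue(user, out held))
            held = new List<string>();

        if (held.Contains(requested))
            return true;

        // This is where the cost comes from: each implication rule may in turn
        // evaluate further permissions, so one page hit can fan out into many lookups.
        foreach (string h in held)
            foreach (IImplicationRule rule in rules)
                if (rule.Implies(user, h, requested, target))
                    return true;

        return false;
    }
}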

The system is so complex, in fact, that I struggled quite a lot during the initial design process to come up with a way of meeting all the requirements at all. (Is it over-engineered? I don't know. I do know that after using it for a year there's only one feature I'd have cut, and that's never been used and doesn't add any complexity.) The first and biggest advantage of keeping performance issues on the back burner is that if I'd had to juggle performance along with all the other constraints I was trying to meet, I don't know whether I'd have been able to produce a working system in the first place. In this case, deferring performance for later may have made the difference between impossible and possible.

Since then I've had to revisit this code for performance reasons on two or three separate occasions. You could look at this as a disadvantage of the approach I took: surely if performance had been designed in from the very beginning then I wouldn't have had to repeatedly fix performance problems later. But you can also look at it as a strength: the code worked adequately to start with, without spending the time on performance. Later, as more demanding scenarios came up, it was possible without too much trouble to get it performing adequately again, by a combination of caching frequently-used information in memory, tweaking the order of operations so the common cases take fewer steps, and micro-optimizing the individual steps to eliminate avoidable database hits and other expensive operations. I'm in the middle of an iteration of that process right now, and I'm entirely confident that I can have it performing adequately again shortly.

The weakness of the approach, however, is that an architecture designed without considering performance (since I was struggling so much with all the other issues, performance was probably further back in my mind even than usual) has turned out to have some performance bottlenecks that simply can't be removed without changing the architecture itself. There are situations where it's possible to know, based on fixed information, that there's no way a user could possibly have a particular permission, but that fixed information isn't available within the architecture, so the code will still chase down a number of dead ends before it arrives at the answer. And there's no way to make that information available with small caching tweaks and micro-optimizations. It needs a whole new structure.

For now, I can continue to tweak the heck out of the existing architecture and I'm confident it will perform adequately for quite some time. Which leads to the final advantage - even when you do reach the point where there's nothing to be done but throw out the whole thing and start over with performance at the very front of your mind, the experience gained from the first attempt will be invaluable in designing the system the right way. Every tweak I make to the existing code will be designed into the next version from day one.

Sounds a lot better than being stuck a year ago unable to write the thing at all because I couldn't get my head around how to make it fast, doesn't it? :)

 

Cancer Update

5/7/2005
The pathology results finally arrived on Thursday just in time for my appointment. Apparently I was a complicated case and they had a lot of difficulty getting the results, because I didn't have just one kind of tumor but a "mixed tumor" with several different kinds of cancer cells. This is apparently not uncommon and doesn't affect my chances of recovery, but it does change the next step.

Instead of radiation, they slice me open again - much more invasively, this time - and take out my lymph nodes.

Apparently, based on my CAT scan and blood test results, there's a 70% chance that they won't find anything, in which case that's all that needs to be done. The next most likely scenario is that they find a few nodes with cancer, but not many, in which case they still don't have to do anything else.

If they find a lot of cancer, though, then I probably need chemo.

I'm trying to see this as a good thing. Instead of definitely having radiation, and definitely having to cope with feeling like crap for a couple of months and still having to function normally, this way I get to recover from the surgery for a couple of weeks without any other demands on me, and then hopefully everything's over and done with.

The only cloud in this happy picture is the possibility that they'll find a lot of cancer and it won't be over and done with at all.

 

A traditional chant

5/1/2005
Ten points[1] to the first person to identify the source of this piece of traditional poetry...

Teegul teegul eeya ka,
Awi awa ahdi ah;
Ubba ubba ahdi aye,
Ahdi ahdi idi aye,
Teegul teegul eeya ka:
Awi awa ahdi ah!

[1] And points mean prizes![2]
[2] No actual prizes offered. Additional restrictions apply, call for important lease details.

 