ACSAC 2008

So, I went to ACSAC the other week, and sketched out some crazy notes:

It seems that nobody is really working on large-scale distributed systems reliability. By this I mean things like the Internet and the Power Grid. Both of these are highly distributed systems, where nodes automatically route around local failures. Unfortunately, and probably due to locality of knowledge, cascade failure is still an unsolved issue. Most of the solutions require early detection, and prevent systemic failure by removing key nodes before things begin to cascade. This is quite similar to bulldozing trees to save the forest. It's a sub-optimal solution. It's also extremely difficult to automate, as it requires more global knowledge and assumes reliable control-system connectivity in order to direct the shutdown. It would be much better if each node behaved in such a way as to dampen the 'cascade wave' while still having only localized knowledge.
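To make the cascade idea concrete, here's a tiny toy model I sketched afterwards (my own illustration, not something from a talk): every node runs near capacity, a failed node dumps its load onto surviving neighbors, and anything pushed over capacity fails in turn. The network shape, loads, and capacities are all made up.

```python
# Toy cascade-failure model (illustrative sketch only): each node has a
# capacity and a current load; when a node fails, its load is split among
# surviving neighbors, which may push them over capacity and fail them too.

def build_ring(n, load=0.8, capacity=1.0):
    """A ring network where every node runs near its capacity."""
    neighbors = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
    loads = {i: load for i in range(n)}
    caps = {i: capacity for i in range(n)}
    return neighbors, loads, caps

def cascade(neighbors, loads, caps, initial_failure):
    """Fail one node and propagate overloads using purely local rules."""
    failed = set()
    queue = [initial_failure]
    while queue:
        node = queue.pop()
        if node in failed:
            continue
        failed.add(node)
        alive = [m for m in neighbors[node] if m not in failed]
        if not alive:
            continue
        share = loads[node] / len(alive)   # dump this node's load on survivors
        for m in alive:
            loads[m] += share
            if loads[m] > caps[m]:
                queue.append(m)            # overloaded neighbor fails next
    return failed

if __name__ == "__main__":
    nbrs, loads, caps = build_ring(20, load=0.8)
    print("nodes lost:", len(cascade(nbrs, loads, caps, initial_failure=0)))
```

With the load at 0.8 of capacity, knocking out a single node takes the whole ring with it; drop the load to 0.4 and the failure stays local, which is exactly the run-near-maximum-capacity tradeoff in the next note.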

In some cases, rebooting becomes an issue. The graphic on Wikipedia shows this quite well: the center node reboots and becomes the only link in a high-demand chain. In the real power grid, not all stations are capable of coming back online without a fairly large kick-start from the grid itself. That is, some of the emergency/peak generators require power to start up. In a total network outage, you might find yourself without power for a while, even though you have a generator. Also, the power grid is run as close to maximum capacity as possible (for efficiency reasons), even though this increases the likelihood that a local failure can trigger a cascade.

Along these lines, I also scrawled that it would be nice if a network protocol could give early warning of DDoS or similar attacks. Is early warning of global events possible in a distributed system with very localized knowledge at each node? Given the lack of central authority, who would receive the warning? Neighbor nodes? Can the protocol be specified so as to dampen over-traffic attacks? What info must a node disclose about itself in order to prevent a cascade failure (econ, info theory)?
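Purely as a thought experiment on what localized early warning might look like, here's a sketch (my own speculation, not any published protocol): each node measures only its own request rate, gossips a one-hop warning when it spikes, and starts throttling once it hears warnings from a quorum of distinct neighbors. The WARN_RATE and QUORUM thresholds are invented placeholders.

```python
# Sketch of a purely local early-warning rule (my speculation, not a real
# protocol): each node measures only its own traffic, gossips a one-hop
# warning when the rate spikes, and begins shedding load once warnings
# arrive from enough distinct neighbors -- no central authority involved.

WARN_RATE = 1000   # requests/sec one node considers suspicious (made up)
QUORUM = 2         # distinct neighbor warnings needed before throttling

class Node:
    def __init__(self, name):
        self.name = name
        self.neighbors = []        # other Node objects, filled in later
        self.warnings = set()      # names of neighbors that have warned us
        self.throttling = False

    def observe_rate(self, requests_per_sec):
        """Fed with locally measured traffic; needs no global knowledge."""
        if requests_per_sec > WARN_RATE:
            for n in self.neighbors:
                n.receive_warning(self.name)

    def receive_warning(self, from_name):
        self.warnings.add(from_name)
        if len(self.warnings) >= QUORUM:
            self.throttling = True  # start dropping excess traffic locally

# Tiny demo: a triangle of nodes where two of them see a traffic spike.
if __name__ == "__main__":
    a, b, c = Node("a"), Node("b"), Node("c")
    a.neighbors, b.neighbors, c.neighbors = [b, c], [a, c], [a, b]
    a.observe_rate(5000)
    b.observe_rate(5000)
    print("c throttling?", c.throttling)   # True: warned by two neighbors
```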

Whitfield Diffie also spoke at the conference. He was nice enough to give explicit Creative Commons permission to record and distribute his speech, if any of us felt so inclined. He gave an off-the-cuff history of cryptography and society, with a focus on how we modify our behavior and adjust our risk-cost tradeoffs as technology changes. He also mentioned that he prefers Availability to Integrity to Confidentiality; that is, he thinks the rest of the security world has its priorities reversed.

Some more chicken-scratch:
Is it possible to do BCC encryption? So that Alice posts a single message to a public BBS that can be read by both Bob and Charles, but each thinks they are the only recipient. Can this be done (a) transparently and (b) in a way that adjusts to the addition and removal of members in the group? Think secure IRC or IM chat.
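One obvious mechanism, sketched below with invented helper names (bcc_encrypt / bcc_decrypt), is plain hybrid encryption: encrypt the message once under a random symmetric key, then wrap that key separately for each recipient and post the unlabeled wrappers next to the ciphertext. It hides who else can read the message, though it doesn't fully hide how many recipients there are, and it doesn't yet handle adding or removing members transparently. Uses the third-party Python 'cryptography' package.

```python
# Minimal "BCC encryption" sketch via hybrid encryption (my guess at a
# mechanism, not a vetted construction): encrypt the message once under a
# random symmetric key, then wrap that key separately for each recipient.
# The wrappers carry no names, so the posted blob doesn't say who else can
# read it.  Requires the 'cryptography' package.
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

def bcc_encrypt(message, recipient_public_keys):
    """Return (ciphertext, wrapped_keys) to post on the public BBS."""
    sym_key = Fernet.generate_key()
    ciphertext = Fernet(sym_key).encrypt(message)
    wrapped = [pk.encrypt(sym_key, OAEP) for pk in recipient_public_keys]
    return ciphertext, wrapped

def bcc_decrypt(ciphertext, wrapped, private_key):
    """A recipient tries each anonymous wrapper until one opens."""
    for blob in wrapped:
        try:
            sym_key = private_key.decrypt(blob, OAEP)
            return Fernet(sym_key).decrypt(ciphertext)
        except Exception:
            continue   # not our wrapper; keep trying
    return None

if __name__ == "__main__":
    bob = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    charles = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    ct, keys = bcc_encrypt(b"meet at noon", [bob.public_key(), charles.public_key()])
    print(bcc_decrypt(ct, keys, bob))       # b'meet at noon'
    print(bcc_decrypt(ct, keys, charles))   # b'meet at noon'
```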

The best way to secure computer systems is probably to look toward biology and create an immune system: a learning, dynamic collection of agents that monitor and repair.