================================================================================
Provable protocol implementations (chaired by Thomas Moscibroda)
================================================================================

Formally Verifiable Networking
Anduo Wang, Limin Jia, Changbin Liu, Boon Thau Loo, Oleg Sokolsky (University
of Pennsylvania), Prithwish Basu (BBN Technologies)

Research challenge: ensuring that an implementation matches its design and
specification. Some work has been done in this space recently. The approach:
a meta-model, formal logical statements (declarative networking), and then
theorem proving. Protocols are expressed in Network Datalog (NDlog), and a
program is verified by proving invariants over it; schematically, reachability
in NDlog is just two rules, path(@S,D) :- link(@S,D) and
path(@S,D) :- link(@S,Z), path(@Z,D), and one proves invariants such as
"every derived path corresponds to a chain of links". Several examples of
proving/verifying aspects of protocols were presented, and several papers were
cited to justify aspects of the system they designed.

---- Discussion

Lakshmi chimes in with some networking words of wisdom: it seems like you've
done a lot in this space, but for someone who doesn't know much about formal
verification, what does this mean in the context of designing a new protocol?
Designing invariants for a new protocol is hard; even writing the model for
something as simple as a message that is sent but never arrives is hard. So
what does this mean, contextualized, for networking research?

On your last slide of ongoing work, what did you mean by non-monotonic
algebraic models for a wider range of well-behaved protocols? What can a
non-monotonic model do that a monotonic model can't? In the context of
networking, does this mean it can help us when links disappear? Non-monotonic
algebraic model... Also, would you call BGP a well-behaved protocol?
Well-behaved in the sense that it will converge.

Do you check for liveness as well as safety properties? What model checker
are you using? Something like SMD? You're not using linear-temporal logic?
No. It seems like you'd get a lot of false negatives by not using a stateful
proof technique. So, I can imagine it checks for convergence, but this must
be very conservative. Yes, very conservative with respect to the logic.

So, tell me, for someone not knowing what's going on in formal verification:
there seems to be some really difficult thing that makes proving "systems"
problems hard. It's tough, for instance, that you don't have types at the
bottom level. Is there something equivalent here? In classical
distributed-algorithms theory, there seem to be a lot of reasonable things
you can't prove (e.g., the TCP handshake). Distributed algorithms are hard
to prove, I agree.

--------------------------------------------------------------------------------

Chemical Networking Protocols
Thomas Meyer, Christian Tschudin (University of Basel)

Many protocols are built on the following principle: bring the system to an
equilibrium that coincides with its point of optimal operation. The analogy:
a network G=(V,E) of nodes, where each node holds a multiset of "molecules"
and an interaction between molecules is a "reaction"; this is chemical
networking theory. Law of mass action: a reaction's speed is proportional to
the product of the multiplicities of its reactant molecule types; one may pin
the multiplicity of one type at 1, so the speed reduces to that of the
remaining reactants. Fraglets is a programming language for expressing
distributed chemical reactions; a partial list of the Fraglets instruction
set was presented. Discussion of the idea of homeostasis: the system recovers
quickly even after 80% of the molecules (nodes?) are destroyed.
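To make the law of mass action concrete, here is a minimal sketch in Python
(ours, not the speakers'; the reversible reaction A <-> B and all rate
constants are invented for illustration). It Euler-integrates the mass-action
rate equations and converges to the predicted equilibrium:

    # Minimal mass-action sketch (illustrative only, not the authors' code).
    # Reversible reaction A <-> B with rate constants k1 (A -> B) and
    # k2 (B -> A). Law of mass action: each reaction's speed is its rate
    # constant times the product of its reactants' multiplicities.

    def simulate(a=100.0, b=0.0, k1=2.0, k2=1.0, dt=0.001, steps=10_000):
        """Euler-integrate da/dt = k2*b - k1*a and db/dt = -da/dt."""
        for _ in range(steps):
            v_ab = k1 * a   # speed of A -> B (single reactant, so just |A|)
            v_ba = k2 * b   # speed of B -> A
            a += (v_ba - v_ab) * dt
            b += (v_ab - v_ba) * dt
        return a, b

    a, b = simulate()
    # At the fixed point k1*a = k2*b, so a/b = k2/k1 = 1/2 here.
    print(f"a = {a:.2f}, b = {b:.2f}")   # ~33.33 and ~66.67

This deterministic ODE view is what makes analytic convergence arguments
possible in the first place.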
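The homeostasis claim, and the scheduling question in the discussion below,
both hinge on stochastic execution: reactions fire at random, exponentially
distributed times whose rates follow the law of mass action, as in a
Gillespie-style simulation. A minimal sketch, again with an invented reaction
pair (constant influx 0 -> A, mass-action decay A -> 0) rather than anything
from the talk: the population returns to its equilibrium K_IN/K_OUT even
after 80% of the molecules are destroyed.

    import random

    # Gillespie-style stochastic simulation (illustrative only).
    # Reactions: influx 0 -> A at constant rate K_IN, and decay A -> 0 at
    # rate K_OUT * n by the law of mass action. Equilibrium: n* = K_IN/K_OUT.
    K_IN, K_OUT = 100.0, 1.0

    def gillespie(n=100, t_end=10.0, crash_at=5.0):
        t, crashed = 0.0, False
        while t < t_end:
            if not crashed and t >= crash_at:
                n, crashed = n // 5, True        # destroy 80% of molecules
                print(f"t={t:5.2f}  crash: n -> {n}")
            rates = (K_IN, K_OUT * n)
            total = sum(rates)
            t += random.expovariate(total)       # random next-reaction time
            if random.random() < rates[0] / total:
                n += 1                           # influx fired
            else:
                n -= 1                           # decay fired
        return n

    random.seed(1)
    print("final n =", gillespie())   # back near n* = 100 after the crash

This is also the sense in which the time of the next reaction cannot be
pinned down: it is an exponential random variable, so convergence has to be
argued in distribution, or via the ODE limit above, rather than per schedule.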
---- Discussion

One of the nice things you said is that you can prove convergence using
differential equations. But then you work with scheduling and timing: does
that destroy your notion of convergence? You can't exactly determine when
the next reaction happens.

Every example you showed leads naturally to linear differential equations,
but can you always do this? If they are nonlinear, they admit a number of
answers. It depends on how you design the protocol. Is there a guide? You
don't want a bistable or unstable system. No; there is some evidence that
there are hints - this is just engineering.

How do the robustness properties you get from this approach relate to
robustness in the typical sense we think of? We're used to machines failing
in certain ways, e.g., segmentation faults. If one molecule is corrupt, the
others are okay; we don't have a sequential-like paradigm.

Do you have any thoughts on how code injection would happen? With chemicals,
you accidentally drop something in. How does that happen in this case? We
haven't looked at the security aspects yet.

You showed load-balancing results, but they seemed a little abstract to me.
Did you try looking at self-repairing DHTs, for instance? How would
something like this work in that environment? Not sure.

How does looking at things in this paradigm help you? Could you imagine a
system like this synthesizing code? What's the motivation for looking at it
this way?

Random thought: traditional reactions don't have to deal with latencies.
What happens when you add a fixed link delay? It makes the equations more
complicated.

EOS
================================================================================