stellar, consensus, recent programming
Apr. 8th, 2015 09:25 am

Tl;dr: today my employer flipped the "public" bit on the code we've been writing for the past several months, as well as the underlying paper describing the algorithm. So I'm exhausted and nervous and accomplished-feeling, and looking forward both to processing fixes and suggestions from the internet and to taking a vacation soon-ish. The rest of this post is a bit of a digression into why this might be interesting to you, dear reader, as well as a little background for how I got into it and why it's of (technical) interest to me.
money

I'm not going to discuss the economic logic of these things much at all; I suspect capital will do well in any variant of the current political order, it'd be nice if money transfer were easier, cheaper and better automatable, and I'm not expecting banks to vanish anytime soon. That's all I'm going to say about the political economy of the thing, aside from a few potshots at bitcoin people's politics.
coordination
Computer-y systems are composed of many communicating component-parts: smaller circuits, computers on a network, whatever. The systems-as-a-whole often face a design tension between coordination (component-parts acting in lock-step with one another) and independence (component-parts acting on their own). Ideally you want your system to produce useful behaviour with as little coordination machinery as possible: such machinery introduces focal points for failure, as well as speed limits on the system as a whole. Whatever coordination machinery you do have also has to be very fast and very robust against failure.
So there's a great, long-standing tradition of computer-y engineering in the space of minimizing coordination machinery across systems, on the one hand, and on the other hand making what coordination machinery we must use fast and resilient to failure. These are sometimes called consensus systems but they share a lot of design considerations with stable network protocols of all sorts as well as version control semantics and (perhaps more curiously) concurrent programming language semantics. As a young computer nerd I got interested in several such coordination problems; they are often at the highly-vexing core of problems we face when fighting computers.
version control
In particular, I will point again to the version control link above. The work we did on monotone, which I started nearly 14 years ago now (oh goodness I feel old), is, I think, the most wide-impact work I've ever been involved in. At the core of that work, quite a ways after we got dug into the problem, was a simple and (in hindsight) blindingly obvious step: taking the trust-focused design of linked timestamping out of cryptography research and using it as the spine of a content-addressed store like Venti, allowing it to project forward in time without needing annoying external reference clocks or the clock-sync problems they bring.
It turns out this has a radically simplifying effect on the construction of distributed systems: it is not an exaggeration to say that most internet-scale distributed systems built since then have at least considered, if not adopted, part of this design. Most people view it as originating in git -- which directly copied and massively popularized it -- but to the best of my knowledge this lineage actually originated in a now-lost email comment I got from Jerome Fisher, which culminated in adding a new object type to the system. The idea of bringing linked hashing into the spine of a CAS was absolutely Jerome's; it's one of the few times in my professional life when I have watched a colleague synthesize ideas in a way I knew, instantly, would have huge implications. (The most recent was watching Niko deliver the delightful discovery that the mutator methods of an affine-typed value statically vanish when you move it from a mutable to an immutable owner.) Anyway, the causality-spine-for-a-CAS design solves perhaps a third of your distributed-system woes: it produces a clock, no matter where you are in the system, whose causal order you can be sure of.
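To make the shape of that concrete, here's a minimal sketch (a toy in Python, not monotone's actual object format) of a content-addressed store whose commit objects name their parents by hash, so that ancestry alone gives you a causal order without consulting any wall clock:

    import hashlib
    import json

    class CAS:
        """A toy content-addressed store: objects are keyed by the SHA-256 of their bytes."""
        def __init__(self):
            self.objects = {}

        def put(self, payload: dict) -> str:
            data = json.dumps(payload, sort_keys=True).encode()
            key = hashlib.sha256(data).hexdigest()
            self.objects[key] = payload
            return key

        def get(self, key: str) -> dict:
            return self.objects[key]

    def commit(store: CAS, tree_id: str, parents: list, message: str) -> str:
        # A commit's identity covers its parents, so the hash chain is a causal
        # spine: if B names A as an ancestor, A unambiguously happened before B.
        return store.put({"tree": tree_id, "parents": parents, "message": message})

    def happened_before(store: CAS, a: str, b: str) -> bool:
        """True if commit `a` is a strict ancestor of commit `b`."""
        stack, seen = list(store.get(b)["parents"]), set()
        while stack:
            cur = stack.pop()
            if cur == a:
                return True
            if cur in seen:
                continue
            seen.add(cur)
            stack.extend(store.get(cur)["parents"])
        return False

    store = CAS()
    root = commit(store, "tree0", [], "initial import")
    left = commit(store, "tree1", [root], "work on one machine")
    right = commit(store, "tree2", [root], "concurrent work on another")
    print(happened_before(store, root, left))    # True
    print(happened_before(store, left, right))   # False: concurrent, no causal order

Because a commit's hash covers its parents, nobody can rewrite history without changing every hash downstream of the rewrite; that's the property borrowed from linked timestamping, and it's what lets the structure serve as a trustworthy clock.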
What it doesn't do is merge together two clocks, or decide when they're merged, if you happen to split reality or have your clocks diverge. This is a problem that the Codeville and Monotone teams dug into immediately after digesting Jerome's bright idea, and the resulting research (called Mark Merge) still doesn't get much love; most of the existing VCSs use some combination of LCS-driven and content-hash-tracking 3-way merge, with occasionally hilarious results. It turns out that in VCS-land the difference between Mark Merge and a well-anchored 3-way merge doesn't matter often enough to make people lose much sleep; the main problem with CVS in this regard was just that it didn't record the right ancestor, an easy thing to fix in any replacement.
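To illustrate the "right ancestor" point: a 3-way merge needs a merge base, and in a hash-linked history the natural base is a common ancestor of the two heads. Here's a rough sketch of just that selection step (this is not Mark Merge itself, and real histories are messier):

    from collections import deque

    # A toy commit graph, same shape as the hash-linked history above:
    # each commit maps to the list of its parents.
    PARENTS = {
        "root": [],
        "left": ["root"],
        "right": ["root"],
    }

    def ancestors(parents: dict, head: str) -> set:
        """Every commit reachable from `head`, including `head` itself."""
        seen, stack = set(), [head]
        while stack:
            cur = stack.pop()
            if cur in seen:
                continue
            seen.add(cur)
            stack.extend(parents[cur])
        return seen

    def merge_base(parents: dict, a: str, b: str):
        """Pick a common ancestor of `a` and `b` by breadth-first search from `a`.

        Real histories can have several candidate bases (criss-cross merges),
        which is exactly where a naive 3-way merge starts producing surprises.
        """
        b_ancestors = ancestors(parents, b)
        seen, queue = set(), deque([a])
        while queue:
            cur = queue.popleft()
            if cur in b_ancestors:
                return cur
            if cur in seen:
                continue
            seen.add(cur)
            queue.extend(parents[cur])
        return None

    print(merge_base(PARENTS, "left", "right"))  # "root": the base for a 3-way merge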
To some extent that work was edging into the problem-spaces currently being addressed with CRDTs and LVars, but you'll have to follow Lindsey Kuper for details on that stuff; it's very likely to be at the center of systems you're using now or will be using in the future, but it's pretty deep and subtle work, and I'm totally nonexpert in it.
consistency and transactions
Deciding when you have a single, consistent clock, or any other agreed-on value, is mostly outside the purview of a VCS; generally people are content to let the integrator node in a VCS system be somebody other than themselves, so long as they can track the relationships between versions, find version numbers and decide which one has which bug.
In other systems though -- for example, computers doing conventional double-entry accounting -- it's vitally important to the correctness of the system that you be able to prevent divergence of opinion between nodes (say, about the balance of an account) lest people physically do contradictory things in the real world based on those divergent opinions (say, buy two pairs of $100 shoes using the same "$100 bank-account balance" at two different, not-yet-reconciled stores). So banks and utility companies and government systems do this all the time. You can't buy two pairs of shoes with the same balance, nor enter a country twice with the same passport, nor move your utility billing address to two different locations.
All manner of computer systems maintain consistent sets of facts ("databases"), and processing changes to them at high speed and with high reliability usually falls under the rubric of "transaction processing". You can buy Large Computers from Serious Companies to do it. They do it very unforgivingly and very fast. These systems (and their deployment in the real world) usually have two interesting "distributed systems" levels to them. At least for the purposes of this blog post.
The first and most obvious level is a large-scale operationally centralized "client/server" type of distributed system. All the data is stored in machines owned by "the bank" or "the government" and everyone who talks to it talks as a mere client. A terminal. A dependent entity that sends requests to the central server and gets responses that it must trust absolutely. Nothing peer-to-peer about it.
The second, often hidden level is a set of small-scale administratively centralized replicated state machine distributed systems, running a consensus algorithm inside the datacenter of the party that owns the database. These are not always present, and they have a high degree of technical-ecological variation, from a small paxos algorithm doing lock management or replica-election in leader/follower setups, to very elaborate IBM sysplex thingies I can barely comprehend. But they have in common an interest in maintaining a set of redundant replicas doing exactly the same thing, step by step, in a specific and consistent order. Having replicas reduces the possibility of catastrophic failure, and/or improves the performance of the system overall (eg. by providing extra read-capacity).
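The core replicated-state-machine idea is simple to state, even if the agreement machinery around it isn't. Here's a generic sketch (not any particular vendor's product, and not paxos itself, which solves the hard part of agreeing on the log in the first place):

    # A generic replicated-state-machine sketch: given an agreed, totally ordered
    # log of commands, every replica applies it deterministically and ends up in
    # the same state. How the log gets agreed on is the hard part that paxos and
    # friends solve; here that step is simply assumed.

    def apply_command(state: dict, command: tuple) -> None:
        op, account, amount = command
        if op == "deposit":
            state[account] = state.get(account, 0) + amount
        elif op == "withdraw":
            if state.get(account, 0) >= amount:
                state[account] -= amount
            # else: deterministically rejected, on every replica alike

    agreed_log = [
        ("deposit", "alice", 100),
        ("withdraw", "alice", 40),
        ("deposit", "bob", 25),
    ]

    replicas = [dict() for _ in range(3)]
    for replica in replicas:
        for command in agreed_log:        # same commands, same order, same result
            apply_command(replica, command)

    assert replicas[0] == replicas[1] == replicas[2]
    print(replicas[0])                    # {'alice': 60, 'bob': 25}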
decentralized transactions
Both levels of distributed system that appear in conventional transaction processing are, you will note, quite centralized in one way or another. Often, seemingly, in reflection of the existing power imbalances we have with banks, governments and other institutions owning the transaction processors. Since centralization is a bit anathema to the architectural principles of the internet as well as frowned-on by various engineering and political sects, lots of people have considered, over the years, ways of building transaction processing systems that aren't centralized.
The first level of distributed system -- the privileged position of one peer over another in admitting a transaction -- is comparatively easy to reformulate as a decentralized system. Insofar as this is possible, it's typically done by associating each fact in the database(s) with a public cryptographic key, and requiring transactions that update the facts to be signed by the private-halves of the associated keys. This means that "authority to make a transaction" is pushed to "the edges" of the network, which is all well and good in internet-architecture logic.
But if you take the opportunity, once freed from keeping all your data on a single server, to replicate it between uncoordinated (potentially even mutually untrusting) peers, relying on the public keys to protect the integrity of the data, you'll generally produce a system vulnerable to double-spending, the very problem transaction processing sets out to eliminate. Someone can cryptographically sign two valid transactions -- "buy the red shoes with my $100" and "buy the green shoes with my $100" -- and send them to two different vendors. If there's any sort of time lag or communication failure between the two vendors or the database replicas they consult -- which is going to be frequent in practice, especially since the vendors have no reason to consult directly with one another and you might even be using a git-like chained-history model of time to avoid needing a central clock -- you might get away with violating the very invariants the system was designed to maintain.
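Here's a toy rendition of that failure, with made-up names and the signature checks elided (assume each transaction carries a valid signature from the account's key, so each replica considers it well-formed on its own):

    # Two transactions, each validly signed by the account owner (assume the
    # signature checks pass; that part is easy), which together spend the same
    # $100 twice.
    tx_red   = {"from": "alice", "to": "red-shoe-shop",   "amount": 100}
    tx_green = {"from": "alice", "to": "green-shoe-shop", "amount": 100}

    def admit(ledger: dict, tx: dict) -> bool:
        """A replica admits a transaction if, from its own local view, the funds exist."""
        if ledger.get(tx["from"], 0) < tx["amount"]:
            return False
        ledger[tx["from"]] -= tx["amount"]
        ledger[tx["to"]] = ledger.get(tx["to"], 0) + tx["amount"]
        return True

    # Two replicas of the ledger, temporarily out of touch with one another.
    replica_a = {"alice": 100}
    replica_b = {"alice": 100}

    print(admit(replica_a, tx_red))     # True: replica A sees the funds and admits it
    print(admit(replica_b, tx_green))   # True: replica B, unaware of tx_red, admits this one too

    # When the replicas later compare notes there is no ordering of these two
    # transactions under which both are valid; applying the "other" one to
    # either replica fails. The two histories cannot be merged.
    print(admit(replica_a, tx_green))   # False
    print(admit(replica_b, tx_red))     # False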
Essentially: the system whose records you want to keep consistent can have its replicas temporarily diverge, and each replica might admit a transaction that disagrees with a transaction admitted by the other. Reconciling these might not even be possible when the replicas compare notes in the future. You end up with two git branches you can't merge: a conflict.
It turns out this is the exact sort of problem addressed by the replicated state machine consensus algorithms at the second level of distributed system discussed above, in conventional transaction processing. But whereas it's an optional "high integrity" part of conventional centralized transaction processing -- some transaction processing is done with only a single replica and no such consensus system -- as soon as you start reformulating the transaction processing task as a decentralized system, one that merely shares self-signed transactions between an open set of equal peers at the edges of a widely distributed internet, the problem of maintaining lock-step consensus, at root just a question of consensus about the order transactions should occur in, comes into very sharp relief. It becomes the problem.
Moreover, most of the obvious ways of solving it -- mostly based on some kind of majority voting scheme -- seem not to work very well; most are subject to something called a sybil attack, the very crude attack consisting of "adding fake participants" -- so-called "sybils", or "sockpuppets" -- until you overwhelm the population you're polling from. Indeed, at this point some people consider sybil-resistance equivalent to the distributed consensus problem, in the sense that if you can solve the former you're probably only one step from a solution to the latter.
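The attack is almost embarrassingly simple to state. A toy version, assuming identities are free to create:

    # Naive open-membership majority voting: whoever can mint the most identities
    # controls the outcome, because creating an identity costs nothing.
    from collections import Counter

    honest_votes = {f"honest-{i}": "transaction-order-X" for i in range(10)}
    sybil_votes  = {f"sock-{i}":   "transaction-order-Y" for i in range(11)}  # free to create

    ballots = {**honest_votes, **sybil_votes}
    tally = Counter(ballots.values())
    print(tally.most_common(1))   # [('transaction-order-Y', 11)] -- the sockpuppets win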
bitcoin (ugh)
This is basically the problem space that "bitcoin" launched itself into, though with right-libertarian political noise surrounding it to such an extent that for the most part I don't want to deal with that community. The problem, roughly, is acquiring -- automatically -- a system-wide consensus view of a data structure built by a totally open-ended membership, using content-addressed storage of "accounts" and a spine of linked hashes of the transactions causing them to change (the so-called "block chain").
You could call it the problem of automatically selecting, at any given moment, an "official" branch out of all the forks of a git repository in existence in the world, so that everyone knows "where to look" to get "the official" current state of it. Only this git repository contains financial account balances and you have to be the owner of an account to change it (this part is comparatively easy cryptography). Bitcoin throws a whole pile of obnoxious goldbug monetary theory and ponzi scheme incentives into the mix to spread itself, but at a protocol level that's all it's really trying to do. Moreover, in order to resist sybil attacks -- it's built to have open membership -- it also uses cryptographic puzzle-solving (based on compute-power) to distribute the authority for selecting the official consensus-state.
This is a particularly bad idea, because while integrity checking with a hash like SHA256 is asymptotically secure, using partial preimages of it (finding inputs whose hashes have a required prefix) as a cryptographic puzzle designed to rate-limit something you want to happen regularly is inherently not; and it's worse still when your security rests not on evaluating a single adversary but on the current gap between the network-wide average and the worst-case maximum computational resources of a pool of adversaries. Combine this with bitcoin's dual use of the puzzles as "incentives", and you get the predictable result: a hardware manufacturing arms race, a small country's worth of oil burned for no reason, and a "transaction processing" network with extremely confusing and non-obvious failure modes, likely vulnerability to the very computationally powerful actors it was designed to resist, and a very low transaction rate with even worse settlement time.
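For concreteness, the puzzle in question is roughly this (a sketch of the mechanism only, not bitcoin's exact block encoding or difficulty retargeting): grind nonces until the hash of your block falls under a target, i.e. starts with some number of zero bits.

    import hashlib

    def solve_puzzle(block: bytes, difficulty_bits: int) -> int:
        """Grind nonces until sha256(block + nonce) has `difficulty_bits` leading zero bits.

        There is no shortcut other than hashing faster, which is exactly what
        turns a rate-limiter into a hardware arms race: each added bit of
        difficulty doubles the expected work, and so does the payoff from
        specialized hashing hardware.
        """
        target = 1 << (256 - difficulty_bits)
        nonce = 0
        while True:
            digest = hashlib.sha256(block + nonce.to_bytes(8, "big")).digest()
            if int.from_bytes(digest, "big") < target:
                return nonce
            nonce += 1

    print(solve_puzzle(b"some block of transactions", difficulty_bits=16))
    # 16 bits is cheap on a laptop; the real network's difficulty is vastly
    # higher and retargets so that solutions appear only every ten minutes or so.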
I am, needless to say, no more a fan of the technological approach taken here than I am of the cultural and political motives underlying the craze. I say this with complete respect for the team who engineered it: the code's good, the cryptographic reasoning sound. I just think the security model and solution strategy are misguided.
stellar
Anyway, all this is a very roundabout way of laying out the background story for what stellar is and why I got involved in it. It's an attempt at generalizing existing consensus algorithms into something that works in the political and technical reality of the internet -- requiring open membership and distributed trust, using content-addressing and cryptographic causality chaining rather than synchronized clocks, etc. -- without throwing the speed and comprehensibility of existing consensus-algorithm research out with the bathwater of fixed membership lists. In theory, the algorithm(s) involved will run at "full speed" (closer to normal transaction processors) and use no more resources than any other distributed-system protocol on the net. It should take only a modest amount of work to acquire consensus: just enough to conduct a couple of rounds of voting and verify a handful of Ed25519 signatures.
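To give a feel for the "handful of resources" claim, here's what verifying a round's worth of signed vote messages costs, using PyNaCl as one possible Ed25519 implementation. The message contents and the counting here are made up for illustration; the actual voting protocol is what the paper describes, and it's considerably more subtle than tallying signatures.

    # A rough feel for the per-round cryptographic cost: a handful of Ed25519
    # signature verifications over small vote messages. Uses PyNaCl
    # ("pip install pynacl") as one possible Ed25519 implementation; the message
    # format below is invented for illustration, not the protocol's wire format.
    from nacl.signing import SigningKey

    # Five peers, each with a signing key; in reality only the public verify
    # keys of the peers you've chosen to listen to would be configured locally.
    peers = {name: SigningKey.generate() for name in ("a", "b", "c", "d", "e")}

    candidate = b"ledger 1234: transaction-set hash deadbeef"
    votes = {name: key.sign(candidate) for name, key in peers.items()}

    valid = 0
    for name, signed in votes.items():
        peers[name].verify_key.verify(signed)   # raises BadSignatureError if forged
        valid += 1

    print(f"{valid}/{len(peers)} valid signed votes for the candidate value")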
Viewed through this lens, it's basically picking up on distributed-system engineering threads I last visited before git even existed, though tackling an edge of the terrain that VCS users typically don't care about. Plenty of protocols do need consensus views of things, though, and stellar was an opportunity for me to implement the plumbing for such a system, with someone else more clever doing the paxos-y protocol-design heavy lifting, and me just doing what I do best: wiring fussy things up so they hold together and hopefully run ok.
The work emerged, initially, around a fork of code written by a company called ripple, who were trying to do something similar. I don't know much about them. After some difficulty working with that code, and in light of the fact that we wanted to rewrite the upper consensus layer anyways, we wound up deciding to rewrite the whole program. This wasn't a decision taken lightly, but so far it has seemed to be the right one, for us. We were able to simplify a lot in the process, and that makes it much easier for us to understand.
The new code is actually a pretty good program. I'm pleased with it, even a little proud. It uses internet standard file formats, protocols and components whenever possible. It's very transparent and easy to set up. It stays out of the way and doesn't overextend itself into a general purpose secure mobile code substrate or smart contract layer or anything of the sort; it just tries to come to consensus on a transaction set, verify the crypto signatures, apply the transactions to an adjacent SQL database, and publish the results to long-term, stable storage. In a loop. Nice and easy.
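In sketch form, that loop looks something like the following, with Python and sqlite3 standing in for the real thing and hypothetical stubs standing in for the consensus, signature-checking and publishing machinery:

    # A highly simplified sketch of that loop; the stubs below are placeholders,
    # not the program's actual interfaces.
    import sqlite3

    def next_agreed_transaction_set():
        """Stub for the consensus rounds: yields (ledger_number, transactions)."""
        yield 1, [("alice", "bob", 10), ("bob", "carol", 5)]

    def signatures_valid(tx) -> bool:
        """Stub for the cryptographic checks on each transaction."""
        return True

    def publish(ledger_num: int, txs) -> None:
        """Stub for writing the closed ledger out to long-term, stable storage."""
        print(f"published ledger {ledger_num} with {len(txs)} transactions")

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
    db.executemany("INSERT INTO accounts VALUES (?, ?)",
                   [("alice", 100), ("bob", 50), ("carol", 0)])

    for ledger_num, txs in next_agreed_transaction_set():
        applied = []
        for tx in txs:
            if not signatures_valid(tx):
                continue
            src, dst, amount = tx
            db.execute("UPDATE accounts SET balance = balance - ? WHERE name = ?",
                       (amount, src))
            db.execute("UPDATE accounts SET balance = balance + ? WHERE name = ?",
                       (amount, dst))
            applied.append(tx)
        db.commit()
        publish(ledger_num, applied)

    print(db.execute("SELECT name, balance FROM accounts ORDER BY name").fetchall())
    # [('alice', 90), ('bob', 55), ('carol', 5)]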
trust relationships
The stellar protocol does piggy-back on declared -- though distributed -- trust relationships. It does not fabricate trust out of thin air; consensus-node operators have to, at some level, decide when someone is a sybil and when they're someone at least known. The fact is that you, and especially network service providers you're a client of, are making trust decisions all the time already, and have already made a dozen more-serious ones before even considering which peers are participating in transaction consensus at the mid level of financial infrastructure. Probably it's a decision you'll never make, any more than you make decisions about peering at the IP transit level or overnight lending at the interbank level. These decisions are made by institutions and based on their inter-institution trust relationships; stellar doesn't try to eliminate those relationships, just ensure they can always be made transparently and at the edges of the network, in a way that lacks an inherent central point of control.
Bitcoin people hate admitting these relationships exist, because trust means authority means the state, the state is evil and tyrants and murderers and so forth; but trust relationships exist as a fact of life at all levels of human affairs, and trying to avoid them entirely is (to my thinking) an exercise in futility. Bitcoin itself, as an ecosystem, is plagued with trust problems despite its disdain for "central banks" or singular points of trust; it turns out that trusting a bank is less commonly a problem than trusting an uninsured and unregulated exchange, or worse, trusting a random scam artist's crypto keys that they sent to you in phishing emails.
You downloaded your bitcoin app from somewhere you trust. You bought your computer from somewhere you trust. Even the decision to trust the person-with-the-most-SHA256-colliding hardware (bitcoin's current, absurd rule) is a trust judgment. It's trust-turtles all the way down. So while on the one hand I lament the design and especially the implementation of many clunky, legacy trust-graph systems (cough DNS cough x.509), I have to admit that the graphs emerge any time you make any decision at all, and codifying them in software is nothing novel; admitting it is just being explicit about what you're already doing. We codified them in monotone, people codify them in git (git actually did this better, though less purist: it hitched to the DNS, PGP and SSH trust graphs directly), and I don't mind codifying them in something like the configuration files for a transaction-processing network.