I’ve never understood Oracles as a concept. They seem fundamentally untrustworthy.
ETH seemed to have a higher tolerance than Bitcoin for introducing the trade-offs of Oracles.
But without Oracles, the overlay networks you speak of couldn’t compute any meaningful logic without requiring an absurd amount of resources from the main net.
If you’re making a distributed computer, it’s like you’re scaling silicon circuits up to the network level. So any operation using those circuits will of course be more expensive. That means that many computations on the main net are cost prohibitive.
I would like to learn more about the trade-offs and risks of a blockchain relying on Oracles as a source of truth.
Like I said, Ethereum can be used as a distributed computer presuming a complete web of 100% trust between nodes. E.g. when the nodes are all run by a single corporation, with the node operators being different employees in different locations. In that case, the oracles (= native processes interacting with the blockchain nodes) would be presumed trustworthy just as much as the nodes themselves are. The only question would be whether the out-of-chain computations of the oracles are able to move the state forward in a deterministic, linearizable manner as if they were on-chain computations — which is totally possible if you make your blockchain into a collection of arbitrary-CRDT smart-contracts and only interact with shared data through them.
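To make the CRDT point concrete, here's a minimal sketch (my own illustration, not anything Ethereum-specific) of the simplest CRDT, a grow-only counter. Its merge is commutative, associative, and idempotent, so replicas converge to the same state no matter what order updates arrive in — which is exactly the property that lets off-chain processes move shared state forward deterministically:

```python
class GCounter:
    """Grow-only counter CRDT: each node only increments its own slot."""

    def __init__(self):
        self.counts = {}  # node_id -> that node's local count

    def increment(self, node_id, amount=1):
        self.counts[node_id] = self.counts.get(node_id, 0) + amount

    def merge(self, other):
        """Element-wise max: commutative, associative, idempotent."""
        merged = GCounter()
        for nid in set(self.counts) | set(other.counts):
            merged.counts[nid] = max(self.counts.get(nid, 0),
                                     other.counts.get(nid, 0))
        return merged

    def value(self):
        return sum(self.counts.values())


# Two nodes update independently; merging in either order yields the
# same state, so no coordination (or total ordering) is needed.
a, b = GCounter(), GCounter()
a.increment("node-a", 3)
b.increment("node-b", 5)
assert a.merge(b).counts == b.merge(a).counts
assert a.merge(b).value() == 8
```

Real contracts would use richer CRDTs (sets, registers, maps), but they all lean on the same algebraic merge property.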
But to directly address your statement: in general, oracles as a concept are “trustworthy” iff 1. they rely solely on public information as input; or 2. they are run by a centralized entity, which is the same entity that has sole ownership of the resource being modified by the oracle; or 3. the system is built such that any oracle can submit a cryptographic proof of X, and as soon as any oracle does so, the condition is triggered.
Re: case 1, just as you can re-compute the transactions of a blockchain to arrive at the same consensus state, you should be able to grab a copy of the oracle’s code yourself and run it in a proper “context”, whereupon it will re-assert all the same transactions it ever originally asserted to the chain. This requires that all APIs the oracle deduces its assertions from have historical-data variants.
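A hypothetical sketch of what that audit looks like: if the oracle's decision rule is a pure function of its input feed, anyone can re-run it over the historical data and check that it re-derives exactly the assertions originally posted on-chain. The names here (`decide`, `replay`, the feed shape) are illustrative, not a real API:

```python
def decide(observation):
    """The oracle's pure, deterministic rule: observation -> assertion or None."""
    if observation["price"] > 100:
        return ("price_above_100", observation["timestamp"])
    return None


def replay(history):
    """Re-derive every assertion the oracle should have made from the feed."""
    return [a for obs in history if (a := decide(obs)) is not None]


# Historical variant of the public API the oracle reads from:
history = [
    {"timestamp": 1, "price": 90},
    {"timestamp": 2, "price": 120},
    {"timestamp": 3, "price": 130},
]

# What the oracle actually posted on-chain; replay must match it exactly.
onchain_assertions = [("price_above_100", 2), ("price_above_100", 3)]
assert replay(history) == onchain_assertions
```

The key requirement is the one stated above: `decide` must depend only on the feed, and the feed must have a historical-data variant, or the replay can't be reproduced.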
Re: case 2, the oracle is basically just standing in for a human. If you trust a human to do X, then you trust an oracle to do X.
Re: case 3, this is what on-chain prediction markets do. Rather than the win-condition for a prediction being something a human watches for and then enters, many private individuals (bettors) are incentivized to each have their own oracle running that watches for the win-condition however they like, and then — as soon as that condition obtains — proves to the smart contract that the condition obtains. The smart contract doesn’t have to care which oracle submitted the proof, only that there is now such a proof.
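A toy sketch of that shape (my own illustration, not any real market's contract): the contract verifies the submitted evidence itself and ignores the submitter's identity. Here the "proof" is just a hash preimage standing in for whatever cryptographic evidence a real market would require:

```python
import hashlib


class PredictionContract:
    """Settles as soon as ANY party submits a valid proof of the outcome."""

    def __init__(self, commitment):
        self.commitment = commitment  # hex digest committing to the outcome
        self.settled = False

    def submit_proof(self, submitter, preimage):
        """Any bettor's oracle may call this; who `submitter` is doesn't matter."""
        if self.settled:
            return False
        if hashlib.sha256(preimage).hexdigest() == self.commitment:
            self.settled = True  # condition proven -> trigger payout logic
            return True
        return False


secret = b"team-a-wins"
contract = PredictionContract(hashlib.sha256(secret).hexdigest())
assert contract.submit_proof("bettor-7", b"wrong-guess") is False
assert contract.submit_proof("bettor-42", secret) is True
assert contract.settled
```

The incentive structure does the rest: whichever bettor's oracle notices the condition first races to submit, so the contract never depends on one designated watcher.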
Oracles are always going to be less decentralized than a distributed blockchain VM. Chainlink (an oracle platform) approaches this by having oracle nodes post a bond that can get slashed if their reported computation is deemed invalid. But most Chainlink oracle networks have 21 or fewer nodes. This is fine for supplemental data, but if used as base-layer consensus it would be vulnerable to all the same attacks that DPoS chains have suffered.
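The incentive shape of bonded reporting (this is a simplified illustration of the general mechanism, not Chainlink's actual protocol) can be sketched in a few lines: each node stakes a bond and reports a value, the round settles on the median, and any node that deviates from the agreed answer beyond a tolerance loses its bond:

```python
from statistics import median


def settle_round(reports, bonds, tolerance=0.01):
    """reports: node -> reported value; bonds: node -> staked amount.

    Returns (agreed value, updated bonds) with deviant nodes slashed.
    """
    agreed = median(reports.values())
    new_bonds = {}
    for node, value in reports.items():
        if abs(value - agreed) <= tolerance * agreed:
            new_bonds[node] = bonds[node]  # honest report: bond intact
        else:
            new_bonds[node] = 0            # deviant report: bond slashed
    return agreed, new_bonds


reports = {"n1": 100.0, "n2": 100.2, "n3": 250.0}
bonds = {"n1": 10, "n2": 10, "n3": 10}
price, bonds = settle_round(reports, bonds)
assert price == 100.2
assert bonds == {"n1": 10, "n2": 10, "n3": 0}
```

The small-committee weakness is visible right in the math: with 21 nodes, corrupting 11 moves the median itself, at which point the honest minority is the one that gets slashed.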