Hacker News

I don't think IPFS is trying to replace HTTP; it's trying to replace URLs. IPFS's goal is to offer universal content-centric resource location as an alternative to the Web's host-centric resource location. This is a nice thing because it decouples content from hosting--you can get the content from any reachable peer who has it. A good chunk of Team IPFS's engineering efforts have been put towards offering a wide variety of protocols to efficiently resolve a content hash to the content in a wide variety of contexts, covering things like peer discovery, content hash announcements, hash naming (IPNS), super-peers (ipfs-cluster), pub/sub, and so on.
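The core idea of content-centric addressing can be sketched in a few lines (using plain SHA-256 as a stand-in; real IPFS CIDs use a multihash/multibase encoding, which this toy example skips):

```python
import hashlib

def content_address(data: bytes) -> str:
    # The address is derived from the bytes themselves, not from a host,
    # so any reachable peer holding identical bytes can serve the request.
    return hashlib.sha256(data).hexdigest()

page = b"<html>hello</html>"
addr = content_address(page)

# Identical content always yields the identical address,
# regardless of which peer stores it.
assert content_address(page) == addr
```

This is the decoupling mentioned above: the identifier commits to *what* the content is, and the resolution protocols (peer discovery, announcements, etc.) are free to find *where* it currently lives.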

IPFS is definitely not a filesystem in the classical (POSIX-y) sense. I think the naming is unfortunate, because it implies that IPFS has things like a global root directory, permission bits, users, ACLs, immutable human-meaningful names that resolve to mutable content, a POSIX-y consistency model, etc. IPFS does none of these things--it defers them to layers above it. All IPFS is concerned with is providing a Merkle DAG abstraction, plus the network protocols needed to walk it and fetch the underlying content.
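A minimal sketch of the Merkle DAG abstraction (hypothetical node layout; IPFS's actual wire format differs): each node's hash covers its own data plus the hashes of the nodes it links to, so a single root hash pins the entire graph.

```python
import hashlib
import json

def node_hash(data: bytes, links: list) -> str:
    # A node's identity covers its payload and its children's hashes,
    # so verifying the root hash transitively verifies the whole DAG.
    payload = json.dumps({"data": data.decode(), "links": sorted(links)}).encode()
    return hashlib.sha256(payload).hexdigest()

leaf_a = node_hash(b"chunk A", [])
leaf_b = node_hash(b"chunk B", [])
root = node_hash(b"file manifest", [leaf_a, leaf_b])

# Changing any leaf changes every hash on the path to the root.
tampered = node_hash(b"file manifest", [node_hash(b"chunk A (modified)", []), leaf_b])
assert tampered != root
```

Walking the DAG then amounts to fetching the root node by hash, reading its links, and fetching each linked hash in turn, from whichever peers have them.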



Yes, and that's the problem: it cannot address many of the features required for dynamic content. I'm sure it works fine for static content, but that's the easy problem to solve (and it has been solved before).


You're right, IPFS does not try to address dynamic content. I don't speak for the authors, but I think their opinion is that doing so is out of scope. I think they expect the application to represent each piece of mutable content as a Merkle DAG, and have the application acquire each user's written version of the content from IPFS, merge them, and announce the "latest" version under a new hash.
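One way to picture that layering (a toy sketch of my own, not an IPFS API): since every stored object is immutable, an application models mutation as a new node linking back to the previous version's hash, and "the latest version" is whatever hash the application announces most recently.

```python
import hashlib
import json

def version_hash(content: str, prev) -> str:
    # Each version is an immutable node; "mutation" is a new node
    # that links back to the hash of the version it supersedes.
    payload = json.dumps({"content": content, "prev": prev}).encode()
    return hashlib.sha256(payload).hexdigest()

v1 = version_hash("draft", None)
v2 = version_hash("edited draft", v1)  # announced as the new "latest"

# The chain of prev links lets peers reconcile divergent histories
# before announcing a merged version under yet another hash.
assert v1 != v2
```

Merging concurrent writes from different users (and deciding which announced hash wins) is exactly the part left to the application layer.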




