At last! This was the only feature I was missing in Postgres for years.
Postgres is so amazing I still find it hard to believe it's free. I abused it many times (as a graph database; as a real-time financial data analytics engine with thousands of new ticks coming in each second; as a document storage with several TBs of data per node), and amazingly, it just worked. Magic.
They used adjacency lists, which looks strange to me; that's a common way of representing graph data in memory, but it doesn't map well to the relational model. I used "extreme normalization", where all edges are represented as many-to-many relationships via associative tables.
Though I stored document relationships and social graphs, not semantic graphs as they describe in this paper. I have little experience with semantic graphs; perhaps they have different requirements.
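The "extreme normalization" approach above can be sketched roughly as follows. This is a hypothetical minimal schema (table and column names are invented, and SQLite stands in for Postgres purely so the snippet is self-contained; the SQL itself is portable): every edge is a row in an associative many-to-many table, and traversal stays inside SQL via a recursive CTE.

```python
import sqlite3

# Nodes plus an associative "edges" table: each edge is one row
# in a many-to-many relationship table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE nodes (id INTEGER PRIMARY KEY, label TEXT);
    CREATE TABLE edges (
        src INTEGER REFERENCES nodes(id),
        dst INTEGER REFERENCES nodes(id),
        PRIMARY KEY (src, dst)
    );
""")
conn.executemany("INSERT INTO nodes VALUES (?, ?)",
                 [(1, "a"), (2, "b"), (3, "c"), (4, "d")])
conn.executemany("INSERT INTO edges VALUES (?, ?)",
                 [(1, 2), (2, 3), (3, 4)])

# Graph traversal without leaving SQL: a recursive CTE walks the
# edge table to find everything reachable from node 1.
reachable = conn.execute("""
    WITH RECURSIVE walk(id) AS (
        SELECT 1
        UNION
        SELECT e.dst FROM edges e JOIN walk w ON e.src = w.id
    )
    SELECT id FROM walk ORDER BY id
""").fetchall()
print([r[0] for r in reachable])  # [1, 2, 3, 4]
```

The UNION (rather than UNION ALL) deduplicates visited nodes, so the traversal terminates even on cyclic graphs.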
How many joins did it have (sounds very joinery :-D), and how deep was your graph? Presumably it was doing full table scans for everything. Sounds very interesting anyway.
I have a Postgres database with 25 bn rows on a single host with application level partitioning and a parallel query infrastructure. Surely abuse, but Postgres has handled it beautifully.
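Application-level partitioning of the kind described here could look something like the following sketch. Everything in it is an assumption (the `events` base name, the partition count, the hashing scheme): the application, not Postgres, decides which physical table a row lives in by hashing its key, and a parallel query layer would fan the same query out across all partition tables.

```python
import hashlib

# Hypothetical partition count; real systems pick this to keep
# each partition at a manageable size.
N_PARTITIONS = 64

def partition_table(key: str, base: str = "events") -> str:
    """Route a key to a physical table name, e.g. 'events_p17'."""
    # Stable hash so the same key always lands in the same partition
    # (hash() is randomized per process, so use hashlib instead).
    h = int(hashlib.sha1(key.encode()).hexdigest(), 16)
    return f"{base}_p{h % N_PARTITIONS:02d}"

def fanout_queries(sql_template: str, base: str = "events") -> list[str]:
    """Generate one query per partition for a parallel scatter-gather."""
    return [sql_template.format(table=f"{base}_p{i:02d}")
            for i in range(N_PARTITIONS)]

print(partition_table("user:12345"))
print(len(fanout_queries("SELECT count(*) FROM {table}")))  # 64
```

The queries produced by `fanout_queries` would be run concurrently (one connection or worker per partition) and the partial results merged in the application.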
I built something very similar once. My tip would be to put an HTTP interface in front of it for some cache guarantees (i.e. no arbitrary writes, so you can trivially flush, etc.).
While I was listening to this presentation I thought it's quite weird functionality that intersects a lot with the transaction visibility rules MVCC already has. So I'm curious: what is your use case?
Analysing open data portals. PG holds status data on the retrievability of links. Depicting the evolution of such an open data portal over time is possible with custom datetime columns and window functions; it's just tedious and error-prone.
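The tedious part being described might look like this sketch. The table and column names are invented, and SQLite stands in for Postgres so the snippet runs standalone: a status-history table of link checks, with a `LAG` window function picking out the check at which each URL changed state.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE link_checks (
        url TEXT,
        checked_at TEXT,   -- ISO-8601 datetime
        ok INTEGER         -- 1 = retrievable, 0 = dead
    );
""")
conn.executemany("INSERT INTO link_checks VALUES (?, ?, ?)", [
    ("http://a.example", "2020-01-01", 1),
    ("http://a.example", "2020-02-01", 1),
    ("http://a.example", "2020-03-01", 0),  # link went dead here
    ("http://b.example", "2020-01-01", 1),
])

# For each URL, compare every check with the previous one and keep
# only the rows where the status actually changed.
transitions = conn.execute("""
    SELECT url, checked_at, ok FROM (
        SELECT url, checked_at, ok,
               LAG(ok) OVER (PARTITION BY url ORDER BY checked_at) AS prev_ok
        FROM link_checks
    )
    WHERE prev_ok IS NOT NULL AND ok <> prev_ok
""").fetchall()
print(transitions)  # [('http://a.example', '2020-03-01', 0)]
```

Every query over the history has to carry this `PARTITION BY url ORDER BY checked_at` boilerplate, which is exactly where the tedium and the off-by-one errors creep in.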
Do you know any commercial database with comparable level of documentation and community support?
I worked with a certain other database from a company starting with "O" and ending with "racle", and the support/documentation quality was abysmal. Our support contract was quite basic, but still, I expected it to be better than the free alternatives.
In comparison to what? MySQL? Perhaps. MySQL is a little simpler and quicker to pick up, but one must remember that the original intent of MySQL was to stop devs from using text files for storage...
But in comparison to an enterprise solution like DB2 or Oracle? Not even close...
If you are trying to pull nosql into the argument, that is not even apples to oranges, plus with nosql, you get to retool your entire architecture.
If in doubt, choose Postgres.