Some of us may have been saying that for years, but now the Gartners of the world are picking up on it too. So, the oracles have spoken:
“The application of graph processing and graph DBMSs will grow at 100 percent annually through 2022 to continuously accelerate data preparation and enable more complex and adaptive data science”.
That all sounds great, in theory. In practice, however, things are messy. If you’re out to shop for a graph database, you will soon realize that there are no universally supported standards, performance evaluation is a dark art, and the vendor space seems to be expanding by the minute.
Recently, the W3C initiated an effort to bring the various strands of graph databases closer together, but it’s still a long way from fruition.
So, what’s all the fuss about? What are some of the things graph databases are being used for, what are they good at, and what are they not so good at?
Property graphs and RDF are the two prevalent ways to model the world as a graph. What is each of them good at, specifically? What problems does each have, and how are they being addressed?
RDF* is a proposal that could help bridge graph models across property graphs and RDF. What is it, how does it work, and when will it be available to use in production?
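To give a flavor of the gap RDF* aims to close: in a property graph, an edge can carry its own key-value pairs, whereas a plain RDF triple cannot be annotated directly without workarounds such as reification. RDF* (also written RDF-star) lets a triple itself appear as the subject of another triple. A minimal sketch in Turtle-star syntax, with made-up identifiers (:alice, :bob, :since) purely for illustration:

```turtle
@prefix : <http://example.org/> .

# Plain RDF: a relationship, but nowhere to hang metadata on it
:alice :knows :bob .

# RDF* / Turtle-star: the embedded triple << ... >> becomes a subject,
# so the relationship itself can be annotated, much like an
# edge property in a property graph
<< :alice :knows :bob >> :since "2015" .
```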
What about query languages? In the RDF world, SPARQL rules, but what about property graphs? Can Gremlin be the one graph virtual machine to unite them all?
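To make the contrast concrete, here is the same question, "who does Alice know?", phrased in both worlds. The data and identifiers (ex:alice, the 'knows' predicate/edge label) are invented for illustration: SPARQL declaratively matches a graph pattern, while Gremlin expresses a traversal step by step.

```
# SPARQL (RDF): declare a pattern, let the engine find the matches
PREFIX ex: <http://example.org/>
SELECT ?friend WHERE {
  ex:alice ex:knows ?friend .
}

// Gremlin (property graph): start at a vertex, walk outgoing 'knows' edges
g.V().has('person', 'name', 'alice').out('knows').values('name')
```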
What about the future of graph databases? Could graph turn out to be a way to model data universally?
Moderated by George Anadiotis.
Director of Applied Innovation, London Lab at Refinitiv
VP of product, Cambridge Semantics
Research scientist, Uber