
This project forked from z1514/openrichpedia


Southeast University multi-modal knowledge graph - OpenRichpedia

License: MIT License



Richpedia: A Comprehensive Multi-Modal Knowledge Graph

Introduction

With the rapid development of Semantic Web technologies, various knowledge graphs such as Wikidata and DBpedia are published on the Web using the Resource Description Framework (RDF). Knowledge graphs set RDF links among different entities, forming a large heterogeneous graph that supports semantic search, question answering, and other intelligent services. Meanwhile, publicly available visual resource collections have attracted much attention for Computer Vision (CV) research purposes, including visual question answering, image classification, and object and relationship detection, and we have witnessed promising results from encoding the entity and relation information of textual knowledge graphs for CV tasks. However, most knowledge graph construction work in the Semantic Web and Natural Language Processing (NLP) communities still focuses on organizing and discovering only textual knowledge in a structured representation; relatively little attention has been paid to utilizing visual resources in KG research. A visual database is normally a rich source of image or video data and provides abundant visual information about the entities in KGs. Performing link prediction and entity alignment over this wider scope can enable models to achieve better performance by considering textual and visual features together.

As mentioned above, general knowledge graphs focus on textual facts, and there is still no comprehensive multi-modal knowledge graph dataset that supports exploring textual and visual facts side by side. To fill this gap, we provide a comprehensive multi-modal dataset, called Richpedia, as shown in the figure below.

In summary, our Richpedia data resource mainly makes the following contributions:

  • To the best of our knowledge, we are the first to provide comprehensive visual-relational resources for general knowledge graphs. The result is a large, high-quality multi-modal knowledge graph dataset, which offers a wider data scope to researchers from the Semantic Web and Computer Vision communities.
  • We propose a novel framework to construct the multi-modal knowledge graph. The process starts by collecting entities and images from Wikidata, Wikipedia, and image search engines. Images are then filtered by a diversity retrieval model. Finally, RDF links are set between image entities based on the hyperlinks and descriptions in Wikipedia.
  • We publish Richpedia as an open resource and provide a faceted query endpoint using Apache Jena Fuseki. Researchers can retrieve and leverage data distributed over general KGs and image resources to answer richer visual queries and make multi-relational link predictions.
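The Fuseki endpoint above can be queried with standard SPARQL over HTTP. The following is a minimal sketch; the endpoint URL, the `rp:` namespace, and the `rp:imageOf` property are illustrative placeholders, not the actual Richpedia schema.

```python
# Sketch: query a Fuseki SPARQL endpoint over HTTP (stdlib only).
# The endpoint URL and the rp: property IRI below are illustrative
# placeholders, not the actual Richpedia schema.
import json
import urllib.parse
import urllib.request


def build_image_query(entity_label, limit=10):
    """Build a SPARQL query for images linked to a KG entity (hypothetical schema)."""
    return f"""
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    PREFIX rp: <http://example.org/richpedia/property/>
    SELECT ?image WHERE {{
        ?entity rdfs:label "{entity_label}"@en .
        ?image rp:imageOf ?entity .
    }} LIMIT {limit}
    """


def run_query(endpoint, query):
    """Send the query to a Fuseki endpoint and return the JSON result bindings."""
    params = urllib.parse.urlencode({"query": query})
    req = urllib.request.Request(
        endpoint + "?" + params,
        headers={"Accept": "application/sparql-results+json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["results"]["bindings"]


query = build_image_query("London")
# bindings = run_query("http://example.org/richpedia/sparql", query)  # network call
```

Fuseki accepts such GET requests on its query service and returns SPARQL JSON results when the `Accept` header is set accordingly.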

Download

You can download the images and relation triples from the links below via BaiduYun Drive. Because the image entity folder is relatively large, we split it into two parts (City & Sight, People) for download.

Image

NT Files
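The NT files use the N-Triples format: one `<subject> <predicate> <object> .` statement per line. Below is a minimal sketch of reading such lines with only the standard library; the sample triple and IRIs are made up for illustration, and a full parser such as rdflib should be used in practice (e.g. for literal objects).

```python
# Sketch: read URI-only N-Triples lines with a simple regex (stdlib only).
# The sample triple and IRIs are made up for illustration; real NT files
# may also contain literals, which a full parser such as rdflib handles.
import re

TRIPLE_RE = re.compile(r"^<([^>]*)>\s+<([^>]*)>\s+<([^>]*)>\s+\.$")


def parse_nt_lines(lines):
    """Yield (subject, predicate, object) IRI tuples from URI-only NT lines."""
    for line in lines:
        m = TRIPLE_RE.match(line.strip())
        if m:
            yield m.groups()


sample = [
    "<http://example.org/image/1> <http://example.org/property/imageOf> "
    "<http://example.org/entity/London> .",
]
triples = list(parse_nt_lines(sample))
```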

Friendly Link

Our dataset builds on other resources, which we acknowledge here.

  • Wikidata is becoming an increasingly important knowledge graph in the research community. We collect the KG entities from Wikidata as the set EKG in Richpedia.
  • Wikipedia contains images for the KG entities in Wikidata, as well as a number of hyperlinks among these entities. We collect part of the image entities from Wikipedia, along with relations between the collected KG entities and image entities. We also discover relations between image entities based on the hyperlinks and related descriptions in Wikipedia.
  • Google, Yahoo, and Bing image sources: to obtain sufficient image entities related to each KG entity, we implemented a web crawler that submits KG entities as queries to Google Images, Bing Images, and Yahoo Image Search and parses the results.
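As a rough illustration of the crawler input, one can construct an image-search URL per engine from an entity label. The URL patterns below only approximate the public search pages and may change; they are not taken from the Richpedia crawler, whose parsing logic is not shown here.

```python
# Sketch: build image-search URLs for a KG entity label (stdlib only).
# The URL patterns approximate the public search pages of each engine
# and are illustrative; they are not taken from the Richpedia crawler.
from urllib.parse import quote_plus

SEARCH_TEMPLATES = {
    "google": "https://www.google.com/search?tbm=isch&q={q}",
    "bing": "https://www.bing.com/images/search?q={q}",
    "yahoo": "https://images.search.yahoo.com/search/images?p={q}",
}


def image_search_urls(entity_label):
    """Return one image-search URL per engine for the given entity label."""
    q = quote_plus(entity_label)
    return {engine: tpl.format(q=q) for engine, tpl in SEARCH_TEMPLATES.items()}


urls = image_search_urls("Eiffel Tower")
```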

License

This work is licensed under a Creative Commons Attribution 4.0 International License

Contact

  • Qiushuo Zheng [email protected]
  • Jianxiong Zheng [email protected]
  • Guilin Qi [email protected]
  • Meng Wang [email protected]
Update

  • V2.0: add images and relation triples.

More information

  • Github Pages
  • website
