IMGpedia is a Linked Dataset that incorporates visual information about the images of the Wikimedia Commons dataset: it brings together descriptors of the visual content of 15 million images, 450 million visual-similarity relations between those images, links to image metadata from DBpedia Commons, and links to the DBpedia resources associated with individual images. It enables users to perform visuo-semantic queries over the images (an example query is sketched below the links).
To explore the data, you can follow these links:
- SPARQL Endpoint
- IMGpedia RDF Dumps
- VoID Statistics
- Figshare
- w3id Persistent Identifier
- GitHub Issue Tracker
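As an illustration of a visuo-semantic query, the SPARQL Endpoint above can be queried directly from Python with SPARQLWrapper. This is only a minimal sketch: the endpoint URL, the imo: prefix and the property names are assumptions made for the example, so check the endpoint and vocabulary pages for the actual terms.

from SPARQLWrapper import SPARQLWrapper, JSON

# Hypothetical endpoint URL and vocabulary terms; see the links above for the real ones.
endpoint = SPARQLWrapper("http://imgpedia.dcc.uchile.cl/sparql")
endpoint.setQuery("""
    PREFIX imo: <http://imgpedia.dcc.uchile.cl/ontology#>
    SELECT ?source ?target WHERE {
        ?rel a imo:ImageRelation ;
             imo:sourceImage ?source ;
             imo:targetImage ?target .
    } LIMIT 10
""")
endpoint.setReturnFormat(JSON)
results = endpoint.query().convert()
for row in results["results"]["bindings"]:
    print(row["source"]["value"], "->", row["target"]["value"])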
Reference implementations of the visual descriptors used in the IMGpedia project1, written in Python, Java and C++. These are made publicly available in an effort to bring the image analysis process closer to the Semantic Web community; however, the implementations can be used by anyone under the GNU GPL license.
- Gray Histogram Descriptor: We transform the image from color to grayscale and divide it into a fixed number of blocks. A histogram of 8-bit gray intensities is then calculated for each block, and the concatenation of all histograms is used as the description vector (a Python sketch of this computation appears after this list).
- Histogram of Oriented Gradients: We extract the edges of the grayscale image by computing its gradient (using Sobel kernels), applying a threshold, and computing the orientation of the gradient. Finally, a histogram of the orientations is built and used as the description vector (also sketched after this list).
- Color Layout Descriptor: We divide the image into blocks and compute the mean (YCbCr) color of each block. Afterwards, the Discrete Cosine Transform is computed for each color channel. Finally, the concatenation of the transforms is used as the descriptor vector, with 192 dimensions (also sketched after this list).
- Edge Histogram Descriptor: For each 2 x 2 pixel block, the dominant edge orientation is computed (horizontal, vertical, either diagonal, or none), and the descriptor is a histogram of these orientations. This implementation is no longer used in the IMGpedia project.
- DeCAF7: Uses a Caffe neural network pre-trained with the ImageNet dataset. To obtain the vector, each image is resized and ten overlapping patches are extracted; each patch is given as input to the neural network and the activations of the seventh layer of the model are extracted as a descriptor, so the final vector is the average over all the patches.
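To make the descriptions above concrete, here is a minimal Python/OpenCV sketch of the block-wise gray histogram idea. The function name and default parameters are chosen for illustration only; the repository's implementation may differ in details.

import cv2
import numpy as np

def gray_histogram(img, rows=2, cols=2, bins=32):
    # Convert to grayscale, split into rows x cols blocks, and
    # concatenate one intensity histogram per block.
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    h, w = gray.shape
    histograms = []
    for i in range(rows):
        for j in range(cols):
            block = gray[i * h // rows:(i + 1) * h // rows,
                         j * w // cols:(j + 1) * w // cols]
            hist, _ = np.histogram(block, bins=bins, range=(0, 256))
            histograms.append(hist)
    return np.concatenate(histograms).astype(np.float64)  # rows * cols * bins dimensions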
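Similarly, a rough sketch of the histogram of oriented gradients described above; the number of bins and the gradient-magnitude threshold are illustrative assumptions, not the values used by the project.

import cv2
import numpy as np

def orientation_histogram(img, bins=8, threshold=50.0):
    # Sobel gradients on the grayscale image, keep strong edges,
    # and build a histogram of their orientations.
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0)  # horizontal derivative
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1)  # vertical derivative
    magnitude = np.sqrt(gx ** 2 + gy ** 2)
    orientation = np.arctan2(gy, gx)        # angles in [-pi, pi]
    strong = magnitude > threshold          # threshold on edge strength
    hist, _ = np.histogram(orientation[strong], bins=bins, range=(-np.pi, np.pi))
    return hist.astype(np.float64)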
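And a sketch of the color layout computation, where an 8 x 8 grid gives the 192 dimensions mentioned above (3 channels x 64 coefficients); the exact coefficient ordering and any quantization are assumptions here.

import cv2
import numpy as np

def color_layout(img, grid=8):
    # OpenCV's YCrCb conversion (same channels as YCbCr, different order).
    ycc = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb).astype(np.float32)
    # Resizing with INTER_AREA approximates the mean color of each grid block.
    blocks = cv2.resize(ycc, (grid, grid), interpolation=cv2.INTER_AREA)
    # One 2-D DCT per channel, then concatenate the coefficients.
    coeffs = [cv2.dct(channel) for channel in cv2.split(blocks)]
    return np.concatenate([c.flatten() for c in coeffs])  # 3 * grid * grid = 192 dimensions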
In order to get the code to work, a few dependencies are needed:
- OpenCV: This is the main dependency for all descriptors. Our code works with version 2.4.11 or later. For installation instructions, please refer to the official documentation for OpenCV on Linux or Windows, or install just the Python bindings if you like.
- Caffe: This is only needed for the neural network used to compute DeCAF7, so if you will not use that descriptor, there is no need to install Caffe. Otherwise, you can find installation instructions here.
And that's it: once you've installed OpenCV and Caffe, all the algorithms should run in your favourite language.
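A quick way to check that the Python bindings are in place (a minimal sanity check; the Caffe import only matters if you plan to compute DeCAF7):

import cv2
print(cv2.__version__)  # should report 2.4.11 or later

# Only needed for the DeCAF7 descriptor:
# import caffe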
Both the Python and Java implementations are classes that inherit from the superclass DescriptorComputer, which defines the abstract method compute; each descriptor implements compute according to its own algorithm. So, to compute the descriptor vector of an image, you would do something like the following (in Python; the Java syntax can be inferred):
import cv2
computer = GrayHistogramComputer(2, 2, 32)  # provided by this repository: 2 x 2 blocks, 32 bins per block
img = cv2.imread("image.jpg")
descriptor = computer.compute(img)  # so descriptor is a vector of 2 x 2 x 32 = 128 dimensions
The C++ implementation consists only of functions that can be included and used directly, with no object orientation.
Finally, if you have any doubts about the process, send me an e-mail at sferrada [at] dcc [dot] uchile [dot] cl or open up an Issue.
1 Read more about the IMGpedia project here. If you want to visit our SPARQL Endpoint and try some queries, visit this link; the available vocabulary can be found here.