LewisNet-2

The successor to LewisNet-1: a convolutional neural network for recognizing and locating organic chemical Lewis structures and skeletal formulas in chemistry papers, built with Google's TensorFlow.

[Example molecule images: molecule-1, molecule-2, molecule-9, molecule-19]

This is a convolutional implementation of the "sliding window" approach described in this paper, which makes it very computationally efficient. For this implementation, AlexNet was modified to have only convolutional layers.
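To illustrate the idea, here is a minimal sketch of a fully convolutional classifier (the layer sizes are illustrative, not this repository's exact architecture). Because there are no dense layers, the same weights that classify a single 150x150 tile can be run over an entire page in one pass, producing a grid of scores, one per window:

```python
import tensorflow as tf

# Sketch of a fully convolutional AlexNet-style classifier.
# Layer sizes are illustrative, not the repository's exact ones.
def fully_conv_classifier(num_classes=2):
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(96, 11, strides=4, activation="relu",
                               input_shape=(None, None, 1)),
        tf.keras.layers.MaxPool2D(3, strides=2),
        tf.keras.layers.Conv2D(256, 5, padding="same", activation="relu"),
        tf.keras.layers.MaxPool2D(3, strides=2),
        tf.keras.layers.Conv2D(384, 3, padding="same", activation="relu"),
        # The "dense" layers become convolutions: an 8x8 kernel covers the
        # remaining spatial extent of a 150x150 input, then a 1x1 conv
        # produces per-location class scores.
        tf.keras.layers.Conv2D(512, 8, activation="relu"),
        tf.keras.layers.Conv2D(num_classes, 1),
    ])

model = fully_conv_classifier()
print(model(tf.zeros([1, 150, 150, 1])).shape)  # (1, 1, 1, 2)  -- one tile, one score
print(model(tf.zeros([1, 600, 800, 1])).shape)  # (1, 29, 41, 2) -- a whole page, a score map
```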

[Demo image: paper-demo]

Why Build This

There are hundreds of thousands of academic chemistry papers. Services such as Reaxys and SciFinder index these papers so that you can search by chemical structure or reaction. I would like to automate the process of identifying chemical structures in these papers, to make such indexes easier to maintain.

This involves two steps:

  1. Object detection - find the location of chemical structures within papers
  2. Identification - not just classification, but determining the actual chemical formula

This project is the first step. It can locate Lewis structures in chemistry papers.

Data Gathering

Some of the positives were gathered from the ChemSpider API: I downloaded the first 10,000 images (by ascending integer ID). Some of the negatives, about 9,000 images, were taken from papers without chemical structures in them; the papers were converted to PNGs and cropped into 150x150 tiles. In addition, several thousand images per class were downloaded from Google Images and labelled manually with a Chrome extension called Label Gun.
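As a rough illustration of the page-to-tile step (file layout and names here are hypothetical, not the actual preprocessing script), each page PNG can be sliced into non-overlapping 150x150 tiles with Pillow:

```python
from pathlib import Path
from PIL import Image

TILE = 150  # tile size used for the crops

# Rough sketch of the page-cropping step; paths and naming are hypothetical.
def tile_page(png_path, out_dir):
    out_dir = Path(out_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    page = Image.open(png_path).convert("L")  # grayscale
    width, height = page.size
    count = 0
    for top in range(0, height - TILE + 1, TILE):
        for left in range(0, width - TILE + 1, TILE):
            tile = page.crop((left, top, left + TILE, top + TILE))
            tile.save(out_dir / f"{Path(png_path).stem}_{top}_{left}.png")
            count += 1
    return count
```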

Training

The network was trained on a Lenovo U31-70 with an NVIDIA GeForce 920M GPU, running Ubuntu 16.04.3.

Failure Cases

Among the training data, the most common false positives come from drawings of "stick figure" people and from physics diagrams such as circuits or Feynman diagrams. False negatives tend to occur when the molecule in the image is too zoomed out or too small.

Hidden Layer Visualizations

I'm using a technique from Donahue et al., 2013: finding the crops of the input image space that maximize activations in the higher layers. This gives an idea of what each filter is "looking for". In this case, the top 9 patches are shown for each filter. Negative images were excluded from this visualization.
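A rough sketch of that search is below (the layer name, effective stride, and receptive-field size are placeholders you would supply, not the exact values used here): run the positives through the network, record where each filter fires hardest, and crop the corresponding region of each input.

```python
import numpy as np
import tensorflow as tf

# Rough sketch of the top-patch search. layer_name, stride, and rf_size are
# placeholders, not the values used in this repository.
# images: array of shape (N, H, W, 1); the returned crops are 2-D.
def top_patches(model, layer_name, images, rf_size, stride, k=9):
    probe = tf.keras.Model(model.input, model.get_layer(layer_name).output)
    acts = probe.predict(images, batch_size=32)         # (N, H', W', F)
    n, h, w, f = acts.shape
    flat = acts.reshape(n, h * w, f)
    patches = {}
    for filt in range(f):
        best_pos = flat[:, :, filt].argmax(axis=1)       # best location per image
        best_val = flat[np.arange(n), best_pos, filt]    # its activation value
        top_imgs = best_val.argsort()[::-1][:k]          # k images with the highest peaks
        crops = []
        for i in top_imgs:
            y, x = divmod(int(best_pos[i]), w)
            top, left = y * stride, x * stride           # approximate receptive-field origin
            crops.append(images[i, top:top + rf_size, left:left + rf_size, 0])
        patches[filt] = crops
    return patches
```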

Hidden layer 1

The first conv layer has 96 filters. Here are the 864 images (9 * 96) that maximize the activations in the first layer. Each box in this grid is for one filter. Within the box are the 9 patches (crops) from the input images that produce the largest observed activations for that filter.

This layer appears to be detecting simple things like curves, edges, vertices, and individual letters/numbers.

[layer1 visualization]
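For reference, a grid like the one above can be assembled along these lines (a rough sketch: the patch and padding sizes are arbitrary, and the crops are assumed to be 2-D uint8 grayscale arrays):

```python
import numpy as np
from PIL import Image

# Sketch of laying out the per-filter boxes; sizes here are arbitrary.
# patches_per_filter: {filter_index: [up to 9 2-D uint8 crops]}
def montage(patches_per_filter, patch_size=19, pad=2, cols=12):
    side = 3 * patch_size
    boxes = []
    for crops in patches_per_filter.values():
        box = np.zeros((side, side), dtype=np.uint8)
        for idx, crop in enumerate(crops[:9]):           # 3x3 box of top-9 patches
            r, c = divmod(idx, 3)
            resized = Image.fromarray(crop).resize((patch_size, patch_size))
            box[r * patch_size:(r + 1) * patch_size,
                c * patch_size:(c + 1) * patch_size] = np.array(resized)
        boxes.append(box)
    rows = int(np.ceil(len(boxes) / cols))
    canvas = np.full((rows * (side + pad), cols * (side + pad)), 255, dtype=np.uint8)
    for i, box in enumerate(boxes):                      # one box per filter
        r, c = divmod(i, cols)
        canvas[r * (side + pad):r * (side + pad) + side,
               c * (side + pad):c * (side + pad) + side] = box
    return Image.fromarray(canvas)
```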

Hidden layer 2

This layer is detecting more complex things like parts of rings, methyl groups, double bonds, quaternary carbons, amines, carbonyls, etc. Notice that quite a few of the units are completely blank, meaning there was no activation whatsoever for any of the positive images. These units are probably stimulated by salient features in the negatives.

[layer2 visualization]

Hidden layer 3

This layer detects even more complexity, like entire rings or polycycles, allyl groups, and other functional groups.

[layer3 visualization]
