Reverse image search [NOT URGENT] #13
@npscience @xolotl what a great thread! I have been thinking about similar problems and tinkering with some databases, namely PMC and Open Knowledge Maps (@npscience, can you introduce me to the latter?). Our major motivation (to answer your usefulness question) was to "autopopulate" and then ask ReFigure users to confirm the connections.

I use Google image search a lot, and it is not clear why it lumps a certain set of images under the same search. Google does have a reverse image search, but I have not used it and am not sure how it works.

Our search algorithm is very basic; of course, even once we get to having 10,000 images it would still be useful. The most applicable current technology appears to be identifying images based on the text surrounding them. On PMC ARTICLE (not image) search, the "related articles" do seem to be closely related. Similarly, the articles that land in one of the bubbles of Open Knowledge Maps are usually closely related.

@xolotl does image annotation work for hypothes.is now?
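The text-based approach mentioned above (relating figures via the text surrounding them, in the spirit of PMC's "related articles") can be sketched as TF-IDF cosine similarity over figure captions. This is purely illustrative, not ReFigure's or PMC's actual algorithm; the example captions are made up:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """TF-IDF vectors for a list of documents (e.g. figure captions).

    Uses a smoothed inverse document frequency so terms appearing in
    every document still get a small nonzero weight.
    """
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter()
    for toks in tokenized:
        df.update(set(toks))  # document frequency: one count per doc
    n = len(docs)
    vecs = []
    for toks in tokenized:
        tf = Counter(toks)
        vecs.append({
            term: (count / len(toks)) * (math.log((1 + n) / (1 + df[term])) + 1.0)
            for term, count in tf.items()
        })
    return vecs

def cosine(a, b):
    """Cosine similarity between two sparse vectors stored as dicts."""
    dot = sum(w * b[t] for t, w in a.items() if t in b)
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0
```

With captions like "tumor growth in mice treated with drug A" and "drug A slows tumor growth in mouse models", the pair scores much higher than either does against an unrelated caption, which is the signal an "autopopulate then confirm" flow would rank on.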
Sorry, no image annotation in Hypothesis yet. We've been focused on making annotation available in more places/formats, but definitely plan to support image annotation at some point.
The interesting thing now is to use the annotation layer as a channel to
deliver extra info. What I imagine is an annotation layer that could
automatically show readers other uses of the same image in other
articles/documents.
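One way to picture that annotation layer: an index mapping an image fingerprint to the documents it is known to appear in, consulted whenever a reader views a page. A minimal sketch, where the index structure and function names are assumptions for illustration, not part of Hypothesis's API:

```python
from collections import defaultdict

# Hypothetical in-memory index: image fingerprint -> set of document URLs.
# In practice the fingerprint might be a perceptual hash or checksum.
_image_index = defaultdict(set)

def record_use(fingerprint, url):
    """Record that the image with this fingerprint appears at url."""
    _image_index[fingerprint].add(url)

def other_uses(fingerprint, url):
    """Return every *other* known document that uses the same image."""
    return _image_index[fingerprint] - {url}
```

An annotation client could call `other_uses` for each figure on the current page and surface the results as an automatic annotation.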
On Sat, Sep 2, 2017 at 11:54 AM Nate Angell wrote:

> Sorry, no image annotation in Hypothesis yet. We've been focused on making annotation available in more places/formats, but definitely plan to support image annotation at some point.
If the hurdle is technological and has to do with identifying or annotating an image, our image capture actually works across all websites. Let us know if we can provide support.
> The interesting thing now is to use the annotation layer as a channel to deliver extra info. What I imagine is an annotation layer that could automatically show readers other uses of the same image in other articles/documents.
That is a fascinating idea.
Nate (@xolotl) from hypothes.is had the idea of finding similar images using reverse image search, the process where you input an image as your search term and Google returns similar images (or other images from that webpage, etc.).
For ReFigure, this could mean running the search from the figure you are looking at, finding other images/figures on the web that are similar, and adding relevant ones to the ReFigure. So it's about discovering related material before it has been curated/linked by a scientist.
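A common building block for this kind of similarity search is a perceptual hash: shrink the image to a tiny grid, set one bit per cell depending on whether it is brighter than the mean, and compare hashes by Hamming distance (fewer differing bits = more similar). A minimal pure-Python sketch of the "average hash" technique, operating on a 2-D grayscale pixel grid; this illustrates the general idea, not how Google's reverse image search actually works:

```python
def average_hash(pixels, hash_size=8):
    """Perceptual 'average hash' of a 2-D grayscale pixel grid (0-255).

    Shrinks the grid to hash_size x hash_size by block averaging, then
    emits one bit per cell: 1 if the cell is above the overall mean.
    Visually similar images yield hashes with small Hamming distance.
    """
    h, w = len(pixels), len(pixels[0])
    cells = []
    for r in range(hash_size):
        for c in range(hash_size):
            # average the block of source pixels that maps to this cell
            r0, r1 = r * h // hash_size, (r + 1) * h // hash_size
            c0, c1 = c * w // hash_size, (c + 1) * w // hash_size
            block = [pixels[i][j] for i in range(r0, r1) for j in range(c0, c1)]
            cells.append(sum(block) / len(block))
    mean = sum(cells) / len(cells)
    bits = 0
    for v in cells:
        bits = (bits << 1) | (1 if v > mean else 0)
    return bits

def hamming(h1, h2):
    """Number of differing bits between two hashes (lower = more similar)."""
    return bin(h1 ^ h2).count("1")
```

A figure and a slightly cropped or rescaled copy hash to nearly the same bits, while an unrelated figure lands far away, so near-duplicates can be found by indexing hashes and querying within a small Hamming radius.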
This brings up several Qs:
Leaving this thought here for future reference, thanks to Nate for bringing it up!