Open Distributed Scientific Annotations Cloud
Each reader of a scientific paper can publish their annotations to a distributed public annotations cloud; others can load them as they read, and discuss.
So, let's say you are reading a paper and ideas and annotations come up as you read. You click (or point) at the location where you want to add an annotation. The system captures the context of that location in the paper: the reader extracts a large enough window of surrounding words or sentences to uniquely identify the location, which allows the same annotation to be displayed around the same text later in other formats, be it HTML on the web or anything else. If the location is a picture, the picture's features and the pixel location are extracted instead, allowing the same annotation to be displayed on top of the same image in other formats. Essentially, we would have context IDs and coordinates, with each context associated with a feature set: a 1:1 correspondence between context IDs and feature sets, and a 1:many correspondence between context IDs and annotations.
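To make this concrete, here is a minimal sketch of what such a text anchor and context ID could look like. The names (ContextAnchor, make_context_id, anchor_at) and the 200-character window are illustrative assumptions, not an existing reader API:

```python
import hashlib
from dataclasses import dataclass

@dataclass
class ContextAnchor:
    context_id: str   # stable ID derived from the surrounding text
    prefix: str       # words just before the annotated span
    exact: str        # the annotated span itself
    suffix: str       # words just after the annotated span

def make_context_id(prefix: str, exact: str, suffix: str) -> str:
    """Derive a stable context ID by hashing the normalized surrounding text."""
    normalized = " ".join(f"{prefix} {exact} {suffix}".split()).lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

def anchor_at(full_text: str, start: int, end: int, window: int = 200) -> ContextAnchor:
    """Capture enough surrounding text to re-locate the same span in other formats."""
    prefix = full_text[max(0, start - window):start]
    exact = full_text[start:end]
    suffix = full_text[end:end + window]
    return ContextAnchor(make_context_id(prefix, exact, suffix), prefix, exact, suffix)
```

Because the ID is derived only from the normalized text around the span, any reader that can extract the same text (from PDF, HTML, or another format) would arrive at the same context ID and fetch the same annotations.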
Then, whoever reads the paper, in whatever reader, could load the public annotations and browse their history. It would be nice to have a conversation per annotation: each annotation opens the possibility of a thread of comments, and inside the comments you could refer to other annotations.
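A hedged illustration of the data model this implies, with made-up field names, where one annotation carries its own thread of comments and a comment may reference another annotation by its context ID:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Comment:
    author: str
    body: str
    refers_to: Optional[str] = None   # context_id of another annotation, if referenced

@dataclass
class Annotation:
    context_id: str                   # anchor to a location in the paper (see sketch above)
    paper_id: str                     # identifies the paper as a whole
    author: str
    body: str
    thread: List[Comment] = field(default_factory=list)  # one conversation per annotation
```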
Moreover, each paper would have its paper ID generated from features extracted from the paper's text, especially the title and summary, or, if a DOI exists, just use the DOI. It seems good to make such a system as widely usable as possible, not just for scientific papers but for any PDFs in general.
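One possible way to derive the paper ID, assuming we prefer the DOI when it exists and otherwise hash the normalized title and summary (the function name and normalization scheme are just one option):

```python
import hashlib
from typing import Optional

def make_paper_id(title: str, summary: str, doi: Optional[str] = None) -> str:
    """Prefer the DOI when present; otherwise hash the normalized title + summary."""
    if doi:
        return "doi:" + doi.strip().lower()
    normalized = " ".join(f"{title} {summary}".split()).lower()
    return "sha256:" + hashlib.sha256(normalized.encode("utf-8")).hexdigest()
```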
Hopefully, this would make reading papers not a lonely activity at all, and the cross-pollination of ideas would lead to many new developments.
A hypothetical electromechanical device enabling individuals to develop and read a large self-contained research library, create and follow associative trails of links and personal annotations, and recall these trails at any time to share them with other researchers; it would closely mimic the associative processes of the human mind.
Maybe this idea could be extended to any content published on the web?
Is it the location that is important, or the actual text snippet being annotated? Maybe you can somehow hash the text snippet, then process any document (in essentially any format, just by extracting the text) and produce the set of hashes of every snippet that could ever be annotated in it (using some clever form of rolling hash or so). Then use that to fetch all the annotations ever created for it via some form of content-addressing system.
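One way this could look, with a plain per-window SHA-256 standing in for the cleverer rolling hash, and a simple dict standing in for the content-addressed store (the window size and names are assumptions for illustration):

```python
import hashlib
from typing import Dict, List

def snippet_hashes(text: str, window_words: int = 12) -> List[str]:
    """Hash every fixed-size word window: keys for all snippets that could be annotated."""
    words = text.split()
    if not words:
        return []
    hashes = []
    for i in range(max(1, len(words) - window_words + 1)):
        window = " ".join(words[i:i + window_words]).lower()
        hashes.append(hashlib.sha256(window.encode("utf-8")).hexdigest())
    return hashes

def fetch_annotations(text: str, store: Dict[str, List[dict]]) -> List[dict]:
    """Look up each window hash in a content-addressed store of annotations."""
    found: List[dict] = []
    for h in snippet_hashes(text):
        found.extend(store.get(h, []))
    return found
```

Since the keys depend only on the extracted text, any format that yields the same words would resolve to the same annotations, regardless of where the document was obtained.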
Your idea reminds me a bit of Xanadu.