This movie is a great resource for us. If you have watched it, please share your thoughts. It offers so many threads for business models and so many leads for brave partnerships. The film's main message about paywalls is that publishers charge enormous subscription fees that most people cannot afford, blocking access to the literature and creating an unfair advantage, even though scientific scholarship is typically generated with public funding.
MaterialsZone's greatest innovation is that our platform can take scientific publishing yet another step further: it is not just the bullying journals and their paywalls that block the fast track to innovation, it is also the obfuscation of digital data and the inherent burden that peer review places on authors.
The MZ platform, with its unique recommendation engine, can produce automated, statistically reviewed results on novel fingerprints and novel figures of merit. In such a system, the peer review mechanism would call on referees only to resolve disputes, rather than holding innovation hostage until it withers.
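To make the idea concrete, here is a minimal sketch of what "statistically reviewed results" could mean in practice: a newly reported figure of merit is compared against previously deposited values, and a human referee is called in only when the new value is a statistical outlier. This is my own illustration, not MZ's actual engine; the function name, data, and threshold are all hypothetical.

```python
from statistics import mean, stdev

def needs_referee(new_value: float, prior_values: list[float],
                  z_threshold: float = 3.0) -> bool:
    """Return True if the new figure of merit deviates enough from prior
    reports that a human referee should resolve the dispute."""
    if len(prior_values) < 3:
        return True  # too little prior data to review automatically
    mu, sigma = mean(prior_values), stdev(prior_values)
    if sigma == 0:
        return new_value != mu
    return abs(new_value - mu) / sigma > z_threshold

# Hypothetical example: a solar-cell efficiency checked against prior reports
prior = [18.2, 19.1, 18.7, 19.4, 18.9]
print(needs_referee(19.8, prior))   # False -> accepted automatically
print(needs_referee(31.0, prior))   # True  -> flagged for referees
```

In a sketch like this, referees only ever see the flagged cases, which is the point of the paragraph above.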
The Double Dip – pay to keep open, pay again to access
The first time I had a paper accepted after peer review, I read the copyright notice the American Chemical Society asked me to sign. It said that after signing I would have no commercial rights in the article; the system was kind enough to grant me a quota of invitations I could send to colleagues for one-time access. Later in my career, the EU required that work published under an EU grant be open access. The publishers had a special solution for this: you can salvage your work in advance by paying roughly 3,000 euro to open it to the public. I also remember nearly arguing with a co-author about spending the money for this purpose; eventually he agreed. I am embarrassed to say that today, now that I am outside the university domain, when people ask me to send a copy of these "open access" works of mine, the link simply doesn't work.
Pure Obfuscation – momentum preserved by the system
The absurdity is that, at least in the exact sciences, in order to build a literature review that supports one's ongoing research, one needs to open many publications, dig numeric data out of text fields, and often "reverse engineer" graph data from figures back into its original digital form, just to assemble a set of fingerprints to calibrate against, or a dataset to analyse from a different perspective. In other words: researcher A takes raw data, processes it, and selects only the data that serves the title of the work; then A plants that data in text fields under inconsistent names. Furthermore, A smooths and cherry-picks the data and converts it into images that are not machine readable at all, since there is no standard for them. A rigorous peer review process then convinces A to shape the work even further so it is more concise for the reviewer to read, as if readers will actually read the millions of papers out there. Finally, researcher B has to undo all of researcher A's work, from a PDF or a piece of paper.
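Here is a toy illustration of that asymmetry: what researcher A actually measured versus what researcher B can recover from the paper. All field names and values are hypothetical; the point is only that a standard, machine-readable record would make the "reverse engineering" step unnecessary.

```python
import json

# What A could deposit alongside the paper: raw, named, unit-annotated data.
record = {
    "sample_id": "A-042",
    "measurement": "absorption_spectrum",
    "wavelength_nm": [400, 450, 500, 550, 600],
    "absorbance": [0.12, 0.34, 0.51, 0.47, 0.30],
    "processing": "none (raw instrument output)",
}
print(json.dumps(record, indent=2))

# What B typically gets instead: a smoothed curve rendered as a bitmap inside
# a PDF, from which each point has to be re-digitised by eye or by a
# plot-scraping tool, with the original units and sample metadata lost.
```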
The failures of NLP and text mining
Then there is the idea of machines that collect data from papers and convert it back into digital form. In my recent activity within several data-savvy societies, I was surprised to learn that even if one built a Natural Language Processing (NLP) engine that screens many PDFs (assuming access to many were easy) and extracts the data back to its original form, or built a PDF add-on that tags data with human assistance, that data would still be copyrighted to the publishers and could not be published as a database of digital data.
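For a sense of how fragile that extraction step is even before the copyright question arises, here is a minimal sketch of the kind of text mining involved: pulling numeric values with units back out of sentences. The pattern and the sentences are hypothetical; real papers vary wildly in phrasing.

```python
import re

sentences = [
    "The band gap was measured as 1.34 eV at room temperature.",
    "We obtained a power conversion efficiency of 18.7 %.",
    "Band gaps ranged between 1.1 and 1.4 eV depending on composition.",
]

# Naive pattern: a number followed by a recognised unit
pattern = re.compile(r"(\d+(?:\.\d+)?)\s*(eV|%)")

for s in sentences:
    print(pattern.findall(s))
# [('1.34', 'eV')]
# [('18.7', '%')]
# [('1.4', 'eV')]   <- the lower bound of the range is silently lost
```

Even when such an engine works, the extracted values still originate from the publishers' copyrighted text, which is the legal wall described above.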
AI/ML not an option
True AI/ML collaboration is a huge challenge for science, for reasons of trust and provenance (proper crediting), interoperability (the ability to exchange data between different software), and an endless array of other dimensions. This is the defining capability of our era and the chance to solve many problems and build the technologies of the future, yet the current publishing method is nowhere near enabling that future; it lags behind by at least two decades.
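As a final sketch, and purely as my own illustration rather than any existing standard, this is roughly what "provenance" and "interoperability" mean in practice: every measurement travels with who produced it, how, and from which raw source, in a plain format any software can parse. All field names are hypothetical, and the DOI is a placeholder.

```python
import json

measurement = {
    "quantity": "band_gap",
    "value": 1.34,
    "unit": "eV",
    "provenance": {
        "author": "researcher_A",
        "instrument": "UV-Vis spectrometer",
        "raw_data_uri": "doi:10.xxxx/raw-dataset",  # placeholder, not a real DOI
        "recorded_on": "2021-03-15",
        "processing_steps": ["baseline correction", "Tauc fit"],
    },
}

# Serialised as plain JSON, the record can be exchanged between any two tools
# without reverse-engineering a figure or e-mailing the author for the data.
print(json.dumps(measurement, indent=2))
```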