Strange Posted February 18, 2017 Just came across this while catching up with various blogs: http://ottawacitizen.com/storyline/worlds-main-list-of-science-predators-vanishes-with-no-warning Beall's list of predatory publishers has been taken off-line.
John Cuthber Posted February 18, 2017 Can you find it on the Wayback Machine?
StringJunky Posted February 18, 2017 Isn't there a way to have a group of suitably knowledgeable people create an online list of vetted and approved journals that have to meet certain criteria?
zztop Posted February 19, 2017 This is extremely disappointing considering that I contributed nearly 100 items to his list.
fiveworlds Posted February 19, 2017 Isn't there a way to have a group of suitably knowledgeable people create an online list of vetted and approved journals that have to meet certain criteria? If you pay to have it hosted I don't mind coding it. Or you could just start a thread here.
Strange Posted February 19, 2017 That is only one part of his website. And it will be interesting to see how soon the people who threatened Beall turn up here demanding that it be removed...
Strange Posted June 8, 2017 Someone has put a snapshot of the archive online: http://beallslist.weebly.com (Thanks to koti for that link.)
KipIngram Posted June 8, 2017 I'd guess that the strength of legal threats would vary from nation to nation, depending on libel laws.
Sensei Posted June 9, 2017 If you pay to have it hosted I don't mind coding it. Or you could just start a thread here. You could make C/C++ Firefox and Chrome plug-ins which work in parallel with the browser, checking whether the user is visiting a website from the list (downloaded from a website once per week/month) and informing him/her that the knowledge presented on it is probably not of high quality. People using these websites, especially in the third world, probably have no idea what "predatory publisher" means, and are using them as a knowledge database. With plugins it would be automatic. But they would have to be advertised as filters for students and teachers.
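A browser extension along these lines needs to declare permission to observe which sites the user visits. This is a minimal WebExtension manifest sketch of that idea; the name, description, and filenames here are illustrative, not taken from the actual addon in this thread:

```json
{
  "manifest_version": 2,
  "name": "Predatory Publisher Filter",
  "version": "1.0",
  "description": "Warns when a visited site appears on a locally cached list of predatory publishers.",
  "permissions": ["webRequest", "storage", "<all_urls>"],
  "background": {
    "scripts": ["background.js"]
  }
}
```

The `webRequest` and `<all_urls>` permissions let a background script see each navigation and compare the hostname against the cached list; `storage` lets the extension keep the weekly/monthly downloaded copy locally.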
fiveworlds Posted June 9, 2017 Posted June 9, 2017 (edited) You can make C/C++ Firefox and Chrome plug-ins, which will be working parallel to browsers, and checking whether user is visiting website from the list (downloaded from website one time per week/month), and informing him/her about probably not so high quality knowledge presented on it Firefox and Chrome plug-ins are mostly javascript. Anyway I have the plugins for firefox, chrome and micosoft edge. bealls_list-1.0-an+fx-windows.zip beallslist.zip Edited June 10, 2017 by fiveworlds 1
Sensei Posted June 10, 2017 Firefox and Chrome plug-ins are mostly JavaScript. Anyway, I have the plugins for Firefox, Chrome and Microsoft Edge. Sort the array of URLs and use a proper binary search instead of a brute-force algorithm. It will require just 11-12 string comparisons instead of 2400.
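Sensei's comparison count checks out: binary search over a sorted list of n entries needs at most about ⌈log₂ n⌉ halving steps, and for a list of roughly 2400 entries that is 12. A quick check in JavaScript (the language the plugins are written in); the figure 2400 is taken from the post above:

```javascript
// Worst-case number of halving steps for binary search over n sorted entries.
const n = 2400;                        // approximate size of the list at the time
const worstCase = Math.ceil(Math.log2(n));
console.log(worstCase);                // 12, versus up to 2400 comparisons for a linear scan
```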
fiveworlds Posted June 10, 2017 Posted June 10, 2017 (edited) Sort array of urls and use proper binary-search instead of brute-force algorithm. It will require just 11-12 string comparisons, instead of 2400.. Yeah that's a great idea. Okay so that is updated to use binary-search. beallslist.zip Edited June 10, 2017 by fiveworlds
pavelcherepan Posted June 11, 2017 Just came across this while catching up with various blogs: http://ottawacitizen.com/storyline/worlds-main-list-of-science-predators-vanishes-with-no-warning Beall's list of predatory publishers has been taken off-line. People who organise these sorts of takedowns seem to completely forget about the Streisand effect.
Sensei Posted June 11, 2017 Yeah, that's a great idea. Okay, the addon is updated to use binary search. Now you're making a duplicate of the entire list on every iteration (that's always very slow) and using recursion... The binary-search algorithm needs neither. Implement binary search as a separate function (split the code into logical blocks) which simply returns -1, or the index at which the item we're searching for is present. Then, at the upper level in the main code, check if( result != -1 ), which means something was found, and run the code that handles it.
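A minimal sketch of what Sensei describes: an iterative binary search as a separate function, no recursion and no copying of the list, with the -1 check done by the caller. The hostnames in the sample array are hypothetical placeholders, not entries from the actual list:

```javascript
// Binary search over a sorted array of hostname strings.
// Returns the index of `target` if present, otherwise -1.
// Iterative, so no recursion and no per-step copies of the array.
function binarySearch(sortedUrls, target) {
  let lo = 0;
  let hi = sortedUrls.length - 1;
  while (lo <= hi) {
    const mid = (lo + hi) >> 1;              // integer midpoint
    if (sortedUrls[mid] === target) return mid;
    if (sortedUrls[mid] < target) lo = mid + 1;
    else hi = mid - 1;
  }
  return -1;
}

// Upper-level code: act only when the visited host is on the list.
const list = ["a-journal.example", "b-press.example", "c-pub.example"]; // hypothetical entries
const result = binarySearch(list, "b-press.example");
if (result !== -1) {
  console.log("Visited site is on the list at index " + result);
}
```

Keeping the search in its own function also makes it trivial to unit-test against known present and absent hostnames.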
fiveworlds Posted June 11, 2017 Posted June 11, 2017 (edited) Implement binary-search as separate function It is even faster to not use a function for it at all. It only saves like 0.2 ms on my computer though. Anyway the addon passed review by mozilla https://addons.mozilla.org/en-US/firefox/addon/beallslist/ chrome webstore isn't accepting new addons at the moment for anything other than chrome OS. beallslist.zip Edited June 11, 2017 by fiveworlds
CharonY Posted June 14, 2017 People who organise these sorts of takedowns seem to completely forget about the Streisand effect. It is a bit different, though: smaller predatory publishers especially can simply create new umbrella organizations and push out new journals. Most are not really in the business of making a name for themselves. Thus, even if the old list were kept online, it would become outdated rather quickly. I get essentially daily requests to submit to journals I have never heard of, most of which are likely predatory.
Paul2reach Posted May 24, 2019 The use of, and debate about, Beall's list is still alive and kicking. See a nice overview with enough nuance: https://en.wikipedia.org/wiki/Beall's_List In my experience, Mr. Beall's work raised awareness of this threat, and though certainly not flawless, it can be used to make up your own mind about whether a particular journal or publisher is trustworthy enough to send your work to.