# Describing the progress of projects – a drama in 5 acts

Some time ago, I was looking for a nice, vivid way to describe and present the progress of our projects (for a meeting with VIPIs – very important PIs). Usually, when people have to do this, they present some numbers – percentages – to show the progress. I don’t like this, for the following reasons:

• Percentages are often given without greater context. Ask yourself: what is actually the measure behind them? Are we talking about the (relative) amount of time, work volume, …?
• Percentages are often thrown at you without a reference/base (i.e. the total time, work volume, …). It is actually almost impossible to give this information – unless you have already completed the project. Then, and only then, would you KNOW how much time or work a project has cost. This also means that percentages are only right at the two extremes: 0% (not started) and 100% (completed). Everything in between is pure assumption.
• Individual tasks that have to be done to make progress are hard to compare and judge in terms of their actual contribution to the project. You have just overcome an issue – is the plus in percentage now 50%? 30%? 53%? Does every task have the same weight?

You will find out for sure in the end, once you have finished the project. But that is never the moment when you show the progress of a project. You show the progress while you are RIGHT IN the project, don’t you?

## What a drama!

Let me tell you something about the classical drama, common in France from the 17th to the 19th century. It usually consists of five acts:

1. Exposition (Introduction/Prologue): Set up story, characters, setting
2. Complication/Conflict (rising action, increasing until the climax)
3. Peripeteia (climax): Inversion of the hero’s circumstances/feelings
4. Retardation (falling action, decreasing until the conclusion): Slow part from climax to final resolution
5. Dénouement (conclusion/resolution): Final status of all characters (damnation/happy end)

When you read this, does it not occur to you that every project you have done, are doing, and will do is actually a drama? Replace the hero with the researcher, the story with the hypothesis, the characters with molecules, the setting with instruments, and so on…

Hence, I decided to use this structure to show the progress of our projects. First, let me rephrase the terms a little bit:

1. Exposition: Setup of hypothesis, review of literature, first ideas, plans, …
2. Complication: First experiments, revealing issues (i.e. complications), first attempts to solve these issues, … This is the part of the project in which you feel that the project is controlling you and working against you.
3. Eureka: Developing solutions to issues and overcoming them (inversion of the complications). This is the part of the project in which you gain control of the project. Step by step, you increase your confidence.
4. Protocol: Slow part, in which all experiments are systematic and “nice” (i.e. according to protocol). Here, you control the project. You wonder from time to time why you had problems in the first place.
5. Dénouement: Publishing (in any form), planning the next subproject/sequel, …

I assigned colors to these acts (or phases as I will call them from now on) and show the progress of projects like this:

As you can see, I use a gradient between the individual colors to visualize the progress. Project 5 is practically finished (we just published the paper for it) – it has the colors of all phases. Project 7 is in the Protocol phase. Projects 2 and 6 are both in the Eureka phase. However, you might see a little difference in the shading of the gradient. Completing a phase is a gradual process. Here, I wanted to express that we have already worked a little longer/more in the Eureka phase on Project 2 than on Project 6. Projects 3 and 4 have not really been started yet (we practically know only that these projects exist and have read a paper or two on them). For Project 1, we have just started with the very first experiment.

How to set up the gradient? Well, since we are describing a process that has already taken place, we can simply use the time spent on each phase to balance the gradient. It does not need to be too accurate, though. If you spent a week on the hypothesis and literature, a month on the initial experiments, and have just started the third phase (let’s say two weeks ago), then your gradient ratio would be 1:4:2 or 14%:57%:29%. Simple, isn’t it? And no more GUESSING.
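The ratio above can be turned into gradient percentages with a few lines (a minimal sketch; the phase durations are just the example values from the text):

```python
# Time spent so far in each phase, in weeks (example from the text:
# 1 week hypothesis/literature, ~4 weeks first experiments, 2 weeks in phase 3).
phase_weeks = [1, 4, 2]

total = sum(phase_weeks)
gradient_percent = [round(100 * w / total) for w in phase_weeks]

print(gradient_percent)  # -> [14, 57, 29], i.e. a 14%:57%:29% gradient
```

Rounding to whole percent is accurate enough here – the point is to replace guessing with the actual time spent.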

I realized, however, that colors alone do not work – I needed some kind of “progress bar”. The lengths of the progress bars you see in the picture above represent the progress in terms of the total number of phases (i.e. 2 out of 5, 3 out of 5, etc.) – or wherever I feel it fits better. This can be seen for Project 1, which has just finished phase 1 and already shows a slight shade of phase 2. However, the length is still 1 out of 5 (20%).
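The bar length itself ignores the gradient and only counts completed phases out of five (a sketch of this rule; the function name is mine):

```python
TOTAL_PHASES = 5

def bar_fraction(completed_phases: int) -> float:
    """Length of the progress bar: completed phases out of the five total."""
    return completed_phases / TOTAL_PHASES

# Project 1 has just finished phase 1, so its bar stays at 20%,
# even though the gradient already shows a slight shade of phase 2.
print(f"{bar_fraction(1):.0%}")  # -> 20%
```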

Note that I also listed some minor projects on the slide, which I did not evaluate in terms of progress. The reason is that these projects are always “present”. We are always working on new mathematical models or little things to improve our daily work.

I just wanted to share this here in case anyone else finds this useful 🙂

# Random open impressions

Disclaimer: Maybe you will not agree with some of the points. But before you rush to the comment section or Twitter: think about it. Just one day or night. The purpose of this text is to give some impressions. Some things might sound very sad, some might make you angry, some might make you hate me – but still, be nice. I am not against one thing or another. I just describe things that bug me.

TL;DR: Free some time, grab some tea, and come back. Or never come back. Life will not always give you an abstract. (Think about it: if you just have time for the abstract, why do you need open access?)

# The semantic open bubble

As with probably every community, there are insiders and outsiders. As an insider, you get used to phrases, terms, and concepts and use them very naturally, without thinking about them. For an insider, everything is clear: context, semantics, … For outsiders, however, the situation is a little bit different:

The bubble phenomenon. And yes, I used “it” instead of “he” or “she”.

In this case, everything other than “open” is unclear. Please keep this in mind when reading the following texts.

Let me tell you a story from my time in Regensburg. It was a while ago, at Scientometrics 2012 at the University of Regensburg (most of the linked content is broken by now). I found it interesting, so I attended some of the talks.

On one of those days, there was also a meeting of our alumni association (Chemistry). One of the attendees was a not-so-young professor, whose name I will redact (I don’t want to denounce anyone). He actually wondered what was going on downstairs, because the Scientometrics talks were held in our department. So, I told him. His response left me baffled: “Scientometrics? That can’t be that important if no one knows what it is.”

That is true. OK, the current publishing system has some flaws and, therefore, might not be the very best system, but it works. Just very roughly: people do science, write about it, get reviewed, publish it, and there is some progress. Nothing is really open or transparent. Like an old car, the current publishing system works. It has become rusty over time. Oil is leaking from somewhere. There is new technology now for comfort, safety, etc., but this old car still takes the owner from A to B. It works, because it does what it should do. The owner got used to its flaws, bugs, and discomforts. The owner likes the car very much.

This is point zero.

Now, you – as a friend of the owner – come around the corner and tell the owner that this car is not up to date anymore in terms of safety and technology. Furthermore, the costs of maintaining the old car outweigh its value by orders of magnitude. Conclusion: the owner should get a new car. An open car. Have you ever tried this (or something similar)?

Anyway, I think everyone can imagine this situation, right? We have two perspectives here: the owner and the friend. First, let us look at the process of getting a new car from the perspective of the friend. Getting a new car is similar to an exothermic process. The benefits of having a new car (more comfort, fewer flaws) outweigh keeping the old one:

How the friend sees it and tries to describe it.

However, the same process looks a little different to the owner, who loves the car and got used to it. The owner does not see the full picture either – only the huge activation energy. Thus, the process looks more like this:

How the car owner sees it. Endothermic.

# Open Access: Open on one side only

Probably the primary argument for open access is the “free to all” argument, i.e. free access to knowledge. I mean, the name says it, right? The reader does not have to pay.

Let us visualize this a little bit! First, the classical model:

Open access:

So, whoever could not afford to read the journals now cannot afford to publish in the journals. An indicator of this is the rise of predatory publishers all over the world. They offer what people can afford: discount publishing. Of course, discount is exactly what people get: no peer review, no copy-editing, etc. This is despite the fact that there are a lot of waivers for publishing in more reputable journals (e.g. PLOS ONE). However, there are waivers for accessing, too (e.g. Elsevier).

From this point of view, I am not sure which model is better. Here, open access does not seem to be the fantastic game changer. It is not as if people could not get access to things if they really wanted to. Ask a friend or use another affiliation (e.g. through VPN) to read a paper you could not access. What is easier: sharing a PDF or sharing publishing costs?

Do not get me wrong, though: I am not against open access. But maybe, like a Pokémon, it should evolve into “open science” at some level – as depicted below (where ? is some government-funded agency)…

# ~~Open~~ Transparent peer-review

Here, I refer to the story of “put a man on your paper to improve it”. Still, I do not know the reviewer or the editor. Only the victim. If you have ever asked yourself why sexism (or any other form of discrimination) is still so present: here you might have one of the major reasons.

The female postdoc who experienced this situation is known (I will not repeat her name here). In fact, she had to go public to deal with this situation. IMHO, nothing would have happened if she had just written a formal complaint to one of the other editors. Actually – let’s be honest – nothing really happened to the editor and reviewer at all. They removed the anonymous reviewer from the database and asked the anonymous editor to leave. We do not want to damage their CVs and harm their careers, right?

Before I go on, I also want to refer to Michael Eisen’s post. It and its comments section explain very well why this situation might not be easy to deal with. I agree. Not easy. Maybe you need a case-by-case discussion/jury/etc.

However, this case is clear, IMHO. This is not a borderline case. There is no misunderstanding. I hereby demand the name of the reviewer. Why? Simply because I do not want to work with such an asshole, ever. Furthermore, I think the career of the reviewer cries for an arrow to its knee.

There are, of course, other reasons as well. What about the postdoc, for example? As said, the postdoc is now known. Does this harm her? Who knows. Somewhere in the future she might actually – without knowing it – meet the reviewer again. The reviewer knows her. Driven by envy (she got a lot of attention) or irascibility (removed from the reviewer database? because of her?) or any other illogical emotion, he can do the same thing again – or harm her more. “Oh, she needs this grant/position for her male-free research? Well…” And what about other authors? But do not worry, I am sure they will eventually remove the reviewer from other databases as well!

Also, other people from the scientific tea party might simply not like her for what she did. And her name is known.

Will open peer review be a solution to harassment? I think it has nothing to do with it. Obviously, harassment might not be as easy (instead: joking: “Also, I think there are too many female authors on your paper… just joking, I do not want to make you cry 😉”) or as direct (instead: parallel communication) as in closed, one-side-anonymous peer review, but it will happen. For sure. Look at all the social networks, Facebook and YouTube. Oh, how nice people are there! And do not make the mistake of thinking that you (or me!) are better because you are a scientist! In the end, we are all humans. Driven by emotions and passions (otherwise I would not do research).

Open peer review also opens the other direction (from authors towards reviewers) for harassment. Imagine someone writes a very critical review of one of your papers. Next time you write a review for their paper – are you biased? What if someone is not just biased but really angry? What if a particular subcommunity tolerates harassment (social auto-regulation does not work in every case)? Does open peer review solve this?

No. You will deal with the same good or bad people. They have just re-dressed in a nice suit. Harassment will find its way. The point is just how we deal with it. Codes of conduct? Ethical guidelines for reviews? Yes, very important.

What is needed most, however, is transparency. The more transparent a process is (to everyone), the better. There can be transparent open as well as transparent closed peer reviews. Let’s take the situation from above. Why is nothing known about the actual evaluation process in this case? Did they ask author, editor, and reviewer for statements? If yes, what were they? Did they tell the author who the reviewer was? How did they come up with the conclusion “The best way to deal with the situation is to remove editor and reviewer from our database and never talk about this again.”? Afterwards, they could just publish the whole story as a paper (to be clear: during the investigation it is another matter; care must be taken). Or do nothing of this at all. Well.

# Bottled science for everyone

Still, some people seem to be surprised about how capitalism works. Just a little reminder: you probably buy bottled water (like me here in Canada, because the tap water – at least in my apartment – is not as good as back in Europe). Bottled. Water. Water is THE basis of life. It should be available to everyone. People die of thirst on this planet. Still, there are companies such as Nestlé out there that think these people should buy water. Bottled.

Back to academia! Let’s take foe #1 of the community: Elsevier. Elsevier is a Dutch company and it is a corporation (Elsevier B.V. – Besloten vennootschap met beperkte aansprakelijkheid; Ger. Gesellschaft mit beschränkter Haftung; limited company, …). What is the #1 aim of such a corporation? Yield (Ger. Rendite). People do not invest their money in a corporation to cure cancer. They want yield, i.e. more money.

How do we generate yield? We sell a product for the highest price possible, i.e. the price that – in this case – libraries, universities, and public/private institutes are willing to pay. And Elsevier does a pretty good job. It individually negotiates the price with every single customer. They want science. Elsevier sells it. Bottled.

There is no ‘damage’. You may say that you paid too much, because you found out that the product you bought is not worth as much as you thought. You may say that others paid less than you. However, you were willing to pay USD 9.8 billion (too much). Maybe Elsevier (and/or other publishers) ripped you off (which makes you look kind of stupid, retrospectively). Well, feel free to change this for the future! But besides that, ‘arbitrary money value’ is the impact factor of the capitalistic world. It is really bad at measuring anything, i.e. impact/relevance or ‘damage’. Why? The Sorites paradox. People will say a ‘damage’ of €1/$1/one apple is not relevant (well, actually it is relevant – it just depends on the situation, perspective, etc.). When does it become relevant? €1? €100? €10 000? €1 000 000? … And can you just compare a $10k ‘damage’ to another $9.8b ‘damage’? I think not.

‘But my numbers are not arbitrary! Here, take a look at my method/data!’ Oh, I am sure you can put your little numbers into Excel and it will calculate something out of them. As does Thomson Reuters. There is just a little – negotiated – correction factor. Did I mention capitalism yet?

Just do not get me wrong: there is real damage done by these publishers. Björn Brembs mentions some of it on his blog. Irreproducibility, for example. It kills humans. I cannot object.

Is that not enough, though? Do we need a dollar sign to have a stronger argument? Stronger than ‘kills humans’? Sometimes I get the impression it is just about ‘whining’ (‘But my/this damage is so much more!!!!111one’), not about the actual issue. For me, it could be $9.8 (why do we even give the .8? As if it mattered that it is .2 less than $10…) fantastillion of arbitrary ‘damage’. I don’t care. The number is so high and unimaginable that it loses any relevance (for me). The number is not even connected to me, i.e. I do not pay it (just a negligible part of it in my role as a taxpayer). Thus, this number does not bother me. And I am very sure I am not the only one…

I perceive academia as a social community. The community provides PDFs that are not available to everyone (e.g. #icanhazpdf), collaborates on projects, and shares ideas, source code, data, etc. – orally at conferences as well as in any kind of written form. Most people I have met keenly told me about their research/projects. The community wants to be ‘open’ by nature. There is even a pirate bay for publications. Money – again, as I perceive it – plays a minor part (excluding things such as patents). Money is just a resource you have to harvest (funding) and can then spend on more research. If Sigma-Aldrich accepted apples, every research group would have its own orchard – and a major task of PhD students would be to maintain it.

Academia wants to put knowledge into a well of wisdom, free for everyone who is thirsty. This ‘ideal’ world is now confronted with the ‘real’ one: bottled science. Obviously, this will not work out in the long run. From the perspective of academia, the question it has to answer (for every single part) is: Quo vadis?

Do you want private publishers? If not, why not nationalize/socialize academic publishing? Why not take these fantastillion dollars from above and just let a public institution handle it? In my honest opinion: like electricity, health, and water, knowledge is a human right – it should be delivered to your home for a nickel. To every home.

Now there are all these nice alternative publishers, such as The Winnower and F1000. You know what I am afraid of? That The Winnower and F1000 Research Ltd., as well as this other famous California-based corporation, will at some point grow too much (economically) or simply be bought. Don’t say it is not possible, because it is. That there might be a point in the future when capitalism mode takes over. Ethics? Open values? The number one question will be: how to increase yield. Every year. Well, I think I know how this will start. Any company can be started by people with the most honest and best ideals. But every dollar earned will scratch just a little bit at these ideals. Until they eventually break. But yeah, let us make the same mistake again and again. Why not.

There might be strong arguments for private publishing, though. If so, and people want it, then things have to be arranged so that we do not get ripped off, now or in the future. Negotiations must be open, for example. The data for any kind of measure must be available, and no arbitrary correction factors should be in use (‘Nice impact factor you have here. It would be a shame if it dropped…’). Etc.

# Damage in trust

Let me tell you a story about one of my papers. As you can see (if you click the link), it was published in the Journal of Separation Science, a nice little journal. It fits there very well, I think, and I still like it being there. Reviewer and editor were very nice. A nice experience!

Another thing you might notice: I am the senior author (a nice opportunity my PI gave me back then), i.e. I had a completely free hand in what to do and where to publish this work. The Journal of Separation Science was not my first idea. Actually, I wanted to publish it in Chromatography (publisher: MDPI). Open access.

Back when we did the work, they had just started the journal and it was free to publish there. I looked up some people on the board and the papers already published there and decided it would be a good home for our work.

However, this also happened during that time: Beall put MDPI on his list (and removed it later – too late). Back then, I had only recently gotten to know this list.

Now I was unsure and full of doubts about whether we should really publish with MDPI. I meditated and pondered. I tried to find more information – positive information – about Chromatography and MDPI. There was none. Just Beall’s post about this ‘Chinese’ publisher, which is now a COPE member and all (but was not back then).

In the end, my doubts told me to go for a known journal rather than for an experiment. It was my first senior-author paper and I did not want to risk it (I could not afford OA there and was glad to pay the 275 Euro beyond-page-limit fee – seriously… a page limit? In the digital age?). This may sound stupid, but that is what happened. Up to now, I do not really trust MDPI – I cannot really say why. And I somehow feel sorry about this.

The damage to trust is underestimated. And people such as Jeffrey Beall are really good at planting the seeds of doubt. I know there are a lot of people who prove him wrong or find flaws (or no data at all) in his lists. These people are important.

However, IMHO, no post will ever fully cure the initial damage done. I think that can only happen if one (here: me) successfully publishes a paper in Chromatography and sees that there is nothing wrong with doing so. Maybe I will risk it in the future. Maybe I won’t.

Still stupid?

Well, Beall hates Frontiers. Now, you may know that Frontiers is ‘good’ – or you don’t. Imagine now that you want to persuade someone – maybe co-authors or colleagues – to publish there. It is new to them, so they will search the internet for information (i.e. ask the almighty oracle: Google). They will find and read these negative blog posts. Seeds of doubt planted in this new world. And they know the old world still works (as stated above). You can have all the pro-Frontiers arguments in your pocket, be the most charismatic person (you are not!), and I will even grant you an infinite amount of time. Guess your success rate. Be realistic.

Moreover, it is worse than you might think: raising one negative concern in the form of a tweet or blog post (whether right or wrong does not really matter) about one of these open-access journals does not hurt only this one journal – it is a knife in the back of the whole open-access community. All these nice ideals, and then a commercial publisher seems to decide to just make money out of it by including pseudoscience.

Is there a solution to this issue?

By the way: this is probably the section that explains my tweets with Brembs. It started with the suggestion to start a community-driven (black)list. Read for yourself.

# Open privilege

I want to emphasize this briefly: being open – unfortunately – is a privilege, not a right. Being a professor with tenure and everything is a good starting point. Being in a group where your boss is a convinced advocate of open science is a jackpot (go play the lottery!).

I just wanted to say here: don’t exclude people who want to be open but maybe cannot.

# Open tools

Being open requires new tools. Open tools. Open source for open science needs! Yeah! So, what do people point you to when asked? Github and Python. Github. And. Python.

Maybe we should start talking about the technical skills of people. I recently had to explain to someone how to take a screenshot. Twice. And how to actually print – it is harder when you have virtual and physical printers set up on every machine and you have to adjust the settings – people ignore the settings dialog completely and just press OK. Everything that looks confusing on the screen gets ignored. There is a nice presentation on Figshare about this, but I could not find it anymore.

Anyway. We live in a world where people put “Word” as a technical skill in their CV. Have you ever thought about what this actually means? It means “I can turn on the computer, double-click on an icon, and use the keyboard to input letters and words into the computer. Sometimes I save them, too.” It is the digital equivalent of “I am able to write with my pen in a notebook.” – and really, they use it for everything:

Everything redacted. As I said, I do not want to denounce anyone.

And these people are not only undergraduates or pupils; they are professors, postdocs, or research associates. People who can use a smartphone and log into Twitter/Facebook/whatever, but fail to use the basic functions of a work computer. People who spend half of their life being trained to use a specific piece of software such as Office or Origin and are then unable to use something new.

We have control and evaluation software for our capillary electrophoresis instruments here. In the old version, there is an “analyze” button in the toolbar with a little green calculator on it. In newer versions, they replaced the icons of some buttons, including this one. The button is in the exact same position as before. It has the exact same size and tooltip. It does the exact same thing. What happened? People complained: “There is no analyze button anymore!” – they simply could not find it. People with several degrees could not find a single button.

The situation with Office is even worse. I have to submit papers in Word format – not because the submission software of the publishers could not handle LaTeX files (actually, all these platforms can handle a lot of formats! It’s really amazing!) – but because at the copy-editing step someone is not able to handle formats other than a fancy docx file. Yeah, be open, but please use this proprietary format from Microsoft (I know it is theoretically open – but is it really an open standard?).

Now, if people are trained to use certain software and you want to give them an alternative, you just have to copy the UI and say it is the same, right? Well, people use Origin, which is probably the worst-designed software on earth. This software costs a lot of money (500 bucks in whatever currency for a single license). There are free alternatives such as QtiPlot or SciDAVis. They do exactly the same thing. They even copy the hell-of-a-user-interface. But what do people do? Yes. They use Origin. And I am not even talking about legal versions yet.

And the best part: do you know what people blame if something goes wrong, i.e. if they make a mistake or cannot figure something out? The software (“Origin did not want to plot my data exactly how I wanted”). The computer (“it crashed last time”). It is so easy!

Once. Again. You point these people to Github? And. iPython? [Insert very loud laughing here].

No. Let us be serious. You cannot exclude technically not-so-skilled people from the scientific community. However, there is no way you will train all these people within a lifetime to use software like Python or Git. And to be honest – I do not think it is necessary. We can develop good, robust, open software for all our needs. And we should do this, because it is necessary!

Let me give you an example. There are all these discussions about “open/publish all your data” going around. But if you cannot properly document your data, then – for me – the data is worthless. Documentation is so important. A single little step – which you may or may not document – can make a hell of a difference.

So, I had an idea: I wrote a grant proposal for a software tool to help chemists document and organize data – for people with technical skills as described above. Furthermore, I think only a chemist knows or understands what another chemist needs (or might need). At least the communication between chemists is easier than between an informatician and a chemist, or similar. After half a year, the grant was finally rejected. The reason was given to me like this: “This can be done by any informatics group in 3-7 months, and the author has probably never heard of Github or iPython.” Welcome to the bubble. Well, I think I just have to make myself clearer in the next try.

# Comments on ‘The open research value proposition’

These are some comments on The open research value proposition: How sharing can help researchers succeed, shared through Twitter.

151: I find this a very controversial section. On the one hand, you say: IF = stupid (I agree). On the other, you support the IF by giving numbers. Wouldn’t a statement like “we will not give IFs for the journals…” be better? Another question: is there a better way of comparing the quality (content!) of journals? What about indicators of visibility such as altmetrics – do they differ for OA/non-OA? (You have “visibility” in the title of this section, but all you talk about is the journal impact factor. This is not good, I think. Sorry 🙁 )

268: I really “love” the Eigenfactor list, where almost all “chemistry” journals are in the category “physics” (do not even dare to start a discussion) and most of the journals are very specific to certain regions (“Turkish Journal of…”, “Indian Journal of…”, …). Additionally, the list does not look very polished (I can sort, but only by descending category, for example). I know you are not responsible for this list. However, presenting this list could scare me (and maybe others) off instead of attracting us to open science. I do not know how to improve this, however 🙁

271: Maybe emphasize that PeerJ is $99 per article PER AUTHOR, i.e. an average paper with 3 authors costs $297, which is more useful for comparison. Maybe check the other prices, too. Actually, a table for the average-paper scenario (is it actually 3 authors? it is not ONE author, for sure) would be nice. Question: what does it cost to publish a 3-author paper in…? (with and without special membership options, maybe). You have the data, so present it to the reader! 🙂

303: The DFG (German Research Foundation) has special programs for OA to support Universities (and in turn authors). Maybe want to add this: http://www.dfg.de/en/research_funding/programmes/infrastructure/lis/funding_opportunities/open_access_publishing/index.html

# A little gedankenexperiment

The DFG (German Research Foundation) had a budget of about 2.7 billion Euro in 2014. Let us take half of it, i.e. 1.35 billion Euro.

In 2015, German universities (including Hochschulen) had about 28 000 graduations, i.e. there should be about the same number of PhD students (at least we assume this here). Additionally, we have about 190 000 people employed in the main part of science (docents + assistants, scientific and artistic personnel, lecturers for special tasks – I did not count professors or side-jobbers). This includes postdocs of all sorts.

In total, we need to pay 218 000 people per year. In Germany, there is a salary table for this (in Euro per month). Let’s assume E13Ü, level 6 for everyone, i.e. 5500 Euro per month.

That is 1.2 billion Euro plus some overhead (administration/bureaucracy). So 1.35 billion Euro could be enough, couldn’t it? Of course, this was only half of the budget. With the other half, we could just let people apply for equipment and stuff. Or… we just pay everyone (or for everyone, i.e. each research group leader) 5500 Euro of research money per month (plus some overhead). Also, there could be some mechanism that allows you to take money in advance, so you can buy a mass spectrometer or something similarly expensive. This certainly needs some more thought – but in principle?
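The payroll arithmetic can be checked in a few lines (a back-of-the-envelope sketch using the figures above; note that 5500 Euro is a monthly salary, so the annual bill is twelve times the monthly one – which matters when comparing against an annual budget):

```python
PEOPLE = 218_000           # ~28 000 PhD students + ~190 000 scientific staff
SALARY_PER_MONTH = 5_500   # Euro; assuming E13Ü, level 6 for everyone

monthly_payroll = PEOPLE * SALARY_PER_MONTH
annual_payroll = 12 * monthly_payroll

print(f"{monthly_payroll / 1e9:.2f} billion Euro per month")  # -> 1.20
print(f"{annual_payroll / 1e9:.1f} billion Euro per year")    # -> 14.4
```

Whether the monthly or the annual figure is the right one to hold against the 1.35 billion Euro half-budget is exactly the kind of correction I invite below.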

I know and hope that someone will come around the corner and tell me why this is a naive assessment of the situation (a Milchmädchenrechnung). If not, I do not understand why we waste our time with discussions about how long contracts should be, why we spend so much time (i.e. work time and salary money) on applications for research money, why we underpay our Doktoranden, and why there are even researchers who do not get paid at all.

# The Power of Kindness

This is a response to Inger Mewburn’s post, which you can find here. It is a nice post (thanks for the book tip, Inger, I will definitely read it!). Now, I just want to share some thoughts about this topic.

The topic is jerks in academia. I think everyone could share a story about such people. For instance, I volunteered to give a talk shortly after my PhD. I did not really have to do it – it was just a favor in return for a good time and a good collaboration. They wanted to extend the grant (it was a research training group), there was an evaluation commission, and students from the project were supposed to give talks about it. My old advisor asked me and I said yes. However, I was actually the backup for someone who, I knew, was 99.999% certain to give the talk.

Nevertheless, I prepared the slides, practiced the talk, and discussed it with my old advisor. Then there was a “final rehearsal” where everyone was to give their talk. Mine was the last one. It was a really bad talk. Today I am a little embarrassed when I think about it. Imagine the audience: about 30 students and about 10 professors. The students were fine (they just asked a couple of questions), but there was a lot of criticism from the professors.

Now, criticism is fine; how you deliver it is the question. I like to be direct and ask people to be direct with me likewise. However, you do not have to be an asshole to be direct, and vice versa. Among the professors there was the One. You have probably met this oh-I-have-also-an-affiliation-with-a-top-university-somewhere-else-so-be-glad-that-I-talk-to-you-at-all type.

The One literally insulted me and my talk. “Das ist Fliegenschiss!” (roughly: this is no more than flyspeck – it is hard to translate such things!). The more he talked, the more he raged and the angrier he got. For minutes. I got angry too – and I swear, if I had had something in my hand… The others did not say anything. Fear? Did they fear this colleague so much that they would not interrupt him? Well, one tried. She started with “Look, we appreciate your work and contribution to this research training group, but …” Then the raging One took over again. Tension rising. It was a hard trial for me. I do not want to go into details – because this makes me angry again. I do not want to be like this.

Nevertheless, at some point (in one of the raging breaks) another professor raised his hand. Let us call him Professor Teddybear. He looked like one. People told me that his hearing was not really good anymore. Professor Teddybear raised his hand. And he pointed out a very plain thing (something about putting a name on one of the slides) that another professor had pointed out just before him (in another raging break). Everyone was aware of this. I looked at him and at my slide. Looked at him. I was about to say something like “Yeah, X just pointed that out,” but…

He smiled. Professor Teddybear sat there, his eyes half closed, and smiled. “This is a very good idea,” I said and smiled, too. Suddenly, all this tension and rage… was gone. People laughed (except the One, of course). The session ended well.

Do not tell me this is not power – to relax everyone’s tension at once. To make people comfortable. I want to be like this. (Still, I do not know whether he really just did not hear the comment before his – in future versions of this story I will probably just omit the info about his bad hearing…)

Since then, I have tried to be more relaxed, which is hard work, in fact. Irascibility is a beast. You need to hunt it down. Every day. Every moment. Now I am at my second postdoc (Canada – politeness and diversity incarnate – love it!) and just try to be kind and nice. Not to rage. To see things in a very relaxed way – a good way to stay mentally healthy.

You know what? People like me. They come to me to get advice. I suggest things based on my experience (I do not use my experience to argue!) and try not to force my opinion on people. I tell them when I may be wrong. I tell them “my way might not be the best, you have to figure it out yourself”, etc. Being a good friend and colleague is the key to good collaborations and good work. I have some ethics.

As far as I understand, this is what Inger & Friends are doing as well. I would rather be part of a “Circle of Niceness” than one of those people everyone fears. Kindness gives you power. It can be as solid as a rock withstanding wave after wave of raging. Smiling. This is true power. (By the way: the Buddhist way of thinking helps a lot here!)

I think – and this is important – people appreciate kindness. There are probably far more people of kind manners and honest attitudes than you think. The only reason you never hear about these persons: Raging is much louder than smiling.

So, what do you want to do ’bout these assholes?

# Introduction

For quite a long time, people have discussed terms and concepts like ‘impact factor’, ‘negative results’, and ‘reproducibility’ in all kinds of analog and digital media. Here, I want to make a little contribution of my own to this discussion, mainly because I feel that some points that are important (at least to me) often get lost in the shuffle.

The title may be familiar to you. It is, like some of the following concepts and ideas, based on a book I read a while ago. The German version, which I read, is Dalai Lama (2013). Das Buch der Menschlichkeit: Eine neue Ethik für unsere Zeit, Bastei Entertainment, ISBN: 9783838749174 (Engl. Bstan-ʼdzin-rgya-mtsho XIV. (2001). Ethics for the New Millennium, Riverhead Books, ISBN: 9781573228831).

How does a (Buddhist) book about ethics fit in here? Well, let us start this discussion with something that has been discussed so many times that probably everyone is tired of it. Me included. However, I think it is important for explaining the later concepts.

# Impact factor

So let us talk about ‘impact’ first. What actually is impact? According to the Oxford Dictionary it is:

1. The action of one object coming forcibly into contact with another.
2. The effect or influence of one person, thing, or action, on another.

Do these definitions really tell us what impact is? I find them kind of vague. ‘Influence’ and ‘effect’ are exactly like ‘impact’: everyone has a notion of the concept behind the word, but there is no exact, math-like definition. You cannot really grasp it, can you?

If I throw a stone (a pack of journal volumes) into a peaceful lake (a crowd of scientists), it will have an impact for sure. However, measuring all the effects will be impossible. First, the stone will displace the water. This will cause waves. Animals will try to dodge the falling stone in the water. It will eventually crash into the bottom of the lake and whirl up some sand or other stones. Maybe destroy something. Also, do not forget about secondary impacts, such as the waves reaching the shore of the lake and having their own impact there. Of course, this impact was caused by me throwing the stone.

Now, we try to measure this impact – despite the fact that we do not even really understand what it actually is – by the number of citations. In the metaphor from above, this would be like counting the waves the stone has caused and dividing them by the number of all waves (during a certain time period). On top of that, we do not even count them ourselves; we assign this task to a company with financial interests.

So, what about the impact of the stone under the surface? What about long-term impacts? What about the stone itself? I did not describe the stone with a single word. Neither its size nor its color nor its material. It could be toxic for the lake, or it could just stay there for hundreds of years and be a home for organisms. But wouldn’t it be important to know all these things to get a more complete picture of the stone and the nature of its impact? To know what the stone actually IS and not (only) what it causes?

Analogously, wouldn’t it be necessary to know the content of scientific work to assess it? Sure. However, the impact factor will not tell us that at all.

Every publishing metric so far, including the impact factor, is subject to the sorites paradox. If you have a heap of sand, it does not matter whether you take one grain away or add one. However, we know that two, three, … grains of sand are not a heap, so at some point it would matter. So there must be an arbitrary boundary where a couple of grains become a heap, right? For metrics of scientific work this is – roughly – the arbitrary boundary between very important and not so important. Put generally: if a metric system is subject to the sorites paradox, its nature is arbitrary and its value useless.

Does it matter that Cell has an actual impact factor of 32 (2014) instead of 30? Does it matter if one of my papers has 4000 downloads instead of 3800? But it does matter whether 100 people read this post instead of just 2! So where is the boundary? 50 people? 25 people? For any metric, can you ever tell where exactly this boundary is? It seems arbitrary.

Let us assume for a moment that we have a metric scale with only one boundary and that we knew exactly where this boundary is (our scientific cat told us!). Then it is obvious that this boundary divides our metric scale into two areas, a low one (i.e. not so important) and a high one (very important). Within these areas (other people would say ‘classes’) the metric value becomes completely useless. From the viewpoint of high-impact journals it does not matter whether a low-impact journal has a factor of 1.0 or 1.5 or 2.0… It does not matter whether two persons read my article or three did.

There seems to be a straightforward solution to this problem: simply create more classes or areas by introducing more boundaries. For instance, you could try to distinguish between few readers (around 5), many readers (around 100), and a lot of readers (around 1000). However, the problem remains the same: the boundaries between classes are arbitrary wherever they are set, and the metric values themselves become useless within a given class. The following picture visualizes this thought.
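The thought can also be sketched in a few lines of Python. The boundaries 5, 100, and 1000 are the hypothetical ones from the paragraph above; everything else (names, labels) is made up for illustration:

```python
import bisect

# Hypothetical reader-count boundaries and the classes they create.
BOUNDARIES = [5, 100, 1000]
CLASSES = ["very few", "few", "many", "a lot"]

def classify(readers: int) -> str:
    """Map a raw reader count onto one of the arbitrary classes."""
    return CLASSES[bisect.bisect_left(BOUNDARIES, readers)]

# Within a class, the raw number carries no information anymore:
print(classify(2), classify(3))      # same class
print(classify(150), classify(900))  # same class
# ...while a difference of one reader at a boundary flips the class:
print(classify(5), classify(6))      # different classes
```

The two failure modes show up directly: 150 and 900 readers are indistinguishable, while 5 versus 6 readers flips the verdict entirely, even though nothing about the boundary at 5 is any less arbitrary than one at 4 or 7.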

In turn, and in my honest opinion, this renders these metrics useless and arbitrary. Yet people like and create this kind of metric for a simple reason: in contrast to the definition of ‘impact’ from before, a simple number is graspable. People can work with simple numbers. Counting, comparing, sharing, simple math. The higher the number, the more of ‘it’ – whatever ‘it’ is. People feel better with ‘more’ instead of ‘less’. People like having more IQ points (than others). People like having more impact points (than others). People like having more working hours (than others). The last one seems wrong at first glance, but since the impact of work is generally quantified by counting working hours, more hours imply that a person is more valuable (to society). Analogously, the quality of peer review is often quantified by counting the number of days it takes.

There is a reason why this kind of ‘simple’ metric behaves as described. These metric systems count things that may or may not be linked to the state of a work (e.g. important, of high quality, …), instead of measuring a quantity that is linked (directly or indirectly) to that state. The impact factor tries to describe the quality of scientific work in journals by counting citations. It should be obvious that it fails at this. If a journal just publishes (non-obvious) frauds and people cite these works only because they investigate every single paper to expose the frauds, then the journal possesses high impact but no quality. Hence, the state of quality is not given by counting citations. It might be an indication (in both directions), though.

Let me try to make this even clearer with a converse example from physics, viz. the temperature scale. Any temperature scale (metric) is linked to the physical concept of temperature. Temperature gives information about the state of matter. You can draw exact boundaries (e.g. liquid water between 0°C and 100°C, ice below 0°C, etc.), and it matters anywhere on the scale (in thermodynamic systems!) whether something has x degrees or x+1 degrees. Tungsten melts at 3422°C, not at 3421°C. Oxygen freezes at −219°C, not at −218°C. This is exactly the opposite behavior of the metric systems described above.

Another important characteristic of the latter kind of metric is the existence of outer boundaries, i.e. it has natural limits at both ends. For temperature this might not be obvious. There is absolute zero at one end. At the other end, since temperature describes a thermodynamic equilibrium, the absence of this equilibrium renders the specification of a temperature meaningless – although it is still done, for example to describe plasma states. In that case, ‘temperature’ becomes a countable metric again – it just counts the thermal kinetic energy of particles. In consequence, there is an upper boundary for the temperature scale (namely where there is no thermodynamic equilibrium anymore).

In conclusion, if we want to assess scientific work and its quality with a metric system, we cannot just count things. We have to find or create a system that is linked to the qualitative state of the work, that allows us to set exact boundaries, and that possesses outer boundaries (i.e. is restricted at both ends). The question is just: Does such a system exist?

# Criteria for assessing scientific work

Before we even think about a new metric system, we should find or define the criteria by which we want to assess scientific work. No matter which ones we (or others) choose, in the end the aim is always the same: we only seek a tool – no – THE tool, the one ring, the holy grail, the… to assess them all! To assess all scientific work!

Now the time has come to refer to the books I mentioned in the introduction. As said, I read the German version and will therefore cite it here (I did not find the original text). The following quote is about a principle for assessing a moral action – one of the principles I try to put into practice every day.

Daraus können wir ableiten, daß ein Kriterium zur Beurteilung einer moralischen Handlung darin besteht, wie ihre Auswirkung auf die Erfahrungen oder Glückserwartungen anderer ist. Eine Handlung, die diese verletzt oder ihnen Gewalt antut, ist potentiell unmoralisch. Ich sage »potentiell«, weil die Folgen unserer Handlungen zwar wichtig sind, es aber noch andere Aspekte zu bedenken gilt, etwa die Frage nach der Absicht sowie die nach dem Wesen der Handlung selbst. Uns allen fallen Dinge ein, die wir getan und mit denen wir andere verletzt haben, obwohl das keineswegs in unserer Absicht lag. (Seite 36ff)

This is the English version (by Google Translate, with some modifications from my side – if someone has the original text, I would be more than happy to cite it!):

From this we can deduce that one criterion for assessing a moral action is its impact on the experiences or expectations of happiness of other people. An action that hurts them or does violence to them is potentially immoral. I say “potentially” because the consequences of our actions are important, but there are other aspects to consider as well, such as the intention and the nature of the action itself. We can all think of things we did that hurt others, even though that was not at all our intention. (Page 36 et seqq.)

I think we can adapt this concept for assessing scientific work. Let us just replace ‘moral action’ with ‘experiment’ or ‘scientific work’, ‘consequences’ with ‘results’, and ‘nature of the action’ with ‘performance’. Then the quoted passage becomes ‘the results of our experiments are important, but there are other aspects that are important as well, such as the intention and the performance of the experiment itself’. Thereby, we can in general assess any experiment, and in consequence any series of experiments (i.e. a publication), by individually evaluating its three parts, viz. intention, performance, and results.

The intention covers the initial idea, the hypothesis, and/or the overall goal. It usually contains an essay about the state of research and how the work fits in there (introduction). Performance accurately describes how the experiments were designed and performed. What tools, instruments, and materials/chemicals were used. What was the raw data and how was the raw data processed and evaluated. Results covers interpretation, discussion, conclusion, etc.

Think of classical treasure hunting for a simple metaphor: intention is the idea to get rich by digging up a pirate treasure in the Caribbean, plus the outline of the plan to do so. Performance describes the tools you used (ship, shovel, crew members, parrots, …), including a picture of the map and how you got from the home harbor to the treasure island; also how you found the way from the beach to the X, what you lost on the way, and what the treasure was like. Results discusses what you gained (gold, experience, illness) and whether it was worth the effort. In the end, you can refer to your initial overall goal (getting rich) and judge from your experiment (or maybe you tried several times?) whether treasure hunting is a feasible way to reach this goal.

In the current scientific state, every publication describing treasure hunting will lead to the same conclusion, i.e. that it is in principle a feasible way to get rich. However, the research will only be published if the authors actually found a treasure of gold. Also, there are some problems and issues people have to solve before ‘in principle’ becomes ‘in fact’ (reducing the number of dying crew members, buried treasures being a limited resource, parrots eating all the crackers, …).

This means we need to get away from looking at the results only. Immediately. So let us look at the other parts, too, in order to actually quantify the quality of the overall work!

# Assessing scientific work

For now, all three parts described in the previous section shall be weighted equally, i.e. 1/3 each. This reduces the importance of the results by about two thirds (from almost 100%), while the importance of the other two parts strongly increases (from almost 0%). Now let us look at each section and discuss how we could assess it.

We start with a tough one. The content of the intention is hard to judge – almost impossible – and trying to do so (as reviewers do, for example) causes the “we cure cancer” phenomenon. Authors feel the urge that their contribution to science has to be something bigger, something that scratches at the doors to Stockholm. Also, by every journal’s policy it has to be a novel piece. In consequence, authors justify all of their intentions with “a new way” to “eventually cure cancer”. Maybe one of my readers now thinks: “Is this really so important?”

As the title of this blog post suggests, the content is about ethics. Is it ethically and morally OK to (indirectly) force people to hide their true intentions? I think not. Allow people to be honest! Accept that people were “just” interested in something because no one else did it! Accept that people have cool ideas and do nerdy stuff just for the sake of it (“Why?” “Because we can!”)! Accept that people tried to reproduce things from another scientific work, were unable to, and want to publish their efforts including discussion and conclusion! Let them tell their story in simple language!

The second section to assess is the performance section, which probably reminds you of the “Materials and methods” section of most papers. However, in which journal did you ever see this make up 33% of the work? Usually it consists of many short subsections roughly describing the experimental methods while omitting all kinds of important details (humidity, anyone?). If it gets too long, you can look up the remaining parts in the supplementary files – a wild jungle of text and pictures nobody cared to write, format, or review properly. Let me pull up a quote from The Hitchhiker’s Guide to the Galaxy:

The plans were on display. […] even if you had popped in on the off chance that some raving bureaucrat wanted to knock your house down, the plans weren’t immediately obvious to the eye […] I eventually had to go down to the cellar! […] With a torch! […] It was on display in the bottom of a locked filing cabinet, stuck in a disused lavatory with a sign on the door saying “Beware of the Leopard”.

So, the experimental details may all be there. Some in the manuscript, some in the supplementaries. You just have to find them! And be able to open them, because they might be submitted in an unknown or rare data format.

However, the performance of an experiment is the central point of any scientific work. We cannot complain about the lack of reproducibility if we force authors to reduce this central point to 5%, maybe 10%, of the whole work. Some journals even print this section in a smaller font, further reducing its visual importance.

Here, we again force authors to do something they may not want to do, i.e. shorten and split their description of the performance. And then we accuse their work of not being reproducible. This reminds me of Nelson telling people “stop hitting yourself” while constantly hitting them in the face. It should be clear that if you shorten something to a certain degree, you have to omit things. If the description of an experiment requires 10 pages, you cannot reduce it to one paragraph and at the same time keep the same level of detail. You cannot tell people to write the “most important things” in the manuscript and put the rest in the supplementaries. I do not even see the point.

In our digital era, there is absolutely no need to shorten something for the sake of shortening it, or because it looks “nicer” and fits the journal’s policies. However, journals and publishers dictate what will be scientifically published and how. Academic ethics have to defer to journal policies. For the sake of ethics and science, we have to change publishing.

These policies cause another issue. Publishers have practically implemented something I will call “reproducibility by obscurity” (referring to “security by obscurity”). As described above, finding the details of methods and experiments is sometimes a real hassle. Thus, we withdraw responsibility from the authors and publishers and push it onto the reader. If you cannot reproduce something from a paper, you never know whether you just could not get all of the details together (so the problem is on your side) or something is missing in, or really wrong with, the description. When in doubt, you just try to find another work. Thus, nobody (or hardly anyone) contests the reproducibility of the work, i.e. it has some kind of existence – even if it may be only virtual (Schrödinger’s cat may be dead by now!). Just because of obscurity. There is absolutely no ethics in this development. It obfuscates the state of reproducibility. This is a problem. A big problem. (Personally, I have noticed that the higher the impact factor, the fewer details you find in the papers. This is probably just my own impression, though.)

Last but not least, let us talk about the most controversial section on the list, i.e. the results. So far, results have been the only quantity used to assess scientific work. Well, to be more precise, arbitrary concepts of positive and negative results have been. However, how can results be positive or negative at all? I make an action, there is a reaction. I do an experiment, there is a result. Always. Case closed.

Unless you apply your expectations to the results. Then you derive a new concept of “results”, which I will call “expectults” here but everyone else will keep calling “results”. Obviously, expectults change with expectations. If my expectation of any publication is that its results will cause world peace and solve starvation, well, then practically every publication has published negative expectults. So they should be retracted. All of them!

While writing the last paragraph I realized how stupid the word “expectults” is. I think it fits very well, because this is what we do. We literally expect every publication to cure cancer. On top of that, we only publish things that fit into the big picture of curing cancer. Things that are politically (i.e. by policy) not acceptable will not be considered for publication. This fits the definition of censorship.

The odd thing is that editors and reviewers are enforcing this. Both groups are scientists. Working in the same system. Maybe dooming and cursing this system. They could change it. Still, they go through the same process over and over again. In the end, scientists are just censoring themselves.

Do you know what is even worse than negative expectults? Let us assume you have a collaboration with another group and they are supposed to do something for you (measure, synthesize, etc.). After a while, they tell you there were “no results”. “No results?” you ask. “Yeah, no results. Did not work,” they say. This is a direct result of the above policy and I am sooo tired of it. I spend (i.e. waste) so many hours tearing answers out of people and getting the actual results from them, so we can track down issues and work on them. Why not just say “Here is the spectrum. It does not show the expected product lines, but you can see the lines for reactants 1, 2, and 3, plus an unknown line. What do you think?” Does this only happen to me?

How do you fight censorship? With transparency, i.e. open science. However, you have to publish not only the reviews of manuscript (versions) that were accepted, but also of all those that were not. Transparency does not work in one direction only. Of course, this renders the distinction between accepted and rejected publications completely useless. So, simply let people publish whatever results they have. Let everyone review and assess the work. In public. This is the only way.

For assessing the results, we want to exclude any (personal) expectations. Otherwise, it will only cause discussions about virtual experiments to perform, which may or may not show some point the reviewer wants to make – most likely to somehow prove that the authors did not do the experiments properly or at all. Well, if you want to do more experiments, then do them. Write it up and publish it yourself!

Also, is it even ethical to always assume the worst? Why not just assume the authors did everything right (pro reo) and write supportive reviews, instead of assuming they were all douchebags (contra reo) and trying to destroy their work?

OK, we have now roughly discussed what we want to assess. How can we meter it? This is what the next section is about.

# Metering the quality of scientific work

If you read the previous section, you can already presume the conclusion of this one. There is no specific metric that can do what we want. Each work is individual and should therefore be treated individually. Read individually. Reviewed individually. Discussed individually. Assessed individually – in words and language, not by some arbitrary metric system.

Even formal things are impossible to meter – scoring or measuring the quality of explanation and presentation, for example. Text styles are as numerous as people. Some might come up with ideas such as “give the text to 10 randomly selected undergraduate students – every student who understood it is a point”. This is random, arbitrary, and laborious.

Many things could be considered binary, maybe ternary, such as the logic of an explanation or the reproducibility of an experiment. The explanation either is or is not logical (or partly so). The experiment can be reproduced fully, partly, or not at all. Integers could be assigned to these states (e.g. 0, 1, 2). Then you could try to count these things and divide them by the total amount, e.g. count reproducible experiments. I explained before why counting things to quantify quality is a bad idea. For experiments: where does one experiment start and another one end? Is setting a solution to a certain pH one experiment or two (adding acid/base as one, measuring the pH as the other)? Two simple experiments may be easier to reproduce than one experiment involving the Large Hadron Collider.
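A minimal sketch of such a counting scheme makes the slicing problem concrete. The 0/1/2 encoding is the one suggested above; the example experiment lists are hypothetical. The same lab work, sliced differently into "experiments", yields different scores:

```python
# 0 = not reproducible, 1 = partly, 2 = fully (the integers suggested above)
def reproducibility_score(experiments: list[int]) -> float:
    """Fraction of the maximum possible score, between 0.0 and 1.0."""
    return sum(experiments) / (2 * len(experiments))

# One paper, two ways of slicing the *same* lab work into experiments:
coarse = [2, 0]      # "set the pH" counted as one experiment, plus one failed one
fine = [2, 2, 0]     # "add acid" and "measure pH" counted separately, same failure

print(reproducibility_score(coarse))  # 0.5
print(reproducibility_score(fine))    # ~0.67 -- same work, better score
```

The score thus depends not only on the work itself but on an arbitrary bookkeeping decision, which is exactly the objection raised in the paragraph above.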

Let us assume that we could somehow meter the quality of the individual parts, viz. intention, performance, and results. How should the individual parts be weighted? For the last part of my post, I set them to 33% each, mainly to reduce the importance of, and focus on, the results section. Difficult. Also, the sections will most likely have different metric systems, unless we implement a simple 5-star option (this works well for hotels all over the world, right?). How do you combine different metric systems into a final one?

Another option would be badges! Achievements! Everyone likes achievements, right? Let us give papers the ‘Golden Reproducibility Badge’ for work replicated by at least three other research groups! Let us give people the ‘Mo-Mo-Monster Review’ achievement for writing ten good (respectful and helpful) reviews! At least badges are very popular in the open communities. Why not in science?
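As a toy sketch, such badge rules are mechanical and, unlike a continuous metric, carry their thresholds openly in their definition. The thresholds (3 replications, 10 good reviews) are the ones playfully suggested above; the function and its inputs are otherwise hypothetical:

```python
def award_badges(replications: int, good_reviews: int) -> list[str]:
    """Award badges based on simple, openly stated thresholds."""
    badges = []
    if replications >= 3:
        badges.append("Golden Reproducibility Badge")
    if good_reviews >= 10:
        badges.append("Mo-Mo-Monster Review Achievement")
    return badges

print(award_badges(replications=3, good_reviews=2))
# ['Golden Reproducibility Badge']
```

The threshold here is still arbitrary, of course, but it is at least a public part of the badge's meaning rather than a hidden boundary on a numeric scale.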

As I wrote before, in the end we only seek THE one tool to assess all scientific work! Maybe someone will find a nice system in the future. I do not know. No one can know this. However, I do know that we have become so focused on this quest that we have already forgotten the thing that matters most: the scientific content behind the number(s).

# Ethics for a new scientific millennium

In another article I already described the imbalance between so-called peer reviewers and the authors of a submitted manuscript. Anonymous peer reviewers have been granted power. Enforcing their anonymous point of view on authors. Demanding more and more experiments and citations. Being sexists. Being douchebags. However, being a douchebag or a sexist is independent of anonymity – see, for instance, several Twitter ‘discussions’. I think we need some guidelines for assessing and discussing in a scientific context.

Of course, there is already a bunch of guidelines and rules for good scientific practice, which are enforced by research foundations like the DFG. However, all of these guidelines and rules are about creating scientific work (i.e. about scientific misconduct, which is important!); there are no real guidelines for assessing scientific work. So I wrote up some ethical guidelines, which might help you and others. I do not and will not call them rules, since I am not seeking to enforce them on anyone (see #1). They are in no specific order.

1. I do not enforce my opinions and views on others, no matter what. All my comments, remarks, and assessments are meant as respectfully written recommendations and advice.
2. I will not assess any scientific work by its results only, nor will I ever use or apply the arbitrary concepts of negative and positive expectults, or of “no results”.
3. I am aware that I might be wrong on anything. If I am wrong, I will admit this. If I am not sure about something, I will admit this. If I have no clue, I will admit this.
4. I will not support or tolerate reproducibility by obscurity. Descriptions of the performance of experiments should be as accurate as possible. Accuracy is mandatory.
5. I will not demand more results, experiments, citations (esp. of my own work), or other things just for the sake of it. Additional things should offer added value, and they should not be mandatory.
6. I will not use any metric for judging or proving the quality of scientific work.
7. I will bear full responsibility for my assessments, comments, and remarks, just as authors of scientific work bear full responsibility for their work.
8. I will not tolerate, obscure, or turn a blind eye to any form of discrimination, harassment, or the like, at any point.
9. I do not directly or indirectly force people to act in an improper or unethical way.

Some of these points probably remind you of rules and guidelines in discussion forums. But what is assessing work other than a discussion? At least it should be one. Maybe you find the list redundant. Maybe you find some points self-evident. Well, then please go ahead and search for all those ‘respectful’ comments from reviewer 3.

This list and its points are not set in stone. They are mine. I try to follow them and may extend them in the future. Everyone can make his/her own list. My point is just: we need ethical guidelines for assessments. They will not turn douchebags into upright people – I know this. However, they are a tool that allows me to clarify my position and to distinguish myself from douchebags. I simply want to do science.

# Final remark

I hope, you liked my little contribution. Leave a comment with your own point of view and share it (and mine) with others!

# At least do not call it ‘peer’-review.

First, I want to show you the definition of the word ‘peer’ from Wiktionary. The one in the Oxford Dictionary is similar.

Etymology 2
From Anglo-Norman peir, Old French per, from Latin par.
Noun

peer (plural peers)

1. Somebody who is, or something that is, at a level equal (to that of something else).
2. A noble with a hereditary title, i.e., a peerage, and in times past, with certain rights and privileges not enjoyed by commoners.
3. A comrade; a companion; an associate.

Source: Wiktionary

I think this definition is very clear and explains very well what a ‘peer’ is. I consider most of my colleagues and friends peers, regardless of their actual title or degree. I can walk up to any one of them and ask for advice or for a review of something I did and wrote (and vice versa, of course). ‘Look, is this understandable? How did I present the data or the experimental setup? Is something missing? Can you please check language and spelling? …’ I think you get the idea.

After reviewing, they will come back to me and we will discuss their suggestions, criticisms, remarks, and other comments. We are both open in this discussion and to its outcome, simply because neither of us depends on the other. It does not matter if I do not accept everything (or anything at all). My peer does not insist on my taking over everything (or anything at all). Sometimes really nice and new ideas arise from such a discussion and improve the work (‘Did you think of using this for …?’). We meet at eye level, true peers, because we know each other – background, experience, opinions, points of view on certain things, etc. We know these things because we are able to find them out, since we know who the other person is. We can talk to each other. We can look up work we did before. Etc.

Sometimes my peer-reviewer asks why I did a particular thing in a particular way. I explain. We discuss. In the end, my reviewer may take along something new from the discussion as well. Not surprising: since both of us are peers, this whole process provides equal opportunities (learning, improving, …) for both of us.

Now, let us switch to the so-called blind ‘peer’-review process of most journals out there. I send the manuscript to an editor of a journal I want to publish in (indirectly, via some very static webpage, but still). In the end, the manuscript is forwarded to some reviewers I do not know. I can perhaps assume that the editor selected at least one of the five suggested reviewers the journal demanded – but who knows? Not me, for sure.

While I as applicant (well, suppliant) try to make an effort in writing a nice cover letter, I can hardly expect any salutation or even complete sentences from my reviewers. I do not know whether it is my German background or just me, but such things offend and insult me. They show (to me) that a person does not care at all, because the person does not have to. Being harsh, aggressive, ‘bitchy’, insulting, cynical, ironic, sarcastic? No problem on this side.

Suppliantly, I tolerate and accept remarks, comments, criticisms, and points of view without (much) discussion. On top of that, I thank(!) the reviewers for their comments and suggestions. I – like others – just take the line of least resistance, the primrose path of scientific publishing.

There was this one reviewer who told me and my co-author not to invent words such as ‘programmatically’. I felt young and rebellious. I wanted to send him a free copy of the Oxford Dictionary, but couldn’t – I did not know who reviewer #3 was. I changed the sentence(s) and avoided ‘inventing’ words. End of rebellion.

Of course, I know that these reviewers are my scientific peers. Well, actually, I do not know. People and Wikipedia tell me that. I cannot check or review the professional expertise of my reviewers.

For fun, consider this: What exactly prevents an editor of Food & Function from asking Vani Hari to review? Only the editor knows. No one can check. Not during the process. Not later.

The reviewers, however, can check. They have my name and the power of Google, SciFinder, or ISI Web of Knowledge/Science. They can look up my work and see my expertise (or the lack thereof). This can change their opinion of my manuscript or the way they formulate their comments. I have to guess.

Do I assume the reviewer has broad but general knowledge of the field but is a novice to the particular technique we describe in our manuscript? It could insult the reviewer if I start to explain simple things. Do I really want to argue about vocabulary (see above) with the reviewer? It could insult him if a German tries to explain English to a native speaker. Also, a simple comment can mean something completely different depending on whether the person is an expert or a novice in the field. A misunderstanding could annoy the reviewer. Commenting on reviewers’ comments is like gambling.

Now, may I ask you, my dear reader: what exactly about this is ‘peer’-reviewing?

Post scriptum, I want to add three things:

• Do not get me wrong: there were some really good suggestions in every review round. Sometimes they changed a great portion of the manuscript and improved the overall work. But don’t these reviewers deserve credit for this?
• I do not consider reviewers ‘gatekeepers’, as some people do. In the end, it is the editor’s decision. The editor could completely ignore the reviews and let the manuscript pass (or not). Editor’s decision. Editor’s responsibility. The editor is the gatekeeper.
• Of course, the main thing is that the system works and there are absolutely no flaws. I mean, could you instantly bring up a single case of fraud? On top of that, the quality output of ‘peer’-reviewing speaks for itself! Especially in the most reputable journals, right? RIGHT?

# Something’s strange in the (academic) neighbourhood. Who ya gonna call?

(Warning: rant incoming.)

Some weeks ago I finished my new proposal and submitted it to the Deutsche Forschungsgemeinschaft (DFG; English: German Research Foundation). Among other things, I applied for non-personnel costs – including publication costs. You can apply for € 750 per year, which should cover submitting papers but also printing costs (posters). In sum, that is € 2250 for a 3-year project.

I wonder: how am I ever supposed to pay any article processing charges with this kind of funding? For instance, the new journal Science Advances charges $ 3000-4000 per article! Of course, this is an extreme case, but even ‘moderate’ charges are around € 1500. The flagship of open access, PLOS ONE, is not cheaper.
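The mismatch can be made explicit with a quick back-of-the-envelope calculation. This is only a sketch using the figures quoted above (€ 750 per year over 3 years, ~€ 1500 for a ‘moderate’ charge); real charges vary by journal and year:

```python
# Back-of-the-envelope: DFG publication funding vs. article processing
# charges (APCs). Figures are the ones quoted in the text above.
funding_per_year = 750          # EUR, DFG non-personnel publication costs
project_years = 3
total_funding = funding_per_year * project_years   # EUR 2250 per project

moderate_apc = 1500             # EUR, a 'moderate' open-access charge

# Whole papers the budget can cover at that rate (ignoring poster printing,
# page charges, colour figures, ...):
papers_affordable = total_funding // moderate_apc

print(f"Total publication funding: EUR {total_funding}")   # EUR 2250
print(f"Papers affordable per project: {papers_affordable}")  # 1
```

At the extreme end, a single $ 3000-4000 article would already exceed the entire 3-year budget on its own.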

This is not solely a problem of open access journals, though. If you look at more traditional journals such as the Journal of Separation Science, you will find charges for everything. Pages. Color figures. Everything.

What is the solution for an early-stage researcher like me? Save money and publish only one or two papers per 3-year cycle? Underpay graduate students even more (not that I would!), so they can publish? Use your own money?

How come I pay for (gold) open access – i.e. transparent science – but the publisher can’t transparently tell me what the money is for? Applying for a DOI? Paying the reviewers? *cough* Of course, they have to pay for servers, some personnel, etc. But couldn’t that be covered by fixed deals like the one BioMedCentral has with the University of Regensburg?

The University of Regensburg is a member of BioMedCentral. The charges are completely covered by the membership, i.e. no charges for the authors! Hooray!

I have a problem here. The very first question for me at every journal is: what will it cost me to publish there? Not the quality. Not the audience. Not the content. The costs are the top priority. Should that be the case?

Something’s strange in the (academic) neighbourhood. Who ya gonna call?

# Steps to an open tool for pKa-prediction

Update, 16th Sep 2014: Started a writeLaTeX document on this topic.

‘Is there an open source pKa or LogD tool available?’ This question was asked 4 years ago. Still, there is no really good tool available. This post is about my thoughts on creating one.