I’ve been wondering why FOSS is so often compared to the academic world when, at least in my limited experience, few people in research actually grasp the concept. At first glance, developing FOSS in a research environment seems ideal: not only would your results be publicly available when you publish, but in the worst case your application could be carried on by someone else should you be unable to continue development.
At least in the life sciences, such a mentality is hard to find. I can understand the reluctance, but at the same time, why not? To me, the idea seems optimal: once the paper is out, you release your software (GPL would be best) and ensure that someone will improve or maintain it. Of course you won’t be able to publish a paper for every upgrade you make, but I’d generally consider that a bad policy anyway, one aimed only at inflating the publication count.
Does something like that happen with FOSS in other research areas?