by benwr on 4/16/19, 1:31 AM with 7 comments
by ArtWomb on 4/16/19, 12:42 PM
The image of a solitary Principal Investigator is fading fast. Cloud-based datasets, Jupyter Notebooks, open review archives, even public discussions and distribution of results via Twitter attest to the collaborative nature of science in the modern era.
Consider the recent initiatives in understanding the brain, which could yield profound implications beyond neuroscience and AI, into public policy and our most fundamental beliefs. And it's not just discoveries about intelligence. Completing entire transcriptomes of cell diversity in mouse and nematode brains creates a cell atlas at a level of detail that other researchers can then use in their own investigations, such as exploring the robustness of genetic diversity in transcription error rates!
by tgbugs on 4/16/19, 7:07 AM
My initial thought was that the primary challenge here (which the author addresses in part) is estimating the time it takes to complete a task. If you already 'know' how much time something is going to take, then is there really uncertainty? How much uncertainty? The author seems to be going in the right direction, but there is something more here about sources of ignorance that touches on the number of special cases in the problem space, or something of that nature. I wonder if there is any work trying to infer the number of special cases, or 'practical realities', of a problem space (maybe a kind of roughness, inhomogeneity, or irregularity?) that will ultimately be the major time cost.
Another thought is that 'bad' negative results don't have the provenance required to rule anything out, but if you know exactly what was done, then you have much stronger evidence about where the problems might lie.
Finally, this is deeply connected to another issue: sometimes we don't have the resources to devote to solving the really big problems, so we never even try. The economics of cutting-edge research only drives us further from the hardest questions, because we don't have anywhere to start: we haven't the faintest idea why we fail.
Somehow this reminds me of my first play-through of Dark Souls -- the only thing that kept me going was the belief that it could be completed. My repeated failures were required for me to slowly gather enough information to see where I was going wrong. Funding basic research that is truly at the edge of the unknown is like only funding noobs who have never played before, or maybe more like funding good players from one game who come to another: they'll get there eventually, but they have to be able to fail, and if you make them play Ironman mode then we might as well give up -- the game is too hard.
by btrettel on 4/16/19, 4:43 AM
> One result is the ranking theorem: If independent projects are ranked based on the ratio of benefit-to-cost, and selected from the top down until the budget is exhausted, the resulting project portfolio will create the greatest possible value (ignoring the error introduced if the portfolio doesn't consume the entire budget).
This is speaking more generally of a "budget", which could be time, money, or something else. It's an approximation because when the resource is nearly spent it becomes a more complicated optimization problem: you have to select projects that still fit in the remaining budget, which doesn't necessarily mean picking the projects with the highest ratio.
Obviously this becomes more complicated if the projects are not independent, e.g., one is a prerequisite of others.
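To make the quoted rule concrete, here is a minimal Python sketch of ranking by benefit-to-cost ratio and filling a budget top-down; the project data and budget below are invented for illustration, not taken from the article.

    # Minimal sketch of the greedy rule from the quote (project data and
    # budget are made up for illustration).
    projects = [
        {"name": "A", "benefit": 10.0, "cost": 2.0},   # ratio 5.0
        {"name": "B", "benefit": 12.0, "cost": 4.0},   # ratio 3.0
        {"name": "C", "benefit": 3.0,  "cost": 1.0},   # ratio 3.0
        {"name": "D", "benefit": 6.0,  "cost": 3.0},   # ratio 2.0
    ]
    budget = 6.5

    # Rank by benefit-to-cost ratio, highest first, and take projects top-down.
    ranked = sorted(projects, key=lambda p: p["benefit"] / p["cost"], reverse=True)
    portfolio, spent = [], 0.0
    for p in ranked:
        if spent + p["cost"] > budget:
            break  # the end-of-budget complication: a cheaper, lower-ratio project might still fit
        portfolio.append(p["name"])
        spent += p["cost"]

    print(portfolio, spent)  # ['A', 'B'] 6.0 -- 0.5 of the budget is left unspent

The leftover 0.5 at the end is exactly the error the quote ignores and that the knapsack-style repacking would address.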
by roboy on 4/16/19, 4:40 AM
by crucialfelix on 4/16/19, 6:28 AM
In that case, I want to choose the search path with the largest variability in output, and limit the downside time cost by just giving up after some time (see the sketch after this comment).
This is the antifragile approach. Seek out sources of volatility. Explore, don't chase predetermined goals.
This is also useful for software development.
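As a toy illustration of why capping the downside time cost makes high-variability paths attractive (the payoff model below is invented, not from the article): two paths with the same average raw outcome differ in realized value only through their spread once you can walk away from bad draws.

    # Toy model: an attempt on a path yields a random raw outcome; if it looks
    # bad we give up, losing only a small fixed time cost, otherwise we keep it.
    import random

    def realized_value(sigma, time_cost=0.1, trials=100_000):
        total = 0.0
        for _ in range(trials):
            outcome = random.gauss(0.0, sigma)        # raw outcome of one attempt
            total += max(outcome, 0.0) - time_cost    # giving up caps the downside
        return total / trials

    print("low variability :", round(realized_value(sigma=0.5), 3))   # ~0.10
    print("high variability:", round(realized_value(sigma=2.0), 3))   # ~0.70
    # Both paths have the same mean raw outcome (zero); the capped-downside rule
    # makes the high-variability path worth several times more on average.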