Links (58)
17 May 2022 | 12:00 am

Unsurprisingly, human capital matters more than buildings for scientific output. Also, some evidence for the Newton hypothesis.

Equipment Supply Shocks. And how Steve Jobs got a law changed to be able to get every school in California a computer.

Inside Fast's (an apparel company larping as a payments company) rapid collapse

Using ML to design hardware for specific NN architectures. Also interesting how the authors paid attention to economics (considering engineering salaries, semiconductor manufacturing costs, etc.)

Hopepunk

A case study in early-stage startup execution

Why is AI hard and physics simple?

Riva on toxoplasmosis and the role of germs in chronic diseases. After the Epstein-Barr & MS paper, I think it's worth looking more into this.

Black-White disparities in life outcomes in the US start early in life (ht/ Scott Alexander)

Some new Gwern essays, on teaching how to read scientific research

The Terra blockchain (Anchor, Luna, UST) imploded. Though this is more Ponzi-like than the article says. Proper fractional reserve banking requires more stable assets (short-term ones, not mortgages or long-term bonds as is common practice now; also, FDIC bad, we can have our free market banks and eat our stability cake too)

Ben Reinhardt on housing researchy ideas in startups or not

Gato, DeepMind's new model. At first I thought it's probably the closest thing to AGI I've seen so far: a single system that can achieve okay performance across tasks as diverse as controlling a robot arm and answering questions. Sure, it still makes silly mistakes ("What's the capital of France?" -> "Marseille"), but as we know with AI, if the proof of concept kind of works, taking it to human or superhuman performance is a matter of more dakka. Some Gato discussion and skepticism in this thread. Also see here and here to curb your enthusiasm.

How does Tenstorrent work?

Scott Alexander reviews a book on Xi Jinping

Andy Matuschak, interviewed

Microbes have structures in them that trap gas, enabling them to float

Aubrey de Grey & Charles Brenner debate

Lab mice are not getting chonkier

A longevity intervention combination I had wanted to see for a while: senolytics and reprogramming

Roger's Bacon's extremely spicy essay on progress studies

I ask Twitter "Has anyone used LLMs for science?"

Michael Nielsen asks Twitter what to read to understand Transformers

Students were asked to do some data analysis on a dataset where the data points are arranged in the shape of a gorilla. A number of them missed it.

Richard Ngo reviews a book on NY nightlife

Ben Kuhn on 1:1s

Book review of Making it in Real Estate

Sarah Constantin's newsletter, lately covering energy storage

Lada Nuzhna on deep learning for biochemistry

Transformers for software engineers

Brief interview with Luke Gilbert, a new core investigator at Arc Institute

Everything Everywhere All At Once was good


New Science's NIH report: highlights
25 April 2022 | 12:00 am

New Science just published an excellent report on the NIH: both a good primer for those curious about how the world's premier science funding institution works and an essay packed with insights that go beyond the obvious. A recurring theme of my latest few essays is the key importance of tacit knowledge in many contexts, and in science reform in particular. I noted elsewhere that the claim "the HHMI funds better researchers than the NIH in aggregate" was generally believed to be true until Azoulay (2011) came out [1]. Accordingly, the essay was written not by consulting public sources, but by unearthing common knowledge within the NIH world through direct conversations with researchers who work or have worked within the system. Relatedly, another interesting read, making complementary points, is Reforming Peer Review at NIH from the recently launched Good Science Project.

[1]. I would go as far as claiming that had the results in Azoulay contradicted popular wisdom regarding the relative standing of NIH and HHMI, it would have been taken as an issue with the methods of the paper, and not so much as evidence that research funding modalities don't matter. After all, one man's modus ponens is another's tollens.

I agree with the overall point of the essay: The NIH is at the moment an invaluable and irreplaceable institution. It can and should be improved, and there is room for a more diverse funding ecosystem, which would be preferable to the current situation where a single institution, to a large extent, orchestrates the world's life sciences research.

Here I wanted to single out specific points the essay makes that I found particularly interesting:

The single most consistent criticism of the NIH that I heard from sources, across all issues, was that the organization is too “conservative.” That is, too conservative in an institutional sense, not an ideological sense.

The NIH is considered insufficiently willing to take risks. This can be seen in its consensus-based grant evaluation, the de facto discouragement of ambitious grants, its drift away from basic research, and the lopsided distribution of grants which favor large, established organizations and researchers.

This is something one can read in many places on the internet (that the NIH is conservative in its funding). We can tentatively take it as true based on relatively general agreement. An interesting next step, not suggested in the essay, is asking scientists for specific kinds of research that wouldn't get funded: brief examples of grants, together with a brief rationale for why that research would probably not get funded despite its putative merits. Collecting those would help us understand in a more fine-grained way exactly what sort of biases the NIH process has. Workshops with senior scientists, held jointly with traditional and new funders, could discuss NIH grants and what should or shouldn't be funded, helping to bridge that gap in understanding. Open Philanthropy could have released the grants they chose from the Transformative Research Award but chose not to, probably due to confidentiality reasons. Making those grants accessible would have been a step in this direction.

One interviewee related stories of two instances when their grants were rejected because they involved technology not ongoing in their lab and, thus, there was no preliminary data. In the latter case, the interviewee had been receiving NIH grants for over forty years, they had served as an editor on a major journal, and had been an advisor for an NIH institute. All that clout and history wasn’t enough to get the grant approved. While such ability to withstand political forces is impressive, the reason the grant wasn’t approved was that the interviewee never worked in the field in which they applied for the grant.

Fortunately, the proposal later caught the attention of a prominent non-profit. The interviewee submitted a one-page application and they “nearly fell off [their] chair” a few months later when they got full approval at a higher funding amount than expected. Their project has since yielded “transformational” progress in the field, and though the interviewee is extremely positive about the NIH overall, they are concerned about the lack of risk-tolerance in study sections.

I almost thought: Ed Boyden! But no, Ed Boyden is too young to have been applying for grants for forty years. But this matches his story with expansion microscopy.

A few researchers had an interesting take on an unintended consequence of this system: the NIH is biased against “super nerds.” Navigating the “benevolent ponzi scheme” requires anticipating the judgments of colleagues, knowing the right people to talk to for advice, plotting out how to stagger grant timing and explain results that diverge from official grant applications. These are all skills correlated with extraversion, networking, and sociability. They are not the typical traits of a socially awkward scientist who loves to spend hours going through data sets and discussing abstract theories, rather than figuring out how to game complex bureaucratic systems.

This is not to say that a researcher can’t be both a great scientist and a skilled player of the game. But there are certainly researchers who are uncomfortable with the system, and who wish they could spend more of their time on the science and less on figuring out how to get to do the science.

This was also the case back in the day at the Rockefeller Foundation in the 1940s-50s. Though I could not retrieve the document ("N.S. Notes on Officer's Techniques"), there is a section in there on this and I have the relevant quote here: I know of a man who is almost surely the best expert on the genetics of oak trees in the world; but he doesn't take baths, and he swears so much and so violently that most persons just won't work with him. As a result he is living a frustrated and defeated life; and he not only doesn't have any students - he doesn't even have a job. Weaver then goes on to say that they wouldn't fund people like this because teamwork is key in science. This may be an extreme case, but it is an example against the thought that "anti-super-nerd" bias is a recent phenomenon.

On the other hand, there are some researchers who are probably a bit too comfortable with complexities embedded in the grant system. Whether by design or happenstance, some lab leaders gain reputations at being so good at getting grants that they focus most of their energy on getting resources and then leave the actual science to their staff. Then again, maybe a bunch of super nerds working for a master grant-getter is the ideal lab structure?

This, along with other points (the redistribution that occurs via indirect costs, the benevolent Ponzi scheme, etc.), is an interesting example of why things may not be as bad as they seem. The workarounds may not be efficient, but the system as a whole seems to have found ways around issues that would otherwise make the research enterprise even more inefficient than it actually is.

The NIH gives about 50% of all extramural grant money to 2% of applying organizations, most of which are universities or research facilities attached to universities. The top 10 NIH recipients (out of 2,632 institutions) received $6.5 billion in 2020. This is 22% of the NIH's total extramural grant budget ($29.5 billion), and 16% of the NIH's entire budget.

In 2020, the top ten largest recipients of NIH money were:

  1. Johns Hopkins University - $807 million through 1,452 awards
  2. Fred Hutchinson Cancer Research Center - $758 million through 301 awards
  3. University of California San Francisco - $686 million through 1,388 awards
  4. University of California Los Angeles - $673 million through 884 awards
  5. University of Michigan Ann Arbor - $642 million through 1,326 awards
  6. Duke University - $607 million through 931 awards
  7. University of Pennsylvania - $594 million through 1,267 awards
  8. University of Pittsburgh at Pittsburgh - $570 million through 1,158 awards
  9. Stanford University - $561 million through 1,084 awards
  10. Columbia University Health Sciences - $559 million through 1,003 awards

Here it would have been interesting to look at per-researcher figures as well, with the caveat that indirect costs and intra-university redistribution would not be accounted for if looking at the PI level. But nonetheless: what kind of PI gets more funding, and for what? Is it mostly driven by lab size? It would be interesting to know! As a rough first pass, the per-award averages implied by the list above are easy to compute (see the sketch below).
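
A back-of-the-envelope sketch of what can already be computed from the figures above: average dollars per award for each of the top ten recipients. The totals and award counts are copied from the report's list quoted above; everything else is simple division.

```python
# Average NIH dollars per award in 2020 for the top recipients listed above.
# Totals and award counts are taken from the report's top-10 list.
top_recipients = {
    "Johns Hopkins University": (807e6, 1452),
    "Fred Hutchinson Cancer Research Center": (758e6, 301),
    "University of California San Francisco": (686e6, 1388),
    "University of California Los Angeles": (673e6, 884),
    "University of Michigan Ann Arbor": (642e6, 1326),
    "Duke University": (607e6, 931),
    "University of Pennsylvania": (594e6, 1267),
    "University of Pittsburgh at Pittsburgh": (570e6, 1158),
    "Stanford University": (561e6, 1084),
    "Columbia University Health Sciences": (559e6, 1003),
}

for name, (dollars, awards) in sorted(
    top_recipients.items(), key=lambda kv: kv[1][0] / kv[1][1], reverse=True
):
    print(f"{name:45s} ${dollars / awards / 1e3:,.0f}k per award")
```

Fred Hutchinson stands out at roughly $2.5M per award versus roughly $470-760k for the universities, which already hints that "what kind of PI gets more funding, and for what" varies a lot even within the top ten.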

The essay also discussed the Grant Support Index. I made some remarks on that here and especially here:

A few years ago, there was a debate around whether NIH should cap funding for individual researchers, on the grounds that there are decreasing marginal returns to concentrating funding on a single investigator. Opponents argued that such a policy would unfairly penalize successful investigators leading large labs that are doing highly impactful work. [6] The proposal was ultimately scrapped. It’s not relevant whether such a proposal would have worked: both sides had reasonable arguments. What is important is that at no point did NIH think of randomizing or trialing this policy at a smaller scale—they designed it from the outset as a policy to affect the entirety of NIH’s budget.

That is the kind of thinking that we need to change. Instead, NIH should have considered selecting a subset of investigators and applying a cap to them, and then compared results a decade into the future with those that were left to accumulate more traditional funding.

I am now less bullish on RCTing our way through science reform, but I do still think that enabling experimentation at smaller scales would be a good step forward.

On researcher age, I wrote an essay just on that question, where I end up thinking that, at the end of the day and relative to the current situation, a younger workforce would be good; hence my idea of a Young Researchers Research Institute.

On study sections, there are videos on YouTube that show how they work in practice. I found these very helpful when I first came across the "study section" concept.

In sum, it can take 14 months before a grant submission results in funding. Meanwhile, existing grants will be running out. Postdocs will be coming and going from the lab. New discoveries will pull research in one direction while other research paths peter out. PIs will have to take all of this complex management and budgeting into account while going through a process which typically takes 6-14 months to pay out.

This is unacceptable, and while it's true that not every grants program needs to aspire to be Fast Grants, cutting the time down by at least half seems within the bounds of possibility. The gains in the responsiveness of the system are most likely worth whatever minor loss in review quality might occur. The essay has a good sampling of ways to reform the current study section model.

The NIH’s extramural grant system is somewhat convoluted, but it has a logic. It is designed to promote private research with government money while simultaneously fostering research institutions through subsidies.

When the NIH awards extramural grants to researchers, the funds are divided between direct costs and indirect costs.

Direct costs are funds given to the researcher. These funds cover costs that are easily associated with the researcher and their team, including salaries, supplies, equipment, and lab space.

Part 5 has a good explanation of how indirect costs work and why they exist, and presents the pro and con case for high indirects. I tend to side with the "they are too high" position, and while universities may complain they don't have enough money, being forced to tighten their belts could force efficiencies: perhaps more investment in reducing costs in sequencing, imaging, or pathology cores, making them more like autonomous business units that charge researchers the cost of what they want to do, plus extra to account for maintenance and new purchases. Somewhat less realistically, scientists could be encouraged to shop around for shared facilities at other research establishments to induce some healthy competition. A move towards transparent pricing and fewer cross-subsidies seems good on net. Here I think: could we do a case study for a particular university and see how their indirects work? The incentives might not be there. But the NIH could commission a survey of indirect cost utilization, including interviews with administrators and scientists, to see exactly what is happening with those indirects, and what the implications of reducing them might be.


Applied positive meta-science
20 April 2022 | 12:00 am

As scientists we must accept that the world has limited resources. In all fields we must be alert to cost-effectiveness and maximization of efficiency. The major disadvantage in the present system is that it diverts scientists from investigation to feats of grantsmanship. Leo Szilard recognized the problem a quarter-century ago when he wrote that progress in research could be brought to a halt by the total commitment of the time of the research community to writing, reviewing, and supervising a peer review grant system very much like the one currently in force. We are approaching that day. If we are to continue to hope for revolutions in science, the time has come to consider revolutionizing the mechanisms for funding science and scientists.

-Rosalyn S. Yalow, medical physicist (Peer Review and Scientific Revolutions, 1986)

A while back I wrote Better Science: A reader and even earlier "Fixing science (Part 1)". There never was a followup to those until now. I also once tweeted the diagram below:

In this post I initially wanted to think through a categorization of proposals to fix or reform science. There are fixing-the-replication-crisis proposals, various novel funding schemes, new institutions, building tools to help scientists, etc. When thinking about which axis is most salient, I immediately jumped to "fixing what's broken" vs "making what's good even better". I tend to be pulled towards the former rather than the latter. My favorite proposal among my New Models for Funding and Organizing Science is, after all, the Adversarial Research Institute, a novel institution that would not do any new research, but rather strengthen and challenge the existing body of work.

So instead I thought it'd be interesting to explore reforms from a more positive angle, as I think most of the discourse around "fixing science" tends to focus on fixing what's broken rather than on improving what is working. Some scattered thoughts on the topic follow; take this to be working notes to clarify my own thinking, somewhat edited:

Imagine we start tomorrow a research institute in a domain where results are generally known to be reliable. Scientists at this institute will be freed from having to apply for grants, and they will enjoy generous budgets to buy state-of-the-art equipment if they need it. What's more, they will only publish in an open-access, free-to-publish publication launched by the institute itself (as Arcadia will do). They can work on whatever they want.

Can we reasonably believe that their productivity can be further improved?

In this example, I've set aside some of the most pressing concerns in applied meta-science: no need to worry about unreliable results, hard-to-follow methods sections, uncertain funding and job stability, and so forth.

One first family of ideas is to scale up the kind of science that gets done. There are projects that require either coordination between multiple entities, or a large team working on the same goal to produce a new tool or dataset. By setting up an institution that allows large teams of staff scientists and technicians to execute on a single vision, either in one organization (an FRO) or across many (an ARPA-type program), we can unlock a different type of work than the one done today at academic labs. This is the driving insight behind Focused Research Organizations and riffs on ARPA like PARPA. This type of organization starts with a goal in mind and works backward to get there. Sometimes the building blocks for the next piece of technology, or the right tools to ask a longstanding question, are just lying there waiting for someone to combine them in the right way. Related to FROs are the ideas of "architecting discovery" and "bottleneck analysis".

But this all works only if there's a relatively clear path to a goal.

Finding more building blocks

Genetic engineering had been possible for a long time, but there is a qualitative difference between randomly integrating genes into DNA, sometimes by shooting them into cells with a gun, and doing so precisely, at specific sites of the genome. CRISPR gets lots of attention, but before CRISPR, the 2007 Nobel Prize in Physiology or Medicine rewarded Capecchi, Evans, and Smithies for introducing specific gene modifications in mice. This was the first time a targeted edit was made in vivo. The method they used relies on flanking the sequence to be inserted with DNA that matches the sequences flanking the target site. If this sounds similar to guide RNAs, it's because the idea is exactly the same: using unique DNA (or RNA, or even protein) sequences to identify specific unique sites in the entire genome, guiding a variety of effector proteins to the desired site. Conceptually, all the gene editing tools we have (CRISPR, TALENs, ZFNs, meganucleases) find their way to the target spot in the same way. Many of these were found in bacteria, not engineered ex nihilo. Hence, prior to CRISPR, if one had tried to think of what comes next, the idea of using some sort of sequence-matching guide attached to some protein, and lifting it out of some bacterium, would have been the obvious guess. This is exactly what happened, but it would not have been possible without prior discoveries that were wholly unmotivated by gene editing.

But "guide + effector protein + desired edit" is one paradigm. It may be the only one we have right now, but it doesn't have to be the only one. I don't have much idea of what other ways of making precise gene edits would look like, but reprogramming shows an alternative way to approach this question. If asked to roll back the state of a cell to what it originally was (a stem cell), the within-paradigm answer is finding 7000 guide RNAs or similar and making a few dozen thousand edits in the epigenome until it matches what it used to be. But it is much easier than that: delivering four transcription factors (the "Yamanaka factors") works. The out-of-paradigm way of thinking is in terms not of specific genes, but of cell state in general. Admittedly this is cheating, because the machinery to reprogram the cells is already in the cells and we don't yet know how reprogramming works, but it's a proof of concept of cell state engineering (as opposed to genetic engineering).
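
A back-of-the-envelope calculation shows why a short sequence-matching guide is enough to single out one site in an entire genome: there are 4^20 ≈ 1.1 trillion possible 20-nucleotide sequences versus only ~3.2 billion base pairs in the human genome. A minimal sketch (the 20 nt guide length and genome size are standard figures, not taken from the post, and the uniform-random genome model is a deliberate simplification):

```python
# Expected number of chance matches of an n-mer in a genome of G base pairs,
# under a crude model where the genome is uniform random (both strands counted).
def expected_hits(n: int, genome_size: float = 3.2e9) -> float:
    return 2 * genome_size / 4**n  # factor 2: a site can match on either strand

for n in (10, 16, 20):
    print(f"{n}-mer: ~{expected_hits(n):.3g} expected chance matches")
# A 10-mer matches thousands of times by chance, a 16-mer about once, and a
# 20-mer far less than once -- which is why ~20 nt guides can address
# essentially a single site in a mammalian genome.
```

Real genomes are not uniform random (repeats, biased composition), so off-target matches remain a practical concern, but these orders of magnitude are what make the guide-plus-effector paradigm workable.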

Molecular recording (for example Linghu et al. 2021) is less famous than CRISPR and not very widely used, but very interesting nonetheless. Molecular recording means getting cells to record what they do in molecular ticker tapes, as it happens, in real time. You could imagine a neuron recording its history of spike activity on a very long protein ticker tape. Then you could imagine all neurons doing this. At that point you have a full recording of what a mammalian brain has been doing for a few minutes. We're not there yet, but we're at the point where we can make cute cell pictures with the ticker tapes inside of them:

[Image: cells with protein ticker tapes inside them, from Linghu et al. 2021]

How do you make this? In this case, first you want something that can form filaments inside cells. The Linghu paper says 14 such proteins were tested; they picked one and ran with it, developing it into a tool capable of making cells happily record themselves. Why those 14? Probably someone searched Google Scholar and found some suitable candidates characterized in prior work. Now one could imagine taking those 14, doing some ML to generate more candidates, and then exploring the space of possible proteins to find others that are even better, but the point is that in this case a very deliberate invention relied on a plethora of unrelated discoveries, just as with CRISPR. In the ideal scenario, the input "Give me proteins that can form filaments and get fluorescent tags attached to them" wouldn't only output what we already know, but something like "Spend 10% of your resources on randomly sampling the Earth for new microbes, 40% on poking at this one specific genus of bacteria, and 50% on running AlphaProtein" [1]. This seems to require full-fledged AGI.

[1]. Some hypothetical new model that can predict some properties of proteins from their sequence, which could plausibly be used to optimize for a given function.

If not for the Mojicas of the world we wouldn't have the building blocks needed for building new tools: might we have to conclude that, as one book put it, greatness cannot be planned? That we need undirected exploration to find building blocks?

I have to admit, the title of this book bothers me a lot. The point of the book is that on the way to getting to X, exploration of unrelated areas without the objective of getting to X in mind can lead to key stepping stones towards X. So to pursue X, it's best to spend some resources not pursuing X at all.

To an engineer-minded person like me, serendipitous discovery sounds like a sadly inefficient affair. It's something (as with tacit knowledge) to be acknowledged as real, but not celebrated. It's giving up on the dream of an assembly line of knowledge, accepting a somewhat romantic view of science where we have to give up on reasoning about the best path forward from first principles [2].

[2]. As exemplified by Kary Mullis' account of PCR discovery. In his The Unusual Origins of PCR he writes: 'Sometimes a good idea comes to you when you are not looking for it. Through an improbable combination of coincidences, naivete and lucky mistakes, such a revelation came to me one Friday night in April, 1983, as I gripped the steering wheel of my car and snaked along a moonlit mountain road into northern California's redwood country.'

Prime Editing (Anzalone et al., 2019), a recent and more accurate successor to the original CRISPR, was developed with an objective in mind (more accurate gene editing); it was not found by chance. But its building blocks were: one of the components of Prime Editor 1 (PE1) is a reverse transcriptase enzyme from the Moloney Murine Leukemia Virus (M-MLV RT). Who would have told us that a nasty cancer-inducing virus would one day be used for gene editing! They then constructed a bunch of alternate versions by introducing mutations to M-MLV RT. Why M-MLV? Maybe because it's commonly available. Perhaps in an alternative universe, Prime Editing would be powered by avian myeloblastosis virus RT instead.

From these and other examples it seems that we could do a bit more planning for greatness. The process that led to these tools was:

  1. A preexisting library of characterized components
  2. An idea of how a kind of component would be used for a given purpose
  3. Trial and error

Step 3, technically known as fuck around and find out, exists wherever simulation is imperfect. If there are 3-4 candidate solutions and testing them is quicker and/or cheaper than trying to simulate or otherwise prove the optimality of one, then testing them all and picking the best candidate makes sense (a toy version of this cost comparison is sketched below). Given that it's hard to simulate biology in silico and that a common unit of simulation (the cell) is small, the idea of screening libraries of compounds in vitro is pervasive in the life sciences. It's something that works reasonably well and is, at the moment, cheaper and faster than running a cell on AWS (which we can't do yet).
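
A toy version of that decision rule, with made-up numbers purely for illustration (none of these costs come from the post or from any real screening budget):

```python
# Toy "screen them all" vs "simulate to pick the winner" comparison.
# All figures are hypothetical placeholders.
n_candidates = 14                      # e.g. the candidate filament-forming proteins
cost_per_wet_lab_test = 2_000          # assumed cost ($) to test one candidate in vitro
cost_of_faithful_simulation = 500_000  # assumed cost ($) to identify the winner in silico

cost_to_screen_all = n_candidates * cost_per_wet_lab_test
print(f"Screen everything: ${cost_to_screen_all:,}")  # $28,000
if cost_to_screen_all < cost_of_faithful_simulation:
    print("Cheaper to test all candidates and pick the best one.")
```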

Step 2 is about generating a good research question, here "How do we get cells to record the expression of one protein over time?", which may have originally been motivated by a higher-level question, "How do we record the entire brain?". It is also about having the right knowledge to know what building blocks make sense, and where to go find them.

But if optimization is what we want (once the properties of the building blocks are known), then instead of relying on the next Mojica to notice weird patterns in genomes, we might be able to optimize or directly synthesize the protein that is needed. A recent paper (Wei et al., 2021) trains a convolutional neural network to predict the performance of guide RNAs, and provides a tool to predict the best guide RNAs for targeting given genes. The same applies to Step 1: if we have a large dataset of biological entities and some readouts of what they do, eventually we'll be able to just ask a model to give us what we want. The space of useful proteins is finite, and relying on what happens to work in nature is but a step towards progressively better tools.
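
To make the "model predicts guide performance" idea concrete, here is a minimal sketch of the general setup: one-hot encode the guide sequence, apply 1D convolutions, regress a score. This is not the Wei et al. (2021) architecture; the layer sizes, guide length, and toy data are all placeholder assumptions.

```python
# Minimal sketch: predict a guide RNA "efficiency" score from its sequence with
# a small 1D CNN. Not the Wei et al. (2021) model; all sizes and data are toys.
import random
import torch
import torch.nn as nn

BASES = {"A": 0, "C": 1, "G": 2, "T": 3}

def one_hot(guide: str) -> torch.Tensor:
    """Encode a guide sequence as a (4, length) one-hot tensor."""
    x = torch.zeros(4, len(guide))
    for i, base in enumerate(guide):
        x[BASES[base], i] = 1.0
    return x

class GuideScorer(nn.Module):
    def __init__(self, guide_len: int = 20):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(4, 32, kernel_size=5, padding=2),  # local sequence motifs
            nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * guide_len, 1),  # one predicted efficiency per guide
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(-1)

# Toy training data: random guides with random "measured" efficiencies.
random.seed(0)
guides = ["".join(random.choice("ACGT") for _ in range(20)) for _ in range(64)]
X = torch.stack([one_hot(g) for g in guides])  # shape (64, 4, 20)
y = torch.rand(len(guides))                    # fake efficiency labels in [0, 1)

model = GuideScorer()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for _ in range(5):  # a handful of toy gradient steps
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

print(model(X[:3]))  # predicted scores for three guides
```

In the real setting the labels would be measured editing efficiencies across thousands of guides, and the trained model would then be used to rank candidate guides for a gene of interest, roughly what the tool accompanying the paper does.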

Working towards such models and datasets is one way to plan for, and accelerate, future great discoveries.

Tools for (scientific) thought

But we're far from a world where the best possible gene editing tool comes out of an ML model; that comes close to the kind of problem one probably needs AGI for.

Other kinds of software could be useful. Imagine for a moment that scientific search engines like Google Scholar or Semantic Scholar disappear. Imagine going back to manually reading hundreds of volumes of multiple journals to find what you want. Is there a tool we haven't built yet whose absence would, in hindsight, make our current situation look like those pre-Scholar days? Some months ago I tried to riff on Scholar in my Better Google Scholar post, concluding that it's very hard indeed. In practice, the kind of software that one sees in science is not particularly glitzy. Some of it is for lab logistics (Quartzy, Softmouse, FreezerPro), lab notebooks (Benchling), or reference management (Mendeley, Zotero). There are some special-purpose software tools that solve concrete problems: BLAST can take sequences of DNA or proteins and return similar ones in other species, Primer-BLAST can generate primers for PCR, GraphPad is commonly used for biostatistics and Biorender for illustrations. And then there's Excel.

It seems to me some of these tools are not like the others. Some tools make life easier; others enable new kinds of experiments. Suppose we travel back in time and give Crick & Watson Google Scholar and GraphPad. I don't think much would have changed. Give them the tooling used by modern X-ray crystallography and we would have had the structure of DNA sooner than we actually did. Similarly, give TPUs to the early pioneers of artificial neural networks and we might have jumped to the present situation without a few AI winters in between.

[3]. Is there any innovation that enables physicists and mathematicians to think faster or newer thoughts, aside from DeepMind's recent paper? Any innovations adopted at scale? Not a field I spend a lot of time thinking about!

But these tools are still only useful once you have the right question to ask. Or are they? By having more tools, we are able to piddle around in novel ways [3]. This piddling can then lead to better questions. For example, the question "How do neural networks learn or represent knowledge?" is pointless until we have the hardware to run large models and the visualization tools to look into them. Only at that point can Chris Olah and friends go and wonder and wander.

Perhaps we have to reconsider the idea that "time-saving" tools do something minor. The one lever we know for sure we can pull to generate new interesting ideas is having more time. So the more time that can be spent thinking, and the less spent on mechanical work like finding a paper, talking to a sales rep to buy a new instrument, or debugging a new protocol, the more interesting thoughts will be thought.

Time is all you need?

At first, I thought that "marginal improvements" that just save time wouldn't be that interesting to think about. Doing the same thing but faster or cheaper doesn't seem like how one gets to interesting new thoughts. But if those thoughts are primarily a function of time spent thinking, then buying time is how one gets more of them: scientists spend a lot of time not only applying for grants, but also pipetting, cleaning lab equipment, and moving samples in and out of instruments. Spending more time reading papers or doing analysis seems like a better use of their time. Whether we can do better than buying more time is something we could figure out through surveys of how specific discoveries came to be, for example by taking Nobel-winning research, or by talking to scientists directly.

In writing this, I am more confident that, at least for me, the most useful thing (not the most useful thing period, that's hard to say; rather, the thing at the intersection of what I enjoy doing and what I can do) is not to find ways to help us think better, but to take existing building blocks and design FROs for concrete problems we know we have.

Appendix: A collection of various proposals to improve science

When I started writing this post I tried to think of some high level categories for all the "fix science" proposals. Some of these are discussed in earlier posts, see Science Funding.

  • Software

    • Search tools
    • Social annotation
    • Reference managers
    • ML models for replication
    • ML for idea generation
  • Tooling

    • Cloud labs (Strateos, ECL)
    • Lab automation
    • Spatial creativity (Dynamicland? Interior design/architecture to foster collaboration & serendipity, MIT Building 20)
  • New institutions

    • PARPA
    • FROs
    • Prediction markets for replication
    • Uber, but for technicians
    • New journals
  • Funding mechanisms

    • Lotteries
    • Find the best, fund them
    • Fast Grants
    • Fund People, not Projects
    • Alternative evaluation practices (de-emphasise impact factor, etc.)
    • Funding for younger/first-time PIs
    • Funding high disagreement research proposals
  • Norms

    • Preregistration
    • Replication
    • Fund the right studies
    • Mandatory retirement
    • Cap on funding
    • Better methods sections
    • Open Science
  • Activities, norms

    • Roadmapping
    • Workshops
    • Grants to strengthen studies
    • Record videos
    • Reviews
    • Fighting data forging

