Links (59)
20 June 2022 | 12:00 am

For the Diet Coke drinkers among you: yes, aspartame is perfectly fine. It's probably not some kind of secret nootropic either.

IVF is a procedure commonly associated with infertility treatment as a last-resort option. It is also a requirement if one wants to do any embryo screening. There's a small rabbit hole to be explored (I didn't fully go down it) around potential adverse effects of IVF: it might increase the risk of cerebral palsy, among others, which could defeat the point of the screening itself (except if a particularly nasty monogenic disease is present). Older work (Kamphuis et al., 2014) points to IVF being riskier, whereas newer work says it's probably fine and the older data is an artifact of older IVF tending to lead to twins (Spangmose et al., 2021).

Ultima Genomics, a company building DNA sequencers, at last launched, at prices substantially lower than market-leading Illumina's (also cheaper than BGI, which was already cheaper than Illumina). Some caveats of their technology are discussed in this Twitter thread. A TikTok explains it.

Remember that book on NY party life I linked to here? A reader did a conceptual replication of the premise of the book.

An evaluation of NSF's SGER ("high risk high reward") programme

Clinical trials on plasma apheresis for lifespan

Scott Alexander on a nootropic recommendation engine

Derek Lowe asks: how long are we going to keep doing this [trying to reduce amyloid in the brain to treat Alzheimer's]? My answer: as long as pharma credibly believes they can shitpost their way past the FDA and make money off questionable clinical trials, same as in cancer research. Though that heuristic doesn't quite work for Alzheimer's, because cancer drugs do get approved and, with the exception of aducanumab, Alzheimer's drugs do not. This might be some kind of Pascal's wager? An act of desperation? (If not amyloid, then what?) It does seem to me an exercise in mass irrationality.

I'm not a doomer about cancer therapy in general, and these results seem legit: even doomer-in-chief Vinay Prasad likes the trial

AGI doomerism, with some commentary

Combined treatment continues to show promise for lifespan, in flies

Cognitive apprenticeship via expert streaming

Ben Kuhn on 10x engineers

Sam Bankman-Fried thinks (as do I) big tech companies could work as well with substantially fewer employees

You have seen sequence to structure predictions (AlphaFold), watch a human do structure to sequence

New Science just launched their fellowship

Younger people are more pro-political violence. Is that an age effect (young people in each time period tend to be more pro-violence) or a cohort effect (society as a whole is becoming more pro-political violence)? It seems the latter to me. Thread here

Why metformin is not so exciting for longevity

Early transoceanic telegraphy was not strictly better than carrying messages in steam boats

New longevity-funding foundation just dropped

Michael Nielsen on Effective Altruism, with some commentary in the replies

Ben Reinhardt on asking good questions to map research fields

A case for more career scientists

A Japanese company is publishing its total fertility rate to show it is family-friendly

The story of how one Dr Cowie single-handedly(?) raised US IQ by 15 points

On Reboot's Ineffective Altruism
10 June 2022 | 12:00 am

I've seen "Ineffective Altruism" used a couple of times to poke fun at EAs. I remember the first time I saw the phrase I landed somewhere between amused and confused. "Ineffective Altruism" sounds jocular (who would oppose being effective!), so what must be going on is a reaction to the EA aesthetic or to specific definitions of "effectiveness". That in turn leads us to ask what the alternative notion of effectiveness might be. Or is it a reaction not so much against EA but against specific causes EAs seem to favor? I think it's also a reaction against something that has nothing to do with EA or even utilitarianism: "Ineffective Altruism" seems to be, at its core, at least in substantial part an embrace of the concept of Knightian (or "radical") uncertainty and, relatedly, a rejection of formal methods for decision-making. It is also, secondly, a positive moral evaluation of that fact.

It is similar to what I have said elsewhere about tacit knowledge: It is one thing to say that some knowledge is really hard to get, that maybe one has to go to the one master that knows a specific craft and apprentice for a year. It is another thing to say that such a thing is good (perhaps because it allows us to go on quests seeking tacit knowledge), as opposed to lamenting, as I do here, the fact that this type of knowledge limits human flourishing.

A good riff on these themes is Michael Nielsen's Notes on Effective Altruism. I link here to the tweet because the replies contain some good discussion. A funky related reading is these essays from David Chapman, where you could take the essay below to be Stage 3 making a misguided critique of something like Stage 4.5, while at the same time making valid points that could be made from Stage 5 (where you go get your post-rational sage license).

Below, a recent essay from RebootHQ is reproduced and commented on in the sidenotes. You can find other commentary on this same article by Nick Whitaker here. I decided to comment line by line and developed a few new additions to the stack. I have colored lines in red, yellow, and green to denote how I feel about each specific line: take red to mean annoyed, yellow me having some quibbles with it, and green me appreciating the line. I deliberately focused mostly on the negatives, so you won't see much green. Hover over each sentence to bring the relevant sidenote into focus. Some sidenotes are hidden for lack of space, but they will become visible and pop over the others when hovered.

Towards Ineffective Altruism (All text below is extracted from their Substack!)

This tweet from Timnit Gebru has been living in my head rent-free for the past month. Recently, she and other critics of big tech (as well as former longtermist Phil Torres) have been loudly sounding the alarm about effective altruism and longtermism on Twitter and in various publications.

Note that these two things are different, but they get intermingled throughout the essay
Was this word chosen to taint EA as just a thing for male techbros? Or was it chosen because most rich people are men? We can point to some male EA tech billionaires, but there are only one or two female ones, like Caroline Ellison from Alameda and FTX

These ideologies scare me, and I want to engage with them seriously — not because I believe in them, but because they are seemingly rational, relying on the language of science, moral philosophy, and statistics. They are increasingly influential among policymakers, intellectuals, well-funded institutions, and the richest men in the world. Their ubiquity makes them pernicious and hard to combat. To take them on, we must critique their philosophical foundations, their rhetoric, and their material impacts simultaneously.

It's interesting that they start bashing EA even before hinting at why it might be a bad thing. I assume they don't want to say that EA is bad because it's ubiquitous (which is far from the truth, even amongst the billionaire class)

Some definitions

At its most basic, the effective altruism movement makes a generally utilitarian argument about how the world’s privileged people should spend their time and money if they want to maximize their positive impact on the world.

EA principles are meant to be followed by everyone, not just the privileged, so this specificity is unnecessary. Its role here is merely rhetorical

Effective altruism was born mostly at Oxford in the late 1990s and early 2000s, at around the same time that the internet industry in Silicon Valley was experiencing its first cycle of boom and bust. The dominant ideologies of both come from the business culture of the time, and the two have become closer together since. As Nadia Asparouhova writes in her recent piece on “Idea Machines”: “Effective altruism is often associated with tech, but it’s genetically more similar to McKinsey.”

Is this different from other industries? Or is it rather the opposition to STEM-style thinking, which, by composition, is present in tech? This paragraph seems to be trying to tie together tech and EA, using some form of guilt by association (which works if someone doesn't like the tech sector in general) to taint EA as a movement.

Nowadays, effective altruism’s epistemology and tools often parallel those of the tech industry. At its heart, it is driven by the principle of maximization and informed by statistical analysis. With these methods, effective altruists make arguments such as maximizing disability-adjusted life-years by allocating time and money towards initiatives that provide a mosquito net for a child in a poor country (rather than providing direct donations to the child’s family).

At first glance, this all seems straightforward and uncontroversial, even if it speaks of “doing good” in the terms of a business investment. If we want to make the world a better place by giving money away, of course we should maximize the good that each dollar does, you might say. And besides, how bad can an ideology be if its principal goal is to give billions of *maximally effective* dollars away to charity each year?

Note that until now no argument against EA has been made, but lots of suspicion has been raised in the essay

These are fair points, and I don’t entirely disagree with them. Billions of dollars per year from the wealthy tech elite used to convince people to go vegan or to give to non-religious health NGOs or to end factory farming is not, on its face, a bad thing.

But effective altruism is just the tip of the utilitarian iceberg. Beneath the visible argument that giving must be optimized in order to be “good,” there are an array of ideologies in close contact with effective altruism that are far stranger, more ethically dubious, and highly influential. Foremost among them is the ideology of longtermism — an ideology that Phil Torres (a former longtermist himself) has described as “one of the most influential ideologies that few people outside of elite universities and Silicon Valley have ever heard about.”

Longtermism originates from the same spaces and places as effective altruism, including the rationalist community and online blogs like LessWrong. If we can survive the next few hundred years, colonize other worlds, and learn to simulate conscious beings with computers, longtermists say, there could be a lot of people that exist in the future. The high end of the range is 10^58 (10 billion trillion trillion trillion trillion), but most say there could be at least quad- or quintillions. If all beings are equally important, regardless of when or where they exist, then doing something right now that has a tiny probability (say, a one in one quintillion chance) of affecting a tiny fraction of the future people (0.00000000000000000000000000001% — that's 28 zeros), could still potentially change the lives of more than 10 billion people, more than the nearly 8 billion people existing on the planet today.

From this time-and-space-agnostic view, the current state of the world and the humans in it begins to seem miniscule, a grain of sand in the beach of a future that may span galaxies and trillions of years.

With this perspective, new priorities emerge. Instead of focusing on the material inequities of our world, longtermists think that the way to do the most good in the long-term is to focus on the things that could prevent this unthinkably large set of futures from coming to pass. Thus, we ought to focus on studying and reducing existential risks — potential developments that could wipe humanity out completely or permanently constrain humanity before it achieves its full potential. Existential risks include global totalitarian governments, deadly pandemics, asteroids, nuclear wars, misaligned hyper-intelligent AI systems that destroy human civilization, and other unspecified horrors.

“Strong longtermism” is a variant of longtermism advanced by Hilary Greaves and William MacAskill that argues that, “for the purposes of evaluating actions, we can in the first instance often simply ignore all the effects contained in the first 100 (or even 1000) years, focusing primarily on the further-future effects.” An extended quotation from their paper is illustrative of the impacts that these ideologies can have. Let’s say Shivani, a philanthropic donor, wants to donate $10,000 to the cause that will do the most good:

Suppose, for instance, Shivani thinks there’s a 1% probability of a transition to a world government in the next century, and that $1 billion of well-targeted grants… would increase the well-being in an average future life, under the world government, by 0.1% with a 0.1% chance of that effect lasting until the end of civilisation, and that the impact of grants in this area is approximately linear with respect to the amount of spending. Then, using [a] figure of one quadrillion lives to come, the expected good done by Shivani contributing $10,000 to this goal would… be 100 lives. In contrast, funding for the Against Malaria Foundation, often regarded as the most cost-effective intervention in the area of short-term global health improvements, on average saves one life per $3500.

In simple terms, Shivani can save 35 expected future lives for each current life she can save. In this instance, the premier example of donating based on effective altruist principles is utterly ineffective compared to the logic of longtermism and existential risk. The idea that studying existential risk and reducing it by a fraction of a percent could improve the lives of untold future millions is a powerful one.
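The arithmetic in the quoted passage can be checked directly. A minimal sketch, using only the numbers the Greaves/MacAskill quotation reports above (none of these inputs are my own estimates):

```python
# All inputs are the essay's reported figures, not mine.
donation = 10_000            # Shivani's donation in dollars
grant_pool = 1e9             # $1 billion of well-targeted grants
p_world_gov = 0.01           # 1% chance of a world-government transition
wellbeing_gain = 0.001       # +0.1% well-being per average future life
p_lasting = 0.001            # 0.1% chance the effect lasts to civilisation's end
future_lives = 1e15          # one quadrillion lives to come

# Impact is assumed linear in spending, so scale the full-pool effect
# down by the donation's share of the pool.
expected_lives_longtermist = (donation / grant_pool) * p_world_gov \
    * wellbeing_gain * p_lasting * future_lives

# Against Malaria Foundation benchmark: ~1 life saved per $3,500.
expected_lives_amf = donation / 3500

print(expected_lives_longtermist)                       # 100.0
print(expected_lives_longtermist / expected_lives_amf)  # 35.0
```

The ratio of 35 is exactly the "35 expected future lives for each current life" figure in the text.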

Of course, longtermism is not the principal motivation for most effective altruists, and there are gradations of how far one can subscribe to this argument. Regardless, in recent years, it has increasingly begun to drive giving, set priorities, and define the movement as a whole, prompting some to ask if effective altruism is just longtermism now.
This is fair (acknowledging that not all EA is LT; and the links there point to good discussion in the EA forums)

Why understanding effective altruism and longtermism is important

Yes. Interestingly, EA was criticised in the past for not paying enough attention to politics!
A bit unclear what this means. Is it 'EA is not just about individual choice; what EAs believe now affects everyone because they are getting into politics'?

Effective altruism and longtermism are ideologies that are increasingly influential among the richest men and the most prestigious institutions in the world, shaping policy and capital allocation. The movement has shifted to pushing young adherents towards careers in government, with a focus on reducing existential risks through policy. These recent developments take our discussion away from questions of rhetoric and morals (for the moment) and squarely into material considerations.

Yes, but Elon has been wanting to make mankind a multiplanetary species way before EA was a noticeable thing

Longtermism and existential risk are particularly influential ideologies among those who made fortunes in technology and in elite institutions. Elon Musk has cited the work of Nick Bostrom (who coined the term existential risk in 2002) and has donated millions to the Future of Humanity Institute and Future of Life Institute, sister organizations based out of Oxford. Jaan Tallinn, a founder of Skype worth an estimated $900 million in 2019, also cofounded the Center for the Study of Existential Risk at Cambridge, and has donated more than a million dollars to the Machine Intelligence Research Institute (MIRI). Vitalik Buterin, a cofounder of the Ethereum cryptocurrency, has donated extensively to MIRI as well. Peter Thiel, the radical libertarian donor, early Trump supporter, and funder of JD Vance’s Ohio Senate campaign, delivered the keynote address at the 2013 Effective Altruism summit.

Longtermism is also increasingly popular among rank-and-file effective altruists, to the point where many consider them to be synonymous. According to data from the Open Philanthropy Grants database, in 2021 effective altruists donated $92 million to AI risk research, $21 million to biosecurity and pandemic preparedness, and $10.5 million to global catastrophic risk research. Altogether, this $125 million towards longtermist existential risk research represents a larger slice of donations than any other individual cause. And the allure of AGI (Artificial General Intelligence) — a major focus/fear of effective altruism and longtermism — is especially clear in industry, where multiple startups and big tech companies pour billions of dollars into research and development.

These bureaucrats, donors, research institutes, and companies are by no means an ideological monolith, nor do they necessarily represent the beliefs of the average effective altruist. However, this web of entities has one key feature — intellectual, institutional, and financial capital. A relatively small cadre of longtermist academics housed within and legitimized by influential institutions can advance ideas that guide how governments and venture capitalists think about and shape the future.

Towards ineffective altruism

Bruh the essay is 66% of the way in and you haven't started to criticise!

So far, in the spirit of critique, I’ve laid out the philosophical underpinnings of the effective altruism and longtermism movements and the material superstructures that have arisen from those foundations over the past two decades.

Patently false. The linked piece just tells the reader that there may be other causes that are more impactful, and tries to use consensus evidence to estimate how much of a problem climate change actually is. The 80k piece in fact says: We’d love to see more people working on this issue, but — given our general worldview — all else equal we’d be even more excited for someone to work on one of our top priority problem areas.

It seems to me that the seemingly limitless bounds of longtermism are ultimately a moral carte blanche on anything we do (except make the species extinct). It’s easy to see how this position can ultimately lead to reprehensible outcomes. Just this week, 80,000 Hours released a piece that argues for effective altruists to not focus their careers on climate change — a process which will uproot hundreds of millions of mostly non-white poor people and cause billions to experience chronic water scarcity — because it has a low chance of becoming uncontrollable and turning Earth into Venus. Other longtermists worry that their ideology would provide rationalizations for genocide if political leaders took it literally. Mathematical statistician Olle Häggström, usually a proponent of longtermism, imagines

It's a formula in the sense that cooking a paella can be reduced to buying ingredients + cooking. Sure, at some level, but high-level guidelines are not meant to give you all the details! What would a non-simplistic formula for doing good be? Reading thousands of philosophy books and then going with your best guess? Maybe. But EA-in-practice is anything but simplistic; in some cases it involves lots of interviews with experts (as in the Open Philanthropy case), spreadsheets, and some statistics.

a situation where the head of the CIA explains to the US president that they have credible evidence that somewhere in Germany, there is a lunatic who is working on a doomsday weapon and intends to use it to wipe out humanity, and that this lunatic has a one-in-a-million chance of succeeding. They have no further information on the identity or whereabouts of this lunatic. If the president has taken Bostrom’s argument to heart, and if he knows how to do the arithmetic, he may conclude that it is worthwhile conducting a full-scale nuclear assault on Germany to kill every single person within its borders.

Besides the moral hazards of advocating these positions, these ideologies provide an overly simplistic formula for doing good: 1) define “good” as a measurable metric, 2) find the most effective means of impacting that metric, and 3) pour capital into scaling those means up.

Now *this* can be a critique, but it's a shame it is not developed at greater length and appears only near the end of the essay. What are those immeasurable or non-optimizable things? (EA would not recommend trying to optimize the unoptimizable.) Deep down, this is a critique of a certain kind of utilitarianism, a critique which can be made, but it has to be made explicitly, not just hinted at

But following the formula of effective altruism is clearly not all that being good requires. There are boundless ways of doing good that are fundamentally immeasurable or, if they are measurable, may not be optimized. Nevertheless, this universe of actions demands our consideration. To follow in the footsteps of Timnit Gebru (and to be purposefully contrarian), let’s call the philosophy of seriously considering the merits of doing good immeasurably or suboptimally ineffective altruism.

This seems like a bad example by the authors' own definitions. Cash transfers, of all things, are the easiest thing to evaluate. At the very least one can compare what $10 buys the local homeless vs what it buys the global poor. You get more of your 'helping the poor' value by donating in an effective way. What the authors should instead argue, and do so directly, is that it is more valuable to donate to the local poor for other reasons.

Ineffective altruism might look like giving $10 to a houseless person who asks for it. It might look like organizing to ensure that as many people as possible have access to basic material needs like food, housing, and healthcare. It might look like the ephemeral work of knitting a social and political community together. After all, how can one quantify the resiliency of a particular neighborhood? None of these actions would be particularly “effective,” and yet they might also have more of a tangible impact than unknowably reducing an existential risk by some fraction of a percentage point. They also show an understanding of one’s responsibilities to their community, how strengthening community is also important for our shared future, even if it isn’t measurable.

A large chunk of EA money already goes towards this end; think of deworming or malaria nets. Perhaps the authors of the piece could offer some reasoning as to how shipping food to Africa might be better
Fair, this is indeed a hard problem, but it would be an interesting one to try to think through. You could first sit down and try to define what you mean by resilient, then make some surveys and go out to ask residents to get a sense of how the 'hood is doing. Doing that over time gives a sense of how things are evolving. Perhaps something as simple as taking the first principal component of a dataset covering various kinds of social ills (poverty, crime rate, drug abuse) approximates one's intuition well enough to, say, prioritize which neighborhoods need the most help. Something like this is, paradoxically, what real-world 'ineffective altruists' at NGOs and local governments would engage in to prioritize interventions at a more local and immediate scale. 'Local' or 'non-utilitarian' is something one can accept without lazily accepting the 'ineffective' part
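A toy sketch of that first-principal-component idea. The data, the number of neighborhoods, and the indicator names are all made up for illustration; this is just the mechanics, not a real deprivation index:

```python
import numpy as np

# Hypothetical data: rows are neighborhoods, columns are social-ill
# indicators (say poverty rate, crime rate, drug-abuse rate).
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 3))

# Standardize each indicator so no single one dominates the component.
X = (X - X.mean(axis=0)) / X.std(axis=0)

# First principal component via SVD of the standardized data matrix.
_, _, Vt = np.linalg.svd(X, full_matrices=False)
index = X @ Vt[0]  # each neighborhood's score on the first component

# Rank neighborhoods by the index to prioritize help. (The sign of a
# principal component is arbitrary, so orient it against the raw
# indicators before reading "high score" as "worse off".)
ranking = np.argsort(index)
print(ranking)
```

With survey data collected over time, the same score tracked per neighborhood would give the "how is the 'hood evolving" signal mentioned above.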
Sure, in the EA sense (of global utilitarianism), but I get the sense that the critique is mixing up values (local, quantifiable) and views on the extent to which we can use quantitative methods to decide how to act

Ineffective altruism eschews metrics, because “What does doing good look like?” should be a continuously-posed question rather than an optimization problem. As an ideology of allocating resources, it is recognized as explicitly political, rather than cloaking itself in the discourse of science and rationality. It allows us to get outside of the concept of altruism entirely — a concept that feels limiting in its focus on the actions of the individual — and instead consider a paradigm of collective, democratic mutual aid. Most importantly, ineffective altruism allows us to ask harder questions than effective altruism does: questions about who and what we value.

This is an ironic thing to say, given that EAs are constantly posing that question and evolving as a result. EAs were once criticised for focusing too narrowly on things we could measure directly at the expense of say a career in politics. GiveWell constantly reassesses their charity picks.
Individualism does not seem to be built into altruism; if you take an NGO, one could talk about the organization being altruistic without any verbal gymnastics. What I suspect the authors might want to say here is that they want to bring together altruism and self-interest, in the sense that a close group of friends or a couple might modify each other until the right thing to do and what one wants to do are one and the same. Derek Parfit famously mused on this question, but this essay is not quite there. EA starts with a directional definition of good (depending on how strong/weak the definition, it will be more or less totally utilitarian). It is very fruitful to be able to have a label that conveys interest in certain values, to facilitate working together in furthering those values rather than arguing about them all the time. There is a space for that, but the name of that space is ethics, not ineffective altruism
Another capitalism bad+guilt by association move
Sounds like... EA. Survival and Flourishing Fund is a thing!

What might “moral good” look like outside of market-derived values (like the maximization principle)? How can we collectively decide to allocate resources? How can we build societies based on principles that cannot be measured, like mutual respect and solidarity? How can we eliminate material misery from the world? What might we do to ensure the flourishing of future generations, rather than just their survival? How can we depart from a society where those who have the privilege to choose to care about others can, and move towards a society where everyone has the power to care about others and must?

People all over the world have been attempting to answer these questions for generations. After massive street protests in 2019 in Chile, 80% of the population voted to redraft the nation’s constitution — an effort that is currently in progress and will be finalized this September. In Taiwan, Digital Minister Audrey Tang is building effective tools for building consensus and making decisions online. Tang helped enable a highly effective set of COVID-19 policies that kept the disease largely outside Taiwan for more than two years, influenced what digital democracy looks like on the island, and inspired other online civil processes around the world. And in the United States, the last few years has seen rising interest in small-d democratic institutions like labor unions and mutual aid organizations. These efforts may be inefficient or messy or unpredictable, but are good in part because of those facts, not in spite of them.

It is one thing to say that something has no choice but to be unpredictable or messy (e.g. information collection has a cost, so we may want to be locally suboptimal given some global constraint on resources). But this doesn't sound like that; it sounds like an emotional rejection of rational thinking itself. Fine if you want to go there, but that is very, very controversial and needs justification!
As we get some distance from effective altruism and longtermism, we can also begin to consider other ways of thinking about the long-term future. Our conceptions of the future inform our actions today, and the future is much too important to cede to an ideology with the ethos and rhetoric of longtermism. Seventh-generation decision making, for example, is an indigenous principle that is enshrined in the Constitution of the Iroquois Nation. It mandates Iroquois leaders to consider the effects of their actions over seven generations, encompassing hundreds of years. Seven generations is a long time, but it is also a finite amount of time. Although this framework prioritizes long-term thinking, it doesn’t bring the weight of infinity to bear on the present. And unlike longtermism, the seventh-generation principle doesn’t pretend to be scientific. It doesn’t rely on unfalsifiable guesses about a future we can’t even imagine to assign expected values to different political decisions; rather, it makes thinking about the future a moral imperative.
The framework there is one that makes sense, but it is interesting to tease out why: it is a framework that applies in a particular context, to a circumscribed set of people (the Iroquois Nation) rather than mankind as a whole. In that setting, the odds that a recognizable Iroquois Nation will exist a billion years into the future are so low that it is reasonable to discard everything beyond a few hundred years, as they do. This is not incompatible with being an EA; that is, the time horizon applied to your 'universal altruism' can be infinite whereas the one for your communitarian altruism can be a few generations

Philosopher Karl Popper on the dangers of an exclusive focus on the utopian ideal of the far future over the material concerns of the present day:

We must not argue that a certain social situation is a mere means to an end on the grounds that it is merely a transient historical situation. For all situations are transient. Similarly we must not argue that the misery of one generation may be considered as a mere means to the end of securing the lasting happiness of some later generation or generations; and this argument is improved neither by a high degree of promised happiness nor by a large number of generations profiting by it. All generations are transient. All have an equal right to be considered, but our immediate duties are undoubtedly to the present generation and to the next.

If you think basic Bayes is opaque math, I think it denotes an issue with the writers, not EA... to put it mildly. In the depths of the EA forum one can find very technical questions, but *most* of EA discourse is very clear. Card-carrying EA organizations strike me as honestly striving to be clear about what they write and why they make the decisions they make (thinking of OpenPhil here)

When we critically examine effective altruism and longtermism, we can see them as falsely utopian ideologies cloaked in the opaque vocabulary of science and math. Let’s instead strive for a world where altruism doesn’t have to be maximally effective for it to be worthy, where doing good doesn’t have to be optimized, where morals aren’t a function of the market.

This is something I have seen before: bunching together utilitarianism with markets or capitalism. It might happen because some of the language one sees in economics (utility) is similar, though economics is mostly a descriptive science that can be applied to analyze any economic system. But if we analyze carefully what the authors seem to mean, it doesn't make much sense: the EA values are totally independent of any economic system. Markets do facilitate calculations (hey, there are prices!), but if that is the reason for that line, we would have to say that most decisions are a function of the market (my choice of where to buy butter beans, for example), and that this is a bad thing

Links (58)
17 May 2022 | 12:00 am

Unsurprisingly, human capital matters more than buildings for scientific output. Also, some evidence for the Newton hypothesis.

Equipment Supply Shocks. And how Steve Jobs got a law changed to be able to get every school in California a computer.

Inside Fast's (an apparel company larping as a payments company) rapid collapse

Using ML to design hardware for specific NN architectures. Interesting as well how the authors paid attention to economics (considering engineering salaries, semiconductor manufacturing costs, etc.)


A case study in early-stage startup execution

Why is AI hard and physics simple?

Riva on toxoplasmosis and the role of germs in chronic diseases. After the Epstein-Barr & MS paper, I think it's worth looking more into this.

Black-White disparities in life outcomes in the US start early in life (ht/ Scott Alexander)

Some new Gwern essays, on teaching reading of scientific research

The Terra blockchain (Anchor, Luna, UST) imploded. Though this is more Ponzi-like than the article says. Proper fractional reserve banking requires more stable assets (short-term ones, not mortgages or long-term bonds as is common practice now; also, FDIC bad: we can have our free-market banks and eat our stability cake too)

Ben Reinhardt on housing researchy ideas in startups or not

Gato, DeepMind's new model: at first I thought it was probably the closest thing to AGI I've seen so far, a single system that can achieve OK performance across tasks as diverse as controlling a robot arm and answering questions. Sure, it still makes silly mistakes ("What's the capital of France?" -> "Marseille"), but as we know with AI, if the proof of concept kind of works, taking that to human or superhuman performance is a matter of more dakka. Some Gato discussion and skepticism in this thread. Also see here and here to curb your enthusiasm.

How does Tenstorrent work?

Scott Alexander reviews a book on Xi Jinping

Andy Matuschak, interviewed

Microbes have structures in them that trap gas, enabling them to float

Aubrey de Grey & Charles Brenner debate

Lab mice are not getting chonkier

A Longevity intervention combination I wanted to see for a while: senolytics and reprogramming

Roger's Bacon's extremely spicy essay on progress studies

I ask Twitter "Has anyone used LLMs for science?"

Michael Nielsen asks twitter what to read to understand Transformers

Students are asked to do some data analysis in a dataset where the datapoints are arranged in the shape of a gorilla. A number of them missed it.

Richard Ngo reviews a book on NY nightlife

Ben Kuhn on 1:1s

Book review of Making it in Real Estate

Sarah Constantin's newsletter, lately covering energy storage

Lada Nuzhna on deep learning for biochemistry

Transformers for software engineers

Brief interview with Luke Gilbert, a new core investigator at Arc Institute

Everything Everywhere All At Once was good
