From my bsky feed - two consecutive posts. Nature Sci Rep publishes incoherent AI slop. eLife publishes a paper the reviewers didn't agree with, making all the comments and responses public with thoughtful commentary. One of these journals got delisted by Web of Science over quality concerns about not doing peer review. Guess which one?
in reply to Dan Goodman

oooh what's the 2nd article? 👀
(I know, that's not the point of your post but I'm already convinced that Nature Publishing Group sucks and that #eLife is better :) )
in reply to El Duvelle

@elduvelle follow this thread because the commentary is almost as interesting as the paper (which also looks very cool btw):

bsky.app/profile/behrenstimb.b…

in reply to Dan Goodman

ooh.. Just seeing the title I remember having a look at the preprint some time ago and thinking that it made quite bold claims without properly understanding the biology of the "spatial cells". I'll have a look at the thread & eLife reviews when I get a chance!
in reply to El Duvelle

So the author says

"Intuitive cell types are found in random artificial networks using the same selection criteria neuroscientists use with actual data."


This is already wrong. The proper criteria for selecting, say, place cells involve not only high spatial information content but also stability of the spatial firing across sessions (if you want to demonstrate that a new brain region has place cells) or, at least, within-session stability (e.g. comparing session halves), and in 2D space, so that the tuning cannot be explained by distance or time coding. You might see some papers pass peer review without these quite stringent criteria, but that doesn't mean place cells are not a thing.

I doubt any random network will produce this by chance. But.. I'll read the paper :)
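
[Editor's note: for readers outside the field, here is a minimal sketch of the kind of two-part criterion described above - high spatial information in a 2D rate map plus within-session (split-half) stability. This is an illustration only, not the paper's analysis or any particular lab's pipeline; the function names and thresholds are placeholder assumptions.]

```python
import numpy as np

def spatial_information(rate_map, occupancy):
    """Skaggs-style spatial information (bits/spike):
    I = sum_i p_i * (r_i / r_mean) * log2(r_i / r_mean),
    where p_i is the occupancy probability of 2D spatial bin i and r_i its firing rate."""
    p = occupancy / occupancy.sum()
    r_mean = np.nansum(p * rate_map)
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = rate_map / r_mean
        terms = p * ratio * np.log2(ratio)
    return np.nansum(terms)  # bins with zero rate contribute 0 (their limiting value)

def split_half_stability(rate_map_half1, rate_map_half2):
    """Within-session stability: Pearson correlation between the 2D rate maps
    computed from the first and second halves of the session."""
    a, b = rate_map_half1.ravel(), rate_map_half2.ravel()
    ok = np.isfinite(a) & np.isfinite(b)
    return np.corrcoef(a[ok], b[ok])[0, 1]

def looks_like_a_place_cell(rate_map, occupancy, rate_map_half1, rate_map_half2,
                            info_threshold=0.5, stability_threshold=0.5):
    """Both criteria must hold; spatial information alone is not enough.
    Thresholds here are illustrative placeholders, not field standards."""
    info_ok = spatial_information(rate_map, occupancy) > info_threshold
    stable = split_half_stability(rate_map_half1, rate_map_half2) > stability_threshold
    return info_ok and stable
```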

in reply to El Duvelle

@elduvelle I think this is fine - you don't have to agree with it to think that it's a worthwhile contribution. In the majority of papers I read, I find something that makes me think the paper is fundamentally wrong, but that doesn't mean they shouldn't have been published or that they didn't contribute to my understanding. I think it's problematic that we pretend that published papers are - by virtue of being published - correct.
in reply to El Duvelle

@elduvelle
Not entirely; perhaps you feel strongly about it because you also work on hippocampus and spatial navigation.

To me, cell types are a useful categorisation, but not a determining one. In some ways I'm reminded of the use of brain regions and their functional relationships as a surrogate for the actual synaptic connectivity diagrams, or connectomes.

Cells as entities can be genetically and experimentally manipulated, so we tend to assign attributes and properties to them, when in good part they only have those properties as a function of the inputs they receive and the specific position they hold in the circuit.

There's also an absurd amount of plasticity at the circuit level when some neurons are missing or inactivated. That is one characteristic of biological networks: robustness to perturbation, and graceful rather than catastrophic degradation of system function.

#neuroscience #CellTypes

in reply to Albert Cardona

I feel strongly about it because indeed this is my field and I know how easy it is to mix up something that "could be a place cell" vs "a place cell".
Unfortunately it seems that the authors of the paper criticised the former, which is a strawman, rather than the latter. So I'm not actually sure it is a useful contribution - rather, it risks getting people to think that we have very lenient criteria for defining "functional cell types".
in reply to El Duvelle

@elduvelle
The field in general, beyond hippocampus and place cells, has a very broad, expedient, working definition of a cell type. It's unfortunate this paper focuses only on the hippocampus because the issue is generic across all of neuroscience.

What the paper highlights is what I was mentioning about resilience and robustness. Remove some cells thought to be critical and a biological neural network's function degrades but persists, operating in some capacity, sometimes even at full capacity.

That a biological neural network has the ability to incorporate new neurons into a circuit (via neurogenesis) is very much an integral part of such robustness, and also a vector for circuit incremental development and for circuit change through evolution.

That also ties in with the surprises found when studying randomly wired artificial neural networks: they can be trained to perform a task, and often a tiny fraction of the overall network is responsible for the capacity to perform that task. But remove that fraction and some other part of the network steps in.

in reply to Albert Cardona

@albertcardona
Yes, these are great points, but haven't they been made many times before? We know that lesion or inactivation studies are not perfect, because other regions may take over and because a lot of brain processing is distributed, redundant & plastic anyway.

That wasn't really my original point though... I am annoyed because they seem to attack a strawman and make it look like they are attacking the foundations of an entire field. But it seems that they are just not aware of what the field uses to prove the existence of different functional cell types. The reviewers are also saying this, and even Tim Behrens says in the thread that the title is hyperbolic and the abstract is not justified.

Do we really want to spend time reading and discussing a paper if it's based on ignorant or hyperbolic claims? I am not sure...

in reply to El Duvelle

@elduvelle @albertcardona it's not my area of expertise, but my impression from Tim's thread is that it is interesting but the interpretation is too strong?
in reply to El Duvelle

@elduvelle @albertcardona
I know nothing about place cells (really nothing). But I do feel all three of you have a point, and it reminds me of the debates about M/E cells in #drosophila circadian rhythms. They are useful most of the time, but sometimes they don't fit the data, and it is fine to report these cases and reframe the definition. But sometimes you can see that someone just wants to make a wild assertion - and yet sometimes they are right! What mechanism allows us to manage all of this?
in reply to Ko-Fan Chen 陳克帆

@kofanchen @albertcardona
In addition... unfortunately, even the eLife abstract is wrong:

"some degree of spatial tuning (e.g., place cells) [...] emerges in sufficiently complex systems trained to process visual information. This intriguing observation [...]".


First, something with "spatial tuning" is not the same as a "place cell". Second, models that create "view cells" from visual inputs have been around for decades (e.g. Arleo & Gerstner 2000). This is not intriguing. The hard part is turning that into genuinely place cell-like activity that mimics the properties of actual place cells.

in reply to El Duvelle

@elduvelle @kofanchen @albertcardona this is all really interesting and probably something I'll have to learn about some day. But I think there's a key point you're missing here. I could find papers that I have just as strong disagreements with in any journal that intersects my field. But usually you don't get to see that. Strong disagreement with a paper is great. It helps a field make progress when someone who doesn't understand the subtleties as well as an expert blunders in and starts smashing things.
in reply to Dan Goodman

@kofanchen @albertcardona

OK, maybe I am being too specific about the details of my disagreement.
Let's say some people made a model of a starfish. But they write a paper about it and call it a horse. They say "look, we made a model of a horse, because this thing has legs and we consider legs to be a defining criterion of a horse".
Now, say they cut off some of the legs of their model starfish and test its mobility. They'll conclude something like "horses can still gallop when they lose two legs" and "also, people should be careful about classifying animals into species, because species don't really mean anything".

Would that paper be interesting and worth reading? I don't think so. It's just using a false premise to reach a weird conclusion. I am open to being proven wrong (and actually hope to be) about whether their starfish is, indeed, a horse, but until then, I'll just be annoyed that people are making a big deal of this.

in reply to El Duvelle

@elduvelle @kofanchen @albertcardona I'm absolutely open to the idea that this paper is as wrong as you say it is. Like I say, not my area. But even if so, that happens in science (surprisingly often) and some of those papers can set back a field for years. This one is less likely to do that because of the easily accessible opinion of the reviewers.
in reply to Dan Goodman

@kofanchen @albertcardona

OK, if the point is that it's better for this paper to be published in eLife than in a more obscure journal (say, one with unavailable peer-review reports), yes, I agree with that. But could it not have just stayed a preprint? It seems that the authors didn't really listen to the reviewers' suggestions and dismissed most of their reservations without actually trying to address them, so I'm not sure the reviewing process was useful to them, or to the paper. And the time spent reviewing it means those reviewers couldn't review something else that might have been more valuable instead.

I would also have preferred the eLife summary to accurately reflect the flaws of the paper, especially the lack of logic behind the main reasoning. The current one gives some legitimacy to the paper's unsupported results & claims. I usually defend eLife when people criticise the new model, saying "you can read the summary", but for that to work, the summary has to be accurate..😕

in reply to El Duvelle

@elduvelle @kofanchen @albertcardona but if your takeaway from this is that this lab doesn't engage with reviewers' concerns and just wants to publish their take regardless, then that's incredibly useful to know, right? I'd like to know that about more labs! (Not saying this is the only thing one could take away from this paper btw, but it's one perfectly valid thing to take away.)
in reply to Dan Goodman

@kofanchen @albertcardona

Maybe.. in that case I don't really want to do that as I know one of the authors and I'm sure they're able to do good science. I just think they misunderstood some fundamental things in this case. Do we want to generalise bad (or good) impressions from one paper to all past and future papers from a lab? Ugh.. I don't know 🤷

Probably time to go to bed :)

in reply to Dan Goodman

@elduvelle @kofanchen @albertcardona I think the fundamental point here is that we have to get away from the idea that scientific papers are correct and that the job of publishing is to weed out incorrect ones. All papers are wrong. A large number are knowingly wrong. They're moves in a game, but it's a game whose structure does - amazingly - tend to lead to better ideas despite taking a lot of wrong turns and (in retrospect) wasting a lot of time. That convergence on better ideas happens quicker the more information we have and the more transparent the processes are.
in reply to Dan Goodman

Warning - long post about publishing more vs publishing "better"


in reply to El Duvelle

@elduvelle @kofanchen @albertcardona great example! I don't think we can catch all problems before publication, and even if we could, the cost would be crippling. I think we'd need hundreds of reviewers per paper, each given weeks to months of time to review. We'd never get anything done. And it wouldn't even work, because some problems only become apparent when you try to use results to push onwards, which we wouldn't be doing if we were spending all our time reviewing. I totally agree that fields can get sidetracked for years by an impressive-sounding result with flaws. My point is that we should make it easier for those flaws to become visible by increasing transparency and doing post-publication review, rather than pinning our hopes on an - in my view doomed - attempt to stop it from ever being published in the first place.
in reply to El Duvelle

Your example actually reminds me of the historical case of someone deliberately presenting a deer as a horse - the person did it on purpose, so I think it is closer to the case of this paper.

en.wiktionary.org/wiki/%E6%8C%…

@neuralreckoning @albertcardona

in reply to Redish Lab

@adredish @elduvelle @albertcardona @jonmsterling the only difference I see is in eLife's favour. I know for a fact that there are papers out there published in good journals where there was only one reviewer and that reviewer stated that the paper has misleading claims not justified by the results. I know because I was that reviewer. And it didn't happen only once. So the fact that there is one paper in eLife where the reviewers weren't keen doesn't seem like a very high proportion to me, and there is the huge advantage that we can actually see it. In the cases I know about I'm the only person who knows this. But those papers are being listed on CVs. What we need to do is not to consider eLife papers as a downgrade compared to a "refereed" one, but get over the deeply wrong idea that having gone through an opaque peer review process counts for much.
in reply to Dan Goodman

@adredish @elduvelle @albertcardona @jonmsterling

eLife forces a reviewing body to actually look at the published work and evaluate its importance rather than blindly relying on journal impact factor. (Here, "reviewing body" means anyone evaluating someone's CV.)

They provide a standardized nomenclature, making it somewhat easier for someone outside the field to understand the reviews.

The paper in question was considered important but incomplete.

in reply to MCDuncanLab

@MCDuncanLab @adredish @elduvelle @albertcardona @jonmsterling

I think it's revealing to consider traditional journals as a perturbation on the eLife model instead of vice versa:

- rate quality of evidence and impact via review
- if a threshold isn't met, don't publish
- publish, keeping the ratings secret - they are IMPLIED based on journal reputation

... whereas eLife always "publishes" but also always reveals the evaluation

in reply to B Thuronyi

@thuronyi @MCDuncanLab @adredish @elduvelle @jonmsterling

eLife senior editors desk-reject the majority of papers received. I don't have exact numbers, but in aggregate it may be about two-thirds. Most of those rejected (1) aren't considered to sufficiently advance the field or to provide a validation of an existing result, or (2) aren't proper attempts at rigorous scientific research, or, sometimes, (3) there isn't an editor with a matching background and expertise to evaluate the paper and guide its review process.

in reply to Dan Goodman

@elduvelle

The paper:

"The inevitability and superfluousness of cell types in spatial cognition", Luo et al. 2025
elifesciences.org/reviewed-pre…

Quite the poster child for why at @eLife we support publishing papers that we consider important and yet label as incomplete: the questions are worth asking, the discussion has to happen, and the suggested experiments need to be voiced and aired, to prompt someone to take them on in the lab.

#ScientificPublishing

in reply to Albert Cardona

@albertcardona @elduvelle @eLife yes I'm so glad to see it! So far most eLife papers just looked like normal journal papers, but this shows what the new model can do. Brilliant stuff.
in reply to Dan Goodman

@elduvelle @eLife

Can't agree more. Scientific publications are the means for scientists to talk to each other in a formalised way, not a way to accrue points towards career advancement or funding. Let's retake that original purpose from the choking grasp of the bean counters.