From my bsky feed - two consecutive posts. Nature Sci Rep publishes incoherent AI slop. eLife publishes a paper the reviewers disagreed with, making all the comments and responses public alongside thoughtful commentary. One of these journals got delisted by Web of Science over quality concerns about not doing peer review. Guess which one?

El Duvelle
in reply to Dan Goodman • • •(I know, that's not the point of your post, but I'm already convinced that Nature Publishing Group sucks and that #eLife is better :) )
Dan Goodman
in reply to El Duvelle • • •@elduvelle follow this thread because the commentary is almost as interesting as the paper (which also looks very cool btw):
bsky.app/profile/behrenstimb.b…
El Duvelle
in reply to Dan Goodman • • •
El Duvelle
in reply to El Duvelle • • •So the author says…
This is already wrong. The proper criteria for selecting, say, place cells involve not only high spatial information content but also stability of the spatial firing across sessions (if you wanted to demonstrate that a new brain region has place cells) or, at least, within-session stability (e.g., between session halves), and recording in 2D space so that the tuning cannot be about distance or time coding (see the sketch after this post). You might see some papers pass peer review without these quite stringent criteria, but that doesn't mean that place cells are not a thing.
I doubt any random network will produce this by chance. But… I'll read the paper :)
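A minimal sketch of the two checks described above, assuming hypothetical precomputed inputs (a 2D firing-rate map with an occupancy map, plus rate maps for each session half). The Skaggs et al. (1993) information measure is standard, but the function names and the 0.5 bits/spike and 0.5 correlation thresholds are illustrative assumptions, not values from the thread or the paper:

```python
import numpy as np

def skaggs_information(rate_map, occupancy):
    """Spatial information in bits/spike (Skaggs et al., 1993).
    rate_map: 2D firing-rate map (Hz); occupancy: time per spatial bin (s)."""
    p = occupancy / occupancy.sum()          # occupancy probability per bin
    mean_rate = np.nansum(p * rate_map)      # session mean firing rate
    ratio = rate_map / mean_rate
    log_term = np.log2(ratio, out=np.zeros_like(ratio), where=ratio > 0)
    return np.nansum(p * ratio * log_term)   # sum of p * (r/R) * log2(r/R)

def split_half_stability(map_first_half, map_second_half):
    """Pearson correlation between rate maps from the two session halves."""
    a, b = map_first_half.ravel(), map_second_half.ravel()
    ok = np.isfinite(a) & np.isfinite(b)
    return np.corrcoef(a[ok], b[ok])[0, 1]

def passes_place_cell_criteria(rate_map, occupancy, first_half, second_half,
                               info_thresh=0.5, stability_thresh=0.5):
    """Both criteria must hold. The fixed thresholds are placeholders:
    real studies typically calibrate them against shuffled spike trains."""
    return (skaggs_information(rate_map, occupancy) > info_thresh and
            split_half_stability(first_half, second_half) > stability_thresh)
```

The stability check is the point being made here: a unit that clears an information threshold by chance in one half of the session will generally not reproduce the same map in the other half.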
Dan Goodman
in reply to El Duvelle • • •
Albert Cardona
in reply to El Duvelle • • •@elduvelle
Not entirely; perhaps you feel strongly about it because you also work on hippocampus and spatial navigation.
To me, cell types are a useful categorisation, but not a determining one. In some ways I'm reminded of the use of brain regions and their functional relationships as a surrogate for the actual synaptic connectivity diagrams, or connectomes.
Cells as entities can be genetically and experimentally manipulated, so we tend to assign attributes and properties to them, when in good part they have those properties only as a function of the inputs they receive and the specific position they hold in the circuit.
There's also an absurd amount of plasticity at the circuit level when some neurons are missing or inactivated. That is one characteristic of biological networks: robustness to perturbation, and graceful rather than catastrophic degradation of system function.
#neuroscience #CellTypes
El Duvelle
in reply to Albert Cardona • • •Unfortunately it seems that the authors of the paper criticised the first one, which is a strawman, rather than the last one. So, I'm not actually sure it is a useful contribution - rather, it risks getting people to think that we have very lenient criteria to define "functional cell types".
Albert Cardona
in reply to El Duvelle • • •@elduvelle
The field in general, beyond hippocampus and place cells, has a very broad, expedient, working definition of a cell type. It's unfortunate this paper focuses only on the hippocampus because the issue is generic across all of neuroscience.
What the paper highlights is what I was mentioning about resilience and robustness. Remove some cells thought to be critical and a biological neural network's function degrades but persists, operating in some capacity, sometimes even at full capacity.
That a biological neural network can incorporate new neurons into a circuit (via neurogenesis) is very much an integral part of such robustness, and also a vector for incremental circuit development and for circuit change through evolution.
That also ties in with the surprises found when studying randomly wired artificial neural networks: they can be trained to perform a task, and often a tiny fraction of the overall network is responsible for the capacity to perform it. But remove that fraction and some other part of the network steps in.
El Duvelle
in reply to Albert Cardona • • •@albertcardona
Yes, these are great points, but haven't they been made many times before? We know that lesion or inactivation studies are not perfect, because other regions may take over and because a lot of brain processing is distributed, redundant & plastic anyway.
That wasn't really my original point though... I am annoyed because they seem to attack a strawman and make it look like they are attacking the foundations of an entire field. But it seems that they are just not aware of what the field uses to prove the existence of different functional cell types. The reviewers are also saying this, and even Tim Behrens says in the thread that the title is hyperbolic and the abstract is not justified.
Do we really want to spend time reading and discussing a paper if it's based on ignorant or hyperbolic claims? I am not sure...
Dan Goodman
in reply to El Duvelle • • •
Ko-Fan Chen 陳克帆
in reply to El Duvelle • • •I know nothing about place cells (really nothing). But I do feel all three of you have point, and it reminds me of the debates of M/E cells in #drosophila circadian rhythm. They are useful for most of time, but sometimes it doesn't fit the data, it is fine to report these cases and refram the definition. But sometimes you can see someone just wants to make a wide insertion, but sometimes they are right! what mechanism allows us to manage all these?
El Duvelle
in reply to Ko-Fan Chen 陳克帆 • • •@kofanchen @albertcardona
I think the question of whether place cells (and any other "spatial cells") are directly "used" by the brain is a very good one, and mostly still unanswered. But that doesn't seem to be the question raised by the paper, which doesn't model actual place cells or any other spatial cells. Instead they model some kind of visual cells, without proving that they are spatial. For example, their criterion for defining place cells seems to be that... they have at least one region of high activity in the environment. That's all I could find in their methods. Any random noise generator would probably fulfill this (see the sketch after this post). Making extrapolations from this model to actual place (/spatial) cells seems completely unjustified. Caveat: maybe they explain their actual criteria somewhere else and I have missed them… If that's the case please let me know.
Then there is the question of whether putting cells in functional categories is useful. They seem to claim that it isn't.
However, they base this claim on the "finding" that cell types can arise in any complex network. But they haven't shown this (see previous comment), so this entire argument is wrong. It also ignores decades of research on (actual) spatial cells, which has shown that each type reacts in ways that follow consistent rules within a cell type and can be very different across types (say, place vs head-direction cells).
Overall, this seems a good example to me of something that shouldn't be published in its current form, because it is based on a misunderstanding, and it risks spreading the misunderstanding further.
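To make the "random noise generator" point concrete, here is a minimal sketch (an illustration of the argument, not code from the paper; the 50%-of-peak threshold, the 9-bin minimum, and the function names are assumptions standing in for the loose criterion described above): generate smoothed white noise over a 2D arena and check how often it shows at least one region of high activity.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, label

rng = np.random.default_rng(0)

def has_high_activity_region(rate_map, frac=0.5, min_bins=9):
    """Loose criterion: at least one contiguous blob of bins above
    frac * peak rate, covering at least min_bins bins."""
    blobs, n_blobs = label(rate_map > frac * rate_map.max())
    return any((blobs == k).sum() >= min_bins for k in range(1, n_blobs + 1))

# 1000 fake "cells": smoothed white noise over a 40x40 arena,
# shifted to be nonnegative like a firing-rate map.
n_pass = 0
for _ in range(1000):
    noise_map = gaussian_filter(rng.normal(size=(40, 40)), sigma=2)
    n_pass += has_high_activity_region(noise_map - noise_map.min())

print(f"{n_pass / 10:.0f}% of pure-noise maps pass")  # nearly all of them
```

Nothing in this test touches on stability over time, or on whether the activity is anchored to space rather than to distance, time, or visual input, which is exactly the gap being pointed out.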
El Duvelle
in reply to El Duvelle • • •@kofanchen @albertcardona
In addition... unfortunately, even the eLife abstract is wrong:
First, something with "spatial tuning" is not the same as "place cells". Second, models that create "view cells" from visual inputs have been around for decades (e.g. Arleo & Gerstner 2000). This is not intriguing. The hard part is turning that into place cell-like activity that mimics the properties of actual place cells.
Spatial cognition and neuro-mimetic navigation: a model of hippocampal place cell activity - Biological Cybernetics
Dan Goodman
in reply to El Duvelle • • •
El Duvelle
in reply to Dan Goodman • • •@kofanchen @albertcardona
OK, maybe I am being too specific about the details of my disagreement.
Let's say some people made a model of a starfish. But they write a paper about it and call it a horse. They say "look, we made a model of a horse, because this thing has legs and we consider legs to be a defining criterion of a horse".
Now, say they cut off some of the legs of their model starfish and test stuff about its mobility. They'll conclude something like "horses can still gallop when they lose two legs" and "also, people should be careful about classifying animals into species because the species don't really mean anything".
Would that paper be interesting and worth reading? I don't think so. It's just using a false premise to reach a weird conclusion. I am open to being proven wrong (and actually hope to be) about whether their starfish is, indeed, a horse, but until then, I'll just be annoyed that people are making a big deal of this.
Dan Goodman
in reply to El Duvelle • • •
El Duvelle
in reply to Dan Goodman • • •@kofanchen @albertcardona
OK, if the point is that it's better for this paper to be published in eLife than in a more obscure journal (say, one with unavailable peer-review reports), yes, I agree with that. But could it not have just stayed a preprint? It seems that the authors didn't really listen to the reviewers' suggestions and dismissed most of their reservations without actually trying to answer them, so I'm not sure the reviewing process was useful to them, or to the paper. And the time spent reviewing it means those reviewers couldn't review something else that might have been more valuable.
I would also have preferred the eLife summary to accurately reflect the flaws of the paper, especially the lack of logic behind the main reasoning. The current one lends some legitimacy to the paper's unsupported results & claims. I usually defend eLife when people criticise the new model, saying "you can read the summary", but for that to work, the summary has to be accurate..😕
Dan Goodman
in reply to El Duvelle • • •
El Duvelle
in reply to Dan Goodman • • •@kofanchen @albertcardona
Maybe… In that case I don't really want to do that, as I know one of the authors and I'm sure they're able to do good science. I just think they misunderstood some fundamental things in this case. Do we want to generalise bad (or good) impressions from one paper to all past and future papers from a lab? Ugh… I don't know 🤷
Probably time to go to bed :)
Dan Goodman
in reply to Dan Goodman • • •
El Duvelle
in reply to Dan Goodman • • •Sensitive content
@kofanchen @albertcardona
I would disagree with that. We should try as much as possible to publish clear, logical and truthful papers. We can hope that authors would spontaneously want to do this, but sometimes (time pressure, financial pressure) that's not enough. In those cases it is good to have additional barriers and checks, such as peer review. Some things might still get through, but it's better to have 10% of papers be bad than 90%.
It is true that in the long term the "truer" ideas might spontaneously prevail. But humans are drawn to believing simple explanations, and that can waste so much time. There are several examples of this in my field... For example, I'm writing a review of Tolman's sunburst experiment (1946) and its replications - a single experiment that influenced the inception of the theory of the cognitive map, which is pretty much at the origin of the whole spatial field that we're discussing here. Turns out this experiment has never really been replicated, and its original design was flawed from the start (PS: it's really not about the lamp). But people ignored the (published!) replication attempts and kept focusing on the original experiment. It's been almost 80 years now, and I'm sure many labs have spent resources trying to replicate this in vain.
This is also probably the case for a few other, more recent findings in my field (planning replay, reward replay)...
Would it improve things if even more papers were published regardless of their quality? I don't see how. I already find it hard to keep up to date with my field… Instead, I think we should publish less but be more stringent about what gets published - stop using potential clout as a criterion and instead insist on having the right controls, so that any interpretation that comes out reflects unbiased reality as much as possible.
Dan Goodman
in reply to El Duvelle • • •
Ko-Fan Chen 陳克帆
in reply to El Duvelle • • •Your example actually reminds me of this historical case of a false witness of deer as horse and the person did this on purpose so I think it is closer to the case of this paper.
en.wiktionary.org/wiki/%E6%8C%…
@neuralreckoning @albertcardona
指鹿為馬 - Wiktionary, the free dictionary
El Duvelle reshared this.
Redish Lab
in reply to El Duvelle • • •@elduvelle @albertcardona
To me this question seems to be the issue with the #eLife journal hypothesis: they are providing reviews on preprints. They are basically doing post-preprint review (like #PubPeer), but unlike PubPeer, they still think of themselves as (or at least talk of themselves as) a journal.
I think what #eLife and #PubPeer are doing is great. But they cannot be listed in one's CV as "refereed publications" in the way that other gatekept* journals are.
... which gets at the point @jonmsterling made about separating "preprints", "refereed publications" and "titles I'm thinking about writing" (in preparation) on one's CV.
It would be interesting to see how #eLife is still being treated as a "journal" on CVs and for grants and promotion.
BTW, in an earlier discussion, we agreed that one could list eLife papers in one's CV as long as one also included the eLife assessment on one's CV. Wanna bet these authors don't? 🤔
* Yes, I know eLife is gatekept by editors, but the door is opened based on "interesting", not based on "correct". (And, yes, there is evidence that the Glam journals do that as well, but they are at least ostensibly _claiming_ to only publish papers that are "correct".)
#ScientificPublishing
Dan Goodman
in reply to Redish Lab • • •
MCDuncanLab
in reply to Dan Goodman • • •@adredish @elduvelle @albertcardona @jonmsterling
eLife forces a reviewing body to actually look at the published work and evaluate its importance rather than blindly relying on journal impact factor. (Here "reviewing body" is anyone evaluating someone's CV.)
They provide a standardized nomenclature making it somewhat easier for someone outside of the field to understand the reviews.
The paper in question was considered important but incomplete.
B Thuronyi
in reply to MCDuncanLab • • •@MCDuncanLab @adredish @elduvelle @albertcardona @jonmsterling
I think it's revealing to consider traditional journals as a perturbation on the eLife model instead of vice versa:
- rate quality of evidence and impact via review
- if a threshold isn't met, don't publish
- publish, keeping the ratings secret - they are IMPLIED based on journal reputation
... whereas eLife always "publishes" but also always reveals the evaluation
Albert Cardona
in reply to B Thuronyi • • •@thuronyi @MCDuncanLab @adredish @elduvelle @jonmsterling
eLife senior editors desk-reject the majority of papers received. I don't have exact numbers, but in aggregate it may be about two-thirds. Most of those rejected (1) aren't considered to sufficiently advance the field or to validate an existing result, or (2) aren't proper attempts at rigorous scientific research, or, sometimes, (3) there isn't an editor with a matching background and expertise to evaluate the paper and guide its review process.
Albert Cardona
in reply to Dan Goodman • • •@elduvelle
The paper:
"The inevitability and superfluousness of cell types in spatial cognition", Luo et al. 2025
elifesciences.org/reviewed-pre…
Quite the poster child for why at @eLife we support publishing papers that we consider important yet nonetheless label as incomplete: the questions are worth asking, the discussion has to happen, and the suggested experiments need to be voiced and aired, to prompt someone to take them on in the lab.
#ScientificPublishing
Dan Goodman
in reply to Albert Cardona • • •
Albert Cardona
in reply to Dan Goodman • • •@elduvelle @eLife
Can't agree more. Scientific publications are the means for scientists to talk to each other in a formalised way, not a way to accrue points towards career advancement or funding. Let's retake that original purpose from the choking grasp of the bean counters.
HistoPol (#HP) 🏴 🇺🇸 🏴 reshared this.
Natxo M.C.
in reply to Albert Cardona • • •Articolo - Open Science Italia
open-science.it