danilexn.bsky.social
Open-ST • Computational Modeling of Spatial Omics @mdc-berlin.bsky.social • PhD Candidate • Occasional composer and pianist • he/him • https://rajewsky-lab.github.io/openst
42 posts 479 followers 613 following
comment in response to post
That said, if the money goes into stopping Trump’s & Putin’s mafia state, then by all means, spend every last cent. No complaints there. Fuck them.
comment in response to post
Hello, here are some European alternatives to quite a few different tools: european-alternatives.eu
comment in response to post
Super interesting, thanks! In this direction there's also the big open problem regarding sequencing breadth vs. depth. pubmed.ncbi.nlm.nih.gov/38940162/
comment in response to post
Yeah, there's a big gap in the literature there. One that gets close is the Hu et al. 2024 (Genome Biology) meta-analysis, but they study niches, not cell types. There are others about imaging-based data, but those are handled differently (because of big differences in sensitivity).
comment in response to post
Back to the original point about clustering: some time ago there were preprints showing how to quantify and remove "diffuse" genes to improve clustering results. I'm not sharing them because I think the results were not super satisfactory. It is a completely open problem.
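One way to make the idea concrete: a minimal sketch (not the method from those preprints, whose details aren't given here) that scores each gene by how uniformly it is detected across spots, on the assumption that "diffuse" background genes show up almost everywhere at similar levels, while biologically informative genes are focal. The threshold and the score itself are illustrative choices, not established defaults.

```python
import numpy as np

def diffuse_gene_scores(counts):
    """Crude 'diffuseness' score per gene.

    counts: (n_spots, n_genes) raw count matrix.
    High score = detected in most spots with low spot-to-spot
    variability, a rough proxy for diffuse background signal.
    """
    # fraction of spots in which each gene is detected
    prevalence = (counts > 0).mean(axis=0)
    # coefficient of variation across spots (low CV = uniform expression)
    mean = counts.mean(axis=0).astype(float)
    std = counts.std(axis=0).astype(float)
    cv = np.divide(std, mean,
                   out=np.full(mean.shape, np.inf), where=mean > 0)
    return prevalence / (1.0 + cv)

def filter_diffuse(counts, threshold=0.5):
    """Drop genes whose diffuseness score exceeds `threshold`."""
    scores = diffuse_gene_scores(counts)
    keep = scores < threshold
    return counts[:, keep], keep
```

Running clustering on the filtered matrix instead of the raw one is then the experiment: if the "diffuse" genes were driving the artifacts, cluster boundaries should sharpen after removal.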
comment in response to post
It depends a lot on whether one wants to measure platform- or tissue-specific differences, but there are also a lot of readouts that can be confounded by section thickness.
comment in response to post
For tissues, I would trust 1D profiles if the tissue is very sparse or has very large cells. Note: lateral diffusion might happen during in-situ capture, but more important is the "diffuse background" that appears during prior steps. This actually drives most clustering artifacts.
comment in response to post
Very good question. IMO, measuring expression across a 1D space is problematic: it does not account for artifacts caused by section thickness. Very sparsely 2D-cultured cells (on a chip), measured with some very well-known markers, would give a good baseline.
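For what a 1D profile means operationally, here is a minimal sketch under simple assumptions: spots are binned along one coordinate axis and a single marker gene's expression is averaged per bin. The bin count and the choice of axis are illustrative; real capture arrays would also need the thickness and diffusion caveats above.

```python
import numpy as np

def profile_1d(x, expr, n_bins=20):
    """Average marker expression in equal-width bins along one axis.

    x:    (n_spots,) spot coordinate along the profiling axis.
    expr: (n_spots,) expression of one marker gene.
    Returns (bin_centers, mean_expression); empty bins are NaN.
    """
    edges = np.linspace(x.min(), x.max(), n_bins + 1)
    # assign each spot to a bin; clip so x == x.max() lands in the last bin
    idx = np.clip(np.digitize(x, edges) - 1, 0, n_bins - 1)
    sums = np.bincount(idx, weights=expr, minlength=n_bins)
    counts = np.bincount(idx, minlength=n_bins)
    means = np.divide(sums, counts,
                      out=np.full(n_bins, np.nan), where=counts > 0)
    centers = (edges[:-1] + edges[1:]) / 2
    return centers, means
```

A sharp marker boundary in sparse cultured cells should then appear as a clean step in the profile; smearing of that step is exactly the diffusion/thickness artifact being debated.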
comment in response to post
The Rajewskys do not need to collaborate with companies to be a good group ;)
comment in response to post
You have all the details here www.cell.com/cell/fulltex...
comment in response to post
And the same with this crazy trend of treating cells as “sentences of genes”. “UMAP looks better” or “clustering looks better” are not benchmarks.
comment in response to post
In this work, we explored how training data similarity impacts protein-ligand prediction accuracy, an overlooked aspect in recent benchmarks. Our analysis shows that current co-folding methods struggle to generalize beyond ligand poses in their training data. (2/n)
comment in response to post
This music and a glass of wine are the best companions for a night of writing some science.
comment in response to post
I have the same questions