michelnivard.bsky.social
Professor of Genetic Epidemiology, University of Bristol at: [email protected]
3,187 posts
4,124 followers
1,668 following
Regular Contributor
Active Commenter
comment in response to
post
maybe this is my summer project then, do some Kurtosis on a beach in Bali, write truth in the sand only for the tide to wash it away as I dance around the fire... (i.e. re-submit a preprint while juggling kids' summer camps and vacation travel...)
comment in response to
post
I think I agree. I have been toying around with models where specific higher-order moments are a function of measurement artifacts (e.g. ceilings and floors -> skewness; acquiescence -> skewness towards agreement), and you can ID/disentangle artifact variables and try to model those.
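Not from the thread itself, but the ceiling -> skewness mechanism is easy to demo: clip a symmetric latent trait at a hypothetical scale maximum and the observed scores come out left-skewed. A minimal stdlib-Python sketch (all names and numbers are mine, purely illustrative):

```python
import random
import statistics

random.seed(0)
# simulate a symmetric latent trait
x = [random.gauss(0, 1) for _ in range(100_000)]
# apply a measurement ceiling at +1 (hypothetical scale maximum)
y = [min(v, 1.0) for v in x]

def skew(s):
    # standardized third central moment (population version)
    m = statistics.fmean(s)
    sd = statistics.pstdev(s)
    return sum((v - m) ** 3 for v in s) / (len(s) * sd ** 3)

print(skew(x))  # near 0: the latent trait is symmetric
print(skew(y))  # clearly negative: the ceiling induces left skew
```

The same construction with `max(v, floor)` produces right skew, which is the sense in which the higher-order moments carry a signature of the artifact.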
comment in response to
post
Not kidding, the very very very best work I did (also because of a BRILLIANT grad student and former grad advisor) is a bunch of pre-printed, or even not-yet-pre-printed, (co)kurtosis SEM / causal inference work, and I kinda know it'll just never ever go anywhere?
comment in response to
post
That's what using higher-order moments feels like Ted... They're the psychedelics of statistical inference: very few dare partake, but those that do, and don't flee screaming, are irrationally hyped about them. Not afraid to admit I'd add kurtosis to the psych department drinking water if I could!
comment in response to
post
Oh yeah sorry, I did latent mixture growth curves, so different classes (categorical), each with an intercept and slope; those were then fixed, not random, within class... I suppose they're similar ways to capture the same data, neither of which has much to do with age/cohort/period anything?
comment in response to
post
“Solving the age period cohort problem with co-kurtosis based estimators” would be my flame war paper
comment in response to
post
"I'll take increasing, adolescent-limited, persistent-high, low and very low for $500, Alex!" (also I wrote one, I am sorry)
comment in response to
post
or he has cooled off & means it, who knows...
comment in response to
post
I think he was often Rutte's right-wing wingman (so Rutte could do the "human"/"normie" right) and now he's Dilan's "left-wing" wingman? But from the outside; with Rutte nobody doubted the leadership, if they do this to Dilan inside the parliamentary party -> weeks of articles about who the leader is.
comment in response to
post
We need a starter pack for top-notch shitposters, like full-on interdisciplinary.. I want that modern dance flame war where I only get the emotive tone and none of the substance…
comment in response to
post
Has anyone tried vibecoding LaTeX yet?
comment in response to
post
Yes you can (as you can in R btw, using source("my_script.R"))! Documentation in Python is horrendous compared to R…
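For context: the closest Python analogue to R's source() that I know of is runpy.run_path, which executes a file and hands back its resulting globals. A minimal sketch (the file name and its contents are made up for illustration):

```python
import pathlib
import runpy
import tempfile

# write a tiny script, then execute it the way R's source() would
with tempfile.TemporaryDirectory() as d:
    script = pathlib.Path(d) / "my_script.py"
    script.write_text("answer = 21 * 2\n")
    # run_path executes the file and returns its module globals as a dict
    ns = runpy.run_path(str(script))

print(ns["answer"])  # 42
```

Unlike R's source(), the executed names land in the returned dict rather than the caller's environment, which is arguably the more hygienic design.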
comment in response to
post
Kinda wild that at some point psychiatric genetics really was the most absurd noise-dredging hotbed, but somehow managed to come out of that really really well…
comment in response to
post
Can confirm I know 3 ppl with this cut!!
comment in response to
post
😭
comment in response to
post
Yeah these papers have a few hundred skimmers, a handful of readers and 3-4 PhDs/postdocs who use them in a way that spurs on their science significantly… in a way it's a consequence of healthy science on which you can actually build more great science…
comment in response to
post
They're mostly very well funded vessels/curators; via culture and review we ourselves impose some of the "5 figures, each 10 panels, 50 supplements" madness
comment in response to
post
Yeah but some of us also write 50 pages of supplements per paper, so I see why the journals kinda try to rein us in 🤣😅
comment in response to
post
Of vQTLs and HWE, CNV or SNV, where SNV is also just SNP, and PRS is the same as PGR, GRS and also PGI
comment in response to
post
Yeah we do that, like a lot…
comment in response to
post
If I remember to ‘cp’…. bsky.app/profile/mich...
comment in response to
post
I just today did 16-bit runs to set a baseline for 8-bit runs.. it's crazy I can train protein models that must have cost Google and Facebook a ton of compute in 2019-2021 in a day and a half on some commercial cloud GPUs (not even very top of the line!)
comment in response to
post
I'll try linear regression in 32-, 16- and 8-bit in PyTorch with AdamW, which should handle the formats easily.. see if it's an issue with routines that just expect 64 bits
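PyTorch aside, the core of that experiment can be sketched in pure Python: fit a linear regression by gradient descent, and in the low-precision variant round-trip the parameters through IEEE-754 binary16 (struct format 'e') after every step to mimic 16-bit storage. Everything here (data, learning rate, step count) is illustrative, not the poster's actual setup:

```python
import random
import struct

def to_half(v):
    # round-trip through IEEE-754 binary16 to mimic 16-bit parameter storage
    return struct.unpack('e', struct.pack('e', v))[0]

random.seed(1)
# toy data: y = 2x + 1 + noise, x in [0, 2)
data = [(x, 2 * x + 1 + random.gauss(0, 0.1)) for x in (i / 50 for i in range(100))]

def fit(precision=None, steps=2000, lr=0.1):
    w, b = 0.0, 0.0
    for _ in range(steps):
        # mean-squared-error gradients
        gw = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
        gb = sum(2 * (w * x + b - y) for x, y in data) / len(data)
        w, b = w - lr * gw, b - lr * gb
        if precision == 16:
            # store the parameters in 16 bits between steps
            w, b = to_half(w), to_half(b)
    return w, b

print(fit())    # full double precision: close to (2, 1)
print(fit(16))  # 16-bit storage: still close, within half-precision resolution
```

The 16-bit run stalls once the update falls below binary16's resolution near the optimum, which is roughly the behavior one would poke at with the 8-bit PyTorch runs.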
comment in response to
post
Yeah that seems weird… could it be that the numerical estimation methods in stats (ML, OLS etc) just aren't compatible with those formats?
comment in response to
post
The thing is, if I compress a 16-bit deep-learning model to 8 bits or even 6 bits I lose almost no quality? 🤷 I think there are architecture trade-offs that favor lower precision and more parameters/layers?
comment in response to
post
I don’t think many deep-learning models converge to a minimum with identified parameters?
comment in response to
post
Yes…. All I have left to remember this by is the sweet loss curve I saved with an external tool and a $234 bill
comment in response to
post
Aeldari == humanities faculty, T’au Empire == computer sciences
comment in response to
post
I haven’t played chess in so long I might lose to the kettle at this point though…
comment in response to
post
At the end of each turn roll the NIH/NSF die; if it comes up “1” every faction loses 27% of its troops due to funding cuts, game writes itself…