randy.pub
Analytics at presagegroup.com | PhD Epidemiology | #julialang #rstats | publications at www.randy.pub | GitHub @rdboyes, currently working on https://github.com/TidierOrg/TidierPlots.jl | 🐦@randyboyes
136 posts 365 followers 450 following
Getting Started
Active Commenter
comment in response to post
It's not an inherent property of the task, no - e.g. here's an example in julia that does the same thing but has the expected variance in the last digits
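The Julia example referenced here isn't preserved in the scrape. As a language-agnostic stand-in (the exact task is an assumption), here's a Python sketch of the general phenomenon being discussed: repeating the "same" computation in a different order gives answers that agree except in the last digits, because floating-point addition is not associative.

```python
import random

# Hypothetical illustration -- not the original Julia code.
random.seed(42)
xs = [random.random() for _ in range(10_000)]

forward = sum(xs)

shuffled = xs[:]
random.shuffle(shuffled)
reordered = sum(shuffled)

# The two sums agree to many significant digits but typically
# differ in the trailing ones: float addition is not associative.
print(f"{forward:.17f}")
print(f"{reordered:.17f}")
assert abs(forward - reordered) < 1e-8  # close, but not guaranteed bit-identical
```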
comment in response to post
It’s the 17 squares all over again. Why would you put this evil in my brain this early in the morning?
comment in response to post
This is probably the right answer, but it’s a little unsatisfying
comment in response to post
This was my first thought too! I would have only been half-happy if it worked though - it’s neat but wouldn’t look exactly like the target, which is the goal
comment in response to post
XCOM does this as well, but only on the lower difficulties. On the higher ones, the probability it shows is the one it uses, and it feels brutal
comment in response to post
I’m willing to give it a try!
comment in response to post
It's more broad than that - it just means "the best existing measure" for anything, not causality specifically
comment in response to post
[1] 100 2 3 4 5 6 7 8 9 10
comment in response to post
This is mandatory DLC we all install in our early 30’s, unfortunately
comment in response to post
Yes until very no
comment in response to post
Yeah I’ll make an exception for Excel and half of one for GitHub but Teams is so bad I feel like it cancels those out
comment in response to post
This but for all Microsoft products
comment in response to post
They're "hack-proof and secure", obviously
comment in response to post
Not to mention that it might be easier to get people to agree to review papers if they had a focused objective to review a specific part of the paper
comment in response to post
Fair - but I think I personally would be more confident in the results of a paper with one code review and one methods review vs two methods and zero code
comment in response to post
Professional programmers are estimated to introduce around 50 bugs per 1000 lines of code, 10 of which make it through checks. How many bugs are scientists (amateur programmers at best) letting through without checks?
comment in response to post
Not every reviewer would have to be an expert in every area though. Someone from a related discipline who knows R well could easily check if the code matches what they said they did in the paper, catch data cleaning errors, etc even if they can’t comment on the subject matter
comment in response to post
Haha I wrote it, hit reply, then googled “veil of ignorance” to see if it made sense or not :) I think you’re right
comment in response to post
Maybe you don’t but I pre-register my studies from behind the veil of ignorance
comment in response to post
I'm bookmarking this for reference since R's NSE stuff always throws me off! One of the nice things about working in @tidierjl.bsky.social is that you can drop down into #julialang at any time, which has a nice division between "normal evaluation" and "shenanigans":
comment in response to post
Yes. But as soon as you need to collaborate with someone who “doesn’t want to see the code” the whole thing falls apart. Currently sanding off the rough edges of a typst.app based workflow for report-writing; it’s pretty promising but there’s still some pain
comment in response to post
I think this is it - the perfect data visualization
comment in response to post
You can cut one more pixel if you're willing to assume you can interpolate to get blank points
comment in response to post
Cut the intermediate braille labels for even more information density! (people should of course be able to infer that the categories are in alphabetical order between a and g ;) )
comment in response to post
Journalists publish corrections for small errors that do not change their conclusions all the time. Textbooks have errata. I don’t see what the issue is with having a mistake - no matter how trivial - corrected online, as long as it is actually a mistake, which it seems like you are not disputing
comment in response to post
I will concede that that particular mistake does not seem important. But it is a mistake, and there is no reason for it not to be on pubpeer. If it was the only comment on the paper, my own response would be to have increased confidence in the results as a result of its pubpeer page
comment in response to post
I don’t know whether the comments posted on pubpeer are accurate. I haven’t looked into it. But the frequency of comments, number of accounts per individual, whether or not different commenters know each other, and everything else in that “article” are all irrelevant to that question.
comment in response to post
The rate of reporting is not relevant - are there problematic images in your papers? If you think she is wrong, say that. Demonstrate that even one of these reports is fabricated/incorrect and you may get some sympathy
comment in response to post
I don’t see any problem with scaling the data here. If the rate on the right axis was “per 20,000” they could be the same numbers, but I don’t think that would improve the graph in any meaningful way
comment in response to post
The org that does this for Ontario is called ICES, and there’s some basic info on their site: www.ices.on.ca/data-privacy/
comment in response to post
Good to know, bad to use (unless there is no other option). The code in the screenshot could be replaced with a map call and would be much clearer
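The screenshot under discussion isn't in the scrape, so this is a hypothetical before/after in Python (names invented for illustration): an accumulate-in-a-loop pattern rewritten as a map, which states the per-element transformation directly.

```python
values = [1.5, 2.25, 3.0]

# Loop version: the transformation is buried in bookkeeping.
squared_loop = []
for v in values:
    squared_loop.append(v ** 2)

# Map version: one expression, applied to every element.
squared_map = list(map(lambda v: v ** 2, values))

assert squared_loop == squared_map  # same result, clearer intent
```

A list comprehension (`[v ** 2 for v in values]`) reads equally well; the point is replacing manual accumulation with a declarative mapping.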
comment in response to post
Considering the millions that are spent to eke out a couple extra percentage points on benchmarks, it seems ridiculous to argue that 0.06% is a meaningless number. 100 or so of the right books and we’re talking about the difference between state of the art and “not in the running”
comment in response to post
I believe
comment in response to post
Me, using both -> and <- for assignment in the same script
comment in response to post
Season 1 is great, the rest is pretty forgettable
comment in response to post
I typically do this with a loop and knitr::knit_child - all of my examples are code I can’t share but I can write an example if you need/want more detail
comment in response to post
You’re forgetting that I can just claim non-flying-pigs robustness was confirmed in an unpublished analysis when questioned