oliviajkirtley.bsky.social
Assistant Research Professor & Co-Director, Center for Contextual Psychiatry, KU Leuven. Adolescent #mentalhealth, experience sampling, #SelfHarm & #suicide researcher, #OpenScience. All views my own.
80 posts 2,251 followers 944 following
comment in response to post
Take care there, Heather. I hope your recovery continues to go smoothly.
comment in response to post
I suppose another question is also where and when you might expect the findings of a study to be applied in the real world, as there's often a long lead-time, e.g., getting interventions into clinical practice, & some findings may have more local rather than international applications.
comment in response to post
The REF info could be a good way of doing this, as then the exact real world application of the research is often specified. Otherwise, I guess you could maybe try and create very specific statements about the proposed real world application & then use these to steer lit/grey lit reviews.
comment in response to post
Really interesting. Were you able to look at whether any of the findings had actually been applied in the real world?
comment in response to post
Congratulations, Nicolas!
comment in response to post
Thanks. I'll email for some more info.
comment in response to post
Thanks for your replies. Good to hear about the privacy measures. This answers some of my questions, but not quite all of them. Could you maybe share an email address and some other info (perhaps the 1500+ users have more info than appears on the website?) about the company/service/initiative?
comment in response to post
Maybe there's more info on the editorial process page? There seems to be a missing link in the "study components & declarations" section of the submission guidelines page.
comment in response to post
It would also be great to see more detailed info from Lifecycle about how the evaluation services will be used and how they will fit with human review systems. I like the idea of all components of research getting a review opportunity & I'm curious about the practical side.
comment in response to post
I guess the other evaluation services listed have more info, like the team behind the initiative/service with names, etc., and funding sources, and so Paper Wizard seems an outlier here in terms of transparency.
comment in response to post
Thanks, Brian & Eileen. Any chance you can please say a bit more about what exactly Paper Wizard will be used for at Lifecycle? Is there any other info about Paper Wizard anywhere, who owns it, how it works, accuracy, etc? Desktop vs. mobile site has a tiny bit more info, but really not much.
comment in response to post
We've had good experiences with SEMA3, but are now using m-Path (m-path.io/landing/), and we're very happy with it. Disclosure: It's developed by colleagues at KU Leuven, but I'm not involved.
comment in response to post
Cool to see statcheck & regcheck being used here. How about Paper Wizard, which seems to promise content-related review for any topic? Will submissions be used to train Paper Wizard models and what happens to those data following submission? Their website seems a bit light on details.
comment in response to post
This guide for parents of young people who self-harm was developed in the UK, so some of the helplines, etc., won't be useful, but maybe the general advice can give some helpful pointers: www.psych.ox.ac.uk/news/new-gui...
comment in response to post
"...so very bewildered..." is a state I can definitely relate to.
comment in response to post
Yes, agree with Whitney here, and also about more dynamic indices of careless responding. Adding in my colleagues @gudruneisele.bsky.social & @millapihlajamaki.bsky.social who've been diving into careless responding measures.
comment in response to post
Were they instructed to only fill out 4 beeps a day and some just figured out they could do more?
comment in response to post
Sounds like maybe keeping responses closest to prompt could work & also thinking about some careless responding analyses. Probably worth considering some sensitivity analyses too, but it sounds like careless responding may be more of an issue, e.g., cramming in responses to hit credit threshold.
comment in response to post
As Whitney says, the different numbers of obs per participant won't be an issue. I guess I'm wondering 1) how could participants complete more than 4 questionnaires (unless including event-contingent?) and 2) whether there is something different about participants who completed >4?
comment in response to post
I still think about the lobster roll I had in Key West in 2019. So good!
comment in response to post
Yes, through a university-wide mentoring scheme for new professors. Had to approach a potential mentor in my dept. I also have quite a few informal mentors. All are more senior. Someone wise told me to think in terms of having a community of mentors & I've always tried to approach it like that.