lorenzoscottb.bsky.social
Researcher at the European Commission Joint Research Centre (JRC) 🇪🇺 working on AI for health. Developer of DReAMy, an open-source toolkit for dream analysis. All views are my own. https://lorenzoscottb.github.io
48 posts 3,100 followers 612 following
comment in response to post
Would computational approaches like open-source toolkits and/or trained models (e.g. LLMs) be of interest too?
comment in response to post
Heya, happy to be on the list if there is still a slot 🙂
comment in response to post
Good question, but I don’t think so. I haven’t seen/found anyone yet.
comment in response to post
Thanks, but it seems that the response is quite full of errors/hallucinations. In the paper for the first two datasets, there is no mention of existing age-based metadata (nor on the dataset download web page), while for the third, I don’t think it qualifies as histopath.
comment in response to post
A useful and very informed addition to the debate would also be @lampinen.bsky.social 🙂
comment in response to post
Preprint: arxiv.org/abs/2302.14828 Experiments code: github.com/lorenzoscott... Trained HF models: huggingface.co/DReAMy-lib
comment in response to post
Overall, the work shows how LLMs can be adapted to annotate dream reports from different populations with minimal supervision, potentially allowing for standardised and replicable annotation of large datasets for research purposes!
comment in response to post
Lastly, we tested whether our model was robust to OoD unlabeled data from a subject with diagnosed PTSD (a Vietnam War veteran), and found that the model’s predictions fit the *expected* emotion distribution, rather than simply mimicking the training distribution.
comment in response to post
We also ran an ablation experiment to understand whether performance was influenced by memorisation or implicit statistics within different series (subsets of DreamBank), but found no significant evidence of these factors impacting the model.
comment in response to post
Our main results show generally strong and stable performance across most single emotions and emotion sets, aside from consistently poor performance on sadness.
comment in response to post
We hence reframed the task to suit the HVDC scoring method: using a multi-label setting, we trained a model to predict whether each of the 5 HVDC emotions appears in a report, independently of the others!
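For the curious, here is a minimal sketch of what such a multi-label setup can look like with 🤗 transformers. The encoder choice, label order, and 0.5 threshold are illustrative assumptions, not the exact training code from the paper, and the classification head still needs fine-tuning on annotated reports to give meaningful outputs:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# The five HVDC emotion classes, each predicted independently (multi-label).
HVDC_EMOTIONS = ["anger", "apprehension", "sadness", "confusion", "happiness"]

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-cased",
    num_labels=len(HVDC_EMOTIONS),
    problem_type="multi_label_classification",  # sigmoid per label, not a softmax
)

report = "I was back in my old school and felt a sudden wave of fear."
inputs = tokenizer(report, return_tensors="pt", truncation=True)

with torch.no_grad():
    probs = torch.sigmoid(model(**inputs).logits)[0]

# Each emotion crosses (or not) its own threshold, independently of the others.
# Without fine-tuning on annotated dream reports these scores are meaningless.
print({emotion: bool(p > 0.5) for emotion, p in zip(HVDC_EMOTIONS, probs)})
```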
comment in response to post
Preliminary experiments showed that binary predictions from an LLM pre-trained on sentiment analysis do not correlate with the general sentiment of a report, nor with single positive/negative emotions.
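Roughly, the kind of off-the-shelf baseline we mean is something like the standard sentiment pipeline below (the checkpoint choice is just an illustration): it collapses a whole report into a single positive/negative label, which is what fails to track HVDC emotions.

```python
from transformers import pipeline

# An off-the-shelf binary sentiment model: one positive/negative label per report.
sentiment = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

reports = [
    "I was flying over my home town and felt completely free.",
    "Someone was chasing me through endless corridors and I could not scream.",
]

for report, pred in zip(reports, sentiment(reports)):
    print(pred["label"], round(pred["score"], 3), "-", report)
```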
comment in response to post
Longer story: we study whether LLMs can be used to replicate HVDC emotion features and, if so, at what granularity. Can we do so without supervision? If not, how robust is a supervised classifier to biases and out-of-distribution (OoD) data?
comment in response to post
TL;DR: we test whether LLMs can automatically annotate #Dream reports' emotional content following the Hall and Van de Castle (HVDC) framework, and find that a robust classifier can be built with minimal supervision!
comment in response to post
Thanks for reading 😀. The literature on NLP tools for studying dream reports goes back quite a way (see Elce et al., 2021), but it kinda "got stuck" on word dictionaries, word2vec, and simple neural nets. Here we tried to overcome many existing limitations with different types of LLMs!
comment in response to post
DReAMy will be at the #WorldSleep23 congress with an oral presentation. If you want to start playing with it, you can begin with the notebook/Colab tutorials I've prepared! github.com/lorenzoscott...
comment in response to post
Lastly, probably my favourite part: DReAMy! Given the encouraging results, I wanted to empower the dream research community with these tools, as well as other useful classic NLP tools. So I built a fully open-source Python library, designed for non-expert users.
comment in response to post
All these models are freely available and can be tested on the dedicated Hugging Face demo: huggingface.co/spaces/DReAM...
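If you prefer working locally, the checkpoints can also be loaded with the standard transformers pipeline, roughly as below. The model id is a placeholder, not a real checkpoint name; see the DReAMy-lib Hugging Face page for the actual ones:

```python
from transformers import pipeline

# Placeholder model id: replace with one of the checkpoints listed under
# huggingface.co/DReAMy-lib before running.
classifier = pipeline(
    "text-classification",
    model="DReAMy-lib/<checkpoint-name>",
    top_k=None,  # return a score for every emotion label, not just the top one
)

print(classifier("I was late for the exam and could not find the room."))
```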
comment in response to post
In this case, the idea was to train a text-to-text model (here, T5) to literally write out the dream report annotation. For these experiments, I focused on Characters, Emotions, and Activities. While Characters and Emotions were rather easy to model, Activities were not, maybe due to the output structure.
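To make the idea concrete, here is a rough sketch of the input/output format. The prompt wording and the vanilla t5-base checkpoint are illustrative assumptions; a model actually fine-tuned on annotated reports is needed to produce real HVDC-style annotations.

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

report = "My mother and a stranger were arguing while I hid behind the door, terrified."

# The model is asked to *write* the annotation as plain text,
# e.g. "characters: mother, stranger; emotions: apprehension".
prompt = "annotate dream report: " + report
inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
outputs = model.generate(**inputs, max_new_tokens=64)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```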