almogsi.bsky.social
Assistant Professor at Ben-Gurion University. Computational social psychologist. Studying social media, misinformation, polarization, and language
37 posts 2,541 followers 706 following
Regular Contributor
Active Commenter
comment in response to post
can't say I disagree :(
comment in response to post
It'd be great to be added. Thanks for putting this together!
comment in response to post
It'd be great to be added. Thanks for creating this list!
comment in response to post
The book is still a work in progress, and if you spot errors or inaccuracies, please reach out!
comment in response to post
It is designed to cover the very specific topics I'm teaching in my course at BGU, but it should be a pretty useful resource (I hope!) for students and researchers in the social sciences who do text as data in R
comment in response to post
To state the obvious -- yes, black-box ML is awesome, LLMs are great, and we do not discourage anyone from using them (I use them all the time). But there is much to gain in transparency, interpretability, and accuracy when integrating theory into our prediction models ✌️
comment in response to post
In the paper, we provide evidence and examples of each element and how it is implemented in psychological research (Non-paywalled version here www.almogsi.com/my_files/Sim...)
comment in response to post
Our key argument is that we shouldn't discount human insight. Psychologically informed decision-making can create robust, nuanced decision support systems by using *informed feature extraction*, *informed priors*, and *informed data collection*
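One way to picture the *informed priors* idea is sketched below. This is an illustrative example, not code from the paper: the data, prior means, and variances are made up. The point is that theory can set where a coefficient's prior is centered, instead of defaulting to shrinkage toward zero.

```python
# Minimal sketch (hypothetical data and prior values): an "informed prior" in
# Bayesian linear regression, where theory supplies the prior mean for a
# coefficient rather than centering it at zero.
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: outcome depends on a single psychological feature
n = 40
x = rng.normal(size=n)
true_beta = 0.6
y = true_beta * x + rng.normal(scale=1.0, size=n)

X = x.reshape(-1, 1)
sigma2 = 1.0  # assumed noise variance

def posterior_mean(X, y, prior_mean, prior_var, sigma2):
    """Closed-form posterior mean for Bayesian linear regression
    with an independent Gaussian prior N(prior_mean, prior_var) on each coefficient."""
    precision = X.T @ X / sigma2 + np.eye(X.shape[1]) / prior_var
    rhs = X.T @ y / sigma2 + prior_mean / prior_var
    return np.linalg.solve(precision, rhs)

# Uninformed prior: centered at 0 (shrinks the estimate toward "no effect")
beta_zero = posterior_mean(X, y, prior_mean=np.array([0.0]), prior_var=0.25, sigma2=sigma2)

# Informed prior: centered at a theory-based effect size (e.g., from prior literature)
beta_informed = posterior_mean(X, y, prior_mean=np.array([0.5]), prior_var=0.25, sigma2=sigma2)

print("posterior mean, zero-centered prior:", beta_zero)
print("posterior mean, theory-informed prior:", beta_informed)
```

With small samples, the theory-informed prior pulls the estimate toward a plausible effect size rather than toward zero; with enough data, both converge to what the data support.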
comment in response to post
In an already classic paper, Yarkoni & Westfall (2017) suggest several practices psychology should adopt from machine learning. Analogously, we suggest that machine learning (and other disciplines) can also learn from psychology
comment in response to post
This paper concludes my main postdoc project with @lewan.bsky.social and the great collaboration I had with the rest of the team at Bristol, MPI and Northeastern. Thank you all! @stefanherzog.bsky.social @lorenzspreen.bsky.social @anaskozyreva.bsky.social @brionyswire.bsky.social [8/8]
comment in response to post
This work complements our previous paper and takes another step toward highlighting the need for, and opening possibilities for developing, protections against unethical manipulation in digital spaces [7/8] doi.org/10.1093/pnas...
comment in response to post
The other two studies employed AI to modify ads to appeal to people who are high or low on the openness-to-experience personality trait. The results confirmed the effectiveness of AI in generating targeted content [6/8]
comment in response to post
The first two studies involved real political ads collected from Facebook and evaluated for their persuasiveness, confirming that ads closely matching an individual's personality are notably more effective [5/8]
comment in response to post
Across 4 studies, our findings consistently demonstrated that political messages tailored to personality traits resonate more with the intended audience. Notably, generative AI can automate this process on a large scale and still remain persuasive [4/8]
comment in response to post
We examined how political advertisements, when aligned with an individual's personality traits, can be more persuasive than non-tailored messages [3/8]
comment in response to post
Our study raises concerns about the potential for misusing generative AI like ChatGPT to create personalized political advertisements based on individual psychological profiles [2/8]
comment in response to post
@mattansb.bsky.social
comment in response to post
pretty sure a good 80% of that amount is due to my recent formatting exchange with an editorial assistant
comment in response to post
dang I'm still figuring out bsky: #cssky #socialpsychology #SocialPsycSky #PsychSciSky Did I get it right? 8/7
comment in response to post
I could not have done this without my amazing coauthors, Britt Hadar and Michael Gilead. It took us a long time and *many* versions, but we made it and I'm super proud of the final result 7/7
comment in response to post
Together, these findings advance our understanding of the nuanced relationship between personal and linguistic agency. It's not just about what we say, but also how our words mirror our inner psychological states and traits 6/7
comment in response to post
Study 3 and its two replications (N = 43,140) took us into the Reddit community, specifically the r/depression subreddit. Here, we found that the language used by individuals (likely) grappling with depression tends to be less agentive 5/7
comment in response to post
Study 2 (N = 2.7 M): We took a deep dive into the social media realm, operationalizing increased personal agency as one’s social media followership (i.e., social rank). The results? Less personal agency (i.e., fewer followers) was associated with less agentive language. 4/7
comment in response to post
Study 1 (N = 835): We found that sense of personal agency, operationalized between participants as recalling instances of having more or less power over others, affects the use of agentive language. 3/7
comment in response to post
Previous psycholinguistic findings showed that the use of passive voice influences the level of agency attributed to other people. To investigate whether passive voice use relates to people’s personal sense of agency, we conducted three studies 2/7
comment in response to post
👋