tejassrinivasan.bsky.social
CS PhD student at USC. Former research intern at AI2 Mosaic. Interested in human-AI interaction and language grounding.
21 posts
283 followers
153 following
comment in response to post
I'm trying to make "bleet" a thing
comment in response to post
The only silver lining of my ACL rejection is that I have something to submit to EMNLP
comment in response to post
Ty for the plug!
Model confidence is a good decision aid (arxiv.org/pdf/2001.02114), while explanations are less useful and can cause over-reliance (arxiv.org/abs/2310.12558, arxiv.org/pdf/2406.19170). Other interaction cues like AI warmth can also make a difference (arxiv.org/abs/2407.07950).
comment in response to post
What do you mean by core capabilities for VLMs? IMO core capabilities should be determined by the applications we care about, and I'd argue medical use cases are at least as important as (if not more important than) MSCOCO-style images/scenes
comment in response to post
What are you using o1 pro for? And in what ways do you think it's better than other LLMs?
comment in response to post
Is this advice you reserve for a particular class of problems, or is it just generally applicable because we still don't know the full breadth of LLM capabilities?
comment in response to post
I'm always three days away from being three days away
comment in response to post
We hope our work inspires the community to more closely consider how user characteristics, including but not limited to trust, affect how people rely on AI assistance.
Work done with the always-awesome @thomason.bsky.social!
comment in response to post
Improving AI reliability is more important than ever as AI systems are increasingly deployed in real-world settings with high stakes. We believe it is important for AI researchers to think about the user-AI dyad, rather than just the AI in a vacuum.
comment in response to post
These findings show that being able to estimate users' trust levels can enhance human-AI collaboration, but we also find that modeling user trust is very challenging! Our work reveals promising new directions for user modeling that extend beyond merely learning user preferences.
comment in response to post
We show that adapting AI behavior to user trust levels, by showing AI explanations during moments of low trust and counter-explanations during moments of high trust, effectively mitigates inappropriate reliance and improves decision accuracy! These improvements are also seen with other intervention strategies.
comment in response to post
In two decision-making tasks, we find that low and high user trust levels worsen under-reliance and over-reliance on AI recommendations, respectively.
Can the AI assistant do something differently when user trust is low/high to prevent such inappropriate reliance? Yes!
comment in response to post
Does each of these correspond to a particular conf deadline? I'm guessing:
May: EMNLP
July: AACL?
Oct: EACL/NAACL
Feb: ACL
comment in response to post
Hi Marc! Could I get added?
comment in response to post
Ooh what agent? Any pointers to how I can set this up?
comment in response to post
EveryPhD EveryLab all at once
comment in response to post
As long as the last time you saw/spoke to them was last year -- I wish my dentist Happy New Year in August.
comment in response to post
You forgot about mid-training (which incidentally is also what I call my training runs).