ai-nikolai.bsky.social
CS. PhD Candidate in LLM Agents @ImperialCollegeLondon || ex tech-founder
18 posts 477 followers 203 following

Do LLMs need rationales for learning from mistakes? 🤔 When LLMs learn from previous incorrect answers, they typically observe corrective feedback in the form of rationales explaining each mistake. In our new preprint, we find these rationales do not help; in fact, they hurt performance! 🧵
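Roughly, the two feedback conditions being compared look like this (a minimal sketch; the function name and prompt wording are mine, not from the preprint):

```python
# Hypothetical sketch of the two feedback conditions; names and prompt
# wording are illustrative, not taken from the preprint.

def build_retry_prompt(question: str, wrong_answer: str, rationale: str | None) -> str:
    """Build a second-attempt prompt from a previous incorrect answer."""
    prompt = (
        f"Question: {question}\n"
        f"Your previous answer was incorrect: {wrong_answer}\n"
    )
    if rationale is not None:
        # "With rationale" condition: feedback explains *why* the answer was wrong.
        prompt += f"Why it was wrong: {rationale}\n"
    prompt += "Please try again.\nAnswer:"
    return prompt

# "Without rationale" condition: the model only sees that it was wrong.
print(build_retry_prompt("What is 17 * 24?", "398", rationale=None))
```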

Not that you need another thread on DeepSeek's R1, but I really enjoy these models, and it's great to see an *open*, MIT-licensed reasoner that's ~as good as OpenAI o1. A blog post: itcanthink.substack.com/p/deepseek-r... It's really very good at ARC-AGI, for example:

LLM360 gets way less recognition than the quality of their totally open releases over the last year+ deserves. They dropped a 60+ page technical report last week and I don't know if I saw anyone talking about it. Along with OLMo, it's the other up-to-date, fully open-source LM. Paper: https://buff.ly/40I6s4d

#NLP #LLMAgents Community, I have a question: I have been running Webshop with older GPTs (e.g. gpt-3.5-turbo-1106 / -0125 / -instruct). On 5 different code repos (ReAct, Reflexion, ADaPT, StateAct) I am getting scores of 0%, while previously the scores were at ~15%. Any thoughts, anyone?
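For anyone who wants to sanity-check with me, this is the kind of minimal test I'd run first, to separate API/model drift from repo-level bugs (prompt wording is illustrative, not from any of the repos):

```python
# Minimal sanity check: does the legacy model still emit a parseable
# WebShop-style action? If not, the 0% scores are likely model drift,
# not a bug in the agent repos. Uses the openai v1 client.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

resp = client.chat.completions.create(
    model="gpt-3.5-turbo-0125",
    temperature=0,
    messages=[{
        "role": "user",
        "content": (
            "You are shopping online. Respond with exactly one action, e.g. "
            "search[red shoes] or click[Buy Now].\n"
            "Instruction: buy a pair of red running shoes under $50.\n"
            "Action:"
        ),
    }],
)
# A malformed action here would explain parsers scoring every episode 0.
print(resp.choices[0].message.content)
```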

Posting a call for help: does anyone know of a good way to simultaneously treat both POTS and Ménière’s disease? Please contact me if you’re either a clinician with experience doing this or a patient who has found a good solution. Context in thread

Meet OLMo 2, the best fully open language model to date, including a family of 7B and 13B models trained on up to 5T tokens. OLMo 2 outperforms other fully open models and competes with open-weight models like Llama 3.1 8B. As always, we released our data, code, recipes and more 🎁
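If you want to try it, loading it with Hugging Face transformers should look roughly like this (the hub ID below is my assumption, double-check the release page):

```python
# Rough usage sketch; the exact hub ID is an assumption, verify on Ai2's page.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allenai/OLMo-2-1124-7B"  # assumed ID for the 7B variant
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Language modeling is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```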

Pretty cool people are being added to the LLM Agent & LLM Reasoning group. Thanks @lisaalaz.bsky.social for suggesting @jhamrick.bsky.social @gabepsilon.bsky.social and others. Feel free to mention yourself and others. :) go.bsky.app/LUrLWXe #LLMAgents #LLMReasoning

#EMNLP2024 was a fun time to reconnect with old friends and meet new ones! Reflecting on the conference program and in-person discussions, I believe we're seeing the "Google Moment" of #IR research play out in #NLProc. 1/n

I thought I'd create a Starter Pack for people working on LLM Agents. Please feel free to self-refer as well. go.bsky.app/LUrLWXe #LLMAgents #LLMReasoning

Hi Bluesky, I'd like to introduce myself 🙂 I am PhD-ing at Imperial College under @marekrei.bsky.social’s supervision. I am broadly interested in LLM/LVLM reasoning & planning 🤖 (here’s our latest work: arxiv.org/abs/2411.04535) Do reach out if you are interested in these (or related) topics!

Quick intro to myself. I am a CS PhD candidate working on LLM Agents @imperial-nlp.bsky.social with @marekrei.bsky.social. This is our latest work on LLM Agents, StateAct: arxiv.org/abs/2410.02810 (outperforming ReAct by ~10%). Feel free to reach out for collaboration.
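The core idea, very roughly (my paraphrase of the paper's explicit state-tracking idea, not its exact prompt format):

```python
# Illustrative contrast between a ReAct-style step and a StateAct-style step.
# Field names are my paraphrase, not the paper's exact prompt format.

REACT_STEP = (
    "Thought: I need to find red shoes first.\n"
    "Action: search[red shoes]"
)

STATEACT_STEP = (
    "Goal: buy red running shoes under $50\n"   # goal restated at every step
    "State: on search page, cart empty\n"       # explicit current state
    "Thought: I need to find red shoes first.\n"
    "Action: search[red shoes]"
)
```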

Welcoming more of our NLP researchers at Imperial to Bluesky!! Looking forward to following everyone's work on here. To follow us all, click 'follow all' in the starter pack below: go.bsky.app/Bv5thAb