nathanielblalock.bsky.social
Graduate Research Assistant in Dr. Philip Romero's Lab at Duke/Wisconsin | Reinforcement and Deep Learning for Protein Redesign | He/him
20 posts
123 followers
418 following
comment in response to
post
Let me know if you’d like me to clarify anything. I’m happy to talk!
comment in response to
post
Me too 🤪 It is really exciting to be submitting! We definitely learned a lot along the way
comment in response to
post
Thank you for sharing our work @kevinkaichuang.bsky.social! It means a lot
comment in response to
post
Thank you for posting about our preprint!
comment in response to
post
and our open-source code at github.com/RomeroLab/RLXF
comment in response to
post
Want to learn more? Check out our preprint at www.biorxiv.org/content/10.1...
comment in response to
post
We apply RLXF across five diverse protein classes to demonstrate its generalizability and effectiveness at generating optimized sequences by learning functional constraints beyond those captured during pre-training
comment in response to
post
Experimental validation reveals the RLXF-aligned model generates a higher fraction of functional sequences, a greater number of sequences more fluorescent than CreiLOV, and the brightest oxygen-independent fluorescent protein variant reported to date
comment in response to
post
We align ESM-2 to experimental fluorescence data from the CreiLOV flavin-binding fluorescent protein. The aligned model learns to prioritize mutations that enhance fluorescence, many of which are missed by the base model
comment in response to
post
RLXF follows a two-phase strategy inspired by RLHF. Supervised Fine-Tuning initializes the model in the right region of sequence space. Proximal Policy Optimization directly aligns sequence generation with feedback from a reward function like a sequence-function predictor
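(Not the RLXF implementation; see github.com/RomeroLab/RLXF for the actual code. Below is a toy sketch of the two-phase idea: a per-position categorical "policy" over amino acids is first fit to a known functional sequence as a stand-in for supervised fine-tuning, then updated with PPO's clipped objective against a placeholder reward that stands in for a learned sequence-function predictor. The model, data, and reward here are all hypothetical simplifications.)

import torch
import torch.nn.functional as F

AA = "ACDEFGHIKLMNPQRSTVWY"
VOCAB, SEQ_LEN = len(AA), 8

# Toy stand-in for a protein language model: independent per-position logits.
policy = torch.nn.Parameter(torch.zeros(SEQ_LEN, VOCAB))
opt = torch.optim.Adam([policy], lr=1e-2)

def sample_sequences(n):
    probs = F.softmax(policy, dim=-1)            # (L, 20)
    dist = torch.distributions.Categorical(probs)
    seqs = dist.sample((n,))                     # (n, L) residue indices
    logp = dist.log_prob(seqs).sum(-1)           # per-sequence log-probability
    return seqs, logp

def reward(seqs):
    # Placeholder for a sequence-function predictor:
    # here, just the fraction of a "favorable" residue (purely illustrative).
    return (seqs == AA.index("W")).float().mean(-1)

# Phase 1: supervised fine-tuning on a known functional sequence (toy example).
sft_target = torch.tensor([AA.index(a) for a in "WKDLGWNA"])
for _ in range(200):
    loss = F.cross_entropy(policy, sft_target)
    opt.zero_grad(); loss.backward(); opt.step()

# Phase 2: PPO-style alignment against the reward function.
for step in range(100):
    with torch.no_grad():
        seqs, logp_old = sample_sequences(64)
        r = reward(seqs)
        adv = r - r.mean()                       # centered reward as a simple advantage
    for _ in range(4):                           # a few PPO epochs per sampled batch
        probs = F.softmax(policy, dim=-1)
        dist = torch.distributions.Categorical(probs)
        logp = dist.log_prob(seqs).sum(-1)
        ratio = (logp - logp_old).exp()
        clipped = torch.clamp(ratio, 0.8, 1.2)
        loss = -torch.min(ratio * adv, clipped * adv).mean()
        opt.zero_grad(); loss.backward(); opt.step()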
comment in response to
post
Pre-trained pLMs generate highly diverse sequences mirroring statistical patterns from natural proteins. But here's the challenge: they lack an explicit understanding of function, often failing to generate proteins with enhanced or non-natural activities. RLXF bridges this gap!
comment in response to
post
It was a pleasure meeting you! Y'all are doing super interesting and relevant work. It will be cool to see how we can continue to interact and maybe collaborate in the future!
comment in response to
post
Favorite foods! Tandoori chicken and chili momos: everestkitchen.ca. Onigiri! www.onigiriya.ca. Pho: www.viethouserestaurant.com.
comment in response to
post
Paper #4: arxiv.org/abs/2406.17692 from the incredible
@gregdnlp.bsky.social. I really like how they explore what happens during the alignment of LLMs with RLHF. This was so cool to see, having observed similar outcomes in my research.
comment in response to
post
Papers #2-3: arxiv.org/abs/2402.10210 and arxiv.org/abs/2405.00675 from the incredible
@quanquangu.bsky.social. I really like how they explore new techniques for RLHF
comment in response to
post
Paper #1: arxiv.org/abs/2412.12979
Aligning autoregressive pLMs to generate EGFR binders via Direct Preference Optimization (DPO) from the incredible @noeliaferruz.bsky.social, who gave a great talk as part of the MLSB workshop
comment in response to
post
Hey Kevin, could I be added? This is really helpful for joining Bluesky! Thank you for doing it