#GoodThingsComeToThoseWhoWait Sorry it's taking us a while. We promise the wait will be well worth it! #OpenFold3 Love all the great work that's already been done #HelixFold3 #Ligo #Chai1 #Protenix #Boltz1
Comments
Will you repeat your analyses from the first OpenFold paper showing how model behavior changed as training proceeded? I found that extremely interesting and unique.
Your talk in Copenhagen reminded me of some work we did where we showed that if you combine an FF and long-range (coevol) contacts you can determine an accurate structure of CsgA (Tian, JACS, 2015), but that if you switch off the FF then you only get the topology right, ...
and if you switch off the contacts you get the right secondary structure, but wrong topology. Looked a bit like the results you see for TMED1 when you exclude beta-proteins in training, making me think that AF learns a *local* FF which it combines with longer-range contacts
This would be consistent with what we're seeing as well! I didn't delve into it during the talk but the implicit FF that AF learns does indeed seem scale-dependent, doing better on smaller scales than larger ones. Basically the model generalizes better for local structure than global fold.
Thanks. Makes sense. Seems also consistent with e.g. chemical shift + energy function models (e.g. CS-Rosetta): if you can get the secondary structure correct (from NMR chemical shifts) and pack the hydrophobic core, then you are likely to have the right structure. So AF only needs to learn a local FF and the co-evol signal