No, 200k is currently a pretty big context window! Much bigger than ChatGPT's.
The 2 million token figure for ChatGPT isn't actually context; it's retrieval: the documents go into something more like a database the model can look stuff up in. But it doesn't always know when to do that, so sometimes it will just make stuff up without checking.
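To make the distinction concrete, here's a minimal sketch of retrieval vs. context, in plain Python with no real model call and naive keyword overlap standing in for the embedding search real systems use. None of the names here come from ChatGPT itself; the point is only that the documents live outside the model, and just a few retrieved chunks ever enter the actual context window.

```python
# A rough sketch (not how ChatGPT actually does it) of retrieval vs. context:
# the documents stay outside the model, and only a few retrieved chunks get
# pasted into the prompt. Naive keyword overlap stands in for embedding search.

def chunk(text: str, size: int = 200) -> list[str]:
    """Split a document into fixed-size word chunks."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def retrieve(question: str, chunks: list[str], k: int = 3) -> list[str]:
    """Return the k chunks that share the most words with the question."""
    q = set(question.lower().split())
    return sorted(chunks, key=lambda c: len(q & set(c.lower().split())), reverse=True)[:k]

def build_prompt(question: str, chunks: list[str]) -> str:
    """Only these few chunks ever occupy the model's context window."""
    context = "\n---\n".join(chunks)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

# Millions of tokens can sit in `documents`, but the model only sees what
# retrieval picks out; if retrieval picks badly, the model just guesses.
documents = ["... your 2 million tokens of uploaded files ..."]
chunks = [c for doc in documents for c in chunk(doc)]
question = "What does the billing module do?"
print(build_prompt(question, retrieve(question, chunks)))
```

A real setup would use embedding search and an actual model API call, but the shape is the same: retrieval decides what fits in the window, and anything it misses the model never sees.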
Comments
Our codebase is 25 million tokens!
🤯
I’m still exploring the various possibilities.
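As an aside on how you'd arrive at a number like 25 million: here's a rough sketch of counting a codebase in tokens, assuming the tiktoken library and OpenAI's cl100k_base encoding (other models tokenize somewhat differently, and the extension list is just an example).

```python
# Rough token count for a source tree. Assumes the tiktoken library and the
# cl100k_base encoding; adjust the extension list for your codebase.
from pathlib import Path

import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

def count_tokens(root: str, exts: tuple[str, ...] = (".py", ".ts", ".go", ".md")) -> int:
    total = 0
    for path in Path(root).rglob("*"):
        if path.is_file() and path.suffix in exts:
            text = path.read_text(encoding="utf-8", errors="ignore")
            # disallowed_special=() so stray "<|endoftext|>"-style strings don't raise
            total += len(enc.encode(text, disallowed_special=()))
    return total

print(f"{count_tokens('.'):,} tokens")  # 25 million of these vs. a 200k window
```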
ChatGPT can handle 2 million tokens' worth of documents.
https://en.wikipedia.org/wiki/Retrieval-augmented_generation