Very cool! We've been working on making batch inference faster for this use case too (extraction from 10000 PDFs vs. just 1 extraction, or extracting 10000 things from 1 PDF)
Too bad they don't do vision. For PDFs at least, I found that nothing beats using a screenshot and a vision model, in addition to whatever text you have extracted from the PDF, to get accurate results.
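A minimal sketch of what that comment describes: pairing the extracted text layer with a page screenshot in one OpenAI-style vision request. The model name, prompt, and helper function here are assumptions, not anything from the thread; the payload shape follows the common content-parts format for image input.

```python
import base64

def build_vision_request(extracted_text: str, screenshot_png: bytes) -> dict:
    """Combine PDF-extracted text with a page screenshot into one
    OpenAI-style chat request (model name is a placeholder)."""
    image_b64 = base64.b64encode(screenshot_png).decode("ascii")
    return {
        "model": "gpt-4o",  # assumption: any vision-capable model works here
        "messages": [
            {
                "role": "user",
                "content": [
                    # The extracted text layer gives the model exact strings;
                    # the screenshot preserves layout, tables, and stamps.
                    {"type": "text",
                     "text": "Extract the fields from this page. "
                             "Text layer for reference:\n" + extracted_text},
                    {"type": "image_url",
                     "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
                ],
            }
        ],
    }

req = build_vision_request("ACME Corp\nTotal: $42.00", b"\x89PNG\r\n")
```

The request would then go to any vision-capable chat endpoint; sending both modalities lets the model cross-check OCR/text-layer output against the rendered page.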
That's a use case where small LLMs can really shine. And that's where I use LLMs the most currently, besides coding assistance.
/sorry could not resist
Might have a look into it...
Thanks.