ThreadSky
chanret.bsky.social • 52 days ago
Experimenting with AnythingLLM to save me reading >250 reports. Looks quite promising, if slow.
Comments
chanret.bsky.social • 52 days ago
It did just bungle a question about electorates 🫣
movadek.bsky.social • 52 days ago
are you running it locally?
chanret.bsky.social • 52 days ago
Yes, using ollama as the LLM
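For anyone trying the same setup, a minimal sketch of talking to a local Ollama server directly over its REST API, which is a quick way to sanity-check the backend AnythingLLM is pointed at. The model name "llama3" is an assumption; substitute whatever `ollama list` shows on your machine.

```python
# Minimal sanity check for a local Ollama server (default port 11434).
# Stdlib only; the model name "llama3" is an assumption -- use one you've pulled.
import json
import urllib.request

payload = {
    "model": "llama3",   # assumed model; check `ollama list`
    "prompt": "Reply with the single word: ok",
    "stream": False,     # return one JSON object instead of a stream
}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    body = json.loads(resp.read())

print(body["response"])
# eval_duration is in nanoseconds, so this gives a rough tokens-per-second
# figure -- handy for judging whether "slow" is the model or the hardware.
print(body["eval_count"] / (body["eval_duration"] / 1e9), "tokens/sec")
```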
movadek.bsky.social • 52 days ago
on a GPU or CPU? I get decent speeds with ollama on a local low-tier GPU
chanret.bsky.social • 52 days ago
It should be running on the GPU, checking now...
chanret.bsky.social • 52 days ago
Yeah, I'm not seeing anything in nvidia-smi, so I think I have some configuration issues to solve the next time I'm creatively procrastinating :-)
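A rough way to check where a loaded model ended up. This is a sketch that assumes `ollama` and `nvidia-smi` are on PATH and that a model is currently loaded; `nvidia-smi` only shows the ollama process while a model is resident on the GPU, so run it during or just after a generation.

```python
# Rough check of where a loaded Ollama model is running.
# Assumes `ollama` and `nvidia-smi` are on PATH and a model is loaded.
import subprocess

# `ollama ps` lists loaded models with a PROCESSOR column
# such as "100% GPU" or "100% CPU".
print(subprocess.run(["ollama", "ps"], capture_output=True, text=True).stdout)

# If the model is on the GPU, the ollama process should appear here
# with its VRAM usage.
print(subprocess.run(["nvidia-smi"], capture_output=True, text=True).stdout)
```

If the PROCESSOR column says CPU despite a GPU being present, the usual suspects are a missing or outdated NVIDIA driver/CUDA runtime, or a model too large for the available VRAM.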
movadek.bsky.social • 52 days ago
the sweet embrace of cuda installations
msaeltzer.bsky.social • 52 days ago
Yes. My hot take is that using LLMs for measuring concepts is far worse than training BERT, but for information extraction it's a revolution. If you really want to check if a text says something, it is a great tool.
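In that spirit, a small sketch of the "does this text say X" pattern against a local model. The endpoint and model name are the same assumptions as in the earlier snippet, and the prompt format is just one way to phrase the constraint.

```python
# Sketch of LLM-based information extraction: ask a constrained yes/no
# question about a document instead of using the model as a free-form scorer.
# Assumes a local Ollama server; the model name is an assumption.
import json
import urllib.request

def text_says(text: str, claim: str, model: str = "llama3") -> bool:
    prompt = (
        "Answer with exactly one word, yes or no.\n"
        f"Does the following text state that {claim}?\n\n{text}"
    )
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps({"model": model, "prompt": prompt, "stream": False}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        answer = json.loads(resp.read())["response"]
    return answer.strip().lower().startswith("yes")

print(text_says("The committee voted 7-2 to adopt the proposal.",
                "the proposal was adopted"))
```

Constraining the output to yes/no keeps the answers easy to validate against a hand-coded sample, which is exactly where this use of LLMs tends to hold up better than treating them as measurement instruments.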