recrudesce.co.uk
Just a guy obsessed with "AI" (LLMs and Agents), LEGO, 3D printing and Astrophotography. Oh, and cats - always cats.
Runs the 3DPrinting feed at https://3dp.blue
758 posts
823 followers
128 following
Regular Contributor
Active Commenter
comment in response to
post
Maybe it's actually a book about the market town and civil parish in Gloucestershire :P
comment in response to
post
I prefer the graphics on the US cover, but have absolutely NO idea why they decided to join the H and the R together...
comment in response to
post
I know, that's why I posted it to show that it being visible isn't a "OMG LOOK WHAT JUST HAPPENED FOR THE FIRST TIME !!!!!!!" thing.
comment in response to
post
This happens every 3 or so years - www.theguardian.com/uk-news/2019... from 2019, www.walesonline.co.uk/news/wales-n... from last year etc.
comment in response to
post
That involves having to use the Epic Store though, so I'll pass :P
comment in response to
post
I thought everyone knew that 10%+2mph wasn't a hard and fast rule - it's entirely down to the council etc to set the leeway.
If it's a ticket, just pay it and get on with your life tbh - easier than the stress and hassle of fighting it, and if you lose it'll be a LOT more expensive.
comment in response to
post
Totally agree - and something I have to explain pretty much daily as part of my job. "Can we make AI do this ?" Yes, but it requires a lot more than just Llama3.2 on an EC2 instance :-P
comment in response to
post
"ChatGPT is not a search engine" is still correct, though, cos it's not a search engine like Google/Yahoo etc. It can search the web using a search engine tool, sure, but it's still not a search engine.
comment in response to
post
So "ChatGPT" if you're talking about the platform, CAN search the web via a search tool that's called by the model, but the model isn't doing the searching - it's instructing a script to get the information from a search endpoint, which is then returned to the model as context for generation.
comment in response to
post
The thing to remember here is that it's not the LLM that's doing the searching - an LLM can't do anything other than generate output tokens based on input tokens. The fact it has the ability to call tools to get it those input tokens from other sources is what gives it the "ability" to search.
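Roughly what that loop looks like if you sketch it yourself with the OpenAI Python SDK - this isn't ChatGPT's actual internals, and the web_search helper (and the endpoint behind it) is entirely made up, but it shows the point: the model only emits a tool call, and ordinary code does the fetching before handing the results back as context.

```python
# Minimal sketch of the tool-calling loop described above (OpenAI Python SDK).
# web_search() is a hypothetical helper - the model never searches anything itself.
from openai import OpenAI
import json

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def web_search(query: str) -> str:
    """Hypothetical helper - swap in whatever search endpoint you actually have."""
    return f"(search results for: {query})"

tools = [{
    "type": "function",
    "function": {
        "name": "web_search",
        "description": "Search the web and return result snippets",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}]

messages = [{"role": "user", "content": "What's in the news in Gloucestershire today?"}]
response = client.chat.completions.create(model="gpt-4o-mini", messages=messages, tools=tools)
msg = response.choices[0].message

if msg.tool_calls:
    messages.append(msg)  # keep the assistant's tool-call turn in the history
    for call in msg.tool_calls:
        args = json.loads(call.function.arguments)
        result = web_search(**args)  # *our* code does the searching, not the model
        messages.append({"role": "tool", "tool_call_id": call.id, "content": result})
    # second pass: the model now generates its answer from the returned context
    final = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    print(final.choices[0].message.content)
```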
comment in response to
post
I feel like this needs to be shipped, tout de suite, to @batarong.com for some GAMING.
comment in response to
post
So rather than finding a piece of software that works, you want to completely change your whole operating system and user experience instead ? Kinda what I said about open source stuff being a bit janky.
I know which one I'd do, and it'd be finding a working piece of calibration software :P
comment in response to
post
Why don't you want to use Windows ? What's the catalyst to switch ? Finding decent open source versions of closed source applications is such an arse. Most open source versions of Adobe/Microsoft products are shite in comparison etc etc.
comment in response to
post
I don't have any memory of ever changing the harddisc in this thing, but I guess I did sometime in the last 25 years.
comment in response to
post
Turns out the original harddisc was 10gb, but there's an 80gb one installed. And I'm cloning 24gb of it over USB1.1 :P It's taken 30 minutes to do 1gb. So this is going to take a while...
comment in response to
post
Try www.1bitrainbow.co.uk/parts-store.... or www.thebookyard.com - they're where I get all my vintage Apple parts from.
comment in response to
post
Yeah, my AI box runs a 4090 (thinking about putting a 5090 in it actually) - I like the idea of small self contained AI solutions, but I don't think we're quite there yet unless you shell out for a DGX box.
comment in response to
post
Just noticed, also, that Deepseek has a 1b model which might run well - ollama.com/library/deep... - regardless you're going to be limited due to the fact you're doing inference on the CPU, which will always be slow compared to GPU inference etc.
comment in response to
post
I know that using MS models is prob expected given your employer, but I would look at models like Gemma3:1b (which is <1gb), or other small models: huggingface.co/blog/jjokah/...
OR, if you really want to try it out, give BitNet a go: github.com/microsoft/Bi...
comment in response to
post
Not to mention trying to find all the typos, either in what was printed, or what you actually typed in :P
comment in response to
post
What model did you use and how do you find the performance (what's the tokens per second ?). I've been thinking of doing something with a Pi5, but not really sure how capable it would be compared to other hardware I have.
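For what it's worth, this is roughly how I'd measure tokens/sec myself, assuming the Pi is running Ollama (the hostname and model name here are just examples): the non-streaming /api/generate response includes eval_count (output tokens) and eval_duration (nanoseconds spent generating), which gives you the number directly.

```python
# Rough tokens/sec check against an Ollama instance (hostname/model are examples).
import requests

resp = requests.post(
    "http://raspberrypi.local:11434/api/generate",
    json={
        "model": "gemma3:1b",  # whatever small model is pulled on the Pi
        "prompt": "Explain tool calling in one paragraph.",
        "stream": False,
    },
    timeout=300,
)
data = resp.json()
tokens_per_sec = data["eval_count"] / (data["eval_duration"] / 1e9)
print(f"{data['eval_count']} tokens in {data['eval_duration'] / 1e9:.1f}s "
      f"-> {tokens_per_sec:.1f} tok/s")
```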
comment in response to
post
If you have the ability to get to a BBS from your C64, take a look at this stupid thing I coded: github.com/recrudesce/v...
comment in response to
post
Oh man, a RiscPC. We had one at school with a 486 PC card in it, so it could run Win3.11. That's a blast from the past !
comment in response to
post
I used to walk up the road to the local newsagent as a kid to buy my games. They had them all on one of those spinny stands (like the ones they put postcards on).
The newsagent is still there 30 years later, but I don't think they sell tapes any more :P
comment in response to
post
There's a tiny amount of nostalgia for that style of case (turbo button, key etc) for me, and I see why people would want one (retro PC collectors might want to "hide" their modern hardware, etc). Just kinda wish they'd not gone all in with the 5.25" drives as a lot of PCs were 3.5" by then.
comment in response to
post
Planning to reinstall MacOS9.2.2 on my iMac G3 after I've swapped the harddisc to an SSD.
comment in response to
post
Probably when we realised it was a marketing ploy to sell covers for electronics ? Maybe it's cos we realised that dust isn't _that_ much of an issue when it comes to electronics.
Or maybe Big Dust won the computing wars of the late 90's :P
comment in response to
post
I've been looking for a list like this for AGES - thanks so much for setting this up :)
comment in response to
post
I don't honestly know how to read these, but 3:1b isn't great at some things compared to 2:2b, but might be worth a shot :)
comment in response to
post
Would be worth looking at Gemma3:1b, or other small models: huggingface.co/blog/jjokah/...
OR, if you really want to try it out, give Microsoft's BitNet a go: github.com/microsoft/Bi...
comment in response to
post
That's not how you search for alt text. I add alt text on all my image posts, and searching for "alt text" on my profile just shows me posts where I specifically have the term "alt text" in the text of the post (not the image).
But yes, there's no alt text on @xkcd.com's posts :(
comment in response to
post
Isn't that "sour dough" ?
comment in response to
post
Good place to buy stuff from people who have no idea what they're selling, so you can get some crazy cheap stuff like retro computers :P
comment in response to
post
Nice - merged, I was thinking of adding image generation, but the generated images are converted to ANSI/ASCII art before being returned to the user :P
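Not the actual implementation, but the "turn the generated image into ASCII before sending it back" bit boils down to something like this minimal Pillow sketch (the file name is just an example): downscale, convert to greyscale, map brightness onto a character ramp.

```python
# Minimal image -> ASCII sketch: greyscale, downscale, map brightness to chars.
from PIL import Image

CHARS = " .:-=+*#%@"  # dark -> light ramp (reverse it for light-background terminals)

def image_to_ascii(path: str, width: int = 80) -> str:
    img = Image.open(path).convert("L")  # greyscale
    # terminal cells are roughly twice as tall as wide, so squash the height
    height = max(1, int(img.height * (width / img.width) * 0.5))
    img = img.resize((width, height))
    pixels = list(img.getdata())
    lines = []
    for y in range(height):
        row = pixels[y * width:(y + 1) * width]
        lines.append("".join(CHARS[p * (len(CHARS) - 1) // 255] for p in row))
    return "\n".join(lines)

print(image_to_ascii("generated.png"))  # hypothetical output file from the image model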
comment in response to
post
I've just seen you forked it :P
comment in response to
post
If you want to go even further into this, look into the security risks around Remote MCP servers. I think OpenAI have their own "tools" implementation (though I know they support MCP natively too), and there's something called "UIM" which I have no idea about :P
gunjanvi.medium.com/what-i-learn...
comment in response to
post
I don't know Rust, so that might take a while :P
comment in response to
post
On a roll. Added the ability to set the OpenAI base_url, allowing the use of OpenAI compatible APIs, such as Ollama, OpenWebUI, vLLM. It _might_ work with Perplexity, not tested it.
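The general pattern behind the base_url change, in case anyone wants to try it: the OpenAI Python SDK will talk to any OpenAI-compatible endpoint. Here it's pointed at a local Ollama instance - the URL and model name are just examples, not what the project ships with.

```python
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible endpoint
    api_key="ollama",                      # Ollama ignores the key, but the SDK requires one
)

resp = client.chat.completions.create(
    model="gemma3:1b",
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(resp.choices[0].message.content)
```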
comment in response to
post
Also added a Dockerfile, and a docker-compose.yml file for those who want to run it in Docker. Yes, I could write you a Helm chart too, but I despise Helm. Maybe I'll write a K8S deployment.yaml, who knows - see how I feel.
comment in response to
post
I have modified this to add the following:
- Added Chat History, so now your chats have context (rough sketch of the idea below).
- Added the ability to use Gemini, OpenAI, or Anthropic APIs.
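For the chat-history item above, this is roughly what it boils down to with any OpenAI-style API - not the actual code in the repo, just the pattern: keep every turn in a list and send the whole list back each time, so the model sees the earlier context.

```python
from openai import OpenAI

client = OpenAI()
history = [{"role": "system", "content": "You are a helpful assistant."}]

def chat(user_input: str) -> str:
    history.append({"role": "user", "content": user_input})
    resp = client.chat.completions.create(model="gpt-4o-mini", messages=history)
    reply = resp.choices[0].message.content
    history.append({"role": "assistant", "content": reply})  # remember the reply too
    return reply

print(chat("My name is Recrudesce."))
print(chat("What's my name?"))  # answered from the accumulated history
```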