rambalachandran.bsky.social
Engineer with a passion for Automation. Focused on leveraging AI and Tech for Core Engineering Organizations. Loves to Bike, Hike, Swim and Camp
31 posts
7 followers
28 following
Regular Contributor
Conversation Starter
comment in response to
post
... and if anyone works with projects that use nested .gitignore files (apparently Rust programmers deal with this a lot?) I'd appreciate help testing this bug fix relating to that - instructions for running the fixed branch using uvx are in the issue comment github.com/simonw/files...
comment in response to
post
Thank you, your tool looks like it has a much simpler interface compared to aider. Will try to test them both and let you know
comment in response to
post
Simon, any advantage running your llm tool over aider for such analysis?
comment in response to
post
So while #LLM can help you write template #code faster, it does not reason about whether you really need to write that code in the first place. Only ingenuity driven by human procrastination can help you achieve that 😄
comment in response to
post
So, recollecting my millennial roots, I went back to good ole #Google and #StackOverflow. Voila, there was a library `jsonpickle` that is purpose-built for problems like this, robust and well tested.
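For anyone curious, a minimal sketch of the kind of round-trip jsonpickle handles (the Sensor class here is a made-up example, not the original code):

import jsonpickle

class Sensor:  # made-up example class for illustration
    def __init__(self, name, readings):
        self.name = name
        self.readings = readings

payload = jsonpickle.encode(Sensor("pump-1", [3.2, 3.4]))  # arbitrary object -> JSON string
restored = jsonpickle.decode(payload)                      # JSON string -> Sensor instance again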
comment in response to
post
In case you are wondering, the output is
=REGEXEXTRACT(A1, "^(?:\d+(?:[-/]\d+|[A-Za-z])?\s+)([^,]+)")
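A rough Python equivalent of that pattern, if you want to sanity-check it outside Sheets (the sample address is made up):

import re

pattern = r"^(?:\d+(?:[-/]\d+|[A-Za-z])?\s+)([^,]+)"
# skip a leading house number like "12", "12-3", "12/3" or "221B",
# then capture everything up to the first comma
match = re.match(pattern, "221B Baker Street, London")
print(match.group(1))  # Baker Street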
comment in response to
post
DeepSeek is the only state-of-the-art model that can integrate web search into its output. #ChatGPT o1 still can't do this.
More details on the experiment: github.com/rambalachand...
comment in response to
post
This has been true for every major technological change: people who were curious enough to learn and work with IBM mainframes, PCs, the web, and mobile apps have always come out on top. What might be different this time is that the investment of time needed might be lower
comment in response to
post
However, it will be interesting to see what happens in the longer run. Will the reduction in cost help more people get into training and, as a result, increase Nvidia's sales? Or will other players like AMD emerge and cut into Nvidia's sales?
Truly interesting times ahead.
comment in response to
post
Repebble waitlist to get the new Pebble
repebble.com
comment in response to
post
Google Announcement: opensource.googleblog.com/2025/01/see-...
comment in response to
post
Thank you. Yes, the model now loads from cache the next time I open it
comment in response to
post
Beginner question. In WebGPU, if you close the tab/window, do you need to download the model again? Is there a way to store the model in cache and reload it every time you open it?
comment in response to
post
Why doesn't brew fix this issue, or at least throw a warning when we install R? I had to dig through tidyverse GitHub issues to get to this
github.com/tidyverse/ti...
comment in response to
post
If you have a browser that supports WebGPU (like Google Chrome) you can try out the DeepSeek-R1 model based on Qwen2.5-Math-1.5B directly in your browser!
It's a 1.28GB page load: huggingface.co/spaces/webml...
comment in response to
post
The bug in the last line is easy to miss, and if the function is buried deep in the codebase, it might take effort to figure out.
comment in response to
post
env_object["AWS_ACCOUNT"] = aws_credentials.get("account_id", "")
env_object["AWS_ACCESS_KEY"] = aws_credentials.get("access_key_id", "")
os.environ["AWS_SECRET_KEY"] = aws_credentials.get("secret_access_key", "")