markptorres.bsky.social
ML Eng @Northwestern, building recommender algos and LLM apps. Building Bluesky feeds @ https://bsky.app/profile/mindtechnologylab.bsky.social. BS (Statistics) @Yale + MS (Computer Science) @UT Austin. Recovering startup tech bro.
170 posts
137 followers
381 following
Regular Contributor
Active Commenter
comment in response to
post
Self-identifying as some variant of “I’m a critical/free thinker” or “I can think for myself” is almost certainly a signal that one cannot, in fact, think critically for oneself.
comment in response to
post
Overall really good! Great way to digest research papers, plus I've been running out of new podcast episodes lately so it's nice to be able to make my own custom podcast episodes. Wish I could steer the podcasting behavior a little more and wish that it were longer but I'm liking it so far.
comment in response to
post
More NotebookLM notes:
- It has a funny pronunciation of "SQL" that I've never heard before (almost like "sekl"?)
- The two podcast hosts are always the same and I can only mildly steer their behavior with system prompts.
- There are weird times when the hosts like to finish each other's sentences?
comment in response to
post
The Great Fire of Rome happened when the data centers full of the latest 3,000 nm chips caught fire and there weren’t enough aqueducts to cool them down. Completely unrelated to Nero joining AMD’s board just 6 months before and sitting on Palatine Hill with Lisa Su to watch NVIDIA burn.
comment in response to
post
Can you imagine the number of aqueducts they must've had to build to cool down all their data centers? Back then, Nvidia must've been on their 5,000 nm chips, so hopefully the Romans and Greeks called ahead to reserve the 4,000 nm chips in advance.
comment in response to
post
I think that's true. I suppose the caveat is that Mookie plays one position for weeks or months at a time, whereas it did seem like you'd only know where Zobrist was playing when the lineup card came out. The Rays seem to like listing guys as generic IF and OF instead of by position, especially post-Longo.
comment in response to
post
I wonder if there's anyone who conclusively out-Zobristed Zobrist himself over the course of multiple seasons. Zorilla was a cog in some good Rays and Cubs teams before being a super-utility player was cool.
comment in response to
post
I'm not too aware of AI detection research but this was an interesting way to do it. It's trivial to fool normal AI checkers, since plain zero-shot detection fails. But you can build a better AI checker if you include a retrieval step that compares a text to known AI-generated text.
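To be clear, I'm not the paper's author; this is just a minimal sketch of how I picture that retrieval step, assuming a sentence-transformers embedding model and a toy corpus of known AI-generated samples (the model name, samples, and threshold are my placeholders, not the paper's method):

```python
# Minimal sketch: embed an incoming text and compare it against a small corpus
# of known AI-generated samples; a high max similarity is treated as a red flag.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed model, any embedding model works

known_ai_texts = [
    "In conclusion, the implications of this topic are multifaceted and nuanced.",
    "Certainly! Here is a detailed overview of the requested subject.",
]
known_ai_embeddings = model.encode(known_ai_texts, convert_to_tensor=True)

def looks_ai_generated(text: str, threshold: float = 0.75) -> bool:
    """Flag text whose nearest known AI-generated sample is very similar."""
    query_embedding = model.encode(text, convert_to_tensor=True)
    max_similarity = util.cos_sim(query_embedding, known_ai_embeddings).max().item()
    return max_similarity >= threshold

print(looks_ai_generated("Certainly! Here is an overview of the subject."))
```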
comment in response to
post
I keep getting ads for "OpenAI pays its LLM engineers 750k, here are 7 projects to get YOU an LLM engineering job", what absolute slop.
Engineers also sometimes forget that they're hired to solve problems that happen to involve code, so we can't lose sight of what those problems are in the first place.
comment in response to
post
Oh man, I thought this was just me, my entire app runs on several of these bash scripts LOL
comment in response to
post
Probably what went through his head after it went down:
comment in response to
post
If he were on-call this past week and more pipelines broke in prod, none of this would've happened smh
comment in response to
post
I think they match up well with someone like Seattle; they’ve got too much pitching and need some hitting. The Mets should gun for someone like Woo or Hancock; that goes a long way towards making the team more complete. It would also help to have better luck with health; Marte needs to stay on the field.
comment in response to
post
Yeah I agree, and I think if they get one more big star and then get a few more pieces like they did last year with Manaea, Severino, and Iglesias, they’re going to be a top contender. Problem is, if they don’t, that lineup starts looking awfully top-heavy, like the Yanks or KC.
comment in response to
post
No point for Lindor and Soto to get on base if nobody is going to knock them in 🤣 as top-heavy as the Yankees lineup was, this Mets lineup isn’t that much better. Maybe if they get Alonso back, then things change.
comment in response to
post
I use custom meta-prompts for ChatGPT to have it write the way I want it to write. I pasted in the last 15 paragraphs worth of ChatGPT responses from my latest ChatGPT thread into GPTZero and it marked it as 72% human…
comment in response to
post
The stupid thing about AI detectors is that none of them actually work. It’s trivial to break one. You can just as easily ask ChatGPT “write in the style of [insert author] and write sentences in a clear, declarative manner” and pass any AI detector. But people love peddling the snake oil.
comment in response to
post
At best, probably 5% of people I know outside of tech know what ChatGPT is, and probably 95% of that subset hasn’t tried it because “doesn’t it just make things up?” and “I went to school and I’m smart, I don’t need AI’s help”.
comment in response to
post
OpenAI was happy to operate as a nonprofit until they realized they were actually sitting on a goldmine worth billions of dollars, and then suddenly we see “oh, but we have to fulfill our fiduciary responsibilities to our investors.”
comment in response to
post
From the looks of it, OpenAI is just getting started:
“[OpenAI’s] blueprint also outlines a North American AI alliance to compete with China's initiatives and a National Transmission Highway Act ‘as ambitious as the 1956 National Interstate and Defense Highways Act.’”
comment in response to
post
All of this to play in a division with the Phillies and Braves and then to get bounced in the playoffs by the Dodgers 🤣 The Mets hardly have a rotation behind Senga and Peterson, and LA’s slotting Kershaw and Gonsolin as their #5 and #6. Money would’ve been better spent in the AL.
comment in response to
post
I’ve found it helpful to ask ChatGPT things like “why couldn’t I do it like this [insert steps]?” or “explain how they came up with that way of solving the problem. What wasn’t working before and why would an idea like this work?”. It was pretty helpful for understanding why transformers work.
comment in response to
post
Trying to learn more about the Bluesky API! Easiest way is just replying to yourself 🤣
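For anyone curious, the self-reply test looks roughly like this sketch against the raw XRPC endpoints (the handle, app password, and the parent post's uri/cid are placeholders; a client library like atproto would wrap the same calls):

```python
# Rough sketch of the "reply to yourself" test against the Bluesky XRPC API.
from datetime import datetime, timezone
import requests

PDS = "https://bsky.social"

# 1. Create a session with an app password to get an access token and DID.
session = requests.post(
    f"{PDS}/xrpc/com.atproto.server.createSession",
    json={"identifier": "markptorres.bsky.social", "password": "app-password-here"},
).json()

# 2. Create a post record that points at an existing post (here, my own) as its parent.
parent_ref = {"uri": "at://did:plc:.../app.bsky.feed.post/...", "cid": "bafy..."}  # placeholders
record = {
    "$type": "app.bsky.feed.post",
    "text": "reply to my own post!",
    "createdAt": datetime.now(timezone.utc).isoformat().replace("+00:00", "Z"),
    "reply": {"root": parent_ref, "parent": parent_ref},
}
resp = requests.post(
    f"{PDS}/xrpc/com.atproto.repo.createRecord",
    headers={"Authorization": f"Bearer {session['accessJwt']}"},
    json={"repo": session["did"], "collection": "app.bsky.feed.post", "record": record},
)
print(resp.json())
```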
comment in response to
post
reply to my own post!
comment in response to
post
another test reply
comment in response to
post
Yeah this is a problem I’ve had because most political posts are about current events. It works well normally, but it errors when, say, there’s a new politician/person/law/bill in the news that isn’t in the knowledge base, which isn’t rare. I’m tinkering with some RAG-esque ways to get context for it.
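Not the production pipeline, but the shape of the RAG-esque idea I'm playing with is roughly this: look up recent-events snippets for entities the model may not know about, and paste them into the classification prompt (retrieve_recent_context is hypothetical; the OpenAI model name is just an example):

```python
# Sketch only: retrieval step + LLM classification prompt with extra context.
from openai import OpenAI

client = OpenAI()

def retrieve_recent_context(post_text: str) -> str:
    """Hypothetical retrieval step: return short news snippets about entities in the post."""
    return "Context: <snippets about any new politicians/bills mentioned in the post>"

def classify_political_post(post_text: str) -> str:
    context = retrieve_recent_context(post_text)
    prompt = (
        f"{context}\n\n"
        f"Post: {post_text}\n\n"
        "Using the context above, label this post's political leaning as "
        "'left', 'right', or 'unclear'. Answer with one word."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name, not necessarily what I use
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content.strip()
```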
comment in response to
post
I didn’t know LLMs knew stuff like country-specific politics! Wonder how it would do if you asked it to label political parties. I use LLMs to classify Democrat/Republican posts for US politics and it works well, and for language I use fasttext, which can label a batch of millions of posts in seconds.
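The fasttext piece is basically the off-the-shelf language-ID model, something like this sketch (the model path and example posts are just illustrative):

```python
# Sketch of batch language labeling with fastText's published lid.176.bin model.
import fasttext

model = fasttext.load_model("lid.176.bin")

posts = ["this feed is great", "me encanta este feed", "j'adore ce fil"]
labels, probs = model.predict(posts)  # batch prediction: one label list + prob array per post
for post, label, prob in zip(posts, labels, probs):
    print(post, label[0], round(float(prob[0]), 3))
```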
comment in response to
post
Sometimes Claude will point out that it made an error in the code chunk that it suggested, but then it will return the exact same buggy code as before and try to gaslight me by saying “if you run it this time, it should fix your error! Let me know if there’s anything else I can help you with!” 🫠
comment in response to
post
Can confirm, building neural nets from scratch is a really great exercise. Plus you run into practical problems, fix them, and realize "wow, this is already fixed in PyTorch". I also did this recently for LLMs with Karpathy's videos, and I'm planning to work through more of them too.
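If it helps, this is the scale of exercise I mean: a tiny two-layer numpy net on XOR with hand-written backprop (the sizes, learning rate, and step count are arbitrary):

```python
# From-scratch exercise: two-layer network on XOR, manual forward + backward pass.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros((1, 8))
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

lr = 0.5
for step in range(5000):
    # forward pass
    h = np.tanh(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # backward pass (binary cross-entropy + sigmoid gives out - y at the logits)
    d_out = (out - y) / len(X)
    d_W2 = h.T @ d_out
    d_b2 = d_out.sum(axis=0, keepdims=True)
    d_h = (d_out @ W2.T) * (1 - h ** 2)  # tanh derivative
    d_W1 = X.T @ d_h
    d_b1 = d_h.sum(axis=0, keepdims=True)

    # gradient descent step
    W1 -= lr * d_W1; b1 -= lr * d_b1
    W2 -= lr * d_W2; b2 -= lr * d_b2

print(np.round(out, 2))  # should approach [[0], [1], [1], [0]]
```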
comment in response to
post
The days of doing a coding bootcamp and having a job in 6 months are over. Because of LLMs, people aren't even hiring junior devs anymore. Devs are definitely going to be fine lol, by the time AGI eliminates experienced devs it'll have automated away so many other jobs too.