namer.bsky.social
(He/Him). MS in CS @UMBC at the LARA Lab | Accessibility and Multimodal DL. My opinions are mine.
I'm on the PhD application cycle this year!
www.shadabchy.com
213 posts
368 followers
438 following
comment in response to
post
Damn. I saw ICCV claimed 97% of reviews in by the May deadline, using the same system, so this is a pretty big gap.
comment in response to
post
Working at xAI must be such a cushy job lmao
Spending every other week just fine-tuning or prompt-tuning Grok to the misinformation-of-the-week, then spending the week after that reverting those changes because response quality fell off a cliff.
comment in response to
post
uwuspeak tokenizer dropping tomorrow
comment in response to
post
It's incredible how LinkedIn-Blue has become the ubiquitous colour of corporatism.
comment in response to
post
I think it might be a matter of your posts not reaching the feeds your audience uses? I see engagement on most posts in the ML Ranked Feed or Paper Skygest Feed.
How does engagement compare on, uh, other sites?
comment in response to
post
You can check on clearsky.
It's trivial to get on one of those fanatic anti-AI lists by just waiting for a post to go viral and then making a sensible comment somewhere, and that immediately solves half the problems.
comment in response to
post
the visceral pleasure of the dunk is absolute
comment in response to
post
That doesn't solve anything. They could still simply generate an answer using an LLM and then recite that verbatim. They could also use text-to-speech to generate spoken voices (which can now be very, very close to human voices).
comment in response to
post
Windows already has AI in everything. What're you waiting for?
comment in response to
post
There's like a bunch of work on teaching CodeLLMs from documentation only, but of course it's simply not as good as being able to throw thousands of natural questions and answers at it.
comment in response to
post
Dug around, it's prompt injection but not directly in Grok's system prompt (which people seem to leak trivially).
They have a separate model (or a separate Grok prompt) that does 'Post Analysis' and injects it into Grok's system prompt. In this case the Post Analysis is where it's all gaga over SA.
comment in response to
post
Creating NGI while calling out AGI is an awesome way to go about it, ahaha
Congratulations, again, to the new parents!
comment in response to
post
Ooh, so it's an academic version of a silent book club.
comment in response to
post
At least there's a silver lining: there probably won't be as many submissions as ACL, so arXiv isn't going to get nuked on July 17th and people won't eat rejections for that.
comment in response to
post
It's true what they say: no work gets done on planes, not even if you successfully muster the motivation to skip your twelfth re-watch of Shrek 2.
comment in response to
post
Every South and South-East Asian has to be crying right now, saying "you had to study this?"
I've had good luck even in the US by just giving any approaching car a death glare.
comment in response to
post
It would be pretty cool to run another set of interviews today, given how different the research climate is!
comment in response to
post
Yeah, I'm seeing this as yet another case of LLMs-as-time-savers.
What's the difference between a human:
- doing research so they can lie accurately;
- giving an LLM a post containing lies and asking it to edit;
- asking an LLM to write the post entirely while asking it to lie in the prompt?
comment in response to
post
I'm positive supervised clustering and triplet CLIP are a thing, just off the top of my head, so not all the gaps noted may be gaps, but it's still interesting to see a taxonomy!
comment in response to
post
alternatively, you're really good at predicting the high-entropy state of the far, far future
comment in response to
post
That's actually standard for the kind of human study where you don't get informed consent because the process of obtaining consent would affect the result, at least under US IRB requirements. You contact participants afterward.
The problem was them doing the experiment in the first place.
comment in response to
post
Each time this happens some people manage to bounce to an alternative, but some people also stop posting entirely. That's basic social media/content creation mechanics.
Yes, there are still many great people posting on Twitter (unfortunately), but it'll never be as it used to be.
comment in response to
post
Frankly, I feel like the graph would look fairly similar across all social media, not just Bsky. Obviously, Bsky is the only one we actually have data for.
If there was ever a time for a lot of people to say "fuck this" and detach, it's now.
comment in response to
post
What do you even mean by 'correct visa'?
An H-1B can take months to process. A J-1 is a perfectly appropriate visa for someone just shifting from a postdoc to faculty.
All fascist regimes target intellectuals and academics first anyway, and international scholars are easier targets. That's all it is.
comment in response to
post
Most likely he was still on an F-1 OPT or J-1 visa rather than an H-1B visa, and had some minor infraction or charge on record, like a speeding ticket.
apnews.com/article/f1-v...
comment in response to
post
Bureaucratic violence is a nice word for all the fucking shit German authorities put non-German residents and immigrants through. A belated congratulations on getting it done with!
comment in response to
post
no wait this makes sense
Anthropic releases report, OpenAI realizes they're missing out on market research, they push the promo to get data, Anthropic responds to the promo purely for business reasons
comment in response to
post
This is right on the heels of the Anthropic Education Report, so that's also noteworthy considering the timing.
comment in response to
post
at one point a few years ago I almost talked myself into applying wholesale for PhDs in a topic I was barely interested in on a technical level, just because it was chock full of low-hanging fruit (like, 3-4 years' worth of straightforward projects)
comment in response to
post
No harm in it! I put mine first thing at the start of my CV so that anyone reading it keeps the context of my research interests in mind when looking at my experience and skills.
comment in response to
post
Quoting the PC comment:
> Per ICLR Reviewer Guidelines, the COLM 2024 paper is within the grace period of not requiring a citation (published after July 1st, 2024)
comment in response to
post
The rejection was predicated on a violation of the double-blind review process (the AC had no way of knowing if the authors were familiar with the earlier paper without Schaeffer tattling *and* preprints are optional cites), so the PCs rightfully rejected the rejection.
comment in response to
post
Also, the only way the AC found out that the authors were familiar with the paper they refused to cite was that Schaeffer violated the authors' anonymity with his comment.
The fact that the AC's rejection was dependent on a violation of the double-blind review process makes it sketchy as hell.
comment in response to
post
There is an official comment. Looks like their criterion was that citing non-peer-reviewed papers is optional, and so the grounds for rejection were invalid.
openreview.net/forum?id=et5...
comment in response to
post
ITS SUSAN ZHANG WITH A STEEL CHAIR
twitter.com/suchenzang/s...
(sorry for the gross link, there's too much to screencap)
comment in response to
post
The ethical concerns being that Schaeffer and Gerstgrasser used Kempe's feedback as part of human verification without their knowledge or consent, and the quality concerns being that their results are completely different. Schaeffer denies the accusations and notes they were excluded from the discussion with the PCs.
comment in response to
post
The "neural" in the title probably caused them to grab it randomly. I doubt they even read even just the titles of half the papers cited, just threw them in there after a word search.
comment in response to
post
Wait am I missing something or is this basically saying it's more effort to publish almost everywhere else compared to ICLR?
Like, I'd have expected it to be more balanced around the 1.0 value lol
comment in response to
post
'evidence' isn't really a word they understand, though