avikdey.bsky.social
Whatever interests me, but mostly Data, ML, OSS & Society. LLM: Approximately Generated Illusions. AGI has been and continues to be perpetually a decade away. Shadow self of https://www.linkedin.com/in/avik-dey (twitter: @AvikonHadoop -> @_avik_dey_)
181 posts 94 followers 148 following

Seriously Mark? When did you become this naive? Tweeting about Ukraine's mineral potential without nuance? Ignoring geopolitics, environmental impacts, and the multifaceted benefits to Ukraine and the world? Also, today China is the biggest refiner, but a decade from now, who? The incentive is right here.

Very true and laudable: “As part of the open-source community, we believe that every line shared becomes collective momentum that accelerates the journey”; that holds if, in the end, a runnable system can be recomposed from SCRATCH and hosted as a service from open-sourced bits ONLY. Otherwise, it's just more hype, OpenAI style.

Why do AI employees believe they are 'contributing to humanity'? Either they are persuaded by AI CEOs who use that narrative to justify unethical practices, and they lack the experience and perspective to know better. Or they are just in it to collect the paycheck. It’s really that simple.

Dude should quit the hyping while he still has fanboys left.

Better late than never.

Professor, this mainly affects a small segment of the public, those immersed in the relentless AI hype. But the deeper issue is the decline of our education systems, helped along by social media echo chambers, which has eroded critical thinking skills among the educated and uneducated alike.

Wikipedia is rolling out in the US anonymity features piloted in countries with authoritarian governments, and is making a change to not show editor IP addresses, in response to a global "increase in threats" from Elon Musk, the Heritage Foundation, and governments www.404media.co/wikipedia-pr...

Did Microsoft share this with their 'shark-to-whale compute will exponentially scale LLMs' exec?

Feels like this is cashing out of TSLA and into AI because xAI isn't sufficient for the task, and he knows he's on the clock with TSLA. *ELON MUSK-LED GROUP MAKES $97.4B BID FOR CONTROL OF OPENAI: WSJ

LLMs have hit their limits, leading to post-training, rules-based patches to curb hallucinations. But calling that neurosymbolic AI is a huge stretch. True neurosymbolic systems must learn and apply symbolic constraints during training and inference. Slapping on buzzwords won’t solve the hard problems.
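A toy sketch (not any vendor's implementation, all names here are illustrative) of the distinction the post draws: a post-hoc patch filters outputs after generation, while a neurosymbolic-style decoder enforces the symbolic rule during inference, pruning the search space before a candidate is ever chosen.

```python
# Constraint: outputs must satisfy a hard symbolic rule (here: be even).

def constraint(x: int) -> bool:
    """Symbolic rule the output must satisfy."""
    return x % 2 == 0

def post_hoc_patch(scored_candidates):
    """Post-training patch: generate everything first, filter afterwards."""
    return [x for x in scored_candidates if constraint(x)]

def constrained_decode(scored_candidates):
    """Neurosymbolic-style decoding: the rule prunes infeasible candidates
    *before* the highest-scoring option is selected."""
    feasible = {x: s for x, s in scored_candidates.items() if constraint(x)}
    return max(feasible, key=feasible.get) if feasible else None

# Candidate outputs with model scores (toy data).
scores = {3: 0.9, 4: 0.5, 7: 0.8, 8: 0.6}
print(post_hoc_patch(scores))      # -> [4, 8]
print(constrained_decode(scores))  # -> 8
```

The unconstrained argmax would pick 3 (score 0.9) and violate the rule; the constrained decoder never considers it.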

The chaos behind shutting down agencies is a facade to distract from the real mission: steal our data and hand it over to enemies foreign and domestic. The amount of data already stolen will have a devastating effect on both our internal and foreign policies for decades. Yes, decades.

It’s pretty clear the current gen of AI, aka good old ML minus the VC funding, won’t get us to general intelligence. But also be wary of folks who profess to know what will. Anybody who knew the answer wouldn’t be telling; they’d be building. “What I cannot create, I do not understand.”

Great reporting. What Musk doesn’t get is that the code is the easy part. It’s the domain knowledge that’s hard and takes years to master. Every single one of his code rewrites will fail because of this.

Why is it that some women have more cojones than most men?

Received a letter from POTUS today purporting to remove me as Commissioner and Chair of the FEC. There's a legal way to replace FEC commissioners; this isn't it. I've been so fortunate to serve the American people and stir up some good trouble along the way. That's not changing anytime soon.

AGI is obsolete in the funding circles. ASI for the win now!

While Gary has indeed been advocating neurosymbolic AI forever, AWS’s Automated Reasoning seems to be more akin to a rules engine. Encoding such rules for non-mathematical problems is nearly impossible, and even if it were possible, the combinatorial explosion would be computationally infeasible.

Top reasons why incompetent and insecure leaders like hiring the young and inexperienced to do their bidding: 1. They are impressionable and eager to please. 2. They have limited prior experience to serve as a reference point. The result is unquestioned compliance, the kind these leaders like best.

@wired.com: “6 young men—all apparently between the ages of 19 and 24 who have little to no government experience are now playing critical roles in Musk’s Department of Government Efficiency (DOGE)” & are heading up Musk’s hostile takeover of our government www.wired.com/story/elon-m...

NEW: @wired.com built a tool to monitor 1,300+ federal .gov websites, revealing that entire sites are going dark as we speak. @telliotter.bsky.social, @dmehro.bsky.social (who built the tool), and @dell.bsky.social report: www.wired.com/story/us-gov...

NEW: US government websites are disappearing in real time, including sites for the US Agency for International Development, foreign assistance, neglected diseases, and children in adversity. We analyzed more than 1000 .gov websites to track what's been taken down—and what's next. wrd.cm/4aFMs5A

When Trump paused federal funding to all grants and loans, it plunged scientific research into chaos. Though the freeze was rescinded for some sectors, it's still largely in place for universities and research institutions. The damage will be lasting.

Told you, OpenAI fans: it’s not about the model, it’s always the DATA. Average models with massive, quality data outperform the best top-tier model with sparse data, every single time. Funny how OpenAI is crying about DeepSeek 'stealing' data they themselves 'borrowed freely'. Says it all, doesn’t it?

Great observation! One other thing I’ve long admired about @garymarcus.bsky.social is his ability to explain complex ideas without relying on jargon and pretentious language. It speaks to his mastery of the subject that he is always comfortable simplifying, unlike those named below. Truly admirable.

Talking? More like shaking. Told you guys over and over: 1) Scaling compute and data ain’t going to get you to AGI. 2) The model is never the moat; it’s always the data, and you lose that moat at large scale because of data redundancies. But hey, listening is hard when you are busy perpetually hyping.

Open weights and open access do not make the code open source. Meta’s mouthpieces saying so over and over doesn’t make it so.

Trump demands US consumers pay 25% more for coffee and other Colombian imports because he is big mad at them.

This has been evident since the early days of LLMs with the trifecta of AGI hype, Worldcoin and biometrics harvesting all led by Sam Altman. Any pushback against this quackery isn’t a leftist thing. It’s about defending science, ethics and ultimately humanity’s future.

AI needs to prioritize foundational reliability, accuracy, consistency, and ethical alignment before pushing agents. Agents are a great abstraction, but their performance relies 100% on the core. Like a sports car with a faulty engine: no amount of sleek lines will compensate for a flawed foundation.

Don’t have time to do technical diligence on DeepSeek’s latest, but having tried a few private reasoning prompts: they might intend to open source AGI, but the models are far from being able to do rudimentary reasoning, forget AGI. Just because the monologue uses “Wait, no …” a lot doesn’t mean it reasoned.

> Doing well on hard benchmarks that you had prior access to is still impressive--but doesn't quite scream "AGI Tomorrow." No, it’s not impressive. OpenAI had access to the FrontierMath problems & solutions. Approximate lookup by models tuned on the benchmark is not impressive. Static benchmarks are useless for evaluating AI.

Every time OpenAI has their next Theranos moment I don’t see a need to reassess. It’s been clear for a while now that this is how it’s going to go and I continue to have confidence in my assessment.

I appreciate Google researching alternatives to Transformers and linear recurrent architectures, but a fundamental weakness remains: reliance on stochastic optimization and probabilistic mechanisms, making them better suited for creative tasks than those requiring precision. arxiv.org/pdf/2501.00663

Somebody needs to tell him that all schedulers do async background processing, and have since the 1970s.
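To make the point concrete, a minimal sketch of timer-based async background processing, the pattern cron-style schedulers have provided for decades (names and the toy job are illustrative):

```python
import threading
import time

results = []

def schedule(delay_s, job, *args):
    """Run `job` asynchronously after `delay_s` seconds, fire-and-forget,
    on a background thread instead of the caller's thread."""
    t = threading.Timer(delay_s, job, args=args)
    t.daemon = True  # don't block interpreter shutdown
    t.start()
    return t

# The call returns immediately; the job runs in the background later.
schedule(0.1, lambda: results.append("background job ran"))
print("main thread keeps going")
time.sleep(0.3)  # give the demo time to show the background result
print(results)   # -> ['background job ran']
```

Real schedulers add queues, persistence, and retries, but the core idea, work detached from the caller's control flow, is exactly this old.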

The predictability is astonishing.