jkirshbaum.bsky.social
Helping organizations build with, learn about, and plan for generative AI. Writing at https://handshakefyi.substack.com.
16 posts · 4 followers · 113 following

Step into 2035 with WhatIFpedia.org, IFTF's new world-building platform that transforms how we imagine possible futures. This collaborative tool creates interconnected visions of tomorrow that build on each other, powered by IFTF's strategic forecasting expertise and AI. Built by us at Handshake.fyi

How can we make a static text come alive and dynamically resonate with the world? How is truth a process rather than just a set of statements? handshakefyi.substack.com/p/collaborat...

DeepSeek, a private Chinese company, just open-sourced R1, a model with performance comparable to OpenAI's o1. Just thought you might want to know.

If you want to tell me airplanes aren't flying because they aren't flapping their wings, I'm fine with that. But how do we talk about the fact that they're up in the air?

We play around with a lot of interesting AI demos and experiments. Most of them are open source, many are pushing the boundaries of the possible w/ genAI, and a lot are just fun. We’ve put together a list of some of our favorites in this month’s Substack. handshakefyi.substack.com?utm_source=b...

I've said it before: people ignore the hard part of RAG -- retrieval. Search is an old problem, and still a hard one. The title of this article from Elicit puts it perfectly: Build a search engine, not a vector DB. blog.elicit.com/search-vs-ve...
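To make "search engine, not vector DB" concrete, here's a toy hybrid-retrieval sketch. The hashed embed(), the crude keyword score, and the 50/50 blend are all illustrative placeholders, not anything from the Elicit post -- in practice you'd reach for BM25, a real embedding model, and a reranker.

```python
import math
from collections import Counter

def embed(text: str, dim: int = 64) -> list[float]:
    # Placeholder embedding: a hashed bag-of-words vector. Swap in a real embedding model/API.
    vec = [0.0] * dim
    for token in text.lower().split():
        vec[hash(token) % dim] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

def keyword_score(query: str, doc: str) -> float:
    # Crude lexical-overlap stand-in for BM25/TF-IDF.
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    return float(sum(min(q[t], d[t]) for t in q))

def hybrid_search(query: str, docs: list[str], alpha: float = 0.5, k: int = 3):
    # Blend lexical and semantic scores instead of relying on vector similarity alone.
    q_vec = embed(query)
    scored = [
        (alpha * keyword_score(query, doc) + (1 - alpha) * cosine(q_vec, embed(doc)), doc)
        for doc in docs
    ]
    return sorted(scored, reverse=True)[:k]

print(hybrid_search("how do planes fly", ["planes fly because of lift", "recipes for samba music"]))
```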

Wow, suno.ai is pretty mind-blowing for music generation. Creates lyrics that are on-rhythm and...actually good. Here is a link to "Samba of Silicon" from the prompt "A song in the style of Gil Gilberto about the future of AI." app.suno.ai/song/27d1073...

Don't get comfortable with anything that we see right now. Even if there were no new models made at all, the tooling and methods for controlling them would continue to advance dramatically. We're going to have new models AND a better understanding of how they work, so expect continued volatility.

RAG and ReAct+CoT agents are the best LLM patterns we have right now, but they're unreliable and often ineffective. They'll likely remain the right method for some use cases, but alongside a much larger toolbelt of options. They're the very beginning of the story, not the end.
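For concreteness, here's a minimal sketch of the ReAct-style loop in question. The `llm` callable, the `tools` dict, and the prompt format are illustrative assumptions, and the regex parsing plus the step cap are exactly where the unreliability tends to show up.

```python
import re

# Hypothetical stand-ins: `llm` is any text-completion callable; `tools` maps tool names to functions.
def react_agent(question: str, llm, tools: dict, max_steps: int = 5) -> str:
    prompt = (
        "Interleave Thought / Action / Observation lines.\n"
        "Actions look like `Action: tool_name[input]`. End with `Final Answer: ...`.\n"
        f"Question: {question}\n"
    )
    for _ in range(max_steps):
        reply = llm(prompt)                      # chain-of-thought plus a proposed action
        prompt += reply + "\n"
        if "Final Answer:" in reply:
            return reply.split("Final Answer:", 1)[1].strip()
        match = re.search(r"Action:\s*(\w+)\[(.*?)\]", reply)
        if not match:
            continue                             # unparseable output: one common failure mode
        name, arg = match.groups()
        tool = tools.get(name, lambda x: f"unknown tool: {name}")
        prompt += f"Observation: {tool(arg)}\n"  # feed the result back into the next turn
    return "No final answer within max_steps."
```

You'd call it with something like react_agent("What's 17*23?", llm=my_model_call, tools={"calc": safe_calc}), where my_model_call and safe_calc are whatever model wrapper and tool functions you already have.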

Griptape presents a different set of patterns, emphasizing switching between deterministic parts of an app backed by long-term memory and creative parts driven by LLMs. The chains approach emphasizes orchestration and memory in-prompt; Griptape emphasizes keeping everything out of the LLM prompt unless it's necessary.
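Here's a sketch of that split in plain Python (hypothetical names, not the actual Griptape API): deterministic work runs as ordinary code and lands in memory, and only the references the LLM actually needs get formatted into a prompt.

```python
from dataclasses import dataclass, field
from typing import Callable

# Pattern illustration only -- hypothetical names, not the actual Griptape API.
@dataclass
class OffPromptPipeline:
    llm: Callable[[str], str]                   # the creative, non-deterministic step
    memory: dict = field(default_factory=dict)  # long-term state lives outside the prompt

    def deterministic(self, key: str, fn: Callable, *args):
        # Predictable work stays in plain code; results go to memory, not into the prompt.
        self.memory[key] = fn(*args)
        return self.memory[key]

    def creative(self, template: str, **refs: str) -> str:
        # Only the pieces the LLM actually needs are pulled from memory into the prompt.
        prompt = template.format(**{k: self.memory[v] for k, v in refs.items()})
        return self.llm(prompt)
```

Usage would look like pipe.deterministic("orders", load_orders) followed by pipe.creative("Summarize: {orders}", orders="orders"), with load_orders standing in for whatever deterministic function you already have.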

There are already new alternatives emerging. In LLM development today, it seems like "chains" are the inevitable default way of thinking about LLM applications. Langchain and others have created huge communities leveraging this pattern. "Agents" are just "chains" with a loop around them.
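That "chains with a loop" claim is easy to show in a few lines. This is a toy sketch of the idea, not Langchain's API; the Step type and stop condition are placeholders.

```python
from typing import Callable

Step = Callable[[str], str]

# A "chain" is a fixed sequence of steps run once over some state...
def run_chain(steps: list[Step], state: str) -> str:
    for step in steps:
        state = step(state)
    return state

# ...and an "agent" is that same chain wrapped in a loop with a stop condition.
def run_agent(steps: list[Step], state: str, done: Callable[[str], bool], max_iters: int = 10) -> str:
    for _ in range(max_iters):
        state = run_chain(steps, state)
        if done(state):
            break
    return state
```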

Today's generative AI design and development patterns -- RAG, agents, chat interfaces, copilots, chains -- are just the beginning.

Google's Gemini Ultra vs. GPT-4: What's the buzz all about? Native text & image output – a first! Handles video, audio, and images, with a 32k context window. Will Google share the AI love with developers, or keep it locked up like PaLM? #GPT4 #Gemini #AI