First impressions of the new Amazon Nova LLMs (via a new llm-bedrock plugin) https://simonwillison.net/2024/Dec/4/amazon-nova/
The vibes are good with these ones - they're price and performance competitive with the Google Gemini family, which means they are _really_ inexpensive
Comments
Maybe we need a new FAANG acronym that covers OpenAI, Anthropic, Google, Meta and Amazon
I like GAMOA
(Seriously we need an update here: https://www.databricks.com/blog/author/the-mosaic-research-team)
Depends on upload speed more at that point.
If the past tokens determine the next token, why does the client need to send the entire conversation history for any followup query? Why can’t it “resume” from the last output token?
The challenge with that is memory: keeping conversation state on the server can be wasteful if the user never continues the conversation
You don't have to do that with the OpenAI "assistants" API, which maintains a copy of the existing conversation on their servers (they call that "persistent threads"): https://platform.openai.com/docs/assistants/overview
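The stateless pattern described above can be sketched in a few lines. This is a minimal illustration (no network calls, and `build_request` is a hypothetical helper, not part of any SDK): because the server keeps no state between requests, each follow-up payload must carry the entire conversation so far, in the OpenAI-style `messages` format.

```python
def build_request(history, user_message):
    """Build the payload for the next turn: the full prior history
    plus the new user message. The server sees everything each time."""
    messages = history + [{"role": "user", "content": user_message}]
    return {"model": "example-model", "messages": messages}


# Start with just a system prompt.
history = [{"role": "system", "content": "You are a helpful assistant."}]

# First turn: the payload holds the system prompt and one user message.
req1 = build_request(history, "What is the capital of France?")

# Pretend the server replied; append both turns to our local history.
history = req1["messages"] + [{"role": "assistant", "content": "Paris."}]

# Second turn: the payload now resends the whole conversation -
# system prompt, first question, first answer, and the new question.
req2 = build_request(history, "And its population?")
```

With a "persistent threads" style API the client would instead send only the new message plus a thread ID, and the server would reconstruct the history itself, which trades client bandwidth for server-side memory.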
A year ago the only GPT-4 class model available was GPT-4 itself, and it felt dangerous to me for that technology to only be available from a single vendor
It's most similar to Google Gemini in terms of price and capabilities
I'm thrilled to see Gemini getting competition on that front