mathisob.bsky.social
Working at http://www.pubgen.ai for local newsrooms. Building http://askair.ai in public
34 posts
36 followers
172 following
Regular Contributor
Active Commenter
comment in response to
post
Yes, Cognito was the issue for me too; the limit is on the Lambda payload size for the full headers. You can try removing the parts of the cookies you don’t need in a CloudFront function, since it gets called before the Lambda.
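Roughly the shape that takes as a viewer-request CloudFront Function (the Cognito cookie prefix below is an assumption based on what Amplify-style auth usually sets; adjust the predicate to whatever is actually bloating your headers):

```ts
// Viewer-request CloudFront Function (cloudfront-js runtime, plain JS).
// Drops cookies the origin doesn't need before the request reaches Lambda.
function handler(event) {
    var request = event.request;
    var cookies = request.cookies;
    for (var name in cookies) {
        // Cognito/Amplify cookies typically start with this prefix;
        // this is an assumption, check your own Set-Cookie headers.
        if (name.indexOf('CognitoIdentityServiceProvider') === 0) {
            delete cookies[name];
        }
    }
    return request;
}
```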
comment in response to
post
If I remember correctly, I changed the distribution’s behaviour to stop forwarding cookies to the Lambda. The issue was that the cookie string was huge but only needed on the front end, so not forwarding it solved the problem. Let me know if that makes sense for you!
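For reference, a rough CDK sketch of that kind of behaviour change (construct names and the origin are hypothetical; my actual setup may have differed):

```ts
import * as cdk from 'aws-cdk-lib';
import * as cloudfront from 'aws-cdk-lib/aws-cloudfront';
import * as origins from 'aws-cdk-lib/aws-cloudfront-origins';
import { Construct } from 'constructs';

export class SiteStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // Origin request policy that forwards no cookies to the Lambda origin,
    // so the huge cookie string never counts against the invoke payload.
    const noCookies = new cloudfront.OriginRequestPolicy(this, 'NoCookies', {
      cookieBehavior: cloudfront.OriginRequestCookieBehavior.none(),
      queryStringBehavior: cloudfront.OriginRequestQueryStringBehavior.all(),
    });

    new cloudfront.Distribution(this, 'Dist', {
      defaultBehavior: {
        // 'example.com' stands in for the real Lambda origin.
        origin: new origins.HttpOrigin('example.com'),
        originRequestPolicy: noCookies,
      },
    });
  }
}
```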
comment in response to
post
You already have a few translations, but here is my take:
Découvrez comment créer des interactions charmantes et des touches pleines de magie grâce à CSS, JavaScript, SVG et Canvas.
Je vous dévoile toutes mes astuces ici !
(English: “Discover how to create charming interactions and touches full of magic with CSS, JavaScript, SVG and Canvas. I reveal all my tricks here!”)
I think this sounds more idiomatic.
comment in response to
post
It’s not really a refutation of your point, but I think LLMs are quite good at turning a small amount of text (bullet points, short poorly written sentences) into a more structured, complete text. Agreed that you can’t use the output directly without human intervention (proofreading at the very least).
comment in response to
post
I want Devin to open a PR any time an issue is detected in the SST console; with access to the codebase and the stack trace, it should have everything it needs. Do you plan on integrating something like that?
comment in response to
post
lol, I heard Anthropic specifically had to instruct Claude to avoid vim in their SWE-bench agent (heard on the great @latentspacepod.bsky.social)
comment in response to
post
If you ever see {"Message":"Request must be smaller than 6291456 bytes for the InvokeFunction operation"},
it might mean that your headers are larger than 10 KB! But you would never guess that by looking at the error.
comment in response to
post
Not sure if you want to add it to the scope, but auth for the published blogs was very tricky to implement.
comment in response to
post
Can you add multiplayer, on top of the autocomplete-everywhere, to the wish list?
comment in response to
post
Thank you! Exactly what I needed. Glad to see I’m personally responsible for 4 App Router websites in the top 1M (with passing CWV).
comment in response to
post
I found a website on Similarweb with a 1.1M rank and around 40k page views per month, so I think it should be in that ballpark, if we can trust their ranking.
comment in response to
post
That’s cool! Any idea how much traffic a website needs in order to be in the top 1M? I’m wondering if some of our sites show up in those stats.
comment in response to
post
I don’t know if you count it in the category of LLM-powered UI generation, but I let Cursor design my frontend and it’s mostly better than what I would have come up with by myself.
comment in response to
post
The switching to Chinese in the reasoning step is interesting. I wonder if training models to reason in different languages changes performance, since some languages use fewer tokens to express the same idea.
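A quick way to sanity-check that hypothesis (using js-tiktoken is an assumption about tooling; any tokenizer for your target model would do):

```ts
import { getEncoding } from 'js-tiktoken';

// Compare how many tokens the same idea costs in different languages.
// cl100k_base is the GPT-4-era encoding; swap in whatever your model uses.
const enc = getEncoding('cl100k_base');

const samples = {
  english: 'Let me think about this step by step.',
  chinese: '让我一步一步地思考这个问题。',
};

for (const [lang, text] of Object.entries(samples)) {
  console.log(lang, enc.encode(text).length, 'tokens');
}
```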
comment in response to
post
That’s so cool! If you choose to moderate your thread and hide a reply, will this also hide it from your blog? Also, the really cool but more complicated feature would be to allow users to post replies directly from the blog.
comment in response to
post
Really excited about this. Ideally there would be something as easy to integrate and use as Facebook’s comments social plugin (developers.facebook.com/docs/plugins...). I would be interested in helping build that if anyone wants to join.
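The data-fetching half of such a widget could look like this sketch against the public AT Protocol endpoint (the at:// URI is a placeholder, and error/moderation states are skipped):

```ts
// Fetch replies to a Bluesky post so they can be rendered as blog comments.
// Uses the unauthenticated public API; the at:// URI below is a placeholder.
const uri = 'at://did:plc:EXAMPLE/app.bsky.feed.post/POSTID';

const res = await fetch(
  'https://public.api.bsky.app/xrpc/app.bsky.feed.getPostThread?uri=' +
    encodeURIComponent(uri)
);
const { thread } = await res.json();

// Each reply carries the author profile and post record needed for display.
// (Real code should also handle blocked/not-found thread variants.)
for (const reply of thread.replies ?? []) {
  console.log(reply.post.author.handle, ':', reply.post.record.text);
}
```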
comment in response to
post
Yes, it seems like that’s the way most people are doing it. I need to look into it more.
comment in response to
post
Getting automatic, continuously updated evals out of your production logs is a challenge! I would be interested in learning how people have achieved that.
comment in response to
post
Oh right, so basically you’re saying you’re not replacing all control flow with LLMs, but something that would have been a very complex function (a tangle of control-flow logic) is now an LLM call?
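A toy illustration of the shape of that trade, with a hypothetical llm() helper standing in for whatever client you use:

```ts
// Hypothetical llm() helper standing in for any chat-completion client.
declare function llm(prompt: string): Promise<string>;

// Before: a tangle of control flow trying to route support tickets.
// function categorize(ticket: string): string {
//   if (/refund|charge/i.test(ticket)) { ... }
//   else if (/login|password/i.test(ticket)) { ... }
//   ...dozens more branches...
// }

// After: the tangled function collapses into a single call, while the
// surrounding control flow (routing, retries, escalation) stays ordinary code.
async function categorize(ticket: string): Promise<string> {
  return llm(
    'Classify this support ticket as one of: billing, auth, bug, other.\n' +
      ticket
  );
}
```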
comment in response to
post
Did you listen to the @latentspacepod.bsky.social podcast with the lindy.ai creator? He talks about this and how keeping the control flow outside the LLM improves accuracy and makes these agents usable for a lot of tasks that would otherwise be very complicated to describe solely with prompts.
comment in response to
post
sst.dev. I would rather use the same thing whether it’s “just a simple app” or it “turned out to be not so simple”.
comment in response to
post
Definitely more complex compared to just disabling the transition; I just remembered @kentcdodds.com doing that! x.com/kentcdodds/s...
comment in response to
post
What about using a theme cookie so the page can be server-rendered with the right value?
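A minimal sketch of the idea in an Express-style handler (the route, cookie name, and markup are all assumptions, not a specific framework’s API):

```ts
import express from 'express';

const app = express();

// Read the theme cookie on the server so the first paint already has the
// right class and there is no light/dark flash. 'theme' is an assumed name.
app.get('/', (req, res) => {
  const cookies = req.headers.cookie ?? '';
  const theme = /(?:^|;\s*)theme=dark/.test(cookies) ? 'dark' : 'light';
  res.send(`<!doctype html><html class="${theme}"><body>…</body></html>`);
});

app.listen(3000);
```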
comment in response to
post
Too bad they don’t do vision; for PDFs at least, I’ve found that nothing beats using a screenshot and a vision model, in addition to whatever text you have extracted from the PDF, to get accurate results.
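A rough sketch of that combination with the OpenAI SDK (the model name, page image, and extractor output are placeholders; any vision-capable model works the same way):

```ts
import OpenAI from 'openai';
import { readFileSync } from 'node:fs';

const openai = new OpenAI();

// Send both the extracted text and a screenshot of the page, so the model
// can recover layout (tables, columns) that plain text extraction loses.
// 'page1.png' and the model name are placeholders.
const image = readFileSync('page1.png').toString('base64');
const extractedText = '...text pulled from the PDF with your extractor...';

const completion = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [
    {
      role: 'user',
      content: [
        {
          type: 'text',
          text:
            `Extracted text:\n${extractedText}\n\n` +
            'Answer using both the text and the page image.',
        },
        {
          type: 'image_url',
          image_url: { url: `data:image/png;base64,${image}` },
        },
      ],
    },
  ],
});

console.log(completion.choices[0].message.content);
```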
comment in response to
post
How do you evaluate changes to your prompts / models without just testing random queries you know have issues and are trying to improve? Do people really have automated eval sets?
comment in response to
post
Yes! I also recently learned that a similar trick can be used to get better results with structured output mode, which I believe is not yet supported with o1 anyway: simonwillison.net/2024/Aug/7/b...
comment in response to
post
That’s basically what OpenAI’s o1 is, so if you use it you shouldn’t need to include it anymore.