jakelazaroff.com
nyc-based programmer and designer. alum @recursecenter.bsky.social. public transit enjoyer. thoughts on local-first, javascript frameworks, web components, css and other web minutiae.
🌐 https://jakelazaroff.com
974 posts
2,237 followers
396 following
comment in response to
post
so homomorphic encryption is one way to mitigate that tradeoff. another is for clients to agree on "chunks" of many updates which can then be compressed more efficiently, as in ink & switch's beelay
comment in response to
post
that works but there's a storage/bandwidth tradeoff: the sync server has no idea what's inside each update, so it can't compress them if a lot of updates accrue while a peer is offline.
the other option is to require both peers to be online simultaneously (which is really annoying in practice)
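roughly the tradeoff i mean, as a sketch (all names made up — this isn't beelay or any real sync protocol, just the shape of the problem):

```javascript
// A sync relay only sees opaque byte blobs, so it must store every update
// as-is. Peers, who *can* read the updates, can instead agree to compact a
// run of updates into one chunk before handing it to the relay.

class BlindRelay {
  constructor() { this.log = []; }          // opaque updates, never inspected
  push(update) { this.log.push(update); }
  pull() { return this.log; }
}

// A peer that understands the update format can merge many small updates
// (here, toy "set key to value" ops) into a single compacted chunk.
function compact(updates) {
  const state = new Map();
  for (const u of updates) state.set(u.key, u.value);  // last write wins
  return [...state].map(([key, value]) => ({ key, value }));
}

const relay = new BlindRelay();
for (let i = 0; i < 100; i++) relay.push({ key: "title", value: `draft ${i}` });

// The relay holds 100 opaque updates; a peer can compact them to 1.
console.log(relay.pull().length);          // 100
console.log(compact(relay.pull()).length); // 1
```

the relay paying for 100 blobs that a peer could have collapsed to 1 is the storage/bandwidth cost of end-to-end opacity.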
comment in response to
post
honestly i think they look kinda cute. like pixar's cars come to life
comment in response to
post
not sure where you live but in NYC we just got our first open gangway trains! the subway line right by my apartment has two of them and i get so excited whenever i get to ride one
comment in response to
post
just did that this morning 🤘🏻
comment in response to
post
fuck man, i'm sorry to hear that 😔
comment in response to
post
even if you think AI is harmful — setting aside that AI products, like drugs, vary widely — the fact is that people who use it do so because they perceive some benefit! the conversation needs to begin by acknowledging that experience, not by saying "who are you gonna believe, me or your lying eyes?"
comment in response to
post
but to the original point i think we need to acknowledge that even people who are ultimately harming themselves still perceive a short term benefit. imo "you may be gaining something but have you considered these other costs?" works better than "no, you aren't experiencing what you think you are"
comment in response to
post
fully on board with that. and to be more charitable to what you said before, it's admittedly a pretty flimsy defense to say "it's more complicated than just 'harmful' or 'not harmful', you just need to do X and not do Y or Z and then it's fine!"
comment in response to
post
sorry absolutely not trying to do that! i gave a more detailed answer in response to your other post but i'll just say here that it's not theoretical to me either, and the main reason is that i actively use chatgpt/claude very differently than the way in which your students seem to be using it
comment in response to
post
i don't outsource my thinking to it and i treat every answer just as suspiciously as i would treat any random information found on the internet. i make sure i read and fully understand everything it tells me, and i almost always refactor its output rather than copying it verbatim into my projects.
comment in response to
post
i can only speak for myself, but i primarily use it for programming help as a replacement for a question-and-answer forum called stack overflow. so it's a research and information retrieval tool for me (in a domain where a lot of the information it surfaces is easy to verify).
comment in response to
post
don't get me wrong, i am not denying that there are intrinsic harms! i'm just saying that we don't need to be reductive about the overall impact. going into detail about the ways in which a product's harms might negate or dominate its perceived benefits is exactly what i'm advocating for.
comment in response to
post
did people get into how they were using LLMs and why they regretted it? there are a lot of different ways you can use them for coding help, and in my experience they vary significantly in effectiveness or lack thereof
comment in response to
post
imo these reductive "beneficial" and "harmful" labels are counterproductive. general tools like chatgpt have many uses; using them in a particular way might even be beneficial along one axis (e.g. easier to get a degree) and harmful along another (e.g. long term atrophy of critical thinking skills).
comment in response to
post
put another way: the "AI is not useful, full stop" rhetoric is akin to abstinence-only sex education, or anti-drug scare campaigns. ultimately they end up backfiring when the most hardline and outlandish claims inevitably make contact with reality.
comment in response to
post
i should've said "what they see as the surest path". it's absolutely fair to say it's being sold to students without acknowledging the risks. but — and i think this is a key point — many critics refuse to acknowledge any benefits, which alienates people when they experience those benefits firsthand.
comment in response to
post
it is about credentials though, isn't it? for better or for worse we've created a system in which a piece of paper can gatekeep financial security. we shouldn't be surprised that students are taking the surest path they see to obtain it.
comment in response to
post
so as far as that goes i see LLMs as an incrementally better version of what we've had for the past 20 years
as far as blue sky thinking goes i am *very* interested in bret victor–style totally rethought versions of what programming could be. but i think that's a totally separate conversation
comment in response to
post
like ultimately the challenge as i see it is to take relevant information from the world and load it into my brain. no matter what that will involve 1. aggregating large amounts of knowledge in a searchable form and 2. allowing me to query in order to narrow that knowledge down to the relevant parts
comment in response to
post
the sticking point for me is that i *like* programming and problem solving. one reason agentic coding hasn't stuck for me is that it's managerial rather than creative. and more broadly i'm skeptical that a fundamentally different interface for research or information retrieval is even possible.
comment in response to
post
i am not a Computer Scientist so maybe this is ignorant. but i really like the "LLM pair programmer" model where i ask questions and think out loud in human language and it responds with answers and code snippets, and it's hard to imagine a better tool that takes a fundamentally different shape
comment in response to
post
i think building it would be fun! the problem is that then you need to maintain it indefinitely 🥲
comment in response to
post
can it sync to S3 or some other sort of cloud storage that i don't need to maintain myself?
comment in response to
post
does iCloud support the last requirement? i glanced over the CloudKit JS docs but i can't see a way to access the files directly, just databases.
thinking back on a talk i saw at local-first conf about how we should be trying to use flat files and well-documented formats 🙂
comment in response to
post
i realize this isn't a defense of LLMs per se. it just strikes me as reactionary to reminisce about the good ol' days — when people would signal their "informed consent" to being tracked and their data being sold in god knows how many ways by checking a box without reading the associated legal terms
comment in response to
post
as a thought experiment, how do you think it would affect stack overflow contributions if they explicitly told users "we will monetize what you just wrote and you will not see a cent of it" on the screens where users compose their questions and answers?
comment in response to
post
i mean, do they consent to a company making billions of dollars on the back of their unpaid labor? like, obviously they agreed to the terms of service. but i think the success of most companies that monetize user content is predicated on people *not* understanding their contributions in those terms.
comment in response to
post
okay but do we have an actual coherent definition of what "mass theft" entails? i use LLMs as a replacement for stack overflow — a business built on users' unpaid contributions which sold for billions to a private equity firm. why are we okay with stack overflow internalizing that value but not anthropic?
comment in response to
post
those damn coastal elites just can't stop making sweeping generalizations
comment in response to
post
i think doing an interview with the bulwark a week before the election is probably not smart, period. but this is not a fumble, it's a good answer that islamophobes are full on lying about
comment in response to
post
ah thanks for the correction
so this is actually… an NYPD officer specifically tasked with protecting a politician, standing by without intervening as he's kidnapped 🥴
comment in response to
post
like goddamn, what is the point of paying the NYPD $10b a year if they're just gonna roll over and let ICE abduct *their own officers* in *their own city*?
"reasonable" people called defunding "too radical" because we need police to fight crime. here's the crime. where are the cops fighting it?
comment in response to
post
point taken about commercial use (although i think in practice most companies will avoid GPL code like the plague)
hating GPL is the hot take i'm interested in! i don't love it either but on the surface it really does seem like it addresses a lot of what you care about
comment in response to
post
shorter (maybe better?) example without the DisposableStack indirection. ideally this would be handled under the hood by the framework tho
really even `Symbol.dispose` is unnecessary; could just be any method. but i like the svelte ethos that things should "feel" native (even if they're really not)
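roughly what i mean by "could just be any method" — a sketch with made-up names, not any real framework's API (falls back to a local symbol on runtimes without `Symbol.dispose`):

```javascript
// `Symbol.dispose` is just a well-known method name; a framework could call
// it on teardown without any `using` syntax at all.
const DISPOSE = Symbol.dispose ?? Symbol("dispose");

class Interval {
  constructor(fn, ms) { this.id = setInterval(fn, ms); }
  [DISPOSE]() { clearInterval(this.id); this.id = null; }
}

// Hypothetical framework hook: on unmount, dispose anything that knows how.
function unmount(resources) {
  for (const r of resources) r[DISPOSE]?.();
}

const timer = new Interval(() => {}, 1000);
unmount([timer]);
console.log(timer.id); // null
```

the upside of using the real `Symbol.dispose` anyway is that the same objects also work with `using` wherever that syntax is available.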
comment in response to
post
i'm not using the `using` keyword (you can't with class members apparently, that was the first thing i tried) so there's no automatic disposal — it happens manually by calling dispose() in componentWillUnmount.
i guess i don't even need the DisposableStack in this case but using it feels right to me.
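for context, the pattern i mean looks roughly like this — `MiniDisposableStack` is a tiny stand-in for the real `DisposableStack` (which only exists in newer runtimes), and the component names are hypothetical:

```javascript
// Minimal stand-in for DisposableStack: registered cleanups run in
// reverse (LIFO) order, like the real thing.
class MiniDisposableStack {
  #callbacks = [];
  defer(fn) { this.#callbacks.push(fn); }
  dispose() {
    while (this.#callbacks.length) this.#callbacks.pop()();
  }
}

class ClockComponent {
  #stack = new MiniDisposableStack();

  componentDidMount() {
    const id = setInterval(() => this.tick?.(), 1000);
    this.#stack.defer(() => clearInterval(id));
  }

  // `using` can't be applied to class members, so disposal is manual here.
  componentWillUnmount() {
    this.#stack.dispose();
  }
}

const c = new ClockComponent();
c.componentDidMount();
c.componentWillUnmount(); // timer cleaned up manually

// LIFO cleanup order demo
const order = [];
const s = new MiniDisposableStack();
s.defer(() => order.push("a"));
s.defer(() => order.push("b"));
s.dispose();
console.log(order.join(",")); // b,a
```

even with a single resource, the stack buys you a uniform place to register cleanups as the component grows.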
comment in response to
post
this took me like 10 minutes so i'm sure there are edge cases galore. the biggest caveat i see right now is that this only works for resources that are top-level members of the component class
comment in response to
post
neat! curious why you didn't choose to dual license with the GPL (or at least include a similar "viral" clause mandating derivative works must also use the same license). would be very simpatico with the idea of a "code commons"
comment in response to
post
hell yeah, this is exactly what i want out of `using`
comment in response to
post
yeah one approach i didn't explore in the article is using HE for only a subset of a document (or a subset of docs within a multi-doc system). not sure what you're thinking re: local-first account system but it's possible that you could use HE for that and some other solution for the "content"?
comment in response to
post
this post languished in the drafts for like six months after i realized the result wasn't what i was hoping for (those interactive circuit diagrams being an enormous pain to build didn't help either 😮‍💨).
thank you @pvh.ca for encouraging me to publish it anyway!