brandondail.com
Staff Engineer at Discord working on Design Systems, Accessibility, Other Stuff™️ Collage & generative plotter artist 👋
116 posts
4,800 followers
265 following
Regular Contributor
Active Commenter
comment in response to
post
what are you measuring the contrast of? Ash is 9.49:1 for the chat text, which is lower than the others to approximately match the original dark theme (which is 9.36:1)
comment in response to
post
ouch, I’ll make sure that gets fixed.
comment in response to
post
for example, text-secondary here is defined in terms of a contrast ratio with a background token. This defines the color across all themes; no hand-picked colors! That bg token is also defined using a contrast constraint.
If anything in the constraint dep graph changes, it propagates automatically
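A hypothetical sketch of what a constraint-defined token could look like in config — the schema, field names, and values here are invented for illustration, not the actual Discord format:

```yaml
# Invented schema: a token is either a raw color or a contrast
# constraint against another token; constrained tokens get re-resolved
# whenever anything upstream in the dep graph changes.
background-base:
  value: "#1e1f22"
text-secondary:
  contrast: 7.0        # target ratio against the referenced token
  on: background-base
  scale: gray          # palette searched for the closest matching color
```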
comment in response to
post
did a human write this
comment in response to
post
In the meantime, as a reader: don’t trust any articles attributed to “staff,” and don’t trust any articles that don’t include quotes from scientists completely uninvolved in the original publication. Science is a gradual, meandering, back and forth process; sometimes we have to let it play out.
comment in response to
post
Which brings us to the scicomm point: we gotta get better at distinguishing between “here’s someone’s speculative idea that they just published” and “here’s a new discovery.” This is the former.
comment in response to
post
details pls
comment in response to
post
he knows if you’ve been bad or good so be good for goodness sake
comment in response to
post
`ps aux | grep rg` showed that rg was being called with a pattern looking for tailwind config files
comment in response to
post
so personally I see “LLMs can’t answer questions” as a technically correct answer but one that lacks nuance, because in practice they can absolutely be used to find answers if you know how to dodge the nonsense, just like with the garbage island Google has created with its ads/SEO search empire
comment in response to
post
both search engines and LLMs are tools that frequently present questionable statements as truth and both require a similar level of skepticism; both have incentives that aren’t providing accurate answers, and both can be useful tools for finding accurate answers if you know how to use them
comment in response to
post
I’d argue that if you have enough media literacy to be critical of online sources from search engines, you probably have equal skepticism with LLM answers; the broader problem is that most people don’t have that and will trust an LLM answer just as much as the first SEO-optimized garbage website
comment in response to
post
It’s just WCAG, but each token has a hand-picked contrast ratio, so you can specify a value that has the right apparent contrast based on the color type. So border-primary might be a subtle value like 1.1 (minimum contrast only enforced where WCAG requires it) whereas header-primary would be 6+
comment in response to
post
then we have a sync pipeline that pushes all this to Figma. We also built out a Figma plugin so you can do the same in Figma; code is always source of truth but it’s useful for design explorations
comment in response to
post
will try to write up a blog post when we’re done, but the tl;dr is: every color token is defined in code in a YAML file and can be defined as a contrast ratio (e.g. text-muted as 4.5 on background-base); then we build a dep graph and use colorjs.io to find the closest color in the scale it’s using
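A minimal sketch of that resolution step, using plain WCAG 2.x contrast math and an invented gray scale (the real pipeline uses YAML definitions and colorjs.io, so treat the names and values here as assumptions):

```typescript
type RGB = [number, number, number];

// WCAG 2.x relative luminance
function luminance([r, g, b]: RGB): number {
  const lin = (c: number) => {
    const s = c / 255;
    return s <= 0.03928 ? s / 12.92 : ((s + 0.055) / 1.055) ** 2.4;
  };
  return 0.2126 * lin(r) + 0.7152 * lin(g) + 0.0722 * lin(b);
}

// WCAG contrast ratio, always >= 1
function contrastRatio(a: RGB, b: RGB): number {
  const [hi, lo] = [luminance(a), luminance(b)].sort((x, y) => y - x);
  return (hi + 0.05) / (lo + 0.05);
}

// Resolve a constraint-defined token: pick the color in `scale` whose
// contrast against `bg` is closest to the target ratio.
function resolveToken(bg: RGB, target: number, scale: RGB[]): RGB {
  return scale.reduce((best, c) =>
    Math.abs(contrastRatio(c, bg) - target) <
    Math.abs(contrastRatio(best, bg) - target) ? c : best);
}

// Hypothetical gray scale; e.g. text-muted defined as 4.5 on background-base
const grays: RGB[] = [
  [250, 250, 250], [160, 160, 160], [118, 118, 118], [90, 90, 90], [40, 40, 40],
];
const backgroundBase: RGB = [255, 255, 255];
const textMuted = resolveToken(backgroundBase, 4.5, grays); // picks [118, 118, 118]
```

On a white base, this resolves text-muted to the familiar ~4.54:1 gray (#767676), which is the kind of answer a closest-color search in a scale produces.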
comment in response to
post
Enforcing contrast requirements also becomes trivial. You can specify a minimum contrast ratio for text and ensure that no new text tokens can be created that don't meet your contrast requirements on some set of background colors. Designers no longer have to manually check contrast ratios everywhere
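The enforcement idea can be sketched as a build-time check; `assertTextToken` and the 4.5:1 default are invented for illustration, not an actual API:

```typescript
type RGB = [number, number, number];

const lum = ([r, g, b]: RGB) =>
  [r, g, b]
    .map((c) => c / 255)
    .map((s) => (s <= 0.03928 ? s / 12.92 : ((s + 0.055) / 1.055) ** 2.4))
    .reduce((acc, v, i) => acc + v * [0.2126, 0.7152, 0.0722][i], 0);

const ratio = (a: RGB, b: RGB) => {
  const [hi, lo] = [lum(a), lum(b)].sort((x, y) => y - x);
  return (hi + 0.05) / (lo + 0.05);
};

// Reject any new text token that falls below the minimum contrast on
// every background it must be legible against; throwing at build time
// replaces manual contrast checking by designers.
function assertTextToken(
  name: string, color: RGB, backgrounds: RGB[], min = 4.5,
): void {
  for (const bg of backgrounds) {
    const r = ratio(color, bg);
    if (r < min) {
      throw new Error(
        `${name}: ${r.toFixed(2)}:1 on rgb(${bg.join(",")}) is below ${min}:1`);
    }
  }
}
```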
comment in response to
post
It makes iterating on colors system-wide trivial; all you have to do is change the colors of the root token in your graph and that automatically propagates to all other color tokens and ensures they still meet your contrast requirements. Exploring a new theme goes from an O(tokens) task to O(1)
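The propagation order can be sketched with a toy dependency tree (token names and edges hypothetical): re-resolving tokens in this order after a root change updates every derived token exactly once.

```typescript
// token -> tokens defined in terms of it (a tree here, for simplicity)
type Graph = Map<string, string[]>;

// Depth-first preorder from the root: each token appears after the
// token it's constrained against, so resolving in this order is safe.
function topoOrder(graph: Graph, root: string): string[] {
  const order: string[] = [];
  const seen = new Set<string>();
  const visit = (node: string) => {
    if (seen.has(node)) return;
    seen.add(node);
    order.push(node);
    for (const dep of graph.get(node) ?? []) visit(dep);
  };
  visit(root);
  return order;
}

const deps: Graph = new Map([
  ["background-base", ["text-muted", "border-primary"]],
  ["border-primary", ["border-strong"]],
]);
// topoOrder(deps, "background-base")
// -> ["background-base", "text-muted", "border-primary", "border-strong"]
```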
comment in response to
post
For one, you can easily visualize the relationships like in the generated graph above. This is just a quick and simple graphviz output but you can get fancy and visualize this graph in interesting ways that can help you get insight into how your system is structured.
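A tiny version of that quick graphviz output: dump the token dependency edges as DOT, which `dot -Tsvg` can render. The edges here are invented for illustration.

```typescript
// Hypothetical token dependency edges
const edges: [string, string][] = [
  ["background-base", "text-muted"],
  ["background-base", "border-primary"],
  ["border-primary", "border-strong"],
];

// Emit a DOT digraph, one edge per dependency
function toDot(edges: [string, string][]): string {
  const body = edges.map(([from, to]) => `  "${from}" -> "${to}";`).join("\n");
  return `digraph tokens {\n${body}\n}`;
}

console.log(toDot(edges));
```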
comment in response to
post
If you want a really good deep dive into testing with screen readers, I strongly advise reading this guide by @sarasoueidan.com
www.sarasoueidan.com/blog/testing...
#a11y #screenReaders #accessibility
comment in response to
post
We did all the big ones over four days but this day was Magic Kingdom all day and then Hollywood Studios until closing!
comment in response to
post
I don’t think it’s worth it tbh, I’d highly recommend just setting up an image pipeline where it generates .png files for mobile from the source vector files.
we’ve got something like that which pulls straight from Figma and it’s made managing icons a lot easier
comment in response to
post
Being able to re-parent an iframe without losing state is going to be huge
comment in response to
post
idk enough about the runtime but I feel like serializing __closure isn’t the only constant cost to executing a worklet? so even if it only gets serialized once, it’s still at least reading the serialized prop off of this.__closure each time? 🤷‍♂️ wonder if this happens with other fns like withDelay
comment in response to
post
Is the difference as large in prod builds? I guess I would expect that the DEV-specific checks reanimated does would make a big difference for something like withSpring being closed over 🤔 but that’s pretty wild if it’s the same in prod
comment in response to
post
I wrote an internal command to run this plugin on a single module a while back to debug this kind of stuff. Here’s the output with and without that assignment. Looks like when it generates that __closure value it doesn’t know withSpring isn’t reachable.
comment in response to
post
the babel plugin does a looooot of stuff to make those worklets work, I’d guess that just having the assignment is enough to cause it to include some worklet overhead code that it’s not currently smart enough to DCE
comment in response to
post
hear me out, what if we made it even taller
comment in response to
post
Last one I promise