mbhunzaker.bsky.social
Researcher just doing her best to build toward kinder, healthier online spaces |
Currently: Staff Researcher @ New_ Public |
Previously: Safety/Integrity Research @ Facebook, Twitter x Birdwatch, Google; Sociology @ NYU, Duke
200 posts
1,323 followers
324 following
comment in response to
post
My opinion may be a bit biased, but it's a great opportunity for impact on a really timely project in the US today! (Also a great opportunity to join a super collaborative, kind team.)
Please share broadly with folks who you think might be interested and a good fit!
comment in response to
post
See the JD for more info, but some key points. The role is:
- Mixed Methods (qual leaning, ideally w solid quant survey design & analysis experience)
- Highly cross-functional (working closely w design/prod/eng to develop & implement recs)
- Fully remote (US or CA) in a fully remote org
comment in response to
post
- The Republican representatives for my area? (Is there any way that's actually productive?)
- Other elected Dems? (my understanding is this isn't encouraged)
- Dems at the state level?
- Something/someone else?
I tried to search for an answer, but given the state of search these days...
comment in response to
post
I'm not sure how to interpret "is hoped". What was initially planned/hoped, i.e., that it would be helpful alongside other interventions to reduce spread? Or what MZ and Musk hope, i.e., that it is enough alone? I don't think anyone here is advocating the latter.
comment in response to
post
*not so much
comment in response to
post
And hard to implement with a skeleton staff of mostly ML folks since the takeover. My understanding from following along is that model and matching-type improvements happen, but not so much the UX improvements that were planned.
bsky.app/profile/mbhu...
comment in response to
post
Hii! We need bluesky notes I guess.
Hope you are doing well; I know it's probably a tough week for you also. Sending solidarity!
comment in response to
post
There's plenty that's actually wrong about relying on just this approach and the way FB's likely to implement it; wish folks could focus on that vs dragging the program for inaccurate reasons. (Also frustrating to have journos invested in transparency routinely ignore it. Sorry to rant in your mentions!)
comment in response to
post
I think this is a bad plan on FB's part, but this description is just verifiably inaccurate. Contributors don't begin with note-writing ability - communitynotes.x.com/guide/en/con.... And the goal is not agreement; the page linked literally states it's to identify notes helpful to different POVs.
comment in response to
post
*The guide and public data/code were maintained from the start of the pilot (another really unique aspect of this project, and to the current team's credit they are still updated), but we made a big push of updates in the weeks just ahead of the takeover.
comment in response to
post
It occurs to me that it might have been helpful to share those things here for interested folks:
- (1) The paper describing methods & early efficacy evals - arxiv.org/abs/2210.15723
- (2) The public guide detailing features, methods, code, and open data - communitynotes.x.com/guide/en/abo...
comment in response to
post
Oh wow, I'd somehow forgotten it was *literally* the night before the takeover that we submitted it.
comment in response to
post
Oh! A key piece of context I forgot here: of ~18 team members at the end, we had 3 full-time PhD social scientists. This ratio is absolutely unheard of (usually 1 is split over multiple projects), and speaks to the project's unique dedication to grappling with the social nature of misinfo & UX.
comment in response to
post
We RUSHED to get our materials published ahead of the sale bc we didn't know what the program's fate would be. I hope folks take advantage of them and try again with better intent, under more favorable circumstances.
comment in response to
post
What would have happened if the roadmap could have been carried out is a counterfactual we can't know. I hope that doesn't prevent someone from one day trying again in good faith.
comment in response to
post
We left with a huge roadmap of known updates we needed to improve coverage. We INTENTIONALLY rolled out with low coverage to mitigate the risk of false positives early on. We knew this wasn't the way long term.
comment in response to
post
(4) That CN is not working well now re: coverage does not mean a community labeling/moderation program cannot work. The resource and support rug was pulled out from under BW JUST AS we were rolling out to the US.
comment in response to
post
Moreover, Twitter was suited to this approach in ways that Meta properties just aren't: people truly valued the platform/their communities & were motivated to contribute to give back, there were many expert users who could provide context in their areas, and users generally were more tech savvy.
comment in response to
post
"Community" is a misnomer there. The raters were paid labelers via a third-party contractor. The idea of expanding the program to users was floated repeatedly. There was NEVER appetite or funding to take the time to address known risks to make a community approach work.
comment in response to
post
(3) Meta is fundamentally incapable of making a program like this work. Way back in the day (~2018), my first project there was "Community Review", a program intended to supplement 3PFC (third-party fact-checking) misinformation ratings with lay-person ratings.
about.fb.com/news/2019/12...
comment in response to
post
This is to say, CN is not getting any sort of "acceleration". Instead, it is PROPPED UP BY years of careful and diligent work by a team that's mostly no longer there, without which it would have fallen apart long ago.
comment in response to
post
After Musk took over, over half of the team quit or were laid off. I've not heard of any substantial subsequent expansion. It's possible, but given Musk's attitude towards research, I don't imagine that's a part of it.
comment in response to
post
2 years may seem short to academic folks, but realize, for contrast, that MANY products are launched based on interviews with 6-10 people (*often ignored/discounted), and maybe an A/B test or two.
comment in response to
post
*aside - I have no idea if any of the guardrails still exist. It seems unlikely that the expert-rating evals continue in the current context, or that survey measures continue without researchers, but I have no info on this.
comment in response to
post
We also had leeway to take time & resources to get it right (though of course it was never perfect). We spent 2 years carefully testing, identifying risks & mitigations, setting up monitoring and guardrails*, and slowly expanding as we gained confidence.
communitynotes.x.com/guide/en/und...
comment in response to
post
We were a team of ~8-18 people at different points, mostly fully dedicated to this project. Most folks were super senior & top in their area, recruited directly to work on this project.
comment in response to
post
BW was made possible as a program by EXTREMELY unique development circumstances. We were an incubator program, and had resources, focus, and runway unlike anything I've ever seen anywhere in tech.
comment in response to
post
(2) It is BANANAS and absolutely sloppy reporting to say Musk "accelerated" CN as the NYT does. Reliance on the program? Sure. Was this met with any increase in resources? Not that I've heard. Rather, he pulled the rug out from under the program as it launched.