robp.bsky.social
Associate Professor of CS @ University of Maryland. Proud Rust advocate! I ♥ science & compiled, statically-typed programming languages! Views are my own.
1,309 posts 3,871 followers 515 following
Regular Contributor
Active Commenter
comment in response to post
An interesting philosophical follow-up here is "why?". Why is it that "natural" problems, like finding shortest paths or maximum matchings, are computable, while the vast majority of possible problems are not?
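One standard way to make the "vast majority" part precise is a counting argument; this gloss is mine, not something stated in the thread:

```latex
% There are only countably many programs, but uncountably many problems,
% so all but countably many problems have no algorithm at all.
\begin{align*}
  |\{\text{Turing machines}\}| &= \aleph_0
    && \text{(each machine is a finite string over a finite alphabet)} \\
  |\{\text{decision problems}\}| &= \left|2^{\{0,1\}^*}\right| = 2^{\aleph_0}
    && \text{(a problem is any subset of } \{0,1\}^* \text{)}
\end{align*}
```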
comment in response to post
Just saying, I wouldn't even buy a GPU with just 8GB of RAM now; I certainly wouldn't require a tool to build on such a machine (in 2025). Is it bloated? Probably. But 8GB is peanuts these days, and we should expect a bit more from modern hardware, IMO.
comment in response to post
8GB of RAM? Is this software from 2013? You could drop it from a phone benchmark ;P.
comment in response to post
What if you change the allocator?
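For concreteness, here's a minimal Rust sketch of what swapping the global allocator looks like; the choice of the mimalloc crate is my illustration, not something specified in the thread:

```rust
// Minimal sketch: route all heap allocations through an alternative
// allocator (mimalloc here, purely as an example) via #[global_allocator].
use mimalloc::MiMalloc;

#[global_allocator]
static GLOBAL: MiMalloc = MiMalloc;

fn main() {
    // Every Vec/String/Box allocation below now goes through mimalloc,
    // which can change both peak memory use and runtime behavior.
    let v: Vec<u64> = (0..1_000_000).collect();
    println!("allocated {} elements", v.len());
}
```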
comment in response to post
To give it a try, I formatted my recent manuscript with Typst and updated the PDF on medRxiv. It looks much better. A minor disappointment is that I need to accommodate big figures with the legend on the next page, but “#figure(placement: auto)” does not work well with “set block(breakable: true)”.
comment in response to post
context?
comment in response to post
& the existence of cargo & (currently) just one compiler means they will often be easier to install. On the library side, the existence of package management systems means that libraries written in R, Python, Rust, or Julia may often be easier (by language support) for others to use than those in e.g. C++. 3/3
comment in response to post
For example, there's a high bar to get into Bioconductor, so this often implies a certain level of user experience (that's not R-centric, but Bioconductor-specific). Likewise, e.g. Rust programs will be safer on average (by language choice) than C++ ones... 2/3
comment in response to post
I broadly agree with the statements you've made here. However, I just want to add that there is often a non-trivial correlation between the adoption of certain languages / ecosystems and other characteristics on this list. 1/3
comment in response to post
If I did, they’d have to kill me. But for that Luigi’s special Italian, it may be worth it … oh no, I’ve said too much!
comment in response to post
And the Google form for faculty lunch preferences?! So top secret that you don't want to even get me started!
comment in response to post
Michael, you have no idea how much the world wants to see my slide decks hosted on the University Box account. No idea! People would riot to get access to those (publicly posted) slides on suffix array searching and construction!
comment in response to post
I agree; it would be easy to implement, I think. Testing it out would be the challenging part :).
comment in response to post
It turns out that provenance tracking based on individual transcript IDs is hard. Currently, tximeta thinks of reference annotations in terms of "releases"/"checkpoints", not the provenance of each individual txp. Thus, we really want, at quantification time, to create the signature for the set.
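As a purely hypothetical illustration of "create the signature for the set": one way to get an order-invariant signature over transcript IDs, sketched in Rust with the sha2 and hex crates. The txome_signature helper is an invented name, not tximeta's actual mechanism:

```rust
use sha2::{Digest, Sha256};

// Hypothetical sketch: one signature for the *set* of transcripts,
// independent of input order, by hashing the canonically sorted IDs.
fn txome_signature(mut ids: Vec<String>) -> String {
    ids.sort(); // canonical order makes the digest order-invariant
    let mut hasher = Sha256::new();
    for id in &ids {
        hasher.update(id.as_bytes());
        hasher.update(b"\x1f"); // separator avoids ambiguous concatenation
    }
    hex::encode(hasher.finalize())
}

fn main() {
    // Same set, different order => same signature.
    let a = txome_signature(vec!["txp2".into(), "txp1".into()]);
    let b = txome_signature(vec!["txp1".into(), "txp2".into()]);
    assert_eq!(a, b);
    println!("{a}");
}
```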
comment in response to post
Personally, I also prefer "annotated" and "novel".
comment in response to post
Ultimately they get combined during indexing, after each has had its reference signature computed. However, considering the status of the transcript during assignment is very interesting --- perhaps even using multimapping between classes to trigger more careful realignment might be useful. 2/2
comment in response to post
This is a *really* interesting idea. The purpose of defining two classes now is just for provenance (we can expect to have a signature on file for the annotated set, but not the novel set). This makes metadata propagation easier in long-read analysis, where assembly is even more common. 1/2
comment in response to post
Sometimes, if you choose the right name, the (copyrighted) artwork already exists!
comment in response to post
Reminds me of an oldie but a goodie
comment in response to post
also, lol at moarfish ;P
comment in response to post
So the current implementation requires *at least* one of --known (currently --annotated) or --novel. You could have both or either (but not none). I think the idea is that in e.g. a non-reference organism, someone might do a completely de novo txome assembly & this would still work.
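A hedged sketch of how that "at least one of" constraint could be expressed with clap's ArgGroup in Rust; the flag names mirror the thread, but the command name and everything else here are my invention, not the tool's actual CLI code:

```rust
use clap::{Arg, ArgAction, ArgGroup, Command};

// Sketch: require at least one of --annotated / --novel, allow both.
fn cli() -> Command {
    Command::new("build-index")
        .arg(Arg::new("annotated").long("annotated").action(ArgAction::Set))
        .arg(Arg::new("novel").long("novel").action(ArgAction::Set))
        .group(
            ArgGroup::new("txp_sources")
                .args(["annotated", "novel"])
                .required(true)  // at least one must be given
                .multiple(true), // but both together are fine
        )
}

fn main() {
    let matches = cli().get_matches();
    // clap has already rejected the "neither flag" case here.
    for key in ["annotated", "novel"] {
        if let Some(path) = matches.get_one::<String>(key) {
            println!("{key}: {path}");
        }
    }
}
```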
comment in response to post
E.g., --annotated / --novel
Other options for the first group (e.g. GENCODE txps) would be: --curated, --known
Other options for the second group would be: --denovo, --custom
comment in response to post
Yea, so this falls under the category of "other problems" that I was referring to, and I think it's very real. The increase in dev time and budgets means that more games are made cross-gen/cross-platform for financial reasons, which blurs the lines and benefits of new consoles.
comment in response to post
Honestly, hardware has evolved a lot since PS5 was in development and it makes total sense to bring the subsequent advances to the console. Especially if game devs are going to insist on using the super heavy and (IMO) poorly optimized UE5 for almost everything!
comment in response to post
Yup! I'm seeing a lot of content (from different outlets) pushing this narrative that fans are upset at the "short" lifecycle of the PS5. It feels absurd to me, and I wonder if it's astroturfed. Also, it's not like PS5 games will surreptitiously stop working on day 1 of the PS6 release! 1/2
comment in response to post
Yea, I get it. But I think the obvious way to measure the length of a generation is release to release, not "when Fred got his" to new release. With the Switch 2 (and probably the PS6), the bigger problem is that they aren't giving steep discounts on the old hardware. That's a real issue!
comment in response to post
I'm sorry, Elinne. I hope that you can find peace and closure, even if it's way harder than it should have been.