donskerclass.bsky.social
Econometrics, Statistics, Computational Economics, etc 🏳️‍⚧️ http://donskerclass.github.io
393 posts 2,275 followers 331 following
Regular Contributor
Active Commenter
comment in response to post
Rakesh Vohra is pretty good at titles
comment in response to post
This is basically how diffusion models work. Because the solution of the heat equation is just convolution with Gaussian noise, tracking a signal from its temperature is basically just noising/denoising. This perspective does provide a neat way to define Gaussians on non-Euclidean manifolds.
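To spell out the identity being invoked (standard heat-kernel facts, my summary rather than anything from the original post):

```latex
\[
\partial_t u = \tfrac{1}{2}\,\Delta u, \qquad u(0,\cdot) = f
\quad\Longrightarrow\quad
u(t,x) = (f * \varphi_t)(x), \qquad
\varphi_t(x) = (2\pi t)^{-d/2} \exp\!\Big(-\tfrac{\|x\|^2}{2t}\Big),
\]
```

so evolving a density under the heat flow for time t is exactly adding N(0, tI) noise, and the denoiser in a diffusion model is (approximately) running that flow backwards.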
comment in response to post
Lots is known for no-regret learners, which are related (equivalent? at least in some cases). Repeated play by 2 no-regret learners converges (averaged over rounds) to a (coarse) correlated equilibrium. A no-regret learner against a strategic agent can be forced into the worst equilibrium payoff.
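A minimal sketch of the first claim, using Hedge/multiplicative weights as the no-regret learner and a zero-sum 2x2 game purely for concreteness (my toy code, not from the thread):

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[1.0, -1.0], [-1.0, 1.0]])  # row player's payoffs (matching pennies)
B = -A                                     # column player's payoffs (zero-sum)

T = 5000
eta = np.sqrt(np.log(2) / T)   # standard Hedge step size for 2 actions
wA, wB = np.ones(2), np.ones(2)
joint = np.zeros((2, 2))       # empirical counts of joint play

for t in range(T):
    pA, pB = wA / wA.sum(), wB / wB.sum()
    i, j = rng.choice(2, p=pA), rng.choice(2, p=pB)
    joint[i, j] += 1
    # full-information multiplicative-weights updates given realized opponent play
    wA *= np.exp(eta * A[:, j])
    wB *= np.exp(eta * B[i, :])

print(joint / T)  # round-averaged joint play: approximately a coarse correlated equilibrium
```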
comment in response to post
I taught online learning to undergrads, but after frequentist/Bayes, so I could rederive Bayes rule as Exponential Weights and penalized risk minimization as FTRL so students could skip the philosophy if it was not their thing. I'm not sure how well it worked, honestly.
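The two identities I mean, for the record (standard results, stated loosely):

```latex
\[
w_{t+1}(\theta) \;\propto\; \pi(\theta)\exp\Big(-\eta \sum_{s \le t} \ell_s(\theta)\Big),
\]
which for $\eta = 1$ and log loss $\ell_s(\theta) = -\log p(y_s \mid \theta)$ becomes
\[
w_{t+1}(\theta) \;\propto\; \pi(\theta)\prod_{s \le t} p(y_s \mid \theta)
\;\propto\; p(\theta \mid y_{1:t}),
\]
i.e. Bayes' rule, while the FTRL update
\[
x_{t+1} = \arg\min_x \Big\{\sum_{s \le t} \ell_s(x) + \tfrac{1}{\eta} R(x)\Big\}
\]
is penalized empirical risk minimization with regularizer $R$.
```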
comment in response to post
I'm working my way through this paper, but the general idea, that forecasting can be a way of arranging existing information to satisfy a specified set of desiderata, rather than generating "knowledge", is often compelling. Though for some desiderata we'd call that "optimal information processing".
comment in response to post
After teaching a class out of Hyndman and Athanasopoulos and worrying a fair bit about what distinguished forecasting as a distinct field of study, my view is that it's a set of tools within Operations. Separation of prediction and control is useful as an artifact of an org chart, not of reality.
comment in response to post
Crudely, OR deals with "how do I solve this problem", Econ does "Let's look at this behavior and figure out what problem it's trying to solve". Both useful exercises, but the former is often more useful to decision makers if they have not, in fact, already solved their problems.
comment in response to post
My first reaction to that framing was "isn't that just Operations Research?" That said, "Econ should be more like OR" is a take I've definitely flirted with. Side effect of hanging out too long in a b-school, I guess.
comment in response to post
Agree with 1 & 2, not sure about 3. As I've said before, there are mainstream economists working on developing radical alternatives to capitalism, it's just that to avoid suspicion they call themselves "auction theorists". Also, the things they're building will be worse.
comment in response to post
This thread is making me regret that I let drop, with my sudden moves last year, an earlier project setting out with the ambition to try to kill Bayes Nash Equilibrium. Or maybe we just need to read about how things work in other countries, idk.
comment in response to post
Optimistically, going from evaluation to "design" is the move reaching outside of the existing support. Actually existing market design seems to work okay in small-scale settings, but only in a few areas is it coming to terms with the iterative experimentation and failure needed for ambitious changes.
comment in response to post
Conjecture: applied theory declined in academia because w/ realistic strategic &/or behavioral considerations, back-of-the-envelope price theory becomes nearly content-free. Fixes need some constraints: equilibrium refinements? (no), transportable behavioral models? (lol) global data to extrapolate?
comment in response to post
It seems like mirror maps are a special case of mapping to an unconstrained transformed space, which is one of your approaches, it's just that Bregman divergence + convex conjugate gives a "nice" family of maps where the general case is not nice. I agree sequential convexification seems harder.
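Concretely, the special case I mean (standard mirror descent, my notation):

```latex
\[
y_{t+1} = \nabla\psi(x_t) - \eta\, g_t, \qquad x_{t+1} = \nabla\psi^*(y_{t+1}),
\]
an unconstrained additive step in the dual space, with the convex conjugate
$\psi^*$ mapping back to the constraint set. For negative entropy
$\psi(x) = \sum_i x_i \log x_i$ on the simplex, $\nabla\psi^*$ is (up to
constants) the softmax, recovering exponentiated gradient.
```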
comment in response to post
On PNODEs, it seems related to projected OGD, which looks like what you'd get by applying your projection approach to constrained gradient flow. (1) Is that true? And (2) the convex opt lit has several related tricks for constraints (e.g. mirror or proximal maps); how do these fit in your categorization?
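For reference, the projected-OGD update I have in mind, i.e. a forward-Euler discretization of constrained gradient flow (my toy sketch; the simplex projection uses the standard sorting algorithm):

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection onto {x : x >= 0, sum(x) = 1}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u - css / np.arange(1, len(v) + 1) > 0)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

def projected_ogd(grad, x0, eta, T):
    """x_{t+1} = Proj(x_t - eta * g_t): Euler steps of projected gradient flow."""
    x = x0
    for t in range(T):
        x = project_simplex(x - eta * grad(x, t))
    return x

# toy stationary loss f(x) = ||x - c||^2, with c already in the simplex
c = np.array([0.7, 0.2, 0.1])
print(projected_ogd(lambda x, t: 2 * (x - c), np.ones(3) / 3, 0.1, 500))  # -> ~c
```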
comment in response to post
Get Sobolev-norm pilled, losers.
comment in response to post
False. I am my own least favorite bisexual, and I'm going to have a happy pride anyway. 🩷💜💙
comment in response to post
Sure, but also these are kind of a silly use case; they’re basically using the NN as an ODE solver in a parametric model, in cases where a good ODE solver would be fast anyway. You could motivate that (differentiable layer in a bigger system? I’d still use neural ODEs) but rarely on its own.
comment in response to post
PINNs hard to train and underperforming well-tuned baselines? News at 11! Seriously though, I do like GP/RKHS for known linear operators (Wahba was doing it in the 70s!) but this says less about Deep Learning in general and more about PINNs specifically, especially on small problems.
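The GP/RKHS trick for a known linear operator A, roughly (textbook material, my notation):

```latex
\[
f \sim \mathrm{GP}(0, k)
\;\Longrightarrow\;
\mathrm{Cov}\big((Af)(x), (Af)(x')\big) = A_x A_{x'} k(x, x'), \qquad
\mathrm{Cov}\big(f(x), (Af)(x')\big) = A_{x'} k(x, x'),
\]
so observations $y_i = (Af)(x_i) + \varepsilon_i$ (derivatives, integrals, a
linear PDE's residuals) still yield the usual closed-form GP posterior for $f$,
just with transformed kernels.
```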
comment in response to post
I had a several-week unit on Bayesian methods in an undergraduate elective course on Forecasting, from an Economics department. Old class notes are at donskerclass.github.io/Forecasting....
comment in response to post
❤️
comment in response to post
There is an R implementation of Torch (through libtorch). Not pure R, obviously, but no fast R library is. I'm not going to vouch for it without having tried it, though. torch.mlverse.org There was also (still is?) an R Keras with a similar deal, back in the Tensorflow era.
comment in response to post
The new theoretical aspects of M-estimation w/ ∞-dim nuisances are mostly notation: same estimator & formulas, though with a generalized notion of derivative, under regularity. The actual challenge is performing the optimization. Also calculating derivatives, but bootstrap avoids that.
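That is, schematically (my loose notation):

```latex
\[
\hat\beta = \arg\min_{\beta \in B}\; \frac{1}{n}\sum_{i=1}^n m(Z_i, \beta),
\qquad
\frac{1}{n}\sum_{i=1}^n D_\beta m(Z_i, \hat\beta)[h] = 0 \quad \forall\, h,
\]
where $B$ may be a function space and $D_\beta$ is a Gateaux (directional)
derivative; for $B = \mathbb{R}^p$ this is the usual score equation, and the
sandwich/influence-function formulas keep the same shape with $D_\beta$
replacing $\nabla_\beta$.
```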
comment in response to post
Congrats to Amazon on snagging you! P.S. Your presentation at Emory was very helpful and I'm glad I could catch the Zoom stream. Any chance they recorded the presentations?
comment in response to post
I don't know; I'm pretty happy to let non-specialists come to me and learn to understand what I know starting from first principles, with time and effort. Whether this counts as "accessible" might involve a discussion with our Office of Student Aid, though.
comment in response to post
"Neural Network Prediction Using The Stock Market" is finally a new idea for a time series class project. Also, "frankly baffling misunderstanding", "verges on self-parody" and "inane hot take" in your summaries of the *non*-SIGBOVIK links suggest why that conference is necessary...
comment in response to post
As a teen my only knowledge of this band was an .mp3 of this one song which I nevertheless listened to frequently on repeat for reasons which will forever remain shrouded in mystery.
comment in response to post
I haven't dived into this part of the literature yet either, so thanks, I will take a look!
comment in response to post
Thanks, I hadn't seen this one! I will take a look; Christoph and I overlapped at Cowles when he was a postdoc so I tend to appreciate the perspectives he offers.
comment in response to post
See the papers linked in the alt text of the quoted post, particularly the Bayesian GMM methods, which are close to this, or this talk by Yiu et al, which is closer drive.google.com/file/d/1qr-i...
comment in response to post
Bayesian bootstrap of the efficient influence function does the job (and has been widely studied), but relies heavily on the asymptotic approximations; are there similarly efficient approaches for (near-)exact posterior marginals?
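For concreteness, the recipe I mean, sketched in code (my toy example; phi_hat here is the estimated *uncentered* efficient influence function evaluated at the data):

```python
import numpy as np

def bayesian_bootstrap_eif(phi_hat, n_draws=2000, seed=0):
    """Posterior draws for a target functional via Dirichlet(1,...,1)
    reweighting of estimated (uncentered) EIF values."""
    rng = np.random.default_rng(seed)
    W = rng.dirichlet(np.ones(len(phi_hat)), size=n_draws)  # (n_draws, n) weights
    return W @ phi_hat  # each draw is sum_i w_i * phi_hat(Z_i)

# toy example: the functional is the mean, whose uncentered EIF is Z itself
z = np.random.default_rng(1).normal(loc=2.0, size=500)
draws = bayesian_bootstrap_eif(z)
print(draws.mean(), np.quantile(draws, [0.025, 0.975]))
```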
comment in response to post
The most natural computational question to me is whether there are computational speedups achievable for targeted posterior functionals that mimic the statistical gains from decoupling target and nuisance. We get lower statistical dimension, but most approaches still need to compute full posteriors.
comment in response to post
Laura Liu et al have some work on this with an empirical Bayes type approach. There are definitely finite-sample bias issues that show up in AR(1) models and are exacerbated in the default panel versions. doi.org/10.3982/ECTA...
comment in response to post
Oh, thanks, I completely missed that! Expect more vibrant threads on Bayesian updating from here on out.
comment in response to post
Thank God that's not me.
comment in response to post
Depends on whether the economist is a good cook. Is there anyone out there who can vouch for Abhijit Banerjee's cookbook? youtu.be/ya7SP_qH5HU?...
comment in response to post
Don’t misunderstand, I am strongly pro-fujoshi. Or, I guess at this point, part of the crowd…
comment in response to post
Not saying, but...
comment in response to post
Some searching finds a bound in the proof of Prop 18 in arxiv.org/abs/1006.1138, claiming that for F = {x ↦ ⟨w,x⟩ : ‖w‖ ≤ 1}, fat_α(F) ≤ 32/α² whenever α > 4√2/√T. I have no clue whether the bound is tight.
comment in response to post
There are some very unique suggestions about unconventional sources for a medical literature review in this episode.
comment in response to post
It's not clear that many people ever actually learned Le Cam, which I think is the point of the quote. Many of us have read partway through the papers, though. FWIW the reinvention was in fact in a paper on a practical AI problem where the lower bounds matter, and the well-paid people did figure it out.