fatihguvenen.bsky.social
Economics professor at University of Minnesota. Director, Minnesota Economics Big Data Institute. Co-Director: GRID Project. Research on Macro, Labor Markets, Inequality.
58 posts · 2,232 followers · 308 following
Regular Contributor
Active Commenter
comment in response to post
Again, yes, it's my cash.. and all I am trying to do is close my account. The cherry on top is when the agent at the branch smiles politely and goes: May I ask why are you closing your account? So... @bankofmontreal.bsky.social still has my $3,500 and I have no clue how to get it back!
comment in response to post
You email the person at the branch who gave you his card and told you to call just in case. Wrong again.. My email bounces back!! Why? Obviously, to make it more fun! You see, the bank's email server sends me a long email full of digits & characters, which I am sure are more clues to solve. Genius! Sigh.. ++
comment in response to post
designed by a genius mind! See, you will get back your money - yes, from MY checking account - once I solve all the clues! Isn't this fun? So, I go to the branch. Wrong move. You need to call. I call. Your birthday is wrong, you cannot talk to an agent. I check my online info. Birthday is correct +
comment in response to post
Every time I interact with RePEc or FRED, I can't help but think how invaluable your contributions 🙏🏼 to the profession have been!
comment in response to post
Democrats are going to a gun fight with a kitchen knife (famous Untouchables quote). Not that anyone expected better from them, but it is still depressing to see how out of ideas they are...
comment in response to post
Thanks for reading. My first Econ research post here. Curious to see how active this platform is. #economics #linearization
comment in response to post
To conclude: I have no horse in this race. I am posting this only because I am concerned. I am also hoping to be persuaded by my colleagues who work on these methods: show us how these solutions compare to the K-S solution, or, even better, to a full recursive GE solution -- for large shocks.
comment in response to post
20/n Famously, the 2010 JEDC special issue on post-Krusell-Smith models included one method that took less than 1 second to obtain a solution, compared to 7 to 300 minutes for the other solutions. But it was so inaccurate that it looked nothing like the true solution (Den Haan, JEDC, 2010).
comment in response to post
19/n I know that some colleagues are working to introduce second-order terms into these models but I don’t know if any of them has answered the questions raised above about accuracy for large shocks.
comment in response to post
18/n Suppose we keep developing more and more linearization-based methods and use them to solve models so complex that we can never solve those models with more accurate methods. We may never know how accurate or inaccurate our solutions are, since we will have no benchmark to compare them to.
comment in response to post
17/n The problem is that 3-to-5-sigma shocks are so rare under Gaussian shocks that most solution algorithms effectively ignore them (they are rarely realized inside the ergodic set of the solution). But in reality they occur much more frequently, because aggregate shocks in the data have a longer tail.
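To put rough numbers on the "longer tail" point, here is a back-of-the-envelope sketch of my own (the Student-t with 4 degrees of freedom is a purely illustrative fat-tailed alternative, not something from the thread): under a Gaussian, 3- and 5-sigma shocks are vanishingly rare, while even a moderately fat-tailed distribution makes them orders of magnitude more likely.

```python
# Back-of-the-envelope sketch (my own illustration, not from the post): how much
# more likely are 3- and 5-sigma aggregate shocks if the true distribution has
# fatter tails than the Gaussian? The fat-tailed alternative here is a Student-t
# with 4 degrees of freedom, rescaled to unit variance -- an illustrative choice.
import numpy as np
from scipy import stats

df = 4
scale = np.sqrt(df / (df - 2))   # std. dev. of a t(4) variable, used to standardize it

for k in (3, 5):
    p_gauss = 2 * stats.norm.sf(k)             # P(|shock| > k sigma) under a Gaussian
    p_fat   = 2 * stats.t.sf(k * scale, df)    # same event under the unit-variance t(4)
    print(f"{k}-sigma shock: Gaussian {p_gauss:.1e} vs fat-tailed {p_fat:.1e} "
          f"(~{p_fat / p_gauss:.0f}x more likely)")
```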
comment in response to post
16/n A partial answer is provided by Terry (JMCB, 2017), who compares KS to Reiter for varying degrees of countercyclicality of firm-level shocks (see figure): for larger values, the linearized solution moves both quantitatively AND qualitatively in the opposite direction of the true solution.
comment in response to post
15/n Are linearization-based methods accurate for such large deviations? Note that even Krusell-Smith is a "local" solution method, valid only near the steady state (inside the ergodic set). How big are the discrepancies of linearized solutions from the “true” solution? I’d love to see the answer.
comment in response to post
14/n So, the greatest benefit of writing and solving HA models is to figure out what to do when a BIG crisis hits. We are already well-equipped to deal with average recessions! So, new methods are only useful to the extent that they can solve complex models accurately for BIG shocks.
comment in response to post
13/n These recessions involve substantial deviations from trend: a ~6% to ~10% fall -- or 3 to 5 sigma -- is common, with even bigger declines in developing countries. Plus, idiosyncratic shocks are countercyclical, with their variance rising or their skewness becoming more negative during recessions.
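For concreteness, the implicit arithmetic (my own back-of-the-envelope; the ~2% standard deviation of log output around trend is an assumed round number, not a figure from the thread):

```latex
% Assumption: std. dev. of log-output deviations from trend is roughly 2%.
\sigma_{\hat{y}} \approx 2\%
\quad\Longrightarrow\quad
\frac{6\%}{2\%} = 3\sigma ,
\qquad
\frac{10\%}{2\%} = 5\sigma .
```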
comment in response to post
12/n The main use of HA models is to study business cycles & policy responses to recessions. BUT, from a welfare perspective, what really matters are the big recessions: the Great Recession, the Euro sovereign debt crisis, and the 1990s recessions in Northern Europe are 10x more important than an average recession.
comment in response to post
11/n So, what is the problem? Linearization methods work well because the full Krusell-Smith (nonlinear) solution turns out to be pretty (log-)flat in aggregate states. However, while this is true for small shocks — 1-to-2-sigma deviations from trend — for larger shocks the nonlinearities do matter. -->
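As a purely illustrative sketch of this point (my own toy example, NOT a Krusell-Smith model): take a stylized nonlinear law of motion for aggregate capital, log-linearize it at the steady state, and watch the approximation error grow with the size of the TFP shock. All parameter values and the functional form below are assumptions made for illustration only.

```python
# Toy illustration (my own sketch): the error of a log-linear approximation grows
# roughly quadratically with the size of the aggregate shock. The "true" law of
# motion is a stylized Solow-type capital update; parameters are illustrative.
import numpy as np

alpha, delta, s = 0.36, 0.08, 0.2   # assumed: capital share, depreciation, saving rate
sigma_z = 0.02                       # assumed std. dev. of log TFP deviations

def k_next(log_k, log_z):
    """'True' nonlinear law of motion in logs: K' = s*Z*K^alpha + (1-delta)*K."""
    K, Z = np.exp(log_k), np.exp(log_z)
    return np.log(s * Z * K**alpha + (1 - delta) * K)

# Deterministic steady state: s*K^alpha = delta*K  =>  K* = (s/delta)^(1/(1-alpha))
log_k_ss = np.log((s / delta) ** (1 / (1 - alpha)))

# Log-linearize in the z-direction around (K*, z = 0) with a numerical derivative
eps = 1e-6
dz = (k_next(log_k_ss, eps) - k_next(log_k_ss, -eps)) / (2 * eps)

for n_sigma in (1, 2, 5):
    z = -n_sigma * sigma_z                         # a negative TFP shock of n_sigma size
    true_val   = k_next(log_k_ss, z)               # nonlinear ("true") next-period capital
    linear_val = k_next(log_k_ss, 0.0) + dz * z    # log-linear approximation
    print(f"{n_sigma}-sigma shock: linearization error = "
          f"{abs(true_val - linear_val):.2e} log points")
```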
comment in response to post
10/n I am familiar with Reiter's (clever) method, but less so with the later ones, as I have not used them in my research. So, I am posting my concerns here, hoping that it will lead to a discussion of the pros and cons of these approaches and that the authors can respond and, hopefully, persuade us.
comment in response to post
9/n The most recent examples include work by Bardoczy, Auclert, Rognlie, Straub, and others (sequence-space Jacobian methods). These methods promise to be much faster than even Reiter's method.
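For readers unfamiliar with these methods, here is my stylized one-equation summary of the sequence-space Jacobian idea (my own paraphrase, not the authors' exact notation):

```latex
% Stack the time paths of the unknown aggregates into a vector U and the
% exogenous shock paths into Z, write general equilibrium as a system of
% conditions in sequence space, H(U, Z) = 0, and perturb to first order
% around the steady-state path:
H(U, Z) = 0
\quad\Longrightarrow\quad
dU \;=\; \underbrace{-\,H_U^{-1} H_Z}_{\text{sequence-space Jacobian}}\; dZ .
% The speed gains come from building H_U and H_Z efficiently and reusing them;
% the point of the thread is that this step, like Reiter's, is first order --
% i.e., linear -- in the aggregate shocks.
```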
comment in response to post
8/n SECOND: Why am I concerned about accuracy? Background: in HA models, linearization-based methods start with Reiter (JEDC, 2009). His method is nonlinear in individual states but imposes linearity in aggregate states. It solves the Khan & Thomas (ECMA, 2008) model about 100x faster than the Krusell-Smith method.
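A rough sketch of the structure Reiter's method works with (my own paraphrase of Reiter, JEDC 2009, not his notation):

```latex
% Discretize the cross-sectional distribution and the individual policy
% functions, and stack them, together with the aggregates, into one large
% vector X_t. The equilibrium conditions
\mathbb{E}_t \, F\!\left(X_{t+1},\, X_t,\, \varepsilon_{t+1}\right) \;=\; 0
% are handled fully nonlinearly in the individual dimension at the steady
% state, but perturbed only to FIRST order in X_t around that steady state --
% which is where the linearity in aggregate states comes from.
```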
comment in response to post
7/n To be clear, I don't think speed is irrelevant. My point is that the objective in developing computational algorithms should be to improve speed SUBJECT TO matching the accuracy of the most accurate available solution for large shocks. This brings us to Accuracy for Large Shocks... -->
comment in response to post
6/n Heck, many applied Econ papers use 100s of CPUs and run for months… Do we think macro research is less consequential? The global cost of the 2008 financial crisis is estimated to be ~$2 trillion. So, an algo that solves K-S in 10 secs vs. 10 minutes means nothing - unless it's accurate.
comment in response to post
5/n (Side note: In case you are thinking about DeepSeek: yes, it's revolutionary, but not because it is much, much cheaper and faster to train. It is because it is ALSO as good as the best LLMs. In other words, ACCURACY. Otherwise, nobody would have cared about it...)
comment in response to post
4/n The James Webb Space Telescope took hundreds of scientists and engineers 5 years to design plus 20 years to build, and it cost $10 billion. LLMs use 100s of thousands of CPUs & cost billions to train... Researchers in climatology, physics, applied math, etc. use tens of thousands of CPUs for simulations.
comment in response to post
3/n FIRST: Why do I discount speed gains? If we believe economics is a science & that our research should be taken seriously to design policies that can cause 100s of billions or trillions of $$ in costs or savings, the speed of an algorithm is largely irrelevant. The #1 priority is ACCURACY.
comment in response to post
2/n While (ii) can be a real advantage, (i) is NOT. Yet it is (i) that gets far too much emphasis... Let me explain why.
comment in response to post
stockholders are heterogeneous. You often see the same problem among immigrant populations, who all arrive with little wealth and try to start a business by having half a dozen of them partner together, which leads to its failure. Anyway, this is my two cents.
comment in response to post
that arise in investment decisions, which slows down or completely shuts down investment. This is not a controversial view, as it underlies the greatest theoretical challenge of studying firm decisions under incomplete markets: how to define the proper discount factor when ..+
comment in response to post
I think that's a very smart argument. But I may be biased because I've always wanted to write a paper making that argument. This is mostly relevant at very low levels of right-tail inequality but I think having some wealth concentration helps avoid coordination and contractual problems +