spiindoctor.bsky.social
Senior Modeler @ Discover / Capital One. Houston, TX.
Research interests: Matrix Decomposition, Clustering, Manifold Learning, Networks, Agent-Based Models, Asset Allocation, Risk Modelling | Here for the memes 😊 | 🇨🇮🇿🇦
1,204 posts
1,477 followers
1,935 following
Regular Contributor
Active Commenter
comment in response to
post
wtf why are they reading this website?
comment in response to
post
Most of Texas state politics is really bad, aside from housing policy.
comment in response to
post
Deleting your previous chats and clearing cookies won't do it?
comment in response to
post
This country shows so much promise, but it regularly gets snuffed out by special interests. Now we just have to wait until the next Democrat comes to power and remembers this exists.
Maybe Democratic governors could implement this for state taxes?
comment in response to
post
They want all states to be southern states. The issue: if we moved the average toward southern states, it would irremediably alter this country for the worse. They don't mind that at all, because they and their friends thrive in these places.
comment in response to
post
Thanks for reading this thread. Feedback and questions are very welcome!
comment in response to
post
Limitations:
• Uniform skill α per agent: real players have strategy-specific strengths.
• Undirected trust links: we assumed influence is symmetric. Directed/follower–leader structures could change dynamics.
• Random matching: exploring homophily or league‐style pairing could yield new patterns.
comment in response to
post
Preliminary conclusions:
• Low‐ω games ⇒ winner‐take‐all, strong stratification, one dominant strategy.
• High‐ω games ⇒ skill-driven performance, diverse strategy ecosystem.
• High β (peer focus) can trap agents on suboptimal choices, especially when matchups are unbalanced.
comment in response to
post
Imbalanced games (ω small) magnify skill gaps—high-skill agents dominate and form cores. Balanced games dampen stratification; skill still matters, but the network is more homogeneous.
comment in response to
post
Network snapshots: after 1,000 rounds, the TMFG for ω=0 shows a tight core of high-α/high-r agents and a periphery of low performers. For ω=1, the graph is more uniform: nodes mix regardless of α and r.
comment in response to
post
For ω=0, one strategy grabs ~80% usage: everyone chases a “meta.” As ω→0.5+, usage spreads: top 5–10 strategies each get ~10–20%. This promotes diversity.
comment in response to
post
We measure the NDCG of each agent's local ĉ ranking against the true ranking. At ω≈0, mean NDCG ≈ 0.70; at ω≈1, ≈ 0.85. Balanced games let agents learn matchups more accurately (or there's nothing to learn, since matchups are uniform).
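For reference, a minimal plain-NumPy version of that metric. Using the true ĉ values themselves as graded relevance is my assumption; the thread doesn't specify the grading.

```python
import numpy as np

def ndcg(estimated: np.ndarray, true: np.ndarray) -> float:
    """NDCG of the ranking induced by an agent's estimated c-hat row
    against the ranking induced by the true c row."""
    order = np.argsort(estimated)[::-1]                 # predicted ranking
    discounts = 1.0 / np.log2(np.arange(len(order)) + 2)
    dcg = np.sum(true[order] * discounts)               # realized gain
    idcg = np.sum(np.sort(true)[::-1] * discounts)      # best possible gain
    return dcg / idcg if idcg > 0 else 0.0
```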
comment in response to
post
For ω≈0, the distribution of r is bimodal—some agents drive the dominant strategy, others slump. For ω near 1, the r's are unimodal but skewed left (most agents hover around average).
comment in response to
post
Influence of β: in the previous plot, points are colored by β. Agents with high β (peer-driven) underperform when ω is small; they herd on a single “dominant” strategy even if it's not ideal. Low-β (self-driven) agents adapt better.
comment in response to
post
Emergent clustering: When ω is low, agents segregate by skill: high‐skill players find and stick to the top strategy and link to each other; low‐skill players cluster separately. At high ω, less stratification. Skill matters but everyone stays mixed.
comment in response to
post
Scatter of performance ratio r vs skill α for different ω. At ω=1 (balanced), r tracks α smoothly (diminishing returns). At low ω, points split into two blobs—high‐skill cluster and low‐skill cluster.
comment in response to
post
Simulation loop (per round), with a code sketch after this list:
Each agent picks a strategy, either by optimizing or by Boltzmann sampling.
Pairings are random.
Each pair plays one game; the result updates the ĉ counters.
Every 50 total games, we recompute each r and rebuild the TMFG-filtered J.
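A self-contained sketch of that loop, under simplifying assumptions of mine: choice is purely greedy on each agent's own ĉ (no peer/energy term) and the TMFG rebuild is omitted, so this shows the bookkeeping rather than the full dynamics.

```python
import numpy as np

def simulate(c, alpha, n_rounds=1_000, seed=0):
    """c: true matchup matrix; alpha: per-agent skills in (0, 1]."""
    rng = np.random.default_rng(seed)
    n_agents, n_strats = len(alpha), c.shape[0]
    # Per-agent counters indexed by (own strategy, opponent strategy).
    wins = np.zeros((n_agents, n_strats, n_strats))
    losses = np.zeros_like(wins)

    for _ in range(n_rounds):
        # c-hat = wins / (wins + losses), defaulting to 0.5 with no data.
        totals = wins + losses
        c_hat = np.divide(wins, totals,
                          out=np.full_like(wins, 0.5), where=totals > 0)
        # Greedy pick: the strategy with the best mean estimated win rate.
        strategies = c_hat.mean(axis=2).argmax(axis=1)
        # Random pairings; each pair plays one Bernoulli game.
        order = rng.permutation(n_agents)
        for i, j in zip(order[::2], order[1::2]):
            m = c[strategies[i], strategies[j]]
            p = alpha[i] * m / (alpha[i] * m + alpha[j] * (1 - m))
            winner, loser = (i, j) if rng.random() < p else (j, i)
            wins[winner, strategies[winner], strategies[loser]] += 1
            losses[loser, strategies[loser], strategies[winner]] += 1
    return strategies, wins, losses
```

At 2,500 agents the per-agent counters get large, so a quick test with a few hundred agents is friendlier.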
comment in response to
post
Experimental setup: 2,500 agents on a 50×50 grid as “home” (but pairings are random), T=1,000 rounds. Each agent chooses among 50 strategies; the matchup matrix is parameterized by balance ω, with separate runs across ω∈{0,…,1}. Every round, everyone picks a strategy and faces a random opponent.
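For concreteness, the same setup as a config (key names and the 0.1 sweep granularity are mine, not from the thread):

```python
CONFIG = {
    "n_agents": 2_500,      # on a 50 x 50 "home" grid; pairings stay random
    "n_strategies": 50,
    "n_rounds": 1_000,      # T
    "omega_grid": [i / 10 for i in range(11)],  # balance sweep over [0, 1]
    "rebuild_every": 50,    # games between r / TMFG recomputations
}
```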
comment in response to
post
Local ĉ updates: whenever player i plays strategy s_i against j's s_j, they increment wins_{s_i,s_j} if they win. Then ĉ_{s_i,s_j} = wins_{s_i,s_j}/(wins_{s_i,s_j} + wins_{s_j,s_i}). Over many matches, ĉ converges to the true c.
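In code, a direct transcription of that rule:

```python
import numpy as np

def record_game(wins: np.ndarray, s_i: int, s_j: int, i_won: bool) -> float:
    """Bump the winner's counter, then return the refreshed estimate
    c-hat[s_i, s_j] = wins[s_i, s_j] / (wins[s_i, s_j] + wins[s_j, s_i])."""
    if i_won:
        wins[s_i, s_j] += 1
    else:
        wins[s_j, s_i] += 1
    return wins[s_i, s_j] / (wins[s_i, s_j] + wins[s_j, s_i])
```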
comment in response to
post
Network sparsity (TMFG): in real settings, you can't pay attention to thousands of peers. After computing all J_{ij}, we filter with a Triangulated Maximally Filtered Graph (TMFG) to keep only the 3N−6 strongest links. This yields a sparse, evolving network.
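A sketch of the standard greedy TMFG construction (from the TMFG literature, not the author's code): seed with the heaviest tetrahedron, then repeatedly insert the remaining vertex with the highest gain into a triangular face, which yields exactly 3N−6 edges.

```python
import numpy as np

def tmfg_edges(W: np.ndarray) -> list:
    """Greedy TMFG over a dense similarity matrix W (N >= 4).
    Naive O(N^3) scan; real implementations cache per-vertex gains."""
    n = W.shape[0]
    # Seed tetrahedron: the 4 vertices with the largest total weight.
    seed = list(np.argsort(W.sum(axis=1))[-4:])
    edges = [(a, b) for i, a in enumerate(seed) for b in seed[i + 1:]]
    faces = [(seed[0], seed[1], seed[2]), (seed[0], seed[1], seed[3]),
             (seed[0], seed[2], seed[3]), (seed[1], seed[2], seed[3])]
    remaining = set(range(n)) - set(seed)
    while remaining:
        # Pick the (vertex, face) pair with the largest total link weight.
        _, v, k = max((W[v, a] + W[v, b] + W[v, c], v, k)
                      for v in remaining
                      for k, (a, b, c) in enumerate(faces))
        a, b, c = faces.pop(k)
        edges += [(v, a), (v, b), (v, c)]            # 3 new edges per insert
        faces += [(a, b, v), (a, c, v), (b, c, v)]   # face splits into 3
        remaining.discard(v)
    return edges  # len(edges) == 3 * n - 6
```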
comment in response to
post
Peer strength J_{ij}: players track their performance ratio r = (wins − losses)/(wins + losses). If two players have similar r, they form a strong link; if their r's differ a lot, they disconnect. This way, winners cluster together and under-performers get isolated.
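A sketch of that link rule. The thread doesn't give the exact mapping from r-similarity to J, so the kernel below (1 − |r_i − r_j|/2) is an assumption:

```python
import numpy as np

def peer_strengths(wins: np.ndarray, losses: np.ndarray) -> np.ndarray:
    """Dense J from performance-ratio similarity (wins/losses per agent).
    r = (wins - losses) / (wins + losses) lies in [-1, 1]; the kernel
    1 - |r_i - r_j| / 2 maps identical ratios to 1, opposite ones to 0."""
    total = np.maximum(wins + losses, 1)     # avoid division by zero
    r = (wins - losses) / total
    J = 1.0 - np.abs(r[:, None] - r[None, :]) / 2.0
    np.fill_diagonal(J, 0.0)                 # no self-links
    return J
```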
comment in response to
post
Decision rules (a code sketch follows):
• Optimizing: always pick the strategy that lowers your energy most.
• Stochastic (Boltzmann): choose with probability ∝ exp(−ΔEnergy), so you sometimes explore suboptimal moves.
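Both rules in one helper (the temperature knob is my addition; the post's exp(−ΔEnergy) corresponds to T=1):

```python
import numpy as np

def choose_strategy(energies: np.ndarray, rule: str = "boltzmann",
                    temperature: float = 1.0, rng=None) -> int:
    """Greedy or Boltzmann strategy choice over candidate energies."""
    if rule == "optimize":
        return int(np.argmin(energies))          # always take the best move
    rng = rng or np.random.default_rng()
    logits = -(energies - energies.min()) / temperature  # shift for stability
    p = np.exp(logits)
    return int(rng.choice(len(energies), p=p / p.sum()))
```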
comment in response to
post
Energy (inspired by Ising): think of each player's “energy” as a weighted sum of (a) agreement with neighbors and (b) the best expected win from their local ĉ estimates. A single parameter β ∈ [0,1] interpolates between peer conformity (β→1) and personal learning (β→0).
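A sketch of one plausible form of that energy; the exact functional form isn't spelled out in the thread, so the convex combination below is an assumption:

```python
import numpy as np

def energy(i: int, s: int, strategies: np.ndarray, J: np.ndarray,
           c_hat: np.ndarray, beta: float) -> float:
    """Energy of agent i holding strategy s (lower is better).
    Assumed form: beta weighs disagreement with neighbors (via J),
    (1 - beta) weighs the expected win rate under i's local c-hat."""
    # (a) peer term: penalize disagreeing with strongly linked neighbors.
    disagreement = float(np.sum(J[i] * (strategies != s)))
    # (b) learning term: mean estimated win rate of s against the field.
    expected_win = float(c_hat[s, strategies].mean())
    return beta * disagreement - (1.0 - beta) * expected_win
```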
comment in response to
post
Agent model: Each player lives on a social graph G. They pick strategies based on two impulses—conform to neighbors (peer pressure) and trust their own estimates of which strategy wins most often.
comment in response to
post
Win chance & skill: given a matchup strength c and player skills α_i, α_j (0–1), win probability ≈ (α_i·c)/(α_i·c + α_j·(1−c)). So if ω is small, c sits far from 0.5 and matchups overshadow skill; as ω→1, c≈0.5 and the expression reduces to α_i/(α_i+α_j), so skill differences drive wins.
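As a one-liner (direct transcription of the formula):

```python
def win_probability(alpha_i: float, alpha_j: float, c: float) -> float:
    """With equal skills this reduces to the raw matchup strength c;
    at c = 0.5 it reduces to alpha_i / (alpha_i + alpha_j), pure skill."""
    return (alpha_i * c) / (alpha_i * c + alpha_j * (1.0 - c))
```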
comment in response to
post
Balance ω ranges from 0 (one strategy dominates) to 1 (all equally matched). Noise b widens the spread of c-values. You can see above how varying ω and b reshapes the matchup matrix.
comment in response to
post
We model each pairwise win probability c_{i,j} using two intuitive knobs: “balance” ω (how fair matchups are) and “noise” b (randomness). Low ω means rock-paper-scissors-style cycles; high ω means nearly 50-50 in every pairing.
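A sketch of one way to generate such a matrix; the cyclic structure at ω=0 is my assumption for the “rock-paper-scissors style” the post describes:

```python
import numpy as np

def matchup_matrix(n: int, omega: float, b: float, seed: int = 0) -> np.ndarray:
    """c[i, j] = P(strategy i beats strategy j), with c + c.T == 1.
    omega=0 -> cyclic dominance; omega=1 -> everything near 50-50;
    b widens the random spread around each entry."""
    rng = np.random.default_rng(seed)
    idx = np.arange(n)
    diff = (idx[None, :] - idx[:, None]) % n
    # Each strategy beats the next (n - 1) // 2 strategies in cyclic order.
    cyclic = np.where(diff == 0, 0.5,
                      np.where(diff <= (n - 1) // 2, 1.0, 0.0))
    if n % 2 == 0:
        cyclic[diff == n // 2] = 0.5         # 50-50 at the antipode
    # Antisymmetric noise keeps c[i, j] + c[j, i] == 1, even after clipping.
    raw = b * (rng.random((n, n)) - 0.5)
    noise = np.triu(raw, 1) - np.triu(raw, 1).T
    return np.clip((1 - omega) * cyclic + omega * 0.5 + noise, 0.0, 1.0)
```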
comment in response to
post
The problem: In settings from online gaming to portfolio choice, players pick strategies without full payoff info. I want to see how the hidden matchup structure and peer influence drive performance and strategy diversity.
comment in response to
post
I read a paper which argues that combining PCA and ICA yields better features for a supervised learning model (short-term stock price prediction, I believe). www.scirp.org/journal/pape...
But I have worked with UMAP embeddings as features myself and they just overfit, so I am curious but skeptical.
comment in response to
post
For a certain synthetic problem I've designed, it seems better than PCA at extracting “nonlinear/non-Gaussian” signal. But it turns out UMAP can also do this to a certain extent, so I wonder whether it has any comparative advantage over UMAP.
comment in response to
post
Haha, I love your honesty. Yeah, I ran into it as a weekend project, and I am starting to understand what kinds of features it can extract from data. But then again, there's a reason why certain techniques exist and yet aren't popular. So I wonder.