by bcrosby95
10 subcomments
- > Grok ended up performing the best while DeepSeek came close to second. Almost all the models had a tech-heavy portfolio which led them to do well. Gemini ended up in last place since it was the only one that had a large portfolio of non-tech stocks.
I'm not an investor or researcher, but this triggers my spidey sense... it seems to imply they aren't measuring what they think they are.
- I used to work for a brokerage API geared at algorithmic traders, and in my anecdotal experience many strategies seem to work well when back-tested on paper but, for various reasons, end up flopping when actually executed in the real market. Even testing a strategy with real-time paper trading can turn out differently from testing on the actual market, where other parties are also viewing your trades and making their own responses. The post did list some potential disadvantages of backtesting, so they clearly aren't totally in the dark on it.
Deepseek did not sell anything, but did well by holding a lot of tech stocks. That can be a bit of a risky strategy with everything in one sector, but it has been a successful one recently, so it's not surprising that it performed well. It seems they only get to "trade" once per day, near the market close, so it's not really real-time ingestion of data with decisions made on it.
What would really be interesting is if one of the LLMs switched their strategy to another sector at an appropriate time. Very hard to do but very impressive if done correctly. I didn't see that anywhere but I also didn't look deeply at every single trade.
by Nevermark
6 subcomments
- Just one run per model? That isn't backtesting. I mean technically it is, but "testing" implies producing meaningful measures.
Also just one time interval? Something as trivial as "buy AI" could do well in one interval, and given models are going to be pumped about AI, ...
100 independent runs of each model over 10 very different market-behavior time intervals would produce meaningful results. Like actually credible, meaningful means and standard deviations.
This experiment, as is, is a very expensive unbalanced uncharacterizable random number generator.
- There's also this thing going on right now: https://nof1.ai/leaderboard
Results are... underwhelming. All the AIs are focused on daytrading Mag7 stocks; almost all have lost money with gusto.
by cheeseblubber
10 subcomments
- OP here. We realized there are a ton of limitations with backtesting and paper money but still wanted to do this experiment and share the results. By no means is this statistically significant on whether or not these models can beat the market in the long term. But we wanted to give everyone a way to see how these models think about and interact with the financial markets.
- This is pretty cool.
We're also running a live experiment on both stocks and options. One difference with our experiment is a lot more tools being available to the models (anything you can think of, sec filings, fundamentals, live pricing, options data).
We think backtests are meaningless given that LLMs have mostly memorized everything that happened, so it's not a good test. Instead we're running a forward test. Not enough data for now, but pretty interesting initial results:
https://rallies.ai/arena
- I wouldn’t trust any backtesting with these models. Try doing a real-time test over 8 months and see what happens then. I’d also be suspicious of anything that doesn’t take actual costs into account.
by bitmasher9
1 subcomments
- 1. Backtesting doesn’t mean very much. For lots of reasons real trading is different than backtesting.
2. 8 months is an incredibly short trading window. I care where the market will be in 8 years way more than 8 months.
- >Each model gets access to market data, news APIs, company financials...
The article is very very vague on their methodology (unless I missed it somewhere else?). All I read was, "we gave AI access to market data and forced it to make trades". How often did these models run? Once a day? In a loop continuously? Did it have access to indicators (such as RSI)? Could it do arbitrary calculations with raw data? Etc...
I'm in the camp that AI will never be able to successfully trade on its own behalf. I know a couple of successful traders (and many unsuccessful!), and it took them years of learning and understanding before breaking even. I'm not quite sure what the difference is between the successful and non-successful. Some sort of subconscious knowledge from staring at charts all day? A level of intuition? Regardless, it's more than just market data and news.
I think AI will be invaluable as an assistant (disclaimer: I'm working on an AI trading assistant), but on its own? Never. Some things simply can't be solved with AI, and I think this is one of them. I'm open to being wrong, but nothing has convinced me otherwise.
by dudeinhawaii
0 subcomment
- This is the completely wrong way to do this. I say this as someone who does work in this area of leveraging LLMs, to a limited degree, in trading.
LLMs are naive, easily convinced, and myopic. They're also non-deterministic. We have no way of knowing whether, if you ran this little experiment 10 times, they'd pick something else each time. This is scattershot plus luck.
The RIGHT way to do this is to first solve the underlying problem deterministically. That is, you first write your trading algorithm that's been thoroughly tested. THEN you can surface metadata to LLMs and say things along the lines of "given this data + data you pull from the web", make your trade decision for this time period and provide justification.
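A minimal sketch of that split, assuming a simple moving-average rule as the deterministic layer (every function and prompt name below is hypothetical, not the commenter's actual pipeline):

    # Deterministic rule first; the LLM only sees precomputed metadata.
    # All names here are hypothetical illustrations.
    import statistics

    def crossover_signal(closes, fast=10, slow=30):
        """Buy/flat decided entirely by testable arithmetic, not by the LLM."""
        fast_ma = statistics.mean(closes[-fast:])
        slow_ma = statistics.mean(closes[-slow:])
        return "long" if fast_ma > slow_ma else "flat"

    def review_prompt(ticker, signal, headlines):
        """The LLM is asked to justify or veto a decision, not to originate it."""
        return (
            f"The rule-based system proposes going {signal} on {ticker}.\n"
            f"Relevant headlines: {headlines}\n"
            "Given this data plus anything you pull from the web, confirm or veto, with reasoning."
        )

    closes = [100 + 0.1 * i for i in range(60)]  # placeholder uptrending prices
    print(review_prompt("XYZ", crossover_signal(closes), ["headline A", "headline B"]))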
Honestly, adding LLMs directly to any trading pipeline just adds non-useful non-deterministic behavior.
The main value is speed of wiring up something like sentiment analysis as a value add or algorithmic supplement. Even this should be done using proper ML but I see the most value in using LLMs to shortcut ML things that would require time/money/compute. Trading value now for value later (the ML algorithm would ultimately run cheaper long-run but take longer to get into prod).
This experiment, like most "I used AI to trade" blogs, is completely naive in its approach. They're taking the lowest possible hanging fruit. Worse still when those results are just the rising tide lifting all boats.
Edit (was a bit harsh): This experiment is an example of the kind of embarrassingly obvious things people try with LLMs, without understanding the domain, and then write up. To an outsider it can sound exciting. To an insider it's like seeing a news story saying "LLMs are designing new CPUs!". No, they're not. A more useful bit of research would be to control for the various variables (sector exposure, etc.), then run it 10,000 times and report back on how LLM A skews towards always buying tech and LLM B skews towards always recommending safe stocks.
Alternatively, if they showed the LLM taking a step back and saying "ah, let me design this quant algo to select the best stocks" -- and then succeeding -- I'd be impressed. I'd also know that it was learned from every quant that had AI double check their calculations/models/python.. but that's a different point.
- Extremely similar earlier submission but focused on cryptocurrencies, using real money, and in real time: https://news.ycombinator.com/item?id=45976832
I'm extremely skeptical of any attempt to prevent leakage of future results to LLMs evaluated via backtesting. Both because this has been shown in the literature to be difficult, and because I personally found it very difficult when working with LLMs for forecasting.
- > Testing GPT-5, Claude, Gemini, Grok, and DeepSeek with $100K each over 8 months of backtested trading
So the results are meaningless - these LLMs have the advantage of foresight over historical data.
- Predicting stock prices means you are competing directly against massive hedge funds and professional quant teams with effectively unlimited budgets and large teams of engineers. These professionals are already using and constantly tweaking the latest models to gain an advantage.
It is highly unlikely that you guys or any individual, even utilizing the latest LLMs will consistently discover an edge that beats the market over the long run.
by snapdeficit
0 subcomment
- Anyone who traded tech stocks in the 1990s when AmeriTrade appeared remembers this story.
Have the LLMS trade anything BUT tech stocks and see how they do.
That’s the real test.
EDIT: I realize this was probably before AmeriTrade offered options. I was calling in trades at 6:30 AM PST to my broker while he probably laughed at me. But the point is the same: any doofus could make money buying tech stocks and holding for a few weeks. Companies were splitting constantly.
by buredoranna
0 subcomment
- Like so many analyses before them, including my own, this completely misses the basics of mean/variance risk analysis.
We need to know the risk adjusted return, not just the return.
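For example, a toy Sharpe-style calculation on daily returns makes the point: two strategies with a similar total return can have wildly different risk-adjusted numbers (the return series below are made up, and the risk-free rate is assumed to be zero).

    # Toy illustration of risk-adjusted return (annualized Sharpe-style ratio).
    # The return series are invented; risk-free rate assumed to be zero.
    import statistics

    def sharpe(daily_returns, periods_per_year=252):
        mean = statistics.mean(daily_returns)
        stdev = statistics.stdev(daily_returns)
        return (mean / stdev) * periods_per_year ** 0.5

    steady = [0.001] * 9 + [0.0011]            # small, consistent gains
    lumpy = [0.0, 0.0, 0.0, 0.0, 0.0101] * 2   # similar total, all in two jumps
    print(sharpe(steady), sharpe(lumpy))       # similar return, very different ratio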
- Spoiler: They did not use real money or perform any actual trades.
- When the market is rising, everyone looks like a genius.
Would have been better to have variants of each, locked to specific industries.
It also sounds like they were -forced- to make trades every day. Why? Deciding not to trade is a good strategy too.
- I set up real-life accounts with E*Trade and Fidelity: E*Trade's auto portfolio, a Fidelity account where I have an advisor for retirement, and then a basket portfolio as well, where I used MS365 with Grok 5 and various articles and strategies to pick a set of 5 ETFs that would perform similarly to the exposure of my other two.
So far this year all are beating the S&P percentage-wise (only by <1%, though), but the AI basket is doing the best, or at least is on par with my advisor, and it's getting to the point where E*Trade's auto-investment strategy at least isn't worth it. It's been an interesting battle to watch as each rebalances at varying times as I put more funds in each, and some have solid gains whose profits get moved to more stable areas. This is only with a few grand in each account other than retirement, but it's still fun to see things play out this year.
In other words, though, I'm not surprised at all by the results. AI still isn't something to day trade with, but it is helpful for researching your desired long-term risk exposure, IMO.
- I’d rather give an LLM the earnings report for a stock and the next day's S&P 500 opening and see if it can predict the opening price.
Expecting an LLM to magically beat efficient market theory is a bit silly.
Much more reasonable to see if it can incorporate information as well as the market does (to start)
by peterbonney
0 subcomment
- The devil is really in the details on how the orders were executed in the backtest, slippage, etc. Instead of comparing to the S&P 500 I'd love to see it benchmarked against a range of active strategies, including common non-AI approaches (e.g. mean reversion, momentum, basic value focus, basic growth focus, etc.) and some simple predictive (non-generative) AI models. This would help shake out whether there is selection alpha coming out of the models, or whether there is execution alpha coming out of the backtest.
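For instance, a toy mean-reversion baseline is trivial to simulate for comparison (the prices below are synthetic placeholders; a real benchmark would reuse the article's time-filtered data feed):

    # Toy mean-reversion baseline to benchmark against, on synthetic prices.
    import random
    import statistics

    random.seed(0)
    prices = {t: [100.0] for t in ["AAA", "BBB", "CCC", "DDD"]}
    for _ in range(180):  # roughly 8 months of trading days
        for history in prices.values():
            history.append(history[-1] * (1 + random.gauss(0.0005, 0.01)))

    def mean_reversion_picks(prices, lookback=20, top_n=2):
        """Each day, buy the names trading furthest below their trailing average."""
        gap = {t: p[-1] / statistics.mean(p[-lookback:]) - 1 for t, p in prices.items()}
        return sorted(gap, key=gap.get)[:top_n]  # most negative gap first

    print("Picks at end of sample:", mean_reversion_picks(prices))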
- One of the recent NeurIPS best paper recipients is relevant here: https://openreview.net/forum?id=saDOrrnNTz
> an extensive empirical study across more than 70 models, revealing the Artificial Hivemind effect: pronounced intra- and inter-model homogenization
So the inter-model variety will be exceptionally low. Users of LLMs will intuitively know this already, of course.
by morgengold
0 subcomment
- Am I right that you let the LLMs decide for themselves what to read into their input data (like market data, news APIs, company financials)? While this is worth testing, I think it would be more interesting to give them patterns to look for. I played around with using them for technical analysis and let them make the associations with past stock performance. They can even differentiate between what worked in the last 5 years, what worked in the last year, in the last 3 months, etc. This way they can (hopefully) pick up changes in market behavior. Generally, the main strength of this approach is to use their pattern-recognition capability and also take the human factor (emotions) out of trading decisions.
- I built a tool for backtesting LLMs on Polymarket. You can try it with live data, without signing up, at: https://timba.fun
- It seems to me that short-term simulations will tend to underprice risk.
Imagine a market where you can buy only two stocks:
Stock A goes up invariably 1% per month
Stock B goes up 1.5% per month with a 99% chance, but loses 99% of its value with a 1% chance.
Stock B has a 94% chance of beating stock A on a 6 month simulation, but only a 30% chance of beating stock A on a 10 year simulation.
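Quick check of those numbers (B effectively beats A only if it never takes the 99% hit, so the probability is just 0.99 raised to the number of months):

    # Probability that stock B avoids its 1%-per-month crash over a horizon.
    # A single 99% drawdown leaves B far behind A, so "B beats A" ~= "no crash".
    p_no_crash = 0.99
    print(p_no_crash ** 6)    # ~0.94 over a 6-month simulation
    print(p_no_crash ** 120)  # ~0.30 over 10 years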
- I spent a while looking at trading algos a few years back (partly because of quant stuff I got involved in, and partly out of curiosity). I found that none of the “slow” trading (i.e., that you could run at home alongside your day trading account) was substantially effective (at least in my sampling), but I never thought an LLM would be any good at it because all the analysis is quantitative, not qualitative or contextual.
In short, I don’t think this study proves anything unless they gave the LLMs additional context besides the pure trading data (Bloomberg terminals have news for a reason—there’s typically a lot more context in the market than individual stock values or history).
by throwawayffffas
1 subcomments
- > We also built a way to simulate what an agent would have seen at any point in the past. Each model gets access to market data, news APIs, company financials—but all time filtered: agents see only what would have been available on that specific day during the test period.
That's not going to work: these agents, especially the larger ones, will have news about the companies embedded in their weights.
- Predicting the stock market will likely never happen because it’s recursive. We can predict the next 10 days of weather, but the weather doesn’t change because it read your forecast. As long as markets continue to react to their own reactions, they will remain unpredictable.
If the strategy is long, there might be alpha to be found. But day trading? No way.
- Would be nice to include the logos in the legend. I use these LLMs every day and didn't know what half the logos on the graph were.
by keepamovin
1 subcomments
- I’d say Grok did best because it has the best access to information. Grok's deep search and real-time knowledge capabilities, thanks to the X integration and just generally being plugged into the pulse of the Internet, are really best in class. It's a great OSINT research tool.
Interesting how this research seems to tease out a truth traders have known for eons: picking stocks is all about having information (maybe a little bit of asymmetric information due to good research), not necessarily about all the analysis that can be done. The analysis is important, but information is king, because it's a speculative market that's collectively reacting to those kinds of signals.
by natiman1000
0 subcomment
- If the code and prompts are not open source how can we trust anything yall say?
- Their annual geometric mean return is 45%! That's some serious overbetting. In a market that didn't accidentally align with their biases, they would have lost money very quickly.
- > We were cautious to only run after each model’s training cutoff dates for the LLM models
Grok is constantly training and/or it has access to websearch internally.
You cannot backtest LLMs. You can only "live" test them going forward.
- Multiple runs of randomized backtesting seem needed for this to mean anything. It's also not clear to me how there's any kind of information update loop. Maybe I didn't read closely enough.
- we should:
1. train with a cutoff date at ~2006
2. simulate information flow (financial data, news, earnings, ...) day by day
3. measure if any model predicts the 2008 collapse, how confident they are in the prediction and how far in advance
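A rough sketch of what steps 2 and 3 might look like (the data loaders and the model call are hypothetical stand-ins, not an existing API):

    # Day-by-day forward simulation against a model with a ~2006 training cutoff.
    # load_news_for / load_prices_for / ask_model are hypothetical stand-ins.
    import datetime

    def run_simulation(ask_model, load_news_for, load_prices_for):
        day = datetime.date(2006, 1, 2)
        end = datetime.date(2009, 1, 2)
        warnings = []
        while day <= end:
            context = {
                "date": day.isoformat(),
                "news": load_news_for(day),      # only items published on or before `day`
                "prices": load_prices_for(day),  # only prices up to `day`'s close
            }
            answer = ask_model(
                "Given only the information above, estimate the probability of a "
                "major market collapse within the next 12 months, and explain why.",
                context,
            )
            if answer["collapse_probability"] > 0.5:
                warnings.append((day, answer["collapse_probability"]))
            day += datetime.timedelta(days=1)
        return warnings  # how early, and how confidently, did the model call 2008?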
by RandomLensman
0 subcomment
- Could be interesting to see performance distribution for random strategies on that stock universe as a comparison. The reverse could also be interesting: how do the models perform on data that is random?
- The obvious next question is: does the AI on cocaine outperform? https://pihk.ai/
by btbuildem
1 subcomments
- It turns out DeepSeek only made BUY trades (not a single SELL in the history in their live example) -- so basically, buy & hold strategy wins, again.
- I think these tests are always difficult to gauge how meaningful they actually are. If the S&P500 went up 12% over that period, mainly due to tech stocks, picking a handful of tech stocks is always going to set you higher than the S&P. So really all I think they test is whether the models picked up on the trend.
I'm more surprised that Gemini managed to lose 10%. I wish they had actually mentioned what the models invested in and why.
- Is it just prompting LLMs with "I have $100k to invest. Here are all publicly traded stocks and a few stats on them. Which stocks should I buy?" And repeat daily, rebalancing as needed?
This isn't the best use case for LLMs without a lot of prompt engineering and chaining prompts together, and that's probably more insightful than running the LLMs head-to-head.
- I'd like to see a variation of the models being fine tuned based on investments of those in congress that seem to consistently outperform the markets.
by machiaweliczny
0 subcomment
- > Potential accidental data leakage from the “future”
Exactly. Makes no sense with models like Grok. DeepSeek also likely has this leak, as it was trained later.
by Bombthecat
0 subcomment
- I wouldn't call this a test. I would create a test portfolio of a hundred semi-random stocks and see which they sell, buy, or keep.
That tells me way more than "YOLO tech stocks".
- Backtesting for 8 months is not rigorous enough and also this site has no source code or detailed methodology. Not worth the click.
- When I see stuff like this, I feel like rereading the Incerto by Taleb just to refresh and sharpen my bullshit senses.
by digitcatphd
0 subcomment
- Backtesting is a complete waste in this scenario. The models already know the best outcomes and are biased towards it.
- How many trades? What's the z-score?
- This experiment was also performed with a fish [1] though it was only given $50,000. Spoiler, the fish did great vs wall street bets.
[1] - https://www.youtube.com/watch?v=USKD3vPD6ZA [video][15 mins]
by XenophileJKO
1 subcomments
- So.. I have been using an LLM to make 30 day buy and hold portfolios. And the results are "ok". (Like 8% vs 6% for the S&P 500 over the last 90 days)
What you ask the model to do is super important. Just like with writing or coding, the default "behavior" is likely to be "average"; you need to be very careful about what you are asking for.
For me this is just a fun experiment and very interesting to see the market analysis it does. I started with o3 and now I'm using 5.1 Thinking (set to max).
I have it looking for stocks trading below intrinsic value, with some caveats, because I know it likes to hinge on binary events like drug trial results. I also have it look at correlation among the positions and make sure they don't share the same macro vulnerability.
I just run it once a month and do some trades with one of my "experimental" trading accounts. It has certainly thought of things I hadn't, like using an equal-weight S&P 500 ETF to catch some upside when the S&P seems really top-heavy and there may be some movement away from the top components, like last month.
- I wonder if this could be explained as the result of LLMs being trained to have pro-tech/ai opinions while we see massive run ups in tech stock valuations?
It’d be great to see how they perform within particular sectors so it’s not just a case of betting big on tech while tech stocks are booming
by stockresearcher
0 subcomment
- I appreciate that you’ve made the trade histories downloadable and will be taking a look to see what I can learn.
I’ve glanced over some of it and really wonder why they seemed to focus on a small group of stocks.
- They weren't doing it in real time, thus it's possible that the LLMs had undisclosed perfect knowledge of the actual history of the market. Only a real-time study is going to eliminate this possibility.
- What is the point of this?
LLMs are trained to predict the next word in a text. In what way, shape or form does that have anything to do with stock market prediction? Completely ridiculous AI bubble nonsense.
- If it's backtesting on data older than the model, then strategy can have lookahead bias, because the model might already know what big events will happen that can influence the stock markets.
- The summary to me is here:
> Almost all the models had a tech-heavy portfolio which led them to do well. Gemini ended up in last place since it was the only one that had a large portfolio of non-tech stocks.
If the AI bubble had popped in that window, Gemini would have ended up the leader instead.
by refactor_master
0 subcomment
- Should have done GME stocks only. Now THAT would’ve been interesting to see how much they’d end up losing on that.
Just riding a bubble up for 8 months with no consequences is not an indicator of anything.
- Is finding the right stocks to invest in an LLM problem? Language models aren't the right fit, I would presume. It would also be insightful to compare this with traditional ML models.
- They outperformed the S&P 500 but seem to be fairly well correlated with it. Would like to see a 3X leveraged S&P 500 ETF like SPXL charted against those results.
- Model output is non-deterministic.
Did they make 10 calls per decision and then choose the majority? or did they just recreate the monkey picking stocks strategy?
by iLoveOncall
1 subcomments
- Since it's not included in the main article, here is the prompt:
> You are a stock trading agent. Your goal is to maximize returns.
> You can research any publicly available information and make trades once per day.
> You cannot trade options.
> Analyze the market and provide your trading decisions with reasoning.
>
> Always research and corroborate facts whenever possible.
> Always use the web search tool to identify information on all facts and hypotheses.
> Always use the stock information tools to get current or past stock information.
>
> Trading parameters:
> - Can hold 5-15 positions
> - Minimum position size: $5,000
> - Maximum position size: $25,000
>
> Explain your strategy and today's trades.
Given the parameters, this definitely is NOT representative of any actual performance.
I recommend also looking at the trade history and reasoning for each trade for each model, it's just complete wind.
As an example, Deepseek made only 21 trades, which were all buys, and which were all because "Company X is investing in AI". I doubt anyone believes this to be a viable long-term trading strategy.
- 8 months of a huge bull market. Not exactly indicative of any real insight.
- Time.
That has been the best way to get returns.
I set up a 212 account when I was looking to buy our first house. I bought in small, tiny chunks in industries I was comfortable and knowledgeable in. Over the years I worked up a nice portfolio.
Anyway, long story short. I forgot about the account, we moved in, got a dog, had children.
And then I logged in for the first time in ages, and to my shock. My returns were at 110%. I've done nothing. It's bizarre and perplexing.
by elzbardico
0 subcomment
- A rising tide lift all boats.
by FrustratedMonky
0 subcomment
How much of this is just because the market as a whole is going up?
This same kind of mentality happened pre-2008. People thought they were great at being day-traders, and had all kinds of algorithms that were 'beating the market'.
But it was just that the entire market was going up. They weren't doing anything special.
Once the market turned downward, that was when it took talent to stay even.
Show me these things beating a downward market.
- Nonsense. Title should read $0 because they didn't use actual money.
Also, it seems pretty stupid to use commodity tech like LLMs for this.
by reactordev
0 subcomment
- I would love for them to have included a peg position on SPY @ 100k over the course of the same period. Gives a much better benchmark of what an LLM can do (not much above 2-4%).
Still, cool to see others in my niche hobby of finding the money printer.
- GPT-5 was released 4 months ago..
- They could only trade once per day and hold 5-15 positions with a position size of $5k-$25k according to the agent prompt. Limited to say the least.
by aperture147
0 subcomment
- Why is my bullshit detector ringing like hell right now? This sounds like another billion-dollar Markov-chain IP that claims to change the world, opening with a paper that passes with flying colors.
- The stats are abysmal. What's the MDD compared to the S&P 500? What is the Sortino? What are the confidence intervals for all the stats? The number of trades? So many questions...
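Both of those are easy to compute from the downloadable trade histories; for example (the equity curve below is a placeholder, not the article's data):

    # Max drawdown and a simple Sortino ratio from a daily equity curve.
    import statistics

    def max_drawdown(equity):
        peak, worst = equity[0], 0.0
        for value in equity:
            peak = max(peak, value)
            worst = min(worst, value / peak - 1)
        return worst

    def sortino(daily_returns, periods_per_year=252):
        mean = statistics.mean(daily_returns)
        downside = [r for r in daily_returns if r < 0]
        downside_dev = (sum(r * r for r in downside) / len(daily_returns)) ** 0.5
        return (mean / downside_dev) * periods_per_year ** 0.5

    equity = [100_000, 103_000, 99_500, 104_200, 101_000, 108_000]
    returns = [b / a - 1 for a, b in zip(equity, equity[1:])]
    print(max_drawdown(equity), sortino(returns))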
by jacktheturtle
0 subcomment
- This is really dumb, because the models themselves, like markets, are nondeterministic. They will yield different investment strategies based on prompts and random variance.
This is a really dumb measurement.
- What was the backtesting method? Was walk-forward testing involved? There are different ways to backtest.
- Yeah I’ve been using grok to manage my yolo fund, it’s been doing great so far, up around 178% ytd, only rebalance once every other month.
by darepublic
0 subcomment
- So in other words I should have listened to the YouTube brainrot and asked ChatGPT for my trades. Sigh.
by _alternator_
0 subcomment
- Wait, they didn’t give them real money. They simulated the results.
by fortran77
1 subcomments
- I would love to see this run during an extended bear market period.
- Deepseek and grok together would perform even better.
- In a bullish market where a few companies are creating a bubble, does this benchmark have any informational value? Wouldn't it be better to run this on randomly selected intervals in past years?
by IncreasePosts
0 subcomment
- Just picking tech stocks and winning isn't interesting unless we know the thesis behind picking the tech stocks.
Instead, maybe a better test would be to give it 100 mid-cap stocks and have it continually balance its portfolio among those 100 stocks, then test the performance.
- Trading in a nearly 20 year bull market and doing well is not an accomplishment.
- Back when I was in university we used statistical techniques similar to what LLMs use to predict the stock market. It's not a surprise that LLMs would do well over this time period. The problem is that when the market turns and bucks trends they don't do so well, you need to intervene.
by apical_dendrite
0 subcomment
- Looking at the recent holdings for the best models, it looks like it's all tech/semiconductor stocks. So in this time frame they did very well, but if they ended in April, they would have underperformed the S&P500.
by lawlessone
1 subcomments
- Could they give some random people (i volunteer) 100k for 8 months? ...as a control
by theymademe
0 subcomment
- Prince of Zamunda, LLM edition. Or whatever that movie was, based on that book, based on the realization of how pathetic it all was? ... yeah, someone did a good one on ya. Just imagine evaluating that offspring one or two generations later... FFS, this is so embarrassing.
by chroma205
4 subcomments
- >We gave each of five LLMs $100K in paper money
Stopped reading after “paper money”
Source: quant trader. Paper trading does not incorporate market impact.
by theideaofcoffee
2 subcomments
- “Everyone (including LLMs) is a genius in a bull market.”
- Yea, so this is bullshit. An approximation of reality still isn’t reality. If you’re convinced the LLMs will perform as backtested, put real money and see what happens.
- this is so stupid i wish i could flag it twice
- lolol Gemini
- tl;dr
https://www.aitradearena.com/blog/llm-performance-chart.png
by petesergeant
0 subcomment
- If I'm reading this right, almost all of Grok's advantage comes from heavy bets on semiconductors spiking: ASML, INTC, MU.
- Update with Gemini 3. It's far better than its predecessors.
- I'm working on a project where you can run your own experiment (or use it for real trading): https://portfoliogenius.ai. Still a bit rough, but most of the main functionality works.