I am a bit curious: did you find this behavior consistent across models, or is it more pronounced with certain ones?
"Clear the session," the master said. "Run the same prompt again."
The novice pressed return. The model output: `ls -R /tmp`
"The colons are gone," the novice said. "But my theory explained them perfectly."
"You built a cage for a cloud," the master said. "Do not mistake a single roll of the dice for the rulebook."
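The master's point — that one completion is a single draw from a distribution, not the distribution itself — can be sketched with a toy simulation. The vocabulary and probabilities below are invented for illustration (they are not from any real model); the point is only that any single sample may or may not contain the colon, while the frequency over many runs approximates the underlying distribution:

```python
import random

# Toy next-token "rulebook": a probability table over three
# possible completions. Each generation is one roll of the dice.
# Both the completions and their weights are made up.
vocab = ["ls -R /tmp", "ls -R: /tmp", "find /tmp"]
probs = [0.6, 0.3, 0.1]

def sample_output(rng):
    """One sampled completion at temperature > 0."""
    return rng.choices(vocab, weights=probs, k=1)[0]

rng = random.Random()
samples = [sample_output(rng) for _ in range(100)]

# A theory built on one sample ("the colons are gone!") tells you
# almost nothing; only the rate over many runs is informative.
colon_rate = sum("-R:" in s for s in samples) / len(samples)
```

A theory of why the colon appeared should predict `colon_rate` over many cleared sessions, not explain a single output after the fact.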
Can you objectively analyze how VSCode adapts to your way of working without our interference?
Did you test your theory with actual frontier LLMs (which Kimi K2.5 is not, BTW)?