That's not true. From the little time I've spent trying to read and write some simple programs in BF, I recall good examples being pretty legible.
In fact, because the language only relies on those few characters, anything else you type becomes a comment: linebreaks, whitespace, alphanumeric characters and so on just get ignored by the interpreter (see the sketch below).
Have a look at this, as an example: https://brainfuck.org/chessboard.b
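To make that concrete, here is a minimal sketch of a Brainfuck interpreter (my own illustration in Python, unrelated to the linked chessboard program) that simply drops every character outside the eight commands, which is exactly why free-form prose works as a comment:

```
# Minimal Brainfuck interpreter sketch: anything outside ><+-.,[] is dropped,
# so prose, linebreaks and whitespace act as comments. Illustrative only.
def run_bf(code, input_bytes=b""):
    commands = set("><+-.,[]")
    code = [c for c in code if c in commands]  # everything else is a "comment"
    # Precompute matching bracket positions for the two loop commands.
    jumps, stack = {}, []
    for i, c in enumerate(code):
        if c == "[":
            stack.append(i)
        elif c == "]":
            j = stack.pop()
            jumps[i], jumps[j] = j, i
    tape, ptr, pc, inp, out = [0] * 30000, 0, 0, iter(input_bytes), []
    while pc < len(code):
        c = code[pc]
        if c == ">": ptr += 1
        elif c == "<": ptr -= 1
        elif c == "+": tape[ptr] = (tape[ptr] + 1) % 256
        elif c == "-": tape[ptr] = (tape[ptr] - 1) % 256
        elif c == ".": out.append(chr(tape[ptr]))
        elif c == ",": tape[ptr] = next(inp, 0)
        elif c == "[" and tape[ptr] == 0: pc = jumps[pc]
        elif c == "]" and tape[ptr] != 0: pc = jumps[pc]
        pc += 1
    return "".join(out)

# The English words here are ignored exactly like whitespace; this prints "F".
print(run_bf("add ten +++++ +++++ then loop [ > +++++ ++ < - ] > print . "))
```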
Gemini Pro, neither as is nor in Deep Research mode, even got the number of pieces or the relevant squares right. I didn't expect it to actually solve it, but I would have expected it to get the basics right and maybe hint that this is too difficult. Or pull up some solutions PDF, or some Python code to brute-force the search ... but just straight up giving a totally wrong answer is like ... 2024 called, it wants its language model back.
Instead, in Pro Simple it just gave a wrong solution, and Deep Research wrote a whole lecture about it, starting with "The Geometric and Cognitive Dynamics of Polyomino Systems: An Exhaustive Analysis of Ubongo Puzzle 151" ... that's just bullshit bingo. My prompt was a photo of the puzzle and "solve ubongo puzzle 151"; in my opinion you can't even argue that this lecture was to be expected given my very clear and simple task description.
My mental model for language models is: an overconfident, eloquent assistant who talks a lot of bullshit but has some interesting ideas every now and then. For simple tasks it's simply a summary of what I could google myself, but asking an LLM saves some time. In that sense it's Google 2.0 (or 3.0 if you will)
Sharing “system stuff” in its responses, responding to “system stuff”, sharing thoughts as responses and responses as thoughts, ignoring or forgetting things that were just said (as if they’re suddenly invisible), bizarre formatting, switching languages for no reason, saying it will do something (like calling a tool) instead of actually doing it, getting into odd loops, etc.
I’m guessing it all has something to do with the textual representation of chat state and maybe it isn’t properly tuned to follow it. So it kinda breaks the mould but not in a good way, and there’s nothing downstream trying to correct it. I find myself having to regenerate responses pretty often just because Gemini didn’t want to play assistant anymore.
It seems like the Flash models don’t suffer from this as much, but the Pro models definitely do. The smarter the model, the more it happens.
I call it “thinking itself to death”.
It’s gotten to a point where I often prefer fast and dumb models that will give me something very quickly, and I’ll just run them a few times to filter out bad answers, instead of using the slow and smart models that will often spend 10 minutes only to eventually get stuck beyond the fourth wall.
I run models with llama.cpp, and this is the reason why I add some repeat-penalty factor.
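Something like this, sketched with the llama-cpp-python bindings (the model path and the values are placeholders, not my actual setup):

```
# Sketch: steering generation away from loops with a repeat penalty via the
# llama-cpp-python bindings. Model path and parameter values are placeholders.
from llama_cpp import Llama

llm = Llama(model_path="models/some-model.gguf")  # hypothetical path

out = llm(
    "Write a Brainfuck program that prints 1000:",
    max_tokens=256,
    repeat_penalty=1.2,  # values > 1.0 penalize recently generated tokens
    temperature=0.7,
)
print(out["choices"][0]["text"])
```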
As someone who lived and worked during that era, I don’t remember thinking “holy shit, if this ever gets released on the world at scale we’ll have serious problems”.
Maybe that was a lack of imagination and not thinking through what would actually happen to brick and mortar, the job market and so on. But it feels like this time is different. Or I’m just that much older.
It's where the pulleys of a very sophisticated statistical machine start to show, and the claims about intelligence start to crumble.
That's the reason AI is great for boilerplate (because it's been done a million times) and not so great for specifics (even if they're specifics in a popular language).
We use naturally evolved human languages for most of the training, and programming follows that logic to some degree, but what if the LLMs were working in a highly complex, information-dense language like Ithkuil? If they stumble on BF, what happens at the other extreme?
Or was this result really about the sparse training data?
Absolutely not. I'd bet a lot of money this could be solved with a decent amount of RL compute. None of the stated problems are actually issues with LLMs once on-policy training is performed.
```
Providing the full Brainfuck code to dynamically calculate and factorize 1000! is unfortunately not feasible, and here is why:

* The Scale: 1000! (1000 factorial) is a number with 2,568 digits. Standard Brainfuck operates on 8-bit memory cells (values 0-255). To handle a number this large, one would first have to write an entire "arbitrary-precision" math library (BigInt) from scratch within Brainfuck.
* The Algorithm: You cannot simply compute 1000! and then factor it. You must use Legendre's Formula to determine the exponent of every prime p \le 1000.
  * Formula: E_p(n!) = \sum_{k=1}^{\infty} \lfloor \frac{n}{p^k} \rfloor
  * Example for p=2: 500 + 250 + 125 + 62 + 31 + 15 + 7 + 3 + 1 = 994.
  * The first term of the answer is 2^{994}.

The Solution (Python)

Since Brainfuck is ill-suited for this, here is a Python script that uses Legendre's Formula to instantly calculate and print the factorization of 1000!.

def legendre_factorial_exponent(n, p):
    exponent = 0
    while n > 0:
        n //= p
        exponent += n
    return exponent

def sieve_primes(n):
    primes = []
    is_prime = [True] * (n + 1)
    for p in range(2, n + 1):
        if is_prime[p]:
            primes.append(p)
            for i in range(p * p, n + 1, p):
                is_prime[i] = False
    return primes

def factorize_factorial(n):
    primes = sieve_primes(n)
    result = []
    for p in primes:
        exponent = legendre_factorial_exponent(n, p)
        result.append(f"{p}^{exponent}")
    return " * ".join(result)

print(factorize_factorial(1000))

The Brainfuck Code (for 1000 not !)

If you simply meant the number 1000 (without the factorial), or you just want to see what Brainfuck looks like, here is a program that outputs 1000:

+++++ +++++ [ > +++++ +++++ < - ] > -- . (Print 100 'd')
< +++++ +++++ [ > +++++ +++++ < - ] > + . (Print 101 'e')
(This is just "d" and "e" for demo; printing "1000" requires ASCII conversion)

Correct code to print "1000":

++++++++++[>+++++>+++++>+++++>+++++<<<<-]>-.>-.>-.>-.

Explanation:
* Sets cell #0 to 10.
* Loops 10 times, adding 5 to cells #1, #2, #3, #4 (making them all 50).
* Subtracts 1 from cell #1 (ASCII 49 = '1').
* Subtracts 2 from cells #2, #3, #4 (ASCII 48 = '0').
* Prints them.

Would you like me to explain how Legendre's Formula works in more detail?
```
"Fast" models are mostly useless in my experience.
I asked "Gemini 3 Pro" and it refused to give me the source code with the rationale that it would be too long and complex due to the 256 value limit of BF cells. However it made me a python script that it said would generate me the full brainf*ck program to print the factors.
TL;DR: Don't do it, use another language to generate the factors, then print them with BF.
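Roughly, that looks like the sketch below (my own illustration, not the script Gemini produced): Python does the math with Legendre's formula and then emits a dumb Brainfuck program whose only job is to print the precomputed string.

```
# Sketch of the TL;DR: let Python compute the factorization of 1000! via
# Legendre's formula, then generate trivial Brainfuck that just prints it.
# Helper names and the naive one-cell-per-character codegen are my own.

def legendre_exponent(n, p):
    # Exponent of prime p in n! = sum over k of floor(n / p^k)
    e = 0
    while n:
        n //= p
        e += n
    return e

def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [p for p, ok in enumerate(sieve) if ok]

def bf_print(text):
    # Naive generator: count a cell up to each character's ASCII code,
    # print it, then clear the cell. Huge output, but valid Brainfuck.
    return "".join("+" * ord(c) + ".[-]" for c in text)

factorization = " * ".join(
    f"{p}^{legendre_exponent(1000, p)}" for p in primes_up_to(1000)
)
print(bf_print(factorization + "\n"))
```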
Though I don’t know if it was a real infinite loop, because I cancelled the session after 10 minutes of seeing the same "thoughts" looping over and over.
-> runs it in Gemini fast instead of thinking
....