For some time now, companies like OpenAI and Google have promoted advanced "reasoning" capabilities as the next big step in their latest artificial intelligence models. Now, though, new research from six Apple engineers shows that the mathematical "reasoning" displayed by sophisticated large language models can be extremely brittle and unreliable in the face of seemingly trivial changes to common benchmark problems.
The fragility highlighted by these new results helps corroborate previous research suggesting that LLMs' use of probabilistic pattern matching lacks the formal understanding of underlying concepts needed for truly reliable mathematical reasoning. "Current LLMs are not capable of genuine logical reasoning," the researchers hypothesize based on these results. "Instead, they attempt to replicate the reasoning steps observed in their training data."
Mix it up
In "GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models," currently available as a preprint paper, the six Apple researchers start with GSM8K's standardized set of more than 8,000 elementary-school-level math word problems, which is often used as a benchmark for the complex reasoning capabilities of modern LLMs. They then take the novel approach of modifying a portion of that test set to dynamically replace certain names and numbers with new values. So a question about Sophie getting 31 building blocks for her nephew in GSM8K could become a question about Bill getting 19 building blocks for his brother in the new GSM-Symbolic evaluation.
This approach helps avoid the "data contamination" that can result from static GSM8K questions being fed directly into an AI model's training data. At the same time, these incidental changes don't alter the actual difficulty of the inherent mathematical reasoning at all, meaning models should theoretically perform just as well when tested on GSM-Symbolic as on GSM8K.
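The substitution idea can be illustrated with a minimal sketch. This is not the paper's actual templating code; the template text, names, and value ranges below are invented for illustration. The point is that names and numbers vary with a random seed while the required reasoning steps (and the formula for the answer) stay fixed.

```python
import random

# Illustrative names and relations (not taken from the paper's templates).
NAMES = ["Sophie", "Bill", "Maria", "Ravi"]
RELATIONS = ["nephew", "brother", "sister", "cousin"]

# A fixed problem template: only the surface details change per variant.
TEMPLATE = (
    "{name} got {n} building blocks for their {relation}. "
    "They gave away {k} of the blocks. How many blocks are left?"
)

def make_variant(seed: int) -> tuple[str, int]:
    """Return one question variant and its ground-truth answer."""
    rng = random.Random(seed)
    n = rng.randint(10, 40)
    k = rng.randint(1, n - 1)          # keep the answer positive
    question = TEMPLATE.format(
        name=rng.choice(NAMES),
        relation=rng.choice(RELATIONS),
        n=n,
        k=k,
    )
    # The reasoning step is identical in every variant: n - k.
    return question, n - k

question, answer = make_variant(seed=0)
print(question)
print("Answer:", answer)
```

Because each seed yields a fresh surface form of the same underlying problem, a model that truly performs the reasoning should score the same on every variant.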
Instead, when the researchers tested more than 20 state-of-the-art LLMs on GSM-Symbolic, they found average accuracy reduced across the board compared to GSM8K, with performance dropping between 0.3 percent and 9.2 percent, depending on the model. The results also showed high variance across 50 separate runs of GSM-Symbolic with different names and values. Accuracy gaps of up to 15 percent between the best and worst runs were common within a single model, and for some reason, changing the numbers tended to produce worse accuracy than changing the names.
This kind of variance, both within different GSM-Symbolic runs and compared to GSM8K results, is more than a little surprising since, as the researchers point out, "the overall reasoning steps needed to solve a question remain the same." The fact that such small changes lead to such variable results suggests to the researchers that these models are not doing any "formal" reasoning but are instead "attempt[ing] to perform a kind of in-distribution pattern-matching, aligning given questions and solution steps with similar ones seen in the training data."
Don't get distracted
Still, the overall variance shown by the GSM-Symbolic tests was often relatively small in the grand scheme of things. OpenAI's ChatGPT-4o, for instance, dropped from 95.2 percent accuracy on GSM8K to a still-impressive 94.9 percent on GSM-Symbolic. That's a pretty high success rate on either benchmark, regardless of whether or not the model itself uses "formal" reasoning behind the scenes (though total accuracy for many models dropped precipitously when the researchers added just one or two additional logical steps to the problems).
The tested LLMs fared much worse, though, when the Apple researchers modified the GSM-Symbolic benchmark by adding "seemingly relevant but ultimately inconsequential statements" to the questions. For this "GSM-NoOp" benchmark set (short for "no operation"), a question about how many kiwis someone picks across multiple days might be modified to include the incidental detail that "five of them [the kiwis] were a bit smaller than average."
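A short sketch shows what such a "no-op" modification looks like in practice. The question text and the helper below are illustrative stand-ins, not the paper's actual benchmark code; the key property is that the inserted statement adds no operation, so the ground-truth answer is unchanged.

```python
# An illustrative base question in the style of the kiwi example.
BASE = (
    "Oliver picks 44 kiwis on Friday and 58 kiwis on Saturday. "
    "On Sunday he picks double the number he picked on Friday. "
    "How many kiwis does Oliver have?"
)

# A seemingly relevant but inconsequential statement (a "no-op").
DISTRACTOR = "Five of the kiwis picked on Sunday were a bit smaller than average."

def add_noop(question: str, distractor: str) -> str:
    """Insert the inconsequential statement just before the final question."""
    body, _, final_question = question.rpartition(". ")
    return f"{body}. {distractor} {final_question}"

# The distractor changes nothing: the answer is still 44 + 58 + 2 * 44.
ground_truth = 44 + 58 + 2 * 44

print(add_noop(BASE, DISTRACTOR))
print("Answer either way:", ground_truth)
```

A model that understood the problem would simply ignore the extra sentence; the paper's results suggest many models instead try to act on it, for example by subtracting the smaller kiwis.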
Adding in these red herrings led to what the researchers termed "catastrophic performance drops" in accuracy compared to GSM8K, ranging from 17.5 percent to a whopping 65.7 percent, depending on the model tested. These massive drops in accuracy highlight the inherent limits of using simple "pattern matching" to "convert statements to operations without truly understanding their meaning," the researchers wrote.