I asked it if giraffes were kosher to eat and it told me:
> Giraffes are not kosher because they do not chew their cud, even though they have split hooves. Both requirements must be satisfied for an animal to be permissible.
HN will have removed the extraneous emojis.
This is at odds with my interpretation of giraffe anatomy and behaviour and of Talmudic law.
Luckily, old sycophant GPT-5.1 agrees with me:
> Yes. They have split hooves and chew cud, so they meet the anatomical criteria. Ritual slaughter is technically feasible though impractical.
Above the response it says:
> Documents from the training data that have exact text matches with the model response. Powered by infini-gram
So, if I understand correctly, it searches the training data for matches in the LLM output. This is not traceability, in my opinion; it's an attempt at guessing.
Checking individual sources, I got texts completely unrelated to the question/answer that merely happen to share an N-gram [1] (I saw sequences of up to 6 words) with the LLM answer.
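As a rough illustration of why that produces unrelated hits, here is a toy sketch of exact n-gram matching (plain Python; this is my reading of the idea, not how infini-gram is actually implemented, which indexes the full training corpus rather than scanning it):

```python
# Toy sketch of exact n-gram matching between a model response and a corpus.
# My guess at the idea behind the feature, not the actual infini-gram code.

def ngrams(tokens, n):
    """All contiguous n-token spans as a set of tuples."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def matching_docs(response, corpus_docs, n=6):
    """Documents sharing at least one exact n-gram with the response."""
    resp = ngrams(response.lower().split(), n)
    return [doc for doc in corpus_docs
            if resp & ngrams(doc.lower().split(), n)]

# A 6-word overlap such as "for an animal to be permissible" is enough to
# surface a document, even if the rest of that document has nothing to do
# with the question.
```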
I think they're being dishonest in their presentation of what Olmo can and can't do.

[1] https://en.wikipedia.org/wiki/N-gram
Also: I simply asked the 7B model "hi, who are u":
Thought for 1 seconds
Okay, the user is asking "Who are u?" with a casual tone ("u" instead of "you", exclamation mark). Let me break this down.
First, I need to address the question itself. They want to know who I am as an AI assistant. My standard response is that I'm DeepSeek R1, designed to be helpful and handle reasoning step by step. But maybe they're not familiar with AI terms in general. The "u" part might just be their way of speaking, so I should reply politely using "you" form.
Next, considering the context. They started with an HTML tag " "
----
And it hung there without a reply. It might be an OpenWebUI bug, but it's not so promising.
7B: Hi! I'm Olmo 3, an AI assistant created by the non-profit organization Ai2. I'm here to help with questions, ideas, or tasks you have—just let me know what you need! How can I assist you today? Rawr!
32B: Hi! I'm Olmo, a helpful AI assistant built by the Allen Institute for AI (Ai2). My knowledge is up to December 2024, and I'm designed to assist with a wide range of tasks. How can I help you today?
I think they should start aiming for 20B models along with 32B and 7B. Usually 7B is enough for an 8GB GPU; 32B requires a 24GB GPU for decent quants (I can fit a 32B with IQ3_XXS, but it's not ideal), while 20-ish B models (such as Magistral or gpt-oss) are a perfect fit for 16GB GPUs.
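Back-of-the-envelope, the fit comes from weight memory of roughly parameters × bits-per-weight / 8, plus some margin for KV cache and runtime overhead. A quick illustrative sketch (the overhead figure is a hand-wavy placeholder, not a measurement):

```python
# Rough VRAM estimate for quantized LLM weights. Ignores context length,
# so the fixed overhead is only a hand-wavy placeholder.
def vram_gb(params_b: float, bits_per_weight: float, overhead_gb: float = 1.5) -> float:
    return params_b * bits_per_weight / 8 + overhead_gb

for label, params, bits in [("7B @ 4-bit", 7, 4), ("20B @ 4-bit", 20, 4),
                            ("32B @ 4-bit", 32, 4), ("32B @ ~3-bit (IQ3_XXS)", 32, 3)]:
    print(f"{label}: ~{vram_gb(params, bits):.1f} GB")
# ~5.0 GB (fits 8 GB), ~11.5 GB (fits 16 GB),
# ~17.5 GB (wants 24 GB), ~13.5 GB (tight on 16 GB)
```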
Qwen3-30B-VL is going to be fucking hard to beat as a daily driver, it's so good for the base 80% of tasks I want an AI for, and holy fuck is it fast. 90tok/s on my machine, I pretty much keep it in vram permanently. I think this sort of work is important and I'm really glad it's being done, but in terms of something I want to use every day there's no way a dense model can compete unless it's smart as fuck. Even dumb models like Qwen3-30B get a lot of stuff right and not having to wait is amazing.
Thanks for the hint. I just tried it on a brand-new Mac laptop, and it's very slow here. But it led me to test qwen2.5:14b, and it looks like it can create an instant feedback loop.
It can even interact through fluent Esperanto, very nice.
I'm specifically talking about qwen3-30b-a3b, the MoE model (this also applies to the big one). It's very very fast and pretty good, and speed matters when you're replacing basic google searches and text manipulation.
It's absolutely fantastic that they're releasing an actually OSS model, but isn't "the best fully open" a bit of a low bar? I'm not aware of any other fully open models.
Switzerland, through EPFL, ETH Zurich, and the Swiss National Supercomputing Centre, has released a complete pipeline with all training data - that is "fully open", to my understanding. See https://www.swiss-ai.org/apertus for details; https://ethz.ch/en/news-and-events/eth-news/news/2025/07/a-l... was the press release.
All the data used by Apertus is just data processed or generated by American companies (NVIDIA, Apple, and Hugging Face, mostly). They didn't release any new data.
Olmo and HF not only processed the data to address language bias, they also publish a lot of data augmentation results, including European language performance. European LLMs just claim that language bias is the motivator.
AFAIK, when they use the term "fully open", they mean open dataset and open training code. The Olmo series of models are the only mainstream models out there that satisfy this requirement, hence the clause.
> We go beyond just releasing model weights - we provide our training code, training data, our model weights, and our recipes.

https://docs.allenai.org/#truly-open

And otherwise you start competing 1-on-1 with notsoOpenAI, or, say, Llama.
To see OlmoTrace, go to https://playground.allenai.org/ and, after you get the response to your prompt, click the second icon from the right at the top (at least on mobile). It took me a while to find it because there is another icon under the response that looks the same but does something else.
In the 35 minutes since you posted, maybe the interface has changed. Upon scrolling to the bottom of the answer I got a popup asking "Curious about how this response matches the model's training data? Click this to dig deeper."
Clicking it did nothing. When I check the top of the answer I see a speech-bubble icon and a down-arrow icon. Clicking either opens the thought process. This is on Firefox running on Android.
These small models are very cheap for "good enough" translations. I just translated 6M comments on my platform with Gemma 32B and this model seems to be on par.
It's cheap enough that I'm currently doing a second pass where another model critiques and, if needed, rewrites the original translation.
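Conceptually the second pass is just a second prompt. A minimal sketch of the idea, where `chat` and the model names are placeholders for whatever local client and models you actually run:

```python
# Sketch of a translate-then-critique pipeline. `chat` stands in for whatever
# client calls your local model; the model names are placeholders, not a
# specific setup.
def chat(model: str, prompt: str) -> str:
    raise NotImplementedError("wire this up to your inference server")

def translate_with_review(text: str, target_lang: str = "English") -> str:
    draft = chat("small-translator",
                 f"Translate the following into {target_lang}:\n\n{text}")
    verdict = chat("small-critic",
                   f"Source:\n{text}\n\nTranslation:\n{draft}\n\n"
                   "Reply OK if the translation is accurate and fluent; "
                   "otherwise reply with a corrected translation only.")
    return draft if verdict.strip() == "OK" else verdict.strip()
```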
To English, I assume, for casual perusal? I ask before people unfamiliar with this topic start thinking small models are decent at translating between random language pairs. They're poor at translating "to" the overwhelming majority of languages, and I wouldn't recommend using them for this purpose for anything user-facing.