That model was trained in part using their unreleased R1 "reasoning" model. Today they've released R1 itself, along with a whole family of new models derived from that base.
There's a whole lot of stuff in the new release.
DeepSeek-R1-Zero appears to be the base model. It's over 650GB in size and, like most of their other releases, is under a clean MIT license. DeepSeek warn that "DeepSeek-R1-Zero encounters challenges such as endless repetition, poor readability, and language mixing." ... so they also released:
DeepSeek-R1, which "incorporates cold-start data before RL" and "achieves performance comparable to OpenAI-o1 across math, code, and reasoning tasks". That one is also MIT licensed, and is a similar size.
I don't have the ability to run models larger than about 50GB (I have an M2 with 64GB of RAM), so neither of these two models is something I can easily play with myself. That's where the new distilled models come in.
To support the research community, we have open-sourced DeepSeek-R1-Zero, DeepSeek-R1, and six dense models distilled from DeepSeek-R1 based on Llama and Qwen.
This is a fascinating flex! They have models based on Qwen 2.5 (14B, 32B, Math 1.5B and Math 7B) and Llama 3 (Llama-3.1 8B and Llama 3.3 70B Instruct).
Weirdly those Llama models have an MIT license attached, which I'm not sure is compatible with the underlying Llama license. Qwen models are Apache licensed so maybe MIT is OK?
(I also just noticed the MIT license files say "Copyright (c) 2023 DeepSeek" so they may need to pay a bit more attention to how they copied those in.)
Licensing aside, these distilled models are remarkable beasts.
Running DeepSeek-R1-Distill-Llama-8B-GGUF
Quantized versions are already starting to show up. So far I've tried just one of those, unsloth/DeepSeek-R1-Distill-Llama-8B-GGUF released by Unsloth AI, and it's really fun to play with.
I'm running it using the combination of Ollama, LLM and the llm-ollama plugin.
First I fetched and ran the model using Ollama itself.
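A minimal sketch of that command, assuming Ollama's support for pulling GGUF models straight from Hugging Face and the Q8_0 quantization (which lines up with the download size mentioned next):

```
ollama run hf.co/unsloth/DeepSeek-R1-Distill-Llama-8B-GGUF:Q8_0
```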
This downloads around 8.5GB of model data and starts an interactive chat interface.
Once the model has been fetched, LLM can talk to it as well. I prefer using LLM for experiments because it logs everything to SQLite for later exploration.
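That logging means past responses are easy to dig back out later. LLM's built-in commands cover this; these are general LLM features, nothing specific to this model:

```
llm logs path   # print the location of the logs.db SQLite database
llm logs -n 1   # show the most recent logged prompt and response
```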
I installed the plugin and ran it like this:
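(A sketch of my reconstruction: llm-ollama exposes Ollama's local models to LLM, and I'm assuming the model is addressed by the same hf.co/... ID used above.)

```
llm install llm-ollama
llm -m 'hf.co/unsloth/DeepSeek-R1-Distill-Llama-8B-GGUF:Q8_0' \
  'a joke about a pelican and a walrus who run a tea room together'
```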
Each response starts with a <think>...</think> pseudo-XML tag containing the chain of thought used to help generate the response. For "a joke about a pelican and a walrus who run a tea room together" that started like this:
Okay, so I need to come up with a joke about a pelican and a walrus running a tea room together. Hmm, that's an interesting combination. Let me think about how these two characters might interact in a humorous situation.
First, let's consider their characteristics. Pelicans are known for their long beaks and webbed feet, often seen near beaches or water. Walruses have big teeth, thick fur, and they're usually found in colder climates, like icebergs or snowy regions. So, combining these two into a tea room setting is already a funny image.
[...]
It then thought for 20 paragraphs before outputting the joke! Here's the full transcript and here's the output at the end, after the </think> tag:
The Joke:
A pelican and a walrus decide to open a quaint little tea room together. The walrus, with its big size, struggles to find comfortable chairs, so it sits on the table by accident, knocking over the teapot. Meanwhile, the pelican, trying to help, uses its beak to place saucers on the table, causing a few spills.
After a series of comical mishaps, the walrus looks up and says with a grin, "This isn't so fishy anymore." The pelican smirks and says, "Maybe not, but we do have a lot of krill in our tea!"
Explanation:
- Setup: Pelican and Walrus run an unlikely tea room. Their sizes lead to comedic chaos.
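If you just want the joke without the 20 paragraphs of deliberation, the chain of thought is easy to strip mechanically, since everything before the closing </think> tag can be discarded. A rough shell sketch, assuming the closing tag lands on its own line in the output:

```
llm -m 'hf.co/unsloth/DeepSeek-R1-Distill-Llama-8B-GGUF:Q8_0' \
  'a joke about a pelican and a walrus who run a tea room together' \
  | awk 'seen; /<\/think>/ {seen=1}'   # print only the lines after </think>
```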