diff --git a/DeepSeek-Open-Sources-DeepSeek-R1-LLM-with-Performance-Comparable-To-OpenAI%27s-O1-Model.md b/DeepSeek-Open-Sources-DeepSeek-R1-LLM-with-Performance-Comparable-To-OpenAI%27s-O1-Model.md
new file mode 100644
index 0000000..1b613a1
--- /dev/null
+++ b/DeepSeek-Open-Sources-DeepSeek-R1-LLM-with-Performance-Comparable-To-OpenAI%27s-O1-Model.md
@@ -0,0 +1,2 @@
+
DeepSeek open-sourced DeepSeek-R1, an LLM fine-tuned with reinforcement learning (RL) to improve reasoning ability. DeepSeek-R1 achieves results on par with OpenAI's o1 model on several benchmarks, including MATH-500 and SWE-bench.
+
DeepSeek-R1 is based on DeepSeek-V3, a mixture-of-experts (MoE) model recently open-sourced by DeepSeek. The base model is fine-tuned using Group Relative Policy Optimization (GRPO), a reasoning-oriented variant of RL. The research team also performed knowledge distillation from DeepSeek-R1 into open-source Qwen and Llama models and released several versions of each.
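
To give a rough sense of what "group relative" means in GRPO, the sketch below normalizes each completion's reward against the other samples drawn for the same prompt and uses that group-relative score as the advantage in a PPO-style clipped objective, replacing the learned value network of standard PPO. This is a minimal sketch under stated assumptions, not DeepSeek's implementation: the function names, tensor shapes, and clip value are illustrative, and the KL penalty against a frozen reference policy is omitted.

```python
# Illustrative GRPO sketch (assumed names and shapes; not DeepSeek's code).
import torch

def grpo_advantages(rewards: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Group-relative advantages: normalize each reward against the
    group of completions sampled for the same prompt.

    rewards: (num_prompts, group_size), one scalar reward per completion.
    No critic/value network is needed; the group statistics act as the baseline.
    """
    mean = rewards.mean(dim=1, keepdim=True)
    std = rewards.std(dim=1, keepdim=True)
    return (rewards - mean) / (std + eps)

def grpo_policy_loss(logp_new: torch.Tensor,
                     logp_old: torch.Tensor,
                     advantages: torch.Tensor,
                     clip_eps: float = 0.2) -> torch.Tensor:
    """PPO-style clipped surrogate driven by group-relative advantages.
    (GRPO also adds a KL penalty to a frozen reference policy,
    omitted here for brevity.)
    """
    ratio = torch.exp(logp_new - logp_old)  # importance ratio per completion
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * advantages
    return -torch.min(unclipped, clipped).mean()

# Example: 2 prompts, 4 sampled completions each, binary correctness rewards.
rewards = torch.tensor([[1., 0., 0., 1.],
                        [0., 0., 1., 0.]])
adv = grpo_advantages(rewards)  # correct completions get positive advantage
```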
\ No newline at end of file