Goblin News
AI news by promptgoblins.ai
Filtered by: moe
Last Week
Signal 6 · Warp Decode: 1.84x Faster MoE Inference by Flipping the Parallelism Axis on Blackwell GPUs
Research · 1 src · Apr 8
2 Weeks Ago
Signal 8 · Google Launches Gemma 4: Four Open Models with Frontier-Level Efficiency
Top · Models · 4 srcs · Apr 3
3 Weeks Ago
Signal 8 · NVIDIA Releases Nemotron-Cascade 2: Open 30B MoE Achieves Gold-Medal Reasoning with 20x Efficiency
Models · 1 src · Mar 23
Signal 6 · LLM Architecture Gallery: 11 Open Models Compared by Design
Research · 1 src · Mar 23
Last Month
Signal 8 · Moonshot AI Releases Kimi K2 Open-Source Model and Kimi-Researcher Agent
Top · Models · 1 src · Mar 20
Signal 7 · NVIDIA Nemotron 3 Super Launches on Amazon Bedrock as Serverless Model
Models · 1 src · Mar 19
Signal 7 · Mistral Small 4: Unified 119B MoE Model Released Under Apache 2.0
Models · 2 srcs · Mar 17
Signal 6 · Open Weights vs. Open Training: Fine-Tuning Large MoE Models Remains Practically Inaccessible
Research · 1 src · Mar 13