| | Cerebras Systems Announces Proposed Initial Public Offering (cerebras.net) |
| 1 point by Jalad on Aug 13, 2024 | past |
|
| | Sparse Llama: 70% Smaller, 3x Faster, Full Accuracy (cerebras.net) |
| 40 points by panabee on May 15, 2024 | past | 1 comment |
|
| | Cerebras CS-3: the fastest and most scalable AI accelerator (cerebras.net) |
| 1 point by danboarder on March 29, 2024 | past |
|
| | Cerebras Systems Unveils Fastest AI Chip with 4T Transistors (cerebras.net) |
| 2 points by doener on March 16, 2024 | past |
|
| | Cerebras and G42 Break Ground on Condor Galaxy 3, an 8 ExaFLOPs AI Supercomputer (cerebras.net) |
| 12 points by sizzle on March 16, 2024 | past |
|
| | Cerebras CS-3: the fastest and most scalable AI accelerator (cerebras.net) |
| 2 points by jwan584 on March 13, 2024 | past |
|
| | Cerebras WSE-3 Chip (cerebras.net) |
| 19 points by averylamp on March 13, 2024 | past | 2 comments |
|
| | Cerebras Unveils Fastest AI Chip with Whopping 4T Transistors (cerebras.net) |
| 6 points by cs-fan-101 on March 13, 2024 | past |
|
| | Cerebras launches PyTorch-based library for sparse training (cerebras.net) |
| 2 points by cs-fan-101 on Feb 8, 2024 | past |
|
| | GigaGPT: GPT-3 sized models in 565 lines of code (cerebras.net) |
| 223 points by georgehill on Dec 11, 2023 | past | 65 comments |
|
| | Cerebras Release 2.0: 50% Faster Training, PyTorch 2.0, Diffusion Transformers (cerebras.net) |
| 16 points by rbanffy on Nov 29, 2023 | past | 7 comments |
|
| | Cerebras Announces 130x Improvement on Monte Carlo Particle Transport over A100 (cerebras.net) |
| 1 point by rbanffy on Nov 14, 2023 | past |
|
| | BTLM-3B-8K: 7B Performance in a 3B Parameter Model (cerebras.net) |
| 3 points by jwan584 on July 24, 2023 | past |
|
| | Cerebras and G42 Unveil Condor Galaxy 1, a 4 ExaFLOPS AI Supercomputer (cerebras.net) |
| 7 points by cs-fan-101 on July 20, 2023 | past | 4 comments |
|
| | SlimPajama: A 627B token cleaned and deduplicated version of RedPajama (cerebras.net) |
| 60 points by andyk on June 11, 2023 | past | 7 comments |
|
| | Andromeda, a 13.5M Core AI Supercomputer (cerebras.net) |
| 2 points by peter_d_sherman on May 5, 2023 | past |
|
| | Cerebras Systems Releases 7 New GPT Models Trained on CS-2 Wafer-Scale Systems (cerebras.net) |
| 1 point by stolsvik on March 28, 2023 | past |
|
| | Cerebras-GPT: A Family of Open, Compute-Efficient, Large Language Models (cerebras.net) |
| 567 points by asb on March 28, 2023 | past | 232 comments |
|
| | Addition of fine-tuning for large language models in Cerebras AI Model Studio (cerebras.net) |
| 3 points by stan_kirdey on Feb 21, 2023 | past | 1 comment |
|
| | Cerebras WSE-2 – 850K Cores, 40GB Memory, 20PB/sec Bandwidth - Superchip (cerebras.net) |
| 1 point by peter_d_sherman on Nov 14, 2022 | past |
|
| | Andromeda, a 13.5M Core AI Supercomputer (cerebras.net) |
| 11 points by stochastimus on Nov 14, 2022 | past | 2 comments |
|
| | 20B parameter GPT-3 model trained on a single Cerebras chip (cerebras.net) |
| 2 points by keveman on June 22, 2022 | past |
|
| | TotalEnergies and Cerebras Create Scalable Stencil Algorithm (cerebras.net) |
| 1 point by tech-sucker on May 4, 2022 | past |
|
| | 850,000 Cores on a Single Chip (cerebras.net) |
| 2 points by guerrilla on Nov 13, 2021 | past |
|
| | Cerebras Cloud Cirrascale, Democratizing High-Performance AI Compute (cerebras.net) |
| 2 points by moritzmeister on Sept 21, 2021 | past |
|
| | Scaling Cerebras AI Training to 120T parameters (cerebras.net) |
| 6 points by icyfox on Aug 24, 2021 | past |
|
| | The Cerebras CS-2 wafer-scale engine (850k cores, 40 GB SRAM) (cerebras.net) |
| 127 points by unwind on June 10, 2021 | past | 52 comments |
|
| | Cerebras CS-2 (cerebras.net) |
| 4 points by asparagui on April 20, 2021 | past | 1 comment |
|
| | Cerebras Wafer Scale Engine (cerebras.net) |
| 2 points by andersson42 on Nov 24, 2020 | past |
|
| | Wafer Scale Compute: Setting Records in Computational Fluid Dynamics (cerebras.net) |
| 41 points by rbanffy on Nov 19, 2020 | past | 28 comments |
|