ilaksh on Nov 28, 2023 | on: Running Llama.cpp on AWS Instances

I am more interested in running llama.cpp on CPU-only VPSs/EC2, although it is probably too slow.

rini17 on Nov 28, 2023

13B models run on an 8-core CPU fast enough for a fluid conversation.

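For concreteness, a minimal sketch of what running a quantized 13B model on CPU can look like, using the llama-cpp-python bindings; the model file name, path, and thread count here are assumptions, not details from the thread.

    # Minimal CPU-only sketch with llama-cpp-python; the GGUF path is a
    # placeholder for whatever quantized 13B model you have downloaded.
    from llama_cpp import Llama

    llm = Llama(
        model_path="./models/llama-2-13b-chat.Q4_K_M.gguf",  # hypothetical local file
        n_threads=8,   # match the 8 CPU cores mentioned above
        n_ctx=2048,    # context window
    )

    out = llm("Q: Name the planets in the solar system. A:", max_tokens=64)
    print(out["choices"][0]["text"])
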
borissk on Nov 28, 2023

What can I run on an Nvidia RTX 4060 Ti with 16 GB of VRAM?

rini17 on Nov 28, 2023

Best to try it yourself; llama.cpp is refreshingly easy to build.
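
As a rough illustration of the GPU case asked about above: a 4-bit quantized 13B GGUF is around 8 GB, so it should fit in 16 GB of VRAM with room left for the KV cache. A minimal sketch with a CUDA-enabled build of llama-cpp-python (the model path is a placeholder, and the exact install flags for CUDA support depend on the version):

    # Sketch of full GPU offload; assumes llama-cpp-python was built with CUDA support.
    from llama_cpp import Llama

    llm = Llama(
        model_path="./models/llama-2-13b-chat.Q4_K_M.gguf",  # hypothetical local file
        n_gpu_layers=-1,  # offload all layers; lower this number if VRAM runs out
        n_ctx=4096,
    )

    print(llm("Write a haiku about GPUs.", max_tokens=48)["choices"][0]["text"])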