viraptor | 72 days ago | on: LLMs are getting better at character-level text ma...
Only in the currently most popular architectures. Mamba- and RWKV-style LLMs may suffer a bit, but they don't get a reduced context in the same sense.
curioussquirrel | 72 days ago
You're right. There was also an experiment at Meta that tokenized bytes directly, and it didn't hurt performance much in very small models.
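
For concreteness, here's a minimal Python sketch of what byte-level tokenization means in general: one token per UTF-8 byte, so the "vocabulary" is just the 256 possible byte values. This illustrates the general idea only, not Meta's actual implementation.

    # Byte-level tokenization: map text to one token ID per UTF-8 byte.
    # No learned vocabulary or merge rules, unlike BPE.

    def byte_tokenize(text: str) -> list[int]:
        return list(text.encode("utf-8"))

    def byte_detokenize(ids: list[int]) -> str:
        return bytes(ids).decode("utf-8")

    ids = byte_tokenize("strawberry")
    print(ids)       # [115, 116, 114, 97, 119, 98, 101, 114, 114, 121]
    print(len(ids))  # 10 tokens, one per character (all ASCII here)
    assert byte_detokenize(ids) == "strawberry"

Character-level tasks like spelling become trivial to represent this way, since every character boundary is visible to the model. The flip side is the context budget: an English subword tokenizer covers the same text in roughly a third to a quarter as many tokens, which is the context reduction mentioned upthread.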