Wherever you have a firehose of data that needs to be processed.
I've heard of it being used as a sort of message queue for application level events before, but that sounds like a nightmare of trying to reinvent the actor model with 1000x the complexity.
Akka and a lot of actor-model frameworks hurt microservice availability, durability, and general reliability, because nuking a random node messes with Akka: whatever actor happened to be on that node is stalled until it's transferred.
Just like SOA and ESB, the concept isn't the problem, it's the technical constraints of the design at the time. Decoupling and messaging aren't bad, but having a legacy message queue on physical hardware doesn't really hold up. Any derived architecture faces the same problem.
Then again, Kafka isn't an actor model implementation, and Akka isn't a partitioned redundant stream processing system, they don't have all that much overlap ;-)
If you shard your Akka actors, the messages are buffered and passed to the actor when it's initialized on the new node. You get even more stability if you persist your actors backed by a DB or some other persistent store.
Not saying Akka can replace Kafka, but Akka has attempted to solve many of these issues around availability, durability and reliability.
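The buffering behavior described above can be sketched in plain Java. This is a toy illustration of the idea, not Akka's actual API: while a sharded actor is being rehomed after a node loss, incoming messages are held in a mailbox, then replayed in order once the actor comes up on the new node. The `ShardedMailbox` class and its method names are made up for this sketch.

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.function.Consumer;

// Toy sketch (not Akka itself): messages sent while a sharded actor
// is being moved to a new node are buffered, then flushed in FIFO
// order once the actor finishes initializing there.
class ShardedMailbox {
    private final Queue<String> buffer = new ArrayDeque<>();
    private Consumer<String> actor = null; // null = actor not currently running anywhere

    void send(String msg) {
        if (actor == null) buffer.add(msg); // actor mid-transfer: hold the message
        else actor.accept(msg);             // actor live: deliver immediately
    }

    void actorStarted(Consumer<String> handler) {
        actor = handler;
        while (!buffer.isEmpty()) actor.accept(buffer.poll()); // replay the backlog
    }

    void nodeLost() { actor = null; } // the hosting node was nuked
}
```

In real Akka Cluster Sharding the envelope buffering and handoff are managed by the shard region, and pairing this with persistence means the actor also recovers its state from the event store on the new node, so senders see a delay rather than data loss.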