Raster quality is limited by how much effort engine developers are willing to put into finding computationally cheap approximations of how light/materials behave. But it feels like the easy wins are already taken?
All the biggest innovations in "pure" rasterization renderers in the last 10-15 years have actually been raytracing in a very reduced, limited form.
Screenspace Ambient Occlusion? Marching rays (tracing) against the depth buffer to calculate a physically terrible but decent-looking approximation of light occlusion. Some modern SSAO implementations, like GTAO, even need to be denoised by TAA.
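To make the "tracing against the depth buffer" point concrete, here's a minimal CPU-side sketch of the idea, purely illustrative and not any particular engine's SSAO: treat the depth buffer as a heightfield and count how often short screen-space rays step behind stored geometry. The depthAt callback and all constants are assumptions for the example.

```cpp
// Toy illustration of depth-buffer "ray marching" for SSAO (not production code).
// depthAt(x, y) is an assumed callback returning the stored depth at a screen coordinate.
#include <cmath>
#include <functional>

float ambientOcclusion(float px, float py, float pixelDepth,
                       const std::function<float(float, float)>& depthAt)
{
    const int   kRays      = 8;     // rays marched per pixel
    const int   kSteps     = 4;     // samples along each ray
    const float kRadius    = 16.0f; // screen-space radius in pixels
    const float kDepthBias = 0.02f; // tolerance to avoid self-occlusion

    int occludedSamples = 0;
    int totalSamples    = 0;

    for (int r = 0; r < kRays; ++r) {
        // Spread rays evenly around the pixel in screen space.
        float angle = 6.2831853f * float(r) / float(kRays);
        float dx = std::cos(angle), dy = std::sin(angle);

        for (int s = 1; s <= kSteps; ++s) {
            float t  = kRadius * float(s) / float(kSteps);
            float sx = px + dx * t;
            float sy = py + dy * t;

            // If the depth buffer says something sits closer to the camera than
            // our pixel at that sample, treat the ray as blocked there.
            if (depthAt(sx, sy) < pixelDepth - kDepthBias) {
                ++occludedSamples;
            }
            ++totalSamples;
        }
    }
    // 1.0 = fully open, 0.0 = fully occluded (real SSAO also weights by distance/angle).
    return 1.0f - float(occludedSamples) / float(totalSamples);
}
```

The noisy, view-dependent result of exactly this kind of sparse sampling is why the denoising step above exists.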
Screenspace Reflections? Marching rays against the depth buffer and reusing already-shaded pixels from the screen as reflection samples. Often needs denoising too.
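Same sketch for reflections: march the reflected ray through the depth buffer and, on a hit, reuse the already-shaded pixel as the reflected light. Again illustrative only; depthAt, colorAt, and the thickness heuristic are assumptions, not a specific engine's API.

```cpp
// Toy sketch of screen-space reflection ray marching (illustrative only).
// depthAt(x, y) returns the depth buffer value; colorAt(x, y) returns the
// already-shaded color buffer sample that gets reused as the "reflection".
#include <functional>
#include <optional>

struct Color { float r, g, b; };

std::optional<Color> screenSpaceReflection(
    float startX, float startY, float startDepth,
    float dirX, float dirY, float dirDepth,          // reflected ray projected into screen space
    const std::function<float(float, float)>& depthAt,
    const std::function<Color(float, float)>& colorAt)
{
    const int   kMaxSteps  = 64;
    const float kStepSize  = 2.0f;   // pixels advanced per step
    const float kThickness = 0.05f;  // how "thick" we assume surfaces are

    float x = startX, y = startY, d = startDepth;

    for (int i = 0; i < kMaxSteps; ++i) {
        x += dirX * kStepSize;
        y += dirY * kStepSize;
        d += dirDepth * kStepSize;

        float sceneDepth = depthAt(x, y);

        // The ray went just behind what the depth buffer stored: call it a hit
        // and reuse that pixel's shaded color as the reflected light sample.
        if (d > sceneDepth && d < sceneDepth + kThickness) {
            return colorAt(x, y);
        }
    }
    // Ray left the screen or hit nothing the depth buffer knows about;
    // real implementations fall back to cubemaps or fade the reflection out.
    return std::nullopt;
}
```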
Light shafts? Marching rays through the shadow map and approximating back-scattering based on whether each point along the view ray is occluded from the light or not.
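A hedged sketch of that idea: step along the view ray, ask the shadow map whether each sample is lit, and accumulate scattering only for the lit samples. The isLitAt callback stands in for the shadow-map lookup and is an assumption for illustration.

```cpp
// Toy sketch of shadow-map ray marching for light shafts / god rays.
// isLitAt(p) is an assumed stand-in for a shadow-map test: true if the light
// reaches world-space point p. All names and constants are illustrative.
#include <array>
#include <functional>

using Vec3 = std::array<float, 3>;

float lightShaftIntensity(const Vec3& cameraPos, const Vec3& pixelWorldPos,
                          const std::function<bool(const Vec3&)>& isLitAt)
{
    const int   kSteps      = 32;     // samples along the view ray
    const float kScattering = 0.02f;  // scattering contribution per lit sample

    float accumulated = 0.0f;

    for (int i = 1; i <= kSteps; ++i) {
        float t = float(i) / float(kSteps);

        // Point along the view ray between the camera and the visible surface.
        Vec3 p = { cameraPos[0] + (pixelWorldPos[0] - cameraPos[0]) * t,
                   cameraPos[1] + (pixelWorldPos[1] - cameraPos[1]) * t,
                   cameraPos[2] + (pixelWorldPos[2] - cameraPos[2]) * t };

        // Only samples the shadow map says the light can reach scatter light
        // toward the camera; shadowed samples contribute nothing.
        if (isLitAt(p)) {
            accumulated += kScattering;
        }
    }
    return accumulated; // added on top of the regular shaded color
}
```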
That voxel cone tracing thing UE4 never really ended up shipping? Tracing's in the name: you're just tracing cones instead of rays through a heavily simplified version of the scene.
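Roughly what a single cone trace looks like, assuming the scene has already been voxelized and pre-filtered into mips; sampleVoxels and the constants are illustrative stand-ins, not UE4's actual SVOGI code.

```cpp
// Toy sketch of cone tracing through a voxelized, mip-mapped copy of the scene.
// sampleVoxels(p, radius) is an assumed callback that samples the voxel volume
// at a mip level chosen from the cone's current radius and returns occlusion in [0,1].
#include <algorithm>
#include <array>
#include <functional>

using Vec3 = std::array<float, 3>;

float coneTraceOcclusion(const Vec3& origin, const Vec3& dir, float coneAngleTan,
                         const std::function<float(const Vec3&, float)>& sampleVoxels)
{
    const float kMaxDistance = 50.0f;
    const float kStartOffset = 0.5f;   // skip the surface's own voxel

    float transmittance = 1.0f;        // how much light still gets through
    float t = kStartOffset;

    while (t < kMaxDistance && transmittance > 0.01f) {
        // The cone widens with distance, so we sample ever coarser, pre-filtered
        // versions of the voxelized scene instead of casting many individual rays.
        float radius = t * coneAngleTan;

        Vec3 p = { origin[0] + dir[0] * t,
                   origin[1] + dir[1] * t,
                   origin[2] + dir[2] * t };

        float occlusion = sampleVoxels(p, radius);
        transmittance *= (1.0f - occlusion);

        t += std::max(radius, 0.25f); // step size grows with the cone footprint
    }
    return 1.0f - transmittance; // 0 = unoccluded, 1 = fully blocked
}
```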
Material and light behavior isn't the problem. Those areas are constantly being researched too, but the improvements are more subtle. The big problem is light transport, and rasterization can't solve it; it's fundamentally the wrong tool for the job. Rasterization is just a cheap approximation for shooting primary rays out of the camera into a scene. You can't bounce light with rasterization.
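To make that concrete, here's a toy recursive shading loop in the shape of a path tracer. The first traceRay call is the step rasterization approximates; the recursive call from the hit point is the step it has no way to express. traceRay, directLight, and pickBounceDir are assumed callbacks for the sketch, not any real renderer's API.

```cpp
// Why rasterization stops at primary visibility: it answers "which surface does
// this camera ray hit first?" but has no mechanism for the follow-up bounces.
#include <array>
#include <functional>
#include <optional>

using Vec3 = std::array<float, 3>;

struct Hit { Vec3 position; Vec3 normal; Vec3 albedo; };

Vec3 shade(const Vec3& rayOrigin, const Vec3& rayDir, int bouncesLeft,
           const std::function<std::optional<Hit>(const Vec3&, const Vec3&)>& traceRay,
           const std::function<Vec3(const Hit&)>& directLight,
           const std::function<Vec3(const Hit&)>& pickBounceDir)
{
    // Primary visibility: this is the one step rasterization approximates well.
    std::optional<Hit> hit = traceRay(rayOrigin, rayDir);
    if (!hit) return {0.0f, 0.0f, 0.0f};

    // Direct lighting: raster handles this with analytic lights plus shadow maps.
    Vec3 color = directLight(*hit);

    if (bouncesLeft > 0) {
        // The part rasterization cannot do: shoot a *new* ray from an arbitrary
        // surface point in an arbitrary direction into the full scene.
        Vec3 bounceDir = pickBounceDir(*hit);
        Vec3 indirect = shade(hit->position, bounceDir, bouncesLeft - 1,
                              traceRay, directLight, pickBounceDir);
        color = { color[0] + hit->albedo[0] * indirect[0],
                  color[1] + hit->albedo[1] * indirect[1],
                  color[2] + hit->albedo[2] * indirect[2] };
    }
    return color;
}
```

Every screen-space trick above is a way of faking one specific instance of that recursive call with data the rasterizer happens to have lying around.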
For rasterization to be useful, it must approximate what light actually does in the real world. So a rasterizer that wants to get closer and closer to reality will have to emulate more and more of that real behavior.
It will have to cast exactly the rays that rasterization is hoping to avoid.