Valve sponsors Mesa development (lunarg.com)
217 points by Rovanion on June 8, 2014 | 28 comments


I don't quite understand how Mesa and device drivers play together... can someone explain this for me? I read the wiki (http://en.wikipedia.org/wiki/Mesa_(computer_graphics)) and this (http://en.wikipedia.org/wiki/File:Linux_kernel_and_OpenGL_vi...) graphic makes it clear that Mesa is the implementation of the OpenGL specification... what I don't get is the relationship between drivers and Mesa. Why aren't the drivers the implementation of the OpenGL specification? Is pushing bits from the GPU to the screen the only thing that the drivers do?


Because the OpenGL specification is actually very complex, and it doesn't map directly to hardware. Many things are higher-level, or are features of the OpenGL implementation rather than of the hardware. (You'll notice that if you run a newer version of Mesa on older hardware, you get some new extensions supported.)

So it is beneficial if code can be shared between hardware drivers. You could have a full implementation for each GPU/vendor, each with its own set of bugs, missing features and incompatibilities. Or you can do as Mesa did and try to implement the hardware independent parts in a common place, and have just the hardware-specific parts implemented by the drivers.

For example, you definitely don't want a separate implementation of the GLSL compiler for each GPU/vendor: GLSL is compiled to an intermediate representation, which is then further translated to hardware-specific instructions. (Although in practice I think you still have two implementations: one for Intel, and another for all the Gallium drivers.)
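To make that concrete from the application side: the app only ever hands GLSL source text to the driver through the GL API, and everything from parsing to GPU code generation happens inside Mesa. Rough sketch in C (assumes a current OpenGL context; Mesa's libGL exports the GL 2.0 entry points directly, otherwise you'd load them via glXGetProcAddress):

    #define GL_GLEXT_PROTOTYPES 1  /* get prototypes for post-1.3 entry points */
    #include <GL/gl.h>
    #include <stdio.h>

    /* The application only ever deals with GLSL in this text form. */
    static const char *fs_src =
        "#version 120\n"
        "void main() { gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0); }\n";

    GLuint compile_fragment_shader(void)
    {
        GLuint shader = glCreateShader(GL_FRAGMENT_SHADER);
        glShaderSource(shader, 1, &fs_src, NULL);

        /* Behind this one call, core Mesa parses and type-checks the GLSL
         * and lowers it to an IR; only the final IR -> GPU instruction
         * step is done by the hardware-specific driver. */
        glCompileShader(shader);

        GLint ok = GL_FALSE;
        glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
        if (!ok) {
            char log[1024];
            glGetShaderInfoLog(shader, sizeof(log), NULL, log);
            fprintf(stderr, "shader compile failed: %s\n", log);
        }
        return shader;
    }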

Think of it this way: would you like a full C compiler written from scratch for each CPU architecture, or would you rather share code where possible (parser, type checker, hardware-independent optimizers, etc.)?

So an analogy would be: the GCC frontend is like core Mesa, and a GCC backend is like a Mesa hardware driver. Another analogy would be libc: the majority of the code is hardware-independent and implements the various C99/POSIX APIs (i.e. like core Mesa), with just the hardware-specific parts / optimizations implemented in assembly (i.e. like Mesa drivers).


These are the layers that make up Linux graphics:

1. Kernel Module Hardware Drivers (radeon, i915, nvidia, etc)

2. DRI to interface with the kernel

3. Mesa DRI Drivers, providing the translation from libGL to DRI / the kernel module. Examples are the Intel driver and the Gallium architecture, which provides the AMD, Nvidia, Tegra, Mali, and Adreno drivers and lets them share even more code.

4. LibGL, the shared library provided at /usr/lib/libGL.so

Your application links against libGL, which implements the OpenGL API.
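A quick way to see which of those layers you actually ended up on is to ask libGL itself. Rough sketch (assumes a Mesa stack with EGL and the surfaceless-context extension so no window is needed, and skips error handling; on Mesa libGL and libEGL share the same dispatch, on other stacks you'd go through GLX instead):

    #include <EGL/egl.h>
    #include <GL/gl.h>
    #include <stdio.h>

    int main(void)
    {
        /* Bring up a GL context without any window, via EGL.  Relies on
         * Mesa's EGL_KHR_surfaceless_context. */
        EGLDisplay dpy = eglGetDisplay(EGL_DEFAULT_DISPLAY);
        eglInitialize(dpy, NULL, NULL);
        eglBindAPI(EGL_OPENGL_API);

        EGLint cfg_attrs[] = { EGL_RENDERABLE_TYPE, EGL_OPENGL_BIT, EGL_NONE };
        EGLConfig cfg;
        EGLint n = 0;
        eglChooseConfig(dpy, cfg_attrs, &cfg, 1, &n);

        EGLContext ctx = eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, NULL);
        eglMakeCurrent(dpy, EGL_NO_SURFACE, EGL_NO_SURFACE, ctx);

        /* On Mesa the renderer string typically names the DRI/Gallium
         * driver that libGL ended up dispatching to. */
        printf("vendor:   %s\n", glGetString(GL_VENDOR));
        printf("renderer: %s\n", glGetString(GL_RENDERER));
        printf("version:  %s\n", glGetString(GL_VERSION));
        return 0;
    }

Build with something like `cc probe.c -lEGL -lGL`. Nvidia's proprietary libGL will report its own stack instead, which is exactly the replacement-of-layers point being made here.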

The distinction between Gallium and Intel is useful to clarify here: Gallium has an internal API called winsys, which is the minimal command set a driver has to support, plus "state trackers" that provide translation layers from various APIs (OpenGL, OpenCL, OpenMAX, Direct3D) down to that base API.

Mesa's libGL calls into the DRI drivers; if they are Gallium drivers, that goes through the OpenGL state tracker, which hands shader code and commands off to the vendor-specific code to be translated into the winsys layer, and the underlying driver then turns that into the command stream for the GPU hardware.

Intel implements that entire part on their own: they have their own separate OpenCL (Beignet) and OpenGL implementations that translate down to the hardware.


You may want to read this (long) article:

http://blog.mecheye.net/2012/06/the-linux-graphics-stack/

Not claiming I understand it all, but it's long because graphics are complicated :) I have been surprised at how device-dependent graphics software is.

I wonder if Windows or OS X has nicer graphics abstractions than Linux? i.e. that make it less of a mess?


I think the graphics stack is a sore point on most or all platforms that have to support multiple GPUs from multiple vendors, at least the stuff in between userland GL calls and the bare metal.

In an ideal world it would be obvious where to draw the line between the GPU- and vendor-specific bits and the common code shared by everyone, but not only is that very hard to do, it can change drastically over the years. Couple that with the fact that platform vendors have to maintain these imperfect interfaces for many years regardless, and that GPU vendors can have their own abstraction layers between the top layer of their driver code and the bare metal of all their GPUs. The result is a perfect storm of too many layers getting in the way and imperfectly defined interfaces to many of those layers, which necessitates occasional layering violations at every level of the stack if you want to avoid pathological performance bottlenecks on some GPUs.

At best you end up with a bunch of ugly, imperfect, but generally functional software that more or less gets the job done at an acceptable level of performance.

I would imagine that the situation with Linux just adds yet more issues to that bag of hurt. GPU vendors don't always want to help out, and are likely to withhold valuable technical information that they don't want to be public. The whole process of defining how many layers there are, what functionality exists at what layer, and what interfaces are used between the layers is almost by definition (for GPU driver models at least) a bunch of questions for which there are no good answers, which is also a perfect storm for Open Source development. Any solution anyone comes up with is always going to have some caveats and gotchas, which for open, egalitarian development models means endless arguments and bikeshedding in which everyone involved is correct in pointing out the weaknesses of the system as designed, but there are no alternatives that anyone can point to that don't have similar problems in someone else's eyes.

For example, back in the '90s, for right or wrong I was very optimistic about GGI solving a lot of graphics problems for Linux, and that was limited to the 2D stuff (I can't imagine how controversial it would've been had it included 3D as well):

http://en.wikipedia.org/wiki/General_Graphics_Interface

It didn't pan out that way, but so much energy was burned by so many people arguing about it over so many years. :( I don't have authoritative knowledge of the whole thing, so maybe the doubters were correct, but at the time it felt like a bunch of people shooting down an imperfect solution while offering no solution at all.

EDIT: TL;DR designing a graphics stack that ultimately goes down to the metal is a set of very hard problems, many of which have no perfect answers. This, coupled with secretive vendors and the level of specialized knowledge required to even contribute to stuff like this, makes it an even harder problem in the Open Source world. Not an insurmountable obstacle, but a really hard set of problems nonetheless, for both technical and human reasons.


I cannot say how Mesa is structured, but graphics drivers are not simple, for good reasons.

First you don't want every draw call to cause a system trap. Thus some drivers include a dynamic library which gets loaded into the program’s memory space.

Then after the library comes a daemon. Between the library and the daemon is where the OpenGL implementation lives.

The daemon then hands off to the driver. The driver is meant to be a light interface to the GPU, the thinking being that the kernel is not the right place to transform OpenGL into hardware instructions. If one did write an OpenGL implementation in a kernel driver, then we could expect some harsh words from Linus.

Now, on a pedantic note, the GPU already handles pushing the bits to the screen without the driver's help, which is good, because you do not want to introduce yet another timing-dependent driver class like the networking stack.

I'm not sure where the shader compilation occurs.


If you were in the market for new hardware and wanted the purchase to support initiatives such as this--what would you buy?

I don't really game any more but like to support the ecosystem.


The standard response is AMD, but both their open source drivers and proprietary drivers are very bad. It's not performance, it's stability. AMD Linux drivers crash on all sorts of things. Having an accelerated desktop can do it.

I had a 7770 for a year before I switched to Nvidia. Better all around.

Still, AMD does sponsor open source driver development, I'm told, so it comes down to how much you're willing to sacrifice for this.


On older r600 GPUs (4000/5000 series) the open driver is much better than the closed-source one; it has been improving massively over the past while.


Plus I can confirm that the open source driver is much more stable on pre-radeonsi hardware than it used to be.

I eventually ended up buying a radeonsi card and while the extra GPU oomph is nice, the lower stability is not.


Probably AMD, they fund open source driver development and release good documentation for their GPUs.

Intel is ok too, but their hardware is not really targeted for gamers.

That being said, if you are fine with using proprietary drivers, currently nothing beats Nvidia in terms of performance, features, and stability of OpenGL drivers.

You would be supporting the ecosystem with any purchase, but if you are politically inclined on free software then the choice is AMD or Intel.


> Intel is ok too, but their hardware is not really targeted for gamers.

FWIW, I find the latest graphics units from Intel, e.g. HD 4600, rather okay. Even graphically nontrivial things like Crusader Kings II or KSP run decently, although usually far from the highest possible settings. But on the other hand, the drivers are rock-solid, delivered directly in Debian without any of the fiddling around that was/is necessary for the Nvidia drivers, and I don’t need my own powerplant just to run a computer :)


> I don’t need my own powerplant just to run a computer

The Maxwell-based 750ti is the card to buy then. It's tiny, draws a reasonable amount of power and I've yet to see a game that doesn't run well on high.


A Steam Machine would be the thing to buy once they actually become available. Until then, all the choices are not great, for various reasons.


Neither Valve nor LunarG make hardware, so buying new hardware won't matter.


More good news for open source and Linux.

Valve is probably the most influential single entity in PC gaming, for them to embrace open-source to this degree is massive...


You would think, with Steam betting big on the Steambox, that they would want people working on these sorts of projects; it's not only Steam that wins big (having a FOSS OS to run their Steam box on), but everyone else too.


Valve understands that without other incentives for developers to target Linux, gamers will never switch. It's looking to be a huge win for the FOSS community.


It would be nice to play the Battlefield series on Linux.


The future games I see being game changers for Linux are Half-Life 3, Portal 3, and the new Left 4 Dead. These are all games that would work well with the Steam Controller on a Linux box and would drive Steambox sales.


> work well with the Steam Controller on a Linux box and would drive Steambox sales.

I am a gamer on Linux, but there's no reason these games will run better on Linux than on Windows, or use the controller better (since they will probably support the controller drivers on Windows systems as well). Valve has repeatedly said they would not do exclusive titles for any platform, so I don't think just a couple of great games will drive many Steambox sales. Several things will have to drive Steambox sales: 1) the hardware needs to be cheap enough, 2) the experience needs to be seamless and as good as a console, if not better, and 3) there need to be enough big games that you don't miss too much by not staying on Windows.


Battlefield is EA, not going to happen.


I'm not convinced this will help that much for Intel GPUs. Developers from Intel have been optimizing shader compilers specifically for Dota 2 for some time without much impact on performance.

Dota 2 is probably memory bandwidth limited on Intel/Linux/OpenGL and that is likely the case for other games too.

So maybe modifying the OpenGL part of the game engine a bit to take memory bandwidth constraints of Intel GPUs into account would help more.
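For example (purely as an illustration of the kind of engine-side change I mean, not something I've profiled): texture reads are among the biggest bandwidth consumers, and asking GL for a compressed internal format cuts the bytes fetched per sample. Sketch in C, assuming a current GL context and an RGBA image already in memory:

    #include <GL/gl.h>

    /* Upload a texture with a generic compressed internal format
     * (GL_COMPRESSED_RGBA, from ARB_texture_compression / GL 1.3).  The
     * driver may pick a real compressed format, or silently fall back to
     * uncompressed; when it does compress, memory traffic per sampled
     * texel drops to roughly 1/4-1/8 of GL_RGBA8, which matters most on
     * integrated GPUs sharing bandwidth with the CPU. */
    GLuint upload_compressed_texture(GLsizei w, GLsizei h, const void *rgba)
    {
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RGBA, w, h, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, rgba);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        return tex;
    }

Explicit formats (S3TC/ETC) or pre-compressed assets via glCompressedTexImage2D would be the more reliable route, but that's the general idea.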

When I was looking into the issue of Intel GPU performance on Linux many months ago, there were no tools to profile memory bandwidth bottlenecks.


Presently each SCM (Subsea Control Module) uses a specific protocol to communicate with the topside system. The topside system often polls the SCM for data. In the case of RRC there are currently three different types of SCM, which use the following protocols: 2000, 3000 and 1024. How can we program a common driver interface that covers all three protocols, so the topside doesn't need three separate packages to control the three different subsea modules?


Thought this was going to be an announcement about http://www.blackmesasource.com or HL3


Maybe it's another ARG?


That was a joke, haha, fat chance.


At first reading, I thought it was about Black Mesa [1], the Half-Life (first Valve game) remake made by the community.

[1] http://www.blackmesasource.com/



