Last week I finally plugged in the camera module I got a while ago, to take a look at what vc4 needs for displaying camera output.
The surprising answer was "nothing." vc4 could successfully import RGB dmabufs and display them as planes, even though I had been expecting to need fixes on that front.
However, the bcm2835 v4l camera driver needs a lot of work. First of all, it doesn't use the proper contiguous memory support in v4l (vb2-dma-contig), and instead asks the firmware to copy from the firmware's contiguous memory into vmalloced kernel memory. This wastes memory and memory bandwidth, and doesn't give us dma-buf support.
Worse, MMAL (the firmware's equivalent of v4l for driving the hardware) wants to output planar buffers with specific padding. However, instead of using v4l's multi-plane format support to expose buffers with that padding, the bcm2835 driver asks the firmware to do another copy, from the firmware's planar layout into the old no-padding v4l planar format.
As a user of the V4L API, you're also in trouble because none of these formats carry any priority information that I can see: the camera driver says it's equally happy to give you RGB or planar, even though RGB costs an extra copy. Done properly today, I think the camera driver would expose multi-plane planar YUV, along with a mem2mem adapter that could use MMAL calls to convert the planar YUV to RGB.
For now, I've updated the bug report with links to the demo code and instructions.
I also spent a little bit of time last week finishing off the series to use st/nir in vc4. I managed to get to no regressions, and landed it today. It doesn't eliminate TGSI, but it does mean TGSI is gone from the normal GLSL path.
Finally, I got inspired to do some work on testing. I've been doing some free time work on servo, Mozilla's Rust-based web browser, and their development environment has been a delight as a new developer. All patch submissions, from core developers or from newbies, go through github pull requests. When you generate a PR, Travis builds and runs the unit tests on the PR. Then a core developer reviews the code by adding an "r" comment to the PR, or provides feedback. Once it's reviewed, a bot picks up the pull request, tries merging it to master, then runs the full integration test suite on it. If the test suite passes, the bot merges it to master; otherwise the bot writes a comment with a link to the build/test logs.
Compare this to Mesa's development process. You make a patch. You file it in the issue tracker and it gets utterly ignored. You complain, and someone tells you you got the process wrong, so you join the mailing list and send your patch (and then get a flood of email until you unsubscribe). It gets mangled by your email client, and you get told to use git-send-email, so you screw around with that for a while before you get an email that will actually show up in people's inboxes. Then someone reviews it (hopefully) before it scrolls off the end of their inbox, and then it doesn't get committed anyway because your name was familiar enough that the reviewer thought maybe you had commit access. Or they do land your patch, and it turns out you hadn't run the integration tests, and then people complain at you for not testing.
So, as a first step toward making a process like Mozilla's possible, I put some time into fixing up Travis on Mesa, and building Travis support for the X Server. If I can get Travis to run piglit and ensure that expected-pass tests don't regress, that at least gives us a documentable path for new developers in these two projects to put their code up on github and get automated testing of the branches they're proposing on the mailing lists.