The Google Summer of Code for 2011 is over now for me. The final state of the UVC driver project, while very far from perfect, is at least at a point where incremental improvements can be made. Literally the day (maybe two days, depending on which timezone you’re in) before the “firm pencils down” date I finally managed to get data all the way from the camera to the screen. The decoding of that information is totally wrong at this point, but coloured pixels show up on the screen and they appear to react when things move in front of the camera.
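For anyone curious about what moving that data actually involves: each UVC isochronous packet begins with a small payload header that has to be stripped before the remaining bytes are appended to the frame being assembled. The sketch below is not the driver’s code, only a minimal illustration of that per-packet step as the UVC specification describes it; the function name and the buffer handling are invented for the example.

// Illustrative sketch only: the header bits follow the UVC payload header
// format, but the function and buffer handling are made up for this post.
#include <stdint.h>
#include <string.h>

enum {
	UVC_HEADER_FID = 1 << 0,	// frame id, toggles on every new frame
	UVC_HEADER_EOF = 1 << 1,	// last packet of the current frame
	UVC_HEADER_ERR = 1 << 6		// device detected an error in this payload
};

// Appends one isochronous packet to frameBuffer; returns true when the
// packet closes a frame and the buffer could be handed to the consumer.
bool
AppendPayload(const uint8_t* packet, size_t packetLength,
	uint8_t* frameBuffer, size_t frameCapacity, size_t& frameOffset)
{
	if (packetLength < 2)
		return false;

	uint8_t headerLength = packet[0];
	uint8_t headerInfo = packet[1];
	if (headerLength < 2 || headerLength > packetLength)
		return false;				// malformed header, drop the packet

	if ((headerInfo & UVC_HEADER_ERR) != 0)
		return false;				// device flagged this payload as bad

	size_t dataLength = packetLength - headerLength;
	if (frameOffset + dataLength <= frameCapacity) {
		memcpy(frameBuffer + frameOffset, packet + headerLength, dataLength);
		frameOffset += dataLength;
	}

	return (headerInfo & UVC_HEADER_EOF) != 0;
}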
Not so long ago, at the halfway mark of the GSoC, I was optimistic that I was close to interpreting data from the camera in such a way as to produce images on screen. I was successfully grabbing payload data from the camera, the camera’s in-use light was on, and things were looking good. Since that point, progress has been repeatedly stalled by strange and difficult-to-debug behaviour.
Since my last blog entry, a lot of progress has been made. Currently I’m right on the cusp of producing images on screen that have been captured by my camera. Successful communication is occurring between the driver and the camera in at least two different forms.
The first form of communication to be successfully implemented involves setting values within the camera which affect image capture. These are the familiar brightness, contrast, sharpness etc. settings which most cameras support. Nearly all of the options available on my camera are now presented for manipulation by end users and successfully communicated to the camera. These values are maintained within the camera between power cycles, and that fact is successfully reflected back to the user through the available controls. The controls can be viewed and modified in the Media preferences application or the Cortex demo application.

The ParameterWeb documentation describes a range of control styles within both the continuous and discrete parameter varieties. However, it appears to me that the only discrete input method currently backed by an appropriate GUI control is the binary on/off option. This is suitable for features like the auto value for white balance, which can only be on or off. The powerline frequency setting, however, has three possible values and could not be represented with the obvious discrete control, the B_RECSTATE kind, even though it also has three possible states. To simulate this capability, a continuous control was modified to only allow three values, selected by placing the slider within a +/-10 range of the desired value. The slider snaps to the available values to make this behaviour visible.

One feature which might be desirable in the future is controls with auto settings that indicate in real time, by their movement, what values the camera is using in its auto mode. Right now sliders are simply frozen in their last position while auto mode is in effect. I had some brief discussion with my mentors about this feature, but it was deemed unnecessary at this stage, as a lot of work is still left to be done on actual image capture.
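To make that more concrete, here is a rough sketch of how such a web might be assembled. It is not the driver’s actual source: the parameter IDs, the names and the use of the B_GAIN kind are illustrative stand-ins. It shows an ordinary continuous control next to the faked three-state powerline frequency slider, together with the snapping step the node would apply to incoming values.

// Illustrative only: IDs, names and kinds here are placeholders,
// not the ones used by the actual driver.
#include <ParameterWeb.h>
#include <math.h>

enum {
	kBrightnessId = 1,
	kPowerlineFrequencyId = 2
};

BParameterWeb*
MakeCameraWeb()
{
	BParameterWeb* web = new BParameterWeb();
	BParameterGroup* group = web->MakeGroup("Image");

	// An ordinary continuous control, mapped directly to a camera setting.
	group->MakeContinuousParameter(kBrightnessId, B_MEDIA_RAW_VIDEO,
		"Brightness", B_GAIN, "", 0.0, 100.0, 1.0);

	// The three-state powerline frequency setting (off / 50 Hz / 60 Hz)
	// faked as a slider; the node snaps incoming values to 0, 50 or 100.
	group->MakeContinuousParameter(kPowerlineFrequencyId, B_MEDIA_RAW_VIDEO,
		"Powerline frequency", B_GAIN, "", 0.0, 100.0, 50.0);

	return web;
}

// Pulls any value within +/-10 of a stop onto that stop, so the slider
// appears to snap between the three allowed positions.
float
SnapPowerlineValue(float value)
{
	const float stops[] = { 0.0f, 50.0f, 100.0f };
	for (int i = 0; i < 3; i++) {
		if (fabsf(value - stops[i]) <= 10.0f)
			return stops[i];
	}
	return value;
}

Presumably the snapping itself belongs in the node’s SetParameterValue() override, with the corrected value broadcast back through BControllable::BroadcastNewParameterValue() so the GUI slider visibly jumps onto the nearest stop.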
On June 7, I turned in my dissertation and my semester ended. On June 10, I had my first final exam. Now it’s time to produce a progress report for Haiku. Almost miraculously, I’ve actually managed to squeeze some Haiku development time in and am making progress of a kind.
The most tangible progress is mostly in the form of debug messages and crashes that drop me into Kernel Debugging Land (KDL), but I consider anything I do which has a measurable effect to be progress.
In the community bonding period I made some brief contact with my two mentors and did some reading of the UVC device class specification as well as the header for the current USB Kit. My mentors confirmed that the EHCI support for isochronous transfers with USB devices is not working properly and does not have a good set of tests. Neither my camera nor my mentor’s camera is successfully engaged by CodyCam at the moment.
As part of the Google Summer of Code I’ll be working on developing a driver for Haiku that allows for the use of high-end webcams, by which I mean, in this case, those that adhere to the USB video device class (UVC) specification. Preliminary work will involve bringing Haiku’s support for the Enhanced Host Controller Interface (EHCI) to a point where UVC driver development proper can begin. Understanding the state of EHCI support and what work needs to be done in order to begin UVC development is my major goal for the community bonding period.
UVC development will entail the detection and exposure of camera features via Haiku’s media kit. This will require (if I understand correctly) the production of a node with an attendant ParameterWeb which will hold the actual feature definitions. Then, ideally, any interested application will be able to issue commands to a UVC-compliant camera and receive back appropriate responses in the form of image frames or video streams in various formats and resolutions, or status reports, depending on the camera. The primary test camera will be a Logitech Quickcam Pro 9000, which supports a fairly wide range of resolutions, contains a microphone, and has a hardware button (presumably for taking still photographs). I have also noticed during the course of some computer vision research with the camera that it has what appears to be a hardware-driven exposure compensation feature. A similar feature is exposed through the Windows Logitech driver software, but when this is turned off some exposure compensation still occurs. It will be interesting to see whether this feature is genuinely rooted in the hardware or is a result of hidden proprietary Logitech software.
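To give a feel for what the application side of that could look like, here is a hedged sketch of how a client might enumerate whatever controls such a node ends up publishing, using existing BMediaRoster calls. The function name and surrounding structure are mine, not part of any project plan.

// Hypothetical application-side sketch: walk the ParameterWeb a camera
// node publishes. Must run inside a BApplication for the roster to exist.
#include <MediaRoster.h>
#include <ParameterWeb.h>
#include <stdio.h>

status_t
ListCameraControls(const media_node& cameraNode)
{
	BMediaRoster* roster = BMediaRoster::Roster();
	if (roster == NULL)
		return B_ERROR;

	BParameterWeb* web = NULL;
	status_t status = roster->GetParameterWebFor(cameraNode, &web);
	if (status != B_OK)
		return status;

	// Print every control the node exposes (brightness, contrast, ...).
	for (int32 i = 0; i < web->CountParameters(); i++) {
		BParameter* parameter = web->ParameterAt(i);
		printf("%s (%s)\n", parameter->Name(), parameter->Kind());
	}

	// The roster hands back a copy owned by the caller.
	delete web;
	return B_OK;
}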