The vast majority of programmers develop applications using native toolchains. That is, they use a set of development tools (a "toolchain") on a single machine to compile, test, and debug their application running on that machine.
A common situation, however, is where a programmer, for one reason or another, wants to use one machine (the host machine) to develop applications for another machine (the target machine). A cross toolchain is one that supports this development model.
One of the great features of the GNUPro tools now used in the x86 version of BeOS is that they are designed to make it as easy as possible to create cross toolchains in a huge variety of weird and wonderful combinations. For example, according to the Cygnus website http://www.cygwin.com/, Cygnus currently supports over 125 different configurations of native and cross-development GNUPro toolchains.
This article presents two cross toolchains for developing x86 BeOS R4 applications, one hosted on PPC BeOS R4 and the other hosted on RedHat Linux 5.2. They can be found at:
ftp://ftp.be.com/pub/experimental/tools/gnupro-Hppc-Ti586-bin.tgz
ftp://ftp.be.com/pub/experimental/tools/gnupro-Hlinux-Ti586-bin.tgz
To install the PPC BeOS-hosted toolchain, do something like:
$ mkdir -p /boot/develop/tools
$ cd /boot/develop/tools
$ tar -xvzf gnupro-Hppc-Ti586-bin.tgz
To install the Linux-hosted toolchain, do something like:
$ su
Password:
# cd /usr/local
# tar -xvzf gnupro-Hlinux-Ti586-bin.tgz
In either case, a directory called "gnupro" will be created and all the relevant files will be extracted into a subtree rooted at the gnupro directory.
To provide a concrete example of using the cross toolchain, I have taken the BeBounce sample application from the R4 release CD and modified it to use a GNU autoconf-generated configure script and associated Makefile template. This example can be found on the Be ftp site at:
ftp://ftp.be.com/pub/experimental/tools/gnupro-example.tgz
When unpacked, it creates a directory "bounce" and populates it with the sample files. To build it using the Linux-hosted cross tools you would do something like:
$ export PATH=/usr/local/gnupro/bin:$PATH
$ CXX=i586-beos-gcc configure -q
creating cache ./config.cache
updating cache ./config.cache
creating ./config.status
creating Makefile
$ make
i586-beos-g++ -c -g -O2 main.cpp
i586-beos-g++ -c -g -O2 Crunch.cpp
i586-beos-g++ -o BeBounce main.o Crunch.o -lbe
$
(NOTE: omit -q if you want more verbose output while configuring.)
The currently supplied cross tools include a full set of x86 BeOS R4 headers and libraries, which allows you to create applications that should run on any R4 or later release. If you are a beta tester for a later release and want to update these files, here are the steps to follow:
Create archives of the new include and lib files on an installed version of the updated BeOS system and then transfer them to the machine where the cross tools are installed:
$ cd /boot/develop/headers
$ tar -cvzf /tmp/headers.tgz *
$ cd /boot/develop/lib/x86
$ tar -cvzhf /tmp/libs.tgz *
(Note: The 'h' option to tar is very important; it ensures that symbolic links are followed.) Copy the headers.tgz and libs.tgz files to the machine running the cross tools.
Go to the installed crosstools i586-beos directory, preserve some installed files that are part of the toolchain itself, remove the old includes and libs, and install the updated versions:
$ cd /usr/local/gnupro/i586-beos
$ mv lib/ldscripts ldscripts-save
$ rm -rf lib include
$ mkdir lib include
$ mv ldscripts-save lib/ldscripts
$ cd include
$ tar -xvzf /tmp/headers.tgz
$ cd ../lib
$ tar -xvzf /tmp/libs.tgz
The source for the cross tools can be found in:
ftp://ftp.be.com/pub/experimental/tools/gnupro-990420-src.tgz
To build the tools from source, you first have to select a root directory for the installation. For the PPC host I chose /boot/develop/tools/gnupro, and for the Linux host I chose /usr/local/gnupro. The following instructions assume you have chosen /usr/local/gnupro, which is a good choice on most UNIX systems, but the actual path is completely arbitrary.
Next you have to install the BeOS native include and library files, in much the same way as described earlier for updating these files in the prepackaged toolchains. In the UNIX case, you would create /usr/local/gnupro/i586-beos and then populate an include subdirectory with the BeOS headers and a lib subdirectory with the BeOS libraries; i.e., the following need to exist and contain the headers and libraries respectively:

/usr/local/gnupro/i586-beos/include
/usr/local/gnupro/i586-beos/lib
One last step before actually building the tools is to create the toolchain's private install directory tree, which would be (again for the Linux case):
/usr/local/gnupro/lib/gcc-lib/i586-beos/2.9-beos-980929
The previous step is needed for the cross compiler to be able to locate the BeOS include files when building some of the runtime components, since they need access to the target's include files, and these are found relative to this path by tacking ../../../../i586-beos/include on to the end of it.
Now you're ready to actually configure and build the cross tools. Once you've extracted the 70 Mb or so of sources, do something like:
$ cd gnupro
$ configure --prefix=/usr/local/gnupro --target=i586-beos
$ make
$ make install
Of course, each of the last three steps will generate lots of output to the terminal, but should succeed with no errors. If there are errors, you'll have to resolve them before proceeding to the next step. After doing make install, your cross tools should be ready for use.
Regenerating the PPC-hosted cross tools is much trickier, since success depends upon having some things from the Geek Gadgets distribution installed. In particular, you have to use the GG bcc front end for mwcc, which makes mwcc behave much more like gcc. If you really feel the need to do this, contact me directly for help, since it is not anywhere near as straightforward as generating the Linux-hosted tools.
In the previous installment, I introduced a node called SelectorNode. This node allows you to select one of several inputs to pass to a single output. This week, I'd like to explore the guts of the node in a little more detail.
The sample code (plus some scrubbing and general clean-up from last week's effort) can again be found at
ftp://ftp.be.com/pub/samples/media_kit/Selector.zip
I'd like to answer two questions about timing that I encountered whilst wrangling with the selector node for last week's diatribe. They are:
When should I send a buffer that I've received?
How can I inform my upstream nodes when my latency changes?
The node looks vastly different from previous nodes I've presented because it derives from the super-nifty and newly christened BMediaEventLooper (formerly known as the MotherNode, and featured in Stephen's article two weeks ago) to manage the timing of buffers and events. Previously, timing was the trickiest part about developing nodes. Now, it takes little more than a few method calls, and leaves you with almost no steaming entrails. What does BMediaEventLooper give you?
You no longer have to write a Big Bad Service Thread to receive and process messages on your port. The BMediaEventLooper creates and maintains the service thread and control port for you (thus, very similar to a BLooper in function).
You no longer have to worry about holding on to events until it's time to do them (another feature of the Big Bad Service Threads of the past). The BMediaEventLooper maintains a BTimedEventQueue. You simply push events onto the queue as you receive them, and the BMediaEventLooper pops events off the queue when they become due.
If all your node wants is to know when Start, Stop, or Seek requests become due, your node doesn't have to override Start, Stop, or Seek anymore. The BMediaEventLooper automatically intercepts these events and pushes them onto the queue for you.
Your node doesn't have to be limited to storing one Start, Stop, or Seek at a time. The BTimedEventQueue allows you to stack up as many events as your heart desires. (Can you say "automation?" I knew you could.)
You can and should use BMediaEventLooper::SetEventLatency() to store the total latency of your node (that is, processing plus downstream latency). BMediaEventLooper takes this latency into account when it determines when to pop events off of the queue.
You'll see the BMediaEventLooper in action when I show you how SelectorNode handles buffers.
Thanks to BMediaEventLooper, most of the trickiness in getting buffers handled at the proper time is done for you. All SelectorNode has to do is filter the incoming buffers so that only the selected input's buffers get through. What we have to determine is: when should SelectorNode filter and send the buffers that it receives?
The easiest approach would be to filter buffers as soon as they are received, and send them on their merry way as soon as possible, as the following diagram illustrates:
Buffer Received and Handled                          Buffer Is Late
|                                                    |
V                                                    V
|OOOOOO|---------------------------------------------|

Legend:  |OO| = time spent processing the buffer
         ---- = time spent waiting
For such a simple filter as Selector, this would be a passable approach. However, a more robust node will take pains not to deal with the buffer too early. Why? Consider the pathological case where we receive a Stop, or are told to change the selected input, between the time we receive the buffer (if it was sent super-early) and the time by which the buffer has to be sent. If we have already sent the early buffer, then that buffer will escape whatever commands we might otherwise want to apply that will affect the buffer.
Another way to manage buffers would be to handle and send them as late as possible so that they'll still arrive on time, taking only the amount of time you need to process the buffer into account:
Buffer Received                              Buffer Handled  Buffer Is Late
|                                            |               |
V                                            V               V
|--------------------------------------------|OOOOOOOOOOOOOO|
This will take care of the race condition between commands and early buffers. However, because you've eliminated all of the slack time you might otherwise be able to use to process buffers, your node becomes susceptible to jitter from factors such as CPU load and disk activity, and your buffers stand a dangerous chance of arriving late.
The best approach in this case is to strike a compromise. Let's take raw audio and video into account, so that we can calculate how long our buffers will be. In this case, we should try to handle the buffer as soon as possible, *but not earlier than* the duration of one buffer before the time at which the buffer must be sent:
Buffer Received               Buffer Handled          Buffer Is Late
|                             |                             |
V                             V                             V
|-----------------------------|OOOOOO|----------------------|
                              <----------------------------->
                                     Buffer Duration
This will ensure that we properly handle commands that apply to our buffer, but it will give us enough room to handle unpredictable delays in processing.
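To make the buffer-duration arithmetic concrete, here's a small sketch in plain C++ (no Be headers; the struct is a hypothetical stand-in for the raw audio fields of media_format, and the function names are mine, not Kit APIs):

```cpp
#include <cstdint>

typedef int64_t bigtime_t; // microseconds

// Hypothetical subset of a raw audio format description.
struct RawAudioFormat {
    double  frame_rate;       // frames (one sample per channel) per second
    int32_t channel_count;
    int32_t bytes_per_sample; // e.g. 2 for 16-bit audio
    int32_t buffer_size;      // bytes per buffer
};

// Duration of one buffer in microseconds.
bigtime_t buffer_duration_us(const RawAudioFormat& f) {
    int64_t frames = f.buffer_size / (f.channel_count * f.bytes_per_sample);
    return (bigtime_t)(frames * 1000000 / f.frame_rate);
}

// Earliest time we should handle a buffer that must be sent by `deadline`:
// one buffer duration before the deadline, but never before we received it.
bigtime_t handle_time(bigtime_t received, bigtime_t deadline,
                      const RawAudioFormat& f) {
    bigtime_t earliest = deadline - buffer_duration_us(f);
    return earliest > received ? earliest : received;
}
```

For 16-bit stereo audio at 44100 Hz in 4096-byte buffers, one buffer holds 1024 frames, a bit over 23ms; that 23ms is the slack the compromise strategy preserves.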
Fortunately, with the BMediaEventLooper's event queue, doing this the right way is a snap. There are three simple steps:
We report our processing latency as being the duration of one buffer, plus whatever our estimated scheduling jitter is for our thread.
We calculate our total latency by adding this processing latency to the latency downstream.
Finally, we tell the BMediaEventLooper what our total latency is by calling SetEventLatency(), so that it pops events at the proper time.
Here's what it looks like in code:
void
SelectorNode::Connect(..., media_format& with, ...)
{
	...
	// Calculate the processing latency.
	bigtime_t proc = buffer_duration(with);
	proc += estimate_max_scheduling_latency();

	// Calculate the downstream latency.
	bigtime_t downstream;
	media_node_id ts;
	FindLatencyFor(m_output.destination, &downstream, &ts);
	bigtime_t totalLatency = proc + downstream;

	// Tell the event queue what our new latency is.
	SetEventLatency(totalLatency);
	...
}
Now, all we need to do is push buffers onto the queue as we receive them. At the proper time, they'll be popped from the queue and handed to HandleEvent(), which we override to actually send the buffer.
void
SelectorNode::BufferReceived(BBuffer* b)
{
	// The B_RECYCLE constant means that any buffers
	// inadvertently left in the queue will be recycled
	// when the queue is deleted.
	EventQueue()->PushEvent(b->Header()->start_time,
		BTimedEventQueue::B_HANDLE_BUFFER, b,
		BTimedEventQueue::B_RECYCLE, 0);
}

void
SelectorNode::HandleEvent(bigtime_t performance_time,
	int32 what, const void* pointer, uint32 cleanup, int64 data)
{
	switch (what) {
	case BTimedEventQueue::B_HANDLE_BUFFER:
	{
		// It's time to handle this buffer.
		BBuffer* b = (BBuffer*)pointer;
		if (b->Header()->destination != (uint32)m_selectedInput) {
			// This buffer doesn't belong to the
			// selected input, so get rid of it!
			b->Recycle();
			break;
		}

		// This buffer does belong to our selected
		// input, so try to send it.
		if ((!IsConnected()) || IsStopped()
			|| (!m_enabled) // output enabled
			|| SendBuffer(b, m_output.destination) != B_OK) {
			// Either we shouldn't send the buffer
			// or the send failed. In either case,
			// get rid of the buffer.
			b->Recycle();
		}
		break;
	}
	...
	}
}
This is a decent approach for the slackers out there like myself. However, the astute observer will note one disadvantage to this approach: your node's latency is increased by one whole buffer's duration. Stack a few of these nodes together in a chain, and you end up with far more latency than you really need. So, for you bean counters out there, what you really want is to report your standard processing latency when handling events as usual, *but* instruct the BMediaEventLooper to pop buffers a buffer's duration early if it can. We're working on building this sophistication into BMediaEventLooper right now, to show up in a future release. Stay tuned, O true believers...
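As a back-of-the-envelope illustration of that refinement, the pop time for an event could be computed like this. This is a sketch of the proposed behavior under my own naming, not a shipping Media Kit API:

```cpp
#include <cstdint>

typedef int64_t bigtime_t; // microseconds

// When should the looper pop an event that is due at `due`?
// Ordinary events are popped early by the node's total latency;
// with the proposed refinement, buffers may additionally be popped
// up to one buffer duration early, reclaiming the slack without
// inflating the latency the node reports upstream.
bigtime_t pop_time(bigtime_t due, bigtime_t total_latency,
                   bool is_buffer, bigtime_t buffer_duration) {
    bigtime_t t = due - total_latency;
    if (is_buffer)
        t -= buffer_duration; // hand buffers out a little early
    return t;
}
```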
Another interesting part of the SelectorNode deals with negotiating the format and the connections.
As you can see from the above, the selector node passes buffers blindly, so it must ensure that the output connection has the same format as the input connections. It does this by rejecting all output connections until its first input is connected, and then uses the first input's connection format as the non-negotiable format for all future inputs and outputs.
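The lock-to-first-input policy boils down to a few lines of state. Here's a sketch using a simplified stand-in for media_format; in the real node this logic lives in the connection hooks, and the names below are hypothetical:

```cpp
#include <cstdint>

// Simplified stand-in for media_format: just enough to compare.
struct Format {
    int32_t type;       // e.g. raw audio vs. raw video
    double  frame_rate;

    bool operator==(const Format& other) const {
        return type == other.type && frame_rate == other.frame_rate;
    }
};

class FormatLock {
public:
    FormatLock() : fLocked(false) {}

    // The first connected input fixes the format; after that, every
    // further input connection must match it exactly.
    bool AcceptInput(const Format& f) {
        if (!fLocked) {
            fFormat = f;
            fLocked = true;
            return true;
        }
        return f == fFormat;
    }

    // Outputs are rejected outright until the first input locks a format.
    bool AcceptOutput(const Format& f) const {
        return fLocked && f == fFormat;
    }

private:
    Format fFormat;
    bool   fLocked;
};
```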
Because I'm connecting upstream nodes before downstream nodes in this case, there's an additional complication that gets introduced. When I begin, the node chain for my application looks something like this, including the approximate processing latency for the downstream nodes:
File Reader          Selector Node          Output
                     latency=1ms            latency=50ms
The first connection is made between the file reader and the selector node. Once this is done, the file reader will see a downstream latency of 1ms, because only the selector node is currently downstream.
File Reader ----------> Selector Node          Output
                        latency=1ms            latency=50ms
<---------------------->
 downstream latency=1ms
The next connection is made between the selector node and the output:
File Reader ----------> Selector Node --------------> Output
                        latency=1ms                   latency=50ms
<--------------------------------------------------->
            downstream latency=51ms
Because the Output is now connected, the downstream latency for the File Reader has just increased tremendously. But there is currently no good mechanism for the selector node to tell the File Reader that its latency has just changed, so the File Reader still thinks its latency is only 1ms. The result? All the file reader's buffers end up arriving 50ms late!
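The latency numbers above are just sums down the chain. A tiny sketch (a hypothetical helper of mine, not a Media Kit call) shows why the File Reader's view jumps from 1ms to 51ms once the Output connects:

```cpp
#include <cstdint>
#include <vector>

typedef int64_t bigtime_t; // microseconds

// Processing latencies of the nodes downstream of a producer, in order.
// The downstream latency the producer must respect is simply their sum.
bigtime_t downstream_latency(const std::vector<bigtime_t>& processing) {
    bigtime_t total = 0;
    for (bigtime_t p : processing)
        total += p;
    return total;
}
```

With only the selector downstream the sum is 1ms; adding the 50ms Output makes it 51ms, and without a notification mechanism the File Reader keeps scheduling against the stale 1ms figure.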
We've solved this problem by adding two simple functions to the Media Kit API for genki/5; you can see these functions in action in SelectorNode:
BBufferConsumer::SendLatencyChange() informs an upstream producer what the new downstream latency from us is.
BBufferProducer::LatencyChanged() is the corresponding function that's called on the upstream producer. The producer then makes whatever adjustments are necessary to abide by the new latency.
That's it for this week's installment. Hopefully, this week I've managed to give you a taste of stuff we're adding to the Media Kit to make the node writer's life easier. As always, let us know what else we can do to pave the road for you. And have fun developing those tractor nodes!
As I learn more idioms in my adopted tongue, I'm continuously delighted by the expressive power, the conciseness of the American lingo. There are some atrocities, of course, such as incentivize, or unavoidable infelicities, such as reintermediate, but the term "Pricing Power" struck a different chord—the Internet.
On the radio, I heard one sage explain our current low inflation as the result of the Internet reducing companies' Pricing Power. I thought that was as good an explanation as any for what has come to be known as the Internet mania. Looking a little more closely, I felt the phrase encapsulated even more than the NPR oracle explicitly stated—but that's how oracles are—they leave some of the hermeneutics to the reader.
The seer could have said that the Internet makes comparison shopping easier, faster, ubiquitous, universal. As a result, we have a buyer's market, prices stay low and, voilà, low inflation. But there is a little, or even a lot, more. You can have low prices and low sales, but what the frictionless movement of data also allows is faster, easier, ubiquitous, permanent transactions. So, we have lower prices and higher volumes, doubly so: first because of more attractive price tags, and second because of easier purchases.
It looks as though this so-called "Internet mania" benefits brick and mortar companies by lowering inflation and stimulating business. In other words, e-stocks take credit for the rise in "old" stocks. To the point where a banker friend of mine, a bricks and mortar banker, worries that the most dangerous situation in the stock market today isn't the stratospheric level of e-stocks, which still represent a smallish share of the stock market's total capitalization. Rather, it is the less-heralded inflation in the P/E of conventional companies. They used to trade at 10, now they're closer to 30, and he wondered if the benefits of less friction in the economy could entirely justify such a rise. In his view, the height of the fall might not look forbidding but, considering the mass of the falling object, one might rethink one's evaluation of potential damages.
But, there is perhaps more to the oracle, more than a buyer's market and its consequences. Like any good oracle, it might also pose the opposite interpretation. How about the Internet creating a seller's market? Look at eBay, and its recently announced acquisition of a respected conventional auction house, Butterfield and Butterfield of San Francisco.
Just as the efficient circulation of information contributes to lower prices, it seems that eBay, by efficiently matching sellers and buyers, liberates bottled-up supply and demand, creating an apparently illogical combination of higher prices for the contents of my attic and better prices for people looking for that stuff. Thus the conclusion that a conventional auction house would be synergistic with its cyber-sibling. Butterfield's experience and reputation make higher-priced items easier to appraise and sell, and eBay is a great broadcast vehicle for the information, and a great net to catch goods and sellers with. Another great retroactively obvious Internet story. Where are the next ones?