We've been telling you since the beginning that threads are neat and wonderful. A number of Newsletter articles have been written on how to increase parallelism in your applications by creating more threads and synchronizing them. Here are a few of those articles:
Be Engineering Insights: Fun with Threads, Part 1
Be Engineering Insights: Fun with Threads, Part 2
Be Engineering Insights: Fun with Semaphores
Be Engineering Insights: Summer Vacations and Semaphores
All this is good, but it helped create the impression that programmers who spawn more threads are smarter than others. This perhaps led to some abuse and excess, with what appeared to be a contest among programmers to see who could spawn the most threads. This doesn't really come as a surprise: since multithreading is a relatively new programming model, some abuse is to be expected.
Abuse—or excessive use—of multithreading can cause a number of problems. First, threads are not free. It's true that they are lightweight. A BeOS thread is not as heavyweight as a typical UNIX process—for example, the address space and file descriptors are shared between teams. But they do use up precious system resources: memory, semaphores, etc.
For a start, each thread has a stack, which has to be allocated and consumes memory. How much memory? It depends on the thread, as stack pages are allocated on demand. But worse is the fact that the kernel stack—the part of the stack which is used when the thread runs in the kernel—has to be preallocated and locked down in memory—it can't be paged out to disk. This is about 32K of memory per thread that you can kiss goodbye. In addition, there are kernel structures used to keep the thread state, some of them stored in locked memory as well. As a result, 20 threads eat up approximately 1M of memory. That's a lot.
Perhaps you don't care: you are writing an application targeted to high-end users with plenty of memory. And you're telling me that memory is cheap these days anyway. I could argue that this approach tends to make systems fatter and slower. But I have something more important to warn you about. By creating more threads and having to synchronize them, you greatly increase the risks of deadlocks. That risk does not increase linearly with the number of threads, but with the number of possible interactions between threads—which grows much faster.
Since I'm discussing interactions between threads, it might be useful to distinguish between different kinds of multithreading.
Regular multithreading: The threads execute the same code on different data, and there is very little interaction between them. Typically, applications dealing with computations that can be easily parallelized, such as image processing, resort to this technique. Servers can also fire a thread for every client request, if the request is complex enough. For this type of multithreading, synchronization between the different threads is usually quite basic: the control thread fires all the computing threads and waits for them to complete (a minimal sketch of this pattern follows this classification).
Irregular multithreading: The threads interact in a complex manner. Each thread can be seen as an active agent that processes data and sends messages to other agents. Media Server nodes are an example of such an organization. Synchronization is usually complex: it can be of the producer-consumer type, using ports, or it can be customized, built on basic tools like semaphores or shared memory.
This is clearly not a universal classification, just a simple way to look at multithreading from the standpoint of interaction complexity.
With regular multithreading, adding threads does not add to the overall complexity of the application, because the additional threads don't interact with one another. On the other hand, adding threads in the context of irregular multithreading increases complexity, because the additional threads do interact with one another.
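To make the regular case concrete, here's a minimal sketch of the fire-and-wait pattern using the kernel threading API. The worker function, names, and thread count are illustrative assumptions, not code from any of the articles above:

#include <OS.h>

#define NUM_WORKERS 4

static int32 slice[NUM_WORKERS];

// Each worker executes the same code on its own slice of the data.
static int32 worker(void *data)
{
    int32 index = *(int32 *)data;
    /* ... process slice 'index' of the image ... */
    return 0;
}

void process_in_parallel(void)
{
    thread_id threads[NUM_WORKERS];

    // The control thread fires all the computing threads...
    for (int32 i = 0; i < NUM_WORKERS; i++) {
        slice[i] = i;
        threads[i] = spawn_thread(worker, "worker",
            B_NORMAL_PRIORITY, &slice[i]);
        resume_thread(threads[i]);
    }

    // ...and waits for them all to complete.
    for (int32 i = 0; i < NUM_WORKERS; i++) {
        status_t result;
        wait_for_thread(threads[i], &result);
    }
}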
My warning about the increased risk of deadlock clearly only concerns applications that use irregular multithreading. The risk of deadlock is high, and quickly gets beyond anybody's comprehension as the number of threads involved becomes larger than just a few. These deadlocks are usually non-trivial, and it might prove useful to review some of the classic scenarios:
In an architecture where semaphores are nested, it's critical that the semaphores are always acquired in the same order by all the threads. If thread A acquires semaphore a and then semaphore b, while thread B acquires b and then a, a deadlock can occur. This gets increasingly complicated as the number of nested semaphores and threads increases.
It's easy to run into a circular dependence. The loop can involve two or more threads. With two threads, A sends a request to B, which in turn sends a request back to A. If the requests are synchronous, the deadlock appears right away. In the asynchronous case, the message queue A is reading from has to be full for the deadlock to be triggered. This makes these bugs hard to localize, and they can remain hidden for a long time. A standard but terrible work-around for these bugs is to increase the message buffer size... In addition, circular dependences involving more than two threads are even more difficult to find.
A simple implementation of reader/writer synchronization involves a counted semaphore initialized with a count of N. Readers acquire the semaphore with a count of one, writers with a count of N. This implementation allows up to N concurrent readers. But what if a reader wants to become a writer? You might think that it should acquire N-1, but this leads to a deadlock if there is another writer in the wait queue of the semaphore. The correct technique is to release as a reader and reacquire as a writer.
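Here's a minimal sketch of that scheme with kernel semaphores; the value of N, the function names, and the promotion helper are illustrative assumptions:

#include <OS.h>

#define N 32    /* maximum number of concurrent readers (an assumption) */

static sem_id rw_sem;

void rw_init(void)      { rw_sem = create_sem(N, "rw_lock"); }

void read_lock(void)    { acquire_sem(rw_sem); }
void read_unlock(void)  { release_sem(rw_sem); }

/* A writer takes all N counts at once, excluding both readers and
   other writers. */
void write_lock(void)   { acquire_sem_etc(rw_sem, N, 0, 0); }
void write_unlock(void) { release_sem_etc(rw_sem, N, 0); }

/* The correct promotion: release the read count first, then reacquire
   as a writer. Acquiring N-1 while still holding a read count can
   deadlock if another writer is already queued. */
void promote_to_writer(void)
{
    read_unlock();
    write_lock();
}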
(see Be Engineering Insights: Benaphores)
Benaphores were invented by Dr. Benoît Schillings in the infancy of BeOS. They look like semaphores, taste like semaphores, and they're faster. But they lack some features offered by kernel semaphores, which means you can't always take an application and replace its semaphores with benaphores.
Specifically:
Benaphores require all the threads using the benaphore to live in the same address space, as they rely on shared memory. Semaphores are systemwide.
It's illegal to acquire a deleted benaphore. Acquiring a deleted semaphore, by contrast, is guaranteed to return B_BAD_SEM_ID. Semaphore deletion can hence be used to synchronize two threads; benaphore deletion cannot.
Benaphores cannot be acquired with a count greater than 1 in the general case. For example, implementing the previous reader/writer synchronization using benaphores will lead to deadlocks if more than one thread asks for write access. Consider the following scenario:
The benaphore is initialized. Initial benaphore count: N. Its associated semaphore has a count of 0.
Thread A gets read access. The benaphore count decrements to N-1.
Thread B asks for write access. The benaphore count decrements to -1. But it is preempted before it acquires the semaphore with a count of 1.
Thread C asks for write access. The benaphore count decrements to -1-N and C waits on the associated semaphore with a count of N.
Thread B finally waits on the semaphore with a count of 1.
Thread A releases its read access. It should unblock B, waiting for its write access. The benaphore count increments to -N and the semaphore is released with a count of 1. Unfortunately, C "passed" B in the semaphore queue, and C won't be released because it's waiting with a count of N.
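For reference, here's a minimal sketch of the classic benaphore construction (an atomic counter in front of a kernel semaphore); the class name and layout are mine, not from the original Benaphores article:

#include <OS.h>

class Benaphore {
public:
    Benaphore()  { fCount = 0; fSem = create_sem(0, "benaphore"); }
    ~Benaphore() { delete_sem(fSem); }

    status_t Lock()
    {
        // atomic_add() returns the previous value; if another thread
        // already holds the lock, block on the semaphore.
        if (atomic_add(&fCount, 1) > 0)
            return acquire_sem(fSem);
        return B_OK;
    }

    void Unlock()
    {
        // If other threads piled up behind us, wake one of them.
        if (atomic_add(&fCount, -1) > 1)
            release_sem(fSem);
    }

private:
    int32  fCount;
    sem_id fSem;
};

In the uncontended case, Lock() and Unlock() never enter the kernel, which is where the speed advantage over plain semaphores comes from.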
I remember my initial experience of C++, and I can't help thinking of developers confronting multithreading for the first time. My first reaction to C++ was enthusiasm. I was very excited about seemingly neat features like operator overloading and multiple inheritance. I used it and abused it. What had to happen happened: I got trapped a number of times by subtleties I didn't grasp at first, many of them undocumented or compiler-dependent. I almost became disillusioned from getting burned so many times. But eventually I found a good sense of balance, knowing where to stop the house of cards I was building.
It's the same with multithreading: I believe all developers go through the same phases of excitement, disillusion and eventually wisdom. I can only recommend the greatest precautions when designing multithreaded applications. It's tempting to push multithreading to its limits, using a lot of threads that interact in complex ways. But it's always better to start with an architecture you fully comprehend.
Many developers ask how to archive interface objects and reuse them later. There are several reasons why you'd want to do that—here are the main ones:
It makes sharing the work among a team much easier. One person can design the UI, while another actually writes the code for it. It allows non-programmers to design user interfaces, and programmers don't have to change their code each time the user interface design changes.
It avoids hard-coding sizes, colors, layouts, text, and most other visible interface parameters, making it easier to create different interfaces for the same app (the most striking example is internationalization, but it's not the only one).
It makes it possible to write very little code in the app itself to create the interface objects and put them on screen.
That's enough marketing talk—this is "Engineering Insights" after all. Let's see what has to be engineered. And here's the good news: there's nothing you have to do, or, more accurately, very little.
Take a look at this example:
ftp://ftp.be.com/pub/samples/interface_kit/archint.zip
It comes with a makefile. Compile it with make, or use an alternate resource with make alt. Believe it or not, both versions use the exact same code (with no cheating by simply checking a resource flag and changing the whole behavior of the app depending on this flag).
Now let's see the magic code, in Application.cpp. The first part loads a BMessage from a resource (no great magic here):
BMessage *ReadMessageFromResource(int32 id)
{
    app_info ai;
    BFile f;
    BResources r;
    size_t res_size;
    const void *res_addr;
    BMessage *msg = new BMessage;

    if ((be_app->GetAppInfo(&ai) != B_OK)
        || (f.SetTo(&ai.ref, B_READ_ONLY) != B_OK)
        || (r.SetTo(&f) != B_OK)
        || ((res_addr = r.LoadResource(B_MESSAGE_TYPE, id, &res_size)) == NULL)
        || (msg->Unflatten((const char *)res_addr) != B_OK)) {
        delete msg;
        return NULL;
    }
    return msg;
}
The second part creates the window from the BMessage; this is where the magic lies:
AWindow::AWindow(BMessage *m)
    : BWindow(m)
{
    atomic_add(&numwindows, 1);
    numclicks = m->FindInt32("clicks");
    Show();
}
As you can see, there's nothing difficult here. You just call the base class constructor and extract any extra data you might have stored in the BMessage. You'll notice that I didn't define any Instantiate() function for my class, because I knew which class was stored in the BMessage. If you're doing something more complex (such as a multiple window interface), you'll want to implement Instantiate() so that you can take a bunch of archived objects and feed them to instantiate_object(), which will do the job for you. See the BArchivable chapter in the Be Book for more details.
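For the more complex case, an Instantiate() implementation is only a few lines. Here's a hedged sketch following the standard BArchivable pattern, using my AWindow class as the example (the article itself doesn't include one):

BArchivable *AWindow::Instantiate(BMessage *archive)
{
    // validate_instantiation() checks that the archive really
    // describes an AWindow before we construct one from it.
    if (!validate_instantiation(archive, "AWindow"))
        return NULL;
    return new AWindow(archive);
}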
This last part shows why custom BViews are not needed:
void AWindow::MessageReceived(BMessage *m)
{
    switch (m->what) {
        case 'colr':
        {
            // removed for clarity
            break;
        }
        case 'dump':
        {
            DumpArchive();
            break;
        }
        case 'clon':
        {
            CloneWindow();
            break;
        }
        default:
            BWindow::MessageReceived(m);
            break;
    }
}

void AWindow::NewColor(rgb_color col)
{
    BView *v = FindView("colorview");
    if (v != NULL) {
        v->SetViewColor(col);
        v->Invalidate();
    }
}
The trick is that the default target for a BControl is its BWindow, and a BWindow can find BViews by name. It's also a good idea to do some dynamic type-checking on the BView returned by FindView().
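Here's a hedged sketch of that type check, assuming C++ RTTI is enabled; the method name and the view name "label" are hypothetical, and the expected class is arbitrarily taken to be BStringView:

void AWindow::NewLabel(const char *text)
{
    // dynamic_cast returns NULL if the archived object named
    // "label" isn't really a BStringView.
    BStringView *sv = dynamic_cast<BStringView *>(FindView("label"));
    if (sv != NULL)
        sv->SetText(text);
}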
Now let's look at some drawbacks of using archived interface objects:

You'll probably need one or two custom classes which are missing from the current API. One is a PictureView (which simply displays a BPicture, just as a BStringView displays a string); another is a PictureRadioButton (because the current BPictureButton has no radio mode).

Unless you reverse-engineer their format, you won't be able to directly create BMessages that contain archived BPictures. You'll have to create a BPicture and Archive() it.

For some reason, some BViews will ignore any ViewColor and/or LowColor that's stored in the archive, because those views adjust their ViewColor and/or LowColor to match those of their parent. Most of the time, this is what you want. If it's not, you have two solutions: 1) you can manually override the colors in your window constructor (see the example in Application.cpp); or 2) you can enclose the BView you want to "protect" in another BView of the same size (see the example in AddAlternateResource.cpp).
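For solution 1, the override amounts to a few lines after the BWindow unarchiving constructor runs. A hedged sketch (the view name and the color values here are illustrative, not taken from the sample code):

AWindow::AWindow(BMessage *m)
    : BWindow(m)
{
    // ... other unarchiving work ...

    // Solution 1: force the colors, overriding whatever the
    // parent imposed during unarchiving.
    BView *v = FindView("colorview");
    if (v != NULL) {
        v->SetViewColor(216, 216, 216);
        v->SetLowColor(216, 216, 216);
        v->Invalidate();
    }
}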
This is the first in a series of articles about programming with the new Media Kit.
Soon there will be Media Kit add-ons available that provide support for various video file formats. This week, let's take a look at how to play back a movie using the new Media Kit and these add-ons.
You can download the code for this project at ftp://ftp.be.com/pub/samples/media_kit/simplemovie.zip. It consists of three C++ source files and two header files.
main.cpp instantiates the MediaApp object, passing argv[1] to the object. This argument should be the name of the movie to play (yes, this is a command-line movie player, hence the name SimpleMovie).
The MediaApp constructor converts the movie's pathname into an entry_ref and passes that to a function called OpenRef(), which will actually open and begin playing the movie file. It also calls SetPulseRate() to cause the MediaApp::Pulse() function to be called periodically; we'll use the Pulse() function to watch for the movie to end.
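The article doesn't reproduce that constructor, but it might look roughly like this; the application signature, the pulse interval, and the use of get_ref_for_path() are my assumptions, not SimpleMovie's actual code:

MediaApp::MediaApp(const char *path)
    : BApplication("application/x-vnd.Be-SimpleMovie"),   // assumed signature
      player(NULL)
{
    entry_ref ref;

    // Convert the pathname from argv[1] into an entry_ref...
    if (get_ref_for_path(path, &ref) == B_OK)
        player = OpenRef(&ref);       // ...and open/start the movie

    // Ask for Pulse() calls twice a second (interval is an assumption).
    SetPulseRate(500000);
}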
OpenRef() looks like this:
MediaPlayer *MediaApp::OpenRef(entry_ref *ref)
{
    MediaPlayer *player = new MediaPlayer(ref);
    if (player->InitCheck() != B_OK) {
        printf("Can't set up the player\n");
        delete player;
        return NULL;
    }
    return player;
}
All it has to do is instantiate a MediaPlayer object to represent the movie. The movie is started automatically by the MediaPlayer constructor, so once this is done, the movie is playing. If an error occurs setting up the movie, MediaPlayer::InitCheck() will return a value other than B_OK, and we throw away the object.
The Pulse() function checks the movie periodically to see if it's done playing:
void MediaApp::Pulse(void)
{
    if (player) {
        if (!player->Playing()) {
            SetPulseRate(0);    // No more pulses, please
            delete player;
            player = NULL;
            PostMessage(B_QUIT_REQUESTED);
        }
    } else {
        PostMessage(B_QUIT_REQUESTED);
    }
}
This is done by calling the MediaPlayer::Playing() function. If it returns true, the movie is still playing; otherwise, the movie is done. If it's done, we turn off pulsing by calling SetPulseRate(0), delete the player object (which, as we'll see later, closes the various media connections), and post a B_QUIT_REQUESTED message to ourselves to quit the application.

If the MediaPlayer object, player, wasn't opened properly and is NULL, we post a quit request to bail out immediately.
Now we get to the meaty, "we've been waiting for this for two months"
part: the MediaPlayer
class. Before we begin, I should point out that
this class assumes that the video is encoded and that the audio is not,
and probably won't work right in any other case. It works fine for
playing typical Cinepak-encoded QuickTime movies, though. Because the
nodes aren't finished yet as I write this, this code isn't as flexible as
it could be.
The MediaPlayer constructor's job is to find an appropriate node to process the movie file and instantiate a copy of that node to actually perform the playback. It sets playingFlag to false (we use this flag to keep track of whether or not the movie is playing), and obtains a BMediaRoster:
roster = BMediaRoster::Roster();    // Get the roster object
This static call in the BMediaRoster class returns a media roster we can use. This object is used by all applications that interact with the Media Kit; it acts as an intermediary between the application and the nodes that provide media functionality.
Then it's necessary to identify a node that can handle the movie file that the user specified. This is done by calling BMediaRoster::SniffRef(). This function looks at the file and returns a dormant_node_info structure that describes the add-on best suited to handling the file.
initStatus = roster->SniffRef(*ref, 0, &nodeInfo);
if (initStatus) {
    return;
}
Note that errors are stashed in a member variable called initStatus; the InitCheck() function returns this value so the application can determine whether or not the MediaPlayer was properly initialized.
Once an appropriate add-on has been identified and information about it stashed in nodeInfo, a node needs to be instantiated from that add-on. Nodes located in media add-ons are referred to as "dormant nodes." We call InstantiateDormantNode() to accomplish this:
initStatus = roster->InstantiateDormantNode(nodeInfo, &mediaFileNode);
Since the media_node mediaFileNode is a file handling node, we next have to tell the node what file to handle, by calling SetRefFor():
roster->SetRefFor(mediaFileNode, *ref, false, &duration);
This sets up the mediaFileNode to read from the specified entry_ref. The movie's length (in microseconds) is stored in the bigtime_t variable duration.
Now that the file node has been instantiated, it's time to instantiate the other nodes needed to play back a movie. The Setup() function handles this; we'll look at it momentarily. After the Setup() function returns safely, MediaPlayer::Start() is called to begin playback.
Setup() begins by locating the standard audio mixer and video output nodes. These will be used to actually play the movie to the speakers (or other audio output device, as configured in the Audio preference panel) and to a window on the screen (or other video output device, as configured in the Video preference panel):
err = roster->GetAudioMixer(&audioNode);
err = roster->GetVideoOutput(&videoNode);
Next, a time source is needed. A time source is a node that can be used to synchronize other nodes. By default, nodes are slaved to the system time source, which is the computer's internal clock. However, this time source, while very precise, isn't good for synchronizing media data, since its concept of time has nothing to do with actual media being performed. For this reason, you typically will want to change nodes' time sources to the preferred time source. We get that using the following code:
err = roster->GetTimeSource(&timeSourceNode);
b_timesource = roster->MakeTimeSourceFor(timeSourceNode);
The first call obtains a media_node representing the preferred time source to which we'll slave our other nodes. The second call, to MakeTimeSourceFor(), actually obtains a BTimeSource node object for that time source. This will be used later on for timing-related calculations that can't be done through the media roster.
You can think of a media node (represented by the media_node structure) as a component in a home theater system. It has inputs for audio and video (possibly multiple inputs for each), and outputs to pass that audio and video along to other components in the system. To use the component, you have to connect wires from the outputs of some other components into the component's inputs, and the outputs into the inputs of other components.
The Media Kit works the same way. We need to locate audio outputs from the mediaFileNode and find corresponding audio inputs on the audioNode. This is analogous to choosing an audio output from your new DVD player and matching it to an audio input jack on your stereo receiver. Since you can't use ports that are already in use, we call GetFreeOutputsFor() to find free output ports on the mediaFileNode, and GetFreeInputsFor() to locate free input ports on the audioNode.
err = roster->GetFreeOutputsFor(mediaFileNode, &fileAudioOutput, 1,
          &fileAudioCount, B_MEDIA_RAW_AUDIO);
err = roster->GetFreeInputsFor(audioNode, &audioInput, fileAudioCount,
          &audioInputCount, B_MEDIA_RAW_AUDIO);
We only want a single audio connection between the two nodes (a single connection can carry stereo sound), and the connection is of type B_MEDIA_RAW_AUDIO. On return, fileAudioOutput and audioInput describe the output from the mediaFileNode and the input into the audioNode that will eventually be connected to play the movie's sound.
We likewise have to find a video output from the mediaFileNode and an input into the videoNode. In this case, though, we expect the video output from the mediaFileNode to be encoded, and the videoNode will want to receive raw, uncompressed video. We'll work that out in a minute; for now, let's just find the two ports:
err = roster->GetFreeOutputsFor(mediaFileNode, &fileNodeOutput, 1,
          &fileOutputCount, B_MEDIA_ENCODED_VIDEO);
err = roster->GetFreeInputsFor(videoNode, &videoInput, fileOutputCount,
          &videoInputCount, B_MEDIA_RAW_VIDEO);
The problem we have now is that the mediaFileNode is outputting video that's encoded somehow (in Cinepak format, for instance). The videoNode, on the other hand, wants to display raw video. We need another node, in between these, to decode the video (much like having an adapter to convert PAL video into NTSC, for example). This node will be the codec that handles decompressing the video into raw form.

We need to locate a codec node that can handle the video format being output by the mediaFileNode. It's accomplished like this:
nodeCount = 1;
err = roster->GetDormantNodes(&nodeInfo, &nodeCount, &fileNodeOutput.format);
if (!nodeCount) {
    printf("Can't find the needed codec.\n");
    return -1;
}
This call to GetDormantNodes() looks for a dormant node that can handle the media format specified by the file node's output media_format structure. Information about the node is returned in nodeInfo. nodeCount indicates the number of matching nodes that were found; if it's zero, we report that no codec was found.
Note that in real life you should ask for several nodes, and search through them, looking at the formats until you find one that best meets your needs.
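A hedged sketch of what that search might look like; the candidate count is arbitrary, and format_suits_us() is a hypothetical application-side test, not a Media Kit call:

dormant_node_info candidates[16];
int32 candidateCount = 16;

err = roster->GetDormantNodes(candidates, &candidateCount,
          &fileNodeOutput.format);
for (int32 i = 0; i < candidateCount; i++) {
    // format_suits_us() stands in for whatever test your application
    // applies to each candidate's formats.
    if (format_suits_us(candidates[i])) {
        nodeInfo = candidates[i];
        break;
    }
}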
Then we use InstantiateDormantNode() to instantiate the codec node, and locate inputs into the node (that accept encoded video) and outputs from the node (that output raw video):
err = roster->InstantiateDormantNode(nodeInfo, &codecNode);
err = roster->GetFreeInputsFor(codecNode, &codecInput, 1, &nodeCount,
          B_MEDIA_ENCODED_VIDEO);
err = roster->GetFreeOutputsFor(codecNode, &codecOutput, 1, &nodeCount,
          B_MEDIA_RAW_VIDEO);
Now we're ready to start connecting these nodes. If we were setting up a home theater system, right about now we'd be getting rug burns on our knees and skinned knuckles on our hands, trying to reach behind the entertainment center to run wires. The Media Kit is way easier than that, and doesn't involve salespeople telling you to get expensive gold-plated cables.
We begin by connecting the file node's video output to the codec's input:
tryFormat = fileNodeOutput.format;
err = roster->Connect(fileNodeOutput.source, codecInput.destination,
          &tryFormat, &fileNodeOutput, &codecInput);
tryFormat indicates the format of the encoded video that will be output by the mediaFileNode. Connect(), in essence, runs a wire from the file node's video output (fileNodeOutput) to the codec node's input.
You may wonder what's up with the fileNodeOutput.source and codecInput.destination structures. These media_source and media_destination structures are simplified descriptors of the two ends of the connection. They contain only the data absolutely needed for the Media Kit to establish the connection. This saves some time when issuing the Connect() call (and time is money, especially in the media business).
Next it's necessary to connect the codec to the video output node. This begins by setting up tryFormat to describe raw video of the same width and height as the encoded video being fed into the codec, then calling Connect() to establish the connection:
tryFormat.type = B_MEDIA_RAW_VIDEO;
tryFormat.u.raw_video = media_raw_video_format::wildcard;
tryFormat.u.raw_video.display.line_width =
    codecInput.format.u.encoded_video.output.display.line_width;
tryFormat.u.raw_video.display.line_count =
    codecInput.format.u.encoded_video.output.display.line_count;
err = roster->Connect(codecOutput.source, videoInput.destination,
          &tryFormat, &codecOutput, &videoInput);
Now we connect the audio from the media file to the audio mixer node. We just copy the media_format from the file's audio output, since both ends of the connection should exactly match.
tryFormat = fileAudioOutput.format;
err = roster->Connect(fileAudioOutput.source, audioInput.destination,
          &tryFormat, &fileAudioOutput, &audioInput);
The last step of configuring the connections is to ensure that all the nodes are slaved to the preferred time source:
err = roster->SetTimeSourceFor(mediaFileNode.node, timeSourceNode.node);
err = roster->SetTimeSourceFor(videoNode.node, timeSourceNode.node);
err = roster->SetTimeSourceFor(codecOutput.node.node, timeSourceNode.node);
The Start() function actually starts the movie playback. Starting playback involves starting, one at a time, the nodes involved in playing back the movie: the media file's node (mediaFileNode), the codec, and the video node. Because there's lag time between starting each of these nodes, we pick a time a few moments in the future for playback to begin, and schedule each node to start playing at that time. So we begin by computing that time in the future:
err = roster->GetStartLatencyFor(timeSourceNode, &startTime);
startTime += b_timesource->PerformanceTimeFor(
                 BTimeSource::RealTime() + 1000000 / 50);
The BTimeSource::RealTime() static member function is called to obtain the current real system time. We add a fiftieth of a second to that time and convert it into performance time units; this is when the movie will begin (basically a 50th of a second from "now"). That value is added to the one returned by GetStartLatencyFor(), which is the time required to actually start the time source and all the nodes slaved to it, and the result is saved in startTime.
Then we simply call BMediaRoster::StartNode() for each node, specifying startTime as the performance time when playback should begin:
err = roster->StartNode(mediaFileNode, startTime);
err = roster->StartNode(codecNode, startTime);
err = roster->StartNode(videoNode, startTime);
And we set playingFlag to true, since we've now begun playback. At this point (actually, a 50th of a second after this point, but who's counting?), the movie will begin playing.
Notice that we don't start the audio mixer node (audioNode), and, as we'll see shortly, we don't stop it, either. The audio mixer node is always running, and an application should never stop it.
The Stop() function stops movie playback by calling StopNode() for each node. It then sets playingFlag to false:
err = roster->StopNode(mediaFileNode, 0, true);
err = roster->StopNode(codecNode, 0, true);
err = roster->StopNode(videoNode, 0, true);
playingFlag = false;
The true values indicate that the nodes should stop playing immediately, instead of at some time in the future. If we wanted the nodes to stop in sync with each other, we could compute a performance time slightly in the future, and specify that time instead.
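For instance, a synchronized stop a tenth of a second from now might look like this (the interval is an arbitrary assumption on my part):

bigtime_t stopTime = b_timesource->PerformanceTimeFor(
                         BTimeSource::RealTime() + 1000000 / 10);

err = roster->StopNode(mediaFileNode, stopTime, false);
err = roster->StopNode(codecNode, stopTime, false);
err = roster->StopNode(videoNode, stopTime, false);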
The Playing() function determines whether or not the movie is still playing. Since we don't provide a pause feature, we can do this simply by looking to see whether or not the current performance time is at or past the end of the movie:
currentTime = b_timesource->PerformanceTimeFor(BTimeSource::RealTime());
if (currentTime >= startTime + duration) {
    playingFlag = false;
}
Here we obtain the current performance time by taking the real time and passing it through PerformanceTimeFor(), then look to see if it's equal to or greater than the starting time plus the movie's duration. If it is, the movie's done playing.
Last, but not least, let's look at the MediaPlayer class destructor. It handles disconnecting the nodes from one another:
Stop();
err = roster->Disconnect(mediaFileNode.node, fileNodeOutput.source,
          codecNode.node, codecInput.destination);
err = roster->Disconnect(codecNode.node, codecOutput.source,
          videoNode.node, videoInput.destination);
err = roster->Disconnect(mediaFileNode.node, fileAudioOutput.source,
          audioNode.node, audioInput.destination);
This code makes sure playback is stopped (disconnecting currently playing nodes is discouraged), then disconnects each of the three connections we established in Setup(). Then we need to let the Media Kit know we're done with the nodes by releasing them. We don't release the video and mixer nodes because we didn't actually instantiate them ourselves.
roster->ReleaseNode(codecNode);
roster->ReleaseNode(mediaFileNode);
Hopefully this sample code will be helpful in learning to write software that uses the new Media Kit for playback of media data. Updates to the Media Kit chapter of the Be Book are coming over the next few weeks (a large BMediaRoster update is being reviewed for accuracy now and should be available soon). Have fun!
Last week's column, "Another Bedtime Story," triggered more mail than usual. Most of it was positive, with some requests for clarification of our position. Are we against browser integration—yes, no, and why? To paraphrase a highly placed government official, it depends on what the meaning of "integration" is—and, one might add for precision, on the intent, manner, and consequences of integration.
What we have today with Explorer and Windows is what I'd call tentacular integration. (I won't say here what the spellchecker on my diminutive Toshiba Libretto offers as an alternative for tentacular, a word it pretends not to know. It is in the American Heritage Dictionary and my Libretto will run the BeOS once we get Toshiba to release the specs of their sound chip.)
There is a single word for Explorer, but the software itself has many tentacles that reach deep inside the system. As a result, Microsoft seems to be saying, a lot of damage will occur if you forcibly remove Explorer. Specifically, when the now famous "Felten program" attempted to remove Explorer, Microsoft wanted to show that Web access and navigation were still possible, if you knew your way around the Windows Registry, where a myriad of settings are stored. Explorer was so tightly integrated with Windows that one couldn't really remove it, just hide and disable it.
Furthermore, as a result of this failed "Explorer-ectomy," performance suffered greatly. Sorry about that, said Microsoft, it's an imperfect world, but through our efforts consumers benefit from tight integration.
I won't add to the sum of ironies provoked by the videotapes that tried to persuade the courts of justice and public opinion of the soundness of this argument. Thinking of Redmond's serene and understated culture, one can imagine the sharing of thoughts following the discovery of unfortunate editing miscues.
But if, as Microsoft contends, Explorer is not a separate application so much as an operating system upgrade—hence its size, depth, and complexity—one might be tempted to explore that line of reasoning. That is, the Web is important and is now tightly woven into our everyday lives. Certainly, a mere application cannot do justice to such an important and pervasive function. That responsibility belongs to the platform, to the operating system.
Perhaps. But couldn't one make a similar argument about word processing, spreadsheets, or presentations? Shouldn't these important functions also be integrated into the operating system? Come to think of it, if Microsoft Office isn't integrated with Windows, it comes pretty close for millions of users who receive Office as a bundled, if not integrated, product. The argument that one must "integrate" Explorer, while one can merely "bundle" Office, doesn't withstand technical scrutiny. Take two identical PCs loaded with a late version of Windows 95, OSR 2.1 or later. Install (integrate if you prefer) Explorer on one and Navigator on the other, and compare performance. There's no significant difference.
If one reads carefully through Mr. Allchin's deposition and the language used in various video demonstrations, one sees the words "rich" and "richness" used to describe the improved user experience that results from making elements of the user interface more Web-like. This poses another question regarding the "how" of integrating Web capabilities inside the operating system. To oversimplify a bit, couldn't Microsoft have broken the job into two pieces—a system module and an application module? This approach would make removing the application module a simple and safe task. It would also make it easier to install another browser while preserving the "richness" afforded to other applications by the system module. One might ask whether Microsoft had non-technical motives in choosing a tentacular implementation instead of a modular one.
Lastly, in reading the court transcript, Microsoft seems to allege that the BeOS NetPositive browser is "integrated," and cites the help system as proof. If you trash NetPositive, a simple operation, the help system no longer works: it needs a now absent HTML interpreter. Ergo, NetPositive is integrated.
Assume for a moment that the help system is written in the format of an editor supplied with the OS. If you delete the editor, you can no longer read the help files. Unless, of course, you use a third-party editor. Does this mean that the supplied editor is integrated with the OS? Of course not. And what will Microsoft say when one or more of the third-party browsers currently under development for the BeOS become available and read the help file or the BeOS documentation—even if NetPositive is missing? Will that make them—ipso facto—integrated with the BeOS?