Far and away, most of Be's developers are writing applications. I love these guys because they are, through their creative best efforts, writing the killer titles that attract paying customers to the BeOS, thereby paying for my children's tickets to college with their 100 GHz Pentium Wristband firmly strapped on. However, there is another class of programmer developing for the BeOS.
The oppressed community of device driver writers is seldom heard from. Slogging through the digital bilge that accumulates in the hold of any computer platform, they grunt and groan to keep the disks whining, the mouse smooth and the keyboard responsive.
This is not the clean, antiseptic world of BLoopers and BViews. The ugly
reality of ancient hardware, designed by engineers adept at squeezing
maximal functionality into a minimum of gates and thereby rendering the
device inscrutable to anyone but themselves, implemented by penny
pinching silicon sweat shops with little regard for stated timing
parameters, documented by sadistic teases who grudgingly reveal just
enough detail to confuse even the folks at Langley—this is the milieu
of the device driver writer.
Not that I have a chip on my shoulder.
Hardware devices are often capable of transferring data directly to or from the main system memory, bypassing the memory management unit (MMU) in the main processor that translates virtual addresses into actual physical addresses. Drivers written for these Direct Memory Access (DMA) devices need to use the physical addresses in main system memory for their transfers.
Most clients of device drivers, however, use virtual addresses when referring to system memory. Usually, the range of virtual addresses is pageable. That is, the kernel is free to decide, at almost any time, that it needs the physical memory for something else, after first writing the current contents to a disk-based swap file for retrieval later, when a program actually gets around to using them.
If a device driver has recorded the physical address of a buffer (using the get_memory_map() call) and starts blasting data into it while the kernel has decided to use it for something else (perhaps some frilly user interface widget), things could get exciting.
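To make that concrete before we get to the cure, here is a minimal sketch of asking the kernel which physical fragments back a virtual buffer. The shapes of get_memory_map() and the physical_entry table shown below are from memory of the Kernel Kit headers; check your own OS.h before taking any of it literally, and the table size of 8 is arbitrary.

#include <OS.h>

/* Sketch only: obtain the (physical address, size) fragments that
   back a virtual buffer, as a scatter/gather-capable DMA engine
   would need. */
static long describe_buffer(void *buf, ulong num_bytes)
{
    physical_entry table[8];   /* room for up to 8 physical fragments */
    long err;

    /* Fill table with (physical address, size) pairs covering buf. */
    err = get_memory_map(buf, num_bytes, table, 8);
    if (err < 0)
        return err;

    /* table[0].address is where the first fragment really lives; a
       scatter/gather DMA controller would be handed each fragment in
       turn. We just report success here. */
    return 0;
}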
The lock_memory() call is an essential function exported from the kernel that device drivers can use to prevent such excitement. Device drivers
intending to use DMA should first use the lock_memory()
call on the range
of virtual addresses they will be playing with. lock_memory()
will
guarantee that all the underlying data in the range is actually in
physical memory and not gathering dust in the swap file. It also nails
the data down, so it will not be paged out or moved for the duration of
the lock.
There is, of course, an unlock_memory()
call to undo all these Draconian
measures. Martha Stewart recommends that every call to lock_memory()
be
matched by a call to unlock_memory()
, preferably after the
DMA device has
completed its dirty business.
The prototypes for these two calls are:
extern long lock_memory(
    void   *buf,        /* -> virtual buffer to lock (make resident) */
    ulong   num_bytes,  /* size of virtual buffer */
    ulong   flags
);

extern long unlock_memory(
    void   *buf,        /* -> virtual buffer to unlock */
    ulong   num_bytes,  /* size of virtual buffer */
    ulong   flags
);
The nuances of the flags are not only useful, they even provide insight into some interesting design quirks in a few of the platforms that support the BeOS (or do their damnedest not to!).
The flags are:
#define B_DMA_IO        0x00000001
#define B_READ_DEVICE   0x00000002
B_READ_DEVICE
is used to tell the kernel that the device being read from
will be depositing data directly into memory. In the BeOS implementation
of virtual memory, the kernel often attempts to be clever and see if a
range of memory has been changed since the last time it was read in from
the swap file on disk. If it has not changed, there is no need to write
it back to disk when the memory needs to be used for something else.
The MMU has a built-in mechanism for detecting changes effected by the
processor, but this mechanism does not detect DMA transactions from other
devices. The B_READ_DEVICE
flag tells the kernel that indeed the memory
is going to change, so don't try to be clever about it.
The B_DMA_IO
flag is an unfortunate crutch made necessary by what some
might perceive as a flawed implementation of the bridge between the
PCI
bus and the main system memory bus in certain motherboard designs from
one of our supported platforms. Normally, when a DMA transaction is
started by a device on the PCI bus and travels through the bridge to
system memory, the bridge is responsible for notifying any and all
devices on the main system bus (e.g. the processor(s)) that the memory is
being written, and that those devices should remove any copies of that
memory they may have stashed in a local cache for fast access. This
mechanism was not implemented in a few designs—my suspicion is that it
was implemented, but did not work reliably, and was disabled.
The workaround for this lame behavior is to tell all the processors to
NEVER cache locations that may have DMA transactions coming in. This can
result in a significant performance hit if those locations are accessed
frequently. The B_DMA_IO flag tells the kernel to temporarily mark the range uncacheable until the matching unlock_memory() call.
This turns out to be a very complex operation. Data in that range needs to be flushed from all caches. The data structures used by the cache to determine an address's cacheability need to be updated to reflect the new non-cacheable status—but if a part of the range shares a cache line with anything that gets used in updating the data structures, it will end up back in the cache! This alone probably added a few weeks to the Preview Release's schedule.
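Putting the pieces together, here is a hedged sketch of the whole dance: lock, map, transfer, unlock. The start_dma_read() and wait_for_dma() calls are hypothetical stand-ins for your hardware's actual interface, and the single-entry get_memory_map() call assumes the locked buffer happens to be physically contiguous.

#include <OS.h>

/* Hypothetical hardware hooks; substitute your device's real routines. */
extern void start_dma_read(void *phys, ulong size);
extern void wait_for_dma(void);

/* Sketch only: lock the buffer, find its physical address, let the
   device scribble into it, then unlock with the same flags. */
static long dma_read_into(void *buf, ulong num_bytes)
{
    ulong flags = B_DMA_IO | B_READ_DEVICE;  /* device will write memory */
    physical_entry where[1];                 /* assumes one contiguous fragment */
    long err;

    /* Make the pages resident, pin them down, and (on the afflicted
       platforms) mark the range uncacheable. */
    err = lock_memory(buf, num_bytes, flags);
    if (err < 0)
        return err;

    err = get_memory_map(buf, num_bytes, where, 1);
    if (err == 0) {
        start_dma_read(where[0].address, where[0].size);  /* hypothetical */
        wait_for_dma();                                   /* hypothetical */
    }

    /* Every lock_memory() gets its matching unlock_memory(). */
    unlock_memory(buf, num_bytes, flags);
    return err;
}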
This is only one example of the murky world navigated by the unsung heroes of the modern age—the device driver writer. It's time to give them all raises, take them to lunch and admire their cool white socks. Then lock them back in their beloved windowless rooms where they belong.
SPC (single, privately owned, corporation), 7, well-endowed with stock options, seeks multiple partners to work on the OS of the future.
You must have stamina, a sense of humor, communication abilities, and enjoy eating out six nights a week in Menlo Park. Long walks through kernel code, strenuous hikes through the Tracker, and lazy afternoons spent staring at an app_server crash can all be yours.
I can offer you intellectual discussions, health benefits, bleeding edge experiments, and soft couches.
Don't delay, act now...
http://www.be.com/aboutbe/jobs/index.html

On a serious note, we are looking for a number of great people to fill positions in all areas: marketing, sales, and engineering. In the past we have hired a number of Be developers, William Adams and Jake Hamby to name two. We like Be Developers. Don't be shy, come talk to us, we want to hire you!
If you say something often enough, doesn't that make it so? The BeOS is the Media OS. Pretty fast, able to handle large amounts of data. Frameworks for sight and sound. Well, we've said it often enough that our developers deliver on the promise and are coming out with some very interesting things indeed. Some more mentionables:
Power Pulsar—Raphael MOLL
This little thing is a tekno funk DJ's dream come true. It displays all sorts of nifty lights, curves, and other effects, all based on the pulsations of the music being played. And of course the music source can be varied between CD and digital audio off your hard disk.
Dual Player—Tinic Urou
How about a sound player that plays most sound file formats you can think of and throws in quite a few nifty real-time filters to boot? You just have to play with this one. Tinic is one of my favorite programmers, and is prolific in the extreme. His work really shines here. He uses his own MIDI synthesizer, which sounds very good, and the live effects are enviable.
These are both shareware products that can be found in the Audio section of our BeWare web pages:
http://www.be.com/beware/Audio.html
or in general
http://www.be.com/beware/
Of course, these aren't the only things in BeWare. Every time I've given a demo, I am asked, naturally enough, "Where's the software?" And I often respond, "Take a look at our BeWare pages." I took a look recently at the audio section and found no fewer than five applications capable of playing good quality sound. There were a couple of sound editing systems, and a lot of source code.
What does that mean? Well, it means that we move ever closer to a point at which a person would look foolish for asking, "Where's the software?" Of course we're a long way off, but one inch closer is one inch.
If you are a developer wanting to use sound in your application, you would do well to study, often, what gets dropped into the BeWare site. There is so much source available, and often it's new stuff that isn't simply a port of something from the UNIX or Mac worlds. It gets me all goose pimply to see such a positive outflow of applications all the time.
Another exciting thing is when you can interact with a developer and bounce ideas off one another, spiraling the features further and further upward. Recently, Doug Fulton pulled me aside and said, "Pppssst, hey buddy, you wanna see the killer app?" How could I refuse? Doug showed me this thing called axe. I have no idea what axe stands for, but it was a fun toy to play with. Even I, a decidedly non-audio person, had fun laying down notes on this radar screen thing and was able to make interesting compositions in minutes. I'm sure he'll use it to write a nifty newsletter article, so I won't spoil the fun now. The main point is that audio on the BeOS is a very easy thing to play with. Anyone has quick and easy access to MIDI services, and real-time audio filtering is a snap.
I hope this trend continues and spreads to the visual realm as well. From what I've been seeing lately, I think we have some very serious applications on the horizon that will further demonstrate the meaning and value of the Media OS.
Scalability is a very attractive idea. Unsurprisingly, it is a much abused one as well. Scalable went from signifying something could be climbed to referring to measurability along a scale. Only recently did scalable become a computerese synonym for extensible, a handle for the ability to work over a wide range, a large scale of sizes. By that definition, a scalable architecture is very desirable. An example of scalability would be a family of processors implementing the same instruction set or, on the software side, the same operating system, for targets ranging from watches to mainframes. Desirable, but not very realistic.
You don't just shrink a 64-bit workstation processor or expand a 4-bit micro-controller, nor do you perform similar operations on the equivalent software engines. In practice, the architectures we know offer limited scalability around their original design center. In other words, they don't stray very far from their place of birth. This is true for PC operating systems such as Windows and the Mac OS, for Unix, or for embedded software engines.
Windows NT is an apparent exception, targeting desktops, workstations, servers and, we are told, mainframes. But, here, we're dealing more with modularity than with scalability. To some degree, Windows NT covers a broader range of targets by adding, removing and replacing modules. This is a benefit arising from Windows NT's modern architecture. Its older siblings are more monolithic and cannot be "edited" as easily. In other words, if you try to cut them, they bleed a lot, they have no way to go but up, they never scale down.
But I didn't set out to be a Windows NT advocate, even if I admire the Microsoft combination of engineering and marketing prowess that made NT the holy terror of the enterprise market. I was merely attempting to differentiate scalability from modularity and to give an example of the advantages of more modern, more modular architectures.
Besides broadcasting a few hundred thousand copies of the BeOS through bundling agreements and magazine inserts in the US, Europe, and Japan, we've been experimenting with different versions of our product. Architectural features such as the client-server design inside the BeOS make it relatively easy to reconfigure modules and to remove functions not needed when addressing a different target application. We've been asking ourselves whether our core technology couldn't also be applied to smaller devices, to emerging consumer multimedia and network computing applications.
So far, we've built a small number of interesting test cases; the most extreme one resides on a 1.44 MB floppy, incorporating the core BeOS and our browser. Take a bare-bones Pentium clone costing about $350 (sans hard disk, but including a $19.99 Ethernet card), connect it to the network, boot it from the floppy, and "presto," you're browsing the Web.
It would be very premature to read into this that we intend to make a play for network computing or embedded consumer multimedia applications. We have completed the first phase of BeOS development and are broadcasting the Preview Release. Having established the foundation, we are now moving into a new phase and investigating opportunities to broaden the range of applications for the technology we've created.
And we are also hard at work on a forthcoming update fixing bugs and adding a few polishing touches to the current Preview Release.