This past summer, my wife and I went on a wonderful vacation to Prince Edward Island. We toured the island on our tandem bicycle. It was fabulous—country roads, little traffic, and wonderful people. Cycling is a great way to travel, and it is even better on a tandem. Two people on one bike are faster than those same two people on two bikes, and the back rider can actually spend the afternoon enjoying the sights.
What does this have to do with Be and the BeBox? Maybe nothing; I just wanted to reminisce about summer. But if you think about it, a two-person bike and a two-CPU machine must have something in common. So there is actually some sense to this column.
While I wouldn't claim that programming a BeBox is as easy as riding a bike, I would like to make an analogy to riding a tandem. If you already know how to ride a bike, riding a tandem is really very simple—once you and your partner learn a few rules. Who balances the bike at stops, and does the bike lean to the left or right? What are the signals to start coasting or to resume pedaling? Who gets on and off the bike first and how do you spin the pedals into position when you mount the bike, without whacking your partner in the shins? Those are basically all the rules of the road. Enjoy the ride.
If you have any sort of programming experience, then we hope you'll find programming the BeBox a breeze, just like moving from a single bike to a tandem. And like riding a tandem, there are just a few new rules to learn in order to protect your shins. Interestingly enough, these rules partially stem from the fact that, like a tandem, the BeBox has two engines (CPUs) instead of one.
For some, perhaps many, the BeBox will be their first experience programming on a true preemptive, multithreaded, multiprocessing machine. Not only is the Be operating system multithreaded, but the system itself makes significant use of this capability. As described in an earlier column ("Be Engineering Insights: Programming Should Be Fun"), in the Be operating system each window has its own client-side thread. In addition, every application has a dedicated thread, called the main thread. In many other desktop systems, applications typically consist of a single thread of execution. On the BeBox, applications are inherently multithreaded, simply by virtue of putting up a few windows. This is an important distinction, and I'd like to discuss two consequences of this design.
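To make this concrete, here is a minimal sketch of a Be application that puts up one window. It assumes the BApplication and BWindow classes roughly as documented in The Be Book; the class names, application signature, and constructor arguments shown here are illustrative assumptions, not code taken from this column.

#include <Application.h>
#include <Window.h>

// A window class of our own; the name and frame rectangle are arbitrary.
class SightsWindow : public BWindow {
public:
    SightsWindow()
        : BWindow(BRect(100, 100, 400, 300), "Sights",
                  B_TITLED_WINDOW, 0)
    {
    }

    virtual bool QuitRequested()
    {
        // Ask the application to quit when the window is closed.
        be_app->PostMessage(B_QUIT_REQUESTED);
        return true;
    }
};

class TandemApp : public BApplication {
public:
    TandemApp()
        : BApplication("application/x-vnd.example-tandem")  // made-up signature
    {
    }

    virtual void ReadyToRun()
    {
        // Show() starts the window's own thread; from this point on the
        // main thread and the window thread run concurrently.
        (new SightsWindow())->Show();
    }
};

int main()
{
    TandemApp app;
    app.Run();   // the main thread runs the application's message loop
    return 0;
}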
The first concept is that each thread runs asynchronously, and thus can change its associated state and data at any time. The implication is that code must be designed to avoid using inconsistent or garbage data. For example, suppose the application's main thread has a pointer to a window and wants to get some information out of that window. Perhaps the window has just decided to close, leaving the application with a garbage pointer, or perhaps the window is in the middle of changing its state, so the data is inconsistent. Haphazardly accessing this window can lead to undefined results.
The programmer has several options to handle this situation. One solution is to prevent the user from closing the window and to have all its state changes controlled by the main thread. In this case the main thread itself controls what the window can do, so it can safely access the necessary data. Another option is to use the messaging system. The main thread can post a message to the window requesting information. When the window receives that message, it can ensure that it's in a consistent state and return the desired information. The message-passing framework on the BeBox is multithread-safe—the system handles all the tricky cases, such as the window disappearing in the middle of a "post." The final option is to "lock" the window before directly accessing any data. This locking ensures that the window still exists (that is, the window hasn't closed) and that the window's thread won't execute any code until the window is unlocked. The idea is that, given a pointer to a window, the Lock method will either successfully lock the window or gracefully fail if the window no longer exists. Without this ability, one thread could not safely access data controlled by another thread. This locking is implemented using a semaphore. There's a section about semaphores in the "OS Kit" chapter of The Be Book. You can also learn more about semaphores in almost any OS textbook.
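Here's a rough sketch of the second and third options, assuming BWindow's PostMessage(), Lock(), Unlock(), and Title() calls as described in The Be Book. The GET_TITLE_INFO command and the QueryWindow() function are hypothetical; a real window would send back its reply when it handles the message in its own thread.

#include <Window.h>

const uint32 GET_TITLE_INFO = 'gtti';   // hypothetical request code

void QueryWindow(BWindow* window)
{
    // Option two: ask the window's thread to do the work. The window handles
    // the message in its own thread, when its state is consistent, and the
    // system copes with the window disappearing in the middle of the post.
    window->PostMessage(GET_TITLE_INFO);

    // Option three: lock the window and touch its data directly. Lock()
    // either succeeds or fails gracefully if the window no longer exists.
    if (window->Lock()) {
        const char* title = window->Title();
        // ... use the window's data while its thread is held off ...
        window->Unlock();
    }
}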
That gives you three different methods to access data safely, and there are probably more. The method to use depends on the situation. The important thing is to understand and think about these issues.
Be Commandment #1: Thou shalt not covet another thread's state or data without taking proper precautions.
This brings us to the second and final concept—avoiding deadlocks. Once this is understood, programming the BeBox can be as fun and simple as riding a tandem. When a program starts locking objects to access data safely, deadlocks become a possibility. There has been lots of research on deadlocks, why they occur, how to avoid them, and how to detect them—just take a look at any OS book. But understanding theory is often quite a bit easier than putting that knowledge into practice. Yep, I've programmed myself into a lot of deadlocks.
How can you avoid deadlocks? One good way is to minimize the amount of locking. Always ask yourself if there is a better way to structure the code to eliminate the need for locking; structuring code to use messages is one alternative. Two things to remember are that a deadlock can only occur when a thread tries to hold two or more locks at a time, and that in the Be operating system, whenever code is running in a window thread, the window object is locked by that thread. This implies that whenever a window thread locks another object, it's holding at least two locks, opening the door for a deadlock. For example, suppose two windows, A and B, want to share some information. If each window thread tries to lock the other window in order to access the information, there is a distinct possibility of hitting a deadlock.
In this example the thread for window A first locks window A (this locking is implicit, as described above) and then tries to lock window B. The thread for window B locks the same objects in the reverse order. The same two objects are being locked in two different orders, possibly resulting in a deadlock: A's thread waits to lock B, while B's thread, which already holds its own lock, waits to lock A.
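A sketch of the pattern just described; the ShareInfoFromB() and ShareInfoFromA() functions are hypothetical, and each is assumed to be called from inside its own window's message loop, so that window is already locked by its thread.

#include <Window.h>

// Runs in window A's thread, which already (implicitly) holds A's lock.
void ShareInfoFromB(BWindow* windowB)
{
    if (windowB->Lock()) {      // A's thread now also wants B's lock
        // ... read window B's data ...
        windowB->Unlock();
    }
}

// Runs in window B's thread, which already (implicitly) holds B's lock.
void ShareInfoFromA(BWindow* windowA)
{
    if (windowA->Lock()) {      // B's thread now also wants A's lock
        // ... read window A's data ...
        windowA->Unlock();
    }
}

// If both calls happen at the same time, each thread holds its own window's
// lock and waits forever for the other's: the same two locks, taken in
// opposite orders.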
Be Commandment #2: Thou shalt not lock the same objects in differing orders.
One solution to this problem is to restructure the code so that if multiple locks are needed to access the information, those locks are acquired in the same order by both windows. This eliminates the chance of a deadlock. One possibility might be to move the information associated with each window outside the window itself, placing each piece under the control of its own lock—call those locks X and Y. If both windows try to acquire the locks in the same order, say X first and then Y, a deadlock can't occur. When one of the windows owns lock X, the other window blocks trying to acquire that lock. The window that owns X then acquires lock Y, and when it finishes it releases both locks. This frees up the other window, allowing it to proceed. There's no longer any possibility of a deadlock. The key is acquiring the locks in a consistent order.
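As a sketch of this approach, the shared information can be guarded by two kernel semaphores created with the OS Kit calls mentioned above. The lock names and the InitLocks() and UpdateSharedInfo() functions are hypothetical; the point is simply that every thread acquires X before Y.

#include <OS.h>

static sem_id lockX;
static sem_id lockY;

void InitLocks()
{
    // A semaphore with a count of one behaves like a simple lock.
    lockX = create_sem(1, "lock X");
    lockY = create_sem(1, "lock Y");
}

// Called from either window's thread; both threads use the same order.
void UpdateSharedInfo()
{
    acquire_sem(lockX);     // always X first...
    acquire_sem(lockY);     // ...then Y

    // ... read or modify the shared information here ...

    release_sem(lockY);     // release in the reverse order
    release_sem(lockX);
}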
That's all there is to safe multithreaded code. In simple Be applications, the issues above usually don't come into play. For example, in a single-window application you normally don't need to worry about locking and deadlocks. Programming the BeBox is as straightforward as riding a bike. If you understand and apply the two issues discussed above, then building the most sophisticated application is only slightly more involved—just like riding a tandem.
If you look at a number of Be developers, certain patterns begin to emerge. Many are simply fed up with the existing platforms and they're looking for a better alternative. They're less concerned with opportunities to push into high-volume markets, like Windows, the Macintosh, or UNIX, and they're more concerned with creating the best possible applications for the most advanced technology.
Rafael Denoyo of Salem, Oregon, is a good example. His company, Dynamic Predictions, Inc., specializes in numerical-analysis software. The nine-person company produces custom programs for financial firms and banks, programs that deal with large quantities of numbers in analyzing market and financial trends. Denoyo has worked with all of the popular platforms—from Windows NT to Solaris. And he's had it.
"It's never been difficult to go from one platform to the other," he says. "But I'm not satisfied with any of them. With Windows NT, I'm not satisfied with either the OS or the API. I don't like Solaris either."
While NT has a lot of good features, Denoyo says, it's too bogged down with "legacy code...There's too much it has to be compatible with." He appreciates Be's decision to simply build the best possible computer and operating system, without getting weighed down with past compatibility issues. Denoyo says he liked NEXTSTEP, even though the hardware was expensive. But just a few weeks after Steve Jobs assured him that NeXT would stay in the hardware business, Jobs abandoned the NextStations. NEXTSTEP for Intel Processors is a good operating system, he says, but "it's fizzled and nobody is using it. It's good technology, but there's no reason to get hitched to a stalled wagon." But even after getting burned on NEXTSTEP, Denoyo is willing to take a chance on Be. "I think it's a better architecture," he says. "There's more power for the money."
Denoyo describes himself as a "classic early adopter." "Be is on the upward curve," he says. "It has a lot of momentum and I like to get in on things at the very beginning." Denoyo has ordered eight BeBoxes, and he's excited about the prospect of a new platform.
A new Usenet newsgroup has been created. Its name: comp.sys.be.
The voting results were 337 yes to 31 no. (There's just no pleasing some people....) Thanks go to Julie Petersen for being our third-party proponent, and for shepherding the proposal through the RFD and CFV processes.
At the beginning of December we held a contest to find a name for the Be Operating System. Since then we've received a large number of creative ideas!
We would like to thank all those who have entered and invite you to continue sending proposals to contest@be.com. Enter as often as you'd like.
We'll select our Be OS name in January!
Five years ago, we started with three basic ideas: Building a hardened architectural advantage against legacy systems, helping software developers change the economics of their business, and choosing the kind of customers we had to please in order to gain a foothold in the marketplace and to detect the new mother lode of innovative applications. So far, time and changes in the marketplace have been kind to these three ideas. In particular, the legacy systems we know and love are getting richer, but also slower, fatter, more complex, and more fragile, thus making a good case for a fresh start in markets and applications outside of the office automation domain. We are also helped by developments we did not foresee: The PCI bus took off, thus strengthening our commitment to the PC clone organ bank, and the Internet, 25 years in the making, finally gained wide currency, offering an ideal vehicle for Be and our developers.
I don't want to represent that everything went according to plan. Ours was not a straightforward path, ahead of schedule and under budget. We experienced a number of difficulties and setbacks along the way, euphemistically called "learning experiences." One such situation deserves the term particularly well. We started with the belief we could put different processors under the same yoke, on the same motherboard. We don't feel that way anymore. Here's why.
In the beginning, we saw DSPs as a way of offering gobs of inexpensive computing power. Our first motherboard sported two RISC microprocessors and three DSPs, all 32-bit, all five from the same vendor. One DSP supported voice, fax, and data communications, the next one sound, the last one video. Software implementation of such functions promised great flexibility: Upgrades would be easy and DSPs could be "borrowed" for other uses requiring their computing power, such as mathematical computations or graphics. At about $30 per DSP, these were inexpensive MIPS. The real price was elsewhere.
First, DSPs are very difficult to program. They are much more finicky than conventional microprocessors and only achieve their rated performance if special care is taken in structuring the sequence of operations they're asked to perform. Otherwise, contention for internal resources causes visible performance degradation. Hand tuning of code is often required and programming tools are much less sophisticated than those found on conventional microprocessors.
Then, some kind of operating system is required to babysit the DSP or DSPs, to schedule tasks, manage resources, and communicate with the rest of the system and the other processors. This gets very complicated. People developing the system now have to contend with two programming models, two pieces of system software, and the coordination headaches between them.
Lastly, there were the challenges of working with the maker of the DSPs, who also supplied the necessary system software, which had to be ported to our system. This resulted in the usual technical and cultural difficulties, regardless of the skill and goodwill deployed by both sides. Changes on one side caused major headaches for the other, along with communication problems and delays. We were, in fact, developing not one but two operating systems.
Fortunately, the PowerPC appeared on the scene about two years after we started Be. Two factors drew us to it. One was the raw power its architecture delivered, especially for us, free from emulation requirements. The other attraction was its ability to perform high-speed signal-processing tasks, thanks to a clever implementation of specific floating-point instructions.
Suddenly, we have the native signal-processing power we need, and we make life much simpler and safer for everyone, including application developers. And, as demand for more media processing arises—and we know it will—we can add more processors without disturbing the applications or their authors.
We quickly reached the decision to port our operating system to the PowerPC. As an added benefit, we proved to ourselves the system was indeed portable, none of the applications were affected, and we fixed a few low-level problems that the process uncovered.
We were not alone in trying to glue DSPs onto motherboards. So far, none of these attempts have been successful. Apple tried AT&T DSPs with the AV Quadras, NeXT made an earlier attempt with a Motorola signal processing engine and, in the PC clone world, ACER went a similar route, with the same results.
This is not to say DSPs are not successful in the PC world. For instance, every modem is built around a DSP today. But the big difference is that the application programmer doesn't have to know about it. The DSP is so embedded, so hidden, it only manifests itself as an AT command set. It's cheap, invisible, and doesn't create system software headaches.
Now we're witnessing the reappearance of DSPs on PC motherboards. Only they're called multimedia chips this time and, yes, they handle sound, video, voice, fax, and data communications. This is a trend Intel is watching very intently, as these chips could steal motherboard income from it. Time will tell how these new entrants deal with system software heterogeneities. My guess is that the less visible they are to the programmer, the more likely they are to find acceptance.
The current claims of being all-singing, all-dancing multimedia engines might make that desirable transparency an elusive goal.