Here at Be's galactic headquarters we're in the middle of a release, so I'm having trouble picking a topic for this article. R3 (in engineering terms) is history, and I can't remember now if I worked on any cool new stuff. R4 is fresh in my mind, but still months away from being available outside Be. So—do I review the past, or preview the future?
A flip of the coin came down on the side of telling you about some of the cool new features that will arrive in R4's main UI and Tracker. Since I don't want to set any false hopes, I'll stick to features that are either already implemented or in progress. I'd hate to tell you R4 will have some nifty feature, and then fail to deliver because of some act of God (or JLG).
R3 was our first foray into the Intel world. People seem pleased with our efforts, although we've had plenty of feedback that some of the little differences between that other native OS and the BeOS are annoying. Two top issues have been the different menu-shortcut keys and the lack of application/window switching via the keyboard.
R4 addresses both issues. Users will be able to choose which key on the keyboard—either Control or Alt (Command on Power Mac systems)—maps to the menu-shortcut key. Also, they'll be able to switch between windows and apps via the keyboard. The key combo will match that of the other OS, but the UI is different, better, and more powerful (we hope you agree).
Is anyone out there interested in UI guidelines? We'll finally deliver on that promise. After what seems like years in the making, the guidelines will soon see the light of day, independent of the R4 release schedule. We'll get them to you as soon as they're spell-checked! R4 will overhaul many of the preferences apps to conform to the guidelines, as well as to increase their functionality.
R4 will have lots of improvements in the Tracker. One feature I personally like is the new, improved Drag-and-Drop protocol, which lets apps support drag and drop with the Tracker. For example, with the addition of about 10 lines of code to the Kits, you can now drag Replicants to the Trash.
The Drag-and-Drop protocol also supports the contextual menu you get when you drop a file using the secondary mouse button. Users see a menu listing the data types the application supports. From there you can drag and drop an image out of a graphics application and pick the format for the new "clip" file.
Tracker's list view will be much improved as well. Resizing and reordering columns is smoother and slicker. The Tracker will also be smarter about which columns are shown by default when creating new folders or showing the results of a query.
We'll also introduce lots of international support in R4. I've seen early versions of an input method, so yes, in R4 you'll be able to input Japanese characters. Behind this support is a new "input" server that enables other cool features, including support for multiple input devices.
That's a little peek into the future. Remember, though, these are only some of the highlights of what's coming in R4. There's much more I haven't mentioned, which reminds me—I better get back to work.
There are several ways to send data from one process or thread to another. The three well-established ways are:

  send_data() and receive_data()
  ports
  pipes
Of course, you could also share an area among threads and exchange data that way; that's the "shared memory" model of interprocess communication (IPC). This model has different semantics and can't easily be compared to the other three, so for the purposes of this article, I'm going to stick to discussing the different "message-passing" methods.
Each of the above three techniques has its advantages. send_data() and receive_data() (exhaustively described in the Be Book, as are ports) require virtually no setup and can send a single message of any length directly to an arbitrary thread. Once sent, the data sits in the thread's message cache, waiting to be read. Any subsequent writes to the thread's message cache using send_data() will block until the data is received. The data is also tagged with a four-byte identifier, making it easy for the receiving thread to unpack the message and interpret its contents. This method is very convenient for one-shot messages where the recipient is well-known, but has the downside of being inflexible.
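To make this concrete, here's a minimal sketch of the pattern (the thread name and four-byte code are my own choices):

#include <stdio.h>
#include <OS.h>

/* Listener thread: blocks in receive_data() until a message arrives
   in its message cache. */
static int32
listener(void *arg)
{
    thread_id sender;
    char buf[64];

    /* receive_data() fills the buffer and returns the four-byte code. */
    int32 code = receive_data(&sender, buf, sizeof(buf));
    printf("received code 0x%lx: %s\n", (unsigned long)code, buf);
    return 0;
}

int
main()
{
    thread_id tid = spawn_thread(listener, "listener",
                                 B_NORMAL_PRIORITY, NULL);
    resume_thread(tid);

    /* The four-byte code tags the data for the receiver to interpret. */
    send_data(tid, 'talk', "hello", 6);
    wait_for_thread(tid, NULL);
    return 0;
}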
Most Be developers should be familiar with the second method, ports. Each port is a unique, system-wide FIFO buffer which threads can use to read and write data. The advantage of using a port over send_data() is that a port can hold multiple messages at a time, and its data can be read by an arbitrary thread. The queue length of a port is specified by the number of messages (not the number of bytes!) and is set at creation time.
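A minimal port sketch, with a queue length and names of my own choosing:

#include <stdio.h>
#include <OS.h>

int
main()
{
    /* Queue length is counted in messages, not bytes. */
    port_id port = create_port(10, "my_port");

    /* Any thread that knows the port_id can write to it... */
    write_port(port, 'msg1', "hello", 6);

    /* ...and any thread can read from it. */
    int32 code;
    char buf[64];
    ssize_t n = read_port(port, &code, buf, sizeof(buf));
    if (n >= 0)
        printf("read %ld bytes, code 0x%lx: %s\n",
               (long)n, (unsigned long)code, buf);

    delete_port(port);
    return 0;
}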
Now if I were only to discuss the above two, all I would be doing is rehashing the Be Book. And that ain't much of a Newsletter article. Luckily, I have one more IPC method remaining: pipes, which are not covered by the Be Book and have their own advantages. Others have touched on pipes briefly in previous articles, but I'd like to go a little deeper.
The pipe() call creates a FIFO, currently capable of holding up to 4K of data. This FIFO is addressed by two file descriptors, which are passed in an array as the argument to the call. One descriptor is used for reading from the pipe, the other for writing to it. Writes to the pipe don't block unless the pipe is full.
Here is a basic example of using pipes to send a byte from one thread to another (Unix weenies can sleep through the rest of this article, by the way):
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>
#include <OS.h>

/* Reader thread: blocks until a byte arrives on its end of the pipe. */
static int32
reader(void *arg)
{
    int fd = *(int *)arg;
    char c;

    if (read(fd, &c, 1) < 0)
        exit_thread(1);
    exit_thread(0);
    return 0;
}

static void
writer(int fd)
{
    char c = 'A';

    if (write(fd, &c, 1) < 0)
        perror("write");
}

int
main()
{
    int fd[2];
    thread_id tid;

    if (pipe(fd) < 0) {
        perror("pipe");
        exit(1);
    }

    /* fd[0] is the read end, fd[1] the write end. */
    tid = spawn_thread(reader, "reader", B_NORMAL_PRIORITY, (void *)&fd[0]);
    resume_thread(tid);
    writer(fd[1]);
    wait_for_thread(tid, NULL);
    return 0;
}
The main advantages of pipes stem from the fact that they are file descriptors. It is a very powerful abstraction to have a uniform way of reading and writing data, whether to a file, a raw device, or another thread. You can then use the same code to do I/O to any one of these "objects," and not have to special-case them.

Take the case where your program is expecting data from multiple objects (threads, devices, and files, for example). If each object had a different means of addressing it, you would have to spawn a thread for each, wait for the data in a manner customized to each object, and perform some synchronization and notification to let the parent thread know which object has data available. If, on the other hand, each object is addressable uniformly, you have the ability (which we will deliver in R4) to wait for data from any one of the sources using a single call in a single thread. This is a great convenience to the programmer. At present, pipes are essentially just as fast as ports, so there is no reason not to use them for IPC. If anything, we expect pipes to get faster as we optimize them in future releases.
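When that single-call waiting arrives, the code should look like the familiar POSIX pattern. Here's a sketch, under the assumption that pipes become selectable as promised:

#include <stdio.h>
#include <sys/select.h>

/* Wait for data on either of two descriptors with one call, in one
   thread -- no per-object helper threads or custom synchronization.
   Returns the ready descriptor, or -1 on error. */
int
wait_for_either(int fd_a, int fd_b)
{
    fd_set readers;
    FD_ZERO(&readers);
    FD_SET(fd_a, &readers);
    FD_SET(fd_b, &readers);

    int maxfd = (fd_a > fd_b ? fd_a : fd_b) + 1;
    if (select(maxfd, &readers, NULL, NULL, NULL) < 0) {
        perror("select");
        return -1;
    }
    return FD_ISSET(fd_a, &readers) ? fd_a : fd_b;
}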
Unix was the operating system that wanted to treat everything as a file: devices, other processes, memory, etc. The BeOS takes this idea very literally for pipes. For every pipe that is created, a file is created in /pipe on the system. Similarly, if a file is created in /pipe, an honest-to-goodness pipe is created, ready for reading and writing. This allows the creation of "named" pipes, i.e., pipes which are globally addressable on the system. One thread can create the named pipe, and another thread in another team just needs to know the name, open the pipe-file, and start using it. In this way, creating pipes is identical to creating files.
Replace the pipe(fd) call in the previous example with this code snippet:

if ((fd[1] = open("/pipe/mypipe", O_WRONLY | O_CREAT)) < 0) {
    perror("pipe 1");
}
if ((fd[0] = open("/pipe/mypipe", O_RDONLY)) < 0) {
    perror("pipe 2");
}
and the program behaves the same way.
At Be we like pipes, and we encourage you to use 'em too.
In last week's article I talked about static queries: one-time glimpses at entries on a given disk that matched certain criteria. If you have not done so already, please go back and read that article; it will make this one much easier to understand.
Developers' Workshop: Let's See What We Can Find, Part 1: Static Queries
This week we are going to look at using live queries to keep the list of matching entries up to date in an ever-changing file system. The associated sample code for this week can be found at:
ftp://ftp.be.com/pub/samples/storage_kit/LiveQueryApp.zip
The basic theory behind a live query is pretty straightforward. A BQuery is built in the traditional fashion, and then assigned a target messenger to deliver update messages to. The BMessenger specified by SetTarget() will receive messages identifying items that now meet (or no longer meet) the query criteria. These messages can start arriving as soon as the query is Fetch()ed and will continue until the BQuery object is deleted or Clear()ed.
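In code, the setup might look something like this sketch (the window class, member names, and predicate are my own):

#include <Messenger.h>
#include <Query.h>
#include <Volume.h>
#include <VolumeRoster.h>

// Build a query on the boot volume and register this window as the
// target for update messages. fQuery is assumed to be a BQuery member
// of a BWindow subclass.
void
LiveQueryWindow::StartQuery()
{
    BVolume bootVolume;
    BVolumeRoster().GetBootVolume(&bootVolume);

    fQuery.SetVolume(&bootVolume);
    fQuery.SetPredicate("name = \"*Query*\"");

    // Updates arrive in this window's MessageReceived() as
    // B_QUERY_UPDATE messages once Fetch() succeeds.
    fQuery.SetTarget(BMessenger(this));
    fQuery.Fetch();
}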
Two kinds of messages with a what data member of B_QUERY_UPDATE can be sent to a target messenger. They are differentiated by an int32 data item called "opcode": one is B_ENTRY_REMOVED, the other B_ENTRY_CREATED. The rest of each message details exactly which item needs to be added to or removed from the matching list.
This identification is not done with BEntrys, or even entry_refs or node_refs as you might like, but with the components that build those structures: the nodes (ino_t), device ids (dev_t), and the names of the item and its parent. Much like using the C functions from last week's article, you will need to build the more useful objects from these building blocks.
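For example, a B_ENTRY_CREATED update might be unpacked along these lines (a sketch; the field names follow the documentation, the function itself is hypothetical and error checking is omitted):

#include <Entry.h>
#include <Message.h>

// Rebuild an entry_ref from the pieces delivered in a B_ENTRY_CREATED
// query update message.
void
HandleEntryCreated(BMessage *msg)
{
    int32 device;
    int64 directory;
    const char *name;

    msg->FindInt32("device", &device);
    msg->FindInt64("directory", &directory);
    msg->FindString("name", &name);

    entry_ref ref((dev_t)device, (ino_t)directory, name);
    // From here a BEntry, BNode, or BPath can be constructed as needed.
}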
The full descriptions of these messages can be found in the Be Book's BQuery documentation.
One thing you will note from the documentation is the suggestion that the Node Monitor be utilized in conjunction with live queries. This is an exceedingly good idea on several fronts. First and foremost, the BQuery will only inform you whether a given entry meets the query criteria or not. Usually you will be trying to display information about the item itself, and this can change without ever triggering a B_QUERY_UPDATE message. So if you are trying to keep in sync with the data on disk (and you probably are, or you wouldn't have implemented a live query to begin with), it behooves you to also use the Node Monitor to let you know when your entry is modified.
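Starting to watch a freshly matched node is then a single call. A sketch, assuming updates should go to the same messenger that receives the query updates:

#include <Messenger.h>
#include <Node.h>
#include <NodeMonitor.h>

// Begin monitoring a matched node for name, stat, and attribute
// changes, delivering B_NODE_MONITOR messages to the given target.
status_t
WatchEntry(const node_ref &nref, const BMessenger &target)
{
    return watch_node(&nref, B_WATCH_NAME | B_WATCH_STAT | B_WATCH_ATTR,
                      target);
}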
A good basic look at the Node Monitor mechanism was provided by Scott Barta in his Folder Watcher Newsletter article:
Be Engineering Insights: The Tracker Is Your Friend
and the Node Monitor documentation can be found in the Be Book's Storage Kit chapter.
I will not go into the details of how to implement node monitoring here. A look at the documentation, Scott's article and sample code, and the LiveQueryApp itself will show you what you need to know. It will suffice to say that the Node Monitor sends update messages to a messenger whenever a specified node is modified. You can then use these messages to update your entry information as needed, in much the same way as you might use the query update messages.
One thing worth mentioning is that in many cases you might receive multiple, complementary updates from the query and the Node Monitor. For example, suppose you used LiveQueryApp to look for all source code containing "Query" somewhere in the file name, and then renamed a matching file so that the name changed but still met the query. You might receive the following three messages:
  B_QUERY_UPDATE: B_ENTRY_REMOVED (the name of the item has changed)
  B_QUERY_UPDATE: B_ENTRY_CREATED (the new name meets the query)
  B_NODE_MONITOR: B_ENTRY_MOVED (the item has moved)
This redundancy in notification is a good thing, but there is work that needs to be done to handle the multiple messages correctly. You definitely don't want the overhead of creating and removing items from your lists unnecessarily. You also need to be able to recognize when a redundant message comes in, so that you do not create new items in your list when an update corresponds to something already there.
It should be obvious at this point that a live query application needs a caching mechanism to track details about its matching nodes. The only way to know which items need to be updated is by caching all of the relevant information needed to identify the node, along with all of the information about the node you care about. Then, as changes come in, the appropriate structure can be found and updated with the new information (or removed if it no longer exists or no longer matches the query). It turns out that implementing this caching mechanism is the most difficult part of dealing with live queries (or the Node Monitor, for that matter).
The fundamental idea behind the live query is to keep your internal list of matching items in step with the current state of the file system. A valid question to ask is exactly how live do you need this list to be? It is important to get an answer to this question, as it will determine the lengths to which you need to go to keep in sync with the disk. Certain rare applications have the need to keep a list of items absolutely in sync with the disk. These apps have a very low tolerance for error. The Tracker is a good example of an application that is expected to have its queries keep in step as much as possible. Users expect this of the main interface for the BeOS. There is a lot of work involved with keeping a query absolutely true to the file system, far more than I could easily put into an understandable bit of sample code.
The reason behind all of the work has to do with the nature of the updating mechanism. There is no guarantee that all of the B_QUERY_UPDATE and B_NODE_MONITOR messages will come through to you. The message sending system will drop these messages if it is unable to deliver them, rather than keep trying and slow down the system in general. There is a whole host of edge conditions that a truly robust live query application would need to accommodate. There are various race conditions to deal with, such as a file moving between the time the query match is identified and the time the entry is pulled out of the query. There is also a period of time between pulling the entry from the query and the start of node monitoring during which a given entry could become untrackable. Developers need to be aware of these "limitations" of the query and node monitoring systems, and be prepared to deal with failures to instantiate items identified in the query. As you will see, the sample code deals with some of these issues.
Also, the nature of your query can determine whether you receive proper update messages. As you know, a query needs at least a single indexed attribute in its search criteria, but the order in which attributes are written currently has an effect on whether a query update will arrive. Take for example an application that creates its own document type, with a single indexed attribute plus the document's MIME type (the BEOS:TYPE attribute, which is not indexed). The order in which these attributes are written into the file can have a great effect on updates. Let's say the indexed attribute is written before the MIME type. If you have an open live query on the MIME type and the indexed attribute, and someone copies a file, here is the sequence of events:

1. The indexed attribute is written.
2. The query notices a new item in the index, but sees that the MIME type does not match.
3. No query update is sent.
4. The matching MIME type is set.

The file now matches the query, but no update message is ever sent for it. If, on the other hand, the non-indexed MIME type were written before the indexed attribute, the update message would be sent, as the query criteria would be met when the new file was examined.

Right now the only workaround for this problem is to not query on non-indexed fields, but instead institute a filter in the target to check the validity of incoming items. Or you could make sure that all unindexed attributes are written to a file before any indexed attributes, as sketched below.
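Here's a sketch of that second approach; the MIME type and attribute name are hypothetical:

#include <Node.h>
#include <NodeInfo.h>
#include <string.h>

// Write the unindexed BEOS:TYPE first, then the indexed attribute,
// so the query sees a full match at the moment the indexed value
// lands in the index.
void
WriteAttributesSafely(BNode &node)
{
    BNodeInfo info(&node);
    info.SetType("application/x-vnd.MyApp-document"); // unindexed first

    const char *keyword = "example";
    node.WriteAttr("MyApp:Keyword", B_STRING_TYPE, 0,
                   keyword, strlen(keyword) + 1);     // indexed last
}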
Still, the main source of problems when dealing with live queries is the free-form nature of the file system: anyone can create or change a matching entry at any time. One way to make queries more reliable is to search for items that only your application can create. Then, barring a user bringing in a whole slew of matching items from another disk, it is much easier to maintain a very tight sync between the file system and the internal list.
If you were to examine this week's LiveQueryApp and last week's QueryApp side by side, you would see a significant difference in underlying design, even though the interface itself is nearly identical (with the exception of the removal of the methods for retrieving static queries; combining the two apps into one is left as an exercise for the reader, although personally I suggest that implementing an application of your own would be much better). Without going into the code in great detail, it is beneficial to note the major changes.
First, due to the nature of live queries and the possibility of impending updates, it is desirable to move the initial query iteration code into a separate thread. Not only does this free up the window thread to handle incoming B_QUERY_UPDATE and B_NODE_MONITOR messages, it also makes the interface much more responsive: you can see items being added to the list as they are found, rather than waiting for the iteration to complete. One implication of this threaded model is the need to use a BLocker to protect the BLists that track the contents of the query and to ensure the integrity of the lists.
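The shape of that thread is roughly this (a sketch; the context structure is my stand-in for the sample code's actual members):

#include <Autolock.h>
#include <Entry.h>
#include <Locker.h>
#include <Query.h>

// Shared state between the window and the fetch thread; a stand-in
// for the sample code's members.
struct FetchContext {
    BQuery  *query;
    BLocker *lock;
    void   (*add_match)(const entry_ref &ref);
};

// Iterate the initial query results without tying up the window thread.
static int32
fetch_thread(void *data)
{
    FetchContext *ctx = (FetchContext *)data;

    entry_ref ref;
    while (ctx->query->GetNextRef(&ref) == B_OK) {
        // The BLocker keeps the tracking BLists consistent while both
        // this thread and the window thread touch them.
        BAutolock lock(ctx->lock);
        ctx->add_match(ref);
    }
    return 0;
}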
The tracking lists are the most important structural changes in the application. To adequately keep track of the matching items, and to recognize and identify the items which need changes, a caching mechanism has been created. It is a system based on a data structure for caching entry information, and two lists: a valid list and a zombie list.
The live_ref data structure represents the information this application cares about for each matching entry. This includes fields meant to uniquely identify the item (node, device id, parent node, name), a pointer to the data we display to the user (a BStringItem), and two fields for the caching mechanism (status and state). live_ref.status contains information about how successful the application was in acquiring the needed information about the entry. If all of the entry's information was successfully retrieved, it is marked B_OK. If there was an error initializing the object or retrieving the appropriate information (as might be the case if the matching node was moved between the time the update message was generated and the attempt to access the entry it describes), then status will log the error encountered, such as B_ENTRY_NOT_FOUND. The state, on the other hand, records whether the item should be included in the valid list once all of its information is successfully retrieved.
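Put as a declaration, the structure might look like this (a sketch; the sample code's actual definition may differ in detail):

#include <ListItem.h>
#include <StorageDefs.h>
#include <SupportDefs.h>

// One cached record per matching entry.
struct live_ref {
    ino_t        node;   // the entry's own node
    dev_t        pdev;   // device id
    ino_t        pnode;  // parent directory's node
    char         name[B_FILE_NAME_LENGTH];
    BStringItem *item;   // what is shown to the user
    status_t     status; // result of fetching the entry's info
    status_t     state;  // whether the item belongs in the valid list
};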
So, the valid list is where all entries are cached if they match the query and could be successfully instantiated. Items in the valid list also have their BStringItems added to the BListView that displays results to the world. Any item that no longer matches the query, could not be instantiated, or gets an update before its initial retrieval from the BQuery is moved into the zombie list. As each update comes in from either the Node Monitor or the query, both lists are checked to see if the item already has a live_ref cached. If one is located, it is updated and, if appropriate (i.e., it was not removed from the query and all the information could be accessed), moved into the valid list.
A short example is probably in order.
void
LiveQueryWindow::UpdateEntry(live_ref *rec, bool valid)
{
    entry_ref ref;
    ref.device = rec->pdev;
    ref.directory = rec->pnode;
    ref.set_name(rec->name);

    BEntry entry(&ref);
    if ((rec->status = entry.InitCheck()) == B_OK) {
        BPath path;
        entry.GetPath(&path);
        if ((rec->status = path.InitCheck()) == B_OK) {
            if (rec->item == NULL)
                rec->item = new BStringItem(path.Path());
            else if (strcmp(rec->item->Text(), path.Path()) != 0) {
                printf("new path: %s\n", path.Path());
                if (valid) {
                    Lock();
                    rec->item->SetText(path.Path());
                    fValidView->InvalidateItem(
                        fValidView->IndexOf(rec->item));
                    Unlock();
                } else {
                    rec->item->SetText(path.Path());
                }
            }
        }
    }

    if (valid && rec->status != B_OK)
        ValidToZombie(rec);
    else if (valid == false && rec->status == B_OK && rec->state == B_OK)
        ZombieToValid(rec);
}
Before we get to this function, we have set the live_ref state to determine whether it should be in the valid list or not. The only time an entry should not be in the valid list is if it has been removed from the query. Then we attempt to instantiate the entry and get the info we need, recording the status of each operation in live_ref.status as we go. Then we do a simple test: if the item is currently in the valid list but we failed to get the info we need, it is moved to the zombie list. Likewise, if the item is in the zombie list, we succeed in getting the info we need, and it is not an item that has been removed, we move it to the valid list.
Finally, I will note several suggestions for improving the caching mechanism that would have added unnecessary complexity to the sample code. The current caching implementation is not very memory efficient. Items that no longer match the query are kept around indefinitely in the zombie list on the off chance that they will be needed again. In addition, since node monitoring is never turned off, a removed node will still generate updates if it is changed.
One improvement would be the addition of a timed event that regularly clears the zombie list of items with a B_ENTRY_REMOVED state. It is also possible to implement the caching and tracking within a single list, or to implement the valid list inside fValidView by using a subclass of BStringItem that contains the live_ref information in addition to the information that is displayed. This subclass would become the standard data storage for both the valid and zombie lists.
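A sketch of that subclass, with naming of my own (it assumes the live_ref structure shown earlier):

#include <ListItem.h>

// One object serving as both display item and cache record, usable as
// the storage type for the valid and zombie lists alike.
class LiveRefItem : public BStringItem {
public:
    LiveRefItem(const char *path, const live_ref &ref)
        : BStringItem(path), fRef(ref) {}

    live_ref &Ref() { return fRef; }

private:
    live_ref fRef;
};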
As you can see, live queries are very powerful, but there is a corresponding increase in the amount of work necessary to use them correctly. If you are prepared for the extra effort, and do not blindly depend on the accuracy of the updates, live queries can add a lot to your application.
I say "what we hope to offer" in recognition of the fact that we have, by necessity, a biased view of what our little company has going for it. Beyond that, I won't insult the reader's intelligence. After all, this is a newsletter of opinions, not revealed truths.
Last week, I attempted to describe the types of individuals we're looking for to help us actualize our potential. This week, I'd like to discuss what a candidate might find in us, what kind of work, work environment, and culture one might find here at Be.
We've already noted that we were looking for a few good "self-directed missiles." That statement is based on our avowed need to minimize management layers and spend more of our resources on getting things done, as opposed to directing traffic and holding those dreaded cross-functional review meetings.
There is more to this. As much as we can, we're structuring work in ways that minimize another plague—the "n" factorial problem. When you have "n" individuals working on a task, the number of communication paths is a function of "n" factorial. As "n" grows, the function accelerates tremendously and coordination eats up more and more of the available resources.
We're trying to combat this by putting as much of a given problem inside a single human head as we believe it can hold. This makes for shorter meetings, with disagreements handled between one ear and the other. Work done this way tends to be more interesting and less political. Also, one is more able to feel the impact of one's efforts on the company, and on developers and customers, than in an environment where work is more fragmented.
Another aspect of our work environment is that we're cheap. In some companies, the thought police would advise me to say "spartan," but once you see the pair of 8-foot couches I bought for $10 in the summer of 1991, when we set up our first office in San Jose, you'll probably agree that cheap is the word. We pay market-driven compensation, the upside being in stock ownership in the company. We have decent medical coverage and a 401K plan. However, you assemble your own chair, and perks such as massages and free juice aren't our style.
So, if it's not the mahogany that attracts people to Be, it must be the work. We've been criticized for being a little too daring and quixotic: Why write an OS when we already have one? For our part, we know we're doing something that, if successful (we know it's still a big "if"), will have an impact on the industry. How big, we don't know, but as one investor rationalized his decision to support us, this is either going to be big, or it's going to be nothing.
Indeed, we won't be a $12 million software company. While critics say we're playing a game that's too big for us, supporters agree that we're right to focus our specialized OS on emerging digital A/V media and to position the BeOS as a complement to Windows, rather than foolishly attempt to replace it.
One last element of our corporate culture, if we can use so pompous a term—though perhaps it would be better to call it a group character study. Just as I mentioned the multiethnic composition of our team last week, we are also multiopinionated. We like lively debate. This is not always entirely pleasant (we don't defer to New Age sensitivities), but issues get aired and sacred cows are looked in the mouth. Companies often suffer from a refusal to deal with unpleasant facts, potential facts, and thoughts. At Be, reality prevails over the toxic comforts of groupthink.
These are but a few, admittedly biased and, by necessity, self-serving features of our work environment. If you'd like to take a closer look, point your browser to http://www.be.com/aboutbe/jobs/, see if we have something that matches your skills and goals, and give us an opportunity to show you how you could help us.