I work in QA. What I do around here at Be is pretty much just break things. How do you break a computer? Besides throwing it off a roof, that is.
First, you make sure the software running on the computer does what it's supposed to. A good way to do that is by "class struggle." Just walking through the Be Book and making sure that each method listed exists as printed and acts as described can help you make sure your code does what you think it should.
One of these class struggle missions grew into a rudimentary graphics benchmark program. Here's the code in its very raw state.
ftp://ftp.be.com/pub/samples/graphics/obsolete/Grafiek_Proef.zip
As it's my first attempt at Be programming, it's rough, a bit dirty, like a giant snowball rolling down hill picking up cruft as it goes. In spite of that, it is occasionally useful.
The zip file contains source code, PPC and x86 project files, and a makefile. If you run Grafiek_Proef from the command line, it attempts to measure the number of operations per second for each drawing operation. It also accepts a random number seed [-seed integer] and a [-noclip] flag, which changes the drawing region from 640x480 to 1024x768. Your comments, corrections, and suggested changes are welcome.
The second and more entertaining method I use while trying to break the BeOS has given me a reputation as a bit of a necro. I ask a computer to do things it should not be able to do. Very often, however, the BeOS does it anyway.
You may recall that some time back, there was a stir on BeDope and slashdot.org:
http://www.bedope.com/contests/contest1.html
http://slashdot.org/articles/9804141121220.shtml
Here I thought I was telling the BeOS to do something it couldn't do, so I could watch where it broke. But it worked. Some people claimed that the screenshot on the "Russian Doll" contest page of BeOS running SheepShaver, Mac OS, SoftWindows, AppleWin (Apple ][ emulator), and Dig Dug was a fake. Nope. I confess. That was me. In fact I later topped it:
http://www.bedope.com/051898.html
What does all this have to do with the ultra sleek OS of the future? Sometimes it's nice to take a look back and see where you've been. It's much more fun running emulators of antique computers than dealing with legacy code and bloatware.
And while you're at it, download Steve Sakoman's bt848 TV card drivers (http://www.sakoman.com/) and plug your Atari 2600 console into it. When you're finished with that, find your old Diamond Dave Van Halen record collection and download BeMAME from BeWare (http://www.be.com/beware/).
Have some fun and write good code!
Just dropping in to let y'all know of several errors in last week's article by William Adams, "Pixel Packing Mama":
Be Engineering Insights: Pixel Packing Mama
If you enjoyed the original article, you'll find it educational to follow along and learn why each of these things is an error. These are not uncommon mistakes, so learning about them now is the best way to avoid them in the future.
Even if you didn't read the article in question, I'll use this as an opportunity to talk about pixel packing and endianess issues in general. And while I'm at it, I'll present a little design documentation for the Application Server and the usage models it encourages.
First, some actual bugs in the code presented with the article.
The code makes the assumption that the rgb_color structure is a 32-bit pixel value. For the record (as this mistake has been made countless times by many developers, no doubt due to unclear documentation on our part) the rgb_color structure is NOT, when cast to a uint32, in any valid 32-bit pixel format, big or little-endian, whether you're executing on Intel or PowerPC. So the following code snippet, which appears in the article's included code, is always a no-no:
colorUnion aUnion;
aUnion.value = *((uint32 *)GetPointer(x, y));
aColor = aUnion.color;
To find out why, look at the structure definition of the rgb_color structure. The format in memory is RGBA. A little-endian pixel in memory is BGRA, and a big-endian pixel is ARGB. (This is also why it is a Bad Thing to put B_TRANSPARENT_32_BIT into a 32-bit bitmap to designate transparency. B_TRANSPARENT_32_BIT is an rgb_color structure, not a pixel value.)
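For reference, the rgb_color declaration looks like this (a from-memory sketch of the GraphicsDefs.h definition; check the header for the authoritative version):

typedef struct {
    uint8  red;    /* first byte in memory */
    uint8  green;
    uint8  blue;
    uint8  alpha;  /* last byte in memory */
} rgb_color;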
The proper way to do the above is to assign the components manually:
uint32 value = *((uint32 *)GetPointer(x, y));
aColor.alpha = (value >> 24) & 0xff;
aColor.red   = (value >> 16) & 0xff;
aColor.green = (value >> 8) & 0xff;
aColor.blue  = value & 0xff;
The 32-bit PutPixel() routine would be changed to work similarly. This isn't the fastest way of moving a pixel from one place to another, but William's intention wasn't to show optimized code, but to present a learning exercise.
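For concreteness, here is one way the host-endianess 32-bit write could look (a sketch only, assuming a GetPointer(x, y)-style accessor like the one above, not William's actual routine):

// Pack the components back into a host-endianess B_RGBA32 pixel.
uint32 value = ((uint32)aColor.alpha << 24)
             | ((uint32)aColor.red   << 16)
             | ((uint32)aColor.green <<  8)
             |  (uint32)aColor.blue;
*((uint32 *)GetPointer(x, y)) = value;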
Note that the code as given will not show this bug while simply copying pixels from one 32-bit bitmap to another. The bug turns up when you copy between color depths, or try to read out individual color components. Also, the "proper" code above works only for host endianess pixels (i.e., little-endian pixels, or B_RGBA32, on Intel, and big-endian pixels, or B_RGBA32_BIG, on PowerPC).
That brings us to the second bug—one of endianess. The BeOS color_space enum lets you create bitmaps at 1, 8, 15, 16, or 32 bits per pixel, and in either endianess. Looking at GraphicsDefs.h, we can see that the "default" endianess is little:
B_RGB32         = 0x0008,
B_RGBA32        = 0x2008,
B_RGB32_BIG     = 0x1008,
B_RGBA32_BIG    = 0x3008,
B_RGB32_LITTLE  = B_RGB32,
B_RGBA32_LITTLE = B_RGBA32,
Thus, the code in William's article, which specified the un-suffixed version of these enums, was working exclusively with little-endian pixels. You may not want this if you're on a big-endian processor, and/or writing to a big-endian frame buffer. It's faster for the Application Server to blit between bitmaps of the same endianess than to have to do the conversion. That's why you may want to take into account the endianess of the destination frame buffer and the host's endianess when picking a colorspace for your bitmap.
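If the bitmap's final destination is the screen, one straightforward way to do that is to ask the BScreen for the frame buffer's colorspace and create the bitmap to match (a sketch; the 64x64 size is arbitrary):

// Match the offscreen bitmap to the frame buffer so the app_server can
// blit it without an endianess or depth conversion.
BScreen screen;
BBitmap *offscreen = new BBitmap(BRect(0, 0, 63, 63), screen.ColorSpace());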
But let's say you want to use little-endian pixels or die trying. Take a look at the 2-byte-per-pixel case in the PixelBuffer::GetPixel() code:
case B_RGB15:
case B_RGBA15:
case B_RGB16:
{
    uint16 indexValue = *((uint16 *)GetPointer(x, y));
    aColor.blue  = (indexValue & 0x1f) << 3;         // low 5 bits
    aColor.green = ((indexValue >> 5) & 0x1f) << 3;
    aColor.red   = ((indexValue >> 10) & 0x1f) << 3;
    aColor.alpha = 255;
}
break;
This code is invoked on either PowerPC or Intel, in cases in which you're using little-endian 15- or 16-bit pixels. The code works correctly for 15-bit pixels, but only if the host processor is little-endian. (Even in this limited case, the code is still slightly flawed, but I'll get to that in a minute.) To understand what the bug is, and how to fix the code to work correctly, we need to look at how the pixel is structured and how it is laid out in memory.
First, why doesn't the code work for 16-bit pixels? Well, the code assumes five bits per color component. In a 16-bit pixel, red and blue are five bits, but the green component is six bits (the eye is particularly sensitive to green, so this uses the extra bit in a 16-bit word where it can do the most good). For a 16-bit, little-endian pixel on a little-endian processor, the code should look like this:
uint16 indexValue = *((uint16 *)GetPointer(x, y));
aColor.blue  = (indexValue & 0x1f) << 3;          // low 5 bits
aColor.green = ((indexValue >> 5) & 0x3f) << 2;   // middle 6 bits
aColor.red   = ((indexValue >> 11) & 0x1f) << 3;  // high 5 bits
aColor.alpha = 255;
That deals with a simple mistake, so now let's attack the more involved endianess problem. To review, a little-endian 15-bit pixel in memory looks like this:
[gggBBBBB] [ARRRRRGG]
Each letter represents a bit. Here we see two bytes: the first holds the low three bits of the green component (the lowercase "g"s) and the whole blue component; the second holds an alpha bit (mostly unused so far in the BeOS), the red component, and the high two bits of the green component. When read into the low 16 bits of a little-endian machine register for processing, the first byte is the low-order byte, so the register looks like this:
[ARRRRRGGGGGBBBBB]
Again, this is a little-endian pixel in a little-endian machine register. It's easy to see why the code works in this case. To obtain each of the RGB components, we shift right (by 0, 5, or 10), mask the bits, and shift those bits up to the high end of the byte, obtaining an 8-bit component. Consider, however, an instance of this code executing on a big-endian processor. In this case, the first byte is the high byte, and the register looks like this:
[gggBBBBBARRRRRGG]
Horrors! Clearly, this code won't work correctly on PowerPC. The components get all jumbled and everything's a mess.
How do we correct for this? We could split off a different case for big-endian hosts for use with little-endian pixels. But if we step back for a moment and think about the problem, it turns out we can kill two birds with one stone (or, as JLG would say, tractor two birds with one stone). We can fix this problem and make our code handle big-endian pixels for free.
Attentive readers will have noticed that it's not the endianess of the pixel per se that matters, but the endianess of the pixel with respect to the host endianess. That is, once we're actually playing with the pixel as a 16-bit word (and not as a bytestream in memory), a little-endian pixel on a big-endian machine looks like a big-endian pixel on a little-endian machine, and a little-endian pixel on a little-endian machine looks like a big-endian pixel on a big-endian machine.
Got that?
Simply said, for reading and writing pixels, there are only two endianess cases: host-endianess and not host-endianess. A host-endianess 15-bit pixel in a register always looks like
[ARRRRRGGGGGBBBBB]
and an anti-host-endianess 15-bit pixel always looks like
[gggBBBBBARRRRRGG]
This means we need only two code paths per endianess-independent pixel format to handle both endianesses of processors and both endianesses of pixels. Here's some code to read a 15-bit pixel that works for all endianess combinations:
uint16 value = *((uint16 *)GetPointer(x, y));
if (!IsHostEndianess(ColorModel()))
    value = (value >> 8) | ((value & 0xff) << 8);
aColor.blue  = (value & 0x1f) << 3;
aColor.green = (value & (0x1f << 5)) >> 2;
aColor.red   = (value & (0x1f << 10)) >> 7;
aColor.alpha = 255;
Here, we simply byte-swap the register before doing the conversion, to make sure the value is in a format the conversion code can deal with. [Aside: Notice that half of the shifting has been moved to operate on constants; this is faster because the constant is shifted at compile time rather than run time. The speed gains from this trick would be minimal for the GetPixel()/PutPixel() framework, but very significant in more optimized code.]
I mentioned earlier that the code is still slightly flawed. To see what the problem is, take a look at what we're doing for each component. We obtain a 5-bit value and convert to an 8-bit value. To do this, we're shifting up by three. This appears to work—it gets the values pretty close to the mark—but the rgb_color struct produced is not as accurate as it could (and perhaps should) be. The maximum value of a 5-bit value is 31. The maximum value of an 8-bit value is 255. We'd like one to map to the other, but (31 << 3) is only 248. Ack!
To do a fully correct conversion, we need to duplicate the three high-order bits of the source 5-bit value into the three low-order bits of the destination 8-bit quantity. This automagically corrects our mapping. In applications where speed is more important than accuracy, you might not want to do those extra logical ops. But this learning exercise is not one of those times.
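To make the replication concrete, here's the arithmetic for the brightest 5-bit value:

uint8 c = 0x1f << 3;  // 31 << 3 == 248 (0xF8): not quite full brightness
c |= c >> 5;          // 248 | 7 == 255 (0xFF): the endpoints now map exactly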
Is there anything else about this code we can improve? Well, this one is a bit nit-picky, but worth mentioning: 15-bit pixels have an alpha bit. Depending on our application, we might want to preserve the alpha information.
So, the fully correct any-endianess conversion is:
uint16 value = *((uint16 *)GetPointer(x, y));
if (!IsHostEndianess(ColorModel()))
    value = (value >> 8) | ((value & 0xff) << 8);
aColor.blue   = (value & 0x1f) << 3;
aColor.blue  |= aColor.blue >> 5;
aColor.green  = (value & (0x1f << 5)) >> 2;
aColor.green |= aColor.green >> 5;
aColor.red    = (value & (0x1f << 10)) >> 7;
aColor.red   |= aColor.red >> 5;
aColor.alpha  = 0 - (value >> 15);
Whew! A lot of work just to get a friggin' pixel, eh? Now I'll clear up some misleading things that the article either said or implied.
The first is that the Application Server allocates bitmaps only in chunks of 4K or more, and that smaller bitmaps waste a lot of extra space. This is actually almost never the case. It's true that some bitmaps reserve an area for their data, and that areas can consist of no less than one 4K page, but the only bitmaps that do this are those for which you specify B_CONTIGUOUS as a flag.
Under the BeOS, the only way to map a contiguous set of logical pages to a contiguous set of physical pages is to create a special area for those pages; thus, an area is needed for any physically contiguous bitmaps. All other, non-physically-contiguous bitmaps are stored together in a single large area that the app_server shares with the client. So, while there's a small amount of overhead per bitmap, and you might want to group together large numbers of very small bitmaps into one large bitmap anyway (say, 2000 4x4 bitmaps), don't assume that at least a page is needed for any given general bitmap. It ain't true.
The second point is more a clarification than a correction. The article talks about using these GetPixel()/PutPixel() routines and states, "The second benefit is that you don't actually have to talk to the app_server in order for your icon to be drawn into your pixel buffer. And why not talk to the app_server? Because it's a busy team and you would probably rather not make requests if you really don't have to."
William is absolutely right that there are things you probably shouldn't use the Application Server for. Putting a pixel into a bitmap, or even several pixels, is definitely one of them. It's true that the app_server is a busy team, but it also has threads reserved for your use and ready to serve you at all times.
The reason to avoid it for tiny operations is that the app_server is, after all, a server: there is context-switching and memory-copy overhead inherent in talking to one. But we're working to minimize and/or eliminate more of that overhead, and to optimize drawing performance, in every release.
So, while you might not want to use the app_server to put individual pixels into a bitmap, you probably do want to use it to blit any decent-sized bitmap (i.e., 32x32 and up) into another or onto the screen. Just be sure to use DrawBitmapAsync() and to not make any unnecessary synchronous calls. If you're not satisfied with the performance, gimme a call.
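By way of illustration, a typical blit from a view's Draw() hook looks something like this (a sketch; MyView and fBitmap are hypothetical names):

void MyView::Draw(BRect updateRect)
{
    // Queue the blit without waiting for the app_server to reply...
    DrawBitmapAsync(fBitmap, BPoint(0, 0));
    // ...and just push the command buffer out; only Sync() if you truly
    // need the drawing completed before continuing.
    Flush();
}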
That's about it for now. For those of you who have been holding your breath waiting for the article in which I promised to divulge all the Ultra-Secret New R4 Application Server Features, it'll be a couple more weeks. I'm still coding 'em. But don't worry. It'll be good.
The Game of Life is a classic programming exercise, perhaps second in popularity only to "Hello World". It's fun to program and it's really fun to watch the exciting patterns and interactions appear and disappear. But as exciting as that classic, 2D game is, this is BeOS, and nothing less than three dimensions will do!
To visualize the 3D version of Life, we're going to use OpenGL. The beauty of OpenGL is that it allows you to concentrate on what you're trying to draw rather than on how to draw it. Put another way, it makes you look pretty impressive with just a few lines of code!
The focus of this article is more on using OpenGL on BeOS, as opposed to a general OpenGL tutorial. However, only basic OpenGL techniques are used in the code, so even if you're unfamiliar with OpenGL, you should be able to understand it. It helps to arm yourself with a good OpenGL book; personally, I'd recommend OpenGL Programming Guide, 2nd edition by Woo, Neider, and Davis.
The sample code for this application can be found at:
ftp://ftp.be.com/pub/samples/open_gl/3Dlife.zip
Before we dive into the code, we need a little background. Life in three dimensions is played in exactly the same manner as Life in two dimensions. For each cell, a count of the living neighbors is taken. In two dimensions, there are eight neighbors, while in three dimensions, there are 26. The number of living neighbors determines the fate of the cell in question: given enough living neighbors, but not too many, the cell may continue living. Or, if the cell was not alive, it may spring to life in the next generation, if conditions are favorable.
Carter Bays, a computer scientist at the University of South Carolina, has investigated three-dimensional analogues of the standard, two-dimensional game of Life. The trick in extending the game by an extra dimension is finding rules which properly balance life and death on our little "planet." He has found two such sets of rules, called Life 4-5-5-5 and Life 5-7-6-6. The first two numbers are the lower and upper bounds for sustaining life. For example, in Life 4-5-5-5, if a living cell has fewer than four living neighbors, it will die of loneliness, but if it has more than five living neighbors, it will die of overcrowding. Similarly, the next two numbers are the lower and upper bounds for creating new life. Again, in Life 4-5-5-5, if an empty cell has exactly five living neighbors, new life will be born. Otherwise, the cell will remain empty.
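As a sketch of the rule for a single cell (a hypothetical helper; the sample's DoLife() works over the whole board at once), Life 4-5-5-5 boils down to:

// Life 4-5-5-5: survive with 4 or 5 living neighbors, be born with exactly 5.
bool NextState(bool alive, int livingNeighbors)
{
    if (alive)
        return (livingNeighbors == 4 || livingNeighbors == 5);
    else
        return (livingNeighbors == 5);
}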
Now, onto the code! As always, we want to start with a good, clean design, but one which will allow flexibility for future improvements. Most importantly, we will not concern ourselves with optimizing the Life calculation code, tempting though it might be. As Donald Knuth warns us, "Premature optimization is the root of all evil."
The major component of our design centers around multithreading, which gives us better performance and better responsiveness than if we lumped all the code into a single thread. We'll have three threads: one for the Life calculations, one for the drawing of the Life board, and one for the window. (Of course, the window thread is created for us when we instantiate the BWindow, while the first two must be explicitly spawned.)

So in lifeView::AttachedToWindow(), we start the life calculation thread and the drawing thread:
// start a thread to calculate the board generations
lifeTID = spawn_thread((thread_entry)lifeThread, "lifeThread",
    B_NORMAL_PRIORITY, this);
resume_thread(lifeTID);

// start a thread which does all of the drawing
drawTID = spawn_thread((thread_entry)drawThread, "drawThread",
    B_NORMAL_PRIORITY, this);
resume_thread(drawTID);
Now that we've created them, these two threads need to be synchronized. It seems tempting to let the lifeThread calculate as many generations ahead as possible, queueing them up for display by drawThread, but it really won't buy us anything. A little experimenting with BStopWatch confirms that displaying each generation takes much longer than the calculation of the next generation, so we only need to stay one step ahead of the display.
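If you want to repeat that measurement yourself, BStopWatch makes it nearly free. A minimal sketch, wrapping the DoLife() call from the loop below:

{
    // Reports the elapsed time when the watch goes out of scope.
    BStopWatch watch("DoLife");
    DoLife(prevGen, nextGen);
}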
We use two global semaphores, read_board_sem and write_board_sem, to let the lifeThread know when to calculate a new generation, and to let the drawThread know that there's a new generation to be displayed. In lifeThread, we acquire the write_board_sem, calculate the next generation, and release the read_board_sem:
while (!(mv->QuitPending())) {
    nextGen = new bool[BOARD_SIZE][BOARD_SIZE][BOARD_SIZE];
    acquire_sem(write_board_sem);
    if (mv->QuitPending()) {
        // semaphore was released from ExitThreads()
        break;
    }
    // calculate the next generation
    DoLife(prevGen, nextGen);
    // "post" the next generation
    board = nextGen;
    prevGen = nextGen;
    release_sem(read_board_sem);
}
QuitPending() is a function defined in the lifeView class which returns a boolean flag. The flag is set to true in the ExitThreads() function if a B_QUIT_REQUESTED message has been posted. Checking this flag with each pass of the loop allows the thread to exit gracefully. It's used in both the lifeThread() and the drawThread().
Similarly, drawThread tries to acquire the read_board_sem, but here we use acquire_sem_etc(), which allows us to time out. We may be rotating the display, and we don't want to have to wait until the next generation is computed to rotate (as would be the case if we're running in the single-step, rather than continuous, mode of life calculation):
while (!(mv->QuitPending())) {
    mv->SpinCalc();
    mv->Display(displayBoard, generations, steadyFlag);
    if (mv->continuousMode() && steadyFlag) {
        mv->continuousMode(false);
    }
    if (mv->continuousMode() || mv->singleStepMode()) {
        if (acquire_sem_etc(read_board_sem, 1, B_TIMEOUT, 100)
                != B_TIMED_OUT) {
            if (displayBoard) {
                delete [] displayBoard;
            }
            displayBoard = board;
            board = NULL;
            generations++;
            mv->singleStepMode(false);
            release_sem(write_board_sem);
        }
    } else {
        // if the display isn't rotating and we don't have a
        // new board to display, give the CPU a break!
        if (!mv->spinning())
            snooze(500000);
    }
}
It's important to note that we release these semaphores in ExitThreads(). If we didn't do this, the acquire_sem() in lifeThread() would never return. Note also that once we acquire the write_board_sem, we make sure that it wasn't released from ExitThreads(), by checking the status of QuitPending().
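Here's roughly what ExitThreads() has to do, as a sketch consistent with the description above (the quitting member name is an assumption; see the sample code for the real thing):

void lifeView::ExitThreads()
{
    quitting = true;               // the flag that QuitPending() reports
    release_sem(write_board_sem);  // wake lifeThread so it notices the flag
    release_sem(read_board_sem);   // wake drawThread for the same reason
    // (You would typically also wait_for_thread() on lifeTID and drawTID here.)
}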
The OpenGL portion of the code is very straightforward. First, we instantiate a class (lifeView) which is derived from BGLView, and pass the BGLView constructor the options BGL_RGB | BGL_DEPTH | BGL_DOUBLE, indicating that we will enable depth testing and double buffering. Next, in lifeView::AttachedToWindow(), we prepare the OpenGL state:
LockGL();

// turn on backface culling
glEnable(GL_CULL_FACE);
glCullFace(GL_BACK);

glEnable(GL_DEPTH_TEST);
glEnableClientState(GL_VERTEX_ARRAY);
glShadeModel(GL_FLAT);
glClearColor(0.0, 0.0, 0.0, 0.0);
glOrtho(-BOARD_SIZE, BOARD_SIZE, -BOARD_SIZE, BOARD_SIZE,
    -BOARD_SIZE, BOARD_SIZE);

UnlockGL();
Before you call an OpenGL function, you must LockGL(), which assures that only one thread at a time will be making OpenGL calls. As with all locks, this one should be held as briefly as possible.
The Display() function actually draws the board. We clear the view, make several glRotatef() calls to get the view we want, translate the board to be centered about the origin, and draw the living cells as cubes (in the DrawFrame() function). Most importantly, don't forget to call SwapBuffers()! Because we've selected double buffering mode, all of our drawing is done off screen, and we need to call SwapBuffers() to be able to see what we've just drawn.
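Boiled down, the Display() path looks something like this sketch (the rotation angles, the exact translation, and the parameter list are assumptions; see the sample code for the real routine):

void lifeView::Display(bool *board, int32 generations, bool steady)
{
    LockGL();
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glPushMatrix();
    glRotatef(xAngle, 1.0, 0.0, 0.0);   // spin the board
    glRotatef(yAngle, 0.0, 1.0, 0.0);
    glTranslatef(-BOARD_SIZE / 2.0, -BOARD_SIZE / 2.0, -BOARD_SIZE / 2.0);
    DrawFrame(board);                   // one cube per living cell
    glPopMatrix();
    SwapBuffers();                      // make the off-screen drawing visible
    UnlockGL();
}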
There's a lot more fun to be had with this code, so we'll return to this project in future articles. Some of the things we'll be adding include zooming in and out, the ability to "grab" the board with the mouse and rotate it, and transparency, so we can see through the cubes. In the meantime, experiment with this code and your own OpenGL ideas, and have fun!
More acronyms? Yes, but only for the duration of this column, and I'll make my use of them clear over the course of it. OPH stands for Other People's Hardware and OPOS for Other People's Operating Systems.
This week's topic arises from suggestions that we look at running the BeOS on various interesting hardware ranging from multiprocessor PowerPC machines to Alpha-based systems. Proponents often refer to our [supply your own adjective here] decision to port the BeOS to the Power Mac and Intel-based hardware. Since this is clear evidence of a preference for running the BeOS on OPH rather than limiting it to our proprietary BeBox, why not consider more OPH ports?
Going back to our original OPH decision, I confess it wasn't so much strategic as reactive, borne out of irritation. Or, to be more generous to ourselves, prompted by an incident that triggered a decision that was already lurking in the back of our minds.
Arguably, the process started in November 1995 when we saw the multitude of inexpensive dual and quad processor motherboards displayed at Comdex. Later, a Friday afternoon phone call "de-registering" us, three days before the event, from the May 1996 Apple Developer's Conference, greatly helped to clarify previously inchoate thoughts.
We had been invited to the conference, confirmed (and reconfirmed at our request because we weren't sure of our status), our check had cleared... and then we were informed that we had to beg off—because we weren't a Mac developer.
Legally, this was accurate. But, as one of us—I can't recall exactly who it was—pointed out, we could volunteer to become one. Just do a quick port and qualify as yet another developer adding value to Power Mac hardware.
Since there already was a preexisting, if nebulous, acceptance of OPH thought, everyone quickly embraced the idea, including our friends at Apple. The PowerPC version of the BeOS was launched shortly thereafter. A year later, we started giving demonstrations of an early Intel port. Both actions unquestionably establish that we are in the business of running the BeOS on OPH.
In fact, our OPH bias is not so pellucidly clear, and that's where OPOS comes in. When we looked at our decision to move from the BeBox to OPH, our first thought was that we could benefit from the much higher volumes enjoyed by established hardware platforms. BeOS developers could reach a wider market and Be shareholders could focus their investment on the operating system alone.
As to why we built multiprocessor hardware in the first place, see some of my earlier Newsletter articles:
Strategy
What Business Are We In, Really?
When, Where Can I Buy One?
Be had inexpensive Symmetric Multi-Processor hardware long before it became a quasi-commodity in the Intel space. In turn, our SMP hardware gave us the foundation for the nimble SMP features in the BeOS.
The wider hardware playing field argument, however, is hard to separate from another factor. That is, the presence or absence of a dominant OS on the hardware platform we're considering. In our opinion, at this stage of our game, we think it's a good idea for us to coexist with the dominant system software life form.
These last two sentences require additional explanation. When I write "hard to separate," I refer to the old, unsettled, hardware and OS, chicken and egg argument. The need for a preexisting strong OS refers to the perceived limitations of a nascent and specialized system software platform such as the BeOS.
A strong general purpose OS on the targeted OPH provides a reassuring presence. For a potential customer, the BeOS applications and the BeOS itself can be seen as adding value to a well-established, safe system -- as opposed to demanding an all-or-nothing bet on a BeOS-only solution.
Besides using the precautionary "in our opinion" above, I also wrote "at this stage of our game." Time will tell how broad and deep the range of BeOS uses will become. But for this stage of our life, we're best in a symbiotic relationship with mature OPOS life forms.
A strange thought, understandably uncomfortable for some. It is, however, the thought that underlies our decision not to support some hardware systems in spite of their otherwise impressive performance credentials.
BeDevTalk is an unmonitored discussion group in which technical information is shared by Be developers and interested parties. In this column, we summarize some of the active threads, listed by their subject lines as they appear, verbatim, in the mail.
To subscribe to BeDevTalk, visit the mailing list page on our web site: http://www.be.com/aboutbe/mailinglists.html.
Mark S. VanderVoord votes "yea" on applications that can act like add-ons: “A user's suite of applications becomes a set of modules, all of which are customizable and/or interchangeable. If a user was to learn a SpellCheck module, why should they need to learn a new one just because they purchase a new word processor?”
So, what are the chances that application writers will publish add-on versions, and that other developers will use these add-ons? Osma Ahvenlampi: “All it requires is some co-operation and self-discipline on the parts of the authors, as well as the capability to accept someone else's work and not fall into the Not Invented Here syndrome. In other words, especially if we're talking about commercial software, and based on prior experience of such, not bloody likely.”
But does anything stand in the way technically? Not much in the way of plumbing, but would a service-providing app know which format the service-requestor wants? Jon Watte: “This is solved already. Something initiating a drag can put 'promised' data formats in the drag message, and the recipient can choose the one that fits best and ask the initiator to fulfill the promise. It's in R4. Part of the protocol is the ability to say 'please hand me a disk location, and I can write data to there' so you can drag clippings to the Tracker.”
How many items is too many? Jim Menard has noticed that a BListView with, oh, a thousand items can take a very long time to display. “The construction of the list items is not the time-waster. Adding the items to the list is taking all of the time. Any suggestions, or do I have to roll my own scrollable list widget?”
A number of folks wrote in to say that "thousands of items" is asking for too much, and that the mechanics of scrolling through that many items isn't a great interface. Jon Watte summed it up: “If you already have the data in some indexable storage, I don't think you should use a BListView. The problem with BListView... does not fit well into a framework where you have alternate storage. Instead, write your own array view. It's not hard; especially if each item has the same height.”
So what's a good 10,000 item displaying GUI paradigm? Some suggestions were offered, including a background task that reads items in the direction of the scroll.
How do you set the submenu delay, i.e., the time it takes for a submenu to appear after the mouse touches its initiating menu item?
THE BE LINE: (From Pavel Cisler) You can't in R3—but R4 implements a whole new callback mechanism to do this properly, stay tuned.
Joystick talk led to a semi-rhetorical poll from Jon Watte: “What level of cooked-ness of data should the BJoystick class provide to your application? Should it implement things such as scaling of input data, dead-zone centering, and filtering to eliminate jitter? It seems to me that you'd be better off with raw-er data, because then your game can decide what the right thing to do is, but maybe games developers expect this from the OS?”
Sean Gies answers practically: “There could be a Joystick preferences app to take the user through the standard calibration routines. That way, I can always expect to receive a float between -1.0 and 1.0 when I query a BJoystick object...Filtering, however, should be left up to us game developers so that we can determine how 'soft' or 'hard' our game will feel.”
"And I as well think this," spake others.
Is there any way to set the mouse position? Will it eat your soul? Most folks think that setting the mouse position should be rare, but it shouldn't be prohibited.
THE BE LINE: Use set_mouse_position() and link against libgame. It works even if you aren't displaying a BWindowScreen.
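A sketch of the usage (to the best of my recollection, the call is declared in the Game Kit's WindowScreen.h and takes screen coordinates):

#include <WindowScreen.h>

// Warp the cursor to the middle of a 640x480 display; add libgame.so to
// your link line.
void CenterCursor()
{
    set_mouse_position(320, 240);
}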