Haiku Admin Meeting 2007-06-18: Discussion about some "missing" articles on the new website, specifically old newsletters; a way to make them easier to locate is also desired. Mention of the continual "browser for Haiku" topic that keeps popping up on the mailing list and elsewhere; an official decision should be discussed and published, though it may still be too early to fully decide on a direction. More discussion took place about a potential physical "
Haiku’s network stack follows a top-down approach with clear boundaries and a purely object-oriented design. The TCP/IP protocol stack, however, was designed more than 30 years ago with different ideas in mind: back then its designers were mostly after speed, fast processing and stability. That’s why many of the boundaries in the TCP/IP protocol suite are somewhat blurred.
The basic hierarchy is established: a datalink layer protocol (Ethernet, PPP, etc.) is followed by a network layer protocol (IP), which is in turn followed by a transport layer protocol (TCP, UDP, SCTP, etc.).
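To make that concrete, here is a purely illustrative C++ sketch of such a chain; the class and method names are mine and have nothing to do with the actual module interface of Haiku’s network stack:

    #include <cstdio>

    // Illustrative only -- each layer wraps the data it gets from the
    // layer above and hands the result to the layer below.
    class Protocol {
    public:
        Protocol(const char* name, Protocol* below)
            : fName(name), fBelow(below) {}

        void Send(const char* payload)
        {
            printf("%s: adding its header around \"%s\"\n", fName, payload);
            if (fBelow != NULL)
                fBelow->Send(payload);   // pass the packet down the stack
        }

    private:
        const char* fName;
        Protocol*   fBelow;
    };

    int main()
    {
        // datalink <- network <- transport, exactly the hierarchy above
        Protocol ethernet("ethernet (datalink)", NULL);
        Protocol ip("ipv4 (network)", &ethernet);
        Protocol tcp("tcp (transport)", &ip);

        tcp.Send("hello");   // travels top-down: TCP -> IP -> Ethernet
        return 0;
    }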
Just to keep those of you interested updated: after discussing it with both my mentor and Michael Lotz, and after a very quick chat with Francois Revol, I am going to add isochronous support to both the USBKit and the usb_raw driver. Meanwhile Francois, if time is on his side, should add isochronous support to his user space quickcam driver (see src/add-ons/media/media-add-ons/usb_webcam/). This way I can test my previous patches, and perhaps everyone can start using their Logitech QuickCam with Haiku via CodyCam.
For those of you who are not following the haiku-commit mailing list, I’ve added isochronous support to the UHCI driver. I’m working on a quickcam driver to test the code, but if any of you out there already has a very simple driver that needs isochronous support, please contact me and help me with the testing.
The sooner I’m done with the testing, the sooner I’ll move on to the OHCI driver.
(Or: knitting a delicate fabric, part II: sewing it all together)
(Or: Right, Joker, the underwear might be on the outside, but I get to drive the Batmobile!)
(Or: I just had to say something about Batman and the Batmobile. Couldn’t help it.)
Cool, now we have a space- and time-efficient data structure (which is nothing more than a nice structure for handling non-overlapping numeric ranges) that we can use to implement the simplified stride scheduling I’ve been discussing in the previous posts. It will remain small most of the time and won’t change its shape like mad, so it’s also very gentle on the CPU cache. Not only that, but the same properties hold for any tree, should we decide that red-black trees aren’t adequate and set ourselves to use splay trees, AA trees, AVL trees, Huffman trees, unbalanced binary trees (please: no.), whatever.

So far we have taken care of the (no longer) skewed proportions due to randomness (by using strides), the mapping of “tickets” to tasks (by using trees to store the queues, “indexes” as the tickets, offsets to simulate “special” tickets, and lazy evaluation to efficiently propagate offsets to upper queues), and the order of complexity (by using the queues as the node elements of the tree, where the key is the base priority times the number of threads in that queue).
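For reference, here is a bare-bones sketch of the underlying stride mechanics in the classic Waldspurger formulation, deliberately ignoring the tree-of-queues, offsets and lazy propagation described above. A std::multimap (itself a balanced tree) keyed by pass stands in for the real structure; all names are mine and this is illustrative, not scheduler code:

    #include <cstdio>
    #include <map>
    #include <string>

    // Classic stride scheduling: each task gets a stride inversely
    // proportional to its ticket count. The task with the smallest
    // accumulated "pass" runs next, then its pass advances by its stride.
    struct Task {
        std::string name;
        int stride;
        int pass;
    };

    static const int kStrideConstant = 1 << 20;

    typedef std::multimap<int, Task*> RunQueue;   // keyed by pass

    void add_task(RunQueue& queue, Task* task, int tickets)
    {
        task->stride = kStrideConstant / tickets;
        task->pass = 0;
        queue.insert(std::make_pair(task->pass, task));
    }

    Task* pick_next(RunQueue& queue)
    {
        if (queue.empty())
            return NULL;

        RunQueue::iterator it = queue.begin();    // smallest pass wins
        Task* task = it->second;
        queue.erase(it);

        task->pass += task->stride;               // charge one quantum
        queue.insert(std::make_pair(task->pass, task));
        return task;
    }

    int main()
    {
        RunQueue queue;
        Task a = { "high", 0, 0 };
        Task b = { "low", 0, 0 };
        add_task(queue, &a, 30);     // 3x the tickets...
        add_task(queue, &b, 10);

        for (int i = 0; i < 8; i++)  // ...so roughly 3x the quanta
            printf("%s ", pick_next(queue)->name.c_str());
        printf("\n");
        return 0;
    }

Over time each task receives CPU quanta roughly in proportion to its tickets, which is exactly the property the machinery above is meant to preserve while keeping the bookkeeping cheap.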
(Or: knitting a delicate fabric, part I: the wool[1])
I sincerely hope you've read the disclaimer by now, but I guess I'd better link to it anyway :) Thanks.
I spent the better part of the last post explaining how simple strides would yield an approximation of the ideal shuffling of tickets. What I didn’t explain, however, was why the hell I’m still insisting on using tickets when strides/passes avoid those issues completely.
Well, the thing is, I didn’t ditch my previous attempt completely. It had flaws, but there were some gems there as well. I don’t know a single programmer who can’t recognise it’s possible to find sound ideas and really clever excerpts of code even when, on the whole, the code was crap.
(Yes, programmers and software architects are a proud bunch of people.)
And tickets are making a comeback, but in a very different context.
Anyway, suppose we’re starting with a clean slate, and all we have so far are the tasks and their respective priorities. Now let’s scribble a little on that slate :)
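In code, that clean slate amounts to nothing more than something like this (names are mine, purely illustrative):

    #include <string>
    #include <vector>

    // The clean slate: just tasks and their priorities, nothing else yet.
    struct Task {
        std::string name;
        int         priority;   // higher means scheduled more often
    };

    std::vector<Task> tasks;    // everything that follows builds on this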
(Or: what you finally learn to do after a number of false starts)
But first, the obligatory disclaimer, in case you missed it. Thanks.
After I admitted the basic flaws of my previous attempts, I decided to go back to basics and use a “bottom-up” approach to come up with a solution that suited the needs of the Haiku scheduler: fairness, responsiveness, scalability, succinctness and efficiency.
Right. So let’s keep in mind that the BeBook already hints at what the priority numbers should roughly mean: frequency of scheduling. We have 120 priority levels; the highest 20 are reserved for real-time threads.
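For reference, these are (roughly) the priority constants OS.h has carried since the BeOS days; everything from B_REAL_TIME_DISPLAY_PRIORITY upwards forms the real-time band:

    #define B_IDLE_PRIORITY               0
    #define B_LOWEST_ACTIVE_PRIORITY      1
    #define B_LOW_PRIORITY                5
    #define B_NORMAL_PRIORITY             10
    #define B_DISPLAY_PRIORITY            15
    #define B_URGENT_DISPLAY_PRIORITY     20
    #define B_REAL_TIME_DISPLAY_PRIORITY  100
    #define B_URGENT_PRIORITY             110
    #define B_REAL_TIME_PRIORITY          120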
This post is not going to address those needs directly. Instead, today I’ll be telling the background story of how quite a number of pieces are being woven together to fulfill those specific needs and constraints. OK, enough metablogging ;)
After letting go of the previous approach, I decided to study the subject in the traditional literature. And while I was reading about classic scheduling approaches in the Minix book, I noticed that the description of lottery-based scheduling was very compatible with that notion of priorities meaning the approximate frequency of being chosen by the scheduler. Lottery scheduling was proposed by Carl Waldspurger in 1994, and it works roughly like this[1]:
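Every task holds a number of tickets proportional to its priority; whenever the scheduler has to pick the next task, it draws one ticket at random and the holder gets the CPU. A tiny illustrative sketch (the names are mine, and this is obviously nowhere near real scheduler code):

    #include <cstdlib>
    #include <ctime>
    #include <vector>

    // Lottery scheduling in a nutshell: tickets are handed out in
    // proportion to priority; a random ticket decides who runs next.
    struct Task {
        const char* name;
        int tickets;
    };

    // Returns the index of the task that wins this round's lottery.
    int hold_lottery(const std::vector<Task>& tasks)
    {
        int total = 0;
        for (int i = 0; i < (int)tasks.size(); i++)
            total += tasks[i].tickets;

        int winner = rand() % total;            // draw one ticket at random
        for (int i = 0; i < (int)tasks.size(); i++) {
            winner -= tasks[i].tickets;
            if (winner < 0)
                return i;                       // this task holds that ticket
        }
        return 0;                               // not reached
    }

    int main()
    {
        srand((unsigned)time(NULL));

        std::vector<Task> tasks;
        Task normal = { "normal", 10 };
        Task display = { "display", 15 };
        tasks.push_back(normal);
        tasks.push_back(display);

        int wins[2] = { 0, 0 };
        for (int i = 0; i < 100000; i++)
            wins[hold_lottery(tasks)]++;

        // wins[0] : wins[1] ends up roughly 10 : 15, i.e. proportional
        // to the tickets (and hence to the priorities they encode)
        return 0;
    }

Over many draws each task runs in proportion to its share of the tickets, which is precisely the “priority as approximate frequency” behaviour mentioned above.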
(Or: rose-coloured glasses are both the blessing and the curse of being in love)
But first, the obligatory disclaimer, in case you missed it. Thanks.
Remember where I left off in the previous post? Now, with greater confidence, I set myself to improve my original algorithm’s performance even further. But that’s just because I knew it sucked. It was extremely inefficient as far as the implementation goes; it only looked great in the benchmarks because it was being compared to O(n) algorithms (n being the number of threads in the system) while it had O(1) complexity, so I already had a head start, so to speak. Still, I knew the algorithm very well and understood that there were plenty of bottlenecks to fix. I wrote it, after all.
[/pride mode=off]
Hi, folks!
For those who don’t know me (or my GSoC assignment) already, I’m the one assigned to ticket #1069, namely:
Create an O(1) thread scheduler with CPU affinity and soft real-time support which targets desktop responsiveness.
I’d like to dedicate my next few blog entries to introducing myself and discussing how I got here, why I wanted to tackle this specific task, what background I have regarding the subject of thread scheduling, how I failed miserably to realise that my first attempt at designing an algorithm suitable for Haiku’s needs had fundamental flaws, how far I am at my second attempt, and the obligatory comparison to Ingo Molnár’s Completely Fair Scheduler that has been making the news in the Linux world.
Hi all,
According to the current Haiku sources, the ICMP implementation is only a … framework. Only ICMP echo request messages are processed.
There are essentially two RFCs that should be considered when implementing ICMP (for IPv4): RFC792, ‘INTERNET CONTROL MESSAGE PROTOCOL’, and RFC1122, ‘Requirements for Internet Hosts – Communication Layers’.
According to RFC1122, paragraph 3.2.2, there are two types of ICMP messages a host must process:
ICMP error messages:
- Destination Unreachable
- Redirect
- Source Quench
- Time Exceeded
- Parameter Problem

ICMP query messages:
- Echo
- Information
- Timestamp
- Address Mask

First of all, I plan to implement a generic icmp_send_data() that should be used whenever sending ICMP messages.
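The wire format such a helper has to produce is fixed by RFC792: a type byte, a code byte, the standard Internet checksum, and a 4-byte field whose meaning depends on the message type. The sketch below only shows that packet-building part; the helper name and the flat buffer are illustrative, since the real icmp_send_data() will presumably go through the stack’s buffer and IP modules rather than fill a flat array:

    #include <cstring>
    #include <stddef.h>
    #include <stdint.h>

    // ICMP header as defined by RFC792: type, code, checksum, plus a
    // 4-byte field whose meaning depends on type/code.
    struct icmp_header {
        uint8_t  type;
        uint8_t  code;
        uint16_t checksum;
        uint32_t rest_of_header;   // expected in network byte order
    };

    // Standard Internet checksum (RFC1071): one's complement of the
    // one's complement sum of the data taken as 16-bit words.
    static uint16_t
    internet_checksum(const void* data, size_t length)
    {
        const uint16_t* words = (const uint16_t*)data;
        uint32_t sum = 0;

        while (length > 1) {
            sum += *words++;
            length -= 2;
        }
        if (length == 1)
            sum += *(const uint8_t*)words;

        while (sum >> 16)                       // fold carries back in
            sum = (sum & 0xffff) + (sum >> 16);

        return (uint16_t)~sum;
    }

    // Illustrative packet building: fill the common fields, append the
    // payload, then checksum header + payload as RFC792 requires.
    static void
    build_icmp_message(uint8_t* buffer, uint8_t type, uint8_t code,
        uint32_t restOfHeader, const uint8_t* payload, size_t payloadLength)
    {
        icmp_header* header = (icmp_header*)buffer;
        header->type = type;
        header->code = code;
        header->checksum = 0;
        header->rest_of_header = restOfHeader;

        memcpy(buffer + sizeof(icmp_header), payload, payloadLength);

        header->checksum = internet_checksum(buffer,
            sizeof(icmp_header) + payloadLength);
    }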