Re: OpenTransport...
- Subject: Re: OpenTransport...
- From: Zack Morris <email@hidden>
- Date: Fri, 25 Oct 2013 11:55:35 -0600
On Oct 25, 2013, at 4:11 AM, Quinn The Eskimo! <email@hidden> wrote:
>
> On 24 Oct 2013, at 20:43, Jens Alfke <email@hidden> wrote:
>
>> It was all designed around the limitations of the incredibly primitive Classic OS, especially its lack of multithreading. It’s also not based on the common Unix socket API at all; I think the inspiration was a “streams” API from, um, maybe Solaris?
>
> You have to distinguish between the API and the implementation. There are two APIs:
>
> o XTI, and the Open Transport API, which is just a Mac OS flavour of XTI
>
> o BSD Sockets
>
> There are two implementations:
>
> o STREAMS
>
> o BSD networking
>
> Both implementations could host both APIs. STREAMS implementations almost invariably ship with a BSD Sockets API (a notable example being Solaris) and, much to everyone's amazement, Apple managed to support the Open Transport API on BSD Networking (-:
>
> XTI is a bit of a weird API, but so is BSD Sockets when you start looking at it in detail. The real issues with the Open Transport API were:
>
> o lack of blocking semantics
>
> o listening for incoming connections (what a frikkin' nightmare)
>
> The first was more to do with traditional Mac OS than the XTI API per se. And, ironically, the message from Apple in recent years is to avoid blocking semantics anyway.
>
> Apple solved the second problem with some nifty STREAMS module trickery ("tilisten").
>
> On the implementation front, Mentat STREAMS was awesome. Whenever I work inside BSD Networking (not very often these days), I'm reminded how much I miss it.
>
> Share and Enjoy
Ya, I have to say that after learning nearly the entirety of Open Transport back in the early 2000s, for a now-defunct P2P gaming API I was working on to replace NetSprocket, I do miss some of its concepts. It had the potential to be very fast because it was fairly lean and let you do low-level things like access buffers without copying them. It felt a bit closer to the metal in the way you could do things from interrupts. I figured that if I got it working, I would understand the fundamentals of networking on any platform.
But what I definitely don't miss is its tendency to encourage premature optimization. I wrote huge wrappers for things like the atomic functions so I could have something similar to the C++ standard library's lists and keep track of objects without requiring them to embed an OTLink. I compartmentalized all of the low-level calls that, in hindsight, were odd to be dealing with in the first place. I made the whole thing nonblocking and started down the road of coroutines (cooperative threads), but in an ugly way, without realizing what I was doing.
The real kicker is that once the code was written and I was in the testing phase, with people joining the game "lounge", it had turned into such a large state machine that I was unable to prove it was functioning correctly. People would just time out or lose their connection, and it was maddening trying to figure out why, over and over again. I ended up scrapping the whole thing because it was simply not possible to make it stable or to guarantee it was free of exploits. I shudder to think about it now, because after two years of some of the hardest thought I have ever put into anything, the code was a dead end. It reminds me of the scene in Animal House where the guys are talking to the professor:
Jennings: Teaching is just a way to pay the bills until I finish my novel.
Boon: How long you been workin' on it?
Jennings: Four and a half years.
Pinto: It must be very good.
Jennings: It's a piece of $#!@. Would anyone like to smoke some pot?
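In hindsight, the state-machine trouble described above has a well-known shape. Here is a hedged sketch in Python (not Open Transport code; the 2-byte-length framing protocol is made up for illustration) of the same tiny task, "read a 2-byte length header, then that many payload bytes," written both as an explicit state machine and as a coroutine, where the "state" is simply the position in the code:

```python
# 1. Explicit state machine: state lives in fields, and every event
#    handler has to check which state we are in before acting.
class StateMachineReader:
    def __init__(self):
        self.state = "LEN"      # waiting for the 2-byte length header
        self.buf = b""
        self.need = 2           # bytes required before we can act
        self.messages = []

    def feed(self, data):
        self.buf += data
        while len(self.buf) >= self.need:
            chunk, self.buf = self.buf[:self.need], self.buf[self.need:]
            if self.state == "LEN":
                self.need = int.from_bytes(chunk, "big")
                self.state = "BODY"
            else:
                self.messages.append(chunk)
                self.state, self.need = "LEN", 2


# 2. Generator-based coroutine: no state field at all; suspending at
#    "yield" until more bytes arrive plays the role of the state machine.
def coroutine_reader(messages):
    buf = b""

    def need(n):
        nonlocal buf
        while len(buf) < n:
            buf += yield        # suspend until the caller sends more bytes
        chunk, buf = buf[:n], buf[n:]
        return chunk

    while True:
        length = int.from_bytes((yield from need(2)), "big")
        messages.append((yield from need(length)))
```

Both variants produce the same messages; the difference is only in where the bookkeeping lives, which is exactly what makes large state machines so hard to prove correct.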
Networking is, by far, by an order of magnitude, the hardest thing I have ever worked on. The next closest challenge is probably 3D (OpenGL etc.), but even the most tangled 3D spaghetti code is tractable compared to networking, because at least it's deterministic. A big part of the problem is that I wrote my lib on top of UDP, so that people wouldn't have to open a port in their routers, but I never wrote a proper stream layer over it. I just made versions of read and write that guaranteed you at least 512 bytes if the call succeeded. I thought that was making things easier, but in the end it just passed the buck of maintaining reliability up to the application level. That basically means dozens or hundreds of places in the code with comments like //TODO figure out what happens when it fails here.
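For what it's worth, the "proper stream layer" over UDP mentioned above amounts to sequence numbers, reordering, and cumulative acknowledgments. A minimal Python sketch of just the receive side (the packet format and class names are illustrative, not from any real library; retransmission and windowing are omitted):

```python
import struct

HDR = struct.Struct("!I")   # 4-byte big-endian sequence number

def make_packet(seq, payload):
    return HDR.pack(seq) + payload

class ReliableReceiver:
    """Reassembles an in-order byte stream from out-of-order datagrams."""
    def __init__(self):
        self.expected = 0       # next sequence number we need
        self.pending = {}       # seq -> payload, held until it is in order
        self.stream = b""

    def on_datagram(self, pkt):
        seq, payload = HDR.unpack(pkt[:HDR.size])[0], pkt[HDR.size:]
        if seq >= self.expected:
            self.pending[seq] = payload
        # Deliver any contiguous run that is now ready.
        while self.expected in self.pending:
            self.stream += self.pending.pop(self.expected)
            self.expected += 1
        # Cumulative ack: everything below this number has been delivered,
        # so the sender knows what to retransmit without per-packet acks.
        return self.expected
```

Even this toy version makes the tradeoff visible: either a layer like this exists once, underneath, or every read/write call site in the application inherits the "what happens when it fails here" problem.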
If I had it to do all over again, I wouldn't bother with either Open Transport or sockets. They are just too low-level, and they favor bit rate over latency, to their detriment. The file/stream metaphor in sockets is brilliant, but the problem is that to do anything practical, you end up needing some kind of serialization for the structures you are passing around anyway. And the original writers went out of their way to prevent you from getting statistics on the connection, like how much you can send/receive, what the bit rate is, etc., so that burden falls on you. And you can't tweak some of the fun bells and whistles without admin permissions. They basically stink for gaming, and most everyone knows it, so we play make-believe that they don't. It's quite an astonishing emperor-has-no-clothes situation when you really stop and analyze it.
I may dabble in it again, but this time I will probably use something like ZeroMQ, with a safe sandboxed layer like Lua for the networking logic and serialization, to prevent exploits. Either that or run it in a separate Go process and pass the objects back and forth to the app using pipes or something. Getting that to work cross-platform is going to stink, though.
And there is no room for failure or timeouts at the application level: a connection needs to be considered either active or nonexistent. So I would do everything with RPCs or a software transactional memory (STM), and make all of the code (whether blocking or nonblocking) run in a separate thread or coroutine instead of a state machine, isolated from the app. Sends would always succeed, with a memory limit instead of a timeout used to determine whether a peer is there, removing them from the global state under the hood automatically. Same thing with receives: you would just get notified when complete objects have arrived. It would basically be a distributed document-oriented database like Swift or CouchBase.
This might sound a bit esoteric, but no matter what you do, you can never escape the HTTP metaphor of sending something to a server and only knowing whether it worked by the response you get. Everything else is hand-waving (like dead reckoning etc.). Sure, you can extrapolate what the future state of the server might be for your peer, but in the end you still have to get a response from the server to know for sure. I tried to have a thread representing each peer on every machine and have them all move through the same decision tree (as in a turn-based game like Monopoly), but I wasn't able to isolate the logic. It turns out that this is database replication:
http://docs.couchbase.com/couchbase-lite/cbl-concepts/#replication
But with my amateur approach, it just never could have worked. I don't know what I was thinking. I also didn't know it at the time, but this stuff is all experimental even today, and we are only just starting to see mainstream adoption and best practices (WebRTC etc.). It takes many years of thought and multiple iterations to get right, and I made the mistake of doing that research on a personal level, with credit cards.
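The "sends always succeed, memory limit instead of timeout" idea from a couple of paragraphs up could be sketched roughly like this in Python. All the names here are hypothetical, and in-process queues stand in for the real transport; the point is only that liveness is decided by a bounded outbox rather than a timer:

```python
import json
import queue

class Peer:
    def __init__(self, outbox_limit=4):
        self.outbox = queue.Queue(maxsize=outbox_limit)  # bounded: the "memory limit"
        self.inbox = queue.Queue()
        self.alive = True

    def send(self, obj):
        """Always 'succeeds' from the caller's point of view. If the outbox
        is full, the peer isn't draining it, so the peer is declared gone;
        the call itself never fails or blocks on a timeout."""
        if not self.alive:
            return              # peer already removed from the global state
        try:
            self.outbox.put_nowait(json.dumps(obj))
        except queue.Full:
            self.alive = False  # memory limit, not a timer, decides liveness

    def deliver(self):
        """Transport stub: move serialized objects into the peer's inbox.
        The application is only ever notified of complete objects."""
        while not self.outbox.empty():
            self.inbox.put(json.loads(self.outbox.get()))
```

Whether this is actually workable at scale is another question, but it captures the shape of the design: failure is expressed as "this peer no longer exists," never as a failed send at a call site.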
I don't know why I just wrote all of this, but I guess I am still looking for closure. So ya, don't do what I did :-/ Open Transport is still useful as a learning tool, and helps you see how sockets might be implemented under the hood, but it should probably stay in the '90s.
Zack Morris
_______________________________________________
Macnetworkprog mailing list (email@hidden)