One interesting AtheOS project would be to write a VNC server. VNC is a lightweight 'remote desktop' protocol that has been ported to a great many different architectures. It works best on Unix, where the server is implemented as an X server and therefore receives all the data to be drawn on the display. It has also been implemented on Windows (both the 9x and NT lines) by polling the framebuffer and monitoring various system calls, so there is a decent chance it can be ported to run on AtheOS as well.
The ease of implementation depends on how difficult it is to plug in a new set of input and output drivers and to have them run in parallel with the standard display. With X this is easy - you can run as many X servers as you like. On Windows, VNC operates in single-user mode, with the console user effectively sharing a screen with the remote user. AtheOS has its 32 desktops, so at the very least it should be possible to run VNC on one of them.
It looks like the appserver only supports one display driver at a time, as it stops looking for valid drivers after it finds the first one. It might be possible to run more than one appserver at once, and force the second appserver to use a set of input and output drivers provided by the VNC server.
(Note that the VNC viewer has been ported to BeOS, but not the server.)
Here is a message from Kurt where he mentions some things to consider when writing a VNC server.
AtheOS AppServer:
init forks (more info in startup):
    child runs appserver
    parent runs /system/init.sh

appserver (src/system/appserver/* [main() in server/server.cpp] -> /boot/atheos/sys/appserver):
    loads config file from /system/config/appserver
    initialises keyboard
    creates pcDevice = new AppServer (constructor loads default fonts)
    calls pcDevice->Run
    {
        creates port "gui_server_cmd"
        init_desktops() (in desktop.cpp) - video
        {
            looks in /system/drivers/appserver/video/ and loads each driver
            in turn, stopping when one works
            if it doesn't find one, uses the VESA 2.0 driver
        }
        InitInputSystem() (in input.cpp) - input
        {
            looks in /system/drivers/appserver/input/ and loads each driver in turn
            if it doesn't find a mouse driver, uses DosMouseDriver (?)
        }
        main loop
        {
            wait for msg on gui_server_cmd port
            processes message in AppServer::DispatchMessage( Message* pcReq );
        } // main loop
    } // pcDevice->Run

Accessing the desktops:
framebuffer access

  from a user app (after initialising the Application object):

    #include <gui/desktop.h>

    os::Desktop cDesktop;
    void* pFb = cDesktop.GetFrameBuffer();

  from inside the appserver: src/system/appserver/server/screenshot.cpp

    void ScreenShot()
    {
        g_cLayerGate.Close();
        if ( g_pcTopView != NULL && g_pcTopView->GetBitmap() != NULL ) {
            write_png( "/tmp/screenshot.png", g_pcTopView->GetBitmap() );
            printf( "Done\n" );
        }
        g_cLayerGate.Open();
    }

  Relevant variables:
    Layer* g_pcTopView;       - the top layer (currently visible)
    Array<Layer>* g_pcLayers; - all the desktops
keyboard:
  src/system/appserver/server/keyboard.cpp

  The keyboard driver is implemented inside the appserver, reading scan codes
  from /dev/keybd. Relevant functions are as follows:

    InitKeyboard()   - starts "keyb_thread" with entry point HandleKeyboard().
    HandleKeyboard() - opens /dev/keybd read-only and waits in a main loop for
                       scan codes. When a 'key down' code (in nKeyCode) is
                       received, calls:
                         AppServer::GetInstance()->SendKeyCode( nKeyCode, g_nQual );
                       (g_nQual tells whether left or right shift, alt or ctrl
                       are pressed at the time).

mouse:
  src/system/appserver/server/input.cpp
  header: src/system/appserver/server/inputnode.h

  Each input driver is derived from InputNode (inputnode.h), and implements
  its two virtual functions, Start() and GetType():

    Start()   - starts the main driver thread (calls EventLoopEntry(), which
                calls EventLoop()) and returns. The thread names for the two
                mouse drivers in kernel 0.3.5 are:
                  sermouse_event_thread - serial mouse driver
                  ps2mouse_event_thread - PS/2 mouse driver
    GetType() - returns IN_MOUSE if the driver is for a mouse.

  The work is done in the following functions (private in ps2mouse.cpp):

    EventLoop()     - polls the mouse port and calls DispatchEvent(dx, dy, buttons)
                      when something happens.
    DispatchEvent() - processes the x and y differences (nDeltaX, nDeltaY) and
                      button state (nButtons) and forms a Message* with one of
                      the following event codes:
                        M_MOUSE_DOWN
                        M_MOUSE_UP
                        M_MOUSE_MOVED
                      This is then passed to EnqueueEvent() (inherited from
                      class InputNode), which processes the event and updates
                      the appserver's internal structures:
                        int InputNode::s_nMouseButtons
                        Point InputNode::s_cMousePos

Output (video):