Real Applications in EyeOS

I am considering a region-based rendering engine as a thought experiment.  Effectively, consider sprite-based or tile-based rendering, but with variable-sized areas.  This experiment aims to propose a simple way to transmit a visual display of a document through a Web browser; in other words, to run in EyeOS.

First, we want to establish the scope of this engine.  The engine would build directly into the application itself, providing an alternate output display.  The software would describe itself to a system serving it through a Web browser (such as a PHP script), including its layout and rendered data.  It would describe the rendered area in pages, as requested; updates only happen to on-screen areas and to the entire document height.  To run the engine, the server asks the binary to create a temporary UNIX socket or pipes and a PID file; the server software notices these and uses them to pass the display through to the browser client.
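As a sketch of that announcement step, here is roughly what the application side might do at startup. The directory layout, file names, and the Python itself are assumptions of this sketch, not anything the post specifies:

```python
import os
import socket
import tempfile

def announce_instance(app_name="ooffice_web"):
    """Create a per-instance directory holding a PID file and a UNIX socket.

    The layout here is hypothetical; the point is only that a serving
    script (e.g. PHP) can discover running instances by scanning for
    these files.
    """
    instance_dir = tempfile.mkdtemp(prefix=app_name + "-")

    # PID file lets the server check that the instance is still alive.
    with open(os.path.join(instance_dir, "pid"), "w") as f:
        f.write(str(os.getpid()))

    # UNIX socket on which the server-side script requests display updates.
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.bind(os.path.join(instance_dir, "display.sock"))
    sock.listen(1)
    return instance_dir, sock
```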

For a more focused example, let’s assume we wanted to work with EyeOS to make the OpenOffice.org desktop client run through EyeOS, without massive downstream bandwidth via VNC, or a Flash plug-in, or bastardization through a completely new JavaScript rendering engine.  OpenOffice.org would include code to work with this when called as ooffice_web; EyeOS would have an application that calls the local command “ooffice_web writer ${USER_CONFIG_DIR}/lapps/” to execute it.  EyeOS would further examine each subdirectory under ${USER_CONFIG_DIR}/lapps/ and verify the PID file: that /proc/$pid/fd/ contains the pipe files, or some such verification that the PID really belongs to the application you think it does; if this check fails, it removes the directory.

EyeOS would use AJAX and a JavaScript time-out to repeatedly poll an EyeOS display update script, which I believe it does anyway.  The application would specify a sliding interval, effectively giving the minimum time it expects between updates; EyeOS would honor this interval, but never poll faster than its own internal minimum allows.  With multiple applications running, EyeOS would poll on the lowest update interval needed, and on each pass only query the applications that have exceeded their own update interval or its internal minimum.
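The interval arithmetic above can be sketched in a few lines. The function names, the dict-based bookkeeping, and the 0.25-second default minimum are all assumptions of this sketch:

```python
def poll_interval(apps, eyeos_minimum=0.25):
    """The JavaScript time-out EyeOS should use: the fastest interval any
    application requests, floored at EyeOS's own internal minimum.
    `apps` maps an application id to its requested seconds between updates.
    """
    if not apps:
        return eyeos_minimum
    return max(min(apps.values()), eyeos_minimum)

def due_apps(apps, last_polled, now, eyeos_minimum=0.25):
    """On one poll pass, yield only the applications whose own interval
    (or EyeOS's minimum, whichever is longer) has actually elapsed."""
    for app_id, interval in apps.items():
        if now - last_polled.get(app_id, 0.0) >= max(interval, eyeos_minimum):
            yield app_id
```

So with a writer wanting updates every second and a spreadsheet wanting them every tenth of a second, EyeOS polls every quarter second (its floor), but only bothers the writer once a second.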

In order to update, the application would make constant judgments about which regions to update, and how.  The update protocol would basically involve the EyeOS script opening the pipe or socket and requesting a screen update.  A “blank screen” flag would signal that EyeOS has no state; the application would send a full update, along with an identifier, in the response.  An identifier would signal that an EyeOS thread still holds the last state sent under that identifier; the application would just send the changes to the display.  EyeOS could also signal that the user made mouse clicks and key presses, or scrolled, or resized the window; in these cases, the contents of the document and/or the viewing window would change, and the application would send updates.
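The blank-screen/identifier branch of that protocol can be sketched as follows. The wire format (a dict with "blank" and "id" keys) and the two rendering callbacks are inventions of this sketch, not anything EyeOS defines:

```python
def handle_update_request(request, state_by_id, render_full, render_delta):
    """App-side handling of one update request from the EyeOS script.

    `render_full()` returns a complete screen; `render_delta(old_state)`
    returns (changes, new_state) relative to a previously sent screen.
    """
    ident = request.get("id")
    if request.get("blank") or ident not in state_by_id:
        # EyeOS holds no usable state: send everything, tagged with a fresh id.
        ident = str(len(state_by_id) + 1)
        state_by_id[ident] = render_full()
        return {"id": ident, "full": state_by_id[ident]}
    # EyeOS still holds the state sent under this id: send only the changes.
    delta, new_state = render_delta(state_by_id[ident])
    state_by_id[ident] = new_state
    return {"id": ident, "delta": delta}
```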

As for the viewing window itself, the application would send the general size of the complete viewing area to EyeOS.  OpenOffice.org might say the whole document consists of three million vertical pixels (more if you zoom in, less if you zoom out), and EyeOS would render a scroll bar to match.  The actual viewing area (scrolled down so far, scrolled right so far, so high, so wide) would dictate what gets sent; scrolling would produce a full viewing area update.  No need to render the full document to the screen for fast and easy scrolling, after all.

OpenOffice.org has its own rendering engine.  It uses its own font engine, its own image engine, and the like.  It displays things differently from how your Web browser will display something formatted exactly the same using CSS and XHTML.  Because of this, the entire rendering has to present itself to the browser as graphics, not text and formatting.  This poses a huge bandwidth problem, which we can only solve with a specialized rendering engine tuned for the application: in our case, pre-rendered, pre-cached fonts and graphics.
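The viewport arithmetic above is simple: given a three-million-pixel document, only the scrolled-to slice ever gets described to the browser. A toy one-dimensional version, with names invented here:

```python
def visible_slice(doc_height_px, scroll_y, view_height_px):
    """Which vertical slice of the document the viewing area shows, so
    only that slice need be described to the browser.  Real regions would
    be two-dimensional and tied to layout elements; this just clamps the
    scroll offset into the document and returns (top, bottom) in pixels."""
    top = max(0, min(scroll_y, doc_height_px - view_height_px))
    return top, top + view_height_px
```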

For a given zoom level and font size, OpenOffice.org decides that a Times New Roman ‘A’ at 14pt bold takes up exactly so many vertical pixels, along with a bunch of other things that determine an exact rendering.  It should not matter whether the text uses a 10pt font at 90% zoom or a 9pt font at 100% zoom; the letters take up the same size on the screen, and should render exactly the same.  This may not line up with real life, but it makes sense, and maybe they should fix the rendering engine.  Graphics and word art behave similarly, rendering to certain dimensions and identifying uniquely within the document.  Table borders involve a 1-pixel dot, or something slightly more complex but 1 pixel wide or high, stretched out.
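That observation is what makes a glyph cache effective: the cache key should be the effective on-screen size, not the (font size, zoom) pair. A minimal sketch, where the key shape and the rounding are assumptions:

```python
def glyph_cache_key(font, style, size_pt, zoom_pct):
    """Cache key for a pre-rendered glyph set.

    10pt at 90% zoom and 9pt at 100% zoom occupy the same pixels on
    screen, so they collapse to one cached rendering."""
    return (font, style, round(size_pt * zoom_pct / 100.0, 2))
```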

In any case, OpenOffice.org could use PNGs with an alpha channel as pre-rendered fonts, describing to EyeOS exactly where in the rendered viewing pane (from the 0,0 top-left corner, down and over by coordinate) the letters fall, and allowing it to place them.  A similar tactic would describe table borders and graphics, along with special cursors (i.e. passing the pointer over a table border to resize).  EyeOS would cache these images, and the Web browser would also cache them as they get reused again and again.  Once the document closes, EyeOS purges the cached images.  In the meantime, the browser only downloads so much material (more gets generated when you change font sizes or zoom level), and mainly reuses the images.  Steady-state update traffic primarily comes down to the blinking cursor.
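The placement description might reduce to a list of (image id, x, y) tuples plus the set of PNGs the browser has not yet fetched; repeated letters then cost nothing after the first download. The record format here is invented for illustration:

```python
def describe_regions(glyphs):
    """Turn rendered glyph records into (image_id, x, y) placements plus
    the list of image ids the browser may not have cached yet.  EyeOS
    would fetch each PNG once and absolutely position copies of it with
    CSS; only new image ids cost bandwidth."""
    placements, seen, new_images = [], set(), []
    for g in glyphs:
        image_id = "%s-%s-%spt" % (g["font"], g["char"], g["size"])
        if image_id not in seen:
            seen.add(image_id)
            new_images.append(image_id)
        placements.append((image_id, g["x"], g["y"]))
    return placements, new_images
```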

Doing this in the back-end means that the back-end knows exactly what gets placed where, and can simply render to a different output format.  Unlike VNC or video compression, the screen does not get sectioned off and processed as an image; instead, the actual regional layout of the screen (the internal state of the rendering engine just prior to outputting a rasterized graphic) gets described as a regional layout of similar image elements: pre-rendered fonts, table borders, and so on.  Because of the large amount of repeated data (mostly rendered text), most of the screen gets reused again and again, persistently, without re-examining the entire screen and having to dig out overlapping similar regions (tracking, kerning) and all kinds of garbage.

I believe this sort of rendering would allow for embedding the real OpenOffice.org (and other applications) into EyeOS entirely through the use of Web standards such as CSS, XHTML, and JavaScript.  I also believe this would use less server-side CPU and less bandwidth than VNC or X forwarding, due to the higher accuracy and easier computation involved in reducing the volume of data sent, mainly because of low-level access to the rendering engine’s internals.  Nobody will do it, of course, but it makes a good thought experiment nonetheless.  A proof of concept drawing just arbitrary text (a notepad) would amuse me, but I don’t have the patience.


~ by John Moser on March 11, 2009.
