Wednesday, June 29, 2011

Call for content creation OSes

This article is about me and people like me, so I would like to introduce myself: I'm an engineer and work in an EDA company, where I design different chips. At work I used to have a Unix workstation a couple of years ago; now I have a Windows laptop connected to a powerful server grid (call it a cloud if you want) over a Citrix connection, for the reasons described here. In my office people call me progressive because I use the Java Desktop System for my work and not CDE like most of my colleagues. For us engineers the computer is a tool to get the job done: it must be fast, stable and able to run all the software, which can cost up to $1,000,000 per seat per year.

At home I have been a Mac guy since MacOS X 10.1. There was a time when I really wished that at some point MacOS X would be supported by the EDA software; that was the time when Maya was ported to OS X, when AutoCAD for Mac appeared and some other engineering tools were ported to the Mac. Apple positioned MacOS X as a UNIX OS with a supported X Window port, an open-source Darwin core, a FreeBSD userland, OpenGL, a first-class Java VM and so on. Now, at the latest with the introduction of Lion, there is absolutely no point in porting professional software to MacOS X. From a content creation OS it is turning into a content consumption OS. As I wrote one year ago, there should be a differentiation between content creation OSes and content consumption OSes. A content consumption OS should run on a variety of fast-booting, network-centric, often mobile devices; it must offer easy access to different media, free or commercial, offer unified messaging, have interfaces to different social networks, sync with other devices and be dead simple to use. Software which runs on content consumption OSes should be flexible enough to run on a variety of devices, which may have different screen sizes or input methods. Often these devices are connected to a cloud, where media and personal data are stored. One of the main applications is a powerful browser with support for the latest standards like HTML5 and WebGL. Applications and media are provided through a store.

With the introduction of Lion and Windows 8, Apple and Microsoft are heading exactly in the direction of a consumer OS. It's an understandable business decision: 90% of users consume media, only 10% create it. Application and media stores and cloud offerings bring revenue after the sale of the OS. On top of that, it's fashionable: pads and smartphones are selling like crazy, social networking and digital media distribution are all the rage, and a lot of office applications are moving into the web, so some office workers don't need a powerful PC any more and are happy with devices running consumer OSes.

But what about content creators? People who used to have workstations: MCAD users, DTP professionals, 3D content creators, architects, geologists, biologists and other scientists? Do they really need an application store? Their software has a completely different sales model than an application store. How do they update their Macs at work to Lion? Will every employee need an Apple ID and put the $29 for the Lion update on private expenses? Have you read what Apple says about the new UNIX features in Lion? How will professional programs benefit from iCloud, or from Auto Save (saving huge databases might take several seconds and block the user from working)? We don't know much about Windows 8 yet, but for sure none of the professional CAD applications will use JavaScript and HTML5, Silverlight or XNA for their UI. All these techniques make sense for porting applications easily to pads or smartphones, but does anyone need Catia on a pad?

So after the differentiation between server and desktop OSes, it is now time for a differentiation between content creation and content consumption OSes. The aims of the OSes and the user groups are too different for one OS to fit all needs. The Linux world shows how it should be: while Ubuntu seems to target consumers, the Red Hat Enterprise distros are heading toward the professional user. The same should be the case for Windows and MacOS. I don't see a lot of chances for MacOS X, since Apple stopped caring about professional users a couple of years ago, but Microsoft really should consider establishing an extra line of Windows for professionals, one which is not intended to merge with pads and smartphones, but remains a powerful, stable OS for power users without a lot of experiments on UI and programming models.

Project Natal and its impact on the future of gaming

At the current gaming trade show E3 in Los Angeles, Microsoft presented Project Natal, which allows completely new forms of interaction with a computer or console. In this article I would like to analyze which technologies are combined in this project and what I would like to see coming next from the gaming industry.

First let's take a look at the hardware used for the project. Besides the Xbox 360 console, the most interesting part is a camera which is able to capture the third dimension of the person or other objects in front of it. The technology reminds me of the ZCam developed by the Israel-based start-up 3DV. This company was bought by Microsoft a few months ago, but Microsoft denies that the technology for its camera comes from 3DV; the reason for the acquisition was supposedly the patents. In any case the technology must be quite similar. The ZCam from 3DV uses infrared light to measure the distance to the object, so it is a kind of radar with a resolution of 1-2 cm, which should be sufficient for most needs. The camera also has a microphone, which is needed for the voice recognition capabilities.
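To get a feeling for why such a camera is a hardware challenge, here is a minimal sketch of the time-of-flight idea behind depth cameras of this kind. This is generic physics only, not the actual ZCam design (which may use modulated light instead of direct pulse timing):

    public class TimeOfFlight {
        // Speed of light in meters per second.
        private static final double C = 299792458.0;

        // Distance from the round-trip time of an infrared pulse:
        // the light travels to the object and back, hence the division by two.
        static double distanceMeters(double roundTripSeconds) {
            return C * roundTripSeconds / 2.0;
        }

        // Timing precision needed to resolve a given depth step.
        static double requiredTimingSeconds(double resolutionMeters) {
            return 2.0 * resolutionMeters / C;
        }

        public static void main(String[] args) {
            System.out.printf("Timing for 1 cm resolution: %.0f ps%n",
                    requiredTimingSeconds(0.01) * 1e12);
            System.out.printf("Distance for a 10 ns round trip: %.2f m%n",
                    distanceMeters(10e-9));
        }
    }

For a 1 cm depth step the sensor has to resolve roughly 67 picoseconds, which explains why this cannot be done with an ordinary webcam and needs dedicated sensor hardware.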

But the most interesting part is the software. Microsoft's research center integrated so many goodies that complement each other that it is hard to separate them and look at each one independently.

Let's start with face recognition. The software seems not only to recognize the user of the console, but also his facial expressions, and it makes assumptions about his current mood. This requires very advanced pattern recognition. The number of users of a home console is probably not that large, so recognizing them should be quite easy, but deducing the feelings of a person from the expression on his face isn't that simple. It would be interesting to know whether the software needs any calibration in advance. As can be seen in the video, facial expressions are used for gameplay (firing balls out of a monster's mouth), and advanced AI can change the game, so dialogs can become more personal, music can be played, or films from the online video store can be suggested based on the mood of the user.

Scanning objects which can then be used in the game is another great improvement to gameplay. In the demonstration of a new game which uses the capabilities of Natal, Peter Molyneux showed how the camera scanned a drawing on paper, which was then used in the continuation of the story. Again, very advanced pattern recognition is needed for that. I think the first games will use this feature just to capture the pattern without recognizing it, so things like skateboards or clothes can be customized. Of course this technique can be used for something Second Life has promised but never delivered: the possibility to become one with your avatar, so that it becomes a true virtual mapping of yourself. This is actually a dream for all fashion companies: the customer could try on all clothes (even combining them with clothes he already owns) before ordering them. The resolution is probably still not high enough for tailoring new clothes that fit exactly, but over time the resolution will improve, so it will become possible to send orders for individually manufactured clothes.

Voice recognition is a game-control element which hasn't been used much, because it is too slow to control a game with words and the variety of expressions is too large, so the AI must be very advanced to handle it. But there is one type of game which is perfectly suited for voice recognition: quiz shows. The computer must recognize just the right answer, which is much less complicated than handling free speech. However, the demonstration of Milo showed that Milo seemed to understand what the person was saying. The answers reminded me of ELIZA, but recognizing free speech and converting it into a format for ELIZA is a very big achievement, if it really works as promised.

Recognition of the player's moves: Sony's EyeToy could detect moves, but without measuring the third dimension the recognition was quite inaccurate. Now Microsoft is promising that the whole body will be recognized and the resolution will be much higher, without the need for any controller. Peter Molyneux is absolutely right when he says that controllers with more and more buttons prevent a natural interaction with the console, and Nintendo's success with the Wii only proves it. Now not even the Nunchuk is required, which should grow the community of console players even more, because the entry barrier is very low: just stand in front of the TV and start playing. One often-criticized point is that it is unrealistic to drive a car with an imaginary wheel, but the player can use any object as a wheel if he wants to have something in his hands.

So what is missing for a perfect gaming experience? The input is excellent, but the output still lacks some important features for complete immersion in the virtual environment. The visual output has become much better with the introduction of HDTV, but the format of the screen is not optimal. Of course a VR cave would be the perfect solution, but it remains expensive and consumes too much space, so it is impractical. All VR helmets have failed so far, and they do not allow social interaction with other people in the room. So a solution would be a screen with the height of a human body. A fight is much more realistic if the opponent is approximately the same size as the player himself, and the same goes for all sports competitions. For games which need a wide viewing angle, like flight simulators, the screen should be rotatable.
3D cinemas are having a revival thanks to the new digital systems, so this technology should become affordable for home users as well. Either the user wears shutter glasses, or special monitors are used which can show 3D even without them. The console must generate twice as many pictures, but I don't think that is a big problem.
The biggest issue is force feedback. A boxing fight without physical contact with the opponent is not realistic. The gaming industry offers vibrating controllers or special seats, but these solutions do not work if there are no controllers and the gamer is moving around the room. Fresh ideas are needed here.

In conclusion, Project Natal is revolutionary for the gaming industry; it combines several very advanced technologies into a solution which makes a lot of sense and is very intuitive for the consumer. It will be interesting to see what kind of games will use these technologies and how much effort it will take to create them. Microsoft has promised to deliver the camera in 2010, so it remains to be seen whether all promises can be fulfilled. However, there are still a lot of open wishes which could make gaming even more realistic.

SoC

SoC (System on a Chip) is the name for a class of microelectronic designs which consist of several parts. In older technologies those parts sat on different chips, but now they are integrated on one silicon die. A SoC is the heart of basically every modern electronic device: TV sets, set-top boxes, smartphones, tablets and others. Integrating several chips into one is nothing new; older readers will remember how Intel integrated the mathematical coprocessor into the main processor and called the resulting family i486. But nowadays there are new requirements for competitive designs which make designing a SoC a real challenge. The advantages of a SoC compared to several separate chips are higher communication speed between the parts, less space consumption, less energy consumption and hence less heat to dissipate. Less wiring is required on the PCB, and fewer components mean lower manufacturing costs for the system.

A modern SoC consists of one or several microprocessor cores, most likely of the ARM architecture, several interfaces such as a DDR memory controller, USB 2.0, HDMI, I²C, or CAN for car entertainment systems, hardware media decoders, a 3D graphics accelerator, analog-digital converters, and even physical sensors for acceleration and the like. A single company can hardly develop all these parts alone, so it needs to buy IP from other companies. There are hard and soft IP macros: soft IP means just a Verilog description of a block (think of an ARM core, which can still be optimized for timing), while hard IP means a complete layout for a specific technology (think of a fully laid-out USB controller). Since all parts are on the same die and are manufactured at once, all IPs must be available in the same technology of a certain foundry such as TSMC. These many blocks introduce additional difficulties for the developers of the analog parts of the chip, because analog parts are much more sensitive to variations during manufacturing. Moreover, irregular analog structures cause more problems for lithography than the regular digital structures in standard cells. There are solutions to this problem, like SiP (System in a Package) or 3D chips, where analog dies are placed vertically or horizontally next to the digital die, but this means higher costs, since basically two chips must be manufactured and connected in a tight package. Another problem with mixing analog and digital parts is that they can disturb each other by injecting noise into the substrate and by dissipating more heat and drawing more power in smaller areas. So if the power grid inside the design is not calculated carefully, one part could draw too much current and leave other parts underpowered.

But the real challenges for the SoC designer arise from new requirements, which a modern SoC has to fulfill because it must be sold several million times in order to become profitable:

1. New interfaces - A SoC must be able to handle input from multi-touch displays, GPS satellites, Hall sensors (magnetic compass), accelerometers, light sensors, high-resolution cameras, several wireless standards, and other input devices. It must drive high-resolution displays (possibly 3D), output hi-fi-quality audio, or even give physical feedback using actuators.

2. New possibilities for connectivity - A modern TV connects to the internet wirelessly and also needs video input from several devices, like a Blu-ray player, a set-top box, or a game console. It has FireWire and USB interfaces for external hard disks and several slots for memory cards. The SoC must handle connecting all these types of devices.

3. New programmability - Since the iPhone and its very successful App Store concept, everybody is talking about the app economy, which means generating revenue after the sale of the product. Every smartphone line has its own app store, in the foreseeable future TV and set-top box producers will have their own, and app stores from car manufacturers for their entertainment systems are expected as well. What does this mean for a SoC? A freely programmable SoC must be tested more carefully, because it is not known in advance which software will run on the system. Moreover, since the introduction of Windows for ARM, several operating systems must be able to run on a SoC and support all its interfaces.

4. Low power - Mobile systems need to run as long as possible on a single battery charge, and stationary devices should not consume too much power, for environmental reasons. That means that parts of the device which are not needed at the moment can be switched off, must wake up as soon as they are required, and must start communicating with the active parts.

Combining these requirements with advanced technology nodes, a short time-to-market window, and the pressure to sell several million SoCs, it becomes clear that new approaches are needed rather than the old way of writing the software and developing the hardware independently and bringing both together only after the design has been manufactured. The amount of verification and testing of different configurations is basically exploding. In order to fulfill the requirements listed above, the development of software and hardware must be tightly coupled. That means the software must be able to run on a model of the hardware design which resembles the functionality of the hardware as exactly as possible. But here comes the necessary trade-off: it is possible to simulate the behavior of a single transistor including parasitic effects, but simulating several million transistors at that level is simply not possible. So a higher abstraction level is necessary, one that is still accurate enough that the results do not differ too much from the produced silicon. Since a SoC consists of several IPs, one of the requirements for modern IPs is to come with different models which can be used in simulation and verification of the whole system. If simulation is still too complex to be handled by software running on regular workstations, there are special hardware solutions, based either on FPGAs or on special processor arrays, onto which the model of the system can be uploaded and emulated.
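To illustrate what "software running on a model of the hardware" means in practice: in industry such models are usually written in SystemC/C++ at the transaction level, but the principle fits in a few lines of Java. Everything below (the bus interface, the UART block, its register map) is invented for illustration and not taken from any real IP:

    import java.util.HashMap;
    import java.util.Map;

    // A toy transaction-level model of a memory-mapped peripheral:
    // driver software calls read/write on addresses instead of toggling wires,
    // so it can be developed and tested long before silicon exists.
    interface BusModel {
        int read(long address);
        void write(long address, int value);
    }

    class UartModel implements BusModel {
        // Hypothetical register map of an invented UART block.
        static final long TX_DATA = 0x00;
        static final long STATUS  = 0x04; // bit 0 = TX ready

        private final Map<Long, Integer> regs = new HashMap<Long, Integer>();

        UartModel() { regs.put(STATUS, 1); } // TX is always ready in this simple model

        public int read(long address) {
            Integer value = regs.get(address);
            return value == null ? 0 : value;
        }

        public void write(long address, int value) {
            if (address == TX_DATA) {
                // The model's observable behavior: print the transmitted byte.
                System.out.print((char) value);
            } else {
                regs.put(address, value);
            }
        }
    }

    public class FirmwareOnModel {
        // "Driver" code written against the bus interface; the same routine could
        // later target an FPGA prototype or real hardware behind the same API.
        static void print(BusModel uart, String text) {
            for (char c : text.toCharArray()) {
                while ((uart.read(UartModel.STATUS) & 1) == 0) { /* wait for TX ready */ }
                uart.write(UartModel.TX_DATA, c);
            }
        }

        public static void main(String[] args) {
            print(new UartModel(), "hello from the model\n");
        }
    }

The point is that the driver talks to an abstract interface, so the same code can later be pointed at an emulator, an FPGA prototype or the real chip, and the verification effort spent on it is not thrown away.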

All big EDA companies have started preparing for the new requirements. Synopsys bought two verification companies and is the biggest IP provider. Cadence started the EDA360 initiative, in which it develops IPs with simulation models and creates partnerships with other IP companies like ARM. Mentor is becoming active in the software business: it bought several Linux-oriented companies, and the promise here is tightly coupled software and hardware. Cadence and Mentor are also partnering to define standards for the verification of SoCs, and both have powerful hardware-based emulation solutions.

Due to the rising complexity and high manufacturing costs of a design in advanced technologies, the main focus of the chip industry is no longer the development of individual blocks but the integration of several parts into one design. Only a verified design with optimized drivers, low power consumption and great multifunctionality is competitive on today's market.

The insane world of programming for mobile devices

Yet another IT revolution is happening right now: smartphones are becoming more and more popular, and I guess nobody could have predicted the popularity of apps (well, except Steve Jobs maybe). One measure of how popular a platform is, is the number of available apps; another is how easy it is to create apps for it. Just three years back, the mobile world offered the following platforms:

PalmOS: Palm transformed its handhelds into smartphones, and all applications written for PalmOS could still be used.

Windows CE: There were several versions of the mobile Microsoft OS; the apps for different versions were not always compatible, but could be adapted.

Symbian: Coming from PSION devices, the platform was highly optimised for mobile usage, though not easy to code for.

JavaME: A stripped-down Java version was included on a lot of feature phones. It was good for games and simple apps where the UI could be completely customised, but a horror to test on different devices and to certify the code. The expectation was that when the devices became more powerful, JavaSE and JavaME could be merged at some point. JavaFX was also a hot candidate for the JavaME replacement, but it seems Oracle is not very successful in convincing the platform creators to include JavaFX in their environments.

With the exception of JavaME, all platforms could be coded for in C or C++. For JavaME there was a NetBeans plugin from Sun with all required emulators and debugging tools; Windows CE code could be written in Visual Studio.

Then the iPhone appeared on stage and nothing was the same as before.

The iPhone did not run JavaME, the iPhone did not run Flash; it had a completely new environment for mobile programmers, who had to learn Objective-C and handle Xcode. Nevertheless they followed Apple and created a stunning number of 300,000 apps, which can be downloaded from the App Store. The hype around the iPhone, and the lesson about what a phone must have to become successful, was learned quite quickly by the other platform creators, so they started developing their own coding environments in the hope that app programmers would use them and create a similar amount of apps for their platforms.

Now the situation is that every modern mobile platform asks for a different language, a different API, and a different coding environment:

Platform             | Language                  | API                | Coding Environment
iOS                  | Objective-C               | Cocoa Touch        | Xcode
Symbian              | C++                       | Qt + Symbian       | Qt Creator
MeeGo                | C++                       | Qt + Linux         | Qt Creator
Android              | Java                      | Android API        | Eclipse plugin
BlackBerry Classic   | Java                      | BlackBerry OS API  | Eclipse plugin
BlackBerry PlayBook  | Flash, JavaScript, HTML5  | -                  | Flash Builder
Windows Phone 7      | C#                        | Silverlight        | Visual Studio
webOS                | JavaScript, HTML5         | webOS API          | -

Cross-coding between platforms is quite difficult; not even the MVC paradigm can help here, since all parts of the code must be rewritten in a different language, which is just as much effort as programming from scratch.

There are several attempts to create an app which works across several platforms:

- HTML5, JavaScript, PhoneGap - All platforms have powerful browsers which understand a subset of the upcoming HTML5 standard and JavaScript. So one idea is to have an app which consists of just a web view and a hard-coded internet address. The problem with this approach is that the user must be online to use the app. Even if all the code is stored offline, the second problem is that not all features of the device can be accessed from JavaScript; this is where PhoneGap and similar frameworks step into the ring. They provide a JavaScript API which allows access to device features from JavaScript. Since the API is the same for all supported devices, apps created with PhoneGap can run on different platforms.

- Flash - Adobe is working (and marketing) hard to position Flash as a replacement for JavaME, i.e. a platform which is available for a variety of devices with a least common denominator (which is of course much higher than it was for JavaME). So far Flash is available for Android, Symbian, MeeGo and BlackBerry's new OS, and there is a compiler for iOS which transforms Flash into an Objective-C app. Flash will probably be used for the same kind of apps as JavaME: apps which do not have to look like native apps, e.g. games or fun apps.

But not only is programming different on each platform, the business models are also different: iPhone users, for example, are glad to pay a small amount of money for an app, while Android apps are better financed through ads. The procedures for app signing, for reviews by app store owners, for becoming a publisher in an app store, and the policies for an app are all different. All this means that creating an app for another platform is each time a business decision which must be reviewed carefully: does it make sense to support this or that platform? It is not just about going mobile, but about going mobile on which platform.

So far there are two clear winners in the app race, iOS and Android, with 300,000 and 100,000 available apps respectively. But nothing is as fluid as the mobile app market. Nokia is still the number one smartphone seller, and after bringing Qt to their platform it is possible that a lot of Linux-savvy programmers will be attracted to MeeGo. Never underestimate the marketing power of Microsoft and Ballmer's call for developers. Google, the company behind Android, is having a hard time being sued by Oracle for violating patents Oracle acquired with Sun's IP. So in half a year the numbers might look completely different, and a new competitor can suddenly arise from nowhere. This means that developers must be prepared to be forced to learn a new language and a new API. The best thing for them would be a consolidation to 3-4 platforms and a powerful JavaScript/HTML5 API which allows cross-platform programming in one language.

Don't Believe the Hype

Everyone and his dog is talking about cloud computing. This is the future of computing: nobody will have their own server, all data will be sent to a hoster who offers unlimited scalability for the given application, the only limitation being the depth of the user's pockets; and since the resources are shared among multiple users, processing time, bandwidth and memory are much cheaper than having your own server. No configuration, no administration is required, and self-healing services allow 24/7 availability of the applications.

Well, that is what we thought when we started to develop our application. We are a small startup in Germany and our idea is to provide a new location-based service for mobile devices and a portal. When you have a startup you never have enough resources: neither time, nor personnel, nor money, nor a lot of experience. Therefore the idea was to use cloud computing as the back-end solution. We don't want to have our own server, buy expensive upload bandwidth, configure it, administrate it, secure it, back it up. We just want to start programming our application. We looked at several alternatives, but in the end we decided to go with Google App Engine. GAE was very intriguing for us because it offered a Java environment with Google's database, fast search algorithms, low costs and, well, this is Google, so what can go wrong? Unfortunately quite a lot.

After installing the Eclipse GAE plugin and reading some documentation I was soon able to create a first example, test it on localhost and upload it to the GAE server, and voilà, it worked. Encouraged by this success I decided to develop our whole application on GAE.

The first difficulties started when I tried to create references from one object to another and save them in the database. The web is full of happy coders praising Google for how easy it was to create a 1:1 reference, or even 1:n, but I was not able to create references which worked flawlessly. At least the documentation stated that m:n references are possible by manually managing the keys of the referenced objects, so we ended up managing the keys of all objects manually - back to the roots. What was cascading again, by the way?
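For illustration, this is roughly what the manual key management looks like with GAE's JDO mapping. @PersistenceCapable, @Persistent and the datastore Key type are the real building blocks; the entity itself and its fields are made up for this sketch:

    import java.util.ArrayList;
    import java.util.List;

    import javax.jdo.annotations.IdGeneratorStrategy;
    import javax.jdo.annotations.PersistenceCapable;
    import javax.jdo.annotations.Persistent;
    import javax.jdo.annotations.PrimaryKey;

    import com.google.appengine.api.datastore.Key;

    // Instead of a real m:n relationship, each side stores the datastore
    // keys of the other side and the application resolves them by hand.
    @PersistenceCapable
    public class Place {
        @PrimaryKey
        @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
        private Key key;

        @Persistent
        private List<Key> visitorKeys = new ArrayList<Key>(); // "references" to user entities

        public Key getKey() { return key; }

        public void addVisitor(Key userKey) {
            visitorKeys.add(userKey); // no cascading, no referential integrity
        }

        public List<Key> getVisitorKeys() { return visitorKeys; }
    }

Turning those stored keys back into objects then requires explicit datastore lookups in application code, which is exactly the bookkeeping an O/R mapper is supposed to hide.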

After a while, the GAE O/R mapper refused to enhance the POJO entities. As its O/R mapper GAE uses DataNucleus. Why they are not using the widely accepted and proven Hibernate is beyond my imagination. DataNucleus is nowhere near as proven in practice as Hibernate, and while for some coders this might be English humor, for me the support answers from the DataNucleus guys were pure arrogance. After wild configuration orgies we finally gave up on starting the application from Eclipse and started using an ant script, which worked quite nicely as long as the application only had to be started on localhost.

After our application became more complex and we deployed it more often to the GAE server, so that more people could take a look at the progress, we realized that the start-up times of the application were horrible. It took about 15 seconds until the start page appeared in the browser. The last time a webpage took 15 seconds to appear in my browser was in the 90s, when I was surfing for overclocking tips on weird Japanese websites with my 56k modem. After searching in some forums I realized that I was not the only one affected by this issue. It seems that if the application is not in use, it disappears from the main memory of the GAE server, and it takes a long time until it is reloaded (maybe even recompiled) and ready to serve. On the forum people were discussing how often a client should send a request to the GAE server so that their own application does not disappear from memory! It seems to be turning into a cat-and-mouse game between Google and the GAE users, because the intervals between the pings are getting shorter and shorter.
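The workaround people were discussing is as crude as it sounds: expose a trivial servlet and let some external machine request it every few minutes so the instance stays warm. A minimal sketch (the servlet name and URL mapping are made up, and whether Google tolerates this in the long run is another question):

    import java.io.IOException;

    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Mapped e.g. to /ping in web.xml; an external cron job or monitoring
    // service requests it periodically so the GAE instance stays loaded.
    public class PingServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            resp.setContentType("text/plain");
            resp.getWriter().print("alive");
        }
    }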

After testing on the GAE server we started to realize that the behavior of the application on the server differed from the testing environment. The application crashed in situations where it ran flawlessly on localhost. The problem was that the logfile stayed clean, so there was no indication of what might be wrong. Complete crashes, where not a single user from any location could access the application, were not uncommon either. It took me quite a long time to find out how to restart the application after a total crash ("appname".appspot.com?restartApplication).

Another problem arose when our web designer started his work. It is not possible to upload single files; it must be the complete war directory. So even if only a picture has changed, the whole application must be redeployed. And why should an external web designer have access to the whole application code?

Database administration is very basic. It is possible neither to dump the whole database nor to delete it completely and start with a new one. A database viewer is available, so at least it was possible to see which references had once again been either not created or created wrongly.

But the biggest issue hit us two weeks ago. For some reason it was not possible to deploy the application to the GAE server any more! Without any reasonable explanation the upload stopped at 99% and rolled back to the old version. If such a thing happens in production, it is absolutely unacceptable. Two weeks later, deployment is still not possible.

This was the last straw, so say hello to MySQL, hello to Hibernate, hello to Tomcat, and hello to the black hats.

The idea of cloud computing is great: just upload your code and the provider takes care of the rest. But practice shows a completely different picture. Cloud computing must be taken with a very big grain of salt and is currently only suitable for testing; at least GAE still has a long way to go until it can provide a viable alternative to your own server.

Call to Apple: Please Open Mac OS X (or Others Will)

This article is about new aspects of the never-ending story of how Apple prevents MacOS X from running on hardware other than Apple's. The keyword is virtualization, which allows running an unmodified version of Mac OS X as a virtualized instance.
Two news items regarding virtualization of Mac OS X hit the street recently. The first is that Apple's EULA now allows running several Mac OS X Server instances on Apple hardware, and the well-known company Parallels (which was bought by SWsoft) announced Parallels Server, the company's hypervisor-powered server virtualization solution, which does exactly that. Parallels Server can run on any x86 server and can virtualize Linux and Windows there, but virtualization of Mac OS X is only allowed on Apple hardware. The second news item was the announcement by the German hacker Alexander Graf at the CCC congress that he has modified the popular open-source emulation software QEMU so it can run an unmodified Mac OS X instance on Linux; and since QEMU is portable, it should work on other platforms (e.g. Windows) as well. In his project description Alexander writes that Apple's EULA does not say that Mac OS X may not be installed in a virtualized environment; that means if somebody installs Linux on Apple hardware and runs a single instance of Mac OS X in QEMU, it is perfectly legal. But in the wiki Alexander writes that with his modifications Mac OS X can run on other hardware as well.

Virtualization or emulation as such is nothing new for Apple. Projects like Mac-on-Linux allow running a virtualized Mac OS X on non-Apple PowerPC hardware like the AmigaOne or Genesi's PowerPC-based machines. But this did not really hurt Apple, because such hardware was too exotic to cause much headache in Cupertino.

Another kind of virtualization was even highly welcome at Apple's headquarters: virtualization of other operating systems inside Mac OS X. Several emulators for Windows, which emulated x86 on PowerPC, were available, but with the transition to x86, companies like Parallels and VMware created virtualized environments that make the use of Windows applications as transparent as possible for the Mac OS X user.

The other way around, virtualization of Mac OS X itself on any other x86 computer, is a completely different thing. Apple does everything to prevent such scenarios, technically and legally. But is it still justified to prevent installing a (licensed) copy of an OS inside a virtualized environment? Mac OS X is the only x86 OS which is coupled to the hardware of its manufacturer. All open and closed source OSes of other manufacturers can be installed on any compatible hardware; only Apple is protecting its Mac OS X. So my call to Apple is: open your OS for other computers, or other people will do it, and with the availability of the solutions described above it has never been simpler. Learn your lesson from the iPhone: you can forbid hacking of the iPhone as much as you want, but if enough people are interested and the solution is very simple, lots of people will do it regardless of what is written in EULAs.

Here are some arguments why Apple should open Mac OS X and would only win from this decision:

Apple is afraid of all the hardware it would have to support, which could disturb the Mac OS X experience for users whose hardware combination might not be supported. Well, OpenSolaris showed how to increase hardware support in quite a short time. Previous versions of Solaris x86 supported only a small subset of hardware, but this changed quite fast. Take the drivers from FreeBSD; it should not be too hard to adapt them for Mac OS X. Distribute a compatibility check program which tells the user before installation whether his hardware is compatible or not. Every sold package of Mac OS X should contain a CD with such a program, which can be tried out without opening the package itself, so that if the results are negative, the package can be returned to the dealer unopened. Vista also does not support every piece of hardware on earth and is still successful.

Apple is afraid that less Apple hardware will be sold. Well, from what I can tell from my friends who bought a Mac recently, they did it not because of Mac OS X (they could have done that five years ago as well), but because the current hardware offerings from Apple look very slick, they are competitive quality- and performance-wise, the prices are fair, and the status of a Mac owner in Germany has changed from freaked-out designer or experienced computer geek to ordinary computer user who loves his iPod and wants a well-designed computer on his desk. For them it is also good to know that they can use Windows as a fallback solution. So Mac OS X is not a must for them; they would buy Apple hardware anyway.

Apple is afraid that Microsoft will immediately stop shipping Office for Mac. This is a valid point, but I am not sure Microsoft could afford that, because it would immediately strengthen the monopoly debate: Microsoft would be preventing competition by discontinuing an important software product only because a competitor is becoming dangerous.

The spread of Mac OS X would only increase software sales for Apple. Currently Apple has very good software offerings for professionals and advanced amateurs in media creation. Adobe shows how to earn very good money with comparable products without selling any hardware at all.

But is there any interest in Mac OS X outside Apple hardware? I think a lot of people would like to try out Mac OS X on their old hardware, and if they like it, the probability that their next computer will be a Mac is quite high, isn't it?

Virtualizing Mac OS X on developers' computers helps them to develop multi-platform software, so more of them can consider implementing their software for Mac OS X as well, without investing in a Mac first.
I am sure there are a lot of other arguments as well, so my first wish for Jobs' keynote at the coming MacWorld is: Mr. Jobs, please open MacOS X! [ed. note: this article was written before the keynote]

Is the Desktop Becoming Legacy?

A few years ago I wrote several articles on OSNews (1, 2) about workstations. After three years I had to stop, because there were no workstations left on the market; they became legacy and were not sold any more. Now, with the rise of mobile devices with touchscreens and wireless network connectivity virtually everywhere, the question becomes valid: what will happen to desktop computers? Are they still needed, or will they follow the workstations on their way to the computer museums?
First we have to understand why the iPhone, iPad, and so on are better suited for the average user than desktop systems. Why has Microsoft's strategy of promoting the PC as the "digital hub" mostly failed? For years, especially Microsoft (but also Linux and Mac evangelists) told us to use a PC or a Mac for sharing media and devices, for creating content and consuming media, for storing self-made videos and photos. But this strategy has two weak points.

First, to be able to accomplish most of these tasks a PC must be switched on 24/7. Only very few heavy users really use their PC as a server. It is expensive, a PC can be noisy, the hardware is not really optimized to run around the clock, and because of the many security updates a lot of administration is still required. So the idea of having a home server is not really catchy.

Second, even with a home server it is quite hard to share media outside the home. Sure, there are tricks like DynDNS or port forwarding, but only very few users understand these technologies and use them. Even worse, by opening the server towards the Internet it becomes vulnerable, which means additional administration effort.

Sharing media is becoming more and more common. People put their photos on Flickr and their videos on YouTube and send the URL to their friends. Try this with your media hub at home. It is possible, but how much more effort and knowledge is needed? As for copyrighted content, it is no longer necessary to have it stored on a home server. Music and videos can be stored on portable devices as well, since they have enough storage.

So the remaining points are content creation, content consumption and the sharing of devices. For device sharing, specialized boxes like the FritzBox in Germany are much better suited than a full-fledged PC. A FritzBox, which you get as part of your DSL contract, offers a few LAN ports, a wireless access point and a USB port for a printer or a USB hard disk, where you can put all the media, which is then accessible from all devices in the network. The web-based setup is very simple and still powerful; virtually zero administration is required and the power consumption is very low.

For content consumption there are devices like the iPad with no boot time and an excellent screen, which are portable, consume very little power, and whose input possibilities are sufficient to enter a URL, write a short comment or chat. It is possible to connect these devices to your TV or hi-fi set, so no PC is required here.

So now comes content creation. This is the area where PCs are here to stay. Writing long articles, media production, coding: this is where PCs are strong and will remain strong for a while. But consider the fact that in social networks only 10% of the users create content; the rest consume. Also, these 10% are not creating all the time; they are heavy consumers too. They will therefore probably have two devices, one for consumption and one for creation, which they have to switch on, wait through the boot time, start the right program and then start creating. So content creators will still buy a desktop, but it will be a tool for clearly defined tasks; for everything else there will be a consumer device.

Legacy does not mean that a device will disappear. Desktops and their operating systems are and will remain in businesses; they form the backbone of a lot of companies, and Microsoft will remain one of the most valued companies and earn a lot of money. There will still be a lot of Windows versions to follow, but the excitement about them will not be much stronger than the excitement about a new version of AIX.

The Windows vs. MacOSX vs. KDE vs. GNOME vs. BeOS wars are a thing of the past. The future discussions and the most exciting developments will happen on mobile devices. So watch out for iPhone OS vs. ChromeOS vs. MeeGo (and probably Microsoft, if they get their act together with Windows Phone 7 and the Slate). For Intel and AMD this development means that they should concentrate on server processors and very low power processors for consumer devices, since this is where most of the demand will be in the future.

What's a Multi-OS Platform and How to Deal with Such a Beast?

Now that MacOSX and Linux are starting to become viable alternatives to Windows on the desktop, more and more applications are developed to be cross-platform, so all potential users can run them on their platform of choice. In the following article I will discuss different ways of creating a cross-platform application and their (dis)advantages for the user.
The author's native language is not English, so please forgive any grammar and/or spelling mistakes

First I will explain my definition of a multi-OS platform. A platform is an environment which an application uses to communicate and interact with the system and with the user. GUI, IO, and multimedia libraries are part of a platform. The kernel API is part of a platform. Applications running on the same platform mostly have the same look and feel, share the same global settings, which the user can adjust in common preferences, and are able to communicate with each other. Quite often the applications offer interfaces so the user can write scripts which connect these interfaces into a new application (think AppleScript, or ARexx on the Amiga). UI techniques like copy/paste and drag/drop work across applications, not only for unformatted text, but also for complex objects. From the programmer's view a platform offers a consistent environment with one main programming language (usually the language which was used for the creation of the shared libraries) and several supported languages with bindings.

About 10 years ago it was fair enough to equate an operating system with a platform. MacOS, Windows, AmigaOS, BeOS, Atari TOS: all these operating systems offered their own unique platforms, their own environments. UNIX was a bit different, though; while the combination of a POSIX-compliant kernel, X11, and Motif was a very common platform, other variants were also possible. Nowadays an operating system can host several platforms: MacOSX has Classic, Carbon, and Cocoa; Linux has KDE, GNOME, and GNUstep (among lots of others); and Windows has .NET and MFC. They can all be considered native, but a lot of effort has been spent to enable interoperability between the platforms running on the same operating system. One example is the standardization effort at Freedesktop.org to enable interoperability between KDE and GNOME.

Now let's talk about Multi-OS. Usually if an application is supposed to run on several operating systems you call it multi-platform. This term fits the above equation 'one OS = one platform'. However, since we want to talk about platforms running on multiple OSes, 'multi-platform platform' sounds a bit silly; instead I prefer the term 'Multi-OS'.

Let's talk about Multi-OS applications first. A Multi-OS application still has to interact with the system and the user, but the difference is that it must be adaptable to several environments (native platforms), so there must be a kind of translator between the application calls and the current environment. In the case of Mozilla and Firefox it is the XUL toolkit; in the case of OpenOffice.org it is VCL. These toolkits are mini-platforms. The question is how well they coexist with the native platforms. The answer is: not very well. When the application is launched, the whole mini-platform must be launched as well, which takes time and resources. The communication with the rest of the system can only be good if a lot of integration work has been done. The situation becomes even worse if several mini-platforms have to communicate with each other; this happens only through the native platform, so only the least common denominator is understood by all three participants. Since a mini-platform is highly optimized for one particular application, it is quite hard to take it as a foundation for other applications; XUL is used for Sunbird and Thunderbird, and VCL is the foundation for some OpenOffice.org forks.

Other toolkits are more flexible, so a variety of Multi-OS applications can be written using them. Examples are wxWidgets and Qt (and to some extent GTK, which is available on different OSes but is really optimized only for UNIX/X11). Applications written with these toolkits share the look and feel of the native platform (more or less), but they still do not communicate with each other or with the native platform, so their integration is suboptimal.

A different approach is to use a virtual machine. A VM is also a kind of translator, but the difference is that it tries to provide as many of its own libraries as possible, while not relying on the libraries provided by the native platform. This way a large variety of different applications can be created, which all run in the VM. The two main examples are the Java platform and Mono.

The problem with this approach is that these applications feel even more alien on a native platform, despite the fact that with the introduction of SWT, and with newer versions of Swing, the look and feel of Java programs resembles that of the native platform. However, the weak point is still that communication between different programs inside and outside of the VM is not sufficient, and there is no easy way to use parts of a Java program for interaction with a different application. There is no language like Visual Basic or AppleScript which glues parts of different Java programs together (maybe Groovy can become such a language).

Another point is that there are simply not enough Java programs where such combinations would make sense. A platform is only viable if it has a large ecosystem, which means that a lot of applications have been developed for this particular platform; only then are all the worries about drag/drop, copy/paste, and connecting parts of different applications justified.

So now we finally come to Multi-OS platforms. A Multi-OS platform might have the same weaknesses as a VM, but the difference is that so much software is available for the platform that imperfect interaction with the native platform hardly matters, because most of the required tasks can be done using applications written for this platform.

Let's take a look at one such platform. A few days ago IBM announced Lotus Expeditor, based on the Eclipse Rich Client Platform. Lotus Expeditor runs on Linux, MacOSX and Windows. From the technical point of view it uses Java with SWT, but the ecosystem includes the groupware Lotus Notes, the instant messaging software Lotus Sametime and the office software Productivity Tools. All the applications inside this platform are nicely integrated, interconnected and extendable with third-party plug-ins. Interaction with the native platform is not important in this case, as this is a platform for office workers; most of the tasks they require for their business are already there.


Before talking about the platform which might become the most important Multi-OS platform in the future, let's take a look at two other, failed attempts at creating a Multi-OS platform. The first was OPENSTEP, a platform which was available for Solaris and Windows and, in its incarnation as NeXTSTEP, as its own OS on several hardware platforms. The reasons for the failure are numerous: OPENSTEP was too alien on a platform like Windows, and it had its completely own look and feel, meaning it was too hard for a user of a native platform to get used to it. Not many applications existed for the platform, and the company NeXT was too small to push it.

The second attempt was the Java Desktop System. After its introduction on Linux, Sun ported it to Solaris and was thinking out loud about porting its components to Windows to achieve a similar working experience across several platforms, but this remained a wish, and since Sun changes its business direction like socks, the idea was dropped. However, maybe such discussions will appear again when the Looking Glass project becomes more mature and turns into a toolkit for a complete platform.

Now finally I want to introduce the most promising Multi-OS platform: KDE 4.0. What is so exciting about it? KDE 4.0 is based on Qt, which, as we've already seen, is well suited for Multi-OS development. However, KDE 4.0 is much more. KDE consists of thousands of programs, many of them of quite high quality, so if a user installs KDE 4.0 on his OS he has potential access to most of them. Technologies like Phonon help to develop Multi-OS applications, because all access to multimedia codecs, which are different on every native platform, happens through the Phonon layer, so the application does not care whether it is QuickTime on the Mac, GStreamer on Linux, or DirectShow on Windows. Applications communicate with each other via D-Bus. The applications themselves are so-called KParts, which can be combined into new ones. Copy/paste and drag/drop work across the platform. The look and feel can be configured to resemble the native platform, so a user can quickly get into it.

So what are the advantages and disadvantages of having a complete Multi-OS platform beside the native platform?

Advantages

1. Compatibility across all OSes. Using the same office suite, the same instant messenger, the same groupware regardless of the operating system allows sharing all data without compatibility issues.

2. Painless switching between different OSes. Once the user understands the basic look and feel of the platform, he can use the same programs, which might look a bit different, but are called the same and behave the same (all the Windows users I have met were not able to surf the Internet on a Mac, because they had no idea that Safari and Camino were web browsers). Again, compatibility is very important when switching platforms, because all user data can be transferred without any potential loss through conversion.

3. Since KDE 4.0 is open source and provides tons of applications, an average computer user does not have to buy expensive computer software for basic tasks; hence he can save money for software which really demands special functionality of the OS, which a cross-platform application cannot provide.

4. More programmers, using different OSes, can write application software which is available to all desktop users.

5. A Multi-OS platform can be ported to less widespread systems like Haiku, AmigaOS, MorphOS, and SkyOS, giving the users of these platforms lots of software for their daily work.

Disadvantages

1. While Qt mimics the look and feel of a native platform quite well, it can still be recognized as alien, which is highly unwelcome especially by Mac users, who react very sensitively to a look and feel different from Cocoa or Carbon. This is especially true in times when Apple changes the look and feel of the platform (usually with a new MacOSX release) and Qt still mimics the old version. But this consistency argument only holds as long as only a few applications have a different appearance; what if the majority of applications look different? Is that the common appearance then, and do the native applications start to look like the odd ones out?

2. A Multi-OS platform can offer only the least common denominator, so the applications cannot use the special advantages of a particular OS. This is true, but we're not talking about highly specialized software packages which really demand some special functionality of the OS, but about applications like an office suite. I'm not saying that everybody should use KOffice, but several different office suites can be created on top of a Multi-OS platform, to serve every taste. A demanding user can use a suite which is as powerful as Microsoft Office, while a less demanding user can use a suite which is as design-oriented as iWork. There is no reason why such applications cannot be cross-platform; they do not require any special capabilities from the OS. The last office suite which was really optimized for an OS was the Gobe Productive suite, and even that one was ported to Windows.

3. But what about integration with the native platform, which was so important in the previous cases? Well, close integration with the native platform is not required, because we have a whole ecosystem with plenty of applications, so communication outside the platform becomes a nice-to-have and not an absolute necessity.

Conclusion

We have seen that it makes a lot of sense to have a Multi-OS platform besides the native platform. Sure, there are disadvantages, but in my opinion most people can live with the fact that some applications behave slightly differently from the rest. Imagine what will happen if KDE 4.0 becomes a winner like Firefox: a lot of people will use plenty of open source software, first on Windows, but then they will see that they can easily switch to any other platform. This will be the moment when Windows finally faces serious competition on the desktop.

About the author:
Currently I'm located in Bracknell, UK and I work for a major EDA company. If you want to find out more about me and my interests, please read my blog at kloty.blogspot.com (only in German), where you will find my profile and some other interests beyond operating systems.

Opinion: Why Solaris and MacOS X should unite

There are dozens of articles like this one on the net. Over and over people have suggested solutions like this one for different reasons, and although I know that such a thing probably won't happen any time soon, from my point of view now is the best moment ever in the history of both operating systems to merge into one powerful alliance. And hell has already frozen over, hasn't it?

First I will give a short description of both OSes, so we can see their strong and weak sides and judge whether a combination would eliminate the shortcomings and make the good points even better.

MacOS X

After running MacOS X for several years exclusively on PowerPC processors from IBM and Freescale (formerly part of Motorola), Apple decided to switch to the x86 architecture, a transition which is now complete. The PowerPC version of the OS is still developed, although no one outside Apple knows for how long this version will remain feature-complete with the x86 version. While the kernel of MacOS X is freely available, the complete OS costs about $150 per seat.

The kernel of MacOS X is called Darwin; it is open source, but it is not very popular among third-party developers outside Apple. The first reason is that the development itself is closed: only the finished kernel is released by Apple, sometimes with months of delay. So a developer can study the kernel and write drivers for it, but has only little influence on the development of the kernel itself. The second reason is that the architecture of the kernel is quite unusual. It is an outdated Mach 3 microkernel with a FreeBSD "personality". Even after understanding the concept, hardly anyone can explain the reason for such an architecture (probably the heritage of NextStep) or its benefits. Some people talk about a Frankenstein OS, which consists of parts somehow glued together and brought to life. The problem with such an approach is that concepts from other OSes cannot easily be applied to Darwin. That might affect the security, reliability and scalability of the OS, because there is no experience from other OSes to draw on, so all these topics require extra effort and research. Virtualization is not even on the agenda of the kernel developers. The Darwin kernel has also received a lot of negative press, because it lost several benchmarks against Linux and Solaris. Even if the benchmarks were not always correct, they still contribute to the negative image of Darwin, which decreases the number of volunteer programmers who want to spend their time on it.

The UI and user-land programming, on the other hand, are among the best in class. MacOS X was the first platform with 3D GUI acceleration; it is very consistent and simple to use, but powerful. A lot of technologies like QuickTime, ColorSync, the PDF-based compositing system, the desktop search system Spotlight, Core Audio and Core Image are built in and are used by the OS itself and by third-party programs. MacOS X can be used by non-tech-savvy people without any knowledge of the command line, while for command-line-aware people the whole power of UNIX is available. MacOS X also includes an X Window server, so even UNIX programs which require graphical output can be ported to MacOS X. Programming for MacOS X is possible either with libraries and languages known from other UNIX platforms, like Tk, Qt and Motif for libraries and C, C++ and Perl for languages, or with the MacOS X-native Cocoa or Carbon environments.

The main usage of MacOS X is the creation and consumption of multimedia content: video and image editing, audio processing, desktop publishing. The main focus is the desktop user, who might not be aware of the command line and should not have to be. As a server, MacOS X is used mostly in pure MacOS X environments. Most of the software has already been ported from PowerPC to x86 or is still in development; for PowerPC-only software there is an emulator available, which translates PowerPC code into x86 instructions.

MacOS X is supposed to run on Apple computers only. Apple offers two lines of notebooks, three lines of desktop computers and one server line. Although it is technically possible to run MacOS X on x86-compatible computers from other manufacturers, it is legally forbidden. Drivers exist only for devices used in Apple computers, so running the OS on a machine with different devices might cause problems. MacOS X has already been tried on 8 processor cores (two 4-core Intel processors), but there is little experience of how well it scales at this core count or above (remember, Linux 2.2 also ran on 16 processors, but it did not scale well). The maximum supported amount of memory is 16 GB.

Solaris

With the open-sourcing of Solaris 10, Sun has awakened new interest in its operating system. Solaris 10 runs on SPARC and x86 processors, and both versions are feature-complete. There are attempts to port it to PowerPC processors as well. Solaris 10 is available for free; however, you have to pay for support.

The kernel of Solaris is BSD UNIX with some heritage from System V (according to www.levenez.com/unix/). The kernel is very scalable (the largest server offered by Sun contains 72 CPUs), secure (merged with parts of Trusted Solaris) and reliable (Solaris Fault and Service Managers, self-healing technologies). Technologies like Containers for virtualization, DTrace for debugging and performance optimization, and ZFS as a high-end file system are still not available on other systems (or have been ported from Solaris). Performance-wise, Solaris gets very good marks in several benchmarks. It receives a lot of attention from the open source community. Sun releases previews of the next Solaris version very often and works closely with third-party developers.

On the user-land side, Solaris 10 is delivered either with the completely outdated CDE or with Java Desktop 3, which is based on GNOME. The following software is included in Java Desktop 3:

• GNOME 2.6
• Evolution 1.4.6
• Mozilla 1.7 browser
• OpenOffice.org 1.1 (basis for StarOffice 7 suite)

One can see that this software is completely outdated (compared with, e.g., SUSE Linux Enterprise Desktop). There is no 3D acceleration included and no desktop search, and nowhere on the net could I find a shipping date for Java Desktop 4. Moreover, Sun will have problems sticking with further versions of GNOME, because it seems that GNOME's high-level language will be Mono's C#, which is a big rival for Sun's Java. User-land programming is done in Java, C and C++ with the GTK+, Qt or Motif libraries.

Solaris is heavily used in technical and scientific areas. It is still the OS of choice for tasks such as EDA, CAD, CAM and CAE. It is also used as a server OS for large databases, as a file and computation server, or for websites in mixed environments. The user should be very skilled with the command line and have a good understanding of UNIX. A lot of software that is available for SPARC only has still not been ported to x86, and there is no emulator available that could translate SPARC instructions to x86 (the only emulator, demonstrated by Intel, translates SPARC code into Itanium code, and Sun tries to ignore that one). Currently there are no plans to abandon the SPARC processor, but the roadmap for a workstation SPARC processor is not quite clear.

Solaris runs on a wide variety of SPARC- and x86-based hardware, especially servers from big companies. But it shares the same problems as Linux on notebooks, where built-in non-standard hardware might not be supported and no openly available documentation exists, so there may be no open source drivers. Sun also produces its own workstations and servers for Solaris.

Elimination of Shortcomings

Now it becomes clear that the weak side of MacOS X is its kernel. Even if it is technologically interesting, it has a negative image in the minds of developers, so there are only a few people outside Apple doing research on and developing for Darwin. Lost benchmarks, a complex architecture (and unknown security holes as a result), unproven scalability, the lack of virtualization and self-healing, and a noncompetitive file system also do not make it the system of choice for administrators of large servers. Additionally, though MacOS X is a UNIX system, there is hardly any commercial software available that comes from the "old" UNIX platforms (e.g. Solaris).

The weak side of Solaris is its user-land. While it is in good shape on the server, Solaris is losing ground on the workstation market, especially to Linux. Many workstation software packages that used to run on SPARC Solaris are not being ported to Solaris x86 but to Linux instead. Linux supports more hardware, it receives more frequent updates, and ISVs (Independent Software Vendors) see no point in supporting two very similar (from the workstation point of view) operating systems on the same hardware. The unclear situation on the SPARC side (for workstations) only increases the problem.

Strengthening the Good Sides

So what kind of advantages would the user have if both systems were merged?

- an industry-proven, trusted, fast, reliable, secure, scalable kernel as the foundation
- a server OS with the great manageability known from MacOS X Server
- unmatched technologies like DTrace, ZFS, Containers, Spotlight, Time Machine, Quicktime and so on in one package
- an attractive UI with modern multimedia, office and communication software for former Solaris users
- programs from both worlds on one platform
- an increased number of users, which might attract developers to port their programs to this platform
- a new buzz OS for geeks
- shared development resources
- ...

Is it just a dream?

Both companies, Apple and Sun, must change their policies to make such a dream happen. Apple has to accept that it would no longer have full control over the development of the kernel; Sun has to accept that after open-sourcing the whole OS, parts of it would be closed again (the Aqua part, which Apple will never open-source). From the licensing point of view such a merger is possible; I hope Sun will not do something stupid and put Solaris under the GPL, as they announced not so long ago. Both OSes need attention from users and developers in order not to be crushed by Windows on one side and Linux on the other. Sun should also declare SPARC workstations deprecated, so that not the complete OS has to be ported to SPARC, but only the server-relevant part. An emulator should translate SPARC code to x86. Aqua should run only on selected hardware, but the kernel could run on a variety of x86 platforms. The Solaris port to PowerPC should be completed, but its quality doesn't have to match the x86 package, because the PowerPC platform is not the main business for Apple; they should do it only for compatibility reasons. The interesting thing is that some Solaris code has already been ported to Darwin, namely DTrace, and there are a lot of rumors that ZFS will also find its way into the new Leopard. So why not make a big step and take the whole portion instead of small crumbs?

Why Do Workstations No Longer Matter?

This article tries to explain why workstations are no longer an appropriate tool for the present working environment, what the alternatives are, and what consequences this has for the development of OSes.
First I would like to explain why I feel competent enough to write this article. I'm a hardware engineer and I work as an EDA (Electronic Design Automation) consultant, which means I often change projects and customers and I use UNIX-based environments to get my job done. All the developments I describe in the following affect me, so they are probably similar for other engineers, not necessarily from the EDA industry, or will affect them in the near future.

The tool is the same, but the task is changing

Let's step back for a moment and remember how it was just a few years ago. Every engineer had his own UNIX workstation in his office. All the project data was stored on a file server, so in order to change the data, the first thing to do was to fetch it over the LAN. If the data came directly from a customer, it was stored on tape and had to be loaded onto the local disk; after that the engineer could start working on it. The workstation was powerful enough to handle the amount of data. If data had to be shared, the engineer stored it back on the file server so his colleagues could access it. Communication was handled over email or phone. All electronic correspondence inside the company used the same data formats. The team working on the data was present in the same office.

There are at least two developments which changed this peaceful picture: globalization and flexibility.

Globalization

Nowadays several engineering teams from all over the world must have access to the project data. That means the file server can be located anywhere and must be accessible over a comparatively slow WAN connection. Since several people might work simultaneously on the same data, versioning systems must be used. The amounts of data are increasing rapidly; it takes too much time to fetch them, store them on the local disk and write them back after processing. An additional problem is that providing every single engineer with the computing power necessary to process this data is just too expensive, so the resources must be shared. These factors lead to the conclusion that it is easier to leave the data on the server, or to copy it over a fast connection from the file server to a grid of computing servers, which costs less than the corresponding number of workstations and can be used more efficiently. So the only data connection required is a remote display, which lets the engineer start jobs and see the results. X11 has network transparency built in, but the protocol is not very efficient over WAN connections; a better-optimized solution is a Citrix ICA connection, and a free alternative is, e.g., VNC. Another important point is that Citrix clients are available for Windows, MacOSX, Solaris and Linux, so the OS on the engineer's desktop is completely independent of the OS being used on the server. Additionally it is possible to share a connection, which means one can see what another Citrix user is doing; that is very useful for solving problems or providing online training. One solution is to provide an inexpensive terminal with a slim Linux distribution, which can run a Citrix client and possibly the RDP protocol to connect to a Windows server, so the user can use software from both worlds. All the production data is stored on the UNIX server, all other data on Windows.
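To make the idea of "the data stays on the server, only the display travels" concrete, here is a minimal sketch of how such a remote session might be started from the engineer's machine using the free VNC route. The host name, account and port are purely illustrative; the sketch assumes a VNC server is already running on display :1 of the compute server and that an OpenSSH client and a vncviewer program are installed locally.

```python
# Sketch: tunnel a VNC remote display over SSH so that only screen updates
# cross the WAN while the project data stays on the compute server.
# Assumptions: a VNC server already listens on display :1 (port 5901) of the
# hypothetical host "compute01.example.com", and an OpenSSH client plus a
# "vncviewer" program are installed on the engineer's machine.
import subprocess
import time

HOST = "compute01.example.com"   # hypothetical compute server
USER = "engineer"                # hypothetical account name
REMOTE_VNC_PORT = 5901           # VNC display :1 on the server
LOCAL_PORT = 5901                # local end of the tunnel

# Forward the remote VNC port to the local machine over an encrypted tunnel.
tunnel = subprocess.Popen([
    "ssh", "-N",                                        # no remote command, tunnel only
    "-L", f"{LOCAL_PORT}:localhost:{REMOTE_VNC_PORT}",
    f"{USER}@{HOST}",
])

time.sleep(2)  # crude wait for the tunnel; a real script would poll the port

try:
    # Point the viewer at the local end of the tunnel: jobs are started inside
    # the remote session, and only the display output travels over the WAN.
    subprocess.run(["vncviewer", "localhost:1"], check=True)  # display :1 == port 5901
finally:
    tunnel.terminate()
```

A Citrix ICA or RDP session would replace the last step with the corresponding client, but the principle stays the same: the notebook or terminal only renders the screen.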

Flexibility

Flexibility means two things for the engineer. Being flexible means not only working on the technical side of the project, but also contributing more than just processing data. Today the engineer must write documentation, meet international customers and hold presentations about the project status, provide training, fill out various web-based forms like timecards or expense reports, attend webinars, telephone and video conferences, and communicate with other project teams on various channels. He receives several dozen mails a day from colleagues, mailing lists and customers, works on different projects at the same time, and must constantly learn new things. He is responsible not only for the project itself, but also for pre- and post-sales support. The other aspect of flexibility is that the engineer is no longer bound to his office. A lot of companies offer the possibility to work from home, either because they want to be seen as family-friendly or simply to avoid expensive office space. Some companies do not have enough space for all their employees, so people come to the office only twice a week. During critical project stages the engineer must be able to look at the data without making the long way to the office. During customer visits he must have a lot of data available in order to be prepared for every question the customer may ask. To fulfill all these demands the engineer needs a notebook with an OS which helps him organize all the data that is project-related but not production data.

The combination of these two trends shows that the ideal platform for today's engineer is a notebook with a modern desktop OS, a VPN and a Citrix or VNC client installed. He can plug it into a broadband connection and access the server to work on project data, or use the applications of the notebook OS for all the communication and office-related work.

What kind of consequences does this have for the development and usage of operating systems?
We can draw a very sharp line between the server OS and the desktop OS; both have completely different demands. From the user's point of view the server OS is visible in his Citrix client window as just another application; in fact it is comparable with a WebOS running in a browser window. The server OS must be stable, reliable and scalable. It must run on big servers, handle a lot of load and users, support virtualization, and be fault-tolerant and self-healing. The window manager must be simple, but still effective enough to help handle several open windows and terminals in a session, with a resolution and color depth as small as possible to minimize network traffic, but still large enough to display all relevant data.

There are already server-only OSes, like z/OS, VMS or OS/400, but our definition would also declare AIX, HP-UX, Solaris and all the BSDs to be server OSes (note: I don't mention Linux here, it is a special case). There are a lot of minimalistic window managers and desktop environments available (CDE, FVWM, WindowMaker) which comply with the requirements described above. This leads us to the question of which software is required for a server OS. Obviously, programs for working on the project data are needed; then a development environment with a tool chain, to be able to write programs for the OS; and a web browser with a PDF plugin and an IMAP-based email program for simple communication. What software is not required? No advanced communication software, no multimedia programs, no office software, no bloated desktop environments like KDE or GNOME, no 3D acceleration, no search software, nothing that might distract the user or the computer system from work. In the ideal case the home directory of the user would stay empty, and all the project data would be stored in project directories under version control, accessible to other project users.

The desktop or notebook OS, on the other hand, must have every feature which helps the engineer organize his work, communicate with every possible client and manage all his data. He should be able to read and write every document format and to access every website. International customers might send him documents in any possible format, and he cannot reject them with the excuse that his desktop OS does not have an application able to read them. Stability and reliability do not play a very important role: if the system crashes, it is still possible to reconnect to the Citrix session and continue working. Currently there are only three OSes which, to some degree, meet these demands: Windows, MacOSX and Linux.

With Windows, the chance of having all the communication programs such as VoIP, IM and video conferencing is higher than on other platforms. Windows-based applications like Microsoft Office are used by most customers and by the non-technical departments in the company. OpenOffice is available for Windows as well, in case somebody sends ODF data around. It is sad, but there are still a lot of web forms used in intranets which work only with Internet Explorer. For groupware functionality, Exchange and Outlook are still the most popular combination. Multimedia plugins and codecs for all relevant formats are available. Windows Vista has integrated search, which helps find documents and emails based on different criteria; for earlier Windows versions, applications like LookOut or Google Toolbar can be used. Windows supports Unicode and a lot of different character sets, which is also important, since customers from Eastern Europe or Asia might use a different character set on their web pages or in their emails.

MacOSX is also able to read and write most of the popular formats. It has its problems with multi-platform groupware functionality, and while VoIP and text messaging with different IMs are possible, video conferencing with a Windows user might become a bigger problem. Not every web page can be viewed with Safari, and if Microsoft removes VBA functionality from its next Office for Mac version, all the Excel tables with macros will cease to work. MacOSX has very advanced search capabilities and is well suited for writing documentation, especially because of the built-in PDF creator, so the documents can be viewed on all platforms, even on server OSes.

Linux can be used both as a server and as a desktop OS. While optimized distributions are in good shape on the server, the Linux desktop still has a long way to go to become as helpful for the engineer as Windows. All the arguments which apply to MacOSX apply to Linux even more. Even if the company is purely open source and uses only standardized document formats and communication paths, the customers might not, and there must always be a way to read everything a customer might send. Groupware solutions for Linux are available, but the Exchange support is flaky; MS Office macros might sometimes work with OpenOffice, but most of the time they do not; and I'm not aware of any cross-platform video conferencing software available for Linux. Codecs and plugins are often not available either. Recently Linux also got desktop search engines. But the advantage of Linux is that it is possible to use the notebook as a development machine and run the code on a Linux server without recompilation, and it is possible to demonstrate software and provide training on the notebook without having a connection to the server.

Conclusion

Due to the change in the working environment, workstations are no longer the right tool for the job. They are too expensive, can be used by only a single user, and the amount of data is too large to be downloaded and processed locally. The better solution is to leave the data on the server and send it over a fast network to a computing grid. As a control station, either a terminal or a notebook can be used; notebooks offer better flexibility, as they can be used for working from home or while traveling. The server OS should not be optimized for desktop usage but concentrate on reliability, stability and scalability, and only lightweight window managers should be used to save bandwidth and processing power. The OS on the notebook must help the engineer communicate, manage his data and organize his work. Windows is currently the most advanced OS for these tasks, but Linux's advantage is that it is flexible enough to be used both as a server OS and on the desktop.

A Take on the Workstation Market One Year After

There is a saying that one year in the IT industry equals eight years in traditional industries. One year ago I wrote an article about the workstation market; if I compare that article with the situation today, almost everything has changed in this pretty short period of time. So now it's time for an update.

Last year I refused to call computers with x86 processors workstations. 64-bit support for x86 processors was quite a new thing, the operating systems supporting this feature were not ready for production, and the software packages from ISVs did not support 64 bit on these processors. This has changed completely.

Today the definition of a workstation might be as follows: it is a mini-computer for a single user, with a processor that can also be used for servers, several gigabytes of memory, big storage, an OpenGL-capable graphics system and a UNIX or UNIX-like OS. I do not include Windows in this definition, because although there is a Windows XP Professional x64 Edition which supports x86 processors with 64-bit extensions, there aren't many compatible drivers, and the usage model of the OS is very different from all the other workstation OSes. This may change with the release of Windows Vista, because there will be a 64-bit version from the beginning, with a lot of drivers included, and through the inclusion of Windows Services for UNIX (called SUA), which should make Windows a bit more UNIX-like, so that users of traditional Unices can become familiar with this OS.

So let's see which platforms are still available today:

1. PowerPC 970 and POWER5+ with AIX5L and Linux from IBM

IBM defines its workstations as small servers which can also be used as workstations. The main area for these workstations is Mechanical Computer Aided Design (MCAD) and Electronic Design Automation (EDA) software. One of the most used software packages is CATIA, an engineering package for mechanical engineers. IBM explicitly advertises the ability to run Linux and AIX 5L on these workstations. There are several Linux distributions which support the POWER processor, and IBM actively supports porting software to this platform. However, there aren't many commercial software packages yet, which means that this kind of workstation might mostly be used as a development machine for the successful embedded PowerPC applications. IBM also does not recommend using Linux on these workstations for 3D graphics, which underlines their status as development machines.

Currently there are two workstation models available:

- IntelliStation POWER 185 Express
This workstation is equipped with 1-2 PowerPC 970 processors, the same processor as the G5 in Apple's PowerMac. Each processor can have 2 cores, clocked at 2.5 GHz. The maximum memory expansion is 8 GB. Each processor has 1 MB of second-level cache, but no third-level cache. The graphics subsystem is proprietary IBM; the machine has 4 PCI-X slots and one PCI slot. It is the cheapest workstation ever produced by IBM.

- IntelliStation POWER 285 Express
The processor in this workstation is the POWER5+, the most recent processor also used in IBM's p- and iSeries servers. One can choose between 1 or 2 processors, each with 2 cores clocked at 1.9 or 2.1 GHz. Each processor has 1.9 MB of L2 and 36 MB of third-level cache. The total amount of memory can be up to 32 GB. It has 6 PCI-X slots and a proprietary graphics system.

It is not very certain whether there are plans for a successor to the PowerPC 970. While Apple, the main customer for these processors, has abandoned it, IBM is using the PowerPC 970 in its blades and in the IntelliStation. There are other customers who plan to build computer systems based on this processor (the most famous one is Genesi, who are building PowerPC-based computers with Linux), but whether the demand is big enough to fund the development of the next generation is more than uncertain. On the other hand, the success of PowerPC as an embedded processor also creates a need for a development platform. Wild speculation is rising around the Cell processor, whose usage is being discussed for multimedia workstations. Workstations with 1-2 POWER processors will remain in IBM's product portfolio as long as AIX is alive.

2. Alpha with Tru64 UNIX/OpenVMS from HP

This platform is still offered to customers as a development solution for their servers with Alpha processors. The last order date for these systems at HP is October 27th; after that this platform is officially dead (support offerings will continue for some years, of course). No new workstation models have been offered since the last article.

3. PA-RISC with HP-UX from HP

There is only one workstation available with this combination of processor and OS: the c8000, which also hasn't changed since last year. PA-RISC systems are still sold, although the Itanium processor was meant as their replacement. But since Itanium is quite unpopular, HP's PA-RISC systems remain in the product line. HP is also one of the heaviest Linux promoters and certifies and offers x86 systems with the latest Linux distributions pre-installed.

4. MIPS with Irix from SGI

SGI's MIPS workstations have not been updated since last year, and it is quite certain that no follow-up models will appear. The software which has been running on these workstations can be used on the SGI Prism series without further modification or recompilation.

5. Itanium with Linux from SGI

One of the main surprises this year is the appearance of a family of workstations from SGI called Prism, which are powered by a combination of Itanium and Linux and are meant as a solution for the visualization of large data sets, as in medical research, industries with a demand for virtual reality, climate research and so on. The Prism can be used as a workstation, but can also be connected into a cluster with a single, system-wide shared memory, so several processors and graphics pipelines can be combined for visualizing even larger data sets. The workstation consists of 1-2 Itanium 2 processors and 1-2 graphics pipes (ATI FireGL cards); the main memory is expandable to 24 GB, and it includes 6 PCI/PCI-X slots.

This is one of the most technically interesting solutions currently available. Itanium 2 processors are very fast on optimized software, and their EPIC design fits exactly the tasks the Prism is used for, i.e. high-speed computation on large data sets. The possibility of combining several Prisms into one cluster with unified memory, where processor and graphics resources are simply recombined for larger tasks, makes it unique in the world of IT. Unfortunately SGI is currently in financial trouble, so it is hard to say whether the Prism will still be available in the near future.

6. SPARC with Solaris 10 from Sun

For the last few years the direction Sun was taking has been pretty hard to explain. Sun tried out several options for changing its business and expanding into other areas. By now the business model has become clearer, but surprises are still to be expected. However, Solaris workstations remain in the product line; besides x86-based computers they are the most widespread "traditional" workstations. Their main usage is for CAD, CAE, CAM, EDA and Java development. The current line-up consists of three models:

- Sun Ultra 25
This workstation includes one UltraSPARC IIIi processor at 1.34 GHz with 1 MB of second-level cache. Maximum memory is 8 GB. It offers 3 PCI Express and two PCI-X slots; one of the PCI Express slots is occupied by an XVR-2500 3D graphics accelerator. The operating system is Solaris 10.

- Sun Ultra 45
The Ultra 45 includes 1-2 UltraSPARC IIIi processors at 1.6 GHz. Maximum memory is 16 GB; the other specifications equal those of the Ultra 25.

- Sun Ultra 3 Mobile
This is the only mobile workstation besides the top line of x86 laptops, which are very hard to find with Linux pre-installed or even supported. "Mobile" is a bit misleading: it can be transported, but it's not meant for working on one's lap in the train. It has a 550-650 MHz UltraSPARC IIi or a 1.2 GHz UltraSPARC IIIi processor, up to 2 GB of memory and 80 GB of internal IDE disk storage. It has wireless LAN and a 15- or 17-inch display. The main usage of this workstation is presenting something to a customer in a predefined environment or developing server applications without having a server around. Although its performance is comparable with a desk-side workstation, it lacks a proper 3D graphics accelerator, which makes all CA* packages unusable on it.

I think SPARC-based workstations still have a long life ahead, as long as Sun is producing SPARC-based servers, which is still their bread-and-butter business. An interesting question is which SPARC processor they will use for the next workstation. Niagara is certainly not optimized for desktop usage; it is quite hard to keep all 8 cores busy with desktop applications, and the floating-point unit is too weak for computation-intensive tasks. This might change with Niagara II, but it is still very unlikely that Sun will produce Niagara workstations. On the other hand, the processors Sun will develop with Fujitsu are very expensive server processors, which would raise the price of workstations equipped with them. I still expect a SPARC IV+ based workstation in the near future.

7. Opteron with Solaris 10 from Sun
One of Sun's attempts to expand into other business areas was the introduction of Opteron-based workstations which support Windows, Linux and Solaris 10. The x86-based Solaris version had always been a kind of training OS where system administrators could acquire skills for working with the real big iron, which was SPARC-based. This changed completely with the introduction of Solaris 10. Now several servers and workstations with Opteron processors are offered by Sun, and they cover more and more areas which were exclusively reserved for SPARC. Here is a workstation overview:

- Sun Ultra 20
One dual-core Opteron processor at up to 2.4 GHz (single-core versions up to 2.8 GHz) with 1 MB of second-level cache. The maximum memory is 4 GB of RAM; there are 3 PCI Express and 4 PCI slots. There is a wide variety of graphics controllers to choose from, starting with the ATI Rage XL PCI controller with 8 MB of memory and ending with the NVIDIA Quadro FX 3450 PCI Express with 128 MB of graphics RAM and support for two displays. The pre-installed OS is Solaris 10; Red Hat Enterprise Linux, SUSE Linux Enterprise Server and Microsoft Windows are officially supported by Sun.

- Sun Ultra 40

Two dual- or single-core Opterons, up to 32 GB of RAM, 2 PCI Express x16 slots, 2 PCI Express x4 slots and 2 legacy PCI slots. This allows the use of NVIDIA SLI technology, where 2 graphics cards nearly double the graphics performance of the system.

With the opening of the Solaris source code, it became necessary for Solaris to run on common hardware, so interested developers could download and install it easily on their own machines without having to buy an expensive SPARC box. This move was very successful for Sun: Solaris is one of the well-recognized OSes and the download numbers are impressive. However, the ISVs of workstation-relevant software are very slow in adopting a new platform, even though Sun suggests that supporting Solaris 10 x86 requires only a recompilation of a SPARC Solaris based program. More than one year after the appearance of x86 Solaris 10 systems, there is still no commercial workstation software for it. Linux is considered a good enough solution, and Solaris for x86 does not provide advantages which would justify the support of an extra platform. The situation might be different for servers, but currently, although a lot of open source software has been ported to Solaris x86, it is not a workstation platform with a broad variety of software. This might change in the future, but I'm personally quite skeptical about it. What I don't understand is why Sun is not providing an emulation layer for SPARC software on Opteron, like SGI and Apple are doing, so that the transition becomes easier and users of performance-critical software can demand a native port. I think this would help a lot with the acceptance of Solaris/Opteron among workstation users.

8. PowerPC/Intel with MacOSX from Apple
Last year was very surprising for Mac users. Steve Jobs announced a change of processor architecture for all Apple computers. New Intel-based PowerMacs (some rumor sites call them Mac Pro) will probably be announced at the Apple developer conference in August. Though there is already a number of software packages available as universal binaries (a package consisting of two binaries, one for PowerPC and one for Intel x86), there is no professional software ported to the new architecture (except software developed by Apple itself). Adobe, the most important ISV for the Mac, hasn't ported its software yet, and it also remains unclear whether the new Creative Suite version, which includes the most important programs like Photoshop, will support both architectures or only the Intel one. There is also no ISV for technical software that is considering porting its applications to MacOSX. So it remains to be seen whether the Intel-based Mac platform will become a success among professional users. It is also interesting to see how Apple will support the remaining PowerPC users who are not able to migrate because the application they're using is not available as an x86 binary and emulation is too slow. Apple's current policy is quite a radical one: when an Intel model of a Mac becomes available, the equivalent PowerPC system gets discontinued. Maybe this is OK for home users, but certainly not for professionals, who cannot migrate overnight and still need PowerPC-based systems as replacements for broken ones. If Apple cannot provide them, I suppose this could destroy a lot of trust in Apple as a company friendly to professional users.
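As a small illustration of what "universal binary" means in practice, here is a sketch that asks Mac OS X's lipo tool which architectures an executable contains. It assumes the Xcode command-line tools (which provide lipo) are installed; the example path and the naive output parsing are purely illustrative, not a definitive recipe.

```python
# Sketch: check whether a Mac OS X executable is a universal binary,
# i.e. whether it contains both PowerPC and Intel code.
# Assumption: the "lipo" tool from the Xcode command-line tools is installed;
# the default path below is only an example.
import subprocess
import sys

def architectures(binary_path: str) -> set[str]:
    """Return the set of architectures reported by `lipo -info`."""
    out = subprocess.run(
        ["lipo", "-info", binary_path],
        capture_output=True, text=True, check=True,
    ).stdout
    # lipo prints either "Non-fat file: ... is architecture: ppc"
    # or "Architectures in the fat file: ... are: ppc i386"
    return set(out.rsplit(":", 1)[1].split())

if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else "/Applications/TextEdit.app/Contents/MacOS/TextEdit"
    archs = architectures(path)
    if {"ppc", "i386"} <= archs:
        print(f"{path} is a universal binary: {sorted(archs)}")
    else:
        print(f"{path} contains native code only for: {sorted(archs)}")
```

A PowerPC-only application would report just "ppc" and therefore run on an Intel Mac only through Rosetta emulation, which is exactly the migration problem described above.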

9. Opteron/Athlon/Xeon64ET with Linux from various manufacturers
This combination is still the most viable in the current workstation market. In the meantime, a lot of software which was available on traditional UNIX 64-bit RISC platforms has been ported to 64-bit Linux. One year ago 64-bit Linux was quite experimental; this has changed completely, as RedHat and Novell support it with their enterprise distributions. A lot of the criticism regarding fast changes of the Linux kernel and incompatible distributions has been resolved by certifying only these two distributions. Recently Ubuntu has also been trying to become a distribution supported by the ISVs, but I think this will take a lot of time; though Ubuntu is well received among home Linux users, it does not have the same reputation among professionals. One major advantage Linux has compared to the other UNIX OSes is its user-friendly desktop environments (KDE or GNOME). It is really a shame that all traditional Unices (with the exception of Sun) still have the completely outdated CDE as the default desktop environment. One might argue that KDE and GNOME are also available for AIX or HP-UX, but they're not supported by the vendors, and a normal user cannot install them; usually only the system administrator is able to do this. So a normal user still has a working environment from the beginning of the 90s, which is outdated by any definition. This user-friendliness and the massive cost advantage will spread Linux further and eat into the user base of the other solutions.

10. Thin terminals connected to servers with grid software
This is certainly not what one would call a workstation, but recently more and more workstation users have been getting rid of their computers and getting a thin terminal box on their desk. This box is connected to a terminal server, and the user can send computing-intensive jobs to compute servers. The installed grid software automatically chooses a computing server with the required specification and the smallest load. The advantages of this solution are:

- Better utilization of expensive processors and large memory sets
- Only the servers have to be upgraded; the user notices the increase in speed and capability without exchanging his hardware
- More space on the user's desk and silent offices
- The terminal is OS-independent, which means it is possible to switch between several servers with different OSes, so no extra computer is required for Windows software, and sending a job to a Linux or a Solaris server is only a matter of a different parameter to the job submission command (see the sketch at the end of this section)
- The terminal has no moving parts, so it is quite stable and robust
- Data storage and backups can be managed better on the server side
- Some terminals like the Sun Ray allow the user to save his session token on a smart card, which means that if he puts his smart card into any available terminal, his session is restored

Of course not every workstation user can use such a solution. 3D-intensive tasks or multimedia programs require great graphics performance and low latency, which a server cannot provide due to the limited bandwidth and latency of the network, but EDA software, for example, is perfectly suited to such a scenario.
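To illustrate the job-submission point from the list above, here is a minimal sketch of a wrapper that sends a batch job to a grid scheduler and selects the target OS with a single resource parameter. It assumes a qsub-style submission command on the PATH and a cluster-defined "os" resource; the flag names, resource names and values are assumptions, since real grid software (e.g. Sun Grid Engine or LSF) uses its own conventions.

```python
# Sketch: submit a batch job to a grid scheduler from a thin-terminal session.
# The target OS is chosen with a single resource parameter, as described above.
# Assumptions: a qsub-style command is on PATH, and the cluster defines an
# "os" resource with values "linux" and "solaris" -- the real flag and value
# names depend on the grid software in use.
import subprocess

def submit(script: str, target_os: str = "linux", queue: str = "batch") -> None:
    """Submit `script` to the grid, requesting a host running `target_os`."""
    cmd = [
        "qsub",
        "-q", queue,                 # hypothetical queue name
        "-l", f"os={target_os}",     # hypothetical resource selecting the OS
        script,
    ]
    subprocess.run(cmd, check=True)

# The same simulation script can be sent to a Linux or a Solaris farm
# just by changing one parameter:
submit("run_simulation.sh", target_os="linux")
submit("run_simulation.sh", target_os="solaris")
```

The grid software then picks a machine of the requested type with the smallest load, which is exactly why the box on the user's desk no longer needs any computing power of its own.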

Conclusion:

A lot of development has happened since the last article. The most noticeable point is certainly the incredible progress of Linux, so that nowadays x86 64-bit workstations are probably the most common platform for technically oriented users. The future of workstations using processors from server lines that are still in production can be considered safe; they're still needed as development platforms, but the combination of grid software and a thin terminal is a very viable alternative to a desk-side system. The future of MacOSX as a system for professionals is quite uncertain; it depends on the support from Apple's side during the transition period and on their advertising of the advantages of MacOSX compared to Linux and Windows. This might be a hard task, since the OSes are now directly comparable with each other because they run on the same hardware. Solaris 10 is a technically very advanced system, but the ISVs still must be convinced that supporting this platform is not a waste of resources, because Linux is already there. For me the most interesting approach is still the Prism platform, which is certainly usable only by a small number of workstation users, but the concept behind it is ahead of its time. As we know from history, the best concept is not always the one accepted by the most buyers.

Disclaimer: All information about the technical data of the workstations has been taken from the product description pages of their manufacturers.

A Take on the Workstation Market Today

Maybe you all know the old joke about the definition of a workstation: a train station is where a train stops, a bus station is where a bus stops, so a workstation... In this article I will try to define the workstation market and the current models, describe what they are used for, and offer some thoughts about their future.
Definition:
First the question of who is using a workstation and what it is used for:

The main areas of usage are CAD (Computer Aided Design), CAM (Computer Aided Manufacturing), CAE (Computer Aided Engineering) and EDA (Electronic Design Automation). Scientists use workstations for visualizing big data sets or running simulations. Architects use workstations for designing new houses, bridges, tunnels and other buildings. Medical professionals use workstations for visualizing the data they receive from a computer tomograph. Geologists use them for cartography and for the exploration of oil and gas deposits. Workstations were the first computers capable of 3D processing, which was not only interesting for technical purposes but also fascinating for artists like Timothy Leary. Financial analysts need them for going through different market scenarios. Workstations are also used for multimedia creation, since they are capable of processing high-quality audio and video. Software developers use workstations because, if they write software for servers, they find the same environment, so their programs are guaranteed to run on the server as they do on the workstation. Of course there are lots of other areas where workstations are necessary for everyday work.

What is a workstation:

Until the beginning of the 90s, it was quite an easy task to define a workstation. The cheapest and most widespread computers were "home computers" from Acorn, Amiga and Atari. IBM-compatible PCs ran MS-DOS, Windows or OS/2 and were used in offices for text processing or spreadsheet calculations. Apple computers were used for artists' work (at that time multimedia was a widespread term, but hardly anyone knew what it was) and DTP. The workstation was one level above: a desktop computer for a single user with a UNIX OS and a RISC CPU. They were expensive beasts (not seldom several tens of thousands of dollars), so only companies and universities could afford them. The situation changed in the following years: first Microsoft introduced Windows NT to the market, which was advertised as a workstation OS, and second, Linux arose from nowhere. Both of these OSes ran mainly on IBM-compatible PCs, which became cheaper yet more powerful every year. So at the beginning of the 21st century the border has blurred: every big computer maker offers workstations which are mostly IBM-compatible PCs with the most recent professional Windows version or RedHat for workstations, better equipped than the average PC a customer can buy in the computer shop around the corner. They are much more affordable than the workstations of a decade ago.

Nevertheless, there are still criteria nowadays which separate a workstation from the rest of the computer market. To do the jobs described above, the computer must be 64-bit capable and OpenGL-capable, and the ISVs must provide software for the platform. 64-bitness is necessary because the data volumes a workstation has to handle exceed the 4 GB memory space of a 32-bit machine, and often 64-bit accuracy is required. OpenGL is still the standard for professional graphics, since DirectX is not available for UNIX platforms. The third point is very important as well, because the software used on workstations is very complex and has been developed by the companies over many years, so it is not easy to port or rewrite it for a new platform. The licenses for that kind of software usually cost several thousand dollars annually, because of the quite narrow circle of users (compared to MS Office, for example), the support required because of its complexity, and the demands on this software in terms of stability (as few crashes as possible, even when processing large amounts of data) and accuracy. If we look at the available computers with regard to these points, only a few platforms are left:

1. POWER4+ with AIX 5L from IBM

AIX 5L is one of the traditional UNIX OSes; it was certified as UNIX 03 compliant by the Open Group (in fact it is the only OS which has received this brand yet). IBM promotes this platform for CAD, especially because of the CATIA software, but it is also used for EDA (Cadence or IBM-owned software). POWER is a RISC processor developed by IBM and used in the p- and iSeries of their server lines. Currently there are two workstations available:

- IntelliStation POWER 275
This workstation is equipped with a single 1.0-1.45 GHz POWER4+ processor. Since this processor is 2-way, the entry edition of this model has one core disabled, which is not always a disadvantage, because the remaining core has access to the whole 8 MB of third-level cache. This model has up to 12 GB of memory and two SCSI hard disks; the graphics adapter has up to 128 MB of video RAM.

- p630 Model 6E4
This is mainly a server which has a better graphics card plugged in, so that it becomes a workstation. Unlike the equivalent server, it is not certified for Linux usage because of the proprietary graphics card.

The future of this platform is not easy to foresee. On one side IBM is a very active Linux supporter, but IBM is also known for supporting old platforms as long as customers pay for them (like mainframes), so I think AIX will have a long life. More likely the entry server will get a better graphics card (not necessarily from IBM, but from NVidia, ATI or 3DLabs), so it will become Linux-compliant, and maybe ISVs will be convinced to port their software to Linux on POWER. AIX is able to execute Linux software, but it still has to be compiled for POWER, or at least for PowerPC.

2. Alpha with Tru64 UNIX from HP
Alpha has an interesting history: it was designed by Digital as a replacement for the VAX series. The processor was quite successful; Windows NT was ported to it, and with the FX!32 emulator it was possible to run Windows x86 binaries on it. Few will remember the advertisements in computer stores selling a 600 MHz workstation at a time when Pentiums had just reached the 100 MHz wall. Alpha is a very clean architecture, maybe even too clean: the first models did not even have byte load/store instructions, because bytes seemed unneeded in a 64-bit world (the support was added later, though). The rest is history: Digital was bought by Compaq, Compaq was bought by HP, and HP declared Alpha and Tru64 UNIX dead. There are still offers for Alpha workstations on HP's pages, but I don't think anyone will start a new business using Alpha, so they are mainly for businesses which are still using Alpha and have not converted to another platform yet. Alphas were used mainly in financial centers and for number crunching and simulations.

- HP AlphaStation DS15
Single 1 GHz processor, 2 MB cache, 4 GB RAM, 2 GB/s peak memory bandwidth, ATI Radeon graphics card (up to 4 in one system)

- HP AlphaStation DS25
Up to two 1 GHz processors, 16 GB RAM, 8 GB/s peak memory bandwidth

- HP AlphaStation ES47
Up to two 1 GHz EV7 processors, 8 GB RAM, 12.8 GB/s I/O bandwidth, 1.75 MB on-chip cache per processor

Linux has been ported to Alpha, but since it is not commercially supported, there is no commercial software available. Tru64 was famous for its clustering capabilities; HP once promised to port them to HP-UX, but then sold them to Veritas, which was bought by Symantec, so no one really knows what will happen with the rest of this software and hardware.

3. PA-RISC with HP-UX from HP

The history here is quite similar to Alpha's. HP will abandon PA-RISC in favour of Itanium. But HP has stopped its Itanium workstation line, so the valid question is: how will I be able to use HP-UX on a workstation? HP-UX was widely used in all workstation-relevant areas; besides Solaris and AIX, this was the third platform which ISVs could not ignore when they claimed their software ran on UNIX. PA-RISC was the champion of integrating caches on-chip: it was the first chip which had 8 MB of on-chip cache and became a monster of 100 million gates. These workstations are still available from HP:

- HP b2600
Single 500 MHz PA-8600 processor, 4 GB RAM, HP fx5 pro graphics card

- HP c3700
Single 750 MHz PA-8700 processor with 2.25 MB on-chip cache, 8 GB RAM, HP Fire GL-UX graphics card

- HP c3750
Single 875 MHz PA-8700+ processor with 2.25 MB on-chip cache, 8 GB RAM, HP Fire GL-UX graphics card

- HP j6750
Up to two 875 MHz PA-8700+ processors with 2.25 MB on-chip cache, 16 GB RAM, HP Fire GL-UX graphics card

- HP c8000
Up to two 900-1000 MHz PA-8800 dual-core processors, ATI FireGL graphics card, 32 GB RAM, 8x AGP slot

4. MIPS with Irix from SGI

SGI is famous for its graphics workstations, like the O2 and the Octane. For a long time they were unbeaten when it came to visualizing large data sets, 3D graphics and image processing. IRIX was the most comfortable UNIX system to use, far ahead of CDE, which is still the standard at IBM and HP. SGIs were used by medical professionals, film studios, the military and geologists. Recently SGI decided to drop MIPS and continue with Itanium; they use an emulator which allows running IRIX software on Itanium Linux. There are still two workstations with the MIPS-IRIX combination available:

- Silicon Graphics Fuel
Single MIPS R16000A 700-800 MHz processor with 4 MB second-level cache, 4 GB RAM, V12 graphics card with 128 MB video RAM (104 MB of which can be used as texture memory)

- Silicon Graphics Tezro
Up to 4 MIPS R16000A 800 MHz processors with 4 MB second-level cache each, 16 GB RAM and two V12 graphics boards

There are rumors about an Itanium workstation based on the technology used for the successful Altix server line, but we will have to wait. With the emulation technology they will be able to run all the software they used on MIPS, but it remains to be seen how fast this emulation works.

5. SPARC with Solaris from Sun
I think every computer science student has had some experience with Sun workstations (Ultra 1-10). These workstations were very popular at universities until Linux came up, which was more affordable for the small budgets of today's universities. Sun workstations are still very widely used in every area; they are famous for their stability, and there is a famous joke with a lot of truth in it: a Sun workstation is slow, a Sun workstation with ten users on it is still slow. In recent times Sun has had tough competition from the x86 market, so they had to introduce workstations with Opteron processors from AMD, which execute x86 code but also have 64-bit extensions, so they can handle more than 4 GB of memory per process (which solutions with 32-bit processors and extended memory could not provide) and can compute 64-bit integers in one step. Solaris 10 will also be the first non-open-source OS which supports these extensions. One very clever step is the Janus technology, which allows Linux binaries to run on Solaris 10, so ISVs will not have to provide additional binaries for Solaris 10 x86. However, the question remains whether the ISVs will support this combination or just certify their software with RedHat Linux and maybe Novell, as they are doing today. So here we have the SPARC workstations:

- Sun Blade 150
Single 550-650 MHz UltraSPARC IIi, 512 KB 2nd level cache on-chip, 2 GB RAM

- Sun Blade 1500
Single 1 GHz UltraSPARC IIIi, 1 MB 2nd level cache on-chip, 4 GB RAM

- Sun Blade 2500
Up to two 1.28 GHz UltraSPARC IIIi each with 1 MB 2nd level cache on-chip, 8 GB RAM

Here are the Opteron-based ones:

- Sun Java Workstation W2100z
Two 200-series 1.8-2.4 GHz AMD Opterons, 16 GB RAM with 12.8 GB/s bandwidth

- Sun Java Workstation W1100z
Single 100-series 1.8-2.4 GHz AMD Opteron, 16 GB RAM with 12.8 GB/s bandwidth

It will be interesting to see what happens with SPARC-based workstations in the near future. My prediction is that they will be upgraded with the 2-way SPARC IV processor, but beyond that all the processors on Sun's roadmap, like Niagara, are server-oriented, so no one at Sun could tell me what happens with workstations then; probably they don't know it themselves. First they will see how the market accepts Opteron-based workstations and Solaris 10 for x86, and then further decisions will be taken, but probably with a long transition time.

6. PowerPC with MacOSX from Apple
Since the introduction of the very UNIX-like OSX and the 64-bit G5, Apple can be counted as a workstation manufacturer. However, none of the classical ISVs has ported any software for CAD, CAM, EDA, etc. to MacOSX. Areas where MacOSX is strong are bioinformatics and multimedia. MacOSX is also becoming popular among scientists, so a lot of mathematical and numerical software has been ported. Since this is the most viable platform besides Linux and the most popular UNIX-like platform on the desktop, it would be really great if ISVs which write software for traditional workstation platforms would consider porting their software to MacOSX as well. The PowerPC processor was developed by the Apple, Motorola and IBM alliance; the G5 is mostly a single-core POWER4 processor. Here are the workstations:

- PowerMac G5 Single 1.8 GHz
Single G5 1.8 GHz, 600MHz frontside bus, 512 KB 2nd level cache, 4 GB RAM, NVidia GeForce FX 5200 Ultra with 64MB video memory

- PowerMac G5 Dual 1.8 GHz
Dual G5 1.8 GHz, 900 MHz frontside bus, 512 KB 2nd level cache / processor, 4 GB RAM, NVidia GeForce FX 5200 Ultra with 64MB video memory

- PowerMac G5 Dual 2.0 GHz
Dual G5 2.0 GHz, 1 GHz frontside bus, 512 KB 2nd level cache / processor, 8 GB RAM, NVidia GeForce FX 5200 Ultra with 64MB video memory

- PowerMac G5 Dual 2.5 GHz
Dual G5 2.5 GHz, 1.25 GHz frontside bus, 512 KB 2nd level cache / processor, 8 GB RAM, ATI Radeon 9600 XT with 128MB video memory

7. Itanium with Linux/Windows from various manufacturers

Since HP, the largest producer of Itanium workstations, has stopped their further development and sales, the destiny of Itanium as a workstation processor is very uncertain. As a consequence, Microsoft has stopped further development of Windows for workstations on Itanium (Windows for Itanium servers is still available). Linux on Itanium is available from several distributors, but such machines will mainly be used as development workstations for HPC servers.

8. Opteron/Athlon/Xeon64ET with Linux/Windows from various manufacturers

This combination is the most viable in the current workstation market, but still very little closed-source ISV software is 64-bit ready for this platform. However, it will probably take 1-2 years until the most needed software becomes available. Due to mass production this platform is very cheap (compared with the other platforms, with the exception of MacOSX) and will have the most developers and users. All big and small computer manufacturers are selling such systems. Several Linux distributions are available for them, and Microsoft has promised to release the so-called x64 Windows version this year.

Conclusion:

In this article we saw a broad variety of different systems which are used in different areas of production. Some of these systems are still alive and kicking (Solaris, AIX, MacOSX), some will be discontinued in the near future (Tru64, HP-UX, IRIX), some have an uncertain future (Itanium) and some are still not ready for production (x86 with 64-bit extensions). All the remaining systems are more or less capable of executing Linux software, so I expect that the Linux executable format will become the standard in the future and all other OSes will become LSB compliant. However, every combination of OS and processor is a great piece of technology, so it is always a pity to see a technology disappear.

About the Author:
I work in Munich for one of the three biggest EDA ISVs; my hobbies are different hardware systems and cutting-edge IT. My favourite combinations are PowerPC with MacOSX, which I use at home, and SPARC with Solaris, which I use at work.

Disclaimer: All information about the technical data of the workstations has been taken from the product description pages of their manufacturers.