I'm a VMware guy (still am).
I've been virtualizing infrastructures since 1999. Back then I had no "big time" enterprise server virtualization software, so I used VMware Workstation, first under Windows and finally under Linux.
I stuck with VMware Server from the very first version, always under Linux... then ESX and ESXi came along and things got really serious. I absolutely love ESXi; I've tested it beyond reasonable (or designed) usage, and it always surprises me.
VMware Workstation, however, has a different relationship with me: I love it just about as much as I hate it. Let me explain why. VMware Workstation is very poorly supported on Linux, which seems stupid when the very best product they have and sell (ESX and ESXi) is Linux-based and just flawless.
So VMware Workstation works very, very well under Windows... which kind of defeats its purpose in the first place! Why, oh why, would I want to run a virtual machine on top of a virtualization solution running under the worst O.S. when it comes to resource usage and management?
On Linux, VMware Workstation has a history of problems with the very O.S. that is best at managing and using hardware resources... and that's a bad thing. You see, Windows runs better when virtualized under Linux: it's more stable and a lot faster. The opposite is not true at all.
So things look inverted: the best implementation of VMware Workstation serves the worst possible usage of the product. This would be bearable if the Linux implementation were at least stable... but we're not that lucky.
I'm a Debian/Ubuntu user, and that meant constant headaches up until version 7. Whenever I had a kernel update, I faced several hours of rebuilding the vmnet kernel modules so that VMware Workstation could run, and that was anything but easy. Some people produced patches and workarounds, but they were usually posted months after my week of hell. With less and less time to spend on glitches instead of real work, I had to switch to "don't upgrade the kernel until the patches and solutions have been posted" mode.
Then things improved with VMware Workstation 7 and its clean install routine. I thought I was in heaven, but the honeymoon was short-lived. My laptop (obviously running Linux) uses an ATI graphics card... and as a result, if I turned on 3D acceleration in a virtual machine, I got screen corruption. Up to that point I had blamed ATI and its bad Linux drivers, since the dual-screen functions are very unstable and often produce the same corruption I saw in VMware. Back home on my workstation with its dual Nvidia graphics cards, Workstation 7 ran smoothly with 3D acceleration.
Then VMware upgraded to Workstation 8. WS8 was an important upgrade, as it allowed linking to ESXi to use the virtual machines (not full management functionality, but at least I could use the machines)... you see, the VMware vSphere Client (the management tool for ESX and ESXi) is Windows-only! So Linux users had to install a virtual machine with Windows and then install the management application inside it (it's goofy to manage your server from a VM running on that same server, but VMware is well known for leaving you to make goofy decisions for lack of support on the right platforms). Back to WS8: still the same cool installer and improved performance, but the laptop continued to have real problems with 3D acceleration. At least the workstation could still kick ass in 3D with the Nvidia cards.
Recently, VMware released Workstation 9. I was thrilled by the claims of better performance, so I immediately tried it... and found HELL! Not only does 3D still not work on ATI cards, it crashed BADLY with my Nvidia cards when 3D acceleration was enabled.
So where does this leave us? VMware is building worse and worse implementations of better and better products! WEIRD!
So why do I insist on running VMware Workstation on Linux? Because I keep trying to find a good product and finally stick with it.
My advice to EACH AND EVERY ONE OF YOU out there: don't buy VMware Workstation until you've tried EVERY function you need... use the trial and test before you spend any money.
It's fine for office work and simple tools, but forget about running games or even 3D apps on it until these VERY BAD PROBLEMS are properly solved.
Are there any other solutions out there? Sure!
VMware Workstation's machines are not directly compatible with ESXi, so I can't just stage a machine in Workstation and upload it to ESXi; I have to use VMware Converter to convert it first. In that sense, why use VMware Workstation at all?
Welcome to Oracle VirtualBox. VirtualBox, unlike VMware Workstation, comes from the Linux world; the Windows implementation is not the primary development target but the secondary one.
Is it perfect? No... it's not as good a performer as VMware Workstation, but at least it supports 3D without crashing! Bear in mind that USB support in the open-source edition is quite limited, so I recommend downloading from the website instead of installing through the Software Center.
So... unless you want to use Nvidia ONLY and stay on Workstation 7, just use VirtualBox. And if you stage ESXi machines, don't worry: you would still need VMware Converter anyway if you were using VMware Workstation.
Sorry, VMware... you'd better start supporting Linux at least as well as you support Windows! It's not that difficult: just grab one or two of the geniuses working on the ESX team and learn from them.
Wednesday, December 5, 2012
Tuesday, December 4, 2012
Ubuntu on Unity: is this the end?! NO!! Ubuntu Studio saves the day.
I've been using Linux ever since I was forced to use PC hardware in my everyday job, against my platform of choice - the Commodore Amiga.
I started my PC experience with the 286, and then my father brought me an IBM PS/2 386 SX, hoping I would drop the brilliant Amiga for DOS and the clumsy Windows 3.0... it didn't work! I used the PC for schoolwork (mostly programming) and the Amiga for just about everything else, from programming to music production to video editing, down to simple spreadsheets and word processing.
Even when I was studying engineering, I was forced to use AutoCAD 12 under MS-DOS... but I still used XCAD on the Amiga for the drawings, LightWave 3D for animation and 3D, and sometimes I even played with Real3D for some truly awesome renderings of my mechanical parts.
When I started working, back in 1997, the Amiga was losing strength (thanks to short-sighted management that had been rotting the company for years), and I was running out of time to keep working on two different systems. By then Windows NT 3.5 and later NT 4.0 were the poor man's workstation standard, while Sun SPARCs ran Solaris and serious HP workstations ran HP-UX.
I had training in UNIX, but it was just impossible to have both a UNIX station and a Windows station at home, so since most of our clients were Windows-based, I had to opt for NT... hell, I even got certified (not something I usually tell people, not because it's a Microsoft certification, but so as not to be confused with today's "Microsoft Certified Professionals"... back then a Microsoft certification was hard to get and implied real knowledge rather than a good memory for brain dumps).
Still, I was constantly amazed at how hardware power kept increasing while the results stayed so damn poor compared to my good old Amiga... and I'm not even talking about my A4000-030; I mean the 80's A500, running a 7 MHz 16-bit CPU with 512 KB of RAM.
It wasn't until 1999 that I had enough money for several computers at home, and the office allowed me a laptop, so my work could live on the laptop while the home machines explored other O.S. solutions. Back then Linux was not much easier than a UNIX, and far from as productive as an Amiga... but at least it was sensible with hardware resources and very fast.
Linux grew over time... lots of distributions came and went, and I tried them as I searched for a good Linux: Red Hat, TurboLinux, Mandrake and of course Debian, which became my preferred one.
Ever since Ubuntu popped onto the scene with REAL improvements to the desktop user experience (and I'm talking about 6.06 LTS), I decided to stick with Ubuntu and Debian alone. If the hardware was too picky, I would go for Debian; if I was running on state-of-the-art hardware, Ubuntu (with its constant updates) would be the choice.
For the last few years I've been using 64-bit Ubuntu Studio 9.10 on my HP Compaq 8510p laptop and on my home AMD 5000+ workstation. However, problems with support for VMware Workstation 6 and 7 on Debian-based Linux (especially the compilation of the network kernel modules) made me constantly try new kernels, and that led me to this post's title.
On one of those updates, Ubuntu Studio 11 (if I'm not mistaken), I found myself out of GNOME and into Xfce... and I really didn't like KDE or Xfce (or so I thought). So I decided to reinstall with plain Ubuntu and then manually add all the other packages from Ubuntu Studio... boy, was I in for a surprise. Ubuntu had been defaced into that thing called Unity. There's something very wrong about today's Ubuntu Unity and Windows 8! If I want a pad, I'll buy one and run the Linux-based Android on it!!!! Why would I want that interface on my workstation?!?!
The pity is that the kernel is much better and faster (same as with Windows 8... a much, much better kernel under a bad interface), so the Unity interface is just a way to make you... NOT enjoy it and move away to the always reliable, good old DEBIAN. Like I did!
I've been very, very disappointed with today's Ubuntu Unity, and I understood why Ubuntu Studio moved away from GNOME to an Xfce-like environment. Still, I decided to give Ubuntu Studio another look with version 12.10. FINALLY: Ubuntu's good old "GNOME-like" desktop running on the brand new, super fast kernel.
So, to conclude: if you used to like Ubuntu and feel disappointed (ultimately moving to Linux Mint... as I would if they dropped that sickly green), try the brilliant 64-bit Ubuntu Studio 12.10... and find yourself back in the Linux desktop experience game. And you know what? That brilliant GNOME-like desktop... is actually Xfce 4 :s. It seems that while GNOME got worse with Unity, Xfce found its way through.
This is a good example for those readers who query me about my anti-Microsoft, pro-Linux tendencies. I'm not against Microsoft... I actually love, use and teach some of their products... at the same time, I'm not a blind Linux lover.
If I like a product, then I like it and write about it; if I don't... well, I just don't, and I write about that too.
Saturday, October 27, 2012
Understanding computer performance and architectures
Hi all.
This article is co-authored with David Turner. David watched my YouTube video showing a workstation running Linux and multitasking beyond what is expected of that hardware.
David started writing to me because he has the same hardware base I have and runs Windows, so we were both curious and confused... and I think part of his brain was telling him "fake... it's got to be another fake YouTube crap movie".
So I channelled him to this blog and to the latest post at the time, about the Commodore Amiga and its superiority by design. Dave replied with a lot of confusion, as most of the knowledge in it was too technical. We then decided that I would write this article and he would criticize me whenever I got too technical and hard to understand, forcing me to write more "human" and less "techie".
So he is co-author, as he is critiquing the article into human-readable knowledge. The article will be split into lessons, and so it will grow over time into a series of articles.
Note: this article will change over time as David forces me to explain things better. Don't just read it once and give up if you don't understand; comment, and register to be notified about the updates.
Starting things up...(update 1)
Lesson 1: The hardware architecture and the kernels.
Hardware architecture is always the foundation. You may have the best software on earth, but if it runs on bad hardware... instead of running, it will crawl.
Today's computers are a strange thing to buy. There is less and less support for non-Intel architectures, which is plain stupid, because variety generates competition instead of monopoly, and competition generates progress and improvement. Still, most computers today are Intel architecture.
Inside the Intel-architecture world there is another heavyweight, one that seems to work in bursts: AMD.
AMD started out making Intel clones and then decided to push the technology further. They were the first to introduce 64-bit x86 instructions and hardware with the renowned Athlon 64. At that point, instead of copying Intel, AMD followed their own path and created something better. Years later, they did it again with the multi-core CPU. As expected, Intel followed and got back on the horse, so now we get to watch AMD build low-budget Intel alternatives until they decide to go back to the drawing board and innovate.
So what is the main difference between the two contenders in the Intel-architecture world?
Back in the first Athlon days, Intel focused CPU development on pure speed by means of frequency increases. The catch is that (physics 101) the more current you push through a circuit, the more resistance it meets and the more heat it generates. So Intel developed ways to use less and less material (less resistance, less power, less heat); that's why Intel CPUs have a process size smaller than most competitors: 65 nm, 45 nm, 32 nm and so on. For that reason they can run at higher clock speeds, and that pushed Intel's development focus not toward optimizing how the chip works, but toward how the chips are built.
AMD, on the other hand, is not the size of Intel and doesn't sell as many CPUs, so optimizing chip fabrication would have been a cost that is hard to recover. The only way forward was to improve chip design. That's why an Athlon at 2 GHz could be faster than an Intel at 2.6 or 2.7 GHz... it was better in design and in the execution of instructions.
Since the market really doesn't know what it's buying and just looks at specs, AMD was forced to change its product branding to the xx00+ scheme... a 3200+ rating meaning the lower-clocked chip inside should compare to (at least) a 3.2 GHz Pentium in performance. That same branding carried over to the dual cores. When Intel publicized its Hyper-Threading CPUs (copying AMD's efficiency-driven design, in my view, but adding a new face to it called the virtual CPU), AMD decided to evolve into the dual-core CPU (Intel patented Hyper-Threading and, though using the AMD design as inspiration, managed to lock AMD out of marketing its own designs that way... somehow I feel Intel has a lot in common with today's Apple!)... and kept the rating, calling the two-core chip running at 2 GHz per core the 5000+.
So up to this point AMD and Intel could compete on CPU speed; would the AMD Athlon 64 X2 5000+ dual core at 2 GHz per core be as fast as an Intel Core 2 Duo at 2.5 GHz? Not quite. Speed is not always about the GHz, as AMD had already proved with the Athlon's superior design.
At some point your CPU needs to read from and write to memory, and this is where the REALLY BIG architectural difference between AMD and Intel comes in.
Intel addresses memory through the chipset (with the exception of the recent Core iX families). Most chipsets are designed for the consumer market, so they were designed around a single CPU. AMD, again needing to maximize production and adaptability, designed the Athlon with a built-in memory controller. So the Athlon has a direct path to memory (full bandwidth, high priority and very, very fast), while Intel has to go through the chipset and channel every memory access across it. This design removes the chipset memory-bandwidth bottleneck and allows for better scalability.
The result? Look at most AMD Athlon, Opteron or Phenom multi-CPU boards and you'll find one memory bank per CPU, while Intel (again) tried to boost the speed of the chipset and hit a brick wall immediately. That's why Intel server motherboards of that era rarely went beyond two CPUs, while AMD had boards with eight or more sockets. Intel and its race for GHz made it less efficient and a lot less scalable.
If you ever stopped to wonder how Intel managed such a big performance increase out of the Core technology (that big leap the Core i3, i5 and i7 made compared to the design they're based on, the Core 2 Duo and Core 2 Quad), the answer is simple... they already had the GHz; when they added an integrated DDR memory controller to the CPU, they jumped into AMD's performance territory! Simple and effective... with a much higher CPU clock. AMD slept for too long, and now Intel rules the entire market except for the supercomputing world.
The video and the AMD machine running Linux.
This architectural difference plays an important role in the video I showed, with Linux able to multitask like hell. Being able to channel data to and from memory directly means the CPU can process a lot of data in parallel without constantly asking the chipset (and waiting for the opportunity) to move data.
So the first part of this first "lesson" is done.
Yes, today's Intel Core i5 and i7 are far more efficient than their AMD equivalents, but still not as scalable, meaning that in big computing AMD is the only way to go in the x86-compatible world. AMD did try the next leap with the APU recently, but devoted too much time to developing the hardware and forgot about the software to run it properly. I'll leave that for the second part of this "lesson". They also chose ATI as their GPU partner... not quite the big bang. NVIDIA would have been the one to pick: raw processing power is NVIDIA's ground, while ATI is more focused on purity of colour and contrast. So when AMD tried to fuse the CPU and the GPU (creating the APU), they could have created a fully integrated, HUGE processing engine... but instead they just managed to create a processing chipset. Lack of vision? Lack of money? A bad choice of partner (NVIDIA being the master of GPU supercomputing)? I don't know yet... but I screamed "way to go, AMD" when I heard about the concept... only to shout "stupid, stupid, stuuupid people" some months later when it came out.
The software architecture to run on the hardware architecture.
Operating systems are composed of two major parts: the presentation layer (normally called the GUI, or Graphical User Interface), which mediates between the user (and the programs) and the kernel layer; and the kernel layer itself, which interfaces between the presentation layer and the hardware.
So... windows and pictures and icons apart, the most important part of a computer, next to the hardware architecture, is the kernel architecture.
There are four types of kernels:
- Microkernel - Coded in a very direct and simple way, built with performance in mind. Microkernels are normally found in routers, printers or simple peripherals that have a specific purpose and don't need to "try to adapt to the user". They are not complex, so they eat very few CPU cycles, which means speed and efficiency. They are, however, very inflexible.
- Monolithic kernels - BIG and heavy; they try to include EVERYTHING. That makes them very easy to program against, as most features are built in and support just about any usage you can think of. The downside is that they eat up lots of CPU cycles verifying and comparing things, because they try to account for just about every possible usage. Monolithic kernels are very flexible, at the cost of high memory usage and heavy execution.
- Hybrid kernels - A mix. You have a core kernel module that is bigger than the rest, and while loading, that module controls which other modules are loaded to provide functionality. These are not as heavy as monolithic kernels, since they only load what they need, but they have to contain a lot of memory-protection code to keep one module from using another module's memory space. So they are lighter than monolithic kernels, but not necessarily faster.
- Atypical kernels - All the kernels out there that don't fit the categories above, mainly because they are too crazy, too good or just too exotic to be sold in numbers big enough to form a class of their own. Examples are the brilliant Amiga kernel and all the wannabes it spawned (BeOS, AROS, etc.), mainframe operating system kernels and so on. #REFERENCE nr1 (see the end of the article)#
For the record, I personally consider Linux an atypical kernel. A lot of people think Linux is monolithic, and they would be right... in part. Others consider it hybrid, and they would be right... in part.
The Linux kernel is one monolithic block of code, like any monolithic kernel, but it is matched to your hardware. When you install your copy of Linux, the system probes your hardware and chooses the best code base for it. Why would your kernel need code paths for a 386 or a Pentium MMX if you have a Core 2 Duo or an AMD Opteron? The Linux kernel is matched to your CPU and the code is optimized for it. When you install software that needs direct hardware access (drivers, virtualization tools, etc.), you need your kernel's headers/source and a C compiler for one simple reason: the kernel modules that support those hardware calls are compiled against your exact kernel and then loaded into it. So you get a hybrid-made-monolithic kernel design. Not as brilliant as the Amiga OS kernel, but considering that the Amiga OS kernel needs the brilliant Amiga hardware architecture, the Linux kernel is the best thing around for the Intel-compatible architecture.
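To make that module-build step concrete, here is a minimal, hypothetical hello-world module in C (nothing VMware ships; it just assumes you have the headers for your running kernel and the usual build tools installed):

    /* hello.c - a minimal out-of-tree Linux kernel module.
     * It is compiled against the headers of the kernel you are actually running
     * and then loaded into it - the same step drivers and the VMware network
     * modules go through after a kernel update. */
    #include <linux/init.h>
    #include <linux/kernel.h>
    #include <linux/module.h>

    MODULE_LICENSE("GPL");
    MODULE_DESCRIPTION("Hello-world module used only as a build example");

    static int __init hello_start(void)
    {
        pr_info("hello: module loaded\n");   /* shows up in dmesg */
        return 0;
    }

    static void __exit hello_end(void)
    {
        pr_info("hello: module unloaded\n");
    }

    module_init(hello_start);
    module_exit(hello_end);

A two-line kbuild makefile (obj-m += hello.o) plus "make -C /lib/modules/$(uname -r)/build M=$(pwd) modules" builds it against the running kernel; this is essentially what the VMware installer automates when it rebuilds vmmon and vmnet after a kernel update.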
Do I mean that Linux is better for AMD than for Intel? Irrelevant! AMD is better than Intel if you need heavy memory bandwidth; Intel is better than AMD if you need raw CPU power for rendering. The Linux kernel is better than the Windows kernel... so compared to today's Windows, Linux is the better choice regardless of architecture. However, AMD users have more to "unleash" when converting to Linux, as Windows is deliberately more Intel-biased and less memory-efficient.
Resources are limited!
Why is Linux so much more efficient than Windows on the same hardware?
The Windows kernel is either monolithic (the 9x line) or hybrid (the NT line: NT, 2000, XP, 2003, Vista/7, 2008, 8). However, the base of a hybrid kernel is always the CPU-facing code, and that is always a big chunk.
Since Microsoft made a crusade against open source, they have to keep up their "propaganda" and ship a pre-compiled (and closed) CPU kernel layer (and this is 50% of why I don't like Windows... they are being stubborn instead of efficient). So, while much better than Windows 2000, XP and 7 still have to load a huge chunk of generic code that has to handle everything from the 386 era to future generations of i7 cores and beyond. That means they always operate in a compromised mode and always keep unused code sitting in memory. Microsoft also has a very close relationship with Intel and tends to favour it over AMD, making Windows run better on Intel than on AMD... this is very clear when you dig around AMD's FTP site and find several drivers to improve Windows speed and stability on AMD CPUs... and find nothing like that for Intel. There's a reason people call the PC a Wintel machine.
So, to start: Linux has a smaller memory footprint than Windows, it makes better use of the CPU's instruction set than Windows' "compatibility mode", and it takes advantage of AMD's excellent memory-to-CPU path.
Apart from that, there is also the way Windows manages memory. Windows (up until the Vista/7 kernel) was not very good at it. When you use software, the system instantiates objects of code and data in memory. Windows addresses memory in chunks made of 4 KB pages. So if you have 8 KB of code, it will look for a chunk of two free 4 KB pages and use it... if, however, your code is made of two objects, one of 2 KB and one of 10 KB, Windows will allocate a one-page chunk for the first and a three-page chunk for the second. You consume 4 + 12 KB = 16 KB for 12 KB of code. This is what causes the so-called memory fragmentation. If your computer only had 16 KB of memory, you would not be able to allocate the next 4 KB of code: although 4 KB is technically unused, it is wasted as slack inside the allocated pages, split across two places, so there is no free contiguous page left to hand out.
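To make that 4 KB arithmetic concrete, here is a small, hypothetical C sketch (not Windows code; it uses the POSIX page-size query, and the 2 KB and 10 KB figures are simply the ones from the example above):

    #include <stdio.h>
    #include <unistd.h>

    /* Round each request up to whole pages, as a page-granular allocator must,
     * and report how much of the last page is wasted slack. */
    int main(void)
    {
        long page = sysconf(_SC_PAGESIZE);          /* 4096 on typical x86 systems */
        long sizes[] = { 2 * 1024, 10 * 1024 };     /* the 2 KB and 10 KB objects */
        long total = 0;

        for (int i = 0; i < 2; i++) {
            long pages   = (sizes[i] + page - 1) / page;    /* round up */
            long granted = pages * page;
            total += granted;
            printf("%6ld bytes requested -> %ld page(s) = %ld bytes (%ld wasted)\n",
                   sizes[i], pages, granted, granted - sizes[i]);
        }
        printf("total reserved: %ld bytes for %ld bytes of data\n",
               total, sizes[0] + sizes[1]);
        return 0;
    }

With a 4 KB page size it prints 16384 bytes reserved for 12288 bytes of data, which is the 16 KB for 12 KB described above.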
The memory-fragmentation syndrome grows dramatically if you build your code on a framework. Enter .NET. .NET is very good for code prototyping, and it is easy to code for precisely because the people who built it created objects with a lot of functionality baked in (to support any possible usage)... much like the classic monolithic kernel. The result is that if you examine memory, you'll find that a simple window with a combo box and an OK button means hundreds if not thousands of objects instantiated in memory... for nothing, as you'll only be using maybe 10% of each object's functionality.
Object-oriented programming creates code objects in memory. A single "class" is instantiated several times to support different uses of the same object type as different objects. After use, the memory is freed and returned to the operating system for reuse.
Now picture code that creates PDF pages. A PDF stamper works with pages that are stamped individually and then glued together in sequence, so your code keeps instantiating an object, freeing it to re-instantiate a bigger one, freeing that to re-instantiate an even bigger one... and so on. The diagram below walks through it, and a small code simulation follows right after it.
For instance:
Memory in pages:
|-1K 1K 1K 1K-|-1K 1K 1K 1K-|-1K 1K 1K 1K-|-1K 1K 1K 1K-|-1K 1K 1K 1K-|-1K 1K 1K 1K-|-1K 1K 1K 1K-|-1K 1K 1K 1K-|-1K 1K 1K 1K-|
Your code:
code instance 1 6K
|-C1 C1 C1 C1-|-C1 C1 -|-
Then you add another object to support your data (increasing as you process it) called C2
code instance 2 10K
|-C1 C1 C1 C1-|-C1 C1 -|-C2 C2 C2 C2-|-C2 C2 C2 C2-|-C2 C2 -|-
Then you free your first instance as you no longer need it.
|- -|- -|-C2 C2 C2 C2-|-C2 C2 C2 C2-|-C2 C2 -|-
And then you need to create a new code instance, called C3, to support even more data. This time you need 18 KB, so:
code instance 3 18K
|- -|- -|-C2 C2 C2 C2-|-C2 C2 C2 C2-|-C2 C2 -|-C3 C3 C3 C3-|-C3 C3 C3 C3-|-C3 C3 C3 C3-|-C3 C3 C3 C3-|...... and you've run out of memory!!
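That walk-through can be simulated with a toy first-fit, page-granular allocator in C. This is a hypothetical model (real allocators are far more sophisticated), and because it rounds every object up to whole 4 KB pages the exact cell counts shift slightly from the diagram, but the ending is the same: plenty of free memory, yet no run of pages big enough for C3.

    #include <stdio.h>
    #include <string.h>

    #define PAGE_KB   4
    #define NUM_PAGES 9                 /* a 36 KB arena, as in the diagram above */

    static char owner[NUM_PAGES];       /* 0 = free page, otherwise an object tag */

    /* First-fit: find a contiguous run of free pages big enough for size_kb. */
    static int alloc_pages(char tag, int size_kb)
    {
        int need = (size_kb + PAGE_KB - 1) / PAGE_KB;   /* round up to whole pages */
        for (int start = 0; start + need <= NUM_PAGES; start++) {
            int run = 0;
            while (run < need && owner[start + run] == 0)
                run++;
            if (run == need) {
                memset(owner + start, tag, need);
                printf("C%c: got %d pages at page %d (%d KB for %d KB)\n",
                       tag, need, start, need * PAGE_KB, size_kb);
                return start;
            }
        }
        printf("C%c: FAILED - no contiguous run of %d free pages\n", tag, need);
        return -1;
    }

    static void free_pages(char tag)
    {
        for (int i = 0; i < NUM_PAGES; i++)
            if (owner[i] == tag)
                owner[i] = 0;
    }

    int main(void)
    {
        alloc_pages('1', 6);    /* C1: 6 KB  -> 2 pages                      */
        alloc_pages('2', 10);   /* C2: 10 KB -> 3 pages                      */
        free_pages('1');        /* C1 released                               */
        alloc_pages('3', 18);   /* C3: 18 KB -> 5 pages: fails, because the  */
                                /* free pages are split into runs of 2 and 4 */
        return 0;
    }

Six pages (24 KB) are still free when C3 asks for five, but they are split into two separate runs, so the request fails: that is external fragmentation in a nutshell.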
I know today's computers have gigabytes of RAM, but today's code also eats up megabytes at a time, and we work with video and sound, and we use .NET to do it... you get the picture.
Linux and UNIX handle memory more dynamically and normally rearrange it (memory optimization and defragmentation) to avoid this syndrome.
In the UNIX/Linux world you have brk, mmap and malloc:
- brk - adjusts the end of the process's data segment (the program break) to match the requested memory, so a 6 KB request can take roughly 6 KB instead of a full 8 KB worth of pages.
- malloc - manages the heap and can reallocate more memory as your code grows (wonderful for object-oriented programming, because objects start with little data and grow as the program and the user work with them). In Windows this tends to be handled either by pre-allocating a huge chunk (even if you never use it) or by moving your code instance from place to place in memory (increasing the probability of fragmentation). The only problem is that heap memory obtained this way is easy to allocate and not so easy to release back. So mmap entered the equation.
- mmap - works alongside malloc, is used for large memory chunks, and is very good at releasing them back to the kernel. When you encode video or work with large objects in memory, mmap is the "wizard" behind much of that Linux performance advantage over Windows; the more data you move in and out of memory, the more noticeable it is. A small sketch of the call follows below.
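Here is that sketch: a minimal, hypothetical C example for Linux that grabs a large anonymous buffer with mmap and hands it straight back with munmap. glibc's malloc does something very similar under the hood for big requests, which is why large allocations can be returned to the kernel the moment they are freed.

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void)
    {
        size_t len = 64 * 1024 * 1024;      /* 64 MB, e.g. a video frame buffer pool */

        /* Ask the kernel for a private, zero-filled mapping not backed by any file. */
        void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (buf == MAP_FAILED) {
            perror("mmap");
            return 1;
        }

        memset(buf, 0, len);                /* touch the pages so they are really backed */

        /* Give the whole region back to the kernel immediately. */
        if (munmap(buf, len) != 0) {
            perror("munmap");
            return 1;
        }
        return 0;
    }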
There is something else important here. If you think about it, who does all this memory moving on an Intel-architecture machine? The CPU... so even under Windows, constantly shuffling things around in memory, the AMD of that era had better performance because of its in-CPU memory controller, while the Intel platform had to channel everything through the chipset.
The CPU (both Intel and AMD) normally has a queue of instructions waiting, and not all of them use the CPU's full processing power. Intel uses the "virtual processor" of Hyper-Threading to let two different code threads be worked on at once, while AMD built its architecture around parallel execution (everything from cache to CPU registers is parallel) and an internally doubled bus speed (a 100 MHz bus behaves like a 200 MHz bus inside the CPU), letting the chip divide and share its resources while outside communication runs at half the internal speed. So if you feed it two 32-bit instructions (on a 64-bit Athlon, for instance), then in theory, if those instructions really are 32-bit only and take the same number of CPU cycles, the CPU can return both results at once. Without this, the CPU would accept one instruction at a time and reply accordingly.
Does the Intel CPU return better MIPS in CPU tests? Yup. Most CPU benchmarks issue big calculation instructions and fill the whole execution pipeline, so no parallelization is possible (the thing AMD's execution optimization... and Intel's Hyper-Threading... depend on), and since the Intel CPU runs at a higher clock speed (all those GHz), the results favour it. Still, in real life, unless you are rendering 3D, AMD holds the ground in truly usable speed, especially under a good operating system that takes advantage of this and doesn't cripple RAM while using it.
It's simple if you think about it.
Both the AMD Athlon 64 running at 2 GHz and the Intel Core 2 at 2.5 GHz are 64-bit architectures. If they both get two 32-bit instructions, the Core 2 hands the first instruction to the real CPU and the second to the Hyper-Threading virtual CPU... and does this at 2.5 GHz.
The AMD, at the same time, receives the two instructions at once into its single core, side by side, and then processes each of them internally at double speed. So the 0.5 GHz the AMD gives up is compensated by the fact that, internally, it reads and writes instructions, results and data twice as fast. If, however, you send a full 64-bit calculation, neither CPU can parallelize the execution... so the advantage of the double data rate inside the Athlon is gone, the only thing left in play is GHz... and Intel has more!
So, to conclude this first "lesson":
Linux on a good hardware architecture will multitask way better than Windows because:
- AMD has the memory controller inside the CPU, and a direct memory connection as a result.
- Linux can take direct advantage of AMD's memory bandwidth and CPU features, because the kernel is matched to the CPU and hardware.
- The kernel is lighter because it is hardware-matched.
- The kernel doesn't need a lot of inter-module memory-protection overhead because it is "monolithic" in part.
- Most Linux code is written in C/C++, so it carries no .NET weight behind it (and neither does the operating system).
- Linux manages memory; Windows juggles things until it "starts to drop"... or crash :S.
The P.S. part :)
#REFERENCE nr1#:
Comment: You like the Amiga a lot. Are you implying one can still buy one?
Reply: Yup and no. Yes, you can still use an Amiga today. Yes, there are still hardware and software updates that keep the Amiga alive.
No, not the Commodore USA machines, as those are just another Wintel computer with the Amiga name on it... a grotesque thing for a purist like me.
Keep in mind that the Amiga was so far ahead that, if you are looking to buy a computer that is 10 years into the future, there is no Amiga to buy. The NATAMI project is the best so far, but from what I've read it's just an update of the old Amiga... good and faithful, but not the BANG the Amiga was and remained until Commodore went under. The new Amiga can't just be an update, because the old one with today's hardware mods can already do that! The new Amiga has to show today what Wintels will do 10 years from now.
Maybe I can gather enough money to build it myself... I've got the basic schematics and hardware layout, and I call it Project TARA (The Amiga Reborn Accurately).
This article is co-authored with David Turner David watched my youtube video showing of a workstation running Linux and multitasking beyond what is expected for that hardware.
David started communicating with me as he has the same hardware base I have and uses windows, so we was both curious, confused... and I think that part of his brain was telling him "fake... it's got to be another fake youtube crap movie".
So I channelled him to this blog and the latest post at the time about the Commodore Amiga and its superiority by design. Dave replied with a lot of confusion as most of the knowledge in it was too technical. We then decided that I would write this article and he would criticize-me whenever I got too technical and difficult to understand, forcing-me to write more "human" and less "techye".
Se he is co-author as he is criticising the article into human readable knowledge. This article will be split into lessons and so this will change with time into a series of articles.
Note, this article will change in time as David forces-me to better explain things. Don't just read-it once and give-up if you don't understand, comment, and register to be warned about the updates.
Starting things up...(update 1)
Lesson 1 : The hardware architecture and the kernels.
Hardware architecture, is always the foundation of things. You may have the best software on earth, but if it runs on a bad hardware...instead of running, it will crawl.
Today's computers are a strange thing to buy. There is increasingly less support for NON-intel architectures, which is plain stupid, because variety will generate competition instead of monopoly, competition will generate progress and improvement. Still most computers today are Intel architecture.
Inside the Intel architecture world, there is another heavy weight that seems to work in bursts. That would be AMD.
AMD started as an Intel clone, and then decided to develop technology further. They were the first to introduce 64bit instructions and hardware with the renown Athlon64. At that time, instead of copying Intel, AMD decided to follow their own path and created something better than Intel. Years latter, they don-it again with the multi-core CPU. As expected, Intel followed and got back on the horse, so now we have to see AMD build more low budget clones of Intel until they decide to get back on the drawing board and innovate.
So what is the main difference between the 2 contenders on the Intel Architecture world?
Back on the first Athlon days, Intel focus development on the CPU chip as pure speed by means of frequency increase. The result is that (physics 101) the more current you have passing on a circuit with less purity of copper/gold/silicon, the more atoms of resisting material will be there to oppose current and generate heat. So Intel developed ways to use less and less material (creating less resistance, requiring less power and generating less heat) that's why Intel CPU have a dye size smaller than most competitors 65nm, 45nm, 37nm and so on. For that reason, they can run at higher speeds and that made Intel development focus not on optimizing the way the chip works, but rather the way they build the chips.
AMD on the other hand doesn't have the same size as Intel, and doesn't sell as much CPUs, so optimizing chip fabrication would have a cost difficult to return. The only way was to improve chip design. That's why Athlon chip would be faster at 2ghz than an Intel at 2.6 or 2.7ghz...it was better in design and execution of instructions.
Since the market really don't know what they buy and just look at specs, AMD was forced to change their product branding to the xx00+... 3200+ meaning that the 2.5gh chip inside, would be compared to (at least) a pentium 3.2ghz in performance. That same branding evolved to the dual core. Since Intel publicized their Hyper-threading CPU (copying the AMD efficiency leap design, but adding a new face to it called the virtual CPU) AMD decided to evolve into the dual core CPU (Intel patented the HyperThreading and thow using the AMD design as inspiration, they managed to lock them out of the marketing to use their own designs.... somehow I feel that Intel has really a lot to do with today's Apple!)... and continued calling it the 5000+ for the 2 core 2500+ 2gh per core CPU.
So to this point in time the AMD and Intel could compete in speed of CPU, the AMD athlon64 5000+ dual core @ 2gh per core would be as fast as an Intel Core2Duo dual core @2.5Ghz!? Not quite. Speed is not always about the GHz as AMD already proved with the Athlon superior design.
At some point in time, your CPU needs to Input/output to memory, and this means the REAL BIG difference in architecture between AMD and Intel.
Intel addresses memory through the chip-set (with the exception of the latest COREix families). Most chip-sets are designed for the consumer market, so they were designed for a single CPU architecture. AMD, again needing to maximize production and adaptability designed their Athlon with an built in memory controller. So the Athlon has a direct path (full bandwidth, high priority and very very fast) to memory, while Intel has to ask the chip-set for permission and channel memory linkage through it. This design removes the chip-set memory bandwidth bottleneck and allows for better scalability.
The result? look at most AMD Athlon, Opteron or Phenom multi-CPU boards and find one memory bank per CPU, while Intel (again) tried to boost the speed of the chip-set and hit a brick-wall immediately. That's why Intel motherboards for servers rarely go over the 2 CPU architecture, while AMD has over 8CPU motherboards. Intel and it's race for GHz rendered it less efficient and a lot less scalable.
If you always stopped to think how intel managed a big performance increase out of the CORE technology (that big leap that CORi3, i5 and i7 have when compared to the design it's based on - the Core2Duo and Core2Quad), then the answer is simple... they already had Ghz performance, when they added a DDR memory controller to the CPU, they jumped into AMD performance territory! Simple, and effective...with much higher CPU clock. AMD had sleep for too long, and now intel rules the entire market in exception for the super computing world.
The Video and the AMD running Linux.
This small difference in architectures play an important role in the Video I've shown with the Linux being able to multitask like hell. The ability to channel data to and from memory directly means the CPU can be processing a lot of data in parallel and without asking(and waiting for the opportunity) the chip-set to move data constantly.
So the first part of this first "lesson" is done.
Yes, today's Intel Core i5 and i7 is far more efficient than AMD equivalence, but still not as scalable, meaning that in big computing, AMD is the only way to go in the x86 compatible world. AMD did try that next leap with the APU recently, but devoted too much time on the development of the hardware and forgot about the software to run-it properly. And I'll leave this to the second part of this "lesson". They also choose ATI as it's partner for GPUs... Not quite the big banger. NVIDEA would be the ones to choose. Raw power of processing power is NVIDEAs ground, while ATI is more focused on the purity of colour and contrast. So when AMD tried to fuse the CPU and the GPU (creating the APU), they could have created a fully integrated HUGE processing engine... but instead they just managed to create a processing chip-set. Lack of vision? Lack of money? Bad choice in the partnership (as NVIDEA is the master of GPU super computing)? I don't know yet... but I screamed "way to go AMD" when I heard about the concept... only to shout "stupid stupid stuuupid people" some months later when it came out.
The software architecture to run on the hardware architecture.
Operating systems are composed of 2 major parts. The presentation layer (normaly called GUI, or Graphical User Interface) which is the one communicating between the user (and the programs) to the Kernel layer. And obviously the kernel layer that will interface between the presentation layer and the hardware.
So...windows and pictures and icons apart, the most important part of a computer next to the hardware architecture, is the kernel architecture.
There are 4 types of kernels:
- MicroKernel - This is coded in a very direct, and simple way. It is built with performance in mind. Microkernels are normally included into routers, or printers, or simple peripherals that have specific usage and don't need to "try to adapt to the user". They are not complex and so eat very little CPU cycles to work, meaning speed and efficiency. They are however very inflexible.
- Monolithic Kernels - Monolithic Kernels are BIG and heavy. They try to include EVERYTHING in it. So it's a kernel very easy to program with, as most features are built in and support just about any usage you can thing of. The down side is that it just eats up lot's of CPU cycles while verifying and comparing things because it tries to consider just about every possible usage. Monolithic kernels are very flexible at the cost of a lot of memory usage and heavy execution.
- Hybrid Kernels - The hybrid-kernel type is a mix. You have a core kernel module that is bigger than the rest, and while loading, that module controls what other modules are loaded to support function. These models are not as heavy as the monolithic, as they only load what they need to work with, but they have to contain a lot of memory protection code to avoid one module to use other modules memory space. So they are not as heavy as the Monolithic, but not necessarily faster.
- Atypical kernels - Atypical kernels are all those kernels out there that don't fit into these categories, mainly because they are too crazy, too good or just too exquisite to be sold in numbers big enough to create their own class. Examples of these are brilliant Amiga kernels and all the wannabes sprung by it (BEOS, AROS, etc), Mainframe operating system kernels and so on.#REFERENCE nr1 (check the end of the article)#
For the record, I personally consider the Linux to be an atypical kernel. A lot of people think the Linux is Monolithic and would be right...in part. Some others would consider it to be Hybrid and be right...in part.
The linux kernel is a full monolithic code block as a monolithic kernel, however, that kernel is hardware match compiled. When you install your copy of Linux, the system probes the hardware you have and then chooses the best code base to use for it. For instance why would you need the kernel base to have code made for the 386 CPU, or the Pentium mmx if you have a Core2Duo, or an AMD Opteron64? The Linux kernel is matched to your CPU and the code is optimized for it. When you install software that needs a direct hardware access (drivers, virtualization tools, etc) you need the source code for your kernel installed and a c++ compiler for one simple reason ->The kernel modules installed to support those calls to hardware are built into your new kernel and it is recompiled for you. So you have a Hybrid-made-monolithic kernel design. Not as brilliant as the Amiga OS kernel, but considering that the Amiga O.S. kernel needs the brilliant Amiga hardware architecture, the Linux kernel is the best thing around for the Intel compatible architecture.
Do I mean that Linux is better for AMD than Intel? Irrelevant! AMD is better than Intel if you need heavy memory usage. Intel is better than AMD if you need raw CPU power for rendering. Linux kernel is better than windows kernel...so comparing to today's windows, Linux is the better choice, regardless of architecture. However AMD users have more to "unleash" while converting to Linux, as windows is more Intel biased on purpose, and less memory efficient.
Resources are limited!
Why is Linux so much more efficient than windows with the same hardware?
Windows kernel is either monolithic (w2k, nt, win 9x) or hybrid (w2k3, xp, vista/7, w2k8, 8). However the base of a hybrid kernel is always the cpu instructions and commands and that is always a big chunk.
Since Microsoft made a crusade against the open-source, they have to keep with their "propaganda" and have a pre-compiled (and closed) CPU kernel module (and this is 50% of why I don't like Windows...they are being stubborn instead of efficient). So while much better that w2k, xp and 7 will still have to load-first a huge chunk of code that has to handle everything from the 386 to the future generations i7 cores and beyond. Meaning that they always operate in a compromised operation mode and will always have code in memory being unused. Microsoft also has a very closed relationship with Intel and tends do favor it against AMD, making any windows run better in Intel than AMD...this is very clear when you dig around AMD FTP and find several drivers to increase windows speed and stability on AMD CPUs...and find nothing like that on Intel. For some reason people call the PC a wintel machine.
So To start, Linux has a smaller memory footprint than windows, it has more CPU instruction-set usage than windows "compatibility mode", it takes advantage of AMDs excellent memory to CPU bus.
Apart from that there is also the way windows manages memory. Windows (up until the vista/7 kernel) was not very good managing memory. When you use software, the system is instancing objects of code and data in memory. Windows addresses memory in chunks made of 4kb pages. So if you have 8kb of code, it will look for a chunk with 2 memory pages of 4kb free and then use-it.... if however your code is made from 2 objects, one with 2kb and another with 10kb, windows will allocate a chunk with one page for the first one, and then a chunk of 3 pages to the second code. You'll consume 4+12kb = 16Kb for 12kb of code. This is causing the so called memory fragmentation. If your computer only had 16Kb of memory, in this last case you would not be able to allocate memory for the next 4kb code. Although you have 4Kb of free memory, it is fragmented into 2 and since it's non continuous, you would not have space to allocate the next 4kb.
The memory fragmentation syndrome grows exponentially if you use a framework to build your code on. Enter the .NET. .Net is very good for code prototyping, but as it's easy to code for, it is so because the guys building it created objects with a lot of functionality built into it (to support any possible usage)... much like the classinc monolithic kernel. The result is that if you examine memory, you'll find out that a simple window with a combo box and an ok button will mean hundreds if not thousands of objects instanced in memory...for nothing as you'll only be using 10% of the coded object's functionality.
Object Oriented programming creates Code objects in memory. A single "class" is Instanced several times to support different usage of the same object types but as different objects. After usage, memory is freed and returned to the operating system for re-usage.
Now picture that your code creates PDF pages. The PDF stamper works with pages that are stamped individually and then glued together in sequence. So your code would be instancing, then freeing to re-instance a bigger object, to free after and re-instance a bigger one...and so on.
For instance:
Memory in pages:
|-1K 1K 1K 1K-|-1K 1K 1K 1K-|-1K 1K 1K 1K-|-1K 1K 1K 1K-|-1K 1K 1K 1K-|-1K 1K 1K 1K-|-1K 1K 1K 1K-|-1K 1K 1K 1K-|-1K 1K 1K 1K-|
Your code:
code instance 1 6K
|-C1 C1 C1 C1-|-C1 C1 -|-
Then you add another object to support your data (increasing as you process it) called C2
code instance 2 10K
|-C1 C1 C1 C1-|-C1 C1 -|-C2 C2 C2 C2-|-C2 C2 C2 C2-|-C2 C2 -|-
Then you free your first instance as you no longer need it.
|- -|- -|-C2 C2 C2 C2-|-C2 C2 C2 C2-|-C2 C2 -|-
And then you need to create a new code instance to support even mode data called C3. This time you need 18Kb, so:
code instance 3 18K
|- -|- -|-C2 C2 C2 C2-|-C2 C2 C2 C2-|-C2 C2 -|-C3 C3 C3 C3-|-C3 C3 C3 C3-|-C3 C3 C3 C3-|-C3 C3 C3 C3-|...... and you've run out of memory!!
I know that today's computers have gigs of ram, but today's code also eat up megs of ram and we work video and sound and we use .Net to use it.... you get the picture.
Linux and Unix have a dynamic way to address memory and normally re-arrange memory (memory optimization and de-fragmentation) to avoid this syndrome.
In the Unix/Linux world you have brk, nmap and malloc:
- BRK - can adjust the chunk size end to match the requested memory so a 6k code would eat 6k instead of 8k
- malloc - can grow memory both ways (start and end) and re-allocate more memory as your code grows (something wonderful for object oriented programming because the code starts with little data, and then grow as the program and user starts working it). In windows this will either be handled with a huge chunk pre-allocation (even if you don't use-it), or by jumping your code instance from place to place in memory (increasing fragmentation probability). The only problem with malloc is that it is very good allocating memory and not so good releasing it. So nmap was entered into the equation.
- nmap - works like malloc but it's useful for large memory chunk allocation and it's also very good releasing it back. When you encode video or work out large objects in memory, nmap is the "wizard" behind all that Linux performance over windows. The more data you move in and out of memory, the more perceptible this is.
There is also something important to this. If you thing about this, who does the memory moving in an Intel architecture? The CPU... so even using windows, moving stuff around memory constantly, the AMD has better performance because of the in CPU memory controller while the Intel platform needs to channel everything through the chip-set.
The CPU architecture (both Intel and AMD) have, under normal conditions a "stack" of commands, and not all of them are using the entire CPU processing power, so Intel uses the "virtual processor" in hyper-threading, making 2 different code threads to be calculated at once, while AMD works it's architecture with simultaneous execution (everything from cache to CPU registers is parallel) and doubling the bus speed (100mhz bus, would work as 200mhz bus inside the CPU, allowing the system to divide or share CPU resources and communication from outside would happen at half speed of the processing speed. So if you enter 2x 32bit instructions (on a 64bit Athlon for instance), in theory, if those instructions are actually 32bit only and use the same amount of CPU cycles to be worked out, the CPU would return the result at once. Without this technology, the CPU would accept one instruction at a time and reply accordingly.
Does the Intel CPU return better MIPS on CPU tests? yup. Most of the CPU testing software's induce big calculation instructions and eat up all of the CPU execution stack, so, no parallelization is possible (that part of AMD execution optimization...and Intel Hyper threading), and since the Intel CPU runs a higher clock speed (all those GHz), the results favour them. Still in real life, unless you are rendering 3D, AMD has the ground in true usable speed. Especially if under a good operating system that takes advantage of this and doesn't cripple RAM as it uses it.
It's simple if you think about it.
Both the AMD Athlon 64 running at 2GHz and the Intel Core 2 at 2.5GHz have a 64-bit architecture. If they both get 2x 32-bit instructions, the Core 2 will give the real CPU to the first 32-bit instruction and the Hyper-Threading second virtual CPU to the second instruction... and will do this at 2.5GHz.
At the same time, the AMD would receive the 2 instructions at once into its one and only CPU, side by side, but would then process each instruction internally at double the speed. So the 0.5GHz the AMD lacks is compensated by the fact that, internally, it writes and reads instructions, results and data twice as fast. If, however, you send a full 64-bit calculation, neither CPU will be able to parallelize the execution stack... so the advantage of the double data rate inside the Athlon is gone and the only thing in play from that point on is GHz... and the Intel has more!
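To see how your own box presents this, lscpu (part of util-linux on basically any distro) reports the logical-versus-physical CPU topology; the grep pattern is just a convenience, adjust it to taste:
# "Thread(s) per core: 2" means SMT/Hyper-Threading is exposing two logical
# CPUs per physical core; "1" means every CPU the scheduler sees is a real core
lscpu | grep -Ei 'model name|socket|core|thread'
nproc   # total logical CPUs visible to the operating system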
So, to conclude this first "lesson":
Linux on a good hardware architecture will multi-task way better than Windows because:
- AMD has a direct memory controller in the CPU and, as a result, a direct memory connection.
- It can take direct advantage of AMD's memory bandwidth and CPU functions, because the kernel is CPU- and hardware-matched.
- The kernel is lighter because it is hardware-matched.
- The kernel doesn't need a lot of memory protection because it is partly "monolithic".
- Most code for Linux is done in C/C++, so it has no .NET weight behind it (nor does the operating system).
- Linux handles memory. Windows juggles things until it "starts to drop"... or crash :S.
The P.S. part :)
#REFERENCE nr1#:
Comment: You like Amiga a lot. Are you implying one can still buy one?
Reply: Yup and no. Yes, you can still use an Amiga today. Yes, you still have hardware updates and software updates today that keep the Amiga alive.
No, not the Commodore USA one, as that's just another Wintel computer with the Amiga name on it... a grotesque thing for a purist like me.
Keep in mind that the Amiga was so advanced that, if you are looking to buy a computer 10 years into the future, then you have no Amiga to buy. The NATAMI project is the best so far but, from what I've read, it's just an update of the old Amiga... good and faithful, but not the BANG the Amiga was and kept being until Commodore went under. The new Amiga can't just be an update, because the old one with today's hardware mods can already do that! The new Amiga has to show today what Wintels will do 10 years from now.
Maybe I can gather enough money to build it myself... I've got the basic schematics and hardware layout, and I call this Project TARA (The Amiga Reborn Accurately).
Wednesday, October 24, 2012
Renaissance in the silicon world happened looong ago. It's just that very few noticed it.
The renaissance of the silicon... nooo, I'm not talking about Lola Ferrari nor Pamela Anderson and not even Anna Nicole Smith. I am talking about the computer renaissance.
A lot of people think that this day and age are the days of the renaissance. They are wrong. The computer renaissance happened long ago. It's just that very few noticed it. Those are the lucky ones who were blessed with a true silicon-based Leonardo da Vinci workshop. And out of those, the ones that were able to get the full picture bloomed into a da Vinci type of brain.
Why do I state this in the MultiCoreCPU and ZillionCoreGPU world, where your refrigerator's chip is more powerful than the early IBM mainframes? Well, bear with me for a couple of minutes and continue reading.
The Renaissance was not about vulgar displays of power, but rather an era of intellectual growth and multiplicity of knowledge. The Renaissance created some of the world's best-ever polymaths (people that master several areas of knowledge and have open-to-knowledge minds)... such as Leonardo da Vinci.
So back to this day and age. The Core i7 has multiple cores of processing power able to process around 100 GigaFLOPS, and an Nvidia card can have 512 GPU cores and kick out around 130 GigaFLOPS of parallel processing power. Today we play 3D games rendered at 50 frames per second in resolutions exceeding the 1900x1200 mark, while back in the early 90's the best desktop computer would take 48 hours to render one 640x480 frame.
Still, when did we leap from the "electronic typewriter" linked to an amber display with rudimentary graphics to the computer that can render graphics in visual quality, produce video, produce sound, play games... and is still capable of word processing and spreadsheets? Because that was the turning point. That was the computer renaissance.
Still following me? It's difficult to pinpoint exactly when all this started and which brand kicked it off.
Some say that it was Steve Jobs and the early '84 Macintosh... and though not entirely wrong, they are far from actually being right. The first "Mac" had an operating system copied from the XEROX project (that same project that Microsoft later bought from XEROX and spawned into MS Windows 1)... and that first Mac design was actually fathered by Jef Raskin (who left the LISA project), while only after the first prototype did Steve Jobs gain interest in the Mac project, also leaving the LISA project... and kicking Jef out of the Mac project (some character, this Jobs boy).
The next logical contestant is the Commodore VIC-20. It was aimed straight at the Mac's market, and with some success. But it was still not exactly able to kick off the renaissance era, much like the first Mac.
So... was it the Commodore C64/128 family? Ahhh, now we are talking more about the kind of flexibility needed to kick off that so-much-needed renaissance, yet still short on ambition. They were brilliant gaming machines with some flexibility, but not enough guts to take it all the way.
Most would now be shouting "ATARI... the ATARI ST" and would be... wrong. It's a good machine with a too-conventional-to-bloom architecture. Good? Yes! Brilliant? No!
It's clear by now that the computer renaissance podium is taken by the Commodore Amiga. I'm not talking about the late 90's 4000... nor the 1200... or the 600... or even the world-renowned 500. I'm referring to the Amiga architecture. And that dates back to the very first A1000 (yes, the A1000 has a lower spec than the A500, and it's the father of them all).
The Amiga (unlike what most will think) is not:
- ATARI technology stolen by engineers leaving the company
- Commodore's own technology
The Amiga Corporation project started life in 1982 as Hi-Toro, and the Amiga itself as the Lorraine game machine. It was a startup company with a group of people gathered by Larry Kaplan, who "fished" Jay Miner and some other colleagues (some from Atari) that were tired of ATARI's management and disappointed with the way things were headed. Jay (called the father of the Amiga, though what he actually fathered was its brilliant architecture) was able to choose passionate people who were trying to do their absolute best.
They were not worried about chip power, as that was something Moore's law would take care of (in time), but rather about the flexibility of the chip design and the flexibility of the architecture design.
They were not worried about software features (another thing the community would pick up in time), but rather about building a flexible and growable base.
And above all, I think, they were totally committed to giving the ability to code for the Lorraine console out of the box, with the Lorraine console itself (unlike the standard back then, when everything was done on specific coding workstations... and if you think about it, much like any non-computer device today)... and that bloomed into what was later called the Amiga Computer.
The TEAM
Jay chose an original team of very dedicated people, committed to excel.
The team has changed over the years and the full Amiga evolution history has a huge list of people (source: http://www.amigahistory.co.uk/people.html):
Mehdi Ali- A former boss at Commodore who made a number of bad decisions, including cancelling the A3000+ project and the release of the A600. He has been largely blamed for the fall of Commodore during 1994 and is universally disliked by most Amiga users.
Greg Berlin- Responsible for high-end systems at Commodore. He is recognised as the father of the A3000.
David Braben- Single-handedly programmed Frontier: Elite II and all round good egg.
Andy Braybrook- Converted all his brilliant C64 games to Amiga, and got our eternal thanks.
Martyn Brown- Founder of Team 17. Not related to Charlie.
Arthur C. Clarke- Author of the famous 2001AD book and well known A3000 fan.
Jason Compton- Amiga journo, responsible for the brilliant Amiga Report online mag.
Wolf Dietrich- head of Phase 5 who are responsible for the PowerUP PowerPC boards.
Jim Drew- Controversial Emplant headman who has done a great job of bringing other systems closer to the Amiga.
Lew Eggebrecht- Former hardware design chief.
Andy Finkel- Known as the Amiga Wizard Extraordinaire. He was head of Workbench 2.0 development, as well as an advisor to Amiga Technologies on the PowerAmiga, PPC-based Amiga system. He currently works for PIOS.
Fred Fish- Responsible for the range of Fish disks and CDs.
Steve Franklin- Former head of Commodore UK.
Keith Gabryelski- head of development for Amiga UNIX who made sure the product was finished before faxing the entire Amiga Unix team's resignation to Mehdi Ali.
Irving Gould- The investor that allowed Jack Tramiel to develop calculators and, eventually, desktop computers. He did not care about the Amiga as a computer but saw the opportunity for computer commodification with the failed CDTV.
Simon Goodwin- Expert on nearly every computer known to man. Formerly of Crash magazine.
Rolf Harris- Tie me kangaroo down sport etc. Australian geezer who used the Amiga in his cartoon club.
Allen Hastings- Author of VideoScape in 1986, who was hired by NewTek to update the program for the 90's creating a little known application called Lightwave, the rendering software that for a long time was tied to the Video Toaster. This has made a huge number of shows possible, including Star Trek and Babylon 5.
Dave Haynie- One of the original team that designed the Amiga. Also responsible for the life saving DiskSalv. He has been very public in the Amiga community and has revealed a great deal about the proposed devices coming from Commodore in their heyday. His design proposal on the AAA and Hombre chipsets show what the Amiga could have been if they had survived. He also played an important part in the development of the Escom PowerAmiga, PIOS, and the open source operating system, KOSH.
Larry Hickmott- So dedicated to the serious side of the Amiga that he set up his own company, LH publishing.
John Kennedy- Amiga journalist. Told the Amiga user how to get the most of their machine
Dr. Peter Kittel- He worked for Commodore Germany in the engineering department. He was hired by Escom in 1995 for Amiga Technologies as their documentation writer and web services manager. When Amiga Technologies was shut down he went to work for the German branch of PIOS.
Dale Luck- A member of the original Amiga team and, along with R.J. Mical wrote the famous "Boing" demo.
R. J. Mical- member of the original Amiga Corp. at Los Gatos and author of Intuition. He left Commodore in disgust when Commodore chose the German A2000 design over the Los Gatos one, commenting "If it doesn't have a keyboard garage, it's not an Amiga."
Jeff Minter- Llama lover who produced some of the best Amiga games of all time and has a surname that begins with mint.
Jay Miner(R.I.P.)- The father of the Amiga. Died in 1994. Before his time at Amiga Corp. he was an Atari engineer and created the Atari 800). He was a founding member of Hi-Toro in 1982 and all three Amiga patents list him as the inventor. He left Amiga Corp after it was bought by Commodore and later created the Atari Lynx handheld, and during the early 1990's continued to create revolutionary designs such as adjustable pacemakers.
Mitchy- Jay Miner's dog. He is alleged to have played an important part in the decision making at Amiga Corp. and made his mark with the pawprint inside the A1000 case.
Urban Mueller- Mr. Internet himself. Solely responsible for Aminet, the biggest Amiga (and, some say, computer) archive in existence. Responsible for bringing together Amiga software in one place, he deserves to be worshipped, from afar.
Peter Molyneux- Responsible for reinventing the games world with Syndicate and Populous. He is also famed for being interviewed in nearly every single computer mag imaginable IN THE SAME MONTH.
Bryce Nesbitt- The former Commodore joker and author of Workbench 2.0 and the original Enforcer program.
Paul Overaa- Amiga journalist. Helped to expand the readers knowledge of the Amiga.
David Pleasance- the final MD of Commodore UK and one-time competitor for the Amiga crown. Owes me 1 PENCE from World of Amiga '96.
Colin Proudfoot- Former Amiga buyout hopeful.
George Robbins- He developed low-end Amiga systems such as the unreleased A300, which was turned into A600, the A1200 and CD32. He was also responsible for Amiga motherboards including B52's lyrics. After losing his driver's license, Robbins literally lived at the Commodore West Chester site for more than a year, showering in sinks and sleeping in his offices.
Eric Schwartz- Producer of hundreds of Amiga artwork and animations.
Carl Sassenrath- helped to create the CDTV, CDXL and has recently developed the Rebol scripting language.
Kelly Sumner- Former head of Commodore UK. Now head of Gametek UK.
Bill Sydnes- A former manager at IBM who was responsible for the stripped down PCjr. He was hired by Commodore in 1991 to repeat that success with the A600. However, at the time the Amiga was already at the low-end of the market and a smaller version of the A500 was not needed.
Petro Tyschtschenko- Head of Amiga International, formerly Amiga Technologies. Responsible for keeping the Amiga on track since 1995.
So why was this such a brilliant machine?
It starts with the hardware.
The Amiga was based on the most flexible CPU of its time: the Motorola MC68000 family. Motorola had the MC680x0 CISC CPU and MC68881/MC68882 FPU combination for workstations, and the MC88000 RISC CPU family for Unix workstations. That DNA later fused, together with IBM's RS/6000 series RISC, into the PowerPC platform. Now some of you may say "yeah, the PowerPC was such a flop that not even Apple and IBM stuck with it" and be ultimately wrong about it. The PowerPC's problem was its huge power consumption and dissipation when CPU production couldn't go beyond 90nm miniaturization. A complex design with a lot of big transistors eats up power and ultimately generates heat. That's why it got stuck. Ever tried to think why today's CPUs go multi-core and rarely above 3GHz? Yup... better to split the design and not let things get too hot... and today's CPUs are built at a 22nm die size.
The PowerPC is very much alive. Inside your Xbox 360, your Nintendo Wii and your PlayStation 3 lives a 65nm PowerPC, in configurations from single to triple core. There is even a 2GHz dual-core PowerPC from Palo Alto Semiconductor... and IBM... just check their servers running non-Microsoft software and drool all over the PowerPC CPU specs.
OK the CPU was important but was it all? NO!
The heart of the Amiga is called the AGNUS (later the Fat AGNUS, and the Fatter AGNUS) processor. That's Jay Miner's most valuable DNA... and it ultimately gave him the title of "father of the Amiga".
Consider the Agnus as a blazing fast and competent switchboard operator.
On one side you have the CPU bus, on the other side the memory bus and even a chipset bus, all converging on the Agnus. What's the catch? Well, picture this: you want to play a tune while working on your graphics on the Amiga. The CPU loads the tune to memory and then instructs the Agnus to stream that memory bank to the audio DAC chip. By doing that, the CPU is then free for all its other processing needs. This is just one example. Graphics was actually the most commonly used example of the Agnus chip, but it could do just about anything. That's why you have Amiga machines with add-on CPU cards running at different speeds and all in sync. The Agnus is the maestro.
The Agnus, shown here in its die-miniaturized form, started life as a very complex set of boards to imprint Jay's brilliance. Just take a good look at the complexity:
This was the true heart of the Amiga and its brilliant architecture, capable of true multitasking (instead of time-shared multitasking).
There were other chips for I/O, sound and graphics, but they all had a huge highway-like link directly to memory, at the hands of the Agnus.
These are pictures of the early prototypes and design sketches:
Then we get to the software.
The Amiga OS was built to take advantage of this brilliant design. Most kernels are built around 3 base kernel models (monolithic, microkernel and hybrid kernels).
In short, a monolithic kernel is big and has all the software packages needed to control the hardware and provide software functions (some call the Linux kernel monolithic... it is... ish... the Linux kernel is compiled for the hardware and the requested modules, so it is actually a hybrid made monolithic). The microkernel is often seen on routers and simple devices that run a very fast yet feature-light kernel design. The hybrid is the kernel type that has a big chunk for the basic CPU and chipset functionality and then loads small microkernels as needed, depending on the hardware available.
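You can actually see that "hybrid made monolithic" nature on any Linux box (assuming the usual distro layout under /lib/modules): the core is one image, but it only loads the driver modules your hardware actually asked for:
uname -r                                      # the kernel build you are running
lsmod | head                                  # the modules loaded for THIS hardware
ls /lib/modules/$(uname -r)/kernel/drivers    # everything it could have loaded instead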
The Amiga kernel, on the other hand, is a beast in a league of its own (followed by BeOS, AROS, MorphOS and AtheOS/Syllable).
It is a microkernel design, but it threads each and every module. So from the kernel to the software you run, each and every one of them is a separate execution thread on the CPU, with its own switches to memory through the Agnus and its own address space. It's hugely fast and stable, the only drawback being the need for coders to respect their given memory space (if not, the code can write over loaded kernel memory and crash the system... giving the Amiga's well-known "Guru Meditation" error).
So this is the superior architecture that spawned the computer renaissance and allowed for the bloom that created the computers we have today.
Today, kids at school have a lot of the knowledge that Leonardo da Vinci had. Back in the Renaissance era he was one of the few who had that knowledge... today everyone has at least a good part of it.
Take this thought into the computer world, with its progress timeline on steroids, and you'll be comparing the 80's Commodore Amiga's polymath-ability to today's computers and even mobile phones. The Amiga was 10 to 15 years ahead of everything else out there... and in terms of hardware architecture, it still is.
I was one of the lucky ones that migrated from the C64 to the Amiga 500... and only 4 years later was I given an IBM PC. Had the Amiga been replaced by a PC (the Olivetti PC1 and the Schneider EURO-PC were big back then), my brain would have been closed into the "electronic typewriter" sad reality. I am a polymath today because of the Amiga. It was the tool that (together with my parents' investment in excellent and varied education) formed my brain in an open and exploratory way. Thank you, Amiga.
References used for pictures and team list:
http://uber-leet.com/HistoryOfTheAmiga/
http://www.amigahistory.co.uk/people.html/
Saturday, September 22, 2012
IPhone5 vs the Samsung Galaxy S3
The eternal market illusion fight... for some reason WWF just popped into my mind.
There is a reason for me not writing "the eternal battle". The reason is that there is no fight if, instead of placing the fighters against each other inside a ring, in a controlled environment, you just sit them in front of a TV screen and let them brag their strong points off at one another.
In a true fight, each fighter has to be in the same weight class and age class, or it would be an annihilation instead of a fight. Today's iPhone vs Samsung Galaxy is a fight as such. It's as unbalanced as it was back when Apple launched the iPhone; it has just inverted over time... and it's all Apple's fault.
iPhone 5 = iPhone 4s = iPhone4 = iPhone 3 = iPhone 2 = iPhone
And the 4 to 4s "evolution" is so stupid that they might just drop their act and call today's phone the iPhone sssss!
The first iPhone WAS a clear evolution for all smartphones, and I'm grateful to Apple because they kicked off the actual smartphone revolution. But that's just about it. The phone was good at gathering in one package several good ideas that weren't even developed by Apple, but they had the vision to pick them and integrate them into a good product. This is an important part.
A lot of people talk about Apple's better and unique design... sure thing.
Still on the design, the iPhone was a copy of the Samsung F700, launched a full half year earlier than the first iPhone.
And a lot more talk about the grid-like icons... which Nokia also had years before the iPhone, as did Ericsson with the M610i, P1i and P700.
And more will say that iOS is the thing... well, iOS is the cut-down version of the famous Mac OS X. Now, OS X is a brilliant concept. Apple decided to cut down hardware and software production costs in order to free resources for the concept team. So they did a very bad thing (they switched to the crappy Intel platform, abandoning the mighty RISC architecture), but also an excellent thing, developing Mac OS X on top of the UNIX FreeBSD kernel. So while Microsoft was on a "crusade against" open source, Apple went right the opposite way. Apple realized that what they were actually good at was the graphical user interface and the conceptual look & feel thing. So they grabbed a robust and efficient kernel (FreeBSD) and built on top of it. Brilliant, but ultimately an improved copy of something that wasn't made by them.
Still on the iOS subject, the slide menu navigation that makes users love it so was present in the very first Android OS preview... and that was released 3 years earlier.
Some will say that "slide to answer" is an Apple improvement, but 2 years before the iPhone any device running the Microsoft Windows CE platform would do that :S. You see, Apple designs good GUIs, but Microsoft conquered the world precisely by building good GUIs. And in the meantime, it even managed to save Apple from bankruptcy.
This goes on and on... Siri? Ever wondered why Iris surged against it so fast in the Android market? Well, because the code was already present in the Xiaoi Bot Android app... since 2010!?
Conclusion: Apple is excellent at copying separate ideas from others and integrating them into a full winning product.
So back to the text:
From that point in history forward, the iPhone is just something with no place. As time passed, Apple developed it in progressively smaller steps while maintaining or increasing the cost.
Today, it costs twice as much as it should and delivers half as much as it should... so in comparison with the Android rivals, the iPhone is merely 1/4 of what others are offering.
Is the S3 perfect? No. There is a big problem with Samsung's high-end phones: they are overpriced. Not twice the price they are worth in hardware (like anything from Apple), but still a good 20 to 30% over. It's getting trendy, and market trends are a stupid reason to charge more than the necessary profit.
Back in the iPhone 4S vs Galaxy S2 war, I used to advise people to buy the LG Optimus 2X. It had the right price for the hardware.
So, S3 vs iPhone? Yup, that's right... the question is also the answer. There is no iPhone 5; it is just a repackaged, overpriced iPhone.
It's got no chance against the pinnacle of Android phone evolution to date.
So why does the iPhone sell? Because human beings are, in their majority, strangely illogical.
Why do Gucci shoes sell? You can buy a pair of Nikes for 1/100000000th of the price. You would run better, walk better, feel better, and your bank account would be infinitely better.
Gucci sells because it's Gucci, and for some reason some Hollywood star told everyone that they look better and cost millions more because they are exquisite.
Period!
The iPhone sells because that same Hollywood star uses an iPhone. The latest Samsung ad puts this fact in evidence... the iPhone fans are pictured as futile, dumb and un-knowledgeable people (just like the typical Hollywood stereotype)... while the Samsung users are sensible, self-aware and intelligent people.
Like any ad it's an exaggeration, but sadly this does represent over 50% of Apple fans. Still, Samsung should watch their own sales department and cut their phone prices by at least 15 to 20% if they don't want to get into what I call the "Apple zone" - too expensive for what's on offer.
Sunday, January 15, 2012
Recover files from a VMFS virtual volume inside an iSCSI virtual volume
The problem:
Some time ago, I had a really bad RAID failure. Since I use my VMware ESXi hardware just to boot from a pen drive and connect to my QNAP NAS via iSCSI, a RAID failure means the array degrades and enters read-only mode. Linux virtual machines keep working, with errors in some packages, but Windows virtual machines freeze within a couple of seconds.
The Solution:
Since I'm not rich and buying a full NAS system to copy from the read-only one to a new one is out of the question, I bought 2x 2TB USB drives, connected them to my second NAS and moved its files out to the drives, making room to move the files from one NAS to the other.
Now, up to this point it's all very simple: grab NAS02, have a desktop machine connect to both the iSCSI target on NAS01 and the NFS share on NAS02, and move the files from one to the other... and you start by installing open-iscsi.
HOWEVER, VMware hypervisors use a specially designed file system called VMFS, so simply mounting your iSCSI volumes will get you nowhere.
That's when vmfs-tools gets into action. Install the vmfs-tools package and then all you have to do is mount the drives. It's a crazy operation, but it works. Basically, you'll be exposing the iSCSI volume as a local block device and then mounting the VMFS volume it contains into a directory of its own.
Better have Gigabit Ethernet for the next part: after this, mount the NFS share from the second NAS and copy everything... for a couple of days, depending on your network speed :S (the condensed recipe is sketched right below; the full command log is further down).
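Condensed, the whole recipe looks more or less like this. The package names are the Debian/Ubuntu ones, the IPs and the IQN are the ones from my own log below, and /dev/sdb1 is just an example device name - check dmesg to see which device the LUN got on your box:
# iSCSI initiator, the FUSE-based VMFS reader and the NFS client
sudo apt-get install open-iscsi vmfs-tools nfs-common
# discover the targets published by NAS01 and log in to the datastore LUN
sudo iscsiadm --mode discovery --type sendtargets --portal 10.0.101.247
sudo iscsiadm --mode node --targetname iqn.2004-04.com.qnap:ts-639:iscsi.vmwareisdata.bd99c2 --portal 10.0.101.247:3260 --login
# the LUN shows up as a new block device; mount its VMFS partition through FUSE
sudo mkdir -p /iscsi/4 /mnt/nfs
sudo vmfs-fuse /dev/sdb1 /iscsi/4
# mount the NFS share exported by NAS02 and copy the VM folders across
sudo mount -t nfs 10.0.101.246:/RECOVER /mnt/nfs
sudo rsync -a --progress /iscsi/4/ /mnt/nfs/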
Sources I used to study:
- http://planetvm.net/blog/?p=1592
- http://www.cyberciti.biz/faq/howto-setup-debian-ubuntu-linux-iscsi-initiator/
- http://www.howtoforge.com/using-iscsi-on-ubuntu-9.04-initiator-and-target
How did I mount the iSCSI and then the VMFS virtual volumes on my Linux desktop? Here goes the command LOG:
administrator@RWS01:~$ su
Password:"SECRET"
root@RWS01:/home/administrator# /etc/init.d/open-iscsi start
* Setting up iSCSI targets [ OK ]
root@RWS01:/home/administrator# mount 10.0.101.246:/RECOVER /iscsi/rec
root@RWS01:/home/administrator# /etc/init.d/open-iscsi restart
* Disconnecting iSCSI targets [ OK ]
* Stopping iSCSI initiator service [ OK ]
* Starting iSCSI initiator service iscsid [ OK ]
* Setting up iSCSI targets [ OK ]
root@RWS01:/home/administrator# iscsiadm --mode discovery --type sendtargets --portal 10.0.101.247
iscsiadm: config file line 42 do not has value
iscsiadm: config file line 43 do not has value
iscsiadm: config file line 56 do not has value
iscsiadm: config file line 57 do not has value
(... the same four config file warnings repeat several more times ...)
10.0.101.247:3260,1 iqn.2004-04.com.qnap:ts-639:iscsi.vmwareisdata.bd99c2
10.0.101.247:3260,1 iqn.2004-04.com.qnap:ts-639:iscsi.vmwareboots.bd99c2
root@RWS01:/home/administrator# /etc/init.d/open-iscsi restart
* Disconnecting iSCSI targets [ OK ]
* Stopping iSCSI initiator service [ OK ]
* Starting iSCSI initiator service iscsid [ OK ]
* Setting up iSCSI targets [ OK ]
root@RWS01:/home/administrator# tail -f /var/log/messages
Nov 23 01:44:01 RWS01 kernel: [ 391.203611] sd 22:0:0:0: reservation conflict
Nov 23 01:44:01 RWS01 kernel: [ 391.203638] sd 22:0:0:0: [sdc] READ CAPACITY failed
Nov 23 01:44:01 RWS01 kernel: [ 391.203640] sd 22:0:0:0: [sdc] Result: hostbyte=DID_OK driverbyte=DRIVER_OK
Nov 23 01:44:01 RWS01 kernel: [ 391.203643] sd 22:0:0:0: [sdc] Sense not available.
Nov 23 01:44:01 RWS01 kernel: [ 391.206527] sd 22:0:0:0: reservation conflict
Nov 23 01:44:01 RWS01 kernel: [ 391.207253] sd 22:0:0:0: reservation conflict
Nov 23 01:44:01 RWS01 kernel: [ 391.208968] sd 22:0:0:0: reservation conflict
Nov 23 01:44:01 RWS01 kernel: [ 391.209013] sd 22:0:0:0: [sdc] Test WP failed, assume Write Enabled
Nov 23 01:44:01 RWS01 kernel: [ 391.211881] sd 22:0:0:0: reservation conflict
Nov 23 01:44:01 RWS01 kernel: [ 391.211903] sd 22:0:0:0: [sdc] Attached SCSI disk
tail -f /var/log/messages
^C
root@RWS01:/home/administrator# mkdir /iscsimount
root@RWS01:/home/administrator# mount /dev/sdb1 /iscsimount
mount: you must specify the filesystem type
root@RWS01:/home/administrator# sudo vmfs-fuse /dev/sdb1 /iscsi/4
root@RWS01:/home/administrator# sudo ls /iscsi/4 -alh
total 4.0K
drwxr-xr-t 9 root root 2.0K 2010-06-05 21:06 .
drwxr-xr-x 9 root root 4.0K 2010-11-22 20:18 ..
-r-------- 1 root root 2.5M 2010-02-28 18:44 .fbb.sf
-r-------- 1 root root 61M 2010-02-28 18:44 .fdc.sf
-r-------- 1 root root 244M 2010-02-28 18:44 .pbc.sf
-r-------- 1 root root 249M 2010-02-28 18:44 .sbc.sf
-r-------- 1 root root 4.0M 2010-02-28 18:44 .vh.sf
drwxr-xr-x 2 root root 560 2010-11-14 18:19 VNAS01_BackupServer_UNX
drwxr-xr-x 2 root root 2.5K 2010-11-15 19:18 VSRV06_DomainServer_2k8
drwxr-xr-x 2 root root 6.6K 2010-11-15 19:15 VSRV07_LTSDomainServer_2k8
drwxr-xr-x 2 root root 3.1K 2010-10-19 01:31 VSRV07_Sharepoint_2k8
drwxr-xr-x 2 root root 2.8K 2010-11-15 19:07 VSRV08_WebServer_2k8_R2_64
drwxr-xr-x 2 root root 2.5K 2010-08-07 03:20 VWKS03_Private_XP32
drwxr-xr-x 2 root root 3.5K 2010-08-23 16:45 VWKS04_Testbench1_XP32
root@RWS01:/home/administrator# sudo vmfs-fuse /dev/sdd1 /iscsi/5
root@RWS01:/home/administrator# sudo ls /iscsi/5 -alh
total 4.0K
drwxr-xr-t 11 root root 2.2K 2010-03-06 04:29 .
drwxr-xr-x 9 root root 4.0K 2010-11-22 20:18 ..
-r-------- 1 root root 2.5M 2010-02-28 04:20 .fbb.sf
-r-------- 1 root root 61M 2010-02-28 04:20 .fdc.sf
-r-------- 1 root root 244M 2010-02-28 04:20 .pbc.sf
-r-------- 1 root root 249M 2010-02-28 04:20 .sbc.sf
-r-------- 1 root root 4.0M 2010-02-28 04:20 .vh.sf
drwxr-xr-x 2 root root 2.6K 2010-11-15 19:22 VRV01_DEVServer_2k3
drwxr-xr-x 2 root root 280 2010-02-28 16:32 VSRV01_Development_2k3
drwxr-xr-x 2 root root 2.9K 2010-11-15 19:22 VSRV02_LNX_VPN_WEB_DB
drwxr-xr-x 2 root root 280 2010-02-28 16:16 VSRV02_VPN_MYSQL_APACHE
drwxr-xr-x 2 root root 2.8K 2010-11-15 19:16 VSRV03_DBServer_2k3
drwxr-xr-x 2 root root 3.2K 2010-11-15 19:14 VSRV04_WEBServer_2k8
drwxr-xr-x 2 root root 2.6K 2010-11-15 19:22 VSRV05_DBServer_LNX
drwxr-xr-x 2 root root 2.8K 2010-08-07 03:22 VWKS01_Downloader_XP32
drwxr-xr-x 2 root root 2.5K 2010-11-15 19:22 VWKS02_Monitor
root@RWS01:/home/administrator#
THAT'S COPY TIME NOW