Desktop virtualization

A number of virtualization technologies can be deployed for end user devices. Application virtualization runs applications on virtualized operating system resources on the device itself. Alternatively, instead of running applications on the end user device, applications can run on virtualized PCs based on Server Based Computing (SBC) or Virtual Desktop Infrastructure (VDI), with a thin client on the device displaying the result. All of these technologies are explained in the next sections.

Application virtualization
The term application virtualization is a bit misleading, as the application itself is not virtualized, but the operating system resources the application uses are virtualized. Application virtualization isolates applications from some resources of the underlying operating system and from other applications, to increase compatibility and manageability.

The application is fooled into believing that it is directly interfacing with the original operating system and all the resources managed by it. But in reality the application virtualization layer provides the application with virtualized parts of the runtime environment normally provided by the operating system.

Application virtualization is typically implemented in a Windows based environment.


The application virtualization layer proxies all requests to the operating system and intercepts all file and registry operations of the virtualized applications. These operations are transparently redirected to a virtualized location, often a single real file.

Since the application is now working with one file instead of many files and registry entries spread throughout the system, it becomes easy to run the application on a different computer, and previously incompatible applications or application versions can be run side-by-side.
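
To make this idea concrete, below is a minimal Python sketch of the redirection principle: file operations that an application believes it performs on system locations are transparently served from a private sandbox directory. The class name, paths, and sandbox layout are purely illustrative assumptions; they do not reflect how App-V or ThinApp work internally, and registry redirection is left out.

```python
# Minimal sketch of the redirection idea behind application virtualization.
# The class, paths, and sandbox layout are hypothetical illustrations.
import os

class VirtualFileLayer:
    def __init__(self, sandbox_dir):
        self.sandbox_dir = sandbox_dir      # per-application virtual store
        os.makedirs(sandbox_dir, exist_ok=True)

    def _redirect(self, path):
        # Map a system path like C:\Program Files\App\app.ini to a
        # private location inside the application's sandbox.
        safe_name = path.replace(":", "").replace("\\", "_").replace("/", "_")
        return os.path.join(self.sandbox_dir, safe_name)

    def open(self, path, mode="r"):
        # The application thinks it opens the real path; in reality the
        # call is transparently served from the sandbox.
        return open(self._redirect(path), mode)

layer = VirtualFileLayer("appv_sandbox")
with layer.open(r"C:\Program Files\MyApp\settings.ini", "w") as f:
    f.write("colour=blue\n")   # lands in the sandbox, not in Program Files
```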

Examples of application virtualization products are Microsoft App-V and VMware ThinApp.

Server Based Computing
Server Based Computing (SBC) is a concept where applications and/or desktops running on remote servers relay their virtual display to the user's device. The user's device runs a relatively lightweight application (a thin client agent) that displays the video output and captures keyboard strokes and mouse movements, sending them to the application on the remote server. The keyboard and mouse information is processed by the application on the server, and the resulting display changes are sent back to the user device.


SBC requires a limited amount of network bandwidth, because only changed display information is sent to the end user device and only keyboard strokes and mouse movements are sent to the server.
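
A rough back-of-the-envelope calculation illustrates why this matters. The sketch below compares the bandwidth of sending uncompressed full frames with sending only a small changed fraction of the screen. All numbers (resolution, frame rate, change rate) are illustrative assumptions, not measurements of RDP or ICA.

```python
# Back-of-the-envelope comparison of full-frame versus delta display updates.
# All numbers are illustrative assumptions, not protocol measurements.
frame_width, frame_height = 1920, 1080
bytes_per_pixel = 3
frames_per_second = 30
changed_fraction = 0.02          # assume ~2% of the screen changes per frame

full_frame = frame_width * frame_height * bytes_per_pixel
full_stream = full_frame * frames_per_second
delta_stream = full_stream * changed_fraction

print(f"Uncompressed full frames: {full_stream / 1e6:.0f} MB/s")
print(f"Changed regions only:     {delta_stream / 1e6:.1f} MB/s")
# Real SBC protocols also compress and cache bitmaps, reducing this further.
```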

SBC is typically implemented in a Windows based environment, where the SBC server is either Windows Remote Desktop Service (RDS, formerly known as Windows Terminal Services) or Citrix XenApp (formerly known as MetaFrame Presentation Server). XenApp provides more functionality than RDS, but is a separate product, whereas RDS is part of the Windows operating system.

A big advantage of using SBC is that maintenance (like applying patches and upgrades) can be done at the server level. The changes are instantly available to all users, freeing systems managers from managing a large set of PC deployments.

With SBC, server-side CPU and RAM capacity is shared with applications from all users. Extensive use of CPU and/or RAM in one user's session can influence the performance of sessions of other users on the same server.
Application configurations are the same for all users and use the graphical properties of the SBC server instead of those of the end user device.

Limitations on the desktop experience (slow response or keyboard lag) are mostly due to network latency or the configuration of the remote desktop. Security and stability settings (which protect changes to shared resources) can also influence the experience. With a good configuration of roaming user profiles, folder redirection for network storage of user data, and the latest application virtualization techniques, the limitations in desktop usage can be kept minimal.

Virtual Desktop Infrastructure (VDI)
Virtual Desktop Infrastructure (VDI) is a concept similar to SBC, but with VDI each user's applications run in a dedicated virtual machine.


VDI utilizes virtual desktops running on top of a hypervisor, managed by products such as VMware View, Citrix XenDesktop, or Microsoft MED-V. The hypervisor's primary task is to distribute the available hardware resources between the virtual machines hosted on the physical machine.

Just like with a physical PC, with VDI, each user has exclusive use of the operating system, CPU, and RAM, whereas with SBC users share these resources. VDI enables applications and operating systems to run next to each other in complete isolation without interference.

Protocols supported to exchange video, keyboard, and mouse from client to virtual machine are the ICA (Independent Computing Architecture) protocol of Citrix, Microsoft’s RDP (Remote Desktop Protocol), and the VMware PCoIP protocol.

VDI tends not to scale well in terms of CPU resources and storage IOPS, because each client uses an entire virtual machine. Booting a virtual machine generates a large amount of I/O on the server, and a so-called 'boot storm' or 'logon storm' occurs when many virtual desktops start up at the same time. These storms can partly be prevented by pre-starting a predefined number of virtual machines at configured time slots.
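
The sketch below illustrates such a pre-start schedule. The schedule, the desktop names, and the start_vm() call are hypothetical placeholders, not the API of any specific VDI broker.

```python
# Sketch of pre-starting desktops before users arrive, to dampen boot storms.
# The schedule and the start_vm() call are placeholders, not a real VDI API.
import time

PRESTART_SCHEDULE = {      # hour of day -> number of desktops to have running
    7: 10,
    8: 40,
    9: 80,
}

def start_vm(name):
    print(f"powering on {name}")   # in reality: a call to the VDI broker/hypervisor

def prestart(current_hour, running_now):
    target = PRESTART_SCHEDULE.get(current_hour, 0)
    for i in range(running_now, target):
        start_vm(f"desktop-{i:04d}")
        time.sleep(0.1)            # stagger power-ons to spread the I/O load

prestart(current_hour=8, running_now=10)   # boots desktop-0010 .. desktop-0039
```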

Thin clients
VDI and SBC both enable the hosting of desktops on central server farms and use the same protocols to deliver the output of application screens to users. Thin clients communicate with the SBC or VDI server. They come in two flavors: hardware and software based thin clients.

Hardware based thin clients are lightweight computers that are relatively inexpensive and have no moving parts or local disk drives. The devices need no configuration and can be used directly after plugging them into the network, making them easy to replace when one fails. They eliminate the need to upgrade PCs or laptops on a regular basis.

Software based thin clients are applications running in a normal client operating system like Windows, Linux, or Mac OS X. They can also run on mobile devices like tablets and smartphones.

This entry was posted on Friday 27 November 2015

Stakeholder management

Stakeholders are people who have a stake in the system that is designed, built, implemented, managed, and used. Stakeholders have concerns about the system, and these concerns must be addressed. To manage the communication with stakeholders, a stakeholder analysis should be performed at the start of the project. This analysis comprises:

  • a stakeholder landscape
  • a ranking
  • a stakeholder map
  • a communication plan

Stakeholder landscape
To manage stakeholders effectively, a list of stakeholders must be compiled. A good way to do this is to create a visual map: put the main system in the centre and the main components of the system around it. For each main component, define the roles, like the business owner, the user, external parties, and the systems manager. Then define the actual persons working in these roles.

Ranking the stakeholders
When the stakeholder landscape is clear, a list of the stakeholders can be created. All stakeholders are categorized based on their interest in the project and the influence they have on the success of the project. Most projects lead to changes in both the IT landscape and (often) the business processes, and therefore to concerns of the stakeholders. For every stakeholder, the concerns are weighted and given a number between one and three. This number is called interest. An interest of one means the stakeholder has little at stake; an interest of three means the project brings many, or complex, changes for the stakeholder.

Some stakeholders have more influence on the project than others. This influence is also ranked between one and three. One means the stakeholder is  considered to have very little influence on the project or the solution being built. An influence of three means the stakeholder has much power to resist or  support the project or solution.
Based on the ranking for all concerns, an average is calculated per stakeholder.

Stakeholder map
In this stage, both the interest and the influence are ranked either high or low per stakeholder. When the stakeholders are ranked, they are characterized using the following stakeholder map.


The stakeholder map classifies all stakeholders in four groups, each with its own communication strategy:

  • Low interest, low influence – Occasionally contact. These are relatively unimportant stakeholders, but keeping in touch with them is a good idea, just in case their status changes.
  • High interest, low influence – Keep informed. These stakeholders are easy to ignore, as they apparently cannot derail the project. But if sufficiently upset, they may gain influence by low-level blocking and other techniques of resistance to the project. Remember that minorities can be very powerful, particularly if they work together or get powerful allies.
  • Low interest, high influence – Keep satisfied. Stakeholders with a low interest in the project will not be particularly worried about it, so they are not much of a problem in the actual project. A problem can appear when they are persuaded to act for those who oppose the project. It is thus important to keep them satisfied, for example with regular meetings that explain what is happening.
  • High interest, high influence – Actively engage. These stakeholders are both significantly affected by the project and most able to do something about it, either by supporting or by opposing the project. It is particularly important to engage these stakeholders in the project, ensuring that they understand what is going on, and to create buy-in so they feel a sense of ownership of what is being done.
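
As an illustration, the Python sketch below averages the weighted concerns per stakeholder and places each stakeholder in one of the four groups. The example stakeholders, the weights, and the threshold of 2 for "high" are assumptions; adjust them to your own ranking scale.

```python
# Sketch of ranking stakeholders and placing them on the stakeholder map.
# Example data and the threshold of 2 for "high" are assumptions.
stakeholders = {
    # name: (list of concern weights 1-3, influence 1-3)
    "Business owner": ([3, 3, 2], 3),
    "End user":       ([3, 2, 3], 1),
    "Supplier":       ([1, 2], 2),
}

def strategy(interest, influence, threshold=2):
    high_interest = interest >= threshold
    high_influence = influence >= threshold
    if high_interest and high_influence: return "Actively Engage"
    if high_interest:                    return "Keep Informed"
    if high_influence:                   return "Keep Satisfied"
    return "Occasionally Contact"

for name, (concerns, influence) in stakeholders.items():
    interest = sum(concerns) / len(concerns)   # average of the weighted concerns
    print(f"{name:15} interest={interest:.1f} influence={influence} "
          f"-> {strategy(interest, influence)}")
```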

Based on the classification of the stakeholders, a communication plan must be created. The communication plan lists the stakeholders and the frequency and type of contact per stakeholder. This ensures that the stakeholders get the attention they need and deserve.

Communication plan
At the beginning of the project, individual interviews should be held by the architect and the relevant project members with the high interest, high influence stakeholders. This opens up communication channels between the architects and the most important stakeholders, enabling smooth communication in the future. In the interviews, the interests of the stakeholders are discussed and arrangements are made about the frequency and form of future communications. It is always a good idea to have the follow-up stakeholder discussions with multiple stakeholders in one room. This not only saves time for the project team, but also opens up communication between the stakeholders about the project. It is not unusual that the stakeholders have never exchanged ideas and concerns amongst each other before. In such a setting, conflicting concerns can often be cleared up easily and early.

It is important for the architect to address all stakeholders' concerns, even if it means that some concerns cannot be mitigated. Addressing the concerns of all stakeholders must be done during the full project life cycle, as new concerns will arise during the project. This is perfectly normal, as stakeholders get more insight into the results of the project and business continues to move forward during the project's life span. These new concerns must be handled in the same way as the original concerns.

Typically, stakeholders are only willing to support the project when they feel their concerns are taken care of and get serious attention.

This entry was posted on Friday 06 November 2015

x86 platform architecture

The x86 platform is the most dominant server architecture today. While the x86 platform was originally designed for personal computers, it is now implemented in all types of systems, from netbooks up to the fastest multi CPU servers.

x86 servers are produced by many vendors. The best-known vendors are HP, Dell, HDS (Hitachi Data Systems), and Lenovo (which acquired IBM's x86 server business in 2014). These vendors typically purchase most server parts (like video graphics cards, power supplies, RAM, and disk drives) from other vendors. This makes x86 server implementations very diverse. So, while the x86 architecture is standardized, its implementation is highly dependent on the vendor and the components available at a certain moment.
x86 servers typically run operating systems not provided by the vendors of the hardware. Most often Microsoft Windows and Linux are used, but x86 systems are also capable of running special purpose operating systems.

Most servers in datacenters today are based on the x86 architecture. This x86 architecture (also known as PC architecture) is based on the original IBM PC. The IBM PC’s history is described in more detail in chapter 14.
In the 1990s x86 servers first started to appear. They were basically PCs, but were housed in 19” racks without dedicated keyboards and monitors.

Over the years, x86 servers became the de-facto standard for servers. Their low cost, the large number of manufacturers, and their ability to run familiar operating systems like Microsoft Windows and Linux made them extremely popular.

x86 architecture
The x86 architecture consists of several building blocks, integrated in a number of specialized chips. These chips are also known as the x86 chip set.

The heart of an x86 based system is a CPU from the x86 family. The CPU contains a large number of connection pins to connect address lines, data lines, clock lines, and additional logic connections.

Northbridge/Southbridge x86 architecture
Earlier x86 systems utilized a Northbridge/Southbridge architecture. In this architecture, the data path of the CPU, called the Front Side Bus (FSB), was connected to a fast Northbridge chip, transporting data between the CPU and both the RAM memory and the PCIe bus. The Northbridge was also connected to the Southbridge chip by a bus called the Direct Media Interface (DMI). The relatively slow Southbridge chip connected components with slower data paths, like the BIOS, the SATA adaptors, USB ports, and the PCI bus.


PCH based x86 architecture
In 2008, with the introduction of the Intel 5 Series chipset, the Northbridge/Southbridge architecture was replaced by the Platform Controller Hub (PCH) architecture. In this architecture, the Southbridge functionality is managed by the PCH chip, which is directly connected to the CPU via the DMI.


Most of the Northbridge functions were integrated into the CPU while the PCH took over the remaining functions in addition to the traditional roles of the Southbridge. In the PCH architecture, the RAM and PCIe data paths are directly connected to the CPU. Examples of x86 architectures that have the Northbridge integrated in the CPU are Intel’s Sandy Bridge and AMD's Fusion.
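
On a Linux machine you can get an impression of this topology yourself by looking at the PCI device tree, which shows how PCIe devices hang off the CPU's root complex. The snippet below is just a thin wrapper around lspci and assumes a Linux host with the pciutils package installed.

```python
# Print the PCI device tree on a Linux machine, which shows how PCIe devices
# hang off the CPU's root complex in a PCH-based system.
# Assumes a Linux host with the pciutils package (lspci) installed.
import subprocess

tree = subprocess.run(["lspci", "-tv"], capture_output=True, text=True)
print(tree.stdout)
```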

In 2015, the Skylake architecture is the most recent Intel x86 architecture. Some variants of Skylake will have the PCH integrated in the CPU as well, which makes the CPU effectively a full system on a chip (SoC). In 2015, Intel announced the Broadwell-based Xeon D as its first platform to fully incorporate the PCH in an SoC configuration.

This entry was posted on Friday 16 October 2015

Midrange systems architecture

The midrange platform is positioned between the mainframe platform and the x86 platform. The size and cost of the systems, as well as the workload, availability, performance, and maturity of the platform, are higher than those of the x86 platform, but lower than those of a mainframe.

Today midrange systems are produced by three vendors:

  • IBM produces the Power Systems series of midrange servers (the former RS/6000, System p, AS/400, and System i series).
  • Hewlett-Packard produces the HP Integrity systems.
  • Oracle produces the SPARC servers originally developed by Sun Microsystems.

Midrange systems are typically built using parts from only one vendor, and run an operating system provided by that same vendor. This makes the platform relatively stable, leading to high availability and security.

The term minicomputer evolved in the 1960s to describe the small computers that became possible with the use of integrated circuit (IC) and core memory technologies. Small was relative, however; a single minicomputer typically was housed in a few cabinets the size of a 19” rack.

The first commercially successful minicomputer was the DEC PDP-8, launched in 1965. The PDP-8 sold for one-fifth the price of the smallest IBM 360 mainframe. This enabled manufacturing plants, small businesses, and scientific laboratories to have a computer of their own.

In the late 1970s, DEC produced another very successful minicomputer series called the VAX. VAX systems came in a wide range of different models. They could easily be set up as a VAXcluster for high availability and performance.

DEC was the leading minicomputer manufacturer and the second largest computer company (after IBM). DEC was sold to Compaq in 1998, which in turn became part of HP some years later.

Minicomputers became powerful systems that ran full multi-user, multitasking operating systems like OpenVMS and UNIX. Halfway through the 1980s minicomputers became less popular as a result of the lower cost of microprocessor based PCs, and the emergence of LANs. In places where high availability, performance, and security are very important, minicomputers (now better known as midrange systems) are still used.
Most midrange systems today run a flavor of the UNIX operating system, OpenVMS or IBM i:

  • HP Integrity servers run HP-UX UNIX and OpenVMS.
  • Oracle/Sun’s SPARC servers run Solaris UNIX.
  • IBM's Power systems run AIX UNIX, Linux and IBM i.

Midrange systems architecture
Midrange systems used to be based on specialized Reduced Instruction Set Computer (RISC) CPUs. These CPUs were optimized for speed and simplicity, but much of the technologies originating from RISC are now implemented in general purpose CPUs. Some midrange systems therefore are moving from RISC based CPUs to general purpose CPUs from Intel, AMD, or IBM.

Most midrange systems use multiple CPUs and are based on a shared memory architecture. In a shared memory architecture all CPUs in the server can access all installed memory blocks. This means that changes made in memory by one CPU are immediately seen by all other CPUs. Each CPU operates independently from the others. To connect all CPUs with all memory blocks, an interconnection network is used based on a shared bus, or a crossbar.

A shared bus connects all CPUs and all RAM, much like a network hub does. The available bandwidth is shared between all users of the shared bus. A crossbar is much like a network switch, in which every communication channel between one CPU and one memory block gets full bandwidth.

The I/O system is also connected to the interconnection network, connecting I/O devices like disks or PCI based expansion cards.

Since each CPU has its own cache, and memory can be changed by other CPUs, cache coherence is needed in midrange systems. Cache coherence means that if one CPU writes to a location in shared memory, all other CPUs must update their caches to reflect the changed data. Maintaining cache coherence introduces a significant overhead. Special-purpose hardware is used to communicate between cache controllers to keep a consistent memory image.
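
The toy model below illustrates the write-invalidate idea: when one CPU writes to an address, the cached copies held by the other CPUs are thrown away, so their next read fetches the new value. It is a simplified illustration, not an implementation of a real coherence protocol such as MESI.

```python
# Toy model of write-invalidate cache coherence: when one CPU writes a memory
# location, the copies cached by other CPUs are invalidated so they must
# re-read the new value. An illustration only, not a real MESI protocol.
class CoherentSystem:
    def __init__(self, n_cpus):
        self.memory = {}
        self.caches = [dict() for _ in range(n_cpus)]

    def read(self, cpu, addr):
        if addr not in self.caches[cpu]:                 # cache miss
            self.caches[cpu][addr] = self.memory.get(addr, 0)
        return self.caches[cpu][addr]

    def write(self, cpu, addr, value):
        self.memory[addr] = value
        self.caches[cpu][addr] = value
        for other, cache in enumerate(self.caches):      # coherence traffic
            if other != cpu:
                cache.pop(addr, None)                    # invalidate stale copy

system = CoherentSystem(n_cpus=2)
print(system.read(0, 0x10), system.read(1, 0x10))  # both CPUs cache the value 0
system.write(0, 0x10, 42)                          # CPU 0 writes; CPU 1 is invalidated
print(system.read(1, 0x10))                        # CPU 1 re-reads and sees 42
```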

Shared memory architectures come in two flavors: Uniform Memory Access (UMA), and Non Uniform Memory Access (NUMA). Their cache coherent versions are known as ccUMA and ccNUMA.

The UMA architecture is one of the earliest styles of multi-CPU architectures, typically used in servers with no more than 8 CPUs. In a UMA system the machine is organized into a series of nodes containing either a processor or a memory block. These nodes are interconnected, usually by a shared bus. Via the shared bus, each processor can access all memory blocks, creating a single system image.


UMA systems are also known as Symmetric Multi-Processor (SMP) systems. SMP is used in x86 servers as well as early midrange systems.

SMP technology is also used inside multi-core CPUs, in which the interconnect is implemented on-chip and a single path to the main memory is provided between the chip and the memory subsystem elsewhere in the system.


UMA is supported by all major operating systems and can be implemented using most of today’s CPUs.

In contrast to UMA, NUMA is a server architecture in which the machine is organized into a series of nodes, each containing processors and memory, that are interconnected, typically using a crossbar. NUMA is a newer architecture style than UMA and is better suited for systems with many processors.


A node can use memory on all other nodes, creating a single system image. But when a processor accesses memory not within its own node, the data must be transferred over the interconnect, which is slower than accessing local memory. Thus, memory access times are non-uniform, depending on the location of the memory, as the architecture’s name implies.
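
On Linux, the NUMA topology of a server is exposed through sysfs. The sketch below lists each NUMA node and the CPUs attached to it; it assumes a Linux host, and on a UMA (single node) machine it will simply show only node0.

```python
# List NUMA nodes and the CPUs attached to each, using the Linux sysfs
# interface. Assumes a Linux host; on a single-node (UMA) machine only
# node0 will appear.
import glob, os

for node in sorted(glob.glob("/sys/devices/system/node/node[0-9]*")):
    name = os.path.basename(node)
    with open(os.path.join(node, "cpulist")) as f:
        cpus = f.read().strip()
    print(f"{name}: CPUs {cpus}")
```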

Some of the current servers using NUMA architectures include systems based on AMD Opteron processors, Intel Itanium systems, and HP Integrity and Superdome systems. Most popular operating systems such as OpenVMS, AIX, HP-UX, Solaris, and Windows, and virtualization hypervisors like VMware fully support NUMA systems.

This entry was posted on Friday 25 September 2015

Mainframe Architecture

A mainframe is a high-performance computer made for high-volume, processor-intensive computing. Mainframes were the first commercially available computers. They were produced by vendors like IBM, Unisys, Hitachi, Bull, Fujitsu, and NEC. But IBM has always been the largest vendor; it still has a 90% market share in the mainframe market.

Mainframes used to have no interactive user interface. Instead, they ran batch processes, using punched cards, paper tape, and magnetic tape as input, and produced printed paper as output. In the early 1970s, most mainframes got interactive user interfaces, based on terminals, simultaneously serving hundreds of users.

While the end of the mainframe has been predicted for decades now, mainframes are still widely used. Today’s mainframes are still relatively large (the size of a few 19" racks), but they don’t fill up a room anymore. They are expensive computers, mostly used for administrative processes, optimized for handling high volumes of data.

The latest IBM z13 mainframe, introduced in 2015, can host up to 10TB of memory and 141 processors, running at a 5GHz clock speed. It has enough resources to run up to 8000 virtual servers simultaneously.


Mainframes are highly reliable, typically running for years without downtime. Much redundancy is built in, enabling hardware upgrades and repairs while the mainframe is operating, without downtime. Sometimes a separate system is added to the mainframe whose primary job is to continuously check the mainframe’s health. When a hardware failure is detected, an IBM engineer is called automatically, sometimes even without the systems managers knowing it!

All IBM mainframes are backwards compatible with older mainframes. For instance, the 64-bit mainframes of today can still run 24-bit System/360 code from the early days of mainframe computing. Much effort is spent in ensuring all software continues to work without modification.

Mainframe architecture
A mainframe consists of processing units (PUs), memory, I/O channels, control units, and devices, all placed in racks (frames). The architecture of a mainframe is shown below.


The various parts of the architecture are described below.

Processing Units
In the mainframe world the term PU (Processing Unit) is used instead of the more ambiguous term CPU. A mainframe has multiple PUs, so there is no central processing unit. The total of all PUs in a mainframe is called a Central Processor Complex (CPC).

The CPC resides in its own cage inside the mainframe, and consists of one to four so-called book packages. Each book package consists of processors, memory, and I/O connections, much like x86 system boards.

Mainframes use specialized PUs (like the quad core z10 mainframe processor) instead of off-the-shelf Intel or AMD supplied CPUs.

All processors in the CPC start as equivalent processor units (PUs). Each processor is characterized during installation or at a later time, sometimes because of a specific task the processor is configured to do. Some examples of characterizations are:

  • Central Processor (CP) – the main processors of the system, used to run applications on VM, z/OS, and ESA/390 operating systems.
  • CP Assist for Cryptographic Function (CPACF) – assists the CPs by handling workload associated with encryption and decryption.
  • Integrated Facility for Linux (IFL) – assists with Linux workloads; IFLs are regular PUs with a few specific instructions needed by Linux.
  • Integrated Coupling Facility (ICF) – executes licensed internal code to coordinate system tasks.
  • System Assist Processor (SAP) – assists the CPs with workload for the I/O subsystem, for instance by translating logical channel paths to physical paths.
  • System z Application Assist Processor (zAAP) – used for Java code execution.
  • System z Integrated Information Processor (zIIP) – used for processing certain database workloads.
  • Spares – used to replace any failing CP or SAP.

Main Storage
Each book package in the CPC cage contains from four to eight memory cards. For example, a fully loaded z9 mainframe has four book packages that can provide up to 512 GB of memory.

The memory cards are hot swappable, which means that you can add or remove a memory card without powering down the mainframe.

Channels, ESCON and FICON
A channel provides a data and control path between I/O devices and memory.

Today’s largest mainframes have 1024 channels. Channels connect to control units, either directly or via switches. Specific slots in the I/O cages are reserved for specific types of channels, which include the following:

  • Open Systems Adapter (OSA) – this adapter provides connectivity to various industry standard networking technologies, including Ethernet
  • Fiber Connection (FICON) - this is the most flexible channel technology. With FICON, input/output devices can be located many kilometers from the mainframe to which they are attached.
  • Enterprise Systems Connection (ESCON) - this is an earlier type of fiber-optic technology. ESCON channels are considerably slower than FICON channels and bridge shorter distances.

The FICON or ESCON switches may be connected to several mainframes, sharing the control units and I/O devices.

The channels are high speed – today’s FICON Express16S channels provide up to 320 links of 16 Gbit/s each.

Control units
A control unit is similar to an expansion card in an x86 or midrange system. It contains logic to work with a particular type of I/O device, like a printer or a tape drive.

Some control units can have multiple channel connections providing multiple paths to the control unit and its devices, increasing performance and availability.

Control units can be connected to multiple mainframes, creating shared I/O systems. Sharing devices, especially disk drives, is complicated, and hardware and software techniques are used by the operating systems to prevent two independent systems from updating the same disk data at the same time.

Control units connect to devices, like disk drives, tape drives, and communication interfaces. Disks in mainframes are called DASD (Direct Access Storage Devices), which are comparable to a SAN (Storage Area Network) in a midrange or x86 environment.

This entry was posted on Friday 04 September 2015
