Public wireless networks

In recent years, wireless networks have become more popular than wired networks for end user devices. Apart from WLANs based on Wi-Fi, public wireless networks based on GPRS, EDGE, UMTS, and HSDPA are used more and more every day. The reason is obvious – public wireless networks give mobile users the freedom to move around and provide connectivity in places where wired connections are impossible (like on the road).

Public wireless networks are much less reliable than private networks. Users moving around will often temporarily lose connectivity, and poor signal quality leads to frequent retransmission of network packets. The bandwidth is also much lower than that of private networks; noise and other signal interference, the sharing of available bandwidth with (many) other users, and retransmissions lead to a low effective bandwidth per end point.
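
As a rough illustration of this effect, the sketch below (an added example with purely hypothetical numbers, not measurements from the text) estimates the usable bandwidth per user when a cell's capacity is shared and part of the traffic is lost and has to be retransmitted.

# Back-of-the-envelope sketch (hypothetical numbers): how sharing a cell and
# retransmissions reduce the effective bandwidth per end point.

def effective_bandwidth_kbps(cell_rate_kbps, packet_loss, concurrent_users):
    """Estimate usable bandwidth per user on a shared, lossy wireless link."""
    share_per_user = cell_rate_kbps / concurrent_users  # capacity is shared
    return share_per_user * (1.0 - packet_loss)         # lost packets are re-sent

# Example: a 384 kbit/s EDGE cell, 5% packet loss, 10 active users.
print(f"{effective_bandwidth_kbps(384, 0.05, 10):.1f} kbit/s per user")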

GSM, GPRS and EDGE
Global System for Mobile Communications (GSM) is the world's most popular standard for mobile telephone systems, in which both the signaling and speech channels are digital. GSM is a second-generation (2G) mobile technology; the first generation (1G) consisted of the earlier analog systems.

General Packet Radio Service (GPRS) is a packet oriented mobile data service based on GSM technology, providing data rates of 56 to 114 kbit/s. GPRS is often referred to as 2.5G.

Enhanced Data rates for GSM Evolution (EDGE), also known as Enhanced GPRS or 2.75G, allows improved data transmission rates as a backward-compatible extension of GSM. EDGE delivers data rates of up to 384 kbit/s.

UMTS (3G) / HSDPA
Universal Mobile Telecommunications System (UMTS) is an umbrella term for the third-generation (3G) mobile telecommunications transmission standard. UMTS is also known as FOMA or W-CDMA. Compared to GSM, UMTS requires new base stations and new frequency allocations, but it uses a core network derived from GSM, ensuring backward compatibility. UMTS was designed to provide maximum data transfer rates of 45 Mbit/s.

High Speed Downlink Packet Access (HSDPA) is part of the UMTS standard, providing a maximum speed of 7.2 Mbit/s. HSDPA+ is also known as HSDPA Evolution and Evolved HSDPA. It is an upgrade to HSDPA networks, providing 42 Mbit/s download and 11.5 Mbit/s upload speeds.

LTE (4G)
LTE (Long Term Evolution) is a 4G network technology, designed from the start to transport data (IP packets) rather than voice. LTE is a set of enhancements to UMTS. In order to use LTE, the core UMTS network must be adapted, leading to changes in the transmitting equipment. The LTE specification provides download peak rates of at least 100 Mbit/s (up to 326 Mbit/s), and an upload speed of at least 50 Mbit/s (up to 86.4 Mbit/s).
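
To put these peak rates into perspective, the sketch below (an added illustration using the nominal peak rates quoted above; real-world throughput is considerably lower) calculates how long a 100 MB download would take with each technology.

# Illustrative only: theoretical time to download a 100 MB file at the
# nominal peak rates quoted in this article (effective speeds are lower).

PEAK_RATES_MBPS = {   # megabits per second
    "GPRS":     0.114,
    "EDGE":     0.384,
    "HSDPA":    7.2,
    "HSDPA+":  42.0,
    "LTE":    100.0,
}

FILE_SIZE_MBIT = 100 * 8   # 100 megabytes expressed in megabits

for tech, rate in PEAK_RATES_MBPS.items():
    print(f"{tech:>6}: {FILE_SIZE_MBIT / rate:8.1f} seconds")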

LTE is not designed to handle voice transmissions. When placing or receiving a voice call, LTE handsets will typically fall back to old 2G or 3G networks for the duration of the call. In 2015, the Voice over LTE (VoLTE) protocol is about to be rolled out to allow the decommissioning of the old 2G and 3G networks in the future.


This entry was posted on Thursday 24 December 2015

Supercomputer architecture

A supercomputer is a computer architecture designed to maximize calculation speed. This is in contrast to a mainframe, which is optimized for high I/O throughput. Supercomputers are the fastest machines available at any given time. Since computing speed increases continuously, supercomputers are superseded by newer supercomputers all the time.

Supercomputers are used for many tasks, from weather forecast calculations to the rendering of movies like Toy Story and Shrek.

Originally, supercomputers were produced primarily by a company named Cray Research. The Cray-1 was a major success when it was released in 1976. It was faster than all other computers at the time and it went on to become one of the best known and most successful supercomputers in history. The machine cost $8.9 million when introduced.

Cray supercomputers used specially designed vector CPUs for performing calculations on large sets of data. Together with dedicated hardware for certain instructions (like multiply and divide), this increased performance.

The entire chassis of the Cray supercomputers was bent into a large C-shape. Speed-dependent portions of the system were placed on the "inside edge" of the chassis where the wire-lengths were shorter to decrease delays. The system could peak at 250 MFLOPS (Million Floating Point Operations per second).

2015-09/cray-2-supercomputer.jpg

In 1985, the very advanced Cray-2 was released, capable of 1.9 billion floating point operations per second (GFLOPS) peak performance, almost eight times as much as the Cray-1. In comparison, in 2015, the Intel Core i7 5960X CPU has a peak performance of 354 GFLOPS; more than 185 times faster than the Cray-2!

Supercomputers as single machines started to disappear in the 1990s. Their work was taken over by clustered computers – a large number of off-the-shelf x86 based servers, connected by fast networks to form one large computer array. Nowadays, high performance computing is done mainly with large arrays of x86 systems. In 2015, the fastest computer array was a cluster with more than 3,120,000 CPU cores, calculating at 54,902,400 GFLOPS, running Linux.
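
The sketch below (an added illustration using the peak figures quoted above) shows how far apart these performance numbers are.

# Comparison of the peak performance figures quoted above, in GFLOPS.

peak_gflops = {
    "Cray-1 (1976)":              0.25,
    "Cray-2 (1985)":              1.9,
    "Intel Core i7 5960X (2015)": 354.0,
    "Fastest cluster (2015)":     54_902_400.0,
}

baseline = peak_gflops["Cray-1 (1976)"]
for system, gflops in peak_gflops.items():
    print(f"{system:28} {gflops:>14,.2f} GFLOPS "
          f"({gflops / baseline:,.0f}x the Cray-1)")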

In some cases specialized hardware is used to realize high performance. For example, graphics processors (GPUs) can be used for fast vector based calculations and Intel CPUs now contain special instructions to speed up AES encryption.

In 2015, NVIDIA's Tesla GPU PCIe card (basically a graphics card, but without a graphics connector) provides hundreds of vector based computing cores and more than 8,000 GFLOPS of computing power. Four of these cards can be combined in one system for extremely high performance calculations, at just a fraction of the cost of traditional supercomputers.


This entry was posted on Friday 11 December 2015

Desktop virtualization

A number of virtualization technologies can be deployed for end user devices. Application virtualization can be used to run applications on an underlying virtualized operating system. Instead of running applications on the end user devices themselves, applications can also be run on virtualized PCs based on Server Based Computing (SBC) or a Virtual Desktop Infrastructure (VDI), accessed from the end user device using a thin client. All of these technologies are explained in the next sections.

Application virtualization
The term application virtualization is a bit misleading, as the application itself is not virtualized, but the operating system resources the application uses are virtualized. Application virtualization isolates applications from some resources of the underlying operating system and from other applications, to increase compatibility and manageability.

The application is fooled into believing that it is directly interfacing with the original operating system and all the resources managed by it. But in reality the application virtualization layer provides the application with virtualized parts of the runtime environment normally provided by the operating system.

Application virtualization is typically implemented in a Windows based environment.

2015-09/application-virtualization.jpg

The application virtualization layer proxies all requests to the operating system and intercepts all file and registry operations of the virtualized applications. These operations are transparently redirected to a virtualized location, often a single real file.

Since the application is now working with one file instead of many files and registry entries spread throughout the system, it becomes easy to run the application on a different computer, and previously incompatible applications or application versions can be run side-by-side.
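
As a rough illustration of this redirection idea, the sketch below (a simplified, hypothetical example; real products such as App-V and ThinApp work very differently under the hood) maps file paths requested by an application to locations inside a single virtual package.

# Simplified, hypothetical illustration of file redirection in application
# virtualization: requests for certain system locations are transparently
# mapped to paths inside a per-application virtual package. Registry
# operations would be redirected in a similar way.
from pathlib import PureWindowsPath

VIRTUAL_ROOT = PureWindowsPath(r"C:\VirtualApps\MyApp.pkg")   # hypothetical package
REDIRECTED_PREFIXES = [r"C:\Program Files\MyApp", r"C:\ProgramData\MyApp"]

def resolve(path: str) -> PureWindowsPath:
    """Return the location the virtualization layer would actually use."""
    for prefix in REDIRECTED_PREFIXES:
        if path.lower().startswith(prefix.lower()):
            relative = path[len(prefix):].lstrip("\\")
            return VIRTUAL_ROOT / relative   # redirect into the virtual package
    return PureWindowsPath(path)             # other paths go to the real OS

print(resolve(r"C:\Program Files\MyApp\settings.ini"))
print(resolve(r"C:\Users\alice\document.docx"))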

Examples of application virtualization products are Microsoft App-V and VMware ThinApp.

Server Based Computing
Server Based Computing (SBC) is a concept where applications and/or desktops running on remote servers relay their virtual display to the user's device. The user’s device runs a relatively lightweight application (a thin client agent) that displays the video output and that fetches the keyboard strokes and mouse movements, sending them to the application on the remote server. The keyboard and mouse information is processed by the application on the server, and the resulting display changes are sent back to the user device.

2015-09/server-based-computing.jpg

SBC requires a limited amount of network bandwidth, because only changed display information is sent to the end user device and only keyboard strokes and mouse movements are sent to the server.
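
As a rough, hypothetical illustration of why this matters, the sketch below compares the bandwidth needed to send complete screen frames with the bandwidth needed when only a small changed region is sent. The figures are illustrative; real SBC protocols also compress the data and use far more sophisticated techniques.

# Rough, illustrative estimate of display bandwidth: complete frames versus
# sending only the changed part of the screen (no compression assumed).

def display_bandwidth_mbps(width, height, bytes_per_pixel, fps, changed_fraction=1.0):
    bytes_per_second = width * height * bytes_per_pixel * fps * changed_fraction
    return bytes_per_second * 8 / 1_000_000   # convert to megabits per second

full   = display_bandwidth_mbps(1920, 1080, 3, 30)                          # every pixel, every frame
deltas = display_bandwidth_mbps(1920, 1080, 3, 30, changed_fraction=0.02)   # only ~2% of pixels change

print(f"Full frames : {full:7.1f} Mbit/s")
print(f"Deltas only : {deltas:7.1f} Mbit/s")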

SBC is typically implemented in a Windows based environment, where the SBC server is either Windows Remote Desktop Service (RDS, formerly known as Windows Terminal Services) or Citrix XenApp (formerly known as MetaFrame Presentation Server). XenApp provides more functionality than RDS, but is a separate product, whereas RDS is part of the Windows operating system.

A big advantage of using SBC is that maintenance (like applying patches and upgrades) can be done at the server level. The changes are available instantly to all users – freeing systems managers of managing a large set of PC deployments.

With SBC, server-side CPU and RAM capacity is shared with applications from all users. Extensive use of CPU and/or RAM in one user's session can influence the performance of sessions of other users on the same server.
Application configurations are the same for all users and use the graphical properties of the SBC server instead of those of the end user's client device.

Limitations in the desktop experience (such as slow response or keyboard lag) are mostly due to network latency or the configuration of the remote desktop. Security and stability settings (protecting shared resources against changes) can also influence the experience. With a good configuration of roaming user profiles, folder redirection for network storage of user data, and the latest application virtualization techniques, limitations in desktop usage can be kept to a minimum.

Virtual Desktop Infrastructure (VDI)
Virtual Desktop Infrastructure (VDI) is a concept similar to SBC, except that in VDI each user's applications run in their own virtual machine.

2015-09/vdi.jpg

VDI utilizes virtual desktops running on top of a hypervisor, typically managed by VMware View, Citrix XenDesktop, or Microsoft MED-V. The hypervisor's primary task is to distribute the available hardware resources between the virtual machines hosted on the physical machine.

Just like with a physical PC, with VDI, each user has exclusive use of the operating system, CPU, and RAM, whereas with SBC users share these resources. VDI enables applications and operating systems to run next to each other in complete isolation without interference.

Protocols used to exchange video, keyboard, and mouse data between the client and the virtual machine include Citrix's ICA (Independent Computing Architecture) protocol, Microsoft's RDP (Remote Desktop Protocol), and the PCoIP protocol used by VMware.

VDI tends not to scale well in terms of CPU resources and storage IOPS, because each client uses an entire virtual machine. Booting a virtual desktop generates a large amount of I/O on the server. A so-called 'logon storm' occurs when many virtualized systems boot up at the same time. These logon storms can partly be prevented by pre-starting a predefined number of virtual machines at configured time slots.
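
A minimal sketch of this pre-starting idea is shown below. The schedule, pool size, and start_vm function are hypothetical; real VDI products offer this as a built-in pool or power management feature.

# Minimal, hypothetical sketch of pre-starting virtual desktops in time slots
# to spread out boot I/O before users arrive. start_vm() is a placeholder for
# a call to the VDI platform's management interface.
from datetime import time

# Hypothetical schedule: how many desktops should be running from each time onwards.
PRESTART_SCHEDULE = [
    (time(7, 0),  50),    # early arrivers
    (time(8, 0), 200),    # most staff logs on between 8:00 and 9:00
    (time(9, 0), 300),
]

def start_vm(name: str) -> None:
    print(f"starting {name}")

def prestart(now: time, already_running: int) -> None:
    """Start enough desktops to meet the target for the current time slot."""
    target = 0
    for slot_start, slot_target in PRESTART_SCHEDULE:
        if now >= slot_start:
            target = slot_target
    for i in range(already_running, target):
        start_vm(f"desktop-{i:04d}")

prestart(time(7, 30), already_running=20)   # brings the pool up to 50 desktops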

Thin clients
VDI and SBC both enable the hosting of desktops on central server farms and use the same protocols to deliver the output of application screens to users. Thin clients communicate with the SBC or VDI server. They come in two flavors: hardware and software based thin clients.

Hardware based thin clients are lightweight computers that are relatively inexpensive and have no moving parts or local disk drives. The devices need no local configuration and can be used directly after plugging them into the network, making them easy to replace when one fails. They eliminate the need to upgrade PCs or laptops on a regular basis.

Software based thin clients are applications running in a normal client operating system like Windows, Linux, or Mac OS X. They can also run on mobile devices like tablets and smartphones.


This entry was posted on Friday 27 November 2015

Stakeholder management

Stakeholders are people who have a stake in the system that is designed, built, implemented, managed, and used. Stakeholders have concerns about the system, and these concerns must be addressed. To manage the communication with stakeholders, a stakeholder analysis should be performed at the start of the project. This analysis comprises:

  • a stakeholder landscape
  • a ranking
  • a stakeholder map
  • a communication plan

Stakeholder landscape
To manage stakeholders effectively, a list of stakeholders must be compiled. A good way to do this is to create a visual map. Put the main system in the centre and the main components of the system around it. For each main component, define the roles, like the business owner, the user, external parties, and the system manager. Then identify the actual persons working in these roles.

Ranking the stakeholders
When the stakeholder landscape is clear, a list of the stakeholders can be created. All stakeholders are categorized based on their interest in the project and the influence they have on the success of the project. Most projects lead to changes in both the IT landscape and (often) the business processes, and therefore to concerns of the stakeholders. For every stakeholder, their concerns are weighted and given a number between one and three. This number is called interest. One means low interest for this stakeholder; an interest of three indicates that the project brings many, or complex, changes for the stakeholder.

Some stakeholders have more influence on the project than others. This influence is also ranked between one and three. One means the stakeholder is considered to have very little influence on the project or the solution being built. An influence of three means the stakeholder has much power to resist or support the project or solution.
Based on the rankings for all concerns, an average is calculated per stakeholder.

Stakeholder map
In this stage, both the interest and the influence are ranked either high or low per stakeholder. When the stakeholders are ranked, they are characterized using the following stakeholder map.

2015-10/stakeholder-map.jpg


The stakeholder map classifies all stakeholders into four groups:

  • Low interest, low influence – Occasionally contact. These are relatively unimportant stakeholders, but keeping in touch with them is a good idea, just in case their status changes.
  • High interest, low influence – Keep informed. These stakeholders are easy to ignore, as they apparently cannot derail the project. However, if sufficiently upset, they may gain influence by low-level blocking and other techniques of resistance to the project. Do remember that minorities can be very powerful, particularly if they work together or if they get powerful allies.
  • Low interest, high influence – Keep satisfied. Stakeholders with a low interest in the project will not be particularly worried about it, so they are not too much of a problem in the actual project. A problem can appear when they are persuaded to act for those who oppose the project. It is thus important to keep them satisfied, for example with regular meetings that explain what is happening.
  • High interest, high influence – Actively engage. These stakeholders are both significantly affected by the project and most able to do something about it, either by supporting or by opposing the project. It is particularly important to engage these stakeholders in the project, ensuring that they understand what is going on, and also to create buy-in so they feel a sense of ownership of what is being done.


Based on the classification of the stakeholders, a communication plan must be created. In the communication plan, the stakeholders and the frequency and type of contact per stakeholder are listed. This ensures that the stakeholders get the attention they need and deserve.
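
As an added illustration of how the ranking and the stakeholder map fit together, the sketch below (a hypothetical example, not part of the original method) averages the interest and influence scores per stakeholder and maps them onto the four communication strategies. The names, scores, and the threshold of 2 for "high" are assumptions.

# Hypothetical sketch: average the interest and influence scores (1-3) per
# stakeholder and map them onto the four communication strategies above.
# The stakeholder names, scores, and the threshold of 2 are assumptions.

stakeholders = {
    # name:             ([interest scores], [influence scores]) per concern
    "Business owner":   ([3, 3], [3, 2]),
    "End user group":   ([3, 2], [1, 1]),
    "External vendor":  ([1, 1], [3, 3]),
    "Service desk":     ([1, 2], [1, 1]),
}

def strategy(interest: float, influence: float) -> str:
    if interest >= 2 and influence >= 2:
        return "Actively engage"
    if interest >= 2:
        return "Keep informed"
    if influence >= 2:
        return "Keep satisfied"
    return "Occasionally contact"

for name, (interest_scores, influence_scores) in stakeholders.items():
    interest = sum(interest_scores) / len(interest_scores)
    influence = sum(influence_scores) / len(influence_scores)
    print(f"{name:16} interest={interest:.1f} influence={influence:.1f} "
          f"-> {strategy(interest, influence)}")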

Communication plan
At the beginning of the project, individual interviews should be held by the architect and the relevant project members with the high interest, high influence stakeholders. This opens up communication channels between the architects and the most important stakeholders, enabling smooth communications in the future. In the interviews, the interests of the stakeholders are discussed and arrangements are made about the frequency and form of future communications. It is always a good idea to have the follow-up stakeholder discussions with multiple stakeholders in one room. This not only saves time for the project team, but also opens up communications between the stakeholders about the project. It is not unusual that the stakeholders have never exchanged ideas and concerns amongst each other before. In such a setting, conflicting concerns can often be cleared up easily and early.

It is important for the architect to address all stakeholders' concerns, even if it means that some concerns might not be mitigated. Addressing the concerns of all stakeholders must be done during the full project life cycle, as new concerns will arise during the project. This is perfectly normal, as all stakeholders get more insight into the results of the project and as the business continues to move forward during the project's life span. These new concerns must be handled in the same way as the original concerns.

Typically, stakeholders are only willing to support the project when they feel that their concerns are taken care of and given serious attention.


This entry was posted on Friday 06 November 2015

x86 platform architecture

Introduction
The x86 platform is the most dominant server architecture today. While the x86 platform was originally designed for personal computers, it is now implemented in all types of systems, from netbooks up to the fastest multi-CPU servers.

x86 servers are produced by many vendors. The best known vendors are HP, Dell, HDS (Hitachi Data Systems), and Lenovo (the former IBM x86 server business, acquired by Lenovo in 2014). These vendors typically purchase most server parts (like video graphics cards, power supplies, RAM, and disk drives) from other vendors. This makes x86 server implementations very diverse. So while the x86 architecture is standardized, its implementation is highly dependent on the vendor and the components available at a certain moment.
x86 servers typically run operating systems not provided by the vendors of the hardware. Most often Microsoft Windows and Linux are used, but x86 systems are also capable of running special purpose operating systems.

History
Most servers in datacenters today are based on the x86 architecture. This x86 architecture (also known as PC architecture) is based on the original IBM PC. The IBM PC’s history is described in more detail in chapter 14.
x86 servers first appeared in the 1990s. They were basically PCs, but were housed in 19” racks without dedicated keyboards and monitors.

Over the years, x86 servers became the de-facto standard for servers. Their low cost, the fact that there are many manufacturers and their ability to run familiar operating systems like Microsoft Windows and Linux, made them extremely popular.

x86 architecture
The x86 architecture consists of several building blocks, integrated in a number of specialized chips. These chips are also known as the x86 chip set.

The heart of an x86 based system is a CPU from the x86 family. The CPU contains a large number of connection pins to connect address lines, data lines, clock lines, and additional logic connections.

Northbridge/Southbridge x86 architecture
Earlier x86 systems utilized a Northbridge/Southbridge architecture. In this architecture, the data path of the CPU, called the Front Side Bus (FSB), was connected to a fast Northbridge chip, transporting data between the CPU and both the RAM memory and the PCIe bus. The Northbridge was also connected to the Southbridge chip by a bus called the Direct Media Interface (DMI). The relatively slow Southbridge chip connected components with slower data paths, like the BIOS, the SATA adaptors, USB ports, and the PCI bus.

2015-09/northbridge-southbridge-x86-architecture.jpg

PCH based x86 architecture
In 2008, with the introduction of the Intel 5 Series chipset, the Northbridge/Southbridge architecture was replaced by the Platform Controller Hub (PCH) architecture. In this architecture, the Southbridge functionality is managed by the PCH chip, which is directly connected to the CPU via the DMI.

2015-09/pch-based-x86-architecture.jpg

Most of the Northbridge functions were integrated into the CPU while the PCH took over the remaining functions in addition to the traditional roles of the Southbridge. In the PCH architecture, the RAM and PCIe data paths are directly connected to the CPU. Examples of x86 architectures that have the Northbridge integrated in the CPU are Intel’s Sandy Bridge and AMD's Fusion.

In 2015, the Skylake architecture is the most recent Intel x86 architecture. Some variants of Skylake will have the PCH integrated in the CPU as well, which makes the CPU effectively a full system on a chip (SoC). In 2015, Intel announced the Broadwell-based Xeon D as its first platform to fully incorporate the PCH in an SoC configuration.
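
As a small, hedged illustration (assuming a Linux system with the lspci utility installed), the sketch below lists the PCI devices whose class description mentions a bridge or controller. On a modern system this gives a rough impression of which of the functions described above are present, and how many of them are now integrated in the CPU or PCH.

# Rough illustration (assumes a Linux host with the lspci utility installed):
# list PCI devices whose description mentions a bridge or controller, to get
# an impression of the chipset functions present on a machine.
import subprocess

def pci_devices():
    output = subprocess.run(["lspci"], capture_output=True, text=True, check=True).stdout
    return [line for line in output.splitlines() if line.strip()]

for line in pci_devices():
    description = line.split(" ", 1)[1] if " " in line else line
    if "bridge" in description.lower() or "controller" in description.lower():
        print(line)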


This entry was posted on Friday 16 October 2015

