Tag: architecture

Virtual Desktop Infrastructure (VDI); Session Based Computing

Spring is fully upon us and the summer heat is looming in the not too distant future. Many of us are planning out our summer vacations to beat the heat and spend time with our friends and families. While our minds are probably already off to some beachside locale, there is still a bit of time before we’ll be flying there ourselves. In the meantime, perhaps now is as good a time as any to look into moving your business over to an older and simpler way of computing. Session based technology has been around for many years, and at one point in the late 90’s/early 2000’s it was a very popular desktop architecture. It became less popular for a variety of reasons, primarily because desktop hardware costs decreased significantly. Session Based Computing takes all the data and processing activity off the local desktop and has it take place on a robust server. By doing this you can have multiple desktop sessions running on a single server if you were so inclined. As a best practice, bva recommends spreading all sessions over two (2) servers to ensure up-time and load balancing for the user community (a minimal sketch of this load-balancing idea follows the list below). The great advantages of Session Based Computing are the following:

  • Smaller Footprint
  • Eco-Friendly and Greener
  • All Data on Servers, No Loss of Data
  • Seamless and Consistent Interface Across Different PCs
  • Ability to Leverage Older PC Hardware for Production
  • Ability to Leverage Newer Operating Systems Virtually Without Conflict
  • Application Virtualization Ensures Seamless User Experience
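
To make the two-server recommendation a bit more concrete, below is a minimal Python sketch of a session broker that spreads new sessions across two session host servers by always placing the next session on the lesser-loaded host. The host names and users are hypothetical; in practice the connection broker built into Remote Desktop Services or Citrix handles this for you.

```python
# Minimal sketch: spread user sessions across two session hosts (hypothetical names).
class SessionBroker:
    def __init__(self, hosts):
        # Track the active sessions on each session host.
        self.sessions = {host: [] for host in hosts}

    def connect(self, user):
        # Least-loaded placement: pick the host with the fewest active sessions.
        target = min(self.sessions, key=lambda h: len(self.sessions[h]))
        self.sessions[target].append(user)
        return target

    def disconnect(self, user):
        # Remove the user's session from whichever host holds it.
        for host, users in self.sessions.items():
            if user in users:
                users.remove(user)
                return host
        return None

broker = SessionBroker(["SESSIONHOST01", "SESSIONHOST02"])  # two servers for up-time
for user in ["alice", "bob", "carol", "dave"]:
    print(user, "->", broker.connect(user))
```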

The most popular products leveraged today for this type of architecture are as follows:

  • Remote Desktop Services (Terminal Server)
  • Citrix Systems
  • VMware View

Virtual Desktop Infrastructure (VDI) is closely related to Session Based Technology. VDI is an emerging architectural model where a Windows client operating system runs in server-based virtual machines (VMs) in the data center and interacts with the user’s client device, such as a PC or a thin client. Similar to session virtualization (formerly known as Terminal Services), VDI provides IT with the ability to centralize a user’s desktop; instead of a server session, however, a full client environment is virtualized within a server-based hypervisor. With VDI, the user gets a rich and individualized desktop experience with full administrative control over the desktop and applications. However, this architecture, while flexible, requires significantly more server hardware resources than the traditional session virtualization approach.
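
To put a rough number on that hardware difference, here is a back-of-the-envelope Python sketch comparing how many servers a user population might need under session virtualization versus full VDI. The per-user memory figures are placeholder assumptions for illustration only, not benchmarks or sizing guidance.

```python
# Back-of-the-envelope capacity sketch; all numbers are illustrative assumptions.
USERS = 200
SERVER_RAM_GB = 128

# Assumed per-user memory footprints (placeholders, not benchmarks):
SESSION_RAM_GB = 0.5   # one session sharing a server OS
VDI_VM_RAM_GB = 4.0    # a full Windows client VM per user

def servers_needed(per_user_gb):
    users_per_server = int(SERVER_RAM_GB // per_user_gb)
    # Ceiling division so every user gets a slot (up-time still calls for at least two hosts).
    return -(-USERS // users_per_server)

print("Session-based servers:", servers_needed(SESSION_RAM_GB))
print("VDI servers:          ", servers_needed(VDI_VM_RAM_GB))
```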

Key benefits of VDI are:

  • Better enablement of flexible work scenarios, such as work from home and hot-desking
  • Increased data security and compliance
  • Easy and efficient management of the desktop OS and applications

Application Virtualization – The Basics

Application Virtualization is the future, and that is clearer today than it has ever been. I always find it funny how people revert back to the basics after every other form of architecture is explored. Application virtualization refers to several techniques that make running applications more protected, more flexible, or easier to manage. Modern operating systems attempt to keep programs isolated from each other: if one program crashes, the remaining programs generally keep running. However, bugs in the operating system or applications can cause the entire system to come to a screeching halt or, at least, impede other operations.

Full application virtualization requires a virtualization layer. Application virtualization layers replace part of the runtime environment normally provided by the operating system. The layer intercepts all file and Registry operations of virtualized applications and transparently redirects them to a virtualized location, often a single file. The application never knows that it is accessing a virtual resource instead of a physical one. Since the application is now working with one file instead of many files and Registry entries spread throughout the system, it becomes easy to run the application on a different computer, and previously incompatible applications can be run side by side. Examples of this technology for the Windows platform are Cameyo, Ceedo, Evalaze, InstallFree, Citrix XenApp, Novell ZENworks Application Virtualization, Endeavors Technologies Application Jukebox, Microsoft Application Virtualization, Software Virtualization Solution, VMware ThinApp and InstallAware Virtualization.
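
The interception-and-redirection idea is easier to see in code. Below is a heavily simplified Python sketch of a virtualization layer that redirects an application’s file operations into a per-application virtual location instead of the real system path. The paths and application name are made up, and real products hook the Windows file and Registry APIs at a much lower level rather than wrapping open() like this.

```python
import os

# Simplified sketch of an application virtualization layer: file operations
# aimed at system locations are transparently redirected into a per-app
# "bubble" directory. Paths and the application name are hypothetical.
VIRTUAL_ROOT = os.path.join(os.getcwd(), "appv_bubble", "ExampleApp")

def virtualize_path(path):
    # Redirect an absolute path into the virtual root, preserving its structure.
    relative = path.lstrip("/\\").replace(":", "")
    return os.path.join(VIRTUAL_ROOT, relative)

def virtual_open(path, mode="r"):
    # The application calls this as if it were the normal open();
    # it never knows it is reading and writing inside the bubble.
    redirected = virtualize_path(path)
    os.makedirs(os.path.dirname(redirected), exist_ok=True)
    return open(redirected, mode)

with virtual_open("C:/ProgramData/ExampleApp/settings.ini", "w") as f:
    f.write("theme=dark\n")

print("Actually written to:", virtualize_path("C:/ProgramData/ExampleApp/settings.ini"))
```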

Technology categories that fall under Application Virtualization include:

  • Application Streaming – Pieces of the application’s code, data, and settings are delivered when they’re first needed, instead of the entire application being delivered before startup. Running the packaged application may require the installation of a lightweight client application. Packages are usually delivered over a protocol such as HTTP, CIFS or RTSP (a rough sketch of this on-demand model follows this list).
  • Desktop Virtualization/Virtual Desktop Infrastructure (VDI) – The application is hosted in a VM or blade PC that also includes the operating system (OS). These solutions include a management infrastructure for automating the creation of virtual desktops and providing access control to the target virtual desktops. VDI solutions can usually fill the gaps where application streaming falls short.
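
As a rough illustration of the streaming model in the first bullet above, the sketch below fetches pieces (blocks) of a packaged application over HTTP only when they are first touched and caches them locally. The package URL, block size and layout are hypothetical placeholders; real products use their own packaging formats and protocols.

```python
# Sketch of on-demand application streaming: blocks of a package are fetched
# over HTTP only when first needed, then cached locally. The package URL and
# block size are hypothetical placeholders.
import urllib.request

PACKAGE_URL = "http://example.com/packages/ExampleApp.pkg"  # hypothetical
BLOCK_SIZE = 64 * 1024

class StreamedPackage:
    def __init__(self, url):
        self.url = url
        self.cache = {}  # block index -> bytes already downloaded

    def read_block(self, index):
        if index not in self.cache:
            # Ask the server for just the byte range this block covers.
            start = index * BLOCK_SIZE
            request = urllib.request.Request(
                self.url,
                headers={"Range": f"bytes={start}-{start + BLOCK_SIZE - 1}"},
            )
            with urllib.request.urlopen(request) as response:
                self.cache[index] = response.read()
        return self.cache[index]

# Usage sketch: the client launches the app after fetching only block 0,
# pulling later blocks as the application first touches them.
# pkg = StreamedPackage(PACKAGE_URL)
# startup_code = pkg.read_block(0)
```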

Provided below are some basic terms as well as architectural frameworks to consider when deploying a solution of this nature:

  • Application Streaming – Rather than installing all applications on every user’s machine, applications are delivered to each user’s PC as needed. This enables the applications to be updated centrally and also provides a way to measure each user’s application requirements over time. See application streaming.
  • Terminals to a Central Computer – The oldest network architecture: all applications and data are stored in a centralized server or cluster of servers. The user’s PC functions like a terminal to the server, or dedicated terminals are used. The applications are said to be “virtualized” because they function as if they were running on the client. See thin client.
  • Partition the Hardware – This is the traditional meaning of “virtualization” and refers to partitioning a computer in order to run several applications without interference, each in its own “virtual machine.” Deployed in servers and clients, this is more accurately called “server virtualization” and “client virtualization.” Contrast with OS virtualization. See virtual machine.
  • Write the Program Once, Run Everywhere – An interpreted programming language enables the same program to run on different machine platforms, with Java and Visual Basic being the major examples (see Java Virtual Machine and Visual Basic). The applications are said to be “virtualized” because they run on any platform that has a runtime engine for that language.
  • Dynamic Application Assignment – This approach treats servers in the datacenter as a pool of operating system resources and assigns those resources to applications based on demand in real time. The pioneer in this area is Data Synapse Inc. The applications are said to be “virtualized” because they can run on any server.
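
To illustrate the dynamic assignment idea in the last bullet, here is a small Python sketch that treats a handful of servers as a shared pool of capacity and places application instances on whichever server currently has the most headroom. The server names and capacity numbers are made up for illustration.

```python
# Sketch of dynamic application assignment: servers form a pool of capacity and
# applications are placed on demand. Names and capacities are illustrative.
class ServerPool:
    def __init__(self, capacities):
        self.capacities = dict(capacities)   # server -> free "slots"
        self.placements = []                 # (app, server) assignments

    def assign(self, app, demand):
        # Place the app on the server with the most free capacity, if it fits.
        server = max(self.capacities, key=self.capacities.get)
        if self.capacities[server] < demand:
            return None  # no server can satisfy the demand right now
        self.capacities[server] -= demand
        self.placements.append((app, server))
        return server

pool = ServerPool({"srv-a": 8, "srv-b": 6, "srv-c": 4})
for app, demand in [("billing", 3), ("reporting", 5), ("intranet", 2)]:
    print(app, "->", pool.assign(app, demand))
```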

Benefits of Application Virtualization

  • Allows applications to run in environments that do not suit the native application.
  • May protect the operating system and other applications from poorly written or buggy code.
  • Uses fewer resources than a separate virtual machine.
  • Run applications that are not written correctly, for example applications that try to store user data in a read-only system-owned location.
  • Run incompatible applications side-by-side, at the same time and with minimal regression testing against one another.
  • Maintain a standard configuration in the underlying operating system across multiple computers in an organization, regardless of the applications being used, thereby keeping costs down.
  • Implement the security principle of least privilege by removing the requirement for end-users to have Administrator privileges in order to run poorly written applications.
  • Simplified operating system migrations.
  • Accelerated application deployment, through on-demand application streaming.
  • Improved security, by isolating applications from the operating system.
  • Enterprises can easily track license usage. Application usage history can then be used to save on license costs.
  • Fast application provisioning to the desktop based upon the user’s roaming profile.
  • Allows applications to be copied to portable media and then imported to client computers without the need to install them.

Limitations of Application Virtualization

  • Not all software can be virtualized. Some examples include applications that require a device driver and 16-bit applications that need to run in shared memory space.
  • Some types of software, such as anti-virus packages and applications that require heavy OS integration, are difficult or impossible to virtualize.
  • Only file and Registry-level compatibility issues between legacy applications and newer operating systems can be addressed by application virtualization.

Establish Standards for Cloud Computing

Cloud computing is the latest hot trend in the IT world and among technology consulting companies, to the point where almost every meeting I go on touches on the subject, often in a very misinformed way. The perception in the marketplace is that the cloud is cheaper, more reliable, and more secure. That is simply not the case unless the proper steps and procedures are followed. When will we see cloud standards? That is a great question, because the security questions of encryption and penetration capability still have not been addressed. How reliable is the data in the cloud?

The protocol, data format and program-interface standards for using cloud services are mostly in place, which is why the market has been able to grow so fast. But standards for configuration and management of cloud services are not here yet. The crucial standards for practices, methods and conceptual architecture are still evolving, and we are nowhere close. Cloud computing will not reach its full potential until the management and architectural standards are fully developed and stable. Until these standards are formalized and agreed upon, there will be pitfalls and mishaps that could otherwise be avoided.

The main protocol premise of the cloud is TCP/IP. The cloud usually uses established standard Web and Web Service data formats and protocols. When it comes to configuration and management, the lack of effective, widely accepted standards is beginning to be felt, and I have seen the negative results. There are several agencies and organizations working on cloud configuration and management standards, including the Distributed Management Task Force (www.dmtf.org), the Open Grid Forum (www.ogf.org), and the Storage Networking Industry Association (www.snia.org).

There are, as yet, no widely accepted frameworks to assist with the integration of cloud services into enterprise architectures. An area of concern is the possibility of changing cloud suppliers. You should have an exit strategy before finding a provider and signing a cloud contract. There is no point in insisting that you own the data and can remove it from the provider’s systems at any time if you have nowhere else to store the data and no other systems to support your business.

When selecting an enterprise cloud computing provider, the resulting architecture should ensure that:

• the cloud services form a stable, reliable component of the architecture for the long term;
• they are integrated with each other and with the IT systems operated by the enterprise; and
• they support the business operations effectively and efficiently.

Other groups that are looking to establish industry standards include the U.S. National Institute of Standards and Technology (http://csrc.nist.gov), the Object Management Group (www.omg.org) and the Organization for the Advancement of Structured Information Standards (www.oasis-open.org).