Considering the Network
Much information already exists on deploying networks and hardware, so this section focuses on what is different: how thin clients use the network compared with personal computers, illustrated with a few anecdotes. Certain designs have also proven very stable and come closest to an ideal solution.
Your first thought might be that your current network will work fine with thin clients, and that is entirely possible. But your network might be something that has grown through the years and is not that well designed. Implementing thin clients, then, might be a good time to review the design and make upgrades as needed.
Personal Computers versus Thin Clients
Based on conversations with some hardware vendors, it's clear that most network equipment is tested with the expectation that personal computers will be deployed.
The biggest difference is in how the two platforms use the network. With a personal computer, software applications are often stored on network servers: when you activate an icon, the network pushes the executable down to your PC. Once the application is loaded into memory it runs locally, and very little network interaction takes place until you save a file. In other cases, the executables are on the local PC, and the network is not used until files are saved. If an executable takes a few seconds longer to download, you won't really notice it on a personal computer. Some networking devices seem tuned for efficient bulk downloads rather than for the smaller, more plentiful packets of network computing.

When you activate a software application on a thin client, the presentation of the user interface is pushed to you from the server, and then all keystrokes and mouse activity are transmitted back and forth to the server in real time. The network needs to be very fast, have low latency, and be configured to pass packets to the servers immediately.
For your implementation, the network backbone should be Gigabit if possible; if your solution serves only a small number of users, this might not be required. Ideally, fibre optic lines are then run to each of the wiring closets, and each switch should have its own line. It is advisable to avoid daisy-chaining the switches together, which prevents any kind of contention between them. The servers are all plugged into the backbone at Gigabit as well. If a server is required away from a centralized computer room, it is better to run a separate line for it instead of plugging it into a switch shared with thin clients. Keep the data paths solidly designed so that real-time interaction is never delayed.
X windows, RDP, or Citrix is used to display the user presentation. This means that the software runs on the server, but the image of that software is transmitted over the network. Without a strong network, window repaints will be slow and feel sluggish, reinforcing the perception that a personal computer can run software faster than a networked solution. A correctly designed network will provide excellent response time, and the user community should not even see a difference.
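As a concrete sketch, X windows sessions are commonly carried over SSH. The host alias, server name, and login below are hypothetical; the options enable X11 forwarding and, for slower links, compression of the protocol stream:

```
# ~/.ssh/config -- hypothetical entry for an application server
Host appserver
    HostName appserver.example.com   # assumed server name
    User jdoe                        # assumed login
    ForwardX11 yes                   # tunnel the X presentation back to the client
    Compression yes                  # helpful on slower links
```

With this entry in place, running `ssh appserver xterm` would display the remote xterm on the local thin client's screen.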
Font servers are used to distribute fonts to users. A font server is just a process or application that runs on the server. When a user requests a font, it is sent over the network to the thin client and made available immediately. The strength of this design is that all of your employees will have the same fonts, so shared documents render exactly the same way no matter where you log into the network from. Anyone who has shipped documents between personal computers with different fonts will greatly appreciate this design. When the network is configured correctly, font download and interaction are immediate and undetected by the user community.
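A minimal sketch of the client side, assuming an xfs font server is already running on a host here called fontserver.example.com (a hypothetical name); each thin client's X server adds the font server to its font path:

```
# /etc/X11/xorg.conf "Files" section on each thin client
Section "Files"
    # xfs font server on its traditional port, 7100; hostname is hypothetical
    FontPath "tcp/fontserver.example.com:7100"
EndSection
```

Fonts installed once on the server are then available to every client that lists it in its font path, which is what keeps documents rendering identically everywhere.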
NFS mounts are used to connect disk drives between Linux servers, allowing applications to share data across the various servers on your network. Response time needs to be excellent to provide very fast file saves and retrievals, while avoiding applications that lock or time out while trying to interact with files.
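A minimal sketch of such a share, with hypothetical host names, paths, and network range: the server lists the directory in /etc/exports, and each application server mounts it through /etc/fstab.

```
# On the NFS server, /etc/exports -- share /export/data with the server subnet
/export/data  192.168.1.0/24(rw,sync,no_subtree_check)

# On each application server, /etc/fstab -- mount the share at boot
nfsserver.example.com:/export/data  /data  nfs  rw,hard  0  0
```

The `hard` option makes clients retry a stalled server rather than return errors to applications, which matches the goal of avoiding file operations that fail mid-save.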
It would be wonderful if Gigabit could be run to all of your facilities. But the truth is that often you cannot deploy that speed, because of cost or the physical locations of buildings. Once networking speeds drop below 100 Megabit, you will no longer be able to use native X windows and must consider deploying products that compress presentation data. Microsoft RDP will do this, along with Citrix MetaFrame, TightVNC, and NX from NoMachine.
Most of the products the author has tested that compress data become usable at around 100 Kbit/s. Dialup connections will work, but repaints will be tedious and not very efficient. A good rule of thumb is to multiply the number of concurrent users at each remote site by 100 Kbit/s to get a rough estimate of the bandwidth required; 3 concurrent users would need roughly 300 Kbit/s. Remember, too, that print jobs will very possibly be running on the same circuit, consuming bandwidth as well. The user community will perceive 'slowness' mostly in the user presentation itself; print jobs that take a bit longer are not normally noticed. So one might consider running two circuits to remote sites, putting the user sessions on one and the print jobs on the other. That way, if massive print jobs are sent, the users won't notice and can continue working. It should be noted as well that some products, such as Citrix, support printer connectivity that is also compressed.
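The sizing rule above can be sketched in a few lines of shell. The user count and the 100 Kbit/s per-session figure are the rough estimates from the text, not measured values:

```shell
#!/bin/sh
# Rough remote-site bandwidth estimate: ~100 Kbit/s per concurrent
# compressed session (a rule of thumb, not a guarantee).
concurrent_users=3
kbit_per_user=100
total_kbit=$(( concurrent_users * kbit_per_user ))
echo "Estimated minimum: ${total_kbit} Kbit/s"
```

A site that also carries print traffic on the same circuit should be sized above this estimate.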
The most important issue with remote sites is stability and uptime. When you centralize all of your software, the network must be available or users will not be able to log into the servers and do their jobs. Many people do not care how it works, just that it is reliable. Consider all of your options, such as T1 connections, DSL, and cable modems, and then select the solution that seems the best fit. One effective method is to list all available networking methods and then create a chart that clearly spells out the features and speed within each category. As a line becomes cheaper, it normally becomes slower, and the decision makers need to understand that at a certain point presentation and software application speed will start to degrade. It is also important to obtain from the vendor exact service levels for each connection method: commercial and business lines often guarantee a minimum amount of bandwidth, while regular home-use circuits are often rated for 'burst rates' and run considerably slower than the specified rate.
Some remote users will be on wireless connections. They too will require an application that performs compression and should be considered in the design. Cellular wireless broadband now provides enough bandwidth to work with centralized computers.