This article helps customers assess how latency, packet loss, and bandwidth affect the RDP user experience. Most importantly, it offers options and recommends robust tools for troubleshooting issues that undermine that experience.
Latency describes the length of time it takes a packet of data to travel from the computer hosting the user desktop to the RDS Host and back, known as the round-trip time (RTT). It is also referred to as ping and is expressed in milliseconds (ms).
Ideal Latency for RDP
The ideal latency for the best user experience over RDP is less than 100ms. Good user experience can be maintained even after latency increases up to 120ms. The user experience will begin to deteriorate when the latency is more than 150ms. High latency increasingly becomes problematic as networks expand to include remote desktops and connect to cloud servers to deliver virtual workspaces.
Symptoms of High Latency
The main symptom of high latency is “laggy” performance and response, which manifests as:
Pages responding slowly
Slow uploads and downloads
Lagging response to mouse clicks
Websites loading very slowly
Slow access to servers and online applications
Slow screen refreshes
Delays before typed characters appear on the screen
Because the RDP user experience is highly latency-sensitive, improving it means addressing network latency, and that starts with measuring it.
Measuring Network Latency
Checking your network latency should be the first step you take after lags in performance and response undermine the user experience. How do you check your network latency using Windows?
Open the Command Prompt
Type tracert followed by the hostname or IP address of the affected destination
The tracert command displays every router between the computer hosting the user desktop and the RDS Host, along with a time measurement in milliseconds (ms) for each hop.
Read the times on the final hop. Each hop's measurement already includes the full path up to that point, so the last line shows the latency between the device hosting the user desktop and the RDS Host; do not add the hop measurements together.
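Reading the final hop out of a captured trace can be sketched in Python. The trace output below is an invented sample, not a real route:

```python
import re

# Parse captured tracert-style output; each line lists three probe times.
sample = """
  1     2 ms     1 ms     2 ms  192.168.1.1
  2    11 ms    10 ms    12 ms  10.0.0.1
  3    34 ms    33 ms    35 ms  203.0.113.7
"""

hops = []
for line in sample.strip().splitlines():
    times = [int(t) for t in re.findall(r"(\d+) ms", line)]
    if times:
        hops.append(times)

# The last hop's times are the end-to-end RTT; averaging the three
# probes gives a single latency figure for the destination.
rtt_ms = sum(hops[-1]) / len(hops[-1])
print(f"RTT to destination: {rtt_ms:.1f} ms")
```

Averaging the probes smooths out one-off spikes; for the sample above the result is 34.0 ms.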
The milliseconds obtained from this basic measurement are the RTT between the affected user's device and the host. Latency can also be measured as the time between sending a request and receiving the first byte of the response from the RDS Host, known as Time to First Byte (TTFB).
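As a rough, self-contained sketch of both measurements, the snippet below times a TCP connect (a crude RTT proxy) and the arrival of the first response byte (TTFB) against a throwaway local HTTP server standing in for the RDS Host. Real troubleshooting would target the host itself rather than 127.0.0.1:

```python
import http.server
import socket
import threading
import time

class _Quiet(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")
    def log_message(self, *args):  # silence per-request logging
        pass

# Throwaway local server standing in for the RDS Host.
server = http.server.HTTPServer(("127.0.0.1", 0), _Quiet)
threading.Thread(target=server.serve_forever, daemon=True).start()
host, port = server.server_address

def connect_time_ms(host, port):
    """Approximate RTT as the time to complete a TCP connect."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=5):
        pass
    return (time.perf_counter() - start) * 1000

def ttfb_ms(host, port):
    """Time from sending a request until the first response byte arrives."""
    with socket.create_connection((host, port), timeout=5) as s:
        start = time.perf_counter()
        s.sendall(b"GET / HTTP/1.1\r\nHost: %b\r\n\r\n" % host.encode())
        s.recv(1)  # first byte of the HTTP response
        return (time.perf_counter() - start) * 1000

rtt = connect_time_ms(host, port)
ttfb = ttfb_ms(host, port)
print(f"connect (RTT proxy): {rtt:.2f} ms, TTFB: {ttfb:.2f} ms")
server.shutdown()
```

Against a loopback server both numbers are tiny; against a remote RDS Host the gap between them reflects server processing time on top of the network round trip.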
If the RTT attained in this latency measurement procedure is higher than the ideal latency required to deliver good RDP user experience, you will have to employ some tactics to reduce network latency.
How to Reduce High Network Latency
Reducing high latency in your network requires an assessment of the different steps you can take at various points across the network to eradicate or suppress the impact of the factors contributing to high latency. So, what are the factors to suppress or eradicate?
Congestion: Having too many users on your network can increase latency, particularly if they are streaming or downloading large files that consume a lot of bandwidth. Once congestion is detected (typically through packet loss), TCP applies congestion control, cutting its congestion window roughly in half. This reduces the host's sending rate and slows the flow of data to the remote desktop and back, leading to prolonged lags and a bad user experience, which is why it is important to reduce congestion on your network.
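TCP's reaction to congestion can be illustrated with a toy additive-increase/multiplicative-decrease (AIMD) model. The numbers are purely illustrative, not a faithful TCP implementation:

```python
# Toy AIMD sketch: the congestion window grows by one segment per round
# trip and is cut in half whenever loss signals congestion.

def aimd(rounds, loss_rounds, cwnd=1):
    history = []
    for r in range(rounds):
        if r in loss_rounds:
            cwnd = max(1, cwnd // 2)   # multiplicative decrease on loss
        else:
            cwnd += 1                  # additive increase otherwise
        history.append(cwnd)
    return history

# A loss at round 5 halves the window, throttling the sender's rate --
# which the RDP user perceives as a lag.
print(aimd(8, {5}))  # -> [2, 3, 4, 5, 6, 3, 4, 5]
```

The halving at round 5 is the "cut in half" behavior described above: the sending rate recovers only gradually, one segment per round trip.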
Application Performance: Latency can also be affected by applications that execute and perform functions poorly, putting pressure on the network.
Interference: Some wireless devices can interfere with the effectiveness of your network, leading to higher latency.
Based on this analysis, you can troubleshoot network latency issues to suppress or eliminate the impact of these and other factors contributing to high latency on your network.
How to Troubleshoot Network Latency Issues
When addressing problems with many possible causes, start with the basic factors and work toward the more complex contributors.
Perform a simple reboot by disconnecting and restarting computers or network devices.
Use a device monitor to find and remove the devices causing interference on your network.
Ensure all applications are performing as intended.
Group endpoints that communicate with each other most frequently by subnetting your network.
Prioritize the most critical parts of your network using traffic shaping and bandwidth allocation.
Offload traffic to parts of the network with capacity for more users and activity by using a load balancer.
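The subnetting step above can be sketched with Python's ipaddress module. The address range is an invented private-space example:

```python
import ipaddress

# Hypothetical office network: one /22 carved into four /24 subnets,
# e.g. one per team or floor, so endpoints that talk to each other
# most frequently stay on the same subnet.
office = ipaddress.ip_network("10.20.0.0/22")
subnets = list(office.subnets(new_prefix=24))

for net in subnets:
    print(net)

# Traffic between hosts on the same /24 never has to cross the router,
# which cuts cross-network chatter and latency.
assert ipaddress.ip_address("10.20.1.50") in subnets[1]
```

Keeping chatty endpoints on one subnet reduces the number of router hops their traffic takes, which is exactly the latency lever this step is pulling.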
If the latency-related problems persist, the latency is coming from the larger network connecting the RDS Host to the user desktop. You have to test the latency of the network connecting the user's desktop host to the RDS Host. The most effective approach is to measure the user's latency to the GCP region hosting the RDS Host.
Testing Latency to GCP Regions
We recommend www.gcping.com, an effective online tool built to measure latency and test GCP connectivity. When you open the page, the browser on the user's device makes HTTP requests to instances in different GCP regions and displays the median time between request and response.
If the test determines that latency to the GCP Region hosting the RDS Host is more than 100ms, you may have to move the customer to a GCP Region that is closer.
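The comparison such tools perform can be sketched as taking several latency samples per region and comparing medians. The region names below are real GCP regions, but the sample timings are invented for illustration:

```python
import statistics

# Hypothetical per-region latency samples in milliseconds.
samples_ms = {
    "us-east1":     [38, 41, 40, 39, 52],
    "us-central1":  [29, 31, 30, 33, 30],
    "europe-west1": [101, 98, 104, 99, 100],
}

# The median is robust against one-off spikes (like the 52 ms outlier).
medians = {region: statistics.median(s) for region, s in samples_ms.items()}
closest = min(medians, key=medians.get)
print(closest, medians[closest])  # -> us-central1 30
```

If the median to the region hosting the RDS Host exceeds the ~100 ms threshold, the region with the lowest median is the natural migration candidate.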
Information is transmitted as packets: discrete units of data that are meaningless individually until they are reassembled into the content or message being transmitted. In transit, packets can be lost or delayed as they move from hub to hub across the network. The main causes of packet loss include faulty cables, insufficient bandwidth, congestion, software issues, and inadequate hardware such as switches, routers, and firewalls.
How does Packet Loss Manifest?
Packet loss negatively affects the RDP user experience by stalling data in the network before it is delivered to the user. To maintain reliable, in-order delivery, TCP requires the sender to retransmit any lost packet, and the receiver queues everything that arrives after the gap. All packets sent after the lost packet are held in that queue and cannot be delivered to the application until the lost packet is retransmitted, a stall known as head-of-line blocking.
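Head-of-line blocking can be modeled in a few lines: packets are handed to the application strictly in sequence, so one missing packet holds back everything sent after it until the retransmission arrives.

```python
# Toy model of TCP head-of-line blocking with in-order delivery only.

def deliverable(received, next_seq=0):
    """Return the packets the application can read, given which
    sequence numbers have arrived."""
    delivered = []
    while next_seq in received:
        delivered.append(next_seq)
        next_seq += 1
    return delivered

# Packets 0-9 were sent, but packet 3 was lost in transit.
arrived = set(range(10)) - {3}
print(deliverable(arrived))  # -> [0, 1, 2]; packets 4-9 are stuck queued

# Once the retransmitted packet 3 arrives, the whole queue drains at once.
arrived.add(3)
print(deliverable(arrived))  # -> all ten packets, 0 through 9
```

Note that packets 4 through 9 arrived intact yet cannot be delivered: the single gap at sequence 3 stalls the entire stream, which is what the user experiences as a freeze.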
Because all subsequent packets are held, packet loss manifests as a black screen: no data flows into the user's device. The black screen therefore serves as a diagnostic yardstick for eliminating potential causes.
Troubleshooting Packet Loss
First rule out physical connections and software, then proceed to network connections. To rule out physical connections and software as the cause of packet loss:
Check and properly connect all cables and ports.
Restart routers and hardware to clear technical faults or bugs.
Remove any sources of interference, such as cameras, wireless speakers, and phones.
Update device software to eliminate OS bugs as the cause of packet loss.
Use offloading, grouping, and allocation measures to reduce congestion.
If the black screen persists after these checks, physical connections, hardware, and software are unlikely to be the cause, and you can proceed to test packet loss on your wireless coverage and the larger network connection.
How to Test Packet Loss
You can test packet loss caused by issues either with wireless signal coverage or the larger network connection. If you are using Windows 10 connected to a standard Wi-Fi domestic network, you can test packet loss in two steps:
First Step: Command Prompt
Open the "Run" dialog (Windows key + R) to get to the command line interface
Type "cmd" and press Enter to open the Command Prompt
Second Step: Testing packet loss
Create some distance, ideally with concrete walls, between you and the access point (AP) that acts as the router and internet gateway.
Find the AP's IPv4 address (listed as the Default Gateway) by running the "ipconfig" command and checking the Wireless LAN adapter Wi-Fi section.
Run a “ping [target IP] -n 25” command to ping the wireless AP by sending 25 ICMP packets to the AP.
Analyze the packet loss percentage and the average RTT provided.
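The statistics ping reports can be reproduced from raw probe results. In the sketch below, a value of None marks a probe that timed out, and the sample numbers are invented:

```python
# Compute the figures ping prints: packet loss percentage and average RTT.

def ping_stats(rtts_ms):
    sent = len(rtts_ms)
    replies = [r for r in rtts_ms if r is not None]  # drop timed-out probes
    loss_pct = 100 * (sent - len(replies)) / sent
    avg_rtt = sum(replies) / len(replies) if replies else None
    return loss_pct, avg_rtt

# 25 probes, as in the "ping [target IP] -n 25" command; two time out.
samples = [4, 5, 4, None, 6, 5, 4, 5, None, 4] + [5] * 15
loss, avg = ping_stats(samples)
print(f"loss: {loss:.0f}%  average RTT: {avg:.1f} ms")
```

Any non-zero loss to an AP a few meters away is suspicious; 0% loss lets you move on to testing the larger network connection.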
If the results indicate 100% success and 0% loss, you can eliminate Wi-Fi as the cause of packet loss and proceed to test packet loss due to issues on the larger network connection.
There are several online tools you can use to test packet loss on the larger network. For one, you can use Google.com to test the internet connection against potential congestion. If these tests do not reveal the cause of packet loss, you can retest by changing the device, time, or location.
Network bandwidth is the maximum amount of data a wired or wireless network can transmit from one point to another per second, commonly measured in kilobits or megabits per second (kbps, Mbps). Although speed and bandwidth are often used interchangeably, they are not the same: speed is the rate at which data is actually sent, while bandwidth is the capacity of the network. Heavy bandwidth consumption can saturate the network and reduce the volume of data transmitted.
How Bandwidth Issues Manifest
Bandwidth consumption increases when too many users are downloading or streaming large and heavy content on the network. The overload leads to:
Slow screen refresh
Bursts of letters while typing
Degraded or distorted audio
Although these are the symptoms of excessive bandwidth usage, you still need to identify the activities consuming bandwidth.
Identifying High Bandwidth Consumption
This task can be accomplished using the bandwidth-usage monitoring tools built into Windows.
Right-click the taskbar and select "Task Manager" or press "Ctrl+Shift+Esc."
Click "More details" in the Task Manager window
Select the "Processes" tab
Analyze the "Network" column for bandwidth consumption of each process
Once you identify the devices and processes consuming large amounts of bandwidth, turn them off or ask the users of these devices to refrain from activities that consume too much bandwidth. If the problem persists, contact your service provider.
Does Bandwidth affect RDP UX?
RDP is designed to consume as little bandwidth as possible without compromising the user experience. RDP attains very low bandwidth usage in two ways:
Adjustment Protocol: RDP adjusts bandwidth usage per user, depending on the application and scenario. Unlike screen-scrape approaches that consume a constant amount of bandwidth, RDP reduces bandwidth usage to nearly zero when the content displayed on the screen does not change. When the content changes, RDP's bandwidth consumption increases in proportion to the amount of change on screen.
Presentation Virtualization: RDP is a virtual driver that can plug into the graphics system of Windows, just like a display driver. As a virtual driver, it can scale bandwidth consumption up and down by making intelligent encoding decisions to encode commands into RDP wire format. This comes in handy when users are streaming because it can encode bitmaps as well as stream commands like "Render this at this location," leading to a better experience even with very low bandwidth. With its high scalability, RDP consumes only a small percentage of the total CPU load to encode and transmit the graphics.
For standard business applications, RDP delivers a good user experience with connectivity as low as 256 kbps. For instance, users working in Excel or Word send only a few kbps of bandwidth.
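A quick back-of-the-envelope check shows why 256 kbps is workable; the update size below is a hypothetical figure for a small document edit, not a measured RDP payload:

```python
# How long does a small screen update take on a 256 kbps link?
link_kbps = 256
update_bytes = 4 * 1024          # hypothetical ~4 KB encoded update

seconds = (update_bytes * 8) / (link_kbps * 1000)
print(f"{seconds * 1000:.0f} ms to send a {update_bytes} B update")
```

At roughly an eighth of a second per small update, typing and scrolling in a document stay responsive even on a very thin link.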
Latency vs. Bandwidth
When it comes to improving the user experience, latency matters more than bandwidth. Bandwidth mainly matters for applications that send or receive large amounts of data, such as downloading videos, watching Netflix, or uploading to YouTube. For applications that do not move large amounts of data, latency plays the bigger role: from web browsing to online gaming, video conferencing, and chat, interactive and real-time applications live and die by latency. Moreover, reducing latency can drastically increase effective throughput, since TCP throughput is bounded by the window size divided by the RTT.
Sometimes, improving the user experience comes down to basics: more CPU, more RAM, or both. We recommend monitoring and alerting tools for compute resources.
At itopia, we provide historical resource utilization insights from the VM Instances module under the Cloud Manager section. You can access compute utilization insights by clicking the problematic server: in itopia's VM Instances module, select the day from the calendar, and the utilization graph for RAM and CPU is displayed instantly. If real-time alerting is needed, Google Stackdriver is the way to go.