RESEARCH :: Live demo - Data Transfer - High bandwidth to Low bandwidth

I always wanted to do some real experiments and research on packet flow patterns from high-bandwidth to low-bandwidth networks via networking devices. This can be analyzed by capturing network stack buffer data and other parameters, benchmarking, and so on. But the observed data-transfer behaviour is often contaminated by the underlying OS and the way the network stack is implemented. So to understand the nature of packet flow from higher to lower bandwidth (and vice versa, from lower to higher bandwidth), I thought I would experiment with various tools and objects with which we can physically observe this phenomenon. What we observe in software test results is not fully accurate: the operating system is bound by hardware limitations, by non-real-time digitized instruction processing, and by other latencies. And if we cannot understand this fundamental concept in low-speed networks, we can never understand its complications in a gigabit or high-speed network, since high-speed packet processing demands even better hardware and OS capabilities. In this case let us assume everything we are experimenting with lives in the OS layer, i.e. not in dedicated hardware or a complete hardware packet-processing platform.
[Screenshot: network_research_live_demo_kiran]

At times packet processing happens in the Linux kernel networking subsystem, and at each phase there can be a packet buffer or a packet queue. This "phase" can be a module, a sub-module, a component, an individual tiny network stack, and so on. Wherever such a packet queue exists, we can think of it as a funnel. A funnel has a top opening through which you pour water or some other liquid; this is like packets getting added to the packet queue. And the funnel has a bottom hole through which the liquid pours out (exits); this again represents the packet queue where the stored packets are taken out (fetched) and sent on for further processing. The amount of time the liquid spends inside the funnel, or the size of the funnel, more or less represents the packet queue length. It is quite common to change these parameters in the Linux kernel via the /proc interface.
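For example, here is a minimal sketch of how some of these queue knobs can be inspected and changed from user space via /proc, sysctl and ip. The interface name eth0 and the values are just illustrations, not tuning recommendations:

kiran@desktop-i7-5820k:~$ # length of the per-CPU receive backlog queue (one of the kernel's funnels)
kiran@desktop-i7-5820k:~$ cat /proc/sys/net/core/netdev_max_backlog
1000
kiran@desktop-i7-5820k:~$ # enlarge that receive-side funnel
kiran@desktop-i7-5820k:~$ sudo sysctl -w net.core.netdev_max_backlog=3000
kiran@desktop-i7-5820k:~$ # enlarge the transmit queue, i.e. the txqueuelen reported by ifconfig below
kiran@desktop-i7-5820k:~$ sudo ip link set dev eth0 txqueuelen 2000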

For instance, if you run the ifconfig command you can see the default txqueuelen parameter:

kiran@desktop-i7-5820k:~$ ifconfig
enx00808e8e90f4 Link encap:Ethernet  HWaddr 00:80:8e:8e:90:f4  
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

eth0      Link encap:Ethernet  HWaddr fc:aa:14:98:cb:66  
          inet addr:192.168.0.101  Bcast:192.168.0.255  Mask:255.255.255.0
          inet6 addr: fe80::feaa:14ff:fe98:cb66/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:156020 errors:0 dropped:0 overruns:0 frame:0
          TX packets:104413 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:165547621 (165.5 MB)  TX bytes:15061686 (15.0 MB)
          Interrupt:20 Memory:fb100000-fb120000 

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:65256 errors:0 dropped:0 overruns:0 frame:0
          TX packets:65256 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:8107654 (8.1 MB)  TX bytes:8107654 (8.1 MB)

In this case the "eth0" interface is my motherboard's built-in Gigabit Ethernet NIC. "enx00808e8e90f4" is my USB2-to-100Mbps Ethernet NIC adapter. And "lo" is the local loopback. You can notice the txqueuelen is 1000; txqueuelen is nothing but the TX (transmit) queue length. As an end-user this may be the only queue you see from the outside, but within the Linux kernel you will find packet queues at intermediate levels too, and they look almost like cascading funnels, each of a different size. This is the reason I took this screenshot from my research video, where I am trying to show how they cascade.
[Screenshot: network_research_live_demo_kiran_2]
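If you want to actually see a couple of these cascading queues on a live system, standard tools can expose them. A small sketch, again assuming the eth0 interface from above:

kiran@desktop-i7-5820k:~$ # the qdisc queue sitting between the stack and the driver, with drop/overlimit counters
kiran@desktop-i7-5820k:~$ tc -s qdisc show dev eth0
kiran@desktop-i7-5820k:~$ # the NIC's hardware RX/TX ring buffers, yet another funnel below the qdisc
kiran@desktop-i7-5820k:~$ sudo ethtool -g eth0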

Now let us assume you hold a funnel (of average size) as shown and pour water into it from a tea cup. The capacity of the tea cup is almost the same as the funnel's capacity. No matter how fast you pour, the water never overflows the funnel's top brim. This is essentially a producer-consumer situation where the producer and the consumer run at the same rate. This is optimal and a good sign.
Note: to highlight the water in the video, I added red fountain-pen ink to it so that it is clearly visible.
[Screenshot: network_research_live_demo_kiran_3]

Now let us imagine we pour water from a large jar as shown in the screenshot. The funnel size is the same, but water is now being added at a higher rate than the funnel can handle. After a certain point the water starts to overflow the funnel's top brim. This represents a networking device whose processing power is less than the incoming network bandwidth: you are receiving packets at a higher rate than you can process, and at the same time your packet buffers are NOT configured optimally. The water overflowing the funnel's brim represents packet drops.
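You can reproduce this overflow in software too: create an artificially small funnel with tc's token bucket filter (tbf) and pour traffic through it faster than its configured rate. This is only a sketch; eth0, the 1 Mbit/s rate and the traffic source are example choices:

kiran@desktop-i7-5820k:~$ # a deliberately small funnel on the transmit side
kiran@desktop-i7-5820k:~$ sudo tc qdisc add dev eth0 root tbf rate 1mbit burst 32kbit latency 400ms
kiran@desktop-i7-5820k:~$ # now run any bulk transfer through eth0 faster than 1 Mbit/s...
kiran@desktop-i7-5820k:~$ # ...the "dropped" counter is the water spilling over the brim
kiran@desktop-i7-5820k:~$ tc -s qdisc show dev eth0
kiran@desktop-i7-5820k:~$ # remove the bottleneck when done
kiran@desktop-i7-5820k:~$ sudo tc qdisc del dev eth0 root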

Remember, packet drops always happen in a device:

  • anytime there is congestion
  • when data flows from a high-bandwidth to a low-bandwidth network
  • when there are processing delays
  • when buffer capacity is inadequate (buffer overflow)
  • and so on
[Screenshot: network_research_live_demo_kiran_4]

In the same way, let us now experiment with a small syringe, such as an insulin syringe. Insulin syringes have an extremely fine needle, which is a perfect way to obstruct (add resistance to) liquid flow. As shown in the screenshot, each time you suck ink into the syringe, once you pull the piston the ink continues to flow in even after you stop pulling. This is due to the vacuum created inside, while the needle is too fine to feed the ink quickly. Likewise, whenever you push the piston, the ink comes out of the syringe slowly and continues to come out even after you stop pressing. This is due to the high pressure inside the syringe and the extremely fine needle. In fact this is what we observe in electronic transistors, MOSFETs, vacuum tubes, etc., which dynamically restrict the flow of electrons as well as amplify it.
[Screenshot: network_research_live_demo_kiran_5]

So this gives a physical visualization of packet flow in a networking device, especially in cases where networking ports have different bandwidths, or where the device processes the packets, say a VPN device, a WAN Optimization device, an HTTP caching device, and so on. Processing packets adds latency, which means it adds resistance to the packet flow. In such cases the network stack parameters need to be configured for optimal performance, especially the network packet buffers, and sometimes this means selecting the right hardware (CPU, RAM, and so on). I captured these aspects in a complete video, so that my experiments and research may help you understand these concepts and better architect your networking components.
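As a rough software counterpart of the syringe's resistance, you can inject artificial latency with netem and widen the socket buffer limits. A hedged sketch, where the delay and buffer sizes are example values rather than recommendations:

kiran@desktop-i7-5820k:~$ # add 100 ms of "resistance" to the path
kiran@desktop-i7-5820k:~$ sudo tc qdisc add dev eth0 root netem delay 100ms
kiran@desktop-i7-5820k:~$ # permit larger receive and send socket buffers to absorb the added latency
kiran@desktop-i7-5820k:~$ sudo sysctl -w net.core.rmem_max=16777216
kiran@desktop-i7-5820k:~$ sudo sysctl -w net.core.wmem_max=16777216
kiran@desktop-i7-5820k:~$ # remove the emulated latency when done
kiran@desktop-i7-5820k:~$ sudo tc qdisc del dev eth0 root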

Here is the YouTube video link of my complete research:


