

RESEARCH 》 Live demo - Data Transfer - High bandwidth to Low bandwidth

I always wanted to do some real experiments and research on packet flow patterns from high-bandwidth to low-bandwidth networks via networking devices. This can be analyzed by capturing network stack buffer data and other parameters, benchmarking, and so on. But the observed data-transfer behaviour is often contaminated by the underlying OS and the way its network stack is implemented. So to understand the nature of packet flow from higher to lower bandwidth, and vice-versa from lower to higher bandwidth, I thought I would experiment with various tools and setups in which we can observe this phenomenon physically. What we observe in software test results is not fully accurate: the operating system is bound by hardware limitations, by non-real-time digitized instruction processing, and by other latencies. And if we cannot understand this fundamental concept in low-speed networks, then we can never understand its complications in a gigabit or high-speed network, since high-speed packet processing demands even better hardware and OS capabilities. In this article let us assume everything we experiment with lives in the OS layer, i.e. not in dedicated hardware or a complete hardware packet-processing platform.
[Screenshot: network_research_live_demo_kiran]

When packet processing happens in the Linux kernel networking subsystem, there can be a packet buffer or packet queue at each phase. This "phase" can be a module, a sub-module, a component, an individual tiny network stack, and so on. Wherever such a packet queue exists, we can assume it more or less represents a funnel. A funnel has a wide opening at the top into which you pour water or liquid; this is like packets getting added to the packet queue. And the funnel has a narrow hole at the bottom where the liquid pours out (exits); this again represents the packet queue, where the stored packets are taken out (fetched) and sent on for further processing. The amount of time the liquid spends inside the funnel, or the size of the funnel, more or less represents the packet queue length. It is quite common to change these parameters in the Linux kernel via the /proc interface.
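For example, the kernel's receive backlog queue depth can be inspected and tuned via the /proc (sysctl) interface. Here is a minimal sketch; the default of 1000 is typical on modern kernels but may differ on your distribution, and 2000 is just an illustrative value, not a recommendation:

kiran@desktop-i7-5820k:~$ cat /proc/sys/net/core/netdev_max_backlog
1000
kiran@desktop-i7-5820k:~$ sudo sysctl -w net.core.netdev_max_backlog=2000
net.core.netdev_max_backlog = 2000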

Similarly, if you run the ifconfig command you can see the default txqueuelen parameter of each interface.

kiran@desktop-i7-5820k:~$ ifconfig
enx00808e8e90f4 Link encap:Ethernet  HWaddr 00:80:8e:8e:90:f4  
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

eth0      Link encap:Ethernet  HWaddr fc:aa:14:98:cb:66  
          inet addr:192.168.0.101  Bcast:192.168.0.255  Mask:255.255.255.0
          inet6 addr: fe80::feaa:14ff:fe98:cb66/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:156020 errors:0 dropped:0 overruns:0 frame:0
          TX packets:104413 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:165547621 (165.5 MB)  TX bytes:15061686 (15.0 MB)
          Interrupt:20 Memory:fb100000-fb120000 

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:65256 errors:0 dropped:0 overruns:0 frame:0
          TX packets:65256 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:8107654 (8.1 MB)  TX bytes:8107654 (8.1 MB)

In this case the "eth0" interface is my motherboard's built-in Gigabit Ethernet NIC, "enx00808e8e90f4" is my USB2-to-100Mbps Ethernet adapter, and "lo" is the local loopback. You can notice that txqueuelen is 1000; txqueuelen is nothing but the TX (transmit) queue length. As an end-user this may be the only queue you see from the outside, but within the Linux kernel you will find packet queues at intermediate levels too, and together they look almost like cascading funnels, each of a different size. This is why I took this screenshot from my research video, where I am trying to show how they cascade (you can even resize the TX queue at runtime, as shown after the screenshot below).
[Screenshot: network_research_live_demo_kiran_2]
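Here is a minimal sketch of resizing this TX queue (one of these funnels) at runtime, assuming the eth0 interface shown above; 500 is just an illustrative value, and the right size depends on your link speed and workload:

kiran@desktop-i7-5820k:~$ sudo ifconfig eth0 txqueuelen 500
kiran@desktop-i7-5820k:~$ sudo ip link set dev eth0 txqueuelen 500

Both commands do the same thing; the second uses the newer iproute2 toolset.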

Now let's assume you hold a funnel (of average size) as shown and pour water from a tea cup. The capacity of the tea cup is almost the same as the funnel's capacity. No matter how fast you pour, the water never overflows the funnel's top brim. This is almost the same as a producer-consumer situation where the producer and consumer run at the same rate. This is optimal and a good sign.
Note: to highlight the water in the video, I added red fountain-pen ink to it so that it is clearly visible.
[Screenshot: network_research_live_demo_kiran_3]

Now let us imagine we pour water from a large jar as shown in the screenshot. The funnel size is the same, but the water is now being added at a higher rate than the funnel can handle. After a certain point the water starts to overflow from the funnel's top brim. This represents a networking device whose processing power falls short of the network bandwidth: in other words, you are receiving packets at a higher rate than you can process, and at the same time your packet buffers are NOT configured optimally. The water overflowing the funnel's brim represents packet drops.

Remember, packet drops always happen in a device (they can be observed directly, as shown after this list):

  • whenever there is congestion
  • when data flows from a high-bandwidth to a low-bandwidth link
  • when there are processing delays
  • when buffer capacity is inadequate (buffer overflow)
  • and so on
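A minimal sketch of observing such drops with the standard iproute2 and tc tools (the counter values will obviously vary on your system):

kiran@desktop-i7-5820k:~$ ip -s link show dev eth0
kiran@desktop-i7-5820k:~$ tc -s qdisc show dev eth0

The RX/TX "dropped" counters of the first command and the "dropped" field in the qdisc statistics of the second are the software equivalent of water overflowing the funnel's brim.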
[Screenshot: network_research_live_demo_kiran_4]

In the same way, let's now experiment with a small syringe such as an insulin syringe. Insulin syringes have an extremely fine needle, and this is a perfect way to obstruct (or add resistance to) liquid flow. As shown in the screenshot, each time you suck the ink into the syringe by pulling the piston, the ink continues to flow into the syringe even after you stop pulling. This is due to the vacuum created inside, while the needle is too fine to feed the ink quickly. Likewise, whenever you push the piston, the ink comes out of the syringe slowly and continues to come out even after you stop pressing. This is due to the high pressure inside the syringe and the extremely fine needle. In fact, this is what we observe in electronic transistors, MOSFETs, vacuum tubes, etc., dynamically restricting the flow of electrons as well as amplifying it.
[Screenshot: network_research_live_demo_kiran_5]

So this gives a physical visualization of packet flow in a networking device, especially in cases where networking ports have different bandwidths. Likewise, assume the device is also processing these packets, say as a VPN device, a WAN Optimization device, an HTTP caching device, and so on. Processing packets adds latency, which means it adds resistance to the packet flow. In such cases the network stack parameters, especially the network packet buffers, need to be configured for optimal performance, and sometimes you also need to select the right hardware (CPU, RAM, and so on). I captured these aspects in a complete video so that my experiments and research may help you understand these concepts and better architect your networking components.
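As a starting point for such tuning, the global socket buffer ceilings can be raised via sysctl. A minimal sketch; the 16 MB values below are illustrative assumptions, not a recommendation for every workload:

kiran@desktop-i7-5820k:~$ sudo sysctl -w net.core.rmem_max=16777216
net.core.rmem_max = 16777216
kiran@desktop-i7-5820k:~$ sudo sysctl -w net.core.wmem_max=16777216
net.core.wmem_max = 16777216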

Here is the YouTube video link of my complete research:


