The TOFFEE Project


TEST CASES :: TEST RESULTS :: TOFFEE-Mocha-1.0.14 Development version

Here are the test cases and test results for the upcoming TOFFEE-Mocha release, which is still under development. The features of this release are discussed in the software development update: TOFFEE-Mocha WAN Emulation software development - Update: 1-July-2016

Test case 1 :: 999 millisecond constant packet delay: Unlike the earlier maximum limit of 40 milliseconds, the new 999 millisecond delay range allows users to slow down transfer rates even further. Since the delay is applied to packets in each direction, the expected ping round-trip time is roughly 2 × 999 ms ≈ 2000 ms; in the output below the RTT then climbs towards 4000 ms, presumably as successive pings queue up behind earlier delayed packets. The packet loss reported at the end is largely an artifact of interrupting ping while packets were still in flight across the multi-second round trip.

kiran@HP-ENVY-15:~/temp$ ping 192.168.0.1 -s 1000
PING 192.168.0.1 (192.168.0.1) 1000(1028) bytes of data.
1008 bytes from 192.168.0.1: icmp_seq=1 ttl=64 time=2000 ms
1008 bytes from 192.168.0.1: icmp_seq=2 ttl=64 time=2000 ms
1008 bytes from 192.168.0.1: icmp_seq=3 ttl=64 time=2000 ms
1008 bytes from 192.168.0.1: icmp_seq=4 ttl=64 time=2000 ms
1008 bytes from 192.168.0.1: icmp_seq=5 ttl=64 time=2998 ms
1008 bytes from 192.168.0.1: icmp_seq=6 ttl=64 time=2997 ms
1008 bytes from 192.168.0.1: icmp_seq=7 ttl=64 time=3995 ms
1008 bytes from 192.168.0.1: icmp_seq=8 ttl=64 time=3985 ms
1008 bytes from 192.168.0.1: icmp_seq=9 ttl=64 time=3984 ms
1008 bytes from 192.168.0.1: icmp_seq=10 ttl=64 time=3984 ms
1008 bytes from 192.168.0.1: icmp_seq=11 ttl=64 time=3983 ms
1008 bytes from 192.168.0.1: icmp_seq=12 ttl=64 time=3982 ms
1008 bytes from 192.168.0.1: icmp_seq=13 ttl=64 time=3984 ms
1008 bytes from 192.168.0.1: icmp_seq=14 ttl=64 time=3982 ms
^C
--- 192.168.0.1 ping statistics ---
18 packets transmitted, 14 received, 22% packet loss, time 17007ms
rtt min/avg/max/mdev = 2000.042/3277.214/3995.537/873.965 ms, pipe 4
kiran@HP-ENVY-15:~/temp$
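
For a rough point of comparison only (this is the stock Linux netem qdisc, not TOFFEE-Mocha), a similar constant delay can be reproduced on any Linux box; note that netem shapes egress traffic only, so a single qdisc adds the delay in one direction. Here eth0 is a placeholder for whichever interface carries the test traffic:

sudo tc qdisc add dev eth0 root netem delay 999ms   # add a constant 999 ms egress delay on eth0
sudo tc qdisc del dev eth0 root                     # remove the delay once testing is done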

Test case 2 :: 500 millisecond constant packet delay: With a 500 millisecond delay the round-trip time drops to roughly half that of the 999 millisecond case (again roughly 2 × 500 ms ≈ 1000 ms), so you get roughly double the transfer performance.

kiran@HP-ENVY-15:~/temp$ ping 192.168.0.1 -s 1000
PING 192.168.0.1 (192.168.0.1) 1000(1028) bytes of data.
1008 bytes from 192.168.0.1: icmp_seq=1 ttl=64 time=1002 ms
1008 bytes from 192.168.0.1: icmp_seq=2 ttl=64 time=1002 ms
1008 bytes from 192.168.0.1: icmp_seq=3 ttl=64 time=1002 ms
1008 bytes from 192.168.0.1: icmp_seq=4 ttl=64 time=1002 ms
1008 bytes from 192.168.0.1: icmp_seq=5 ttl=64 time=1002 ms
1008 bytes from 192.168.0.1: icmp_seq=6 ttl=64 time=1488 ms
1008 bytes from 192.168.0.1: icmp_seq=7 ttl=64 time=1481 ms
1008 bytes from 192.168.0.1: icmp_seq=8 ttl=64 time=1481 ms
1008 bytes from 192.168.0.1: icmp_seq=9 ttl=64 time=1008 ms
1008 bytes from 192.168.0.1: icmp_seq=10 ttl=64 time=1002 ms
^C
--- 192.168.0.1 ping statistics ---
11 packets transmitted, 10 received, 9% packet loss, time 10017ms
rtt min/avg/max/mdev = 1002.077/1147.151/1488.063/220.133 ms, pipe 2
kiran@HP-ENVY-15:~/temp$

Test case 3 :: 500 millisecond constant packet delay + random packet delay: With a constant delay configured (in this case 500 milliseconds), enabling the new random packet delay feature makes the emulator randomly skip the delay for some packets. How often this happens is controlled via the random delay factor; here it is set to 1. As you can see below, a few packets are not delayed in one direction, so their ping response time drops to roughly half (around 500 ms).

kiran@HP-ENVY-15:~/temp$ ping 192.168.0.1 -s 1000
PING 192.168.0.1 (192.168.0.1) 1000(1028) bytes of data.
1008 bytes from 192.168.0.1: icmp_seq=1 ttl=64 time=1503 ms
1008 bytes from 192.168.0.1: icmp_seq=2 ttl=64 time=1497 ms
1008 bytes from 192.168.0.1: icmp_seq=3 ttl=64 time=1002 ms
1008 bytes from 192.168.0.1: icmp_seq=4 ttl=64 time=1002 ms
1008 bytes from 192.168.0.1: icmp_seq=5 ttl=64 time=1001 ms
1008 bytes from 192.168.0.1: icmp_seq=6 ttl=64 time=1001 ms
1008 bytes from 192.168.0.1: icmp_seq=7 ttl=64 time=1002 ms
1008 bytes from 192.168.0.1: icmp_seq=8 ttl=64 time=1002 ms
1008 bytes from 192.168.0.1: icmp_seq=9 ttl=64 time=1002 ms
1008 bytes from 192.168.0.1: icmp_seq=10 ttl=64 time=419 ms
1008 bytes from 192.168.0.1: icmp_seq=11 ttl=64 time=1002 ms
1008 bytes from 192.168.0.1: icmp_seq=12 ttl=64 time=1001 ms
1008 bytes from 192.168.0.1: icmp_seq=13 ttl=64 time=1002 ms
1008 bytes from 192.168.0.1: icmp_seq=14 ttl=64 time=1002 ms
1008 bytes from 192.168.0.1: icmp_seq=15 ttl=64 time=1001 ms
1008 bytes from 192.168.0.1: icmp_seq=16 ttl=64 time=502 ms
1008 bytes from 192.168.0.1: icmp_seq=17 ttl=64 time=1002 ms
1008 bytes from 192.168.0.1: icmp_seq=18 ttl=64 time=502 ms
1008 bytes from 192.168.0.1: icmp_seq=19 ttl=64 time=1002 ms
1008 bytes from 192.168.0.1: icmp_seq=20 ttl=64 time=1001 ms
1008 bytes from 192.168.0.1: icmp_seq=21 ttl=64 time=1002 ms
^C
--- 192.168.0.1 ping statistics ---
22 packets transmitted, 21 received, 4% packet loss, time 21029ms
rtt min/avg/max/mdev = 419.093/974.135/1503.026/250.662 ms, pipe 2
kiran@HP-ENVY-15:~/temp$
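
TOFFEE-Mocha's exact mapping from the random delay factor to a skip probability is internal to the emulator, so the following is only a minimal illustrative sketch: SKIP_1_IN is a made-up knob standing in for that factor, and the script merely mimics the skip-the-delay-occasionally behaviour seen in the ping output above.

#!/bin/bash
# Hypothetical sketch only: skip a constant delay for roughly
# 1 in SKIP_1_IN packets, mimicking the random packet delay
# feature as observed in the ping output above.
SKIP_1_IN=8      # made-up stand-in for TOFFEE-Mocha's random delay factor
DELAY_MS=500     # the configured constant packet delay
for seq in $(seq 1 20); do
    if [ $((RANDOM % SKIP_1_IN)) -eq 0 ]; then
        echo "packet $seq: delay skipped"
    else
        echo "packet $seq: delayed ${DELAY_MS} ms"
    fi
done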

Random packet delay: As discussed in my earlier VLOG/update, the idea of random packet delay is to reproduce the fluctuating, bursty nature of real-world packet flow. The tests below show this in action: each was performed while downloading a large file with random packet delay enabled, at various constant packet delay values.
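
For reference, the stock Linux netem qdisc expresses a similar idea as delay jitter, i.e. a random variation applied on top of a constant delay. Again this is only a comparison point and not TOFFEE-Mocha's implementation, with eth0 as a placeholder interface:

sudo tc qdisc add dev eth0 root netem delay 200ms 50ms   # 200 ms constant delay with up to 50 ms random jitter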

Test case 4 :: 2 millisecond constant packet delay + random packet delay: With a constant delay of 2 milliseconds and random packet delay, the blue curve appears almost constant. The traffic is bursty, but at such a small delay the effect is not significant enough to notice in the graph shown below.
[Graph: TOFFEE_Mocha_2ms_delay_with_random_packet_delay]

Test case 5 :: 10 millisecond constant packet delay + random packet delay: With a constant delay of 10 milliseconds and random packet delay, the blue curve still appears almost constant. The traffic is bursty, but the effect is again not significant enough to notice in the graph shown below, although it fluctuates somewhat more than in the 2 millisecond test case 4 above.
[Graph: TOFFEE_Mocha_10ms_delay_with_random_packet_delay]

Test case 6 :: 200 millisecond constant packet delay + random packet delay: With a constant delay of 200 milliseconds and random packet delay, you can see a clearly fluctuating blue curve. This illustrates the true purpose of random packet delay.
[Graph: TOFFEE_Mocha_200ms_delay_with_random_packet_delay]

Test case 7 :: 200 millisecond constant packet delay + WITHOUT random packet delay: With a constant delay of 200 milliseconds and the random packet delay feature disabled, you can see a steady blue curve. This is a direct comparison with test case 6: the same 200 millisecond constant packet delay, with and without random packet delay. With random packet delay the network performance becomes choppy, fluctuating and bursty, whereas without it the performance appears almost constant.
[Graph: TOFFEE_Mocha_200ms_delay_without_random_packet_delay]

In my upcoming TOFFEE-Mocha release I intend to include all these new features along with the updated existing features. If you are in need of any specific feature (or scenario), kindly let me know; if plausible and feasible, I will support it and release it as part of the upcoming TOFFEE-Mocha release. Kindly stay tuned!


