Random Linear Network Coding (RLNC) can be used in a variety of implementations over a broad range of applications. Depending on the application, implementations can be done at several different layers of the network stack.
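As an illustrative sketch only (not Code On's implementation), the core RLNC idea can be shown over the binary field GF(2), where coding coefficients are bits and a linear combination of packets reduces to XOR. The function name `rlnc_encode` and the choice of GF(2) rather than a larger field such as GF(2^8) are assumptions made for brevity:

```python
import random

def rlnc_encode(packets, rng=random):
    """Produce one coded packet: a random GF(2) linear combination
    (i.e., the XOR of a random subset) of the K source packets.
    All packets are assumed to have equal length."""
    k = len(packets)
    # Draw a random coefficient vector over GF(2); avoid the all-zero one.
    coeffs = [rng.randint(0, 1) for _ in range(k)]
    if not any(coeffs):
        coeffs[rng.randrange(k)] = 1
    coded = bytearray(len(packets[0]))
    for c, p in zip(coeffs, packets):
        if c:
            for i, b in enumerate(p):
                coded[i] ^= b
    return coeffs, bytes(coded)
```

Each coded packet carries its coefficient vector, so a receiver that collects any K linearly independent coded packets can recover the original file regardless of which specific packets were lost.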
The use of WiFi networks is ceaselessly expanding beyond local wireless networks.
According to the 2013 WBA industry report, the progress towards “near-ubiquitous” WiFi is exemplified by the steady growth of carrier-operated public WiFi hotspots, from 5.2m in 2012 to 10.5m by 2018. The report goes on to list the most promising new expansion models that are encouraged by demand for service ubiquity and subsequent investment.
Such new venues include stadium WiFi, but also coffee shops, business workspaces, and public spaces such as supermarkets and hospitals.
Crowded WiFi settings require architectures capable of managing an unprecedented volume of connections. Such architectures are designed to support new services and applications that are required at crowded WiFi venues, such as broadcast support.
Broadcasting services enable users in crowded WiFi settings such as stadiums to access popular videos (e.g., replays) simultaneously. A growing number of wireless service providers see our technology as an indispensable ingredient for crowded WiFi scenarios. Our algorithm represents a technological leap in broadcasting and multicasting services, both in performance and management.
Researchers have used our algorithm to implement three technical feats that potentially revolutionize broadcasting without hardware upgrades:
1. Simplified retransmissions: Retransmissions of erroneous packets are minimized and their management is greatly simplified.
2. Optimal cooperation: Devices share missing packets in a way that minimizes communication with access points and among themselves.
3. Multipath communications: Devices can seamlessly combine WiFi with existing 4G access so as to minimize costs.
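The cooperation idea in point 2 can be illustrated with a minimal, hypothetical XOR example: when two receivers have each lost a different packet, a single coded transmission repairs both, instead of two separate retransmissions. This toy sketch is not the deployed algorithm:

```python
# Two receivers each lost a different packet from the pair {p1, p2}.
# One coded broadcast (p1 XOR p2) repairs both losses at once.
def xor(a, b):
    """XOR two equal-length byte strings (GF(2) packet combination)."""
    return bytes(x ^ y for x, y in zip(a, b))

p1, p2 = b'AAAA', b'BBBB'
coded = xor(p1, p2)

# Receiver 1 already holds p2, so it recovers p1 from the coded packet:
assert xor(coded, p2) == p1
# Receiver 2 already holds p1, so it recovers p2 from the same packet:
assert xor(coded, p1) == p2
```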
Mobile WiFi mesh experiments have shown that our technology brings 4x throughput gains and 3.6x reductions in file download times, all while consuming 2 to 3 orders of magnitude (i.e., 100x to 1000x) less energy for computation than WiFi uses for transmission.
More information can be found in our Multicast Topology section.
Check out product solutions by Code On’s licensee, Streambolico.
Technology integration trends are turning wireless access networks into heterogeneous multipath environments.
The main driver is the proliferation of small cells and homespots (owned WiFi access points that are partially shared).
Homespots, for example, are projected to exceed 100 million by 2018, vastly overtaking commercial hotspots (Maravedis-Rethink forecasts).
Carriers are struggling to integrate WiFi and 3GPP technologies. Similar integration trends are well underway elsewhere, as service providers continue to combine satellite and 4G infrastructures and WISPs incorporate different generations of WiFi running on various frequencies.
Devices and wireless access infrastructure now support multiple streams at many levels. This is happening not only at lower layers with the proliferation of MIMO and multiple channel integration (channel bonding), but also at the transport layer, with increasing attention to multipath transport solutions such as MPTCP.
At the application layer, multipath is becoming the norm, as hosting ISPs grow their mesh overlays and multi-homing in datacenters and server farms becomes crucial for reliability and performance.
Although the efficiency and reliability gains of multipath communications are beyond any doubt, implementing multipath in heterogeneous networks turns out to be a complex problem.
In order to avoid performance losses compared with single-path solutions, operators and service providers need to dynamically combine load balancing with advanced packet scheduling and signaling.
Our core technology is a powerful multipath enabler as it removes the need for packet scheduling and greatly simplifies signaling. Our algorithm yields up to 5x reductions in delays in a multipath configuration. When combined with MPTCP, our technology improves uptime and increases throughput by up to 20x over conventional MPTCP solutions in combined cellular, WiFi, and satellite networks.
Since our technology can operate at any network layer and can easily be integrated as software patches, it stands to transform not only wireless access but also hosting and storage services.
For example, the successful demonstration of WiFi channel bonding featured by our partners is implemented entirely at the application layer.
The Transmission Control Protocol (TCP) plays a central role in today’s networks. The vast majority of Internet traffic uses TCP, including most streaming video.
However, the explosion of wireless access in the last decade has revealed severe limitations in TCP, as typical wireless packet losses of 2% may cause throughput drops reaching 60%.
Although TCP was not designed for a wireless environment, our coding technology enables it to overcome its wireless limitations in the following manner.
TCP was designed to deliver packets in sequence and combat congestion in wired networks. The former goal is accomplished by checking the packet order and requesting the retransmission of missing packets, while the latter is accomplished through halting transmissions when congestion is sensed (backing off).
Combining these two objectives has proved to be arduous in wireless networks. Packet loss, a common occurrence in wireless and mobile networks, causes TCP to halt transmissions due to the sequential delivery constraint.
To make things worse, TCP misinterprets the occasional missing packet as congestion, leading to frequent and unnecessary backoff events. This is what causes the all too common spinning buffering icon seen while streaming videos over wireless networks.
Our technology enables TCP to deliver packets sequentially using the wireless connection’s full bandwidth, while still avoiding congestion adequately. This is accomplished through our algorithm’s unique capability of removing state distinctions between packets and injecting small levels of redundancy when required.
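A simple back-of-the-envelope rule (not the actual Coded TCP algorithm, whose details are proprietary) conveys how redundancy can be sized to the link: given an estimated erasure rate, the sender transmits just enough coded packets that, in expectation, a full window's worth survives. The function name `coded_window_size` is an assumption for illustration:

```python
import math

def coded_window_size(k, loss_estimate):
    """Number of coded packets to send so that, in expectation, at
    least k of them survive a link with the given erasure probability.
    Because coded packets are interchangeable, ANY k survivors suffice;
    no specific packet needs to be retransmitted."""
    return math.ceil(k / (1.0 - loss_estimate))
```

For example, at a 20% loss estimate a 10-packet window would be sent as 13 coded packets; a lossless link adds no redundancy at all.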
Coded TCP was shown to eliminate video buffer overruns (interruptions) over a 25Mbps link even with 20% packet losses — conditions where conventional TCP implementations fail. In addition, Coded TCP has shown up to 2.5x throughput improvements in public WiFi networks.
Our partners at Speedy Packets develop CTCP products.
Most global email and calendaring data is currently stored ‘in the cloud’. Other applications are following the trend, with 33% of office applications expected to be cloud-based by 2017.
A survey conducted at the AWS re:Invent conference shows that 69% of respondents are deploying business-critical applications into an IaaS cloud.
However, the cloud is not reliable or secure enough for such a shift. The same survey reveals that the top four concerns around cloud deployment are reliability (90%), performance (88%), security (86%), and costs (84%).
Cloud outages are growing in frequency. In 2013, outages affected most major cloud services, including cloud storage (Dropbox, Apple, Amazon, Microsoft, CloudFlare) and email (Yahoo, Gmail).
To ensure a level of reliability, service providers usually replicate user data across multiple cloud locations (data centers). In the case of cloud failure or disconnection, requests are fulfilled through connections to mirror storage facilities. The duplication of both storage and connection are crucial for reliability. The Gmail failure of September 2013, for example, was reportedly due to “redundant network paths” failing “at the same time”.
Replication increases storage and energy costs significantly. Moreover, the existence of copies at multiple remote locations reduces data security, and further drives costs, as each copy needs to be equally secure. Excessive replication and mirroring may also have an adverse effect on reliability by causing storage and communication overloads, hence increasing outage events.
What if an operator were to distribute a large number of file copies to different storage locations, where none of the copies represents the complete original file? Oddly enough, this method has been proven to deliver data to a given location more rapidly. An experiment recently conducted at Aalborg University (Denmark) shows that storing less than 65% of a 32-packet file in five commercial clouds yields similar reconstruction delays as storing the whole file in each cloud. Furthermore, storing partial copies is more secure.
But how to manage the transmission of file fragments from multiple clouds?
The increase in global data outages shows the complexity inherent in managing high levels of replication, as signaling is used to avoid transferring identical packets from different clouds. This underscores the need for ‘smarter data or smarter storage technology’, particularly in an increasingly dynamic storage environment.
By removing state distinctions between packets of the same file, our technology replaces duplicate files with smart data. This guarantees that coded packets arriving from all clouds contribute to the reconstruction of the original file. In the 5-cloud example cited above, our algorithms yield a 35% speedup of average file reconstruction times.
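Why any K independent coded packets suffice, whichever cloud they arrive from, can be sketched with Gaussian elimination over GF(2). This toy decoder (the name `gf2_decode` and the binary field are illustrative assumptions; practical systems typically use larger fields) takes K coded packets with their coefficient vectors and recovers the K source packets:

```python
def gf2_decode(rows):
    """Recover the K source packets from K linearly independent coded
    packets over GF(2). Each row is (coefficient_list, payload_bytes).
    Uses Gauss-Jordan elimination, where row addition is XOR."""
    k = len(rows)
    mat = [(list(c), bytearray(p)) for c, p in rows]
    for col in range(k):
        # Find a row with a 1 in this column and swap it into place.
        pivot = next(r for r in range(col, k) if mat[r][0][col])
        mat[col], mat[pivot] = mat[pivot], mat[col]
        pc, pp = mat[col]
        # Eliminate this column from every other row.
        for r in range(k):
            if r != col and mat[r][0][col]:
                rc, rp = mat[r]
                for j in range(k):
                    rc[j] ^= pc[j]
                for j in range(len(rp)):
                    rp[j] ^= pp[j]
    return [bytes(p) for _, p in mat]
```

Because the decoder only needs independence, not identity, it never matters which cloud supplied which packet; duplicate-avoidance signaling between clouds becomes unnecessary.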
Moreover, the same technology unleashes the full potential of mesh networking and peer-to-peer communications.
The explosion of cloud solutions in recent years, as well as the recent shift to business-critical applications, has increased the technical requirements of cloud solutions. The growing customer base expects no less than immediate access to any data at all times.
As storage volumes soar, it has become crucial to devise smart storage models. The full replication of files lacks the flexibility required in dynamic storage conditions where outages are more and more likely. In addition, replication increases storage and energy costs. Traditional codes such as Reed-Solomon codes yield better results than replication but cannot adapt to dynamic applications.
Edge caching brings content closer to the user. It improves download times and facilitates the distribution of popular content. However, despite the success of edge caching solutions, failures still occur, and more advanced solutions such as meshed edge caching are employed to reduce blockage at the edge cache. For example, CloudFlare’s one-hour outage on March 3rd, 2013, was attributed to “systemwide failure of edge routers”.
Our technology realizes the potential of edge caching in a number of ways. First, it offloads Content Distribution Networks (CDNs) through implementing coded distributed storage.
As in conventional uncoded caches, the caching of a small proportion of the coded files at edge nodes enables users to speed up their downloads. Unlike conventional caching solutions, coded caching requires fewer storage resources and simplifies download transactions, as any coded packet can replace any missing file packet. Most importantly, when the locally cached packets are not sufficient to replace missing packets, users can connect to other coded caches rather than return to the server.
Coded edge caching not only reduces server blocking, but also enables coded caches to act as a peer-to-peer infrastructure, allowing them to scale naturally with local data demands. This unique feature of our algorithm may reduce cache sizes and increase data availability significantly.
Our technology thus uniquely allows applications to combine the contents of different caches without prior coordination of cache content. The availability of coded packets or sectors at edge caches offloads servers more efficiently. Moreover, coded caching is backward compatible, as users are able to compute missing packets from both coded and un-encoded caches.
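How a client can combine uncoordinated caches can be sketched with a rank test: a coded packet from any cache is worth downloading exactly when its coefficient vector is linearly independent of what the client already holds. This sketch assumes GF(2) coefficients and that the client keeps its basis rows in row-echelon form, sorted by pivot position; the name `is_innovative` is illustrative:

```python
def is_innovative(basis, coeffs):
    """Return True if a coded packet's GF(2) coefficient vector adds
    rank to the client's current basis, i.e., it is worth fetching
    from this cache. `basis` rows are assumed nonzero and in
    row-echelon form, ordered by pivot position."""
    v = list(coeffs)
    for b in basis:
        pivot = b.index(1)          # leading-one column of this row
        if v[pivot]:
            v = [x ^ y for x, y in zip(v, b)]  # reduce v by the row
    return any(v)                   # nonzero residue => independent
```

Because this test depends only on coefficient vectors, it works identically whether the packet comes from a local cache, a neighboring cache, or the origin server, which is why no prior coordination of cache contents is needed.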
For more information, contact our storage partner Chocolate Cloud.
Mesh networks are becoming ubiquitous in both wireless access and wide-area overlay networks. The Internet of Things (IoT), the tagging and virtual representation of everyday appliances as a smart network, is the ultimate embodiment of a global mesh network. Such a vision is starting to be realized through the development and integration of various sensor networks, wireless local and metro networks, device-to-device (D2D) networks, as well as satellite networks.
Whether they are built of wireless links or overlay tunnels, mesh networks are often subject to harsh packet losses.
The ITU-T G.hn family of home network standards, for instance, specifies local mesh networks built over power lines, phone lines and coaxial cables. While coaxial segments benefit from higher rates, noisy power lines pose particular technical challenges and support limited rates. Wireless sensor networks such as monitoring networks are often vulnerable to weather conditions and geographical layout, also leading to high packet losses.
To counter packet losses, mesh networks resort to frequent packet retransmissions. The resulting energy consumption represents a fundamental limitation in network planning, not only for wireless sensor networks but also in WiFi meshes.
Our technology reduces signaling by simplifying broadcasting, dissemination, and retransmission operations. Furthermore, it minimizes the required number of transmissions across the network in dynamic loss and connectivity conditions. Our algorithms are shown to decrease delay in D2D networks by 3.6x and to decrease overall energy consumption in wireless mesh networks by a factor of 3.9.
By facilitating D2D cooperative networks, our technology creates new opportunities for file sharing and gaming applications.
Our partners at Steinwurf provide optimized mesh networking tools.
Satellite links are characterized by long round trip times (RTTs) and harsh channel conditions that are sensitive to atmospheric conditions such as rain fade, fog, sun outages, and attenuations.
Geostationary satellite links, for instance, have typical round trip latencies of 1,000-1,400ms, two orders of magnitude higher than typical RTTs seen by high-speed internet users. While packet losses depend on physical channel coding, packet and frame dimensioning, and flow rates, satellite link loss rates can reach 50% in extreme conditions.
Current solutions to counter the high and bursty packet losses are based on implementing multiple layers of redundancy. The resulting bandwidth inefficiencies can be significant, particularly in conditions where spectrum is very expensive. Furthermore, conventional forward error correction methods cannot adjust to changing loss patterns.
Our technology offers new link implementations that optimize bandwidth usage by removing unnecessary redundancy. It is particularly effective in dynamic satellite conditions with fluctuating signal levels.
In particular, our coding approach is unique in that it enables the straightforward implementation and management of sliding-window coding techniques. Such techniques are capable not only of adjusting to losses in real time, but also of integrating into ubiquitous transport protocols such as TCP.
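The sliding-window idea can be sketched as follows (a simplified toy over GF(2), not our shipping code; the class name `SlidingWindowEncoder` is an assumption): coded packets mix whatever packets are currently unacknowledged, and the window advances as acknowledgements arrive, so the code adapts to losses in real time without per-packet retransmission state:

```python
import random
from collections import deque

class SlidingWindowEncoder:
    """Toy sliding-window random linear encoder over GF(2): each coded
    packet is a random XOR combination of the currently unacknowledged
    packets; the window slides forward on each acknowledgement."""
    def __init__(self, rng=None):
        self.window = deque()           # (seq, payload) of unacked packets
        self.rng = rng or random.Random()

    def push(self, seq, payload):
        self.window.append((seq, payload))

    def ack(self, seq):
        # Drop every packet acknowledged up to and including seq.
        while self.window and self.window[0][0] <= seq:
            self.window.popleft()

    def coded_packet(self):
        size = len(self.window[0][1])
        coeffs, coded = {}, bytearray(size)
        for seq, payload in self.window:
            c = self.rng.randint(0, 1)
            coeffs[seq] = c
            if c:
                for i, b in enumerate(payload):
                    coded[i] ^= b
        if not any(coeffs.values()):    # avoid the useless all-zero mix
            seq, payload = self.window[-1]
            coeffs[seq] = 1
            for i, b in enumerate(payload):
                coded[i] ^= b
        return coeffs, bytes(coded)
```

On a long-RTT satellite link this structure is attractive because the sender never stalls waiting to learn *which* packet was lost; any sufficiently many coded packets from the window let the receiver catch up.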
In addition to its solid performance gains [link: Coded TCP use case] in conventional networks, Coded TCP (CTCP) has shown excellent results in high-RTT satellite emulation experiments. It consistently surpasses conventional TCP implementations across a wide range of loss levels. At high loss levels (20%), CTCP has demonstrated a throughput level that is 20x the throughput of its closest commercial TCP competitor.
Our partners at Speedy Packets offer TCP products for satellite links.