Data Blast

Data, Telecom, Maths, Astronomy, Origami, and so on


Discovering RHIPE with SDN-Mininet

Some days ago I attended a series of lectures organized by Telefonica Research (TID) where they presented several projects they have been developing in recent years in the field of Big Data. These projects or use cases mostly rely on data gathered from mobile phone communications, along with other sources such as credit card transactions and social networks (e.g. Twitter and Facebook). In general, the talks were interesting both in content and in format. In addition, two key concepts came up repeatedly, as might be expected: “anonymity” and “aggregation”, conveying that the personal data they collect are protected in order to ensure users’ privacy. Without wanting to cast doubt on this, we must recognize that it is a controversial issue for both Telcos and OTTs, and the discussion isn’t over; a lot of water must still flow under the bridge before the appropriate use of personal data is clarified. I mean that a strict legal framework protecting users worldwide is necessary, but that’s a topic for another post.

Well then, I understand that in general the focus of these talks was simply to present compelling and novel visualizations, so the audience could glimpse the power behind the data and the endless options that Big Data technology can bring us in the future. Visualizations such as the movement of Russian tourists or cruise passengers through Barcelona, all geo-located: where they sleep, eat, or buy luxury items, food, etc.; or sentiment analysis on Twitter around a given event. They also mentioned research projects with a social scope in which TID is involved in several countries, such as the analysis of crowd movement after an earthquake or during a flood, i.e. migration events. On the other hand, they highlighted something very important and revealing: analyzing people’s behaviour through their movements across the cellular radio network (and social networks) provides a more accurate and less biased picture of users (potential clients) than an opinion survey. Anyway, one gets the feeling that Big Data is a world of possibilities, and as Henry Ford said: “If I had asked people what they wanted, they would have said faster horses”. See the Smart Steps project by TID.

However, it’s logical to think that all this is the tip of the iceberg of an emerging business that could be very lucrative: selling data… in fact, it already is. This reminds me of a news item from October 2012 in which Von McConnell, director of technology at Sprint, said, in relation to whether Telcos would become nothing more than a dumb pipe, “we could make a living just out of analytics”; that is, Telcos could survive on Big Data alone. Besides, I remember that last year at the Telecom Big Data conference (Barcelona), Telcos were aware that they are “sitting” on a goldmine of data and are already working on mechanisms to extract useful business information at every level, with one main goal: data monetization. Here I would also like to briefly mention an aspect that could modify this scenario: there is a war between Telcos and OTT players for dominance over the data, but that’s another story we must stay alert to. Anyway, some currently relevant topics in a Telco are: marketing analytics, M2M solutions, voice analytics, operational management (network and devices), advertising models, recommendation systems (cross/up selling), etc. This gives us an idea of the topics Telcos are currently working on. By the way, I recommend checking out the Okapi project by TID (tools for large-scale machine learning and graph analytics).

Configuring RHIPE and SDN-Mininet

Well, actually this preamble was only a pretext to present a simple example where it’s possible to see Big Data & analytics tools (e.g. Hadoop MapReduce and R) applied to data gathered from a network. True, these are well-known issues that I have mentioned in previous posts, but my intention this time (besides repeating my speech) is to place Big Data in a purely network context. Typically, when we talk about Big Data or analytics in a Telco, common examples appear such as customer churn analysis or pattern analysis over a cellular radio system. SDN (and NFV), by decoupling the control and data planes, offers a clear opportunity to manage network communications in a centralized way, which means it’s now possible to have a server farm (data center) that processes several network metrics in real time using Big Data analytics; i.e. an advanced network tomography becomes possible: huge traffic matrices, delay matrices, loss matrices, link state, alarms, etc.

Anyway… I currently don’t have access to real traffic data from a Telco, which would be ideal, but as a proof of concept a simple network created with Mininet is enough from my point of view. So I programmed a tree-based topology in Python, with an external POX controller and a series of OpenFlow switches and hosts. In this tree-based topology it’s possible to configure the fanout (number of ports) and some link characteristics such as bandwidth and delay. It’s also very easy to add a packet loss rate or CPU load, but this time I only used the first two features. Moreover, it isn’t very complicated to program fat-tree or jellyfish topologies, or even random networks, if you prefer to work with more complex networks.
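My actual Mininet script is on Github; purely as an illustrative sketch (the function name, fanout/depth parameters, and bw/delay values below are my own invention, not taken from that script), the shape of such a tree topology with per-link bandwidth and delay can be enumerated like this:

```python
# Hypothetical sketch: enumerate the switches, hosts, and links of a
# tree topology with a given fanout and depth, attaching per-link
# bandwidth (Mbit/s) and delay, as Mininet's TCLink options would.

def build_tree(fanout=2, depth=2, bw=10, delay="5ms"):
    """Return (switches, hosts, links) for a tree of the given shape."""
    switches, hosts, links = [], [], []

    def grow(parent, level):
        for _ in range(fanout):
            if level < depth:                      # internal node -> switch
                child = "s%d" % (len(switches) + 1)
                switches.append(child)
                links.append((parent, child, {"bw": bw, "delay": delay}))
                grow(child, level + 1)
            else:                                  # leaf -> host
                child = "h%d" % (len(hosts) + 1)
                hosts.append(child)
                links.append((parent, child, {"bw": bw, "delay": delay}))

    root = "s1"
    switches.append(root)
    grow(root, 1)
    return switches, hosts, links

switches, hosts, links = build_tree(fanout=2, depth=2)
print(len(switches), len(hosts), len(links))  # 3 switches, 4 hosts, 6 links
```

In a real Mininet script the same structure would be expressed by subclassing `Topo` and passing the bw/delay dicts to `addLink`, but the fanout-controlled recursion is the whole idea.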

On the other hand, I used Wireshark to gather the network data. In any case, I only wanted to capture ICMP packets in order to compute the latencies between nodes and then construct a “delay matrix”. This is actually very simple, but this time the whole analysis is done with the RHIPE package, in order to apply a MapReduce & HDFS scheme. According to the Tessera project: “RHIPE is the R and Hadoop Integrated Programming Environment. RHIPE allows an analyst to run Hadoop MapReduce jobs wholly from within R. RHIPE is used by datadr when the back end for datadr is Hadoop. You can also perform D&R (Divide and Recombine) operations directly through RHIPE MapReduce jobs, as MapReduce is sufficient for D&R, although in this case you are programming at a lower level than for datadr.” So, basically, RHIPE is an R library that acts as a “wrapper”, allowing you to interact directly with Hadoop.
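The R code is linked below; as a language-agnostic sketch of the same MapReduce scheme (written in Python here, with invented sample records rather than the real traffic_wireshark.csv), the job boils down to: map each ICMP round trip to a key (src, dst) with its RTT, shuffle by key, then reduce by averaging to get the delay matrix entries:

```python
from collections import defaultdict

# Hypothetical sample of parsed ICMP records: (src, dst, rtt_ms).
# The real data come from the Wireshark capture (traffic_wireshark.csv).
records = [
    ("h1", "h2", 10.0), ("h1", "h2", 12.0),
    ("h1", "h3", 20.0), ("h2", "h3", 8.0),
]

# Map: emit ((src, dst), rtt) key-value pairs.
mapped = (((src, dst), rtt) for src, dst, rtt in records)

# Shuffle: group values by key.
groups = defaultdict(list)
for key, rtt in mapped:
    groups[key].append(rtt)

# Reduce: average the RTTs per (src, dst) pair -> delay matrix entries.
delay_matrix = {key: sum(v) / len(v) for key, v in groups.items()}
print(delay_matrix[("h1", "h2")])  # 11.0
```

RHIPE expresses exactly this pattern with `rhwatch`/map and reduce expressions running on Hadoop, instead of in-memory Python.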

For the Hadoop environment, I used the Vagrant virtual machine from the Tessera project, which includes CDH4 and RStudio. My R code is on RPubs and the csv file (traffic_wireshark.csv) is on Github.

Configuring Mininet  (see Github for

# Loading wireshark
sudo wireshark &
# Filter ICMP (hiding OpenFlow messages)
icmp && !(of) && ip.addr ==
# Loading POX controller
~/pox$ ./pox.py forwarding.l2_learning
# Loading tree-based topology
sudo python
SDN Topology

Screenshot wireshark

Map-Reduce Scheme

Delay Matrix



Mobile World Congress 2014: A Deep Dive Into Our Digital Future

(Published in Barcinno on March 07, 2014).

MWC 2014 ended about a week ago, and I thought it appropriate to share some things I saw, heard, and read during the event. This was my fourth year attending, always with high expectations. To the point: although it’s common, especially at this type of event, to discuss which is the best mobile device launched by this or that manufacturer, or which apps are the most promising for the year ahead, my personal interest is in the technological trends in the mobile industry and in some other things I find interesting: the state of Telco APIs, what is happening in the OTT-Telco relationship, and Dr. Genevieve Bell’s talk from Intel, which I comment on briefly. I would say that at first sight this post doesn’t have a defined storyline, because my initial idea was just to mention noteworthy topics without a particular order in mind; still, all these topics are part of the mobile communications ecosystem and are therefore related to each other somehow.

I only attended a couple of talks and the main exhibition, and unfortunately I was unable to attend Mark Zuckerberg’s talk in situ. His presence was clearly the most anticipated, thanks to the acquisition of WhatsApp by Facebook for USD 19 billion, but the key point in my view was to learn, among other things, what new scenario opens up in the fight for supremacy in instant messaging, and especially in mobile voice calls at the global level, once WhatsApp announced that in Q2 of this year it will include this last feature for free. It’s true that apps such as Line or Viber already offer voice services, but WhatsApp is a giant, and in this sense Jan Koum, WhatsApp’s CEO, talked figures: “To date, we have 330 million daily and 465 million monthly active users. We also have detected an increase of 15 million in the number of users when the Facebook acquisition became known”. This suggests that in the upcoming months there will be significant movements in the OTT-Telco battle, which is being fought not only here but also in other scenarios such as Netflix vs. US Telcos (Who should pay for the upgrade of the network? Is network neutrality obsolete?), but that’s another interesting story.

An Inspirational Talk: Dr. Genevieve Bell from Intel

I went to Dr. Bell’s talk with high expectations. I wanted to know her vision of technology today, and I wasn’t disappointed. Granted, much of what she said is somewhat expected, even obvious; she doesn’t reinvent the wheel. But her vision is valuable because, as an anthropologist and director of Intel Corporation’s Interaction and Experience Research, she presented a clear and inspiring idea of what we should keep in mind when developing applications or services today. Remember that this talk was given within the scope of the WIPjam workshop, where the audience is typically technical.

At times, app developers and designers tend to focus on the technology and forget real needs; that is, “technology must grasp human nature” for apps to be successful. In this sense, she explained that, despite the passage of time, there are five things that haven’t changed in human behaviour: “1) We need friends and family, 2) We need shared interests, 3) We need a big picture, 4) We need objects that talk for us, and 5) We need our secrets”. With this in mind, it’s logical to see why social networks have been successful. On the other hand, she also mentioned five ways technology is reshaping human behaviour, where new questions arise: “1) How to guard our reputations? 2) How to be surprised or bored? 3) How to be different? 4) How to have a sense of time? and 5) How to be forgotten?”.


It’s true that these are generic issues with multiple implications, such as privacy and anonymity, but they also point to the fact that human beings “want to be human, not digital”, and this must be considered a starting point in the development of new services and applications.

Coming back to MWC…

Telco APIs

The Telco API workshop was interesting in general terms: different visions and one common goal, to provide a flexible and robust API ecosystem that allows Telcos to develop solutions more quickly under the premises of interoperability, scalability, and above all security. Telcos want to bridge the gap with OTTs, trying to recover territory in businesses like instant messaging and proposing advanced solutions for mobile calls and video streaming by means of the RCS (Rich Communication Services) suite. To this end, the GSMA, the global association of mobile operators, wants to give a new boost to solutions like Joyn (an app for chat, voice calls, file sharing, etc.), which to date hasn’t given the expected results. On the other hand, WebRTC, a multimedia framework focused on real-time communications in web browsers, has been gaining momentum in recent months. It isn’t clear that WebRTC is going to be fully embraced by Telcos, but they are surely planning some synergies with RCS; at least that’s what I perceived at MWC.

Operators’ slowness to innovate is well known, whether due to the internal bureaucracy of their organizations, interoperability and integration issues, or standardization delays; but according to different speakers at this workshop, Telcos are aware of this shortcoming and, as one would expect, are working with manufacturers on solutions associated with RCS and VoLTE (Voice over LTE). Moreover, new business models are needed, because with free competing solutions such as Line, Viber, or soon WhatsApp for voice calls, they cannot keep charging for services such as SMS, or perhaps even mobile calls. Although it’s true that there is a big difference in the targeted markets: I don’t know of a big company that uses WhatsApp for its corporate communications, for security reasons, and when you pay for a service you demand quality, security, etc.; therefore Telcos play in another league (for now). In any case, Telcos are determined to create a robust Telco API ecosystem that gives them an advantage over OTTs.

Some interesting companies present at the Telco API workshop were Apigee, Aepona, and Solaiemes (from Spain). In general, Telco APIs focus on interoperability, integration, monetization, and service creation, among other things, and can be grouped as follows: Communications (voice, messaging, conferencing), Commerce (payment, settlement, refund, identity), Context (location, presence, profile, device type), and Control (QoS, data connection profile, policy, security & trust). For more detail, I also recommend reading about OneAPI, a global GSMA initiative to provide APIs that let applications exploit mobile network capabilities.

Big Data and SDN

Since my recent posts about Big Data and SDN, nothing has changed in my appraisal, so I’ll just add some ideas to complement the information given previously. I can see that Big Data and SDN/NFV are on track to become key elements supporting OSS (Operational Support Systems) and BSS (Business Support Systems) within Telcos. OSS and BSS are considered main assets whose importance in current and future business is now unquestionable. For example, churn is a challenging problem for Telcos, and its predictive analysis is key to avoiding customer attrition. Another example, not without controversy, is the sale of anonymized and aggregated data: thanks to information gathered from cellular cells, a Telco can indicate the areas where an enterprise should install this or that type of business, along with other demographic data. I also saw some interesting solutions from SAP (HANA platform) and Hitachi Data Systems. Unfortunately, I was also unable to attend the keynote panel “Up Close and Personal: The Power of Big Data”, where representatives from Alcatel-Lucent, EMC, and SK Planet debated the convergence of ubiquitous connectivity, cloud computing, and Big Data analytics. I guess they talked about these issues and the challenges in the industry. It’s a new world to explore.

Although SDN and NFV are solutions mainly focused, say, on data centers and backbone networks (also SDN WAN), it seems that mobile networks haven’t escaped “the charm” of SDN either. The proposal is to use SDN’s centralization/abstraction of the control plane to apply traffic optimization functions (also for radio resources). From a technical point of view this is very interesting because, as we all know, mobile traffic keeps increasing, and Telcos and researchers are searching for practical solutions to mitigate capacity problems; SDN and NFV could be a real alternative. At MWC, HP also launched its OpenNFV strategy for Telcos, with the purpose of helping them accelerate innovation and the generation of new services.

Hetnets, Backhaul, Small Cells and Hotspot 2.0

MWC is the great meeting point where Telcos (mainly mobile carriers) and manufacturers can discuss their network needs and present their services and products. As in 2013, many well-known concepts were heard again in the talks, halls, and booths of the exhibition, but one key idea was always in the air: how to provide more bandwidth (network capacity) and better QoS under increasing traffic and radio spectrum depletion. In this sense, many people are currently talking about a “mobile data crunch”, which indicates the need to search for solutions. Small cells, for example, improve mobile service coverage and expand network capacity in indoor and outdoor locations. They can be used to complement a macro cell or to cover a new zone faster, over licensed or unlicensed spectrum. Depending on the needs, there are many options when choosing a small cell: pico cells, femto cells, etc.

On the other hand, Wifi networks are seeing the light of day again thanks to Hotspot 2.0, a set of protocols that enables cellular-like roaming (i.e. automatic Wifi authentication and connection). This makes Wifi a real alternative for improving coverage, and many Telcos are already exploring this solution, or even planning alliances with established Wifi operators like Fon or Boingo.

All this brings us to another recurrent concept: the HetNet, that is, a heterogeneous network focused on the use of multiple types of access nodes in a wireless network, i.e. integration and interoperability among traditional macro cells, small cells, and Wifi. And how can we connect all these types of access points to the backbone? The answer is simple: through the mobile backhaul, the part of the mobile network that interconnects cell towers with the core network. At MWC, different solutions were presented: wired and wireless, based on Ethernet/IP/MPLS or microwave, with line-of-sight (LoS), near line-of-sight (nLoS), and non-line-of-sight (NLoS) links, etc. It’s a very active and broad area, but it isn’t worth going into too much detail in this post.

Wearables and Digital Life

Finally, I would like to comment on two ideas that were highlighted at MWC 2014. First, “wearables”, a buzzword that refers to small sensor-based electronic devices (body-borne computing) that may be worn under, with, or on top of clothing: watches, wristbands, glasses, and other fitness sensors. Many companies presented products that will set the trend in 2014: Fitbit, Smartband (Sony), Gear 2/Neo (Samsung), etc. According to Dr. Bell, wearable computing has much potential because it also fits human nature; and although all this is at an early stage (even though it’s an old idea), it’s clear that technology, as she said, “is changing some of our behaviours and preferences”.


Digital Life is another buzzword: basically, the commercial name given to an advanced home automation system by AT&T. I suppose Digital Life, as a name or concept, is related to MIT research on rethinking the human-computer interactive experience. AT&T’s solution uses the Internet of Things (IoT), Augmented Reality (AR), and many other technologies that will “supposedly” make our life at home more secure, comfortable, and easy. I mention all this because “wearables” and “Digital Life” are two old concepts that are gaining strength again and are expected to be trends in the coming months and years. I have left out many things (e.g. LTE evolution, VoLTE, fronthaul, 5G, the Internet of Things, Connected Car/Home, the 4YFN initiative, some local startups, and much more). Perhaps I’ll come back to some of these topics in an upcoming post.


Programmable Networks: Separating the hype and the reality

(Published in Barcinno on February 20, 2014).

Each year, MIT Technology Review presents its annual list of 10 breakthrough technologies that can change the way we live; technologies that outstanding researchers believe will have the greatest impact on the shape of innovation in the years to come. In 2009, Software Defined Networking (SDN) was one of them. This is significant because this technology promises to make computer networks more programmable, changing the way we have been designing and managing them. But let’s start from the beginning: what is a Programmable Network (PN)? According to the SearchSDN website, a PN is a network “in which the behavior of network devices and flow control is handled by software that operates independently from network hardware. A truly programmable network will allow a network engineer to re-program a network infrastructure instead of having to re-build it manually”.

Since then, a lot of water has passed under the bridge, but that doesn’t mean SDN is a consolidated technology today. According to the Gartner Hype Cycle (2013), SDN and other related technologies like NFV (Network Functions Virtualization) are still at the Peak of Inflated Expectations, waiting to fall into the Trough of Disillusionment (see the Hype Cycle definitions). And although it seems to be a recent topic, it isn’t. On this point I recommend reading an interesting report called “The Road to SDN: An Intellectual History of Programmable Networks” by Nick Feamster et al. (Dec 2013), where the authors remark that SDN and NFV aren’t novel ideas at all, but the evolution of a set of ideas and concepts related to PNs over at least the past 20 years.

PNs particularly interest me because for years I have had the opportunity to work on networking from two points of view, industry (Telco) and academia (university research), and in both areas I have checked “in situ” some limitations of current network infrastructures, which have already been widely described in the literature (e.g. this report). Therefore, I consider it interesting to explore current network capabilities and the industry/academia proposals (and challenges) for tackling the migration towards PNs.

For example, in order to improve diverse network aspects such as speed, reliability, management, security, and energy efficiency, researchers need to test new protocols, algorithms, and techniques at a realistic large scale. Doing this over existing infrastructure is tough, because routers and switches run complex, distributed, and closed (generally proprietary) control software. Moreover, trying a new scheme or approach can become a cumbersome task, especially when you need to change the software (firmware) in each network element every time.

On the other hand, network administrators, for both management and troubleshooting, need to configure each network device individually, which is also an annoying task when there are many devices. It’s true that the market offers some tools to manage these elements in a centralized manner, but they typically use limited protocols and interfaces, so it’s common to find interoperability problems between vendors, which adds complexity to the solution.

Without going further, last June I attended the SDN World Conference & Exhibition in Barcelona, where the telecom community discussed several aspects of SDN and network virtualisation in general (note: the two next events this year will be in May/London and September/Nice). I remember, among other things, a recurrent idea in the talks: “In recent years, the computing industry has actively evolved towards an open environment based on abstractions, thanks to the cloud paradigm. This has allowed progress in various respects: virtualization, automation, and elasticity (e.g. pay-per-use and on-demand infrastructure). On the contrary, networking has evolved towards a complex, rigid, and closed network scheme, lacking abstractions (e.g. of the control plane) and open network APIs.”

Consequently, from the Telco’s point of view it’s necessary to consider a change in network design that facilitates network management. It’s also urgent to work on the integration of computing and networking by means of virtualization, so as to see the network as a pool of resources rather than as separate functional entities; and it’s desirable to avoid hardware dependence and vendor lock-in. All this suggests that the simple idea, or promise, of a PN that helps solve these drawbacks would be a big leap in quality and a great opportunity for network innovation.

Currently there are two main approaches to PNs that industry is fostering: SDN and NFV. The former is mainly (though not exclusively) focused on data center networks and the latter on operator networks. Next, some definitions of SDN and NFV are given for a better understanding of the text; however, since the main goal of this post is to review some characteristics of SDN and NFV from an innovation point of view, I am not reviewing technical aspects in depth. For more detail, I recommend the following websites: ONF, SDNCentral, and TechTarget (SearchSDN). For experienced readers, I recommend checking this website for academic papers.

Some definitions: What is SDN?

According to the Open Networking Foundation (ONF), an organization dedicated to the promotion and adoption of SDN through the development of open standards, SDN is an approach to network architecture where the control plane and the data plane (forwarding plane) are decoupled. In simple terms, the control plane decides how to handle the traffic, and the data plane forwards traffic according to the decisions that the control plane makes.

On SDNCentral, Prayson Pate describes SDN as “Born on the Campus, Matured in the Data Center”. Effectively, SDN began on campus networks at UC Berkeley and Stanford University around 2008 and thereafter made the leap to data centers, where it has since shown itself to be a promising architecture for cloud services. In a simplified way, he indicates that the basic principles defining SDN (at least, today) are: “separation of control and forwarding functions, centralization of control, and ability to program the behavior of the network using well-defined interfaces”. Through Figure 1, I will try to clarify these concepts as far as possible.

Figure 1: Traditional Scheme vs. SDN Architecture

Figure 1a shows a traditional network scheme where each network device (e.g. router, switch, firewall) has its own control plane and data plane, vertically integrated. This implies extra cost and incompatibilities among manufacturers. Moreover, each device runs closed, proprietary firmware/control software (operating system) supporting diverse modules (features) that implement protocols related to routing, switching, QoS, etc.

On the other hand, Figure 1b depicts a logical view of the SDN architecture with its three well-defined layers: Control, Infrastructure, and Application. The first consolidates the control plane of all network devices into a “logically” centralized and programmable network control. Here the main entity is the SDN controller, whose key function is to set up the appropriate connections to transmit flows between devices and therefore to control the behavior of all network elements. There may also be more than one SDN controller in the network, depending on the configuration and on scalability and virtualization requirements. The second layer comprises the infrastructure formed by the packet-forwarding hardware of all network devices, which is abstracted from the other layers; that is, the physical interface of each device is seen simply as a generic element by applications and network services. Finally, the third layer is the most important from the innovation point of view because it contains the applications that provide added value to the network: access control (e.g. firewall), load balancing, network virtualization, energy-efficient networking, failure recovery, security, etc.
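To make the “logically centralized control” idea concrete, here is a toy sketch, entirely my own and not taken from any SDN framework: dumb switches hold flow tables and forward, while a single controller with a global view decides and installs rules, in the spirit of a learning switch:

```python
# Toy model of SDN control/data plane separation (illustrative only):
# switches just match their flow tables; the controller decides.

class Switch:
    def __init__(self, name):
        self.name = name
        self.flow_table = {}          # dst MAC -> output port

    def forward(self, dst, controller):
        if dst not in self.flow_table:          # table miss ->
            controller.packet_in(self, dst)     # ask the controller
        return self.flow_table.get(dst, "flood")

class Controller:
    """Logically centralized control plane with a global MAC->(switch, port) view."""
    def __init__(self, topology):
        self.topology = topology      # dst MAC -> {switch name: port}

    def packet_in(self, switch, dst):
        port = self.topology.get(dst, {}).get(switch.name)
        if port is not None:          # install a flow rule on the switch
            switch.flow_table[dst] = port

topo = {"00:00:00:00:00:02": {"s1": 2}}
ctrl = Controller(topo)
s1 = Switch("s1")
print(s1.forward("00:00:00:00:00:02", ctrl))  # 2 (rule installed by controller)
print(s1.forward("00:00:00:00:00:99", ctrl))  # unknown dst -> "flood"
```

In real deployments the `packet_in` exchange happens over a southbound protocol such as OpenFlow, and the topology view is built by the controller itself rather than handed in.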

In this architecture, an important aspect to highlight is the communication between layers, which is carried out via APIs (Application Programming Interfaces). Conceptually, two terms are used in computing and telecoms to describe these interfaces: southbound and northbound. The former refers to an interface for communicating with lower layers; in the case of SDN, OpenFlow is the prominent example of this type of API. The latter refers to an interface for communicating with higher layers; today, however, there is no consolidated standard for this kind of interface, which is key to facilitating innovation and enabling efficient service orchestration and automation. I will comment on this later.

What is NFV?

In simple words, NFV is a new approach to designing, deploying, and managing networking services. Its main characteristic is that network functions are decoupled from specific, proprietary hardware devices. This means that functions such as firewalling, NAT (Network Address Translation), IDS (Intrusion Detection System), DNS (Domain Name Service), DPI (Deep Packet Inspection), etc. are now virtualized on commodity hardware, i.e. run as software on high-performance standard servers from independent software vendors. NFV is applicable to any data plane or control plane function, in both wired and wireless network infrastructures (see Figure 2).
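As a loose illustration of that decoupling (my own toy code, not an NFV framework), the point is that each network function becomes ordinary software that can be composed into a service chain on commodity servers:

```python
# Toy virtualized network functions operating on a packet dict,
# composed into a service chain (illustrative sketch only).

def firewall(pkt):
    # Drop anything not destined to an allowed port.
    return pkt if pkt["dst_port"] in {80, 443} else None

def nat(pkt):
    # Rewrite a private source address to a public one.
    return dict(pkt, src_ip="203.0.113.1")

def service_chain(pkt, functions):
    for fn in functions:
        if pkt is None:               # a function dropped the packet
            break
        pkt = fn(pkt)
    return pkt

out = service_chain({"src_ip": "10.0.0.5", "dst_port": 443}, [firewall, nat])
print(out["src_ip"])   # "203.0.113.1"
print(service_chain({"src_ip": "10.0.0.6", "dst_port": 23}, [firewall, nat]))  # None
```

In an actual NFV deployment each function would run in its own VM or container and the chaining would be done by the orchestrator and the (possibly SDN-controlled) network, but the composability is the essence.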

Figure 2: Vision for Network Function Virtualisation (source ETSI)

Although SDN is perhaps the buzzword when talking about PNs, the term NFV hasn’t lagged far behind in popularity. In fact, it has become a recurrent term among service providers: in October 2012 a group of them formed a consortium dedicated to analyzing the best way to bring PNs to operator networks. This consortium later created a committee under the umbrella of ETSI (European Telecommunications Standards Institute) to propose and promote virtualization technology standards for many types of network equipment. In the paper “Network Functions Virtualisation: An Introduction, Benefits, Enablers, Challenges & Call for Action” (October 2012), the NFV ETSI working group describes the problems being faced along with the proposed solution.

Trends and some Comments

Beyond the obvious differences between SDN and NFV, such as focus (data centers / service provider networks) and main characteristic (separation of control and data planes / relocation of network functions), analysts agree that both approaches can co-exist and complement each other (i.e. there is synergy), but each can also operate independently (note: I recommend reading “The battle of SDN vs. NFV” for more detail). In fact, service providers understand that there are too many works in progress (and open problems) and nothing completely defined, so it wouldn’t be logical to dismiss any solution out of hand, even more so when they have a broad portfolio of services in several fields. Telefónica, Colt, and Deutsche Telekom are just three examples of service providers within ETSI that are actively working on these topics and developing pilot programs.

SDN and NFV are just tools; they don’t specify the “killer application” that could hypothetically boost their use. Actually, SDN is meant to support every new application to come. Here, a key element in the development of applications is the northbound API. In recent months there has been movement within the ONF to form a group to accelerate the standardization of this interface, but it isn’t clear that this effort will bear fruit in the short or medium term; at this moment there isn’t a common position, and there are many discordant voices. In this link there is an interesting discussion on the topic. For instance, it’s mentioned that the ONF apparently is more interested in developing the OpenFlow protocol than a northbound API, because “standardizing a northbound API would hamper innovation among applications developers”. Dan Pitt of the ONF said in this sense: “We will continue to evaluate all of the northbound APIs available as part of our commitment to SDN end users, but any standard for northbound APIs, if necessary, should stem from the end users’ implementation and market experience”. Clearly, as on many other occasions, the market will give its verdict.

Meanwhile, a consortium called OpenDaylight, formed by “important” network hardware and SDN vendors, is gaining relevance; in April 2013 it launched an open source SDN controller with its respective framework, through which it is possible to develop SDN applications. OpenDaylight supports the OSGi framework and a bidirectional REST API as its Northbound API. They expect this initiative to gain momentum, but for now they have declined to claim it is a universal standard, although, seeing the power of its members, it isn’t unlikely that it will reach a high market share in the future.

Taking this previous idea into account, we already know that the heart of SDN is the SDN controller, and today there are many alternatives to OpenDaylight. Personally, I have worked with NOX/POX (C++/Python-based) and Floodlight (Java-based). The former is suitable for learning about SDN controllers because it has, say, an academic character, while the latter has a more professional focus and exposes a REST API, which is a more common interface. On the other hand, in addition to startups whose focus is to develop an SDN controller, there are many others related to SDN applications (some people use the term ADN, Application Defined Networking) in areas as diverse as security, management, energy-saving systems, etc. With all this, I want to say that there are many open source tools available and it’s already possible to begin developing SDN applications, or applications able to work on top of them. Some interesting startups are mentioned in CIO and IT World. In Spain, at startup level (excluding Service Providers) there are few examples to mention. In fact, the only one that comes to mind is Intelliment Security, based in Seville, which provides centralized network security management, offering “an abstract and intelligent control plane where changes are automatically orchestrated and deployed to the network without human intervention”.
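To give a flavour of what programming against a controller’s REST Northbound API looks like, here is a minimal sketch in Python that builds a static flow entry of the kind Floodlight’s Static Flow Entry Pusher accepts. The endpoint URL, field names and the switch DPID are assumptions for illustration; check them against your controller version before use.

```python
import json

def build_flow_entry(dpid, in_port, out_port, name="flow-1", priority=100):
    """Build a match/action flow entry: match on ingress port, forward to out_port.
    Field names follow Floodlight's Static Flow Entry Pusher (assumed, verify)."""
    return {
        "switch": dpid,              # datapath ID of the target switch
        "name": name,                # unique identifier for this entry
        "priority": str(priority),
        "ingress-port": str(in_port),
        "active": "true",
        "actions": "output=%d" % out_port,
    }

entry = build_flow_entry("00:00:00:00:00:00:00:01", in_port=1, out_port=2)
payload = json.dumps(entry)

# With a running controller, one would POST this payload to the (assumed) endpoint:
# requests.post("http://localhost:8080/wm/staticflowentrypusher/json", data=payload)
```

The point is that the application never touches the switch directly: it describes the desired match/action behaviour and the controller translates it into OpenFlow messages.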

Moving on to another topic, SDN and Big Data analytics are two technologies that are destined to understand each other. SDN is expected to make certain network management tasks easier (e.g. OSS/BSS), and therefore it will be necessary to have a technology that takes advantage of the huge amount of data about the network, which is where Big Data enters the scene. For instance, traffic pattern analysis and traffic demand prediction will help to enable intelligent management. On the other hand, a prickly topic that will certainly be on the table, for anonymity reasons, is the use of DPI (Deep Packet Inspection) techniques. So, in general, since we are talking about centralizing network control, it’s logical to think that SDN and Big Data will meet soon. A first approach can be found in the paper “Programming Your Network at Run-time for Big Data Applications” (2012) by G. Wang et al., where the authors explore an integrated network control architecture to program the network at run-time for Big Data applications based on Hadoop, using optical circuits with an SDN controller.
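As a toy illustration of the kind of traffic pattern analysis mentioned above, the sketch below aggregates per-flow byte counters (as a controller might collect from switch statistics) into a traffic matrix and makes a naive per-pair demand estimate. Host names and byte counts are invented for the example.

```python
from collections import defaultdict

# Toy flow records: (src_host, dst_host, bytes), as an SDN controller
# might gather from switch flow counters (values are illustrative).
records = [
    ("h1", "h2", 500), ("h1", "h2", 700),
    ("h2", "h3", 300), ("h1", "h3", 1200),
]

# Traffic matrix: total bytes observed per (src, dst) pair.
matrix = defaultdict(int)
for src, dst, nbytes in records:
    matrix[(src, dst)] += nbytes

def predict(pair):
    """Naive demand estimate: mean bytes per observed sample of a pair."""
    samples = [b for s, d, b in records if (s, d) == pair]
    return sum(samples) / len(samples) if samples else 0

print(matrix[("h1", "h2")])   # 1200
print(predict(("h1", "h2")))  # 600.0
```

In a real deployment the records would of course come from the controller’s statistics API and the prediction from a proper time-series model, but the pipeline (collect, aggregate, estimate, reconfigure) is the same.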

Another pertinent issue to comment on is how programmable networks will affect the development of OTT providers. Currently they are, say, helpless at the network control level, since they cannot guarantee the delivery of their services by themselves. I know “it’s the Internet and many Service Providers come into play”, but today, OTT providers like Skype, Netflix or Wuaki-TV in Spain can only make some QoS network measurements with their clients in order to adapt the delivered content (transcoding) or simply indicate minimum requirements to guarantee a suitable quality of experience (QoE). SDN and NFV promise to help Service Providers improve the management and control of “their own” networks, but perhaps in the future this control can also be extended to third parties. OTT providers are injecting more and more traffic into the network, becoming major players on the Internet, so they will likely demand more control, and SDN or NFV can be the key to reaching it as well as to generating new business models. Thus, OTT providers will be able to offer better services with better QoE, Service Providers will be able to adapt to the requirements of OTT providers, and OTT apps will be able to talk to the network in real time.

On the eve of MWC 2014 in February in Barcelona, SDN and NFV will undoubtedly be buzzwords again. Matthew Palmer in SDNCentral said about SDN and its trends in 2014: “2012 was the year for people to learn what is SDN?, 2013 was the year for people to understand how they use SDN, and 2014 will be the “show me” year; the year of the proof-of-concept for SDN, NFV, and Network Virtualization”. SDN and NFV have huge potential but are still at an early stage; we must stay attentive to the news in the upcoming months. SDN is much more than a “match/action” scheme in switches and a logically centralized control of multiple network devices. Moreover, there are still a lot of open problems to solve, such as the Northbound API, control orchestration, security, etc.
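For readers new to the topic, the “match/action” scheme itself is simple to demonstrate. The sketch below simulates an OpenFlow-style flow table in plain Python: each entry carries a priority, a partial match (missing fields act as wildcards) and an action, and the highest-priority matching entry wins, with a priority-0 table-miss entry sending unknown traffic to the controller. The table contents are invented for illustration.

```python
# Toy OpenFlow-style flow table: each entry has a priority, a match
# (field -> required value; absent fields are wildcards) and an action.
table = [
    {"priority": 200, "match": {"ip_dst": "10.0.0.2"}, "action": "output:2"},
    {"priority": 100, "match": {"in_port": 1},         "action": "output:3"},
    {"priority": 0,   "match": {},                     "action": "controller"},  # table-miss
]

def lookup(packet):
    """Return the action of the highest-priority entry matching the packet."""
    for entry in sorted(table, key=lambda e: -e["priority"]):
        if all(packet.get(f) == v for f, v in entry["match"].items()):
            return entry["action"]

print(lookup({"in_port": 1, "ip_dst": "10.0.0.2"}))  # output:2 (higher priority wins)
print(lookup({"in_port": 5, "ip_dst": "10.0.0.9"}))  # controller (table miss)
```

The open problems listed above are precisely what this toy leaves out: who populates the table, how conflicting applications are orchestrated, and how the controller is secured.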

Finally, for people who want to learn more about SDN and implement some practical examples using open source SDN controllers, a network simulator, Python programming, etc., I strongly recommend taking the free MOOC on Coursera called “Software Defined Networking” from Georgia Tech, which begins on June 24th. I took the same course last year and it really is an excellent starting point for understanding this topic. The course lasts only 6 weeks and has approximately 10 quizzes and 4 programming assignments. Nick Feamster, the instructor, said the content will be updated according to new developments and trends in the field.


An inspirational poem for Error Correcting Codes enthusiasts:

In Galois Fields, full of flowers
primitive elements dance for hours
climbing sequentially through the trees
and shouting occasional parities.

The syndromes like ghosts in the misty damp
feed the smoldering fires of the Berlekamp
and high flying exponents sometimes are downed
on the jagged peaks of the Gilbert bound.

S.B. Weinstein (IEEE Transactions on Information Theory, March 1971)