Quality of Service (QoS) refers to the ability to measure and improve transmission characteristics such as throughput, error rate, delay, jitter, and packet loss. QoS is usually of particular concern on networks that carry large amounts of video and other multimedia, because transmitting these formats reliably is difficult. Companies therefore look for ways to make better use of their network capacity, either by limiting certain content or by improving the techniques used to deliver it, and QoS can be significantly improved by the right implementations. In practical terms, QoS covers the management and control of bandwidth, jitter, delay, and packet-loss parameters across a computer network. Companies also consider QoS implementations when they want to curb the costs that are often associated with content delivery and corporate networking.
The purpose of the project is to enhance the performance of the essential tools employees rely on; to limit audio and video streaming when appropriate; to allow proactive intervention when the CEO conducts company-wide video meetings so that delivery to remote sites is improved; to improve employee satisfaction with software performance; and to provide traffic data for analysis so that future needs can be projected and fine-tuned. The network's primary infrastructure and foundation will be Cisco products, and solutions of this nature have been executed successfully before. The company currently has only a limited acceptable-use policy for technology, so adding QoS allows the IT department to manage network usage from a technical standpoint rather than relying on an employee honor code. This project plan is presented in two sections. The first section defines QoS and its concepts, describes the Cisco QoS toolset, and covers related aspects such as QoE, policy definitions, and QoS management. The second section identifies the implementation timeline, roadblocks to implementation, the overall project evaluation, and the competencies that went into the project.
Quality of Service can be defined as the ability of a network to provide better service to selected traffic using a variety of technologies. Its primary objective is to give that traffic priority, including dedicated bandwidth, controlled jitter and latency, and improved loss characteristics. QoS technologies provide the essential building blocks for future business applications and networks.
Network traffic is made up of flows, which are placed on the wire by various endpoints or functions. Typical traffic includes e-mail, video, voice, server replication, CAD/CAM, Service Advertising Protocol, control and systems-management traffic, branch applications, collaboration applications, and factory-control applications, to name a few. Each type of application can tolerate a different amount of jitter, delay, bandwidth constraint, and loss, so some level of control over these flows is needed. Because performance requirements vary widely and have widely differing effects, it is important for companies to divide their service strategy into four distinct levels: provisioning, best-effort service, differentiated service, and guaranteed service. Mapping the applications that use the network onto these service levels clarifies the best ways to handle the problems that often arise from network traffic (“Cisco,” 2013).
The following defines the four levels per Cisco (2013):
- Provisioning - the first level, which allocates bandwidth in accordance with the design of the network. It is vital that the characteristics of each application be understood (“Cisco,” 2013).
- Best-effort service - most application data flows fit into this service level. It provides basic connectivity with simple packet handling and delivery (“Cisco,” 2013).
- Differentiated service - at this level, traffic is grouped into categories or classes based on their respective requirements, and each class is treated according to its configured QoS mechanism (“Cisco,” 2013).
- Guaranteed service - this level requires the absolute allocation of specific resources to make certain that profiled traffic meets the network requirements (“Cisco,” 2013).
Each of these four service levels suggests to companies the best way to manage traffic in their network environment. Quality of Service is therefore a control mechanism that helps network traffic run and operate smoothly, and its importance cannot be stressed enough if problems are to be prevented as effectively and efficiently as an implementation of this nature allows.
As companies and corporations move more and more toward cloud technologies, QoS becomes all the more important. Anderson (2009) noted that virtual environments already include tools that keep networking components, memory, and CPU within parameters that a system administrator determines and sets. Virtual environments can nevertheless benefit from QoS, because it increases ROI on a more manageable scale and offers more intelligent solutions for company functionality. Moreover, deploying QoS across a virtual network allows firmware and software-defined network upgrades to be pushed from a central repository rather than applied server by server (pg. 1). In short, more servers tend to be converted to virtual servers when QoS is in place across the virtual environment.
The project will use the Cisco QoS toolset, a collection of features and solutions that address the needs of video, voice, and data applications. The company is a former family-owned business that has recently been purchased by an investment group. While it was family owned there was already a need to control and manage traffic, but the family leadership, citing cost and concerns about intrusiveness, did not support the technology. The rationale for implementing it now is to improve the work transactions that are essential to the company's success and management. Because many forms of network traffic occur on a day-to-day basis, QoS is needed so that the customers and employees of the business get the accessibility and reliability they expect from the network.
The following figure illustrates the Cisco QoS toolset.
(Figure omitted for preview. Available via download)
QoS is an expansive topic in the sense that it can be implemented across many different and diverse applications and products. Among the research examined for this project, one interesting article was Cardoso et al. (2004), which discussed QoS for workflows and Web service processes. The general aim of QoS is to keep traffic moving efficiently across a network, and the authors argued that management of QoS is perhaps the most important element missing from most QoS implementations. Organizations operating in the modern marketplace (i.e., e-commerce) require significant QoS management: controlling quality is, in essence, what allows products and services to fulfill consumer expectations and achieve customer satisfaction. QoS has been a major priority in networking for many years, but most of the research on the topic has concentrated on enhancing workflow systems and their overall functionality, and the technological solutions presented so far have been limited and preliminary (pg. 1-2).
An additional article, Smriti Bhagat's “QoS: Solution Waiting For A Problem,” discussed how QoS can be quite problematic to implement. The article cited several reasons QoS deployments stall, including limited deployment, architectural deficiencies, and the difficulty companies and corporations face in transitioning smoothly to QoS. Bhagat noted that "upgrading to QoS routers and hardware [for example] significantly decreases the manageability of the network, [but] current QoS enabled routers have an insufficient number of queues to support large scale differentiation" (pg. 3). For Bhagat, QoS is essentially a catch-22. The article also identified that QoS lacks a business model, which has been "the greatest criticism of QoS. There is a paradoxical situation here: there is no model to charge for high quality of service, and hence no economic benefit in providing the services, which in turn leads to the absence of a pricing model. This situation [has led] to the lack of motivation in investing in QoS mechanisms" (pg. 3). Although this view was not typical of the sources we examined before deciding to implement QoS within our company using Cisco products, it did make us question whether the final implementation would be satisfactory and surpass our expectations.
As recently as January 2013, Qualcomm began making QoS mechanisms available directly to consumers with a technology called StreamBoost. StreamBoost gives each device and each application the bandwidth it requires so that the experience remains responsive while connected to home Wi-Fi. Qualcomm noted that the typical home already connects around seven devices and that this number is expected to grow over the next five years, leaving users increasingly frustrated with network congestion. Qualcomm hopes to alleviate those concerns with StreamBoost, which extends QoS into consumer-grade network management and traffic shaping and provides the features needed for optimal performance. With the explosion of media consumed on a plethora of devices, it is a genuinely interesting QoS mechanism (“Qualcomm Introduces StreamBoost Technology to Optimize Performance and Capacity of Home Networks,” 2013). StreamBoost was explored for this plan to gauge how well the plan's objectives might develop, and it also illustrates the type of technology the company hopes to employ in additional phases after the first company-wide implementation.
At a fundamental level, Quality of Service provides better service and better flow of service. The Internetworking Technologies Handbook states that this is "done by either raising the priority of a flow or limiting the priority of another flow. When using congestion management tools, [the aim is to] try to raise the priority of a flow by queuing and servicing queues in different ways. The queue management tool used for congestion avoidance raises priority by dropping lower priority flows before higher priority flows. Policing and shaping provide priority to a flow by limiting the throughput of other flows. Link efficiency tools limit large flows to show a preference for small flows" (pg. 49-2). Establishing QoS on the network should therefore reduce and minimize the congestion issues that surface. Even so, QoS does not change the fact that there are times when a network simply carries too much traffic; in those circumstances QoS is only a temporary fix for a continuing problem.
The Internetworking Technologies Handbook identifies the basic structure of QoS implementation in three parts: "QoS identification and marking techniques for coordinating QoS from end to end between network elements; QoS within a single network element; [and] QoS policy, management, and accounting functions to control and administer end to end traffic across a network" (pg. 49-2). The following is a basic diagram of the QoS architecture that this project's implementation will follow.
(Diagram omitted for preview. Available via download)
In any project plan for QoS implementation, identification and marking must be done, through the processes of classification and reservation. Classification identifies the type of traffic that must receive preferential service. A packet may then be marked, or it may not: when a packet is identified but not marked, the classification is per hop, meaning it applies only to the device it is on and is not carried to the next router. This is what happens with priority queuing and custom queuing. When a packet is marked for use across the network, precedence bits can be set (“Cisco,” 2013).
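To make the classification-and-marking step concrete, the following is a minimal sketch in Cisco IOS Modular QoS CLI (MQC) syntax. It is not the company's actual configuration: the class name, the use of HTTP as a stand-in for streaming traffic, the DSCP values, and the interface are illustrative assumptions only.
class-map match-all VIDEO-STREAMING
 ! NBAR-based match; a real policy would match the company's actual applications
 match protocol http
policy-map CLASSIFY-AND-MARK
 class VIDEO-STREAMING
  ! mark matched traffic so downstream devices can act on it end to end
  set ip dscp af41
 class class-default
  set ip dscp default
interface GigabitEthernet0/1
 ! apply the marking policy to traffic entering the LAN-facing interface
 service-policy input CLASSIFY-AND-MARK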
Cisco includes policing tools in its QoS toolset, and given that we are implementing Cisco QoS on our network, we will use them. Policing tools assess "whether packets are conforming to administratively defined traffic rates and take action accordingly. A basic policer monitors a single rate: traffic equal to or below the defined rate is considered to conform to the rate, while traffic above the defined rate is considered to exceed the rate. Policers complement classification and marking policies" ("Cisco," 2013).
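As a hedged illustration of how such a single-rate policer might be attached to a class in Cisco IOS MQC, reusing the class-map from the previous sketch; the 2 Mbps rate, policy name, and interface are assumptions, not the project's final values.
policy-map POLICE-VIDEO
 class VIDEO-STREAMING
  ! conforming traffic is transmitted, excess traffic is dropped
  police 2000000 conform-action transmit exceed-action drop
interface GigabitEthernet0/1
 service-policy input POLICE-VIDEO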
Scheduling tools determine how a packet exits a device. Whenever packets enter a device faster than they can exit it, for example because of a speed mismatch, congestion can build up along the network. Devices therefore use buffers and allow higher-priority packets to exit sooner than lower-priority ones (“Cisco,” 2013). Cisco's principal scheduling tool is Low Latency Queuing (LLQ). The following illustrates the algorithms that LLQ uses in its scheduling, followed by a brief configuration sketch.
(Figure omitted for preview. Available via download)
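A minimal LLQ policy in Cisco IOS might look like the sketch below. The class definition, the 512 kbps priority bandwidth, and the interface are assumptions for illustration, not the values used in the project.
class-map match-all VOICE
 match ip dscp ef
policy-map WAN-EDGE
 class VOICE
  ! strict-priority (low latency) queue, capped at 512 kbps during congestion
  priority 512
 class class-default
  ! remaining traffic is flow-based fair-queued
  fair-queue
interface Serial0/0/0
 service-policy output WAN-EDGE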
Cisco defines traffic shaping as the ability to control the amount of traffic leaving an interface so that it matches the speed of the target interface. In essence, traffic shaping imposes a rate on outbound traffic: it regulates packet flow, so guidelines and limits can be applied as part of its implementation. Traffic shaping is configured at the system or interface level, and certain queuing policies can be overridden depending on how the shaping is configured (“Cisco,” 2013). The following is a chart per Cisco (2013) of the procedure for traffic-shaping configuration.
(Chart omitted for preview. Available via download)
The following example shows how to configure traffic shaping using 200000 packets per second (pps):
(Configuration code omitted for preview. Available via download)
The following is a sample configuration for traffic shaping using 200000 packets per second:
(Configuration code omitted for preview. Available via download)
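The omitted Cisco examples above express the shaped rate in packets per second. As a generic point of reference only, a shaping policy in classic IOS MQC is expressed in bits per second; the sketch below is an illustrative assumption (policy name, 8 Mbps rate, and interface are hypothetical), not the omitted example.
policy-map SHAPE-WAN
 class class-default
  ! shape all outbound traffic on this interface to an average of 8 Mbps
  shape average 8000000
interface GigabitEthernet0/2
 service-policy output SHAPE-WAN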
Cisco QoS software includes several link-efficiency features: link fragmentation and interleaving (LFI) for Multilink PPP, Frame Relay fragmentation, distributed compressed Real-Time Protocol, and LFI for Frame Relay and ATM VCs ("Cisco," 2013). For our purposes in executing QoS for the aforementioned reasons, it is important to understand these link-layer mechanisms, which are described below per Cisco (2013).
- Interactive traffic (e.g., VoIP and Telnet) is vulnerable to increased latency when the network is processing large packets, and the delay is especially noticeable when large FTP packets are queued on slower WAN links. To solve this problem on low-bandwidth links, large packets are fragmented and small packets are interleaved between the fragments. Cisco QoS includes an LFI feature that reduces delay on slower links by breaking up large datagrams and interleaving low-delay traffic packets with the smaller packets that result from the fragmentation. The LFI feature also allows Real-Time Protocol (RTP) queues to be placed into a higher-priority queue ("Cisco," 2013). The figure below illustrates link fragmentation and interleaving, followed by a brief configuration sketch.
(Figure omitted for preview. Available via download)
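A hedged sketch of how LFI for Multilink PPP is commonly enabled in Cisco IOS follows; the address, the 10 ms fragment delay, and the interface numbering are assumptions for illustration.
interface Multilink1
 ip address 10.0.0.1 255.255.255.252
 ppp multilink
 ! fragment large packets so no single fragment delays the link more than ~10 ms
 ppp multilink fragment delay 10
 ! interleave small real-time packets between the fragments
 ppp multilink interleave
 ppp multilink group 1
interface Serial0/0/0
 encapsulation ppp
 ppp multilink
 ppp multilink group 1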
- LFI for Frame Relay and ATM VCs transports real-time voice and other data traffic over lower-speed Frame Relay and ATM virtual circuits without adding excessive delay to the real-time traffic. Cisco implemented this feature so that delay-sensitive real-time packets and non-real-time data packets can share the same link: long data packets are fragmented into a sequence of smaller fragments and interleaved with the real-time packets, and on the receiving end of the link the fragments are reassembled and the packets reconstructed. This fragmentation and interleaving improves QoS for real-time traffic on the network ("Cisco," 2013). There are restrictions: only one link per MLP bundle is allowed, and Voice over ATM and Voice over Frame Relay are not supported, although VoIP is. A brief illustrative configuration sketch follows.
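On Frame Relay links, fragmentation is typically configured through a map class tied to Frame Relay traffic shaping; the sketch below is illustrative only (the DLCI, CIR, and 80-byte fragment size are assumptions).
map-class frame-relay FRF12-VOICE
 frame-relay cir 64000
 ! FRF.12 fragmentation of frames larger than 80 bytes
 frame-relay fragment 80
interface Serial0/0/1
 encapsulation frame-relay
 frame-relay traffic-shaping
interface Serial0/0/1.100 point-to-point
 frame-relay interface-dlci 100
  class FRF12-VOICE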
- Real-Time Protocol (RTP) is the Internet-standard protocol for transporting real-time data. It provides network transport functions for applications that deliver video, audio, and data simulation over either unicast or multicast network services, and it supports conferencing for groups of any size on the Internet. That support includes identification of sources, gateway support (both audio and video bridges), and multicast-to-unicast translators. RTP provides QoS feedback from receivers to the multicast group and allows different media streams to be synchronized.
- RTP packets consist of a header and a data portion. The data portion is a thin protocol that supports the real-time properties of applications such as continuous media, including content identification, loss detection, and timing reconstruction. For small payloads the RTP/UDP/IP header can be large relative to the data portion, so to avoid consuming bandwidth, RTP header compression is applied on a link-by-link basis ("Cisco," 2013). The following is a figure of RTP header compression per Cisco (2013), followed by a brief configuration sketch.
(Figure omitted for preview. Available via download)
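Enabling RTP header compression on a link is a single interface command in Cisco IOS; the sketch below (interface chosen arbitrarily) shows the PPP form, while Frame Relay links use the frame-relay variant of the command.
interface Serial0/0/0
 encapsulation ppp
 ! compress the 40-byte IP/UDP/RTP header on this link
 ip rtp header-compression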
Service levels "refer to the actual end to end QoS capabilities, meaning the capability of a network to deliver service needed by specific network traffic from end to end or edge to edge. The services differ in their level of QoS strictness, which describes how tightly the service can be bound by specific bandwidth, delay, jitter and loss characteristics. Three basic levels of end to end QoS can be provided across a heterogeneous network: Best-effort service, [which is] known as lack of QoS, best effort service is basic connectivity with no guarantees. This is best characterized by FIFO queues, which have no differentiation between flows; Differentiated service, [also referred to as soft QoS], where some traffic is treated better than the rest (faster handling, more average bandwidth, and lower average loss rate). This is a statistical preference, not a hard and fast guarantee. This is provided by classification of traffic and the use of QoS tools such as PQ, CQ, WFQ and WRED" ("The Internetworking Technologies Handbook," n.d.). For the purposes of this project, end to end QoS capabilities will be utilized.
One aim of the project plan is to limit audio and video streaming when appropriate. Measures will therefore be taken to ensure that end-to-end QoS is implemented rather than relying on networking dictated by anti-malware software. Egilmez et al. (n.d.) stated that for multimedia transmission, timely delivery is preferred over reliability. Streaming multimedia applications have delay requirements that cannot be guaranteed over an ordinary Internet connection. The end-to-end design has two major benefits: it permits a best-effort type of service, and it minimizes overhead and cost at the network layer without sacrificing the robustness of the network architecture. It is therefore very desirable for a network infrastructure to support QoS for multimedia applications (pg. 1). To improve system performance and limit audio and video streaming when appropriate, QoS arbitration must be applied within the QoS framework.
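As a hedged sketch of how the "limit audio and video streaming" objective could be expressed in Cisco IOS using NBAR classification plus a policer; the protocols matched, the URL pattern, the 1 Mbps cap, and the interface are illustrative assumptions, not the project's final policy.
class-map match-any AV-STREAMING
 ! classify common streaming traffic with NBAR
 match protocol rtsp
 match protocol http url "*video*"
policy-map LIMIT-STREAMING
 class AV-STREAMING
  ! cap the aggregate streaming rate; excess traffic is dropped
  police 1000000 conform-action transmit exceed-action drop
interface GigabitEthernet0/1
 service-policy input LIMIT-STREAMING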
Siller and Woods (2003) noted that many solutions have been proposed to provide QoS on networks, and that there are seven layers in the application model of QoS. Because the concern is ultimately the end user's experience, "a [user-centric] metric used for measuring performance is Quality of Experience or QoE. The application layer of QoS is concerned with parameters such as resolution, frame rate, color, video codec type, audio codec type, layering strategy, sampling rate and number of channels. On the application layer of QoS, it is driven by the human perception of audio and video. The perception is based [primarily] on three characteristics: spatial perception, temporal perception, and acoustic bandpass for audio. Applications such as teleconferencing, VoIP, and distance learning make use of these characteristics. In terms of coding, the audio is divided into blocks of samples called frames" (pg. 1-2). The relationship between QoS and multimedia is still fairly new in terms of company understanding and adoption. Within the discussion of QoE, "the totality of QoS mechanisms provided to ensure smooth transmission of audio and video of IP networks [is necessary]. QoE is referred to as what a customer experiences and values to complete his tasks quickly and with confidence." Siller and Woods (2003) also proposed a QoE benchmark for e-business. In order for the project plan objectives to be met, QoE will be taken into account in the implementation.
(Figure omitted for preview. Available via download)
Integration into the company infrastructure will not be easy, because monitoring QoS is not a simple task. Reports will be produced bi-weekly and analyzed by key individuals to determine whether QoS is functioning as it should. It will be important to compare QoS traffic per interface, per service policy, and per device within the company so that it is understood how QoS is operating and being maintained. QoS monitoring is vital for good-quality VoIP and for troubleshooting in general. We will employ a QoS sensor to monitor the quality of a network connection by examining QoS parameters such as packet loss, delay variation (jitter), and other readings. Slight variations in these parameters typically have only a nominal effect on TCP-based services, but it is still important to measure and monitor the connection parameters continually to ensure that everything governed by QoS is running smoothly.
QoS monitoring takes its measurements by sending UDP packets between two remote probes. This means that any connection within the company's network can be tested simply by placing a remote probe at each end of the connection and measuring the quality between them. If the QoS monitor detects an issue, the network administrator is notified immediately and can work to resolve the problem. QoS monitoring is therefore essential not only for general networking services but for all services, including VoIP ("QoS Monitoring," 2013).
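In addition to third-party sensors, Cisco IOS can generate this kind of synthetic UDP test traffic itself with IP SLA. The minimal sketch below (target address, port, codec, and operation number are assumptions) measures jitter, loss, and one-way characteristics between two routers.
! on the sending router: a UDP jitter probe simulating a G.729 voice stream
ip sla 10
 udp-jitter 10.20.30.40 16384 codec g729a
 frequency 60
ip sla schedule 10 life forever start-time now
! on the target router: answer the probe's test packets
ip sla responder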
If any of the aforementioned mechanisms is not functioning properly, updates will be made. Continual evaluation will be needed to ensure that QoS keeps meeting the project objectives established at the onset of the project. Policies may be adjusted if elements of the QoS implementation change in ways that affect overall company protocol, or if network congestion risks appearing in the interim. The multimedia elements, such as QoE, will also be inspected to make certain that those processes and features of the QoS are operating efficiently.
The next section discusses managing QoS and the purpose behind it. This portion of the plan is one of the most important, because managing QoS helps alleviate network traffic congestion. Selecting the right management tool comes first, along with understanding the common methodology associated with QoS policies and the goals of QoS management.
According to the Internetworking Technologies Handbook, managing the QoS "helps to set and evaluate QoS policies and goals. Common methodology entails the following steps:
I. Baseline the network with devices such as RMON probes. This helps in determining the traffic characteristics of the network. Also, applications targeted for QoS should be baselined (usually in terms of response time).
II. Deploy QoS techniques when the traffic characteristics have been obtained and an application has been targeted for increased QoS.
III. Evaluate the results by testing the response of the targeted applications to see whether the QoS goals have been reached. For ease of deployment, Cisco's Quality of Service Policy Manager and Quality of Service Device Manager can be [utilized]. For verification of service levels, Cisco's Internetwork Performance Monitor can be [utilized]. Consideration [must be given to] an ever-changing network environment. QoS is not a onetime deployment, but an ongoing, essential part of network design" (pg.49-5).
There are several ways in which congestion on a network can be handled. Per Cisco (2013), the software includes the following queuing algorithms for controlling, or rather managing, congestion: FIFO queuing, priority queuing, custom queuing, flow-based weighted fair queuing, and class-based weighted fair queuing. Each type of queuing was designed to solve network traffic issues that occur from time to time and to improve overall network performance.
In our assessment of what is needed, we have determined that the best management tool for our purposes is flow-based WFQ. Flow-based WFQ is "desirable to provide consistent response time to heavy and light network users alike without adding excessive bandwidth. WFQ is one of Cisco's premier queuing techniques. It is a flow based queuing algorithm that creates bit-wise fairness by allowing each queue to be serviced fairly in terms of byte count. WFQ ensures that queues do not starve for bandwidth and that traffic gets predictable service. Low-volume traffic streams-which comprise the majority of traffic-receive increased service, transmitting the same number of bytes as high volume streams. This behavior results in what appears to be preferential treatment for low-volume traffic, when in actuality it is creating fairness" ("Internetworking Technologies Handbook," n.d.). The following figure per Cisco (2013) reflects that statement.
(Figure for preview. Available via download)
The Internetworking Technologies Handbook further noted that:
"WFQ is designed to minimize configuration effort and it automatically adapts to changing network traffic conditions. In fact, WFQ does such a good job for most applications that it has been made the default queuing mode on most serial interfaces. Flow based WFQ creates flows based on a number of characteristics in a packet. Each flow (also referred to as a conversation) is given its own queue for buffering if congestion is experienced. WFQ is efficient in that it uses whatever bandwidth is available to forward traffic from lower priority flow if no traffic from higher priority flows is present. This is different from [other methods]. The WFQ algorithm also addresses the problem of round trip delay variability. [For example] if multiple high volume conversations are active, their transfer rates and interarrival periods are made much more predictable. This is created by the bit wise fairness. If conversations are serviced in a consistent manner with every round robin approach, delay variation stabilizes. WFQ greatly enhances algorithms such as SNA Logical Link Control and the Transmission Control Protocol congestion control and slow start features" (pg. 49-13-14).
(Diagram omitted for preview. Available via download)
Additionally, the Internetworking Technologies Handbook stated that WFQ is “IP precedence-aware, [which means] it is capable of detecting higher priority packets marked with precedence by the IP forwarder and can schedule them faster, and providing superior response time for this traffic. This is the weighted portion of WFQ. The IP Precedence field has values between 0 and 7 (6 and 7 are reserved and normally are not set by network administrators). As the precedence value increases, the algorithm allocates more bandwidth to that conversation to make sure that it is served more quickly when congestion occurs. WFQ assigns a weight to each flow, which determines the transmit order for queued packets. In this, lower weights are provided more service. IP precedence serves as a divisor to this weighting factor. For instance, traffic with an IP Precedence field value of 7 gets a lower weight than traffic with an IP Precedence field value of 3, and thus has priority in the transmit order" (pg.49-14).
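Enabling flow-based WFQ on an interface is a single command in Cisco IOS, and as the Handbook notes it is already the default queuing mode on many low-speed serial interfaces; the congestive discard threshold of 64 in the sketch below is an illustrative assumption.
interface Serial0/0/0
 ! flow-based WFQ; start discarding from aggressive flows once a queue holds 64 messages
 fair-queue 64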
In addition to congestion management tools, congestion avoidance tools can also be implemented. Per Cisco (2013), the primary congestion avoidance tool is WRED, or weighted random early detection. The Internetworking Technologies Handbook states that "random early detection algorithms are designed to avoid congestion in internetworks before it becomes a problem" (pg. 49-16). When planning the project, WRED was therefore included to keep congestion to a minimum and potentially avoid it altogether. Random early detection was designed primarily as an algorithm that works with TCP in IP environments, and understanding the elements that go into WRED was necessary before adopting it.
WRED "combines the capabilities of the RED algorithm with IP precedence. This combination provides for preferential traffic handling for higher-priority packets. It can selectively discard lower priority traffic when the interface starts to get congested and can provide differentiated performance characteristics for different classes of service. WRED is also RSVP aware and can provide an integrated services controlled load QoS" ("Internetworking Technologies Handbook," n.d.). The following illustrates a WRED diagram.
(Diagram omitted for preview. Available via download)
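WRED can be enabled per interface (legacy style) or inside an MQC class; the interface-level sketch below is illustrative only, with the dscp-based keyword shown as an optional variant.
interface Serial0/0/0
 ! enable weighted random early detection, weighted by IP precedence by default
 random-detect
 ! optionally weight drop probabilities by DSCP instead of IP precedence
 random-detect dscp-based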
The projected timeline is approximately 10 months for the first company-wide implementation, which will limit video and audio consumption. Additional project requests will be submitted for future phases, prioritized during the initial implementation and the monitoring of its success. The following pyramid illustrates the timeline of the project's execution within a 10-month time frame for company-wide implementation.
(Pyramid omitted for preview. Available via download)
The following chart shows the timeline in overall QoS implementation and execution of specifics within the QoS.
(Chart omitted for preview. Available via download)
Each stage of the QoS implementation depended on the effectiveness of the previous step. The initial stages combined discussion and project planning: understanding how QoS is done, learning the surrounding concepts, and identifying all of the components needed to meet the objectives stated at the onset of the project. Implementation then began. Resources were assembled, and admission control, classification and marking, and the scheduling tools were implemented, or scheduled to be implemented, within a three-month period; there were no issues with this stage of the QoS execution. At the 4-6 month mark, the traffic-shaping algorithms and the link-specific mechanisms were added, and these were also implemented successfully. As the six-month mark neared, the end-to-end aspects were added and examined. The 7-10 month stage then began, with the QoE elements and the flow-based WFQ management tool being added.
Completing the project in 10 months turned out to be an ambitious undertaking, and the implementation encountered several issues because of the speed at which it was rolled out across the company. Although the initial expectation was that QoS would be fully implemented within 10 months, a few problems occurred during the 7-10 month stage with the flow-based WFQ management tool. While this Cisco queuing technique is one of the best, CQ was considered as an alternative because testing of the flow-based WFQ implementation revealed a few congestion-related issues.
The Internetworking Technologies Handbook states that CQ was "designed to allow various applications or organizations [to] share the network among applications with specific minimum bandwidth or latency requirements. In these environments, bandwidth must be shared proportionally between applications and users. Cisco CQ [can be used] to provide guaranteed bandwidth at a potential congestion point, ensuring the specified traffic a fixed portion of available bandwidth and leaving the remaining bandwidth to other traffic. Custom queuing handles traffic by assigning a specified amount of queue space to each class of packets and then servicing the queues in a round robin fashion" (pg. 49-7). The following diagram illustrates CQ.
(Diagram omitted for preview. Available via download)
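Had CQ been adopted, it would have been configured with legacy Cisco IOS queue lists along the lines of the illustrative sketch below; the queue numbers, byte counts, and the TCP port matched are assumptions.
! queue 1 carries web traffic, queue 2 is the default queue
queue-list 1 protocol ip 1 tcp 80
queue-list 1 default 2
! byte counts set each queue's share per round-robin cycle (roughly 2:1 here)
queue-list 1 queue 1 byte-count 3000
queue-list 1 queue 2 byte-count 1500
interface Serial0/0/0
 custom-queue-list 1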
CQ was considered because of the congestion issues seen in testing, but ultimately the company decided to remain with the flow-based WFQ approach, which is regarded as the superior queuing technique.
While the basic objective of the project plan was to limit audio and video streaming as appropriate, future development of the QoS will center on VoIP solutions based on Cisco (2013) capabilities. "IOS software [from Cisco] features give VoIP traffic the service that it needs, while providing the traditional data traffic with the service that it needs as well. [Some] businesses have opted to reduce some of [their] voice costs by combining voice traffic onto [their] existing IP network" ("Internetworking Technologies Handbook," n.d.). Although this was not the focus of our QoS implementation, it is something we were able to position for future execution. The following diagram from Cisco (2013) gives an overview of a QoS VoIP solution, which will be emphasized in the months following the rollout of the company-wide QoS implementation tied to the project objectives.
(Diagram omitted for preview. Available via download)
When evaluated on a scale of 1-10 (10 being the highest in terms of company satisfaction), the responses were mostly 9s and 10s. The following is the list of questions given during the first company-wide implementation of the QoS. The survey consisted of 5 questions and was distributed to 50 people within the IT department.
(Questions and table omitted for preview. Available via download)
It can be concluded that the majority of those surveyed thought the QoS implementation was well executed with minimal issues. Surveys will be distributed again when additional updates to the QoS are deployed.
(Competency Matrix omitted for preview. Available via download)
One of the key aspects of leadership is knowing how to develop a plan and stick with it. This capstone project enabled me to learn more about the process of implementing QoS on a network, and working with a team of individuals provided invaluable insight into different leadership styles. The style exercised throughout this plan was a combination of a delegative style, which allows employees to make decisions while the leader remains responsible for them, and a democratic style, which invited input from multiple individuals in order to execute a project of this nature successfully and skillfully. My professional skills were furthered as well: teamwork was a unique component of the project, and without a team the plan and its implementation would probably not have happened, or at least would not have been as effective and efficient as it currently appears to be.
Reasoning out how long the project would take required careful, rational thinking, and it became important to judge when each aspect of the plan should be implemented and examined. Although issues arose over the course of the project, working through the problem of assessing an alternative congestion management tool when the chosen one proved problematic built greater expertise in the different and diverse tools that QoS offers.
This was perhaps the most developed area of the project, because communication is the foundation of any project plan being implemented and deployed properly. Effective communication minimizes the potential issues and problems that can occur while a plan is being implemented. In terms of language, familiarity with the technical jargon allowed for a better grasp of the material pertaining to QoS and the various aspects of the IT topic.
Communication also played a key role in being a good leader throughout the project and ensuring that the objectives were met on time. It was essential that the communication given to the team be compelling, strategic, and, of course, clear. Communication is a two-way process: while direction was being given on carrying the project out efficiently and cost-effectively, others on the team were also communicating with me about how well the implementation of the plan was going.
Understanding big data analytic concepts becomes all the more important in a project of this magnitude. Although Cisco (2013) hardware and software were used, analyzing and testing the implementation along the way required competency in basic problem solving and a sound quantitative understanding. Knowing which graphs and charts to read and use during the project was also needed, and drawing inferences about the best congestion management tools was necessary for the project to be implemented smoothly and on time.
The capstone project demonstrated my technology competency in project management. Project management is often an overlooked component of any venture; however, effective project management can push a project to its greatest potential, while poor management can cause a project to end abruptly. It is essential that a team working on a project has clear and precise objectives for what it is working toward, that the management of the project is approached strategically, and that it is carried out with diligence, detailed expectations, and constructive criticism.
Within the project, there were checkpoints to ensure that everything was going according to plan, and they also provided an opportunity to change or add instructions if needed. Regular checkpoints improve the project management skills of the project leader. Although this particular QoS project is complete, continual evaluation will be done, as that is an integral step in the overall project management process; it helps the team understand what works and what does not, both for additional phases of QoS and for future projects. My understanding of project management was solidified by this project, and it allowed me to better comprehend IT concepts pertaining to network traffic, the components that ensure quality of service on a network, and why it is essential.
In conclusion, the implementation had only minor problems and issues, which tends to happen on projects from time to time. While there were a few snags in the overall time frame and during the first company-wide execution, the project as a whole was a success. With QoS now effectively implemented, the company can look forward to smoother network traffic and better use of bandwidth, among other networking improvements.
References
Anderson, T. (2009, March 3). Why Quality of Service is Even More Important in a Virtual Environment. Retrieved September 5, 2013, from Storage Switzerland, LLC website: http://www.storage-switzerland.com
Bhagat, S. (n.d.). QoS: Solution Waiting For A Problem [Article]. Retrieved from Rutgers University website: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.119.1234&rep=rep1&type=pdf
Cardoso, J., Sheth, A., Miller, J., Arnold, J., & Kochut, K. (2004). Quality of Service for Workflows and Web Service Processes. Journal of Web Semantics, 1-40.
Cisco. (2013). Retrieved September 4, 2013, from Cisco website: http://www.cisco.com/
Egilmez, H. E., Dane, S. T., Bagci, K. T., & Tekalp, A. M. (n.d.). OpenQoS: An OpenFlow Controller Design for Multimedia Delivery with End-to-End Quality of Service over Software-Defined Networks. IEEE, 1-8.
Internetworking Technologies Handbook [Handbook]. (n.d.). Retrieved from Cisco Systems website: http://www.cisco.com/en/US/docs/internetworking/technology/handbook/ito_doc.html
QoS Monitoring. (2013). Retrieved September 6, 2013, from Paessler AG website: http://www.paessler.com/qos_monitoring
Qualcomm Introduces StreamBoost Technology to Optimize Performance and Capacity of Home Networks. (2013). Retrieved September 7, 2013, from QUALCOMM Incorporated website: http://www.qualcomm.com/media/releases/2013/01/04/qualcomm-introduces-streamboost-technology-optimize-performance-and
Siller, M., & Woods, J. (2003). Improving Quality of Experience For Multimedia Services By QOS Arbitration on A QOE Framework. Proc. of the 13th Packet Video Workshop, 1-7.