The Internet is not a homogeneous network. It may be argued that the price of sending a message should be based on the most congested point of the network. However, the path that a packet will take cannot be predicted with any degree of certainty. It is thus close to impossible to base pricing on an algorithm tied to the network load at the most congested point along the path that the packets must traverse to reach their destination.
Also, network load is unpredictable, and is prone to sudden peaks and troughs. It is entirely possible that the load at a particular node changes rapidly and the bid is simply not good enough to receive priority from that node at that moment, even though it might have been so earlier.
It may be argued that, through consensus, a system could evolve in which "regional" congestion is calculable, with the price determined by an algorithm that considers all possible routings and all possible levels of network load. However, given the diversity of the Internet and the multiple levels of players, this sounds extremely far-fetched and difficult to achieve without a neutral oversight agency.
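The congestion-based auction under discussion can be sketched in a few lines. The following is an illustrative sketch, not MacKie-Mason and Varian's actual specification: each packet carries a bid, a router admits the highest bids up to its capacity, and every admitted packet pays the highest rejected bid. The sketch makes the article's point concrete: the clearing price is local to each router, so the end-to-end price depends on conditions at every congested router along an unpredictable path.

```python
# Illustrative sketch of a single router's "smart market" admission decision
# (an assumption for exposition, not the authors' implementation). Packets
# bid for priority; the router admits the highest bids up to capacity, and
# every admitted packet pays the highest *rejected* bid.

def smart_market(bids, capacity):
    """Return (admitted bids, clearing price) for one pricing interval."""
    ranked = sorted(bids, reverse=True)
    admitted = ranked[:capacity]
    rejected = ranked[capacity:]
    price = rejected[0] if rejected else 0  # uncongested router: price ~ 0
    return admitted, price

admitted, price = smart_market([9, 7, 5, 3, 1], capacity=3)
# The three highest bids are admitted; each pays the highest rejected bid, 3.
```

Note that nothing in this local auction tells a sender what the total price of a multi-hop path will be, which is precisely the difficulty raised above.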
Second, and more importantly, a pricing system based on network load opens itself up to potential abuse by those who control the facilities at the system bottlenecks. It may be argued that any system would be vulnerable to abuse, but the anonymity of data transferred along the Internet would make this system especially vulnerable: for example, unscrupulous firms in control of the various nodes would have both the incentive and ability to manipulate the network load to keep it artificially high so as to create an upward pressure on the price of network usage.
Given that marginal costs are almost zero, the firm would attempt to maximize revenue. It can do this by tracking network usage and artificially keeping the network load at a point where overall revenue realization is maximized.
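The revenue-maximization incentive can be made concrete with a toy model. The demand curve and all numbers below are assumptions for illustration, not figures from the text:

```python
# Hypothetical illustration (demand curve and numbers are assumptions): with
# marginal cost near zero, a node owner maximizes revenue by restricting the
# traffic it admits (keeping the node "congested") rather than carrying all
# the traffic its capacity allows.

def revenue(q, a=10.0, b=0.1):
    """Revenue at traffic volume q under an assumed inverse demand p = a - b*q."""
    price = a - b * q
    return price * q

capacity = 100
best_q = max(range(capacity + 1), key=revenue)
# Revenue peaks at q = 50, well below capacity: the firm earns more by
# admitting half the traffic at a high congestion price than by clearing
# the congestion.
```

Because marginal cost is near zero, the revenue-maximizing volume is also approximately the profit-maximizing volume, so the firm faces no countervailing cost pressure to relieve the congestion.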
The system is therefore open to abuse by bottleneck-controlling firms that peg the network load at high levels in order to maximize revenue, thereby manipulating the price of network usage upwards. For the system to operate fairly and efficiently, there would either have to be no motivation to exploit market power, or a strict system of controls against abuse. These two issues, the network's heterogeneity and the possibility of manipulation, are the fundamental reasons why the Smart Market mechanism, or any variation of it, needs to be combined with an institutional form that is responsible for (a) consensus-building, and (b) ensuring against manipulation, anti-competitive behavior, and abuse of market power.
Given the experience of the telecommunication industries, it should be amply clear that there is an essential contradiction in free-market operations: the greater the degree of freedom, the greater the role for regulation becomes.
It is important to address the control of bottlenecks and their role in influencing the pricing mechanism. Although an oversight agency could, hypothetically, ensure that the consumer surplus [13] generated is not collected as excess profits by the firms and is instead returned to consumers (MacKie-Mason [14]), it is more desirable to design a system in which the transfer of excess funds does not happen in the first place.
While it is true that competition is the best form of regulation, the privatization of the Internet's facilities and the emergence of the NAPs indicate that the owners of the underlying trunks and access paths (the Regional Bell Operating Companies, the Inter-Exchange Carriers, and the CAPs) are likely to have more market power than any private organization has had over the Internet to date.
Whether one envisions Internet carriage emerging as a competitive industry or one that is effectively oligopolistic, there seems to be a role for regulatory agencies. There is a need to regulate pricing and control anti-competitive behavior in the event that the industry is less than competitive. On the other hand, even if the system is highly competitive, the dynamics of network pricing need to be implemented by some form of nonprofit consortium or by a public agency to ensure consumer protection on the one hand, and coordination and consensus among the different service providers on the other.
In the absence of such consensus-building activities, and in an imperfect market, dynamic pricing is likely to have a chaotic effect, with extremely high costs of accounting and regulatory oversight. This might discourage the implementation of such a scheme in the first place.
Some may argue that, in the event a purely competitive situation emerges, it does not matter what form of pricing scheme prevails (Bohn [15]). But this overlooks the fact that every pricing scheme has its own inherent bias and different levels and kinds of associated social benefits. An added factor that needs to be assessed is how technology is expected to develop over time.
Similar to pricing schemes, every technology also has its own bias. Since technological development is likely to be unbalanced, and breakthroughs can be expected to be sporadic both in terms of time and space, the pricing schemes that are implemented need to be accordingly tailored to reflect or obviate the effects of technological imbalances.
For example, transmission technology, which depends on fiber optics, is slated to develop much faster than switching technology, which is currently electronic. Should switching technology develop quickly, for instance through the implementation of fiber-optic switching, the fear of congestion at the nodes would no longer be valid. The bottleneck would then shift back to the transmission lines, not in terms of the physical capacity of the fiber-optic trunk lines, but in the costs associated with overlaying all user lines, especially the last loop that connects the customer's premises to the nearest switch.
In all likelihood, the market is going to be transformed in an incremental manner. Initially, some form of usage-based pricing, possibly dynamic pricing, may be combined with flat-rate pricing. For applications that require resource reservation, usage-based pricing would be necessary to control their proliferation and to ensure network performance.
For more traditional forms of net usage, such as email, flat-rate access would continue to be the norm. In other words, the pricing system that is likely to evolve would move the industry towards multiple service levels.
While it would be difficult to predict the exact form of pricing that will emerge, it seems clear that there will be a role for oversight agencies and regulators as the Internet evolves.

Bohn, R. Future Internet pricing. Posting on the telecomreg relay.
Bohn, R. Mitigating the coming Internet crunch: Multiple service levels via Precedence. Tech. rep.
Business Editors. Competition, controversy ahead in era of Internet commercialization. Business Wire, March 11.
Cocchi, R. Pricing in computer networks: Motivation, formulation, and example.
Campbell, A. Distributed testing: Avoiding the Domino effect. Telephony, April 4.
England, K.
Love, J.
MacKie-Mason, J.
Mitchell, B. Pricing local exchange services: A futuristic view. In Perspectives on the telephone industry: The challenge of the future. Edited by James H.
Pecker, C. To connect or not to connect: Local exchange carriers consider connection-oriented or connectionless network services.
Russell, J. Multimedia networking requirements. In Asynchronous Transfer Mode. Plenum: New York.
Tenney, G.
Varian, H. Pricing the Internet.
Varian, H. Economics of the Internet.
Wenders, J. Deregulating the Local Exchange.
Vickrey, W. Local telephone costs and the design of rate structures: An innovative view.

Mitrabarun Sarkar. Traffic statistics are available from Merit's ftp site at nic. Varian and MacKie-Mason note that the actual growth has been faster. Internet usage is underestimated by the Merit figures, which do not incorporate data related to alternative backbone routes, where traffic is estimated to have been growing much faster.
This division of labor also keeps barriers to entry in the market for online applications and content extremely low: Anyone can create a new web site without the assistance or approval of the large firms that manage the internet's physical infrastructure. And the fact that anyone can enter the market is one reason technological progress online has been so rapid. There is nothing inevitable, however, about the fact that payments flow from broadband companies to backbone companies instead of the other way around.
The direction of payments results from unregulated negotiations among the companies involved; they are a function of relationships developed in an open market.
Traditionally, broadband ISPs have been small compared to backbone providers, so the former paid the latter for connectivity. But a firm that controlled a sufficiently large share of the internet's end points — the points at which users access the network — would be able to make itself a gatekeeper to the market for online applications and content. And as the broadband market has been consolidating in recent years, this has become more than a theoretical possibility.
The rise of online video has given large providers of internet access even greater incentive to pursue a more active gatekeeper role. Growing numbers of consumers are "cutting the cord," using online video services such as Netflix as substitutes for traditional cable-television subscriptions. And since some of the largest incumbent providers of internet access are cable-television companies, this trend poses a serious competitive threat to their core business.
It is logical that these firms would want to neutralize, or at least reduce, this threat by restricting the flow of rival internet content through their networks. Up to now, the internet's decentralized architecture has made such anti-competitive behavior difficult; with the continued consolidation of broadband, however, this obstacle could be swept away.
Imagine, for instance, that a single broadband provider (call it Broadband, Inc.) came to control a sufficiently large share of the internet's end points. This would dramatically alter the internet's structure, effectively transforming the internet's largest backbone providers into mere re-sellers of access to Broadband, Inc.'s customers. That, in turn, would leave web-content companies such as Yahoo! dependent on Broadband, Inc. to reach their users, and Broadband, Inc. would be in a position to charge those companies for delivering their content. The economics of the internet would never be the same, and the competitiveness so critical to its great success would suffer most of all.
Rapid consolidation also occurred in the cable-television industry during the same period, with several small cable firms merging to form two giants: Comcast and Time Warner. Comcast's dominance is likely to grow in the coming years. Comcast completed an upgrade of its network that allows it to offer broadband service an order of magnitude faster than the typical DSL connection provided by phone companies today. For technical reasons, Comcast can squeeze much more bandwidth out of the coaxial cables that make up its existing cable network than can be squeezed from the twisted-pair cables of traditional telephone networks.
FiOS is Verizon's attempt to solve this problem by replacing its slow telephone cables with fiber-optic connections capable of offering speeds that can compete with Comcast's. But Verizon has announced that it is winding down its FiOS installation efforts. Verizon plans for the network to reach around 18 million households, but it will bypass some major metropolitan areas, including a few, like Boston, at the heart of its service area.
News reports cited the high costs of the project as a reason why it was not being extended to all homes in Verizon's territory. This might explain why, in one recent quarter, Comcast added more than twice as many subscribers as the seven largest telephone incumbents combined.
Comcast's large share of the broadband market, along with the fact that most of its customers have few if any comparable alternatives, gives Comcast significant leverage in negotiating with backbone ISPs.
Comcast has traditionally been a customer of Level 3, one of the largest internet backbone providers. When a Comcast user exchanged data with a network that was not directly connected to Comcast's own network, Comcast paid Level 3 to carry that traffic. But the two firms then became locked in a bitter dispute. Level 3 had just won a contract to deliver content for Netflix, one of the internet's largest video services.
Anticipating that Netflix would generate more traffic than the existing links between the Comcast and Level 3 networks could accommodate, Level 3 proposed installing additional links between the networks. Ordinarily, Comcast, as a Level 3 customer, would gladly accept what was essentially a free upgrade. Instead, Comcast refused to accept the new connections unless Level 3 agreed to pay Comcast for the additional traffic.
And Level 3, after voicing strong objections, paid up. Was this the first step toward a regime in which Comcast requires backbone providers to pay it for access, instead of having payment flow in the other direction? It may well have been, although the details of this incident were a little more complicated. Before it signed its contract with Level 3, Netflix had used a service called a content-delivery network to deliver video content to Comcast broadband users.
CDNs are networks run by third parties, such as Akamai, which place servers within the networks of large ISPs like Comcast, enabling faster downloads while reducing Comcast's bandwidth costs. Level 3, by contrast, planned to host Netflix content on servers outside of Comcast's network, access to which would likely have consumed significantly more resources on Comcast's network.
Comcast argued that Level 3 was abusing its relationship with Comcast to gain an unfair advantage in the CDN market, and in a way that would cost Comcast money. Remember, CDNs typically save broadband ISPs money by reducing the amount of expensive long-distance traffic they must pay for. The fact that Comcast feels comfortable charging CDNs for the privilege of installing servers that reduce Comcast's own bandwidth expenses is itself evidence of Comcast's growing market power.
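The cost asymmetry described above can be illustrated with a toy calculation. All figures below are hypothetical assumptions, not data from the text:

```python
# Hypothetical illustration (all figures are assumptions): why an in-network
# CDN cache reduces a broadband ISP's paid transit traffic, and why hosting
# the same content on servers outside the network does the opposite.

def monthly_transit_cost(video_gb, cache_hit_rate, cost_per_gb=0.01):
    """Transit the ISP pays for: only cache misses cross the network edge."""
    miss_gb = video_gb * (1.0 - cache_hit_rate)
    return miss_gb * cost_per_gb

in_network = monthly_transit_cost(1_000_000, cache_hit_rate=0.9)  # CDN inside the ISP
external = monthly_transit_cost(1_000_000, cache_hit_rate=0.0)    # servers outside

# Under these assumed numbers, the external arrangement consumes roughly ten
# times the paid transit of the in-network one.
```

The asymmetry is the point: a CDN inside the ISP's network saves the ISP money, which is why charging such CDNs for placement, rather than welcoming them, signals market power.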
Comcast's dispute with Level 3 suggests that, if the company does not yet have the ability to charge backbone networks to deliver traffic to its customers, it is getting close. Comcast's leverage over backbone providers has grown with its share of the broadband market, and, as we have seen, that market share is likely to continue growing over the next decade. It seems increasingly clear that the economic model of the internet is changing in ways that will soon present regulators with very troubling challenges.
Regulating telecommunications networks is difficult because it is hard to draw clear lines between the monopolistic parts of the industry and the competitive ones.
On a highway, this line is defined by the point at which the rubber meets the road: Competitive industries drive above this point, while the monopolistic road system lies below it. Because building roads and operating vehicles are such different activities, it is not difficult to keep them separated.
Regulators have repeatedly tried to draw similar lines in the telecommunications industry, but to no avail. The FCC first tried to divide computing and communications; later, it tried to separate the "local loop" from retail internet access.
Such distinctions have quickly been rendered obsolete by technological progress and incumbent resistance. And the failure to define clear lines has produced costly litigation and uncertainty.
The one line that has stood the test of time was not drawn by federal regulators at all. As we have seen, the buffer between ISPs and content providers is defined by the internet's protocols and enforced by the internet's decentralized architecture.
And the durability of this line has given regulators the luxury of taking a hands-off approach to the internet: The FCC hasn't needed to closely scrutinize the actions of broadband incumbents, because incumbents have no real leverage over content-providing firms like Google, Facebook, and Netflix. If a broadband provider gained enough market power to undermine the internet's decentralized architecture, however, the FCC would no longer have this luxury. The commission would likely be sucked into adjudicating an endless series of interconnection disputes, just as it has for decades in the traditional telephone industry.
And just as in the telephone industry, this would be a process rife with rent-seeking, waste, and expensive litigation. But if the FCC chose not to get involved in such interconnection disputes, independent backbone providers would likely be driven out of business.
The result would be a new, vertically integrated internet that would smother competition and innovation in Silicon Valley. In other words, broadband consolidation presents a case where an ounce of regulatory prevention today would be worth more than a pound of regulatory cure in the future. By preventing broadband incumbents from becoming too concentrated, regulators can preserve the internet's backbone as a buffer between the duopolistic broadband market and the competitive internet economy.
How can policymakers protect this division? The basic strategy should be to prevent any single firm from gaining too large a share of the national broadband market. There has never been a network like the internet before, so it is impossible to know exactly how much concentration is "too much."
And given the high stakes, it is better to be safe than sorry. The first and least intrusive step regulators can take is to require greater transparency. The terms of interconnection on the internet are closely guarded secrets, a fact that makes it difficult for regulators and the general public to understand how the market is evolving. Requiring that agreements involving large broadband incumbents be publicly available would allow for a much better informed public debate.
Second, regulators should prevent mergers that would increase the market share of the largest broadband providers. There have been many mergers among phone and cable incumbents over the years; had regulators blocked some of them, it is unlikely that any firm would have a large enough share of the broadband market to raise concerns today. Rather than reviewing mergers on a case-by-case basis, a process that invites arbitrariness and corruption, Congress should establish a formal rule governing mergers involving phone and cable incumbents.
Such mergers increase the incentive and opportunity for broadband incumbents to engage in anti-competitive conduct. A final step, and the most intrusive and problematic, would be to follow the example policymakers set a generation ago and seek divestiture of the largest broadband incumbents. As the largest of these, Comcast merits the most attention. Breaking the firm up into two or more pieces would greatly reduce the risk it poses to the open internet.
To be sure, breaking up Comcast would have significant downsides. The legal process would be long and acrimonious, and litigation would distract Comcast's leadership from improving their company's products and services. And to the extent that Comcast owes its growing market share to its aggressive investment in network upgrades, it would be unfair to use that smart investment as a basis for seeking the cable giant's break-up. Someone financed and built those networks and someone has to keep building and improving them.