The coming crash of e-commerce
December, 1999
Comments in this paper were generated prior to the March meltdown in dot-coms.
It's already a cliché in an industry too young for clichés.
Competition is only a click away. Or, from another perspective, competition might be a short drive away, a phone call away, or even a postcard away. As the latest form of direct sales, electronic commerce is the least established. It's also the one from which the most will be demanded and the least forgiven.
E-commerce vendors, as they contemplate what will be their biggest selling season, are facing a situation unprecedented in the history of sales automation. While being forced to compete on price, they are also obliged to ensure smooth, fast, efficient transactions - transactions that must take place over a minimally managed public network on which only outbound traffic from the Web site is subject to any control.
The situation is doubly ironic, because few major enterprises conduct critical enterprise functions over public Internet connections; security and reliability are just too problematic. But the e-commerce business has no choice but to do so, and it must do so in the context of greatly augmented Internet traffic at the peak holiday selling season.
No one knows for certain how much Internet traffic will increase this month, but Avi Freedman, vice president at AboveNet, a San Jose, Calif.-based purveyor of backbone connectivity and secure points of presence for Internet servers, sees significant demands for capacity.
"We had a 30% increase in demand for capacity last year, and we're projecting 50% to 60% this year," notes Freedman. AboveNet started seeing capacity increases in October.
Adds Al Avery, vice president of marketing at Equinix (Redwood City, Calif.), an independent provider of Internet interexchange facilities: "The figure we're hearing is a three-fold increase in transactions over the last holiday season. This will be the crucial year in determining how the various e-businesses perform - if they survive."
Reports Michael Terretta, chief technical officer at e-Media (New Canaan, Conn.), a provider of Web hosting and content distribution services: "Most of our customers expect their traffic to double this holiday season."
So how, exactly, will such increases in traffic impact the performance of Internet service providers (ISPs) and of the Internet itself? What will be the effects on e-commerce vendors, both individually and collectively, as the e-commerce industry struggles to enter the mainstream? And how will individual vendors and service providers attempt to satisfy a seasonal spike in demand, when doing so might create costly overcapacity in the months to follow?
ISPs, e-commerce businesses, and the numerous providers of e-commerce software and Web site design services are struggling to come up with the answers.
DEFINING A METRIC
Characterizing the network performance of an e-commerce Web site is difficult because several networks are involved.
There's the local area network (LAN) connecting the e-commerce vendor's Web server, e-commerce server (hopefully, these two are separate), and back-office enterprise system, which itself might include accounting, inventory control and purchasing, credit records, customer profiles, fulfillment and, possibly, ancillary systems such as customer relationship management (CRM).
There's the public Internet over which all customer queries and transactions must take place, at least in terms of inbound traffic to the Web site.
There's the public switched telephone network (PSTN), forming the physical backbone over which most Internet traffic travels.
Finally, there are Internet overlays provided by so-called premium ISPs and exchanges, several of which some e-commerce vendors subscribe to simultaneously to ensure redundancy.
In the case of the internal corporate LAN, an additional Internet connection might be involved to extend the LAN to remote locations if the network facilities of the e-commerce vendor happen to be distributed - which, of course, would engender further uncertainties.
So what do conventional measures of reliability and availability mean in the context of all this complexity? How does one extend criteria normally applied to physical layer network connections to logical connections that may employ different physical paths for the same data stream from moment to moment?
"It's difficult to apply the measures used for assessing traditional networks to the Internet," observes Jay Adelson, president of Equinix. "What does availability mean, for instance?"
Adds Avery: "Maybe what's really significant for e-business is how many transactions aren't completed due to network difficulties, and how many potential customers leave the site, out of frustration."
George Khater, director of product management at NaviSite (Andover, Mass.), a provider of Web hosting and long distance networking connectivity to e-businesses, says that "it's very difficult to specify, let alone guarantee, network performance in an e-commerce setting, because no one owns the whole pipe from end to end. There are things that you can do to optimize performance both at the Web site and over the backbone, but you can't control or predict everything."
Still, some think network performance is definable, at least with regard to e-commerce, in fairly simple terms.
"The real measure of network performance in this context is delay - that is, delay as relates to the expectations of the customer," says Dan Berkowitz, spokesperson at Rvot (San Francisco), a router manufacturer recently acquired by Intel Corp. "Eight seconds represents the boundary. That's the point where the majority of customers will not tolerate further delay, and will simply exit the Web site and go elsewhere."
"Yeah, eight seconds is the figure people bounce around today," concurs Lloyd Taylor, vice president of operations at Keynote CSan Mateo, Calif), a provider of Internet and Web server evaluation and monitoring services for ISPs and e-commerce companies. "I think the tolerance levels of different types of customers vary, but as the Internet speeds up - and it has sped up - tolerance of delay will go down. Where eight seconds might be tolerable today, six might be the limit in a few months."
Mark Wagnon, director of network services for the Honolulu-based Digital Island private global network and international ISP, frames the problem in somewhat different terms. "We believe 80% of all online purchases today are impulse purchases," he says. "If delays in downloading a page of more than a few seconds occur, then the impulse is simply not acted upon. One study shows that 60% of online shopping baskets are discarded. In other words, the online closing rate is not very good, and delay plays a big part in that."
Neither Taylor nor Wagnon believes that delays in the appearance of Web pages constitute the sole issue in network performance when it comes to e-commerce.
"If you are a software content provider offering streaming media, then latency, which affects image or sound quality, is obviously a major issue and one that is much harder to address," Taylor states.
Wagnon concurs, but sees problems at every level of the Internet - problems that will be exacerbated with heavy holiday traffic. "MAE East has reached 30% packet losses at peak demand," he says. "That is not good, and it will get worse as the holiday season progresses."
WHERE IT NARROWS
But if the problem of e-commerce-induced traffic congestion is real, where are the network bottlenecks most likely to occur?
Answers tend to vary according to whom one queries among the array of support services and products aimed at the e-commerce vendors. Manufacturers of e-commerce routers, such as IPivot and Foundry, tend to take the position that the source of most problems will be found at the Web server, and that the Internet itself will weather the seasonal traffic spike with minimal difficulties.
Conversely, ISPs and related network providers such as specialized content distributors and backbone carriers see problems occurring at the wide area network (WAN) level as much as at the Web site - and they are far more apt to blame other network service providers. But a small albeit growing number of diversified service providers such as NaviSite and Digital Island - which simultaneously offer Web hosting and collocation as well as controlled-bandwidth backbone connections - tend to assume the seemingly logical position that an e-commerce transaction must be tracked from end to end in order to determine the effectiveness of the business. Hence, they say, problems are possible at any point.
[Figure: Bottleneck to growth]
SO WHO'S RIGHT?
Interestingly, Keynote offers e-commerce vendors comprehensive diagnostic services that pinpoint delays anywhere that they occur. The company's individual network profiles tend to confirm the holistic view that problems can happen anywhere. At the same time, Keynote has identified points where delays are more likely to occur, and the company's cumulative investigations provide some basis for predicting the distribution of problems in the future.
DIMENSIONS OF DELAY
Keynote's Taylor provided us with an overview of the incidence of delay based on his company's ongoing experience in measuring the performance of thousands of Web sites. "When we contract with a customer, we measure their site and the sites of their direct competitors," he explains. "Altogether we perform about 12 million separate measurements every day. That provides us with a very detailed picture of overall network performance."
[Figure: Current collocation model vs. new model]
WHAT DO SUCH MEASUREMENTS SHOW?
"When pages on a Web site are consistently slow, it is generally a result of Web site design," reports Taylor. "That's going to be a constant, regardless of network loading. Having lots of graphics on the page, dynamic content that changes according to the visitor's particular browser, animation, objects from other locations such as advertising banners - all of these serve to slow down a Web site."
Of course, these retardants also happen to be what makes a Web site most interesting and attractive. "It's a real dilemma," admits Taylor. "Do you want a site that loads quickly or one that is visually busy? There's no easy answer, but if there's a lot of clutter the site will be slower."
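Some back-of-the-envelope arithmetic shows why clutter matters so much at dial-up speeds. The asset sizes and modem rate below are hypothetical, chosen only to illustrate the trade-off Taylor describes.

```python
# Back-of-the-envelope arithmetic for how page "clutter" translates
# into delay. Asset sizes and the 56 kbps modem rate are illustrative
# assumptions, not figures from the article.
ASSETS_KB = {
    "base HTML": 20,
    "graphics": 120,
    "ad banners (third-party)": 40,
    "animation": 60,
}

MODEM_KBPS = 56          # dial-up line rate in kilobits per second
EFFECTIVE_FACTOR = 0.8   # protocol overhead eats some of the line rate

total_kb = sum(ASSETS_KB.values())
seconds = (total_kb * 8) / (MODEM_KBPS * EFFECTIVE_FACTOR)
print(f"{total_kb} KB page -> ~{seconds:.0f} s on a 56 kbps modem")
# 240 KB works out to roughly 43 seconds -- far past the 8-second mark,
# before any network congestion is even considered.
```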
These views are shared by some independent analysts. "What we've found is that the manner in which the e-commerce Web site has been hosted is much more important than backbone considerations," notes Maribel Lopez, senior analyst at Forrester Research (Cambridge, Mass.).
Lopez cites the importance of collocation or "mirroring" of servers to reduce the distances data must travel to reach customers within nationwide or worldwide markets. This is a tactic becoming increasingly commonplace among vendors. She also favors multiple ISP connections.
"If Victoria's Secret is hosting an online fashion show, they want to have a separate connection for people who are going to buy, because the fashion show is going to slow down the Web site," Lopez says.
Berkowitz of IPivot points out another aspect of Web site design that is apt to slow network traffic - and it's a vital aspect. "SSL [Secure Sockets Layer] encryption for Web transactions imposes a lot of overhead. Everybody encrypts the transaction now, and an increasing number of Web sites encrypt the shopping as well.
"That's where we come in," Berkowitz adds. "By providing an SSL accelerator in our router, we can significantly reduce the delays at the Web site."
Yet, e-Media's Terretta takes issue. "This idea that mirroring servers is going to solve your latency problems is erroneous," he asserts. "You can have a server located literally down the block from a customer, and because of the routing pattern occurring over the IP [Internet protocol] connection, the message might travel clear across the country and back again before it reaches the customer. And when you have a lot of congestion, this is just what will happen. A nearby router that is overloaded will be bypassed.
"Peering is really the issue - not the number of local servers you have out there," Terretta says.
Terretta also objects to the notion that rich graphical content results in undue delay. "One of our customers is the World Wrestling Federation, and their site is very graphics-heavy," he notes. "And it is not slow; it's a matter of using image compression."
Terretta then takes issue with the supposedly baneful effects of encryption on transaction speed. "Sure, SSL imposes overhead, but you only need it on the financial transaction," he says. "People who don't know any better are encrypting everything, including images. That's just poor Web design."
So what's going to hobble the networks this holiday season? Terretta tends to put emphasis on the role of wide area connectivity. "Oversubscribing bandwidth has been a significant problem," he says. "Many ISPs are reselling capacity based on a 30% usage pattern. That may reflect average usage, but it doesn't account for spikes."
"That figure should be reversed," says Digital Island's Wagnon. "We routinely provide 70% over-capacity."
But according to Terretta, even such generous allocations may not be sufficient in all cases, particularly during the peak buying season. "People simply don't understand sizing," he says. "You get IT professionals who try to treat the Web like an enterprise network, study traffic patterns for a week, and figure they've got it down. The Internet doesn't work like that - it's much more chaotic."
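A rough sizing sketch makes Terretta's point concrete. The link and subscriber figures below are invented for illustration; only the 30% average-usage assumption comes from the article.

```python
# A rough sizing sketch of the oversubscription problem Terretta
# describes. The subscriber counts and rates are illustrative.
link_capacity_mbps = 45.0      # e.g., a T3 backbone connection
subscriber_rate_mbps = 1.5     # e.g., a T1 customer
avg_utilization = 0.30         # the "30% usage pattern" sold against

# Selling against average usage: capacity / (rate * 30%)
subscribers_sold = int(link_capacity_mbps /
                       (subscriber_rate_mbps * avg_utilization))
print(f"Subscribers sold against 30% average: {subscribers_sold}")

# A holiday spike: suppose half the subscribers hit full rate at once.
peak_demand = subscribers_sold * 0.5 * subscriber_rate_mbps
print(f"Peak demand: {peak_demand:.0f} Mbps "
      f"on a {link_capacity_mbps:.0f} Mbps link")
# 100 subscribers * 0.5 * 1.5 Mbps = 75 Mbps against 45 Mbps of
# capacity: average-based sizing collapses exactly when the spike hits.
```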
Keynote's measurements suggest some interesting overall patterns.
"It used to be that the backbone was the problem," says Taylor, "but with backbone providers like Qwest achieving throughputs of 800 Gbps, that's not really the case anymore. The biggest problem in the Internet today tends to be at the interexchange points."
Taylor is referring to the major network hubs of the Internet where backbone providers and ISPs exchange traffic. Ever since the Internet was commercialized and opened to the public, these interexchanges always have been subject to intermittent congestion, but the problem is becoming more acute.
"The interexchanges are simply not scaling as fast as the Internet is growing," says Taylor. 'That creates ongoing problems, including routing flaps where traffic is not handled efficiently."
Adelson, whose company is proposing to build a global constellation of some 30 interexchanges, each the size of the current MAE East and MAE West, couldn't agree more.
"There hasn't been the incentive among the owners of the [network access points] to address the problems of capacity at the switch level," Adelson says. "They've generally been carriers as well, and they've made lots of money selling connections. The exchange part of the business has been relatively neglected. In most cases, they're putting all of the transmissions through a single switch, and some of these switches can only handle a maximum throughput of 100 Mbps. They're ridiculously inadequate."
PEERING FOR PROFIT
Such problems are not insurmountable. An increasingly common solution, at least among some of the first-tier ISPs, such as Worldcom, AT&T and Qwest, has been so-called private peering arrangements, whereby the ISP avoids the switch as well as the congested public interexchange points and establishes a private connection to another ISP or ISPs. Such arrangements are essentially 'swaps' where none of the first-tier participants charge one another for handling one another's traffic, which is why the process is known as peering. It's a pact among equals.
But the arrangement isn't readily available to second- or third-tier ISPs; they're simply not the equals of the first-tier giants, which own their own backbones. The smaller ISPs simply haven't the capacity to swap that would make the arrangement attractive to the first-tier players. So the only way these smaller companies can avoid the public access points is to pay for the privilege of a private connection to a first-tier company.
Those smaller ISPs which are concerned about maintaining some level of service quality will pay for the equivalent of private peering, but since the arrangement involves money rather than barter, it places them at a competitive disadvantage.
"Buying transit is expensive," says NaviSite's Khater, "but it's necessary to ensure performance. In our case, it's part of a suite of services, including database clustering, and applicationbased load balancing," a process in which the algorithm looks at what's going on in the e-commerce application rather than simply at volume of traffic.
Although such switch-less private exchange of traffic constitutes the primary tactic for speeding IP traffic across the nation and the globe, it isn't the only method. According to some, it isn't the best method, either.
"Private peering isn't efficient," says Adelson. "It results in replication of equipment and unnecessary expense."
And, according to AboveNet's Freedman, private peering doesn't provide ironclad assurance of speed and control of latency. "Peering arrangements only ensure that the data will leave one network and enter another quickly," he says. "Once it's on the other guy's network, whatever service level agreement the subscriber has with the first network - the one that provided him with access - generally does not hold.
"This whole area of service level agreements and network performance is seriously misunderstood by many people in e-commerce and other e-businesses," continues Freedman. "What a service level agreement really means is not that some figure for speed or latency or packet loss will be maintained, but that the ISP will drop someone else's packets before he drops yours. In other words, he'll degrade someone else's service in an effort to maintain yours. That's not so reassuring when you think about it and it's not at all the same thing as adhering to a real standard. But when the overall capacity's not there, what else can he do?"
THE OTHER WAY
AboveNet's solution, one that is followed by perhaps a half-dozen other service providers, is to buy transit - that is, negotiate arrangements in which all of the backbone providers that carry AboveNet traffic will enter into primary service level agreements (SLAs) with AboveNet, not just agree to accept the traffic and avoid the switch.
"We don't make specific service level agreements with our customers only with the carriers," notes Freedman. "All of our subscribers get the same high level of service. At certain times the network will slow down - that's inevitable - but nobody's being short-changed."
Freedman frames the situation in terms of economic forces bearing on the ISPs. "They want to appear to be providing the customer with enhanced performance while limiting their own accountability," he says. "Peering provides an 'out' for them. Data becomes a hot potato to be passed off to someone else's network.
"Our philosophy," Freedman says, "is to treat our customers' traffic as a cold potato for which we are responsible from one end of the connection to the other."
While Adelson applauds AboveNet's approach, he maintains that other means are becoming available for accomplishing the same end.
"AboveNet is what we call a 'single-hop network! - in other words, the data AboveNet is carrying will only go through one exchange point on its way from an AboveNet Web site to the customer," Adelson says. "That's really the basis of their service, because the average Internet transmission today, whether e-commerce or whatever, involves 17 hops before it reaches its destination. That's where the delay comes in - it's inefficiency in routing, not overall lack of backbone capacity."
Adelson's alternative? A "third generation" exchange where numerous backbone providers would provide switches and routers and where, at the same time, e-businesses could place their servers.
"If a business needs more bandwidth due to a spike in demand, as in the selling season, acquiring that bandwidth is literally a matter of moving a wire," Adelson explains. "The exchange point is designed to be highly scalable and to be open to any backbone provider, ISP, or business. It's getting away from the model where one carrier owns the exchange point and substitutes the model of an open competitive exchange."
[Figure: Long-term IBX buildout]
Will Adelson's new model ease those periodic Internet stresses nearly everyone expects this season? Unfortunately, no Equinix data centers will be completed until late 2000.
EXPECT TURBULENCE
How will e-commerce's biggest year conclude?
"A lot of tools aie available to facilitate the online process," says Terretta, "but not everybody is that savvy. There were a lot of problems last year. There will be more this year for people who aren't prepared."
NaviSite's Khater sees problems as well, but also detects a growing determination on the part of businesses to provide a better, faster online experience.
"Business people realize that success in e-business is vital, and more and more are willing to pay for network performance," Khater says. "It will be years before e-commerce works flawlessly. It will take broadband to the home for that to happen, but the better vendors are ready for the holidays."
THE CHALLENGES OF DIGITAL CERTIFICATES
With the emergence of electronic commerce, the value of data traveling through networks has become critical to businesses and the U.S. economy.
But e-commerce poses certain risks. In open systems like the Internet, messages traveling from node to node may be intercepted by hackers. In addition, recipients need to know whether a message is authentic - that it has not been altered in transit or sent by an impostor. Finally, one must be certain that anyone accessing network resources and services has been authorized to do so.
The first problem is solved by using standard encryption technologies. The second problem requires digital signatures using a public key technology. The last problem is more challenging.
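As a sketch of the second mechanism - a digital signature that lets a recipient detect both alteration and forgery - the following uses the third-party Python "cryptography" package (pip install cryptography). The message text is illustrative.

```python
# Sign a message with a private key and verify it with the matching
# public key; an altered message fails verification.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"Ship 12 units to account 4471"  # illustrative message
signature = private_key.sign(message, padding.PKCS1v15(), hashes.SHA256())

try:
    public_key.verify(signature, message,
                      padding.PKCS1v15(), hashes.SHA256())
    print("Signature valid: message is authentic and unaltered")
except InvalidSignature:
    print("Signature check failed")

# Tampering in transit breaks verification:
try:
    public_key.verify(signature, b"Ship 120 units to account 4471",
                      padding.PKCS1v15(), hashes.SHA256())
except InvalidSignature:
    print("Altered message rejected")
```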
A common method for authenticating a user is via a digital certificate. Typically, the certificate functions like a passport; you present a digital certificate to the security system of a network to gain access to it or to perform a transaction. However, an assortment of personal identification information (name, address, telephone number, etc.) may be contained in the certificate itself; consequently, these certificates may be fairly large. Moreover, relying on such a certificate to perform an electronic transaction is comparable to showing every piece of ID in your wallet every time you buy gas, write a check or pay a bill.
Toronto-based Diversinet (www.dvnet.com) markets a technology that places authorization privileges into "permits" that remain separate from the certificates used for "authentication." In this approach, authentication only requires that a business know that the holder of the certificate is its rightful owner; this can be done independently of knowing the owner's identity or any other personal information by validating the public key contained within the owner's certificate. The issuing enterprise will itself be a certification authority, performing its own verification of an applicant's identity before issuing a certificate. Since the enterprise's systems can associate separate identity information for the applicant with a certificate's anonymous ID, there is no need for third-party intervention.
Certificates are generated in the following manner:
· The user downloads client software. A key pair is generated.
· The private key is stored locally (password protected). The public key is registered with the certificate server.
· The certificate server generates the certificate (including unique ID) and the certificate chain. The user receives the certificate and certificate chain from the server.
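The client-side steps in the list above can be sketched as follows, again with the "cryptography" package. The certificate is self-signed here purely to keep the example self-contained; Diversinet's actual server-side issuance and formats are not described in this article.

```python
# Generate a key pair, store the private key locally under a password,
# and bind the public key to an anonymous unique ID in a certificate.
# Self-signed here for illustration only.
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.hazmat.primitives import serialization, hashes
from cryptography import x509
from cryptography.x509.oid import NameOID
import datetime, uuid

# Step 1: generate a key pair.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Step 2: store the private key locally, password protected.
pem = key.private_bytes(
    serialization.Encoding.PEM,
    serialization.PrivateFormat.PKCS8,
    serialization.BestAvailableEncryption(b"correct horse battery"),
)
with open("client_key.pem", "wb") as f:
    f.write(pem)

# Step 3: the certificate server binds the public key to an anonymous
# unique ID (no personal information in the subject name).
subject = issuer = x509.Name(
    [x509.NameAttribute(NameOID.COMMON_NAME, str(uuid.uuid4()))]
)
cert = (
    x509.CertificateBuilder()
    .subject_name(subject).issuer_name(issuer)
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.datetime.utcnow())
    .not_valid_after(datetime.datetime.utcnow()
                     + datetime.timedelta(days=365))
    .sign(key, hashes.SHA256())
)
print("Issued certificate for anonymous ID:",
      cert.subject.rfc4514_string())
```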
Consider an example application: processing an insurance claim. The insurance company settles a claim using certificates and permits to control access to its claims application server by an independent broker and an external adjuster.
1. The insurance company's certificate server issues a certificate to its broker. With the certificate come individual permits that give the broker specified access to client files on the company's application server.
2. The broker is authenticated to the system by the broker's certificate, and he is allowed access to the client's file by the permit. The broker submits the claim.
3. The claims adjuster also has been qualified by the insurance company. The permit server issues a permit to the adjuster (who also has been issued a certificate) to allow access to the particular file, perhaps for only a specified time. In each transaction, a mutual authentication between the insurance company and the agent's system occurs.
4. The adjuster presents his permit, accesses the company's application server, and settles the case.
5. The application server proceeds with payment.
Source: Diversinet
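A toy model of the certificate/permit separation in the workflow above: the certificate answers "is this really the holder?" while the permit answers "what may the holder touch, and until when?" All identifiers and fields below are hypothetical.

```python
# Separate authentication (certificate) from authorization (permit),
# in the spirit of the claim-settlement workflow above.
from dataclasses import dataclass
import datetime

@dataclass(frozen=True)
class Certificate:
    anonymous_id: str            # no personal information required

@dataclass(frozen=True)
class Permit:
    holder_id: str               # ties the permit to a certificate
    resource: str                # e.g., a particular claim file
    expires: datetime.datetime   # "perhaps for only a specified time"

def may_access(cert: Certificate, permit: Permit, resource: str) -> bool:
    return (permit.holder_id == cert.anonymous_id
            and permit.resource == resource
            and datetime.datetime.utcnow() < permit.expires)

adjuster_cert = Certificate(anonymous_id="c9f2-77ab")
permit = Permit(holder_id="c9f2-77ab", resource="claim/10583",
                expires=datetime.datetime.utcnow()
                        + datetime.timedelta(hours=48))

print(may_access(adjuster_cert, permit, "claim/10583"))  # True
print(may_access(adjuster_cert, permit, "claim/99999"))  # False: wrong file
```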
Telstra Development, the U.S.-based e-commerce division of Telstra Corp. Ltd., has formed a global e-commerce partnership with Harbinger Corp., a supplier of e-commerce software. Harbinger will deliver operations support for Telstra's Open Commerce Platform. Telstra, in turn, will become a worldwide distributor for Harbinger Internet products and services.
Copyright Advanstar Communications, Inc. Dec 1, 1999