Cache as Cache Can

SAN MATEO (06/05/2000) - As a relatively new player in the network-caching arena, WorkFire is making efforts to establish a position alongside companies such as Inktomi and Novell. In a conversation with InfoWorld Editor in Chief Michael Vizard, company founder Tom Taylor talks about how the company is taking a unique "genetic" approach to accelerating the serving of Web pages at customer sites such as Norman Levy & Associates.

InfoWorld: What exact problem is WorkFire trying to solve?

Taylor: Workfire was founded with a very simple idea, which is to turn CPU cycles into Internet performance. When we started the company two years ago, that probably wasn't that exciting a proposition, because everyone was saying bandwidth was going to be free and cheap. But as we stand today, it looks like a very exciting proposition, because bandwidth isn't free and cheap. In fact, it's a very precious commodity. So the idea of turning CPU cycles into performance is very attractive, because Moore's Law has been working in our favor: very powerful CPUs are now available very inexpensively.

InfoWorld: What is the relationship between CPU cycles and bandwidth?

Taylor: With Workfire software deployed on the server side, and using intelligence, we can actually affect end-to-end performance in a very substantial way. By using the nature of the data and [by using] mathematics, we can figure out how to turn CPU cycles into end-to-end performance in any type of delivery environment, whether it's high-speed Internet or low-speed Internet.

InfoWorld: Can you describe how WorkFire is deployed?

Taylor: Workfire software goes typically behind the firewall and won't even touch data until it's been requested by a user and served up by a server. At that point it kicks in and does real-time analysis. It'll analyze the type of connection that the user has, the nature of the content that's being requested and will try to discover if there's a way to exploit that content so it can be ordered in a way that will render it faster on an end-to-end basis. It requires absolutely nothing from the client.
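
That request-time analysis might be pictured in a minimal Python sketch like the one below. All function names, thresholds, and the transformation itself are hypothetical illustrations of the idea Taylor describes, not WorkFire's actual code: classify the client's link from what the server can observe, then adjust the served page accordingly, with nothing required from the client.

```python
# Hypothetical sketch of server-side, client-transparent response tuning.
# Thresholds and strategies are invented for illustration.

def classify_connection(rtt_ms, throughput_kbps):
    """Guess the client's link class from observed round-trip time and rate."""
    if throughput_kbps < 64:
        return "dialup"
    if throughput_kbps < 1500:
        return "isdn-or-dsl"
    return "broadband"

def optimize_response(html, link_class):
    """Toy transformation: strip a deferral marker for slow links, where
    every extra byte and round trip hurts; leave fast links untouched."""
    if link_class == "dialup":
        return html.replace("<!-- defer -->", "")
    return html

page = "<html><!-- defer --><body>hi</body></html>"
print(optimize_response(page, classify_connection(400, 33)))
# prints "<html><body>hi</body></html>"
```

The key property being illustrated is the last one Taylor mentions: the decision happens entirely on the server side, after the page has been requested.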

InfoWorld: How does this issue manifest itself as a business problem?

Taylor: On the Internet, it's very common to refer to the eight-second half-life, which means that if you haven't rendered your Web page in eight seconds, you've lost half your customers. There's another factor as well.

Customers spend a long time moving through an e-commerce site picking out products, but then they end up abandoning their shopping carts, either because the server fails or because of problems with the Web. Clearly, that's an unacceptable situation. It's a quality-of-service problem, and it's directly related to the profitability of an e-commerce business.

InfoWorld: How does your approach differ from what Inktomi or Novell are doing?

Taylor: There [are] really only three ways to make the Internet perform better, and the first way is with big pipes. The second way you can increase performance is a very simple idea: you push frequently accessed content out toward the edges, and that gives you two gains. It gives you a performance gain for the user because of proximity, [and] you also get bandwidth savings all through the network. But the edge is a very fuzzy thing. We always draw it on our whiteboards as a nice fluffy cloud, and there it's very clear where the edge is.

But in the real world, the edge is not nearly as well defined. The third way to increase performance is the Workfire way, which is press the CPU into use, and use intelligence and information theory to optimize the end-to-end performance.

InfoWorld: How do you see caching and intelligence evolving?

Taylor: The first phase of caching was the type of caching you saw deployed in browsers. Then we saw the next generation of caching from companies like Network Appliance, Cache Flow, and Inktomi, where you're typically deploying caching at an aggregation point. The third generation is the final frontier, which is caching in conjunction with information theory and advanced mathematics. Exploiting the nature of the data itself. The information on the Internet is 99.9 percent self-similar, because otherwise we wouldn't be interested in looking at it. We don't go look at screens full of noise. There's a high degree of organization in the data, and that degree of organization can be exploited to drive end-to-end performance.
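
Taylor's self-similarity point can be seen in miniature with an ordinary compression library, which is only an analogy for the kind of structure he says can be exploited: organized markup shrinks dramatically, while a "screen full of noise" barely compresses at all.

```python
# Structured data compresses; noise doesn't. A quick check with the
# standard library (this is an analogy, not WorkFire's technique).
import os
import zlib

html = b"<tr><td>item</td></tr>" * 200   # highly organized markup
noise = os.urandom(len(html))            # the "screen full of noise"

print(len(zlib.compress(html)) / len(html))    # tiny ratio
print(len(zlib.compress(noise)) / len(noise))  # close to 1.0
```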

InfoWorld: Why are high-fidelity Web sites important?

Taylor: Web sites don't really have a lot of choices for delivering high fidelity. They can buy more bandwidth, but that doesn't necessarily solve their problem. Typically on the server side, adding bandwidth in T1 type increments [is] very expensive, and you have to remember with broadband becoming available, a single T1 only serves a fraction of a single broadband user.
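
The arithmetic behind that T1 comparison can be made concrete with nominal figures. A T1 line carries 1.544Mbps; the broadband burst rate below is an assumed illustrative value, not a number from the interview.

```python
# Rough arithmetic behind the T1 claim, with an assumed broadband rate.
t1_mbps = 1.544          # nominal T1 capacity
cable_burst_mbps = 5.0   # assumed circa-2000 cable-modem burst rate

fraction = t1_mbps / cable_burst_mbps
print(fraction)  # a T1 covers well under half of one such user's peak demand
```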

Adding more bandwidth is like an eyedropper into a red-hot frying pan. Clearly [Web sites] need real-time software to guarantee that they're going to be able to deliver a high-fidelity Web page end-to-end.

InfoWorld: How do you figure out what the data looks like in order to optimize it?

Taylor: That's what I call the heavy-lifting part of the problem, and that's really where the term genetic caching comes into play. Because we end up caching genetic variants -- the user coming in, the type of browser, the connection speed -- we'll do some heavy lifting [on the] server side in order to massage the data. Then we store that variant so that we don't have to do that heavy lifting again, and the probability is extremely high that you're going to get another user with those same connection ballistics who's going to need that same data served.
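
The caching scheme Taylor describes might be sketched like this, with the cache keyed by the client's "genetic" traits so the expensive transformation runs only once per variant. All names here are illustrative, not WorkFire's actual API.

```python
# Minimal sketch of "genetic" caching: expensive per-client massaging is
# memoized under a key built from the client's traits.

variant_cache = {}

def expensive_massage(url, browser, speed_kbps):
    # Stand-in for the real-time server-side "heavy lifting."
    return f"{url} optimized for {browser} at {speed_kbps} kbps"

def serve(url, browser, speed_kbps):
    key = (url, browser, speed_kbps)   # the "genetic variant" key
    if key not in variant_cache:       # heavy lifting only on a cache miss
        variant_cache[key] = expensive_massage(url, browser, speed_kbps)
    return variant_cache[key]

serve("/home", "Netscape", 56)   # miss: computed and stored
serve("/home", "Netscape", 56)   # hit: served straight from the cache
print(len(variant_cache))        # prints 1
```

A different browser or connection speed would produce a new key and a new cached variant, which is exactly the multiplicity Taylor is trading CPU and storage for.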

InfoWorld: What are the benefits of doing this for the Web site?

Taylor: Web sites at the moment have to make unhappy compromises. What are you going to optimize for -- 14.4, 56k, ISDN, DSL, Explorer, Netscape? The truth is, [optimizations] for Explorer versus Netscape are complete polar opposites.

And [optimizations] for 56k versus DSL are polar opposites. And putting multiple versions of your content on your Web site is very, very difficult to manage.

That's where a real-time solution comes in. You author once, and in real time you make the decision [as to which is] the best way to optimize. There is lots of room to maneuver within that area. And there's always something we can do server side to increase performance.
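
That author-once, decide-at-serve-time idea could be pictured as a simple dispatch table over client traits. The strategy names below are invented examples of the opposing choices Taylor mentions, not anything from WorkFire.

```python
# One canonical page; the optimization strategy is picked per request.
# Keys and strategy names are hypothetical.

STRATEGIES = {
    ("56k", "Netscape"): ["strip-images", "netscape-table-layout"],
    ("56k", "Explorer"): ["strip-images", "explorer-css-layout"],
    ("dsl", "Netscape"): ["keep-images", "netscape-table-layout"],
    ("dsl", "Explorer"): ["keep-images", "explorer-css-layout"],
}

def pick_strategy(speed, browser):
    """Choose an optimization plan at serve time; fall back to no-op."""
    return STRATEGIES.get((speed, browser), ["no-op"])

print(pick_strategy("56k", "Explorer"))
# prints ['strip-images', 'explorer-css-layout']
```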

InfoWorld: So what keeps you up at night?

Taylor: There are many adjacent categories that we view as partnering opportunities. But it would certainly make sense for other caching companies to move into a genetic caching type of solution. And then you have sort of infrastructure and appliance companies and the layer-three and layer-four switch companies. It would certainly make sense for them to move toward this. So my concern is definitely about people in adjacent spaces moving into our category.
