Tuesday, July 12, 2005

Moore's Law, Bandwidth Scaling and Metcalfe's Law

I went to physorg today to catch up on the Tempel 1 mission, but found several items that will have a lot more impact.

One of the primary drags on Moore's Law is heat production. As transistor size shrinks with each generation of manufacturing technology, more transistors are packed into the same area, and the heat that must be removed per unit area climbs. A novel way of removing that heat has been devised.
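The arithmetic behind the drag is simple enough to sketch (the shrink factor and per-transistor power figures below are illustrative, not from the article):

    def power_density_ratio(feature_shrink, per_transistor_power_ratio):
        """Change in heat flux per unit area across a process shrink.

        feature_shrink: linear scale factor (0.7 is a classic node step),
            so transistor density rises by 1 / feature_shrink**2.
        per_transistor_power_ratio: new per-transistor power / old.
        """
        density_gain = 1.0 / feature_shrink ** 2
        return density_gain * per_transistor_power_ratio

    # Illustrative numbers only: a 0.7x shrink roughly doubles density;
    # if per-transistor power falls only to 0.6x, heat per unit area
    # still rises by about 22%.
    print(power_density_ratio(0.7, 0.6))  # ~1.22

Unless per-transistor power falls at least as fast as area does, every shrink makes the cooling problem worse, which is why a better heat exchanger matters to Moore's Law at all.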

What makes this method even more interesting is that it appears to have broad utility outside the computer industry:

"In addition to computer and other electronics applications, bulky liquid-liquid heat exchangers - found in everything from automotive oil coolers to ice cream makers - could be made 30 to 50 times smaller if the new approach is adopted."

Where are liquid-liquid heat exchangers used? Just about everywhere. What's more, there are a lot of places (like cell phone towers) where they aren't used because they are just too bulky. As the discoverer says:

"I think we may be looking at a paradigm shift in how heat exchangers are designed."

Stay tuned.

The Bandwidth Scaling Law states that the amount of bandwidth available to consumers doubles at regular intervals. The doubling period has varied from six months to eighteen, so it's not a reliable law, but it's still remarkable. The law initially applied to wire and optical networks. Newer versions of the law include wireless networks, but there is a qualitative difference that can't be overstated: the network goes with the user.

So far, this difference has come in bubbles: low-bandwidth, limited-capability bubbles like cell phone service cover large regions, while high-bandwidth, full-capability bubbles with full internet access are limited mostly to buildings and some designated public spaces. The major qualitative shift will come when the high-bandwidth bubbles converge. The industry has been moving towards a technology called WiMAX, which appears to be the best candidate for bringing about that convergence. Its primary drawbacks have been cost, heat and power. An old new technology, the ribbon electron beam, may well have solved all three at a stroke.
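To get a feel for how much the doubling period matters, here's a quick sketch of the compounding arithmetic in Python (the 1 Mbps starting point and ten-year horizon are made-up illustration values):

    def projected_bandwidth(b0_mbps, years, doubling_period_months):
        """Bandwidth after `years`, if capacity doubles every
        `doubling_period_months` months."""
        return b0_mbps * 2 ** (years * 12 / doubling_period_months)

    # Illustrative only: start from 1 Mbps and project ten years out.
    for months in (6, 12, 18):
        mbps = projected_bandwidth(1.0, 10, months)
        print(f"doubling every {months:2d} months: {mbps:>12,.0f} Mbps")

Over a decade, the gap between a six-month and an eighteen-month doubling period is four orders of magnitude (roughly a million Mbps versus about a hundred), which is why the "law" is so slippery.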

Metcalfe's Law essentially states that the value of a network scales as the square of the number of nodes: a network with twice as many people provides four times as much value to its subscribers, because the value tracks the number of possible pairwise connections, n(n-1)/2, which grows as n². In part, this is what creates lock-in. Once enough people are connected to a network, the relationships they develop in the context of that network make it much easier for them to accomplish certain kinds of goals, by virtue of the amount of specialization they have access to, and switching to another network would require a great deal of reinvestment. Once a network reaches a certain size, it grows itself.

Working out the behavior of such networks as they scale keeps many disciplines very busy. What's the easiest way to find a person or a service in a large network? How and why do parts of networks clump? What kinds of clumps are stable? How do these clumps communicate? How do clumps-of-clumps clump (and so on, up the chain)? Are there ways to characterize clumps at different scales that allow meaningful comparisons without masking the essential nature of each scale?

Some interesting work has been done on the sensitivity and robustness of networks at different scales. If P(k) is the probability that a randomly selected node has k connections, the decay exponent is the logarithm of the ratio of P(k) across adjacent decimal scales, i.e. log10(P(k)/P(10k)); equivalently, P(k) falls off as k to the minus that exponent. The work shows that networks whose decay exponent lies between 2 and 2.5 can store a large number of patterns, are very sensitive to selective perturbations, and are very resistant to random ones. Furthermore, this behavior is scale free: as long as the exponent stays in that range, the network can scale indefinitely, with storage capacity proportional to its size. The decay exponent for the brain was found to be about 2.1.
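For the curious, here's a minimal Python sketch of what measuring that exponent might look like, assuming the power-law reading above (the fitting approach and the synthetic test are illustrative, not from the paper):

    import numpy as np

    def decay_exponent(degrees, min_count=10):
        """Estimate gamma in P(k) ~ k**(-gamma) from a list of node degrees.

        Least-squares line fit on the log-log degree histogram, skipping
        sparse tail bins (fewer than min_count nodes). A rough sketch;
        serious estimates use logarithmic binning or maximum likelihood.
        """
        ks, counts = np.unique(np.asarray(degrees), return_counts=True)
        keep = (ks > 0) & (counts >= min_count)
        log_k = np.log10(ks[keep])
        log_p = np.log10(counts[keep] / len(degrees))
        slope, _ = np.polyfit(log_k, log_p, 1)
        return -slope

    # Toy check: sample degrees from an exact k**-2.1 power law (the
    # exponent reported for the brain) and recover the exponent.
    rng = np.random.default_rng(0)
    ks = np.arange(1, 1001)
    p = ks ** -2.1
    p /= p.sum()
    degrees = rng.choice(ks, size=200_000, p=p)
    print(decay_exponent(degrees))  # comes out near 2.1

The interesting part is that a single number like this, read off the degree histogram, is enough to say something about how sensitive and how robust the whole network will be, regardless of its size.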