1. In my previous post I noted that our transportation system (for moving people and goods) has many characteristics in common with our systems for moving information. The latter has seen revolutionary changes over the last 50 years that have led to today's internet, with people able to broadcast and access information, including video streams, from almost anywhere in the world. But our systems for moving people and goods have been stuck with small evolutionary changes to 100+ year old technology. My goal in this post is to provide a context for a revolutionary change in the way we move people and goods that will dwarf the impact of the changes in the way we move information.

    As noted in the earlier post, the primary performance metrics for a system designed to move things are the rate at which those things can be moved between any pair of points A and B (throughput or capacity), the time it takes to perform this movement (latency or trip time), and the probability that the items being moved arrive at B in essentially the same state in which they left A (to some degree QoS, and safety). Such systems are infrastructure that is used to support many applications, and it is those applications, their performance and costs, that people are really interested in. Different applications may place different emphasis on these performance metrics and their cost trade-offs.

    The biggest difference between moving information and moving physical objects is that the latter have non-trivial mass and spatial dimensions. This means that significant energy is required to perform the movement. It also places major restrictions on the performance metrics while intrinsically requiring much larger costs for building, maintaining and operating all network elements and vehicles.

    The history of the communication domain suggests that if one really understands the underlying features of an infrastructure, it may be possible to identify a small set of characteristics that are fundamental to its performance and cost, and the manner in which those could be changed to provide revolutionary improvements. In the transportation field I believe that the primary characteristics driving the high costs and poor performance are the large mass and large spatial dimensions of the vehicles. Minor ones are the use of on-board energy storage and power conversion, which lead to a cycle that promotes the need for larger and heavier vehicles.

    To understand the full impact of those characteristics I have developed a small set of principles that I believe should be satisfied by any transportation system. Unfortunately, no single system today, or combination of systems, satisfies all of these principles. Also, all of the PRT (personal rapid transit) proposals fail some of these criteria. Note that all of them must be satisfied in the context of minimizing all resource (natural and human) requirements associated with building, maintaining and operating all of the network elements and vehicles.

    My sustainable transportation (ST) system is designed from the ground up to satisfy all of these principles simultaneously. ST would replace all urban roads and most rural paved roads with elevated guideways costing less than $100,000 per lane-kilometre, supporting vehicles that are about 5% of the mass of today's average personal vehicle (under 150kg).

    The principles:

    • Rapid door-to-door service
    • Shared
    • Built-in safety for all
    • Flexible use vehicles/networks (people and goods)
    • Appropriate capacity for throughput and loading/unloading
    • Fully automated control (enabler for most of above)
    • Off grade guideways (above ground with suspended vehicles)

    Rapid door-to-door service

    The primary performance measure for any transportation system should be the time between the start of a trip and its completion. The speed of any individual leg is irrelevant if it does not improve the overall trip time. A few examples:
    • even using personal vehicles in an urban setting is typically slow. Although these vehicles can achieve speeds well in excess of 100kph, on most urban streets they are limited by law to 50 or 60kph. But their real average speed is typically lower than half that, often much lower. Part of the decrease arises from intersections, where vehicles are stopped or delayed to avoid collisions with vehicles travelling in a conflicting direction. We know how to avoid these problems on expressways by using over (or under) passes and not allowing non-vehicular traffic. The former is not usually possible in urban settings due to the very large structures needed to support the very large vehicles.
    • all urban mass transit systems can have significant legs at the ends that usually involve walking, which is generally quite slow and susceptible to inclement weather. There may also be significant, and unpredictable, waits to transfer between multiple legs on different mass transit vehicles. Because routes are shared, they often do not follow direct paths even between their end points. The net effect is that the effective speed is often much less than 10kph. Many PRT systems have been proposed that overcome some of the speed limits once the vehicles are boarded, but these still have the big issues with the initial and final legs.
    • the essential problem with short haul air travel today (e.g. from New York to Boston or Washington DC): although little time is spent in the air, as the aircraft are roughly 8 times faster than typical highway speeds, substantial time is spent getting to and from the airports and in the numerous sources of delay within the airports.
    In contrast, ST combines very cheap guideways with a novel loading and unloading mechanism to ensure that all loading happens just outside the door of the start point, and unloading just outside the door of the destination. Once on board the vehicles travel to their destination at high speed without stopping. The higher speed is enabled by having the vehicles operate off-grade (as subways and elevated trains do) with their bottoms 3m above ground, so there is no interaction with ground level activity such as pedestrians and cyclists. Stopping at intersections is avoided by using an analog to the expressway mechanism, but designed to fit within the tight constraints of the urban setting. My studies indicate that the average urban trip using ST would take 20-25% of the time of making the same trip using a personal vehicle, and less than 10% of the time using typical mass transit.
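
    The shape of that door-to-door comparison can be sketched with simple arithmetic. All of the parameter values below (trip length, average speeds, end-point overheads) are assumptions chosen only to illustrate the structure of the argument, not measurements:

```python
# Illustrative door-to-door trip-time comparison.
# All parameter values are assumptions for the sketch, not measurements.

def door_to_door_minutes(distance_km, avg_speed_kph, overhead_min):
    """Trip time = fixed end-point overhead + in-vehicle time."""
    return overhead_min + distance_km / avg_speed_kph * 60

trip_km = 8  # a typical urban trip (assumed)

car = door_to_door_minutes(trip_km, avg_speed_kph=25, overhead_min=8)       # parking, walking
transit = door_to_door_minutes(trip_km, avg_speed_kph=10, overhead_min=15)  # walk + wait
st = door_to_door_minutes(trip_km, avg_speed_kph=120, overhead_min=2)       # board at the door

for name, t in [("car", car), ("transit", transit), ("ST", st)]:
    print(f"{name:8s} {t:5.1f} min")
```

    The end-point overhead term dominates short trips, which is why eliminating it matters as much as raising cruise speed.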

    The mass and dimensions of the vehicles that travel today's roads are inherently incompatible with these capabilities. Just as with the analog and circuit switched nature of the communication infrastructure prior to the 1970s, the only way to overcome these intrinsic limitations is to adopt a completely new approach. All applications supported by today's system must still be supported, such as the movement of cargo that is too large or too massive for the standard vehicles, and all emergency services, but these can be treated as special cases with special case solutions. In general it is a very poor performance/cost trade-off to allow a small percentage of use cases to drive the characteristics of the overall system design. In my ST design I provide special case implementations for those applications that are not, or only partially, supported by my very light guideways and vehicles.

    Shared

    In the old circuit switched networks all network elements (lines and switch relays) between the parties at either end of the "circuit" were dedicated to those users for the length of the call. This made such calls very expensive as they consumed non-shareable resources. With packet switching there is no such dedication of resources. All lines and switches can be shared among all data moving over them. There are many potential ways such sharing can occur including time division multiplexing (think one after another with tight spacing) and frequency division multiplexing (like broadcast radio and tv) where each "channel" is carried on a different frequency.
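
    The "one after another with tight spacing" idea of time division multiplexing can be shown in a few lines. This is only a toy round-robin scheduler, not any real protocol:

```python
# A toy illustration of time-division multiplexing: packets from several
# streams share one line by taking turns, one after another.
from itertools import zip_longest

def tdm_interleave(streams):
    """Round-robin the packets of each stream onto a single shared line."""
    line = []
    for slot in zip_longest(*streams):          # one time slot per round
        line.extend(p for p in slot if p is not None)
    return line

a = ["a1", "a2", "a3"]
b = ["b1", "b2"]
print(tdm_interleave([a, b]))  # ['a1', 'b1', 'a2', 'b2', 'a3']
```

    Frequency division multiplexing achieves the same sharing in a different dimension: each stream gets its own carrier frequency rather than its own time slot.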

    In the transportation area everyone today shares the benefits of our roads, and usually pays for them collectively through taxation, including tolls for use. Individual ownership of small segments of road would not make sense. However, the only shared vehicles are taxis and the various types of mass transit vehicles. But sharing vehicles can greatly reduce the number of vehicles needed, as well as the resources, such as parking facilities, devoted to vehicles not in use. In very dense urban areas, such as Manhattan, relatively few personal vehicles are used due to these benefits of sharing.

    There are two potential categories of vehicle sharing: temporal and spatial. In temporal sharing a vehicle is used to perform some trip and when that trip is completed the same vehicle can be used to perform other trips. Spatial sharing is the domain of mass transit systems. Large vehicles are used that can accommodate many passengers simultaneously. For some segment of many distinct individual trips a vehicle is shared. Both forms of sharing can lead to significant reductions in the number of vehicles needed to support a given number of trips over some period, but they can also lead to contention for the shared resource at times. In the case of spatial sharing fewer vehicles may be needed, along a particular network segment, to move the same number of people, or amount of cargo, during the same period. This characteristic is most beneficial during “rush hour” periods in high population density settings.

    Circuit switching was intrinsically an unshared system. All network resources (lines and switches) were dedicated to a single conversation once the call had been connected. In contrast, with packet switching most network resources could be shared. This is one of the main reasons that a long distance phone call in the 1960s could cost in excess of $10/minute, while today one can make free video calls between virtually any two locations in the world. The wireless revolution even extended such sharing to the individual client device, where the wired network still had unshared wires near its edges (e.g. from a neighbourhood switch to individual residences). The temporal and spatial sharing distinction also has an analog in the communication domain: the time and frequency division multiplexing noted earlier.

    ST may be viewed as a system of automated taxis. This temporal sharing does not lead to the issues associated with spatial sharing, while other characteristics of ST address the primary capacity benefit of that form of sharing. The cost advantages are tremendous. Today the number of vehicles (personal or commercial) in North America is in the neighbourhood of 800 for every 1000 people. With the sharing provided by ST, my calculations indicate that the number required would be in the range of 5-10% of that count. As each vehicle has about 5% of the mass of today's typical vehicle, the material resources required for ST vehicles are less than 1% of what is used today. And this happens while door-to-door trip times go way down, as noted above.
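
    The fleet arithmetic can be made explicit. The fractions below are the figures from the paragraph above, taking the upper end of the 5-10% range:

```python
# Back-of-envelope fleet and material comparison using the figures above.
vehicles_per_1000 = 800          # today, North America
st_fleet_fraction = 0.10         # upper end of the 5-10% estimate
st_mass_fraction = 0.05          # ST vehicle at ~5% of today's vehicle mass

st_vehicles_per_1000 = vehicles_per_1000 * st_fleet_fraction
material_fraction = st_fleet_fraction * st_mass_fraction

print(st_vehicles_per_1000)      # vehicles per 1000 people under ST
print(material_fraction)         # fraction of today's vehicle material needed
```

    The two fractions multiply because both the count and the per-vehicle mass shrink, which is how the total material requirement drops below 1%.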


    Safety

    As noted earlier, one performance measure for communication systems is the probability that a piece of information sent from A will reach B in a form that is sufficiently reflective of the form that existed at A. In the analog and circuit switched world this quality was realized as the intelligibility of the speech at the receiver, and losses were intrinsic to the nature of analog signals and circuit switching. But with the introduction of digitization, error correcting coding over any leg of the network allowed the information to be restored, as long as the errors introduced were within the bounds of correction provided by the code.

    In the world of moving things I call the analog of this state preservation what is typically referred to as "safety". However, my use of safety is somewhat broader, and to some degree breaks the analogy with communications, as I consider the impact on third parties an aspect of this safety. For example, some indications are that in many locations as many as 30% of all injuries and fatalities are to pedestrians and cyclists.

    The concentration on safety over the past 50 years has been on adding features to our vehicles that overcome some of their inherent safety limitations. These have included seat belts, air bags, anti-lock braking and traction control systems. But most of these have little or no impact on the safety of third parties, and are just patches over the fundamental problem: the mass of the vehicle, controlled by a human driver, and the nature of its interface with its guideway (i.e. unconstrained, with tires on roads subject to weather impacts). To overcome issues with driver error, research today is being done on fully automating the control of these vehicles.

    The US NHTSA has prepared an extensive report on the causes of motor vehicle crashes. The report investigated 5471 crashes that occurred in the period from July 3, 2005 to December 31, 2007, which are claimed to be a nationally representative sample. Critical reasons related to driver error (from table 9a) were a factor in 5096 of the crashes, with various recognition errors (such as inadequate surveillance) or decision errors (such as driving too fast for the conditions) making up almost 75% of that. Vehicle condition related factors (table 12) occurred in 703 cases, of which 526 were related to tire or wheel deficiencies. Roadway related factors occurred in 1629 cases, with conditions such as a wet or slick surface accounting for 1148 of them.
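
    The shares implied by those counts are easy to compute. Note that the NHTSA categories overlap (a crash can have driver, vehicle and roadway factors at once), so the percentages do not sum to 100%:

```python
# Shares implied by the NHTSA counts quoted above (sample of 5471 crashes).
# Categories overlap, so the top-level shares can exceed 100% in total.
crashes = 5471
driver, vehicle, roadway = 5096, 703, 1629
tire_wheel, wet_slick = 526, 1148

print(f"driver-related:  {driver / crashes:.1%}")
print(f"vehicle-related: {vehicle / crashes:.1%}")
print(f"  of which tire/wheel: {tire_wheel / vehicle:.1%}")
print(f"roadway-related: {roadway / crashes:.1%}")
print(f"  of which wet/slick:  {wet_slick / roadway:.1%}")
```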

    But we do know how to avoid the third party issues: use off-grade guideways as with subways, elevated trains and expressways. But these are very expensive to build, particularly in an urban setting and in the context of the massive vehicles used. The same systems also avoid the potential for vehicular collisions by using the third dimension to allow traffic to cross without the stops that slow its average speed or the potential for collisions.

    The low mass and small dimensions of ST vehicles allow us to have a guideway supporting off-grade operation to eliminate interactions with anything below 3m above ground, and to provide a very cheap version of the 3 dimensional guideways that allow traffic at intersections to avoid the potential for collisions. Fully automated control also avoids any issues with human control, and in the case of ST triple redundancy in sensors, communication mechanisms and the control system is used to limit the potential for failures in those areas. The guideway is covered, with the vehicles suspended below it, to avoid weather related factors. All of this is backed by continuous monitoring of all vehicle and guideway elements to provide early detection of any symptoms indicating future failure, allowing preventive maintenance to be performed before such failures. Thus, virtually all of the sources of problems identified in the NHTSA report are intrinsically eliminated.
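
    Triple redundancy implies some form of voting to mask a single faulty channel. Here is a minimal 2-of-3 voter sketch; the tolerance value is an arbitrary assumption, and a real control system would be considerably more involved:

```python
# A minimal 2-of-3 voter of the kind triple-redundant sensing implies:
# a reading is accepted when at least two of three channels agree
# (within a tolerance), masking a single faulty channel.

def vote(r1, r2, r3, tol=0.5):
    """Return the mean of the first agreeing pair, or None if no two agree."""
    readings = [r1, r2, r3]
    for i in range(3):
        for j in range(i + 1, 3):
            if abs(readings[i] - readings[j]) <= tol:
                return (readings[i] + readings[j]) / 2
    return None  # no agreement: flag the channel set for maintenance

print(vote(100.1, 99.9, 250.0))  # ~100.0 -- the faulty third channel is outvoted
```

    The None case is exactly the "symptom indicating future failure" that continuous monitoring would surface for preventive maintenance.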

    Note that, as in the other areas, the safety benefits of ST are only possible because of the interaction with its other features. The elevated guideway would not be acceptable if it were not very low cost, or if it were just an addition to the existing roadways and mass transit systems. Also, if those systems continued to operate alongside ST they would interact with ST vehicles, which would greatly affect the safety potential. But if those systems are to be abandoned with the introduction of ST, then it really needs to provide door to door service that outperforms the best that exists today (i.e. personal vehicles).

    Flexible use vehicles

    Over the past 50 years more and more freight traffic has used large containers to move things. This significantly reduces the cost of handling cargo when moving it from one transport vehicle to another. These containers can then be viewed as packets. But possibly as significant is that it separates the characteristics of what is being moved from the mechanism (vehicle and network) used for movement. This compartmentalization of the transportation function is an analog for the flexibility and cost savings enabled by digitization.

    A question arises as to what size the container should be. The TCP/IP standards that sit near the bottom of the internet communication stack originally defaulted to fairly large packets to optimize the throughput over the relatively low capacity lines and switches available at the time. The ATM system used by the phone networks used much smaller packets, as throughput there was not as much of a concern as the predictable, and reasonably low, latency needed for carrying voice. But, with the advances in both end to end throughput and latency of the physical network elements, along with changes to the network protocols, it is now possible to carry not just voice but video streams using internet standards.
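
    The packet-size trade-off fits in two small formulas: big packets amortize fixed per-packet overhead (better throughput), small packets keep per-packet serialization delay low (better latency). The header size and line rate below are illustrative assumptions, not the actual TCP or ATM parameters:

```python
# The packet-size trade-off in two formulas. Header size and line rate
# are illustrative assumptions, not real TCP or ATM parameters.

def per_packet_delay_ms(payload_bytes, header_bytes, line_bps):
    """Time to serialize one whole packet onto the line."""
    return (payload_bytes + header_bytes) * 8 / line_bps * 1000

def efficiency(payload_bytes, header_bytes):
    """Fraction of line capacity carrying payload rather than overhead."""
    return payload_bytes / (payload_bytes + header_bytes)

line = 1_500_000  # 1.5 Mbps line (assumed)
for payload in (48, 1460):   # small ATM-like cell vs large segment
    print(payload, "bytes:",
          f"delay {per_packet_delay_ms(payload, 40, line):.2f} ms,",
          f"efficiency {efficiency(payload, 40):.0%}")
```

    As line rates rose, even large packets became fast to serialize, which is part of why internet standards can now carry latency-sensitive voice and video.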

    In the Sustainable Transportation system I have developed I use this principle of separating the parts of the system that are responsible for providing the movement capability from the "containers" that are specific to the needs of the item being moved. In the urban system there are two primary "vehicles":
    • a capsule that is roughly bullet shaped with a length of 4.5m, width of 1.5m and height of 1.5m. These capsules carry anything that fits within those dimensions up to a maximum mass of 225kg. This supports passenger modules for 2 adults or 1 adult and 2-3 children (depending on their size), most urban cargo needs, and special purpose modules such as a gurney with attendant for an ambulance function. The passenger modules can be system supplied (think of a taxi) or be supplied by the riders as long as they fit the dimensions and provide the interfaces to safely connect to the vehicle. The capsules ride at high speed (100-150kph on what are today's arterial roads), suspended from an extremely low cost guideway (<$100K/km/lane), with their bottoms 3m above the ground.
    • a flat-bed "truck" that is really just that, a 3m by 4m flat-bed riding upon a bogie that provides the power and movement capability. These move loads whose spatial dimensions or weight do not allow them to go in the above capsules. This should be a very rare requirement. Containers specific to any function can be carried by these vehicles, with the containers not needing to deal with the movement function. For even larger loads, two or more of these flat-beds can be linked. These vehicles travel on the ground at a top speed of 15kph and use a combination of automated and radio control. These characteristics enable:
      • the simple low cost structure (e.g. no need for a driver compartment)
      • safe interaction with ground level traffic like pedestrians and cyclists without needing to solve the far more difficult problem of fully automating the control of large numbers of large vehicles travelling at relatively high speeds (which is a major area of research today)
      • flexibility to deal with arbitrary loads as the load specific requirements are isolated to the container that rides on the "truck"
    My inter-city system also uses the principle of separating the movement function from the needs of what is being moved. It has 3 levels:
    1. the first is simply an extension of the capsule system used in urban areas, running along most paved inter-urban roads, including expressways. Operating at 150kph, this dense network supports much greater capacity and lower latency than today's combination of expressways (which essentially form a single point of failure of any type, including simple congestion) and other roads.
    2. A higher speed system that runs at significantly higher speeds in low pressure covered trenches. These would run along the medians of existing expressways, or, where no such median is present, replace the 2 centre lanes. Such a replacement is enabled by the capsule system of level 1. The vehicles would have dimensions similar to narrow buses, capable of carrying 15 rows of two seats. Some container modules would actually fit that description. This is to some degree similar to Hyperloop, but also differs in many ways. One difference is that the same vehicle can carry any container that fits the specifications.
    3. An even much higher speed system that I conjecture could be built within the next two decades.
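
    For reference, the urban vehicle specifications above can be captured as plain data. Fields the text does not state (the flat-bed's deck height and payload limit) are left as None rather than guessed:

```python
# The urban vehicle specs above as plain data; unstated fields stay None.
from dataclasses import dataclass
from typing import Optional

@dataclass
class VehicleSpec:
    name: str
    length_m: float
    width_m: float
    height_m: Optional[float]
    max_payload_kg: Optional[float]
    top_speed_kph: float
    elevated: bool            # suspended from the guideway vs on the ground

capsule = VehicleSpec("capsule", 4.5, 1.5, 1.5, 225, 150, elevated=True)
flatbed = VehicleSpec("flat-bed truck", 4.0, 3.0, None, None, 15, elevated=False)

print(capsule)
print(flatbed)
```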

    Appropriate capacity for throughput and loading/unloading

    One lesson that our communication networks have reinforced is that the ability to provide low latency and high throughput between any pair of points on a network comes far more from having a dense network that avoids choke points and single points of failure than from any single high-capacity link. This lesson is far too often ignored by proponents of large and expensive transportation infrastructure projects such as the CA HSR and many LRT (light rail transit) systems. Note that CA HSR is projected to cost more than $60B, while the single 19km long LRT that is to be built in my city is projected at $818M (or more than $40M per km).

    In contrast, as noted earlier, my ST guideways, including all materials and construction costs, could be built for under $100,000/lane/km. The US DOT 2009 report has details on the amount and type of roads in the US, which I have put into a Google Docs spreadsheet here. There are about 4.2M km of paved roads, with about 2.3M lane-km in urban areas and about 5.3M paved lane-km in rural areas. Thus, a 1-1 replacement of such routes with ST guideways would cost about $900B, and would yield a network providing door to door movement between any pair of points in the country at 150kph for almost the whole trip (for any trip of more than a few km). This would provide the dense door to door network that is an analog of today's internet, including all the wireless devices such as smartphones and tablets. The amount is little more than it costs to fuel all of today's vehicles for a single year.

    So the door to door trip from anywhere in LA to anywhere in the Bay Area would take about 4.5 hours riding in ST vehicles, with many possible routes along guideways following any of today's roads between those areas. CA HSR has a projected top speed of 350kph, and much higher capacity per vehicle. This would lead to trip times between the SF and LA stations of 2.7 hours. But anyone wanting to ride that train would still need to get to the stations, and go through any delays that arise in the station, so in many cases the total would be about the same as the ST trip.

    CA HSR is projected to support 6 minute departures with a capacity of over 1000 people per vehicle, implying a system capacity of 12000 people per hour. But there is no evidence of a need for such capacity. Today's air traffic between LA and SF averages less than 320 people each way per hour (roughly two 80% full mid-sized airplanes per hour). Total train traffic between SD and Seattle is less than 1200 people per day, or a maximum of 50 per hour between LA and SF. Even with peak time rates at 5 times the average, only 1600 people per hour need to be moved today. With demand growth of 3% per year, the average requirement would still only be about 780 people per hour 30 years from now. But what happens if there is an issue of any kind along this rail line? There are 600 km of it, and it is exactly the kind of choke point that the success of our communication systems has shown us to avoid.
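
    The demand arithmetic in that paragraph, made explicit with the figures quoted above:

```python
# Demand arithmetic using the figures above.
avg_per_hour = 320           # people each way per hour, LA-SF air traffic
peak_factor = 5
growth_rate, years = 0.03, 30

peak_today = avg_per_hour * peak_factor
avg_future = avg_per_hour * (1 + growth_rate) ** years

print(peak_today)            # today's peak demand, people per hour
print(round(avg_future))     # average demand after 30 years of 3% growth
```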

    In contrast, even with 40m headways, a single ST top speed guideway lane would have a capacity of 3200 vehicles per hour. Along I-5 we could have 3 such lanes in each direction for a capacity approaching that of CA HSR along the single route, but at a cost of under $600,000 per km. Further, all the other routes such as highways 1, 101, 99, ... would have ST guideways along them that provide additional capacity, with redundancy against outages along any single lane or route, and low latency between any pair of points on any of these roads.
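
    The headway-to-capacity relationship is a one-line formula. Note that the raw figure at 150kph with 40m headways works out to 3750 vehicles/hour, so the 3200 figure above sits below the theoretical maximum, consistent with leaving operating margin:

```python
# Lane capacity from headway: vehicles/hour = (speed in m/s) / (headway in m) * 3600.

def lane_capacity_per_hour(speed_kph, headway_m):
    speed_mps = speed_kph / 3.6
    return speed_mps / headway_m * 3600

raw = lane_capacity_per_hour(150, 40)
print(round(raw))  # raw capacity before any safety or operating margin
```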

    One capacity consideration that comes up in discussions of mass transit systems is not just the line capacity, but also the loading and unloading capacity. It is noted that vehicles for subways and LRTs provide far more efficient loading and unloading because of the relatively large number of doors in comparison to typical buses. Buses typically only allow boarding through the single front door which leads to a single choke point for that function. This can provide significant delays at some stops. There are many issues around loading and unloading that I discuss in detail in my full ST document.

    However, a quick observation is that the door to door design of ST allows people to board anywhere on the grid. Although there are no stations or stops, low cost additions (essentially just an additional mesh of standard guideway) can support higher loading and unloading capacity at certain high traffic areas such as office buildings and stores. But I do show that the high redundancy, and board anywhere capability, would allow even dense locations like Hong Kong's Causeway Bay area to have sufficient loading and unloading capacity with ST to eliminate the need for its subway, while allowing all its travellers to have rapid door to door service. For example, today no subway line exists between Causeway Bay and Ap Lei Chau (although one is being built). Google Maps estimates the drive at 7.4km, taking about 11 minutes (although very susceptible to significant traffic delays, as it goes through the single choke point provided by the Aberdeen tunnel). Maps also says that the trip by mass transit would take from 37 to 44 minutes. Neither accounts for the delays at the end points. Using a car this involves getting to a parked car, finding a parking spot at the other end, and getting from that point to the ultimate destination. With mass transit it involves getting to and from the bus stops. In contrast, an ST vehicle can be guaranteed to be ready for boarding outside any building a person is in within a minute, to travel most of the way at 150kph in less than 3 minutes, and to drop off at the door, with less than a minute total for boarding and debarking.
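
    The Causeway Bay example in numbers, using the figures above (the one-minute boarding and deboarding allowances are the ones stated in the text):

```python
# Causeway Bay to Ap Lei Chau, using the figures above.
trip_km = 7.4
cruise_kph = 150
board_min = 1        # vehicle ready at the door within a minute
deboard_min = 1      # boarding plus debarking under a minute total

ride_min = trip_km / cruise_kph * 60
total_min = board_min + ride_min + deboard_min
print(f"ride {ride_min:.1f} min, door-to-door about {total_min:.1f} min")
```

    Even charging a full minute at each end, the door-to-door total stays well under the 11 minute drive, let alone the 37-44 minute transit trip.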


  2. Elon Musk's Hyperloop proposal has led to a lot of media coverage of certain transportation topics, and much heated opposition. Hyperloop was proposed primarily as an alternative to the California High Speed Rail (CA HSR) system. CA HSR would service all primary population centres in California. Hyperloop would be primarily focussed on a single link between Los Angeles and San Francisco. I believe that both Hyperloop and CA HSR are seriously flawed, although my main objections to Hyperloop have little overlap with most of the criticism of it that I have seen. I think that the only way to effectively discuss those issues is to provide a global perspective on transportation systems. I will start that with an analogy to another system that is involved in moving things: our communication system that moves information. Later posts will apply this analogy to the system of moving people and things, and to specific criticisms of Hyperloop and CA HSR, given the context provided by the two earlier posts.

    For any system of moving things there are some performance metrics and some costs associated with providing various levels of performance for each metric. All such systems are about getting the items they carry between any two points A and B on their network, with the best performance and the lowest costs. In almost all cases the network does not provide a single direct connection between A and B, but a number of "legs", each with its corresponding performance and cost measures.

    One of the most prevalent issues in all of these systems is what is usually referred to as the "last mile problem". Essentially this says that the major cost of a network is not in its core, but in reaching all of its end-points. For example, most modern communication networks today provide high speed fibre optic links to locations that each service hundreds of homes. But the network needed to carry signals from such a location to all of those homes is much more expensive than the links from that location to the small number of non-end-user connections it makes.

    Different systems for moving things, based on the characteristics of what is being moved, use different nomenclature for what are essentially the same performance metrics. In the communication of information the primary metrics are the latency, throughput, and the probability that an item of information from A will arrive at B "intact". These are almost direct analogs, respectively, for the names used in the system of moving people and goods: trip times (roughly the inverse of average speed), capacity, and safety. The systems for moving people and for moving goods are often merged, and labelled our transportation system. Also, in that case safety applies not just to the item being moved, but to third parties. Many of the direct injuries and fatalities related to the transportation system are to pedestrians and cyclists. The cost metrics vary much more.

    To provide a frame of reference for developments in these areas I will give a brief overview of the history of the information moving (communication) field. Among the objections to Hyperloop have been strong statements that the claimed performance and cost gains are unrealistic, that incremental improvements provide a better approach, and that revolutionary change of this kind is not valid for systems that move things. I think the history of the communication field provides a clear counterpoint to such arguments. Further, I will show that the claimed performance gains are anything but revolutionary in a "moving things" system wide sense. This will also provide arguments indicating that the incremental HSR system is even more flawed. I will show that neither is a justified investment in a performance/cost context.

    From the late 1800s to the late 1960s and early 1970s our communication systems had some fundamental properties: broadcast wireless and point to point wired, circuit switched, analog. Today's system is packet switched, digital, with wireless replacing wired in many situations and rapidly evolving to be overwhelmingly dominant in the rest. There have been many expensive modifications to the infrastructure along the way that have led to today's smartphones, able to do things like stream video from one hand-held device to another, which was considered science fiction in the 1960s.

    Consider the infrastructure of the 1960s, which to a large degree was unchanged from that of the late 1800s. A cable containing 2-4 copper wires connected homes (usually 100 to 10000) to some relatively local, and expensive, switching station. The switching station would contain mechanical relays that would create physical electrical connections between a pair of wires: one leading to a home and one leading to a higher level switching station (or possibly to another home serviced by the same station). To make a connection to a remote destination a set of such physical connections would be made up and down through a hierarchy of such stations. The actual call would then occupy a sequence of relays and wires between the source and destination (circuit switching). The information sent in would be a simple modulation of the strength of the electrical signal (the analog part) to reflect changes in sound waves detected by a device called a microphone that could perform such a conversion. At the other end the signal would be translated by another device, called a speaker, that could translate electrical signals back into sound waves. The phone companies (such as ATT) that deployed the expensive cables, provided the expensive switches, and often even the handsets containing the microphones and speakers, were among the most highly valued companies of that time.

    In the 1970s the revolution in electronics that would eventually lead to today's devices for information communication, processing and storage made it possible to efficiently convert the changing analog electrical signal from a microphone into a digital signal encoding the same information as a sequence of numbers (AD devices). Harry Nyquist had proven that a sufficiently rapid sequence of such numbers would contain all of the information required to precisely reconstruct the original analog signal. Corresponding digital-to-analog (DA) devices were cheap enough to convert this number sequence back into the analog signal required by a speaker device to reproduce, on a handset across the world, the sound at the microphone, using a digital information stream from the speaker's phone to the listener's phone, or vice versa. These digital signals could be carried by the same wires, and switched by the same circuit switches, between these points. That was the fundamental revolution that upset 100 years of expensive wired, analog, circuit-switched systems to create today's internet and smartphones. Now the information dealt with in the system was in the digital form that computers used for representing any information, not just voice signals.
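    As an illustrative sketch (my own, not from the original discussion), the Nyquist result can be demonstrated in a few lines: a tone sampled above twice its highest frequency can be rebuilt at any point between the samples using the Whittaker-Shannon interpolation formula. The specific rates and window length below are arbitrary choices for the demonstration.

```python
import numpy as np

# A 1 kHz tone sampled at 8 kHz -- comfortably above the 2 kHz Nyquist rate.
fs = 8000                                   # sampling rate, Hz
f = 1000                                    # tone frequency, Hz
n = np.arange(256)
samples = np.sin(2 * np.pi * f * n / fs)    # the "sequence of numbers"

def reconstruct(t, samples, fs):
    """Whittaker-Shannon interpolation: rebuild the analog value at
    time t from the samples alone."""
    k = np.arange(len(samples))
    return np.sum(samples * np.sinc(fs * t - k))

# Evaluate halfway between two samples, far from the window edges;
# the small residual error comes only from truncating the sample window.
t = 128.5 / fs
error = abs(reconstruct(t, samples, fs) - np.sin(2 * np.pi * f * t))
print(error)
```

    The samples really do carry all the information in the original analog signal, which is exactly why the same wires could carry numbers instead of waveforms.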

    The digital conversion is what enabled packet switching for all data. It represented all types of information in a common form: a sequence of numbers. This was a great enabler for many operations that would have been extremely difficult, or not even possible, in the analog form. These included things like:

    • error correction that could deal with noisy lines and switches and still reconstruct the original information
    • encryption of the information to make it secure
    • data compression, particularly lossy compression, for high-bandwidth information like video
    • overcoming the bandwidth limits of the analog system. The analog lines and switches were designed for human voice, which has a bandwidth of only about 3000 Hz. In the early 80s people who wanted their computers to communicate used modems into which they could plug their phone handset, allowing the computers to talk at about 300 bits per second. Today's cell networks are rapidly approaching 1 million times that speed, which is what allows them to stream live video, represented as digital streams just like any other data.
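    To make the error-correction point concrete, here is a minimal sketch (my own illustration, not something from the phone networks themselves) of the classic Hamming(7,4) code: three parity bits are added to four data bits so that any single flipped bit can be located and corrected at the receiver.

```python
# Hamming(7,4): encode 4 data bits into 7 bits; correct any single-bit error.
def encode(d):
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # parity over bit positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4          # parity over bit positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4          # parity over bit positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def decode(c):
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 * 1 + s2 * 2 + s3 * 4   # 0 means no error, else the error position
    if pos:
        c[pos - 1] ^= 1               # flip the corrupted bit back
    return [c[2], c[4], c[5], c[6]]   # extract the data bits

data = [1, 0, 1, 1]
sent = encode(data)
received = sent[:]
received[3] ^= 1                      # noise flips one bit in transit
print(decode(received) == data)       # True: the original data is recovered
```

    Real links use far more powerful codes, but the principle is the same: redundant digital information lets the receiver detect and repair errors that an analog system would simply pass on as distortion.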

    Very quickly thereafter the phone companies started the change from circuit to packet switching. Where a circuit switch required a physical electrical connection between the communicating devices, digital information eliminated the need for physical connections. Digital information could be read from any input to a computer, stored temporarily, and sent to any of its outputs. As long as this was done in a time that was transparent to the users at the end points (a relatively simple problem even at that time) they would not notice a difference relative to a circuit-switched system. For the phone companies this allowed them to eliminate large numbers of expensive analog switches, leaving just the problem of making sure that the signal arriving at the now more remote digital switches was accurate. But that too was solved, to a large degree, by the semiconductor revolution: with digital signals it was possible to add extra information that would allow all but the most improbable errors in the stream to be efficiently corrected. Phone companies exploited the twin revolutions of digital information and packet switching to use their existing wires to talk to far fewer and much cheaper digital switches, supporting much cheaper communication.
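    The store-and-forward idea at the heart of packet switching can be sketched in a few lines. The `Switch` class and its routing table here are hypothetical illustrations of the principle, not a description of any real switch:

```python
from collections import deque

# A packet is just (destination, payload); the switch reads only the
# destination, never the payload -- voice and data look identical to it.
class Switch:
    def __init__(self, routes):
        self.routes = routes          # destination -> output port
        self.buffer = deque()         # temporary storage: the "store"

    def receive(self, packet):
        self.buffer.append(packet)    # store each packet on arrival

    def forward(self):
        out = []
        while self.buffer:            # ...and forward: no physical circuit
            dest, payload = self.buffer.popleft()
            out.append((self.routes[dest], payload))
        return out

sw = Switch(routes={"B": 1, "C": 2})
sw.receive(("B", b"hello"))           # packets from unrelated conversations
sw.receive(("C", b"world"))           # share the same switch and lines
print(sw.forward())                   # [(1, b'hello'), (2, b'world')]
```

    As long as the buffering delay stays imperceptible to the people at the end points, the users cannot tell this apart from a dedicated circuit, yet no relay is tied up for the duration of a call.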

    As recently as the late 80s I spent almost $1000 in a month on voice communications to my wife when she was in Hong Kong. Today, using VOIP technology, even video calls between such locations are free.

    But digital communication between computers was still separate from the protocols that the communication companies used internally, and the computer networks were separated from each other. The big revolution here was the introduction of the "internet", which grew out of a project by the US DARPA agency to create a standard allowing all of the computer networks to communicate among themselves. This involved creating a stack of inter-computer communication protocols (TCP/IP) that could deal well with errors (the third leg in the communication metrics I noted above). Development and deployment of such systems was done in the late 1980s.

    By the late 1990s, the integration of computer communication streams and digitized voice was being developed. The networks had advanced to the point where this could easily occur. The problem was that for a digitized voice signal to be acceptable, each packet of information from the initiator would have to arrive with sufficiently low latency for the listener to hear the voice as continuous. The characteristics of TCP/IP were incompatible with this goal. The voice communication companies had developed their own standard (ATM, or asynchronous transfer mode) that achieved this goal, but it was not compatible with TCP/IP. Both the latency and the throughput of the existing networks, and the latency-relevant aspects of TCP/IP, continued to improve to the point in the early 2000s where voice over IP (VOIP) became viable. Voice communication could now piggyback on all data communication cost improvements.
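    As a rough illustration of what a voice stream demands of the network (the specific numbers below are typical textbook values, not taken from the original discussion): telephone-quality audio sampled at 8 kHz with 8-bit samples, packetized into 20 ms frames, works out as follows.

```python
# Back-of-the-envelope VOIP packet budget (illustrative numbers, not a spec).
sample_rate = 8000          # Hz, telephone-quality audio
bits_per_sample = 8
frame_ms = 20               # audio carried per packet

samples_per_packet = sample_rate * frame_ms // 1000        # 160 samples
payload_bytes = samples_per_packet * bits_per_sample // 8  # 160 bytes
packets_per_second = 1000 // frame_ms                      # 50 packets/s
bitrate = payload_bytes * 8 * packets_per_second           # 64000 bit/s

print(samples_per_packet, payload_bytes, packets_per_second, bitrate)
```

    The throughput is modest even by 1990s standards; the hard part was that each of those 50 packets per second had to arrive, in order, within a tight and consistent delay so the listener hears continuous speech, which is exactly what best-effort TCP/IP did not originally guarantee.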

    Also beginning in the mid-90s the cost and performance of wireless communication started to improve rapidly. This was the last major leg in the huge infrastructure change for information communication from what existed from the late 1800s to the 1960s. Today, even phones can send information of any type (since it is all digital and built on a reliable packet switching mechanism) to virtually any other phone on the planet almost instantaneously (latency, from a human perspective), at rates (throughput) and with a reliability that would have been undreamed of as recently as the late 1990s. People can stream real-time video, requiring many megabits per second, from virtually wherever they are.

    The infrastructure expenses for today's digital, packet-switched, wireless communication are a small fraction of those of the old analog, circuit-switched, wired systems. Today, most developing countries can very rapidly deploy an information communication system with the same capabilities as those in the most developed countries. New issues exist with respect to some of the materials in the handsets, but those are likely to be swept aside in the next decade as the production problems with graphene are solved. Other than that, the next steps in changes to our information communication infrastructure will seem bland by comparison to the changes of the past 50 years.

    Another common network-related issue is how the network handles outages along any of its links. Most people have probably been "stuck in traffic", where a primary link (e.g. a specific expressway) is essentially shut down for some time for some reason (e.g. an "accident" somewhere along that leg of the network). While a given network configuration may perform well enough most of the time, it may have the potential to perform very badly in the context of such outages. The problem in these cases is that the network essentially has a "single point of failure". There is a simple solution to avoid such issues: have a dense network in which the disruption of a single link does not cause significant disruption to movement in the neighbourhood of the problem. The internet works so well today because it has such redundancy. Our transportation system does not, because we have implemented the network without sufficient redundancy, leading to major disruptions whenever there are (far too common) outages along single edges.
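    The redundancy point can be illustrated with a toy graph (the node names and links are hypothetical): taking one edge out of service disconnects a sparse chain, while a denser network with a couple of extra links keeps everything reachable.

```python
from collections import deque

def connected(nodes, edges, removed=None):
    """BFS reachability check; `removed` is one edge taken out of service."""
    adj = {n: set() for n in nodes}
    for a, b in edges:
        if (a, b) != removed and (b, a) != removed:
            adj[a].add(b)
            adj[b].add(a)
    start = next(iter(nodes))
    seen, todo = {start}, deque([start])
    while todo:
        for nb in adj[todo.popleft()]:
            if nb not in seen:
                seen.add(nb)
                todo.append(nb)
    return seen == set(nodes)           # True iff every node is reachable

nodes = {"A", "B", "C", "D"}
sparse = [("A", "B"), ("B", "C"), ("C", "D")]   # a chain: no redundancy
dense = sparse + [("A", "C"), ("B", "D")]       # extra links add redundancy

print(connected(nodes, sparse, removed=("B", "C")))   # False: single point of failure
print(connected(nodes, dense, removed=("B", "C")))    # True: traffic can reroute
```

    Internet routing protocols do essentially this continuously, redirecting packets around failed links; a road network laid out as a chain offers its drivers no such option.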

    In my area, there is a single major expressway between Detroit and Toronto. A single "accident" anywhere on that route usually leads to multiple hour delays for anyone attempting to use that road on the stretch that includes that outage. The issue here is that the network is not sufficiently dense. Our communication systems recognized and addressed this problem long ago. Our networks for moving people and things have not, but they could. It requires putting planning for single link outages into the overall system design. Unfortunately that is seldom done, and many people spend far too much of their time "stuck in traffic" caused by such poorly designed networks.

    In my next post I will describe how I believe changes will happen in the infrastructure for moving people and goods, following a path of improvement analogous to what has happened in moving information. Neither Hyperloop nor HSR fits into that vision; both have far too poor system-level performance at far too great a cost. Some critics of Hyperloop claim that the technology of moving people is special in some way that precludes revolutionary advances, and that people working in the area should focus on small evolutionary changes such as the safety improvements (seat belts, air bags, ABS, traction control, and crumple zones) made in cars over the past 50 years. In my opinion what is needed is a revolution that would compare to the changes from analog to digital, circuit switched to packet switched, and wired to wireless. Most of the technology to do this exists today; it just requires a new system-level vision. That is what I will describe. It does not include anything that resembles Hyperloop or HSR, and I will note the flaws of each of those in the context of that vision. This is also more than just a vision, as I have almost completed detailed design documents for systems that can implement it. The major area in which I agree with Hyperloop is the deployment of those designs as an "open source" specification, which is what I shall do.

    Gary
