POLICY RESEARCH WORKING PAPER 2384

Are Cost Models Useful for Telecoms Regulators in Developing Countries?

Daniel A. Benitez
Antonio Estache
D. Mark Kennet
Christian A. Ruzzier

As developing countries build up their capacity to regulate privatized infrastructure monopolies, cost models are likely to prove increasingly important in determining the efficient cost of providing a service to a certain area or type of customer. But cost models require reliable information, which is often scarce in developing countries. Census data and the location of wire centers together may help provide the minimum information a regulator needs to implement a cost proxy model, a promising regulatory tool for assessing the efficient cost of providing a utility service.

The World Bank
World Bank Institute
Governance, Regulation, and Finance
July 2000

Summary findings

Worldwide privatization of the telecommunications industry and the introduction of competition in the sector, together with the ever-increasing rate of technological advance in telecommunications, raise new and critical challenges for regulation. For matters of pricing, universal service obligations, and the like, one question to be answered is this: What is the efficient cost of providing the service to a certain area or type of customer?

As developing countries build up their capacity to regulate their privatized infrastructure monopolies, cost models are likely to prove increasingly important in answering this question. Cost models deliver a number of benefits to a regulator willing to apply them, but they also ask for something in advance: information. Without information, the question cannot be answered.

Benitez, Estache, Kennet, and Ruzzier introduce cost models and establish their applicability when different degrees of information are available to the regulator. They do so by running a cost model with different sets of actual data from Argentina's second largest city and comparing the results.

Reliable, detailed information is generally scarce in developing countries. The authors establish the minimum information requirements for a regulator implementing a cost proxy model approach, showing that this data constraint need not be that binding.

This paper, a product of Governance, Regulation, and Finance, World Bank Institute, is part of a larger effort in the institute to increase understanding of infrastructure regulation. Copies of the paper are available free from the World Bank, 1818 H Street NW, Washington, DC 20433. Please contact Gabriela Chenet-Smith, room J3-147, telephone 202-473-6370, fax 202-676-9874, email address gchenet@worldbank.org. Policy Research Working Papers are also posted on the Web at www.worldbank.org/research/workingpapers. Antonio Estache may be contacted at aestache@worldbank.org. July 2000. (22 pages)

The Policy Research Working Paper Series disseminates the findings of work in progress to encourage the exchange of ideas about development issues. An objective of the series is to get the findings out quickly, even if the presentations are less than fully polished. The papers carry the names of the authors and should be cited accordingly. The findings, interpretations, and conclusions expressed in this paper are entirely those of the authors. They do not necessarily represent the view of the World Bank, its Executive Directors, or the countries they represent.
Are cost models useful for telecoms regulators in developing countries?

Daniel A. Benitez, CEER-UADE
Antonio Estache, ECARE, World Bank
D. Mark Kennet, George Washington University
Christian A. Ruzzier, CEER-UADE

JEL Classification: L5, L9, C8

We are grateful to M. Celani, F. Gasmi, L. Guasch, J.J. Laffont, M. Rodriguez-Pardina and W. Sharkey for useful comments and suggestions.

I. Introduction

Worldwide privatization of the telecommunications industry and the introduction of competition in the sector, together with the ever-increasing rate of technological advance in telecommunications, raise new and critical challenges for regulation. For matters of pricing, universal service obligations (regulation required to boost industry growth in areas not currently served or to maintain provision in areas in danger of losing it), and the like, one of the key questions to be answered is: "What is the efficient cost of providing the service to a certain area or type of customer?" (Firms should not be paid for their inefficiencies.)

As developing countries move forward with their efforts to build up their capacity to regulate their privatized infrastructure monopolies, cost models are likely to prove increasingly important for several reasons. First, an independent ability of regulators to assess costs can remove information asymmetries from the process of crafting efficient regulation. Second, independent cost estimates can increase transparency and may help reduce the risks of corruption that can arise in designing or reviewing pricing and subsidy policies. Finally, cost models may help in the development of infrastructure build-out policy by identifying cost differences across regions of the country.

Cost models deliver a number of benefits to a regulator willing to apply them, but they also ask for something in advance: information. Without this vital element, no answer can be given to the question posed above. In the remainder of this paper we introduce cost models and establish their applicability when different degrees of information are available to the regulator. We do the latter by running the model with different sets of actual data from Argentina's second largest city and comparing the results.

The paper is organized as follows. Section II deals with the proper definition of costs and their measurement. Section III presents the FCC model for cost assessment in detail, while Section IV discusses the data required to implement it. Section V concludes.

II. Costs and Their Measurement: A Digression

Which costs are we talking about?

The regulator must decide on the relevant definition of costs to be considered in cost models when answering the question posed in the introduction. First, a distinction should be made between economic and accounting costs. Economic cost is about opportunity cost, that is, the reward the factors of production involved in the provision of the service would obtain in their best alternative use. As stated in Atkinson et al. (1997), firms make decisions based on prices and economic costs; in particular, in dynamic, competitive markets like telecommunications, firms base their decisions on the relationship between prices and forward-looking economic costs, so this is the relevant definition of economic costs to be considered.
Forward-looking economic costs are the costs that would be incurred if a new service were to be provided, or avoided if an existing service's provision were to cease, assuming that all inputs of the firm can vary freely (hence the terms "forward-looking" or "long-run"). Considering the long-run economic cost ensures that the firm recovers all of its costs: not only operating and maintenance costs (which vary in the short run), but also fixed investment costs, necessary inputs in the provision of the service (which are not variable in the short run). If market (or regulated) prices in a competitive framework exceed long-run economic costs, new providers will be attracted to the market, and this entry would be efficient. If market (or regulated) prices fall short of economic costs, no new competitor would have an incentive to enter the market, and some incumbent firms may decide to leave. These voluntary actions of firms in a competitive market achieve an efficient resource allocation by adjusting price or output until the value to consumers of additional output equals the additional cost incurred in its production (the incremental cost).

Accounting costs, on the other hand, are historical (embedded) costs as recorded in the books of a firm, and pricing based on embedded costs would normally fail to make the connection between economic costs and prices, thus leading to inefficiencies in the allocation of resources. Moreover, it is inadvisable to base prices or universal service obligation subsidies on information in the hands of the firms themselves (the well-known problem of asymmetric information). Only forward-looking economic costs can give operators in the market the right signals for entry, investment, and innovation. Thus, international regulatory practice has reached a consensus: the relevant definition of costs is the economic concept of long-run incremental costs (plus certain arrangements to protect investments already made). Regrettably, the same consensus has not been reached over the appropriate approach to measuring such costs.

How can regulators measure costs in practice?

The accounting auditing approach to cost assessment is by far the best known and the least information-demanding and time-consuming methodology, since accounting information is readily available to the regulator. The accounting approach relies on embedded costs recorded in the companies' books, so regulation based on this methodology would be close to cost-plus regulation, providing firms with little incentive to minimize costs. Productive efficiency would thus be thwarted. Moreover, assessing costs on the basis of historical data would lead to allocative inefficiency, as stated above. Another problem with the accounting approach is that it is based exclusively on information provided by the firms, with no independent check. This is typically all the information that the license or the regulator asks the companies for, and it should be clear from the foregoing discussion that it is not enough if the regulator wants to minimize the asymmetric-information problem and the informational rents the firms can earn. So, to improve regulation, more information is needed. Because the firm has no incentive of its own to reduce the asymmetry of information, the regulator should make an effort to collect information beyond accounting data. This will allow the regulator to implement a cost proxy methodology that can enhance cost assessment for various regulatory purposes.
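Before turning to how such costs can be estimated in practice, it may help to restate the cost standard compactly. In our own notation (not taken from any of the sources cited), let N be the set of all services the firm provides and let C(.) be a forward-looking cost function in which every input, including capital, is free to vary. The long-run incremental cost of a service or customer group S is then

LRIC(S) = C(N) - C(N \ S),

that is, the cost added by providing S on top of everything else the firm supplies, or equivalently the cost avoided if provision of S ceased. Prices compared against LRIC(S), rather than against embedded accounting costs, are what give operators the entry, exit, and investment signals described above.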
The alternative to accounting methodologies is to use simulation exercises, varying mainly technology and market parameters, which rely much less on historical data (see Benitez et al. (1999)). These alternative methodologies provide a non-discretionary framework within which regulators and firms can discuss with a significant degree of objectivity, and which can provide an independent check on the accuracy of firms' cost studies. However, lunch is not free: these alternative approaches require much more time and effort in data collection and preparation, as well as in model design. Since accuracy only comes at a cost (the cost of information), there will normally be a tradeoff; we will discuss when these approaches are worthwhile, depending on the information at hand.

In their paper, Benitez et al. (1999) classify the proposed methodologies into two broad categories: financial models (as used in the UK and Australia) and engineering-economic models (as used in the USA). Both approaches agree on defining the relevant costs for regulation as incremental costs.

The former approaches (see Oftel 1995, 1997) focus on the overall financial performance of the firm with and without providing service to certain areas or customers (or groups of areas or groups of customers), thus concentrating on the costs the firm would avoid if it were to cease providing the service (avoidable costs) and the revenues it would stop receiving (foregone revenues). The analysis also takes into account a variety of factors that may indirectly affect the financial performance of the firm. According to this approach, there is a universal service cost when the revenues generated by a customer or a group of customers are insufficient to meet the costs incurred by the universal service provider in providing the service to that customer or group of customers. Thus, to evaluate whether a user is profitable, it is necessary to compare the long-term avoidable costs of serving that user with the foregone revenue. Long-term avoidable costs include operating costs, depreciation, and a reasonable return on the capital used. "Long-term" means the period of time over which all the assets are replaced; considering long-term costs therefore implies that all the capital equipment costs that the universal service provider would stop needing, should it disconnect the service to the user, are included in the analysis. This is deemed appropriate since the universal service obligation influences the investment decisions of the universal service provider. Because the long-term avoidable cost is an economic concept, the appropriate approach to valuing the assets is to consider the asset replacement cost, as is done, for example, in current cost accounting (CCA).

The measure of the long-term avoidable cost obtained from the cost information submitted by the universal service provider will give the costs incurred by that operator. However, it is not necessarily the case that this measure represents an efficient level of avoidable costs. In the event that a financing mechanism is established, the other operators should not have to pay for the universal service provider's inefficiency.
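The profitability test at the heart of this approach can be written compactly. Using our own shorthand rather than Oftel's notation, let AC_i denote the long-term avoidable cost of serving customer or area i (operating costs, depreciation, and a reasonable return on the capital employed) and let R_i denote the revenue that would be foregone if service to i ceased. The net cost of serving i is then

NC_i = AC_i - R_i,

and i gives rise to a universal service cost whenever NC_i > 0. Note that the avoidable cost reported by the universal service provider reflects the costs it actually incurs, which need not be efficient.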
An efficiency adjustment must therefore be applied to the long-term avoidable costs actually incurred, so as to obtain an estimate of the efficient level of costs. (For the case of BT (British Telecom), the scope of the efficiency adjustment was taken from the analysis and the assumptions used by Oftel in the tariff revision for BT.)

The financial approach uses the proper definition of costs and introduces some interesting concepts into the analysis of the universal service burden (such as the indirect financial benefits of being a universal service provider). It seems, though, that financial models share some of the undesirable properties of historical embedded cost (HEC) models (the accounting auditing approach). Both types of models rely on firm-reported cost data; the major difference is that the financial model uses a current cost accounting methodology rather than the HEC standard. (Financial models and engineering-economic models should not be seen as rival approaches, but rather as complementary methodologies; for example, the avoidable costs mentioned above could be estimated through a cost proxy model.)

Engineering-economic models have been developed in recent years as an alternative to the traditional econometric and accounting approaches to cost assessment. Engineering models offer a more detailed view of cost structures than is possible using econometric data. The engineering models (also known as cost proxy models) could enable the regulator to estimate the forward-looking economic cost of the service without having to rely on detailed cost studies that would otherwise be necessary. Proxy models can be useful for many regulatory purposes, such as determining levels of universal service support in high-cost areas and the pricing of unbundled network elements (e.g., interconnection). An economic cost proxy model begins with an engineering model of the physical local exchange network and then makes a detailed set of assumptions about input prices and other factors. The next sections explore in more detail the underlying structure of one such model and its input requirements.

III. The FCC Approach

The Federal Communications Commission (FCC) of the United States has analyzed the use of forward-looking economic cost methodologies as the basis for determining the universal service cost, and it has concluded that the models in use are promising regulatory tools (see Atkinson et al. 1997). The major actors in the telecommunications industry in the United States submitted their own versions of forward-looking models for the Commission to analyze. These models are the Cost Proxy Model (CPM) of Pacific Bell and INDETEC International; the Benchmark Cost Model 2 (BCM2) of Sprint and US West; and the Hatfield Model (Hatfield), in different versions, of AT&T and MCI. The FCC developed and adopted a model as an alternative to those proposed by the industry players. The FCC model, known as the Hybrid Cost Proxy Model (HCPM), combines appropriate principles of engineering design for different network elements with economic principles of cost minimization (see Bush et al. 1998). The model draws freely from engineering principles displayed in other models, hence the term "hybrid". In what follows, we describe the most relevant features of the cost proxy model approach. We give an overview of the HCPM and leave the discussion of input requirements and data problems to the next section.

Structure of the HCPM

This part follows Bush et al. (1998). The HCPM has from the beginning been designed to use sources of geocoded customer location data (to geocode a customer or wire center location, a latitude-longitude coordinate must be attached to that particular location, e.g., to the customer's address).
The model can also handle Census block level data as an alternative source of publicly available data. The latest version of the FCC model incorporates some minor modifications to improve the performance of the model under the expectation that it will ultimately be used with a source of geocoded data (we return to this issue in the next section). The most recent release of the HCPM also contains a clustering module that explicitly incorporates optimization routines as part of the cluster formation process (see below) and allows the user to choose from three different clustering algorithms. The HCPM currently consists of two independent modules: a customer location module and a loop design module.

Customer location module. The local exchange telephone network must connect every customer to a local central office switch. A critical component in the design of such a network is the definition of a "serving area", which consists of a group of customers served from a common remote terminal. Feeder plant then connects every serving area to the central office, and distribution plant connects every customer in a serving area to a "serving area interface". The customer location module first groups individual geographic locations of customers into clusters, to form serving areas based on engineering considerations. These considerations include a distance constraint, so that no customer is farther from a potential terminal location than the maximum copper distance allows, and a limit on the number of customers in a serving area, which depends on the capacity of the largest terminal. A divisive clustering algorithm successively splits new clusters from a main cluster that initially contains every customer location. Clusters are evaluated on the basis of the relative distance of customers from the line-weighted centroids of the new and old clusters. After an initial clustering pass, two different optimization algorithms look for ways to reassign customers to clusters so as to reduce the total distance from the cluster centroids while satisfying the maximum distance constraints. A simplified sketch of this constraint-driven clustering is given at the end of this subsection.

After geocoded customer locations have been grouped into clusters, the location data must be processed further so that they can be used by the loop design module. The HCPM defines a grid on top of every cluster and then subdivides each grid into a large number of microgrid cells, placing each customer location into the correct microgrid cell. Loop plant can therefore be designed to reach only populated microgrid cells.
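The clustering step just described can be illustrated with a small, self-contained sketch. This is not the HCPM code: the routine below is a deliberately simplified divisive split driven by the two constraints named above, and the constants, function names, and sample data are our own assumptions.

import math

MAX_COPPER_M = 3600.0   # assumed maximum copper loop length, in metres
MAX_LINES = 1800        # assumed capacity of the largest remote terminal

def centroid(cluster):
    # Line-weighted centroid of a list of (x, y, lines) customer points.
    w = sum(c[2] for c in cluster)
    return (sum(c[0] * c[2] for c in cluster) / w,
            sum(c[1] * c[2] for c in cluster) / w)

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def violates(cluster):
    # A serving area is infeasible if any customer lies beyond the copper
    # limit from the line-weighted centroid or the terminal is overloaded.
    cx, cy = centroid(cluster)
    too_far = any(dist((c[0], c[1]), (cx, cy)) > MAX_COPPER_M for c in cluster)
    too_big = sum(c[2] for c in cluster) > MAX_LINES
    return too_far or too_big

def split(cluster):
    # Split the cluster around its two mutually most distant customers.
    a, b = max(((p, q) for p in cluster for q in cluster),
               key=lambda pq: dist(pq[0][:2], pq[1][:2]))
    left, right = [], []
    for c in cluster:
        (left if dist(c[:2], a[:2]) <= dist(c[:2], b[:2]) else right).append(c)
    return left, right

def form_serving_areas(customers):
    # Divisively split one big cluster until every piece satisfies the
    # distance and capacity constraints (or cannot be split further).
    work, done = [list(customers)], []
    while work:
        cl = work.pop()
        if len(cl) > 1 and violates(cl):
            left, right = split(cl)
            if left and right:
                work.extend([left, right])
                continue
        done.append(cl)
    return done

# Toy example: (x, y) in metres from an arbitrary origin, with line counts.
customers = [(0, 0, 50), (500, 200, 30), (8000, 100, 40), (8200, 300, 25)]
print(len(form_serving_areas(customers)), "serving areas")

In the HCPM itself the initial split is followed by the reassignment and optimization passes mentioned above, and the user can choose among three clustering algorithms; the sketch only conveys the constraint-driven logic of serving-area formation.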
Loop design module. Distribution plant consists of the set of analog cables, structures, and other facilities, such as network interface devices, that are required to connect every customer location to the nearest serving area interface (SAI). Feeder plant consists of the set of fiber, digital copper (T1), or analog copper cables and structures that connect every SAI to the central office. When there is more than one SAI, the model identifies a primary SAI, the one located closest to the central office.

The feeder network, which connects every primary SAI to the central office, is designed using a "minimum cost spanning tree" algorithm, modified to take into account the cost of cable and structures rather than simple distance. Beginning at the central office, the algorithm builds a network sequentially by examining both the cable and structure costs involved in attaching new nodes to the network. Lowest-cost nodes are attached first. When each new node is attached, the connection is chosen so as to minimize the cost of cable and structures required to connect that node to the central office using the currently existing network. Distances can be computed using rectilinear distance or airline distance. In addition, "junction nodes" are placed at points due north, south, east, and west of the central office, along what would be the main feeder routes in a traditional "pine tree" feeder design.

In calculating the cost of the distribution network, the HCPM employs two separate algorithms. One algorithm deploys vertical backbone and horizontal branching distribution cables from the serving area interface to reach every populated microgrid. Branching cables run along every other microgrid boundary. Each microgrid is subdivided into equal-sized lots. Drop terminals are located to serve one to four lots, and cables are placed on every other lot boundary to connect with the backbone and branching cable leading to the SAI. The second distribution algorithm uses the same minimum cost spanning tree network as was used to design the feeder network. In this approach, microgrids are divided into lots and drop terminal locations are determined as before. A spanning tree network is then constructed which connects every drop terminal to its nearest SAI. All distance computations used in constructing the distribution network are based on rectilinear distance.

The HCPM incorporates a number of explicit optimization routines in both the distribution and feeder algorithms. It selects the appropriate feeder technology (fiber, digital T1 on copper, or analog copper) on the basis of cost minimization subject to engineering constraints defined by user inputs. The model also selects loop electronics by examining every feasible combination of large and small terminals and selecting the cost-minimizing outcome. In the feeder network, the model optimally determines whether to splice two fiber cables or run multiple cables at each junction point. All technology decisions are made on the basis of life-cycle costs, based on a table of technology-specific annual cost factors. In this way, the loop design module determines the total investment required for an optimal distribution and feeder network by building loop plant to the designated customer locations represented by populated microgrid cells.
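As an illustration of the feeder logic, the fragment below grows a tree outward from the central office by always attaching the serving area interface that is cheapest to connect, with edge weights given by cable plus structure cost over rectilinear distance. It is a minimal, hypothetical sketch rather than the HCPM routine: the cost figures are invented, and refinements such as junction nodes, technology choice, and airline distance are omitted.

CABLE_COST_PER_M = 12.0       # assumed installed cable cost per metre
STRUCTURE_COST_PER_M = 20.0   # assumed trench or pole structure cost per metre

def rectilinear(p, q):
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def edge_cost(p, q):
    # Cost of cable and supporting structure between two points.
    return (CABLE_COST_PER_M + STRUCTURE_COST_PER_M) * rectilinear(p, q)

def feeder_tree(central_office, sais):
    # Prim-style build: repeatedly attach the cheapest unconnected SAI
    # to any node already on the network, starting from the central office.
    connected = [central_office]
    remaining = list(sais)
    edges, total = [], 0.0
    while remaining:
        cost, node, attach_to = min(
            (edge_cost(n, c), n, c) for n in remaining for c in connected)
        connected.append(node)
        remaining.remove(node)
        edges.append((attach_to, node))
        total += cost
    return total, edges

# Toy example: SAI coordinates in metres relative to the central office.
co = (0.0, 0.0)
sais = [(1200.0, 300.0), (1500.0, -400.0), (-800.0, 900.0)]
total_cost, tree = feeder_tree(co, sais)
print(round(total_cost, 2), tree)

The same greedy idea, with costed rather than purely geometric edges, underlies both the feeder build and the second distribution algorithm described above.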
Applicability

More on the topics in the remainder of this section can be found in Sharkey et al. (1999). A key concern in developing countries is the importance of ensuring that poor users or regions get access to services. These concerns are typically built in through the inclusion of service obligations in the concession or other types of privatization contracts signed between governments and private operators. The main challenge for regulators is to ensure that the financing requirements claimed by the operators do not result in excess profits or grossly inefficient cross-subsidies. An independent cost modeling approach can be a significant advantage in addressing this concern.

For low-income build-out programs and universal service initiatives, the unbiasedness property thwarts the "cost-plus" inefficiency inherent in using accounting costs, since cash outlays are not based on monetary claims under the recipient's control. Because the HCPM approach involves economic optimization, the programs will be delivered at a lower cost (or level of subsidy) than under the alternatives when comparable input values are used. The following table compares the HCPM with the more traditional accounting approach.

Applications | Accounting Approach | HCPM
Access/Interconnection | Leads to application of wholesale rates instead of incremental cost based rates. | Flexibility limited only by computing time that the user can tolerate.
Universal Service | Impossible to separate costs of subsidized services from unsubsidized services. | Flexible and unbiased.
Price Cap Review | Will understate efficiency gains by over-weighting historical data. | Greater optimization allows substitution among inputs, though not perfect.
Poverty Programs | Leads to "cost-plus" type arrangement for low-income programs. | Can lead to balanced, efficient social infrastructure deployment.
Unbundled Network Elements | Accounting data not disaggregated in relevant way. | Flexible and unbiased.

Policy Objectives

The new regulators of infrastructure services in developing countries have a number of underlying objectives to meet. Among these objectives is the need to set independent benchmarks for cost-based regulation. Such independence is desirable both to avoid the pitfalls of cost-plus reward structures and to avoid the undue influence of any one industry group on policy decisions. Benchmarks developed using the HCPM approach avoid both pitfalls. Another objective is maintaining control of the information used in policy applications. The HCPM approach guarantees that all data input to the model are generated in a public process in which all parties are invited to participate.

A third objective is to enhance the transparency of cross-subsidies embedded in pricing schemes. The HCPM is likely to be the most accurate of the approaches, since it incorporates economic optimization and thus will produce more accurate estimates of the costs of networks delivering subsets of the services provided by a complete network (stand-alone costs). Such computations are necessary for determining the presence and size of cross-subsidies. Once again, we present a table in which the HCPM and the accounting methodologies are shown together.

Policy Objectives | Accounting Approach | HCPM
Benchmarking | Accounting models do not set independent benchmarks. | All sides can recognize independent benchmarks.
Information Control | Determined by accounting rules. | Determined by public data inputs.
Cross-Subsidy Transparency | Accounting models do not enable calculation of stand-alone costs; thus, subsidies cannot be calculated. | Greater optimization improves estimates of cross-subsidies.

Theoretical validity

The credibility of any model proposed for policymaking is determined by the adherence of the model to theoretical precepts. Such precepts include the realism of the model's treatment of customer locations, the incorporation of economic optimization, and the usefulness of the model for calculating true long-run incremental costs.
The HCPM accepts fully disaggregated customer location data and "builds" plant to individual customer locations, as mentioned in earlier sections of this paper. This characteristic helps to convince skeptics who feel that cost models build "fantasy" networks, and it is likely to result in more realistic cost estimates. Accounting models have no provision for handling customer locations on a prospective basis, since they can only model costs that have already been incurred. Moreover, they do not incorporate any significant degree of economic decision-making, and they overstate the incremental costs of services, since inherent in the approach is a non-economic allocation of joint and common costs.

Limitations

The HCPM also has some limitations. First of all, the HCPM is unable to model a dynamically evolving telecommunications network. Such a model, involving complex dynamic optimization techniques, is likely to be quite computationally expensive and is beyond the current ability of cost modelers to develop. Given this shortcoming, the HCPM may provide a second-best advantage in that it optimizes more fully than the other approaches, so one can use the HCPM to calculate an optimal static network at various points in time in order to approximate certain dynamic considerations. This repeated static exercise will not yield an optimal time path for the network (since it always "rebuilds" the network from scratch), but it is an improvement over alternatives that do no optimization at all.

Another limitation of the HCPM is that it does not model the labor input in a detailed way. Accounting approaches may be somewhat better in this dimension, since if the data are available one can observe a time path of labor cost, perhaps by activity type. However, a well-designed industry or independently designed proxy model can make use of this information in order to calibrate input values appropriately. Both approaches also share the limitation that economic depreciation information is not generally available; conceivably, information requests may make economic depreciation information available over time.

Accounting-based approaches have a clear advantage in terms of current data availability and in their treatment of stranded investments. The forward-looking investment data necessary to use the HCPM can be difficult to collect, requiring a large amount of staff time. Accounting models handle stranded investments naturally, since they are inherently retrospective. Of course, it is a matter of some debate in many circumstances whether such investments actually exist or are a significant proportion of total plant; to the extent that they are, proxy model approaches must be adjusted to account for stranded investments if necessary for the regulatory objective.

IV. How Much Data Does a Regulator Really Need?

Data Requirements

A main concern in most reforming developing countries is limited access to data. This implies that a careful assessment of the data capacity of a regulator is probably the first stage in the development of a toolkit for any regulator. The following table (reproduced from Sharkey et al. (1999)) shows the input requirements of the HCPM, compared with those of the accounting approach.
Data Requirements | Accounting Approach | HCPM
Customer data | Only aggregate historical data used. | Uses and processes data at any level of aggregation.
Input Prices | Highly aggregated; not forward looking; accounting categories not relevant to economic issues. | Highly disaggregated input data possible.
Engineering | Engineering inputs other than historical experience not possible. | Based on network optimization principles; can design to a variety of engineering standards.
Time to implement | Accounting data available now in many countries, but not in others. | Logic available now; inputs may require six months or more.

Most typically, regulators of privatized utilities will start from scratch and initially have only a very modest stock of information to work with. As regulators build on this information, they will need to consider the policy issues that they will ultimately have to address. These issues determine the form of the best cost model for regulatory purposes and therefore should determine the optimal degree of disaggregation to impose on the data collection process. A significant advantage of a flexible model such as the HCPM is its ability to accommodate various levels of input data disaggregation. Useful results can be obtained initially with relatively aggregated Census-level inputs, but the same modeling engine is able to accept the most disaggregated customer location data that can be provided. Other inputs, representing the costs of best international practice in specific areas, can also be derived from the current values used in the HCPM, since these are driven by technology rather than local conditions. At a later stage, these inputs can be customized to fit local conditions as required.

But what is the usefulness for regulation of cost proxy models like the HCPM? Several questions arise in this context: What is the least information a regulator should have at hand in order to trust the results given by the model? What is there to be gained if better data are available? What can the regulator do with different qualities of information? Precision will normally pay, so it is likely to be worthwhile to make some investment in data collection and processing.

We mentioned that the HCPM was designed with the expectation that it would ultimately be used with a source of geocoded data (i.e., geocoded customer location data or Census block level data). But does the lack of geocoded locations imply that the model cannot be applied? The answer is no (with some qualifications). If a map of the study area is available, a grid can be placed on top of it and blocks can be constructed, each of which is given a location in the grid. In this most primitive case (the least information case), we assume that only the total number of residential and business lines for each wire center, the locations of the wire centers, and a fairly detailed country or regional map are available. We assume that a user is capable of manually "gridding" the map, creating a matrix in which areas that appear from the map to be populated are given the value '1' and areas that appear not to be populated are given the value '0'. Each populated mesh in the grid is assumed to have the same number of lines and the same area; these data are converted via a straightforward computer program into input files usable by the HCPM. The program assigns each populated mesh block to the nearest wire center as measured by earth-surface distance between the coordinates of the mesh block and the switch.
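A minimal sketch of this least-information preprocessing step is given below. It is our own illustration, not the authors' conversion program: the function names, the equal split of a wire center's lines across the meshes assigned to it, and the toy coordinates and line totals are all assumptions.

import math

def haversine_km(a, b):
    # Earth-surface (great-circle) distance between two (lat, lon) points, in km.
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(h))

def build_inputs(grid, mesh_coord, wire_centers, lines):
    # grid: 0/1 matrix from the hand-gridded map; mesh_coord(i, j) gives the
    # (lat, lon) of a mesh centre; wire_centers maps id -> (lat, lon);
    # lines maps id -> total residential plus business lines.
    # Returns (mesh, wire_center_id, lines_in_mesh) records for the cost model.
    assignment = {}
    for i, row in enumerate(grid):
        for j, populated in enumerate(row):
            if populated:
                p = mesh_coord(i, j)
                wc = min(wire_centers,
                         key=lambda w: haversine_km(p, wire_centers[w]))
                assignment.setdefault(wc, []).append((i, j))
    records = []
    for wc, meshes in assignment.items():
        per_mesh = lines[wc] / len(meshes)   # equal lines per populated mesh
        records.extend((m, wc, per_mesh) for m in meshes)
    return records

# Toy example: a 2 x 3 gridded map, two wire centers, assumed line totals.
grid = [[1, 0, 1],
        [0, 1, 0]]
mesh_coord = lambda i, j: (-31.40 - 0.01 * i, -64.18 - 0.01 * j)
wire_centers = {"A": (-31.400, -64.180), "B": (-31.415, -64.205)}
lines = {"A": 1200, "B": 3400}
for record in build_inputs(grid, mesh_coord, wire_centers, lines):
    print(record)

A run like this only fixes the spatial pattern of demand; the loop design module then builds plant to the populated meshes exactly as it would to Census blocks or geocoded points.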
Alternatively, another minimum information case might be one in which the wire center locations are known and customer locations are known only at a more aggregate level, that is, very imprecisely (intermediate information case 1). We have implemented this scenario using Census data for the second largest city of Argentina (a brief description of the database used for these exercises can be found in the appendix) at the level of the fracción, which is a collection of radios, each of which in turn is a collection of manzanas, the smallest reporting unit available. (Those familiar with U.S. Census data reporting conventions can think of the manzana as analogous to the Census block, and the radio as equivalent to the Census block group, or CBG.) We have also computed costs when information is available at the level of the radio. The results of this thought experiment can be seen in Table 1.

Another intermediate information example (intermediate information case 2) might be one, as in Argentina, in which the wire center locations are known, line totals are known, and location data are known at the level of the manzana. The relatively disaggregated location data are used to impute a more uneven customer location distribution by assigning the lines of each wire center proportionally to the populations of the manzanas it serves. The results of the computations for this case can also be seen in Table 1 below.

Table 1: Total investment (in dollars) by level of aggregation of the location data

Total lines | Manzana | Radio | Fracción
320,403 | 215,032,106 | 216,802,535 | 233,801,440

The total investment shown in the table is the sum of feeder costs, distribution costs, drop costs, and some other costs (e.g., terminals, interfaces); it does not include maintenance, depreciation, and cost of capital charges, among others. As can be readily seen from the table, the use of relatively aggregated Census data does not markedly bias the results with respect to what we take to be the best approximation to the true cost, i.e., intermediate information case 2 (manzana). Using the radio, there is no significant difference in total investment (less than 1% higher), and using the fracción, total investment is only 9% higher than with the manzana (our best estimate of the true cost, at least with Census data).

The best situation regarding data availability would be one in which cost calculations could be performed with nearly perfect information on precise individual customer locations (maximum information case). The telephone companies could provide such a database, in which customer addresses could be geocoded using a standard GIS package such as ArcView. Each location could then be mapped, using a simple custom computer program, to the nearest wire center location. Should this kind of information not be available, the regulator could resort to Census data at any level of (dis)aggregation, as discussed above. The question remains whether this would be a good substitute for the precise individual customer locations. To show that it could be (at least in populated areas), we considered two wire centers (A and B) of Argentina's second largest city and compared the results using the geocoded locations of the customers served by those wire centers (geo-A and geo-B) with the results when Census data (manzana level) corresponding to the area served by them are used (census-A and census-B). (The source and description of this kind of information are also given in the appendix.) The estimates are shown in Table 2.
Table 2: Total and average investment, Census (manzana) versus geocoded customer data

 | Total lines | Total investment (in dollars) | Average investment (in dollars)
Census-A | 1,716 | 1,257,492 | 733
Geo-A | | 1,216,582 | 709
Census-B | 5,169 | 3,113,898 | 602
Geo-B | | 2,964,068 | 573

The table shows that there are no large differences between the two data sources, so in principle Census data could be used whenever geocoded customer locations are not available. Moreover, if the regulator wishes to measure the costs of serving a currently unserved area, she will have to resort to information from the Census, since by definition there are no customers there yet.

V. Conclusions

Cost proxy models are promising regulatory tools that can be used to assess the efficient cost of providing telephone services. These models could enable the regulator to
estimate the forward-looking economic cost of the service without having to rely on detailed cost studies that would otherwise be necessary. This alternative methodology provides a non-discretionary framework within which regulators and firms can discuss with a significant degree of objectivity, and which can provide an independent check on the accuracy of firms' cost studies. However, lunch is not free: these alternative approaches require much more time and effort in data collection and preparation, as well as in model design.

In general, reliable and detailed information (as required by cost models) is a scarce good in developing countries. In this paper we have established the minimum information requirements that a regulator needs to implement a cost proxy model approach, and we have shown that this "data constraint" need not be that binding. In particular, the HCPM can run with the following inputs:

* Census data. Each unit surveyed in the Census (e.g., Census block or Census block group) should be referenced to a system of coordinates (e.g., latitude and longitude; nevertheless, the model can be modified to work with another system, even with an Excel worksheet, when latitude and longitude coordinates are not available). There should be information on the number of households in each Census unit, and on the number of units in a Census unit group.

* Location of wire centers. Each wire center in the study area should be referenced to a system of coordinates (with the same qualification as above when latitude and longitude coordinates are not available). There should be information on the number of lines provided by each wire center. The telephone companies could provide this information.

* All other inputs are provided by the model (i.e., factor prices, technologies, etc.), based on information for the US, but they can be freely varied by the regulator when such information becomes available in the country.

We have also shown that Census data are a good substitute for individual customer locations, and that the level of aggregation of the Census information does not markedly alter the estimates. Geocoded customer locations may prove difficult for the regulator to obtain, but aggregated Census data are likely to be available in most developing countries, making cost proxy models easier to apply to their particular realities.

References

Atkinson, J., C. Barnekov, D. Konuch, W.W. Sharkey and B. Wimmer (1997), The Use of Computer Models for Estimating Forward-Looking Economic Costs: A Staff Analysis, Federal Communications Commission, January 1997, United States.

Benitez, D.A., M. Celani, O.O. Chisari, M.A. Rodriguez Pardina and C.A. Ruzzier (1999), Minimizing the Costs of Universal Service Obligations in Argentina's Telecoms Sector Through a Cost Model, XII World Congress of the International Economic Association, August 1999, Buenos Aires, Argentina.

Bush, C.A., D.M. Kennet, J. Prisbrey, W.W. Sharkey and V. Gupta (1998), The Hybrid Cost Proxy Model: Customer Location and Loop Design Modules, August 1998.

Oftel (1995), A Consultative Document on Universal Service in the UK from 1997, Office of Telecommunications, December 1995, United Kingdom.

Oftel (1997), Universal Telecommunications Services, Office of Telecommunications, July 1997, United Kingdom.

Sharkey, W.W., D.M. Kennet, A. Estache, L. Guasch and M.A. Rodriguez Pardina (1999), A Comparison of Cost-Modeling Approaches for Regulatory Policy Development, mimeo, March 1999.

Appendix

The information needed to run the exercises presented above involves customer data, wire center data, and parameters (both technological and economic). We describe here our sources and the database itself.

* Customer data. The information on individual customer locations was provided by the provider of telephone services in the study area (Telecom de Argentina). It contained, for each customer, her complete address and the wire center to which she was connected. The distribution of lines (customers) by wire center is given in Table A1 below. Each customer location was geocoded using ArcView, a standard GIS package. The alternative source of customer information was Census data, obtained from INDEC, Argentina's office of statistics and census. This database contained information from the 1991 Census on households and small businesses at the level of the manzana, and on large businesses at the level of the radio. We assumed a homogeneous distribution of these large businesses among the manzanas that form a radio. Table A2 below gives a description of the Census data used.

* Wire center data. This information was also provided by the company, containing, for each wire center, its location (address) and type (host or remote). Some wire centers were located in the same building; we added these up and took them to be a single (larger) wire center when performing the estimations. Table A3 shows how these groups of wire centers were formed.

* Parameters. All the information on costs and technology was taken from the FCC's web site (www.fcc.gov).

Table A1: Distribution of lines by wire center

Wire Center | Lines | Wire Center | Lines | Wire Center | Lines | Wire Center | Lines
2 | 8,650 | 71 | 1,760 | 9 | 3,340 | 21 | 21,545
57 | 2,973 | 75 | 508 | 94 | 7,797 | 22 | 6,920
60 | 5,944 | 76 | 8,885 | 96 | 832 | 23 | 4,436
61 | 8,477 | 77 | 899 | 97 | 5,829 | 24 | 3,530
62 | 1 | 78 | 8,954 | 297 | 18 | 25 | 3,002
64 | 7,638 | 79 | 826 | 901 | 96 | 26 | 1,187
65 | 9,017 | 80 | 10,276 | 902 | 153 | 27 | 98
66 | 3,180 | 81 | 8,895 | 904 | 78 | 28 | 638
68 | 3,204 | 82 | 2,653 | 990 | 836 | 33 | 437
69 | 2,297 | 84 | 8,055 | 993 | 178 | 51 | 9,600
70 | 6,603 | 85 | 80 | 995 | 415 | 52 | 6,952
71 | 9,784 | 88 | 5,653 | 997 | 795 | 53 | 347
72 | 4,881 | 89 | 7,492 | 998 | 455 | 55 | 9,641
73 | 3,656 | 92 | 3,306 | 999 | 169 | |
Total | 233,873

Table A2: Census data

Unit type | Number of units | Number of manzanas
Viviendas | 290,708 | 12,172
Locales Comerciales | 30,233 | 8,106
Total | 320,941 | 12,712

Table A3: Wire center groups

Group | Wire Centers | Number of lines
1 | 71-72-73-74 | 20,071
2 | 51-52-53 | 16,888
3 (A) | 81-82 | 11,575
4 | 20-21-22-23-24-25-26-27-28-33 | 41,447
5 | 80-88-89 | 23,488
6 | 70 | 7,071
7 | 78-79 | 9,967
8 | 60-68-69 | 11,537
9 | 75-76-77 | 9,505
10 | 61 | 8,663
11 | 84-85 | 8,618
12 | 65-66 | 12,337
13 | 55-56-57 | 21,481
14 (B) | 64 | 8,078
15 | 97 | 5,869
16 | 92 | 2,911
17 | 997 | 834
18 | 93-94 | 9,182
19 | 999 | 134
Total | 46 wire centers | 229,696