Four sets of comments, two from Federal agencies and two from parties outside the government, specifically addressed the economic assessment. The comments highlight a number of important shortcomings in this assessment but do not contain sufficient detail to allow us to modify the analysis. We nevertheless appreciate the input from the commenters and seek in this chapter to address their concerns and describe the implications those concerns might have for the conclusions drawn in the economic assessment.

A Federal agency suggested looking at analyses being done by commercial and professional organizations, such as the May 3, 2000 report on the Tech Republic site or the technical index of the Association for Computing Machinery. Unfortunately, we were unable to locate any information in these sources that was generalizable. A thorough search for data sources was conducted at the time of the original analysis. Undoubtedly, some sources of data were missed in that initial search. However, without performing an entirely new analysis based on a specific data source, there is no way to judge whether new data would significantly change the conclusions of the economic assessment. The same agency also suggested that the Board work with the OMB to expand the Federal budget "object class codes." While this may be useful for future assessments of information technology expenditures, it is not likely to provide information that is useful to inform this analysis.

The same agency also questioned the assumptions associated with the modification of mission-specific software. The agency noted that it spends a far greater percentage of its information technology budget on mission-specific applications than on general office software. The agency estimated that the cost to modify mission-specific software was higher than the 1 percent to 5 percent used in the economic assessment.

With respect to the relative level of spending on general office software and mission-specific software, it is not clear whether the comment reflects the unique circumstances of a single agency or a more systematic misunderstanding of Federal information technology spending on the part of the authors of this analysis. Because the 20 percent assumption was meant to represent an average, it is assumed that some agencies will buy a greater proportion of general office software, while others will purchase a greater proportion of mission-specific products. If, however, the concerns of the commenter are broadly generalizable to the rest of the Federal government, the estimate of the cost to the Federal government of general office software would decrease and the mission-specific estimate would increase. The cost to the Federal government would increase, as would the total cost to society, because changes in general office software costs are borne by the entire market irrespective of the amount purchased by the Federal government. The magnitude of this shift in costs is unknown.

With respect to the concern that a 1 percent to 5 percent increase in the cost of developing mission-specific software applications may be low, we concede that this may be the case. However, the commenter gave no indication of how much this assumption may understate costs for the software the agency develops. Moreover, we have no basis for judging whether this concern is shared by other agencies or is specific to a particular agency. As a result, we cannot draw any generally applicable conclusions from this concern, other than to note that the $10 to $52 million estimated as the increase in costs for these products may be understated by some unspecified amount.

The same agency also questioned whether the analysis includes contractor dollars to develop software. It is our understanding that contractor costs are included in the GSA categories used in the analysis. If this were not the case, the total costs of the rule estimated by this analysis would be an understatement of the real cost of the standards.

In addition to these concerns, there are two other factors associated with the use of the GSA data that may systematically understate costs:

  • The methodology does not capture expenditures on software that may be purchased in conjunction with other goods and services purchased by the government on other schedules.
  • The methodology does not capture the marginal increase in the cost of developing software completely within the Federal government.

Again, the magnitude of these additional costs is unknown, but the direction of the bias is clear.

One private sector commenter expressed concern that the costs of the standards are "severely understated." The commenter cited two specific flaws in the analysis, which are quoted here in their entirety:

The determination of price in the marketplace is not based totally on the cost of production but rather largely on perceived value to the consumer. Since, as the Access Board has pointed out, accessibility features will mostly be invisible to individuals who do not have disabilities, they will therefore not contribute to their perceived value of the product. Consider this, if a software developer were to just make accessibility modifications to a product and then offer it as a new version at a higher price, the individual who does not have disabilities would refuse to buy the product at the higher price.

The scarcest resource in software development is experienced developers. In general, adding accessibility modifications will divert resources from the production of new and enhanced features, which will have the potential of wide acceptance and significant return on investment. Distributing the cost of the accessibility modifications across society not only makes the non-handicapped customer pay for unperceived value but it also deprives the user of new features and functions that may have value. Just hiring new people, even if they were available, does not change these parameters.

The first concern affects who pays for the costs, not their magnitude. It is assumed in this analysis that the total cost of adding accessible features to a software program (including development, marketing, documentation, etc.) is the same whether one copy or one million copies are sold. That cost can then be recovered in one of three ways: the price of the product can be raised for all consumers of the product; the price can be raised only for those consumers who benefit, or only for the Federal government; or the company is forced to bear the cost, in which case the stockholders or owners "pay" for the accessible features. The ability to pass costs on to consumers broadly will be limited by their willingness to pay for the new features and the consumer surplus associated with their purchase of the initial product. If the accessible features of a product are transparent and not valued by most consumers, it will be difficult to pass these costs on to consumers unless they were already paying less for the non-accessible version than they were willing to pay. This analysis assumes that consumer surplus is sufficient to absorb the increase in costs.

The ability to pass the costs along to consumers through price discrimination (charging different consumers different prices) is limited again by the value of the features and by the ability of the seller to control who buys which version of the software. If this is the approach taken, it may make sense for the seller to market two distinct versions of the software. An extreme version of this approach would be to increase the price of software sold to the government sufficiently to cover the entire cost of the new features. This option is only viable if the Federal government is somehow precluded from buying the product elsewhere. If neither of these options is available to the seller, then the seller will be forced to bear the entire cost of the standard. The likely result of these standards will be some combination of the above. However, the question of who pays for the costs does not affect the overall costs of the standard.

The other concern regarding diverting software development resources is a completely valid criticism of the analysis. The analysis only quantifies the cost of the resources diverted to incorporating accessible features into software. It does not quantify the opportunity costs associated with what those same programmers would have been doing otherwise. While we cannot estimate the magnitude of this cost, it should have been more fully discussed in the analysis. Section 4.7 attempted to identify some of the unquantified opportunity costs, including the reduction in the rate of innovation. It is important to note, however, that the magnitude of the opportunity cost is limited by the marginal value to sellers of selling to the Federal government.

A Federal agency questioned the assumptions used in the wage gap analysis of the Federal workforce. The agency suggested that Federal workers with disabilities can be accommodated without the standards and that job satisfaction surveys may be a better source of information. While we agree that the wage gap analysis has significant limitations, we disagree with the comment. While individual accommodations clearly provide benefits, wage and productivity differences continue to exist despite such accommodations and may decrease as a result of the standards. While increases in job satisfaction are clearly a benefit, the purpose of these standards is to reduce or eliminate the barriers to productivity that result from lack of access to electronic and information technology. Increases in productivity are thus the more appropriate measure of the benefits of the standards, and wages are a convenient measure of productivity.

Another Federal agency questioned the use of aggregate statistics, favoring instead separate estimates of costs and benefits for each disability type. While such an analysis might be an improvement over the reliance on aggregates and averages used to generate estimates in this analysis, neither the data nor the Board's resources supported such an approach.

Another commenter raised a number of important issues with respect to the analysis. While the economic assessment can be an important tool to inform the discussion, it was never intended to be the dispositive analysis of all possible costs and benefits of the standards, nor is it intended, as suggested by the commenter, to guide the implementation of the standards in any way. We agree with the commenter that the document is insufficiently precise or complete to serve such a purpose. The commenter stated that the standards are likely to impose a wide variety of technical and non-technical costs including:

For Companies:

  • engineering and development
  • usability/human factors
  • product testing and quality assurance
  • product management
  • product development process management
  • marketing
  • government sales staff
  • product documentation
  • technical support
  • customer support
  • corporate communications
  • legal and regulatory

For the Federal Government:

  • procurement
  • IT planning
  • human resources
  • legal
  • technical support

We believe that the analysis captures most of the costs borne by companies, both technical and non-technical, because these costs will be reflected in the price of the products sold to the Federal government. On the other hand, the costs to Federal agencies for the items enumerated above are not quantified in the analysis. Instead, we discuss these costs without attempting to attach a number to them. This approach was taken due to a lack of available information. As a result, however, the quantified estimates clearly understate the true total cost of the standards.

The commenter also correctly pointed out that the timing of benefits and costs matters. We agree. However, we did not have any reliable information on the rate of decline of costs. Therefore, we used a steady state approach that inherently assumes that accessible products are available at reasonable cost.

The commenter suggested that some of the cost estimates on the upper end of the cost range may represent an undue burden. This interpretation of undue burden is inconsistent with our understanding of that term. It is our understanding that an undue burden is an affordability test, not a benefit/cost test. The commenter also notes that benefits may be underestimated due to a failure to consider increases in the productivity of workers without disabilities resulting from some subset of the accessible features. Again, this benefit is discussed in the analysis, but not quantified. It is difficult without considerably more information to assess which functions will have what benefit to whom. We must again satisfy ourselves with the understanding that the quantified estimates of benefits may underestimate the true benefits of the standards because these effects are not quantified. The same is true of improvements in providing public access to government information.

The commenter took issue with the assumption that software would only be produced in a single accessible form for all markets, citing the existence of multiple versions of many existing programs. This concern is testable merely by observing the number of software products that are currently marketed in separate accessible and non-accessible versions. We are unaware of any such product. Moreover, this concern is contradicted by the comments of manufacturers who agreed with this assumption. As discussed previously, this assumption affects who pays for the costs rather than total costs.

The commenter also took issue with the assumption that Federal and non-Federal versions of hardware would be available. The commenter cited the expense of manufacturing two product lines to serve the same function. While it may be beneficial in some cases to stay with a single design, the choice to do so will be based largely on the economic implications of that decision. Therefore, the assumption that two product lines will exist represents a reasonable upper bound of the costs that can be attributed to the standards. It should be noted, however, that to the extent this assumption does not hold true, the estimate of the benefits of the standards will be understated due to a failure to consider spill-over benefits of accessible hardware.

The commenter also suggested that the benefits are understated for failure to consider the benefits to all users. It is not clear to us which accessible features will have benefits of what magnitude for which users. Many accessible features are invisible unless a user affirmatively turns them on. It is not clear who will do so and why. In any case, the statement that all accessible features increase the ease and convenience (and presumably productivity) of all users seems overstated. The commenter also suggested that unlike costs, which can be expected to decrease over time, the benefits of the standards can be expected to increase over time. The commenter appears to suggest that this analysis may underestimate benefits as a result of the failure to consider this increase over time. The commenter offers several supporting arguments:

  • First, the number of accessible features will increase over time, and they will become more available as the associated costs fall below the undue burden level. While both of these factors may be true, the analysis assumes that fully compliant products will be available immediately and that Federal agencies will not use the undue burden exemption. As a result, the analysis may overstate benefits in the short term, not understate them in the long term. However, future benefits may be understated if costs come down sufficiently over time that they no longer justify the manufacture of separate Federal hardware products and there is significant spill-over of accessible hardware into the general market.
  • Second, increased awareness of accessible features will create additional benefits without requiring actual changes in products. Again, while this may be true, the analysis already assumes perfect knowledge and implementation of all accessible features required by the standards.
  • Third, the increased options of persons with disabilities will be recognized earlier in their education, perhaps reducing special education and remedial education, as well as improving educational and employment outcomes. This is, in fact, a category of benefits that is not included in the analysis. We do not know how to assess the magnitude of these benefits, but we acknowledge that they are important.

The commenter understood our analysis to assume that all software products were complete re-writes. This is, in fact, not the case. We assumed that software products were in a constant process of improvement and that some proportion of that process of improvement would be devoted to ensuring that both the underlying product and the improvements were accessible.

The commenter offered suggestions on the issue of training. On the one hand, the commenter argued that accessible technology should decrease the cost of training an employee with a disability. This benefit should already be captured in the increased productivity of that worker. On the other hand, the commenter agreed that the standards would result in the need for additional training within agencies and companies. The commenter suggested that requests for documentation in alternate formats are likely to be low. We have no basis upon which to disagree with these comments. However, until those requests are zero, companies must still produce documents in alternative formats. In any case, documentation costs are not separable in the economic assessment. The commenter noted that the assumption in the analysis that the undue burden exemption will not be used may not be accurate. This was a simplifying assumption that results in both benefit and cost estimates being overstated. The magnitude of this bias will be determined by how Federal agencies and the courts choose to interpret this language.

The commenter suggested that the inclusion of the "information technology services" category in the calculation of software costs may not be justified in its entirety. Some of the products purchased under this category may not be affected by the standards such as project management or temporary personnel services. The commenter suggests that we break this category of costs into services that are affected by the standards and those that are not. We do not have the information that would allow us to do such a breakdown of the spending in this category. Moreover, while the services identified by the commenter may not be directly analogous to the costs of software modification, the standards will impose additional costs in the form of training and other necessary expertise for all individuals who deal with Federal computer systems. The degree to which inclusion of these services leads to an overstatement of costs is also offset, at least to some degree, by the fact that there is likely to be some software, hardware, or other computer service component in many of the products purchased by the government on schedules not included in the analysis.

The commenter also suggested that the cost range estimated using the number of accessibility specialists in a software company overstates the cost of the standards because not all of the accessibility specialists are involved in software development. The commenter suggests that we lower our cost range by a factor of sixteen to reflect the fact that in one company only 25 percent of accessibility specialists are involved in software development and to reflect the unsupported assumption that accessibility specialists only leverage a quarter of a work year from other employees in the firm. We disagree with this conclusion. First, replacing one set of imperfect assumptions with a second does little to improve the analysis. Second, and more importantly, the cost of a product reflects all of the costs associated with its development, marketing, legal and regulatory compliance, and client service. The fact that many accessibility specialists are involved in the non-development activities at a firm does not eliminate their contribution to the cost of a product.
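The commenter's factor of sixteen follows directly from multiplying the two fractions described above. A minimal arithmetic sketch (both fractions are the commenter's figures as reported here, not assumptions of our analysis):

```python
# The commenter's proposed adjustment combines two fractions:
# 25 percent of accessibility specialists work on software development,
# and each specialist is assumed to leverage only a quarter of a
# work year from other employees in the firm.
share_in_development = 0.25
leverage_per_specialist = 0.25

combined_adjustment = share_in_development * leverage_per_specialist  # 1/16
reduction_factor = 1 / combined_adjustment

print(reduction_factor)  # 16.0
```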

The commenter did not agree with the 5 percent upper bound incremental cost for compatible hardware products. As stated in the assessment, this cost estimate was included as a sensitivity analysis. We agree with the commenter that the real number is likely to be closer to the lower bound estimate of zero. If we use the 1 percent increase suggested by the commenter as an upper bound, the sensitivity of the analysis to our assumption of zero cost falls from $337 million to $67 million.
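The relationship between the two sensitivity figures is a simple proportional scaling, under the sensitivity analysis's assumption that the hardware cost estimate scales linearly with the assumed incremental percentage. A minimal sketch (the function name is ours; the dollar figures are from the assessment):

```python
# Rescale a cost estimate derived at one assumed incremental percentage
# to another, assuming the estimate scales linearly with that percentage.
def scaled_sensitivity(baseline_millions, baseline_pct, new_pct):
    """Rescale a cost estimate (in $ millions) from baseline_pct to new_pct."""
    return baseline_millions * (new_pct / baseline_pct)

# The 5 percent upper bound corresponds to $337 million in the assessment;
# the commenter's suggested 1 percent upper bound scales this to about $67 million.
estimate = scaled_sensitivity(337, 5, 1)
print(round(estimate))  # 67
```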

In sum, the comments raised a number of valid and interesting concerns. While we cannot say with absolute certainty what the overall impact of weaknesses in the assumptions used in this analysis might be, it appears a reasonable conclusion that both the benefits and the costs of the standards are understated by some amount.


1. If manufacturers do not distribute the costs across society, the upper bound of the Federal cost will increase to an estimated $1,068 million.

2. In a related matter, in April 1987, GSA published Bulletin 48 in FIRMR which set forth requirements for Federal agencies to provide accommodations to meet the needs of employees with disabilities when replacing computer systems.

3. The definition of electronic and information technology established by the Access Board must be consistent with the definition of information technology contained in the Clinger-Cohen Act (40 U.S.C. §1401(3)). Congress enacted the Clinger-Cohen Act in 1996 for the purpose of creating consistency across Federal agencies in the acquisition, use, and disposal of information technology. The Clinger-Cohen Act defines information technology as: "any equipment or interconnected system or subsystem of equipment, that is used in the automatic acquisition, storage, manipulation, management, movement, control, display, switching, interchange, transmission, or reception of data or information by the executive agency."

Among other things, Clinger-Cohen states that information technology includes computers, ancillary equipment, software, firmware, support services, and related resources.

4. U.S. Department of Commerce, Economics and Statistics Administration, Americans with Disabilities: 1994-95 (P70-61), August 1997.

5. The SIPP is a longitudinal survey of adults in households obtained from a multi-stage stratified sample of the noninstitutional resident population of the United States. It is a multi-panel survey with a new sample (panel) introduced at the beginning of each calendar year. The initial selection of households into the survey is done according to a sample selection methodology similar to that used for the Current Population Survey (CPS). The primary focus of SIPP is adults, i.e., persons 15 years old or older in the initial household sample.

6. EEOC, "Annual Report on the Employment of Minorities, Women and People with Disabilities in the Federal Government: For the Fiscal Year Ending 1998." See Table II-6.

7. CPDF is an automated file created by OPM. The file is based on personnel action information submitted directly to the OPM by Federal agency appointing offices. The Standard Form 50, "Notification of Personnel Action," is the basic source of input to the CPDF. The CPDF does not include data for the Tennessee Valley Authority, United States Postal Service, Army and Air Force Exchange Service, Central Intelligence Agency, Defense Intelligence Agency, the National Imagery and Mapping Agency, or the National Security Agency. These agencies make up approximately 30 percent of the Federal workforce.

8. OPM, Federal Civilian Workforce Statistics: Pay Structure of the Federal Civil Service as of March 31, 1998. Data from the tables on page 51 are used to derive the average rate in each pay grade. The range for senior pay is derived from the subtables for Senior Executive Service (ES), Executive Schedule (EX), and Senior Level (SL & ST). The average rate for the other category is based on data from Figure 3 for all areas.

9. OPM, "Federal Civilian Workforce Statistics."