Inadequate Testing Can Be Risky Business
In general, a product manufacturer or importer has a duty to provide a product that is reasonably safe and effective for its intended purposes. The company can be held liable for damages to persons or property incurred when the product is used according to the expectations of a typical consumer. This is the basis for the principle of "implied warranty," as opposed to an "express" (written) warranty that clearly spells out what the manufacturer will or will not do in the event of a claim; the former often trumps the latter.
While product liability insurance is a useful backstop against claims, a better strategy is to limit the risk in the first place. Rigorous testing of products is an excellent means of minimizing legal liability, be it for performance that does not meet required codes or standards, product design or composition that endangers the end-user, or actual failure that causes injury or property damage. Such testing is, in effect, the cheapest form of "insurance" a manufacturer can buy. The cost, versus product recalls and damage to reputation, to say nothing of lawsuits, is minuscule by comparison.
An obvious legal defensive strategy against claims would seem to include dilution of liability, given that fenestration manufacturers typically have little influence on specifiers' selection of product, proper installation by the contractor or proper use by the consumer. But it is not so obvious to plaintiffs.
“Control over the product is an essential element of any product defect case,” says attorney Paul Gary, principal of the Portland, Ore.-based Gary Law Group, which specializes in defending window and door liability cases. “The method, manner and extent of control exercised by a manufacturer over a product as it moves into service impacts not only interaction with the customer, but also potential legal actions. The list of variables that affect the presence of damage includes installation methods, installation materials, adequacy of the moisture barrier, overall site exposure, individual exposure of each window opening, etc.”
This principle has in fact been recognized in court. In a landmark case, an insurer attempting to recover damages it had paid to an insured due to window leakage argued that clear evidence of leakage, together with the sealing of the window at the factory and its "unaltered" arrival at the job site (sufficient evidence of control), meant the window manufacturer was by definition negligent. While the court recognized the role of the contractor, installer and building owner, the manufacturer was still held liable in that it did control the original design and production.
The manufacturer's defense at this point would have to refute the existence of any defect in design or manufacture and show how other parties and the passage of time could have altered the product after it left the factory. The message is clear: a manufacturer must exercise control over proof of performance in the form of credible third-party testing and the attendant paper trail.
It’s in the Water
By far the majority of product liability suits, both individual and the dreaded class action, concern window leakage and the ensuing damage it may cause. “Many window and door companies have been hit by the tidal wave of litigation alleging water infiltration, including [claims for] dry rot, mold and other consequential damage,” Gary reports.
Water leakage is a complex phenomenon involving the interaction of gravity, capillary action, surface tension and pressure differentials. Fortunately, there are accepted practices and performance standards to provide design targets and laboratory tests to verify the designed performance. Test results can help identify realistic warranty terms and reduce future liability, something liability defense attorneys recommend, as well as provide a form of “insurance” against liability claims.
Testing vs. Inspection
Inspections and reviews of production lines, although useful for verifying and improving processes and determining production line compliance with measurable or observable attributes such as dimensional specs and bills of material, have little value in verifying product performance, and thus less impact in "controlling" the product from a legal standpoint. Some attributes simply cannot be evaluated by inspection: sealant formulations, improper hardware tempering, structural strength, resistance to air and water penetration, or the presence of prohibited compounds.
Mattel Inc. suffered the impact of a multi-million-dollar lawsuit and loss of consumer confidence by relying on inspections that could not have discovered the now-infamous lead content in children’s toys. The company has now put in place a rigorous physical testing program that, had it been implemented from the start, would have been significantly less expensive. As Mattel can likely attest, this can be of special concern with products wholly sourced from China or other overseas sources.
Confirmation of code-mandated performance and/or claimed performance through laboratory testing thus offers a degree of brand protection. Astute marketers will recognize the opportunity for product differentiation inherent in backing up claims to buying influences with unbiased third-party test reports.
Fenestration Standards and Tests
An important measure of control that a manufacturer can exercise over its product is therefore impartial third-party testing to verify conformance with generally accepted and code-referenced standards. Test reports supporting conformance, generated by an accredited independent laboratory, are hard to challenge in court or in the marketplace, because the laboratory has no financial or other stake in whether the product passes or fails the tests.
The primary standard governing window and door performance is the North American Fenestration Standard (NAFS), aka AAMA/WDMA/CSA 101/I.S.2/A440-2008, and its still-applicable predecessors. This standard liberally references test methods developed by ASTM International (formerly the American Society for Testing and Materials), the quiet but meticulous force behind many product performance standards and test methods for more than 100 years. The majority of these test methods require the application of actual physical loads, not computer simulations.
Testing: Not for Wimpy Windows
Air, water and structural tests of windows are not trivial. The test methods specified by NAFS-08 for determining water resistance (ASTM E 547 and E 331) require zero water penetration when the exterior surfaces of the fenestration product are subjected to a deluge of 5 gal/ft²/hr (roughly equivalent to 8 inches of rain per hour driven nearly horizontally) for a total of 15 minutes under a static pressure differential ranging from 2.9 psf (equivalent to a wind speed of 34 mph) to 12.0 psf (about 70 mph), depending on performance class. This is quite extreme compared to typical real-world conditions.
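The rain-rate and wind-speed equivalences above follow from simple unit arithmetic: one U.S. gallon is 231 cubic inches, and U.S. wind-load practice relates velocity pressure to wind speed by the standard approximation q ≈ 0.00256·V² (q in psf, V in mph). As a back-of-the-envelope check, not anything drawn from the standards themselves, the numbers work out as follows:

```python
import math

GAL_TO_CUIN = 231.0  # 1 U.S. gallon = 231 cubic inches

def rain_rate_inches_per_hour(gal_per_sqft_per_hr):
    """Convert a spray rate in gal/ft^2/hr to inches of water depth per hour."""
    volume_cuin = gal_per_sqft_per_hr * GAL_TO_CUIN  # volume landing on one square foot
    return volume_cuin / 144.0                       # spread over 144 square inches

def wind_speed_mph(pressure_psf):
    """Invert the standard velocity-pressure approximation q = 0.00256 * V^2."""
    return math.sqrt(pressure_psf / 0.00256)

print(round(rain_rate_inches_per_hour(5.0), 1))  # 8.0 -- the "8 inches per hour"
print(round(wind_speed_mph(2.9)))                # 34 mph
print(round(wind_speed_mph(12.0)))               # 68 mph, i.e. "about 70"
```

The 0.00256 coefficient folds sea-level air density and the unit conversions into one constant, which is why the psf-to-mph figures quoted in test reports always trace back to this one relation.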
For products destined for hurricane-prone areas, where wind-borne debris can take out windows, additional tests can be conducted according to AAMA 506, Voluntary Specifications for Impact and Cycle Testing of Fenestration Products, which references the ASTM E 1886 and E 1996 test methods. Not surprisingly, even more stringent performance requirements, described in Miami-Dade standards TAS-201, TAS-202 and TAS-203, apply in Florida's Miami-Dade and Broward Counties, located in the defined High Velocity Hurricane Zone.

The impact tests are not trivial, either. For windows to be located less than 30 feet above ground level, the impact of large missiles is simulated by impelling a 2 x 4 stud into the product at 50 feet per second, equivalent to 34 mph. For windows located more than 30 feet above ground, the impact of roof gravel and other small objects is simulated by firing a shotgun-like pattern of two-gram ball bearings into the window at 130 fps (88 mph). To pass these tests, there can be no penetration upon impact, no opening formed larger than 3 inches in diameter and no tear longer than 5 inches.
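The mph figures quoted for the missile speeds are straight unit conversions from feet per second (3,600 seconds per hour divided by 5,280 feet per mile). A quick sketch confirming the arithmetic:

```python
FPS_TO_MPH = 3600.0 / 5280.0  # seconds per hour / feet per mile ~= 0.682

def fps_to_mph(fps):
    """Convert feet per second to miles per hour."""
    return fps * FPS_TO_MPH

print(round(fps_to_mph(50.0), 1))   # 34.1 -- large-missile (2 x 4 stud) speed
print(round(fps_to_mph(130.0), 1))  # 88.6 -- small-missile (ball bearing) speed
```

The quoted "34 mph" and "88 mph" are simply these values rounded for readability.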
The foundation for fenestration performance in NAFS is so-called “air, water and structural” (AWS) performance, which identifies minimum capability for wind loading, air infiltration and resistance to water penetration. Fenestration products that purport to meet NAFS must pass laboratory tests of increasing rigor for these attributes depending on their performance class; i.e., applicability to residential, commercial and architectural (typically high-rise) structures, whose performance criteria serve as a guide for specifiers.
Once the basic AWS tests are performed, other tests may be applied to verify performance under specific conditions. Examples include resistance to impact due to hurricane-borne debris, and resistance to hail damage (a skylight concern). Other tests are available for verifying conformance to standards for blast and ballistic resistance, sound transmission, effectiveness of fall prevention devices, wildfire exposure, etc. The testing is as tailorable to the end-use condition as is the product itself.
The Role of Certification
Assurance that products meet the applicable requirements enumerated in NAFS is built into the laboratory testing element of certification and labeling programs administered by the American Architectural Manufacturers Association, the Window & Door Manufacturers Association, the Fenestration Manufacturers Association, Keystone Certifications Inc. and the National Accreditation and Management Institute. Laboratories must be qualified by AAMA to accurately perform the required specific ASTM tests or be independently accredited to ISO/IEC 17025, General Requirements for the Competence of Testing and Calibration Laboratories.
But for certification to have credibility there must be some assurance that the manufactured product is indeed equivalent to the sample tested. The threat of class action lawsuits “should represent more than sufficient reinforcement to establish programs to assure that performance standards are met, not just at testing but throughout day-to-day production,” Gary states.
Certification programs address this by periodically (e.g., every four years) retesting manufacturer-submitted specimens for continued conformance to the standard. Between such tests, minor design changes or component substitutions may be made without retest so long as they meet certain specific standards and are approved via a "waiver of retest" issued by a reviewing authority. Production line units are checked during subsequent unannounced plant inspections for ongoing compliance with the originally tested design or latest waiver-of-retest configuration.
This is a good and relatively inexpensive approach to product verification, but it may not be airtight for product liability litigation. This limitation would be of even greater concern if certification programs, pressed by cost concerns, ease requirements and extend the time between required whole-unit verification tests from four to eight years, as some have proposed. Such a hiatus in testing would inevitably allow tolerance stack-ups through (albeit authorized) component substitutions and minor design tweaks to affect overall performance. Factory inspections alone could not disclose such performance changes, potentially exposing the manufacturer to product deficiency lawsuits.
Confirming the Installation
Despite meticulous design, reasonably frequent third-party laboratory testing and independent certification, installation remains the potential weakest link. Even the best-designed window can fail (e.g., admit excessive water penetration) if improperly installed. And, as with all products, performance levels decrease over time due to normal wear and tear. Codes only instruct that windows must be installed “according to the manufacturer’s instructions.”
Following recognized third-party standards such as ASTM E 2112, Standard Practice for Installation of Exterior Windows, Doors and Skylights, and/or using contractors qualified to E 2112 through the InstallationMasters installer training, testing and certification program provides a credible foundation for ensuring that laboratory-tested product design translates to intended performance in the field. Some manufacturers have wisely begun making their warranty terms conditional upon E 2112-compliant installation or installation by an InstallationMasters-certified installer.
But in today's litigious world, even this may not be sufficient buttress against claims. To provide the next level of defense, methods exist to test the installation process itself, and then to test the actual installation before building occupancy. These serve to demonstrate a manufacturer's "control" of the delivery and installation of the product. Laboratory testing per AAMA 504, Voluntary Laboratory Test Method to Qualify Fenestration Installation Procedures, helps manufacturers verify their installation instructions.
Performance after installation can be checked according to the field testing protocol AAMA 502-08, Voluntary Specification for Field Testing of Newly Installed Fenestration Products, which verifies actual air infiltration and water penetration resistance performance. A sister standard, AAMA 503-08, offers similar protocols for testing newly installed commercial storefronts, curtain walls and sloped glazing systems. Note that such field testing, as with laboratory testing, is considered valid only if it is performed by a duly accredited testing laboratory.
To Test or Not to Test
Whether it's during a service call or in the courtroom, window manufacturers should be ready with the facts when their products are held to inappropriate or unreasonable expectations of performance. A major, virtually bullet-proof element of such facts is bona fide test results. This is an important consideration not only for manufacturers but also for architects who specify products for job applications and are often named in the widely cast net of a building product liability lawsuit. Both are well advised to back up their design and specification decisions with credible testing.
Of course, the question of expense arises, and it is much like weighing the cost of insurance coverage against the risk. Is testing expensive? Not if you are hit with a claim, especially a class action lawsuit. In that event, a $1,000 physical test is going to look pretty cheap.