Recent years have seen a significant change in the satellite market. The accelerated build-out of core telecommunication networks that accompanied the Internet bubble of the mid-to-late 1990s has resulted in declining demand for telecom infrastructure. However, while the demand for telecommunications satellites may have slowed, the unsettled global political situation has left military planners scrambling to deploy space-based reconnaissance and intelligence gathering systems. Overall, the need for satellite-based communications remains strong. In fact, the market research company Teal Group forecasts that approximately 1,200 satellites will be built and/or launched between now and 2012.
For satellite applications, designers have a tough choice to make when considering technology alternatives: application-specific integrated circuits (ASICs), SRAM-based field-programmable gate arrays (FPGAs) and antifuse-based FPGAs. Because no one technology is perfect for all applications, the designers of satellites face the same issues that challenge designers everywhere, trading off one attribute for another to find the best fit for a particular application. Of these, antifuse FPGAs offer many compelling advantages, especially when it comes to radiation tolerance.
Internal Satellite View
Regardless of the intended application, any satellite can be viewed as two distinct systems—the bus and the payload. The bus takes care of system housekeeping, performing functions such as power supply management, thermal management, attitude and orbit control (also called station-keeping), craft-to-earth telemetry, and command and data handling. On the other hand, the payload is the satellite’s reason for being—the television broadcast relay system or the telecommunications relay. In a civilian satellite, the payload is the collection of scientific experiments, the weather monitoring system or the earth mapping cameras. In a military system, the payload is the collection of reconnaissance and intelligence gathering equipment, such as synthetic aperture radar, hyper-spectral imaging sensors and processors, and missile launch detectors.
The different duties assigned to bus and payload equipment are reflected in the choice of components for each system in the satellite. In the bus, reliability is paramount: an equipment failure in the bus can cripple or destroy the entire satellite. In the payload, reliability is still extremely important; however, while an equipment failure in the payload may render a single sensor, processor or transmitter ineffective, it probably will not jeopardize the entire mission. These requirements must be balanced against the pervasive demands of satellite design: minimum weight, board space and power consumption combined with maximum reliability and immunity to radiation effects, such as single-event upsets (SEUs). SEU immunity is imperative in satellite design because radiation particles striking an integrated circuit (IC) in space can cause temporary logic state changes.
For many space-based bus and payload applications, ASICs offer the highest-density, lowest-weight and lowest-power solution, but lack the flexibility offered by FPGAs. ASICs are also the highest-cost solution once design tool cost, verification time and non-recurring engineering (NRE) charges are considered. The relatively low volume of satellite production drives ASIC per-unit costs even higher.
Because of their susceptibility to SEUs and their high power consumption, SRAM-based FPGAs are typically confined to payload applications. Their benefits include the highest density currently available in an FPGA and the ultimate in flexibility, up to and beyond launch. However, this added flexibility comes with added system complexity, increased component count and lower overall reliability.
But for most satellite bus and payload applications, the benefits of nonvolatile, antifuse-based FPGAs over ASICs and SRAMs are overwhelming. The use of radiation-tolerant antifuse FPGAs frees satellite designers from the NRE costs and schedule risks of ASICs and gives them the time-to-market and design flexibility benefits typically associated with FPGAs.
Specifically, radiation-tolerant antifuse solutions offer the designer significant advantages over ASIC devices, including increased flexibility to make design changes after board layout is complete with much shorter shipment lead times; lower cost of ownership with fewer vendors to qualify; no NREs; and lower risk, since the design does not have to be completed six months in advance of device delivery. Radiation-tolerant antifuse FPGAs offer further benefits: reduced weight and board space because fewer devices are required; ease of implementation with no configuration components; the lowest FPGA power consumption; high reliability; and availability of medium- to high-density solutions. Figure 1 compares these technologies in more detail.
Historically, designers of digital sub-systems for satellite payloads have relied on ASIC technology to accomplish logic integration. ASICs have often been selected because they are very efficient at integrating large quantities of logic into a single chip with a small footprint and low power consumption. ASICs are also chosen for their radiation performance and immunity to SEU events. The satellite industry benchmark is that the linear energy transfer threshold (LETth), the minimum LET at which a device becomes susceptible to SEUs, should exceed 37 MeV-cm2/mg, a requirement radiation-tolerant ASICs can easily meet.
In addition to SEUs, the total amount of ionizing radiation a device can tolerate must also be considered. Generally, there are two ways of assessing this: a device can be tested while being subjected to a stream of ionizing radiation either until it violates its specification or until it ceases operation entirely. These metrics are referred to as total ionizing dose (TID) parametric and TID functional, respectively. Typically, ASICs are expected to reach TID functional after more than 200 Krad (Si), which compares favorably to the total dose requirement of most mid-earth orbit (MEO), high-earth orbit (HEO) or geosynchronous (GEO) satellite programs of 100-300 Krad (Si) (Figure 2).
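The parametric/functional distinction above can be sketched as a simple classifier. The thresholds in the usage lines are the figures from the text (a 200 Krad(Si) TID functional rating against 100-300 Krad(Si) mission requirements); the helper itself is purely illustrative, not a real qualification procedure.

```python
def tid_status(dose_krad: float, tid_parametric_krad: float,
               tid_functional_krad: float) -> str:
    """Classify a device's state as accumulated ionizing dose grows.

    Below the TID parametric threshold the device meets its full
    specification; between the two thresholds it is out of spec but
    still operating; above the TID functional threshold it has
    ceased operation entirely.
    """
    if dose_krad < tid_parametric_krad:
        return "within specification"
    if dose_krad < tid_functional_krad:
        return "parametric failure (out of spec, still functional)"
    return "functional failure (non-operational)"

# Illustrative thresholds: parametric at 100 Krad(Si), functional at 200 Krad(Si).
print(tid_status(50, 100, 200))    # within specification
print(tid_status(150, 100, 200))   # parametric failure (out of spec, still functional)
print(tid_status(250, 100, 200))   # functional failure (non-operational)
```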
Despite their capabilities, ASICs used in space applications have a very high cost of ownership. ASICs are typically associated with high volumes, where the large up-front costs can be absorbed over many units. As process geometries continue to shrink, the mask set costs for sub-micron technologies rise rapidly, and it is common for NRE costs for state-of-the-art ASICs to approach or exceed $1 million. Unlike high-volume commercial applications, satellite programs consume small quantities of ICs, so these NREs are amortized over very few devices. Consequently, the true cost of ownership of an ASIC, including silicon, packaging, test and amortized NREs, can run in excess of $20K per device.
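The amortization arithmetic behind that per-device figure is straightforward. The $1 million NRE is from the text; the per-unit production cost and run size below are assumed values chosen only to illustrate how a small run pushes the true cost past $20K.

```python
def cost_per_device(unit_cost_usd: float, nre_usd: float, total_units: int) -> float:
    """True per-device cost of ownership: production cost per part
    (silicon, packaging, test) plus the NRE amortized over the run."""
    return unit_cost_usd + nre_usd / total_units

# Assumed: $2K per-unit production cost, a satellite-sized run of 50 devices,
# and the article's $1M NRE. The NRE alone adds $20K to every part.
print(cost_per_device(2_000, 1_000_000, 50))   # 22000.0

# The same NRE amortized over a commercial run of 100,000 units adds only $10:
print(cost_per_device(2_000, 1_000_000, 100_000))  # 2010.0
```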
ASICs can also create significant technical and schedule risks due to lengthy lead times required in the fabrication and verification processes; the possibility of multiple silicon respins; required design changes after board layout; and inventory issues if requirements change. Other hidden costs of ASIC methodology include: expensive design tools, CAD support infrastructure, potential program delays or missed launch windows, and the lack of flexibility to run new functions and applications to adapt for changing operation and mission requirements.
For satellite design, time-to-market is critical as designers are constantly under pressure to have their designs completed and tested in time to meet the satellite launch window. The penalty for missing a satellite launch window can be severe: satellite operators lose revenue if their satellites are on the ground; astronomers relying on satellite-borne scientific instruments may miss the opportunity to observe unique astronomical events; and the contractor whose late-running component causes the satellite operator to miss a launch window can expect significant penalties and loss of future business.
As with the commercial electronics design community, the space industry is moving rapidly toward faster, cheaper, smarter and more flexible satellite missions. The combination of these elements is contributing to the displacement of ASICs by FPGAs in both the terrestrial-based and space environments. As designers look beyond ASICs for logic integration for their satellite design, they currently consider one of two programmable technologies: SRAM- and antifuse-based FPGAs. Both share some significant advantages over ASICs such as lack of NRE, easy CAE support, low tools cost and shorter time-to-market. But there are some fundamental differences between the two technologies.
Unlike their nonvolatile antifuse-based counterparts, SRAM FPGAs are reprogrammable in the field, offering designers the ability to reconfigure a satellite without retrieving it (not impossible with the Space Shuttle, but highly improbable). Additionally, vendors of SRAM-based FPGAs typically offer higher densities than those of antifuse-based solutions with nearly the same performance. As a result, designers often consider SRAM-based FPGAs for their satellite payload applications.
However, this flexibility can come at a price. All SRAM cells are vulnerable to the effects of the high levels of cosmic radiation. Heavy ions from cosmic rays can easily deposit enough charge in or near an SRAM cell to cause a single-bit error, or SEU. Further, because SRAM FPGAs store their logic configuration in SRAM switches, they are susceptible to configuration upsets, meaning the routing and functionality of the circuit can be corrupted. These errors are extremely difficult to detect and correct, and nearly impossible to prevent, since configuration switches represent more than 90 percent of the total SRAM bits in an SRAM FPGA. Radiation-induced configuration upsets can result in system failures.
In some cases, these upsets can have catastrophic consequences if the upset creates a short that damages the device or board, or causes the system to operate in an erratic and unpredictable way. The SRAM cells employed to store configuration data in SRAM FPGAs are sensitive to SEUs at an LETth of less than 5 MeV-cm2/mg, far below the 37 MeV-cm2/mg required by most satellite applications for mission-critical electronics. TID performance also tends to be lower than typical industry requirements, at 50-100 Krad (Si).
It is acceptable to do nothing to mitigate soft errors if the quality of the data stored in the SRAM is insensitive to single-bit changes, such as an image comprising millions of pixels or a streaming video feed. However, for sensitive data and mission-critical functions (such as command and data handling, orbit and attitude control, and spacecraft power management) it is inappropriate to ignore the problem. Therefore, to protect the design from soft errors, designers using SRAM FPGAs have developed mitigation techniques to detect and minimize these effects, including configuration bitstream scrubbing and repair, triple-module redundancy (TMR) in soft gates and design-level mitigation, such as the inclusion of external watchdog circuitry. Of course, these kinds of mitigation techniques add to the satellite's weight and system complexity, as they require additional board-level components.
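The core of TMR is a two-of-three majority vote over redundant copies of each bit, so a single upset in any one copy is out-voted by the other two. A minimal sketch in Python (the real implementations are in FPGA logic, of course):

```python
def tmr_vote(a: int, b: int, c: int) -> int:
    """Two-of-three majority vote over three redundant copies of one bit.
    An upset that flips any single copy is masked by the other two."""
    return (a & b) | (b & c) | (a & c)

# A single-bit upset in one copy is masked:
assert tmr_vote(1, 1, 0) == 1
assert tmr_vote(0, 0, 1) == 0
# Two simultaneous upsets, however, defeat simple TMR:
assert tmr_vote(0, 1, 1) == 1   # true value was 0; majority is now wrong
```

This also shows why TMR alone is insufficient for SRAM configuration memory: without scrubbing, upsets accumulate in the redundant copies until a second hit defeats the vote.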
In addition to SEU effects, there are other system complexities that designers using SRAM FPGAs must contend with. The most obvious is the configuration cycle. Unlike an ASIC, at every power up, the SRAM FPGA has to be configured. This causes additional design trade-offs from a board- and system-level perspective: Will the board use local PROMs? Will the FPGA be configured using a processor? What parts of the system need to be active in order for the FPGA to be configured? And, in what order can the system be booted up?
Further, when the system experiences a brownout or power glitch, dedicated circuitry, such as on-the-board voltage supervisors and CPLDs, must be in place to capture these events and force a reset and reconfiguration of the FPGA. Again, in all cases additional components are needed to support the FPGA, adding to system cost and complexity as well as increasing power consumption, weight and board space in the satellite (Figure 3).
Unlike nonvolatile solutions such as ASICs, SRAM FPGAs also have an initial in-rush current at start-up. This occurs because the SRAM-based device is not yet configured at power up; therefore, internal contention occurs until the device loads its configuration program and settles down to a quiescent level. This in-rush current, which can easily exceed 1A, surprises many designers. These current spikes can force the need for either a larger power supply or the addition of more circuitry to control the power-supply sequencing of the FPGA in order to minimize the current spike. In multi-FPGA systems, the boot-up of each FPGA and board may have to be carefully sequenced to minimize the total current demand at system power-up. Supporting this current demand at power-up impacts system complexity, weight, cost and reliability.
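The value of sequencing described above comes down to whether the supply must ride out every device's configuration spike at once or one at a time. A toy worst-case model, assuming each unconfigured FPGA draws its full in-rush current (the model ignores the smaller quiescent draw of already-configured devices):

```python
def peak_boot_current(n_fpgas: int, inrush_a: float, staggered: bool) -> float:
    """Worst-case supply current during power-up of a multi-FPGA board.

    Simultaneous configuration sums every device's in-rush spike;
    a staggered boot sequence exposes the supply to only one spike
    at a time. Quiescent current of configured parts is neglected.
    """
    return inrush_a if staggered else n_fpgas * inrush_a

# Four SRAM FPGAs, each spiking 1.2 A at configuration (the article notes
# a single device's in-rush can easily exceed 1 A):
print(peak_boot_current(4, 1.2, staggered=False))  # supply must source 4.8 A
print(peak_boot_current(4, 1.2, staggered=True))   # only 1.2 A, at the cost
                                                   # of sequencing circuitry
```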
Unlike reconfigurable SRAM-based FPGAs, antifuse-based solutions are one-time programmable (OTP). With SRAM solutions, if a design change is necessary, then an upgrade can be made at the last minute before launch, or if the system is designed for it, even in space. This reprogrammability comes at a significant price, since the volatile nature of the reprogrammable memory used to configure SRAM FPGAs is responsible for their intrinsic radiation softness and the extra system complexity required to counteract that softness. With antifuse, it is not possible to reconfigure the FPGA once it has been programmed and soldered to the board, although in most mission-critical applications it is very unlikely that late-breaking design changes will be encountered.
In many ways, ASICs have a greater similarity to antifuse-based devices than they have to SRAM solutions. Historically, SRAM FPGA density has far exceeded antifuse, which limited antifuse-based FPGAs to bus applications where reliability was paramount and smaller densities were acceptable. However, the latest generation of radiation-tolerant antifuse products from Actel, the RTAX-S family, has closed this gap significantly and now offers up to two million equivalent system gates, enabling designers to consider antifuse FPGAs for both bus and payload applications.
When comparing FPGA alternatives, important advantages of antifuse include the inherent nonvolatility and the lack of mandatory device configuration at every power-up. Like ASICs, it is this live-at-power-up capability that makes antifuse a true single-chip solution. As such, antifuse FPGAs simplify board-level design and minimize weight and board space. Gone is the need for PROMs to store the bitstream or a CPLD to control the boot-up of the board. Further, additional circuitry or processor routines are not required to capture brownout or power glitch information since the configuration cannot be erased. Once the FPGA design is complete, no further effort is needed to design all the support circuitry into the system, nor is additional time wasted debugging all the possible power-up scenarios.
Radiation test data has been published extensively at conferences such as the IEEE's Nuclear and Space Radiation Effects Conference (NSREC) and NASA's Military and Aerospace Programmable Logic Device International Conference (MAPLD). In fact, years of testing have shown that radiation-tolerant antifuse FPGAs are immune to SEUs and that their characteristics do not degrade over time due to TID. As mentioned earlier, data in logic flip-flops can easily be corrupted by incoming cosmic radiation. Unlike the soft TMR approach of SRAM-based solutions, recent generations of radiation-tolerant antifuse FPGAs address this through architectural improvements, with each flip-flop implemented as a group of three flip-flops and a voting circuit.
This improvement has allowed designers to achieve better than 63 MeV-cm2/mg LETth, which significantly exceeds the 37 MeV-cm2/mg requirement of the satellite industry. Like ASICs, recently introduced radiation-tolerant antifuse FPGAs are also expected to reach TID functional in the region of 200 Krad (Si), meeting the total dose requirement of most satellite programs (refer back to Figure 2).
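The acceptance criterion running through the article reduces to a single comparison: a device's LET threshold (the minimum LET that can trigger an upset) must exceed the mission requirement. A small sketch using the figures quoted in the text; the requirement constant and labels are illustrative, not drawn from any datasheet.

```python
INDUSTRY_LET_REQUIREMENT = 37.0  # MeV-cm2/mg, the benchmark cited in the article

def meets_seu_requirement(device_letth: float,
                          requirement: float = INDUSTRY_LET_REQUIREMENT) -> bool:
    """A device passes when its LET threshold, the minimum LET at which
    it becomes susceptible to SEUs, exceeds the mission requirement."""
    return device_letth > requirement

# Figures from the text:
assert meets_seu_requirement(63.0)       # hardened antifuse flip-flops (>63)
assert not meets_seu_requirement(5.0)    # SRAM configuration cells (<5)
```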
In addition, antifuse FPGAs do not suffer from in-rush current issues, as do their SRAM counterparts. This power-up friendly profile simplifies the system design, allows the power supply to be properly sized to the dynamic current requirements of the system and eliminates the need for a complex FPGA and board boot-up scheme.