Recently I participated in an online interview with Andrew Zai, social media chair for the 2019 IEEE International Symposium on Phased Array Systems and Technology, to discuss the history of phased arrays/radar and the role of simulation software. You can listen to the podcast here. In preparing for our conversation, Andrew provided me with some questions he thought we could talk about, and I, in turn, did a little research into the history of simulation and phased-array radar development. The following interview covers some of the topics that we did not get to discuss during the podcast.
How have phased arrays and radar been used historically?
Phased-array transmission was demonstrated as far back as 1905 by Nobel laureate Karl Ferdinand Braun, who showed enhanced transmission of radio waves in one direction. Another Nobel laureate, Luis Alvarez, later used phased-array transmission in a rapidly steerable radar system for "ground-controlled approach" during World War II to provide support for landing aircraft.
The discovery that radio signals bounce off objects, which ultimately led to the invention of radar, was almost accidental and occurred many times across decades before early radio engineers knew what to do with it. Sometime in the 1880s, Heinrich Hertz demonstrated that radio waves could be reflected from solid objects. Many credit the Scotsman Robert Watson-Watt with the invention—he was using radio technology as early as 1915 to provide advance warning of thunderstorms to airmen and led the U.K. radio research establishment through the 1920s.
In doing research on the early days of radar, I came across the story of Taylor and Young in the U.S. in the early 1920s. These two U.S. Navy researchers placed a transmitter and receiver on opposite sides of the Potomac River and discovered that ships passing through the beam path caused the received signal to fade in and out. They submitted a report suggesting that this phenomenon might be used to detect the presence of ships in low visibility, but the Navy did not immediately continue the work, and eight years passed before a researcher at the Naval Research Laboratory observed similar fading effects from passing aircraft. This led to a patent application and further intensive research on radio-echo signals from moving targets.
In 1935, Watson-Watt was asked to judge recent reports of a German radio-based death ray. Watson-Watt’s calculations demonstrated that a death ray was impossible, but, combined with his team’s earlier discovery that aircraft caused radio interference, the exercise led them to experiment with a powerful BBC shortwave transmitter as the source and a receiver set up in a field while a bomber flew around the site. The immediate success of this experiment led to funding for the development of the very first radar systems, which provided vital advance information that helped the Royal Air Force win the Battle of Britain. Without this radar system, called “Chain Home,” significant numbers of fighter aircraft would have needed to be in the air at all times to respond quickly enough if enemy aircraft detection relied solely on the observations of ground-based individuals.
Another amazing character in the history of radar was Alfred Lee Loomis, the subject of an excellent American Experience episode on PBS that is really worth watching. Loomis was an amazingly accomplished individual: a Wall Street tycoon, a famous scientist, and a lawyer. Born to upper-middle-class parents, he was brilliant in mathematics and had an inventor’s mentality. After attending Harvard Law School, he joined a prominent Wall Street law firm.
During the First World War, he used his undergraduate training in mathematics and science to secure a position as a lieutenant colonel at the Aberdeen Proving Ground, in charge of development and experimentation, where he invented a number of instruments (two of which were later patented) used for the next 25 years in measuring external ballistics.
After the war, he entered investment banking and made a huge fortune with his brother-in-law Landon Thorne by spearheading the complex financing of the nascent electric utility industry, handling upward of $1.6 billion U.S. dollars in financing at the time. The two also helped to organize many important mergers and acquisitions, and in the process acquired numerous seats and untold influence on the resulting boards of directors. They were lauded not only for their success, but for their application of scientific principles and long-term economic planning to the management of public resources and the provision of cheaper and more reliable service to consumers. Fortune magazine described them, in February 1930, as "the most potent force in shaping the present and future organization of America's huge, complex power and light business." Sensing the oncoming Depression, Loomis cashed out of the market, protecting the assets that would support his extravagant and idiosyncratic lifestyle. Loomis alone earned an estimated $50 million during the early years of the Depression.
Throughout those years, while he traded vast sums on the financial markets during the week, in the evenings and on weekends he worked with the world's greatest scientists at his own private laboratory in Tuxedo Park, New York—in fact, the PBS program was called “The Secret of Tuxedo Park.” Being a wealthy New Yorker probably placed him in the same social circles as President Roosevelt, which was an important factor in getting American resources to help develop large-scale radar manufacturing during the early phases of the war.
The first practical radar system was produced in 1935 by the British physicist Sir Robert Watson-Watt, and by 1939 England had established a chain of radar stations along its south and east coasts to detect aggressors in the air or on the sea. The Chain Home radar network, operating on metric wavelengths, was insufficiently accurate to detect small targets such as individual aircraft. What was needed was a radar system operating on a narrow beam that could either sweep the sky with a fan of radiation or point like a long pencil of light through the dark. Very short wavelengths had to be generated for this purpose, but this was no easy task. In September 1940, while the Battle of Britain raged with daily air raids, a device called the cavity magnetron made its way from England to the U.S. as part of a top-secret mission.
The palm-size radio transmitter produced high-frequency waves at tremendously high power levels. It was invented by Harry Boot and John Randall. As the U.K. scientists knew, the device would enable microwave radar systems to track enemy aircraft and ground targets. Radar already existed, but it was neither compact enough to be mounted on planes nor accurate enough to be effective at night. To get the new technology into the field—and to turn the tide of the war—the U.K. needed U.S. industrial strength, as well as electronics expertise from MIT, Bell Labs, and other U.S. research institutions. In September of 1940, a small group of men, most of whom were American, gathered in a sitting room in the exclusive enclave of Tuxedo Park. Two of the men, however, John Cockcroft and Edward Bowen, were British physicists who had arrived in the U.S. as part of the top-secret mission, with the support of Roosevelt. With some fanfare, they produced a small wooden box, inside of which sat the cavity magnetron, which they promised could generate 1,000 times more power at a wavelength of 10 centimeters than any other microwave transmitter known to U.S. technicians. The challenge for the British was manufacturing the device, which was slow and costly, with poor yields.
This is the part of the story where Percy Spencer enters. Percy taught himself about electricity, and when he was 18, he joined the U.S. Navy as a radio operator. During this time, he taught himself a number of scientific subjects, including calculus, chemistry, metallurgy, physics, and trigonometry. After World War I, Spencer joined the American Appliance Company in Cambridge, MA, which would later become Raytheon Company. During World War II, Raytheon was contracted by the British to mass-produce their newest invention: combat radar equipment. In desperate need of radar to detect German planes and submarines, the British turned to the U.S. to produce the cavity magnetron, the primary component in the radar system. Spencer developed a system of mass production for the magnetron, increasing its production output to 2,600 per day.
Raytheon radar had a marked effect on every major sea engagement of the war, and for his work during the war, Spencer received the Distinguished Public Service Award, the U.S. Navy’s highest civilian honor.
Spencer is best known as the inventor of the microwave oven. During his research into electromagnetic waves in the 1940s, Spencer noticed that a candy bar in his pocket melted when he was standing next to a magnetron. He realized that electromagnetic waves could be used to cook food, and he subsequently filed a patent with Raytheon for the Radarange in 1945. This history doesn’t even touch all the microwave theory that was developed by the collective scientists and engineering teams gathered together to form the Radiation Laboratory (Rad Lab) at MIT during the war.
How has radar been designed in the past? How did engineers predict the performance of their system?
I’d say engineers predicted performance through a combination of mathematics and experimentation. It’s worthwhile to go through the history of the MIT Rad Lab and learn about all the fundamental microwave theory that had to be discovered and documented in the early days of our field.
I’ve always worked at the component level, so I can’t talk in great detail about past system-level design efforts, but I know that at the component level, prior to powerful circuit and electromagnetic (EM) simulation technologies, a lot of development was empirical: build and test. I started in industry in the mid-1980s, when gallium arsenide (GaAs) and monolithic microwave integrated circuits (MMICs) were relatively new technologies, the 8510 vector network analyzer was introduced, and circuit simulation was netlist-based with programs like Super Compact. Nonlinear simulation wouldn’t be available until the introduction of harmonic balance techniques in the late 1980s. I remember hearing stories that waveguide filters were tuned with a ball bearing, magnet, ball-peen hammer, and epoxy filler. The tech would position the ball bearing with the magnet and then whack the side of the waveguide to form a dent at the location of the ball bearing. The epoxy was used to make the outside smooth for appearances.
My first boss in the industry, Bill Rushforth, was a fellow at M/A-Com and led a group of engineers working in the emerging field of III-V semiconductors. Bill did a lot of work on the development of the Precision Acquisition Vehicle Entry Phased-Array Warning System (Pave Paws) radar system, an elaborate Cold War early-warning radar and computer system developed in 1980 to "detect a sea-launched ballistic missile attack." This system was the very first solid-state phased array deployed. Bill is cited by Joseph White in his book Microwave Semiconductors for his work with diode delay lines, which would seem to fit the phase-shifting requirements of a solid-state phased-array system like Pave Paws.
The radar was built during the Cold War to give early warning of a nuclear attack, allowing time for U.S. bombers to get off the ground and land-based U.S. missiles to be launched, decreasing the chance that a preemptive strike could destroy U.S. strategic nuclear forces. The Soviet Union’s deployment of submarine-launched ballistic missiles (SLBMs) by the 1970s significantly decreased the warning time between the detection of an incoming enemy missile and its reaching its target, because SLBMs can be launched closer to the U.S. than intercontinental ballistic missiles (ICBMs), which have a long flight path from the Soviet Union to the continental U.S. Thus, there was a need for a radar system with faster reaction time than existing ones.
Pave Paws was one of the first large phased-array radars. A phased array was used because a conventional mechanically rotated radar antenna cannot turn fast enough to track multiple ballistic missiles. The radar consists of two phased arrays of antenna elements mounted on two sloping sides of the 105-foot-high transmitter building, which are oriented 120° apart in azimuth. The beam from each array can be deflected up to 60° from the array's central boresight axis, allowing each array to cover an azimuth angle of 120°; thus, the entire radar can cover an azimuth of 240°. The building sides are sloped at an angle of 20°, and the beam can be directed at any elevation angle between 3° and 85°. The beam is kept at least 100 feet above the ground over publicly accessible land to avoid the possibility of exposing the public to significant EM fields.
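As a side note on why electronic steering can be this agile: the beam direction is set entirely by per-element phase shifts, with no moving parts. Here is a minimal sketch (illustrative only, not the actual Pave Paws design) of the progressive phase shift for a uniform linear array, Δφ = 360°·(d/λ)·sin θ:

```python
import math

def element_phases(n_elements, spacing_m, freq_hz, steer_deg):
    """Progressive phase shifts (degrees) that steer a uniform linear
    array to steer_deg off boresight: dphi = 360 * (d / lambda) * sin(theta)."""
    c = 299_792_458.0                     # speed of light, m/s
    lam = c / freq_hz                     # wavelength, m
    dphi = 360.0 * (spacing_m / lam) * math.sin(math.radians(steer_deg))
    return [(i * dphi) % 360.0 for i in range(n_elements)]

# Half-wavelength spacing at 435 MHz (roughly mid-band for a UHF radar),
# steered to the 60-degree deflection limit mentioned above
lam = 299_792_458.0 / 435e6
phases = element_phases(8, lam / 2.0, 435e6, 60.0)
```

Changing the steer angle only changes the phase settings, which is why a phased array can re-point its beam in microseconds while a mechanical antenna cannot.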
The radar operates between 420 and 450 MHz, with circular polarization. Each active array has 1,792 transmitting elements (solid-state transmitter/receiver modules), each radiating at a peak power of 320 W, so the peak power of each array is roughly 580 kW. The radar operates in a repeating 54-millisecond cycle in which it transmits a series of pulses, then listens for echoes. Its duty cycle is never greater than 25% (so the average power of the beam never exceeds 25% of 580 kW, or about 145 kW). It is reported to have a range of about 3,000 nautical miles; at that range it can detect an object the size of a small car, and smaller objects at closer ranges.
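These power figures are easy to sanity-check. A quick back-of-the-envelope calculation using the element count and per-element peak power quoted above (1,792 × 320 W ≈ 573 kW, which rounds to the ~580 kW figure):

```python
n_elements = 1792                # transmit/receive modules per array face
peak_per_element_w = 320.0       # peak power per element, as quoted
max_duty_cycle = 0.25            # transmits no more than 25% of each cycle

array_peak_w = n_elements * peak_per_element_w     # 573,440 W (~580 kW)
array_avg_max_w = array_peak_w * max_duty_cycle    # ~143 kW (~145 kW as rounded)
```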
What are some of the newer commercial applications that are emerging and what is responsible for the commercial proliferation of this technology?
Newer commercial applications emerging from radar technology are automotive radar and beam-steering technologies for 5G. I have also seen some literature on beam steering for internet-of-things (IoT) networks, but I think for now the two big opportunities are in transportation and communications.
Commercial proliferation always seems to be driven by opportunity and challenges. The constant demand for data capacity and throughput is driving the proliferation of phased-array technology in communications systems. For this, systems need bandwidth, which is pushing networks into the millimeter-wave (mm-Wave) spectrum. The challenge with the 28 GHz and 39 GHz frequencies being considered is that mm-Wave radio signals incur a lot of propagation loss over the air. Therefore, the energy from an antenna needs to be more directed, or focused, in order to efficiently overcome these higher atmospheric losses. This spatial efficiency is why beam steering is being pursued for 5G networks. And at mm-Wave frequencies, wavelengths are smaller, and therefore the supporting antenna hardware will also be physically smaller. Dealing with fixed manufacturing tolerances can be a challenging design issue to overcome, but smaller hardware is also more easily integrated into the vast array of picocells and other small base stations envisioned in support of a densified 5G network.
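To put rough numbers on the mm-Wave loss problem: free-space path loss grows as 20·log10(f), so moving from a sub-6 GHz band up to 28 GHz costs roughly 21 dB at the same distance. A sketch using the standard Friis free-space formula (the 2.4 GHz comparison point and 100 m distance are my own illustrative choices, not from the text):

```python
import math

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    c = 299_792_458.0
    return 20.0 * math.log10(4.0 * math.pi * distance_m * freq_hz / c)

loss_24ghz = fspl_db(100.0, 2.4e9)   # roughly 80 dB at 100 m
loss_28ghz = fspl_db(100.0, 28e9)    # roughly 101 dB at 100 m
extra_loss = loss_28ghz - loss_24ghz # ~21 dB penalty, independent of distance
```

That ~21 dB gap is the budget shortfall that beamforming gain from a phased array is being asked to recover.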
For automotive, mm-Wave is required for the higher target resolution that is made possible with smaller wavelengths. At 77 GHz, automotive radar also contends with greater propagation losses, but the smaller hardware is equally beneficial for vehicle integration, where multiple radar systems are embedded in an automobile for 360-degree coverage.
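The resolution benefit of shorter wavelengths can be made concrete: for a fixed antenna aperture D, the angular beamwidth scales roughly as λ/D, so a 77 GHz sensor resolves finer angles than a lower-frequency one of the same size. An illustrative calculation (the 10 cm aperture and the 24 GHz comparison point are assumed values, not from the text):

```python
import math

def beamwidth_deg(wavelength_m, aperture_m):
    """Approximate beamwidth of a uniform aperture: ~lambda/D radians."""
    return math.degrees(wavelength_m / aperture_m)

c = 299_792_458.0
# Same 10 cm aperture at 24 GHz vs. 77 GHz
bw_24ghz = beamwidth_deg(c / 24e9, 0.10)   # ~7.2 degrees
bw_77ghz = beamwidth_deg(c / 77e9, 0.10)   # ~2.2 degrees
```

For the same sensor footprint, the 77 GHz beam is narrower by the frequency ratio, which is exactly why automotive radar moved up in frequency.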
What are some promising areas for growth with regard to technology?
Like much of what drives our technology forward, radar and phased arrays are evolving, thanks to developments in semiconductor technologies, including advances in gallium nitride (GaN) and complementary metal-oxide semiconductor (CMOS), both of which are providing greater performance at higher frequencies. CMOS provides the benefit of enabling designers to integrate more functionality, including digital processing and control, onto a single chip, as well as providing a cost and high-volume production advantage. GaN and GaAs still have huge performance advantages. The debate between the use of silicon and III-V semiconductors will probably continue for the foreseeable future, but great strides are being made with both, and I believe the competition is driving users of each toward greater innovation and better performance. I also think device integration technology in the form of module and multi-chip packaging is helping to drive growth. I know the Defense Advanced Research Projects Agency (DARPA) has funded many programs to develop integration technology that will find its way into a variety of applications, and the low-cost, high-volume applications called for by the vast array of IoT devices will also likely spawn innovation and new device integration solutions.
What are current design challenges?
These advanced capabilities do come at the cost of design complexity. New device technologies must be appropriately characterized and modeled for use in simulation. Higher circuit density and the use of high-power GaN devices result in thermal issues that must be resolved. Design flows become a concern as design teams of different disciplines work independently or through collaboration, sometimes across distances and time zones. They must be able to share design data efficiently and across different tool sets. A lot of this type of design is very specific to performance results tied to standards-based signal waveforms. So, for instance, linearity is a big concern in communication ICs, and one measure of linearity is adjacent channel power (ACP), which is the leakage of energy from the main communication channel to a neighboring channel. These RF digitally modulated waveforms are very specific to the standard, so designers need that information for their simulations as well as their measurement systems. Power amplifier (PA) designers struggle to find the right tradeoffs between linearity and efficiency. The high peak-to-average power ratio (PAPR) of LTE and 5G waveforms requires PAs to operate backed off from their compression point, but backoff negatively impacts drain efficiency, which in turn impacts power consumption. It’s a delicate balancing act; add to that an amplifier terminated in a phased-array element whose impedance varies with position and with the act of beam steering itself, and one can get a sense of how difficult today’s RF design challenges can be.
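To illustrate what PAPR means in practice: a constant-envelope tone has 0 dB PAPR, while summing many subcarriers (as OFDM-based LTE/5G waveforms do) lets peaks align and push the PAPR up by 10 dB or more; it is those rare peaks the PA must pass linearly while backed off. A toy sketch (not a standards-compliant waveform):

```python
import math
import cmath

def papr_db(samples):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    powers = [abs(s) ** 2 for s in samples]
    return 10.0 * math.log10(max(powers) / (sum(powers) / len(powers)))

N = 1024
# Single tone: constant envelope, so PAPR = 0 dB
tone = [cmath.exp(2j * math.pi * 5 * n / N) for n in range(N)]

# Sum of 16 equal-amplitude subcarriers (toy OFDM-like symbol):
# all phases align at n = 0, so the peak power is 16x the average
ofdm = [sum(cmath.exp(2j * math.pi * k * n / N) for k in range(16))
        for n in range(N)]
```

Here papr_db(tone) is ~0 dB while papr_db(ofdm) is ~12 dB, which is the kind of headroom a PA must preserve at the cost of efficiency.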
How are simulation technologies evolving to support these challenges?
The good news is that simulation technology keeps improving every year. This is in part due to improvements in the underlying computational algorithms. Speed improvements are made possible with advances in computing technology itself and the evolution of distributed computing, where problems are broken up for parallel simulation within much larger compute farms. Many engineering tasks are being addressed by developers through automation and tool-to-tool integration. NI has a very exciting partnership with Cadence in which the NI AWR software AXIEM planar EM simulator, which is based on the method-of-moments (MoM) technique, has been fully integrated into the Cadence Virtuoso RF design environment for RFIC development. Similarly, in 2014, ANSYS HFSS EM simulation software was integrated within the NI AWR Design Environment platform via its EM Socket architecture for designers who want to use a third-party EM tool for their passive device or antenna modeling.
NI AWR software also offers a proprietary phased-array generator utility that guides designers through the complexities of configuring a phased array based on geometry, such as element spacing and arrangement, and on real antenna radiation-pattern assignments for each antenna element of an array or for groups of elements. Designers can assign gain tapers, either standard or custom, specify and develop their feed networks, and perform analysis of arrays with failed elements. This feature enables designers to view the resulting radiation pattern as a function of steer angle and frequency, observe main and side lobes, and then generate a hierarchical network of the antenna array itself, the combiner networks, phase shifters and gain control, and more. It’s a really impressive utility that automates the process of getting an initial design started with an EM simulation-ready antenna array and initial feed network, and it allows designers to methodically introduce real components into the simulation. Today’s simulation tools really are quite impressive with regard to what they can analyze.
What final thoughts would you like to share?
I’d like to thank you for this opportunity to share some thoughts and insights with your audience. I think both the IEEE radar and phased-array conferences are very important events at this particular time, as these technologies migrate from largely aerospace and military applications to more commercial ones. When I attended the last phased-array event in Boston, I remember sitting in on a talk about the future market potential for phased-array technology given by Gabriel Rebeiz, distinguished professor at UC San Diego, and being blown away by his projections. Commercial applications could conceivably eclipse aerospace applications, which is truly impressive. I’m sure I will be even more impressed by the state of our technology this year.