“Improving the Capabilities of Cognitive Radar & Electronic Warfare Systems”
presented by Tim Fountain, Global Market Segment Manager
Thanks to the sponsor, the Association of Old Crows, specifically the AOC Granite State Chapter.
“Journal of Electromagnetic Dominance” is their house organ.
There are a number of resources for this webinar. Slides and the white paper are on our Slack account at phase4ground.slack.com. Please contact a board member of ORI for access to the Slack.
Tim Fountain is introduced.
A cognitive RF system is designed to perceive the RF environment. The system converts the RF spectrum and associated energy into a stream of RF IQ data. So far, so good. But we add AI/ML to make autonomous decisions about what we are seeing in the RF spectrum.
A course of action is determined without human intervention.
For electronic warfare (EW) we want to
- deny the use of the RF spectrum by an adversary. We also want to
- protect the platform we have against jamming and other things. We want to
- deliver supporting information to another system - we aren’t standalone in many situations. This third aspect is less relevant to ORI, except for our work in larger standards like OpenRAN.
Emerging threats are enumerated.
We think of this as a static library of known, quantifiable, and repeatable characteristics. We develop something like PDWs, or pulse descriptor words, and we have countermeasures keyed to them. It’s like a lookup table of asshattery.
The slides have a static library RADAR/EW system graph, with search and tracking. That’s the place that we are starting from in terms of jamming, denying, harassing transmissions.
However, there are problems with this approach. We have mode-agile attacks. These can come from anywhere: non-conforming, non-traditional, and totally not in our library! What to do?
We can’t counter these threats (is this a true axiom?) and the platform is at risk.
The static library is considered ineffective. What new concepts work? AI/ML? Why not, let’s try it.
Classify and counter, on the fly. This is a game of cat and mouse since attackers, just like in infosec, are also using AI/ML.
Flexible and continuously adaptable AI/ML systems are considered to be required.
An AI/ML analysis and threat counter system is now under discussion in this talk.
There is an upcoming session with Mathworks that focuses on “how to develop” these critical subsystems.
But, how do we train them? That is where we are going next in this talk.
We see a landscape of things and we look at them in time and frequency. Sometimes we can miss “typing” the emitters if we limit ourselves to just time and frequency.
The most commonly used AI techniques (claimed here) are ANN and DNN, or artificial and deep neural networks.
Fuzzy Logic and Genetic Algorithms are cited strongly here.
Tim said his wife uses Fuzzy Logic to make decisions and then immediately attempted to frame it as a compliment. I am not sure how to react to this, as I happen to be a “wife”. It did not come across as a compliment, until it was redefined without any additional supporting comments (i.e. “just kidding”). I am glad I am not in “a room” for this talk, because I expect most men laughed at this joke.
Moving on.
Heuristics, Support vector machines, and Markov decision processes are mentioned.
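Of the techniques listed, a Markov decision process is the easiest to sketch end to end. Below is a hypothetical two-state example solved by value iteration: the states, actions, transition probabilities, and rewards are all invented for illustration and are not from the talk.

```python
# Hypothetical MDP sketch: choose "listen" or "jam" against a
# two-mode threat radar (searching vs. tracking us), solved by
# value iteration. All numbers are invented for illustration.
import numpy as np

actions = ["listen", "jam"]
gamma = 0.9                            # discount factor

# P[a][s, s']: mode-transition probabilities under each action;
# state 0 = radar searching, state 1 = radar tracking us.
P = {
    "listen": np.array([[0.8, 0.2],    # searching radar may acquire us
                        [0.1, 0.9]]),  # a tracking radar tends to hold lock
    "jam":    np.array([[0.95, 0.05],  # jamming keeps it searching...
                        [0.6, 0.4]]),  # ...and can break an existing lock
}
# R[a][s]: being tracked is costly; jamming spends power and exposes us.
R = {"listen": np.array([0.0, -10.0]),
     "jam":    np.array([-3.0, -8.0])}

V = np.zeros(2)
for _ in range(500):                   # value iteration to convergence
    V = np.max([R[a] + gamma * P[a] @ V for a in actions], axis=0)

# Greedy policy from the converged values.
policy = [actions[int(np.argmax([R[a][s] + gamma * P[a][s] @ V
                                 for a in actions]))]
          for s in range(2)]
```

With these made-up numbers the optimal policy is to listen while the radar is searching and jam once it is tracking, which matches the intuition that jamming is worth its cost only once lock is a real threat.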
Significant computational resources are required at the tactical edge - this seems like a great place for an FPGA, and FPGAs are indeed mentioned. Maybe federated learning would be the right answer, but it’s not on this slide.
Wideband cognitive systems use more electrical power, compared to static library systems.
Maneuverability and flexibility come at a cost, always.
Mode-agile threats can also enter low-power RF modes, operating at or below the noise floor. This means we need a high dynamic range. Wider bandwidth and higher dynamic range are diverging requirements.
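One classic way to dig a signal out from below the noise floor is coherent integration across pulses. A minimal numpy sketch, with invented numbers (sample counts, SNR, tone frequency are all illustrative, not from the talk):

```python
# Illustrative sketch: coherent pulse integration pulling a tone
# with -20 dB per-pulse SNR up out of the noise. All parameter
# values are invented for the example.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_pulses = 256, 100
snr_db = -20.0                         # per-pulse SNR: well below the noise

tone = np.exp(2j * np.pi * 0.125 * np.arange(n_samples))  # unit-power signal
noise_scale = 10 ** (-snr_db / 20)

def one_pulse():
    noise = (rng.standard_normal(n_samples)
             + 1j * rng.standard_normal(n_samples)) / np.sqrt(2)
    return tone + noise_scale * noise

# Coherent sum: signal adds in voltage (x N), noise only in power
# (x sqrt(N)), so integration buys ~10*log10(N) = 20 dB of SNR here.
integrated = sum(one_pulse() for _ in range(n_pulses))
peak_bin = int(np.argmax(np.abs(np.fft.fft(integrated))))
```

The FFT peak lands in the tone’s bin (0.125 x 256 = bin 32) even though no single pulse shows it cleanly. The catch, which connects back to the slide: this only works if the emitter is coherent across your integration window, which a mode-agile threat can deliberately avoid being.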
Jamming - is a big deal.
Challenges? There are a lot.
The requirements for training are the most critical part. The data sets must be rich and real, and these data sets, as we know, do not in general exist.
Iterative hardware-in-the-loop (HWIL) approaches can make up part of the gap.
Terabytes of data, and you are looking for that one CW signal. Joint time-frequency analysis can be the way forward here. Looking at the whole recording (this assumes you can record), you have time vs. amplitude and a persistence display. ZoomOut and ZoomIn are introduced. These are products. A 100 ns Barker code is an example signal “found” through these methods.
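The needle-in-a-haystack search for a short coded pulse in a long record is essentially a matched-filter problem. A minimal sketch, assuming the Barker-13 sequence (which is standard) but with record length, pulse position, and amplitude invented for the example:

```python
# Sketch: find a short Barker-coded pulse buried in a long noisy
# record with a matched filter. The Barker-13 sequence is standard;
# the record length, offset, and amplitude are invented here.
import numpy as np

BARKER13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], dtype=float)

rng = np.random.default_rng(42)
record = rng.standard_normal(10_000)       # noise-only background
true_offset = 6_000
record[true_offset:true_offset + 13] += 3.0 * BARKER13  # buried pulse

# Matched filter = correlate against the known code; the output
# peaks where the code aligns with the pulse in the record.
mf_out = np.correlate(record, BARKER13, mode="valid")
detected_offset = int(np.argmax(mf_out))
```

The correlation peak at the pulse position is the 1-D version of what the persistence display does visually over a whole recording: concentrate the pulse’s energy so it stands out from terabytes of background.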
“How to use MATLAB and simulink to create these signals to analyze and close the loop for new training data sets for the things of interest”. <=== other talks.
Hardware in the Loop Settings revisited: classic requirements Vee shown on the graphs.
“What does FEDS give you?” Access to the IQ stream. AXI4-Stream for IQ data. Zynq UltraScale. Like what we have at ORI.
Customer implements the DSP, but we have the sandbox where we can drop custom code.
We really should duplicate this, since it would help us in our open source work.
“If you can visualize it, you should be able to drop it into the container”
FEDS is the thing that does this, over at R&D.
Open source tools specifically mentioned. Maybe there is a path forward here that we can leverage.
Support package for the FPGA is mentioned. They are using Vivado AI, where we are not (yet).
HIL/SIL system for algorithm development and testing. See slide 32.
It’s complex, time consuming, and training and the data sets are the crucial part. This is where the cheese is binding, as they say. See slide 33.
This was a wonderful talk about something a lot of folks at ORI know a lot about and want to do for open source work… right up until the joke/comment about the presenter’s wife and fuzzy logic.
Speaking for myself, as the person summarizing this talk for my team, who happens to be “a wife”, I’d really like to not feel like a joke in technical seminars. Equating women with old and largely discredited fad techniques like Fuzzy Logic derailed my attention and enthusiasm for the webinar. This was a talk I was very much looking forward to attending. It made it much harder to pay attention to, understand, and then synthesize for teaching others.
Aside from that, we’ll try to take full advantage of the content presented here and follow up with R&D and Mathworks and Xilinx.