by Rodney Feldman, VP Business Development and Marketing, SECO USA, Inc.
Deployed sensors enable situational awareness and early warning detection of emerging battlefield threats. Ideally, a wide variety of sensors communicate and correlate data to automate and accelerate the detection of changing conditions. In reality, this is limited by several factors, including traditional size, weight, power, and cost (SWaP-C) constraints as well as the associated limitations on intercommunication and processing capability. Current advances in processing technology along with artificial intelligence, computer vision, and machine learning are changing the dynamic – enabling intelligent sensors at the edge.
Making Sensors Intelligent
Spanning a wide array of technologies, sensors include cameras (whether electro-optical, infrared, or hyperspectral), sound or vibration detectors, radar, environmental (such as gas, temperature, humidity, and pressure), positional (GPS, accelerometer, gyroscope, magnetometer), and more. Each type of sensor generates specific data formats, which differ in frequency, amount, and structure. To be useful, data must be analyzed and the results communicated or automatically made actionable.
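Because each sensor type produces data that differs in frequency, amount, and structure, a common first step is to normalize raw readings into one record format before analysis. The following is a minimal sketch of that idea; the record fields and the helper function are illustrative assumptions, not part of any particular standard.

```python
from dataclasses import dataclass
import time

# Hypothetical normalized record: field names are illustrative only. Each
# sensor driver converts its native output into this common shape so that
# downstream analysis can treat all sources uniformly.
@dataclass
class SensorReading:
    sensor_id: str    # unique identifier for the originating sensor
    kind: str         # e.g. "eo_camera", "radar", "gps", "temperature"
    timestamp: float  # seconds since epoch, taken from a shared clock
    payload: object   # sensor-specific data: image array, range map, scalar
    units: str = ""   # units of the payload, where applicable

def normalize_temperature(sensor_id: str, celsius: float) -> SensorReading:
    """Wrap a raw temperature sample in the common record format."""
    return SensorReading(sensor_id, "temperature", time.time(), celsius, "degC")

reading = normalize_temperature("temp-01", 21.5)
print(reading.kind, reading.payload, reading.units)
```

With every source reduced to the same shape, correlation logic only ever sees one record type, regardless of how many kinds of sensors feed it.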
In the non-military world, we’re very familiar with the artificial intelligence (AI) model where data collected at the edge is transmitted via a network to a data center where high-performance computing infrastructure with AI capability processes it and returns a result. This paradigm minimizes intelligence required at the edge, but requires network connectivity, incurs added latency to get a result, and limits the amount of data that can practically be handled and correlated against other data. There is also the risk of interception of communications.
For military and intelligence operations, edge autonomy is often preferred. With a computing platform capable of local high-performance processing, the problems of the cloud computing model are eliminated. Larger data sets can be processed, multiple data sets can be correlated, and faster results can be utilized that enhance operational performance.
Current advances in microprocessor technology, artificial intelligence, machine learning, computer vision, and algorithm development make it increasingly viable to implement intelligent sensors at the edge with lower power, smaller packages, and for a lower cost.
Processor Platforms That Enable Intelligence
Emerging microprocessor platforms enable increasingly power-efficient instruction sets in conventional processing cores, and additionally provide hardware accelerators for artificial intelligence, computer vision, and machine learning operations – along with associated software libraries that give algorithm developers easy access to them.
For example, Intel’s 11th generation Xeon W-1100E series (formerly Tiger Lake-H) and 11th generation Core 1100 series (formerly Tiger Lake UP3) offer high-performance processors with industrial temperature range and long-term availability. Featuring DDR4, PCIe 4.0, USB4, and more, these processors enable very high bandwidth interfaces. Integrated graphics cores can be used for display rendering as well as general-purpose number crunching. Intel Deep Learning Boost provides hardware acceleration for training and inferencing, along with Vector Neural Network Instructions (VNNI) – usable for on-the-fly learning from sensor data. Intel’s distribution of the OpenVINO toolkit (Open Visual Inference and Neural Network Optimization) enables efficient development of machine learning and deep learning applications. Tiger Lake-H Xeon processors offer 4-, 6-, and 8-core options with thermal design power (TDP) ranging from 12W to 45W. Tiger Lake UP3 Core processors offer 2- and 4-core options with TDP options of 12W, 15W, and 28W.
For even higher performance in a similar power envelope, Intel’s 12th generation Alder Lake P series introduces Performance-cores (P-cores) that provide high-performance processing and Efficiency-cores (E-cores) that manage multitasking – optimizing core loading for power efficiency. Additional graphics execution units and further implementation of Intel Deep Learning Boost, VNNI, and OpenVINO enhance AI capabilities. Core counts range from 5 to 15 with 15W, 28W, and 45W TDP options.
ARM-based processors are also emerging that are useful for intelligent edge sensors. For example, NXP’s i.MX 8M Plus microprocessor provides up to four Cortex-A53 cores and a neural processing unit for machine learning applications. While it has far less PCIe and USB interface capability than the Intel processors above, its two MIPI-CSI camera interfaces and various serial interfaces, along with integrated image signal processors, make it useful for less extensive, lower-power intelligent sensor applications.
Some FPGA devices, such as Xilinx’s Zynq UltraScale+ family, incorporate a multicore ARM processor with a programmable logic fabric that enables very powerful sensor applications. With large numbers of flexible interfaces, a wide variety of sensors, from very high bandwidth to relatively low speeds, can easily be incorporated, analyzed, and synchronized utilizing programmable logic and traditional processor code.
Intelligent Algorithms Perform the Heavy Lifting
The performance and power advances of leading-edge processors provide a hardware platform that can interface with and crunch data from a multitude of sensors. It is the advanced algorithms that these processors are capable of executing which enable the intelligence in edge sensors.
The need for real-time intelligent algorithms is practically endless. Applications include aerial detection, classification, and tracking of in-flight vehicles and weapons, ground surveillance of people and vehicles, identification of munitions fire sources and explosions, and countless others.
The software algorithms that implement these applications can be divided into various categories. Each category has unique characteristics and software libraries that allow application developers to leverage basic operations. Often, these libraries enable the utilization of specialized processor hardware or software instruction sets that enable high-performance and/or power-efficient execution.
Computer vision (CV) algorithms utilize operations that extract features from images or video captured by imaging sensors – whether electro-optical, infrared, or other parts of the electromagnetic spectrum. CV operations extract information from these two-dimensional images, or from the additional dimension of time in video, through filters that tend to identify edges, patterns, colors, and changes thereof. CV algorithms often utilize graphics processing unit (GPU) hardware with computation units optimized for the basic operations, enabling power- and time-efficient parallel execution. Common CV libraries include OpenCV and OpenVX.
Machine learning (ML) algorithms enable a computational resource to learn, from datasets that are either predetermined before deployment or gathered on the fly once deployed, characteristics that can subsequently be utilized to analyze and classify a current situation. ML algorithms execute most efficiently on dedicated hardware such as a neural processing unit (NPU). Example ML libraries include TensorFlow and the oneAPI Deep Neural Network Library (oneDNN). ML algorithms may utilize data from one or multiple sensors of various types to develop models of typical and atypical conditions.
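The learn-then-classify pattern described above can be sketched with a toy nearest-centroid classifier. A real deployment would use a framework such as TensorFlow running on an NPU; the two-feature acoustic signatures below are invented purely for illustration.

```python
import math

def train_centroids(samples):
    """Learn one centroid per label from {label: [feature_vector, ...]}."""
    centroids = {}
    for label, vectors in samples.items():
        dims = len(vectors[0])
        centroids[label] = [sum(v[d] for v in vectors) / len(vectors)
                            for d in range(dims)]
    return centroids

def classify(centroids, vector):
    """Return the label whose learned centroid is nearest to the vector."""
    return min(centroids, key=lambda lbl: math.dist(centroids[lbl], vector))

# Invented two-feature signatures: (peak amplitude, dominant frequency in kHz)
training = {
    "vehicle": [[0.80, 0.2], [0.90, 0.3]],
    "gunfire": [[0.95, 4.0], [0.90, 5.0]],
}
model = train_centroids(training)
label = classify(model, [0.85, 0.25])   # falls near the "vehicle" centroid
```

The same structure – a training phase that distills datasets into a compact model, and an inference phase that scores new readings against it – underlies the neural-network approaches the accelerators discussed earlier are built for.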
Artificial intelligence (AI) utilizes ML, CV, and other algorithms to allow a computer to perform human-like analysis based on data to predict future events or take actions based on very complex sets of data.
Intelligent sensor applications utilize a combination of software technologies to analyze sensor data, recognize target features of interest or anomalies, and automatically take action based on this analysis and recognition. Depending on the application and sensors, various combinations of traditional, ML, CV, and AI techniques may be used. Data from disparate sensors can be synchronized and correlated. A deterministic real-time operating system (RTOS) may be utilized to ensure the synchronization of multi-sensor data sets for maximum accuracy.
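The synchronization step can be illustrated with a simple timestamp-alignment sketch: for each reading from one sensor stream, find the other stream's reading closest in time, pairing the two for correlated analysis. The stream contents and the matching tolerance are illustrative assumptions; a deterministic RTOS would additionally bound how stale a reading can be.

```python
def align_streams(stream_a, stream_b, tolerance=0.05):
    """Pair (timestamp, value) readings from two streams whose timestamps
    differ by at most `tolerance` seconds; unmatched readings are dropped."""
    pairs = []
    for ts_a, val_a in stream_a:
        # Find the reading in stream_b nearest in time to this one.
        ts_b, val_b = min(stream_b, key=lambda r: abs(r[0] - ts_a))
        if abs(ts_b - ts_a) <= tolerance:
            pairs.append((ts_a, val_a, val_b))
    return pairs

# Invented example: camera frames at 10 Hz paired with radar range readings.
camera = [(0.00, "frame0"), (0.10, "frame1"), (0.20, "frame2")]
radar  = [(0.01, 120.0), (0.11, 118.5), (0.35, 90.0)]
paired = align_streams(camera, radar)
# frame0 and frame1 each find a radar reading within tolerance; frame2's
# nearest radar reading is 0.09 s away and is dropped.
```

Each aligned tuple can then be fed to the CV, ML, or AI stages as a single fused observation, which is where multi-sensor correlation gains its accuracy.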
The complexity of today’s processors and application software, while enabling new intelligent edge sensor paradigms, also presents developers with significant work and risk in deploying them. Time-to-market, development costs, and risk can be minimized through the use of commercial off-the-shelf (COTS) computing engines.
COM-HPC – New High-Performance Embedded Computing Standard
Over the years, several standards have been developed for high-performance computing (HPC). Chassis-based systems such as VPX and the older VME offer the ultimate in rugged performance and allow a combination of COTS and custom plug-in boards, enabling relatively rapid development of complex systems for military applications. However, their size, weight, power, and cost limit their utility for many edge sensor applications, which demand small size, low power, and low cost.
PICMG’s COM-HPC specification, ratified in early 2021, targets these smaller, more constrained deployments. This new standard form factor, successor to COM Express, utilizes a two-board architecture consisting of a COM-HPC computer-on-module (COM), which hosts a complete computing engine, and an application-specific carrier board designed to host the COM-HPC module along with peripheral circuitry and interfaces to sensors.
COM-HPC defines board sizes ranging from 95 mm x 120 mm to 200 mm x 160 mm and module power budgets of up to 300W for server modules and up to 200W for client modules. This enables small form-factor edge applications to use entirely new classes of processors, including server-class CPUs, GPUs, FPGAs, and heterogeneous systems-on-chip.
COM-HPC provides a wealth of very high bandwidth I/O capability through its two 400-pin connectors. This includes ultra-high-performance interfaces such as 100 GbE, 40 Gbps USB4, and 64 lanes of PCIe Gen 4.0 or 5.0, the latter of which delivers data transfer speeds up to 32 GT/s per lane. Adding lower-speed legacy interfaces such as UART, I2C, SPI, and GPIO, COM-HPC can interface with a broad variety of sensors – from high-resolution array sensors that require PCIe to lower-bandwidth environmental sensors.
An example COM-HPC module suitable for intelligent edge sensor computing is SECO’s LAGOON COM-HPC Client Size A module, which leverages the 11th generation Intel® “Tiger Lake H” mobile processors, ranging from eight-core Xeon® to Core™ devices – on Intel’s IoT Group roadmap for industrial temperature range and long-term availability. The processors include up to 32 execution units of Intel® Iris® Xe graphics. The module provides 20 PCIe Gen 4.0 lanes, 20 PCIe Gen 3.0 lanes, two 2.5 GbE interfaces, and two to four USB4 interfaces with bandwidths up to 40 Gbps. An intelligent sensor developer integrating the LAGOON module on a custom-designed carrier board can interface up to 15 separate PCIe sensor endpoints, up to 12 USB devices, two 2.5 GbE interfaces, and several serial-based sensors – enabling a single microprocessor to analyze and correlate a large number of sensors within a relatively small form factor.
Combining the conventional, AI, and CV processing power of these processors with the COM-HPC standard yields a powerful yet relatively power- and size-efficient computing platform for implementing complex intelligent edge sensors, including sensor fusion applications.
Intelligent Edge Sensors – Marriage of High-Performance Computing with Intelligent Algorithms
Current and next-generation microprocessors, with their artificial intelligence, machine learning, and computer vision hardware accelerators and associated software libraries that utilize them, enable new levels of edge sensor intelligence. With greater processing capabilities enabled with AI, ML, and CV algorithms, ultra-high bandwidth interfaces, and lower power envelopes, intelligent edge sensors allow for unheard-of levels of situational awareness analyzed with minimal latency that allows for autonomous action – avoiding scenarios where disrupted chains of communications may prevent the use of sensor data. With current-generation microprocessors, compact but wide data interfaces enabled by standards such as COM-HPC, and through the use of AI, ML, and CV algorithms, intelligent edge sensors will gain use in deployed applications.