Disclaimer
To avoid any misunderstandings, I want to clarify that all the information in this post is based on open-source intelligence, publicly available documents, and reverse engineering. I have not attempted to compromise or replicate any potential attacks on internet-facing SIGMA systems. Instead, I conducted a simple, non-invasive reconnaissance phase, which involved accessing public websites, reviewing their source code, and examining generic endpoints to gather general information, such as system versions.
A month before publishing this post, I gave a heads-up about it to those who needed to be informed, just in case.
Introduction
This is the first part of a series on the cyber-physical analysis of weapons of mass destruction detection systems, focusing on technologies like CBRN networks and nuclear safeguards. These posts will cover how these systems integrate physical methods with cyber capabilities to counter potential threats. By analyzing both the hardware and software components, I aim to highlight the challenges and advancements in ensuring these systems function effectively in real-world scenarios, as well as some of the vulnerabilities, exploits, and security-related issues discovered during the research. Above all, the goal is to contribute to a better understanding of these systems and promote critical thinking, especially in these challenging times.
Thirty years ago, the Japanese apocalyptic cult ‘Aum Shinrikyo’ managed to fabricate sarin in-house and release it in multiple trains during rush hour on the Tokyo subway system. The deadly nerve agent killed 14 people, injured over 1,000, and caused severe health issues for thousands more.
Initial reports only mentioned 'an explosion in the subway,' causing the first 30 police officers who arrived at the scene to overlook the possibility of a chemical attack. As a result, they were exposed to and harmed by the sarin gas, which also delayed their ability to provide a timely and proper response to the other victims.
Could a similar event happen today in a modern city? Probably yes, but at least in theory, it would be orders of magnitude harder for the perpetrators to achieve their goals. Even if they succeeded, the immediate aftermath (essentially, the ability to mitigate the consequences) would be expected to be managed much more effectively, thanks to technological progress in countering Chemical, Biological, Radiological, and Nuclear (CBRN) threats, as well as improved mapping of adversarial activity linked to these illicit activities.
Essentially, CBRN detection systems are technological frameworks designed to detect, monitor, and respond to hazardous threats in real time, with a focus on public safety and national security. They can be either static, providing continuous protection for high-risk metropolitan regions, or ad hoc, deployed for specific events like sports games or concerts. These networks integrate various sensors, algorithms, interfaces, and communication systems to deliver comprehensive situational awareness across different environments and operator profiles. This means operators receive actionable information without needing to be subject matter experts; for instance, they are not required to interpret a gamma spectrum themselves or possess in-depth knowledge of physics or chemistry.
For instance, let’s imagine a scenario similar to the Tokyo attack, occurring in a city equipped with a CBRN network. Ideally, thanks to the chemical agent sensors, first responders would quickly receive an alert about the presence of sarin gas, enabling a swift and informed response. These networks can also play a crucial role in prevention and deterrence. Imagine a vehicle carrying a ‘dirty bomb’ (based on cesium chloride) driving through the city center: CBRN radiological sensors at various points would ideally identify the specific radionuclide (Cs-137), possibly triggering the necessary investigation from law enforcement.
In this context, DARPA's SIGMA (Spectral and Isotopic Gamma Monitoring and Analysis) network originated in 2014 as part of a strategic effort to enhance U.S. national security by improving the detection and identification of radiological and nuclear threats.
“SIGMA is a geographically distributed network capable of scaling to receive data from thousands of radiation sensors with spectroscopic gamma and neutron sensing capabilities. Continuously streaming radiation readings are transmitted from distributed sensor systems, and analyzed using advanced analysis algorithms to provide automated anomaly detection and isotope identification. SIGMA also ingests data from sensors and existing systems that perform analysis locally, and transmits any resultant alerts as well as other sensor information to the SIGMA platform for display and for further data fusion purposes.”
The apparent success of this initiative led to the development of ‘SIGMA+’, which expanded the network's capabilities to include biological, chemical, and explosive threat detection.
SIGMA can be found operating in metropolitan areas such as London (UK) and Washington DC (US), as well as in facilities belonging to the Port Authority of New York and New Jersey.
Before delving into the internals of SIGMA, and to provide the right context, the reader may benefit from a practical introduction to gamma spectroscopy, from a cyber-physical perspective. Let’s dive into it.
Practical Gamma Spectroscopy for Security Researchers
Do you remember playing with shadows? Just using your bare hands and light, you could create different shadows that resembled animals.
As we’re going to see, it turns out that by ‘playing’ with unstable isotopes and light instead, we can also generate different projections that serve as unique signatures, allowing us to identify radionuclides according to their spectrum (essentially, a histogram of energies). The idea is to use the range of energies and the number of high-energy photons emitted during the decay chain to generate such a signature.
Various materials are used to detect photons, including scintillation detectors like sodium iodide (NaI) crystals or semiconductor detectors such as high-purity germanium (HPGe). The latter offer optimal energy resolution (the ability to distinguish between different energies of the incoming photons), making them ideal for precise gamma spectroscopy. However, they are expensive, require sophisticated cooling (typically with liquid nitrogen), and come with higher operational complexity.
On the other hand, scintillators offer poorer energy resolution compared to HPGe detectors, but they are generally less expensive, faster, and easier to handle. These characteristics make them more suitable for portable, general radiation detection and monitoring. As scintillators are widely used in CBRN networks, such as DARPA’s SIGMA, I will focus this introduction on them.
In scintillation-based detectors, a high-energy photon (i.e., a gamma ray) interacts with a medium (e.g., a NaI crystal), transferring some or all of its energy to the material. The three main interaction mechanisms behind scintillators are key concepts in physics. Let’s briefly describe them:
1. Photoelectric effect (Nobel prize for Albert Einstein)
The incident photon is absorbed during its interaction with the target atom, which causes the photon to ‘disappear’, thus transferring all of its energy. If the photon’s energy is above a specific threshold (the electron’s binding energy), then an electron (photoelectron) will be emitted. As shown in the diagram, this interaction is predominant for photon energies below ~200 keV.
2. Compton scattering (Nobel prize for Arthur H. Compton)
In this case, the photon is not absorbed but loses part of its energy, which is transferred as kinetic energy to the electron it scatters off. As a result, the scattered photon has a lower energy and is deflected from its previous trajectory at a specific angle. This interaction is predominant in the range of 200 keV-5 MeV.
3. Pair production (Nobel prize for Carl D. Anderson as part of the discovery of the positron)
The interaction of a high-energy photon with a strong electromagnetic field can convert the photon’s energy into a particle-antiparticle pair, typically an electron and a positron. In the context of scintillators, where this interaction is less common, the positron would eventually interact with an electron, resulting in an annihilation event. This annihilation generates two characteristic gamma photons, each with an energy of 0.511 MeV (half of the rest mass energy of the electron-positron pair). This interaction is predominant above 5 MeV, but technically it just requires a photon with an energy greater than 1.022 MeV.
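As a quick numeric sanity check of those two figures (using scipy.constants, which the PoC later in this post relies on anyway):
---
from scipy.constants import electron_mass, speed_of_light, electron_volt

# Electron rest energy, m_e·c², expressed in MeV
mec2_mev = electron_mass * speed_of_light ** 2 / electron_volt / 1e6
print(f"Electron rest energy: {mec2_mev:.3f} MeV")            # 0.511 MeV per annihilation photon
print(f"Pair-production threshold: {2 * mec2_mev:.3f} MeV")   # 1.022 MeV (2·m_e·c²)
---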
Through these interactions, gamma rays striking the scintillator material excite electrons, which then undergo further interactions within the medium, eventually reaching a 'relaxed' state by emitting excess energy as visible light (photons).
A photosensitive device then collects these photoluminescence events, allowing the original energy of the gamma rays to be indirectly inferred. In modern, portable, scintillator-based gamma spectrometers we can find two main types of devices:
Photomultiplier tubes (PMTs)
The photoelectric effect is also a fundamental process in PMTs. Photons generated within the scintillator material are directed (via light guides) towards the photocathode of the PMT, where they cause the emission of electrons. These photoelectrons are then accelerated and multiplied through a series of dynodes, each held at a progressively higher potential than the previous one, until they reach the anode. The result is a significantly amplified current pulse, whose height is proportional to the energy of the incoming gamma ray that initiated the sequence.
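To get a feel for the numbers involved, here is a back-of-the-envelope sketch (typical textbook figures, not tied to any particular tube):
---
# Each dynode emits roughly 'delta' secondary electrons per incident electron,
# so the overall gain grows geometrically along the dynode chain.
delta, n_dynodes = 4, 10
gain = delta ** n_dynodes
print(f"Overall gain: {gain:.1e}")  # ~1e6: a single photoelectron becomes a measurable pulse
---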
Silicon photomultipliers (SiPMs)
An SiPM is a solid-state photodetector made up of an array of microcells, each composed of avalanche photodiodes (APDs). When a photon strikes one of these microcells, it triggers an avalanche effect, similar in principle to a Geiger-Müller tube, resulting in a series of current pulses. The distribution of pulse amplitudes can be correlated with the intensity of the incoming gamma ray that initiated the process.
The next step is for the spectrometer's electronics to process the output signals from either the PMT or the SiPM to generate the spectrum. The complexity and functionality of the analog front-end electronics depend on each detector, but in general terms we typically find an FPGA implementing stages such as amplification, pulse shaping and discrimination, filtering, and digitizing, plus a SoC where the detector’s application layer implements the required functionality, such as radionuclide identification, communications, and the UI.
Understanding Spectra
The spectrum is essentially a histogram, where each bin corresponds to a specific energy range, and the height of the bin represents the number of counts (ideally, detected photons) within that energy range. The following image depicts a random spectrum of 2048 bins to illustrate the concept.
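In code, a spectrum is little more than an array of counts. A minimal NumPy sketch (the event energies and the 0-3,000 keV range are made up, purely for illustration):
---
import numpy as np

# 2048 bins covering an arbitrary 0-3000 keV range;
# counts[i] = number of events whose energy fell into bin i
edges = np.linspace(0, 3000, 2049)             # 2049 edges -> 2048 bins
energies = np.random.uniform(0, 3000, 10_000)  # synthetic photon events
counts, _ = np.histogram(energies, bins=edges)
print(counts.shape)                            # (2048,)
---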
However, scintillators are not perfect detectors. Several intrinsic factors influence how the spectrum is generated by a specific detector, including the material's characteristics, electronic ‘noise’, efficiency, and energy resolution. So, if we define the ‘true spectrum’ as the ideal, undistorted energy distribution from a gamma source, then the detector response function (DRF) is a mathematical representation of how the detector alters this ‘true spectrum’. The ‘altered’ spectrum is the actual output of the detector, the information we are obtaining from the instrument.
Therefore, DRFs are crucial in gamma spectroscopy, as they enable software to model and predict the expected response of a detector when exposed to a given radioactive source. From the cyber-physical perspective, DRFs are fundamental to the development of a digital twin of the physical detector, for various purposes such as:
- Spectrum spoofing
If we are impersonating a specific instrument, it may be necessary to generate realistic spectra that do not correspond to the actual physical conditions. For example, we might need to add/remove certain elements from the spectrum (such as specific gamma lines or photopeaks) to add/conceal the presence of a target radionuclide.
- Adversarial gamma spectroscopy
Modern gamma spectrometers are digital devices that incorporate a software layer implementing radionuclide detection algorithms, which may use various strategies, including machine learning. By reverse engineering these algorithms, we can identify the most effective strategy for bypassing detection. To conceal the presence of a target radionuclide without compromising the instruments, it may be valuable for certain organizations to understand how specific detectors respond to particular combinations of radionuclides (aka masking). Modeling these scenarios in the digital twin can help facilitate this task.
The Sandia National Laboratories in the United States have done a significant job (semi-)empirically modeling DRFs for various commercial gamma spectrometers, and generic scintillator materials, for their GADRAS software. Although the distribution of this software seems restricted and apparently evaluated upon request (honestly, I haven't personally attempted to request it, as I'm not a U.S. citizen and I’m not affiliated with any academic institution), they also maintain an open-source gamma spectroscopy tool called InterSpec, which is a valuable resource for gaining a deeper understanding of the topic. InterSpec includes certain DRFs for various commercial detectors, as well as efficiency coefficients for generic scintillators, adapted from GADRAS.
When you're independently researching nuclear-related topics, you often have to find ways to compensate for the lack of physical resources. So, this time, let's learn how to begin building a digital twin for one of the gamma spectrometers supported by DARPA’s SIGMA, all without leaving your computer. No radioactive sources, no instruments required; just a laptop, some reverse engineering, and open-source software.
Energy resolution
One of the most important factors shaping the spectrum is energy resolution, which quantifies the detector's ability to distinguish between two different energies of incident ionizing radiation.
Typically, the energy resolution of a detector is expressed as ‘x% @ 662 keV’. This means that the instrument has an energy resolution of x% at an incident energy of 662 (or 661) keV, corresponding to the well-known gamma line at 0.6617 MeV emitted in the decay of Cs-137 (strictly speaking, by its short-lived daughter Ba-137m). Since this gamma source is commonly used for calibrating instruments, this specific gamma line has become a standard reference for expressing energy resolution, allowing for comparisons between different detectors. However, what does this really mean? Let’s elaborate on it.
With an ideal detector, the 'true spectrum' for a Cs-137 source would show a perfect peak in the bin corresponding to that specific energy range, as shown in the spectrum below.

However, unlike the 'true spectrum,' a detector’s spectrum has an inherent limitation in measuring the energy of incoming photons. This limitation is quantified by the energy resolution.
To visualize this better, we need to understand that the detector's response can be modeled as a Gaussian distribution, with the photopeak centered around its mean.

Therefore, with an energy resolution of 7% at 662 keV, we can relate it to the FWHM using the simple formula: FWHM = ( R · E ) / 100, where R is 7% and E is 662 keV. In this case, the FWHM would be 46.34 keV.
That 7% represents the uncertainty of the detector’s response to a gamma source. When the instrument records a photon event ideally linked to an incoming energy of 662 keV, the true energy of that photon actually falls within the range of 662 keV ± 23.17 keV (FWHM/2). So, unlike the perfect peaks in the 'true spectrum,' the spectrum obtained from the instrument will show broader peaks.
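To make the arithmetic concrete (same numbers as above):
---
R = 7.0    # energy resolution (%) at the reference line
E = 662.0  # reference energy (keV), the Cs-137 line

fwhm = (R * E) / 100
print(f"FWHM = {fwhm:.2f} keV -> 662 keV ± {fwhm / 2:.2f} keV")  # 46.34 keV, ±23.17 keV
---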
As we can see in the plot below, the problem with this peak broadening emerges when two photopeaks are too close together, separated by less than 1 FWHM (for instance two gamma lines at 652 and 662 keV), which is the standard threshold for considering two photopeaks as fully resolved.

Poor resolution can pose challenges for peak fitting and matching algorithms, especially in complex spectra. However, to balance operational ease and accuracy, low- to medium-resolution detectors are commonly used in CBRN networks and handheld spectrometers. For example, the following image clearly illustrates this: we can observe the peak broadening for a scintillator detector (with an energy resolution of 8%) at the 662 keV gamma line in a 137Cs spectrum, while an HPGe detector maintains a resolution (0.22%) closer to the 'true spectrum.'

So far, we’ve explored how the detector’s response follows a Gaussian probability density function, how vendors typically provide a standard energy resolution for comparison, and how to calculate the FWHM from this value. At first glance, it seems like we have everything needed to calculate the standard deviation (σ) (remember, FWHM = 2·√(2·ln2)·σ), but that’s not the case. There’s an important detail that complicates this process a bit: the energy resolution is energy-dependent, meaning it’s not the same across all energies.
At this point, the approach of empirically determining the energy resolution over a specific energy range comes into play. After collecting enough measurements using the detectors and calibration sources, we can fit the data with a function (e.g., polynomial, exponential, etc.), as sketched below. However, since we’re not as equipped as official institutions, we’ll need to use our typical shortcut: let’s reverse engineer the detector’s firmware.
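For reference, the fitting step itself is trivial once you have the measurements. A sketch with made-up calibration points and one empirical model commonly used for NaI:
---
import numpy as np

# (energy keV, measured resolution FWHM %) pairs - illustrative values only
e = np.array([122.0, 356.0, 662.0, 1173.0, 1332.0])
r = np.array([10.5, 8.2, 6.9, 5.8, 5.5])

# A common empirical model, R(E) = a + b/sqrt(E), is linear in 1/sqrt(E)
b, a = np.polyfit(1 / np.sqrt(e), r, 1)
print(f"R(E) = {a:.2f} + {b:.2f}/sqrt(E)  [%, E in keV]")
---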
Reverse engineering
I won’t mention the specific detectors I’ve been reversing, as it's not that relevant to the content, and due to the sensitivity of the topics discussed, I prefer to take a cautious approach, just in case. These detectors are high-end commercial devices that, by default, support integration into the SIGMA network. My general approach involves locating the calibration routines in the firmware to extract the necessary information for generating a DRF. Let’s see an example.
In the following code flow, we can see how the firmware implements the logic required to generate a response matrix (using the 'FoldWith' method). This allows convolving the resulting response matrix with an arbitrary true spectrum, thus predicting the detector’s response. Among other things, this is useful for ‘easily’ building large synthetic datasets to train machine learning algorithms to identify radionuclides under different conditions.
Basically, ‘FoldWith’ calculates the Gaussian probability density function for each of the spectrum’s channels by invoking ‘GaussianFor’, which essentially implements the Gaussian function within a 6-sigma range (over a pair of adjacent channels), using the midpoint energy of the channel as the mean.
As previously mentioned, the standard deviation is energy-dependent, so ‘GaussianFor’ first calculates the relative sigma for the specific energy by invoking ‘RelativeSigmaFor’.
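Conceptually, the folding step looks something like this: a minimal NumPy reconstruction of that logic (my sketch, not the vendor's code; I skip the 6-sigma windowing and simply evaluate the Gaussian over all channels):
---
import numpy as np

def fold_with(true_spectrum, channel_energies, relative_sigma):
    # Build a response matrix: each 'true' channel spreads its counts over
    # neighboring channels following a Gaussian with an energy-dependent sigma.
    channel_energies = np.asarray(channel_energies, dtype=float)
    n = len(channel_energies)
    response = np.zeros((n, n))
    for j, e in enumerate(channel_energies):
        sigma = relative_sigma(e) * e  # sigma(E) = relative sigma * E
        g = np.exp(-0.5 * ((channel_energies - e) / sigma) ** 2)
        response[:, j] = g / g.sum()   # normalize each column to preserve counts
    return response @ true_spectrum    # the detector-convolved ('folded') spectrum
---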

From this method we can extract the sigma (σrel) equation we need for completing our DRF. Therefore, by knowing how to calculate the standard deviation for any given energy, we can predict the energy resolution across the entire spectrum.
To ensure everything was correct, I performed calculations for a series of random energies, and the results matched the energy resolution models for NaI-based scintillators found in various scientific publications. In this type of scintillator, energy resolution improves (i.e., sigma decreases) as energy increases.
It's important to note that each detector material has its own specific RelativeSigma equation, so I’m using the RelativeSigma expression for the NaI scintillator.
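Putting it together, here is the recovered σrel expression as runnable Python (the same equation used in the PoC below). Evaluating it at a few energies shows the expected NaI behavior, with a resolution of roughly 6.4% at 662 keV:
---
import math

FWHM_TO_SIGMA = 1 / (2 * math.sqrt(2 * math.log(2)))  # ~1/2.3548

def resolution_pct(energy_kev):
    # Energy resolution (FWHM, in %) recovered from the firmware (NaI scintillator)
    return 0.9363 * (811 / energy_kev ** 1.06 + 6.2 - (energy_kev / 4000) ** 1.8 * 5)

def sigma_kev(energy_kev):
    # sigma(E) = (R(E)/100 · E) / (2·sqrt(2·ln 2))
    return resolution_pct(energy_kev) * 0.01 * energy_kev * FWHM_TO_SIGMA

for e in (122.0, 662.0, 1332.0):
    print(f"{e:7.1f} keV -> R = {resolution_pct(e):.2f}%, sigma = {sigma_kev(e):.2f} keV")
---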
Now that we can predict the detector’s response to specific gamma sources, the next question is: how can we simulate these sources? To generate a ‘true spectrum,’ we need to run a Monte Carlo simulation that supports photon transport methods. There are several options available, including MCNP (Los Alamos National Lab, restricted distribution), GEANT4 (CERN, open source), and OpenMC (MIT, Argonne National Lab and other contributors, open source).
Of the open-source options, I particularly like OpenMC. Here’s the schema of the simulation:
The idea is to use OpenMC to simulate the NaI scintillator, with an aluminum oxide reflector and aluminum cladding. The gamma sources are placed beneath the scintillator. The result of this simulation will be the ‘true spectrum,’ to which the DRF will be applied in order to generate a realistic simulated spectrum.
The code is commented, so if you're interested, you can take a look. However, here’s a brief explanation of the most important points:
1. Cs-137 is assumed to be in secular equilibrium.
2. Two tallies are used: 'pulse-height' to obtain the 'true spectrum,' and 'reaction rates' to show that most photon interactions in the NaI crystal result in the photoelectric effect and Compton scattering, as expected.
3. Random samples from the true spectrum are used to generate the simulated spectrum by applying the DRF.
4. It supports a simple masking scenario to illustrate the concept, using Co-60 at a 5:1 ratio (5 grams of Co-60, 1 gram of Cs-137).
These are the results:
--------------poc.py--------------
"""
*** Intro to Adversarial Gamma Spectroscopy #PoC-2 ***
This program simulates the response of a scintillator-based gamma detector to a Cs-137 source, optionally
masking it with Co60 (5:1 ratio).
It implements a custom detector response function, which was obtained by reversing a commercial detector's firmware,
and plots the simulated spectrum alongside the true spectrum obtained from a Monte-Carlo run.
-- For educational purposes only --
Ruben Santamarta https://www.reversemode.com
Requirements:
- OpenMC https://docs.openmc.org/en/stable/
- Numpy https://numpy.org/
- Matplotlib https://matplotlib.org/
- Scipy https://scipy.org/
"""
import sys
import math
import numpy as np
import matplotlib.pyplot as plt
import openmc
from scipy.constants import Avogadro

## OpenMC settings - paths
openmc.config['cross_sections'] = '/your/path/to/nucleardata/endfb/cross_sections.xml'
openmc.config['chain_file'] = "/your/path/to/chain_endfb71_pwr.xml"

# Let's model the detector response function by using a Gaussian distribution with an
# energy-dependent sigma (σ(E)). I obtained the equation for the custom σ(E) by
# reverse engineering the detector's firmware. 'energy' is expected in keV.
def custom_gauss(energy):
    num = 0.9363 * (811 / (energy ** 1.06) + 6.2 - (energy / 4000) ** 1.8 * 5)  # resolution (FWHM, %)
    num = num * 0.01 * (1 / (2 * math.sqrt(2 * math.log(2))))                   # -> relative sigma
    sigma = num * energy
    return np.random.normal(loc=energy, scale=sigma)

def main():
    masking = False
    if len(sys.argv) > 1:
        masking = True

    ###### Materials
    # Scintillator cladding
    aluminium = openmc.Material()
    aluminium.add_element('Al', 1)
    aluminium.set_density('g/cm3', 2.7)
    # NaI-based scintillator
    sodium_iodide = openmc.Material()
    sodium_iodide.add_element('Na', 1)
    sodium_iodide.add_element('I', 1)
    sodium_iodide.set_density('g/cm3', 3.67)
    # Aluminium oxide (reflector)
    aluminium_oxide = openmc.Material()
    aluminium_oxide.add_nuclide('O16', 0.4)
    aluminium_oxide.add_element('Al', 0.6)
    aluminium_oxide.set_density('g/cm3', 3.987)
    materials = openmc.Materials([aluminium, sodium_iodide, aluminium_oxide])

    ###### Geometry
    # Defining cylinders: crystal, casing and reflector
    cyl1 = openmc.ZCylinder(r=3.50)
    cyl2 = openmc.ZCylinder(r=3.55)
    cyl3 = openmc.ZCylinder(r=3.60)
    # Surfaces
    surface1 = openmc.ZPlane(z0=0.00)
    surface2 = openmc.ZPlane(z0=0.05)
    surface3 = openmc.ZPlane(z0=0.10)
    surface4 = openmc.ZPlane(z0=7.70)
    surface5 = openmc.ZPlane(z0=8.00)
    # Everything enclosed inside a sphere, surrounded by vacuum.
    s = openmc.Sphere(r=10, boundary_type='vacuum')

    # Defining cells
    # Crystal
    nai_crystal = openmc.Cell()
    nai_crystal.region = -cyl1 & -surface4 & +surface3
    nai_crystal.fill = sodium_iodide
    # Casing
    aluminium_housing = openmc.Cell()
    aluminium_housing.region = (+cyl2 & -cyl3 & -surface4 & +surface2) | (-cyl3 & -surface2 & +surface1)
    aluminium_housing.fill = aluminium
    # Reflector
    reflector = openmc.Cell()
    reflector.region = (+cyl1 & -cyl2 & -surface4 & +surface2) | (-cyl2 & -surface3 & +surface2)
    reflector.fill = aluminium_oxide
    # Do not leak particles
    detector_back = openmc.Cell()
    detector_back.region = -cyl3 & -surface5 & +surface4
    detector_back.fill = aluminium
    surrounding = openmc.Cell()
    surrounding.region = -s & ~(-cyl3 & -surface5 & +surface1)

    # This will be our entire 'world'
    universe = openmc.Universe(cells=[nai_crystal, aluminium_housing, reflector, detector_back, surrounding])
    # Instantiate our universe
    geometry = openmc.Geometry(universe)

    ###### Calculating number of atoms for Cs137/Ba137m, assuming secular equilibrium
    hf_cs137_s = 30.17 * 365 * 24 * 60 * 60  # half-life Cs137 (seconds)
    hf_ba137m_s = 2.55 * 60                  # half-life Ba137m (seconds)
    b_ratio_cs2ba = 0.946                    # branching ratio (94.6%) from Cs137 to Ba137m (decay chain)
    # Number of atoms = (mass / molar_mass) * Avogadro
    CsMass = 1  # grams
    NaCs = (CsMass / 137.91) * Avogadro
    # Secular equilibrium: equal activities, so N_Ba = N_Cs * (T½_Ba / T½_Cs) * branching ratio
    NaBa = (hf_ba137m_s / hf_cs137_s) * b_ratio_cs2ba * NaCs
    CoMass = 5  # grams
    NaCo = (CoMass / 60) * Avogadro
    print(f"== Cs137 - Mass: {CsMass:}g")
    print(f"* Number of atoms of Ba137m: {NaBa:.4e}")
    print(f"* Number of atoms of Cs137: {NaCs:.4e}")
    print(f"* Number of atoms of Co60: {NaCo:.4e}")

    ###### Isotropic gamma sources underneath the scintillator
    # Co-60
    Co60_source = openmc.Source()
    Co60_source.space = openmc.stats.Point((0.0, 0.0, -1.0))
    Co60_source.angle = openmc.stats.Isotropic()
    Co60_source.energy = openmc.data.decay_photon_energy("Co60")
    Co60_source.particle = 'photon'
    Co60_source.strength = np.sum(openmc.data.decay_photon_energy("Co60").p) * NaCo
    # Cs-137
    Cs137_source = openmc.Source()
    Cs137_source.space = openmc.stats.Point((0.0, 0.0, -1.0))
    Cs137_source.angle = openmc.stats.Isotropic()
    Cs137_source.energy = openmc.data.decay_photon_energy("Cs137")
    Cs137_source.particle = 'photon'
    Cs137_source.strength = np.sum(openmc.data.decay_photon_energy("Cs137").p) * NaCs
    # Ba-137m
    Ba137m_source = openmc.Source()
    Ba137m_source.space = openmc.stats.Point((0.0, 0.0, -1.0))
    Ba137m_source.angle = openmc.stats.Isotropic()
    Ba137m_source.energy = openmc.data.decay_photon_energy("Ba137_m1")
    Ba137m_source.particle = 'photon'
    Ba137m_source.strength = np.sum(openmc.data.decay_photon_energy("Ba137_m1").p) * NaBa

    settings = openmc.Settings()
    settings.particles = 10 ** 4
    settings.batches = 20
    settings.photon_transport = True
    if masking:
        settings.source = [Cs137_source, Ba137m_source, Co60_source]
    else:
        settings.source = [Cs137_source, Ba137m_source]
    settings.run_mode = 'fixed source'

    ###### OpenMC tallies
    # Filters
    tallies = openmc.Tallies()
    energy_range = 2e6 if masking else 1e6
    energy_bins = np.linspace(0, energy_range, 1024)
    energy_filter = openmc.EnergyFilter(energy_bins)
    cell_filter = openmc.CellFilter(nai_crystal)
    # Tally 1: pulse-height
    tally1 = openmc.Tally(name='pulse-height-tally')
    tally1.filters = [cell_filter, energy_filter]
    tally1.scores = ['pulse-height']
    # Tally 2: photon interactions
    tally2 = openmc.Tally(name='photon_interactions')
    tally2.filters = [cell_filter]
    reactions = ["photoelectric", "incoherent-scatter", "coherent-scatter", "total"]
    tally2.scores = reactions
    tallies.append(tally1)
    tallies.append(tally2)

    # Instantiate model
    model = openmc.model.Model(geometry, materials, settings, tallies)
    # Run simulation
    sp_filename = model.run()
    # Get results
    sp = openmc.StatePoint(sp_filename)

    # Let's verify that most photon interactions are photoelectric effect and Compton scattering
    tally = sp.get_tally(name='photon_interactions')
    df = tally.get_pandas_dataframe().sort_values("mean", ascending=False)
    total = df.loc[df['score'] == 'total', 'mean'].item()
    print(f"Total: {total:.4e}")
    photoelectric = (df.loc[df['score'] == 'photoelectric', 'mean'].item() * 100) / total
    print(f"Photoelectric effect: {photoelectric:0.2f}%")
    incoherent_scatter = (df.loc[df['score'] == 'incoherent-scatter', 'mean'].item() * 100) / total
    print(f"Compton scattering: {incoherent_scatter:0.2f}%")

    # Let's get the ideal spectrum by using the 'pulse-height' tally
    tally = sp.get_tally(name='pulse-height-tally')
    pulse_height_values = tally.get_values(scores=['pulse-height']).flatten()
    # Use bin midpoints (lower edges + half a bin width)
    energy_bin_centers = energy_bins[:-1] + 0.5 * (energy_bins[1] - energy_bins[0])

    ###### Simulating detector resolution
    number_simulated_samples = 1e6
    # Sampling from the true spectrum, treated as a probability distribution
    samples = np.random.choice(energy_bin_centers[1:], size=int(number_simulated_samples),
                               p=pulse_height_values[1:] / np.sum(pulse_height_values[1:]))
    # Convert to keV
    kev_samples = samples / 1000
    # Folding the ideal spectrum by implementing the detector response function.
    simulated_pulse_height_values = custom_gauss(kev_samples)
    # Generating the simulated spectrum according to the detector resolution
    simulated_spectrum, _ = np.histogram(simulated_pulse_height_values * 1000.0, bins=energy_bins)
    # Normalized for a probability density function
    # https://www.itl.nist.gov/div898/handbook/eda/section3/histogra.htm
    renormalized_simulated_spectrum = (simulated_spectrum / np.sum(simulated_spectrum)) * np.sum(pulse_height_values[1:])

    ###### Plotting
    # Spectrum
    plt.figure()
    plt.xlabel('Energy [eV]')
    plt.ylabel('Counts')
    title = "Cs137 (Co60 Masked) - True Spectrum vs Detector resolution" if masking else "Cs137 - True Spectrum vs Detector resolution"
    plt.title(title)
    plt.semilogy(energy_bin_centers[1:], pulse_height_values[1:], label="True Spectrum")
    plt.semilogy(energy_bin_centers[1:], renormalized_simulated_spectrum[1:], label="Simulated Spectrum (Detector)")
    plt.legend()
    r_limit = 1500000 if masking else 800000
    plt.xticks(np.arange(0, r_limit, step=1e5))
    plt.ylim(bottom=1e5)
    plt.xlim(right=r_limit)
    plt.grid(True)
    plt.tight_layout()
    plt.show()

if __name__ == '__main__':
    main()
--------------/poc.py--------------
SIGMA Network
The diagram above illustrates the SIGMA Open Core System architecture. One of the key aspects to note is the clear physical and logical boundary, the ‘Edge’, that divides the system into two distinct parts (from an offensive perspective): ‘internal’ and ‘external.’ Most of the systems within the cloud-based ‘internal’ part (different SIGMA instances running on an AWS VPC, to the right of the ‘Edge’) are outside the scope of this post, as the information I can provide about them is limited to what is described in public documents. Therefore, readers specifically interested in this section may find it more beneficial to consult the documents directly. However, when necessary, I will elaborate on certain interactions and details.
The ‘external’ part is a different story. There is much more to discuss here, as we can gain valuable insights by mapping live systems using well-known tools like Shodan, reverse engineering SIGMA-compliant gamma detectors, and collecting information from open-source intelligence. Let’s start.
At the EDGE
As we can see in the diagram, "Edge" is the outer layer of the SIGMA system that handles incoming data from Devices and Sensors. There are two API versions: v1 (supported for backwards compatibility only) and v2 (the default version), which differ both in design and in terms of security. However, to keep this post approachable, I’ll aim to unify terms from both versions whenever possible.
In the SIGMA system, a device comprises a sensor (or a group of sensors), a controller, and a communication unit.
The sensors are instruments that provide readings, which are eventually ingested by the SIGMA backbone. These measurements can include various types, such as ‘radiological,’ ‘chemical,’ ‘explosive,’ ‘biological,’ ‘location,’ or ‘atmospheric’. Supported sensors include commercial Gamma spectrometers (with or without neutron detection), chemical and biological agent detectors, and meteorological stations.
The ‘controller’ is an embedded computer that interfaces with the sensors to collect their data and prepare it for transmission to the SIGMA backbone using the EDGE protocol (v1 or v2). This connectivity is handled by the ‘communications unit.’
In this context, a device can be viewed as an entity that encapsulates everything within a container, as shown in the diagram above. Alternatively, a device can also refer to a different configuration, such as a mobile phone paired via Bluetooth with a gamma spectrometer or a drone. In this client-server architecture, the device is the client and the Edge nodes are the server.
Although their implementations differ, both v1 and v2 of the EDGE API expose three key services:
Provisioning
In theory, a device cannot complete a provisioning procedure by itself without any prior human intervention. Devices must first be registered and approved. Once that step is completed, the device can submit a provisioning request to a specific REST API endpoint to receive its 'credentials.'
For EDGE protocol v1, the device will receive a 256-byte RSA public key in PEM format, which is used to encrypt specific parts of messages, as we'll discuss later. In EDGE protocol v2, the device will receive an API authentication token.
These provisioning requests are valid only for a limited time after the device provisioning is authorized. After this time, a request will only succeed if it is specifically 'reauthorized' by an operator using the SIGMA DTECT WebUI.
Although the 'credentials' (RSA public key and authentication token) are designed to be long-lasting and are persisted on the device, the provisioning API seems to be designed to prevent attackers from abusing it to register fake devices, even if they have access to the information needed to impersonate a device (e.g., Serial, deviceID, name, etc.), which is kind of feasible under certain circumstances.
Server: https://provision.dtect.net
Reporting
These endpoints receive sporadic reports from the devices, such as battery level, temperature, or alarms.
Server: https://api.dtect.net
Streaming
These endpoints are designed to establish a reliable message-based communication between the backbone and the device. The sensors submit their readings periodically over this API.
Server: (http/ws)s://api.dtect.net
EDGE v1: tcp://edge.dtect.net:5569 (ZeroMQ)
One of the things that initially caught my attention was the public availability of these SIGMA endpoints. Don’t get me wrong: this complies with the design documents, so under the SIGMA design paradigm there is nothing wrong with it; everything is apparently working as intended. If you think about it, this approach eases the rapid integration of many devices under different scenarios.
However, there are several observations that might be worth mentioning.
1. A simple Shodan search (or a subdomain scan) reveals many different sub-domains, some of which suggest non-production environments for both the AWS cloud and DHS' cloud. While this isn't necessarily an issue, it is generally recommended to prevent dev environments from being publicly accessible, as the policies, configurations, and hardening of these systems may be weaker than those in production environments.
api.dev.dtect.net
preview.dev.dtect.net
demo.dev.dtect.net
spinaltap.dev.dtect.net
demo-test.dtect.dhs.gov
test.dtect.dhs.gov
api-test.dtect.dhs.gov
2. There are certain maintenance systems publicly available (e.g., spinaltap.dtect.net, spinaltap.dev.dtect.net). According to the documentation:
“[Spinaltap] Provides a live view into the status of Devices and Sensors reporting into the system. Often used for system maintenance”.
Accessing this system grants the ability to retrieve certain information, such as the GPS coordinates of available devices, likely one of the most crucial pieces of information for a CBRN network. Additionally, it allows for the disabling or rebooting of sensors and devices, effectively impacting the system's functionality.
In the SIGMA Open Core diagram above, ‘Spinaltap’ sits in a ‘gray zone’, close to the ‘internal’ systems, though the documentation lists it as an ‘external’ system.
As this system is not intended to receive direct requests from devices but instead directly consumes data from Kafka, (“SpinalTap is a user interface for viewing streaming data from Devices and Sensors sent to the Kafka back-bone. It also reads historical data from the Cassandra database when it starts up”), it seems reasonable to consider that it could be configured differently to reduce an unnecessary public attack surface.
However, I’m not claiming that this system is insecure.
3. There appear to be outdated systems in use.
For example, Mender is used for deploying and managing firmware updates on devices. Based on the information I’ve gathered, it seems to be outdated. While I’m not suggesting that these outdated systems represent an immediate risk, it is something worth investigating to ensure that public CVEs are properly mitigated.
4. Exposed third-party service tokens
DHS’ DTECT WebUI (https://dtect.dhs.gov) exposes a valid, secret token for the Mapbox API service. I’m not implying this represents an immediate risk, as there are other factors to consider (e.g., is DHS’ DTECT instance still in use?). However, just to quote Mapbox’s documentation:
“Secret token API requests should never be exposed to the client. If someone else gets access to tokens with secret scopes they may be able to make changes to your account. Make all requests requiring a token with secret scopes on a server.”
5. DTECT WebUI source code
The DTECT WebUI (dtect.net and dtect.dhs.gov), which serves as the main interface for the SIGMA system, is publicly accessible.
As a React-based web app, the front-end source code is also available before authentication, including the unminified version with original comments from the developers. Given the nature of this system, I think this level of exposure provides more information than is strictly necessary for public access.
6. Live status
The Statuspage (https://status.dtect.net) for SIGMA is publicly available, which is a good example of transparency. However, considering the system’s characteristics, where availability is critical, this might be too much information to expose. In a hybrid attack scenario, one of the first tactics attackers might employ is to deny SIGMA operators visibility into potential threats, making a targeted denial of service a likely threat.
From the attacker’s perspective, having a live feed of the system's status would be advantageous, not only for launching DoS attacks but also for phishing campaigns and other malicious activities.
Regarding the implementation of the EDGE protocol, my perspective is limited to mostly static analysis of SIGMA detectors’ firmware, as I haven’t even considered analyzing the server-side API, for obvious reasons. Let’s take a look at a couple of interesting things.
Deterministic UUIDs
The EDGE protocol implementation identifies ‘reusable’ objects (devices, sensors, components, algorithms) using deterministic v5 UUIDs, not random ones. There are four namespaces: three can be found in the publicly available documents, while the fourth was found through reverse engineering.
Sensor Namespace “5a4af76d-acb4-4f04-9dbf-76f8e071a4bc”
Device Namespace “55e83bb7-1066-4319-9a75-c344164067dd”
Algorithm Namespace “dd6654f6-b08c-409c-b440-949ae96c1b08”
Backbone Namespace “dcaa345d-eedb-44ea-XXXX-XXXXXXXXXXX”
As we can read in the documentation:
“For a device with make of “DeviceCo”, model of “jPhone” and serial number of “42” generate a version 5 UUID with a namespace of (see above) “55e83bb7-1066-4319-9a75-c344164067dd” and data of “Device-Co_jPhone_42” and get a UUID of “3cfd2156-9ef3-5f74-9915-a5504d1a0af5”.”
The 4 elements required to generate the UUID are “Namespace”, “make”, “model”, “serial”, which are then joined together as follows:
---
private static UUID generateId(Guid namespaceId, params string[] parts)
{
    Guid guid = GuidUtility.Create(namespaceId, string.Join("_", parts));
    return toUuid(guid);
}
---
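Since v5 UUIDs are just SHA-1 hashes over a namespace and a name, anyone can reproduce these identifiers offline. A quick sketch using Python's standard uuid module (I'm assuming the firmware's GuidUtility follows the standard RFC 4122 derivation, which the documented example suggests):
---
import uuid

DEVICE_NAMESPACE = uuid.UUID("55e83bb7-1066-4319-9a75-c344164067dd")

def device_uuid(make: str, model: str, serial: str) -> uuid.UUID:
    # Version 5 (name-based, SHA-1) UUID over "make_model_serial"
    return uuid.uuid5(DEVICE_NAMESPACE, "_".join((make, model, serial)))

# Per the documentation's example, this should print
# 3cfd2156-9ef3-5f74-9915-a5504d1a0af5
print(device_uuid("DeviceCo", "jPhone", "42"))

# Because the inputs are guessable, candidate identifiers can also be
# enumerated offline, e.g. by iterating over plausible serial numbers:
candidates = [device_uuid("DeviceCo", "jPhone", str(n)) for n in range(1000)]
---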
This means that object identifiers are deterministic and generated on the client-side, as opposed to the regular model where the UUID is provided by the server. From an offensive security perspective, this has potentially interesting implications. Let’s use the provision request as an example:
There is nothing in this request that cannot feasibly be generated on the client-side. While testing an application with IDs based on random UUIDs can pose a significant barrier, EDGE uses deterministic UUIDs that follow predictable or brute-forceable patterns (such as names, serial numbers, etc.). This potentially opens the door to issues like IDOR (Insecure Direct Object Reference), enumeration (via returned errors), and other similar vulnerabilities. I’m not suggesting that this is the case, but it’s definitely something I would consider trying.
Handling of untrusted data on the server side
Anyone who has audited cryptographic implementations knows that the more code handles untrusted data, the larger the attack surface becomes, something that is good news for potential attackers.
The streaming API in the EDGE v1 does not have any kind of transport security. As we can read in the documentation:
“The Edge Server binds a ZeroMQ ROUTER socket listening on TCP port 5569. Connections are established from the remote Device by creating a REQ socket and connecting to the SIGMA Edge. No authentication or encryption is established at the transport layer.
The Edge Server, which is used for external device development is located at edge.dtect.net:5569.
Data sent over the transport layer is packed in Thrift-encoded messages as described in Message Formats. When the Server receives valid messages, it will respond with Thrift encoded messages or a basic acknowledgement (see below). However, if the message received is invalid for any reason, the Server will respond with a basic error code (see below)."
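In ZeroMQ terms, the device side of this transport is minimal. Here is a bare pyzmq sketch of the documented REQ/ROUTER pattern (pointed at a placeholder host; I have not interacted with the live endpoint):
---
import zmq

ctx = zmq.Context()
sock = ctx.socket(zmq.REQ)  # the Device side; the Edge Server binds a ROUTER socket
sock.connect("tcp://<edge-server>:5569")  # plain TCP: no TLS at the transport layer

sock.send(b"...")    # a Thrift-encoded Message would go here
reply = sock.recv()  # Thrift-encoded response, basic acknowledgement, or error code
---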
However, the messages are partially encrypted and authenticated using the following schema:

“The payload field of each base Message sent to Edge is first compressed using zlib compression and then encrypted using AES128 in CBC mode with a SHA(1) HMAC message authentication code and PKCS5 padding. The symmetric 16-byte AES128 ephemeral session key is selected randomly by the Device at the establishment of a Session. The Device sends the symmetric Session key (encrypted by the asymmetric RSA key obtained earlier via Device Provisioning) to Edge as part of the RegisterDevice Message which initially establishes a Session”
Essentially, they are using CBC with a counter in the IV, along with a timestamp field that helps mitigate replay attacks, even if the counter sequence is compromised. From what I’ve seen, the client-side implementation of this scheme isn't a major concern.
What I find more interesting is the potential for server-side effects. The ZeroMQ server, being publicly accessible, has to handle untrusted data, as the only part that is encrypted and authenticated is the payload.
-----
private async Task<byte[]> wrap<T>(T innerMessage) where T : TBase, new()
{
    generateIv(sequenceCounter);
    byte[] array = await encrypt(innerMessage);
    Message message = new Message
    {
        Id = UuidBuilder.Random(),
        Topic = buildTopic<T>(),
        Sequence = sequenceCounter,
        Origin = deviceId,
        Payload = array
    };
    logger.LogTrace("Generating message '{topic}' with sequence {sequence}", message.Topic, message.Sequence);
    // Token = first 8 bytes of the IV + 20-byte HMAC-SHA1 over the encrypted payload
    byte[] array2 = new byte[28];
    byte[] array3 = hmac.ComputeHash(array);
    Array.Copy(aes.IV, 0, array2, 0, 8);
    Array.Copy(array3, 0, array2, 8, array3.Length);
    message.Token = array2;
    return await serialize(message);
}
-----
Therefore, anyone with an internet connection could potentially abuse the implementation to launch a targeted DoS attack on a system whose availability is critical for maintaining visibility into potential threats.
Physical access
SIGMA supports static, in-vehicle and wearable/mobile detectors. There is an inherent risk in having fixed detectors in public places, although the use of cameras and/or lock alarms can serve as a deterrent.
According to public brochures, some of these static detectors appear to be designed for discreet placement behind bus signs.
For instance, a publicly available document shows a screenshot of what appears to be the SIGMA DTECT training instance used by the London Metropolitan Police. I used Google Street View to verify the locations and found that they correspond to bus stops.
Never underestimate the power of a customized reflective vest and a set of specific utility keys. I’m sure fellow security researchers can relate to this trick from physical red team operations.
Based on my analysis, I don’t believe most of these detectors are designed to withstand the physical attacks typically performed during embedded assessments. For example, one simple method would be to locate an exposed UART port, which would provide access to the underlying Linux environment.
You can then use hardcoded passwords or accounts with no password to gain access. For example, one of the SIGMA detectors that uses SSH as a management channel exposes a hardcoded root account.
This allows obtaining SIGMA credentials, which are stored on the device, thereby opening the door to impersonating detectors and/or supply chain attacks.
Assuming you’re physically tampering with the detector, based on the settings visible in the exposed DTECT Web UI source code, it’s important to consider that alarms are triggered after 2 minutes.

Conclusions
CBRN (Chemical, Biological, Radiological, and Nuclear) detection systems play a critical role in enhancing public safety. They serve as a powerful deterrent against the use of such weapons, as potential adversaries are aware that their actions can be detected and countered. Fortunately, such attacks are not part of our daily routine, and that is in part because many dedicated people, and a lot of technology, are working to ensure it stays that way.
It is important to recognize that this analysis is limited by the scope of available data and the specific context in which it was conducted. While the research might offer valuable insights for improvement, external factors and considerations (some of which may not be immediately apparent) could influence the assessment of the issues discussed. That’s why I’ve taken a cautious approach in classifying something as a ‘vulnerability’.
Hopefully, the reader has found something valuable in this text and developed an interest in cyber-physical systems and nuclear physics. Feel free to reach out to me to discuss initiatives, projects, or potential collaborations on these topics.