In 2013 I published a paper titled Beam Viewer Controls at Jefferson Lab through the International Conference on Accelerator and Large Experimental Physics Control Systems (ICALEPCS), held that year in San Francisco. The paper details a complete rewrite of the control system software responsible for managing beam viewer devices on the 12 GeV CEBAF particle accelerator at Jefferson Lab in Newport News, Virginia.
You can read the paper on ResearchGate: Beam Viewer Controls at Jefferson Lab. There is something genuinely satisfying — and very cool — about searching for your own name on a research database and finding a published paper staring back at you. For an engineer who spent years in the trenches writing C++ and debugging hardware signals, seeing that work preserved and indexed alongside physics research from around the world is a quiet kind of thrill.
What Is Jefferson Lab?
Jefferson Lab (officially the Thomas Jefferson National Accelerator Facility) is a U.S. Department of Energy facility dedicated to nuclear physics research. Its crown jewel is CEBAF — the Continuous Electron Beam Accelerator Facility — a large electron accelerator used to probe the internal structure of protons and neutrons. At the time of this work, the lab was in the middle of a major upgrade from 6 GeV to 12 GeV beam energy, roughly doubling the machine’s experimental reach.
Running an accelerator of this scale requires not just physics expertise but an enormous amount of software. Control systems touch everything: magnets, power supplies, diagnostics, safety interlocks, and the devices I worked on — the beam viewers.
The Problem
Beam viewers are physical devices inserted into the beamline to give operators and physicists a picture of the electron beam’s shape and position. Think of them as tiny fluorescent screens that you slide into the path of the beam to see a glowing spot. Over 140 of these devices were installed on the 12 GeV machine, along with Faraday cups, insertable dumps, and other devices that share the same basic mechanics: a motor drives them in or out, limit switches confirm their position, and cameras route video back to the control room. Complicating matters, some of the devices were thin and fragile enough that the beam at full power would burn right through. Failure to check beam power before moving a device could result in weeks of shutdown to clean the beamline and re-establish its vacuum.
The legacy software managing all of these devices dated back to the 1980s. It was a sprawling system built in State Notation Language running on 20 Input/Output Controllers (IOCs) spread across the accelerator tunnel — roughly 9,000 lines of SNL code, 2,000 lines of C, and hundreds of configuration files. When a device failed, diagnosing it meant hunting through a maze of EPICS records. Adding a new device meant editing configuration files in dozens of places. The system was difficult to maintain and nearly impossible to extend cleanly.
The New Design
My replacement system, the Insertables System, took a data-driven approach. All device configuration — type, position, hardware channel assignments, beam vulnerability — was consolidated into a single structured XML file. A C++ program called the Insertables Manager read that file at startup, constructed a model of the entire system, and ran a background coordination loop that responded to hardware events through EPICS Channel Access.
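For illustration, a single device entry in such a file might have looked something like this. The element and attribute names below are hypothetical, since the paper does not reproduce the actual schema:

```xml
<!-- Hypothetical entry: one beam viewer, with its type, beamline
     position, hardware channel bindings, and a flag marking it as
     vulnerable to burn-through at full beam power. -->
<device name="ITV1L02" type="viewer">
  <position segment="1L02" meters="47.3"/>
  <channels motor="MOTOR:1L02:VIEW" inLimit="LIM:1L02:IN" outLimit="LIM:1L02:OUT"/>
  <beamVulnerable maxCurrentUA="5"/>
</device>
```

The point of the format matters more than the particulars: with everything about a device declared in one place, the Insertables Manager could build its entire runtime model from a single parse at startup.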
The key insight was organizing devices into C++ sets reflecting their current state: inserted, retracted, traveling, lost, hardware-protected. When a beam viewer was requested for insertion, the Coordinator checked the inserted set for any inline devices that needed to clear first, sent retraction signals to all of them, and only then allowed the viewer to move. When a device went lost — no limit switch response within a timeout — the system automatically limited beam current until an engineer resolved the hardware issue, and the status string told them exactly which device had failed and why.
The EPICS database was reduced from hundreds of configuration files to a handful of macro-driven templates. Adding a new device now meant adding a few lines of XML and loading one additional template file at IOC boot time.
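A macro-driven EPICS template of the kind described might look roughly like the fragment below. The record and macro names are hypothetical, not the actual Jefferson Lab database, but the mechanism — one template file expanded per device via macro substitution — is standard EPICS practice:

```
# Hypothetical insertable-device template. Loaded once per device
# with $(DEV) and $(CHAN) substituted, replacing a hand-written set
# of records for each device.
record(bo, "$(DEV):InsertCmd") {
    field(DESC, "Request insertion")
    field(OUT,  "$(CHAN)")
}
record(bi, "$(DEV):InLimit") {
    field(DESC, "Inserted limit switch")
    field(INP,  "$(CHAN)")
}
```

At IOC boot, each new device then becomes one `dbLoadRecords` call (or one line in a substitutions file) supplying its own macro values, which is what shrinks "add a device" to a few lines of XML plus one template load.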
Debugging by Ear
Here is something that did not make it into the paper.
During development and testing, I would sometimes sit near the beamline with a laptop and test the software in place. When a viewer received an insertion command, the actuator would fire — and if you were close enough, you could hear it: a soft, definitive plunk as the device settled into position. I would watch my screen for the limit switch signal to confirm the move had registered in software, while listening for that sound to confirm the hardware had actually responded. If the plunk came without the software registering it, I had a wiring or configuration problem. If the software showed the move but there was no plunk, something was wrong on the hardware side.
It is a small thing, but it captures something I love about working at the intersection of software and physical systems. The code was real in a way that most software is not — you could hear it working.
A Disappointment: The Government Shutdown
ICALEPCS 2013 was held in San Francisco that October. I had written the paper, prepared the poster, and was looking forward to presenting. Then the federal government shut down on October 1, 2013, the result of a budget impasse in Congress. Jefferson Lab, as a DOE facility, was directly affected. Travel was off the table — as were a lot of things that month.
I never made it to San Francisco. Someone else presented the poster on my behalf. It was a real disappointment. I had never presented at an international conference before, and the combination of the setting, the subject matter, and the community of accelerator physicists and control systems engineers made it exactly the kind of event I had worked toward. Missing it due to a political standoff rather than anything within my control made it sting a bit more.
Looking Back
The work itself held up well. The Insertables System replaced a brittle, decades-old control system with something maintainable, observable, and extensible. The data-driven design meant that when the 12 GeV upgrade added new devices, the software could absorb them with minimal changes. The modular C++ design made debugging tractable. By the time I left Jefferson Lab, the system had been running reliably for years.
Getting a paper published in the ICALEPCS proceedings is not something that falls in the path of most software engineers. I am glad I did it, even if San Francisco remained out of reach that year. And yes — seeing it on ResearchGate still makes me smile.