This is an old-school approach. AI has its uses, but in earlier times we used non-digital measures to good effect. We need deterrent practices that make clear that people who violate the rules will be caught.

First, there should be an active supervisory presence and oversight. Supervisors should make sure their subordinates know they are watching. Spot checks keep people honest. Periodic searches of all materials leaving, and especially entering, secure areas are a deterrent. Virtually all spies like Hanssen, Ames, and Walker carried huge volumes of classified material out of their workplace because no one ever checked.

Keep personal cell phones and cameras out of sensitive areas. Keep records of online searches and periodically review them. We occasionally used measures like a two-person rule to ensure people were not spending time alone to get into mischief. We might develop alarms to cue supervisors when an operator searches for materials not needed at his site.

Finally, there needs to be more intensive training for operators and supervisors. For a long time, the Army worked to get essential information to the people who needed it, but those of us who worked that problem never envisioned that Manning, working in an Army S2 shop in Iraq, would have almost total access to State Department reports, or that Jack Teixeira would be able to access highly classified strategic intelligence documents from a USAF Reserve base.
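The alarm idea above, cueing a supervisor when an operator searches for material his site doesn't need, can be sketched very simply: compare each search against the topics a site is approved for and flag anything outside that scope. This is only an illustration; the site-to-topics mapping, function name, and topic strings are all hypothetical.

```python
# Hypothetical sketch: flag searches that fall outside a site's
# need-to-know topic list, so a supervisor can be cued for review.
SITE_TOPICS = {
    "iraq_s2_shop": {"iraq", "insurgent networks", "local logistics"},
    "af_reserve_base": {"air refueling", "base operations"},
}

def flag_out_of_scope(site: str, search_terms: list[str]) -> list[str]:
    """Return the search terms not covered by the site's approved topics."""
    allowed = SITE_TOPICS.get(site, set())
    return [term for term in search_terms if term not in allowed]

# Any non-empty result would trigger the supervisor alarm.
alerts = flag_out_of_scope("iraq_s2_shop", ["iraq", "state department cables"])
print(alerts)  # -> ['state department cables']
```

A real system would match on classification markings and compartment tags rather than raw strings, but the deterrent logic is the same: searches outside one's lane get noticed.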

Apr 16 · Liked by Jeff Stein, Dr. Emma L Briant

Thanks for the detailed update on the leak and AI. From my experience, some of the AI money should be spent on expanding the number of polygraph operators. Knowing that in a few months you will have a repeat appointment for a follow-on poly makes a person think twice before doing something so stupid as leaking classified information.

Apr 16 · Liked by Jeff Stein, Dr. Emma L Briant

Trust is the issue. Even people once trusted with access can turn disloyal, and sensitive info is then leaked.

Granting loyalty and trust to those deemed loyal, only for them to turn disloyal and leak information they had promised to protect, is the unsolvable issue.

AI programs that psychologize users: I'd expect those to be in the news as the next problem-solving angle on this overall issue.

Apr 16 · Liked by Jeff Stein, Dr. Emma L Briant

I think it is a good thing that Dr. Briant has focused on the influence of culture on behavior, but there is a larger dimension of this that makes the problem even more intractable, or at least so it seems to me. As much as there is worry within the broader culture over Big Brother, there is also a great enthusiasm for surveillance that targets "them." Everyone wants to surveil those they mistrust, which explains, for instance, the antipathy on the left toward police surveillance paired with their embrace of body cameras. No one likes a sound-activated camera in a distressed neighborhood until it catches five Memphis officers beating Tyre Nichols to death. So dismantling this culture must first reckon with the misguided belief that there is a "them" that can be effectively controlled by constant surveillance.


I have lived through a good part of the evolution of IT security, from the 1970s through my retirement in 2015. This omits the critical last few years, but perhaps my observations might prove useful.

A large part of the conundrum that made security so difficult was throughput. When one needed the assistance of a DBA or SysAdmin, it was often impossible to get. The corporation could not see the advantage of having a senior tech person idle, so staffing levels were adjusted downward until staff utilization approached 100 percent. Issues then had to be queued, causing projects and personnel needing assistance to grind to a halt. This tended to encourage granting inappropriate access to application personnel in order to keep the development bus moving.
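The throughput point has a well-known mathematical basis: in even the simplest queueing model (the textbook M/M/1 queue), the average time a request spends waiting blows up as utilization approaches 100 percent. A toy illustration, with made-up arrival and service rates:

```python
# In an M/M/1 queue, average time in system = 1 / (service_rate - arrival_rate).
# Utilization is arrival_rate / service_rate; as it nears 1, wait times explode,
# which is why admin staff run at ~100% utilization stalls everyone's projects.
def avg_time_in_system(arrival_rate: float, service_rate: float) -> float:
    if arrival_rate >= service_rate:
        raise ValueError("queue is unstable at or above 100% utilization")
    return 1.0 / (service_rate - arrival_rate)

# One admin who can handle 10 requests per day:
for load in (5, 9, 9.9):
    wait = avg_time_in_system(load, 10)
    print(f"{load / 10:.0%} utilized -> {wait:.1f} days in queue")
# 50% utilized -> 0.2 days; 90% -> 1.0 day; 99% -> 10.0 days
```

The nonlinearity is the point: trimming "idle" admin capacity from 50 percent utilization to 99 percent multiplies everyone else's wait fifty-fold, and handing out inappropriate access starts to look like the only way to keep moving.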

A secondary problem was the offloading of admin tasks to unqualified management personnel in order to deal with the inefficiency described above. Unqualified fingers poking at a complex technical issue often caused more stagnation than they cured.

The idea that administration access should never be granted to application level personnel is valid for system security, but hard to achieve in practice.
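The separation principle described above amounts to a simple role-based check: administrative privileges belong only to administrative roles, never to application-level personnel. A minimal sketch (the role names and privilege strings are hypothetical):

```python
# Hypothetical role-based separation: application roles never receive
# administrative privileges, even under deadline pressure.
ROLE_PRIVILEGES = {
    "dba": {"read", "write", "grant_access", "alter_schema"},
    "app_developer": {"read", "write"},
}

def authorize(role: str, action: str) -> bool:
    """Allow an action only if the role explicitly holds that privilege."""
    return action in ROLE_PRIVILEGES.get(role, set())

assert authorize("dba", "grant_access")
assert not authorize("app_developer", "grant_access")  # denied by design
```

The hard part, as the comment notes, isn't expressing the rule; it's resisting the organizational pressure to add `grant_access` to the developer's set when the DBA queue is three weeks deep.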


Excellent piece. The problem is not fixable. Restricting access to TS interferes with the job at hand. Tech support needs access; it's that simple. Most tech support is low-level grunt work and thus is given to people like this suspect. It can't be any other way. Also, total information awareness still doesn't put investigators inside the head of any individual, and it is dangerous to our democracy.


I fully concur with Dr. Briant’s impressive article.

Time and again the business culture of Washington has put forward technical solutions that claim great future success in detecting insider threats by proposing enormous and expensive programs, such as AI, ML (machine learning), etc.

No surprise, that’s how they grow their business.

I’ve always known that while technical approaches might catch a small additional percentage of insiders, in a larger sense they fail. The Law of Diminishing Returns at work. That’s because human beings can be so damn ornery. They have all day to figure out how to defeat whatever detection system you can come up with, however brilliant. They’re smarter than you are.

I’m the psychiatrist who worked with four caught spies, in jail, each weekly for up to two hours, for an entire year, including the notorious Robert Hanssen. Based on my unique experiences, I wrote my three NOIR white papers on the psychology of insider spies.

See: NOIR4USA.org.

I could have stopped after writing my first two papers. But I happened to attend an industry conference on counterintelligence that addressed the problem of insider threat and that forced my hand.

At first, I was glowing because conference panelists quoted some of my work prominently. But as I listened further, I realized that referencing my work was merely to highlight the insider threat topic in service to what they really wanted: selling AI and ML technical solutions, as described above. Please understand that the main thrust of my work was a focus on the human psychology that’s at the root of someone crossing the line. They were pushing the exact opposite of what I actually recommended!

I was so annoyed that I felt compelled to set the record straight and write my third white paper on Prevention. The full title of my third white paper:

Prevention: The Missing Link for Managing Insider Threat.

Its three subtitles:

Counterintelligence is the Stepchild of the Intelligence Community;

Prevention is the Stepchild of Counterintelligence;

Detection Gets All the Love.

The new leaker getting all the attention these days exemplifies how current security practices miss the mark. He is one more outlier from current assumptions about leaker and spy motivation, and about the unexpected angles of attack they can take to subvert the system as currently structured.

I can only hope that the IC wakes up one day and opens itself up to new ideas that are based on psychology, not just technical solutions. We Americans are such suckers for technology. (I’m not immune). I believe my NOIR white papers are worth reading to begin that awakening.

David Charney, MD, Psychiatrist
