Busy thinking about this rather fantastic bit of ingenuity from Sweden:
People who donate initially receive a ‘thank you’ text when they give blood, but they get another message when their blood makes it into somebody else’s veins.
Swedish blood donors receive a text message when their blood is actually used. That's a masterful way to leverage several cognitive mechanisms at once to incentivize donation. We're primed for donation anyway, I would hope: doing a bit of civic good for people in dire circumstances. But the act of donating is so removed from its utility that it's hard to achieve buy-in. That is, it's hard to get people to own and engage with the process, to feel invested in the institution and its tangible results, rather than just doing it on principle.
But a text message when the blood gets used is a tiny, wonderful behavioral nudge: you're recognized not just for doing good but notified of the tangible good when it comes to fruition. Receiving recognition close in time to the actual good you did reinforces it in memory. It's the same principle that tells us discipline for kids should come near the moment of the transgression rather than at some arbitrary later time (except here we're obviously talking about positive reinforcement instead of punishment). Pairing the notification with the dopaminergic reward in your brain cements the positive feeling of giving blood and, I would guess, easily and drastically raises returning donor rates. And it encourages buy-in not just to blood donation but to a system that recognizes you like that. It builds a certain amount of commitment to, and faith in, an organization and its digital systems.
Of course, blood donation is cognitively reinforced in another way too: the system piggybacks on a slot machine-esque dopaminergic nudge we're already thoroughly primed for, the text message notification. It goes off and our brains light up.
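The mechanics of the nudge are simple enough to sketch. Below is a purely hypothetical illustration of the two-message flow described above; every name here (Donor, BloodBank, the unit IDs, the wording) is my own invention, not the actual Swedish system, whose implementation isn't public.

```python
# Hypothetical sketch of the two-message donation flow: one text at
# donation, a second only when the blood unit is actually transfused.
from dataclasses import dataclass, field

@dataclass
class Donor:
    phone: str

@dataclass
class BloodBank:
    # Maps a blood-unit ID to the donor who gave it.
    units: dict = field(default_factory=dict)
    sent: list = field(default_factory=list)

    def record_donation(self, unit_id: str, donor: Donor) -> None:
        self.units[unit_id] = donor
        self._send(donor.phone, "Thank you for donating blood today!")

    def record_transfusion(self, unit_id: str) -> None:
        # The key nudge: the second message fires only when the unit is
        # used, closing the loop between the act and its tangible effect.
        donor = self.units.pop(unit_id, None)
        if donor:
            self._send(donor.phone,
                       "Your blood was just used to help a patient.")

    def _send(self, phone: str, message: str) -> None:
        # Stand-in for a real SMS gateway call.
        self.sent.append((phone, message))

bank = BloodBank()
bank.record_donation("unit-42", Donor(phone="+46700000000"))
bank.record_transfusion("unit-42")
print(len(bank.sent))  # 2: one message at donation, one at use
```

The design point is the event trigger: the reward message is tied to the downstream event (transfusion), not to the donor's own action, which is what makes the feedback feel like recognition rather than a receipt.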
The entire experience incentivizes people to donate blood on a number of levels: neurochemically, cognitively and institutionally.
The federal government proclaims security principles often and loudly (though not consistently, as its push to weaken encryption systems shows). But the feds don't incentivize stakeholders in their bureaucratic or IT systems, and without that buy-in you get things like the OPM hack. Look at the structure and operations of government agencies and their IT, and you find a series of disincentives that leave the principal parties avoiding any kind of buy-in.
-Government IT security gets irregular, initiative-based funding for anything beyond the bare minimum. Security isn't funded as a standing necessity, and the irregularity means stakeholders wait for the next initiative rather than spend their own precious funds.
-Plausible deniability disincentivizes stakeholders from hiring highly competent, motivated security personnel, lest they be confronted by the full scope of their security problems and forced to fix them out of their own departmental budget.
-A seeming lack of consequences for government compromises versus, say, the private sector (where not just IT heads but often CEOs are washed out with the post-breach bathwater).
-Buy-in is about as far from likely as possible for stakeholders at multiple levels. Disenfranchisement runs rampant, and in IT especially, the constant slapping on of band-aid solutions rather than systemic reform and rehabilitation means it's no longer about safeguarding the institution you're part of but just about holding a job.
The question becomes: how do you incentivize system administrators and higher-level stakeholders to do their fucking jobs? The answer could be that you turn it into an incentivized civic duty. Which is damn hard with the burnouts and the disenfranchised, and damn hard in an environment that caters to the lowest bidder and motivates contractors to do the most mediocre job possible.
The whole situation is further complicated by the government's hardening stance on security research. Financing bug bounties and encouraging independent security researchers is crucial. The more eyes on the system that know they'll be appreciated and compensated for finding and disclosing holes, the more secure the system. It's not hard to imagine a genuinely productive partnership between the public sector and private security experts, but that's impossible in the current climate. Mostly thanks to the government.
Achieving institutional buy-in for government technologist positions will be damn hard and hideously expensive. But compared to the prospect of our entire federal background investigation and SF-86 application system getting lifted, it's peanuts.