Here’s an interesting new idea from Microsoft – a radical solution to the problem of buggy code.
The new paper, co-authored by Redmondian Andrew Begel and a group of University of Zurich boffins, suggests managers monitor programmers via EEG, EDA and eye-tracking sensors. The sensors would alert managers when an individual is struggling and therefore likely to introduce flawed code.
Now, it sounds like a pretty good idea in theory, and in practice has apparently performed pretty well. But one security expert I spoke to had some major misgivings.
Imperva co-founder and CTO Amichai Shulman argued that it might stray outside the boundaries of what could be construed as “reasonable”.
“I think constantly monitoring the psychological status and the physical conditions of programmers seems tremendously intrusive and probably strays way off from what I consider to be ‘reasonable means’,” he told me.
“However, I think that even if we review this in the cold eyes of a software professional there are some doubts about the usefulness of this method in general and its application to security vulnerabilities in particular.”
The first doubt he had relates to the tremendous commercial pressure coders are under to release “more functionality in less time”.
“On their way to achieving higher rates of LOC/sec, programmers as well as their employers are willing to sacrifice other attributes of the code such as efficiency, readability and also correctness – assuming that some of these will be corrected later during testing cycles and some will not be critical enough to be ever fixed,” he explained.
“If we introduce a system that constantly holds back on programmers because they are stressed for some reason we will effectively introduce unbearable delays into the project which will of course put more pressure on those who perform the job when schedule becomes tight.”
This is not to mention the fact that programmers should, at times, be “over”-challenged to keep them sharp and happy in their roles.
“Additionally, there’s a big question of whether a system like that can make a distinction between a critical mistake and a minor one, which again impacts its ability to have a positive effect on the software development process in general,” said Shulman.
Then, of course, there’s the issue of what kinds of flaws the system will root out.
“I think that most security related mistakes are introduced inadvertently as a consequence of the programmer not having the faintest idea regarding the potential implication of some implementation decision,” he argued. “This is the case with SQL injection, XSS, RFI and many more vulnerability types.”
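Shulman’s point is worth unpacking: vulnerabilities like SQL injection rarely come from a stressed programmer fumbling, but from an implementation decision that looks perfectly reasonable at the time. As a minimal sketch (the table, data and function names here are hypothetical, using Python’s standard sqlite3 module for illustration), compare a query built by string formatting with a parameterized one:

```python
import sqlite3

# Hypothetical in-memory database purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("alice", 0), ("bob", 1)])

def find_user_vulnerable(name):
    # Innocent-looking implementation decision: build the SQL string by
    # formatting user input directly into it. Works fine in every test
    # with ordinary names, so nothing flags it as a mistake.
    query = "SELECT name FROM users WHERE name = '%s'" % name
    return conn.execute(query).fetchall()

def find_user_safe(name):
    # Parameterized query: the driver treats the input strictly as a
    # value, never as SQL, so crafted input cannot change the query.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)).fetchall()

# Ordinary input behaves identically in both versions.
print(find_user_vulnerable("alice"))        # [('alice',)]
# A crafted input turns the vulnerable version into "dump every row".
print(find_user_vulnerable("' OR '1'='1"))  # [('alice',), ('bob',)]
print(find_user_safe("' OR '1'='1"))        # [] -- treated as a literal name
```

Crucially, no biometric sensor would have caught this: the programmer who wrote the first version wasn’t stressed or struggling, they simply didn’t know the implication of the choice.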
So, bottom line: nice idea Microsoft, but it’s probably not going to solve the problem of poor coding anytime soon. Until something genuinely revolutionary comes along we’ll probably have to stick to the usual suspects to reduce risk: security tools, patching, better QA and testing.