The recent report of a vulnerability in Microsoft Teams, coming shortly after the issue with Zoom, illustrates that all software has vulnerabilities. Fortunately, Microsoft fixed the problem as soon as they became aware of it, and I’m sure other companies will respond to reports as Zoom has done.
This non-technical summary of the “evil GIF” problem (article) hints at the complexity of Microsoft’s security protections; I imagine an in-depth technical description would show just how complicated those anti-hacking measures are under the covers.
When the protection mechanisms are, themselves, highly complicated, it opens many doors for hackers. Every complication is a crack in the wall. When a message flows from one server to another to another to another, all it takes is one configuration setting on one instance, or one combination of security settings that behaves in an unpredicted way under a special set of conditions, to open a door wide. When a multifaceted online service is assembled from different previously-separate services that have been acquired by a new owner, the complications multiply.
The main concern for companies that provide such services is to keep the service operational and available to users. Security, while important, is a secondary concern.
The extreme workloads on services like Zoom and WebEx due to the work-from-home situation are stressing those systems in ways that have never been tested before. No one knows what security vulnerabilities may be found.
There are people in the world whose full-time job is to find ways to break into systems. They go to offices every day just as ethical workers do, and work at desks with photos of their families on display, instead of languishing in prison cells where they belong. It’s therefore far more likely that unknown security vulnerabilities will be discovered by bad actors than by those trying to protect systems and users, and that the manner of discovery will harm someone in some way.
Companies and governments try to cope with the problem, but their full-time job is something other than security, so they will never devote the same amount of time, effort, and money to defending systems as the “bad guys” do to break into them. The “good guys” are always playing catch-up. A system that was vulnerable last month will be safer this month. A system that is relatively safe this month will be exploited next month. When users jump from one service to another based on the latest scary article about a security exploit, they’re probably not gaining anything.
To add insult to injury, many companies have a “bug bounty” program designed to maintain secrecy about any issues that are reported. They worry that if an incident occurs, the public will lose trust in them. Incidents always occur, so their strategy is to keep the information away from the public.
Under this sort of system, those who report bugs must sign a nondisclosure agreement in order to receive their reward. Some companies are more transparent than others: they publish the details and still pay the bounty. There appears to be a trend in the right direction, but for now it’s likely that many companies know about security vulnerabilities they aren’t talking about. As long as no one makes the issues public, customers will assume nothing is wrong.
Zoom was playing this game until recently. Many people are upset with them for that reason, even if the recent vulnerabilities have been fixed. I’ve heard comments like, “I’ll never use Zoom again because they were dishonest.” “But they’ve changed the way they operate now.” “I don’t care! I want to punish them for the past!”
The reaction is understandable on a certain level. Yet I wonder if customers who move away from Zoom are, in effect, abandoning a company that has learned a lesson about transparency and proactive disclosure in favor of a company that may not yet have learned the same lesson. When you hear or read that a company has never had a security-related incident, understand that it means they have never been forced to admit to a security-related incident.
For those reasons, I think resiliency should be a higher-priority goal than prevention. Companies can’t afford to be hacked, and they also can’t afford to replace all their software and reconfigure all their systems over and over again in a vain attempt to avoid problems.
Rule of thumb: Anything that is a convenience is also a threat vector. When you set up a local VM to fence in your videoconferencing client, don’t say “yes” when the virtualization software asks you if you want to enable access to the host system’s home folder. When a website asks for access to your location so they can serve you better, decline; they’re only going to serve you to hackers. When your browser offers to save your passwords to save time when logging in to various sites, don’t.
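In VirtualBox, for instance, that host-folder convenience can be audited and removed from the command line. A sketch, assuming a hypothetical VM named "meetings-vm" and a share named "home" (both names are my own examples, not anything from a real setup):

```shell
# List the shared folders currently configured for the VM
VBoxManage showvminfo "meetings-vm" | grep -i "shared"

# Remove a previously enabled home-folder share
# ("home" is whatever name the share was created under)
VBoxManage sharedfolder remove "meetings-vm" --name "home"
```

An empty shared-folder list means the VM can only reach what you deliberately copy into it.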
There’s no sense in asking for trouble. Trouble enough will find you regardless.
For companies and for individuals, the goal is to minimize the cost, duration, data loss, and operational impact of incidents. We will never be able to prevent all incidents. It’s sensible to take the usual security precautions, short of becoming a security expert. It’s also sensible to practice due diligence to protect sensitive customer information.
Beyond that, it seems more sensible to me to ensure we have two or more ways to recover our environment and data and get back to work than to try to prevent every breach. Next month’s exploit hasn’t been invented yet. The full-time hackers will create it before you and I have time to guess what it might be and put countermeasures into play.
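Having “two or more ways to recover” is something you can verify rather than assume. A minimal sketch (the manifest format and the function names are my own assumptions, not a prescription) that checksums backed-up files against a recorded manifest, so you learn a backup is unusable before the day you need it:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def write_manifest(backup_dir: Path, manifest: Path) -> None:
    """Record a checksum for every file in the backup tree."""
    lines = []
    for p in sorted(backup_dir.rglob("*")):
        if p.is_file():
            lines.append(f"{sha256_of(p)}  {p.relative_to(backup_dir)}")
    manifest.write_text("\n".join(lines) + "\n")

def verify_backup(backup_dir: Path, manifest: Path) -> list[str]:
    """Return a list of problems: files missing from or corrupted in the backup."""
    problems = []
    for line in manifest.read_text().splitlines():
        digest, _, rel = line.partition("  ")
        target = backup_dir / rel
        if not target.is_file():
            problems.append(f"missing: {rel}")
        elif sha256_of(target) != digest:
            problems.append(f"corrupted: {rel}")
    return problems
```

Run a check like this on a schedule against each backup location; an empty result means that copy is still a viable recovery path, and two independent copies that both verify are the “two or more ways” in practice.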