Are we going to work in a paper jail?
Any major university IT infrastructure is huge and heterogeneous. It is used by many people, a large proportion of whom are experimenters and explorers who enjoy a challenge, rather than office robots. Most users are busy, focused on research and study, and hate additional (and especially sudden) hassle. This is why considering the usability cost is absolutely critical for any IT security strategy.
Creating a walled "trusted" area behind a firewall (the perimeter model) is an outdated approach to security in the age of universal zero-trust deployment. Instead of following it, a more sensible strategy is to start implementing components of the zero-trust model, including score-based trust and wide use of personal hardware identity tokens.
The main focus of security should shift from technology components alone to the end users, creating incentives to use more secure technology rather than adding hassle.
The great firewall
Increasingly restrictive IT policies have been rolled out very rapidly at the University of Bergen since October this year. While the aim of "increasing security" is laudable, I think the planning and implementation of these policies have several flaws that may compromise that declared aim. The biggest problem is that UiB IT is huge and heterogeneous: there is a variety of services with different levels of security risk, and many users with diverse needs, use cases, environments, competences, personal backgrounds and personalities. This requires a more sensible, flexible and inclusive approach. Without one, rigid policies will not make the IT environment significantly safer. Instead, they may hamper normal work for some users and, in the long run, compromise both security and privacy, contrary to the declared aim.
Security, including computer security, is not a fixed state; it is a continuous process. Nor is security limited to IT technology: technology alone cannot bring security. Security is primarily a human rather than a technical problem. Indeed, the most dangerous security breaches did not target encryption algorithms, and many only partly involved the exploitation of software and hardware vulnerabilities. They typically exploit human factors: social engineering, trust exploitation, human mistakes and so on. Likewise, successfully tracking and catching cyber criminals rarely hinges on technology alone, but usually depends on exploiting human errors, negligence, laziness and similar factors. This is why the current primary focus on technological restriction of the IT environment, aimed merely at isolating it from outside networks, is neither sufficient nor efficient. A more balanced, flexible and holistic approach is needed.
Security can only work in balance with usability. Moreover, there is often a trade-off: technological security restrictions often worsen usability. A completely "sealed" environment would simply be too restricted to be usable. Usability is indeed a primary factor: research shows that many security problems, and much of users' hesitance or unwillingness to adopt (more) secure tools, are caused by imperfect usability. Furthermore, within a hugely heterogeneous environment there is no single optimal balance between security and usability. An important consequence is that a flexible and inclusive approach to security, offering different balances with usability, is essential.
The technical part of security should start from, and primarily respond to, specific threat model(s), not theoretical or vaguely possible risks. And the threat model(s) should be grounded in real-life statistics: how many breach attempts usually occur, against which services, from which IP addresses, and so on. It does not make sense to install solid steel screens on all the windows of our department building to make it "more secure" against every conceivable breach; even if crowds of hungry zombies were walking outside, it would be enough to protect the first floor.
The blanket, unconditional restriction of the UiB IT network environment now being implemented does not seem to respond to any specific threat model, or to the variety of users, needs and sub-environments. It looks like a desperate attempt to seal everything off, in the hope that a jailed environment, isolated from the outside, will be more secure. This is a wrong assumption.
Some specific problems
Multi-factor authentication via TOTP: Not a panacea.
KI 0780 introduced a multi-factor authentication policy. Multi-factor authentication is generally a crucial component of improved security, if implemented sensibly. However, not all implementations automatically improve security or strike a sufficient balance between security and usability. What is called "multi-factor authentication" may not even be truly multi-factor. By definition, multi-factor authentication combines several independent factors: typically something you know (a password), something you own (e.g. a mobile device or SIM), and something you are (e.g. a fingerprint). If the password is entered from a password manager stored on the mobile device and the "multi-factor" SMS arrives on the same device (or the password is entered and the SMS read on the same computer that links to the smartphone, as is now the norm within the Apple ecosystem), the whole idea of two factors is defeated: the smartphone becomes the single authentication device. At best this can be called "two-step authentication," a weaker mechanism. SMS (and anything based on a phone line or phone number) is actually one of the poorest authentication channels, due to long-known and essentially unsolvable vulnerabilities in GSM, SS7 and related protocols. SMS can be hijacked by malicious smartphone apps (the Google Play store does not come close to 100% safety; there are occasional scandals involving malware in apps with very substantial audiences) or even by basic GSM feature phones (there are reports of quite a few Chinese-made GSM button phones shipping with factory-installed malware). Worse still, some modern and widespread multi-factor mechanisms, such as push-based popups, are also easily exploited (and, worse, they encourage the bad habit of clicking "approve" without thinking). If authentication is done on a web page, it is usual to save an authentication cookie to avoid repeated two-factor prompts.
However, cookies are not necessarily secure: long-lived cookies can be hijacked by malware, there is the well-known mechanism of CSRF attacks, and there is a significant privacy drawback (e.g. tracking). The current industry trend is to move away from cookies in mainstream browsers (Google Chrome, for example, has announced plans to phase out third-party cookies). A sensible user policy is therefore to reduce the lifetime of any cookie. However, this makes the "two-step" authentication as currently implemented at UiB a hassle: the user has to go through the SMS code process at nearly every login, even from the same IP address and the same device.
The ssh access to the login.uib.no server has apparently had the best-practice secure mechanism of ssh-key authentication disabled (incidentally, an ssh key combined with a passphrase is actually two-factor authentication in itself!) in favour of the potentially weak password-based mechanism with an SMS code. At the time of writing, there seems to be no one-time-code mechanism other than SMS!
A better mechanism is a time-based one-time password (TOTP) authenticator application on the mobile phone. This is in fact recommended on the Microsoft and UiB web pages as a more secure alternative (via the Microsoft Authenticator app). While TOTP is better than SMS, it is far from perfect: it remains vulnerable to phishing and man-in-the-middle (MITM) attacks, and the secret seed must be kept both in the authenticator application and on the server to make synchronised generation of TOTPs possible.
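To make the last point concrete, here is a minimal standard-library sketch of the TOTP mechanism (RFC 6238): both sides derive the same code from a shared secret seed and the current clock, which is exactly why the seed must live on the server as well as in the app. This is an illustration of the general algorithm, not the Microsoft or UiB implementation.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, t=None, interval=30, digits=6):
    """Generate an RFC 6238 time-based one-time password (SHA-1 variant)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time() if t is None else t) // interval
    msg = struct.pack(">Q", counter)                   # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                         # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" (base32 below);
# at Unix time 59 the 6-digit SHA-1 TOTP is "287082".
SECRET = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"
print(totp(SECRET, t=59))  # → 287082
```

Note that nothing in the code authenticates the *server*: anyone who phishes the current six digits can replay them within the time window, which is the structural weakness that hardware tokens described below are designed to remove.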
Personal hardware tokens
There is a much better and stronger two-factor mechanism: U2F and FIDO2/WebAuthn, which use a hardware security device holding the private key. The security token, in the form of a small USB or NFC key, can both authenticate to the server and authenticate the server itself with strong asymmetric cryptography, making phishing and many other attacks virtually impossible. Many such devices also implement biometric (e.g. fingerprint) identification in a privacy-respecting way (the biometric data is never sent from the user's device). This is now a mature technology: it is implemented in all major web browsers and can be used for ssh key-based authentication, GPG-enabled email and more.
The best-known hardware token is probably the YubiKey, and there are several others on the market (e.g. Google Titan, FEITIAN, Token2, Thetis). They are not very cheap, but not prohibitively expensive either.
A VPN required for everything, even the most essential everyday services
The UiB IT services previously offered several open, industry-standard VPN mechanisms (IPsec, OpenVPN), so different users could easily find a solution that worked for them individually. Now there is a single closed, proprietary mechanism: Cisco AnyConnect, comprising both a proprietary (SSL-based) protocol and a proprietary client. This may work for many users, but not necessarily for everyone (unlike open solutions, it may not be available on some computing platforms; some open-source enthusiasts may find restriction to a single proprietary tool unethical; and so on). There are reports of unreliable connections with Cisco AnyConnect, and that OpenVPN was previously more stable for some users. This is indeed plausible if AnyConnect is used in restrictive environments with deep packet inspection (DPI) that block connections to certain ports, block UDP traffic even on port 443, or otherwise censor VPNs (some public WiFi networks have such limitations). In contrast, tunnels can be configured to mimic normal SSL web traffic (tools such as shadowsocks are built for exactly this) and to work even behind the Great Firewall of China. There would be a clear benefit, at not prohibitively high cost, in providing at least limited support for such a mechanism for certain users (e.g. those with special needs, or during travel); it might even be provided only on special, substantiated request. Also, the reliability statistics used internally by the IT department may be biased, since not all users report minor and transient VPN issues. So there is a case for deploying and supporting alternative VPN solutions, perhaps on a smaller scale.
It sounds quite reasonable that providing and supporting a wider choice of VPN solutions for a minority of users would not be economically feasible. But that argument fails once practically all services become available only from within the UiB internal network jail. In that case there should be more flexibility and inclusion: several ways for a variety of users, in different environments, to get into the jailed environment comfortably. Mandating a single restricted VPN just to read email from home or from an airport, for example, is too unbalanced a limitation. A better alternative, of course, is to relax the policy and move at least the most essential but inherently secure services out of the jail.
Is the universal jail really essential for everything?
One issue with unconditionally moving all UiB IT services into a jailed environment is that it does not reflect a sufficient balance between security and usability. It is of course good to keep potentially less secure services (e.g. RDP) jailed. But are the real threats substantial enough to hide absolutely everything in such a jail?
Is there any real-life statistical or other evidence that accessing the university email system from an IMAP client with normal SSL/TLS protection is dangerous? In such a case the user does not need to type the UiB password at login (it is saved in the software, often encrypted on the device), so the phishing risk is near zero. The authenticity of the IMAP server certificate is checked through the standard SSL mechanism. So is there any real security advantage in moving such an essential everyday tool as email into the jail, or does this just add another hurdle?
Another example is connecting to the UiB login.uib.no ssh server. Many (presumably less advanced) users use ssh with their default password; for them, "two-factor" authentication is of course a serious security improvement, even if it is in fact the weakened two-step mode. However, other users can configure ssh-key authentication, which is a much more secure mechanism. Will manual password entry with two-factor authentication really improve security in that case? Will it provide anything beyond a negligible effect if the user has already authenticated with SMS on the same device, or on a different device from the same IP address, shortly before? Is there any improvement in security that justifies such a degradation of usability?
The question is this: is the same level of restriction and jailing really essential for all services, frequently and rarely used, potentially less secure and highly secure, easy and difficult to exploit, those with documented attacks and those of little interest to intruders? Does it not simply impose usability costs unbalanced by any security improvement?
Human ingenuity: Is the jail actually made of paper?
It is clear that restricting absolutely everything equally and unconditionally, especially without considering usability costs, will not automatically increase security. The situation can easily become worse: lower security as well as compromised privacy.
For example, to avoid the nuisance, users may switch to third-party commercial providers, increasingly using private gmail.com accounts, Dropbox and the like. They may turn to smaller, more obscure online tools and applications (file-sharing sites, communication tools, some advertised as encrypted) with uncontrollable and unknown security. Some of these are owned and run by communities and volunteers; some could be compromised or deliberately designed to gather data, track users and spy.
Some of the more qualified "insider" users might simply hack the system to get nuisance-free access to the UiB jailed environment from outside. This is actually not a hard problem. One possible solution is a reverse ssh proxy: it does not even require administrative rights and can be set up by a motivated average-level computer user after 20 minutes of reading the ssh manual. More advanced users can create stable backdoors, using such things as proxy jumps and port forwarding, that survive reboots, logouts and so on. It is also easy to add various layers of plausible deniability.
There are many more tools, ways and possibilities for implanting and efficiently hiding a backdoor into the UiB jailed environment. All that is required is open-source components freely available on the net and an incentive to take such unauthorized action. This is not an abstract theoretical threat but a real and serious risk left behind by the current jailing policy.
Imposing a jailed environment without considering the trade-off with flexibility and usability has one overriding problem: it creates an incentive to break the rules to make life more hassle-free. A related and serious problem is that the IT department cannot control this and in most cases will remain unaware of it. It is virtually impossible to detect that users communicate and share sensitive medical or personal data over a private Google mail account, for example. A cryptic backdoor implanted on a computer within the UiB jail, with sufficient plausible deniability, can remain undetected for a long time without costly and tedious forensic analysis; and such an analysis will be conducted only by the police, after a catastrophic break-in has occurred, when it is too late.
There are many advanced users and smart students at the university. Many well understand (and do discuss!) the inconsistency of the restrictive jail policies. Some may find it quite fun to overcome silly rules that impose unneeded hassle. It can indeed be an interesting challenge and, unfortunately, an additional incentive.
A further problem is that many users do not bother to report smaller or transient problems through the normal issue-tracking channels such as hjelp.uib.no. They may not be acquainted with them, or may simply consider reporting a hassle when they are very busy (and they are too busy with real work to dwell on tangential IT problems). A typical course of action is to ask someone nearby for help or a workaround. Therefore, if knowledge of how to implant backdoors (and of the obvious fact that it is quite easy and simply solves the problem) spreads among students and staff, it can create a real security disaster. Unfortunately, backdoor skills are very likely to spread if the IT department continues to build an ever more restrictive jail, providing ever more incentives to break the rules. Then it would become essential to tighten the jail further: inspect all devices on entry and refuse entry to everyone with an IQ above 0.60. The simple fact is that the jail being so happily built is not made of rock and steel; it is made of paper.
The situation at UiB is quite different from the typical commercial organization on which standard security recipes are based. There are many brilliant students and staff here; many are young and like challenges. Some would not hesitate to take the risk, given that the benefit of making one's own hassle-free environment is high, the cost is zero, and the expected risk is rather low. Making a backdoor is also a fun way of learning technology, which is yet another incentive. Many folks are already aware of the various software tools and know how to use their black magic. People are ingenious, and people at UiB are on average much more ingenious than outside. So what is the threat model behind the jailed IT environment? To protect UiB from outside hackers? That is the wrong model, because many such hackers are already inside the jailed environment, ready to take up the challenge of punching its feeble paper walls from within.
What should be done?
The inconsiderate and inflexible jailing of the UiB IT networks should certainly be slowed down before it is too late and people start using third-party tools and making their own unauthorized solutions. There should be a serious analysis of what must be implemented, and over what time scale, so that users can become acquainted with the changes rather than suddenly facing huge hassle. At present, the "analysis" seems mainly focused on "what suddenly breaks once we put everything into a jail"; this is not acceptable. The policies should not be based mechanistically on some manual written for a different type of environment; they should be inclusive and flexible enough to adapt to the complex, diverse and heterogeneous UiB environment. The main focus should switch from technology to people: how to reach most of them (they are busy!), how to make security improvements minimally obtrusive, and how to teach very busy people sufficient security skills without much hassle. Specifically, the most important information should not be sent via a global mailing list that may disappear into a user's mail filter, but should be addressed personally to each user (it is not prohibitively hard to write a script for this, replacing %NAME% with the real user's name).
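The personalised-notification script really is a few lines of work. The sketch below illustrates the idea; the template text, the SMTP relay name, the sender address and the example user are all placeholders, not real UiB infrastructure:

```python
import smtplib
from email.message import EmailMessage

TEMPLATE = """Dear %NAME%,

An important change to the UiB login procedure takes effect next month: ...
"""

def personalise(template, name):
    """Substitute the %NAME% placeholder with the user's real name."""
    return template.replace("%NAME%", name)

def send_notifications(users, smtp_host="smtp.example.org",
                       sender="it-notifications@uib.no"):
    """Send one personally addressed message per (name, address) pair."""
    with smtplib.SMTP(smtp_host) as smtp:
        for name, addr in users:
            msg = EmailMessage()
            msg["From"] = sender
            msg["To"] = addr
            msg["Subject"] = "Upcoming change to your UiB login"
            msg.set_content(personalise(TEMPLATE, name))
            smtp.send_message(msg)

# The user list would normally come from the central user database, e.g.:
# send_notifications([("Kari Nordmann", "kari.nordmann@uib.no")])
```

A message that opens with the recipient's own name is far less likely to be filtered away, or skimmed and discarded, than a bulk mailing-list announcement.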
The technological part of the solution should develop sensible threat models based on attack and usage statistics. It should be governed by real risks rather than by a desire to protect everything quickly and at all costs. Some of the restrictions already applied can be relaxed. A reasonable solution is a more sensible score-based security mechanism, e.g. including IP-based rules for two-factor or two-step authentication. Some of the more secure services could, for example, be available without firewall restriction when the user comes from his or her frequently used Norwegian home IP address (improving usability while still reducing the potential attack surface). This effectively transforms the jail into a continuum that adapts to the threat and uncertainty level. It would also pay off to demonstrate to all users the practical benefits of client-side certificate authentication, OAuth2 and similar, more phishing-resistant security-token mechanisms (which can, for example, relax the need for TOTP/SMS authentication). The university should also facilitate much wider use of hardware authentication devices, such as the YubiKey, for proper two-factor authentication, perhaps even distributing such devices freely to some groups if universal deployment turns out to be expensive. Such personal identity verification hardware is a crucial component of modern zero-trust security approaches.
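A score-based mechanism of the kind proposed above can be sketched very simply: each contextual signal adjusts a trust score, and the score determines how much extra authentication a given login must pass. The signals, weights and thresholds below are purely illustrative assumptions, not an existing UiB policy:

```python
def trust_score(ctx):
    """Accumulate a trust score from contextual login signals (illustrative weights)."""
    score = 0
    if ctx.get("hardware_token"):      score += 50   # FIDO2/U2F token presented
    if ctx.get("known_device"):        score += 40   # device seen before
    if ctx.get("frequent_home_ip"):    score += 30   # e.g. usual Norwegian home IP
    if ctx.get("recent_mfa_same_ip"):  score += 20   # MFA passed recently from this IP
    if ctx.get("anonymizing_network"): score -= 40   # unusual ASN, known proxy exit, etc.
    return score

def required_step_up(score):
    """Map the trust score to the extra authentication demanded for this login."""
    if score >= 80:
        return "none"                  # direct access to lower-risk services
    if score >= 40:
        return "one_time_code"         # TOTP or similar step-up
    return "hardware_token_or_vpn"     # full verification for low-trust contexts

# A user on a known laptop at their usual home IP: moderate trust, a code suffices.
ctx = {"known_device": True, "frequent_home_ip": True}
print(required_step_up(trust_score(ctx)))  # → one_time_code
```

The point of the design is exactly the "continuum" argued for in the text: instead of one wall around everything, each request earns the level of friction its context deserves.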
The crucial element of the whole policy is to create incentives for using more secure tools. For example, the use of hardware personal identity tokens could allow users to bypass all or most restrictions, perhaps even the need for a VPN. There would currently be little added risk in such a policy, and users would be much happier to do their work securely whenever they need, without hassle. This would require hard work, additional integration and funding. But educating, helping and cooperating with users, rather than restricting and obstructing them, is the only viable strategy for achieving increased security in the university environment in reality, not just on paper.