<!DOCTYPE html>
<html>
<head>
<title>Inferencium Network - Blog - Untrusted: The Issue with Decentralisation</title>
<link rel="stylesheet" href="../infnet.css">
</head>
<body>
<h1>Blog - #2</h1>
<br>
<h2>Untrusted: The Issue with Decentralisation</h2>
<br>
<p>Posted: 2022-06-30 (UTC+00:00)</p>
<p>Updated: 2022-06-30 (UTC+00:00)</p>
<br>
<h4>Introduction</h4>
<p>A recent trend has people moving towards decentralised services and<br>
platforms. While this is understandable, many seem to be doing it without<br>
thinking about the possible consequences. The issue with decentralisation<br>
is trust; there is no way to pin a key to a specific person, to ensure<br>
that you are communicating with the person you are supposed to be<br>
communicating with. In this article, I will discuss some of the security<br>
issues with the decentralised model.</p>
<br>
<h4>Example: Messaging</h4>
<p>When it comes to messaging your contacts on a centralised platform,<br>
such as Twitter or Facebook, the keys are pinned to the user account, with<br>
the user's password as the method of identification. This approach makes<br>
it impossible to log in as a specific user without their password,<br>
provided the password is strong enough to resist guessing, whether by<br>
personal knowledge or exhaustive search. The trust in this centralised<br>
model rests on the high security of these platforms. It is extremely<br>
unlikely that anyone other than a government would be able to access the<br>
accounts stored on such platforms' servers, so the physical security can<br>
be trusted. As for remote security, should a user's password be<br>
compromised, it can typically be reset if the user can prove they own the<br>
account via some form of identification; it is here that the trust issue<br>
of decentralisation arises.</p>
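<br>
<p>As a rough illustration only, the sketch below (Python) shows how a<br>
centralised platform could bind a public key to an account and allow key<br>
rotation only after password verification. The names and data structures<br>
are assumptions made for this article, not the implementation of any real<br>
platform.</p>
<pre>
# Toy sketch (Python): binding a public key to an account on a centralised
# platform, with the password as the proof of identity. Hypothetical data
# structures only; real platforms add rate limiting, recovery flows, and more.

import hashlib
import secrets

accounts = {}  # username: {"salt", "password_hash", "public_key"}

def register(username, password, public_key):
    """Create an account and pin its public key on the server."""
    salt = secrets.token_bytes(16)
    accounts[username] = {
        "salt": salt,
        "password_hash": hashlib.scrypt(password.encode(), salt=salt,
                                        n=16384, r=8, p=1),
        "public_key": public_key,
    }

def update_key(username, password, new_key):
    """Only the password holder may rotate the pinned key."""
    account = accounts.get(username)
    if account is None:
        return False
    attempt = hashlib.scrypt(password.encode(), salt=account["salt"],
                             n=16384, r=8, p=1)
    if secrets.compare_digest(attempt, account["password_hash"]):
        account["public_key"] = new_key
        return True
    return False
</pre>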
<br>
<p>In the decentralised model, keys are kept on the users' devices, in<br>
their possession. While this sovereignty is welcome, it introduces a<br>
critical flaw in the security of communicating with anyone via a<br>
decentralised platform; should a user's device be lost, stolen, or<br>
otherwise compromised, there is no way to know that it happened, what the<br>
new keys really are, or whether the same user generated them. There is no<br>
centralised point where anyone can go to check whether the compromised<br>
user has updated their keys, which means at least one other secure channel<br>
must already have been in place before the compromise occurred. Even if<br>
there was, the security of endpoint devices, especially those of typical<br>
users, is much lower than that of a well-protected corporation's servers,<br>
making even those secure channels questionable to trust. Should all secure<br>
channels be compromised, there is literally no way to know whether the<br>
person you are communicating with is the real person or an imposter; there<br>
is no root of trust. This point is fatal; game over. The only way to<br>
establish trust again would be to physically meet and exchange keys.</p>
<br>
<h4>Solution</h4>
<p>I'll cut to the chase: there isn't a definitive solution. The best way<br>
to handle this situation is to design your threat model and think about<br>
your reasons for avoiding centralised platforms. Is it a lack of trust in<br>
a specific company? Is it the possibility of centralised platforms going<br>
offline? Only by thinking logically and tactically can you solve the<br>
issues of both centralisation and decentralisation. A one-size-fits-all<br>
approach is rarely the correct one, and it rarely works.</p>
<br>
<p>In order to avoid the loss of trust caused by the lack of a root of<br>
trust, all users' keys must be stored in a centralised location which all<br>
contacts can consult in case of compromise, or check periodically to see<br>
whether keys have changed. This centralised location requires some form of<br>
identification to ensure that the user changing their keys is really the<br>
same person who initially signed up for the platform, using a<br>
trust-on-first-use (TOFU) model. This isn't much different from what<br>
today's centralised platforms are already doing; the only difference is<br>
who controls the location. Trust is still present and required.</p>
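<br>
<p>To make the TOFU idea concrete, the minimal sketch below (Python)<br>
remembers the first public-key fingerprint seen for each user and flags<br>
any later change. The file name and functions are assumptions made for<br>
this article, not part of any existing platform.</p>
<pre>
# Minimal sketch (Python) of trust-on-first-use (TOFU) key pinning.
# The first key seen for a user is pinned; any later mismatch is reported
# so it can be checked out of band.

import hashlib
import json
from pathlib import Path

PIN_STORE = Path("pinned_keys.json")  # hypothetical local pin store

def fingerprint(public_key_bytes):
    """Return a comparable fingerprint of a public key."""
    return hashlib.sha256(public_key_bytes).hexdigest()

def verify_tofu(username, public_key_bytes):
    """Pin the key on first use; afterwards, accept only the pinned key."""
    pins = json.loads(PIN_STORE.read_text()) if PIN_STORE.exists() else {}
    seen = fingerprint(public_key_bytes)
    if username not in pins:
        pins[username] = seen  # first use: trust and remember
        PIN_STORE.write_text(json.dumps(pins, indent=2))
        return True
    return pins[username] == seen  # later use: must match the pin
</pre>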
<br>
<p>In order to have a root of trust, I have posted my keys to my website,<br>
which is protected by multiple layers of security:<br>
<br>
1. I have provided identification to my domain name registrar, so that I<br>
can regain access to the domain I rightfully own should it ever be<br>
compromised.<br>
<br>
2. I have provided identification to my virtual private server host, so<br>
that I can regain access to the virtual private servers I rightfully rent<br>
should they ever be compromised.<br>
<br>
3. I have pinned my website to a globally trusted certificate authority,<br>
Let's Encrypt, a trusted party which manages TLS certificates and proves<br>
ownership of the domain when you connect to it.<br>
<br>
4. I have enabled DNSSEC on my domain, so it is extremely difficult to<br>
spoof my domain and make you believe you're connecting to it when you're<br>
actually connecting to someone else's.<br>
<br>
While this is not the most secure implementation of a root of trust, it is<br>
the most secure one currently available to me. While the domain name<br>
registrar or virtual private server host could tamper with my domain and<br>
data, they are the most trustworthy parties available. In its current form,<br>
decentralisation would make even this impossible to implement.</p>
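<br>
<p>As a rough, hypothetical sketch of how a visitor could check keys<br>
published in this way, the code below (Python) fetches a key file over<br>
HTTPS, with certificate validation handled by the standard library, and<br>
compares its fingerprint against one recorded earlier out of band. The URL<br>
and expected fingerprint are placeholders, not my real values.</p>
<pre>
# Rough sketch (Python): checking keys published on a website against a
# fingerprint recorded earlier. The URL and fingerprint are placeholders.

import hashlib
import urllib.request

KEY_URL = "https://example.com/keys.asc"           # hypothetical location
EXPECTED_FINGERPRINT = "replace-with-known-value"  # recorded out of band

def fetch_published_keys(url):
    """Download the key file; the TLS certificate chain is verified."""
    with urllib.request.urlopen(url) as response:
        return response.read()

def keys_unchanged(key_bytes, expected):
    """Compare the downloaded keys against the recorded fingerprint."""
    return hashlib.sha256(key_bytes).hexdigest() == expected

if __name__ == "__main__":
    data = fetch_published_keys(KEY_URL)
    print("keys match recorded fingerprint:",
          keys_unchanged(data, EXPECTED_FINGERPRINT))
</pre>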
<br>
<h4>Conclusion</h4>
<p>Do not demand anonymity; demand privacy and control of your own data.<br>
Complete anonymity makes it impossible to have a root of trust, and it is<br>
rarely, if ever, necessary. It is possible for someone else to hold your<br>
keys without controlling them and dictating what you can and cannot do<br>
(Twitter's misinformation policy comes to mind). If a platform is not<br>
listening to your or other people's concerns about how it is being run,<br>
show that platform you will not stand for it, and move to a different one.<br>
This may not be ideal, but it is no different from moving from one<br>
decentralised platform to another. Centralisation is not what is evil; the<br>
people in control of the platforms are what is potentially evil. Carefully,<br>
logically, and tactically choose who to trust. Decentralisation doesn't do<br>
much for trust when you must still trust the operator of the decentralised<br>
platform, and are still subject to its possibly draconian policies. If<br>
government is what you are trying to avoid, there is no denying that it is<br>
practically impossible to do so; a government could always take down the<br>
decentralised platform, forcing you to move to another, and it could also<br>
take down the centralised key storage site mentioned earlier in this<br>
article. A government is not something you can easily avoid, and<br>
decentralisation does not solve the government issue. In order to live a<br>
happy, fun, and fulfilled life, while protecting yourself against realistic<br>
threats, there are only two words you must live by: threat model.</p>
<br>
<br>
<br>
<a href="blog.html">Back</a>
</body>
</html>