Update webpage "Blog - #2" from version "9.0.0-beta.1" to "9.0.1-beta.1"

inference 2024-03-18 02:41:28 +00:00
parent 6aa565643a
commit 127f7311fb
Signed by: inference
SSH Key Fingerprint: SHA256:FtEVfx1CmTKMy40VwZvF4k+3TC+QhCWy+EmPRg50Nnc


@@ -1,10 +1,10 @@
<!DOCTYPE html>
<!-- Inferencium - Website - Blog - #2 -->
<!-- Version: 9.0.1-beta.1 -->
<!-- Copyright 2022 Jake Winters -->
<!-- SPDX-License-Identifier: BSD-3-Clause WITH AdditionRef-Inferencium-Personal-exception -->
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
@@ -48,120 +48,99 @@
</nav>
<section id="introduction">
<h2><a href="#introduction">Introduction</a></h2>
<p>A recent trend is seeing people move towards decentralised services and platforms. While this
is reasonable, and I can understand why they are doing it, they seem to be doing it without
thinking about the possible consequences. The issue with decentralisation is trust; there is no
way to pin a key to a specific person and ensure that you are communicating with the same person
you are supposed to be communicating with. In this article, I will discuss some of the security
issues with the decentralised model.</p>
</section>
<section id="examples">
<h2><a href="#examples">Examples</a></h2>
<section id="examples-messaging">
<h3><a href="#examples-messaging">Messaging</a></h3>
<p>When it comes to messaging your contacts on a centralised platform, such as Twitter
or Facebook, the keys are pinned to that user's account, with the user's password as the
method of identification. This approach makes it impossible to log in as a specific user
without their password, provided the password is strong enough not to be guessed, whether
by personal guessing or by exhaustive search. The trust in this centralised model rests on
the high security these platforms have. It is extremely unlikely that anyone other than a
government would be able to access the accounts stored on such platforms' servers, which
makes their physical security trustworthy. As for remote security, should a user's password
be compromised, it can typically be reset if the user can prove they are the owner of the
account via some form of identification; this is where the trust issue of
decentralisation occurs.</p>
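<p>To make the account model above concrete, here is a minimal sketch of a server-side store in
which a user's public key is bound to their account and released only after a password check. It
is an illustration under my own assumptions; the field names, storage, and flow are hypothetical,
not any real platform's code.</p>
<pre>
# A minimal sketch: the account store binds a public key to a username, and the
# password is the only identification required to retrieve it.
import hashlib
import hmac
import os

def register(store, username, password, public_key):
    salt = os.urandom(16)
    store[username] = {
        "salt": salt,
        "hash": hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1),
        "public_key": public_key,  # the key is pinned to the account, server-side
    }

def fetch_key(store, username, password):
    record = store[username]
    candidate = hashlib.scrypt(password.encode(), salt=record["salt"], n=2**14, r=8, p=1)
    if not hmac.compare_digest(candidate, record["hash"]):
        raise PermissionError("wrong password")
    return record["public_key"]

accounts = {}
register(accounts, "alice", "correct horse battery staple", b"alice-public-key")
print(fetch_key(accounts, "alice", "correct horse battery staple"))
</pre>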
<p>In the decentralised model, keys are kept on the users' devices, in their possession.
While this sovereignty is welcome, it introduces a critical flaw in the security of
communicating with anyone via a decentralised platform; should a user's device be lost,
stolen, or otherwise compromised, there is no way to know that it happened, what the new
keys really are, or whether the same user generated those keys. There is no centralised
point where anyone can go to check whether the compromised user has updated their keys,
which means there must already have been at least one other secure channel in place before
the compromise occurred. Even if there was, the security of endpoint devices, especially
those of typical users, is much lower than that of a well-protected corporation's servers,
making even those secure channels questionable to trust. Should all secure channels be
compromised, there is literally no way to know whether the person you are communicating
with is the real person or an imposter; there is no root of trust. This point is fatal;
game over. The only way to establish trust again would be to physically meet and exchange
keys.</p>
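<p>The following sketch shows what an endpoint on a decentralised platform can and cannot do in
this situation: it pins a contact's key fingerprint on first use and detects a later change, but
it has no authority to consult to decide whether the new key is legitimate. The pin store,
contact identifiers, and key format are hypothetical.</p>
<pre>
# A minimal sketch of trust-on-first-use key pinning on an endpoint device.
import hashlib

def fingerprint(public_key_bytes):
    return hashlib.sha256(public_key_bytes).hexdigest()

def check_contact(pins, contact_id, received_key_bytes):
    seen = fingerprint(received_key_bytes)
    if contact_id not in pins:
        pins[contact_id] = seen  # first use: there is nothing to compare against
        return "pinned"
    if pins[contact_id] == seen:
        return "match"
    # The fingerprint changed. Without a central location to consult, the device
    # cannot tell a legitimate re-keying from a compromise; the only recourse is
    # an out-of-band channel, such as meeting in person.
    return "changed: verify out of band"

pins = {}
print(check_contact(pins, "bob", b"bob-key-1"))  # pinned
print(check_contact(pins, "bob", b"bob-key-2"))  # changed: verify out of band
</pre>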
</section>
</section>
<section id="solution">
<h2><a href="#solution">Solution</a></h2>
<p>I'll cut to the chase; there isn't a definitive solution. The best way to handle this
situation is to design your threat model and think about your reasoning for avoiding centralised
platforms. Is it a lack of trust in a specific company? Is it the possibility of centralised
platforms going offline? Only by thinking logically and tactically can you solve the issues of
both centralisation and decentralisation. One size fits all is rarely the correct approach,
nor does it typically work.</p>
<p>In order to avoid the loss of trust caused by the lack of a root of trust, all users' keys
must be stored in a centralised location which all contacts can go to in case of compromise, or
to periodically check the state of keys and see whether they have changed. This centralised
location requires some form of identification to ensure that the user changing their keys is
really the same person who initially signed up for the platform, using a trust-on-first-use
(TOFU) model. This isn't much different from what today's centralised platforms are already
doing; the only difference is who controls the location. Trust is still present and required.</p>
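<p>As a rough sketch of such a location, under my own assumptions rather than any existing
service, the registry below accepts the first key seen for a name (TOFU), only accepts key
changes after the operator re-identifies the user, and lets contacts look up the current key
whenever they suspect a compromise. The class, its storage, and the identity-check hook are
hypothetical.</p>
<pre>
# A minimal sketch of the centralised key location described above.
class KeyRegistry:
    def __init__(self, verify_identity):
        self.records = {}  # username mapped to the currently accepted public key
        self.verify_identity = verify_identity

    def register(self, username, public_key):
        # Trust on first use: the first key seen for a name is accepted as-is.
        if username not in self.records:
            self.records[username] = public_key

    def update_key(self, username, new_key, identity_proof):
        # A key change is only accepted after the operator re-identifies the
        # user, much like account recovery on today's centralised platforms.
        if not self.verify_identity(username, identity_proof):
            raise PermissionError("identity check failed")
        self.records[username] = new_key

    def lookup(self, username):
        # Contacts come here after a suspected compromise, or periodically.
        return self.records.get(username)
</pre>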
<p>In order to have a root of trust, I have posted my keys to my website, which
is protected by multiple layers of security:</p>
<ol>
<li>I have provided identification to my domain name registrar, to ensure I can regain
access to the website I rightfully own, should it be compromised.</li>
<li>I have provided identification to my virtual private server host, to ensure I can
regain access to the virtual private servers I rightfully rent, should they be
compromised.</li>
<li>I have pinned my website to a globally trusted certificate authority, Let's Encrypt,
a trusted party which manages TLS certificates and attests ownership of the domain when
you connect to it.</li>
<li>I have enabled DNSSEC on my domain, so it is extremely difficult to spoof my domain
and make you believe you're connecting to it when you're actually connecting to someone
else's.</li>
</ol>
<p>While this is not the most secure implementation of a root of trust imaginable, it is the
most secure implementation currently available to me. Although the domain name registrar or
virtual private server host could tamper with my domain and data, they are the most trustworthy
parties available. Decentralisation, in its current form, would make a root of trust like this
impossible to implement.</p>
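<p>From a contact's point of view, using the site as a root of trust could look like the sketch
below: fetch the published key material over HTTPS, letting certificate validation exercise the
certificate authority layer (DNSSEC validation happens at the resolver, not in the client code),
then compare a contact's key fingerprint against what is published. The URL and the page format
are placeholders, not my real key page.</p>
<pre>
# A minimal sketch, assuming the published page simply lists hex SHA-256 key
# fingerprints, one per line.
import hashlib
import urllib.request

KEY_PAGE = "https://example.org/keys.txt"  # hypothetical location of the published keys

def published_fingerprints():
    # urlopen validates the TLS certificate against trusted certificate
    # authorities by default; DNSSEC validation is the resolver's job.
    with urllib.request.urlopen(KEY_PAGE) as response:
        text = response.read().decode("utf-8")
    return {line.strip() for line in text.splitlines() if line.strip()}

def contact_key_is_published(contact_key_bytes):
    return hashlib.sha256(contact_key_bytes).hexdigest() in published_fingerprints()
</pre>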
</section>
<section id="conclusion">
<h2><a href="#conclusion">Conclusion</a></h2>
<p>Do not demand anonymity; demand privacy and control of your own data. Complete anonymity
makes it impossible to have a root of trust, and is rarely necessary. It is possible for someone
else to hold your keys without them taking control of them and dictating what you can and cannot
do (X's misinformation policy comes to mind). If a platform is not listening to your concerns,
or other people's concerns, about how it is being run, show that platform that you will not
stand for it, and move to a different one. This may not be ideal, but it is no different to
moving from one decentralised platform to another. Centralisation is not what is evil; the
people in control of the platforms are what is potentially evil. Carefully, logically, and
tactically, choose who to trust. Decentralisation doesn't do much for trust when you must still
trust the operator of the decentralised platform, and are still subject to its possibly
draconian policies. If government is what you are trying to avoid, there is no denying that it
is practically impossible to avoid; a government could always take down the decentralised
platform, forcing you to move to another, and it could also take down the centralised key
storage site mentioned earlier in this article. A government is not something you can so easily
avoid, and decentralisation does not solve the government issue. In order to live a happy, fun,
and fulfilled life, while protecting yourself against realistic threats, there are only two
words you must live by: Threat model.</p>
</section>
<div class="sitemap-small"><a href="../sitemap">Sitemap</a></div>