Untrusted: The Issue with Decentralisation
Posted: 2022-06-30 (UTC+00:00)

Updated: 2022-10-29 (UTC+00:00)


Table of Contents

- Introduction
- Examples
  - Messaging
- Solution
- Conclusion

Introduction

A recent trend is seeing people move towards decentralised services and platforms. While this is reasonable, and I can understand why they are doing it, they seem to be doing it without thinking about the possible consequences. The issue with decentralisation is trust: there is no way to pin a key to a specific person, so you cannot be sure you are communicating with the person you are supposed to be communicating with. In this article, I will discuss some of the security issues with the decentralised model.

Examples

Messaging

When it comes to messaging your contacts on a centralised platform, such as Twitter or Facebook, the keys are pinned to the user account, with the user's password acting as the method of identification. This approach makes it impossible to log in as a specific user without their password, provided the password is strong enough to resist guessing, whether targeted guessing or exhaustive search. The trust in this centralised model rests on the high security of these platforms: it is extremely unlikely that anyone other than a government could access the accounts stored on such platforms' servers, so the physical security can be trusted. As for remote security, should a user's password be compromised, it can typically be reset if the user can prove they own the account via some form of identification; this is where the trust issue of decentralisation occurs.
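
To make this concrete, the following is a minimal sketch of a centralised key directory in which a public key is pinned to an account and a key change is only accepted when the account password is presented. It is not any real platform's implementation; all names and structure are assumptions for illustration.

# Minimal sketch of a centralised key directory: the platform pins a public
# key to an account and only accepts a new key when the account password is
# presented. Illustrative only; not any real platform's API.
import hashlib
import hmac
import os


def hash_password(password: str, salt: bytes) -> bytes:
    # PBKDF2 stands in for whatever password-hashing scheme a platform uses.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)


class KeyDirectory:
    def __init__(self) -> None:
        self._accounts: dict[str, dict] = {}

    def register(self, username: str, password: str, public_key: bytes) -> None:
        salt = os.urandom(16)
        self._accounts[username] = {
            "salt": salt,
            "password_hash": hash_password(password, salt),
            "public_key": public_key,
        }

    def update_key(self, username: str, password: str, new_key: bytes) -> bool:
        account = self._accounts[username]
        presented = hash_password(password, account["salt"])
        if not hmac.compare_digest(presented, account["password_hash"]):
            return False  # wrong password: the key stays pinned to the account
        account["public_key"] = new_key
        return True

    def lookup(self, username: str) -> bytes:
        # Contacts ask the platform, not the user's device, for the current key.
        return self._accounts[username]["public_key"]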

In the decentralised model, keys are kept on the users' devices, in their possession. While this sovereignty is welcome, it introduces a critical flaw in the security of communicating with anyone via a decentralised platform: should a user's device be lost, stolen, or otherwise compromised, there is no way to know that it happened, what the new keys really are, or whether the same user generated those keys. There is no centralised point where anyone can go to check whether the compromised user has updated their keys, which means at least one other secure channel must already have been in place before the compromise occurred. Even if there was, the security of endpoint devices, especially those of typical users, is much lower than that of a well-protected corporation's servers, making even those secure channels questionable to trust. Should all secure channels be compromised, there is no way to know whether the person you are communicating with is the real person or an impostor; there is no root of trust. This point is fatal; game over. The only way to establish trust again would be to physically meet and exchange keys.
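
The flaw can be illustrated with a small client-side sketch, again purely illustrative rather than any real messenger's code: the client pins the first key it sees for a contact, and when that key later changes it can only warn, because nothing in the decentralised model tells it whether the contact re-keyed legitimately or an impostor took over.

# Sketch of trust-on-first-use key pinning on a client device. Illustrative
# only. The crucial point is in verify(): a changed key is detectable, but
# the client cannot tell WHY it changed.
import hashlib


class PinStore:
    def __init__(self) -> None:
        self._pins: dict[str, str] = {}  # contact -> pinned key fingerprint

    @staticmethod
    def fingerprint(public_key: bytes) -> str:
        return hashlib.sha256(public_key).hexdigest()

    def verify(self, contact: str, public_key: bytes) -> str:
        seen = self.fingerprint(public_key)
        pinned = self._pins.get(contact)
        if pinned is None:
            self._pins[contact] = seen  # first use: trust and pin
            return "pinned on first use"
        if seen == pinned:
            return "ok"
        # The key changed. Lost device, reinstall, or impostor? The protocol
        # alone cannot answer; only an out-of-band check (another secure
        # channel, or meeting in person) can.
        return "key changed: verify out of band"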

Solution

I'll cut to the chase: there isn't a definitive solution. The best way to handle this situation is to design your threat model and think about your reasons for avoiding centralised platforms. Is it a lack of trust in a specific company? Is it the possibility of centralised platforms going offline? Only by thinking logically and tactically can you address the issues of both centralisation and decentralisation. One size fits all is rarely the correct approach, and it rarely works.

In order to avoid this loss of trust due to the lack of a root of trust, all users' keys must be stored in a centralised location that all contacts can consult in case of compromise, or periodically, to check the state of the keys and see whether they have changed. This centralised location requires some form of identification to ensure that the user changing their keys is really the same person who initially signed up for the platform, following a trust-on-first-use (TOFU) model. This isn't much different from what today's centralised platforms already do; the only difference is who controls the location. Trust is still present and required.
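
Below is a rough sketch of that idea; it mirrors the earlier key-directory example but gates key changes on the identification presented at signup rather than on a password. How identity is actually proven is left abstract and is an assumption of this illustration.

# Sketch of a centralised key location using trust-on-first-use on the user's
# identification: whatever proof of identity is presented at signup is pinned,
# and later key changes must present matching proof. Purely illustrative.
import hashlib
import hmac


def digest(identification: bytes) -> bytes:
    return hashlib.sha256(identification).digest()


class CentralKeyLocation:
    def __init__(self) -> None:
        self._records: dict[str, dict] = {}

    def sign_up(self, user: str, identification: bytes, public_key: bytes) -> None:
        # First use: the identification itself is what gets trusted (TOFU).
        self._records[user] = {
            "id_digest": digest(identification),
            "public_key": public_key,
        }

    def change_key(self, user: str, identification: bytes, new_key: bytes) -> bool:
        record = self._records[user]
        if not hmac.compare_digest(digest(identification), record["id_digest"]):
            return False  # does not match the identification pinned at signup
        record["public_key"] = new_key
        return True

    def current_key(self, user: str) -> bytes:
        # Contacts can check here after a suspected compromise, or periodically.
        return self._records[user]["public_key"]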

In order to have a root of trust, I have posted my keys to my website, which is protected by multiple layers of security:

0. I have provided identification to my domain name registrar, so that, should the domain be compromised, I can prove ownership and regain access to the website I rightfully own.

1. I have provided identification to my virtual private server host, so that, should the servers be compromised, I can prove ownership and regain access to the virtual private servers I rightfully rent.

2. I have pinned my website to a globally trusted certificate authority, Let's Encrypt, a trusted party that manages TLS certificates and verifies ownership of the domain when you connect to it.

3. I have enabled DNSSEC on my domain, making it extremely difficult to spoof my domain and make you believe you are connecting to it when you are actually connecting to someone else's.

While this is not the most secure implementation of a root of trust, it is the most secure implementation currently available to me. The domain name registrar or the virtual private server host could still tamper with my domain and data, but they are the most trustworthy parties available. Decentralisation, in its current form, would make even this impossible to implement.
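
As a final sketch, and under stated assumptions (the URL is a placeholder and the expected fingerprint is one the contact saved on an earlier visit), this is roughly what a contact's check against such a website-based root of trust could look like. The TLS certificate validation performed by the standard library covers the certificate-authority layer; the registrar, hosting, and DNSSEC layers sit underneath and are not visible in code.

# Sketch of a contact checking keys published on a personal website.
# KEY_URL and EXPECTED_FINGERPRINT are placeholders, not real values.
import hashlib
import urllib.request

KEY_URL = "https://example.com/keys.asc"  # placeholder location of the published keys
EXPECTED_FINGERPRINT = "0000...placeholder"  # fingerprint saved on an earlier visit


def fetch_published_keys(url: str) -> bytes:
    # urlopen verifies the server's TLS certificate chain by default.
    with urllib.request.urlopen(url) as response:
        return response.read()


def keys_unchanged(key_material: bytes, expected: str) -> bool:
    return hashlib.sha256(key_material).hexdigest() == expected


if __name__ == "__main__":
    keys = fetch_published_keys(KEY_URL)
    if keys_unchanged(keys, EXPECTED_FINGERPRINT):
        print("Published keys match the fingerprint saved earlier.")
    else:
        print("Published keys changed: confirm out of band before trusting them.")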

Conclusion

Do not demand anonymity; demand privacy and control of your own data. Complete anonymity makes it impossible to have a root of trust, and it is rarely necessary. It is possible for someone else to hold your keys without taking control of them and dictating what you can and cannot do (Twitter's misinformation policy comes to mind). If a platform is not listening to your or other people's concerns about how it is run, show it that you will not stand for that, and move to a different one. This may not be ideal, but it is no different from moving from one decentralised platform to another. Centralisation is not what is evil; the people in control of the platforms are what is potentially evil. Carefully, logically, and tactically choose whom to trust. Decentralisation does not do much for trust when you must still trust the operator of the decentralised platform and remain subject to its possibly draconian policies. If government is what you are trying to avoid, there is no denying that avoiding it is practically impossible: a government could always take down the decentralised platform, forcing you to move to another, and it could also take down the centralised key storage site mentioned earlier in this article. A government is not something you can easily avoid, and decentralisation does not solve the government problem. In order to live a happy, fun, and fulfilled life while protecting yourself against realistic threats, there are only two words you must live by: threat model.