FRACTAL_VERSING
A CYBERSECURITY RISK INTERPRETATION
For this example, rather than create a new FractalVersing ontology, we're going to interpret the one created in TECHNOLOGY FROM THE PERSPECTIVE OF PHILOSOPHY AND DESIGN. Why? Creating and using your own FractalVersing ontology is useful, but it is much more powerful to use ontologies created by others, especially in combination, because they bring in perspectives from outside your own blindspots.
INTERPRETATION
When we engage with a system, we become a part of it.
From a cybersecurity risk point of view, a number of things could be going on here.
If the system has an inherent risk (which of course it does), then when we engage with it we absorb or assume that risk; it becomes our risk. An example of this might be needing to hand over personal data in order to use the system: our data is now part of the system, and we are at risk of having that data lost or abused.
But if we also have some sort of inherent risk, then we are bringing that into the system as well. An example of this might be when a user with a different way of thinking uses a system in unexpected ways and highlights latent bugs in the system.
Finally, there is the possibility of an emergent risk that comes from the relationship between us and the system, but wasn't latent in either to begin with. This one is harder to think about; what exactly does an emergent risk look like? It might be useful here to draw a distinction between a bug (as above) and a flaw. A bug, such as adding two numbers when they should have been subtracted, may only be exposed when a user tries to carry out a particular action. If that action is rare, we tend to call these edge cases. But the bug was always in the code, waiting to be exposed. If the bug can be exploited, from a cybersecurity perspective we call it a vulnerability. A security flaw is something different: a higher-level implication of the design of the system, perhaps an emergent property of the system as a whole and of the context in which it exists.
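To make the bug-versus-vulnerability part of that distinction concrete, here is a minimal hypothetical sketch (the function and values are invented for illustration and aren't drawn from any real system) of a sign error that stays hidden on the common path and only surfaces, exploitably, on a rare one.

    # A toy latent bug: a value is added when it should have been subtracted.
    def apply_adjustment(balance, adjustment):
        """Intended behaviour: negative adjustments should reduce the balance,
        positive ones increase it. The sign handling is wrong."""
        return balance + abs(adjustment)   # BUG: should be balance + adjustment

    # The common path looks fine, so the bug stays hidden as an edge case...
    print(apply_adjustment(100, 25))    # 125, as expected for a top-up

    # ...until the rare path is exercised. A user (or attacker) who notices this
    # can grow their balance when it should shrink: the bug is now a vulnerability.
    print(apply_adjustment(100, -25))   # 125, should have been 75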
Our interactions with an artifact design our mental models.
First we need to understand what artifact means in this context. A quick Google search gives two different definitions of artifact:
an object made by a human being, typically one of cultural or historical interest
and
something observed in a scientific investigation or experiment that is not naturally present but occurs as a result of the preparative or investigative procedure
Technology systems subject to cybersecurity risk are obviously made by humans, as are the artifacts that those systems are composed of, typically referred to as components. These could be anything from a web server to a login form. In a sense they are of cultural interest.
The other definition hints at something interesting. An artifact not as a thing-in-itself, but as a manifestation of a consequence. In this sense, an exploit could be considered an artifact of the system.
For me, this verse says something about the two-way relationship between people and artifacts: the mental models that created the artifact, and how our own mental models change in response to it. A login form is created as a way to capture input, and our use of the term "form" long predates the technology itself. So we have a mental model for how we expect a user to behave in response to the artifact, and we're borrowing from the familiarity of "forms". But what mental models are hidden from the typical user? Are they exposed to an understanding of what happens with their data? Do they understand the validation process that takes place, the database lookup for their specified username, the hashing of the password? Should they? We know people don't read the small print when signing up for services, but should we be exposing more of the underlying mental models of the technology to give users greater awareness, and therefore greater control, over their role in the use of technology and in cybersecurity risk?
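To make those hidden steps tangible, here is a minimal hypothetical sketch (using Python's standard library; the username, salt, and structure are invented for illustration) of what typically happens behind a login form: a lookup for the specified username, hashing of the submitted password, and a comparison against the stored hash.

    import hashlib
    import hmac

    # Stand-in for a user database: username -> (salt, stored password hash).
    USERS = {
        "alice": (b"per-user-salt",
                  hashlib.pbkdf2_hmac("sha256", b"correct horse", b"per-user-salt", 100_000)),
    }

    def login(username, password):
        record = USERS.get(username)            # database lookup for the username
        if record is None:
            return False
        salt, stored_hash = record
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        return hmac.compare_digest(candidate, stored_hash)   # constant-time comparison

    print(login("alice", "correct horse"))      # True
    print(login("alice", "wrong password"))     # False

None of this is visible to the person filling in the form; the only mental model they are given is "type your password and press submit".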
When we create an artifact, it is an act of power.
This is an intriguing, and I suspect challenging, verse. What power is a software developer, designer, or entire company exerting over someone when that someone interacts with the artifacts they create? A login form may seem trivial, but how they've chosen to implement it, and the authentication process behind it, has a real-world impact on users. If it's a random website, it might not matter much. But what about a digital service provided by the government? Something essential. Users have to put a lot of trust into how services use and protect their data. I think a breach of this trust was shown very clearly when the US Office of Personnel Management was hacked in 2015 and "Approximately 22.1 million records were affected, including records related to government employees, other people who had undergone background checks, and their friends and family." [1]. It's a cliché, but to quote a Spider-Man movie, "With great power comes great responsibility". A responsibility that many organizations don't take seriously enough.
When we design for many potential states, we open ourselves up to the possibility for new knowledge to be expressed.
Complexity and complicatedness are enemies of security. But they are also inevitable. The more ways things can be done, the more options and inputs there are, the more things can go wrong from a security perspective. But that is not a reason to limit the systems we build. We need to create technological systems that provide utility, but that are also aligned to values. Imposing rigid constraints in inappropriate ways only leads to disaster later on. This is where tools like the Cynefin framework are useful: they allow us to understand the circumstances and context of the system we're dealing with, and to take appropriate action. Dealing with medical devices? Then keeping things as simple as possible and having appropriate regulations in place makes a lot of sense. Artificial intelligence may be a lot harder to restrict in that way; we need different techniques to secure those sorts of technologies.
CONCLUSION
The FractalVersing ontology has given us a lot to think about when we apply it to cybersecurity risk in a general sense. By its very nature, this is all quite philosophical, so the next question is probably "so what"?
Let's imagine that we're building a new web service. Though we may have other risks as a business (economic, time to market, etc.), we must take our responsibilities for protecting users and their data seriously right from the start, and not as an afterthought. We should ensure that users are able to make an informed decision about how to use the service, and what that means for the security of their systems and data. We should look holistically at the different ways we can introduce security, and not rely on silver bullets or assume that security is a once-and-done activity. Creating a healthy culture around good cybersecurity is important in organizations of any size, and ethics should be a part of that. It isn't enough to just train developers on SQL injection and how to prevent it (the sort of lesson sketched below). We should all feel a sense of responsibility for the systems we create, manage, and use.
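As a closing illustration, here is a minimal hypothetical sketch (using Python's built-in sqlite3 as a stand-in database; the table and values are invented) of the kind of narrow lesson that SQL injection training covers: parameterized queries instead of string concatenation. Useful, but as argued above, not sufficient on its own.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, password_hash TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'hash123')")

    user_input = "alice' OR '1'='1"   # a classic injection payload

    # Vulnerable: the input is spliced straight into the SQL string.
    unsafe = conn.execute(
        "SELECT * FROM users WHERE name = '" + user_input + "'"
    ).fetchall()
    print(len(unsafe))   # 1 row: the WHERE clause has been subverted

    # Safer: a parameterized query treats the input purely as data.
    safe = conn.execute(
        "SELECT * FROM users WHERE name = ?", (user_input,)
    ).fetchall()
    print(len(safe))     # 0 rows: no user is literally named "alice' OR '1'='1"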