I’m currently reading Iain M. Banks’ latest Culture novel The Hydrogen Sonata (quotes, but no spoilers ahead). It has a discussion of the ethics of simulating individuals, which Banks calls the Simming Problem. As someone who uses modelling to study cultural evolution, it struck a chord. Those who’ve read Culture novels will be familiar with this kind of issue, but for the uninitiated: the Culture is a hyper-advanced race with out-of-this-galaxy Artificial Intelligences called Minds, who usually end up tangling with the affairs of other, lesser cultures in times of crisis. A useful tool for them would be simulating events in order to pick the best course of action. However, this brings with it some ethical concerns:
“Most problems, even seemingly really tricky ones, could be handled by simulations which happily modelled slippery concepts like public opinion or the likely reactions of alien societies by the appropriate use of some especially cunning and devious algorithms… nothing more processor-hungry than the right set of equations…
But not always. Sometimes, if you were going to have any hope of getting useful answers, there really was no alternative to modelling the individuals themselves, at the sort of scale and level of complexity that meant they each had to exhibit some kind of discrete personality, and that was where the Problem kicked in.
Once you’d created your population of realistically reacting and – in a necessary sense – cogitating individuals, you had – also in a sense – created life. The particular parts of whatever computational substrate you’d devoted to the problem now held beings; virtual beings capable of reacting so much like the back-in-reality beings they were modelling – because how else were they to do so convincingly without also hoping, suffering, rejoicing, caring, living and dreaming?
By this reasoning, then, you couldn’t just turn off your virtual environment and the living, thinking creatures it contained at the completion of a run or when a simulation had reached the end of its useful life; that amounted to genocide.”
Uh oh. Given the number of simulations I’ve ended, this might make me pretty much a war criminal. Seriously, though: even if this is a genuine problem, we’re so far away from modelling at this level that it’s not worth losing any sleep over. The topic links to a lot of philosophical work on what constitutes life and sentience, and people have nonetheless developed codes of ethics for simulationists. A version of the Simming Problem also comes up in Open problems in artificial life, which discusses both biological and virtual life (Bedau et al. 2000, p. 375):
“It is worth noting that public protocols govern the responsible treatment of human and animal research subjects. The lack of analogous protocols in artificial life may be no serious problem today, but as we create more sophisticated living entities we will have to face the responsibility of treating them appropriately.”
Banks also discusses two other problems. First, if you make your simulations too realistic, you’re basically left with the same problem you had in reality. Second, there’s the Chaos Problem: even if you didn’t think that simulated beings really had a right to life, and you ran loads of simulations, different runs of the same simulation might give you different results, and there’s no telling which one will actually match up with reality (a toy illustration below).
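To make the Chaos Problem concrete, here’s a minimal sketch of my own (a toy neutral cultural-transmission model, nothing from the novel, with parameter values I’ve simply made up): every generation, each agent copies the cultural variant of a randomly chosen agent from the previous generation. The model is identical across runs; only the random seed changes.

```python
import random

def run(seed, n=100, generations=200):
    """Toy neutral model: n agents each hold variant 0 or 1; every
    generation each agent copies a randomly chosen agent from the
    previous generation (unbiased transmission, i.e. pure drift)."""
    rng = random.Random(seed)
    population = [0] * (n // 2) + [1] * (n - n // 2)  # start at 50/50
    for _ in range(generations):
        population = [rng.choice(population) for _ in range(n)]
    return sum(population) / n  # final frequency of variant 1

for seed in range(5):
    print(f"seed {seed}: final frequency of variant 1 = {run(seed):.2f}")
```

Drift alone pushes each run towards fixation on one variant or the other, but which way it goes depends entirely on the seed. The alternative is to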
“access the summed total of galactic history and analyse, compare and contrast the current situation relative to similar ones from the past… Its official title was Constructive Historical Integrative Analysis.”
… Sounds a lot like Bayesian phylogenetics to me. The questions raised here, though, are often a source of conflict between approaches to studying cultural evolution. How useful are abstract models? How do we interpret their results? How valid is it to model populations rather than individuals? How useful is it to model things as realistically as possible? Should we only be using real data? How do we integrate real data and abstract models?
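At least one of those questions has a concrete flavour. Another hedged sketch of my own (not taken from any particular paper, with an illustrative bias parameter s that I’ve invented for the example): the individual-based model above can be swapped for a population-level recursion that tracks only the frequency p of a variant. It’s far cheaper to run, but it throws away exactly the run-to-run variation that the Chaos Problem is about.

```python
def population_model(p0=0.5, s=0.0, generations=200):
    """Population-level counterpart of the agent-based model above:
    track only the frequency p of variant 1. With transmission bias s,
    one standard recursion is p' = p + s * p * (1 - p); when s = 0
    (neutral transmission) the expected frequency never moves."""
    p = p0
    for _ in range(generations):
        p = p + s * p * (1 - p)
    return p

print(f"neutral (s=0.00): p = {population_model(s=0.0):.2f}")   # stays at 0.50
print(f"biased  (s=0.05): p = {population_model(s=0.05):.2f}")  # climbs towards 1.00
```

On this description the neutral model literally does nothing, while the individual-based runs above drift all the way to fixation; which of the two is the “right” level of description is precisely the sort of disagreement those questions point at.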
But of course, Banks has the last word:
“In the end, though, there was another name the Minds used, amongst themselves, for this technique, which was Just Guessing.”
I have been thinking about this ever since I read that chapter. Interesting.
It strikes me that there might well be a need for a sort of upside-down theology. Humans spent much of our development trying to work out how to be good subjects of God, only to discover she probably doesn’t exist. But now there’s a newer problem: what happens when WE are the gods of a small universe existing primarily as a logical substrate? What are the ethical responsibilities of a god? The gods we thought up for ourselves seem to be poor role models. But maybe we can do better?