Hey Mythcreants, so I’m writing a story with a lot of cyberpunk and transhumanist tech in it – is any of that impractical from a storytelling perspective? I’ve noticed that some settings like Eclipse Phase and Altered Carbon have it, while shows like Star Trek and Star Wars don’t, even though they often have equally advanced technology.
Anonymous
Hey Anon, great to hear from you again!
The good news is that you probably don’t have to worry much about what transhumanist tech is impractical, because very little of it is likely to cause problems for a written story. Franchises like Star Trek and Star Wars largely eschew transhumanism for budgetary reasons. Even today, having a TV character with pronounced non-human traits is expensive. When Star Wars and Star Trek were first made, it was even more so.
Often, novels follow that same blueprint because they want the aesthetic of classic space opera, but that’s all it is: an aesthetic. Advanced body modding won’t cause you any more plot problems than FTL drives or those magic fusion reactors from The Expanse.
However, there are two exceptions that you should probably avoid unless you’re really committed to them.
1. Super Intelligence
Some transhumanist and cyberpunk stories feature upgrades that make a person “smarter,” and these are extremely hard to write in large part because you have to figure out what being smarter even means. Is it math skills? Empathic deduction? Being able to answer spec fic Q&As???
In fiction, super smarts usually translates into being able to guess what’s going to happen next, and this power is the death of plots. It’s often hard to justify why your completely normal human characters don’t guess the next plot twist, and making them super intelligent compounds that problem.
Super intelligence is also an ability that feels arbitrary because, most of the time, it works until the writer suddenly decides it doesn’t. The genius character can predict every move the enemy makes, except this last one because they need to lose now. Bottom line, it’s hard to write a super intelligent character when you’re a normally intelligent human.
2. Digital Immortality
If your setting’s technology is advanced enough to copy and paste consciousness like any other data, that will almost certainly cause problems. Eclipse Phase is the most prominent example of this trope, which is ironic because it’s especially a problem for TTRPG campaigns.
If your story features any amount of action, you’d be amazed how much of the tension is derived from the possibility that someone might die. If anyone can just download their consciousness into a new body, that tension is gone. As long as characters can afford a new clone, they don’t have to be careful or avoid dangerous situations.
If your plot is about preventing pirates from taking over a ship, the characters might rightly decide that it’s less work to just blow the ship up with everyone on board. It’s almost impossible to plausibly get rid of a rich villain since they could have their consciousness backed up almost anywhere.
This is especially bad for RPGs because they need to generate fresh content every week, and they can’t specifically arrange character motivations to go along with it like an author can. But it’s a problem for authors too, and it requires you to completely change how you approach action and violence.
Of course, there are ways to make it work. But it’s a big investment, so unless that’s really what you want, it’s a good tech to stay away from.
Hope that answers your question, and good luck with your story!
‘Some transhumanist and cyberpunk stories feature upgrades that make a person “smarter,” and these are extremely hard to write in large part because you have to figure out what being smarter even means.’
*coughcough*DuneMentats*coughcough*
Yep, that’s a prominent example! It’s really hard to tell what mentats even do.
Only having read the first novel at all recently, my overriding impression is that Mentats are really good at things that computers are good at, so they can do calculations much faster and maybe sort data more clearly, but they don’t think any more profoundly. So, they will come up with the same solution anyone else would, but much faster.
I like to think that having super intelligence is like being a DM. Your super-advanced smarts allow you to think ten moves ahead, plan the whole game while the other side is working on their first move, understand the big picture in ways that no one else can, know in advance how certain interactions must go, and then be completely blindsided by some glorious chaotic jack move from a bunch of short-sighted clowns who just decide on the spur of the moment to shoot a god in the face or something.
“Only having read the first novel at all recently, my overriding impression is that Mentats are really good at things that computers are good at….”
That’s more or less what they started out as–a biological replacement for computers. They were in a political system based on vassalage, so mostly in the books they do sociological calculations. It gets weirder in later books, but I think that’s fair; just look at the difference between a PhD student and a freshman when it comes to a subject and you’ll see that intellectual developments can seem weird to folks who aren’t as developed in that area. Things get more weird as the society gets more and more distant from our own, something we’d expect from a series that crosses 3,500 years and several major social upheavals.
In practice, Herbert used mentats for exposition. They’re a convenient excuse to discuss issues that everyone else would assume (the same way we make assumptions about our own culture), and as convenient a way as any to deal with the fact that the reader isn’t familiar with the culture. Tolkien advocated using a generic audience stand-in (what the hobbits started out as); some writers use differences in cultures to explain weird aspects of the setting (Star Trek); some just ignore it (the original Star Wars trilogy didn’t care whether you knew what a power converter or a Corellian star cruiser was). The audience usually needs some way to figure out what’s going on, and mentats are as good a method as any.
I think it could be interesting to explore how certain technologies could enhance a person’s functioning without it being as simplistic as “she had a 110 IQ, now she has a 150 IQ” which ignores all kinds of realities about what intelligence even is and how it works.
For example, say there’s a student generations ago assigned to write a ten page essay on ancient Babylonian gods and myths. They have access to a library, but the set of encyclopedias is mediocre.
However, they have some heavy books on ancient Babylon which you’d have to dig through to find references to gods and legends.
The main point of those books is military and economic history, however, so they really have to dig, and a lot of the lore simply wouldn’t be there: the writers would be far more likely to mention the temple of Ishtar with a brief explanation of who she was, or a king’s claim to derive his authority from Marduk, than more obscure figures and stories.
This person’s grandchild has a similar assignment someday, but their school has an excellent, up-to-date set of reference books, including mythology volumes and more general guides to Babylonian culture that cover the lore and religious practices overtly.
That person’s child has Google. See the difference in how hard it would be to write the paper and the likely quality of the information (if they know how to stay away from poor quality material) in it?
Then imagine this capacity to search for relevant information quickly was inside you through technology, not on paper or screen.
Also imagine an internal calculator, so you could do all or most of the math you needed in your head. Same with navigation assistance. No one who had the enhancement would ever get lost or make a math error.
The problem would be explaining why only some people have it, which in most cyberpunk settings is easy: inequality is usually rampant in those worlds.
There would be different levels too. Maybe society likes the idea that no one is bad at math or lost on the streets. Maybe technological advancement has made it pretty cheap, so it’s no more of an elite product than a basic cell phone, or a cheap VCR in the 90s (which in the 80s might not have been cheap at all).
But there’d be *something* along those lines a person would have to be wealthy or connected or a good thief to get.
One limitation that can work: a consciousness can only be downloaded into a specific brain. Specifically, into one of the clones of that character currently growing in the replacement tank.
The pirate can be blown to bits.
And now he only has five bodies waiting for him at the “body clinic”.
Oh, sure, he can have a new clone placed in the now available growth tube…but it’ll take 18 years for that one to come online.
This is certainly better than the alternative, but it also feels extremely arbitrary. If the body doesn’t need to develop a brain, why does it take a human lifetime to grow one? Brain development is the limiting factor on human maturation when contrasted with other primates.
I think a better explanation is that human intelligence simply requires a human brain in order to operate, and it can’t be backed up in any meaningful sense due to its complexity. Just because you can interface with the brain doesn’t mean you can back up human memories with any reliability. Human memory and neural connections are simply too fragile for this to work.
Oh, yes. It depends on what kind of story you want to write.
Want death to be final and ultimate and catastrophic and something the reader fears for the character?
Just say that no scientist has yet figured out how to download a soul. The clone might be decanted and lie in a hospital bed recalling memories, but it can’t feed the ducks in the park because it’s missing that final component.
Basically, the clone tanks are used to grow organs in this universe. “Persona transplant” experiments just keep failing.
Want a black comedy where, no matter how horrifically the heroine dies trying to escape the dome-polis, she keeps getting killed and “recycled”?
And since her last backup at the clone bank dates from when she still wanted to escape and hadn’t yet learned that escaping was a bad idea, she just keeps trying?
Then go with a finite number of clones.
Want a supervillain who can be killed off and return looking like anyone?
Have a supercomputer with a bio-rewrite beam mounted on a tower somewhere in the city.
“Next life, I might be; a bag lady and the life after that; a surfer dude…you’ll never know until it’s too late.”
Artificial superintelligence (ASI) also has another massive problem: it seems far more likely to arrive than most of the other future technologies we so often assume in SF worlds, especially if you’re assuming some sort of brain-interface tech that requires AI. Most AI researchers predict it will probably be developed at some point within the next century, with greater than 50% odds of it arriving by 2050. Colonizing Mars or building O’Neill cylinders in numbers sufficient for action in space will almost certainly happen after that point. If you’re interested in the real-world problems, Nick Bostrom and Stuart Russell are probably the best overall sources around.
The only case of an ASI I have ever seen that more or less worked was Person of Interest, and even then it was less than perfect. What it did right was its control of information: the AI gave the heroes only a single piece of information and left them to figure out the details. They would learn the identity of someone who was the linchpin of a violent crime, but they wouldn’t know whether that person was the victim or the perpetrator. I’m really not sure how you could adapt this sort of idea outside the procedural genre the show used, where the system was originally limited this way due to national security concerns. I really don’t see how you could use this limit effectively in a setting more like The Expanse.
I mean… the common idea of ASI is that it means “tech singularity.” But honestly that’s kind of a really reductive idea that doesn’t take into account computing plateaus, material limitations, etc. etc.
Thinking on these two specific issues, I’ll argue that no. 1 is not such a huge problem in an RPG context, since it would just mean accessing and processing information better, which translates into bonuses to actions and possibly to roleplay.
Issue 2, though, can be tackled in a number of ways. The first, which I prefer, is to roll with it, which would require some other setting elements to be present. Maybe there are “quantum scanners” that can detect clones and backups, or weaponry that can disrupt backups. The second is to include some kind of limitation.
In terms of the general idea, I think one should look to the idea of “quality of life/society improvements” and “powers” depending on the thing in question.
Digital Immortality is certainly hard to work around.
One of the ways I have found to deal with it is to use opportunity costs. Give the characters a mission to complete: even though they may persist after death, dying still means the mission fails, which keeps some stakes in place.
If the characters are defending an artifact during transport, killing them removes obstacles, and a party wipe lets the artifact be stolen. Then you could run a new mission to recover the artifact before the antagonists use it.
By ensuring conflicts have stakes other than death, it is plausible to maintain stakes with digital immortality.