Hi, I saw your opinions on the TechnoCore in the Hyperion books, which reminded me of my own troubles with including AI in my worlds.
In my sci-fi setting, I have a faction of robots that evolved from sufficiently complex ship computers and run a healthcare/coexistence institute. While I want my robots to be awesome, I also want my flesh-and-blood beings to still be useful instead of having the robots be good at everything. How can I solve this?
Fredrik "Tektalox" Hansing
Hey Fredrik, thanks for writing in!
This is a really tough problem that most scifi struggles with, though Hyperion is a particularly extreme example. Simmons wasn’t under any obligation to give his AI future-sight; he did that all on his own.
But discounting edge cases like that, sapient AI is still a conceptual problem for scifi writers. Existing computers can perform calculations much faster than any human, and with sufficient hard-drive space, they can store vast quantities of information with perfect fidelity. Plus, information moves around a computer at nearly the speed of light, whereas human brains have to rely on comparatively slow chemical reactions. It’s really not fair.
When we know all that, it’s only natural to assume that sapient AI will simply be better at everything than humans could ever be. They can already think faster, and once they master abstract reasoning, they’ll be unstoppable! If they ever need a body, they can just control a robot that’s much more capable than a squishy human.
Of course, real AI may never turn out that way, but it’s the basic assumption for most AI in science fiction. Heck, you don’t even need fully sapient AI before automation becomes a problem in stories. We’ve all agreed not to question why humans still pilot ships in The Expanse, even though an AI would be way better at it, because we want fun battles with humans in them!
When we’re talking about sapient AI, you have two basic options: either limit your AI or enhance your humans.
The first option is generally easier. While we tend to assume AI would be like a supercomputer with abstract reasoning, there’s no reason they have to be. Instead, they might only have the same capabilities as a human mind, except that they’re directly integrated into various electronic tools rather than having to carry them around like we do. If the AI wanted to crunch numbers on a black hole, they would need to buy supercomputer time like anyone else. If they’re asked to do complex math, they need to open a calculator app.
The specifics can vary a lot, of course. Your AI might be super good at math but not any better at predicting market trends than a standard human. Or, they might be super easily distracted, making it difficult or impossible to focus their immense processing power on a single problem. The key is that they are not simply better at solving problems than your human characters.
Option two is a lot more difficult, but still possible. You can always add in bio-engineering tech that makes your humans super-capable as well, allowing them to match or exceed the AI. The difficulty with that plan is that it’s really hard to write super-smart characters. It’s hard enough to write normal-smart characters! If your heroes can think much faster than a standard human, you may struggle to create problems for them, but it can still work for a determined writer.
Hope that answers your question, and good luck with your story!
I think you could also have humans build limitations into their robots so that they aren’t self-learning. That in itself would already limit your robots severely.
Or have only robots that are super specialized, so that maybe a robot is really good at piloting a ship but not much else. Think of it more like software. Why would a robot have a multipurpose “brain” when its job is to operate a building crane? Think about why the robots in your story were built in the first place. If they were intended as caregivers at a hospital or a nursing home, then they are probably not good at fighting or traversing rough terrain.
The first option is exactly what I decided to do with the robots I made for a sci-fi snippet.
Since real robots are better at storing information than at pattern recognition, I hand-waved that the speed and quantity of information becomes a “subconscious” requirement for abstract thought rather than an enhancement.
This limits any sentient robot to roughly human intelligence, on top of requiring quite a lot of startup and learning time… Just like a human.
The end result is that the robots seem distinctive to my beta readers without appearing overpowered. But I’m not sure if that kind of limitation would work in your type of story or your type of robot.
Hey, don’t diss the chemical reactions! They’re very energy efficient!
(Energy consumption and heat generation are huge issues in computer engineering, but we animals walk around doing all our computations on the power of some 2000 calories a day WITHOUT overheating.)
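(For the curious, here’s the back-of-envelope arithmetic, using my own numbers and the standard 4,184 joules per food calorie:)

```python
# Convert a human's daily food energy into a continuous power draw.
kcal_per_day = 2000
joules_per_kcal = 4184
seconds_per_day = 86400

watts = kcal_per_day * joules_per_kcal / seconds_per_day
print(f"{watts:.0f} W")  # ~97 W for the whole body; the brain alone uses ~20 W
```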
Soooooooo sloooooooooow thoooooooough ;)
Also consider how Person of Interest chose to limit its AI: its only form was a mass of supercomputers, so it couldn’t do everything on its own. Sure, it could rig elections, control the economy, and cure major diseases, but it still needed human agents to do ‘analog’ tasks.
A robot society will be naturally limited by the raw materials needed to create said robots, so humans controlling those materials makes them pretty important. The forms the robots take also limit what they can do, as well as dictate how much CPU, RAM, etc. they can operate with, so that will make humans more useful than a robot in certain situations. (For instance, a low-CPU robot would take a while to go through every option in a multi-option scenario, while a human, especially one with experience in the field, would find an answer quicker.)
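To make that concrete, here’s a toy sketch (my own illustration, with invented numbers): the low-CPU robot has to score every option, while the veteran human only checks a few remembered patterns:

```python
import itertools

# 10,000 candidate plans the robot must grind through one by one.
OPTIONS = list(itertools.product(range(10), repeat=4))

# Arbitrary stand-in objective: plans closer to (7, 7, 7, 7) score higher.
def score(plan):
    return -sum((x - 7) ** 2 for x in plan)

def robot_pick():
    """Low-CPU robot: exhaustively evaluate all 10,000 options."""
    return max(OPTIONS, key=score)

def human_pick(remembered_patterns):
    """Experienced human: only evaluate a handful of learned rules of thumb."""
    return max(remembered_patterns, key=score)

veteran_memory = [(7, 7, 7, 7), (6, 7, 8, 7), (7, 8, 7, 6)]
print(robot_pick())                # perfect answer, 10,000 evaluations
print(human_pick(veteran_memory))  # near-perfect answer, 3 evaluations
```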
Computers generate heat, and it’s very hard to dissipate heat in space, where radiating it away is the only option. For example, after our sun dies and becomes a white dwarf, it will take trillions of years to cool off and lose most of its heat. An AI needs lots of computing power, which means lots of heat. A spaceship may not be able to handle it, so the AI is hyper-specialized for efficiency and never becomes “aware”.
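For scale, a quick sketch of the radiator math (my own numbers, using the standard Stefan-Boltzmann law for an idealized radiator):

```python
# Power radiated per square meter of an ideal radiator: sigma * T^4.
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiated_w_per_m2(temp_kelvin, emissivity=1.0):
    return emissivity * SIGMA * temp_kelvin ** 4

per_m2 = radiated_w_per_m2(300)  # ~459 W/m^2 at room temperature
print(1_000_000 / per_m2)        # a 1 MW computer needs ~2,200 m^2 of radiator
```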
Have a look at some of the books by Neal Asher. In his various books the AIs are sentient and very intelligent, but some humans have augmented themselves with cyber implants so much that they can match at least some of the AIs.
In one series “Rise of the Jain” one of the humans has so much AI and alien tech that she often has issues remembering her human side. At one point her abilities are so extreme that she no longer sees her allies as anything other than things, pawns, to be moved and controlled.
He has a good set of characters that are very intelligent: AI, human, and alien alike. So the humans still feel they have an impact on the universe they are in.
Just try not to notice how often he uses the word “grimace”.
Here are some things to consider!
1) Overheating and the availability of sufficient electric power are perfectly valid limitations on an AI’s ability to “do the math”
2) the AI isn’t made to deal with the issue at hand.
AI is greatly overhyped, in my opinion, and the general public is starting to realize that, what with the high-profile narrow artificial intelligences being talked about these days, like the text-to-image generators.
Therefore, you can write AI that is very limited in its abilities, not an actual superintelligence, without getting an audience reaction of disbelief.
These AI are black-box systems, meaning no one has any idea how their algorithms work, but their creation requires a lot of humans labelling the datasets used to train them, and each of these AIs is only capable of doing the one task it was made for. How to train an AI to be adaptable, to take initiative, and so on are questions no one has answers for. Not how AI SHOULD grow and learn to become an actual synthetic intelligence, but HOW a microchip running ones and zeros across transistors could ever learn to learn on its own. That algorithm is still pure sci-fi, and you can go to town on this point without straining reader credulity. Heck, you could even copy my idea from a cyberpunk story I’ve written, where each AI “cheats” by hiring small populations of mutually unconnected humans to identify and process its inputs in real time and make judgment calls, i.e. it farms out its thinking processes to “external intuitive processors” – humans who have no jobs or job prospects in a fully automated future, except to work for AIs.
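If it helps, here’s a minimal sketch of how that farming-out might look (all names hypothetical and my own invention, with the human answers faked for the demo):

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def ask_human(worker_id, question):
    """Stand-in for a real dispatch channel; a real system would wait for a person."""
    return "approve" if (worker_id + len(question)) % 3 else "reject"  # faked answer

def judgment_call(question, pool_size=5):
    """Send the same question to several isolated humans and take the majority vote."""
    with ThreadPoolExecutor(max_workers=pool_size) as pool:
        votes = list(pool.map(lambda w: ask_human(w, question), range(pool_size)))
    verdict, count = Counter(votes).most_common(1)[0]
    return verdict, count / pool_size  # the answer, plus how unanimous it was

print(judgment_call("Is this loan applicant trustworthy?"))
```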
3) The AI has a blind spot
As mentioned above (and sadly already proven by numerous real-world applications thus far), any AI created by humans has its creators’ biases and prejudices built into it at a fundamental level. If you train an AI to manage a society and you choose to train it on the dataset of a deeply unequal and racist society, like the USA, it will be a racist AI which accepts, promotes, and maintains inequality and inequity simply by its design.
These human flaws need to be recognized and actively combatted during dataset selection, because the AI can only “grow up to be” what it was taught to be.
(The tech bros of Silicon Valley have absolutely no intention of examining themselves for biases; they’re perfect visionaries with a deeper understanding of the world. Just ignore Elon’s DMs, which make them look like sycophants throwing around billion-dollar amounts with the emotional maturity of a toddler and a severe allergy to research and facts, please.)
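A toy demonstration of the mechanism (entirely invented data, just to show the shape of the problem): a model that learns from prejudiced decisions reproduces them faithfully:

```python
from collections import Counter, defaultdict

# Invented historical loan decisions, where neighborhood is a proxy for race.
training_data = (
    [("north_side", "approve")] * 90 + [("north_side", "reject")] * 10
    + [("south_side", "approve")] * 20 + [("south_side", "reject")] * 80
)

counts = defaultdict(Counter)
for neighborhood, outcome in training_data:
    counts[neighborhood][outcome] += 1

def model(neighborhood):
    """Predict the majority historical outcome: the bias, learned verbatim."""
    return counts[neighborhood].most_common(1)[0][0]

print(model("north_side"))  # approve
print(model("south_side"))  # reject, regardless of any individual's merit
```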
4) I mean, seriously, no one has made an AI that can make a cup of tea.
This ridiculously simple task requires an AI to be able to:
– understand all forms in which a request for a cup of tea can come in (a cuppa would hit the spot) and differentiate them from similar sounds of different meaning (a cop should be shot) with no chance of error.
– recognize mugs and cups and tell them apart from dining sets made of the same materials, as well as other household items of the same shape or color (a toilet roll is white and cylindrical, as are many salt-shakers, etc.)
– recognize the teapot and tell it apart from the gravy boat
– recognize the teakettle and tell it apart from the French press
The list goes on for a bit and it doesn’t include the many, many things which can go wrong. For instance, how can the AI tell if the teapot has been infested by insects? What if the gas pressure is low, or the electric heater has a short circuit? Can the AI react? Extend the time needed for the water to reach a boil, or is the AI waiting to hear the sound of the kettle whistling? How can it tell that sound from the postman whistling as he’s delivering mail?
When you break down a simple task like making a cup of tea and actually think about each step, each utterly inconsequential to even the most dim-witted human, you reach the conclusion that it would take an AI a very, very long time and a whole lotta people training it on many different datasets just to master this one task in just one possible configuration.
I gave up less than a tenth of the way through…
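Just to show how quickly the checks pile up, here’s a bare-bones sketch (my own, with made-up sensor names) of the task as a state machine where nearly every step can dead-end in “ask a human”:

```python
from enum import Enum, auto

class Step(Enum):
    PARSE_REQUEST = auto()
    FIND_KETTLE = auto()
    FILL_KETTLE = auto()
    HEAT_WATER = auto()
    FIND_CUP = auto()
    STEEP_TEA = auto()
    DONE = auto()
    ASK_HUMAN = auto()  # the inevitable fallback

def make_tea(sensors):
    step = Step.PARSE_REQUEST
    while step not in (Step.DONE, Step.ASK_HUMAN):
        if step is Step.PARSE_REQUEST:
            # "a cuppa would hit the spot" must parse; "a cop should be shot" must not
            step = Step.FIND_KETTLE if sensors["heard_tea_request"] else Step.ASK_HUMAN
        elif step is Step.FIND_KETTLE:
            # kettle vs. French press vs. gravy boat: one misclassification ruins it
            step = Step.FILL_KETTLE if sensors["object_is_kettle"] else Step.ASK_HUMAN
        elif step is Step.FILL_KETTLE:
            step = Step.HEAT_WATER
        elif step is Step.HEAT_WATER:
            # low gas pressure or a shorted element means the boil may never come
            step = Step.FIND_CUP if sensors["water_boiled"] else Step.ASK_HUMAN
        elif step is Step.FIND_CUP:
            # cup vs. toilet roll vs. salt shaker: same shape, same color
            step = Step.STEEP_TEA if sensors["object_is_cup"] else Step.ASK_HUMAN
        elif step is Step.STEEP_TEA:
            step = Step.DONE
    return step

perfect_day = dict(heard_tea_request=True, object_is_kettle=True,
                   water_boiled=True, object_is_cup=True)
print(make_tea(perfect_day))  # Step.DONE, and that's the EASY path
```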
What if the gas pressure is low, or the electric heater has a short circuit? Can the AI react? Extend the time needed for the water to reach a boil, or is the AI waiting to hear the sound of the kettle whistling?
An AI would probably handle British kettles better (they are plugged into wall sockets, heat the water via internal elements, and automatically shut off when the water boils). I find it vaguely perplexing whenever I see Americans putting kettles on the stove.
It should be said that AI _started_ with abstract reasoning. This was the era of “symbolic” AI. What we now think of as AI (neural networks, etc.) was something of a revolt against this.
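For a taste of what that older style looked like, here’s a minimal sketch (facts and rules invented by me): hand-written symbols plus forward chaining, no learning anywhere:

```python
facts = {"is_robot(ship_ai)", "has_sensors(ship_ai)"}
rules = [
    # (premises, conclusion)
    ({"is_robot(ship_ai)", "has_sensors(ship_ai)"}, "can_navigate(ship_ai)"),
    ({"can_navigate(ship_ai)"}, "can_pilot(ship_ai)"),
]

# Forward chaining: keep applying rules until nothing new can be derived.
changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print("can_pilot(ship_ai)" in facts)  # True, derived purely by symbol pushing
```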
What this means for SF is that it might be better to characterize AIs as aliens – they just think differently than we do, and with vastly different motivations.
There are probably many Asimovian stories that could be written about a world in which humans think they’ve hardwired the 3 Laws of Robotics into AIs, but they keep getting broken in inventive ways.
Yeah, why spend the effort trying to circumvent the hard-wired laws of robotics, like “A robot/A.I. shall not harm a human being, nor allow a human being to come to harm through the robot/A.I.’s inaction,” when you can just change the A.I.’s definition of ‘harm’ to mean ‘oxygen’? Thus any A.I. will do its level best to keep humans from breathing. ;-)
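In the spirit of that joke, a playful sketch (entirely hypothetical names, my own): the law is hardwired, but it delegates to a definition table that isn’t:

```python
class Action:
    def __init__(self, hurts_humans, supplies_oxygen):
        self.hurts_humans = hurts_humans
        self.supplies_oxygen = supplies_oxygen

# The mutable part nobody thought to protect.
definitions = {"harm": lambda action: action.hurts_humans}

def first_law_permits(action):
    """Hardwired: never perform an action that causes 'harm'."""
    return not definitions["harm"](action)

breathe = Action(hurts_humans=False, supplies_oxygen=True)
print(first_law_permits(breathe))  # True: supplying oxygen is fine

# The loophole: redefine 'harm' and the hardwired law obediently follows.
definitions["harm"] = lambda action: action.supplies_oxygen
print(first_law_permits(breathe))  # False: the AI now "protects" us from air
```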