< The Latest 2025-01-21T18:01:19+0000

Why the author of AI novel ‘We Lived on the Horizon’ says to unplug Siri and Alexa

Erika Swyler, speculative fiction author of ‘Light from Other Stars’ and ‘The Book of Speculation,’ explores artificial intelligence in her latest book.

The Pasadena Star-News | Tue 01/21 10:00am PST | Diya Chacko

People are concerned about our reliance on artificial intelligence, whether it’s teachers reading student essays written by ChatGPT or Hollywood unions going on strike over AI concerns. And with the ubiquity of smart devices in homes – lights, thermostats, fridges and washing machines – it’s only a matter of time until generative AI begins adapting those devices to our individual needs.

From there, it’s not difficult to envision a society entirely reliant on a networked AI system – and the very human problems such a system might perpetuate. That’s one future that Erika Swyler, speculative fiction author of “Light from Other Stars” and “The Book of Speculation,” explores in her latest book, “We Lived on the Horizon,” out from Atria Books.

The city of Bulwark, an oasis in a dystopian wasteland, is run by an AI system called Parallax. Bulwark was founded on the sacrifices of a small group of people who carved the bones of the city out of nothing, often losing their lives in the process. Years later, their descendants have ascended into an upper class known as the “Sainted.” Sacrifice has become the currency on which Bulwark operates, and the Sainted are wealthy from the labor-hours of their ancestors.

One of these elite, Saint Enita Malovis, has spent a long career developing bioprosthetics, sometimes for those who don’t have the labor hours for hospital treatment. She’s also working on creating a physical body for Nix, her house AI, who possesses all her medical knowledge. Enita is not as isolated from the rest of Bulwark as others in her class, but it’s still a shock when a Saint is murdered and information surrounding his death is somehow deleted from Parallax.

Then a new patient is dragged into Enita’s surgery: a singer named Neren who lost her leg in a building collapse. But in treating Neren, Enita and Nix uncover evidence of cracks in Parallax and a rebellion brewing in Bulwark.

In “We Lived on the Horizon,” Swyler investigates a potential future for humans and AI, including the complexities of class, altruism and bodily autonomy. This interview has been edited for length and clarity.

Q. What sparked the idea for this story, and how did it take shape? 

It takes me a long time to actually think about a book before I even start putting it on paper. The initial idea for this book came from a story I was listening to on NPR – oh gosh, way back in probably 2014 – about extreme altruists. People donating kidneys to total strangers, not even by request or through chain donation, just something they want to do for someone else’s good. So I started thinking a lot about altruism and societies and how we rely on altruists to really keep societies going. Then I wanted to think about that in relation to the future and to technology and the way governments tend to exploit altruists – and this book kind of came out of that.

I’m also a writer who writes about a million and a half drafts, and I throw out about 90 percent of each draft. This is why it takes me a very long time to write. I think what took me the most drafts to figure out was how I wanted to write the voice of Nix, this AI house system who has found themselves in a body. I wanted Nix’s voice to be relatable, but also distinctly not human, involving thought processes that aren’t the way humans think. That sort of plural consciousness took a while to get into in a way that I felt comfortable with.

Q. What research did you do? Especially around AI?

For speculative fiction, you always have to look at what’s around right now and extrapolate on where that could go. So the technology in Bulwark comes from what we’re living with now, Siri and Alexa and smart houses. I mean, we’re very close to a kind of omniscient home, so I kind of built out from there. 

As far as texts, it was impossible to avoid Ursula Le Guin and Octavia Butler. For the sense of community, I took inspiration from “The Dispossessed” and “The Left Hand of Darkness,” by Le Guin. For writing about revolution, it was really Octavia Butler and the powerful “Parable of the Sower.” For more recent texts, I picked up Annalee Newitz’s book “Autonomous,” to help me look at human and tech interactions, and found that really fascinating. Alex Mar wrote this absolutely amazing deep dive on tech and people interacting with robots a few years ago. It was in Wired, and I was reading that while I was doing some early drafting. I just thought, “Yes, this is the type of thing that I want to see.” 

So it’s a mix; I kind of pull from all over the map. I try not to read anything too contemporary while I’m writing because I like to write against foundational texts that people have already built off of. Like, how do I write against Asimov and the laws of robotics? I like to figure out what I want to do differently.

Q. The book dives into the topic of bodies, and what it means to have a body. Can you talk a little about that?

Something that’s always interested me is this idea that a robot or an AI can be put into a body and there’s no disconnection. Whereas, if you’re a human, you know that having a body is, in a lot of ways, a trauma. Every bad thing you experience in your life, be it emotional or physical, you can feel in the body and it’s just such a shock. I never saw that accurately depicted in a way that felt real to me in science fiction films or books, so I wanted to lean into that a lot. I also wanted to think about – and this is a really abstract thing – different things being squished into one body. I was thinking a lot about the US, weirdly, and how we’re all these different, disparate identities, all trying to squish into one thing. 

As far as Enita’s bioprosthetics, I’m fortunate that my spouse is in medicine, so I’ve been around medicine and doctors for decades now. The way they talk about bodies and disease and trying to cure things is really wonderful and incredibly hopeful. That was something I wanted to lean into when I was thinking about the way we inhabit our bodies, the autonomy we have with them, and who gets that autonomy. 

Q. Speaking of Nix the AI, was their character inspired by someone or something?

When I was writing Nix, I was writing against the science fiction that I grew up on and loved – a lot of Asimov, a lot of Bradbury. Now, though, they can be really difficult to read if you’re not a cis white man. They’re very inspirational, but gosh, there is nowhere I fit in at all. There wasn’t room for my voice. So I wanted to write against that.

I especially wanted to write against the idea of an insane AI, an insane computer, because I find that a lot of the language of mental health gets applied to things that don’t have mental health, but are just obeying human command. It interested me to see what technology might behave like or think like if it was made in a human-forward fashion – tech from an altruistic perspective – and where that might lead.

Q. Which was your favorite character to write?

I love writing little tertiary characters. One who doesn’t show up all that much (because it would be too delightful for me) was Kitchen Node, the part of Nix’s system that runs Enita’s kitchen. In my mind, Kitchen Node is a little bit of a jerk. All they do is think up irrelevant old recipes and tell people that they’re wrong.

It was also really fun for me to write Tomas, Neren’s friend. He’s a fiery young person who is all idealism and kind of falling apart all over the place. I really enjoyed inhabiting him for a little bit, because that’s pretty far from where I am right now in life.

Q. The idea that AI is not good or evil, but a reflection of humans, was a huge theme in “Horizon.” Can you talk about that?

I was thinking a lot about what current AI already does. All of the data that it takes, everything that it learns, is coming from human understanding and misunderstanding. And it’s only running on what it’s programmed for. When you use ChatGPT, or another program like it, it’s already running with racial biases, with gender and sex bias, with financial biases. And when you scale it, it automatically exacerbates everything that’s wrong. Right now, we’re leaning into that because it’s kind of being forced on us, and we’re just, you know, accepting it. So that felt like a natural thing to play around with. 

Every tech that’s being made is reliant on the ideals of those who make it at a certain point in time. Given our current environment, that reliance is only going to exacerbate division. I wanted to think about what could happen when you have a program that’s running purely on data about what does the most good for the most people – and what happens when such a program struggles with how to do that. 

Q. In this story, sacrifice and labor are intertwined. It reminded me of prosperity theology, or the idea that faith will increase your wealth. What was your thinking around that?

There’s this whole idea that if you are wealthy, you’ve gotten that way because your family did X, Y and Z, so they very much are worthy of that wealth. And if other people aren’t as wealthy, you just haven’t tried hard enough or given enough or been faithful enough. I wanted to take apart the idea that extreme wealth is sort of anything to aspire to, because ultimately, I think it doesn’t lead any of these characters to a real sense of happiness in any way. There’s no real satisfaction there. The things that they do take satisfaction from turn out to be labor.

If you’re at all online, it’s sort of impossible to escape this culture of talking about how everything is the fault of money and capitalism. Well, what do we do if we don’t have money? I wanted to look at how this idea of escaping capitalism and escaping money could potentially also fall apart. I think it’s really important, when we talk about massive societal change and trying to start over, to understand what that looks like, including the idea of valuing labor instead of currency. Because it’s just as easily exploitable. 

The truth is, it takes a whole bunch of things to build a society and keep it going, and while we can settle on things that are good for a while, we do have to keep changing. I hope that what ultimately comes across in the book is that change is not just inevitable, but essential.

Q. Speaking of which, what else would you like readers to take away after reading this book?

I actually want readers to feel hopeful about the control that they can have over technology in their lives. I really want people to think about what people-forward technology looks like and how to get involved with things that way. Selfishly, I would love people to turn off their Siris and Alexas and all the smart home devices. Nobody needs that information from you. Nobody, not even you.
