Recently my colleague Charlie Warzel, who covers technology, introduced me to the most sophisticated voice-cloning software available. It had already been used to clone President Joe Biden’s voice to create a fake robocall discouraging people from voting in the New Hampshire primary. I signed up, fed it a few hours of me speaking on various podcasts, and waited for the Hanna Rosin clone to be born. The way it works is you type a sentence into a box. For example, Please give me your Social Security number, or Jojo Siwa has such great style!, and then your manufactured voice, created from samples of your actual voice, says the sentence back to you. You can make yourself say anything, and shift the intensity of the intonation until it sounds uncannily like you.
Warzel visited the small company that made the software, and what he found was a familiar Silicon Valley story. The people at this company are dreamers, inspired by the Babel fish, a fictional translation device from The Hitchhiker’s Guide to the Galaxy. They imagine a world where people can speak to one another across languages and still sound like themselves. They also may not be able to put the genie back in the bottle if (or when) the technology leads to world-altering chaos, particularly in this coming year, when more than half of the world’s population will undergo an election.
In this episode of Radio Atlantic, Warzel and I discuss how this small company perfected the cloned voice, and what good and bad actors might do with it. Warzel and I spoke at a live show in Seattle, which allowed us to play a few tricks on the audience.
Listen to the conversation here:
The following is a transcript of the episode:
Hanna Rosin: So a few weeks ago, my colleague, staff writer Charlie Warzel, introduced me to something that’s either amazing or sinister—probably both.
Charlie’s been on the show before. He writes about technology. And most recently, he wrote about AI voice software. And I have to say: It’s uncannily good. I signed up for it—uploaded my voice—and man, does it sound like me.
So, of course, what immediately occurred to me was all the different flavors of chaos this could cause in our future.
I’m Hanna Rosin. This is Radio Atlantic. And this past weekend, I was in Seattle, Washington, for the Cascade PBS Ideas Festival. It’s a gathering of journalists and creators, and we discussed topics from homelessness, to the Supreme Court, to the obsession with true crime.
Charlie and I talked about this new voice software. And we tried to see if the AI voices would fool the audience.
For this week’s episode, we bring you a live taping with me and Charlie. Here’s our conversation.
[Applause]
Rosin: So today we’re going to talk about AI. We’re all aware that there’s this thing barreling toward us called AI that’s going to lead to huge changes in our world. You’ve probably heard something, seen something about deepfakes. And then the next big phrase I want to put in the room is election interference.
Today, we’re going to connect the dots between these three big ideas and bring them a little closer to us, because there are two important truths you need to know about this coming year. One is that it is extremely easy—by which I mean ten-dollars-a-month easy—to clone your own voice, and presumably anybody’s voice, well enough to fool your mother. Now, why do I know this? Because I cloned my voice, and I fooled my mother. And I also fooled my partner, and I fooled my son. You can clone your voice so well now that it really, really, really sounds a lot like you or the other person. And the second fact you need to know about this year is that about half the world’s population is about to undergo an election.
So those two facts together can lead to some chaos. And that’s something Charlie’s been following for a while. Now, we’ve already had our first taste of AI-voice election chaos. That came in the Democratic primary. Charlie, tell us what happened there.
Charlie Warzel: A bunch of New Hampshire voters—I believe it was about 5,000 people—got a phone call, and it would say “robocall” when you pick it up, which is standard if you live in a state holding a primary. And the voice on the other end of the line was this sort of grainy-but-real-sounding voice of Joe Biden urging people not to go out and vote in the primary that was coming up on Tuesday.
Rosin: Before we keep talking about it, let’s listen to the robocall. Okay? We’re going to play it.
Joe Biden (AI): Republicans have been trying to push nonpartisan and Democratic voters to participate in their primary. What a bunch of malarkey. We know the value of voting Democratic when our votes count. It’s important that you save your vote for the November election. We’ll need your help in electing Democrats up and down the ticket. Voting this Tuesday only enables the Republicans in their quest to elect Donald Trump again. Your vote makes a difference in November, not this Tuesday.
Rosin: I’m feeling like some of you are dubious, like that doesn’t sound like Joe Biden. Clap if you think that doesn’t sound like Joe Biden.
[Small amount of clapping]
Rosin: Well, okay. Somewhere in there. So when you heard that call, did you think, Uh-oh. Here it comes? Like, what was the lesson you took from that call? Or did you think, Oh, this got solved in a second, and so we don’t have to worry about it?
Warzel: When I saw this, I was actually reporting out a feature for The Atlantic about the company ElevenLabs, whose technology was used to make that phone call. So it was very resonant for me.
You know, when I started writing—I’ve been writing about deepfakes and things like that for quite a while (I mean, in internet time), since 2017. But there’s always been this sense of, you know, What’s the actual level of concern I should have here? Like, What’s theoretical? With technology and especially with misinformation stuff, we tend to, you know, talk and freak out about the theoretical so much that sometimes we’re not really talking about it and thinking it through, grounding it in plausibility.
So with this, I was actually trying to get a sense of: Is this something that would actually have any real sway in the primary? Like, did people believe it? Right? It’s sort of what you just asked the audience, which is: Is this plausible? And I think when you’re sitting here, listening to this with hindsight and, you know, trying to judge, that’s one thing.
Are you really gonna question, like, at this moment in time, if you’re getting that, especially if you aren’t paying close attention to technology—are you really gonna be thinking about that? This software is still working out some of the kinks, but I think the believability has crossed a threshold that’s alarming.
Rosin: So just to give these guys a sense, what can it do now? Like, we heard a robocall. Could it give a State of the Union speech? Could it talk to your wife? What are the things it can do now that it’s made this leap—that it couldn’t do a few months ago—convincingly?
Warzel: Well, the convincing part is the biggest part of it, but the other part of these models is the ability to ingest more characters and throw it out there. So this company, ElevenLabs, has a tier you can pay for where you can—if you’re an author, you can throw your entire novel in there, and it can do it in a matter of minutes, essentially, and then you can go through and tweak it. It could definitely do a whole State of the Union. Essentially, it’s given anyone who’s got 20 bucks a month the ability to take anything they want to do content-wise and have it come out in their voice.
So a lot of people I know who are independent journalists or authors or people like that are doing all of their blog posts, their email newsletters as podcasts—but also as YouTube videos, because they hook this technology, the voice AI, into one of the video or image generators, so it generates an image on YouTube every few paragraphs and keeps people hooked in.
So it’s this idea of: I’m not a writer, right? I’m a content human.
Rosin: I’m a multi-platform human. Okay. That sounds—you fill in the adjective.
Warzel: Yeah, it’s intense.
Rosin: Okay, so Charlie went to visit the company that has brought us here. And it’s really interesting to look at them, because they didn’t set out to clone Joe Biden’s voice. They didn’t set out, obviously—nobody sets out to run fake robocalls. So getting behind that fortress and learning, like, Who are these people? What do they want? was an interesting journey.
So it’s called ElevenLabs—and, by the way, The Atlantic, I’ll say, uses ElevenLabs to read out some articles in our magazine, so just so you know. A disclaimer.
I was really surprised to learn that it was a small company. Like, I’d expect that it was Google who crossed this threshold, but not this small company in London. How did that happen?
Warzel: So one of the interesting things I learned when I was there—I was interested in them because they were small and because they’d produced this tech that’s, I think, better than everyone else’s.
There are a few companies: Meta has one that they haven’t released to the public, and OpenAI also has one that they’ve released to certain select users—partly because they aren’t quite sure how to control it, necessarily, from being abused. But that aside, ElevenLabs is quite good. They’re quite small.
What I learned when I was there talking to them is that they talked about their engineering team. Their engineering team is seven people.
Rosin: Seven?
Warzel: Yeah, so it’s, like, former—this is the engineering research team. It’s this small little team, and they describe them almost as, like, these brains in a tank that will just—they’ll say, Hey, you know, what we really want to do is create a dubbing part of our technology, where you can feed it video of a movie in, say, Chinese, and it’ll just sort of, almost in real time, running it through the technology, dub it out in English or, you know, you name the language.
Rosin: Is that because dubbing is historically tragic?
Warzel: It’s pretty bad. It’s pretty flat in a lot of places. Obviously, if you live in a couple of the big markets, you can get some good voice acting in the dubbing. But in Poland, where these guys are from, it’s all dubbed in a totally flat—they’re called lektors. That’s the name for it. But, like, when The Real Housewives was dubbed in Poland, it was one male voice that just spoke like this for all the real housewives.
Rosin: Oh, my God. That’s amazing.
Warzel: So that’s a good example of, like, this isn’t good. And so people, you know, watching U.S. cinema or TV in Poland is, like, kind of a grinding, horrible experience. So they wanted to change things like that.
Rosin: For some reason, I’m stuck on this, and I’m imagining RuPaul being dubbed in a totally flat, accentless, like, sashay away. You know?
Warzel: Totally. So this is actually one of the things they originally were setting out to solve, this company. And they sort of, not lucked into, but found the rest of the voice-cloning stuff in that space. They talk about this research team as these brains in the tank. And they’ll just be like, Well, now the model does this. Now the model laughs like a human being. Like, Last week it didn’t.
And again, when you try to talk to them about what they did, it’s not like pushing a button, right? Then they’re like, It’s too complicated to really describe. But they’ll just say that it’s this small group of people who are, essentially—the reason the technology is good, or does things that other people’s can’t do, is because they had an idea, an academic idea, that they put into the model, had the numbers crunch, and this came out.
And that, to me, was kind of staggering, because what it showed me was that with artificial intelligence—unlike, you know, something like social networking, where you just got to get a massive mass of people connected, right? It’s network effects. But with this stuff, it really is like Quantum Leap–style computer science. And, you know, obviously, money is good. Obviously, compute is good. But a very small group of people can toss something out into the world that’s incredibly powerful.
And I think that is a real revelation that I had from that.
[Music]
Rosin: We’re going to take a short break. And when we come back, Charlie explains what the founders of ElevenLabs hope their technology will accomplish.
[Music]
Rosin: So these guys, like a lot of founders, they didn’t set out to disrupt the election. They probably have a dream. Besides just better dubbing, what’s their dream? When they’re sitting around and you get to enter their brain space, what’s the magical future of many languages that they envision?
Warzel: The full dream is, basically, breaking down the walls of translation completely. Right? So there’s this famous science-fiction book, The Hitchhiker’s Guide to the Galaxy, where there’s this thing called the Babel fish that can translate any language seamlessly in real time, so anyone can understand everyone.
That’s what they ultimately want to make. They want to have this—you know, dubbing has a little bit of latency now, but it’s getting faster. That plus all the different, you know, voices. And what they essentially want to do is create a tool at the end, down the line, where you can put an AirPod in your ear, and you can go anywhere, and everyone else has an AirPod in their ear, and you’re talking, and so you can hear everything instantly in whatever language. That’s the end goal.
Rosin: So the beautiful dream, if you just take the purest version of it, is all peoples of the world will be able to communicate with one another.
Warzel: Yeah. When I started talking to them—because, living in America, I have a different experience than, you know. Most of them are European, or a lot of them—the two founders are European. You know, they said, You grow up, and you have to learn English in school, right?
There are only a few places where you don’t grow up with that, and, they say, you also gotta learn English if you want to go to university anywhere, do whatever, and participate in the world. And they said, If we do this, then you don’t have to do that anymore.
Rosin: Ooh, there goes our hegemony.
Warzel: Imagine the time you’d save, not having to learn this other language.
Rosin: So they’re thinking about Babel and this beautiful dream, and we’re thinking, like, Oh, my god, who’s gonna scam my grandmother, and who’s gonna mess up my election?
Do they think about that? Did you talk to them about that? Like, how aware are they of the potential chaos coming down?
Warzel: They’re very aware. I mean, I’ve dealt with a lot of tech executives in my career who are sort of—they’re not willing to really entertain the question. Or if they do, it’s kind of glib, or there’s a little bit of resentment, you can tell. They were very—and I think because of their age (the CEO is 29)—very earnest about it. They care a lot. They clearly look at all this and see—they’re not blinded by the opportunity, but the opportunity looms so large that these negative externalities are just problems they’ll solve, or that they can solve.
And so we had this conversation, where I called it “the bad things,” right? And I just kept going: What are you going to do about jobs this takes away? What are you going to do about all this misinformation stuff? What are you going to do about scams? And they have these ideas, like digitally watermarking all voices and working with all sorts of different companies to build a watermarking coalition, so when you voice-record something on your phone, it has its own metadata that says, like, This came from Charlie’s phone at this time.
Rosin: Uh-huh.
Warzel: You know, like, This is real. Or when you publish the ElevenLabs thing, it says—and people can quickly decode it, right? So there are all these ideas.
But I can tell you, it was like smashing my head against a brick wall for an hour and a half with this really earnest, smart person who’s like, Yeah. No, no. It’s gonna take a while before we, you know, societally all get used to all these different tools, not just ElevenLabs.
And I was like, And in the meantime? And they would never say it this way, but the vibe is sort of like, Well, you gotta break a lot of eggs to get the, you know, universal-translation omelet situation. But, you know, some of those eggs might be like the 2024 election. It’s a big egg.
Rosin: Right, right, right. So it’s the familiar story, but more earnest and more self-aware.
Do you guys want to do another test? Okay. You’ve been listening to me talk for a while. Charlie and I both fed our voices into the system. We’re gonna play for you me saying the same thing twice. One of them is me, recorded. I just recorded it—me, the human being, in the flesh right here. And one of them is my AI avatar saying this thing. There’s only two. I’m saying the same thing. So we’re gonna vote at the end on which one is fake-AI Hanna. Okay, let’s play the two Hannas.
Rosin (Real): Charlie, how far do you think artificial intelligence is from being able to spit out a million warrior robots programmed to destroy humanity?
Rosin (AI): Charlie, how far do you think artificial intelligence is from being able to spit out a million warrior robots programmed to destroy humanity?
Rosin: Okay, who thinks that number one is fake Hanna?
[Audience claps]
Rosin: Who thinks that number two is fake Hanna?
[Audience claps]
Warzel: It’s pretty even.
Rosin: It’s pretty even. I’d say two is more robust, and two is correct—that’s the fake one.
Warzel: I’m zero for two.
Rosin: But man, it’s close. Like, Charlie hung out at this place, and he’s gotten both of them wrong so far.
Warzel: We work together!
Rosin: We work together. This is really, really close.
Warzel: You know, the one, like, bulwark right now against this stuff is that I do think people are, generally, pretty dubious now of most things. Like, I do think there’s just a general suspicion of stuff that happens online. And I also think that one thing we’ve seen from some of these is—there have been a few ransom calls, right? Like you get a—it’s a scam, but it’s your mom’s voice, right? Or something like that.
Those things sort of come down the line pretty quickly. Like, you can pretty quickly realize that your mom isn’t being kidnapped. You can pretty quickly, as administrators, resolve that. Basically, I don’t know how effective these things are yet, because of the human element. Right? It seems like we have a little bit more of a defense now than we did, you know, let’s say, in 2016.
And I do think that time is our greatest asset here. With all of this, the problem is, you know, it only takes one, right? It only takes some person, you know, in late October or early November, who puts out something just good enough that it’s the last thing someone sees before they go to the polls, right?
And it’s too hard to debunk, or that person doesn’t see the debunking, right? And so, those are the things that make you nervous. But also, I don’t think yet that we’re dealing with a godlike ability to just totally destroy reality.
It’s sort of somewhere in the middle, which is still, you know, nerve-racking.
Rosin: So the danger scenario is a thin margin, very strategic use of this technology. Like, less-informed voters, a suppress-the-vote effort—somewhere where you could use it in small, strategic ways. That’s a realistic fear.
Warzel: Yeah, like, hyper-targeted in some way.
I mean, it’s funny. I’ve talked to a few AI experts and people in this space, and they’re so worried about it. It’s really hard to coax nightmare scenarios out of them. They’re like, No, I’ve got mine. And I’m absolutely not telling a journalist. Like, no way. I do not want this printed. I do not want anybody to know about it. But I do think—and this could be the fact that they’re too close to something, or it could be that they’re right, and they’re really close to it. But there’s so much fear from people who work with these tools. I’m not talking about the ElevenLabs people, necessarily.
Rosin: But AI people.
Warzel: But AI people. I mean, true believers in the sense of, you know, If it doesn’t happen this time around, well, wait ’til you see what it’s going to be in four years.
Rosin: I know. That really worries me, that the people inside are so worried about it. It’s like a they’ve-birthed-a-monster kind of vibe.
Warzel: It’s also good marketing. You can go back and forth on this, right? Like the whole idea of, you know, We’re building the Terminator. We’re building Skynet. It could end humanity. Like, there’s no better marketing than, We’re creating the potential apocalypse. Pay attention.
Rosin: Right. All right. I’m going to tell you my two fears, and you tell me how realistic they are. One is the absolute perfection of scams designed to target older people who are slightly losing their memories—scams that are already pretty good. Like, they’re already pretty good, and you already hear so many stories of people losing a lot of money. That’s one I’m worried about. Like, how easy it is to repeatedly call someone in the voice of a grandson, or in the voice of whatever. That one seems like a problem.
Warzel: Yeah, I think it will be, and I don’t think it needs to be relegated to people who are so old they’re losing their memories. It’s difficult to discern this stuff. And, I think, what I’ve learned from a lot of time reporting on the internet is that nobody is immune to a scam.
Rosin: Yes.
Warzel: There’s a scam waiting to match with you. And, you know, when you find your counterpart, it’s—
Rosin: It’s like true love.
Warzel: Exactly.
Rosin: Out there is the right scam for you. Okay, one more worry, and then we’re going to do our last test.
My real worry is that people will know that things are fake, but it won’t matter, because people are so attached to whatever narrative they have that it won’t matter to them if you prove something is real or fake.
Like, you can imagine that Trump would put out a thing that was fake, and everybody would sort of know it’s fake, but everyone would collude and decide that it’s real, and proceed based on that. Like, real and fake just—it’s not a line people worry about anymore, so it doesn’t matter.
Warzel: I fully think we live in that world right now. I mean, really.
I think a good example is a lot of the stuff, not only the stuff that you see coming out of the Middle East, in the way that—I mean, obviously there’s so much literal digital propaganda and misinformation coming from different places, but also just from the normal stuff that we see. And this is a little less AI-involved, but I think there are just a lot of people, especially younger people, who just don’t trust the establishment media to do the thing. And they’re like, Oh, I’m gonna watch this, and I don’t really care. And so I think the level of mistrust is so high at the moment that we’re already in that situation.
Rosin: Like, we’re of a generation, and we’re journalists, and so we sit and worry about what’s real and what’s fake, but that’s not actually the line that people are paying attention to out there.
Warzel: Yeah. I think the real thing is, like, getting to a point where you have built enough of a parasocial trust relationship with someone that they’re just gonna believe what you say, and then trying to be responsible about it, about delivering them information, which is crazy.
Rosin: Okay. One final fake-voice trick. This one’s on me since, Charlie, you were wrong both times. Now it’s my turn.
My producers wanted to give me the experience of knowing what it’s like to have your voice saying something you didn’t say. So they took my account, they had my voice say things, and I haven’t heard it, and I don’t know what it is. So we’re going to listen to that now. It will be a surprise for all of us, including me. So let’s listen to these fake voicemails created by my wonderful producers.
Rosin (AI): Hi! I’m calling to leave a message about after-school pickup for my kids. Just wanted to let their homeroom teacher know that Zeke in the white van is a dear family friend, and he’ll be picking them up today.
Rosin: (Laughs.) Okay.
Rosin (AI): Hi, Mom. I’m calling from jail, and I can’t talk long. I’ve only got one phone call. I really need you to send bail money as soon as you can. I need about $10,000. Cash App, Venmo, or Bitcoin all work.
Rosin: My mom doesn’t have $10,000.
Rosin (AI): Hey, I hope I have the right number. This is a voicemail for the folks running the Cascade PBS Ideas Festival. I’m running late at the moment and wondering if I’m going to make it. Honestly, I feel like I should just skip it. I can’t stand talking to that Charlie-whatever character. Why am I even here? Washington, D.C., is clearly the superior Washington anyway.
[Crowd boos]
Rosin: Oooh. Yeah, okay, okay. Now, I’d say I was talking too fast.
Warzel: So one thing I did with my voice is I had it say a whole bunch of far worse things, like, COVID came from a—whatever, you know, just to see what those things would be like. And they were sort of believable, whatever.
But also, what if you then took audio—so the one from jail, right? What if you took audio—your producers, our producers are great—and inserted a lot of noise that sounded like it was coming from a crowd, or like a slamming of a cell door or something like that in the background, faded it in nicely? That would be enough to ratchet it up, right?
And I think all these things can become extremely believable if you layer the right context on them.
Rosin: Right. You know what, Charlie? Here’s the last thing. You, as someone who’s been really close to this, fluctuate between, Okay, we don’t have to be that alarmed. It’s only got these small uses, and, But also, it’s got these uses, and they’re really scary.
Having been close to this and gone through this experience, is there a word you’d use to sum up how you feel now? Because, obviously, it’s uncertain. We don’t actually know—we don’t know how quickly this technology is going to move.
How should we feel about it?
Warzel: I think disorientation is the word, because—so a big reason I wanted to go talk to this company was not just because of what they were doing, but to be kind of closer, to get some proximity to the generative-AI revolution, whatever we’re gonna call it. Right? To see these people doing it. To feel like I could moor my boat to something and just feel like—
Rosin: You have control.
Warzel: Yeah, and I understand what we’re building toward, or that they understand what they’re building toward. And the answer is that you can walk up to these people and stare them in the face and have them answer questions and just sort of feel really at sea about a lot of this stuff, because there are excellent, transformative applications for this. But also, I see, you know, this voice technology with the other generative-AI technologies—basically, a good way to think about them is as plug-ins to one another, right? And people are going to use, you know, voice technology with ChatGPT with some of the video stuff, and it’s going to just make the internet—make media—weirder. Right?
Everything you see is going to be weirder. The provenance of it will be weirder. It’s not necessarily always going to be worse, right? But it could be. And it could maybe be better. But everyone seems like they’re rushing toward this destination, and it’s unknown where we’re going.
And I just feel that disorientation is sort of the most honest and truthful way to look at this. And I think when you’re disoriented, it’s best to be really wary of your surroundings, to pay very close attention. And that’s what it feels like right now.
Rosin: We can handle the truth. Thank you for giving us the truth. And thank you, all, for coming today and for listening to this talk, and be prepared to be disoriented.
[Music]
Rosin (AI): Thanks for listening. And thanks to the production staff of the Cascade PBS Ideas Festival. This is the AI version of Hanna Rosin speaking, as made by ElevenLabs.
This episode of Radio Atlantic was produced by Kevin Townsend. He’s typing these words into ElevenLabs right now and can make me say anything. “You may hate me, but it ain’t no lie. Baby, bye, bye, bye. Bye, bye.”
This episode was edited by Claudine Ebeid and engineered by Rob Smierciak. Claudine Ebeid is the executive producer of Atlantic audio, and Andrea Valdez is our managing editor. I’m not Hanna Rosin. Thanks for listening.