Call to Action: A Robot Sent Me an Email

I have a confession. I’ve been emailing with an AI chatbot about my books. Well, sort of … a few months ago, an obviously AI-generated email landed in my inbox (these usually get diverted to Junk automatically). I shared it with my publisher, and we had a good chuckle; the email contained 11 emojis and phrases obviously pulled from my author website and reviews. The entire message read as though it were intended for a fourteen-year-old’s group chat, asking me if I wanted to “gain visibility that actually matches your genius-level talent” and “watch the review numbers explode like confetti in a hurricane.”
Yet some part of me remained intrigued. How long would it take for the bot to ask me for money? At what point, if any, would a real human start composing the messages?
For several weeks, I did nothing.
Then another message arrived, nudging me about the first one and asking if I was interested in this person[?] helping me promote my books. This time I replied, determined to get to the bottom of the algorithm, or at least the source of the scam, so I could warn my students and fellow authors.
Since then, we’ve had a few more exchanges, but I haven’t given it much thought. At some point, I’ll return to the ridiculous thread. (The latest email had 28 emojis, but it was signed with a real[?] person’s name and included a website … I did not click the link, and I won’t until I can get to a public computer.) But this entire experience taps into much larger questions about the uses, permissions, and impacts of AI.
Are folks even aware of the indelible imprint and trail that just one text query into ChatGPT can leave? Of the impact that, say, a five-minute, fifteen-question “conversation” plus a little image generation has on climate change? (Hint: Depending on many factors, the energy used can be roughly what it takes to carry a rider 400 feet on an e-bike, or to run a microwave for 8 seconds.) MIT took a deep dive into this complex question. It’s a long read, but we have a right and a responsibility to know what these tools actually cost our planet and our privacy, so please prioritize that link and share it widely.
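If those two comparisons sound oddly specific, a quick back-of-envelope calculation shows why they land in the same ballpark. The figures below are my own illustrative assumptions (a roughly 1,000-watt microwave; an e-bike drawing roughly 15 watt-hours per kilometer), not numbers pulled from the MIT piece:

```latex
% Back-of-envelope check, using assumed (not measured) figures.
% Microwave: assume a 1000 W oven running for 8 seconds.
E_{\text{microwave}} = 1000\,\mathrm{W} \times 8\,\mathrm{s}
                     = 8000\,\mathrm{J} \approx 2.2\,\mathrm{Wh}

% E-bike: assume ~15 Wh per km; 400 feet is about 0.122 km.
E_{\text{e-bike}} \approx 15\,\mathrm{Wh/km} \times 0.122\,\mathrm{km}
                  \approx 1.8\,\mathrm{Wh}
```

Both come out around one to two watt-hours: trivial for a single session, but multiplied across billions of queries a day, it adds up fast.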
I’m not going to say that AI is all bad; I’m sure it’s benefited me greatly without my awareness. And it’s everywhere. I’m even required to use it to communicate with my child’s teacher (the school’s messaging app comes complete with an option for an AI tutor for my child – gag!). But AI is turned on automatically in so many terms-of-service pop-ups we “agree” to without a second thought, and the ramifications are terrifying. (Tip: Here’s how to get started turning off those automatic AI functions in a majority of common apps.)
Which brings me to us, dear lovers of words. To do nothing is to be complicit in far more than we imagine. Whether you’re a reader searching for your next book or firing up a question about something you read in that novel on your nightstand … or maybe you’re an author who discovered how much time you can save by generating an AI image of your latest character’s bedroom (i.e., “Show me a typical twenty-something’s dorm room from the 1980s. The student is a single, white male who is majoring in astrophysics. He has two roommates and a girlfriend, and listens to Def Leppard.”) … these actions cost us, and they impact others without their consent:
- More than 123 million people worldwide are currently displaced due to climate change (and that number is increasing every day).
- Data centers are taking over land that could otherwise be used for farming or renewable energy; EPA violations at these sites are common, and many fines go unpaid, despite objections and real local and global ramifications.
And while the companies authors (and everyone else) use every day have made public commitments to 100% renewable energy, those same companies are burning fossil fuels to power AI. Who? Google, Meta, Microsoft, Amazon… Yeah, them again.
- Here’s a petition to encourage Google, Meta, Amazon, and Microsoft to power their AI ventures with clean energy.
- Here’s an article documenting the trauma inflicted on African data workers whose labor is behind AI’s image-generation powers (yeah, so, someone can end up clinically diagnosed with PTSD, or far worse, just so we can have an image generated for us, at will).*
- Here’s the skinny on Facebook’s use of your phone’s camera roll to train its AI system, and how to stop it.*
Some of these are tough reads, but we have to make ourselves understand the human lives behind these tools so that we can make conscious, informed decisions about our usage. And yeah, I’m also tracking my own usage as I email back and forth with whoever (or whatever) is on the other end of the book-promo emails I’ve been getting. I plan to tread lightly with this mysterious emailer, and I can already tell it will be the first and last time I respond to that type of junk mail. The sad truth is that authors do fall for these kinds of things, and so the advocate in me needs to find out more. I’m no detective, but I’ll keep you posted.
*Many thanks to Vu Le at Nonprofit-As-Fuck for these two links.
Are you on my newsletter list? When you sign up, you’ll get my monthly questions, and you’ll also receive the 5 S’s Applied to Story downloadable PDF. I send emails approximately once a month with mini craft essays, special notices, early-bird registrations, and subscriber-only announcements. No spam, ever, and your email address is never shared. Sign up here.
