
Why You Shouldn’t Let AI Write Your Emails

by CM News

Abundance of Cyber Messages

We’ve all been in a situation where we sit down to our devices first thing in the morning—caffeinated beverage clutched in hand—and brace ourselves for the moment when all those unread messages from the previous day flood our screen. And wouldn’t it be nice if there was a little creature hidden inside your computer or smartphone that could take care of those messages for you?

Well, now there is. At least, if you believe that AI (artificial intelligence) is the cure-all that many are claiming it to be.

I’ve spent my career studying virtual communication, and understanding how we make and form impressions of others virtually has been a core component of my research. With recent innovations in AI, I’ve heard from many people—from front-line employees to executives—who are grappling with the potential of AI to transform their productivity. As inboxes fill up and to-do lists expand, workers are asking themselves: should I let AI write this email for me?

While the question is a simple one, its implications are not. Communication matters. And the impressions you make in digital interactions can determine whether you receive a job offer, how managers and customers rate your performance, and even your salary.

To begin to answer that question, let’s first acknowledge that AI in communication isn’t entirely new. Even before the more recently introduced large language model (LLM) chatbots that have captured the world’s attention—like ChatGPT, Copilot, Claude, Gemini, and Perplexity—there were chatbots that could interact with almost as much finesse as a human. For instance, in the early 2000s, I spent time conversing with SmarterChild on AOL Instant Messenger (anyone else remember dial-up?). By 2017, AI-generated “smart” email replies accounted for 12% of daily email communication through Gmail. That’s a whopping 6.7 billion AI-generated emails sent daily.

Today’s AI is even sneakier: many chatbots are now sophisticated enough to fool people into thinking they’re conversing with live humans. Newer chatbots even use tactics such as purposely pausing before responding and sprinkling in slang to seem more realistic. Researchers have even gone so far as to direct chatbots to ask whether the person they’re interacting with is actually human, beating that person to the question and appearing more lifelike in the process.

When thinking about the potential downsides of outsourcing your communication, consider whether you’ve ever heard a restaurant advertise its “hand-cut” fries, or chosen a handmade soap even when similar, cheaper, factory-produced options were available.

Read More: AI and the Rise of Mediocrity

There’s a reason why these human-made products seem so appealing. Stanford University researcher Arthur Jago found that people rated songs, recipes, and paintings as more authentic when they were told outputs were human-generated compared to identical ones that were created by AI. Why? Because when we experience something that was crafted by hand, it feels more sincere rather than pre-programmed.

Imagine receiving a message from your supervisor congratulating you on your recent promotion. However, you realize the message was generated by AI because it uses formal words like “resonate” and “elevate” in a way that sounds nothing like your manager’s normal writing style. No matter how appropriate the words are, the communication will feel less like a celebration and more like an inauthentic, low-effort platitude due to the lack of human effort.

As I discovered through my own research in managerial, negotiation, and education contexts, virtual communication that is perceived as higher effort is rated by recipients as significantly more authentic. And authenticity perceptions have a host of downstream consequences including impacting customer satisfaction, negotiation outcomes, and employee trust in—and evaluations of—their leaders.

Although your offloading of your email-writing to AI may go undetected in many cases, there are potential “tells” that, if noticed even once, could lead recipients to question the authenticity of all your past communications. Some of the most common AI tells are using words or language that aren’t usually a part of your vocabulary, “hallucinating” information, making an unlikely error (such as misunderstanding a fundamental attribute of your company’s mission), or creating a message that demonstrates a lack of understanding about information you already have (such as a topic discussed in a previous conversation).

For instance, opening an email to a close colleague who just confided in you about their divorce with an AI-generated “Hope this email finds you well” would risk permanently damaging your relationship with them. Short of establishing a brain implant—which would come with its own set of worrisome issues—AI will never have fully comprehensive knowledge of you and your prior interactions or relationships. Thus, there is always a lurking risk that your AI use will be detected.

At this point, you might rightfully be wondering if the decision to use AI in your communication should simply come down to whether it’s more important in a given situation to be seen as authentic or to get the job done as productively as possible.

Here’s the problem with that simple dichotomy: in many work situations, outsourcing your communication to an AI tool can actually make you less productive overall. Research shows that when it comes to assistive technology, people tend to fall into the trap of over-relying on it, a phenomenon referred to as cognitive offloading or automation bias.

Imagine you’re drafting an important client email and turn to AI for help. The first output is too formal, but the second is too casual. After ten attempts with varying prompts, you finally get a solid draft but still spend significant time revising it to include details from previous conversations. What should have been a 5-minute task if you had just written the email yourself balloons into a 30-minute ordeal because of your attempts to offload it to AI. In these more novel and important situations, tackling the task ourselves rather than turning to AI as a crutch is often more efficient and effective.

There’s also the risk of getting too comfortable letting AI do the heavy lifting and trusting its judgment even when it is incorrect. In one study, doctors using AI tools for diagnoses at times began trusting the AI over their own judgment, changing some of their correct initial assessments to incorrect ones based on the tool’s advice. When it comes to your work communication, trusting AI too heavily can also lead to overlooking embarrassing mistakes, like accidentally using the incorrect gender pronouns for a major client whose name is ambiguous (e.g., Pat).

Worse, we’re not just risking embarrassment; we’re potentially stunting our own growth. Relying on technology to manage tasks can lead to lower future memory performance, as technology handles the information for us and we assume we’ll be able to access it later via that same technology. By letting technology do our thinking for us, we can find ourselves unprepared to deal with follow-up conversations, especially when they occur synchronously (e.g., on the phone or in person). While crafting difficult messages may be anxiety-provoking, the experience of working through these challenges will improve our communication skills and overall performance in the long run.

If you decide you want to use AI in a context where someone may potentially realize you are doing so, there are ways to minimize—though not completely eliminate—the downsides.

First, be transparent about AI use. Doing so will help avoid running afoul of regulations in regions such as the EU, while also preventing the interpersonal risk of being perceived as “deceptive” and immoral because you hid your AI use.

Second, give a reason for using AI that also benefits your message recipient. Even if you are using AI solely to improve your own productivity, research has found that people often incorrectly assume negative motivations behind others’ actions, such as laziness or indifference. To avoid others viewing your AI use as a negative reflection of your effort or ability, you can add a note at the bottom of your message or in your email signature that transparently explains you are using AI tools. In this way, you’re being open about your use of AI and showing how it benefits the recipient—like helping them get a quicker response—rather than giving the impression that you don’t care.

When it comes to deciding whether to use AI for a specific interaction, you should be asking yourself: does this communication matter? If the answer is yes, then ideally you should write it yourself. While leveraging AI can help with brainstorming or proofing your message without harming your relationships, if you want your communication to seem authentic, then the message should be in your own words. If others think you are using AI to communicate with them, they’ll begin to wonder why they even need to interact with you in the first place.

Especially as even more innovative AI communication tools continue to emerge, including voice and video assistants that can “attend” calls or meetings on our behalf, it’s vital to remember the irreplaceable value of genuine human connection. After all, in a world increasingly dominated by AI, sometimes the most meaningful act can be as simple as a thoughtful, human-written message.


