A few days ago, I posted this picture to Facebook, with the caption, “Find yourself someone who looks at you the way Boggle looks at me when he wants my soup.” Boggle, obviously, is the cat’s name.

A few hours later, my wife sent me this screenshot of Facebook’s suggested replies to this photo:

This struck us both as unsettling: not because the AI has gotten good enough to generate such accurate approximations of how someone might reply (including the cat’s name, the emoji usage, the soup reference, and the casual meme reference to a “spirit animal”), but because offering these as a menu of reactions misunderstands the function of these interactions in the first place. Even if the actual text (and emojis) of the response are identical, something feels fundamentally different to me about knowing someone typed them out themselves rather than selecting them from a menu of options.

What is it that makes it different, though? Is it the effort, knowing that the commenter had to actually go through the process of typing out the letters and selecting the emojis? I don’t think so; I imagined knowing that someone dictated these via voice-to-text or typed these on a computer rather than a smartphone, and it didn’t substantively affect my perception of the message.

Instead, there’s something different about knowing the content was generated by the commenter rather than merely selected. It’s akin to the distinction between recognition and recall, where the ability to recall something represents stronger understanding than the ability merely to recognize it when prompted. Similarly, generating a response oneself feels to me as if it represents something stronger than selecting a pre-generated one. This likely connects to why reader-written comments are regarded as more impactful than mere reaction counts: a comment says something stronger about the feelings of the person leaving it. Offering the ability to select a pre-generated response circumvents that, even if the response selected is identical to the one the commenter would have left on their own.

Generative AI, I feel, is in a position right now that all revolutionary technology goes through: we recognize its potential as a powerful new tool, but we haven’t yet identified what needs it addresses. Generative AI is a solution looking for problems. And in the process of searching for the right problems, we’re coming across lots of problems it is not good at solving, such as the problem of needing a first-person account of a real experience with a 2e child in the NYC gifted and talented program. A human could have written a response identical to the Meta AI’s in that story, but the human-generated response would have value; the AI-generated one does not.

Generative AI is a solution looking for problems.

This conundrum comes up a good bit in my teaching. My rules regarding how much students may copy from AI are generally more restrictive than the rules students may have in the workplace. The reason is similar: in an educational context, the work generated is valuable insofar as it represents the student’s knowledge of some content. In the workplace, the work generated is valuable insofar as it is able to accomplish a task. What the work represents is different. Copying code from AI accomplishes one goal, but not the other.

In the opening synchronous meeting for one such class this semester, I was asked about this policy: if the work itself is the same, what does it matter whether it came from AI or not? I explained my thoughts with an analogy: imagine you have an assistant, whether that is an executive assistant at work or a family assistant at home or anyone else whose professional role is helping you with your role. Then, imagine your child’s (or spouse’s, I actually can’t remember which example I used in class) birthday is coming up. You could go out and shop for a present yourself, but you’re busy, so you ask this assistant to go pick out something. If your child found out that your assistant picked out the gift instead of you, would we consider it reasonable for them to be disappointed, even if the gift itself is identical to the one you would have purchased?

My class (those that spoke up, at least) generally agreed yes, it would be reasonable to expect the child to be disappointed: the gift is intended to represent more than just its inherent usefulness and value, but also the thought and effort that went into obtaining it. I continued the analogy by asking: now imagine if the gift was instead a prize selected for an employee-of-the-month sort of program. Would it be as disappointing for the assistant to buy it in that case? Likely not: in that situation, the gift’s value is more direct.

This gets to the core distinction I feel tools using generative AI need to address: to what extent is the artifact they are generating valuable in and of itself, and to what extent is the artifact they are generating valuable only insofar as it is authentic to the way in which it is perceived to have been generated? In the workplace, a block of code may be valuable insofar as it accomplishes the goal of the program, while in a class, it may be valuable only for what it says about the student. In gift-giving, a gift may be valuable in a professional setting based only on its inherent value, while in a personal setting it may be valuable due to a combination of value and the authentic process that generated it. We can apply this analogy in numerous other places; this is why it is appropriate to bring a store-bought cake to a corporate event but not to a baking competition, or why we regularly see internet personalities criticized for apparently using generative AI to author apology videos.

There are implications of this view for two audiences. For regular users, it’s important to consider the trade-off between authenticity and whatever value generative AI is delivering when electing to use these tools. For example, there’s a built-in tool here in WordPress that lets me simplify, expand, or summarize this post. I can be a bit verbose, and so I as a user have to consider whether the loss of authenticity that would come from using that tool is worth the apparent gains in readability or simplicity. We each individually need to attend to whether using these tools undermines the authenticity of the artifact we’re producing. This can be tough, of course, because using these tools will so often be much easier than producing the artifact ourselves. Generative AI is like the high-fructose corn syrup of content: it’s cheap and easy to use, but it doesn’t yield as good a result and has some long-term impacts if we use it too much. We have to be careful about when we use it because it would be so easy to get carried away.

Generative AI is like the high-fructose corn syrup of content: it’s cheap and easy to use, but it doesn’t yield as good a result and has some long-term impacts if we use it too much.

For those building tools that leverage generative AI, the same implication applies at a broader level. To what extent are we helping our users circumvent authenticity, and to what extent are we operating in areas where authenticity wasn’t part of the underlying value of the artifact? One area of rapid development for similar technologies over the past few years has been photo editing: Photoshop and other tools can now do in a single click what previously took a professional several hours. While this raises obvious concerns about technology replacing workers, I’ve not yet heard concerns raised about the authenticity of the exercise. But when we develop tools that generate content in settings where its only value would be its authenticity, such as the aforementioned instance of the Meta AI fabricating a story about a child in a New York school system, we are creating something self-defeating. The tool is solving a problem it inherently cannot solve, because it removes the authenticity from the artifact, and the authenticity is the only reason the artifact had relevance in the first place.

So, as both users and creators of generative AI tools, it’s important that we keep in mind to what extent certain artifacts have value because of what they are, and to what extent they have value because of the authentic process that created them. Generative AI can be a useful tool for creating artifacts whose entire value is the artifact itself, but if authenticity matters, generative AI is a poor fit.

That said, for the sake of science, I did drop this blog post into Jetpack’s built-in AI summarizer. If I had posted the following summary instead of the full post above, would it be more or less impactful? Is that difference because of the different content, or because of the process that generated it? Does it matter that you know it was AI-shortened? Does it matter that you know the input was a full-length post by me rather than a short prompt to generate a longer post?

A few days ago, I posted a Facebook picture of my cat Boggle with a caption about the way he looks at me when he wants my soup. My wife then sent me a screenshot of Facebook’s suggested replies to this photo, which felt unsettling. The AI-generated responses were accurate but missed the personal touch of someone typing out a message themselves.

This lack of effort in AI-generated replies reminded me of the difference between recognition and recall. Generating a response personally feels more meaningful than selecting a pre-made one, much like how a student’s own work demonstrates their knowledge better than just copying answers.

I explained this in a class with an analogy: If you ask an assistant to buy a gift for your child’s birthday, the child might feel disappointed knowing it wasn’t chosen by you. The gift’s value is tied to the thought and effort behind it, unlike a professional setting where the inherent value of the gift is enough.

This applies to generative AI too. The value of an artifact depends on whether its authenticity or just its existence is important. For instance, a block of code is valuable in a workplace for its functionality, while in a class, it shows the student’s understanding. AI-generated content is useful when the artifact’s value is independent of its creation process, but not when authenticity matters.

We need to be cautious with generative AI. It’s easy to use but can undermine authenticity. Users and tool creators should consider whether the trade-off between ease and authenticity is worth it. For skills like photo editing, AI is helpful because authenticity concerns are less. However, AI-generated personal stories or apologies might lack the necessary authenticity, making them less impactful.

Ultimately, generative AI can simplify creating certain artifacts, but it’s vital to assess when the authenticity of the process matters.