On My AI Collaboration Policy

A few months ago, I tweeted about the AI collaboration policy I added to my course syllabi essentially at the last minute before the summer semester began.

I feel like it’s a little trite nowadays to say something “went viral”, but I do know that I woke up the next morning to a ton of notifications and replies. Over the next few months, a number of different media requests came up that could be traced directly back to that thread. And frequently in those, I was asked if there was a place where my policies could be found, and… there really wasn’t. I mean, my syllabi are public, but those just show the policy in isolation, not the rationale behind it. Plus, my classes actually have slightly different policies, so any one syllabus gives only a partial view of the overall idea.

In most of my classes, there are a number of specific skills and pieces of knowledge I want students to come away with. AI has the potential to be an enormous asset for these. I love the observation that a student could be writing some code at 11 PM and get stuck, and instead of posting a question on a forum and waiting for a response the next day, they could work through it with some AI assistance immediately. But at the same time, these agents can often be too powerful: it’s entirely plausible for a student to instead put a problem into ChatGPT or Copilot or any number of other tools and get an answer that fulfills the assignment while they develop no understanding of their own. And that’s ultimately the issue: when does the AI assist the student, and when does the AI replace the student?

To try to address this, I came up with a two-fold policy. The first—the formal, binding, enforceable part of the policy—was:

My “we’re living in the future” moment came from the fact that this is exactly the same policy I’ve always had for collaboration with classmates and friends. You can talk about your assignment and your ideas all you want, but the content of the deliverable should always come from you and you alone. That applies to human collaboration, and that applies to AI collaboration as well.

With human collaboration, though, I find that line is pretty implicitly enforced. There are some clear boundaries. It’s weird to give a classmate edit access to your document. It’d be strange to hand a friend your laptop at a coffeeshop and ask them to write a paragraph of your essay. There are gray areas, sure, but the gray area between collaboration and misconduct is narrower. That’s not to say that students don’t cheat, but rather that when they do, they probably know it.

Collaboration with AI tends to feel a bit different. I think that’s partially because AI is still a tool, and anything we create with a tool feels fundamentally ours—we don’t regard a math problem we solve or a chair we build as any less “our” work because we used a calculator or a hammer, though we’d consider it more shared if we instead asked a friend to perform the calculations or pound in a few nails. And this is the argument I hear from people who think we should allow more collaboration with AI: it’s still just a tool, and we should be testing how well people know how to use it.

But what’s key is that in an educational context, the goal is not the product itself, but rather what producing the product says about the learner’s understanding. That’s why it’s okay to buy a cake from a grocery store to bring to a party, but not okay to turn in that same cake for a project in a culinary class. If a student is using AI, we still want to make sure that the product reflects something about the learner’s understanding.

And for that reason, I augmented my policy with two additional heuristics:

Truth be told, I really prefer just the second heuristic, but there are instances—especially in getting feedback from AI on one’s own work—where it’s overly restrictive.

Both heuristics have the same goal: ensure that the AI is contributing to the learner’s understanding, not directly to the assignment. That keeps the assignment as a reflection of the learner’s understanding rather than of their ability to use a tool to generate a product. The learner’s understanding may be enhanced or developed or improved by their collaboration with AI, but the assignment still reflects the understanding, not the collaboration.

There’s a parallel here to something I do in paper-writing. When writing papers with student co-authors, I often review their sections, and I often come across simple things that I think should be changed—minor issues like grammatical errors, occasional misuse of personal pronouns in formal writing, etc. If a student is the primary author, it makes sense to give major feedback as comments for the student to incorporate; for the minor suggestions, it would often be easier to make them directly than to explain them—but I usually leave them as comments anyway, because that pivots the process into a learning/apprenticeship model. By that same token, sure, there are things generative AI can do that make sense to incorporate directly—that’s part of why there’s been such a rush to build them directly into word processors and email clients and other tools. But reviewing and implementing a correction oneself helps develop an understanding of the rationale behind the correction. It’s indicative of a slightly improved understanding—or at least, I suspect it is.

So, in some ways, my policy is actually more draconian than others. I actually don’t want students to simply click the magic AI grammar-fixing button and have it make suggested changes to their work directly (not least because I subscribe to the view that grammar should not be as restrictive and prescriptive as it currently is seen to be—see Semicolon by Cecelia Watson for more on what I mean there). I’m fine with them receiving those suggestions from such a tool, but the execution of those suggestions should remain with the student.

Of course, there are a couple of wrinkles. First, one of my classes deliberately doesn’t have this policy: it’s a heavily project-oriented class where students propose their own ~100-hour project to complete. The goal is to make an authentic contribution to the field—or, since that’s a hard task in only 100 hours, to at least practice the process by which one might make an authentic contribution to the field. Toward that end, if a tool can allow students to do more in those 100 hours than they could otherwise, so be it! The goal there is to understand how to justify a project proposal in the literature and connect the resulting project with that prior context: if AI allows the project to be more ambitious, all the better. The key is to understand what students are really expected to learn in a particular class, and the extent to which AI can support or undermine that learning.

And second and most importantly: we are right at the beginning of this revolution. Educators ultimately learned to adjust to prior technological innovations: when students got scientific calculators, we assigned more and tougher problems; when students started writing in word processors rather than with pen and paper, we expected more revisions and more polished results; when students gained access to search engines instead of library card catalogs, we expected more sources and better-researched papers. Generative AI has emerged faster than any comparably revolutionary tool before it, but the fundamental question remains: now that students have a new, powerful tool, how will we alter and raise our expectations for what they can achieve?

On My Top Books of 2022

I’m finishing catching up on porting my top-ten books-I-read-this-year lists to my blog by posting this one from 2022:

As I’ve done the past couple years, I’m making a list of my ten favorite books I read in 2022. No particular order. (Well, actually, the order is the order I read them.) They’re not books that came out this year, just books I read this year.

On Classmate Visibility in Online Education

In our original design for CS7637: Knowledge-Based AI, there were six homework assignments, but students only had to do three. Specifically, they had to do one from each of three pairs of assignments. Each pair was due a week apart. Doing the earlier assignment in each pair meant getting earlier feedback and more peer feedback (since most students do the later assignment); doing the later assignment meant having an extra week to work on the assignment, and getting to see some peers’ work before submitting one’s own.

To prepare our grading resources, we did a class poll on the course forum to ask students whether they planned to submit to the first assignment or the second one. About 80% said they planned to submit to the first assignment. We thought, “Huh, wow, online students really are more committed if they’re going to intentionally submit earlier than they need to!”

Then in practice, only about 30% of students actually submitted at the earlier deadline; 70% submitted at the later deadline.

Did students overestimate themselves? No. But students who are active on the course forum and who respond to an optional poll tend to be more invested, and thus are also more likely to submit early. Among the students who would answer an optional forum poll, about 80% did submit early; across the class as a whole, though, 70% submitted late.
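To make that selection effect concrete, here’s a minimal simulation sketch in Python. The group sizes and rates are hypothetical assumptions chosen only to roughly reproduce the 80%/30% gap we saw—they aren’t our actual course data:

```python
import random

random.seed(0)

N = 1000  # hypothetical class size

students = []
for _ in range(N):
    # Assumption: ~30% of students are invested, forum-active students,
    # and only those students answer the optional poll.
    invested = random.random() < 0.30
    # Assumption: invested students submit early 80% of the time;
    # everyone else submits early only ~9% of the time.
    early = random.random() < (0.80 if invested else 0.09)
    students.append((invested, early))

# The poll only ever samples the invested students.
poll_respondents = [early for invested, early in students if invested]
poll_early_rate = sum(poll_respondents) / len(poll_respondents)
true_early_rate = sum(early for _, early in students) / N

print(f"Early rate among poll respondents: {poll_early_rate:.0%}")  # ~80%
print(f"Early rate across the whole class: {true_early_rate:.0%}")  # ~30%
```

The poll isn’t wrong about the people who answered it; they just aren’t representative of the class as a whole.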

As time has gone on, this experience has been repeated over and over again in many ways. One common refrain I hear from students is, “I’m struggling so much. People on the forum are so far ahead of me, and I’m so far behind.” Typically the students saying this aren’t behind at all. Usually they’re ahead, actually, since the kinds of students who will even go to the trouble of self-assessing their progress are usually ahead of the average. But they’re comparing themselves to a very biased sample: the types of students who will engage on a forum are more invested than average.

What makes this different from in-person classes is visibility. Online, students only ever see classmates who do something to be seen. In person, by comparison, students see everyone in one way or another. Students see classmates in the back of the room typing on their laptops instead of paying full attention. And more than that, students see empty seats. Students may know that a class is full, meaning every seat should be filled; they can look around and recognize that if the lecture hall is 75% empty, then 75% of the class didn’t attend. That helps them calibrate. They can think, “Sure, I’m probably not doing as well as that person in the front row asking questions I can’t even understand, but I’m doing better than the 75 people who never even come to class!”

Online, there are no empty seats. There is no back row where people are typing away, present only in case the instructor throws out some insider knowledge for a future test. The analogues to those front-row question-askers are the only classmates that students see, and so the comparisons they draw are severely skewed.

That’s part of why I think it’s so important to require things like peer review: it gives everyone more of a view of the typical distribution of the class rather than just the biased sample visible on the forum and in other places. It’s why I would like to share things like class averages, except I find those backfire: they may reassure some students that they’re not as far behind as they fear, but they also become the basis for other students to protest already-high grades because they insist on having the best grades even if there’s no practical difference.

More generally, it’s part of my push to feature more peripheral and semi-peripheral community in online classes. There’s some natural community that forms when you have peripheral indicators of some shared foundation, even if you never interact with other members of that community directly. Recreating that online is a challenge, but one that can have a significant positive effect on students’ self-perceptions of their progress.

On My Top Books of 2021

Following on from my top books of 2020, here were my top books of 2021:

  • This Is How You Lose the Time War by Amal El-Mohtar and Max Gladstone. I found this absolutely beautiful, as much poetry as novel. I’ve always admired science fiction and fantasy that can create a compelling world without going into extreme detail.
  • The Friendly Orange Glow by Brian Dear. I received this as an advance promotional copy, and I almost gave it away because I just didn’t think I’d ever go for such a long book about the topic. But then an audiobook came out and I decided to give that a try—and holy cow, I’m glad I did. Phenomenally informative and fascinating in so many ways: it’s like all of computing history played out once already, and now we’re just repeating it.
  • Vicarious by Rhett C. Bruno. One of the most imaginative and original science fiction books I’ve read. It’s like The Truman Show meets Ready Player One meets probably a bunch of other books I’ve never read, but it’s fantastic. (And it’s proof that sometimes you can judge a book by its cover: I bought it in part because of its gorgeous cover design.)
  • Infinite² by Jeremy Robinson. The book that cemented Jeremy Robinson as one of my favorite authors—though if it weren’t this book, it’d be another Jeremy Robinson book I read this year. I love his mix of action, humor, philosophy, and fourth-wall breakage.
  • Failure to Disrupt by Justin Reich. It’s easy to talk about the potential of technology. It’s far harder to dissect and investigate why certain technologies don’t catch on. Justin manages to do that not only for some technologies, but entire classes of technologies.
  • Extraterrestrial: The First Sign of Intelligent Life Beyond Earth by Avi Loeb. I’m not convinced of the thesis, granted—Loeb hypothesizes that Oumuamua was an extraterrestrial spacecraft, his reasoning being that the object is too much of an outlier to actually be an instance of a known class of objects like comets. I think he adequately supports the idea that it’s not of a class of objects we already know about, but I wouldn’t go so far as to say he supports the hypothesis that it’s an extraterrestrial spacecraft—just that it’s definitely something new we don’t understand. But beyond any of that, it’s a fantastic book about how science is done and how different hypotheses are tested and reasoned through deliberately. It’s fantastic not for its argument that Oumuamua was alien technology, but for its depiction of how science as a whole progresses and is subject to the same kinds of trends and internal political pressures as other endeavors.
  • The Order of Time by Carlo Rovelli. Anything by Carlo Rovelli is what happens when a poet takes a wrong turn and ends up in physics, but continues to write about it with the eloquence of a poet.
  • Indexing & Indexing: Reflections by Seanan McGuire. I listened to these back-to-back so I can’t separate them easily, but these books were amazing: a premise simultaneously original and familiar, executed brilliantly. I wish this were made into a TV procedural; it’s the perfect candidate.
  • Minds Online by Michelle D. Miller. The best book I’ve read about teaching online, grounded in learning sciences and cognitive sciences. It’s always nice to read a book that gives you evidence supporting those things you suspected but hadn’t investigated or tested.
  • Master of Formalities by Scott Meyer. I love Meyer’s writing style and characters, and this was the best of the ones I’ve read. Luke Daniels is also the perfect narrator for his writing style.

There are also some books worth noting that I have trouble really comparing to the others, so I’ll mention them here:

  • Whale Day by Billy Collins. It’s unfair to use a spot on anything by Billy Collins because there’s always going to be one. I read a lot of other poetry books this year, but his are by far my favorite.
  • Heaven’s River by Dennis Taylor. It’s hard to rank this separately from the rest of the Bobiverse, but I found this to be the most mature and nuanced of the books. This series deserves a spot among the historical best of science fiction.
  • Making Money and Raising Steam by Terry Pratchett. Similar to Heaven’s River, it’s hard to rank these individually compared to the Moist von Lipwig saga as a whole (and Discworld as a larger whole), but it’s my favorite Discworld subseries.
  • Blankets by Craig Thompson. I love graphic novels, but I find I always go through them quickly enough that they don’t leave as lasting an impact. I remember thoroughly enjoying this at the time, but it hasn’t left as durable an impression, I think just because of the smaller amount of time it commanded. (I could say the same about On a Sunbeam by Tillie Walden.)
  • An Atlas of Extinct Countries by Gideon Defoe. I learned later he took some liberties with history to make the writing funnier, which knocks it down a peg, but it succeeded at being hilarious and informative, if also a tad bit misinformative.

On the Spiraling Relationship of Technology and Education

I’ve been giving a lot of talks recently on the relationship between AI and education, and as part of that I usually start by discussing a little about the overall history of how technology and education interact. My visuals for describing that evolved over time into a sort of model that I’m calling the Technology and Education Spiral. That said, I feel like something like this has probably been proposed before. I googled around a bit (and also asked ChatGPT), but most of the things I’ve seen are more attempts to capture the way technology and education work at any given point in time—not how they influence each other over time. So, if you know of someone or something that has proposed this idea more eloquently already, please do let me know.

That said, here’s the basic idea:

We start at the top, with teachers teaching some skill. Then, technology comes along that can do that skill for students. “Technology” in this context represents a huge number of different things, though: calculators, for example, perform the skill of graphing parabolas, replacing the skill of finding x- and y-intercepts by hand. Spell checkers perform the skill of fixing spelling mistakes, replacing the skill of looking up words’ proper spellings in the dictionary. Washing machines perform the task of washing clothes, replacing the skill of adding soap, mixing clothes, and… well, whatever else went into washing clothes back when washing machines weren’t common. And of course, ChatGPT performs the skill of generating text.

With the arrival of the new technology, teachers then “panic” about what and how to teach. I say “panic” because the media picture usually exaggerates the panic: for every article that talks about teachers panicking about AI-generated plagiarism, I find there are several more that instead advocate for embracing it.

But regardless of whether we say that teachers “panic about”, “thoughtfully contemplate”, “reluctantly address”, or “eagerly embrace” what and how to teach with the new technology, there nonetheless comes a phase of reflection about how the new tool changes things, which leads to a change to our curriculum. Sometimes it truly does replace old skills altogether, the way calculators replaced learning to look up estimated logarithms in the back of a math textbook or the way digital search engines largely removed the need for physical card catalogs.

More often, though, the new technology recontextualizes the skill, forcing us to focus more on why we had students learn to perform the skill in the first place: graphing calculators can graph parabolas with ease, but we still teach students to draw them by hand because we recognize a value in deeply understanding the relationship between the formula and the visualization—a value we generally believe will not develop if students rely on a calculator too early.

And in the most exciting cases, the new technology more fundamentally changes what we can do as educators. Once we teach students to graph parabolas by hand, we then move to teaching them to do so on a calculator because we recognize the tool will allow them to practice more problems, get even better, and move on to even more advanced problems. Students equipped with the tool end up doing more than students without, even though they still learned many of the same skills along the way.

And that’s why this is better described as the Technology and Education Spiral rather than a cycle:

Curriculum comes out of this process with a new end point, a level of achievement and skill attainment that would have been impossible without the new technology. Students today learn more in several fields than they did a few decades ago not because they’ve gotten smarter and not because the fields themselves have changed, but because of the role technology plays. It equips them to learn more, learn faster, and thus achieve more in less time. We routinely have undergraduate students solve problems that once perplexed the greatest geniuses of their era—and while part of that is because we can prepare them better given that those geniuses already found the answer, part of it is that our students have access to tools that can help enormously.

There are reasons that ChatGPT feels more revolutionary than these previous developments. I’ll probably talk about that in a separate post. But fundamentally, we’re seeing the same cycle play out again: a technology has emerged that can do some of the things we historically have asked students to do themselves. That forces us to ask: which of these things no longer need to be taught at all? Which of these things should be taught initially, but then offloaded onto AI once the student demonstrates their own mastery? And what things can we do now that we have this new tool that we never could have done before?

With scientific calculators, the answers to those three questions were: looking up logarithm estimates in a textbook; graphing parabolas to represent an equation; and solving advanced problems for which the work to do them purely by hand is impractical. What will be the answers for generative AI?

On My Top Books of 2020

For Christmas in 2018, my daughter bought me Small Gods by Terry Pratchett. She was only 3 at the time, but when we would go Christmas shopping I would give her a bag to secretly put my present in, then we’d tell the cashier to ring up the contents of that bag and not show it to me. She bought it because it had a turtle on the cover. But I read it—the first book I read cover-to-cover in about five years—and I adored it. It kicked off a love of reading that is still going strong, five years and 544 books later.

Starting in 2020, I began compiling lists of my top ten books that I read in the previous year—not books published in the previous year, but that I personally read in the previous year. Here were the books from 2020, in no particular order, with a little added regarding what I remember about them now. But first, the way I prefaced this list back in 2020:

I read a lot, but I don’t usually write book reviews because I don’t really think my perspective is useful to other people broadly. The fact that I thoroughly enjoyed Small Teaching Online is irrelevant to the 99% of people I know who don’t teach online. The fact that I disliked Eleanor & Park is more indicative that I’m probably not the book’s target audience. Plus, a lot of the books I read are more than a few years old; what’s the point in a review for something that old?

But as part of my year in review this year, instead of the painstaking genre and medium breakdown I did last year, I figured I’d just choose my personal ten favorite books I read this year. This isn’t meant to suggest anyone else will enjoy these, but just that these are the ten books that I remembered most fondly when scrolling through the list of 105 books I read this year.

  • Just Six Numbers by Martin Rees. There are six universal constants that physicists cannot yet derive from other fundamental laws. If any of these constants were different by just a little bit, life in the universe probably couldn’t exist. It’s a fascinating look at physics itself, but unlike most physics books, it has a beautiful focus on what we don’t know.
  • God’s Universe by Owen Gingerich. Similar general topics to Just Six Numbers, with an added wrinkle observing how bizarrely comprehensible the universe is when it didn’t need to be.
  • Small Teaching Online by Flower Darby. It’s like an entire book of things I learned the hard way as I got started teaching online. Plus, it was the first experience I ever had with reading a book and unexpectedly finding myself quoted—which will always hold a special place in my heart.
  • Going Postal by Terry Pratchett. I love Terry Pratchett, but the Moist von Lipwig saga is my favorite sub-saga of the lot, and Going Postal is probably my favorite book from that portion.
  • Ten Drugs by Thomas Hager. I’ve always loved books that retell the history of the world through a particular lens. One of these days I plan to marathon a series of “A History of the World in [Something]” books. But in the meantime, Ten Drugs—essentially a history of medicine in ten notable drugs—is one fantastic example.
  • The Emperor’s Soul by Brandon Sanderson. This was the first experience I had with a book I truly couldn’t put down—I finished it in a single night—and to date one of only a handful of examples of that phenomenon. Most of which have come from Sanderson. And speaking of which…
  • Elantris by Brandon Sanderson. With the benefit of hindsight, I can recognize in general what I loved about this book in particular: Sanderson writes the level of fantasy that I like, interesting and fantastical without feeling like you need a bestiary and a dictionary alongside to understand it. He also treads the line of fanservice beautifully, giving the reader enough satisfying developments to stay engaged without becoming utterly predictable. I’ve gone on to feel the same way about Mistborn, and I can already tell my daughter and I will love Skyward.
  • Deaf Republic by Ilya Kaminsky. This was my first experience with a book of poetry that also tells a story, and it’s stunning to me how effective the medium is for something more narrative.
  • City of Endless Night by Milo Hastings. I picked this up to read on Kindle because, frankly, it was free, given that it’s 100 years old and in the public domain. I was stunned by both how interesting the story was and how prescient some of its predictions were. This was a book released in 1920 that essentially anticipated a second world war against Germany. You read that correctly: it was released in 1920, and it foresaw a second world war against Germany.
  • The Starless Sea by Erin Morgenstern. If I’m ever interviewed and asked my favorite author, the answer is Erin Morgenstern. It almost feels wrong to say given that she’s only written the two books—by comparison, I’ve read more than a dozen each by Terry Pratchett, Billy Collins, Neil Gaiman, and Jeremy Robinson, and several by other favorites of mine like Becky Chambers, Max Barry, Leigh Bardugo, David Wong/Jason Pargin, Rhett C. Bruno, Yahtzee Croshaw, Mark Forsyth, Stephen Fry, Sam Kean, Seanan McGuire, Scott Meyer, Kaoru Mori, Randall Munroe, Oliver Sacks, Brandon Sanderson, and Dennis Taylor (okay, that was mostly an exercise in brainstorming my favorite authors), but Morgenstern’s two books have left the most indelible memories in my mind.