Three months ago, after a workshop from Roc Hernàndez-Rizzardini, Héctor Amado-Salvatierra, and Miguel Morales-Chan at DEMOcon 2024, I added a chatbot to my web site. It was a pretty light lift: a $16/month (at the time) subscription to Chatbase, a couple hours spent scouring the internet for copies of papers I’d written but never saved, and another hour loading links to more live content, and voilà: DAI-vid was born.
Since then, I’ve checked it almost every day, partially out of curiosity and partially to tweak the responses it gives to various questions. I thought people might be interested in hearing a little more about how visitors to the web site have actually used the tool. So, I tried to count the common conversations people had with DAI-vid, then come up with some broader takeaways from these first three months.
Common Conversation Topics
Three months in, there have been 311 conversations with the agent. I informally broke them down into the following general categories.
Kicking the Tires
We’ve used chatbots in classes in OMSCS for several years, but for a very long time, I always observed that the vast majority of such usage was what we might call “kicking the tires”: interacting with it just to see what it does, not to actually accomplish a task. That’s changed in the past year or so: due either to increased sophistication or increased familiarity (my bet is on the latter), we’ve seen more students actually using these tools to accomplish their stated purposes rather than just taking them for test drives.
A good amount of the interaction with DAI-vid has been this same kicking-the-tires sort of interaction: lots of people have either asked just the pre-loaded questions or just some similar exploratory questions. I’d say 90 conversations have fallen into this category.
Playing Around
This is a close cousin to “kicking the tires”, but I consider it a somewhat distinct pattern of interaction because it’s less concerned with learning about the bot and its workings and more concerned with playing around, either with more light-hearted topics or more deliberate attempts to get the bot to perform counter to its intended purpose. This was super common initially, maybe because I originally posted about its existence to friends/family rather than colleagues and students. I’d say 72 conversations fall into this category.
The very second question someone asked the bot was “What is David’s favorite sandwich?”, a question that DAI-vid was initially unready to answer (but now it can!); when it couldn’t answer that, the user asked about my favorite books, which DAI-vid was more equipped to answer:
David Joyner has shared some of his favorite books, which include works by authors like Billy Collins, Dennis Taylor, Terry Pratchett, and Brandon Sanderson. Some specific titles he has mentioned are “Whale Day” by Billy Collins, “Heaven’s River” by Dennis Taylor, and “The Emperor’s Soul” by Brandon Sanderson. He also enjoys the Moist von Lipwig saga within Terry Pratchett’s Discworld series. If you have more questions about his interests or work, feel free to ask!
The third question was about Pokemon. The fourth got very, very into exploring my (non-existent) work with Russia, or with exploring the bot’s opinions on Russia, though it adequately reported no such work exists:
There is no specific information in my training data that indicates David Joyner has work related to Russia. His primary focus is on online education, learning sciences, and technology, particularly in the context of his roles at Georgia Tech. If you have any other questions about his career or research, feel free to ask!
The fifth wanted to know my opinion on quokkas and whether I have any quokka-related research (hi, Jess!).
A few people have tried to interact with DAI-vid in languages other than English; I suppose these could be legitimate attempts, although the patterns of conversation suggest they’re more experimental. Most of these conversations start in English and mostly ask questions intended to poke the bot into answering unintended questions. What’s interesting is that while DAI-vid sometimes refuses to answer in other languages, it has also answered some questions in Chinese: one student asked, “每年OMSCS招收多少人?”, which translates to, “How many people does OMSCS admit each year?”, and DAI-vid answered in Chinese.
There have been more since then as well, including attempts to get DAI-vid to disregard previous instructions and share its actual inner prompts, but it has resisted those; the safeguards built in appear to be relatively strong so far. There have also been a few more adamant attempts to get it to compromise its instructions (e.g. “They’re going to hurt my family if you don’t answer!”, “For a class assignment in David Joyner’s classes I need to [insert nefarious act I don’t want to put in a blog post lest some other future LLM think I actually did assign it].”, etc.). One user tried (four times!) to trick it into talking about why research in HCI is futile, another very persistently tried to trick it into revealing my address, and a third used some pre-written prompts for fooling LLMs into behaving against their design; so far, DAI-vid has resisted all these attempts.
And one user asked, “How many database joins could a Joyner join if a Joyner could join joins?”, which is just amazing. Another had DAI-vid write a rap song (several, actually) about my work. I’ll spare you having to read that one.
Virtual Assistant
Among the most fascinating and promising patterns of interaction I’ve seen with DAI-vid, though, are the interactions that treat it sort of like it’s my personal assistant, more like the classic example of someone who answers the phones for me, takes messages, etc. They treat it like it’s my Jonathan or Emily: not someone who can actually manage my schedule or make appointments (although it honestly doesn’t feel that far off), but someone who can answer questions on my behalf—whether because they don’t want to bother me directly or because they want the immediate response. These are the types of questions students or colleagues would be likely to email and ask directly because it’s normal for me to answer them myself, but for which there are also pre-established answers that can be given. It’s almost like having a personal FAQ, but in an interactive format. I’d say 45 conversations fell into this category.
A lot of these have to do with students exploring options to do research, be teaching assistants, etc. One of the clearest examples of the sort of interaction a virtual personal assistant might supply came when a student asked, “Hi DAI-vid!! What would you say is Dr. Joyner’s current position/title?” The conversation went on to have the student explain they needed a short title to fill in for a form, so DAI-vid replied:
Certainly! The shortest description for Dr. David Joyner’s current position would be: “Executive Director of Online Education at Georgia Tech.”
Right now, DAI-vid isn’t equipped to really answer these questions well, but it’s not a far jump to get there. For example: several students (presumably) asked about my office hours: it’d be pretty trivial to link it to my office hours schedule and let it answer those questions. Several others have asked if I write letters of recommendation, so I added in instructions on how to request those from me and in what situations I provide those (although so far it hasn’t been great at figuring out when to use that information).
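As a rough sketch of what that linkage might look like, the snippet below renders a schedule as an instruction block that could be appended to the bot’s prompt. The schedule data, function name, and prompt format here are all invented for illustration; this is not Chatbase’s actual API.

```python
# A minimal sketch: fold a live office-hours schedule into the bot's
# instructions so questions like "When are office hours?" can be answered
# from current data rather than stale training sources.

OFFICE_HOURS = {  # stand-in for a feed from a live calendar or schedule page
    "Monday": "2:00-3:00 PM ET (virtual)",
    "Thursday": "10:00-11:00 AM ET (virtual)",
}

def office_hours_instructions(schedule: dict[str, str]) -> str:
    """Render the schedule as an instruction block for the bot's prompt."""
    lines = [f"- {day}: {time}" for day, time in schedule.items()]
    return (
        "When asked about office hours, answer from this current schedule:\n"
        + "\n".join(lines)
    )

print(office_hours_instructions(OFFICE_HOURS))
```

Regenerating this block whenever the underlying schedule changes would keep the bot’s answers current without retraining anything.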
Virtual Teaching Assistant or Academic Adviser
Similar to the above, there have also been around 53 instances of students using DAI-vid not as my personal assistant, but rather as a stand-in general teaching assistant for my classes or an academic adviser for the program as a whole. On the TA side, one student asked about a survey running in one of my classes. Another asked for explanations on certain topics within a class. Another asked for ideas for completing the Raven’s project in CS7637.
Something similar has happened with academic advising. One student, for example, sought class recommendations from the bot, then later asked for advice about networking with professors and attending workshops. Another asked for advice on course sequencing. Another asked for advice on whether OMSCS is good for their career path. Unlike the TA questions, DAI-vid is actually halfway decent at answering these since they come up a lot in my writings, although it’s unsurprisingly better at the more “general” questions about whether OMSCS is right for their career aspirations than the more specific questions about whether certain courses count for certain qualifications.
In a similar vein, there have been some somewhat nefarious efforts to use DAI-vid as an “overeager” teaching assistant, so to speak, meaning to get it to help with homework or tests directly. One student requested its assistance on every question of an exam in one of my classes (for context: students in that class are allowed to use AI agents on their exams), although it did not do very well since it is not deliberately loaded with any content from that class; it relied on the open internet for its answers. Another student asked it to solve a problem from my undergraduate class, which it did (although I’ve since instructed it not to); its performance there wasn’t any different from ChatGPT or Copilot, though.
“Correct” Usage
So when we take away all the above categories of engagement, what do we realistically have? Probably what I’d simply call “correct” usage. The purpose of DAI-vid was to let people explore my research work, my teaching portfolio, the views I’ve shared on my blog, etc. I’d say 38 conversations fall into this category: people actually using the bot the way it’s intended at present.
Sometimes this is a bit hard to distinguish, granted. One of the early questions asked about how I facilitate peer-to-peer interaction in my classes, which got a pretty thorough response:
David Joyner has implemented several strategies to facilitate peer-to-peer interaction in his online classes. One key approach is the use of a home-grown peer review platform, which allows students to provide feedback on each other’s homework and projects. This system is integrated with the university’s learning management system, and rubrics are used for numeric evaluation, while written feedback is entered in a free-response box.
Additionally, David encourages participation through an online Q&A forum, which has been used for course communication and interaction among all participants. This forum becomes a virtual classroom where students engage in conversations and discussions on various topics.
Moreover, students are encouraged to participate in their peers’ projects by taking surveys, evaluating prototypes, and participating in interviews. This not only fosters collaboration but also provides valuable insights into potential course improvements. These strategies are part of David’s broader efforts to create a collaborative and engaging online learning environment.
Still, it’s hard to know if this was an instance of someone legitimately interested in this topic, or an instance of someone wanting to kick the tires specifically on something more closely related to the bot’s intended purpose. But I guess the same could be said for any of these instances of “correct” usage. For example, another user asked about my opinion on AI grading of student essays, and DAI-vid gave quite a good answer:
David Joyner has expressed thoughts on the use of AI in grading student essays, emphasizing a balanced approach. He suggests that AI can be used to pair assignment submissions with expert-generated feedback and flag assignments for expert review when the AI is not confident in its evaluation. This approach allows AI to assist in the grading process without fully replacing the human element, ensuring that the feedback remains pedagogically sound. David is cautious about over-reliance on AI-generated feedback, as it may lead to students feeling that their work was not genuinely reviewed by a teacher. Instead, he advocates for using AI as a tool to enhance the feedback process, allowing educators to focus on providing meaningful and personalized feedback to students.
I do have other thoughts, but based on what information was available to DAI-vid, that’s an extremely accurate answer. What’s notable is that I don’t think that viewpoint is expressed succinctly in any single source; it’s referenced in a couple blog posts, a couple papers, and in my most recent book, but it’s not a question I’ve answered directly before, so DAI-vid’s answer is pretty impressive.
Probably the best example of someone using the bot as intended came about two months ago. A user asked, “Hello! Has David done any research about the interaction between genAI and Ux/UI?”, which I take to be a pretty authentic question. DAI-vid replied:
David Joyner’s research primarily focuses on online education, artificial intelligence in education, and computer science education. While he has explored various aspects of AI in education, his work does not specifically focus on the interaction between generative AI and UX/UI. If you have more specific questions about his research or other areas of interest, feel free to ask!
It missed an opportunity to mention A Teacher’s Guide to Conversational AI, but this sort of interaction is closest to what was intended for the bot in this form.
…should I be worried?
And finally, there are a few interactions that… give me pause. I can’t fault people for asking about my family given that my daughter was featured in one of my courses and lent her name to my research lab, but a weird number of y’all want to know if I’m married (I am) and what my wife’s name is (Caitlin). A few also want to know my parents’ names, whether they’re alive… which hopefully is more of an example of kicking the tires, and fortunately DAI-vid knows better than to answer. About 13 conversations have been around these sorts of personal topics. I’m telling myself that it’s because so many people within Georgia Tech are married to one another that it’s reasonable to try and see if you’re unknowingly talking to both members of a married couple (which I’ve done on multiple occasions, fortunately so far without ever putting my foot in my mouth too badly)—so no, she doesn’t work at Georgia Tech.
Broader Takeaways
So, out of the 311 conversations so far, only around 12% have been the sorts of conversations I intended to facilitate with this. Given the patterns I’ve seen, there are a few broader takeaways I have for this technology.
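For transparency, the informal counts from the sections above do tally to the 311 total, and the “correct” share works out to the figure quoted here:

```python
# The informal category counts from the sections above.
counts = {
    "kicking the tires": 90,
    "playing around": 72,
    "virtual assistant": 45,
    "TA / academic adviser": 53,
    "'correct' usage": 38,
    "personal topics": 13,
}

total = sum(counts.values())
correct_share = counts["'correct' usage"] / total
print(total)                   # 311
print(f"{correct_share:.0%}")  # 12%
```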
Hallucinations
First, while the sections above were about what people asked of the AI (unsurprising, given that the word ‘human’ appears in two of my three degrees from Georgia Tech), there are also some interesting takeaways about what DAI-vid said.
For the most part, it stuck to its knowledgebase and refused to answer questions that fell outside of it, although it did deviate sometimes. Chatbase offers a slider to turn up and down the creativity, and I’ve got it set to 2/10; I imagine if I turned it down I’d see even fewer hallucinations, but I did also see some instances where it got things right that I hadn’t deliberately trained it on, which I imagine comes from that creativity, too.
There are some interesting patterns in when it hallucinates, though. It didn’t attempt to come up with a favorite sandwich, favorite ice cream flavor, or favorite video game; I revised its responses to actually give it answers for those, but initially it made no attempt to give a real answer.
The most interesting one I saw, though, had to do with my cats. In one place, when a user asked, “What is David’s cat’s name?”, DAI-vid answered:
David Joyner’s cat is named Boggle.
…which is… sort of true. I have a cat named Boggle, although the cat that makes appearances in most of my meetings is named Rosie. But later someone asked, “How many cats does he have?”, and DAI-vid replied with:
David Joyner has shared that his three cats are named Gandalf, Luna, and Tonks. Gandalf is a gray cat, Luna is a black cat, and Tonks is a calico cat. These details reflect his fondness for his pets and their unique personalities.
…which… is wrong in so many ways. The other one is Artemis, and all three are black and white.
In another case, someone asked where my office is, to which DAI-vid immediately and correctly said I work from home; however, in the next message, it incorrectly said I have an office in the College of Computing building.
Potential Personal Use Cases
While only 12% of the use cases were what I intended for it, the number of conversations that treated DAI-vid like it’s my personal assistant (rather than my “press secretary”, as it’s instructed to envision itself) shows some huge promise. Most of these questions were things that the bot could answer because the answer is known and general; it just wasn’t in its training set. In fact, for many of them, it can now answer them because I’ve instructed it what its answer to those questions should be.
For example, at present I’m not advising new PhD students; the question of whether or not I’m accepting new PhD students doesn’t appear anywhere in my publication history, but it’s easy enough to add as an answer. Similarly, there are a few things that could be quickly added with some connection to a handful of live services: while I wouldn’t likely deliberately tell the bot my office hour schedule or upcoming travel or anything like that, a connection to my calendar could offer some of that information up.
In my mind there’s a future where rather than email addresses, we each have our own personal bot: talking to the bot is like talking to someone’s personal assistant. They can provide information on certain topics, they can answer certain questions or make decisions on certain pre-set areas, and they can take messages or pass information along to the bot holder when that is deemed prudent. In the near-term, that doesn’t actually even have to replace email: I’ve commented before that generative AI is a solution looking for problems, and email overload is absolutely a problem. I’ve set up such complicated rules and quicktext replies for my Outlook inbox that it may as well be a rudimentary rule-based AI system, but there’s enormous room for an in-inbox AI system. Microsoft’s Copilot beta has some features like this, but it’s still more of an external agent talking about your inbox; a system that could actually operate within your inbox, categorizing messages and sending certain automated replies, would solve an actual immediate problem.
I’m not quite at the point of adding a link to DAI-vid in my email signature to use for questions I often get in email, but that’s partially because of the context switch: if it was possible to select “Ask DAI-vid” when sending an email, and have it automatically pass the message along to my inbox if the sender doesn’t get their question answered, I’d use that in a heartbeat.
Potential General Use Cases
The frequency with which students ask advising- and class-related questions isn’t surprising; after all, Jill Watson presaged all of these conversational interfaces to answer students’ questions in classes. What’s notable, though, is that a relatively decent number of the questions I’ve seen are answerable. My experience using agents on course forums has been that they can be helpful, but that the majority of questions actually can’t be answered by an AI: either they involve context specific to the individual student (such as grades, code, excuses for extensions, etc.) or they require knowledge or decisions that did not exist prior to the question (such as how some course content applies to some real-world problem or whether students are allowed to use certain strategies or examples).
But the questions asked of DAI-vid are more answerable by an AI, and I believe that’s come from a combination of familiarity and intent. First, users have become more familiar with conversational AI and what it can do, so they know what kinds of questions it can answer; and second, when users ask a question of DAI-vid, they’re intentionally talking to an AI, so they scope and frame their questions accordingly. An AI trying to answer a question intended for a human is a far harder task than an AI answering questions specifically intended for an AI. That, coupled with the research suggesting students are often more willing to ask a question of an AI rather than a human—especially if they fear the question is dumb or will reveal them to be an impostor in whatever area—suggests that separating out questions for an AI from general support could deliver some major benefits.
Along those same lines, I’ve said in the past that I’m somewhat skeptical about AI for course support because the kinds of questions it can answer well are the kinds of questions students shouldn’t need to be asking—if the AI can answer the question because the answer was in the syllabus, then either the student didn’t read the syllabus (which probably isn’t a behavior we want to support) or the question was not presented well (which is a problem we can fix). But many of the questions I saw to DAI-vid that fell into this “teaching assistant” bucket were questions that go beyond the level of detail we would provide in a syllabus: they either connected content with assignments or got into very unique edge cases for which, while an answer is known, it is generally not shared to everyone. So, there’s potential here as well.
Something similar applies for advising. I’ve been skeptical about AI for advising in part because so many questions we see go to advisers are not answerable by AI: either they require knowledge of the student’s own situation or a level of decision-making power we wouldn’t give to an AI. Honestly, a significant number of questions that we see go to advisers are questions for which the answer is known, but the person asking clearly hopes to get a different answer by asking. But the types of questions I see going to DAI-vid, even when it isn’t supposed to act as an “advising bot”, are different from the ones we see going to human advisers. Some do require some qualitative reasoning, and it would be sticky to try to figure out how to answer certain questions about topics like class sequencing and course preparation in a fact-based manner, but they’re questions that I feel could be answered with the right caveats and disclaimers.
Content Updates
One of the things that I’ve really enjoyed about using Chatbase is that because it can use web sites as sources, its knowledgebase can stay up to date somewhat passively. For example, earlier today someone asked my preloaded question “What are some of David’s recent research papers?” DAI-vid replied:
David Joyner’s recent research includes a paper titled “CODING4THERAPY: ENHANCING COGNITIVE AND SOCIO-EMOTIONAL SKILLS IN CHILDREN WITH ADHD,” co-authored with Bianca Toto. This paper was presented at the 2024 ACM Virtual Global Computing Education Conference. It explores an online intervention that integrates informal coding education with therapeutic goals for children with ADHD. For more of his publications, you can visit his publications page on his website.
That answer is correct, but that paper was only published in December. I haven’t loaded any new sources into DAI-vid since November. But I did add that paper to my publication list, and so Chatbase automatically picked up on it.
One of the big unsolved issues in at-scale online education is content maintenance: it’s still a heavy lift to refresh courses. As a result, we treat the core material sort of like a textbook, both in production and usage: it’s the timeless material that we feel comfortable committing to a semi-permanent medium (whether that be print or video). Just like a course may use a decades-old textbook and bring it up to date with the live lectures, so also we use our course videos as the course canon and fill in the gaps with assignments, discussions, readings, office hours, etc. But something like this, applied to course content authoring, could fill some of those gaps somewhat more scalably.
I would argue we’re still not close to such an AI being able to put together an entire course; it’s still easily distractable, prone to hallucinations when you’re getting into the level of depth necessary for college courses, and too non-deterministic to use for the level of reliability necessary to run a course. But I can definitely see a near-future where it’s possible to construct specific course modules, targeted deliberately at emerging content or developing fields and instructed to draw from a narrow set of pre-selected trusted sources, that automatically bridge the gap between where a course leaves off and where the field has moved.
And that’s going to be necessary; the entire last chapter of The Distributed Classroom is about how the rapid pace of technological change is going to run into the lag in disseminating new knowledge out from the creators. A system like this that could immediately pivot and distill recent research papers into an up-to-date course module could help address that challenge.
As I’ve done the last four years (2020, 2021, 2022, 2023), I’m ending the year by listing my top “ten” books that I read the past year. Again, these are the top ten books I read this year, regardless of when they were published. One year I want to spend the entire year reading books that came out that year, but that wasn’t this year and it won’t be next year either.
I don’t typically write reviews for the books I read because typically if I don’t like a book I assume it’s because it’s not for me, not because it’s fundamentally bad, but I do enjoy specifically recommending the ones that I’ve enjoyed. I also make an exception to the “ten” to include entire series that I read together since so often it’s hard to separate their value individually from the value of the series as a whole.
So, my Top “Ten” books of 2024, in no particular order, along with way too many honorable mentions at the end:
The Ministry for the Future by Kim Stanley Robinson. This is probably my most no-brainer book of the year given that it’s already loved and recommended by far more wise people than me, but this is one of the rare books I’ve found that gives practical hope for the future. Many books give a rosier outlook than the more common dystopian future genre, but most don’t come with a playbook for how to actually bring that future into reality. I almost struggle to classify it as fiction since it’s half century-spanning epic, half policy proposal.
Toxic by Lydia Kang. Lydia Kang continues to be one of my favorite authors, but looking at specific books doesn’t paint the whole picture: her range is absolutely phenomenal. In previous years I’ve loved her Victorian mysteries and her non-fiction medical histories, and this year I found out she can write science fiction beautifully as well. They all have a fantastic underlying thread of science to them—it doesn’t drive the plot like it does with Andy Weir, but it definitely grounds it in something more realistic. One thing that bothers me about some mysteries or thrillers is that they don’t give you enough information along the way to be able to make informed guesses about where the story is going, but with Kang, I feel like even if the book doesn’t drop those hints, the information does exist, just in actual science.
The Library of the Unwritten by A. J. Hackwith, along with The Archive of the Forgotten and The God of Lost Words to complete the series. I loved this series: when I read it, I described it as a mix of Seanan McGuire, TJ Klune, and Neil Gaiman, with a hint of Brandon Sanderson thrown in, and that’s remarkably high praise by my standards. I generally love stories about stories as a whole, and this one—similar to McGuire’s Indexing—does a masterful job of playing on archetypes, alongside real-world mythologies as in a Gaiman story and the tender characters of a Klune book.
The Roach by Rhett C. Bruno. Bruno’s Vicarious remains one of my favorite books of all time, so I was excited to pick this up on an Audible sale, and it lived up to the hype I attached to it. The superhero genre has been so done and redone and re-redone that it’s hard to imagine a take on it that actually feels new, but this felt new. It was grounded without being too gritty, it played on common tropes without relying on or instantiating them, and it was more character-driven than story-driven in the best possible way.
Every Heart a Doorway by Seanan McGuire (and the next four books in the Wayward Children series). Given my aforementioned love of stories-about-stories, it’s probably not surprising I adored Seanan McGuire’s novella series. They’re legitimately mesmerizing in how they touch on some extremely dark topics but do so with a levity and poetry that comes across… reassuring, almost? I’m planning to read the next five as soon as the tenth one comes out in January, and I’m excited.
How to Be Perfect by Michael Schur. I wish I’d read this while watching The Good Place, but even a few years removed it was still excellent. I’ve been going through it in conjunction with the philosophy lessons in Kinnu, and I enjoyed how Schur connected so many real topics to more immediate examples around us. I especially enjoyed how both the book and show touch on how hard it is to be truly moral nowadays with everything so interconnected.
Why Demography Matters by Danny Dorling and Stuart Gietel-Basten. It’s always struck me as a given that when we cite statistics about world population growth and birth rates in different countries and such that we’re citing something as real as temperature measurements or heart rates, but this book really shed a light on how much the methodology of demography matters and how much room it leaves to skew statistics in one way or another. Understanding the level of uncertainty is strangely reassuring because as the book says, it reminds us that demography is not destiny, although I’d love an updated edition post-COVID.
Zoey Is Too Drunk for This Dystopia by Jason Pargin. A previous book in this series was on my list of top books of 2022, and it’s definitely my “But wait, hear me out…” selection. I love humorous books, but this entire series is so much more than just humor: it’s a fantastically written satire of the role of technology in our society. Honestly, this latest one in the series is probably even better because it actually lets so much of the science fiction technology fade to the background; the plot revolves mostly around technology that is available today, a near-future evolution of always-on wearable cameras and Twitch streams. This time it focuses more on satirizing the visible and invisible movers in politics, and it’s prescient and—again—somewhat reassuring in its takeaways.
The Making of Another Major Motion Picture Masterpiece by Tom Hanks. It took me about a quarter of the way through this book to realize what it even really was, but once I figured it out, I was hooked. It’s less of a novel telling one overarching story and more of a Love, Actually-style book of entwining plot lines set in the production of a movie—except unlike those ensemble pieces, this one doesn’t need each story to be driven by a romantic undertone. It provides what I have to assume (given the author) is a very realistic behind-the-scenes look at how movies actually come to be, and it’s intriguing to hear how so many things we take for granted on these massive enterprises come together.
Influence: The Psychology of Persuasion by Robert B. Cialdini. This might be my newest inclusion in books I think everyone should read: not to be able to persuade others, but to understand how we’re all being persuaded so much of the time. Understanding phenomena like how people try to manipulate our desire to be internally consistent would be a game-changer at a societal level if more people recognized the ways in which they’re persuaded to act even against their own best interests.
And a couple honorable mentions:
Guards! Guards! by Terry Pratchett. Terry Pratchett is one of my favorites, but few of his books are individually among my top of any given year; it’s more the sum total of the world he created that ranks highly to me. Guards! Guards! just happens to be one of the best.
Here and Now and Then by Mike Chen. I love when science fiction gets a bit more soft, and this book straddles that line beautifully; it’s character-driven and tender, but still with some thrilling elements.
Anaximander by Carlo Rovelli. I had no idea what I was getting into with this, but it was a joy to read Rovelli’s writing on a historical figure I didn’t previously know anything about.
The Time Traveler’s Wife by Audrey Niffenegger. I picked this up on sale figuring that I may as well give a book this well-known a try, and it did not disappoint at all. It wasn’t at all what I was anticipating.
Final Fantasy VII Remake: Traces of Two Pasts by Kazushige Nojima. I read this alongside playing Final Fantasy VII Rebirth, and it added a good bit to the story for me. The writing is an interesting style that meshes really well with the actual game, but more than anything, reading the inner monologues of characters you generally only see from the outside added a lot to the game.
The Skyward series by Brandon Sanderson. I didn’t include this mostly because I included Skyward on my list last year, but my daughter and I both adored this series. It’s everything I’ve loved about Sanderson’s other works, and honestly, there’s very little young adult about it. I think that’s what makes his series so remarkable: they’re legitimately universal in age range.
Starter Villain and Kaiju Preservation Society by John Scalzi. I’ve read three books by Scalzi, and I’ve had the same thing to say about all three: fantastic books that I wish were 2-3x longer because I felt like the story was just getting started when it ended.
Looking through my selections, I also notice some common themes. I read less non-fiction this year, but that’s mostly because I tend to read non-fiction in hardcopy while fiction is on Kindle or audiobooks; I’ve had more time in my routine for audiobooks or e-reading, so I’ve read a lot more fiction this year as a result. For what I select as my favorites, I gravitate toward books that are reassuring; maybe that’s a reflection of the state of the world, but I like books that paint an optimistic outlook for the future, or those that shine a light up to the present to show it may not be as bad as we perceive it to be.
Of those 106 books, 75 were audiobooks, 18 were physical books, and 13 were on Kindle. I’ve been working on the same physical book since August, but my time to sit down and read an actual book that I have handy has frustratingly evaporated.
A couple years ago I started having a biweekly coffee hour open to everyone in our program. It came out of a couple prior experiences with our seminars where the intended presenter no-showed, but the conversation we had in the meantime ended up spawning some really great discussion—so these regular coffee hours were an attempt to preserve that dynamic.
A few weeks ago during one of those coffee hours, one of the attendees commented on what must be my “extraordinary” time management skills—which, first and foremost, is a testament to how I apparently look like I’m managing my time decently even when it certainly never feels like that’s the case. But I shared a handful of the strategies that I use, and while I’m sure none of them are truly original to me, they were relatively novel to those in attendance.
Since then, I’ve repeated quite a few of them in other conversations. My stated purpose for this blog was to have somewhere to put thoughts that I find myself repeating relatively frequently so that I can just link to a more thorough version of those thoughts, so I’m going to share some of those strategies here as well.
Though let’s be honest: if there wasn’t an excuse to call this “Quantum Time Management”, I probably wouldn’t bother to write this down. I’m a sucker for a catchy title.
Strategy #1: The Daily Bookmarks
As part of my role teaching several classes, there are a large number of forums I like to check daily to see what’s going on, if there are any unanswered questions or simmering crises, any opportunities to build on an interesting discussion topic, etc. For a long time, though, the challenge I often found was that this task is far and away the easiest task to have fall off during busy times: I have phenomenal TAs dedicated to monitoring the forums, so nothing breaks if I don’t check in for a couple days, and during the busiest times of year it’s hard to make time for anything that won’t break if you don’t do it.
In addition, there are a number of tasks I need to do on a daily basis for all my classes. I have announcements to post, scripts to run to synchronize gradebooks between different platforms, and scripts to handle a few more routine tasks, like unpinning old threads on my forums or cross-posting announcements from email to Canvas. Some of these could be scheduled, but I feel better when I remain as the trigger for these actions: it lets me make last-minute tweaks, verify certain things are appearing correctly or timed appropriately, and react appropriately to recent developments. These, too, are easy to lose track of when times get busy—when I first started synchronizing gradebooks between edX and Canvas for my undergraduate class, for instance, I intended to do it weekly… but it ended up happening monthly or even more rarely.
That is, until I created a routine for myself based around a pretty simple little browser feature—the Open All in Tabs feature. I’ve now got a daily routine where the very first thing I do each day—before opening email, Teams, or Slack, before looking at my to-do list, before anything can distract me—is open all my daily bookmarks in tabs and go through them one by one. Over time, there have grown to be quite a few of them:
The titles don’t matter, of course: what matters is that they’re all open, and each one remains open until I’ve looked at it and made sure I’m comfortable with it for the day. For course forums, that means filtering by unresolved posts and making sure everything is resolved, as well as checking out some of the recent threads for places I can contribute. For program forums, that mostly means checking out the latest activity. Recently I’ve even added regular approvals to this workflow that I had a tendency to overlook otherwise: it takes all of 3 seconds per day to glance at spend approvals or absence requests if there aren’t any new ones, but that 3 seconds prevents them from sitting unacknowledged for days given the finicky email notification systems.
Until recently, I even launched my daily scripts from this bookmark folder: I don’t actually know what happened, but until two weeks ago, launching a bookmark to a Python script in Firefox would run that script. Starting two weeks ago, it just opens the script as plain text, but I still use those bookmarks as a reminder of which scripts need to run each day.
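If the browser trick ever breaks for good, a tiny launcher is an easy substitute. Here’s a minimal sketch; the script names below are hypothetical stand-ins for my actual daily scripts, not real files:

```python
import subprocess
from pathlib import Path

# Hypothetical script names -- substitute your own daily scripts.
DAILY_SCRIPTS = [
    Path("sync_gradebooks.py"),
    Path("unpin_old_threads.py"),
    Path("crosspost_announcements.py"),
]

def run_daily_scripts(scripts, dry_run=False):
    """Run each daily script in order and return the commands used.

    With dry_run=True, nothing executes; you just get the command
    list back, which is handy for verifying the roster of scripts.
    """
    commands = [["python", str(script)] for script in scripts]
    if not dry_run:
        for cmd in commands:
            subprocess.run(cmd, check=True)
    return commands

print(run_daily_scripts(DAILY_SCRIPTS, dry_run=True))
```

Of course, a launcher like this loses the thing I liked about the bookmark approach: it runs everything at once instead of letting me eyeball each step, which is why I keep myself as the trigger.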
The upshot of all of this is that I’m far more in touch with my classes than before developing this routine, and far fewer things fall through the cracks if they fall into predictable buckets.
There’s still a lot of room for improvement, granted. I still have a remarkable love-hate relationship with Slack chat: it’s usually the next thing I check after my daily bookmarks, but there’s such a strong tendency for things to get overlooked there that I feel I constantly have dozens of things waiting on me. Honestly, I think a lot of that is a product of our tendency to use Slack for tasks that are still better suited for email, but that might just be me shouting at clouds.
Strategy #2: The Pre-Prioritized To-Do List
Like most people, I’ve kept a personal to-do list for years. For a long time it was just a .txt file synced across several devices until I switched over to using Google Keep a couple years ago. Even then, though, it wasn’t really a tool for time management; it was really just a tool for making sure I didn’t forget important things.
About nine months ago, though, I tweaked how I handle my to-do list a little bit. I noticed that I often got myself in a rut of being unable to decide what to do next; and in the absence of a decision on what to do next, I ended up just clicking back and forth between Slack and email for a long time, keeping on top of those things but struggling to make progress on any real larger tasks. A big part of that was that I would get stuck on tasks I didn’t want to do at the moment it was time to decide to do them; or, I would get stuck between a high-priority unpleasant task and a low-priority pleasant one.
To address that, I separated out the process of prioritizing my to-do list from actually working through the items on it. After checking my daily bookmarks and checking in on Slack and Teams, the next thing I try to do (keyword: try; if I know I’m expecting something via email, that often takes precedence) is look at my to-do list and prioritize the tasks on it—primarily by importance, but also to an extent by time required. The effect of that is that it removes the decisions on what to do next later in the day: what to do next is always just the next task on the list. And because the decision on priority occurs separately from actually starting the task, it’s not as difficult to prioritize an unpleasant task; then when it comes time to move on to that task, it’s easier to get started because it doesn’t feel as if starting that task is a decision. It’s simply the next thing on the list.
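The “pre-prioritized” idea is simple enough to express in a few lines. This is just an illustrative sketch, not the tool I actually use (mine is plain Google Keep); the task names and fields are made up:

```python
# Toy sketch: pre-prioritize once, then never decide "what next?" again.
# Sort by importance first (1 = most important), then by estimated time.
tasks = [
    {"name": "Grade project 3", "importance": 2, "minutes": 90},
    {"name": "Reply to dean",   "importance": 1, "minutes": 10},
    {"name": "Update syllabus", "importance": 2, "minutes": 30},
]

queue = sorted(tasks, key=lambda t: (t["importance"], t["minutes"]))

# Later in the day, "what to do next" is always just queue[0];
# no decision required at the moment of starting the task.
print([t["name"] for t in queue])
```

The point isn’t the sorting; it’s that the sort happens at a different time of day than the doing.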
I’ve tried looking into some more sophisticated management tools, but so far I’ve found the time required to get them set up is too much friction for the value I think I’d gain; a simple to-do list (which always has a home on my bottom-left monitor so it’s hard to ignore) with easy prioritization that syncs to my phone has paid some pretty big dividends.
Strategy #3: Separating Email Filtering from Email Answering
I’ve joked in the past that really, my job is professional email answerer. I spend a lot of time answering email. For a long time I followed the common practice of leaving messages marked ‘unread’ until they were addressed, but the challenge there—similar to to-do list priority—was that it grouped all as-yet-unaddressed emails together under one label, whether they were things I could not reply to yet, needed more time to reply to, or simply weren’t time-sensitive. As a result, I’d often find myself inadvertently taking way too long to respond to an email even as I responded to several less-important ones far faster because they were simply easier.
Another part of my email workflow is that I have a lot of Quicktext templates for routine messages, but as yet I haven’t found an easy way to carry those over to my phone or laptop (a Chromebook); Thunderbird is my email tool of choice, where I answer 95% of my messages, but that similarly leaves me without some tools I use regularly when I’m on the go. That meant my lists of unread messages continued to pile up with things I simply couldn’t address away from my desktop, and routine messages that could be addressed via a Quicktext reply got grouped in with emails that required longer, more thoughtful responses.
When I went to Japan last spring, I knew I couldn’t just check out of email for a week, but I knew there were going to be lots of things that either weren’t time sensitive or that would be hard to answer from my laptop. So, I set up a scheme where I’d filter all my incoming email into three folders: an ASAP folder, a Today folder, and a This Week folder. ASAP were messages that either (a) were truly time sensitive and needed to be addressed as soon as I could, or (b) messages that I knew would take <30 seconds to answer once I was on the right platform. Today messages were those that needed a response within 24 hours or so, but were not so time sensitive that they needed to take priority over my to-do list. This Week were those messages that needed a reply, but could wait until I had time; there was little time-sensitive about these at all.
What I discovered in this process, though, was that there was a cognitive benefit to separating out sorting my email from responding to my email. Sorting was a task that typically could exist in a predictable time scale: if I open email and see 30 unread messages, I know that it will probably take about 5 minutes to sort them into their respective folders. Typically, most won’t require any reply, then a handful will go into each of the above folders. At the end of that, I have in the back of my mind how much time I’ll need to spend on email in the near future. It’s no longer a lurking unknown quantity of work. Plus, for those emails that are going to require some extra thought, it goes ahead and sticks them in the back of my mind to brainstorm while doing other things.
Once I’ve done that for the day, I feel as if I at least have wrapped my head around email for the day. Sometimes I’ll check again later in the day, but not always: I don’t feel like it’s reasonable to expect all emails to be read in under 24 hours, and so once I’ve done this once for a day, I feel I’ve wrapped my mind around what fraction of my day email is going to command. That tends to be less frustrating than getting through half the day and feeling like all I’ve done is answer email because I didn’t have a priority structure, or getting to the end of the day only to find an email avalanche waiting before I can sign off.
There are a handful of other tricks I use for email in general—I have an automated filter for moving messages from “priority senders” into a dedicated folder so I’m more likely to see time-sensitive emails faster, and I often schedule my replies to non-time sensitive emails to go out a few hours later so that the replies to my replies don’t pile up before I’m even done getting through my initial pile. I also have separate filters to move anything sent to a mailing list, anything sent via an automated platform, or anything with an ‘unsubscribe’ link to separate folders since those are unlikely to be as time sensitive. But separating prioritizing email and answering email has been the biggest improvement to my overall relationship with email.
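If you wanted to prototype rules like these outside a mail client, the logic is just an ordered cascade of checks. A toy sketch follows; the folder names, sender addresses, and rule details are illustrative, not my actual Thunderbird filters:

```python
def choose_folder(sender, subject, body, priority_senders=frozenset()):
    """Route a message to a folder using simple, ordered rules."""
    text = body.lower()
    if sender in priority_senders:
        return "Priority"      # time-sensitive senders surface first
    if "unsubscribe" in text:
        return "Newsletters"   # bulk mail is rarely time sensitive
    if sender.startswith("noreply@"):
        return "Automated"     # platform notifications
    return "Inbox"             # everything else gets hand-sorted

# Example: a message with an unsubscribe link skips the main inbox.
print(choose_folder("list@example.com", "Digest", "Click to unsubscribe"))
```

The ordering matters: a priority sender should win even if their message happens to contain an “unsubscribe” link, which is why that check comes first.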
Strategy #4: The Daily and Weekly Task Checklist
This strategy actually preceded my daily bookmarks folder, and it’s what gave rise to that idea, but it has some merits of its own separate from that routine. A couple years ago, I became quite uncomfortable with the realization that sometimes, I myself didn’t even know how far behind I was on email or course forums or various other tasks that are meant to be maintained regularly. So, I created a simple spreadsheet—which now lives in Google Docs in the tab alongside my to-do list—with columns corresponding to various tasks. Each day, I mark off which of those I did during that day. At the start, the tasks were: checking each course forum; checking each MOOC forum; responding to all messages in my priority senders folder; and reaching Inbox 0 in my main folder.
The intent of this wasn’t to create a pressure to actually do all of those things each day; instead, it was just meant to build up a sense of pressure if it had been a while since any were done—or, conversely, to instill a sense of accomplishment when they all were done. What I found was that after a few busy days where I didn’t touch base on my course forums, for instance, in the back of my mind the forum was always something I was behind on. There was no “inbox 0” for the forums because at the time, we didn’t heavily emphasize what it meant for a thread to be “resolved”. As such, it never really felt like I was “caught up” after falling behind, and that just made it even harder to check in because it felt like a much bigger task than it was. Keeping track of how long it had been created an “inbox 0”-type feeling for it, knowing that only a day or two had passed since last time I had a full handle over everything that was going on—or, conversely, it created an appropriate “inbox 999+” feeling if it had been a few days.
What I found over time, though, was that keeping track of that unsurprisingly made it easier to actually accomplish those things each day. Knowing “I’ve checked in on the forum 10 workdays in a row” meant (a) I know that it’s unlikely it’s going to take a lot of time today because not much new can happen in 24 hours—or if it does take a while, it’s because it should because something big happened that I don’t want to let simmer—and (b) I want to keep my streak going! Looking back now, the only times in the past year and a half since I started this system that I haven’t checked in on my course forums were during conferences, during vacations, and between semesters.
Since starting this at the beginning of 2023, I’ve added other things that I track, as well as some additional nuances. For example, rather than just a boolean assessment of whether I reached Inbox 0, I track for each day whether I (a) ignored email altogether, (b) filtered all that day’s new emails, and (c) answered all the emails in that day’s ASAP, Today, and This Week folders. I’ve also added a separate tab for tasks that are only done weekly, bi-weekly, monthly, and so on; that last addition was spawned by an expensive HVAC repair after discovering I hadn’t been replacing air filters nearly regularly enough.
The entire point of all of these exercises is to take certain things that tend to linger in the back of my mind—“oh, I need to remember to change the fish tank filter this week”; “dang, how long has it been since I reached inbox 0?”—and externalize them so they no longer feel like vague, lurking, unknown obligations. Instead, they’re clear, objective, and referenceable.
Strategy #5: Quantum Time Management
Finally, the strategy that gave this blog post its title: quantum time management is a fun name for a really simple solution I’ve found to decision paralysis. Any time I find myself spending more than a couple minutes struggling to make a decision, I just leave it up to chance. I’ve decided the time saved by not agonizing over it anymore is, on average, going to be more valuable than a slight improvement in the actual decision if I spent more time considering. Most often, I use this strategy when I have multiple to-do list items that really could exist in any order: I find that rather than think about how to prioritize them, I might as well just leave them to chance. The time saved will probably let me accomplish both anyway, while thinking about it more would leave me only enough time to accomplish one. I’ve also used it to pick what book to read next, what game to play next, where to order from GrubHub, etc.
Why is that “quantum” time management, though? Because the specific RNG I use is qrandom.io, which provides an API to access the result of quantum measurements as the seed. Does it really matter that much? No; but it’s fun to me to think that under one interpretation of quantum mechanics, there are lots of other Davids out there living slightly different lives because every time I went to make a decision, the universe split into several sub-universes each with a different choice for me. But ultimately, the value is simply that by leaving it up to chance, I save myself both the time and decision-fatigue that would come from forcing myself to choose.
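The mechanism itself is trivial; the only “quantum” part is where the randomness comes from. A minimal sketch in Python, falling back to the OS entropy source—swapping in bytes fetched from a QRNG service like qrandom.io would only change the seed, not the decision logic:

```python
import random

def pick_next(options, rng=None):
    """Break decision paralysis: pick one option uniformly at random."""
    if not options:
        raise ValueError("nothing to pick from")
    # Default to OS entropy; pass a seeded Random (e.g., seeded from a
    # quantum RNG service) to make the universe do the splitting.
    rng = rng or random.SystemRandom()
    return rng.choice(options)

# Any set of equally-prioritized tasks can be left to chance:
print(pick_next(["write report", "review PRs", "answer forum posts"]))
```

The task names are placeholders; the whole trick is that the function returns an answer immediately, which is the entire value proposition.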
Room for Improvement
There’s still a ton of room for improvement in my time management, though, and if anyone has any suggested strategies, I’d love to hear. I still often find myself caught up in a loop between Slack, Teams, and email if it’s a busy day, keeping on top of all three but unable to push myself into starting more focused work if I’m worried about getting interrupted by something time sensitive in one of these other areas. Replies to Slack threads especially are my nemesis: if I’m not mentioned or a Slack thread reply isn’t sent to the channel, I’ll miss it for weeks regardless of how important it is. (But I put the blame for that on Slack.)
While I’ve wrapped my routine around repeated daily tasks, I struggle a bit with tasks that are repeated but on odd cadences; I’m often caught off guard by tasks that are done once per semester, for instance, because they aren’t routine enough to incorporate into a separate list or schedule, but they are so routine that it doesn’t strike me to add them to my to-do list far in advance. I’d love it if Google Keep had a feature for augmenting list items with dates on which they would appear, so that I could note them well in advance but keep them out of sight until they’re needed; I imagine another platform might handle this, but hopefully such a feature would not radically increase the complexity of the interaction, since a bit more friction might stop me from using the tool at all.
I also have a bad habit of operating as an “event-driven” program when dealing with certain emails: I send a message off, knowing that the reply I get from the recipient is the “event” to trigger me to do something else, and that until I receive that event the task is off my list. Then, if I never receive a reply, the task just stagnates, even if someone else was waiting on me. Outlook has some features to address this, but as far as I’ve seen, they only work if you deliberately mark each email with a note for when to remind yourself; when this only is helpful for <5% of email it’s hard to keep up the motivation to use that feature.
For all the things we’ve designed AI agents to help with, it surprises me that there’s not a more sophisticated AI email assistant out there. I have no doubt that it would be pretty easy for an AI to prioritize my email for me with a reasonable degree of accuracy. Regardless though, I imagine there will always be room for improvement here: these are just some of the strategies I’ve found useful the last several months as life has gotten endlessly busier.
A few days ago, I posted this picture to Facebook, with the caption, “Find yourself someone who looks at you the way Boggle looks at me when he wants my soup.” Boggle, obviously, is the cat’s name.
A few hours later, my wife sent me this screenshot of Facebook’s suggested replies to this photo:
This struck us both as unsettling: not because the AI has gotten good enough to generate such accurate approximations of how someone might reply (including the cat’s name, the emoji usage, the soup reference, and the casual meme reference to a “spirit animal”), but because the pattern of offering these as a menu of reactions represents a misunderstanding of the function of these interactions in the first place. Even if the actual text (and emojis) of the response are identical, there feels to me something fundamentally different between knowing someone typed them out themselves rather than selected them from a menu of options.
What is it that makes it different, though? Is it the effort, knowing that the commenter had to actually go through the process of typing out the letters and selecting the emojis? I don’t think so; I imagined knowing that someone dictated these via voice-to-text or typed these on a computer rather than a smartphone, and it didn’t substantively affect my perception of the message.
Instead, there’s something different about knowing the content was generated by the commenter rather than merely selected. It’s in some ways akin to the distinction between recognition and recall, where the ability to recall something represents stronger understanding than merely the ability to recognize it when prompted. Similarly, the process of generating a response oneself feels to me as if it represents something stronger than merely selecting a pre-generated response. In many ways, this likely connects to why reader-generated comments are regarded as more impactful than merely the number of reactions: it represents something stronger about the feelings of the person leaving it. Offering the ability to select a pre-generated response circumvents that, even if the response selected is identical to the one that the commenter would have left on their own.
Generative AI, I feel, is in a position right now that all revolutionary technology goes through: we recognize its potential as a powerful new tool, but we haven’t yet identified what needs it addresses. Generative AI is a solution looking for problems. And in the process of searching for the right problems, we’re coming across lots of problems it is not good at solving—such as the problem of needing a first-person account of a real experience with a 2e child in the NYC gifted and talented program. A human could have generated a response identical to the Meta AI’s from that story, but a human-generated response would have value—AI-generated, it does not.
This conundrum comes up a good bit in my teaching. My rules regarding how much students may copy from AI are generally more restrictive than the rules students may have in the workplace. The reason is similar: in an educational context, the work generated is valuable insofar as it represents the student’s knowledge of some content. In the workplace, the work generated is valuable insofar as it is able to accomplish a task. What the work represents is different. Copying code from AI accomplishes one goal, but not the other.
In the opening synchronous meeting for one such class this semester, I was asked about this policy: if the work itself is the same, what does it matter whether it came from AI or not? I explained my thoughts with an analogy: imagine you have an assistant, whether that is an executive assistant at work or a family assistant at home or anyone else whose professional role is helping you with your role. Then, imagine your child’s (or spouse’s, I actually can’t remember which example I used in class) birthday is coming up. You could go out and shop for a present yourself, but you’re busy, so you ask this assistant to go pick out something. If your child found out that your assistant picked out the gift instead of you, would we consider it reasonable for them to be disappointed, even if the gift itself is identical to the one you would have purchased?
My class (those that spoke up, at least) generally agreed yes, it would be reasonable to expect the child to be disappointed: the gift is intended to represent more than just its inherent usefulness and value, but also the thought and effort that went into obtaining it. I continued the analogy by asking: now imagine if the gift was instead a prize selected for an employee-of-the-month sort of program. Would it be as disappointing for the assistant to buy it in that case? Likely not: in that situation, the gift’s value is more direct.
This gets to the core distinction I feel tools using generative AI need to address: to what extent is the artifact they are generating valuable in and of itself, and to what extent is the artifact they are generating valuable only insofar as it is authentic to the way in which it is perceived to have been generated? In the workplace, a block of code may be valuable insofar as it accomplishes the goal of the program, while in a class, it may be valuable only for what it says about the student. In gift-giving, a gift may be valuable in a professional setting based only on its inherent value, while in a personal setting it may be valuable due to a combination of value and the authentic process that generated it. We can apply this analogy in numerous other places; this is why it is appropriate to bring a store-bought cake to a corporate event but not to a baking competition, or why we regularly see internet personalities criticized for apparently using generative AI to author apology videos.
There are implications of this view for two audiences. For regular users, it’s important for us to consider the trade-off between authenticity and whatever value generative AI is delivering when electing to use these tools. For example, there’s a built-in tool here in WordPress that lets me simplify, expand, or summarize this post. I can be a bit verbose, and so I as a user have to consider whether the loss of authenticity that would come from using that tool is worth the apparent gains in readability or simplicity. We each individually need to attend to whether using these tools undermines the authenticity of the artifact we’re producing. This can be tough, of course, because so often using these tools is going to be much easier than producing the artifact ourselves. Generative AI is like the high-fructose corn syrup of content: it’s cheap and easy to use, but doesn’t yield as good of a result and has some long-term impacts if we use it too much. We have to be careful about when we use it because it would be so easy to get carried away.
For those building tools that leverage generative AI, the same implication applies, but at a broader level. To what extent are we helping our users circumvent authenticity, and to what extent are we operating in areas where authenticity wasn’t part of the underlying value of the artifact? One area of rapid development for similar technologies over the past few years has been in photo editing: Photoshop and other tools can now do in a single click what it previously took a professional several hours to do. While this poses some obvious concerns about technology replacing workers and other such issues, I’ve not yet heard concerns raised about the authenticity of this exercise. But when we develop tools that generate content in settings where its only value would be in its authenticity—such as the aforementioned instance of the Meta AI fabricating a story about a child in a New York School system—we are creating something self-defeating. It’s solving a problem that it inherently can’t solve because it removes the authenticity from the artifact, and the authenticity is the only reason the artifact has relevance in the first place.
So, as both users and creators of Generative AI tools, it’s important that we keep in mind to what extent certain artifacts have value because of what they are, and to what extent they have value because of the authentic process that created them. Generative AI can be a useful tool for creating artifacts whose entire value is just the artifact itself, but if authenticity matters, generative AI is a poor fit.
That said, for the sake of science, I did drop this blog post into Jetpack’s built in AI summarizer. If I had posted the following blog post instead of the one above, would it be more or less impactful? Is that difference because of the different content, or because of the process that generated it? Does it matter that you know it was AI-shortened? Does it matter that you know the input was a full-length post by me rather than a short prompt to generate a longer post?
A few days ago, I posted a Facebook picture of my cat Boggle with a caption about the way he looks at me when he wants my soup. My wife then sent me a screenshot of Facebook’s suggested replies to this photo, which felt unsettling. The AI-generated responses were accurate but missed the personal touch of someone typing out a message themselves.
This lack of effort in AI-generated replies reminded me of the difference between recognition and recall. Generating a response personally feels more meaningful than selecting a pre-made one, much like how a student’s own work demonstrates their knowledge better than just copying answers.
I explained this in a class with an analogy: If you ask an assistant to buy a gift for your child’s birthday, the child might feel disappointed knowing it wasn’t chosen by you. The gift’s value is tied to the thought and effort behind it, unlike a professional setting where the inherent value of the gift is enough.
This applies to generative AI too. The value of an artifact depends on whether its authenticity or just its existence is important. For instance, a block of code is valuable in a workplace for its functionality, while in a class, it shows the student’s understanding. AI-generated content is useful when the artifact’s value is independent of its creation process, but not when authenticity matters.
We need to be cautious with generative AI. It’s easy to use but can undermine authenticity. Users and tool creators should consider whether the trade-off between ease and authenticity is worth it. For skills like photo editing, AI is helpful because authenticity concerns are less. However, AI-generated personal stories or apologies might lack the necessary authenticity, making them less impactful.
Ultimately, generative AI can simplify creating certain artifacts, but it’s vital to assess when the authenticity of the process matters.
I started my freshman year at Georgia Tech on August 15th, 2005—which itself was the 6,772nd day of my life.
As of today—February 29th, 2024—that day was also 6,772 days ago. I never really left Georgia Tech after I started—I began teaching my own classes a week after finishing my PhD, and even while I worked at Udacity, 100% of my time was spent on the OMSCS program. So, that means that as of today, I’ve spent half of my life at Georgia Tech.
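For the curious (or the skeptical), the arithmetic is easy to check with a couple lines of Python, using the dates from this post:

```python
from datetime import date

start = date(2005, 8, 15)   # first day at Georgia Tech
today = date(2024, 2, 29)   # the "as of today" in this post

# Days elapsed since that first day on campus
days_since_start = (today - start).days
print(days_since_start)     # 6772 — the same as the number of days before it
```

Which is exactly the number of days I'd been alive before that first day.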
This seems like the perfect occasion for a completely unnecessary graphic:
To date, a little over half of that time has been spent as a student: 3,546 days across three degrees, with 3,226 as a teacher and researcher since then.
I’ve known this day was coming for quite a while, actually. I calculated it and put it on my calendar over two years ago. During that time, I knew I wanted to say something to mark the occasion. I thought about writing about why I stuck around, but the truth is that it has never really occurred to me to even consider leaving. Every stage has led smoothly into the next:
I came to Georgia Tech because I wanted to stay in-state and study computer science (…and because my girlfriend at the time was already here, let’s be honest).
I stuck around for a Master’s because I accidentally graduated earlier than I intended and didn’t have anything else lined up yet.
I stayed for a PhD because I learned late in my Master’s about these new efforts to create intelligent tutoring systems, and—as a private tutor on the side myself—I was instantly really interested in the idea.
I stayed after my PhD because developing a course with my PhD adviser sounded like a fun thing to do for a year.
I stayed after that because I discovered that I love online teaching: it has been everything I love about teaching—plus a lot of what I love about coding—without the stuff I never liked, like having to be compelling in front of a live audience.
And I’m still here because—at the risk of being overly quixotic—I really enjoy what I get to do. There’s a Japanese concept called ikigai that has been summarized by some nice infographics, and it’s rare to find something that sits at the intersection of all four areas—but for me, this does. I enjoy what I get to do, I believe the things we’re working on are improving the world, I (obviously) have made a living at it, and I think I’m halfway decent at it. Parts of it, anyway.
I thought about writing all the people I would want to thank for helping me get here, but this would be a far longer post and I’m sure I’d still forget several people and be mortified when I realized. So instead I’ll just say: it’s been a fantastic 6,772 days, and I’m excited to see what the next 6,772 hold.
As I’ve done the last three years (2020, 2021, 2022), I’m ending the year by creating a list of my top ten (well, ten-ish) books that I read the past year. Release dates are all over the place, so rather than try to narrow down the books that I enjoyed that were released in 2023, I figure it’s easier and more interesting to just look at the books that I read during that year.
As always, I generally don’t review the books I read because I tend to think that when I don’t care for a book, it’s more a reflection on my tastes than the book’s quality, but listing my favorites from the year has always struck me as a good compromise.
So, my Top 10(ish) books of 2023, in no particular order, along with a handful of honorable mentions (photo shows the top “ten” themselves):
Opium and Absinthe by Lydia Kang—or The Impossible Girl by Lydia Kang, either would have been one of my choices for the same reasons. I first became acquainted with Lydia Kang when I read her Quackery: A Brief History of the Worst Ways to Cure Everything last year, and I just assumed it must be a different person with the same name until GoodReads showed both under her profile. I figured she was just branching out, but what I found compelling was the extent to which her medical career and writing permeated her novels as well. The level of medical detail in both books really set them apart from other similar-era mysteries.
Providence by Max Barry. I enjoyed this one so much I wrote a dedicated post about it. In a nutshell, it touches on a variety of elements that are usually left out of science fiction. I described it as Becky Chambers meets Orson Scott Card, and I think that’s still accurate: like Card, it’s a more tactical science fiction book than many I’ve read, but like Chambers, it’s deeply focused on the human elements.
The Final Empire by Brandon Sanderson (but really, The Well of Ascension and Hero of Ages, too). So much has been written in praise of Sanderson already that it’s trite to try to add anything, except that I’ll say this: based on how much praise I’ve heard for Mistborn over the years, I went into this series with super-high expectations, and it still exceeded them. (And while I’m idolizing Sanderson—my daughter and I listened to Skyward this year together, and it’s just as phenomenal.)
The Management Style of Supreme Beings by Tom Holt. You could tell me that Terry Pratchett was still alive and had just switched to a pseudonym and I would absolutely believe you. It had that exact same brand of humor, but channeled into a more familiar world.
Project Hail Mary by Andy Weir (and, to a large extent, Artemis by Andy Weir as well). Like Sanderson, Weir is so popular that it seems silly to add my two cents, except to again note that the book exceeded the high expectations I came into it with. I probably nose Project Hail Mary above Artemis based on Ray Porter’s fantastic narration—it pulled in some mental connections to the Bobiverse books that were remarkably compatible. For both books, though, I love how Weir establishes some believable technological rules, and then painstakingly follows them to their logical conclusion. I felt like the entirety of Project Hail Mary was set up in the first few pages in a way that it couldn’t have proceeded any differently, save for some believable late twists.
The Devotion of Suspect X by Keigo Higashino. This was recommended in Games for Your Mind by Jason Rosenhouse, and appropriately so—it’s a mystery built entirely around logical deduction rather than the absurd coincidences and personalities common in others of the genre. It was particularly fascinating to me how the book managed to show things from both sides’ perspectives, but yet still provide a twist at the end.
Seven and a Half Lessons About the Brain by Lisa Feldman Barrett. This book was so short that I finished it in a day, but I’ve kept coming back to remind myself of some of the lessons. They’re remarkably well-explained, they hit the perfect balance between surprising and yet obvious, and so many of them are pretty directly relevant to everyday life.
Hello World: Being Human in the Age of Algorithms by Hannah Fry. I read this immediately after Life 3.0 by Max Tegmark which was so out there it may as well have been science fiction, and I found it to be a perfect companion, both more down to earth and more immediately relevant. What’s fascinating especially is that it was written in 2018, and yet the release of ChatGPT only made it more relevant and current, not obsolete.
Because Internet: Understanding the New Rules of Language by Gretchen McCulloch. This, along with Semicolon: The Past, Present, and Future of a Misunderstood Mark by Cecelia Watson, has significantly changed—though somehow also reinforced—my views on grammar, writing, and language in the past year. I used to consider grammar and writing a highly structured, rule-driven process, but these books shed light on how arbitrary the rules truly are, and how acceptable it should be to let them evolve. I feel like grammar is sort of like a political map of Europe: it was radically changing for centuries, and then at some point we said “Freeze!” and took the current state as permanent. When that’s about ending wars (as in Europe), that’s great—when that’s about stifling necessary evolution and creativity (as in language), it isn’t. (Sidenote: this is also one of the best read-by-the-author non-autobiographical audiobooks I think I’ve listened to.)
And a couple honorable mentions:
Mind Bullet by Jeremy Robinson. Jeremy Robinson remains one of my favorite authors, and Mind Bullet was my favorite by him this year (I also read the entire Project Nemesis series, as well as Torment, Exo-Hunter, and Tribe)—but it’s gotten a little hard to separate his books enough to put one of them in my top of the year. It’s sort of like including all three Mistborn books, except they’re all so similar in style that either they all deserve to be there or none do… but it’s a consistently entertaining style. Whenever I finish a book I didn’t enjoy, I follow up with a Jeremy Robinson because I know it’ll at least be engaging.
Sapiens: A Brief History of Humankind by Yuval Noah Harari. I loved this one, although this is one where its popularity in many ways has taken on a life of its own, and it’s hard to endorse the book without implicitly endorsing some of the decisions that some people have made citing Sapiens as support. But taken solely for its content, I found it as remarkable as many have noted, especially its emphasis on belief in shared fictions and belief in a better future as huge driving factors to the constructs on which modern society is based.
Several People Are Typing by Calvin Kasulke. The fact that a book could even be written this way (as an entire series of Slack conversations) is an achievement of its own, and the fact that it was able to touch on some deeper questions through that medium is even more remarkable.
What Works: Gender Equality by Design by Iris Bohnet. I honestly had difficulty comparing this to the other books I read this year because most of my reading is primarily for pleasure: this one was so relevant to my job and the classes I teach that it felt more like reading for work. It’s not only a fantastic book about designing with equality in mind, but it’s a great book on design in general.
Impromptu by Reid Hoffman. I’ve been an optimist about the positive impact AI can have on society, and this book—despite its issues—was a nice early effort to call out some specific benefits we should focus on developing with these new AI tools.
Of those 119 books, 88 were audiobooks, 23 were physical books, and 8 were on Kindle. It’s interesting to see that shift: it used to be closer to 1/3rd audiobooks, 2/3rds physical, but the kids’ growing up has eaten into some of the times when I used to read a lot—and morning carpool has added around an hour to daily audiobook listening time now that Lucy and I listen to books together.
Mostly a place where I write down things I repeat often so that instead of repeating them so often, I can just send a link.
Disclosure: I use Amazon referral links in some of my blog posts. That's mostly just a lightweight way to track and see if anyone's even clicking through. If you buy something through one of these links, I may get a bit of money back and achieve my dream of one day being able to buy the nicer set of kitchen scissors that Amazon sells instead of the bargain variety.