
The Robots are Coming! The Robots are Coming!

Nah, the robots are here.

A long one for us today, friends! In this episode, we’ll chat through what I think the arrival of ChatGPT means for us in higher ed. Key point: Yes, we can try to get around it, but really, we should invite it in.

Here are some resources I mention in the episode:

The text of this episode is below.

Hello, everyone, and welcome. You like that spooky intro music? We need a little mood music because we are about to enter a conversation of the future. And the future is now.

Let’s have a nice meaty discussion about ChatGPT, the artificial intelligence software that can create written content.

I’m not going to bury the lede here: My argument in this pod is that the developments in machine learning present a giant opportunity for us to move our approaches to knowledge development, teaching, learning, and assessment into new paradigms.

Further, I think that the future is for people who deeply understand how to interact with artificial intelligence (from here on referred to as “the robots” in my shorthand) because at the end of it, the robots are only as good as their programming: They will always lack the context, depth, responsiveness, and nuance that a human brain brings to every social process.

I think we owe it to our students to teach them how to do that, how to interact effectively with the robots, but since these paradigms are new to all of us, what we’ll really be doing is co-learning and co-creating, which is also a good change.

There are implications here for both individuals and organizations that I’ll talk through. And I’ll do some of your work for you here, too: I’ll share with you examples for how we can take some of what we might be doing now in various disciplines and adapt it for our new environment.

And finally, I’ll give you my two provocative suggestions for greeting our new robot overlords.

I’m just kidding about the overlord part; they’re not really overlords. They’re more like partners. At the end of it is this: Be not afraid. We’re not gonna play defense here because why would we? We’re not gonna be threatened here because there’s no need. We’re gonna take a deep breath and dig into this exciting development. We’re gonna do that.

I have been thinking about robots for a long time, and I blame the Jetsons because I grew up believing that by the year 2000, I would be living in a world where Rosie the Robot would anticipate my every need and bring me everything I wanted. Listen, I’m serious. I was totally convinced that this was going to happen by the time I reached what I felt would be a very old age in the year 2000. For the record, that age was 27, which – in my six-year-old mind – was positively ancient.

These days, the robots are a routine part of my life and I get to learn all about them from a variety of sources. First, from my own kid who at age 21 is literally working in an industry of the future. He is working in an unmanned (unpersoned?) manufacturing plant where his job is to serve as the human liaison between robots that move massive amounts of goods and their mechanical and programming infrastructure. He’s working in one of three facilities in the world that operate in this way, and one of the most interesting things to me is that he’s on a site where two plants are operating at once: In one plant, the humans move the goods. In the other, his plant, the robots do.

The robots are the future. The robots are today.

And now here in the laptop life, we’ve seen the explosion of meaningful artificial intelligence agents that can literally construct written content. ChatGPT is the one you’ve likely heard about most recently.

A quick overview for those who maybe haven’t heard about ChatGPT (I’ve added links to the episode notes so you can come up to speed on your own): Basically, over the past few years we’ve been watching the development of machine learning for writing. Maybe you’ve already interacted with chatbots on various websites, so you’re familiar with their strengths and limitations. Where we are today is that asking a chatbot a question can elicit a fairly coherent response. Do you want to know the main themes in Frankenstein? Ask, and ye shall receive, in structured paragraph format.

Here are a few additional resources:

One thing I read in 2019 that piqued my interest was from GISWatch: after reading Safiya Noble’s work on algorithms of oppression, I became interested in who is programming the robots and how their epistemologies inform their decisions in that space. That report is linked for you along with Noble’s book. These resources give us language and questions that can help us engage critically with the robots as we chart our future path. Let me underscore how important it is to engage with the robots critically on matters related to privacy, human rights, and justice. If you’re interested in those things, you can read the sources I’ve listed here, as well as another piece I really liked on decolonizing AI. These are all linked in the notes.

So as I was reading these materials, I was thinking about Rosie’s coding and about the ongoing development of robot capabilities as I kept up to speed on what was happening.

An article came my way back in January of this year that really piqued my interest in these developments and their implications for higher education. Interestingly, it came my way as recommended reading based on the surveillance of my research activity; score one for the robots. The article is posted for you: The Robots Cometh. It outlines some dire effects on content developers, those whose jobs have already been replaced by robots, and it predicts that 30% of web content will be AI-generated by the end of this year.

I think that’s probably true. As a reader, I can almost always tell when something is AI-generated, and I’ve been seeing a lot of it. I’ll talk about an example or two in a minute, and you’ll see what I mean.

So I started thinking about these new developments and how they would, in fact, be a paradigm-shifting event in so many ways once average people got access to them. ChatGPT is a product that does exactly what’s outlined in that article: It can generate copy on literally any topic by pulling from the vast resources of the internet.

And people lost their minds. I saw articles suggesting that the essay is dead, that everyone’s jobs are in jeopardy, that education will be handed over to the robots.

And you’ll need to permit me just a moment of snark in this episode, y’all, because sometimes I just can’t help it. Kevin Gannon on Twitter posted the same thing I first thought of when I saw people losing their minds about the robots generating essays: What did people think would happen when faculty submitted all that free data to Turnitin?

If you’ve ever heard me argue against plagiarism detection software, you’ve heard my three points. First, I’m not further engaging in surveillance behaviors. I can’t escape it altogether in light of some of the foundational surveillance matters in higher ed, but I refuse to willingly perpetuate that.

Second, plagiarism should be treated as the writing issue it is, not as a crime. This is my most unpopular point, by the way, but I stand by it. Plagiarism – and I’m not talking about cheating or copying wholesale – is no worse than a lack of clarity or organization and is only a crime in light of our cultural values around property and ownership of knowledge. And third, I AM NOT TRAINING THE ROBOTS. I’m not handing over my students’ work under what is ostensibly a check for plagiarism but ultimately ends up used for aims I don’t intend.

Okay. Anyway. Moving on.

So I met over the summer with some colleagues at other colleges and we hashed through some discussion about implications specifically within the writing context, and I took that conversation to my deans and shared that we needed to open up discussion about it. I had every intention of launching this conversation with my own department this fall until some other serious matters came our way that needed our time and attention.

I wasn’t really concerned about my own department because we generally teach writing in a way that renders the robots a tool and not a replacement. So here’s my first recommendation for us as we consider how to integrate the robots as a tool into teaching and learning. I have always advocated for plagiarism-proof assignments, of course, because if your essay assignment is “address the main themes of Frankenstein,” then you get what you get, right?

But I’ve spoken with many other colleagues in other disciplines who need to measure whether students know specific pieces of information and who tested out ChatGPT on their content. I’m going to use what I think is a great example from a colleague of mine in business. He asked ChatGPT about the four Ps of marketing, and here’s what it said:

The “four P’s” of marketing are product, price, promotion, and place. These are the four key elements of a marketing mix, which is a tool used by marketers to help them decide how to effectively promote and sell their products. The four P’s are also known as the “marketing mix” and are used to define the strategies and tactics that a business will use to reach its target market.

So a few things. The robots get the basics: The response clearly and correctly identifies the four Ps and links them to another key concept, the marketing mix. But notice the repetition: The second and third sentences are essentially the same idea expressed in different ways.

But here’s the thing: It’s not altogether unlike the kind of writing students produce and we accept in college classrooms across the country. On the face of it, it looks fine. It’s structured, the sentences are complete, and the key words are evident. But it lacks depth, doesn’t it?

Could a student pass the question with this kind of writing? Sure. Is it good by my standards as a writing instructor? No. It’s boring, for one thing, as much robot-generated content is, and repetitive. It says something, but it really doesn’t say anything. It lacks development.

But it would probably pass, which means that perhaps we need a different way to measure whether students know the four Ps while at the same time embedding two other lessons about good writing and about interacting with the robots.

I might start my students with the robot passage or ask them to go find a robot passage themselves. Go ask ChatGPT what the four Ps are. Now, let’s analyze that answer in the context of clarity of expression, because even though I’m teaching a content-based course – Business 101 – I should also be scaffolding the skills required to interact with the robots. That should be apparent at every level of course in my major, just like scaffolding writing skills and critical thinking should be apparent at every level in my major.

My argument for faculty in other disciplines about writing is that if you want students to be good writers in your major, you need to teach them to do it: English 101 is not the only place where writing instruction needs to happen. And I’ll translate that argument into the interactions with the robots, too: The future belongs to the people who can effectively interact with the robots, the people who understand the limitations of the robots and who – in business speak – can add value to the environment by bringing a level of humanity to what the robots produce.

That’s my first suggestion, then: Think about every course we teach as needing some element of instruction around interaction with the content-generating robots. Because the robots can be great for general information right now, but at the end of it, they aren’t experts in any subject. They’re generalists, and it’s humans who will continue to bring the discernment into the conversation.

Right now, the robots can generate a bunch of Python and create coherent code. What does that mean for programmers? Does it mean we don’t need people who speak Python? No, it means that we still need humans who understand Python, because their knowledge of the language plus the knowledge of the robots exceeds our current abilities to produce good code. Think about what AI brings and what humans bring as interacting variables, at the intersection of which is the friction of brand new opportunities.
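To make that concrete, here’s a minimal, hypothetical sketch (the function and the scenario are my own invention, not output from any real robot): the kind of plausible, coherent Python a robot might draft, and the edge case a human who knows the language would catch.

```python
# A hypothetical illustration (invented, not real robot output): the
# robot's draft is plausible, coherent, and almost right.

def average_score(scores):
    # The robot's version: correct for typical input, but it crashes
    # with ZeroDivisionError if the list is empty.
    return sum(scores) / len(scores)

def average_score_reviewed(scores):
    # The human reviewer's version: same logic, plus a decision about
    # what an empty gradebook should mean. That decision is context,
    # which is exactly what the robots lack.
    if not scores:
        return None
    return sum(scores) / len(scores)

print(average_score_reviewed([88, 92, 75]))  # 85.0
print(average_score_reviewed([]))            # None
```

The robot’s draft runs fine right up until it doesn’t; the reviewed version is where the human adds the value.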

Here’s my second suggestion, and it aligns with what I think are our communication needs for the future along with our instructional needs of the present, particularly around helping students see themselves reflected in academic discourse and in the value of a college education: Local, project-based learning. I was recently involved in a conversation at my home org in which people were advocating for including more student voices in the analysis of institutional data. GREAT. One way to do that is by incorporating it into our current classes, because nearly every discipline can theorize about our institutional data, and we can gain valuable insights about the student experience at the same time.

And listen, if I’m being honest, we can also stretch our very limited institutional resources and take what can be a giant research puzzle and make it more manageable. You want insights about the factors that affect student enrollment? Have students research it within the context of your discipline. What sociological theories could be at play in these data? Psychological theories? Systems theories? How might we theorize that, and what might that mean for population-level interventions? (And also, importantly, how does all that resonate with what students actually experience at your institution?)

How do we choose the right parametric tests in statistics when we’re faced with these types of variables in a data set? We can teach variable types and parametric testing through the use of our own local data and have students write that up (a quick sketch of what that could look like follows this paragraph). What are the bio-psycho-social elements at play for students attempting to navigate their path at this institution? What are the major issues affecting the communities in which your orgs conduct their business? What’s going on with the water? Or the air quality? Or access to healthcare? The opportunities are endless to create local learning experiences that eliminate the need to rely on a robot to complete an assignment. So if your primary concern is about inappropriate use of the robots, that’s one way around it.
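Here’s that sketch: a minimal, hypothetical example in Python of walking students from variable types to a parametric test. Everything here is invented for illustration (the column names, the numbers, the research question); your own institutional data set would stand in for the toy one.

```python
# Hypothetical classroom sketch: choosing and running a parametric test
# on (toy) local institutional data. All names and values are invented.

import pandas as pd
from scipy import stats

# Imagine a small campus data set: one categorical variable
# (enrollment_status) and one continuous variable (credits_earned).
df = pd.DataFrame({
    "enrollment_status": ["retained", "retained", "left",
                          "left", "retained", "left"],
    "credits_earned": [24, 30, 9, 12, 27, 6],
})

retained = df.loc[df["enrollment_status"] == "retained", "credits_earned"]
left = df.loc[df["enrollment_status"] == "left", "credits_earned"]

# Two independent groups and a continuous outcome point toward an
# independent-samples t-test, once students have checked its
# assumptions (normality, equal variances); that check is the lesson.
t_stat, p_value = stats.ttest_ind(retained, left)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```

The design point is less the code than the decision behind it: matching the variable types in front of you to an appropriate test, and defending that choice, is where the learning happens.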

But the robots can help us here, because imagine that in sociology I want to know about a particular theory. Let ChatGPT generate something, and then I can ask students to go track down the literature to see if what the robots produced is correct. I’d have them revise the piece, include the right academic citations, and fill in the gaps. That’s good learning.

In writing, sometimes the hardest part is starting something, so it could also be useful to plug an idea into ChatGPT and begin with a draft. Maybe you struggle with writing recommendation letters. For me, I’m a much better editor than I am a content developer, to be honest, and having a draft to work from could be helpful. I might also, in a writing class, ask students to evaluate what’s working in a robot piece and what isn’t, using what we’ve learned in class about writing. For example, in that four Ps example, the first thing my students would say is that the paragraph lacks development.

I think at the end of it, our big transition is moving up the Bloom’s ladder, from identify and define to analyze and apply. That requires some investment in discussion around our learning outcomes, and I do think orgs will want to devote considerable time to thinking through how we’ll meet the demands of the robot future, how we’ll help students become equipped to interact with the robots in meaningful ways. This approach will require experimentation and co-creation of learning and knowledge, which I think is a fine opportunity to reimagine what’s happening in undergraduate education today.

Orgs will need to make space for faculty to adapt their assessment materials to more localized contexts: case studies, fictional examples that AI cannot respond to, ways of measuring learning more authentically. Authentic assessment is beyond the scope of this episode, but it’s a good place to start, and I will provide a link in the notes.

So that brings me to my provocative points.  

Number one: Orgs are gonna need to simmer down and take a nuanced approach to these developments. My biggest fear is not about cheating, and it’s not about a robot taking my job. It’s about MBAs in the bean-counting office thinking the robots are an opportunity to suggest that perhaps we don’t need humans delivering undergraduate education after all and that perhaps the robots can just handle most of the job.

If you think that’s way out there as a possible outcome of this conversation, I want to know where you work. I want to know where that kind of econ 101 logic DOES NOT drive the entire enterprise, where the discourse is NOT bound up in neoliberal logics. So that’s my first provocative comment: Simmer down, organizational leaders, and do your research about what this change actually means. Because from my seat, the human activity that can REALLY be replaced by the robots is bean counting. So just watch your step.

Oh, and wait. Here’s a bonus provocative point for leaders: Ed tech is gonna come whistling down the alley with the solutions to your purported AI problems.

I watched two strands of conversation unfold this month as ChatGPT made its way onto the scene. Interestingly, I saw a data point on a recent webinar on this subject that said it took Netflix 40 months to reach consumers and critical mass; it took ChatGPT just weeks to do the same thing. I linked you to the webinar as well.

Anyway, the two strands of conversation were defensive and offensive: Playing for the defense were sentiments like OH NO! Another way for students to cheat! How will we get around it? That strand of conversation said to me that people want to just keep doing what they’re doing in the face of this change, and that is not gonna work, y’all. It’s not gonna work. But if that’s the conversation at your org, you are RIPE for the ed tech companies coming at you with solutions.  

On the program evaluation listserv, someone put it to the robots. When we ask ChatGPT “How can professors craft writing assignments that make it difficult for students to get an AI system to write their essays for them?” the robots say this:

There are several ways that professors can craft writing assignments to make it difficult for students to use AI systems to write their essays. Some potential strategies include:

1. Requiring students to write about a topic that is highly specific and/or niche, making it difficult for AI systems to find relevant information.

2. Asking students to include personal experiences or perspectives in their writing, which are difficult for AI systems to replicate.

3. Requesting that students use a specific writing style or structure, such as using first-person narrative or writing in the form of a letter or diary entry.

4. Assessing students on their ability to integrate multiple sources and present their own original arguments, rather than simply summarizing information from other sources.

5. Incorporating real-time or in-person components into the assignment, such as requiring students to participate in a debate or giving a presentation.

By using these and other creative approaches, professors can make it more difficult for students to use AI systems to complete their writing assignments.

All of that is true, right? And let me share with you what I do for my own final exam, for example: Students are required to advance an argument in support of their grade using evidence from their own writing in the course. It’s a meta assignment: They advance an argument using supporting evidence, which is a core component of the course. In carrying out that assignment, they’re also demonstrating the skills of the course, and the robots can’t do it for them because it requires the use of a unique data set. I can see this kind of activity in motion in a variety of courses.

But here’s the thing: This is the wrong question. The question isn’t how can we avoid something; it’s how can we incorporate something, and that’s what I saw in the second strand of conversation that emerged around ChatGPT.

Playing for the offense was the sentiment that this is a great tool for all sorts of things that will help us build better teaching and learning environments, presenting incredible opportunities for our work with students. I’m in that camp, obviously, and I believe this is the way.

My second and final provocative point is primarily for people in camp one, those who are worried about cheating. Why do you think people cheat? What is the reward associated with cheating?

It’s invisible academic currency, right? It’s the grade.

I thought the pandemic was gonna be the primary formative context that moved alternative assessment practices like ungrading ahead, but now I think it’s really gonna be the robots. Because we have a LOT of opportunity ahead to re-imagine how we approach assessment in our classrooms, how we measure learning, how we come to a final grade in a course, how we create learning partnerships. There’s lots of information out there about alt assessment, and I have a chapter and an article coming out next year about the conditions that support this kind of work.

For now, I’m gonna leave it there, my friends, and wish you a happy holiday season filled with peace and relaxation. I’m not yet ready to wish the robots a happy holiday season with the exception of Rosie. She always gets a kiss from me. See you on the other side!
