ChatGPT Gets an MBA


The AI-powered chatbot did better than expected on a Wharton exam. That’s something to get excited about, says the professor behind the experiment.

Photographer: Leon Neal/Getty Images

In December, Christian Terwiesch, a professor at the University of Pennsylvania’s Wharton School, decided to put the latest version of ChatGPT, a chatbot powered by artificial intelligence algorithms, to the test with an exam from his operations management course. The bot did better than average, scoring a B-. ChatGPT itself has quickly become ubiquitous; today it’s the pixelated face of AI’s promise — and its dangers, including the existential ones.

Is this a sign of great opportunity, or of machines encroaching further into human territory? We decided to check in with Terwiesch, who chairs Wharton’s department of operations, information and decisions and co-directs Penn’s Mack Institute for Innovation Management. He was keen to discuss his research paper on ChatGPT, ways to use the technology in the future, and what all the chatter about the bot has to teach us.

(Note: Interviews are edited for clarity and length.)

Hi, professor. So your experiment has created a bit of panic out there. Should we be afraid of the chatbot with the MBA?

Well, if panic includes excitement, that’s good. But if it’s fear, that’s a problem. I’m to blame for some of this fear, given the title — “Would ChatGPT3 Get a Wharton MBA? A Prediction Based on Its Performance in the Operations Management Course.” A good title matters, and I gave my paper one that people would likely respond to.

When did you run your experiment?

I started over the recent winter break. My sons, one is a designer — he’d used Midjourney, an AI-based computer program. And the other is a software developer, he’s a senior in college — he used ChatGPT. At one point the question came up, “Hey Dad, can this thing pass your exam?” So I thought I’d test it. I cut and pasted the exam questions into the prompt line. The result was amazing. I had no idea how it would do. On its first question, with a half-page of text, it was asked to do some case analysis. It not only got the answer right, but it was articulated well, it was nicely written. I really had no idea this would happen. You know, we go to Google and type in questions and oftentimes are amazed at what it sends us. With this, I had not expected the answer would be so well done. The confidence of ChatGPT — “here’s the answer!” — that’s what you want as a user. The first reaction was: “Aww, this is so cool.”

Talk us through the exercise and your reactions beyond that very first one. 

The first exam question was about finding a bottleneck in a process. Then it was asked how much working capital a retailer needs to sustain a business. The same thing happened — it did well. Then I took a harder question, and there it struggled and got the answer wrong. I gave it hints in a very human way, like you’d give a student, and it picked up on the hints. That surprised me, and it was fun to watch. It was adjusting its reasoning and response to the hint, and the answer got better. It started to feel like teamwork, like a co-production between me and ChatGPT. I really enjoyed that. The third area I tested it on — I have talked about this — is math, and it’s really bad at math. It made a really stupid set of mathematical mistakes that a sixth grader wouldn’t make — there’s a real irony there. It speaks well, applies good logic and reasoning and has a good conceptual understanding, but then it can’t do math. C’mon, you’re a computer, dude!

What does this tell us about testing, about people versus robots, about opportunities?

In the world right now, a big topic of discussion is about what testing policies we have. First and foremost, you have to ask yourself, why are you testing in the first place? There are three types of test questions that we as educators provide to our students. Skills certification is one. For example, what it takes to become a CPA or a lawyer or such. Let there be no doubt, we have to test the student and not the bot here. There can be no help from any AI in those testing settings.

Christian Terwiesch
Source: The Wharton School of the University of Pennsylvania

Then I do a lot of small tests in my course, to get a sense of where students stand on their learning journey. Do I need to speed up or slow down? Here, too, ChatGPT would give me the wrong picture of what the students are actually experiencing. So it’s my job as an educator to create a psychologically safe learning environment where ChatGPT has no place.

A third type of testing is where the action is. Really, from high school to graduate school, all the action is in the learning process. A student will engage with some material and come out of it a smarter person, we hope. If you were studying philosophy, say, and I asked you to write a five-page essay, you’d be a better version of yourself for having studied and researched. We use exams or tests to get the students to engage with some material. But what if you go to the library and tell yourself, “I’ll just take the shortcut here and use ChatGPT”? That would be a pity. The idea is to be creative and think of ways to use the technology to produce and enhance engagement. I think that is super possible.

How, specifically, would you use it? 

I could have the bot pretend to be a French philosopher and meet the student and have a chat, in French, where the student is interviewing the philosopher for an hour. There are all kinds of things that you can do now, given this technology, to create learning experiences. That’s what we as teachers and professors are there for. We have to create learning experiences.

I think this is going to be an amazing tool, and that’s especially true in the business school world. I want to add that I am very sensitive to the challenges other educators face. There are environments where teachers are struggling with so many other things and are so stretched that they don’t have time to imagine new learning experiences.

You and others have said this isn’t about putting people out of work. But again, those fears are out there.

I taught one of the early MOOCs [massive open online courses] on Coursera, back in 2012. And so many people at the outset said online learning was going to put teachers out of work. I believe the opposite is true. All work has a cost/quality trade-off. You can spend little and get poor results, or you can spend more and get better results. We call that trade-off the efficiency frontier. Now, you get a new technology that makes you more efficient — say, for the sake of argument, it makes you twice as efficient. That means you can either fire half of the teachers or you have a chance to make the students twice as smart. I very much hope we take the efficiency gains and give them to the students.
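To make that trade-off concrete, here is a minimal sketch of the arithmetic Terwiesch describes; the staffing and productivity numbers are hypothetical, chosen only to show the two ways a doubling of efficiency could be spent.

# Minimal sketch of the efficiency-frontier arithmetic described above.
# All numbers are hypothetical illustrations, not figures from the interview.

def instructional_output(teachers: int, output_per_teacher: float) -> float:
    """Total instructional output, assuming it scales linearly with staff."""
    return teachers * output_per_teacher

baseline = instructional_output(teachers=100, output_per_teacher=1.0)

# Option 1: keep every teacher and double what students get out of school.
same_staff = instructional_output(teachers=100, output_per_teacher=2.0)

# Option 2: hold output constant and cut the teaching staff in half.
half_staff = instructional_output(teachers=50, output_per_teacher=2.0)

print(baseline, same_staff, half_staff)  # 100.0 200.0 100.0

Both options sit on the new, higher efficiency frontier; the point of the interview is that society gets to choose which one to take.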

We are not running out of work. Just look around. Our doctors and nurses are overworked, there is a mental health crisis, a climate crisis, our classrooms are overcrowded. What is shifting is the efficiency frontier. ChatGPT could help us gain in productivity. It’s for us as a society to figure out what to do with it.
