ChatGPT vs The Travel Advisor…A Cautionary Tale
I read an article recently that talked about how useful OpenAI's ChatGPT was becoming with routine household and office tasks. Hey…if ChatGPT could mop my floors and clean my toilets, I'd be a fan forever. Sadly, those were not among its capabilities, but I decided to take ChatGPT and its AI engine for a test drive to see how it might help our travel business.
I started with some simple, non-travel-related tasks to get a feel for how the tool works. I asked ChatGPT to generate a 7-day menu, complete with recipes and a shopping list. I went through several refinements, giving ChatGPT specific dietary preferences to see how the results would change.
I next asked ChatGPT to write a food-related blog post for me. I like writing about food, but I have a problem with brevity…just read my last post, which clocked in at about 2500 words. I wanted to see if ChatGPT could help me with that.
Then I asked ChatGPT to help me out with a few travel-related tasks I was working on. I asked for a recommendation for resorts in Sorrento, Italy with beach access. If you know anything about Sorrento, you’ll appreciate how much of a loaded question that was.
I just completed two articles for the next issue of our newsletter, so for my last test I asked ChatGPT to write articles on the same topics. One is a review of the suitability of Galveston as a cruise port, and the other is a destination review of the two Sandals resorts we recently visited in Barbados.
The 7-day menu ChatGPT generated was not bad, but neither was it particularly creative. I mean, eggs and oatmeal for breakfast on alternating days is probably healthier than cereal in whole milk every day…but it's boring. It also didn't make use of leftovers. The grocery list was fine as far as identifying ingredients, but it lacked specificity in terms of quantity and quality. When I refined the task and asked for grocery lists for individual recipes with some specific dietary restrictions, the results were better. But that meant spending more time getting usable results from a tool that is supposed to help me save time. I gave ChatGPT a B- for this task.
I asked ChatGPT to write an 800-word blog article for Maryland crab soup that included the recipe. Apparently ChatGPT also has a problem with brevity. It hit the 800-word limit and stopped writing mid-recipe. I expanded the word budget enough to let it finish the recipe, and after reading the post ChatGPT generated, I discovered something I’ve suspected for some time. Most of the food blogs you read these days aren’t written by humans. The blog post ChatGPT generated sounded exactly like every other recipe blog post I get when I search for a recipe. I gave ChatGPT a C- for this task.
To be fair, the recipe wasn't bad. It was the same generic recipe you'll find with a Google search, which is a good start…if you aren't from the state of Maryland. I refined the task, asking for a blog post for the Maryland crab soup a waterman would make. I got the same generic recipe as before, along with a two-line definition of a waterman.
For my kitchen tasks those results were not bothersome, but neither were they particularly useful to me. The tool may be of some value for a busy family that doesn't mind eating eggs and oatmeal for breakfast, having tuna salad for lunch every day, and adding lima beans to their Maryland crab soup (who does that?). For this retired guy who loves creativity and variety in his food, and hates lima beans, I'll keep ChatGPT out of my kitchen until it learns to clean my cabinets.
The travel tasks are where I encountered real problems. When I asked ChatGPT to recommend resorts in Sorrento with beach access, it gave me five options. The top recommendation was a 5-star resort that costs over $1000 per day and books up years in advance. To be fair, I did not specify a budget. But many of our clients don’t either…it’s up to me as a travel advisor to figure that out. The other options were hotels that don’t actually have beaches…they have websites that say they have beach access, which is not the same thing. And the beaches aren’t anything like the white sandy beach I just left in Barbados. I know all that because I’ve been to Sorrento. I know that the city is built on a cliff, and that to get to the beaches from those hotels you have to navigate steep and narrow steps. ChatGPT neglected to tell me that most beaches in Sorrento are narrow slivers of shore at the foot of the cliffs and covered in volcanic pebbles, useful information for my clients who are used to the white sandy beaches of the Caribbean. ChatGPT also failed to tell me that during peak season Sorrento’s beaches are so packed with tourists you can barely find space to sit without rubbing elbows, or some other body part, with another tourist. ChatGPT gets a D- for this task.
Out of curiosity, I asked ChatGPT to describe the beaches in Sorrento. I got a nice list of beaches in Sorrento along with a brief description of each, including directions on how to reach them. The response ChatGPT provided was a reformatted version of a published (and copyrighted) article written by a travel writer, disguised in a ChatGPT wrapper and presented without source attribution. I know that because I had already found the original article with a Google search. Setting aside the issue of intellectual property rights, which is one of the real problems with ChatGPT, at least with the Google search I got the original article, which included a bit of background on the author so I could assess the information's credibility.
The destination reviews were even more troublesome. I asked ChatGPT to write a critical review of Galveston as a cruise port. Galveston is a nice town, but as a cruise port it has issues. The review ChatGPT wrote for me sounded like it came from the Galveston Chamber of Commerce, with none of the issues I've encountered there as a cruise passenger. A solid F.
I refined the task, specifying that I wanted ChatGPT to write a negative review of Galveston as a cruise port. The damned thing chided me for asking it to produce a biased review, telling me it was designed to provide unbiased information. And then it did it anyway, and that review was much closer to the reality I experienced when I’ve cruised from Galveston. The information was readily available, and ChatGPT found it, but only after I effectively directed the AI engine to override the built-in bias which it claimed not to have.
I won’t even bother to share the results of the Barbados resort reviews I asked it to write, other than to say ChatGPT’s review read like it came from the resort’s sales brochure. Which is probably where ChatGPT got it. Right down to the description of the resorts’ “crystalline waters.” I don’t even know what that means. Another solid F.
ChatGPT has some strengths. Almost everything it wrote was grammatically correct with proper usage, and there were no spelling errors. But the only reason I know that is because I already know the rules of grammar and usage, and I know how to spell. I don't always follow the rules when I write, but that's my choice. It's my voice. ChatGPT's voice is sterile and generic.
I don't think I'll be using ChatGPT to help lighten my load anytime soon…not in my kitchen or my office. I suppose for a busy family that needs help with menu ideas it might be useful. I can only say that if you use it for that purpose, I hope you like oatmeal for breakfast and lima beans in your Maryland crab soup. Really…who does that? I hate lima beans!
All kidding aside, there are some serious issues when it comes to using AI technology in mass-market applications. After my brief dalliance with ChatGPT, it is clear that intellectual property rights and source identification for content are problematic. AI-driven tools are also full of both conscious and unconscious bias. The bias is embedded in the code, so it isn't evident. But bias permeates everything AI tools touch, and that's a real problem. It gets even worse when AI is engineered to deliver results intended to manipulate the user, which is already a widespread practice. We live in a world where most people don't understand that the "truths" they find when they turn to social media for their news and fact-finding may be the exact opposite of the "truths" being presented to someone shoehorned into a different persona. That's a real problem. It's manipulative, and it's malevolent.
I'm a simple person these days…probably always have been, but at least now I can admit it…so I don't have any answers. I just know that in a world where far too many people who lack basic critical thinking skills turn to Facebook for help with major life decisions, AI tools like ChatGPT are a disaster waiting to happen. And if we can't find a way to build controls more tightly into the development of the technology, the result will be far worse than a botched vacation or lima beans in Maryland crab soup. I hate lima beans.
And that’s all I have to say about that.