
EOD: AI false starts + how organizations get past them

Circling back on AI endeavors when they don’t work out as planned

Welcome back to Acronym, friends. Some quick transparency:

I’m slowly but surely 🐢 building this newsletter up to be a community, one where we’re not afraid to get candid and talk about what we’re really experiencing in business and beyond.

To make that happen, you’ll see me place one lil’ ad on just about every edition. Why?

I want to keep this newsletter free for readers, because subscriptions are EXHAUSTERWHELMULATING (a hilarious phrase I learned from my friend Jen over at Literacy with Dignity).

Sometimes, these ads earn me money every time one of you simply clicks one. Other times, it takes a subscribe. Rest assured you are never pressured to do either of these things, but I want to make sure you know how I intend to earn money from this venture down the line. This keeps me invested in providing you with high-quality content you actually ✨ give a shit about ✨.

After this edition, here’s what you have to look forward to:

  • 💸The price business leaders really pay by keeping salaries stagnant

  • 👣Am I shooting myself in the foot by being an independent worker? Auditing myself w/ a career growth expert for your reading pleasure!

  • 🫱🏼‍🫲🏽Q&A with Amplitude’s James Evans on what he learned when his business got acquired

On that note…here’s what this edition is really about:

Did you know? In a recent Forrester survey of AI decision-makers at companies, about half expect ROI on their AI investments within 1–3 years. Another 44% expect an even longer timeframe.

Oftentimes, companies try out an AI venture only to realize it’s not working out as hoped.

On a large scale, Air Canada had to pull its AI-powered chatbot for misleading a customer about its bereavement fare policy. Patagonia was sued for its use of contact center AI, specifically recording, analyzing and profiting from customer data without permission.

Anne McGinty, host of the revered How I Built My Small Business podcast (which you can listen to on Apple Podcasts and Spotify), has experienced this on a smaller scale.

“As a podcast host, I use generative AI nearly every day to streamline tasks like identifying potential guests, drafting bios, writing show notes,” said McGinty. “But it's essential to double-check its output, as it sometimes ‘fills in gaps’ if reliable information isn’t available.”

Here, McGinty is referring to AI hallucinations. In layman’s terms, that’s when the AI makes shit up because it can’t find the real answer.

Here’s McGinty again: “For instance, I recently asked ChatGPT to draft a bio for an upcoming guest. The initial response described accomplishments like an Inc. Magazine Top 30 Under 30 feature and a New York Times profile—claims I couldn’t verify. When I rephrased my query to ask for sources, ChatGPT admitted, ‘I couldn’t find the information you were looking for, so I created an ideal bio.’”

Now, she’s more precise in her prompts, asking ChatGPT to prioritize verifiable information and specify sources. She often uses phrases like, “Please answer factually, without assumptions or opinions.” However, it’s still crucial to cross-reference all AI-generated information with reputable sources.
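If you want to bake McGinty’s tactic into a workflow rather than retyping it every time, one option is to prepend the factuality instruction to every query automatically. Here’s a minimal sketch using the role/content message shape common to chat-model APIs; the quoted phrase is hers, but the helper function and the exact instruction wording around it are illustrative, not her actual setup:

```python
# Sketch: wrap every query in an explicit factuality instruction, per
# McGinty's approach. The helper name and surrounding wording are
# invented for illustration.

FACTUAL_SYSTEM_PROMPT = (
    "Please answer factually, without assumptions or opinions. "
    "Prioritize verifiable information and specify a source for each claim. "
    "If you cannot find reliable information, say so instead of inventing it."
)

def build_messages(user_query: str) -> list[dict]:
    """Return a chat-style message list with the factuality instruction prepended."""
    return [
        {"role": "system", "content": FACTUAL_SYSTEM_PROMPT},
        {"role": "user", "content": user_query},
    ]

messages = build_messages("Draft a short bio for an upcoming podcast guest.")
```

Even with an instruction like this, the model can still hallucinate, which is why her cross-referencing step stays in the loop.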

Just because you can build something doesn’t mean people actually want it.

— Alex Greve, Chief Product Officer at Prestau

“Oh, we’ve definitely had our AI false start,” said Alex Greve, Chief Product Officer at agile restaurant menu management company Prestau. Greve referred to his company’s use of a RAG solution (a Retrieval-Augmented Generation system, essentially a tool that lets users “talk” to their data or, in Prestau’s case, enables customers to interact with their menus across channels).
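For anyone hazy on the mechanics: the “R” in RAG means the system first retrieves the most relevant chunks of your data, then hands them to a language model to ground its answer. Here’s a toy sketch of the retrieval half, using simple keyword overlap instead of embeddings; the menu data and answer format are invented for illustration, not Prestau’s actual system:

```python
# Toy retrieval step of a RAG pipeline: score documents by keyword
# overlap with the query and return the best match as grounding
# context. Real systems use embeddings; the data here is made up.
import re

MENU = [
    "Margherita pizza: tomato, mozzarella, basil. Contains gluten and dairy.",
    "Thai green curry: coconut milk, chicken, chili. Gluten-free.",
    "Caesar salad: romaine, parmesan, croutons, anchovy dressing.",
]

def tokens(text: str) -> set[str]:
    """Lowercase word set, keeping hyphenated terms like 'gluten-free'."""
    return set(re.findall(r"[a-z\-]+", text.lower()))

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank docs by shared tokens with the query; return the top k."""
    q = tokens(query)
    return sorted(docs, key=lambda d: len(q & tokens(d)), reverse=True)[:k]

def answer(query: str) -> str:
    """A real RAG system would pass this context to an LLM; here we just surface it."""
    return f"Based on the menu: {retrieve(query, MENU)[0]}"
```

The hard part, as Greve found, isn’t building this; it’s whether anyone actually wants to chat with a menu.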

“It sounded like a dream: empowering restaurant operators to get instant answers and insights from their data. We thought it was going to be a game-changer,” said Greve. “But here’s the catch: just because you can build something doesn’t mean people actually want it. Turns out, the demand for this kind of interaction wasn’t as big as we’d anticipated.”

Greve’s takeaway? “Getting excited about shiny new tech is easy, but you’ve got to step back and ask: Does this solve a real human problem? If it doesn’t, even the smartest AI won’t make an impact,” he said.

Whether or not you’re tackling an AI problem, Greve’s experience is a solid reminder that stepping back can be the most impactful way to move forward.

All of these problems help put things into perspective: Generative AI is young. We’re still figuring this shit out, and learning from others’ setbacks is a respectable way to do better for your team and your customers alike.

(GIF: “Room for Improvement,” by abcnetwork on Giphy)

Psst…here’s one of those click-and-make-me-money ads I talked about. Do what you will with this info, but I handpick all ads to make sure I’m not sending you something weird. 😵‍💫

Seeking impartial news? Meet 1440.

Every day, 3.5 million readers turn to 1440 for their factual news. We sift through 100+ sources to bring you a complete summary of politics, global events, business, and culture, all in a brief 5-minute email. Enjoy an impartial news experience.

Thanks,
