When I joined a company a few years ago, an established part of the introduction to the wider team was that each new person took a turn describing three interesting facts about themselves - one of which was actually made up. The team would then vote on which statement was least believable.
Some of my recent experiences with AI reminded me of that "two truths and a lie" exercise.
Earlier this week I spent a couple of hours delving into what performance characteristics we should expect from a particular configuration of an AWS service. The AI agent surfaced a handful of top performance-optimisation recommendations, followed by tables of estimated performance differences.
Given that our use case was mainly going to involve finding matches between two data sources, I figured there would almost certainly be further performance benefits available if we worked with sorted data. So I gave the agent a concise prompt to determine whether that would be worthwhile. What it came back with looked promising - but also too good to be true.
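The intuition behind that question is the classic merge-join pattern: when both inputs are already sorted on the matching key, matches can be found in a single linear pass rather than by repeatedly scanning one side for each record of the other. A minimal sketch (the inputs here are hypothetical and assumed to have unique keys):

```python
def merge_join(left, right):
    """Find matching values between two lists already sorted ascending.

    Advances two cursors in a single pass: O(n + m) comparisons,
    instead of the O(n * m) of comparing every pair of records.
    Assumes unique keys on each side for simplicity.
    """
    matches = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] == right[j]:
            matches.append(left[i])
            i += 1
            j += 1
        elif left[i] < right[j]:
            i += 1  # left cursor is behind; advance it
        else:
            j += 1  # right cursor is behind; advance it
    return matches

print(merge_join([1, 3, 5, 7], [2, 3, 5, 8]))  # [3, 5]
```

The catch, of course, is that the linear-pass benefit only materialises if the data really is sorted - which is exactly the claim the agent got wrong below.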
The first few paragraphs explaining the benefits of working with sorted data appeared to make sense. Later in the response, though, there was a single bolded sentence that presented itself as established fact - along the lines of "The report data is guaranteed to already be sorted". That turned out to be complete nonsense, directly contradicting a highlighted statement in the very documentation the AI had been using as a significant reference.
Unlike the icebreaker activity, when an AI agent presents information we have no way of knowing which statements we can trust.