Is prompt engineering just an overly fancy way of saying, “Here’s a better search query”? Maybe it should just be called Search 2.0? True enough, the output of an AI large language model is more than just a list of results, but the prompt itself is still ‘just’ an instruction of sorts, right?
In some cases, yes, it’s essentially the same as a fancy query. But mostly not. There are obvious differences between how prompts are used with AI large language models (LLMs) and how keywords get used in traditional search engines. And for all their potential faults and risks, LLMs can provide stunning new capabilities across a variety of use cases.

At the same time, there seem to be some overblown expectations about what prompts can do. For example, at least in some places, there’s a misunderstanding that prompt engineering can make models better. While it’s true that prompts and responses can be iteratively honed and fed back into the fine-tuning of models to actually improve them, for the most part, they’re not used this way. I’d like to try to clear this up, because I think it’s important that we understand how we can use our tools and where they’re limited.

Just to be clear, I’m not talking about the handful of folks who really are evaluating prompt output to adjust models. (If you’re one of those folks, you’re ideally operating more at the data science level of prompt engineering.) For our purposes here, I’m talking about the typical consumer or business use that seems to have some people believing prompt input alone changes how the models themselves work.