Is Prompt Engineering just an overly fancy way of saying, “Here’s a better search query”? Maybe it should just be called Search 2.0? True enough, the output of an AI large language model is more than just a bunch of results, but the query itself is still ‘just’ an instruction of sorts, right?
In some cases, yes, it’s essentially the same as a fancy query. But mostly not. There are obvious differences between how prompts are used with AI Large Language Models (LLMs) and how keywords are used in traditional search engines. And for all their potential faults and risks, LLMs can provide stunning new capabilities across a variety of use cases. At the same time, there seem to be some overblown expectations about what prompts can do. For example, at least in some places, there is a misunderstanding that prompt engineering can make models better. While it’s true that prompts and responses can be iteratively honed and fed back into the fine-tuning of models to actually improve them, for the most part they’re not used this way. I’d like to try to clear this up, because I think it’s important that we understand how we can use our tools and where they’re limited.
[Read more…]