There’s a somewhat well-known joke about a photographer who went to a fancy dinner party. The host, who was also the chef, said to the photographer-guest, “I love your wonderful pictures; you must have a fantastic camera.” After a great dinner, the photographer said to the host, “Dinner was great. You must have a fantastic stove!”
The moral is clear enough. Though our tools can make a difference, going from good to great isn’t usually about the tool; it’s about the craftsperson. The question is: which kind of user do you want to be? This article isn’t about AI builders using what’s referred to as “Prompt Engineering.” The focus here is on generative AI users who want to improve results without deep technical exploration. Let’s do a little background first to set some context, then bullet-point some techniques to try out. (Skip down to the bullet point list if you don’t care for the background.)
Generative AI vs Search
Generative AI tools (ChatGPT, Claude, Bard, Perplexity, etc.) are impressive, but traditional search engines remain useful. Both are valuable, with unique strengths. Understanding when to use each is key to maximizing results.
Why is the Input for Generative AI Tools Called a Prompt?
In computer science, a “prompt” signals readiness for input. Here, it’s more than a question; your prompt guides the AI’s response, as a script might guide an actor. By treating prompts as instructions (possibly with context, tone, and specificity) rather than basic keywords or phrase queries, you can unlock better responses. Prompts instruct AI to analyze, create, or combine information. Think of it as the difference between asking a librarian to find a book (search) versus asking a teacher to explain a concept or help solve a problem (prompt).
Traditional Search vs. Generative AI Use Cases
Search Use Cases
Traditional search engines have three typical use cases: Transactional, Navigational, and Informational. For transactional needs, a user is likely still best off using search for goal-oriented transactions like seeking products, booking a service, or signing up for an offer. (Some might also add Commercial as a search type, where you roughly know what you want but are still gathering information before a specific transaction.) For these tasks, Generative AI, while able to suggest options, lacks direct buying and linking functionality (and usually doesn’t have current information), making search engines preferable here, at least for now.
Navigational searches aim to locate a specific site or page, or possibly a general content area. Generative AI might describe a site or brand but often lacks the direct access provided by search engines for this purpose. (This also varies, and over time will likely change as these tools get closer together.)
For purely informational searches, such as “how to start a business” or “symptoms of flu,” both tools are useful, but in distinct ways. And this is where generative AI is, arguably, really helping to change things for the better. Search engines aggregate and index resources. Generative AI synthesizes information, providing custom, context-driven answers in a conversational format. This is useful for in-depth explanations or when follow-up questions are needed.
Generative AI Use Cases Beyond Search
Generative AI can be used for some of the same needs as search, but it often isn’t the best choice. That isn’t why it was built. It just so happens that the content repositories used to craft these AI products are the same as, or similar to, those behind search.
However, while AI tools increasingly do real-time search augmentation, they’re likely going to fall down for a while on transactional and navigational uses. So let’s now get into deeper information needs, and new use cases beyond search.
Where generative AI leaves traditional search behind in the information-seeking realm is in crafting summaries and explanations for various requests, no blue links required. And now we move past search-and-retrieval use cases and into generating entirely new things. This is what’s largely new. Traditional search doesn’t do this at all. Prior to the recent explosion of large language models and their generative text capabilities in the consumer marketplace, there were capabilities for things like text summarization, but those tools were mostly either academic or behind the scenes of content production. As of this writing, major search engines have started embedding some simple summaries within search results, but the whole interface is not really amenable to iteration, and is certainly not inviting toward creation. Where the new tech really shines is in more robust summaries and in creating entirely new things. Beyond this, these tools can maintain continuous contextual understanding during extended, layered conversations. There are also, sometimes, multi-modal capabilities to create text, images, code, video, and other content types. Then there’s whole-concept and creative generation, and the use of goal-directed workflows.
So generative AI can certainly include information retrieval as part of its functionality, but its core purpose extends beyond retrieval into the realms of synthesis, transformation, and creation. While information retrieval systems are built to locate and deliver relevant data or documents, generative AI models are designed to analyze, process, and produce new content based on patterns in large datasets.
Iterating with Prompts
Effective prompting is about refining and evolving input to get closer to desired output. Traditional search is often also iterative as you learn things on your journey. But this is usually to drill towards particular targets. Knowing when you’re done – either with answers or failure to find them – is usually clear enough. But when you’re crafting something new with a generative AI, whatever that goal is, knowing when to stop can be harder.
There are varying definitions of done depending on what field we’re talking about. Artists may rely on intuition to decide whether a piece is complete, or be guided by deadlines, limited materials, or audience needs. It’s as much an emotional judgment as a technical one. For authors, feeling done may be similarly intuitive but also shaped by structure and revision: when they feel the work accurately expresses the story, ideas, or emotions they set out to convey. Or again, there could be deadlines. Scientists may be done when the data fully supports a hypothesis or answers a research question. In other fields, the sense of completion varies but often blends intuition with structure, depending on goals and context. What about your needs? Who are you? An Engineer? Chef? Filmmaker? Businessperson?
Some Practical Iteration Approaches
Following is a list of strategies and tactics for completing whatever your generative AI task may be. They’re primarily focused on text-based large language models. Consider taking a moment with a task of your own, experimenting with these ideas, and seeing where they take you. (For the curious, a short code sketch after the list shows what a couple of these ideas can look like when applied through an API.)
- Start Broad / Wide: Opening with general questions can help gauge the tool’s strengths.
- Refine or Expand Based on Response: As with search, analyze initial output and identify areas that could benefit from more detail, context, or specificity. Or perhaps a wider scope. You might say “Make this explanation more technical” or “Simplify this for a beginner.” Or “Give me a more general view.” So you’re not just refining ‘down’ into a topic; you can refine whole contexts.
- Building Blocks: Use the AI’s previous output as a foundation. You might say “Using the code you just generated, add error handling” or “Take that summary and convert it into bullet points.”
- Introduce Constraints or New Perspectives: If a response is too lengthy, prompt it to “summarize in one paragraph.” Or specify a tone or format, like “in the style of a press release.” You can ask it to act as a different persona: a teacher, a poet, a starship captain, perhaps a fish, the color blue.
- Clarification: When responses are unclear or incomplete, ask follow-up questions. “Can you explain what you mean by X?” or “Provide an example of Y?” Or ask it to explain its thought process.
- Give Examples: You can try giving one or more examples of the kind of output you’d like. Or even list some instructions and tell it to give you answers in this kind of format. (Varying tools may allow for varying levels of input, up to possibly thousands of words for each message.)
- Scope Adjustment: Narrow or broaden the focus as needed: “Let’s focus just on the security aspects” or “Expand this to cover more use cases.”
- Try Different Tools: There are many tools available for free or low cost. Whether text, imagery, video, or some combination, try different tools. Perhaps even feed them each other’s output. The similarities and differences can be telling, given that foundation models likely have different training source material, as well as different assumptions on the part of the model builders.
- Iterate Until Satisfied: Consider acting as a writer might, going through rough drafts to improve clarity and impact. Refining prompts is like revising a piece of writing. With each iteration, you fine-tune the language, structure, and intent to get closer to your ideal response. This iterative process is crucial in tapping into generative AI’s full potential.
- Be Specific: While the first piece of advice was to go broad, if you have a clear output in mind (such as an Excel formula or similar), it may be appropriate to simply start with specifics.
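As promised above, here’s a minimal sketch of what a couple of these approaches, “Give Examples” and “Building Blocks,” can look like when applied through an API rather than a chat window. It’s a hobbyist illustration of the iteration pattern, not production code: it assumes the OpenAI Python SDK, and the model name and prompts are placeholders you’d swap for your own.

```python
# A hobbyist-style sketch of iterative prompting via an API.
# Assumes the OpenAI Python SDK (`pip install openai`) and an API key in the
# OPENAI_API_KEY environment variable. The model name is a placeholder.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # placeholder; use whatever model you have access to

# Keep the whole conversation so the model retains context between turns.
messages = [
    # "Give Examples": show the output format you want before asking for it.
    {"role": "system", "content": (
        "You summarize topics as three short bullet points. Example:\n"
        "- Key point one\n- Key point two\n- Key point three"
    )},
    # "Start Broad": a general first request.
    {"role": "user", "content": (
        "Summarize the main trade-offs between traditional search "
        "and generative AI."
    )},
]

first = client.chat.completions.create(model=MODEL, messages=messages)
draft = first.choices[0].message.content
print("First pass:\n", draft)

# "Building Blocks" / "Refine Based on Response": feed the prior output back
# in with a follow-up instruction instead of starting over from scratch.
messages.append({"role": "assistant", "content": draft})
messages.append({"role": "user", "content": (
    "Simplify this for a beginner and keep it under 60 words."
)})

second = client.chat.completions.create(model=MODEL, messages=messages)
print("Refined:\n", second.choices[0].message.content)
```

The same loop works in any provider’s SDK, or simply in a chat window: the important part is keeping the earlier output in the conversation and layering the next instruction on top of it.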
What About Embedded AI?
Embedded AI is where these still-new tools are made part of existing products. There are design tools where you can prompt the tool to do tasks. There are AI tools to help write computer code, and so on. Most of us have probably already experienced customer support chatbots. Task-specific tools may be less capable in one sense: smaller, specialized models may not allow for broader-scope explorations because they lack the foundational linguistic models for understanding. However, they’re often more capable in another sense: they’ve likely been enhanced with detailed information on how to get certain things done.
Conclusion
As we navigate this new frontier of generative AI, it’s clear that these tools can be empowering, but they’re best used with intention and curiosity. The skill of prompting is evolving, and, much like other crafts, it rewards practice, exploration, and the willingness to iterate. Whether you’re generating ideas, refining answers, or creating entirely new content, your engagement with the tool shapes the outcome. By approaching AI as a collaborator rather than just a resource, you open the door to richer, more meaningful results. Embrace the learning curve, experiment boldly, and enjoy the process. We’re just at the beginning.
—
Background / Disclaimer: I’m a product person, not a programmer. If or where I describe detailed results from code or evaluations of output, it’s from my own hobbyist adventures, not production code. And while I’ve delivered a few ML-based products to market, and have a background in information science, taxonomy, and search, I’ve only just started working on products that use generative tools. Any links to my Colab notebooks or similar should be treated as examples only; don’t use them beyond that without checking with a professional developer.