Prompt engineering for AI: what is prompt engineering and how to get good results from AI engines



This content originally appeared on DEV Community and was authored by Michelle Mannering

AI has taken over! Everywhere you look there is some new artificial intelligence tool for almost every aspect of our lives. Between ChatGPT, AutoGPT, Midjourney, Dall-E, and GitHub Copilot, you can build, code, get answers, and create beautiful pieces of artwork... or at least some of us can.

Why do some people get better results than others when it comes to using generative AI? Why are some people producing pieces of art worthy of the Louvre, while others get something close to a dog's breakfast?

It all comes down to the input you use. This input is called a "prompt". The prompt is the question you ask, or the words you use to create something. Those who are "crafting" prompts or being strategic about their inputs call it "prompt engineering".

What is prompt engineering?

Prompt engineering refers to specifically designing a prompt in such a way that you receive better results from the AI.

Companies building AI systems, such as OpenAI, Google, and many others, are even hiring "prompt engineers" to help train their models. Some "creators" have even gone as far as selling their Midjourney prompts on platforms like Etsy.

In short, AI systems work like any data pipeline: junk in, junk out. If you feed in a bad input, you'll probably get a bad result. Prompt engineering is heavily influenced by context.

Context when it comes to AI

Context is one of the biggest issues when it comes to the results we get. For example, if I Google "donut" (or "doughnut" 🍩), I could get a whole range of results: donut recipes, pictures of donuts, or where to buy this delicious dessert. This is because I haven't given the search engine any other context. Sure, Google will use things like my previous search history and my location to help determine the results, but that's as far as it goes.

[Image: GitHub donuts. The term "donuts" to a search engine could mean anything from the shape, to the Slack plugin, the app, or these tasty GitHub donuts served up at GitHub Universe 2022]

If, for example, I wanted to find a tutorial on creating a 3D model of a donut in Blender, search results for that probably wouldn't show up if I only typed in "donut". I would need to be much more specific. Something like "tutorial for donut Blender3D" would give me much more accurate results for what I was looking for.

This is the same when it comes to AI. You need to provide the AI with enough context to get better results for what you want.

Prompt engineering for chat apps

Lots of people have shown us some crazy results coming from ChatGPT. Whilst they aren't always accurate, ChatGPT is really good at one thing: prose. It's amazing at writing well-constructed sentences that flow nicely. The results are easy to read and sound great. But getting accurate responses is another thing entirely. People have tried writing history essays using ChatGPT, and whilst the essay may read well, it might not be historically accurate. For example, if you ask ChatGPT to "write a 2000 word essay on the fall of China", it will write you a 2000 word essay on the fall of China. But it won't necessarily be factually correct.

[Image: ChatGPT's response about my bio. Whilst something may read well, it might not be factually correct. Hint: I don't have a PhD 😉]

This is because ChatGPT is taking information from a variety of sources and mashing them together, and those sources might not be accurate themselves. ChatGPT also doesn't know which fall of China you are referring to, so it can easily cross-reference dates incorrectly. You will achieve much better results by feeding information to ChatGPT in a conversational way and then asking it to write the 2000 word essay.

What exactly do I mean by that? Some people treat ChatGPT as a one-way, single-input method of obtaining information. But it's not. It's called "chat" for a reason: have a conversation, refine your questions, and provide context for your responses.

For example, if I wanted a paragraph written about "NDC Conferences" for a trip report, I wouldn't start my ChatGPT conversation with "write me a paragraph trip report for NDC". Instead, I would start by figuring out how much ChatGPT knows about NDC, providing context along the way. The inputs you provide greatly determine the output. That's why some people are able to get really good results, and others aren't.

[Image: ChatGPT's response when asked about NDC. Without any context, ChatGPT doesn't know which NDC I am referring to]
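
You can take the same approach outside the chat window too. If you're scripting the conversation, the idea is identical: send the background information as earlier messages before asking for the final output. Here's a minimal sketch, assuming the openai Node.js package (the v4-style chat completions API); the model name and the NDC details in the messages are placeholder examples of the kind of context you'd supply.

import OpenAI from "openai";

// Assumes an OPENAI_API_KEY environment variable, which the client reads automatically.
const openai = new OpenAI();

const response = await openai.chat.completions.create({
  model: "gpt-3.5-turbo", // placeholder model name
  messages: [
    // Supply the context first, the way you would over a few chat turns...
    { role: "user", content: "NDC Conferences are a series of software developer conferences held in cities such as Oslo, Sydney, and London." },
    { role: "user", content: "I attended the most recent NDC as a speaker and ran a workshop on GitHub Copilot." },
    // ...then ask for the output you actually want.
    { role: "user", content: "Using that context, write me a one-paragraph trip report about the conference." },
  ],
});

console.log(response.choices[0].message.content);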

Another example: if you're going for a job interview and you want some tips, asking ChatGPT to "give me some tips on preparing for a job interview" will give you some good responses, but they will be far from specific. Instead, something like "I'm going for a job interview at an AI startup for the position of software developer. Can you please give me some tips on preparing for the job interview?" will give you much more tailored, personal results. It's the same as asking an expert on stage to answer a question in front of 1000 people in the audience: they'll probably provide something generic so that everyone has a takeaway message. But if you asked the same person one on one, they'd likely ask you some follow-up questions to understand your situation and therefore provide a more personal, specific answer.

Prompt engineering for art apps

You may have seen some of the beautiful artwork people are creating with stable diffusion apps. And then there's the artwork that just looks 'wrong'. A lot of this comes down to context. For example, when I used Night Café (one of my favourite generators) and just typed in the word "dog", this is what I got:

[Image: Image generated using Night Café and the prompt "dog"]

There's a random "dog" written as a sign, there's a weird-looking dog in the foreground, and it's all strangely colourful. Now, if I was imagining a photographic-style image of an adult German Shepherd in a park on a sunny day, that's probably not what I'm going to get. The AI doesn't have that context. It can't read my mind (YET!). When you want to create artwork, you need to describe images as you are picturing them in your head. The more detail you provide, the better the output. This is where it gets tricky: a lot of stable diffusion applications have a limited character count, so you need to be meaningful and strategic about how you craft your prompt.
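
To make that concrete, here's one way I might expand the "dog" prompt, spelling out the subject, setting, lighting, and style I'm actually picturing. The exact wording is just an illustration; every generator responds a little differently:

a photographic portrait of an adult German Shepherd sitting in a grassy park on a sunny day, natural lighting, shallow depth of field, high detail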

Similar to ChatGPT, you need to continuously re-craft and refine your prompts. Chat-based AIs, however, have the advantage that you can continue the conversation, keep giving the AI more information, and ask different questions in order to get a good response. Whilst some art generators allow you to "remix" your output, it still relies on a new prompt. So you're continuously waiting for an output, seeing what doesn't add up, and then sending in a tweaked prompt. Some users spend hours on Midjourney, receiving outputs and re-crafting their prompts to produce some staggering pieces. It's all a matter of practice. That's why some creators are selling their prompts on Etsy!

[Image: AI-generated combat rabbit, artwork my friend Jean made using Midjourney]

One thing is for sure: if you want to produce some quality artwork, don't expect to spend a few seconds writing a prompt, hit the "create" button, and see a Monet. Nope! Instead, you'll need to put in the time (and money) to create hundreds of artworks, reworking your prompts with each iteration to produce your masterpiece.

Prompt engineering for code

I'm not going to spend a tonne of time talking about how to craft good prompts for things like GitHub Copilot. My colleague Rizel wrote an amazing blog post that dives into prompt engineering for GitHub Copilot.

What I will say is that—similar to ChatGPT—GitHub Copilot relies on context. What other code is written in the repository? What is the extension (and therefore language) of the file? What else has GitHub Copilot crafted for you? What comments have you put into the code? All these things will help GitHub Copilot synthesise more accurate code for you.

Think about it like this: if you write a single comment asking for a complex function that uses backend data and solves a particular problem, you probably aren't going to get a good response. Just as your code should be broken up into many functions, with (hopefully) lots of useful comments, GitHub Copilot works better when you break things down.

Instead of asking GitHub Copilot to:
//reverse a sentence (using JavaScript)

Think about how you can break down the problem in a logical sense. For example, if I was physically handed a piece of paper with a sentence on it and told to reverse it, how would I do that? Writing comments that spell out those steps is much more beneficial. If you do this, GitHub Copilot has far more context and a better understanding of what you want.
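
For instance, here's a hypothetical sketch of what that breakdown might look like for the sentence-reversal example. The step-by-step comments are the prompt; the function underneath is just one plausible completion, not necessarily what Copilot will suggest for you:

// Reverse a sentence:
// 1. Split the sentence into an array of words
// 2. Reverse the order of the words in the array
// 3. Join the reversed words back into a single string, separated by spaces
// 4. Return the reversed sentence
function reverseSentence(sentence) {
  const words = sentence.split(" ");
  const reversedWords = words.reverse();
  return reversedWords.join(" ");
}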

Another difference between GitHub Copilot and something like ChatGPT is that GitHub Copilot takes into account all the context you have. All the things I mentioned above:

  • What is the file extension
  • What other files are in the project
  • How have you written other comments
  • How has other code been constructed
  • What is the comment you are inputting
  • What is the code you are inputting

ChatGPT and other chat apps give more weight to the last message you sent, i.e. the last piece of information you added to the conversation. GitHub Copilot, however, is always taking all of this context into account to produce better code results.

Better prompt engineering

When it comes down to it, getting good results from any kind of generative AI is on you - the person who provides the input. As I said at the start: junk in, junk out. So take these important tips into account when crafting your prompts:

  • Provide good context, giving examples and information about what you're trying to achieve
  • Be specific; if it's for a certain audience, say that
  • Break down the problem
  • Be clear in how you ask your questions; if something comes back that doesn't sound right, clarify it
  • Rephrase and refine your prompts

And finally, always, always verify the information you receive from an AI. This matters less when it comes to artwork generators, but for code and written information it's important. Check that the code you receive works the way you intend. Verify the accuracy of any written information provided to you.
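
For code, that check can be as simple as running the suggestion against an input where you already know the answer. Using the hypothetical reverseSentence function from the Copilot sketch above:

// Quick sanity check on AI-generated code: test it against a known expected output.
console.assert(
  reverseSentence("hello world from Copilot") === "Copilot from world hello",
  "reverseSentence did not reverse the word order as expected"
);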

Remember, no matter what happens, YOU are still the pilot. YOU are still in charge, and you have final say on what pieces of art, what code snippets, and what information you use and share.

