May 13, 2024

Few-Shot Prompting

Few-shot prompting explained

Despite the impressive zero-shot capabilities exhibited by modern large language models, they still struggle with more complex tasks in the zero-shot setting. To overcome this limitation, few-shot prompting can be employed as a technique to facilitate in-context learning. Few-shot prompting involves providing a limited number of examples, or demonstrations, to the AI model to guide its learning process. These demonstrations condition the model for subsequent examples where responses need to be generated.

Let's try it on a simple mathematical task.

Prompt:

Q: 1 3 5 7 9 11
A: 13

Q: 17 19 21
A:

Output:

23

By utilizing few-shot prompting, developers enable in-context learning: the model picks up a task from just a handful of demonstrations. Sometimes a single example (1-shot learning) is enough; for more challenging tasks, increasing the number of demonstrations to 3-shot, 5-shot, or 10-shot learning can be beneficial.
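In practice, few-shot prompts like the one above are usually assembled in code. Below is a minimal sketch using the OpenAI-compatible Python client; the base URL, API key, and model name are placeholders rather than specific recommendations.

Code:

from openai import OpenAI

# Placeholder endpoint and key -- substitute your provider's actual values.
client = OpenAI(base_url="https://example-provider.com/v1", api_key="YOUR_API_KEY")

# Each demonstration is a (question, answer) pair; the final question is left open.
demonstrations = [("1 3 5 7 9 11", "13")]
query = "17 19 21"

# Assemble the few-shot prompt in the same Q/A format used above.
prompt = "\n\n".join(f"Q: {q}\nA: {a}" for q, a in demonstrations)
prompt += f"\n\nQ: {query}\nA:"

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)  # expected: 23

Adding more tuples to the demonstrations list turns the same sketch into 3-shot, 5-shot, or 10-shot prompting.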

But what's the best way to present such context prompts? A study by Min et al. [2022] offers a couple of tips.

  1. Be consistent with how you label the input data.
  2. The format matters too: even random labels lead to better performance than no labels at all.

Let's experiment with it.
Prompt with a consistent format:

Text: It rained all weekend, ruining our camping trip.
Classification: Negative

Text: The delicious smell of cookies filled the kitchen.
Classification: Positive

Text: The movie had a predictable plot and forgettable characters.
Classification:

Output:

Negative

With a consistent labeling format, the model delivered accurate results. Our ongoing experiments suggest that newer AI models are becoming even more robust to format variations.
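To reproduce this classification setup programmatically, a helper that enforces one consistent "Text:/Classification:" format could look like the sketch below. The endpoint and model name are again placeholders.

Code:

from openai import OpenAI

client = OpenAI(base_url="https://example-provider.com/v1", api_key="YOUR_API_KEY")

# Labeled demonstrations, all in one consistent "Text:/Classification:" format.
examples = [
    ("It rained all weekend, ruining our camping trip.", "Negative"),
    ("The delicious smell of cookies filled the kitchen.", "Positive"),
]

def build_prompt(examples, new_text):
    """Assemble a few-shot classification prompt with consistent labels."""
    blocks = [f"Text: {text}\nClassification: {label}" for text, label in examples]
    blocks.append(f"Text: {new_text}\nClassification:")
    return "\n\n".join(blocks)

prompt = build_prompt(examples, "The movie had a predictable plot and forgettable characters.")
response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content.strip())  # expected: Negative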


Prompt with a random format:

Negative It rained all weekend, ruining our camping trip.

Text: The delicious smell of cookies filled the kitchen - Positive

The movie had a predictable plot and forgettable characters.

Output:

Negative

The output still holds up. However, there's no guarantee it would survive a more complex scenario; the sketch below shows a quick way to test your own model's sensitivity to formatting.
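The experiment simply sends both the consistent and the mixed-format prompts and prints the two answers side by side. As before, the endpoint and model name are placeholders.

Code:

from openai import OpenAI

client = OpenAI(base_url="https://example-provider.com/v1", api_key="YOUR_API_KEY")

consistent_prompt = (
    "Text: It rained all weekend, ruining our camping trip.\nClassification: Negative\n\n"
    "Text: The delicious smell of cookies filled the kitchen.\nClassification: Positive\n\n"
    "Text: The movie had a predictable plot and forgettable characters.\nClassification:"
)

mixed_prompt = (
    "Negative It rained all weekend, ruining our camping trip.\n\n"
    "Text: The delicious smell of cookies filled the kitchen - Positive\n\n"
    "The movie had a predictable plot and forgettable characters."
)

# Run both variants and compare the answers the model returns.
for name, prompt in [("consistent", consistent_prompt), ("mixed", mixed_prompt)]:
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"{name}: {response.choices[0].message.content.strip()}")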

Few-shot prompting pitfalls:

Let's test the model on a text-continuation task with an implicit counting constraint. We'll feed lines from two songs to the model: "Crazy" by Gnarls Barkley and "It's My Life" by Bon Jovi. In each demonstration, the answer gives the two lines that follow the quoted lyric, so we expect the model to return the two following lines from Frank Sinatra's "Fly Me to the Moon".
Prompt:

Q: I remember when
A: I remember, I remember when I lost my mind
There was something so pleasant about that place

Q: It's my life, it's now or never
A: I ain't gonna live forever
I just want to live while I'm alive

Q: Fly me to the moon
A:

Output:

Let me play among the stars, 
let me see what spring is like on Jupiter and Mars. 
In other words, hold my hand, in other words, baby, kiss me. 

Fill my heart with song
Let me sing forevermore
You are all I long for
All I worship and adore

...

As you can see, the model's answer is similar to what we expected but not quite right: instead of stopping after the two lines the demonstrations implied, it keeps going through the rest of the song. Such inaccuracy can be problematic. Indeed, few-shot prompting has limitations. This example required more structured reasoning than a handful of demonstrations could enforce. For these types of problems, we might need to break things down step by step for the model. Chain-of-thought prompting (covered in the next article) is a newer technique that has been successful in tackling these more challenging reasoning tasks.

Test out the prompts in the AI/ML API Playground.

We're excited to see what amazing projects you will bring to life. Happy prompting!
