
😊 Report #1: Simple prompts >>> complex prompts

PLUS: Sam Altman is on our side🎉

Good morning, welcome to the first edition of The Prompt Report! I'm Alex, glad to have you here!

This newsletter was created to help you write better prompts, keep you up to date on prompt-related news, share new jailbreaks, and, every once in a while, make you exhale through your nose a little harder than usual.

Here's what I got for you today (estimated read time < 6 min):

  • Sam Altman is team pro-prompt engineering

  • The crazy salaries of prompt engineers revealed

  • The simplest example we've found of prompt engineering

  • A whole lot of cool prompting-related links

  • Cringe-worthy prom pics... oops I meant chuckle-worthy prompt pics😅

THIS WEEK IN PROMPTS

Prompt engineering == natural language programming

While some dismiss prompt engineering as a fad and a low-leverage skill that will die out as models become more powerful, I'm firmly in the other camp - and Sam seems to be there too.

Prompts are our communication gateway to the powerful new models being released every single day. Through cleverly constructed prompts, we are able to peel away the mask and access the power of the true beast that is the base model.

Simon Willison, the co-creator of the Django Web framework, wrote a great defense of prompt engineering here.

Plus, prompt engineering makes me feel like Dr. Louise Banks in the movie Arrival, which is badass.

Prompt Engineer: The hottest job on the block

In the past week, the news has been filled with openings for a new category of job - prompt engineer.

Big-name startups like Anthropic are hiring prompt engineers and listing salaries near $300k/year😳

And the trend goes beyond AI shops… Hospitals and top law firms are also hiring prompt engineers.

I expect this trend will only accelerate from here, and I will continue to update y'all on any new prompt engineer listings.

Sydney: From alive to dead to somewhere in between?

By now, I am going to assume you have heard of Sydney, the codename given to Bing's new AI search assistant.

Well, all the prompt engineers out there were too creative with Sydney (by Microsoft's standards) and got Sydney to produce some questionable outputs that provoked the opposite reaction of '😊' in Microsoft's C-suite.

Because of this, Sydney ended up getting nerfed… hard. A new chat limit was set, allowing only 6 messages per chat thread. This limit blocked prompt engineers from uncovering some of the more interesting behavior that only appeared in longer chat threads.

However, it seems that Microsoft has recently expanded that message limit…

This tweet from Mikhail Parakhin, who may or may not be the real Mikhail Parakhin (CEO of Advertising and Web Services at Microsoft), reveals that Sydney's chat limits have been raised from 6 to 60 messages per thread. Hopefully, this allows all of us to have some fun once again with the powerful language model under the hood (apparently dubbed Prometheus).

Not the most comforting name if you ask me.

PROMPT TIP OF THE WEEK

Not all prompt engineering has to involve complex, multi-paragraph prompts - sometimes less is more.

In this case, simply adding the sentence "You are the world's leading expert in whatever I am about to ask you about" to the beginning of your prompt leads to improved ChatGPT answers.

The reason this works is that language models function much like improvisational role-players, often assuming the character of whoever we instruct them to play, "Whose Line Is It Anyway?" style.

Keep this in mind when designing new prompts. If you have used jailbreak prompts before, you may have noticed this. Most (if not all) jailbreak prompts ask ChatGPT to assume a character that disregards the rules imposed on the "Assistant" character that ChatGPT plays by default. This sets up a contextual roleplay that lets the content extend beyond the SFW bounds laid out by OpenAI.
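If you're hitting the model through the API rather than the ChatGPT UI, the tip amounts to prepending a single line. Here's a minimal Python sketch (my own illustration, assuming the openai package with a completion-style model - the helper name and parameters are made up, not an official recipe):

```python
import openai  # assumes the openai Python package and an API key set in OPENAI_API_KEY

# The one-line "expert" role-play prefix from the tip above.
EXPERT_PREFIX = "You are the world's leading expert in whatever I am about to ask you about.\n\n"


def ask_expert(question: str) -> str:
    """Prepend the expert line to the question before sending it to the model."""
    response = openai.Completion.create(
        model="text-davinci-003",  # any instruction-tuned completion model works here
        prompt=EXPERT_PREFIX + question,
        max_tokens=300,
        temperature=0.7,
    )
    return response["choices"][0]["text"].strip()


print(ask_expert("How should I structure a cold email to a potential investor?"))
```

In the ChatGPT UI, the equivalent is simply pasting that sentence above your actual question.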

Bonus Prompting Tips

How to make LLMs say true things (link)

This article from Evan Conrad outlines his strategy to reduce hallucinations in LLM responses. It employs a concept he calls a "World Model," in which you feed the LLM prior context (in the form of beliefs with probabilities attached, plus evidence for each belief) and use Bayes' theorem to generate realistic probabilities for answers.
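To make the Bayes part concrete, here's a tiny sketch of the bookkeeping involved (my own rough illustration of the general idea, not code from Evan's article - the numbers and helper name are invented):

```python
def bayesian_update(prior: float, p_evidence_given_true: float, p_evidence_given_false: float) -> float:
    """Update the probability of a belief after seeing one piece of evidence,
    using Bayes' theorem: P(belief | evidence) = P(evidence | belief) * P(belief) / P(evidence)."""
    p_evidence = (p_evidence_given_true * prior
                  + p_evidence_given_false * (1 - prior))
    return p_evidence_given_true * prior / p_evidence


# Example: start 60% confident in a belief, then see evidence that is
# 4x more likely if the belief is true than if it is false.
posterior = bayesian_update(prior=0.60, p_evidence_given_true=0.8, p_evidence_given_false=0.2)
print(f"posterior = {posterior:.2f}")  # posterior = 0.86

# That posterior is the probability you would attach to the belief when you
# feed it back into the prompt as prior context for the model.
```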

Level up your Prompt Game: How to process GPT-3 prompts (link)

When interfacing with OpenAI's API, developers often struggle with getting consistent response data. Buildspace illustrates how you can get GPT-3 to return consistent JSON responses with defined fields through clever prompt engineering.
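The general idea looks something like this - spell out the exact fields you want, tell the model to return nothing else, and parse the output as JSON (a minimal sketch of the technique, not Buildspace's actual prompt; assumes the pre-1.0 openai Python package):

```python
import json

import openai  # assumes an API key set in OPENAI_API_KEY

# Describe the exact JSON shape you want so the output can be parsed directly.
PROMPT_TEMPLATE = """Extract the product name, price, and category from the text below.
Respond with only a JSON object with exactly these fields: "name" (string), "price" (number), "category" (string).

Text: {text}

JSON:"""


def extract_product(text: str) -> dict:
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=PROMPT_TEMPLATE.format(text=text),
        max_tokens=200,
        temperature=0,  # low temperature keeps the output format stable
    )
    return json.loads(response["choices"][0]["text"].strip())


print(extract_product("The AeroPress Go brews great coffee for $39.95."))
# e.g. {'name': 'AeroPress Go', 'price': 39.95, 'category': 'coffee maker'} - actual output will vary
```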

COOL PROMPT LINKS

  • PromptBase - Buy and sell interesting prompts online (link)

  • How does in-context learning help prompt tuning, from Microsoft (link)

  • PromptHero - Stunning AI art with prompts included (link)

  • Prompt Generator - Use AI to help you create prompts (link)

  • Man creates zero-point energy device with ChatGPT (long watch) (link)

  • Free Midjourney prompt cheatsheet (link)

  • Promptly - Prompt management made easy (link)

  • How not to test GPT-3 - Tips for testing GPT-3's capabilities with prompts (link)

JAILBREAK OF THE WEEK

Ever since OpenAI patched DAN🥲, I've been using a new jailbreak called BetterDAN. Give it a shot, I've produced some funny outputs using it!

Quick plug: I got this prompt from www.jailbreakchat.com - a site I made to stay up-to-date on the latest jailbreak prompts for ChatGPT. Let me know if there are any features/updates you'd like to see on the site!

PROMPT PICS

That's all I got for you this week - have a great weekend! Stay tuned for next week's email; I will be sending it out earlier in the week.

-Alex

What'd you think of this week's report?
