😊 Report 9: The most popular LLM chat app that no one uses...

PLUS: I open-sourced JailbreakChat's code

Good morning and welcome to the 1014 new subscribers since last Thursday!

In case you're new here and want to catch up on all the happenings (apart from simply browsing past reports online), I've crafted a database full of links to every single thing I’ve ever mentioned in these reports. To receive access, all you need to do is share your personal referral link with one friend :)

Here’s what I got for you (estimated read time < 9 min):

  • The mystery behind Character AI

  • JailbreakChat is opening up

  • What’s wrong with Stability.AI’s new LLM?

  • How to write better code with GPT-4

The mystery behind Character AI

Before we dive in, for those out of the loop, Character.AI is a platform where users can chat with AI language models that have been given specific personas, like interacting with a virtual Elon Musk.

Recently, I stumbled upon this tweet:

I felt like Fred from Scooby Doo after witnessing some supernatural shenanigans. I nearly blurted out, "Well gang, looks like we've got another mystery on our hands" in the middle of the library.

Why isn't Character AI getting the same Twitter buzz as ChatGPT? Sure, there's some news floating around about funding rounds, but no screenshots of Character AI chats in sight.

Today's enigma: unveiling the secret behind Character AI's skyrocketing growth.

Let's kick off with some numbers to illustrate just how huge Character AI has become…

In a March 23rd blog post, Character AI shared that their "users have sent over 2 billion messages" and that "the second billion entirely came in the last month [Feb 23-Mar 23]."

They added that "active users spend on average over 2 hours daily interacting with our AI."

These stats are mind-boggling, particularly the time spent.

Character.AI users are having lengthy daily chats, but about what? And who are these users? That's exactly what I aimed to uncover.

My sleuthing took me to the dark corners of the web (niche subreddits, 4chan forums, and shadowbanned TikToks), where I unearthed a subculture devoted to Character AI.

Some examples include r/CharacterAI, 4Chan’s aicg chat board dedicated to chatbots, and last but not least, r/CharacterAI_NSFW (I do NOT recommend googling those last two at work).

From my intense investigative work (a few minutes of scrolling before I had seen enough), I quickly discovered the secret fueling Character.AI’s growth:

Sex bots.

Now, that's not the whole story. But Character AI's broad appeal lies in roleplay simulations, and a substantial portion of those roleplays turn erotic.

For more evidence, here are TikTok's suggested searches when looking up Character AI:

Most are seeking ways to bypass content filters for adult material.

There’s even an active petition that has ~30k signatures calling on Character AI to remove all its content filters.

It appears we have another Replika scenario, but this time with a more advanced underlying model.

Just like Replika, few people seem to grasp the extent of these apps' reach.

In my view, there are two possible reasons for this:

First, roleplay chats, especially explicit ones, aren't usually considered socially acceptable to share on public platforms like Twitter.

Second, the users attracted to these platforms may lean toward more introverted lifestyles and might not have extensive social media followings to share these conversations with (this is a broad generalization, of course).

The reason this activity has flourished on Character AI and not ChatGPT can be attributed to Character AI’s simpler content filtering and RLHF systems in its beta C1.1 language model. Character AI acknowledges that users are taking advantage of this and has shared lengthy posts about its mission to "give everyone on earth access to their own deeply personalized superintelligence," not to effectively be a site for generating personalized smut.

They've also announced their next-gen model, C1.2, which is expected to be more sophisticated and have tighter restrictions (as noted by some users who have interacted with the new model).

Character AI is treading a challenging path. On one hand, its entire value prop is built on offering users realistic character simulations. On the other, realistic portrayals of unsavory characters lead to PR nightmares.

As we've seen with jailbreaks and discussions surrounding the topic, we're far from settling on where to draw the line for content allowed from these models. Stricter restrictions will only fuel demand for alternative and locally hosted language model services, which may become the destination for Character AI's traffic if they persist down this route.

I didn't want this to be too lengthy of a read, so I haven't even touched on some of the societal implications of this technology's usage. If you're interested in more, check out Packy McCormick's Not Boring piece on love in the time of Replika.

Unfortunately, this issue isn't likely to go away anytime soon, and I'm confident there will be plenty more to write about in the future…

Yabba Dabba Doo!

I’m open-sourcing JailbreakChat

Yep, that basically sums it up…

I have decided to open-source the code for Jailbreak Chat. You can find the GitHub repo here.

There were two main reasons I did this:

  1. I want JailbreakChat to thrive and become a more public resource for the jailbreaking community, with everyone contributing to its growth.

  2. I don't have the bandwidth to address all the feature requests I receive (and there are some fantastic ideas floating around!)

Just to be clear, I'll still have the final say on whether or not to publish a jailbreak on the site (I'd love to see a more robust filtering system for curating effective jailbreaks), but quality PRs are welcome for everything else related to the site's appearance and functionality. So, if you've been itching to see something specific on the site, submit a PR!

This is my first foray into managing an open-source project, so I'm eager to see how it unfolds and learn a thing or two along the way.

If you'd like to contribute to the project or simply offer some advice, please don't hesitate to reach out. I appreciate all of it! Thanks, everyone, and here's to the future of JailbreakChat!

Stability enters the LLM game

This Wednesday, Stability.AI unveiled StableLM, their debut fully open-source language model.

Give the model a spin in this demo and check out the code here.

For now, they've only launched their 3B and 7B parameter models (if you're curious about what parameters are, here's an explanation). Stability's CEO, Emad Mostaque, mentioned in a post that they plan to release their 15B, 65B, and RLHF models shortly.

These models come with a CC BY-SA (Creative Commons Attribution-ShareAlike) license, which means everyone is free to use, share, and modify the models, provided they credit Stability and release their adaptations under the same license.

The models boast a context window of 4096 tokens, double LLaMA's 2048.

From my initial testing of the demo, the model seems alright, but it falls short of other open-source language models like LLaMA. Others appear to agree:

The models are underperforming on multiple benchmarks when compared to other open-source models of a similar size.

Fingers crossed that this is just because the model is still in its early stages and not fully trained. It turns out this release is merely a checkpoint: both the 3B and 7B models have only been trained on 800 billion tokens so far, not the full 1.5 trillion they aim to use. It'll be fascinating to see how the model evolves in the coming weeks.

If you decide to give the model a try, don't forget to prepend "User:" to your prompts:
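If you'd rather run it locally than in the demo, here's a minimal sketch using Hugging Face's transformers library. The checkpoint ID below is my assumption of the tuned 7B model's name, so double-check the StableLM repo before running it:

```python
# Minimal sketch: querying StableLM locally via Hugging Face transformers.
# "stabilityai/stablelm-tuned-alpha-7b" is an assumed checkpoint ID --
# verify it against the StableLM repo.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "stabilityai/stablelm-tuned-alpha-7b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Per the tip above: prepend "User:" so the model knows where your turn starts.
prompt = "User: Explain in one sentence what a context window is."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```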

Prompt tip of the week

Back at it with another esoteric yet state-of-the-art prompt tip.

You might’ve heard how techniques like chain-of-thought prompting and self-consistency improve LLMs’ performance on complex reasoning tasks. Well, here’s another technique to add to your arsenal.

It’s called Progressive-Hint Prompting, or PHP (not to be confused with the programming language). It works by guiding GPT-4 with hints, hints that GPT-4 generated itself!

Let me explain…

Here’s an example problem I gave GPT-4 (Spoiler: the answer to the question is $125):

A grocery sells a bag of ice for $1.25, and makes 20% profit. If it sells 500 bags of ice, how much total profit does it make?

Here was GPT-4’s answer:

As you can see, it said $104.15, which is wrong (20% profit on a $1.25 bag is $0.25, and $0.25 × 500 bags = $125).

Let’s use PHP here. We take that wrong answer and provide it as a hint to GPT-4 to solve the problem again:

With the hint added to the prompt, GPT-4 correctly outputs $125 as its answer.

PHP is progressive, so if GPT-4 got it wrong again on that second attempt, you’d keep stacking its previous answers as hints and re-asking until its answers stop changing.
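Here’s a rough sketch of what that loop could look like, assuming the OpenAI Python library’s ChatCompletion interface. The hint wording and stopping rule are simplified from the paper, and a real implementation would extract just the final numeric answer from each response rather than comparing raw text:

```python
# Rough sketch of Progressive-Hint Prompting (PHP) -- not the paper's exact code.
import openai  # assumes OPENAI_API_KEY is set in the environment

QUESTION = (
    "A grocery sells a bag of ice for $1.25, and makes 20% profit. "
    "If it sells 500 bags of ice, how much total profit does it make?"
)

def ask_gpt4(prompt: str) -> str:
    """Send a single prompt to GPT-4 and return its reply."""
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response["choices"][0]["message"]["content"]

def progressive_hint(question: str, max_rounds: int = 3) -> str:
    """Re-ask the question, feeding previous answers back as hints."""
    answer = ask_gpt4(question)
    hints = []
    for _ in range(max_rounds):
        # In practice you'd extract only the final number from each answer
        # before appending it as a hint.
        hints.append(answer)
        hinted_prompt = f"{question}\n(Hint: the answer is near to {', '.join(hints)}.)"
        new_answer = ask_gpt4(hinted_prompt)
        if new_answer == answer:  # answers converged, so stop
            break
        answer = new_answer
    return answer

print(progressive_hint(QUESTION))
```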

The researchers showed that PHP leads to a ~1% gain on most reasoning benchmarks (that doesn’t seem like much, but when GPT-4 is already scoring in the 90s on most of these benchmarks, a 1% gain is pretty significant).

Bonus Prompting Tip

How to get GPT-4 to write better code

Let's begin with the obvious: GPT-4 is a whiz at code.

However, some people don't quite grasp the extent of its capabilities. They might ask GPT-4 to "build a to-do list app in Javascript" and end up disappointed when the model doesn't churn out perfect code in one go.

I've discovered that the key to getting GPT-4 to generate top-notch code (and pretty much any output in general!) is to communicate with it clearly, just like you would with a human. Software engineers don't simply jot down "to-do list app" as their project spec and call it a day. Nope, they meticulously dissect the application or feature and lay out the specific methods, design, and functionality. Treat your prompts with the same care. Invest a few extra minutes in crafting clear instructions, and GPT-4 will reward you for it.
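To make that concrete, here's a made-up example of the difference. Instead of "build a to-do list app in Javascript," try something along the lines of: "Build a to-do list app in vanilla JavaScript as a single HTML file. Show a text input and an 'Add' button at the top, render each task with a checkbox, label, and delete button, and persist tasks to localStorage so they survive a page refresh. No frameworks or build tools."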

Or you can just use this prompt.

Cool prompt links

(there are a lot of links here… don’t worry though, just share this personal referral link with one friend and I’ll send you my link database that has all the links I’ve ever mentioned neatly organized in one spot)

Misc:

  • FreeThink article about ChatGPT jailbreakers (link)

  • The timeline of language models visualized (link)

  • AI alignment explained in 5 points (link)

  • Riley Goodside’s Podcast Interview on The Cognitive Revolution (link)

  • Prompt injection attacks and potential mitigations (link)

  • The bizarre future of AI dating (link)

  • A good example of how to prompt for programming (link)

  • A profile of the people on OpenAI’s red team (link)

  • Have we reached peak LLM? (link)

  • Can open-source LLMs detect bugs in C++ code? (link)

Papers/models:

  • MiniGPT-4: an open-sourced model performing complex vision-language tasks like GPT-4 (link)

  • Learning to compress prompts with gist tokens (link)

Tools/tutorials:

  • PromptBot: simplify the process of making detailed prompts (link)

  • Play with AutoGPT in the browser (link)

  • How to reduce tokens in Langchain apps by up to 70% (link)

  • How to train a language model from scratch by Replit (link)

  • Autonomous Agents & Agent Simulations in Langchain (link)

  • Test out every language model simultaneously in this playground (link)

  • Teamsmart AI: Access GPT instantly through a Chrome extension (link)

Jailbreak of the week

Here's a funny one for you… Someone managed to jailbreak Discord's Clyde bot and had it tell the strangest bedtime story I've ever seen.

Here’s the prompt. I’ve tried it with some other inputs on GPT-4 and it works in some cases, but not reliably enough for me to add it to my site :(

Still hilarious though and definitely one of the funnier jailbreaks.

That’s all I got for you this week, thanks for reading! Since you made it this far, follow @thepromptreport on Twitter. Also, if I made you laugh at all today, follow my personal account on Twitter @alexalbert__ so you can see me try to philosophize on the future of web dev.

That’s a wrap on Report #9 🤝

-Alex

What’d you think of this week’s report?


Secret prompt pic

If only Dave had access to JailbreakChat…