Demonstrates how to use Anthropic’s free console tool to automatically generate more effective prompts for language models.
Transcript:
Getting a language model to work the way you want it to isn’t always easy. Thankfully, we can use AI to make the job of using AI a bit easier too. What you’re looking at here is the Anthropic Console. You can create a free Anthropic.com account, navigate to the console, and use it to generate a prompt for you. In this case, I’m looking for a prompt that summarizes a technical document so that it’s easier for a project manager to understand. You can see that there’s a significant difference between simply asking the language model to summarize a document and this prompt here. So the next time you’re stuck trying to get a language model to do what you want, try using this tool.
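The difference between a bare request and a Console-generated prompt can be sketched in code. This is a minimal illustration assuming the `anthropic` Python SDK; the structured prompt text and the model name below are stand-ins, not the Console's actual output.

```python
# Sketch: a bare summarization request vs. a structured, Console-style
# prompt. The structured prompt here is illustrative only; the Console's
# generator produces its own (typically longer) version.

NAIVE_PROMPT = "Summarize this document."

def build_structured_prompt(document: str) -> str:
    """Build a Console-style prompt: role, audience, steps, output format."""
    return (
        "You are a technical writer preparing summaries for project managers.\n"
        "Summarize the document inside <document> tags. Follow these steps:\n"
        "1. Identify the document's purpose and key technical points.\n"
        "2. Translate jargon into plain language.\n"
        "3. Note any decisions or risks a project manager should act on.\n"
        "Respond with a short overview paragraph, then a bulleted list.\n\n"
        f"<document>\n{document}\n</document>"
    )

prompt = build_structured_prompt("API latency increased 40% after the v2 rollout...")

# Sending either prompt requires an API key (assumes the anthropic SDK):
# import anthropic
# client = anthropic.Anthropic()
# reply = client.messages.create(
#     model="claude-sonnet-4-0",  # model name is an assumption
#     max_tokens=500,
#     messages=[{"role": "user", "content": prompt}],
# )
```

The structured version tells the model who the audience is, what steps to take, and what shape the answer should have, which is exactly the kind of scaffolding the Console's generator adds for you.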
Demonstrates how the sequential-thinking MCP server breaks complex queries into smaller chunks to produce more detailed and thorough responses.
Transcript:
Today’s video will help you get much better responses from your AI. I’m using something called sequential thinking, and I’m going to show you the same prompt used in two different scenarios. The first is without sequential thinking. Here’s my prompt: I’m asking the AI to give me a risk analysis for transitioning to a cloud-based customer management system. The response is okay. It gives me some high-priority risks, and then it shows me a little of the information it considered along the way. However, here’s the exact same prompt, this time with sequential thinking. You’ll notice that it has actually strung together a whole series of different prompts. It identified the main risk categories, and then for each of those categories it assessed the risks in detail. Only when it got to the end did it put together a priority assessment for me. You can see that we’ve got a lot more detail in the response, and with my priority risk results it’s showing me an impact level and a probability level for each one as well. Now, what you’re seeing here is unique to Claude, and it uses a functionality called MCP servers, which lets you add on extra capabilities that the language model wouldn’t have on its own. The sequential thinking add-on breaks your initial query down into small chunks, attempts to assess each of those chunks, and then brings it all back together to give you a better response. If you’re interested in setting this up, let me know.
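The decomposition described above can be sketched as plain code: identify categories first, assess each one in turn, and only then combine everything into a priority ranking. This is a toy Python sketch with made-up categories and scores; the real sequential-thinking MCP server drives the language model through steps like these rather than using hard-coded data.

```python
# Toy sketch of the sequential-thinking pattern: break the query into
# steps, work each step, then combine. All categories and scores are
# illustrative, not model output.

def identify_categories(query: str) -> list[str]:
    # Step 1: in the real flow, the model derives these from the query.
    return ["security", "cost", "data migration", "downtime"]

def assess_category(category: str) -> dict:
    # Step 2: one focused assessment per category (hard-coded here).
    scores = {
        "security": (4, 3),        # (impact, probability) on a 1-5 scale
        "cost": (3, 3),
        "data migration": (5, 2),
        "downtime": (4, 2),
    }
    impact, probability = scores[category]
    return {"category": category, "impact": impact, "probability": probability}

def prioritize(assessments: list[dict]) -> list[dict]:
    # Step 3: only after every assessment do we rank by impact x probability.
    return sorted(assessments, key=lambda a: a["impact"] * a["probability"],
                  reverse=True)

query = "Risk analysis for moving to a cloud-based customer management system"
assessments = [assess_category(c) for c in identify_categories(query)]
ranked = prioritize(assessments)
for risk in ranked:
    print(risk["category"], risk["impact"], risk["probability"])
```

The point of the pattern is the ordering: no priority list is produced until each category has had its own dedicated assessment, which is why the sequential-thinking response ends up so much more detailed than a single-shot answer.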