Language Models

Using Anthropic Console to Generate Better AI Prompts

Demonstrates how to use Anthropic’s free console tool to automatically generate more effective prompts for language models.

Transcript:

Getting a language model to work the way you want it to isn’t always easy. Thankfully, we can use AI to make the job of using AI a bit easier, too. What you’re looking at here is the Anthropic Console. You can create a free Anthropic.com account, navigate to the console, and use it to generate a prompt for you. In this case, the prompt I’m after develops a document summary of a technical resource, making it easier for a project manager to understand. You can see that there’s a significant difference between simply asking the language model to summarize a document and the prompt generated here. So the next time you’re stuck trying to get a language model to do what you want it to do, try using this tool.

Understanding Tokens and Context Windows in Language Models

Explains how tokens work as the measurement system for language models, affecting both processing and pricing.

Transcript:

When we start dealing with language models like ChatGPT or Claude, one of the important things we need to manage is something called a context window. A context window represents the amount of information that the language model can handle at any one time. Now, the context window is actually measured in what are called tokens, and tokens are the core measurement system for the way we deal with language models. Generally, tokens are about four characters long, and it’s important to understand them because they also affect the pricing you pay when you’re dealing with language models. What you’re seeing in front of you is a token counting system that I’ve built. It allows me to put in some text (this is just some text off our website) and it then derives the number of tokens used. You can see that in most cases it’s breaking the words down into small chunks of about four characters, but there are a couple of things I wanted to highlight, because one of the ways a language model works is by finding patterns in words. If you look at the word information here, you can see that rather than just breaking it up into chunks of four characters, it identified common patterns: the letters I-N, then F-O-R-M, A-T, and I-O-N. The way a language model breaks up words makes it easier for it to interpret what you’re asking and to look for the patterns needed to give you a meaningful answer. If you’d like to know more about this, let me know.
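To make the idea concrete, here is a toy greedy subword tokenizer that splits "information" into the same sub-word patterns described above. The vocabulary is invented purely for illustration; real tokenizers learn theirs from huge corpora (typically via byte-pair encoding) and behave differently:

```python
# Toy greedy subword tokenizer: matches the longest known vocabulary
# entry at each position, rather than cutting fixed 4-character chunks.
# VOCAB is invented for this example; real models learn their vocabulary.

VOCAB = {"in", "form", "at", "ion", "a", "t", "i", "o", "n", "f", "r", "m"}

def tokenize(word):
    """Greedily match the longest vocabulary entry at each position."""
    tokens, i = [], 0
    while i < len(word):
        for length in range(len(word) - i, 0, -1):  # try longest match first
            piece = word[i:i + length]
            if piece in VOCAB:
                tokens.append(piece)
                i += length
                break
        else:
            tokens.append(word[i])  # unknown character: emit as-is
            i += 1
    return tokens

print(tokenize("information"))  # -> ['in', 'form', 'at', 'ion']
```

This reproduces the split from the video: the word lands in four learned patterns rather than three arbitrary four-character chunks.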

Sequential Thinking Improves AI Prompt Responses

Demonstrates how sequential thinking MCP breaks down complex queries into smaller chunks to produce more detailed and thorough responses.

Transcript:

Today’s video will help you get much better responses from your AI, using something called sequential thinking. I’m going to show you the same prompt used in two different scenarios. The first is without sequential thinking, and here’s my prompt: I’m asking the AI to give me a risk analysis for transitioning to a cloud-based customer management system. Now, the response is okay. It gives me some priority and high risks, and it shows me a little of the information it considered along the way. However, here’s the exact same prompt, this time with sequential thinking. You’ll notice that it has actually strung together a whole series of different prompts. It identified the main risk categories, and then for each of those categories it went and assessed the risks in detail. Only when it got to the end did it consider putting together a priority assessment for me. You can see that we’ve got a lot more detail in the response, and with my priority risk results it’s showing me an impact level and a probability level for each of these as well. Now, what you’re seeing here is unique to Claude, and it’s using functionality called MCP servers, which allows you to add on extra functionality that the language model wouldn’t be able to do on its own. The sequential thinking add-on breaks your initial query down into small chunks, attempts to assess each of those chunks, and then brings it all back together to give you a better response. If you’re interested in setting this up, let me know.
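The decompose-assess-synthesize pattern the add-on follows can be sketched roughly like this. `ask_model` is a stand-in for a real language-model call, and the hard-coded category list stands in for what the model itself would identify in step one; the actual MCP server handles all of this orchestration for you:

```python
# Minimal sketch of the sequential-thinking pattern: break a query into
# sub-questions, assess each one with its own prompt, then synthesize.

def ask_model(prompt):
    # Placeholder: in practice this calls Claude (or another model).
    return f"[model response to: {prompt}]"

def sequential_risk_analysis(query):
    # Step 1: identify the main risk categories.
    # (Hard-coded here; in reality this would come from ask_model(query).)
    categories = ["security", "cost", "migration", "vendor lock-in"]
    # Step 2: assess each category in detail, one prompt at a time.
    assessments = {c: ask_model(f"Assess the {c} risks for: {query}")
                   for c in categories}
    # Step 3: only now build the prioritized summary from the detail.
    summary = ask_model("Prioritize these assessed risks: "
                        + "; ".join(assessments))
    return assessments, summary

details, priority = sequential_risk_analysis(
    "transitioning to a cloud-based customer management system")
```

The key design point is that the prioritized summary is built last, from already-detailed per-category answers, rather than asked for in one shot.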

Creating Custom Writing Styles in Claude AI

Shows how to create personalized writing styles in Claude by analyzing your own sent emails to make AI-generated content sound like you.

Transcript:

In today’s video, I wanted to show you a technique for getting a language model to sound more like you when it’s generating content. One of the most common ways I see people using a language model is to generate content: marketing campaigns, an email or a response to an email, a whole range of different things. However, getting the language model to actually sound like you is a challenge. Now, in Claude, we have the ability to use writing styles. There are four writing styles included as standard, but you can create your own. So I wanted to show you the key differences between them and then give you an idea of how to create your own writing style. In this case, it’s a pretty straightforward prompt, nothing too detailed. We’re just asking the system to generate an email that we might use on a day-to-day basis. The initial response is okay. It gives you what you asked for, but it’s very generic, very vanilla. There’s no personality, no indication that you’ve actually written this message. So here’s the same prompt, but with my own personal writing style applied. You can see that the output actually sounds like something I would write. You may not be familiar with my writing style, but it’s generally relatively brief and bullet-point driven: here are the outcomes, here’s what we need to do next. So let’s have a look at the writing style. It’s a relatively lengthy description of how I communicate. So how did I actually generate it? The process I used was a very specific prompt that analyzed emails from my sent items in Outlook. I took roughly the last 100 emails I’d written, anonymized them, and gave them to the language model with a specialized prompt, and it in turn developed this writing-style prompt for me. Using one prompt to build another is a pretty common aspect of using language models in a smart way.
So if you’re generating content using a language model, getting it to sound like you is critical. Let me know if you want to know how to do this.
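The anonymization step is worth doing before any emails leave your machine. Here is a minimal sketch using regular expressions; the patterns and replacement labels are illustrative only, and a real pass should also cover names, addresses, and anything domain-specific to your organisation:

```python
# Rough sketch of scrubbing obvious identifiers from sent emails before
# handing them to a language model for style analysis. Patterns and
# labels are illustrative; extend them for your own data.
import re

REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"), "[PHONE]"),
    (re.compile(r"https?://\S+"), "[URL]"),
]

def anonymize(text):
    """Replace each matched identifier with a neutral placeholder label."""
    for pattern, label in REDACTIONS:
        text = pattern.sub(label, text)
    return text

sample = "Hi team, email jane.doe@example.com or call 555-123-4567."
print(anonymize(sample))
# -> "Hi team, email [EMAIL] or call [PHONE]."
```

Once scrubbed, the batch of emails can be pasted into a single prompt asking the model to describe the recurring tone, structure, and phrasing, which becomes the custom writing style.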

Computer Vision Analysis with Language Models Demo

Demonstrates how language models can analyze images using computer vision to identify objects, count people, and extract details.

Transcript:

When we think about language models, a lot of the time we think about how they deal with text, but something that’s really useful is that language models can also deal with images. Both ChatGPT and Claude, among other language models, use a technique called computer vision, which enables them to interpret an image in some form. So in this case, I found an image online and we’re going to use computer vision to do an analysis of it. I’ve asked Claude to describe the key elements in the image, identify any text that appears, explain the relationships between objects and any unusual or notable aspects, and to specifically be aware of objects and their attributes, text and labels, spatial relationships, and context clues. In this case, the analysis is quite detailed. It’s told us that it can see market stalls with green and teal awnings labeled as a market. It’s identified that there are vendors, shoppers, and visitors, that there’s historic architecture visible in the background, and that the market is under some metal beams. It’s identified relationships between the crowds of people navigating the stalls, the historic buildings, the trees and greenery, and the merchandise on display, too. Now, this isn’t amazingly useful information unless our job was analyzing images, but it may be useful because it allows us to get more detail about a specific aspect of the image. In this case, I’ve asked, can you count the number of people in the image, and it’s identified approximately 15 to 20 people and told me why that’s hard to do. I’ve then pressed it for an exact count, and you can see it’s identified 16 distinctly visible people in the image. In isolation, analyzing a single image isn’t amazingly useful, but you can see the power available when it comes to analyzing dozens or hundreds of these sorts of images.
In this case, we looked at a photograph, but it could be a screenshot of a website or a diagram from a textbook, really anything you wanted to analyze that has an image.
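For analyzing images at scale, this kind of request can be made programmatically. Below is a rough sketch of building the request body for Anthropic’s Messages API. The model name, image bytes, and question are placeholders, and actually sending the request would require the `anthropic` SDK (or an HTTP client) plus an API key; only the payload construction is shown here:

```python
# Sketch of preparing an image-analysis request for Anthropic's
# Messages API. Images are sent as base64-encoded content blocks
# alongside the text question.
import base64

def build_image_request(image_bytes, question, model="claude-sonnet-4-5"):
    """Return a Messages API request body pairing an image with a question."""
    return {
        "model": model,          # placeholder model name
        "max_tokens": 1024,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "image",
                 "source": {"type": "base64",
                            "media_type": "image/jpeg",
                            "data": base64.b64encode(image_bytes).decode()}},
                {"type": "text", "text": question},
            ],
        }],
    }

request = build_image_request(b"\xff\xd8fake-jpeg-bytes",
                              "Count the number of people in the image.")
```

Looping `build_image_request` over a folder of photographs is how the single-image workflow above scales to dozens or hundreds of images.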