ChatGPT

Using AI Analogies to Simplify Complex Economic Concepts

Shows how asking AI for analogies and diagrams can transform complex technical answers into easily understandable explanations.

Transcript:

Today’s video is a pretty simple one. Sometimes when we ask an AI a question, the response that we get is technical in nature. In this case, the question I’m asking is: how does the Reserve Bank manage inflation with interest rates? Now, we’d probably all have a rough idea of the answer, but it’s a good example of how to take a complex answer and make it easier to understand. Without any additional prompting, the response that I got talks about the core mechanisms and the additional tools that are available to the Reserve Bank. I then simply said, “Can you use an analogy and diagrams to explain this to me?” In this case, the analogy that was selected was a car: the car is the economy, the accelerator is consumer spending, and the brake is interest rates. The diagram that was built makes it clearer again. So the next time you’re getting a complicated answer from ChatGPT or from Claude, try this approach.
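The same follow-up technique works when you call a model from code rather than a chat window. Here is a minimal sketch, assuming the OpenAI Python SDK and a placeholder model name (not from the video), that asks the technical question first and then follows up with the analogy-and-diagrams request in the same conversation.

```python
# Sketch only: two-turn conversation where the second turn asks for an analogy.
# The model name "gpt-4o" is an assumption; swap in whichever model you use.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

messages = [{"role": "user",
             "content": "How does the Reserve Bank manage inflation with interest rates?"}]
first = client.chat.completions.create(model="gpt-4o", messages=messages)

# Keep the technical answer in the conversation, then ask for the simpler version.
messages.append({"role": "assistant", "content": first.choices[0].message.content})
messages.append({"role": "user",
                 "content": "Can you use an analogy and diagrams to explain this to me?"})
second = client.chat.completions.create(model="gpt-4o", messages=messages)
print(second.choices[0].message.content)
```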

Understanding Tokens and Context Windows in Language Models

Explains how tokens work as the measurement system for language models, affecting both processing and pricing.

Transcript:

When we start dealing with language models like ChatGPT or Claude, one of the important things we need to manage is something called a context window. A context window represents the amount of information that the language model can handle at any one time. Now, the context window is actually measured in what are called tokens, and tokens are kind of like the core measurement system for the way that we deal with language models. Generally tokens are about four characters long, and it’s important to understand them because they can also affect the pricing you pay when you’re dealing with language models. What you’re seeing in front of you is a token counting system that I’ve built that allows me to put in some text (this is just some text off our website), and it then goes through and derives the number of tokens that are used. You can see that in most cases it’s breaking the words down into small chunks of about four characters, but there are a couple of things I wanted to highlight for you, because one of the ways that a language model works is by finding patterns in words. So if you look at the word “information” here, you can see that rather than just breaking it up into chunks of four characters, it’s identified common patterns: the letters I-N, the letters F-O-R-M, then A-T and I-O-N. The way that a language model breaks up the words makes it easier for it to interpret what you’re asking and to look for patterns so it can give you a meaningful answer. If you’d like to know more about this, let me know.
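If you want to experiment with token counting yourself, here is a minimal sketch using OpenAI’s tiktoken library (the encoding name is an assumption, and this is not the tool shown in the video). It counts the tokens in a piece of text and shows which characters each token maps back to, so you can see the sub-word splits for yourself.

```python
# Minimal token-counting sketch using the tiktoken library.
# The encoding "cl100k_base" is an assumption; different models use different encodings,
# so the exact splits (and whether a common word is one token or several) will vary.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

text = "Tokens are the core measurement system for language models."
token_ids = enc.encode(text)

print(f"{len(token_ids)} tokens for {len(text)} characters")
for token_id in token_ids:
    # Decode each token individually to see the chunk of text it represents.
    print(token_id, repr(enc.decode([token_id])))
```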

Computer Vision Analysis with Language Models Demo

Demonstrates how language models can analyze images using computer vision to identify objects, count people, and extract details.

Transcript:

When we think about language models, a lot of the time we think about how they deal with text, but something that’s really useful with language models is also dealing with images. Both ChatGPT and Claude, and other language models, use a capability called computer vision, which enables them to interpret an image in some form. In this case, I found an image online, and we’re going to use computer vision to do an analysis of it. I’ve asked Claude in this case to describe the key elements in the image, identify any text that appears, explain the relationships between objects and any unusual or notable aspects, and to specifically be aware of objects and their attributes, text and labels, spatial relationships and context clues. In this case, the analysis is quite detailed. It’s told us that it can see market stalls with green and teal awnings labeled as a market. It’s identified that there are vendors, shoppers and visitors, that there’s historic architecture visible in the background, and that the market is under some metal beams. It’s identified relationships between the crowds of people navigating the stalls, the historic buildings, the trees and the greenery, and the merchandise that’s on display. Now, in this case this isn’t amazingly useful information unless our job was analyzing images, but it may be useful because it allows us to get more detail about a specific aspect of the image. In this case, I’ve asked, “Can you count the number of people in the image?” and it’s identified approximately 15 to 20 people and told me why it’s hard to do that. I’ve then pressed it for an exact count, and you can see what it’s done: it’s identified 16 distinctly visible people in the image. In isolation, analyzing a single image on its own is not amazingly useful, but you can see the power available when it comes to analyzing dozens or hundreds of these sorts of images. In this case we looked at a photograph, but it could be a screenshot of a website or a diagram from a textbook, really anything that you wanted to analyze that has an image.
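For analyzing dozens or hundreds of images, you would typically do this programmatically rather than through the chat interface. Here is a minimal sketch using the Anthropic Python SDK to send an image along with the same kind of analysis prompt; the model name and file path are assumptions, not details from the video.

```python
# Sketch: sending an image plus an analysis prompt to Claude via the Anthropic SDK.
# The model name and the file "market.jpg" are hypothetical placeholders.
import base64
import anthropic

client = anthropic.Anthropic()  # assumes ANTHROPIC_API_KEY is set in the environment

with open("market.jpg", "rb") as f:
    image_data = base64.standard_b64encode(f.read()).decode("utf-8")

message = client.messages.create(
    model="claude-3-5-sonnet-latest",  # assumed model name
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": [
            {"type": "image",
             "source": {"type": "base64", "media_type": "image/jpeg", "data": image_data}},
            {"type": "text",
             "text": "Describe the key elements in this image, identify any text that appears, "
                     "explain the relationships between objects, and note anything unusual."},
        ],
    }],
)
print(message.content[0].text)
```

Wrapping this call in a loop over a folder of screenshots or photos is where the approach described above starts to pay off.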

AI Tools Evolution: From Web Scraping to MCP Integration

Explains the Model Context Protocol (MCP), an open standard that allows AI tools to communicate with external services in a predictable way.

Transcript:

One of the challenges of using a tool like ChatGPT or Claude is that their capabilities are limited. OpenAI introduced Operator and Anthropic launched Computer Use in the last few months. However, even these are limited. The challenge is that these tools are built to imitate how a human interacts with the web, but AI isn’t a human. So it makes more sense that machines talk to machines, rather than machines pretending to be humans talking to websites built for humans. This problem was solved a long time ago. When you open the weather app on your phone, your phone uses something called an API to get the information it needs from another computer on the internet. The great thing about this is that the format in which your phone requests the information, and the way it gets that information back, is known and predictable. When we want a tool like ChatGPT to go beyond a chat and start interacting with other services, generally we need to babysit what’s going on, continually guiding it to make sure it’s going to the right website and looking at the right information. However, there’s a new way of doing things that makes it much easier for AI tools to talk to other services in a known and predictable way. It’s called the Model Context Protocol, or MCP. Now, the name doesn’t really tell you much about what it is or how it works. However, MCP allows services like the weather service I mentioned before to be made available to your AI chat tools. It’s an open standard, and there are only a handful of tools that support MCP today. However, dozens and dozens of services are being built to support MCP, with more every day. For example, there’s an MCP connection for Salesforce so you can ask your AI about customer information, and another one for connecting to any database so you can ask questions about your data using natural language. MCP is gaining a lot of traction. It allows you to get your AI chats to do so much more in a predictable way, without having to do the babysitting. If you want to know more about this, comment below with your questions.
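To make the idea concrete, here is a minimal sketch of what an MCP server for the weather example could look like, using the FastMCP helper from the official MCP Python SDK. The server name, tool, and weather data are hypothetical placeholders, not an implementation from the video.

```python
# Sketch of an MCP server exposing a single weather tool, using FastMCP
# from the official MCP Python SDK. The name, tool, and data are hypothetical.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("weather")

@mcp.tool()
def get_forecast(city: str) -> str:
    """Return a short weather forecast for a city."""
    # A real server would call a weather API here; this is a stand-in.
    fake_forecasts = {"Sydney": "22°C and sunny", "Melbourne": "16°C with showers"}
    return fake_forecasts.get(city, f"No forecast available for {city}")

if __name__ == "__main__":
    # Run over stdio so an MCP-aware client (for example Claude Desktop) can connect.
    mcp.run(transport="stdio")
```

Once an MCP-aware chat tool is pointed at a server like this, the AI can call the tool itself in that known, predictable format, which is exactly the “no babysitting” benefit described above.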