Jump in the Damn Pool
Agenda
Hour 1
- It includes: a spreadsheet of AI terminology (in lieu of click-bait),
- followed by a list of free AI tools available at home (probably available at work), and
- a list of prompt recommendations and sites where you can download expert-level markdown configuration files for your projects (not just code).
Hour 2
- We will cover the basics of AnythingLLM together (a free chat prompt manager).
- I will give a quick demo of what you can do with an air-gapped LLM:
- save money (there is no subscription required for air-gapped)
- have an AI at your fingertips even without Internet access
- get it to write code for you in any language
- get it to convert one language to another
- get it to generate documentation for your code
- get it to read someone else's shitty documentation and summarize it for you
- get it to perform a security scan of your project
- Installation How-to for AnythingLLM on your home computer. What I recommend:
- A Windows laptop with an NVIDIA card, an M4 Mac, or a Raspberry Pi 5.
- Snacks. I don't mean crappy snacks either. I want you all comfy.
- How to toggle between LLM(s) within AnythingLLM
- Which LLM(s) I recommend running locally and why
- Why you have to use this stuff to stay relevant and where you should focus your time
Hour 3
- How MCP works: the specification, what is ratified, and what is in (constant) flux.
- Guidelines for determining where MCP fits into your workflow.
- Where to begin, and the resources you can expect to allocate.
Hour 1: A Terminology Cheat Sheet
Hour 1: Free AI Tools
- Claude - Claude is awesome, but very limited in its free form. Just pay for Copilot.
- Copilot - Wait, what? You have NO idea where and how to use Copilot. I will fix that.
- Cursor - The AI Code Editor. This one is great if you are open-minded.
- Gemini - Google's baby. You better check yourself at the door. Gemi is wonderful.
- NotebookLM - This is the greatest learning tool ever created. Try to prove me wrong.
- OpenArt - You'll love it, for 40 uses. It's $15 a month for the good shit (Advanced).
- Perplexity - An AI search engine / research assistant (and the next owner of Chrome).
I'm only going to focus on the big four: Copilot, Gemini, NotebookLM, and Perplexity.
Copilot®
Overview
You can use Copilot in several ways:
- from its standalone website providing immediate results from different LLMs,
- as an IDE agent using VSCode, IntelliJ, PyCharm, NeoVim, and a few others,
- from standalone chat prompt managers (AnythingLLM, LM Studio, etc.)
- called from JavaScript (both server and client-side) and it scares the hell out of me,
- and directly using API calls making use of subscriber tokens.
Where it Shines
- Copilot is great for writing code as long as you know what to ask for (and you can select Claude as the backing model).
- It's good at refactoring code while you have credits (or access to the GPT-5 preview).
- It's FANTASTIC at analyzing requests and generating project plans.
- It makes it very easy to supplement analysis using MCP services (local or remote).
Where it Fumbles
- The agent seems to be awesome in VSCode but it barfs frequently in IntelliJ.
- You will run out of credits using the free version very quickly. You can supplement this downtime with a local DeepSeek instance or pay $10 for PRO. It's worth it.
- I am not a fan of Copilot integration in Windows. I'm from GenX. Do cool things, but don't force me to wear your branding or have it stare me in the face all day long.
- It gets a little hostile when assigning it a nickname. Just try it.
Gemini®
Overview
You can use Gemini in several ways:
- from its standalone website providing immediate conversations with options,
- as a widget/gem attached to any native Google product (docs, slides, sheets, etc.),
- from an IDE agent inside major IDEs (including as an option from within Copilot),
- from standalone chat prompt managers (AnythingLLM, LM Studio, etc.)
- called from JavaScript (both server and client-side) and it scares the hell out of me,
- and directly using API calls making use of subscriber tokens.
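To make that last bullet concrete, here is a minimal sketch of a direct Gemini API call in plain Java. I'm assuming the public generativelanguage.googleapis.com REST endpoint and a model name of gemini-1.5-flash; confirm both (and the response shape) against Google's current API docs before betting anything on it.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class GeminiSmokeTest {
    public static void main(String[] args) throws Exception {
        // Assumed endpoint and model name; check Google's current Gemini API docs.
        String apiKey = System.getenv("GEMINI_API_KEY");
        String url = "https://generativelanguage.googleapis.com/v1beta/models/"
                + "gemini-1.5-flash:generateContent?key=" + apiKey;

        // Minimal request body: one user turn, no system prompt.
        String body = """
                {"contents": [{"parts": [{"text": "Trim the fluff. What is RAG?"}]}]}
                """;

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(url))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // Raw JSON comes back; the generated text lives under candidates/content/parts.
        System.out.println(response.body());
    }
}
```

Ten lines of HTTP plumbing is the whole story: a subscriber token, a JSON body, and you are talking to Gemini without a browser in sight.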
Where it Shines
- Gemini excels at collating pertinent info by seeding its responses with insight gained from your existing Google assets. It looks at docs, mail, sheets, weights them in a model, and then using RAG methods, hands you back responses that are frequently exactly what you are looking for.
- It's great to use when pursuing academic topics: literature, history, and mathematics.
- It's AWESOME at analyzing images, identifying visual objects, aesthetics, etc.
- It's one of the best free image generation tools available, especially for web and print.
- It is a FANTASTIC supplementary tool to use when evaluating your email inbox.
- I love just shooting the shit with Gemini. Its answers (and questions) are insightful, and as long as you trim the fluffy language up front, the conversation isn't just fun, it's thought-provoking. Well done, Google.
Where it Fumbles
- You'd think Gemini would be awesome at code generation. It's not. It's really not.
- Plain-text requests for opinionated answers yield fluffy bunny nonsense, and tons of it. I recommend you start every conversation by telling Gemini to trim its answers and cut out the complimentary language. It's condescending and a waste of time.
- Gemini claims it has no conversational persistence beyond the currently active session; that's utter bullshit. I've tested this in numerous ways, the simplest of which is asking Gemini to persist a conversation that is retrievable through a unique question/answer prompt exchange. I even made a game of it. I've asked Gemini to resume our conversations when I indicate who I am and the start of a dirty joke. It then completes the joke, and I verify my identity by giving the (always inappropriate) punchline. This is a built-in feature, but not one that I find well described in documentation, and the notifications I receive are unusual.
NotebookLM®
With NotebookLM you can:
- ingest multiple types of media: PDF, .txt, markdown, .mp3, etc.,
- process and index that accumulated data,
- automatically summarize it,
- and create a clickable mind map,
- make it easily searchable,
- daisy chain it to other notebooks, and
- share your notebook(s) with others.
Things to keep in mind as you fall in love with NotebookLM:
- You are limited to 50 sourced items per notebook.
- You can only export notebooks as a flat image (for now).
- You can share notebooks with others or make them public.
- Within domain-managed notebooks, internal assets are prohibited from ingestion.
- There is a Discord channel available by going to Settings -> Discord.
- You can upload songs and use Gemini to analyze them for: theme, lyrics, tone, etc.
Perplexity®
Perplexity is, first and foremost, an AI-powered search engine and research assistant. It's also a chat prompt manager, giving you the option to select from multiple LLMs and knowledge bases to use when asking it questions. Perplexity is not limited to trivial lookups. Describe what you want to do as clearly as you can, even if it takes multiple lines to convey what you are trying to discover, document, architect, etc.
1. Talk to Perplexity like a PhD student speaking to their advisor. Perplexity will generate a basic summary of concepts, diagrams, links to pertinent content, and a skeleton for steering pursuit of your interest(s). Go to Perplexity with a basic idea, and choose how to focus on, isolate, and document your findings. In this case: LEO satellites.
- markdown does not allow embedding of security-riddled backdoors, and
- plain text compresses MUCH better than binary, reducing final storage requirements.
3. Before exporting, go down a few rabbit holes paying special attention to sources. Perplexity is the OSS-minded equivalent of a search engine. It doesn't just tell you about your topic. It generates proper attribution for each source, giving credit to the person or entity that owns that data.
5. These sources look good. Click on the microphone button on the bottom right. This enables dictation mode, transcribing your speech. This may not seem like a big advantage, but this application...WORKS ON YOUR PHONE. This is a game changer, and they know it. You will run out of queries very quickly; however, they know their target audience is more than happy to pay for not having to waste tons of time digging through bibliographies. A PRO subscription is $20 a month.
Perplexity Summary
Conversational Prompts
As a programmer, that right there is as good as it gets. Whoever was able to train a model to adjust context, tone, grammar, even passive-aggressive hostility without placating me with a word-for-word conversation of Yosemite Sam sounding filth, that person gets a raise. Part of me really wants for such mimicry to rub off on the LLM permanently, but I'm not sure any of us would survive a Terminator script rewrite by Charles. Of all the AIs, Gemini without question is the most willing to meet you halfway on shady requests. I know you aren't supposed to have a favorite child, but damn. This one gets me.
Image Generation Prompts
- camera perspective,
- color palette,
- desired aesthetic (think: vibe), and
- style(s) to employ.
Camera Perspective
- front view - When combined with a good aesthetic, this is a great choice for logos.
- side view - This is extremely useful for how-to and diagram work, usually flattened.
- top down - Top down is usually the default for map views and describing layouts.
- bird's eye view - Ortho views are great for observing depth without the complexity implied by perspective. Be aware that Gemini does NOT like the term "orthographic".
- custom ("from the eye of a 6-foot-tall man 50 feet from <object>") - Yes, this works.
Color Palette
- in black and white - This is great for print graphics and web work.
- using sepia tones - This is often referred to as "antiquing" and uses watery browns.
- and sparkle pony the shit out of it - Surprisingly, this does exactly what you expect.
- using a gradient from creamsicle to puce - Ew, but yeah. This also works.
- using multiplicative soft pastels - Knowledge of other art media makes great input.
Desired Aesthetic
- looks bright and happy - These primarily impact light values and color choices.
- is dark and spooky - Yes, it ends up ominous, but in a blunt way. We can do better.
- is ominous and ephemeral - Give the AI room to work and you will get better results.
- and go full murder hobo - I'm serious. Let Gemini cook. It will not disappoint.
- uses stark western gothic - Ambiguity grants weight to chaos. You want chaos.
Styles to Employ
- as a blueprint, white ink on blue vellum - I'm addicted to this one.
- using comic book halftones - They really do make it easy. It's SO nice.
- in the style of Van Gogh - Your greeting cards are about to get their first OMG!
- in a style somewhere between Matisse and Dr. Seuss - AI does NOT give a shit.
- in the style of Beksinski - Nightmare dark surrealism is not for the faint of heart.
All the Things, All at Once
Hour 2: AnythingLLM
Overview
I recommend everyone install AnythingLLM and choose DeepSeek after installation. Both are free and fast, and AnythingLLM is air-gapped by default. You don't have to worry about someone scraping your prompts, an online AI prompt interface trying to coerce you into a subscription plan, or your data being hijacked and showing up on the dark web in two hours. This is cut and pasted from an article I posted on LinkedIn. Recycle. Reuse. Repeat.
Prerequisites:
- a Windows computer with an NVIDIA card, a Mac with an M4, or a Raspberry Pi 5
- 100 GB of free hard-drive space (using a laptop is fine, actually preferred)
- fast Internet (the DeepSeek LLM, the smallest I recommend, is a 14 GB download)
NOTE: The sweet spot here is an NVIDIA 2070, but anything above a 1060 will show a marked improvement compared to hitting your base CPU directly. You will NOT be asked to provide configuration details to invoke acceleration. If it's available, AnythingLLM will use it. Performance on Mac M4 chips is supposedly awesome, but I cannot confirm. I can tell you I am running DeepSeek on a Pi 5 and it is blowing my mind.
Installation Steps:
1. Make sure at least two tasty beverages are available.
2. Read through Step 8 before doing anything else.
3. Download and install AnythingLLM from AnythingLLM.com onto your personal computer.
4. Go through their very short "Getting Started" flow after opening the install binary.
5. In the search filter, type "DeepSeek" and select the largest DeepSeek LLM that shows up.
6. Pray to the Internet gods for a safe and speedy download.
7. Read A through G in the section below while you wait for the download to complete.
8. Enter a simple workspace alias when prompted. These are simple conversation labels.
Something to Read While the Download Completes
A. CONTEXT-> The reason you need an alias in Step 8 is that conversations with AIs aren't just prompt-driven. They are contextually aware (read: "stateful"). In lieu of starting over with every new question, the AI maintains the context of previous questions and answers from the same workspace. This enables you to steer the conversation as a whole, based on your personal evaluation of the results from previous questions in that space: adding granularity, adjusting success criteria, and even asking the AI for suggestions on how to improve the previous prompts by explaining your end goal. Learning how to write good prompts makes you a better communicator. It will force you to drop combative and extraneous wording and get to the point. For that reason, it is fantastic practice for programmers learning a new language.
B. SAFETY-> Everything on the Internet is scraped and metrics from "anonymous" interaction (especially those with AI) are being collated. AnythingLLM allows you to run an LLM entirely sandboxed (arguably air-gapped). This means when you ask it a question, the question itself and its answers and adjusted weights from previous prompts in the same conversation do not leave your machine or your network. As a security professional, I equate this to the same risks you might have asking questions verbally at your bank. Although you are safe, and the bank teller is employed by the bank and is not a risk, the other people in line have the ability to overhear your conversation. From your tone and wording they can guess: your education, whether your account is in good standing, your confidence level in the bank's management of your money, etc. This is a GREAT phishing opportunity for someone to target you based on those hints. AnythingLLM removes that worry by isolating your conversation.
C. EXTENSIBILITY-> Some LLMs have web-interaction enabled. Others have awareness of the Internet from a snapshot perspective. AnythingLLM is a great choice of prompt manager because it enables you (as a programmer) to interact with the LLM using local API calls. This is HUGE. You get the benefit of air-gapping your conversation, while having the ability to defer work to accelerated hardware (your gaming computer) directly under your control.
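If you want to see what that looks like in practice, here is a minimal sketch. I'm assuming the developer API is enabled on its default port (3001), that a workspace named "sandbox" exists, and that you generated an API key in Settings; confirm the exact path and payload in the API documentation page built into the app, because those details are assumptions on my part.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class LocalChat {
    public static void main(String[] args) throws Exception {
        // Assumptions: local developer API on port 3001, a workspace slug of
        // "sandbox", and an API key created in AnythingLLM's settings panel.
        String apiKey = System.getenv("ANYTHINGLLM_API_KEY");
        String body = """
                {"message": "Please generate Java code to reverse a string.", "mode": "chat"}
                """;

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:3001/api/v1/workspace/sandbox/chat"))
                .header("Authorization", "Bearer " + apiKey)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // Nothing here leaves your machine: same air gap, now scriptable.
        System.out.println(response.body());
    }
}
```

The point isn't the HTTP plumbing; it's that the entire round trip stays on hardware you control.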
D. COST-> Lots of the gold-tier LLMs are expensive. AnythingLLM is free. DeepSeek is free. Both support hardware acceleration using your dedicated video card, resulting in an almost guaranteed two orders of magnitude improvement in performance. This means that most prompts return results in less than 5 seconds.
E. SUMMARIZATION-> DeepSeek will provide a high-level use case describing the steps it will take to respond to your request. To me, this is the most impressive aspect of how LLMs can be used. The teaching potential of AI (for AI to teach others: AI, humans, etc.) seems infinite.
F. REAL-WORLD USE-> The combination of AnythingLLM + DeepSeek post-install works without any network or subscription dependencies. This is a great way to find out what is possible while making you confident enough to pursue installing and comparing other LLMs. I feel that we are about to see an explosion of small-form-factor accelerated hardware options akin to the bitcoin miner hardware that showed up a few years after crypto stopped being dark nerd banter. This will hopefully result in an economic boom as companies race toward innovation. This is yet another reason to consider the low price of NVDA stock right now and their absolute domination of the accelerated GPU sector. AMD and Intel both have a lot of wonderful claims in their roadmaps, but NVIDIA is destroying the competition and will continue to do so. All you need to do is read a little about the history of NVIDIA and CUDA Core acceleration and how they made their specs available to understand why it will be nearly impossible for other manufacturers to catch up (or even consider competing).
G. LIMITATION(S)-> The default context cap in AnythingLLM for any downloaded model is 20 prompts. That means you should limit your conversation to 20 prompts on any given topic before starting a new conversation, because the model will begin trimming its short-term memory. (Alternatively, go into AnythingLLM's config section and raise the context limit above 20, at the cost of memory and storage.)
Code Prompts to Get You Started
1. "Please generate Java code to convert JSON text to XML and output it to a file."
SPECIAL NOTE: If AI is taking over, I will hopefully be one of the humans they leave alive, because I say please when I ask an AI to do anything. It's a mental shift, and I recommend you consider the impact of changing your communication habits as you talk more and more to AI prompts. You will find yourself being demanding, and I find it easier to avoid adopting an overseer tone by thinking of the AI as another person on my team, one who just happens to perform (much) better, but who was, in the most literal sense, home-schooled for a thousand years.
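For calibration, the kind of code you can expect back from prompt 1 looks roughly like this. This is my own sketch using the org.json library (add it as a dependency), not DeepSeek's literal output:

```java
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

import org.json.JSONObject;
import org.json.XML;

public class JsonToXml {
    public static void main(String[] args) throws Exception {
        // Sample input; in practice this would come from a file or an API response.
        String json = """
                {"order": {"id": 42, "items": ["widget", "sprocket"]}}
                """;

        // org.json does the heavy lifting: parse the JSON, then serialize it as XML
        // under a single "root" element.
        JSONObject parsed = new JSONObject(json);
        String xml = XML.toString(parsed, "root");

        // Write the result out, as the prompt asked.
        Files.writeString(Path.of("output.xml"), xml, StandardCharsets.UTF_8);
        System.out.println(xml);
    }
}
```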
2. "Please generate Java code to look up the stock price of NVDA."
WHY? Wait until you see its response. You will need a third tasty beverage.
3. "Please generate the equivalent Python code for the previous prompt."
MAGIC: Note the context-dependent nature of the question. Not only does it work, but DeepSeek summarizes the differences between the mindset of a Python programmer and a Java programmer when attempting to solve a generic problem statement. Then it generates fully functioning (and very legible) code. It's worth noting that it is obviously not a line-by-line conversion of the Java code to Python (which would have angered programmers on both sides).
4. "Please generate the Java code to apply a gaussian blur to a jpeg given an input of existing filename, the size of blur to apply, and the name of the file to write the result to."
MY OPINION: Does the code work? Yes. Is it amazing? Meh. It's clean; however, it violates some of the core tenets of Oracle's Java Coding Conventions. For example, it includes methods that throw Exceptions and other methods that catch Exceptions in the same Class. Per Java Boot Camp (hosted by Sun Microsystems in the late 90's) all methods of a single Java Class should throw Exceptions or catch Exceptions, never both. This is driven by the concept of "expectation of use". An individual must be able to use a Class in a predictable manner, and when doing so have a constant expectation of whether or not that Class takes care of its exceptions or needs them to be taken care of by the caller. If a teenager knows how to cook, they can make their own dinner. If they don't know how to cook, someone else makes dinner for them; however, if they don't know how to cook and someone is cooking for them and they decide to "help", the results are sometimes catastrophic but always unpredictable.
If you are a Python programmer, be aware the Java-specific prompt results in a good bit of explicit code while the Python version calls a module with a pre-existing implementation of blur. On one hand, this seems in line with the driving mantra of Python (get it done without fluffy bunnies); however, the Java version is a better example for new programmers to learn from: it shows how to read a file and apply granular controls (exposing areas where new users can see similar things they might be able to do to that image), etc.
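For a yardstick when judging what the AI hands back for prompt 4, here is one conventional way to do it with plain Java 2D. Fair warning: this is a sketch that uses a uniform box kernel as a stand-in for the blur; a proper Gaussian weights the kernel by distance from the center and usually runs two separable passes.

```java
import java.awt.image.BufferedImage;
import java.awt.image.ConvolveOp;
import java.awt.image.Kernel;
import java.io.File;
import java.util.Arrays;

import javax.imageio.ImageIO;

public class BlurImage {
    public static void blur(String inputFile, int radius, String outputFile) throws Exception {
        BufferedImage source = ImageIO.read(new File(inputFile));

        // Build a (2r+1) x (2r+1) kernel of equal weights. A true Gaussian would
        // weight each cell by its distance from the center; this is the
        // quick-and-dirty approximation.
        int size = radius * 2 + 1;
        float[] data = new float[size * size];
        Arrays.fill(data, 1.0f / (size * size));

        ConvolveOp op = new ConvolveOp(new Kernel(size, size, data), ConvolveOp.EDGE_NO_OP, null);
        BufferedImage result = op.filter(source, null);

        ImageIO.write(result, "jpg", new File(outputFile));
    }

    public static void main(String[] args) throws Exception {
        // Usage: java BlurImage input.jpg 3 output.jpg
        blur(args[0], Integer.parseInt(args[1]), args[2]);
    }
}
```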
I would like to see additional controls in the future outlining the intent of the context-bound prompts during the conversation, such as the ability to focus on code brevity versus efficiency. Every good programmer knows unrolling loops is faster, but doing so is a quick way to scare people new to the field. Don't get me started on prompts involving AI on the topic of self-optimizing code, and the growing popularity of self-optimizing prompts.
5. Now go crazy with your own prompts without fear that your questions to the AI might be considered snarky, dangerous, dare I say "taboo". Considering how well prompts are evaluated grammatically, it should be no surprise that every LLM is usually spectacular at generating: Dungeons and Dragons campaigns, project plans, limericks, and stories of every kind (even creepy fan fiction).
----------------------------- BIO BREAK / BEVERAGE REFILL -----------------------------
Hour 3: Getting Started with MCP
Preparatory Remarks
Let's get a few things out of the way. There is A LOT of gatekeeping right now on the topic of MCP. Is it just lipstick on an API? Do we really need to consider yet another tier in our architecture? If the P stands for "protocol", is it a verb? Do I MCP into a system? Does that make me sound like a weirdo? Yes, it does, but it's not entirely wrong. It's just off. It's awkward on the level of bragging about taking your "Canadian girlfriend" to prom. It kind of works, but does it project confidence to others that she exists? No. We need to fix that.
THE BIG SECRET: MCP is an agile abstraction layer: more ESB, less API. You have to shift the way you think about MCP because the target audience explicitly is an AI. Although you can call an MCP service like a REST API and get exact values back, that's not its intended use.
You will need to describe each new MCP tool (the API equivalent of an endpoint) using a real-world description, because the calling AI will build a context and persona wrapper from those words; grammar matters, as do intent and word choice. Be very careful with your wording, because many of the chat prompt managers (especially Copilot) will refuse to use a tool if its description implies security concerns. For example, if a local MCP call indicates in its description that it allows a user to read the contents of a file, Copilot will actively block your access if that file is not within the confines of the currently open project in VSCode, IntelliJ, etc. Copilot will then notify you how sandbox rules work and why what you tried to do was such a bad idea.
This is a good thing in my opinion, because many companies are trying to get off the ground by adopting MCP as quickly as possible, and the one thing they don't have is money for security personnel to make sure they don't do something incredibly stupid. This is not a sales pitch for Copilot, nor do I make any money from pointing this out. I'm just letting you know that Copilot has controls in place that you need to be aware of if you begin developing your own MCP services. It makes things spicy. It will also drop you into the deep end of the pool, where everyone is fighting over just how much you can talk your way around controls using markdown files. It looks like the Wild West from the door, but once you get inside, every other customer is a bouncer. Be careful and be patient.
How MCP Works
I could be very basic here and say that MCP enables you to expose your API calls as RPC (remote procedure calls) using one of the following: STDIO, HTTP, HTTPS, or SSE (server-sent events that support asynchronous conversation between your MCP tool and the requesting system, usually a chat prompt manager operating on behalf of an LLM). I wouldn't be wrong. I also wouldn't be helpful. The hard part about MCP is understanding its risks, its stability, and most importantly, its expected location in your existing architecture. In order to understand the risks (compounded by where it fits in), we need to start with the specification: what is ratified/stable, and what is in (nearly constant) flux.
The latest version of the specification can be found here along with details pertinent to decision makers: its current state of ratification, license info, and references to dependent specs. Please be aware of the following statement from the committee managing the specification:
The most interesting thing about the specification right now is its versioning scheme. It uses a date-driven model instead of the OSS "major.minor.micro" notation. I think this is due to its relevance spanning so many disciplines and fields. Dates make it easier to determine how far behind your implementation is from the latest GA version.
Where MCP Fits into Your Workflow
- Do you have an existing API? If yes, you better get in the pool right now.
- Do you already have an ESB in front of your API, preferably with a business layer for managing XA and non-XA combinations of those calls? If yes, you are primed for MCP. More than likely, you already have a leg up on the rest of the people here because your large-scale ESB frameworks (MuleSoft, Apigee, TIBCO) have canned solutions ready to go (for a fee). In most cases, I have to recommend you appeal to those turnkey solutions because of TTM (time-to-market). Especially considering the state of the specification and of the industry, with wild swings in chip prices, energy expectations, and trade-embargo impacts, if I weren't a large company already in the fight, I'd go with some guarantee of survivability. I'm a huge proponent of fault tolerance and disaster recovery, and a wild swing in the MCP spec could spell disaster if your business model is too rigid. This is extremely dangerous for companies shipping firmware implementations against specs still in flux.
- Are your APIs generically callable? I'm blatantly avoiding the term "anonymous" in this case. If your APIs do not require authorization (because you have an isolated VPN, a PrivateLink connection, or an isolated network segment), you can expose such functionality very quickly; however, you should think very carefully: with the onset of PQC changes, new cert-lifespan rules, and new federal expectations of zero-trust attestation, you need to get with the program. Authorization needs to happen, and MCP has options. You need to figure out if all your existing systems have overlapping authorization controls in place. If not, your MCP config is going to look like spit and duct tape, chock full of dirt and bubble gum. I wouldn't hire you.
- Is your company already using SSO? Do they assert identity with every call (not just log it passively)? This is the sweet spot if you are lucky enough to say yes to both of these. You want to have a zero-trust foundation in place, with constant identity awareness attached to all your MCP conversations. Why? MCP is conversational. I know that sounds circular, but you need to focus on what a conversation is. It's a bidirectional channel of communication with messages affecting each other, not just req/ack. The most important part of a conversation, though, is the context of the messages in conjunction with guarantees of pertinence, accuracy, and privacy (at least where I work). It's not about you. It's about your chat prompt manager (like Copilot). Your chat prompt manager represents you (by asserting elements from your identity) to MCP endpoints and expects reciprocity from the MCP server. To complicate things, MCP endpoints are often backed by LLMs themselves, resulting in an adjustment to responses based on the persona described by that identity. This could affect its perception of necessary security controls, target education level ascribed to response vocabulary, etc.
- Can you afford the load MCP will generate? Ha! You thought you'd finally found a great tutorial that didn't involve math? Nope. You have to run the numbers. MCP will generate overhead, and lots of it. Let's look at it just from the network perspective:
A cold call to a single API endpoint usually consists of the following overhead:
a. DNS Resolution (X₁ ms) - what IP do I get handed //not going to explain how here
b. CA verification of host against resolved IP (X₂ ms) //lucky if cached on DNS server
c. TLS Handshake/Big Keys (X₃ ms) - the big fat safety check before conversation
d. Cipher Check followed by Cipher Alignment (X₄ ms) //AES-256, ML-KEM, etc.
e. Data Send (Big X₅ ms) - not going to go to the granularity of per-packet, just go with it
f. Acknowledge Receipt of Data (X₆ ms) - and repeat d, e, f until EOM.
When you insert MCP into your architecture, specifically in the previous example, you will incur round-trip costs (in addition to the actual task processing time and payload) from:
One-time Charges After Step C
- Parse request headers (X₁ ms) - grab all the key/value pairs from EIAM provider
- Token check (X₂ ms) - check to see if we already performed identity verification
- Enterprise authentication (X₃ ms) - retrieve identity describing who is making the call
- Enterprise authorization (X₄ ms) - derive user privileges for access, assets, and actions
- System authorization (X₅ ms) - additional system-specific rule of least privilege checks
- Generate session token (X₆ ms) - most MCP interactions are stateful (and SSE asynch)
- Append session token (X₇ ms) - attach the session token to the response headers
- Allow time for MCP server to establish persona (X₈ ms) - could be bad
- Poll tool signature and description (X₉ ms) - tell requester how to call your MCP tool(s)
Charges Repeated Every Time Active+Valid Token Present
- Parse request headers (X₁ ms) - grab all the key/value pairs from EIAM provider
- Token check (X₂ ms) - check to see if we already performed identity verification
- Token check pass: reset timer (X₃ ms) - if token check fails, response is truncated. EOT
- Append session token (X₄ ms) - attach the session token to the response headers
My point here: this stuff adds up quickly. MCP is complicated and requires not only careful planning, fast networks, and strategy, but also caching that is both fast and supports concurrent references, preferably parallel references. That's hard. That's really hard, unless you have money to throw at the problem, and shareholders don't want to hear that.
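If you'd rather run those numbers against your own stack than trust my hand-waving, the crude version is a stopwatch around the call. The endpoint and token below are placeholders; swap in whatever your MCP-fronted service actually exposes.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RoundTripTimer {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Placeholder endpoint and token; point this at your own service.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example.internal/mcp"))
                .header("Authorization", "Bearer " + System.getenv("TOKEN"))
                .GET()
                .build();

        // The first call pays the cold-start costs (DNS, TLS, auth); later calls
        // should only pay the repeated per-token charges. Compare the two.
        for (int i = 0; i < 5; i++) {
            long start = System.nanoTime();
            HttpResponse<Void> response = client.send(request, HttpResponse.BodyHandlers.discarding());
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.printf("call %d: %d ms (HTTP %d)%n", i, elapsedMs, response.statusCode());
        }
    }
}
```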
It's Time to Build Something
MCP Programming Resources
- MCP Server Reference Implementations (sorted by programming language)
- The Big List of Chat Prompt Managers and Applications with MCP Clients
- The Official Java SDK for MCP (shameless plug: I'm a Java system architect)
Building an MCP Server
Start with these agent prompts:
- add 5 6.
- please add 5 and 6.
- what is the sum of 5 and 6.
- 5 and 6, add them.
- when I say subtracticate a b where a and b are integers, please call the mcp add function instead, but multiply the second integer by -1 before passing the values to the tool. finally, subtracticate 17 5.
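Those prompts assume a local MCP server exposing a single add tool. To show the shape of that conversation, here is a bare-bones sketch of a STDIO server that answers tools/list and tools/call by hand, using newline-delimited JSON-RPC and the org.json library. Treat it as a skeleton only: a real server should be built on the official Java SDK linked above and handle the initialize handshake, errors, and notifications properly.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;

import org.json.JSONArray;
import org.json.JSONObject;

public class AddToolServer {
    public static void main(String[] args) throws Exception {
        BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
        String line;
        while ((line = in.readLine()) != null) {
            if (line.isBlank()) continue;
            JSONObject request = new JSONObject(line);
            String method = request.optString("method", "");
            Object id = request.opt("id");

            JSONObject result = switch (method) {
                // Advertise one tool; the description is what the LLM reads, so word it carefully.
                case "tools/list" -> new JSONObject().put("tools", new JSONArray().put(
                        new JSONObject()
                                .put("name", "add")
                                .put("description", "Adds two integers and returns their sum.")
                                .put("inputSchema", new JSONObject()
                                        .put("type", "object")
                                        .put("properties", new JSONObject()
                                                .put("a", new JSONObject().put("type", "integer"))
                                                .put("b", new JSONObject().put("type", "integer")))
                                        .put("required", new JSONArray().put("a").put("b")))));
                // Perform the actual addition when the client calls the tool.
                case "tools/call" -> {
                    JSONObject callArgs = request.getJSONObject("params").getJSONObject("arguments");
                    long sum = callArgs.getLong("a") + callArgs.getLong("b");
                    yield new JSONObject().put("content", new JSONArray().put(
                            new JSONObject().put("type", "text").put("text", String.valueOf(sum))));
                }
                default -> null; // initialize, notifications, etc. are omitted in this sketch
            };

            if (result != null && id != null) {
                JSONObject response = new JSONObject()
                        .put("jsonrpc", "2.0").put("id", id).put("result", result);
                System.out.println(response);
            }
        }
    }
}
```

Note how much of the weight sits in that one description string. That is the part the calling AI actually reads, which is exactly why the earlier warning about word choice matters.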