Prompting, Productivity and Context

Finish the following sentence: “Blogging is so …” and yet here I am.

Prompting

I’ve been trying to engage people close to me about their AI experiences and uses, whether professional or personal.  I find myself reminding them that it helps to be specific when prompting LLMs for information or content.  Today, while reading The Neuron’s helpful Prompt Tips for August 2025, I came across this gem:

“You are my expert assistant with clear reasoning. For every response, include:

1) A direct, actionable answer,

2) A short breakdown of why/why not,

3) 2–3 alternative approaches (with when to use each),

4) One next step I can take right now.”

“Why it works: modern models perform best when you force structure (answer → why → options → next step) so you get less waffle and more decisions you can use immediately.”

Very good advice for exacting prompts.  Although I abhor the anthropomorphizing of LLMs, I find it helpful to think of a prompt as instructions you would give an actual intern.  The less mind reading involved, the better; be specific about what you ask for.
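If you want to reuse a template like that programmatically instead of pasting it in by hand, here is a minimal sketch assuming the official OpenAI Python client; the model name and the sample question are placeholders of my own, not part of The Neuron’s tip.

```python
# Minimal sketch: the structured template above as a reusable system prompt.
# Assumes the official OpenAI Python client and an OPENAI_API_KEY in the
# environment; the model name and sample question are placeholders.
from openai import OpenAI

SYSTEM_PROMPT = (
    "You are my expert assistant with clear reasoning. For every response, "
    "include: 1) A direct, actionable answer, 2) A short breakdown of "
    "why/why not, 3) 2-3 alternative approaches (with when to use each), "
    "4) One next step I can take right now."
)

client = OpenAI()

def ask(question: str) -> str:
    # Every call gets the same structure: answer -> why -> options -> next step.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use whatever model you prefer
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask("Should our team pilot an internal help-desk chatbot?"))
```

The point is that the structure rides along on every question for free, so you stop re-typing it and the answers come back in a shape you can act on.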

Productivity

There’s been a lot of press around the MIT study and its findings (which Bing kindly summarized for me).

I’m not surprised, or alarmed, by any of this.  Integrating and adapting new technology is difficult.  In time it will get (much) better.

Context

All the podcasters I listen to keep talking about the importance of context engineering as the next evolutionary step beyond prompt engineering.  Think of your organization’s data and how supplying it as context might change AI interaction and usage.
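To make that concrete, here is a toy sketch of what context engineering adds over a bare prompt: the model’s input gets assembled from your organization’s data before the question is ever asked.  The documents and the keyword-match “retrieval” below are stand-ins I invented for illustration; a real system would pull from a vector store, wiki, ticketing system, and so on.

```python
# Toy sketch of context engineering: assemble organizational data into the
# prompt instead of sending the bare question. The documents and the naive
# keyword retrieval below are invented stand-ins for illustration only.

DOCS = {
    "pto policy": "Employees accrue 1.5 PTO days per month, capped at 30 days.",
    "expense policy": "Expenses over $500 require VP approval before purchase.",
}

def retrieve(question: str) -> list[str]:
    # Stand-in retrieval: naive keyword match; a real system would query a
    # vector store or search index.
    q = question.lower()
    return [text for key, text in DOCS.items()
            if any(word in q for word in key.split())]

def build_prompt(question: str) -> str:
    # The "engineering": ground the model in company context, not just the ask.
    context = "\n".join(f"- {snippet}" for snippet in retrieve(question))
    return (
        "Answer using only the company context below.\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

print(build_prompt("How many PTO days do I accrue?"))
```

Same model, same question; the difference is entirely in what context rides along with it.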


Always clever Google

Tuckahoe! I wanted information on how RAG and Live Internet Search work with LLMs.  I chose Gemini 2.5 Pro Reasoning, Math & Code for the task.  The final example, included as a follow-up to expand on real-time search, included my physical location, which is freely...

read more


Comparing LLM responses

LLM Responses compared I thought a nice exercise would be to take a relatively simple prompt and assess how the closed and open models currently available compare.  This is the prompt that I used: <PROMPT> I’m taking my daughter to an oral surgeon today to...

read more

Perplexed-ity?

Perplexed-ity? I came across this blog post from Cloudflare: Perplexity is using stealth, undeclared crawlers to evade website no-crawl directives I've read and heard a lot of positive things about Perplexity's Comet browser.  I want to like them and I want to cheer...

read more

AI Action plan, and stuff

AI Action plan Here ya go folks - this is the current administration's AI Action Plan:  https://www.ai.gov/action-plan Here are some words from the current administration about preventing "woke AI" in the federal government...

read more

Subscribed to One Useful Thing

One Useful Thing is the name of Ethan Mollick's Substack newsletter.  Ethan is the Co-Director of the Wharton Generative AI Labs, which has lots of good information including a prompt library: https://gail.wharton.upenn.edu/prompt-library/ Check...

read more

“Accumulation of Cognitive Debt”

There's an article published at MIT that studied "Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task": https://arxiv.org/abs/2506.08872 This blog post was pitched as a rebuttal of sorts to the MIT study - definitely...

read more