Prompting, Productivity and Context
Finish the following sentence: “Blogging is so …” and yet here I am.
Prompting
I’ve been trying to engage people close to me about their AI experiences and uses, whether professional or personal. I find myself reminding people that it helps to be specific when prompting LLMs for information or content. Today, while reading The Neuron’s helpful Prompt Tips for August 2025, I came across this gem:
“You are my expert assistant with clear reasoning. For every response, include: 1) A direct, actionable answer, 2) A short breakdown of why/why not, 3) 2–3 alternative approaches (with when to use each), 4) One next step I can take right now.”

“Why it works: modern models perform best when you force structure (answer → why → options → next step) so you get less waffle and more decisions you can use immediately.”
This is very good advice for exacting prompts. Although I abhor the anthropomorphizing of LLMs, I find it helpful to treat a prompt as if you were giving instructions to an actual intern. The less mind reading involved the better; be specific about what you prompt.
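If you use that structured prompt often, you can template it rather than retype it. Here's a minimal sketch in Python; the helper function and the example question are my own illustration, not something from The Neuron's tip:

```python
# The Neuron's structured-prompt tip as a reusable prefix.
STRUCTURE = (
    "You are my expert assistant with clear reasoning. For every response, include: "
    "1) A direct, actionable answer, "
    "2) A short breakdown of why/why not, "
    "3) 2-3 alternative approaches (with when to use each), "
    "4) One next step I can take right now."
)

def structured_prompt(question: str) -> str:
    """Prefix any question with the structured instruction, so the model is
    forced into answer -> why -> options -> next step."""
    return f"{STRUCTURE}\n\nQuestion: {question.strip()}"

# Example usage: paste the result into whichever LLM you're trying today.
print(structured_prompt("Should I move my blog to a static site generator?"))
```

The point is the same as with the hypothetical intern: the structure travels with every question, so you don't have to remember to ask for alternatives and a next step each time.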
Productivity
There’s been a lot of press around the MIT study that found the following:
(thanks Bing summary!)
I’m not surprised, or alarmed, by any of this. Integrating and adapting new technology is difficult. In time it will get (much) better.
Context
All the podcasters I listen to keep talking about the importance of context engineering as the next evolutionary step beyond prompt engineering. Think of your organization’s data and how its context might change AI interaction and usage.
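One concrete way to picture the difference: prompt engineering tweaks the wording of the question, while context engineering decides what organizational data gets packaged alongside it. A hypothetical sketch (the function, roles, and refund-policy example are all my invention, not from any particular vendor's API):

```python
# Context engineering sketch: assemble retrieved company documents plus the
# user's question into a chat-style message list.
def build_messages(question: str, org_docs: list[str]) -> list[dict]:
    """Combine organizational context and a question into one request payload."""
    context = "\n\n".join(f"[Doc {i + 1}] {doc}" for i, doc in enumerate(org_docs))
    return [
        {"role": "system", "content": "Answer using only the provided company context."},
        {"role": "system", "content": f"Context:\n{context}"},
        {"role": "user", "content": question},
    ]

# Example: the model now answers from *your* policy, not its training data.
msgs = build_messages(
    "What is our refund policy?",
    ["Refunds are issued within 30 days of purchase with a receipt."],
)
```

The prompt wording barely matters here; what changes the answer is which documents you chose to include.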
