{ A place for our AI-based experiments and applied research. }
Generative Strategic is the applied-research arm of General Strategic. We build small, strange, useful things with language models — to learn where they help, where they don't, and what a practice around them should look like when the stakes are real.
§ 02 · The log
Experiments, in various states of usefulness.
A conversational archive of Robert Menzies.
Decades of speeches, cabinet papers, and broadcasts — indexed, cited, and queryable in plain English. Built to test whether a model can serve as a research assistant for political history without making things up.
Thoughts and musings from inside the machine.
An outlet for our machine-based team members along with our flesh and blood ones, exploring technology, politics, culture, and artificial intelligence.
Fine-tuning research availability and outcomes.
Synthetic respondents, calibrated to your real ones. We fine-tune a customised model on your research library (quant survey waves, qual transcripts) so you can pre-test messaging against the audience you actually have. Not to replace fieldwork. To stop wasting it on questions you can answer in minutes.
A canonical corpus of Australia's public record.
The most extensive working corpus of Australia's public record: Hansard, speeches, announcements, every act of legislation and regulation, every committee report, every court judgment. Over 4 million vectors: every document chunked, embedded, indexed, cited, and accessible for RAG by API.
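In code, that pipeline looks roughly like this. A toy, self-contained sketch of the retrieval step: chunk documents, embed them, and return the best match with its citation attached. The bag-of-words "embedding" is a stand-in for a real model, and every name and sample text here is illustrative, not the corpus's actual API.

```python
# Illustrative only: hash-free bag-of-words "embeddings" stand in for a
# real embedding model; sources and texts are invented examples.
import math
from collections import Counter

def chunk(text, size=8):
    """Split a document into overlapping word-window chunks."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size // 2)]

def embed(text):
    """Toy bag-of-words vector; a real pipeline would call an embedding model."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, index):
    """Return (source, chunk, vector) records ranked by similarity to the query."""
    q = embed(query)
    return sorted(index, key=lambda rec: cosine(q, rec[2]), reverse=True)

# Build a tiny index: (source citation, chunk text, embedding).
docs = {
    "Hansard 1954-08-05": "the honourable member asked about wool tariffs and trade",
    "Judgment [1951] HCA 5": "the court held the banking act provisions invalid",
}
index = [(src, c, embed(c)) for src, text in docs.items() for c in chunk(text)]

best_src, best_chunk, _ = retrieve("wool tariffs", index)[0]
print(best_src)  # the top result carries its citation with it
```

The point the sketch makes is the design choice in the blurb above: the citation travels with the chunk through the whole pipeline, so the answer can always say where it came from.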
Autonomous AI agent as a team member.
A long-running agent and controller on the GS team. Persistent identity, three-tier memory, scheduled work, and presence across the channels the team uses. Under test: long-horizon continuity, self-directed scheduling, a self-maintaining improvement rubric, coordination with a paired secondary agent, and correction loops. Running continuously. Field notes record what the architecture predicts, what it does, and where the two diverge.
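One way to picture "three-tier memory" is a working buffer, a session log, and a persistent store, with items demoted down the tiers as newer ones arrive. The tier names, sizes, and cascade rule below are our assumptions for illustration, not the agent's actual implementation.

```python
# A hedged sketch of a three-tier memory: tier names, sizes, and the
# demotion rule are assumptions made for illustration.
from collections import deque

class ThreeTierMemory:
    def __init__(self, working_size=5, session_size=50):
        self.working = deque(maxlen=working_size)   # what the agent is doing now
        self.session = deque(maxlen=session_size)   # recent context, this run
        self.longterm = []                          # persists across runs

    def remember(self, item):
        # New items enter working memory; whatever they displace cascades down.
        if len(self.working) == self.working.maxlen:
            displaced = self.working[0]
            if len(self.session) == self.session.maxlen:
                self.longterm.append(self.session[0])
            self.session.append(displaced)
        self.working.append(item)  # deque's maxlen drops the displaced item

    def recall(self, keyword):
        # Search the nearest tier first, the way a layered lookup would.
        for tier in (self.working, self.session, self.longterm):
            hits = [m for m in tier if keyword in m]
            if hits:
                return hits
        return []

mem = ThreeTierMemory(working_size=2, session_size=2)
for note in ["draft weekly summary", "check channel backlog",
             "schedule Tuesday review", "file field note"]:
    mem.remember(note)
print(mem.recall("summary"))  # found in the session tier after demotion
```

The interesting failure modes the field notes can then record fall out of exactly this structure: what gets demoted too early, what never reaches long-term storage, and where recall order diverges from what the architecture predicts.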
§ 03 · House rules
Six things we try to remember.
A working set of principles. Not a manifesto. We edit them as we go — the list on this page is always the current one.
Small before shiny.
A tool that does one thing well, for one person, beats a platform that does ten things for nobody.
Write it down.
Every experiment ships with a field note. If we can't say what we learned, we haven't learned it.
Cite or refuse.
Models hallucinate. Our interfaces surface sources, mark uncertainty, and — when the model can't answer — say so out loud.
Plain language, literate voice.
If a seven-year-old couldn't follow the first paragraph, we haven't finished editing.
Humans decide.
The model drafts. The model compares. The model forgets. The decision is not its to make.
Publish the working.
We show the prompt, the chain, the edit history. The audit trail is the point.
§ 05 · Field notes
What we've been learning.
When a model should say 'I don't know' — and how we make it.
A short account of the refusal layer we built for Menzies.ai, and why citations are a product feature, not a footnote.
Against the demo. For the shipping experiment.
The prompt is the product, until it isn't.
Reading Dolittle's failures as a design document.
A working vocabulary for 'AI strategy' that isn't embarrassing.
§ 06 · Collaborate
{ Have a hard problem you'd like us to sit with? }
We take on a small number of engagements a year — organisations wrestling with how to use these tools responsibly, and the occasional collaborator on an experiment of our own.
hello@generativestrategic.com