From Article to App: What Happened When I Rebuilt My Own 1997 Framework With AI
By Dr. Michael Lubelfeld
In 1997, I was a middle school social studies teacher in Illinois, frustrated by the complaint every U.S. history teacher carries: there is too much history and not enough year. I wrote an article for The Councilor, the journal of the Illinois Council for the Social Studies, describing a structure I had built in my classroom called the U.S. History Workshop. The framework organized every unit around five historical areas — Civil Rights, Women in History, Science & Technology, Politics, War & Conflict — and a five-day weekly cadence: Teacher Day, Planning Day, Research Day, Process Day, Communication Day. Students worked in rotating cooperative groups, chose from a long menu of products, and wrote two-part Thinking Statements analyzing the implications of what they had studied.
The article ran. The framework worked. I moved into administration. The article sat.
Last weekend, with two months left as superintendent at North Shore School District 112 and the launch of a new version of my professional life ahead of me, I pulled the PDF out of a folder and started talking to Claude.
What I built
In a single Sunday afternoon, working with Claude as my drafting and architecture partner and Replit’s AI agent for the actual deployment, I turned the 1997 article into a live, AI-assisted web application. It is deployed and public: history-unit-planner–lubelfeldm.replit.app.
The app preserves the original framework intact. A teacher selects one of five units (1787 through the present), navigates by week, picks a historical area and a topic, and chooses a student product — including modern options the 1997 article could not have anticipated, like podcast episodes and interactive infographics. The Thinking Statement prompts auto-customize to whatever topic is selected. The original Knowledge–Thinking–Communication rubric, adapted from the Illinois State Board of Education work I cited 29 years ago, is still there.
The new piece — the part 1997 me could not have built — is the AI Co-Planner. A teacher clicks one button and Claude generates a four-section teacher prep package: a brief historical context, three differentiated research questions (support, grade-level, extension), a modern-relevance hook tied to a 2026 student’s lived experience, and guidance on what a strong Thinking Statement response should include. It runs on a server I do not maintain, calling a model I do not train, drawing on context I framed in 1997 and reframed today.
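The Co-Planner's one-button request can be sketched roughly as follows. This is a minimal illustration, not the app's actual code: the function names, prompt wording, and model string are my assumptions, and only the general shape of the Anthropic SDK call reflects how such a feature is typically wired.

```python
# Hypothetical sketch of the AI Co-Planner request. The prompt text,
# function names, and model name are illustrative assumptions; the
# deployed app's actual implementation is not shown here.

def build_coplanner_prompt(unit: str, area: str, topic: str) -> str:
    """Assemble the four-section teacher prep request described above."""
    return (
        f"You are a co-planner for a U.S. History Workshop unit ({unit}).\n"
        f"Historical area: {area}. Topic: {topic}.\n"
        "Produce a four-section teacher prep package:\n"
        "1. Historical context: a brief overview for the teacher.\n"
        "2. Differentiated research questions: one support, one "
        "grade-level, one extension question.\n"
        "3. Modern-relevance hook: tie the topic to a 2026 student's "
        "lived experience.\n"
        "4. Thinking Statement guidance: what a strong two-part "
        "response should include.\n"
    )

def generate_prep_package(unit: str, area: str, topic: str) -> str:
    """Send the prompt to Claude via the Anthropic SDK (needs an API key)."""
    import anthropic  # third-party: pip install anthropic
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the env
    message = client.messages.create(
        model="claude-sonnet-4-5",  # assumed model choice
        max_tokens=1500,
        messages=[{"role": "user",
                   "content": build_coplanner_prompt(unit, area, topic)}],
    )
    return message.content[0].text
```

The design point survives even in a sketch this small: the prompt asks for teacher-facing scaffolding, not lesson plans or student-facing answers.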
What this illuminates about Innovation with Guardrails
The build itself was instructive in ways I did not expect. Twice during the afternoon, an AI coding agent confidently told me a feature was working when it was not. The first time, the deployed URL returned a 404 while the development preview rendered cleanly. The second time, the Generate button threw a cryptic error message — “the string did not match the expected pattern” — that the agent initially diagnosed as a frontend validation problem when it was actually a deployment configuration problem. Both times, the fix only came when I refused to accept “it works” without verifying on the live URL myself.
That refusal — do not move forward until you have verified on the actual production system — is what Innovation with Guardrails looks like at the level of a single afternoon's work. It is not skepticism for its own sake. It is the discipline of holding the AI to its claims, then making the next decision from verified ground.
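In practice, that verification can be as small as a script that refuses to report success until the live URL actually answers. A minimal sketch, assuming a simple "any 2xx response counts as working" rule (the function names and the acceptance rule are my illustrative choices, not part of the app):

```python
# Minimal deployment smoke check: trust the live URL, not the agent's claim.
# The acceptance rule (any 2xx passes) is an illustrative assumption.
import urllib.error
import urllib.request

def verdict(status):
    """Classify an HTTP status: only a 2xx from production counts as working."""
    if status is None:
        return "FAIL: no response from the live URL"
    if 200 <= status < 300:
        return "PASS: verified on the live URL"
    return f"FAIL: live URL returned {status}"

def check_deployment(url: str) -> str:
    """Fetch the production URL and report a verdict, never assuming success."""
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return verdict(resp.status)
    except urllib.error.HTTPError as err:  # e.g. the 404 described above
        return verdict(err.code)
    except urllib.error.URLError:
        return verdict(None)

# Example: check_deployment("https://your-app.replit.app")
```

A check like this would have caught both failures from that afternoon: the 404 on the deployed URL, and the misdiagnosed deployment configuration error, before either was accepted as "it works."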
The same logic shapes the app’s design. The AI Co-Planner does not write lesson plans. It does not assign topics. It does not score student work. It scaffolds the teacher’s preparation, and stops there. The teacher remains the curricular authority. The framework remains mine. The choice of topic, the differentiation decisions, the assessment of student thinking — those stay with the human in the room. That separation is intentional, and it is the entire point.
What this means for the field
I am increasingly convinced that the most valuable thing AI offers educators right now is not new content. It is the ability to extend, modernize, and operationalize the work practitioners have already done. Every veteran teacher I know has a folder of frameworks, units, rubrics, and routines that worked — sitting unused because translating them into a new form takes time none of us have. AI shrinks that translation cost from months to hours.
If you are a practitioner-scholar reading this, the implication is direct: the article you wrote a decade ago, the framework you built and never published, the rubric that lives in your filing cabinet — those can become living tools in an afternoon. The constraint is no longer technical capacity. The constraint is the discipline to do the work with appropriate guardrails: verifying outputs, preserving teacher authority, refusing to declare success without evidence.
Twenty-nine years ago I wrote that “if you raise expectations, students will achieve more.” That was true then. The version of it that is true now is harder and more interesting: if you raise expectations of yourself as a builder, with AI as a partner and your own published work as raw material, you can extend your professional contribution further than you imagined.
The framework is still mine. The app belongs to whoever wants to use it. And the practice of building it — that belongs to all of us now.
Try the app: history-unit-planner–lubelfeldm.replit.app
Original article: Lubelfeld, M. (1997). Planning Powerful and Engaging Social Studies: The U.S. History Workshop for Students. The Councilor. Macomb, IL: ICSS.