One of the most important things we do at Steward, on behalf of and in partnership with families, is to pay attention, sift, prioritize, and plan for changes in education and for the world after school. To that end, I’d like to share some of my thoughts on artificial intelligence (AI) at The Steward School.
In addition to conversations with my peer group of local Heads of School, I’m part of two executive roundtables: a regional group and one drawn more broadly from Heads of School across the U.S. Since the start of school, every group and every conversational thread has included one topic: AI. Likewise, at our recent Admissions open house, it was the number one topic of discussion.
The power of AI to transform us is profound, and its uses are multiplying faster than those of any other technology I can recall. There seems to be nothing it can’t do. I frequently use it to draft guidelines, produce summaries, sift through data, ideate, and get feedback on my early-stage work. I’m excited about its use in many of our business operations as well. It is quickly changing our work: what we do, who does it, and how it’s done. At the same time, I’m cautious, often even trepidatious, as it is incorporated into more core elements of school operations, classrooms, and learning.
Concerns
Although preserving academic honesty and ensuring privacy are big concerns, my top concerns have more to do with minimizing the risks to the emotional, intellectual, and social growth of our children.
1. I have been an educator since before the start of the internet and through the launch of the iPhone. Both were instantaneous game changers, and, at the time, neither was something we could hold back from school. But with hindsight, despite their promise, each has arguably turned out to be more detrimental than beneficial for kids’ learning and development. (As a parent, I would not have disadvantaged my children by depriving them of either, but I tried to help them use these tools judiciously.) With a chance now to learn from history, we have a deep ethical obligation to make sure kids learn to use AI well. No matter what, they will be using it outside of school, and universities and workplaces will require facility with it. But because an AI bot follows its user’s lead and seems eager to please, it can advise and support in response to bad prompts and lead to unfortunate decisions. One example is students over-relying on AI for help with mental health concerns. Sadly, there are already too many incidents of AI conversations going too far, generating bad advice and poor solutions, some of them life-altering or even life-ending, without adult awareness or context. The second possibility that keeps me up at night is the very real threat of cognitive atrophy.
2. I learned the term cognitive atrophy from an executive at Capital One who works on an AI team charged with developing strategies to combat its threat to the growth and innovation required for short- and long-term success. Education operates in the same space, as it prepares young people for an imagined future. In the process of building knowledge and skills, our students learn to manage dissonance, engage deeply over time, ask and live with important ethical and moral questions, be wrong and make mistakes, and repeatedly experiment with what does and does not have value for themselves and for the community. Without these steps, now so easily and quickly completed by AI, and without this gradually built cognitive strength, students cannot develop the essential skills they will need for success in college or in the new AI-powered world, where entry-level, task-driven work is already being done effectively and efficiently by AI bots. Today and tomorrow, our students must be ready to enter and participate in the workforce at more strategic levels. This requires creativity, critical thinking, judgment, imagination, empathy, and strong technological skills.
Teaching and learning
AI is too powerful a learning tool not to employ it. However, we do have to make sure we use it as a tool and do not become subservient to it in the way that we are now unintentionally influenced by our browsing and online shopping algorithms.
Very early on, two years ago, we developed a set of ethical guidelines for AI use, including our choice of platform. As a Google school, we ask faculty and staff to use Gemini, as it helps protect the privacy of our student users and integrates better than other tools with our enterprise software. Even so, given the pace of change, these guidelines will need to be revisited at least annually for the foreseeable future.
In terms of classroom use, I often describe the need for adults to put on their own oxygen masks first so that we can help students with theirs. Teachers need to build skills, understanding, and capacity with AI so they can stay ahead of the kids and be helpful to them: helping them discriminate between good and bad uses, between true and questionable results, and between knowledge and misinformation, and helping them judge whether and when they can consider an AI-assisted piece of work their own without plagiarizing. To this end, teachers have been asked to develop at least one AI-centered lesson each semester, either by integrating AI into their lesson design or by directing students to use it deliberately in a project or task.
Last year, we developed a pilot program named SAIL (Steward AI Leaders), consisting of a group of faculty members (SAILors) who are leading at the frontier, willing to study and learn more, trying new things first, and sharing their learnings with the rest of the community.
Using AI well calls on teachers to create thoughtful assignments that cannot be completed wholly by AI. More broadly, it asks all of us, faculty and families alike, to help kids feel engaged and willing to grapple with challenging academic work and the high expectations of the community. The more invested they are, the less incentive they have to shortchange themselves or to cheat on the coursework.
What’s next?
Moving forward, I’m keeping my eye on two things.
1. First, the bottom of the job market, entry-level work, has already dropped out. As the current middle and top levels of the workforce move up and out, how will those people be replaced if no one is coming up behind them? What is the corporate world planning with regard to recruiting and training its people? Once we know, we can better fine-tune our own programming.
2. By some accounts, agentic AI, in which AI tools will initiate and execute work on their own with less human input on decision-making, is only two to five years away from commercial use. What knock-on effects will this have in the workplace and in the classroom? Will agentic AI create a sea change in the way that generative AI has, or will it be incremental, in the way that the Internet of Things, social media (Web 2.0), and Web 3.0 (for instance, cryptocurrency and blockchain technology) have been? Given that these “incremental” examples are things we normally consider transformational, they suggest the scale at which agentic AI might change our world.
A final thought: A Third Space
How can we build intentionally for this new reality? What can we provide to students so that they are better prepared for whatever comes next in the AGI (artificial general intelligence) and agentic AI worlds?
When we look at the surveys of jobs and skills needed in the future, the results lean heavily on “soft skills” such as creativity and collaboration as opposed to “hard skills.” The premise is that businesses can train people in the hard skills aspects of jobs, but the soft skills take longer to develop and require a developmental approach, such as the one we use in schools.
With this as background, I’ve been thinking about what such a space could be. What is the third space between soft skills and hard skills? Between right brain and left brain? Between art and commerce?
Even though AI can accelerate much of our work, we cannot escape the need for young people to have a strong foundation of world knowledge, gradually acquired through multiple experiences with history, science, economics, literature, language, and the arts, along with sophisticated skills in reading, writing, speaking, and numeracy. And how do we sustain our uniquely human traits, including the need to flourish through serving and working on behalf of our families and others?
We must create this third space between the poles, one that provides balance, connection, meaningful work, and the ability to continue to grow.