Law School Deans as Cheerleaders, and the Delicate Marketing Dance, Part I: The Case of Agentic AI
The law school dean is and must be the law school’s top cheerleader. She will shout from the rooftops about the latest and greatest new hires, the accomplishments and credentials of the students, the legal victories of the clinics, and the extraordinary scholarship produced by the faculty, and will expound on the bold new programs that promise to bring more luster to the already first-rate institution. What I said when I was dean (perhaps too many times) is that the dean’s job is to point out the key ways in which the law school is both distinguished and distinctive. Marketing materials, both old-fashioned and new-fangled, are where the dean truly brings the receipts and shows the glory.
There are, however, some trickier areas in which the law school’s accomplishments and ambitions trumpeted by the dean can bump up against matters that are more complicated – not in the sense that audiences will not understand what the dean is getting at, but in the sense that one might see this news as not-so-good news. In a series of posts beginning with this one, I want to say a bit about these dilemmas and how deans might navigate them.
Let me address here the brave new world of AI. In the olden days (by olden, I mean going back maybe twenty years or so), many law schools conspicuously championed their efforts to bring technology more squarely into their classes and programs. Deans spoke about the ways in which their law schools were leaning into technology and how new technologies could enrich our curriculum and put us on the cutting, or perhaps even the bleeding, edge of modern legal education. Some especially ambitious law schools created new centers or certificate programs devoted to these tech-focused initiatives; others highlighted faculty hiring and curriculum development, along with outward-facing initiatives that revealed the schools’ appreciation for the fact that technology was impacting legal practice and schools needed to get on board.
Recent advances in artificial intelligence as applied to law have been of the same general character. Schools have looked more closely at potential faculty hires whose understanding of the nature, structure, and parameters of machine learning, natural language processing, big data, and algorithms would contribute to the curriculum and the law school’s scholarly reputation, even if there was no pretense that these technologies would fundamentally reshape how we were educating lawyers or what scholarship was most influential and highest profile.
The rollout of the first iteration of ChatGPT near the end of 2022 was a big deal, and as this new generative AI tool moved from a curiosity to one in common use, law schools (like other university departments) grappled with questions concerning cheating and other negative impacts on the educational ecosystem. For the next two-plus years, as ChatGPT became more refined and more powerful, and as products from other big players emerged (e.g., Microsoft’s Copilot and Google’s Gemini, along with improvements to the search tools offered by Lexis and Westlaw, among others), legal educators grappled with emerging challenges while also tentatively nudging our faculty in the direction of learning more about generative AI and bringing that knowledge to bear in law school courses. Many faculty members and deans worried privately and publicly about what the use of these tools would do to the integrity of our teaching and the success of student learning. At the same time, law schools started to describe more boldly how they were riding the AI wave (if not, to mix a metaphor, always leaping onto the bandwagon) in order to create meaningful opportunities for their students. Some deans showed their school spirit by more actively cheerleading about AI and what they were doing to improve their school’s teaching. Others were, by their relative quiet, more ambivalent about what they saw and wanted to say about AI’s relevance and its progress in penetrating the mission of the law school. Faculty views were diverse, as faculty views always are, and deans could hardly satisfy everyone, given that opinions ranged across the spectrum from “this too will pass” to “AI will rock our world.”
Times in the AI world are changing, and fast. The latest iterations of AI are described as “agentic.” Tech companies such as OpenAI and Anthropic are developing tools that are notably autonomous, able to engage in what looks much like what we regard as reasoning. They aren’t limited to using LLMs to respond directly to human prompts. Therein lies the essential difference. By contrast to the generative AI tools developed and refined over the past few years, agentic AI is capable of carrying out a complex set of tasks through an iterative process that doesn’t necessarily depend on significant human action and interference. One thought leader, speaking to a group of us working on ethical protocols for the use of AI by practicing lawyers, suggested that we should distinguish between the human being in the loop, as traditional versions of LLMs presuppose in order for these tools to be effective, and the human being on the loop. Indeed, the essential utility of agentic AI tools is that they enable humans to define a goal and then task the tool with doing the research, and also the reasoning, to yield outputs that will realize this goal. One technologist has described the agentic AI advantage as being that these bots can engage in “recursive self-improvement.” Whereas generative AI has as its ultimate end product newly generated content, agentic AI is designed to realize defined goals and to undertake the sequence of tasks necessary to do so (not only research, but also reasoning and analysis). In the legal practice context, it is the difference between a tool that is highly successful at doing legal research and a tool that can undertake a full-throated legal task such as, for example, constructing a non-disclosure agreement that meets all the relevant legal requirements of a particular jurisdiction and accomplishes the objectives set out by the lawyer at the time her “agent” is tasked with the project.
A query to ChatGPT describes this comparison in the form of a very simple chart:
Generative AI typically performs one-step transformations:
Prompt → Output
Agentic AI performs multi-step reasoning chains:
Goal → Plan → Actions → Feedback → Revised Plan → Result
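For readers who think in code, the multi-step chain above can be sketched as a simple control loop. This is a toy illustration of the pattern only, not any vendor’s actual API; the `plan`, `act`, and `evaluate` functions here are hypothetical stand-ins for calls to an underlying model, and the jurisdiction-coverage example is invented for the sketch.

```python
# Toy sketch of the agentic loop: Goal -> Plan -> Actions -> Feedback -> Revised Plan -> Result.
# plan/act/evaluate are hypothetical stand-ins for model calls, not a real AI API.

def run_agent(goal, plan, act, evaluate, max_rounds=5):
    """Iterate plan -> act -> feedback until the goal is met or rounds run out."""
    current_plan = plan(goal, feedback=None)
    for _ in range(max_rounds):
        result = act(current_plan)                # carry out the current plan
        ok, feedback = evaluate(goal, result)     # check the result against the goal
        if ok:
            return result                         # goal satisfied
        current_plan = plan(goal, feedback)       # revise the plan using feedback
    return result                                 # best effort after max_rounds

# Invented example: keep "researching" until every target jurisdiction is covered.
def plan(goal, feedback):
    return {"missing": feedback or goal["jurisdictions"]}

def act(p):
    return {"covered": p["missing"][:1]}          # handles one jurisdiction per round

covered = []
def evaluate(goal, result):
    covered.extend(result["covered"])
    missing = [j for j in goal["jurisdictions"] if j not in covered]
    return (not missing), missing

out = run_agent({"jurisdictions": ["NY", "CA", "DE"]}, plan, act, evaluate)
print(covered)  # -> ['NY', 'CA', 'DE']
```

The point of the sketch is the shape of the loop: a one-shot generative call would stop after the first `act`, whereas the agentic pattern folds the evaluation back into a revised plan until the stated goal is met.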
So now we come to the dilemma for the modern law dean. Whereas generative AI could be described by its cheerleaders as a mechanism that would improve legal practice and the welfare of law school graduates, by giving them access to tools that would facilitate legal research and drafting, developers and champions of the newest agentic AI models insist that these products now or in the near future will fundamentally replace the need for many lawyers devoted to solving their clients’ discrete problems. To be clear, this is not the same as predictions of how so-called Artificial General Intelligence (AGI) will unfold, which reflect a world in which robots are completely autonomous and in which there is no real daylight between the cognitive functioning of humans and of machines. Rather, it is a world in which humans remain necessarily on the loop, certainly to define the objectives of the client and also to frame for the bot the universe (which jurisdiction? which sources of law? etc.) that is to be used for the project. Agentic AI tools function as mechanical agents to flesh-and-blood human principals. Nonetheless, what we colloquially refer to as manpower will be considerably affected by growing use of these tools. After all, why sic five associates on a project, working with, say, one or two partners, when the agentic AI can do basically anything that the directing partner needs or wants from their associates? Do the arithmetic, and we can see the worry that lawyer employment will be meaningfully affected, and not in a positive way.
What is an enterprising dean supposed to do with all this? Is she supposed to push the collective opinion needle in the direction of more fear about these developments, either by introducing more skepticism about the utility and value of these tools, or by insisting that there are sound reasons for consumers (including here both law firms and clients) to slow their roll and maintain traditional patterns of hiring and mentoring? The dean may be caught between her beliefs as a prognosticator and her perceived fiduciary responsibilities to her stakeholders.
There are promising strategies that might address these tensions, but let’s start by clarifying what deans should not be doing: ignoring agentic AI on the assumption that it is more hype than help. The evidence shows that these tools are incredibly impressive; present trends suggest steady and maybe even rapid improvement; and we have signals from the market as recently as last month (ask Lexis and Thomson Reuters about their stock plunges) that Wall Street views agentic AI as a game changer for law.
Instead, deans should dust off their cheerleading uniforms and think creatively about how to message this new world of agentic AI in a way that is positive for students and for the law school’s overall welfare and future success. Three thoughts along these lines: First, take a page from the best playbooks of deans who communicated effectively about generative AI over the past couple of years, and speak about how even the fast-developing agentic reasoning models assist lawyers and lawyering rather than supplanting them. Nothing I have yet seen in this agentic AI revolution undermines the essential idea that the best lawyers are the ones who possess superb judgment and can articulate, in their advice to and representation of clients, the right outcomes. By “right” here I mean more than a concise and data-driven depiction of how courts (or agencies or legislatures, etc.) are likely to decide disputes in litigation or how a transaction is more likely than not to furnish economic value. Right also means how the successful completion of a legal task moves the law forward, enables it to adapt to present and changing social conditions, and embeds successful results for clients in a larger framework that includes ethical, efficient, and just outcomes for law and our legal system generally. As responsible legal educators, we work hard to teach our students that the distinction between what the law is and what it ought to be is a porous one; and so the obligation of lawyers is not simply to follow the law but to shape the law, and to use it for salutary purposes. Such shaping requires putting our collective minds’ eyes on the ways in which legal information, doctrine, and institutions have goals that must be replenished and made more supple, efficacious, and ambitious.
The human may be only on the loop in terms of the iterative process run by the bot through agentic AI that accomplishes the goal defined by the principal; but this human remains fundamentally in the loop when it comes to tying together the output of, say, Claude Code or Codex (or the next new thing) with the effects of the bot’s answer on law and the legal ecosystem.
To hazard a prediction: Agentic AI systems will likely develop greater sophistication in working at high levels of efficiency on complex tasks. In doing so, there will be impacts on the workflow within firms, on traditional models of partner-associate and client-lawyer delegation, and perhaps ultimately on the overall contours of entry-level legal employment. That this will create some challenges for law schools and their business models seems unavoidable. On the other hand, innovative law schools can adapt, knowing that humans remain essentially at the helm of the epistemological structure of all this, and thus that the bulk of strategic choices in teaching and supporting students are still made by humans manning the post. Lawyers’ reasoning skills may be supplanted to a meaningful, if still uncertain, degree, but their creativity and their responsibility for demonstrating and communicating sound legal judgment are less replaceable.
And so deans can and should champion their law schools’ efforts to develop in their students the skills that are less amenable to adaptation or even replacement through agentic AI tools. For example, and maybe paradoxically, this suggests that we should continue to insist that law schools teach core legal doctrine, including deep dives into sources of law and how they are constructed and reconfigured. Tasking a bot with digging deeply into these legal sources as an essential part of its iterative process of recommending a particular course of legal action requires a dense understanding of, say, the common law method, how statutes and administrative regulations are designed, and debates about legal interpretation (just to mention a few examples). At the same time, ambitious deans should lean into agentic AI as much as possible, in order to show stakeholders how their students are learning the most robust and constructive tech tools available to help them become the very best humans on the loop they can be.
One of the “what’s next” questions the leaders of agentic AI and law are grappling with is how to adapt their still mostly off-the-rack tools to particular settings. Without suggesting that the best pathway is one that leads to a hundred or so bespoke tools designed around the identified needs of particular law schools, there is surely good cause for agentic AI developers to work collaboratively with law school faculty and leadership to improve the utility of such tools for what this newest generation of lawyers, and also their clients, will want and need in order to practice law in the “right” way. We have some experience in the law-tech ecosystem with the value of well-fashioned collaborations; and agentic AI seems like a good area for fueling and sustaining exciting and effective collaborations.
My optimistic ex-dean self tells me that there is potential for deans to work their way through the dilemma of championing new developments in technology that present serious risks to the employment model, and perhaps also the training model, of traditional legal education. One just needs to be exceptionally intentional and creative about exploring ways in which agentic AI presents opportunities, while also candidly acknowledging some of the risks. Likewise, deans can communicate to their important audiences that their law schools are not obstinately resisting, but are truly embracing, developments and initiatives in both generative and agentic AI that can possibly advance the welfare of the individuals whom lawyers serve, while also enhancing rather than hobbling the education of new lawyers. To take just one quick slice at this, think about how progress in technology-enabled research and reasoning might help bring down the cost of legal services and help close the access to justice gap. (I will focus more squarely on A2J in my next post on this subject.)


With respect, what law schools need to do, and should have done decades ago, and were exhorted to do multiple times over many decades (hello MacCrate Report), is to completely overhaul their pedagogy and their professoriate to emphasize actual practical and vocational skills. Instead, law schools have clung at every turn to hoary old Langdellian notions of law as an elite finishing school devoted to pseudo-philosophical discursions on appellate opinions masquerading as a liberal arts program masquerading as a professional graduate school. And let's be real about why Langdell thought and acted thus back in the 1870s. There are two reasons, really. The first, unspoken by Langdell, is that he was a poor kid from New Hampshire who scholarshipped his way into Harvard and strove his whole life to be accepted by the Boston Brahmins in the Gilded Age. The second, which Langdell was pretty open about, was that he thought the practice of law was sullied and ruined, and he thought this because a vanishingly small but not invisible part of the practicing bar was female, Catholic, poor, immigrant, and/or minority, and this horrified him. Even Bruce Kimball's hagiography of Langdell was unable to hide this disgust. His entire purpose and mission with his pedagogy was to restore law to its rightful place, as the haunt of wealthy white men and a tiny number of academically superior poor white kids like himself. Removing practical & vocational education from law school meant that only those with the right connections and social capital would succeed in finding mentoring and professional networks after law school. Installing a 'here's some ancient Greek to translate' entrance test for HLS was also pretty unsubtle, given that northeastern boarding schools were just about the only schools in America that taught ancient Greek.
Then there was Langdell's frequent practice of exhorting state legislatures to close correspondence law schools for the high crime of teaching the poor or recent immigrants blackletter law, drafting, and courtroom strategy. These are the views of the very 19th-century man to whose pedagogy, 150 years later, EVERY law school still strictly adheres, even the former correspondence law schools like Archer's Evening Law School, which we now call Suffolk (Langdell tried to get the MA Legislature to force Archer's closure at least twice, per Kimball).
And students are told this pedagogy of not-at-all-Socratic lectures, 100% federal appellate law and 0% drafting/negotiating/statutory interpretation/etc. ad nauseam, with 1 final for 100% of their grade, is meant to teach them "how to think like a lawyer." A grimly amusing and ironic justification, as almost all law faculty are hired with two years or less of legal experience and are not competent lawyers yet in their own right. They themselves do not know how to think like lawyers! At least not in the sense of "here's a client with a complex problem and you have to solve it by yourself with no input from senior law partners at the firm where you shuffled documents around for nine months once decades ago." As I recall from Prawfsblawg, for most of the 2010s the fastest-growing segment of law professor hires were PhDs in other disciplines with no legal education! What, pray tell, did they know about how to think like a lawyer, much less how to teach others how to think like a lawyer?
Because of this educational malpractice (honestly, ask any Ed.D what they think of law school pedagogy), fresh law school graduates are, quite frankly, unskilled workers. They are not remotely prepared to do ANYTHING without extensive adult supervision. Ofttimes, they do not even know what the skills are that they do not possess. They are revenue sinks, requiring profitable actual lawyers to devote extensive time to mentor, supervise, edit, and educate these new lawyers on the basics that their law schools and professors deigned beneath their dignity to teach. And law firms and the profession of law themselves are under assault from the much larger business of law, as Susskind predicted 15 years ago. Real estate agents doing closings instead of attorneys, compliance departments giving legal guidance instead of junior inhouse lawyers, algorithms doing document review instead of lawyers, AI threatening most of it, etc., etc., etc. The typical law school graduate has no future in this world unless they have adequate social and cultural capital to install them in a firm willing to invest scant time and profit to mold them into everything law schools deemed it unworthy to teach, just as Langdell desired in 1875. A 19th century education for a 21st century world. The same as it ever was.
What law deans SHOULD do is hire a completely different faculty made up of well-seasoned practicing lawyers who can impart the crucial practice skills that law school students will desperately need to compete against the probability machines - but that would hurt the feelings of a long-dead elitist who law schools all uncritically follow for reasons that have never held water. Law professors who practiced for nine months in 1987 and haven't kept an active law license since Clinton was president are not going to solve anything.
While we are talking about the need for law deans to cheerlead, I do wonder how things are shaking out in the first post-GradPLUS admissions season. How many law schools are shoving their wide-eyed acceptees towards Sallie Mae and her ilk to fill their coffers, and how many are deepening their operating losses by filling in the difference between the new $50k/year federal lending limit and their cost of attendance, which in many cases runs over $100k/year up to over $125k/year?
I am currently integrating AI into my physics course. The rule is, the student needs to understand what they submit. I am also teaching, as you stated, the core material, and explaining that people and AI have to work together for excellence. AI will give you a right answer 90% of the time, but many times it is not THE right answer. The human has to teach AI at the same time. I devise questions and provide the first prompt. It will yield a right answer but not THE right answer. The question is set up so that, at a fork in the neural highway, there is a choice between two roads. One road provides the sophomoric answer, and the second road provides the expert answer.
https://newspotng.com/ai-is-powerful-but-it-cannot-replace-understanding-newspot-nigeria/
This is a post I wrote that was republished in Newspot Nigeria; it describes what many teachers are dealing with.