28 June 2001

Dueling Software - Computers That Argue

by Kate Melville

In the past decade, computers have crept into the legal profession, facilitating the use of artificial intelligence — computation used for intelligent decision making. Lawyers encounter A.I. systems in fairly mundane applications such as information retrieval, expert systems, and the management of complex documents. Thus far, A.I. has neither "killed all the lawyers" nor automated the death penalty appeals process. But, says a computer scientist at Washington University in St. Louis, it is on the brink of changing the legal profession by introducing a "new math" into artificial intelligence that incorporates the ability to argue into computer programs.

Ronald P. Loui, Ph.D., Washington University associate professor of computer science, has written the definitive article on the modelling of argument, consolidating research results from the mid-80s to the present in his paper, "Logical Models of Argument," published in ACM Computing Surveys.

According to Loui, A.I. argument systems permit a new kind of reasoning to be embedded in complex programs. He says the reasoning is much more natural, more human, more social, even more fair.

"There are researchers who want to build software for lawyers to improve information retrieval, dispose of routine tasks more efficiently, sift through evidence and build a convincing argument to present before a judge and jury," says Loui, an expert in A.I., legal reasoning and the philosophy of computing. and law "You have the modern versions of boilerplate contracts that build and manage expert systems and routinely adapt prior work for new clients, and two large retrieval systems, Westlaw and Lexis that are now subject to pressures to improve their technology. All of this is based partly on A.I. Things are definitely changing. The saying is ' old lawyers don't type.' Well, new lawyers come to class with their laptops. There is a cultural shift. The legal profession realizes it is in the information business, not just the people business."

Loui says that since the mid-80s, the expert systems community has actively explored such problems and questioned the prevailing theory that systems had to be built upon induction and deduction, the two logics that were built to serve science.

"Too many people are taught that deductive reasoning is the model for reasoning, that logic consists of mathematical logic and nothing else," Loui says. "Mathematicians love deductive and inductive forms because the former is logic, the latter probability. But they haven't liked argumentation because that's taught by English professors. Well, guess what? Now it's mathematics. We have a mathematics that permits these new systems to take a proof and offer a counterproof.

"We raised the philosophical ante and said, 'It's not just expert rules that come into conflict, it's a whole way of thinking where there are arguments that come into conflict — proof and counterproof.'. We never had a mathematics for this and wanted to have one because if you try to program deductive logic and put it into a computer, nothing comes out as output that wasn't in as input. Deduction is nonampliative — it never goes beyond its premises."

Loui stakes the future of A.I. argumentation on defeasible reasoning — which recognizes that a rule supporting a conclusion can be defeated. The support for such a conclusion is what A.I. specialists call an argument rather than a proof.

Defeasible reasoning draws upon patterns of reasoning outside of mathematical logic, such as those found in law, political science, rhetoric and ethics. It is based on rules that hold by default but give way when there are good reasons for an exception. It also permits rules to be more or less relevant to a situation. In this sense it is like analogy: one analogy might be good, but a different one might be better.

A classical A.I. argument is: Opus is a penguin; therefore, he should not fly. The counterargument says: Opus is a bird; all birds fly, so Opus should fly. Loui offers this classical counterpart in law: A contract exists because there was offer, acceptance, memorandum, and consideration. The counterargument: But one of the parties to the contract is incompetent, so there is no contract. A.I. is incorporating this capability into systems, providing the legal profession more options for analysis and information processing.
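To make the proof-and-counterproof mechanics concrete, here is a minimal sketch in Python of priority-based defeasible rules. It is not Loui's formalism or any particular legal A.I. system; the Rule class, the negate helper and the priority scheme are illustrative assumptions standing in for the richer defeat relations the research actually studies.

    from dataclasses import dataclass

    @dataclass
    class Rule:
        premises: frozenset   # facts the rule needs before it fires
        conclusion: str       # the claim the rule argues for
        priority: int = 0     # higher priority wins a head-on conflict

    def negate(claim):
        # "flies" <-> "not flies"
        return claim[4:] if claim.startswith("not ") else "not " + claim

    def conclusions(facts, rules):
        # Every rule whose premises all hold yields an argument for its conclusion.
        args = [r for r in rules if r.premises <= facts]
        # A conclusion survives only if no equal-or-higher-priority argument attacks it.
        accepted = []
        for a in args:
            attackers = [b for b in args if b.conclusion == negate(a.conclusion)]
            if all(a.priority > b.priority for b in attackers):
                accepted.append(a.conclusion)
        return accepted

    # The Opus example: "birds fly" is a default; "penguins don't fly" is more
    # specific, so it is given higher priority and defeats the default.
    rules = [
        Rule(frozenset({"bird"}), "flies", priority=1),
        Rule(frozenset({"penguin"}), "not flies", priority=2),
    ]

    print(conclusions({"bird"}, rules))             # ['flies']
    print(conclusions({"bird", "penguin"}, rules))  # ['not flies']

The same pattern fits the contract example: a rule from offer, acceptance, memorandum and consideration to "a contract exists" would be defeated by a higher-priority rule from "a party is incompetent" to "no contract exists." And when two arguments attack each other with equal strength, the sketch accepts neither, which hints at the harder cases discussed below.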

Taking the legal concept of "time, place and manner" as an example, Loui cites a famous case, City of Ladue v. Gilleo, in which a suburban St. Louis woman was charged with violating a city ordinance against signs when she posted one in support of a peaceful solution to the Gulf War. Her lawyer argued that such a law violates the constitutional right to free speech. The case ended up in the Supreme Court.

"This is a great case because it's not obvious that there is a right answer to be found in the rules," Loui says. "It's interesting because you can create great competing arguments. Two people can view time, place and manner in a different way. You can't say there is a clear rule on free speech to follow, or a clear rule on laws that restrict suburban sign clutter." Such adversarial positions fostering argumentation don't appear that frequently in daily law where there is an overwhelming amount of mundane decision-making, such as traffic tickets, contracts and tax code.

"A.I and automation already are helpful in low stakes practice," Loui says. "But a logic for the rationales for rules is undone. A logic for theory formation in law is undone. Weighing competing arguments is a start to the logic for rules. The A.I. community is hammering out logics now for modeling competing arguments, and the result should be software that can actually perform a bit of legal reasoning."