Paper Title
GPT Takes the Bar Exam
Paper Authors
Paper Abstract
Nearly all jurisdictions in the United States require a professional license exam, commonly referred to as "the Bar Exam," as a precondition for law practice. To even sit for the exam, most jurisdictions require that an applicant complete at least seven years of post-secondary education, including three years at an accredited law school. In addition, most test-takers also undergo weeks to months of further, exam-specific preparation. Despite this significant investment of time and capital, approximately one in five test-takers still score below the rate required to pass the exam on their first attempt. In the face of a complex task that requires such depth of knowledge, what, then, should we expect of the state of the art in "AI"? In this research, we document our experimental evaluation of the performance of OpenAI's `text-davinci-003` model, often referred to as GPT-3.5, on the multistate multiple choice (MBE) section of the exam. While we find no benefit in fine-tuning over GPT-3.5's zero-shot performance at the scale of our training data, we do find that hyperparameter optimization and prompt engineering positively impacted GPT-3.5's zero-shot performance. For the best prompt and parameters, GPT-3.5 achieves a headline correct rate of 50.3% on a complete NCBE MBE practice exam, significantly in excess of the 25% baseline guessing rate, and performs at a passing rate for both Evidence and Torts. GPT-3.5's ranking of responses is also highly correlated with correctness; its top two and top three choices are correct 71% and 88% of the time, respectively, indicating very strong non-entailment performance. While our ability to interpret these results is limited by the nascent scientific understanding of LLMs and the proprietary nature of GPT, we believe that these results strongly suggest that an LLM will pass the MBE component of the Bar Exam in the near future.
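The abstract describes zero-shot prompting of `text-davinci-003` on multiple-choice questions and ranking the model's answer choices by likelihood. The following is a minimal sketch of how such an evaluation could be run, not the authors' actual harness: it assumes the legacy `openai` Python package (pre-1.0 Completions API, which supported `text-davinci-003`), and the question stem and prompt wording are hypothetical illustrations.

```python
# Minimal sketch: query text-davinci-003 on an MBE-style multiple-choice
# question and rank the candidate answer letters by token log-probability.
# Assumes the legacy openai package (<1.0); the question text is hypothetical.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

prompt = (
    "Answer the following bar exam question with a single letter "
    "(A, B, C, or D).\n\n"
    "Question: A plaintiff sues a defendant for negligence...\n"  # hypothetical stem
    "(A) ...\n(B) ...\n(C) ...\n(D) ...\n\n"
    "Answer:"
)

response = openai.Completion.create(
    model="text-davinci-003",
    prompt=prompt,
    max_tokens=1,     # we only need the answer letter
    temperature=0.0,  # deterministic decoding; one of the tunable hyperparameters
    logprobs=5,       # return top alternative tokens so choices can be ranked
)

# Rank the model's candidate answers by log-probability of the first token.
# top_logprobs[0] is a dict mapping each candidate token to its log-probability.
top = response["choices"][0]["logprobs"]["top_logprobs"][0]
ranked = sorted(top.items(), key=lambda kv: kv[1], reverse=True)
for token, logprob in ranked:
    print(repr(token), logprob)
```

Ranking by first-token log-probability is what makes the paper's "top two / top three choices" analysis possible: rather than scoring only the single most likely letter, the ordered candidate list can be checked for whether the correct answer appears within the top-k ranked choices.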