
JD Advising


How to Interpret Your Bar Exam Score Report

Here, we explain how to interpret your bar exam score report and how to avoid some of the common mistakes students make when trying to interpret it. If you passed the bar exam, you may be curious about how well you did. If you failed the bar exam, figuring out what your score means is the first step to making sure you pass the next exam!

A bar exam score report will typically have a few things you will want to pay attention to: your overall score, your MBE scaled score, and your written score. States give varying levels of detail. Some states (like Illinois) won’t tell you your score at all if you pass. Others will only tell you your overall MBE and essay score. Some (like New York and Washington) break down exactly how you did on each essay, each MPT, and each subject on the MBE! We will try to give you a general idea of how to figure out what your score report means, assuming your state provides you with some detail.

Note: if you are in New York, please check out this post on how to dissect your New York Bar Exam score report . 

Your overall score

The first thing you will probably look at is your overall score, as this will tell you if you passed or failed the bar exam. (Your bar exam score report should indicate this pretty clearly.)

Uniform Bar Exam states require a score between 260 and 280 to pass the Uniform Bar Exam . So, if your score was above 280, you technically received a score that is considered passing in every Uniform Bar Exam state. Congratulations if that is the case.

One thing you may wonder about when examining your overall score is what percentile you scored in (that is, how did you score in relation to other test takers?). If you read this post on UBE percentiles, you can figure out your approximate percentile. Here are a few numbers to guide you if you are in a Uniform Bar Exam jurisdiction (note: these numbers change slightly every administration; the numbers below reflect the February 2019 bar exam, and you can also see them in the chart below):

  • A 330 is the top percentile (99th percentile for the February 2019 Uniform Bar Exam)
  • A 300 is approximately the 90th percentile
  • A 280 is approximately the 72nd percentile
  • A 270 is approximately the 58th percentile
  • A 260 is approximately the 44th percentile
  • A 250 is approximately the 26th percentile
  • A 240 is approximately the 16th percentile
  • A 230 is approximately the 8th percentile
  • A 220 is approximately the 4th percentile
  • A 210 is approximately the 2nd percentile

Your percentile tells you the percentage of examinees you scored higher than. So, if you are in the 40th percentile, you scored higher than 40% of examinees (and lower than about 60% of examinees). If you failed the bar exam, your percentile can tell you how much work you need to do to pass! For example, if you are in the 2nd percentile, you have a lot more work to do compared to someone who is in the 44th percentile.
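
Since the percentile cutoffs above are just a handful of data points, a quick way to estimate your own percentile is to interpolate between them. The sketch below does this in Python using the February 2019 figures listed above; the function name and the linear interpolation are illustrative assumptions, since the NCBE only publishes discrete percentile tables.

```python
# Approximate UBE percentile lookup by linear interpolation between the
# February 2019 data points listed above. Percentiles shift a little each
# administration, so treat the output as a rough estimate only.
FEB_2019_PERCENTILES = [
    (210, 2), (220, 4), (230, 8), (240, 16), (250, 26),
    (260, 44), (270, 58), (280, 72), (300, 90), (330, 99),
]

def approximate_percentile(ube_score: float) -> float:
    """Estimate the percentile for a UBE score from the table above."""
    points = FEB_2019_PERCENTILES
    if ube_score <= points[0][0]:
        return float(points[0][1])
    if ube_score >= points[-1][0]:
        return float(points[-1][1])
    for (lo_score, lo_pct), (hi_score, hi_pct) in zip(points, points[1:]):
        if lo_score <= ube_score <= hi_score:
            fraction = (ube_score - lo_score) / (hi_score - lo_score)
            return lo_pct + fraction * (hi_pct - lo_pct)

print(approximate_percentile(265))  # about 51, midway between 260 and 270
```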

If you are not in a Uniform Bar Exam (UBE) jurisdiction, see if your state bar releases any information about percentiles. Many states do not. If that is the case, look at your MBE scaled score because you can figure out your percentile on your own for at least the MBE portion!

Your MBE scaled score

You will not get your MBE “raw” score (i.e., the exact number of questions you answered correctly). Rather, you will get a scaled or “converted” score. The scaled score is converted from the raw score, but the National Conference of Bar Examiners does not reveal the formula it uses to do this. So you will not know the exact percentage of questions you answered correctly.

It is very important to dissect your MBE scaled score to see how well you did. For example, review the chart below from the February 2019 bar exam. (This was promulgated by Illinois.) On the far left, find the column titled “Scale Score,” then see “MBE Percentile” right next to it. Compare your scaled score to your MBE percentile. This will tell you approximately how you did even if you did not take the Illinois bar exam.

[Chart: February 2019 MBE scaled scores and corresponding percentiles, as released by Illinois]

If you scored a 105 on the MBE, you are in the 2nd percentile, meaning you have a lot of work to do. If you scored a 140, however, you are in the 73rd percentile! So, that is quite good. (Note: percentiles change from administration to administration, but this is a decent guide as to where you stand.)

The MBE is curved, so just because you scored “close” to passing doesn’t mean you are as close as you think. For example, a 125 is in the 32nd percentile, and a 135 is in the 57th percentile. That is only a 10-point difference in score, but a 25-point difference in percentile! So, if you are in the 120s on the MBE, you may still have a lot of work to do to move your score up.

In most states, you want to aim for a score between 130 and 140 to “pass” the MBE. If you are not sure what score you should aim for, and you are in a Uniform Bar Exam state, just take the overall score needed and divide it by two. So, for example, in New York, you need a 266 to pass the bar exam. If you divide 266 by two, that is 133. So you should aim for at least a 133 on the MBE. (You also can check out this post on passing MBE scores by state if you don’t want to do the math!)
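
The divide-by-two rule of thumb above is easy to apply yourself. Here is a minimal sketch in Python; the function name is ours, and the rule is only the rough heuristic described above, not an official formula.

```python
# Rough rule of thumb from the paragraph above: aim for an MBE scaled score
# of at least half the overall UBE score your jurisdiction requires.
def target_mbe_score(overall_passing_score: int) -> float:
    return overall_passing_score / 2

print(target_mbe_score(266))  # New York (266 to pass) -> aim for 133.0
print(target_mbe_score(260))  # a 260 jurisdiction     -> aim for 130.0
```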

Note: Some jurisdictions also tell you how you scored in each MBE subject. This is worth paying attention to as it can reveal where your weaknesses are! If your jurisdiction tells you that you scored in the 70th percentile in Evidence and in only the 5th percentile in Torts—that’s a sign you likely need to work on Torts!

Your written score

In a Uniform Bar Exam score report, you may also see six scores for your Multistate Essay Exam (MEE) answers and two scores for your Multistate Performance Test (MPT) answers. Not all states release this information, but most do. The vast majority of states grade on a 1–6 scale. (Some states grade on a 1–10 scale, and other states, like New York, do their own thing, which you can read about here.) Your score report will typically list these eight scores individually.

So, the first six scores generally are your MEE scores, and the last two scores are your MPT scores. Remember that these are not weighted equally! The six MEE essays together are worth 60% of your written score, and the two MPTs together are worth 40%. So, each individual MPT (20%) is worth more than each individual MEE essay (10%).
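
To see how that 60/40 split plays out, here is a small illustrative calculation, assuming the common 1–6 grading scale. Jurisdictions convert these raw grades into scaled written scores using formulas they do not publish, so this sketch only shows the relative weights, not the real conversion.

```python
# Illustrative weighting only: six MEE answers share 60% of the written
# score (10% each) and two MPT answers share 40% (20% each).
def weighted_written_average(mee_grades, mpt_grades):
    assert len(mee_grades) == 6 and len(mpt_grades) == 2
    mee_part = sum(0.10 * grade for grade in mee_grades)  # 60% in total
    mpt_part = sum(0.20 * grade for grade in mpt_grades)  # 40% in total
    return mee_part + mpt_part

# Straight 4s on the MEE but 3s on the MPTs drag the weighted average below
# 4, because each MPT counts twice as much as each individual MEE essay.
print(round(weighted_written_average([4, 4, 4, 4, 4, 4], [3, 3]), 2))  # 3.6
```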

In most states that grade on a 1–6 scale, a 4 is considered a passing score. Here is the score considered passing for each essay:

  • A 3.9 is considered passing in Uniform Bar Exam jurisdictions that require a 260 to pass.
  • A 4.0 is considered passing in Uniform Bar Exam jurisdictions that require a 266 to pass.
  • A 4.1 is considered passing in Uniform Bar Exam jurisdictions that require a 273 to pass.
  • A 4.2 is considered passing in Uniform Bar Exam jurisdictions that require a score of 280 to pass.

Many students make the mistake of simply assuming they passed the MPT and MEE portions of the exam when, in reality, they have work to do! Make sure to carefully examine your score report to see whether you actually passed each portion of the exam.

If you are not in a Uniform Bar Exam state, please consult with your jurisdiction for its grading scale and what a passing score is.

Final thoughts

If you did not pass the bar exam, we recommend you read this detailed post on what to do if you failed the bar exam . The last thing you want to do is make the same mistakes and fail it again!  We tell you how to avoid that and how to study better!


If you have questions about how to interpret your bar exam score report, feel free to post in the comments below or contact us .




Bar Exam Scoring: Everything You Need to Know About Bar Exam Scores

It’s important to know how the bar exam is scored so that you can understand what’s required to pass. The following sections will provide an overview of how the bar exam is scored, including information on essay grading, the MBE scale, and converted scores.

Understanding How the Bar Exam Is Scored

First, we’ll look at the UBE (Uniform Bar Exam). This is the bar exam used in a majority of states, and it’s what we’ll be focusing on in this article.

The UBE is broken down into three sections: the MPT (Multistate Performance Test), the MBE (Multistate Bar Exam), and the MEE (Multistate Essay Exam) .

The MBE is a 200-question, multiple-choice exam covering seven subject areas: civil procedure, constitutional law, contracts, criminal law and procedure, evidence, real property, and torts.

Each state sets its own passing standard, so bar exam scoring in Pennsylvania, Georgia, New York, California, Minnesota, and Nevada differs from state to state. In general, though, an MBE scaled score in roughly the 130–140 range puts you on track to pass.

The MEE consists of six essay questions covering the MBE subject areas plus several additional areas of law (such as business associations, family law, trusts and estates, conflict of laws, and secured transactions).

Some states also include one or two additional, state-specific essay questions.

Each MEE essay is typically graded by the jurisdiction on a 1–6 scale, with a score around 4 generally considered a passing answer.

The MPT is a skills-based exam that consists of two 90-minute tasks. It’s designed to test your ability to complete common legal tasks, such as writing a memo or conducting research.

The MPT is graded by each jurisdiction on its own relative scale (often the same 1–6 scale used for the essays); there is no single nationwide passing score.

Now that we’ve covered the basics of the UBE bar exam score range, let’s take a more in-depth look at how each section is graded.


Essay Grading

The MEE essays are graded by trained graders in each jurisdiction, using grading materials provided by the National Conference of Bar Examiners (NCBE).

Depending on the jurisdiction, an essay may be read more than once, for example while graders are being calibrated or when a score falls near the passing line.

If two readings of the same essay differ significantly, the difference is typically resolved through further review.

In most jurisdictions, each essay is graded on a 1–6 scale, with a score around 4 generally considered passing.

To determine your score, the graders will look at the overall quality of your answer, as well as your ability to apply legal principles, analyze fact patterns, and communicate effectively.

The MBE Scale

The MBE is reported on a scaled-score range of roughly 0–200. There is no single nationwide passing score, but a scaled score in the low-to-mid 130s is generally on track to pass in most jurisdictions.

To determine your score, the NCBE will first convert your raw score (the number of questions you answered correctly) into a scaled score.

This is done to account for differences in difficulty between the exams.

After your raw score is converted to a scaled score, it is combined with your written (MEE and MPT) score to determine your overall UBE score.

Scaled Scores

The total score that an examinee receives on the bar exam is typically a scaled score.

A scaled score is a number that has been adjusted so that it can be compared to the scores of other examinees who have taken the same test.

The purpose of scaling is to ensure that the scores are accurate and reliable and to account for any differences in difficulty between different versions of the test.

For example, suppose the questions on one administration of the MBE turn out to be noticeably easier than the questions on another administration, so raw scores run higher on the first exam.

If the same raw passing cutoff were applied to both, it would be much easier to pass on the easier administration than on the harder one.

Scaling (also called equating) prevents this by adjusting the scores so that they are comparable across administrations.

It is important to note that scaling means a higher raw score does not always translate into a higher scaled score.

In some cases, an examinee who receives a scaled score of 140 might have answered more questions correctly than another examinee who received a scaled score of 150 on a different administration.

However, this does not necessarily mean that the first examinee performed better on the exam overall. It could simply be that the first examinee’s version of the exam was easier.

What’s important is that, thanks to scaling, the two examinees have an equal chance of passing the bar exam .
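
To make the idea of scaling concrete, here is a deliberately simplified sketch. The NCBE does not publish its actual equating method, so the linear mean/standard-deviation adjustment below, and every number in it, is purely hypothetical; it only illustrates how the same raw score can map to different scaled scores on exams of different difficulty.

```python
# Hypothetical illustration of scaling; the real NCBE equating formula is
# not public. A simple linear adjustment maps raw scores from a given
# administration onto a common reference scale.
def linear_scale(raw, raw_mean, raw_sd, ref_mean=140.0, ref_sd=15.0):
    """Map a raw score to a reference scale by matching mean and spread."""
    return ref_mean + (raw - raw_mean) / raw_sd * ref_sd

# The same raw score of 130 earns a higher scaled score on a harder
# administration (lower raw mean) than on an easier one (higher raw mean).
print(round(linear_scale(130, raw_mean=120, raw_sd=18), 1))  # ~148.3
print(round(linear_scale(130, raw_mean=135, raw_sd=18), 1))  # ~135.8
```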

Another factor that can affect an examinee’s total score is weighting.

Weighting is the process of giving certain sections of the exam more importance than others.

For example, in some states, the MBE counts for 50% of the total score, while the MEE counts for 30%, and the MPT counts for 20%.

In other states, however, the weights may be different. For example, the MBE might count for 60%, while the MEE and MPT count for 20% each.

The weights are determined by each state’s bar examiners and can be changed at any time.

It is important to note that weighting does not affect an examinee’s score on individual sections of the exam. It only affects the overall total score.
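
As a concrete illustration of weighting, the sketch below combines three section scores under two different weighting schemes. It assumes, purely for illustration, that each section has already been placed on a common 0–200 scale; the weights and scores are examples, not any particular state's rules.

```python
# Combine section scores (assumed to be on a common 0-200 scale) into a
# total on the familiar 400-point scale, under a given weighting scheme.
def combined_score(mbe, mee, mpt, weights=(0.50, 0.30, 0.20)):
    w_mbe, w_mee, w_mpt = weights
    return 2 * (w_mbe * mbe + w_mee * mee + w_mpt * mpt)

section_scores = dict(mbe=140, mee=135, mpt=130)
print(round(combined_score(**section_scores), 1))                              # 273.0 with a 50/30/20 split
print(round(combined_score(**section_scores, weights=(0.60, 0.20, 0.20)), 1))  # 274.0 with a 60/20/20 split
```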

What Is a Passing UBE Score in Every State?

In order to pass the Uniform Bar Exam, examinees must earn a total score of at least 260, the minimum passing score in any UBE jurisdiction (many jurisdictions require 266 or higher). A score of 280 is generally considered a good score, and a score of 300 to 330 is considered excellent.

Bar Exam Passing Score by State

It’s worth noting that a 280 will ensure a passing score in every UBE state, and it will put you in roughly the 73rd percentile of bar exam scores.

On the California “baby bar” exam, essays are scored on a scale of 40 to 100, and an average score typically falls between 70 and 80.

Scoring high on bar exam essays is extremely important to your overall score. For many students, essay writing is the most difficult and stressful part of the bar exam. But with a little practice and guidance, you can learn how to score high on bar exam essays so you can pass the first time and keep your bar exam costs down.


FAQs on Bar Exam Scoring

How is the bar exam scored? The bar exam is scored on a point system; on the UBE, the highest possible score is 400.

What score do you need to pass the bar exam? A passing score depends on the jurisdiction, but it is typically between 260 and 280.

How does scoring work? Points are awarded for each correct answer, and how much each question and section is worth varies by jurisdiction.


The State Bar of California

California Bar Exam Grading


The California Bar Examination consists of the General Bar Examination and the Attorneys’ Examination. 

The General Bar Exam consists of three parts: five essay questions, the Multistate Bar Exam (MBE), and one performance test (PT). The parts of the exam may not be taken separately, and California does not accept the transfer of MBE scores from other jurisdictions. The Attorneys’ Exam consists of the essay questions and performance test. 

Essay Questions

The essay questions are designed to be answered in one hour each and to test legal analysis under a given fact situation. Answers are expected to demonstrate the applicant’s ability to analyze the facts in the question, to tell the difference between material facts and immaterial facts, and to discern the points of law and fact upon which the case turns. The answer must show knowledge and understanding of the pertinent principles and theories of law, their qualifications and limitations, and their relationships to each other. The answer should evidence the applicant’s ability to apply the law to the given facts and to reason in a logical, lawyer-like manner from the premises adopted to a sound conclusion. An applicant should not merely show that they remember the legal principles but should demonstrate their proficiency in using and applying them.

Performance Test

PT questions are designed to be completed in 90 minutes and to test an applicant’s ability to handle a select number of legal authorities in the context of a factual problem involving a client. A PT question consists of two separate sets of materials: a file and a library, with instructions advising the applicant on the task(s) to be performed. In addition to measuring an applicant’s ability to analyze legal issues, PT questions require applicants to: 

  • Sift through detailed factual material and separate relevant from irrelevant facts, assess the consistency and reliability of facts, and determine the need for and source of additional facts; 
  • Analyze the legal rules and principles applicable to a problem and formulate legal theories from facts that may be only partly known and are being developed;
  • Recognize and resolve ethical issues arising in practical situations; 
  • Apply problem-solving skills to diagnose a problem, generate alternative solutions, and develop a plan of action; and 
  • Communicate effectively, whether advocating, advising a client, eliciting information, or effectuating a legal transaction.

An applicant’s performance test response is graded on its compliance with instructions and on its content, thoroughness, and organization.

Multistate Bar Examination (MBE)

The MBE is developed and typically graded by the  National Conference of Bar Examiners .

Grader Calibration

The Committee of Bar Examiners maintains a diverse pool of approximately 150 experienced attorneys from which graders are selected for each grading cycle. A majority of the graders have been grading bar exams for at least five years, and many of them have participated for well over 10 years.

Six groups, each consisting of experienced graders and up to four apprentice graders, are selected to grade the essay and PT answers. The groups convene three times early in the grading cycle for the purpose of calibration. A member of the Examination Development and Grading Team and a member of the Committee supervise each group of graders.   At the first calibration session, the graders discuss a set of sample answers they received prior to the session. These books are copies of answers written by a sample of the applicant pool. After this discussion, the graders receive a set of 15 copies of answers submitted for the current exam, and they begin by reading and assigning a grade to the first answer in the set. The group then discusses the grades assigned before arriving at a consensus, and the process is repeated for each answer in the set. After reading and reaching a consensus on the set of 15 books, the graders independently read a new set of 25 answers without further group discussion and submit grades for analysis and review at the second calibration session.

At the second calibration session, graders discuss the results of the first meeting, re-read and discuss any of the answers where significant disagreement was seen, and resolve the differences through further discussion. An additional 10 answer books are read, graded, and discussed before a consensus grade is assigned to each. The groups are then given their first grading assignments.  

During the third calibration session, the grading standards are reviewed, and the graders read 15 additional answer books as a group to ensure that they are still grading to the same standards.

Graders evaluate answers and assign grades solely on the content of the response. The quality of handwriting and the accuracy of spelling and grammar are not considered when assigning a grade to an applicant’s answer. Answers are identified by code number only; nothing about an individual applicant is known to the graders. Based on the panel discussions and using the agreed-upon standards, graders assign raw scores to essay and performance test answers in five-point increments, on a scale of 40 to 100. To earn a grade of 40, the applicant must at least identify the subject of the question and attempt to apply the law to the facts of the question. If these criteria are not met, the answer is assigned a zero.

Phased Grading

All written answers submitted by applicants who completed the exam in its entirety are read at least once before pass/fail decisions are made. Based on the results of empirical studies relative to reliability, scores have been established for passing and failing after one reading of the exam. For applicants whose scores after the first read are near the required passing score, all answer books are read a second time, and scores of the first and second readings are averaged. The total averaged score after two readings is then used to make a second set of pass/fail decisions.

To pass the exam in the first phase of grading, an applicant must have a total scale score (after one reading) of at least 1390 out of 2000 possible points. Those with total scale scores after one reading below 1350 fail the exam. If the applicant’s total scale score is at least 1350 but less than 1390 after one reading, their answers are read a second time by a different set of graders. If the applicant’s averaged total scale score after two readings is 1390 or higher, the applicant passes the exam. Applicants with averaged total scale scores of less than 1390 fail the exam. 
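
The phased grading rules above reduce to a simple decision procedure. Here is a minimal sketch of that logic; the function and argument names are ours, and the cutoffs (1390 to pass, below 1350 to fail, a second read in between) are the ones stated in the paragraph above.

```python
# Phased grading thresholds for the California General Bar Exam, as
# described above (total scale scores out of 2000 possible points).
def phased_grading_decision(first_read_total, second_read_total=None):
    """Return 'pass', 'fail', or 'reread' under the stated cutoffs."""
    if second_read_total is None:
        if first_read_total >= 1390:
            return "pass"      # passes after a single reading
        if first_read_total < 1350:
            return "fail"      # fails after a single reading
        return "reread"        # 1350-1389: answers are read a second time
    averaged = (first_read_total + second_read_total) / 2
    return "pass" if averaged >= 1390 else "fail"

print(phased_grading_decision(1400))        # pass
print(phased_grading_decision(1340))        # fail
print(phased_grading_decision(1370))        # reread
print(phased_grading_decision(1370, 1420))  # averaged 1395 -> pass
```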

Results from the February administration of the exam are traditionally released in mid-May, and results from the July administration are released in mid-November. 

Results are posted on the Applicant Portal . At 6:00 p.m. on the day results are posted, applicants can also access the pass list on the  State Bar website .

Applicants who have failed the exam receive the grades assigned to their written answers as well as their MBE scaled score. The answer files of unsuccessful applicants will be posted in the Applicant Portal with the release of results. 

Successful applicants do not receive their grades nor their MBE scores and will not have their answers returned.

Reconsideration of grade

The Committee of Bar Examiners believes that its grading and administrative systems afford each applicant a full and fair opportunity to take the exam and a fair and careful consideration of all their exam answers on the bar exam, and that no useful purpose would be served by further consideration by the Committee. 

All scores have been automatically checked for mathematical errors. All answers with scores within the reread range after one reading have been regraded by a different set of graders and double-checked for any mathematical errors before grades were released. For these reasons, the Committee will consider requests for reconsideration only when an applicant establishes with documented evidence that a clerical error resulted in failure or prevented the exam from being properly graded.

The Committee will not extend reconsideration based on challenges to its grading system or the judgments of its professional graders. Requests for reconsideration submitted by or on behalf of an unsuccessful applicant must be in writing and meet the criteria noted above. Requests not meeting those criteria may be summarily denied on that basis, without further explanation. All requests for reconsideration of grades must be received by the Office of Admissions no later than two weeks after the release of exam results.


Bar Exam Toolbox®

Get the tools you need for bar exam success

Advice from a Bar Grader: Tips to Maximize Your Essay Score

January 5, 2017 by Avenne McBride


Raw score, scaled score, what does it all mean? It means get all the points you can.

Both the written and MBE portions of the bar exam are scaled, which means your score may be adjusted in order to standardize the results of the exam. The scaling method is somewhat complicated and also depends on your fellow bar takers. On the MBE, some questions may effectively carry different weight, and that can depend on how many applicants answered them correctly. A question that 98% of applicants answer correctly may be given less weight than a question that only 40% answered correctly.

So don’t spend a lot of time trying to game the scaling, just think of yourself as a game show contestant trying to grab as many prizes as you can in the time allowed.

On the written portion of the exam, there are some easy techniques to pick up points that you may accidentally be leaving behind.

Make sure you answer the call of the questions.

If there are multiple parts to the question, then answer them all. If a question asks for causes of action and defenses, then make sure to cover the defenses. If a question asks for the likelihood of success, then make a statement as to the likelihood. Answer the question even if the answer is “maybe.”

If a question asks for a discussion of various people or properties, make sure to discuss each and every part of the question, even if you think there isn’t much to say on a particular point. Resist the temptation to skip a section that seems unimportant. State your thoughts briefly and clearly, then move on. For example, a question might ask for causes of action against three defendants and possible defenses. The bulk of the question focuses on Defendants #1 and #3. But don’t skip Defendant #2; if there are no causes of action and/or viable defenses for that defendant, make sure to point that out. There are points there, so make sure to pick them up.

DON’T discuss causes of action or parties that are not called for in the question. You don’t have time and they aren’t worth any points no matter how interesting or brilliant your insights may be.

The Bar exam is not the place to demonstrate your superior creative writing abilities. Leave your ego at the door and stick to a simple IRAC approach. Remember your audience. Your audience is the person grading your exam.

Use headings.

Using a heading lets the grader know that you have spotted an issue, already getting you some points. This will also help keep you on track. If you have typed for more than a page under one heading, it’s probably time to move on. You have either spent too much time on one issue or you haven’t broken out the sub-issues.

Write a conclusion without being conclusory.

Don’t forget to clearly state your conclusion at the end of each section so that the grader has easy access to the destination of your analysis. You may think that your discussion speaks for itself and you don’t want to be repetitive. A one-sentence conclusion can wrap it up and demonstrate that you understand the issue, even if you don’t have a rock-solid outcome. Try something like: “In conclusion, after considering all the factors discussed above, the Court may grant injunctive relief.” This shows that you understand the law and have drawn a conclusion. That is worth points, and those are points that the grader wants to give you; let them.

Outline it!

If you run out of time, use an outline format to cover what you can. Listing an issue with a short rule in incomplete sentences will still earn points. Don’t worry about being perfect as the clock ticks down; just get your knowledge down on the paper. You can’t get points without demonstrating your understanding, and sometimes you can get the majority of the points by using a quick outline format.

DON’T stop suddenly. An incomplete essay that ends mid-sentence, leaves sections unfinished, and ignores issues is a red flag to the grader that you are not in control of your time or of the subject.

Clean up oversights or mistakes quickly.

If you have missed a section or gotten disorganized in your essay, just add a clean up section with a clear heading. Points are not deducted for the order in which subjects are discussed. If the call of the question asked for defenses and you realize that you have skipped the defenses for each cause of action, then add a section and label it “defenses to all causes of action.” This may not be the most elegant organization but it immediately lets the grader know that you didn’t forget the discussion of defenses and you just got points.

DON’T start going back through your essay and trying to add to each section; this will take up twice as much time for the same number of points.

Writing high scoring essays for the bar exam is a game of skill that anyone can master. Follow the rules and make it easy for the grader to give you that score that you deserve!


About Avenne McBride

Avenne McBride is a Bar Exam Tutor for Bar Exam Toolbox. After graduating from University of San Francisco Law School and passing the California Bar, Avenne jumped right into a litigation job, practicing plaintiff’s personal injury law, and was thrilled to get right into the courtroom. After a few years of trial work, Avenne decided to get involved with the bar exam process and became a grader, eventually transitioning to work in the non-profit sector.

Along the way Avenne worked for Legal Aid in Oakland and practiced family law specializing in child custody cases. The best part of the work was interacting with clients and helping regular people negotiate the legal system. Based on this experience, Avenne decided to spend more time teaching and educating.

So after many years of grading and working on the development of the California Bar Exam, Avenne is taking her experience to the other side of the table to work directly with students preparing for the exam.



What Is A Good Bar Exam Score?


Whether you just received your official score or you're still preparing, many students wonder what is a good score on the bar exam.

Like any major professional exam, you’ll likely want to know what is considered a good bar exam score. This is true whether you’re just starting to prepare or you already received your official score and now want to see how you stack up against your peers. In this detailed guide, we cover how the bar exam is scored, as well as what scores and percentiles are deemed “good” in the legal world.


How Is The Bar Exam Scored?

The UBE has a possible score of 400 . The UBE has multiple components and consists of the Multistate Performance Test (MPT), the Multistate Essay Exam (MEE), and the Multistate Bar Exam (MBE).

The MPT and MEE together make up a total of 200 points and are scored by jurisdiction. However, the MBE is scored by the National Conference of Bar Examiners , and is worth 200 points itself .

Thus, the MBE is worth 50% of your score , and the MPT/MEE together make up the other 50% . Specifically, the MEE is worth 30% and the MPT 20%.
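
Put in point terms, those percentages mean the 400 available points break down as sketched below; the variable names are ours and the arithmetic simply restates the 50/30/20 split described above.

```python
# Point breakdown implied by the 50% / 30% / 20% weighting of a
# 400-point UBE total described above.
TOTAL_POINTS = 400
mbe_points = 0.50 * TOTAL_POINTS  # 200 points, scored by the NCBE
mee_points = 0.30 * TOTAL_POINTS  # 120 points
mpt_points = 0.20 * TOTAL_POINTS  # 80 points

print(mbe_points, mee_points, mpt_points)    # 200.0 120.0 80.0
print(mbe_points + mee_points + mpt_points)  # 400.0
```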

An important point to keep in mind is that you will not be given what is referred to as a “raw score”, which is the number of questions you answered correctly .

What you will be given is what is termed a “scaled score.” While your raw score is used to determine your scaled score, this conversion is done by the NCBE, and they do not release or publish the formula they use to calculate it.


This can be frustrating since you will not know the exact number of correct answers you gave, and therefore some feel you cannot adequately prepare for a retest if you were not satisfied with your initial score.

Another caveat is that the MBE, the portion explicitly scored by the NCBE, is graded on a curve. So even if you think you scored well, once you see the results you may have to reconsider. A 10-15 point difference in raw score can translate into a much larger swing in your percentile ranking.

What Is Considered A Good Bar Exam Score?

While your UBE report should clearly state whether you passed or failed, a score of 280 is a passing score in every state. The lowest passing score accepted anywhere is 260, and a score of 266 will suffice in states like South Carolina, Montana, and some others.

Depending on the administration, a score of 280 is approximately the 73rd percentile. A 300 is in about the 90th percentile, and a 330 is in the top 1% of all scores.

Keep in mind that out of the 200 questions on the MBE, only 175 are actually scored; the remaining 25 are unscored “pretest” questions that do not count toward your overall score.

Tips For Scoring High On The Bar Exam

Outlines are key.

Break down each outline into manageable pieces. For example, if you start with Torts, chop it up into pieces like “negligence,” which will have its own subsections of duty, breach, causation, and harm. Then do the same for “intentional torts,” “intentional tort defenses,” and so on.

Once you have your chunks, memorize them . Cover your outline and see if you can recite it. Change up the order and do it again . This will ensure that the information gets deeply embedded.

Once you can recite one portion verbatim, move on to the next after a brief bit of downtime . Make sure you are taking breaks, or you will get burnt out. Dealing with information in volumes like this means that breaks are an absolute necessity or fatigue will set in and your capacity to retain the information will be severely diminished.


Review everything after your current active review period. Then put it away, and get some rest . Studies have shown time and again that sleep increases retention and promotes memory formation . Do not deprive yourself of these benefits.

Memorization

The first thing, and arguably one of the more effective ways to raise your score by 20 points or more, is to memorize the law . Memorize it, verbatim. Do not mistake your excellent overall understanding of the law for having it memorized .

The National Conference of Bar Examiners grades the MBE portion of the bar based on nuances and intricacies of the law. So it follows that the easiest way to squeeze more points from that portion is to memorize the nuances and intricacies of the law.

Understanding

Once you have the memorization down, you need to make sure you understand all the things you have memorized. Ensure that you can explain the principles and functions of the laws and codes you now know by memory .

Explain the concepts out loud, and explain them to laypeople. You should have created outlines that help you both memorize and understand the material you will be tested on; use them and refine them.

An ideal outline is about 40-50 pages: long enough to be thorough without being overly detailed.

Methodical Application

Lots of law students try to race through the MBE portion without giving much thought to the actual substantive content of the answers. Focus on quality over quantity, but pace yourself.

Use an hour or two daily to completely dissect and examine MBE questions and the fact pattern they are describing . Note dates or specific events that help illuminate the issue and the rule that is being tested in that question. Bar Prep Hero is an affordable prep solution that offers official MBE questions for practice purposes.

Evaluate each answer option and determine why it is right or wrong; even once you have found the correct answer, work out why the other options are not correct. While this may mean spending 15-20 minutes on a given question, you will gain more from it.

Use Your Resources

You have countless resources such as study groups, tutoring, and actual MBE questions at your disposal. The NCBE even releases actual past questions for study purposes, and if the study aids or bar review courses you are using do not advertise that they use official NCBE questions, they generally do not.

What is a passing bar exam score?

What score is deemed “passing” depends entirely on your state. In some states, a UBE score as low as 260 (out of 400) is considered passing. In other states, you need to score as high as 280. It just depends on your jurisdiction.

What is a good bar exam score?

Given that the bar exam is a pass/fail test, a “good” score is one that gets you a passing score report. That means a good score will turn on what the cut-off threshold is for passing scores in your individual state.


  • Sep 24, 2019

Breaking down Essay Grading by the California Bar Exam

If you are gearing up to take the February 2020 California Bar Exam, you may be wondering how the California bar essay portion is graded. California recently changed its bar exam from a three-day examination to two days. Day one is the written portion of the exam and consists of five one-hour essays and one 90-minute Performance Test.

California divides the graders into six groups, each consisting of 12 experienced graders and up to four apprentice graders. Each group is supervised. The graders assign a raw score to each essay on a scale from 40 to 100. The State Bar of California has explained that “in order to earn a 40, the applicant must at least identify the subject of the question and attempt to apply the law to the facts of the question. If these criteria are not met, the answer is assigned a zero.” We’re going to go out on a limb here and assume you want to hit a score of 65 and above. That’s exactly what our inexpensive materials are geared to accomplish.

A score of 55 is designated as a below passing paper . The applicant missed or incompletely discussed two or more major issues. The applicant had a weak or incomplete analysis of the issues addressed and the overall organization was poor.

A score of 60 is a slightly below passing paper . The applicant may have missed or incompletely discussed one major issue. Discussion of all issues was incomplete and organization of the issues was poor.

A score of 65 is an average passing paper . Applicant had a lawyer-like discussion of all major issues and missed some minor issues. Overall paper could have been better.

A score of 70 is a slightly above average paper . Applicant had a lawyer-like discussion of all major issues and missed some minor issues. Paper could have been better, but analysis and reasoning warrants more than a 65. Well organized paper.

A score of 75 is a distinctly above average paper . Applicant discussed all major issues in a lawyer-like fashion and discussed the ancillary minor issues. Overall, a well organized paper.

A score of 80-85 is an unusually complete and thorough paper. The applicant discussed all major and minor issues in a lawyer-like fashion, and the paper was very well organized.

Components used for grading include organization/format, issue spotting, rule statements, and analysis. A passing paper uses headings and IRAC to organize the issues discussed, and issues are generally discussed in a logical order. A passing paper discusses all the main issues, though it may fail to discuss some of the minor ones. A passing paper has clear rule statements, which may be stated verbatim or in your own words that blend some of the concepts into one statement. Rules are correctly applied to the facts of the case; even if discussion of both sides of an issue is infrequent, the paper still addresses the major issues raised by the fact pattern.

We hope this gives you a better idea of what a passing (65 and above) paper looks like. Feel free to head to our sample page to see a CBB sample of Civil Procedure. All subjects are organized in the same fashion and are specifically geared towards a passing score of 65 and above.


CRUSH The Bar Exam

How to Tackle Essay Writing on the Bar Exam


One skill that you are expected to cultivate and refine during law school is the ability to write well. This makes sense: good writing is essential for many legal careers, where you will need to draft memos, client letters, motions, petitions, briefs, and other legal documents. Consequently, the bar exam takes note of this and makes writing an essential component of the test.

Whether you’re taking the Multistate Essay Exam or a state-specific bar exam , you will be writing lots of essays during the bar exam and in your preparation for it. So here’s what you need to know about essay writing on the bar exam and strategies you can implement to improve your score.

Check out the most important bar exam essay writing tips below!


Multistate Essay Exam (MEE) Jurisdictions

Most states use the Multistate Essay Exam. If you’ll be testing in one of these states, here are the basics you need to know:

There are 6 essay questions in total . This part of the test is 3 hours, so you have 30 minutes per question. Also, the subjects for this portion of the test cover:

  • Partnerships
  • Corporations and limited liability companies
  • Civil procedure
  • Conflict of laws
  • Constitutional law
  • Criminal law and procedure
  • Real property
  • Secured transactions
  • Trusts and future interests
  • Wills and estates

While any of these topics are fair game, these particular topics make up the majority of the MEE:

  • Corporations and LLCs
  • Family law and trusts
  • Future interests

Consequently, you may want to spend extra time preparing for these areas of the law while also studying for the other subjects. 

The good news is that there are guides you can use to determine the most highly tested essay rules. These bar exam study resources will identify these rules and teach you additional rules of law. 

Here’s another important tip: focus your time on these major rules instead of wasting too much energy on nuanced rules that are less likely to be tested. 

Keep reading for more important study tips to help you pass the MEE:

Bar Essays Studying Tips 

The first part of learning how to tackle the essay writing portion of the bar exam is to develop a solid study plan . Your plan should incorporate the following: 

Learn More About the IRAC Method and Format

You may have used a variety of writing styles in law school, such as IRAC, CRAC or CREAC. However, the IRAC structure is the most commonly used one on the bar exam, and is what bar examiners will expect. Hence, you need to be familiar with this writing system:

  • I – Issue
  • R – Rule
  • A – Analysis
  • C – Conclusion

This system ensures that you write concisely and only include the necessary information. It’s not flowery and won’t contain a lot of excess content— which is a good thing, since you’re on such a constrained time limit!

As you practice, read through your answers and label each sentence with an I, R, A, or C. If a sentence cannot be labeled with one of these letters, it probably does not belong.

Practice Essay Writing Each Week

When you spend so much time studying for the bar exam , it may feel tempting to skip practicing the lengthy essay portion of the test. However, this is one of the biggest mistakes made by most test takers. 

Bar essays are an essential component of the test; they can often help leverage a higher score if you don’t do as well on some of the other test portions. Furthermore, while reviewing the rules of law is important, writing about them can show you understand them and know how they apply. 

Basically, don’t leave practicing these essays until the end of your preparation. Instead, make practicing essays part of your weekly study plan!

Bar Exam Essay Practice Tips

Practice Under Timed Conditions

When you first begin practicing the essay portion of the bar exam, you may not want to time yourself so that you can be sure you are spotting all the issues and honing your writing style . However, toward the middle of your study time, you will want to start practicing under timed conditions. 

It is not enough to know how to write a good essay. You need to know how to write a good essay quickly . You need to be able to quickly discuss the most important issues and know when not to elaborate on others.

The best way to study for these questions is to find previous MEE questions and practice them under timed conditions. Then, review the analysis to determine how you did.

Review Rules the Last Two Weeks of Your Study

Focus on memorizing as many rules of law as possible during your last two weeks of studying. You’ll need to be able to recall these basic rules as part of your essay writing without hesitation, so be sure that you can recite rules of law without even thinking about them.

Learn More About The BAR Exam

  • Take These Steps To Pass The Bar Exam!
  • How To Crush The Essay Portion Of The Bar Exam
  • How To Study For The BAR While Working Full Time!
  • How To Pass The BAR After Failing The First Time
  • How To Become A Lawyer

Tips for the Day of the Bar Exam 

Okay, so now it’s the day of the bar exam— you need to know how to truly tackle these questions in the moment of truth. Here’s what you need to do:

Plan The Time You Have for Writing Essays 

Before beginning this portion of the test, you should have a plan on how you will manage your time, such as:

  • First 10 minutes: Read the essay prompt. Maybe read it multiple times. Don’t rush this part; your ability to recall this information will be essential to answering the question. Also, outline your answer as you read through the prompt.
  • Next 15-17 minutes: Write your answer.
  • Last 3 to 5 minutes: Review your answer to check for completeness and to make necessary edits.


Stick to this timeline for every question. If you run over by 5 minutes on every question, you won’t have enough time to tackle the last one. Ultimately, it’s far better to get out an analysis of every question than to answer one question perfectly and not even address another.

Make an Outline

Making an outline can help you organize your thoughts and create a plan on what you will be writing about. Mark up the prompt as you go— you may want to highlight or underline certain information to help your recall later. 

Try to make this outline clear, such as making a bullet list of items related to the prompt. If you run low on time, you can always copy and paste this information to provide a semi-answer to the prompt. Write your rule statement and list the relevant facts that will support your analysis. Also, consider how much time you will need to discuss each subpart of the answer. 

Apply the IRAC Structure

Now it’s time for you to apply what you’ve learned. Use IRAC to fully answer the question. 

How To Use IRAC Method and Format to Crush the Bar Exam Essay Portion

Briefly state the issue in a bolded heading. Issues are usually clearly stated on bar exam essay questions rather than hidden in a fact pattern, so this should be an easy way to pick up points. Restate the issue and move onto the next part of your answer. 

State the rules that apply to the case. This is where rote memorization comes into play, since you need to be able to state the proper rule that applies to the question. Bold key terms to show that you know what rules and terms apply. This will get you the points you need on this section.

The summary of rules should be clear and concise and should demonstrate that you understand what is involved. Only address those rules that actually apply to this case and address the specific question. 

Show how the rule applies, given the particular fact pattern. This will be the longest portion of your answer. However, your analysis should still be shorter than your analysis in your legal writing class. You can pick up (or lose) a lot of points in this portion of the answer! You need to demonstrate that you know how to apply the law to the facts. Generally speaking, the more facts you’re able to explain, the higher your score will be.

Most of the facts in the fact pattern will be there for a reason— and you need to explain why these facts matter in your analysis. Provide a step-by-step analysis of how the facts support your conclusion. You may be able to score extra points by identifying counter-arguments or a majority and minority view. 

Conclusion 

End with a brief conclusion. One sentence is fine here. Perhaps unlike law school exams, there is usually a “right” conclusion. Some writing structures state the conclusion first and then restate it at the end, but this is not recommended on the bar exam. If you start with the wrong conclusion, the grader will look for ways to prove why you are wrong while grading your answer; therefore, save your conclusion for the end!

Organize Your Content 

Make your essay simple to read by taking advantage of all the tools at your disposal. Use paragraph breaks to organize your content, creating clear I, R, A, and C sections. Additionally, bold and underline key words and principles of law. Many essay graders will be scanning your work, so make it easy for them to see that you understood the legal issues involved by drawing their attention to these key terms.

Also, use transitional words to qualify certain statements and to explain where you are going with your answer. This makes it easier for the grader to follow your analysis, as well as helps you to stay on track.

Answer the Question

Seems obvious, right? Listen:

While it seems simple to just answer the question you are asked, many bar exam essay questions include numerous fact patterns, potential rules of law that apply, and even some red herrings. Be sure that you only answer the question that is asked; don’t go off on a tangent that will not score you any extra points! 

Read over the instructions to the question and follow these instructions, even if that means ignoring something or assuming certain facts are true. Any time you devote to issues that are not relevant to the instructions takes away from time that can score you more points.

Manage Your Time 

Now that you’re in the middle of your answers, keep a close eye on time. It can be tempting to take just a few more minutes to feel you completed a question, but this can come back to haunt you by taking away necessary time from another question. Set alarms if you need to — and are permitted to — so that you know when time is up for each section. Also, you may want to set a reminder a few minutes before your allotted time so that you can quickly wrap up the question before moving on to the next one. 

With that being said, avoid writing a partial essay and then moving onto another one. It can take several minutes to regain your bearings and remember what the essay was about when you switch back and forth. Instead, finish each question in the allotted time and then move onto the next.


Quick Tips for Essay Writing

Here’s a quick round-up of tips to keep you on track when preparing your bar exam essays:

  • Read the facts more than once. Don’t rush this part!
  • Don’t write a lengthy, historical background of the law. Instead, make it concise.
  • Don’t write a long analysis regarding policy if the question does not ask for it.
  • Present counter-arguments, but spend less time on them than on your main arguments.
  • Provide a clear and decisive conclusion.
  • Pace yourself. The two-day bar exam is a marathon, not a sprint. Approach each question with patience and don’t try to rush it.
  • Don’t talk to anyone about your answers. This will undoubtedly make you doubt yourself; you don’t need a hit to your self-confidence at this time!
  • Have a fun plan for what to do after the bar exam to have something to look forward to.


So, there you have it— a plan to help you tackle the essay portion of the bar exam. Use these strategies to help boost your score and you will soon be a licensed attorney!

Thanks for reading and good luck on your exam!

Frequently Asked Questions About Bar Essays

How do you write an essay for the bar exam?

There’s a specific structure that bar examiners expect when you write answers to essay questions. This structure is called IRAC, which is short for “Issue, Rule, Analysis, and Conclusion.” When writing a bar essay, try to structure your entire answer around these four components in a way that makes sense.

How many essays are on the bar exam?

The essay portion of the bar exam is called the Multistate Essay Exam, or MEE for short. It is made up of six different essay questions that you must write answers to over the course of three hours. The subjects can vary depending on what test you take, but all are related to the legal field and will require excellent logical reasoning and critical thinking to earn a high score.

How long should bar exam essays be?

Although there may not be a set word limit for your bar exam essay, a good rule of thumb is to write at least 1,000 words for each answer. However, you should avoid padding out your answer’s word count with excessively detailed descriptions of legal concepts; stick to the IRAC format and ensure each word in each sentence has a purpose.

Is it better to write or type the bar exam?

There’s no universal answer to this question, since some students prefer to write by hand and others prefer typing. That said, there are significant benefits to typing your bar exam essay answers rather than using pen and paper, such as easy editing and the ability to copy and paste. On the other hand, power issues have on rare occasions forced examinees to resort to pen and paper, and handwriting makes it impossible to lose progress to a software error.

Valerie Keene is an experienced lawyer and legal writer. Valerie’s litigation successes have included wins for cases involving contract disputes, real property disputes, and consumer issues. She has also assisted countless families with estate planning, guardianship issues, divorce and other family law matters. She provides clients with solid legal advice and representation.

An Essential California Bar Exam Supplement


REAL GRADED ESSAYS:

BarEssays.com has spent years and countless hours constructing its database of California Bar Exam essays. You will have access to thousands of authentic essays that have been graded by the California Bar Examiners.

BAR GRADER REVIEWS:

With each premium subscription to BarEssays.com, you gain access to hundreds of Professional Bar Grader Model Answers and Professional Bar Grader Reviews of selected essays in the BarEssays.com database, written by official former graders of the California Bar Exam.

OUTLINES, CHECKLISTS, TEMPLATES:

With each premium subscription to BarEssays.com, you receive a complete set of California Bar Exam Short Review Outlines, Checklists, and Essay Attack Templates.

EASY SEARCH:

The BarEssays.com search engine allows you to browse essays by subject, essay score, handwritten/typed essays or examination year and includes links to officially released answers. The BarEssays.com search engine will dramatically increase the efficiency of your California Bar Exam studies.

A Database of Over Three Thousand Authentic Graded California Bar Exam Essays

BarEssays.com is a unique and invaluable study tool for the essay portion of the California Bar Exam. We are, by far, the most comprehensive service that provides REAL examples of REAL essays and performance exams by REAL students that were actually taken during the California Bar Exam and graded by the California Bar Examiners. Since launching in 2007, thousands have successfully used the BarEssays.com essay database to prepare for the essay portion of the California Bar Exam, with several entire law schools, review courses, and tutors providing access to all of their students.


Virginia Bar Essays Online Course

Mastering the Virginia Bar Exam Essay Section: A Comprehensive Guide

Tackling the essay section of the Virginia Bar Exam requires a unique strategy. This guide aims to provide a comprehensive understanding of how to effectively navigate this challenging segment of the test, given its different grading standards and essay expectations compared to other states.

Understanding the Unique Nature of the Virginia Bar Exam


Historically, each state had its own unique bar exam. Major bar prep programs like Barbri, Kaplan, and Themis used to customize their materials for every state. However, as most states adopted the Uniform Bar Exam (UBE), these prep programs adjusted their materials to suit a larger audience. They still offer customized outlines and lectures for Virginia, but much of their general essay-writing advice is not well tailored to Virginia essays and is better suited to students taking the Multistate Essay Exam as part of the UBE.

The issue lies in the fact that the essay section of the Virginia Bar exam differs significantly from the UBE’s multi-state essay exam in terms of what it assesses and how it is graded. Consequently, passing the Virginia essay exam calls for specific, tailored advice, which often diverges from the general essay writing advice provided by these prep courses.

The Essentials of Navigating the Virginia Bar Exam

More than just a test of legal knowledge, the Virginia Bar exam gauges your ability to distinguish between relevant and irrelevant information. The examiners will supply specific questions related to the fact pattern, unlike in UBE states or on law school exams, where you may have been asked to find all legal issues within the pattern.

Furthermore, the examiners grade partially based on relevance and scope, meaning you could lose points for including legally correct rules that don’t apply to the question. This practice contrasts with the multi-state essay exam, where the goal is often to find and discuss all legal issues in each fact pattern.

There is almost no overlap between the MBE and the Virginia Essay Exam

Unlike UBE states, where examinees take the MBE and the MEE, in Virginia, you must approach the Multistate Bar Exam (MBE) and the Virginia essay exam as two completely separate tests. Besides federal jurisdiction and the Uniform Commercial Code, there’s hardly any overlap between the two exams, necessitating customized study strategies. What might be a correct response on the MBE could be incorrect on the Virginia essay exam, and vice versa.

How to Prioritize Study Topics


A common yet misguided practice is to predict the frequency of topics that might appear on the exam by using a “subject frequency chart”. But the Virginia exam is no more or less likely to test a particular subject based on past frequency. Instead, the emphasis should be on the weight each topic carries when it does appear, as that significantly influences your overall score and determines how to spend your valuable study time.

The unique grading practices and essay expectations of the Virginia Bar exam present a distinct challenge. To succeed, it’s critical to understand the workings of the exam, adapt your study strategies accordingly, and prioritize topics based on their weight rather than frequency.

As you embark on the journey of preparing for the essay section of the Virginia Bar Exam, there are a few strategies and concepts that will be key to your success. With a focus on understanding the weight of different topics, applying a solid framework for answering questions, and a thoughtful approach to studying, you will increase your chances of achieving a positive outcome.

Prioritizing Subjects in Your Study

Knowing where to focus your attention is a crucial part of preparing for the bar exam. Virginia Civil Procedure is the most important subject, constituting sometimes as much as an estimated 20% of the exam points. Therefore, a solid grasp of this subject will set you miles ahead of the competition. Virginia procedure is often tested with its own fact pattern, and then elements of it are also tested within other subject areas. Wills and Federal Jurisdiction also rank high, and when tested, they often form the core of entire fact patterns.

Less important are the second-tier and lower subjects. Subject charts within the LexBar course categorize subjects into groups, or “tiers,” according to how much of your study time they deserve.

The third-tier subjects, although they may come up relatively often, are less important. Spending significant time on these at the expense of the higher-tier subjects can lead to sub-optimal outcomes: you sacrifice time on high-weight subjects in exchange for trying to learn every detail of a very low-weight one.

Structuring Your Essay: The Rule-Analysis-Conclusion (RAC) Method

A useful strategy for structuring your essay responses is the Rule-Analysis-Conclusion (RAC) format . This method involves grouping all your rules together, followed by the analysis, and then concluding with a succinct answer for each sub-question under a given fact pattern.

Avoid intermingling your rules and analysis, as clarity is key. Keep your answer precise and direct, and mirror the examiners’ own labeling of the sub-questions; no other subheadings are needed.

Understanding and Answering Essay Questions

When it comes to the actual answering of essay questions, a methodical approach is recommended. Start by reading the question, then preview what your conclusion might sound like. Next, read the fact pattern to figure out what facts are relevant to the rules. Your conclusion should mirror the question, forming a declarative sentence that directly answers the question. This approach keeps you focused on the scope of the question and prevents you from veering off into irrelevant facts or rules.

Practicing with Past Virginia Bar Exams

Past exams can be a powerful tool in your preparation. They allow you to get a feel for the style of questions and how they are structured. Look out for multi-part questions, and practice turning the question into a declarative sentence to create your conclusion at the end of your answer.

Remember, the goal is to understand the question, decide on your conclusion, then which legal rules apply, and finally, use the facts to provide relevant analysis under the rules. This meticulous approach to essay answering will help you master the Virginia Bar Exam essay section.

The Virginia Bar Exam essay section is not merely about producing a good analysis; it’s about demonstrating your knowledge of the specific legal rules that apply within Virginia. In other words, a beautiful, logically constructed analysis can fall flat if it doesn’t include the correct legal rules. Therefore, the first step in succeeding in the essay section is to learn the law. But how should one go about this?

Learning Through Testing

One effective method of learning and remembering legal rules is through testing. This might involve writing full tests or, more practically, using a tool like flashcards. Creating flashcards on the specific legal rules you need to remember, and going through them repeatedly until you’ve memorized them, can be a very effective strategy. For instance, a flashcard could simply state “federal diversity jurisdiction” on one side, and on the other side, the associated legal rule: “The parties are completely diverse, and the amount in controversy is greater than $75,000 exclusive of costs and interest. Complete diversity means that no Plaintiff is a citizen of the same state as any defendant.”

This method of learning leverages the power of spaced-repetition testing, which research has shown to enhance memory retention. Even getting the answer wrong can help, as it primes your brain to correct and remember the right answer the next time you see it. Testing yourself could also take the form of practicing essays and self-grading, then making a note of every legal rule you miss or state incorrectly and committing those to memory.
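
For readers who like to automate their drilling, here is a tiny, purely illustrative Python sketch of this self-testing idea. Nothing here comes from any bar prep course, and the single card simply mirrors the diversity jurisdiction example above.

```python
# A tiny, hypothetical flashcard drill: missed cards are re-queued so they come
# back around sooner, a crude stand-in for spaced repetition.
import random

cards = {
    "federal diversity jurisdiction":
        "Complete diversity (no plaintiff shares a state of citizenship with any "
        "defendant) and an amount in controversy greater than $75,000, exclusive "
        "of costs and interest.",
}

def drill(deck):
    queue = list(deck)
    random.shuffle(queue)
    while queue:
        prompt = queue.pop(0)
        input(f"State the rule for: {prompt} (press Enter to reveal) ")
        print(deck[prompt])
        if input("Got it right? [y/n] ").strip().lower() != "y":
            queue.append(prompt)   # missed cards return to the queue

drill(cards)
```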

Where to Get Material for Study

Deciding where to source your study material depends on the importance of the subject matter. For heavily tested subjects like Virginia procedure, an outline from a major bar prep program, such as Barbri, Themis, or Kaplan, could be invaluable. These detailed outlines can be transformed into flashcards for repeated study and memorization.

However, for less significant subjects like the Uniform Commercial Code (UCC), past exam questions and sample answers may suffice. Going through these past exams and extracting the legal rules will equip you with most of the legal rules that may come up on your exam. Virginia tends to test the same rules repeatedly, so familiarizing yourself with these can give you an edge.

The number of flashcards you make should reflect the importance of the subject matter. For instance, Virginia procedure, being the most important subject, should yield close to 300 flashcards. On the other hand, less significant subjects can have fewer flashcards.

Preparing for Certain Questions on the Virginia Bar Exam

Certain questions appear so often that you should have “rule scripts” memorized verbatim and be able to recite them at a moment’s notice with virtually no forethought. For instance, questions involving Dillon’s rule, subject matter jurisdiction based on diversity, and perfecting state court civil appeals come up so often that you should be able to write the rules without having to think. Having these rules hard-coded in your mind can save a significant amount of time during the exam.

Preparing for the essay section of the Virginia Bar Exam can be a daunting task. With the proper strategies and a clear understanding of how to approach legal analysis, you can significantly improve your chances of success. Let’s look at essential aspects of legal analysis, discuss key elements, and provide you with valuable tips to effectively tackle essay questions.

Understanding Legal Analysis

Legal analysis forms a crucial part of the “RAC” (Rule, Analysis, Conclusion) organization format that you should use for the Virginia exam. In the analysis section, you show your ability to connect the facts of the case to the relevant rules and draw logical conclusions. By effectively marrying the facts with the rules, you highlight your understanding of the law and its application.

Identifying Relevant Rules and Elements

To perform a strong legal analysis, you must start by clearly identifying the rules that apply to the given essay question. Different acronyms and keywords may be used to describe these rules, such as “OCEAN,” which stands for the elements of adverse possession: open and notorious, continuous, exclusive, adverse, and hostile. When you encounter an adverse possession question under the real property rubric, you know that all of these elements must be met for a period of 15 years in Virginia to establish title by adverse possession.

Crafting an Effective Analysis

In your analysis section, it is crucial to connect the facts of the case to the identified rules. By linking the facts to the rules, you show a thorough understanding of how the law applies in each situation. For example, if the fact pattern states that John occupied the parking lot openly and notoriously, with a shed that has remained there for seven years, you can assert that his possession was open and notorious, and continuous for seven years, because his shed sat in an open parking lot used by others and the facts tell us it remained there for seven years. This process of connecting the facts to the rules should be repeated throughout your analysis. Every rule element needs an associated fact, and every fact that you use in your analysis must directly connect to a rule element, or it shouldn’t be there.

Formulating a Precise Conclusion for the Virginia Bar Exam

After presenting your analysis, it is important to conclude your essay with a concise and precise statement that directly answers the question posed. For instance, if the question is, “Does Sam have standing to sue Jackson to require him to remove a shed?” then your conclusion should be, “Sam (does or does not) have standing to sue Jackson to require him to remove the shed.” By mirroring the question in your conclusion, you provide a clear and direct response that your grader will immediately recognize.

Applying the Approach: An Example Essay Question

To further illustrate the concepts we just covered, let’s analyze a sample essay question within the realm of real property. The question asks whether Sam has standing to sue Jackson to require him to remove a shed from the shared parking lot of a condominium association. The relevant rule states that in Virginia, only a unit association has standing to sue for misuse of common elements, while individual unit owners lack standing.

Marrying Facts to Rules

To decide which facts matter, we consider the two elements of the rule: 1) individual unit owners lack standing and 2) misuse of common elements. Among the provided facts, the fact that Sam occupied ten parking spaces in the common parking lot stands out as it aligns with the second element of the rule. Additionally, as to the first element of the rule, it is important to note that Sam is not the condo association itself but only an individual unit owner.

Writing an Effective Analysis

In the analysis section, we would say, “Sam occupied ten parking spaces in the common parking lot, which is a common element. Sam is merely an individual unit owner and does not represent the condo association itself.” Notice how this analysis is short but complete. We have addressed each element of the 2-element rule. We should refrain from introducing any irrelevant facts or guessing about potential outcomes, just to make the answer longer. Stop when you’ve fully answered the question. There is no need to speculate about what the condo association might do, or how Sam might be able to get the shed removed. Just answer the question.

Crafting a Precise Conclusion

Concluding our essay, we would assert, “Therefore, Sam does not have standing to sue Jackson to require him to remove the shed.” This conclusion uses the language of the question itself, converting it into a declaratory statement.

Common Mistakes to Avoid

When tackling essay questions, it is crucial to remain focused on the question at hand and not try to solve problems beyond its scope. Some common errors to avoid include:

  • Guessing about future sub-questions or additional issues that may arise.
  • Assuming permissions or actions of other parties without clear evidence.
  • Including irrelevant facts, such as the awareness or knowledge of the parties involved.
  • Attempting to solve broader problems beyond the specific question asked.

To excel in the Virginia Bar Exam essay section, keep these key points in mind:

  • Understand the “RAC” organization format and focus on the analysis section.
  • Find the relevant rules and elements necessary to address the question.
  • Connect the facts of the case to the identified rules in your analysis.
  • Craft a precise conclusion that directly answers the question.
  • Avoid common mistakes like open-ended issue spotting and speculation beyond the question’s scope.

Preparing for the essay section of the Virginia Bar Exam requires a strategic approach to maximize your chances of success. Let’s look at some valuable tips and resources that will help you study effectively, navigate the exam, and improve your essay writing skills. Additionally, we will discuss the LexBar online course, a resource designed to help you in your exam preparation journey to the Virginia Bar Exam.

Understanding the Unique Virginia Bar Exam Structure

The Virginia Bar Exam essay section consists of separate, unrelated sub-questions under each fact pattern. Each question requires a distinct analysis and does not build upon earlier answers unless the exam question specifically says otherwise. It is essential to read each question carefully to avoid mistakenly repeating rules or analysis from earlier responses.

Virginia Bar Exam Approach: Skip, Write, Return

On exam day, adopt a skip, write, and return approach. If you encounter a question that completely stumps you, skip it and move on to the next one. Focus on answering the questions you feel confident about, providing complete and well-organized responses. Once you have addressed all the questions you can, return to the skipped ones and do your best to answer them.

As you tackle the exam questions, time management is crucial. Avoid spending too much time on questions you don’t know. It’s better to focus on completing the questions you can answer confidently and returning to the skipped ones afterward.

Embrace the Unknown

If you encounter a question you don’t know, make an educated guess by picking a side and arguing it convincingly. Avoid wishy-washy answers that try to argue both sides. You need to reach a firm conclusion.

In some cases, you may encounter questions where you have no knowledge of the applicable rules. In such situations, you may need to make up a plausible rule and then write a solid analysis and conclusion. While this isn’t an ideal strategy, it can earn you a few points. However, be cautious not to present both sides without taking a clear position. Answer confidently.

Leveraging the LexBar Online Course for the Virginia Bar Exam

The LexBar online course is an invaluable resource tailored to help you succeed on the Virginia Bar Exam essay section. The course offers a structured approach, combining quizzes, fact patterns, and true model answers to enhance your understanding and application of the material.

The LexBar course provides model answers that follow the ideal structure. Use these model answers as benchmarks to evaluate your own writing. If your organization differs from the model, make the necessary adjustments to align with the recommended format. Unlike the sample answers written by law professors and posted online, the LexBar model answers are formatted and organized exactly how you should organize your answer.

The LexBar course offers an opportunity to submit three essays for grading. Submitting your essays allows the LexBar team to supply personalized feedback. This offers insights into areas of improvement and suggestions for enhancing your responses. Our grading is significantly more detailed and actionable than the typical essay grading from the major bar prep courses. Take advantage of this feedback to refine your writing skills and to help learn some frequently tested Virginia-specific rules.

To make the most of the LexBar course, begin the lessons three to six weeks before your exam. This timeframe strikes a balance between allowing sufficient time for thorough preparation and avoiding rushing through the program. However, individual preferences vary, and some students opt to start earlier or closer to the exam date. You will need about one hour per day for about fifteen days to complete the course. You should not try to complete LexBar in fewer than fifteen days; doing more than one hour (one LexBar lesson) per day produces less than optimal results.

Preparing for the Virginia Bar Exam essay section requires a focused and strategic approach. By understanding the unique exam structure, managing your time effectively, and leveraging resources like the LexBar online course, you can enhance your chances of success. Remember to practice writing essays using the recommended structure and seek feedback to continually improve your skills. With diligent preparation, you’ll be well on your way to conquering the essay section of the Virginia Bar Exam.

Make This Your Last Time - A Candid, No-BS Look at Bar Exam Preparation

How do grading and scoring work for the California Bar Exam?

“How do you calculate your score on the California Bar Exam? How does grading work? i WeNt tO lAw ScHoOL bEcAusE i SuCk aT mAth LOL”

I can feel a blood vessel dilating in my head and an urge to throw my keyboard out the window every time I hear someone say this. If this is your idea of a joke, just leave this planet now before things get more embarrassing for both of us.

While it isn’t politically incorrect for Americans to brag about deficiencies in their math skills, I won’t have that around here.

First of all, stop using this self-deprecatory language. You took the SAT and a shitload of math classes until you were old enough to drive. You can do basic math. Or “maths” if you’re British and like to make words unnecessarily complicated (Worcestershire sauce anyone?)

You are capable—of doing math, doing Pereira and Van Camp calculations, and passing the bar exam.

Second of all, why are we still confused about how grading works for the California Bar Exam? Should I blame the State Bar for its lack of transparency? Are my optimism and faith in you people misplaced?

But if you’re frustrated and confused by the numbers, I’m happy to put a rest to this once and for all.

Grading process and what the points mean for the California Bar Exam

  • Grading the essays
  • Grading the performance test (PT)
  • Grading the MBE
  • Myths around what is a passing score for the essay or the MBE
  • Scaling the points for the California Bar Exam
  • How your raw and scaled scores determine whether you pass the California Bar Exam
  • Summary and takeaways about how grading works for the California Bar Exam
  • So how does knowing the grading procedure help you with preparing for the California Bar Exam?

Knowing how your bar exam works is one of the foundations in your quest to pass the exam. Study not just the law—but also the rules if you want to win the game.

Indeed, it’s the very first step of preparation that I recommend in Passer’s Playbook 2.0:

Part of successful preparation on the California Bar Exam is knowing how the grading works.

Let’s look at three aspects, step by step:

  • Grading: how graders give you points
  • Scaling: how the State Bar adjusts the points
  • Scoring: how your overall score is derived and the ultimate importance of the 1390 threshold

And then a summary and takeaways on how knowing the grading procedure can help you prepare for the CA Bar Exam.

These discussions will diverge a bit depending on whether you’re taking the two-day General Bar Exam or the one-day Attorneys’ Exam.

Before the graders go off and skim your essays while sitting on the toilet or at a stoplight, they convene three times to go through a calibration process, to “ ensure they are still grading to the same standards .”

Each of the five essays is given raw points ranging from 0 to 100. Realistically, you’ll likely get around 55-70 on each of your essays, and at least 40-45 points just for writing down a good-faith answer.

A second grader will reread your essays and PT only if your overall scaled score was close to passing, between 1350 and 1390 . This is “ designed to correct false negatives by re-evaluating borderline papers that may have been, incorrectly, assessed below the cut line .”

The PT is also given 0-100 points. These raw points are doubled. Hence, the PT is worth twice as much as an essay.

The raw score for the written portion is out of 700 points: 500 points from the five essays, 200 points from the PT.

If you’re taking the Attorneys’ Exam, this is the entire basis for your overall scaled score.

If you’re taking the general exam, the written portion counts for half of your overall score. The other half comes from the MBE.

The MBE also has raw scores (number of questions correct) that are converted to scaled scores.

But you only see the scaled score on your score report. The “percent below” numbers are percentiles. This number means you did better than that percentage of test takers. The higher the number, the better.

While those percentiles give you a sense of which subjects to emphasize when studying for the MBE again, they are not representative of your scores.

⚠ You do not “pass” an essay or a PT by scoring a certain amount. You only pass or do not pass the exam itself based on your overall scaled score. ⚠

Technically speaking, an essay that scores a 65 does not mean you “pass” the essay. It’s just a score. Rather, it historically put you on track to pass , assuming all else is similar.

Under the new cut score of 1390, an average raw written score of approximately 61 would typically get you to a scaled written score of 1390. You do not “pass” or “fail” the written portion. You do not “pass” or “fail” the MBE portion. You do not need to “pass” both sections to pass the exam. You could get all 60s on the essays and PT and still pass the exam. You could get all 70s on the essays and PT and still fail the exam. A high MBE score is not an “auto-pass,” but it will reduce the score you need on the written side. In that sense, high-scoring essays and PT are also an “auto-pass.”

If you say “passing essay” as a shorthand for the above, fine. But please try not to spread misinformation that you can “pass” or “fail” a given essay or PT, or that you can “pass” or “fail” the MBE. Rant over.

Your raw written score out of 700 is converted to a 2000-point scaled score using a formula unique to that exam. The California State Bar will publish the formula after each result.

Your raw MBE score (how many you got correct) is also converted to a 2000-point scaled score using a formula unique to that exam. This conversion is determined by the NCBE and is no longer published. Here’s one example from many years ago.

Here are example score reports that show you how raw written scores translate into scaled scores. If you’re wondering how many MBE questions you got right, you can see some examples of combinations of MBE percentiles here to see what it would look like as a scaled score.

There is no linear conversion for either the essays or the MBE. Scoring 1350 out of 2000 doesn’t mean you got 67.5% correct. Leave the exact formulation of the conversion to the statisticians, and stop trying to extrapolate with “quick maths.”

See the page explaining scaling by the CA State Bar and the 2017 Bar Exam Report . An excerpt from the latter:

[Excerpt from the 2017 Bar Exam Report]

Based on the above, here’s my understanding of the scaling:

Converting the MBE raw scores to scaled scores adjusts for difficulty of the exam and makes a given scaled score equivalent to the same scaled score in any other exam; this is done using “ a set of questions that have been used before and whose difficulty is known .” This scaling is done to ensure consistency across administrations.

Yes, you can estimate roughly, but it’s not as simple as 1 raw point = x scaled points. Each administration has a unique formula based on various statistics obtained from that very exam. By definition, there is no universal formula that you can use, only historical data you can try to estimate with.

Honestly, you don’t need to know the fine details.

Just know that a 65 raw on an essay or 125 correct on the MBE will get you different results each time you take the exam. Those are nice benchmarks, but they mean slightly different things every time—another reason they are technically not “passing” scores. This is on purpose, to make each exam and grading process theoretically equivalent to one another. Whether the scaling does its job correctly is a separate matter.

In other words, a given raw score one year is not the same (or at least doesn’t produce the same outcome) as the same raw score another year. But a given scaled score is supposed to be equivalent to the same scaled score another year. That’s how they can make meaningful comparisons across exams and say that 1390 is a passing score.

You doing OK? Here’s the story so far:

  • Graders give you raw scores (e.g., 60 on an essay, 125 on the MBE).
  • The State Bar converts raw scores to standardized scaled scores out of 2000 (for example, 1370 for written portion, 1410 for MBE portion).

Now, the scaled written score and the scaled MBE score are averaged together to arrive at the overall scaled score, also out of 2000. Since each portion is weighted equally at 50%, we can simply average the two together. Put simply, this overall scaled score will be the midpoint between the scaled written and MBE scores.

This overall scaled score is what matters. You pass the exam if you meet or exceed the prescribed threshold: 1390 starting with the October 2020 administration (1440 for February 2020 and prior).

If you’re taking the Attorneys’ Exam, only the written scaled score matters.

Examples: You have a scaled written score of 1380 and a scaled MBE score of 1400. Your overall scaled score is 1390. You will pass with these two scores in October if you’re taking the General Bar Exam. You will not pass with this written score in October if you’re taking the Attorneys’ Exam.
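
To make that arithmetic concrete, here is a minimal sketch in Python, purely illustrative and not an official State Bar tool, that averages the two scaled scores and applies the post-October 2020 thresholds described above:

```python
# A minimal sketch of how the overall scaled score and outcome are derived
# for the General Bar Exam (cut score 1390; reread band 1350-1389).

def california_outcome(scaled_written: float, scaled_mbe: float) -> str:
    """Average the two scaled scores (each out of 2000) and apply the cut score."""
    overall = (scaled_written + scaled_mbe) / 2
    if overall >= 1390:
        return f"overall {overall:.0f}: pass"
    if overall >= 1350:
        return f"overall {overall:.0f}: second read of the written portion"
    return f"overall {overall:.0f}: does not pass"

# The example from the text: 1380 written and 1400 MBE average to exactly 1390.
print(california_outcome(1380, 1400))  # overall 1390: pass
```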

Here’s a quick rundown of what we just talked about:

  • Graders give you raw scores. These raw scores don’t indicate whether you pass a particular essay or PT, or the MBE
  • The raw scores are converted to scaled scores based on a statistical study unique to that exam. These scaled scores don’t indicate whether you pass the written portion or the MBE portion
  • The scaled scores, averaged together, determine whether you pass or fail the exam (or trigger a second read if you were close)

Now with arrows and symbols:

  • Raw written score –> scaled written score
  • Raw MBE score –> scaled MBE score
  • (Scaled written score + scaled MBE score) / 2 = overall scaled score
  • Overall scaled score meets or exceeds 1390 –> pass
  • Overall scaled score is lower than 1390 but higher than 1350 –> second read
  • Overall scaled score is lower than 1350 –> fail

Hopefully this cleared up the black-box mystery that is the grading process for the California bar. Don’t let me catch you being confused about this again.

This means you can attack the exam more strategically, especially if you’re a repeater.

If you see that you’re getting 60 on your essays and think you’re “failing” all your essays, but don’t notice that you were actually very close to a 1440 or 1390 on the written side, then you might waste time focusing too much on essays. Now you know that you actually did pretty well.

Or you see that your scaled score was rather low but you don’t notice that you got a 55 on the PT. You might then neglect to work on your PT in favor of essays, a common pitfall. Your average raw score goes up by 10/7 or about 1.43 points for every 5 points on the PT.

Knowing your score breakdown, you could instead aim for the low-hanging fruit, get 10 more points on the PT, and add 20 more raw written points (almost as good as +5 points on all essays).
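
If you want to sanity-check that PT arithmetic, here is a tiny illustrative calculation assuming the structure described above (five essays plus a double-weighted PT, 700 raw points total):

```python
# Illustrative only: why PT points are "low-hanging fruit" on the written side.
ESSAYS, PT_WEIGHT = 5, 2             # five essays; PT raw points are doubled
UNITS = ESSAYS + PT_WEIGHT           # 7 "units" of 100 raw points = 700 total

pt_gain = 5                          # 5 more raw points on the PT...
raw_total_gain = pt_gain * PT_WEIGHT # ...adds 10 to the 700-point raw total
avg_gain = raw_total_gain / UNITS    # ...or 10/7, about 1.43, per-unit average
print(raw_total_gain, round(avg_gain, 2))   # 10 1.43

# 10 more PT points (20 raw) versus 5 more points on every essay (25 raw):
print(10 * PT_WEIGHT, 5 * ESSAYS)           # 20 25
```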

And btw, every 5-point increment on the California Bar Exam is critical .

The MBE results are sometimes deceptive. The numbers may seem pretty good overall. Maybe you scored poorly on some subjects but did better than most on the others. You can use the relative percentiles to surgically treat your weak subjects, but you should also look at the scaled score to see how far you actually are from the “passing” score of 1390.


NCBE Announces National Mean for February 2024 MBE

MADISON, WISCONSIN, April 2, 2024— The National Conference of Bar Examiners (NCBE) announced today that the national mean scaled score for the February 2024 Multistate Bar Examination (MBE) was 131.8, an increase of more than 0.6 points compared to the February 2023 mean of 131.1. The MBE, one of three sections that make up the bar exam in most US jurisdictions, consists of 200 multiple-choice questions answered over six hours. 

19,496 examinees took the February 2024 MBE, an increase of approximately 1.4% compared to the 19,228 examinees who sat for the exam in February 2023. This increase continues a return toward pre-pandemic examinee numbers that began with last February’s administration.

Approximately 72% of February 2024 examinees were likely repeat test takers and approximately 28% were likely taking the exam for the first time, roughly the same proportion of repeat and first-time test takers as February 2023. [1] All groups of examinees saw performance increases compared to February 2023, with the greatest increase for first-time takers. 

NCBE Director of Assessment and Research Rosemary Reshetar, EdD, commented: “These numbers reflect a continuation of the trend that began last February: we are moving back toward pre-Covid numbers in terms of both the mean and the examinee count. We will likely see an increase in pass rates compared to last February, but we are also still seeing the  effects of the pandemic on examinees who were in law school in 2020, 2021, and 2022.” 

Reliability for the February 2024 exam was 0.93, slightly higher than the reliability for the February 2023 exam and consistent with the 5-year average for February administrations. (Reliability is an indicator of the consistency of a set of examination scores, with a maximum value of 1.0.)

Jurisdictions begin releasing their February 2024 results this week; bar examination pass rates  as reported by jurisdictions are available on the NCBE website. Many jurisdictions are still in the process of grading the written components of the bar exam; once this process is completed, bar exam scores will be calculated and passing decisions reported by those jurisdictions.

More information about the MBE and bar passage rates can be found in the following Bar Examiner  articles:

  • The MBE Mean and Bar Passage Predictions
  • When the Mean Misleads: Understanding Bar Exam Score Distributions
  • Why are February Bar Exam Pass Rates Lower than July Pass Rates?

[1] The first-time and repeat MBE-based test taker information calculated by NCBE is an approximation based on the NCBE Number and biographic data, which has not been used consistently in all jurisdictions across time. Prior to 2022, approximately 10% of examinees could not be tracked with certainty by NCBE as either first-time or repeat takers due to a lack of sufficient biographic information.

About the National Conference of Bar Examiners

The National Conference of Bar Examiners (NCBE), headquartered in Madison, Wisconsin, is a not-for-profit corporation founded in 1931. NCBE promotes fairness, integrity, and best practices in bar admissions for the benefit and protection of the public, in pursuit of its vision of a competent, ethical, and diverse legal profession. Best known for developing bar exam content used by 54 US jurisdictions, NCBE serves admission authorities, courts, the legal education community, and candidates by providing high-quality assessment products, services, and research; character investigations; and informational and educational resources and programs.  In 2026, NCBE will launch the next generation of the bar examination, ensuring that the exam continues to test the knowledge, skills, and abilities required for competent entry-level legal practice in a changing profession.  For more information, visit the NCBE website at  https://www.ncbex.org .

About the Multistate Bar Examination

The Multistate Bar Examination (MBE) is a six-hour, 200-question multiple-choice examination developed by NCBE and administered by user jurisdictions as part of the bar examination, typically given twice each year. The purpose of the MBE is to assess the extent to which an examinee can apply fundamental legal principles and legal reasoning to analyze given fact patterns. The subjects tested on the MBE are Civil Procedure, Constitutional Law, Contracts, Criminal Law and Procedure, Evidence, Real Property, and Torts. In addition to assessing examinee knowledge and skills, the MBE is used to equate the bar exam.  Equating is a statistical procedure used for most large-scale standardized tests to ensure that exam scores retain the same meaning across administrations and over time.  More information about the MBE is available on the NCBE website at  https://www.ncbex.org/exams/mbe/.

About the Uniform Bar Examination

The UBE is a two-day bar examination composed of the Multistate Essay Examination (MEE), two Multistate Performance Test (MPT) tasks, and the Multistate Bar Examination (MBE). It is uniformly administered, graded, and scored and results in a portable score that can be transferred to other UBE jurisdictions. More information about the UBE is available on the NCBE website at  https://www.ncbex.org/exams/ube/ . 41 US jurisdictions currently participate in the UBE, and more than 45,000 examinees took the UBE in 2023.  


Re-evaluating GPT-4’s bar exam performance

  • Original Research
  • Open access
  • Published: 30 March 2024

  • Eric Martínez (ORCID: orcid.org/0000-0003-2180-6268)


Perhaps the most widely touted of GPT-4’s at-launch, zero-shot capabilities has been its reported 90th-percentile performance on the Uniform Bar Exam. This paper begins by investigating the methodological challenges in documenting and verifying the 90th-percentile claim, presenting four sets of findings that indicate that OpenAI’s estimates of GPT-4’s UBE percentile are overinflated. First, although GPT-4’s UBE score nears the 90th percentile when examining approximate conversions from February administrations of the Illinois Bar Exam, these estimates are heavily skewed towards repeat test-takers who failed the July administration and score significantly lower than the general test-taking population. Second, data from a recent July administration of the same exam suggests GPT-4’s overall UBE percentile was below the 69th percentile, and \(\sim\) 48th percentile on essays. Third, examining official NCBE data and using several conservative statistical assumptions, GPT-4’s performance against first-time test takers is estimated to be \(\sim\) 62nd percentile, including \(\sim\) 42nd percentile on essays. Fourth, when examining only those who passed the exam (i.e. licensed or license-pending attorneys), GPT-4’s performance is estimated to drop to \(\sim\) 48th percentile overall, and \(\sim\) 15th percentile on essays. In addition to investigating the validity of the percentile claim, the paper also investigates the validity of GPT-4’s reported scaled UBE score of 298. The paper successfully replicates the MBE score, but highlights several methodological issues in the grading of the MPT + MEE components of the exam, which call into question the validity of the reported essay score. Finally, the paper investigates the effect of different hyperparameter combinations on GPT-4’s MBE performance, finding no significant effect of adjusting temperature settings, and a significant effect of few-shot chain-of-thought prompting over basic zero-shot prompting. Taken together, these findings carry timely insights for the desirability and feasibility of outsourcing legally relevant tasks to AI models, as well as for the importance for AI developers to implement rigorous and transparent capabilities evaluations to help secure safe and trustworthy AI.


1 Introduction

On March 14th, 2023, OpenAI launched GPT-4, said to be the latest milestone in the company’s effort in scaling up deep learning (OpenAI 2023a ). As part of its launch, OpenAI revealed details regarding the model’s “human-level performance on various professional and academic benchmarks” (OpenAI 2023a ). Perhaps none of these capabilities was as widely publicized as GPT-4’s performance on the Uniform Bar Examination, with OpenAI prominently displaying on various pages of its website and technical report that GPT-4 scored in or around the “90th percentile,” (OpenAI 2023a , b , n.d.) or “the top 10% of test-takers,” (OpenAI 2023a , b ) and various prominent media outlets (Koetsier 2023 ; Caron 2023 ; Weiss 2023 ; Wilkins 2023 ; Patrice 2023 ) and legal scholars (Schwarcz and Choi 2023 ) resharing and discussing the implications of these results for the legal profession and the future of AI.

Of course, assessing the capabilities of an AI system as compared to those of a human is no easy task (Hernandez-Orallo 2020 ; Burden and Hernández-Orallo 2020 ; Raji et al. 2021 ; Bowman 2022 , 2023 ; Kojima et al. 2022 ), and in the context of the legal profession specifically, there are various reasons to doubt the usefulness of the bar exam as a proxy for lawyerly competence (both for humans and AI systems), given that, for example: (a) the content on the UBE is very general and does not pertain to the legal doctrine of any jurisdiction in the United States (National Conference of Bar Examiners n.d.-h), and thus knowledge (or ignorance) of that content does not necessarily translate to knowledge (or ignorance) of relevant legal doctrine for a practicing lawyer of any jurisdiction; and (b) the tasks involved on the bar exam, particularly multiple-choice questions, do not reflect the tasks of practicing lawyers, and thus mastery (or lack of mastery) of those tasks does not necessarily reflect mastery (or lack of mastery) of the tasks of practicing lawyers.

Moreover, although the UBE is a closed-book exam for humans, GPT-4’s huge training corpus, largely distilled into its parameters, means that it can effectively take the UBE “open-book.” This suggests that not only may the UBE be an imperfect proxy for lawyerly competence, but it is also likely to provide an overly favorable estimate of GPT-4’s lawyerly capabilities relative to humans.

Notwithstanding these concerns, the bar exam results appeared especially startling compared to GPT-4’s other capabilities, for various reasons. Aside from the sheer complexity of the law in form (Martinez et al. 2022a , b , in press) and content (Katz and Bommarito 2014 ; Ruhl et al. 2017 ; Bommarito and Katz 2017 ), the first is that the boost in performance of GPT-4 over its predecessor GPT-3.5 (80 percentile points) far exceeded that of any other test, including seemingly related tests such as the LSAT (40 percentile points), GRE verbal (36 percentile points), and GRE Writing (0 percentile points) (OpenAI 2023b , n.d.).

The second is that half of the Uniform Bar Exam consists of writing essays (National Conference of Bar Examiners n.d.-h), Footnote 1 and GPT-4 seems to have scored much lower on other exams involving writing, such as AP English Language and Composition (14th–44th percentile), AP English Literature and Composition (8th–22nd percentile) and GRE Writing ( \(\sim\) 54th percentile) (OpenAI 2023a , b ). In each of these three exams, GPT-4 failed to achieve a higher percentile performance over GPT-3.5, and failed to achieve a percentile score anywhere near the 90th percentile.

Moreover, in its technical report, OpenAI claims that its percentile estimates are “conservative” estimates meant to reflect “the lower bound of the percentile range” (OpenAI 2023b , p. 6), implying that GPT-4’s actual capabilities may be even greater than its estimates.

Methodologically, however, there appear to be various uncertainties related to the calculation of GPT’s bar exam percentile. For example, unlike the administrators of other tests that GPT-4 took, the administrators of the Uniform Bar Exam (the NCBE as well as different state bars) do not release official percentiles of the UBE (JD Advising n.d.-b; Examiner n.d.-b), and different states in their own releases almost uniformly report only passage rates as opposed to percentiles (National Conference of Bar Examiners n.d.-c; The New York State Board of Law Examiners n.d.), as only the former are considered relevant to licensing requirements and employment prospects.

Furthermore, unlike its documentation for the other exams it tested (OpenAI 2023b , p. 25), OpenAI’s technical report provides no direct citation for how the UBE percentile was computed, creating further uncertainty over both the original source and validity of the 90th percentile claim.

The reliability and transparency of this estimate has important implications on both the legal practice front and AI safety front. On the legal practice front, there is great debate regarding to what extent and when legal tasks can and should be automated (Winter et al. 2023 ; Crootof et al. 2023 ; Markou and Deakin 2020 ; Winter 2022 ). To the extent that capabilities estimates for generative AI in the context law are overblown, this may lead both lawyers and non-lawyers to rely on generative AI tools when they otherwise wouldn’t and arguably shouldn’t, plausibly increasing the prevalence of bad legal outcomes as a result of (a) judges misapplying the law; (b) lawyers engaging in malpractice and/or poor representation of their clients; and (c) non-lawyers engaging in ineffective pro se representation.

Meanwhile, on the AI safety front, there appear to be growing concerns about transparency Footnote 2 among developers of the most powerful AI systems (Ray 2023 ; Stokel-Walker 2023 ). To the extent that transparency is important to ensuring the safe deployment of AI, a lack of transparency could undermine our confidence in the prospect of safe deployment of AI (Brundage et al. 2020 ; Li et al. 2023 ). In particular, releasing models without an accurate and transparent assessment of their capabilities (including by third-party developers) might lead to unexpected misuse/misapplication of those models (within and beyond legal contexts), which might have detrimental (perhaps even catastrophic) consequences moving forward (Ngo 2022 ; Carlsmith 2022 ).

Given these considerations, this paper begins by investigating some of the key methodological challenges in verifying the claim that GPT-4 achieved 90th percentile performance on the Uniform Bar Examination. The paper’s findings in this regard are fourfold. First, although GPT-4’s UBE score nears the 90th percentile when examining approximate conversions from February administrations of the Illinois Bar Exam, these estimates appear heavily skewed towards those who failed the July administration and whose scores are much lower compared to the general test-taking population. Second, using data from a recent July administration of the same exam reveals GPT-4’s percentile to be below the 69th percentile on the UBE, and \(\sim\) 48th percentile on essays. Third, examining official NCBE data and using several conservative statistical assumptions, GPT-4’s performance against first-time test takers is estimated to be \(\sim\) 62nd percentile, including \(\sim\) 42nd percentile on essays. Fourth, when examining only those who passed the exam, GPT-4’s performance is estimated to drop to \(\sim\) 48th percentile overall, and \(\sim\) 15th percentile on essays.

Next, whereas the above four findings take for granted the scaled score achieved by GPT-4 as reported by OpenAI, the paper then proceeds to investigate the validity of that score, given the importance (and often neglectedness) of replication and reproducibility within computer science and scientific fields more broadly (Cockburn et al. 2020 ; Echtler and Häußler 2018 ; Jensen et al. 2023 ; Schooler 2014 ; Shrout and Rodgers 2018 ). The paper successfully replicates the MBE score of 158, but highlights several methodological issues in the grading of the MPT + MEE components of the exam, which call into question the validity of the essay score (140).

Finally, the paper also investigates the effect of adjusting temperature settings and prompting techniques on GPT-4’s MBE performance, finding no significant effect of adjusting temperature settings on performance, and some significant effect of prompt engineering on model performance when compared to a minimally tailored baseline condition.

Taken together, these findings suggest that OpenAI’s estimates of GPT-4’s UBE percentile, though clearly an impressive leap over those of GPT-3.5, are likely overinflated, particularly if taken as a “conservative” estimate representing “the lower range of percentiles,” and even more so if meant to reflect the actual capabilities of a practicing lawyer. These findings carry timely insights for the desirability and feasibility of outsourcing legally relevant tasks to AI models, as well as for the importance for generative AI developers to implement rigorous and transparent capabilities evaluations to help secure safer and more trustworthy AI.

2 Evaluating the 90th percentile estimate

2.1 Evidence from OpenAI

Investigating the OpenAI website, as well as the GPT-4 technical report, reveals a multitude of claims regarding the estimated percentile of GPT-4’s Uniform Bar Examination performance but a dearth of documentation regarding the backing of such claims. For example, the first paragraph of the official GPT-4 research page on the OpenAI website states that “it [GPT-4] passes a simulated bar exam with a score around the top 10% of test takers” (OpenAI 2023a ). This claim is repeated several times later in this and other webpages, both visually and textually, each time without explicit backing. Footnote 3

Similarly undocumented claims are reported in the official GPT-4 Technical Report. Footnote 4 Although OpenAI details the methodology for computing most of its percentiles in A.5 of the Appendix of the technical report, there does not appear to be any such documentation for the methodology behind computing the UBE percentile. For example, after providing relatively detailed breakdowns of its methodology for scoring the SAT, GRE, LSAT, AP, and AMC, the report states that “[o]ther percentiles were based on official score distributions,” followed by a string of references to relevant sources (OpenAI 2023b, p. 25).

An examination of these references, however, reveals that none of the sources contains any information regarding the Uniform Bar Exam, let alone its “official score distributions” (OpenAI 2023b , pp. 22–23). Moreover, aside from the Appendix, there are no other direct references to the methodology of computing UBE scores, nor any indirect references aside from a brief acknowledgement thanking “our collaborators at Casetext and Stanford CodeX for conducting the simulated bar exam” (OpenAI 2023b , p. 18).

2.2 Evidence from “GPT-4 passes the bar exam”

Another potential source of evidence for the 90th percentile claim comes from an early draft version of the paper, “GPT-4 passes the bar exam,” written by the administrators of the simulated bar exam referenced in OpenAI’s technical report (Katz et al. 2023 ). The paper is very well-documented and transparent about its methodology in computing raw and scaled scores, both in the main text and in its comprehensive appendices. Unlike the GPT-4 technical report, however, the focus of the paper is not on percentiles but rather on the model’s scaled score compared to that of the average test taker, based on publicly available NCBE data. In fact, one of the only mentions of percentiles is in a footnote, where the authors state, in passing: “Using a percentile chart from a recent exam administration (which is generally available online), ChatGPT would receive a score below the 10th percentile of test-takers while GPT-4 would receive a combined score approaching the 90th percentile of test-takers” (Katz et al. 2023 , p. 10).

2.3 Evidence online

As explained by JD Advising (n.d.-b), the National Conference of Bar Examiners (NCBE), the organization that writes the Uniform Bar Exam (UBE), does not release UBE percentiles. Footnote 5 Because there is no official percentile chart for the UBE, all generally available online estimates are unofficial. Perhaps the most prominent of these estimates are the percentile charts from the pre-July 2019 Illinois Bar Exam. Pre-2019, Footnote 6 Illinois, unlike other states, provided percentile charts for its own exam, which allowed UBE test-takers to estimate their approximate percentile given the similarity between the two exams (JD Advising n.d.-b). Footnote 7

Examining these approximate conversion charts, however, yields conflicting results. For example, although the percentile chart from the February 2019 administration of the Illinois Bar Exam estimates a score of 300 (2–3 points higher than GPT-4’s score) to be at the 90th percentile, this estimate is heavily skewed relative to the general population of July exam takers, Footnote 8 since the majority of those who take the February exam are repeat takers who failed the July exam (Examiner n.d.-a), Footnote 9 and repeat takers score much lower Footnote 10 and are much more likely to fail than first-timers. Footnote 11

Indeed, the latest available percentile chart for the July exam places GPT-4’s UBE score at \(\sim\) 68th percentile, well below the 90th percentile figure cited by OpenAI (Illinois Board of Admissions to the Bar 2018 ).

3 Towards a more accurate percentile estimate

Although using the July bar exam percentiles from the Illinois Bar would seem to yield a more accurate estimate than the February data, the July figure is also biased towards lower scorers, since approximately 23% of July test takers nationally are estimated to be repeat takers, who score, for example, 16 points below first-timers on the MBE (Reshetar 2022 ). Limiting the comparison to first-timers would provide a more accurate comparison by avoiding double-counting those who have taken the exam again after failing once or more.

Relatedly, although (virtually) all licensed attorneys have passed the bar, Footnote 12 not all those who take the bar become attorneys. To the extent that GPT-4’s UBE percentile is meant to reflect its performance against other attorneys, a more appropriate comparison would not only limit the sample to first-timers but also to those who achieved a passing score.

Moreover, the data discussed above is based purely on Illinois Bar Exam data, which (at the time of the chart) was similar but not identical to the UBE in its content and scoring (JD Advising n.d.-b), whereas a more accurate estimate would be derived more directly from official NCBE sources.

3.1 Methods

To account for the issues with both OpenAI’s estimate and the July estimate, more accurate estimates (for GPT-3.5 and GPT-4) were computed here based on first-time test-takers, including both (a) first-time test-takers overall, and (b) those who passed.

To do so, the parameters for a normal distribution of scores were separately estimated for the MBE and essay components (MEE + MPT), as well as the UBE score overall. Footnote 13

Assuming that UBE scores (as well as MBE and essay subscores) are normally distributed, percentiles of GPT’s score can be directly computed after computing the parameters of these distributions (i.e. the mean and standard deviation).

Thus, the methodology here was to first compute these parameters, then generate distributions with these parameters, and then compute (a) what percentage of values on these distributions are lower than GPT’s scores (to estimate the percentile against first-timers); and (b) what percentage of values above the passing threshold are lower than GPT’s scores (to estimate the percentile against qualified attorneys).

With regard to the mean, according to publicly available official NCBE data, the mean MBE score of first-time test-takers is 143.8 (Reshetar 2022 ).

As explained by official NCBE publications, the essay component is scaled to the MBE data (Albanese 2014 ), such that the two components have approximately the same mean and standard deviation (Albanese 2014 ; Illinois Board of Admissions to the Bar 2018 , 2019 ). Thus, the methodology here assumed that the mean first-time essay score is 143.8. Footnote 14

Given that the total UBE score is computed directly by adding MBE and essay scores (National Conference of Bar Examiners n.d.-h), an assumption was made that mean first-time UBE score is 287.6 (143.8 + 143.8).

With regard to standard deviations, information regarding the SD of first-timer scores is not publicly available. However, distributions of MBE scores for July administrations (provided in 5-point intervals) are publicly available on the NCBE website (The National Bar Examiner n.d.).

Under the assumption that first-timers have approximately the same SD as the general test-taking population in July, the standard deviation of first-time MBE scores was computed by (a) entering the publicly available distribution of MBE scores into R; and (b) taking the standard deviation of this distribution using the built-in sd() function (which computes the sample standard deviation of a numeric vector).
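For illustration, a minimal R sketch of this step might look as follows; the bin midpoints and counts below are hypothetical placeholders rather than the actual NCBE figures.

```r
# Hypothetical 5-point MBE score bins (midpoints) and examinee counts;
# in the actual analysis these come from the publicly available NCBE distribution.
bin_midpoints <- seq(102.5, 187.5, by = 5)
bin_counts    <- c(5, 12, 30, 80, 150, 240, 420, 610, 800, 900,
                   850, 700, 520, 340, 200, 110, 50, 20)

# Expand the binned distribution into one value per examinee, then take the
# sample standard deviation with the built-in sd() function.
mbe_scores <- rep(bin_midpoints, times = bin_counts)
sd(mbe_scores)
```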

Given that, as mentioned above, the distribution (mean and SD) of essay scores is the same as that of MBE scores, the SD for essay scores was computed in the same manner as above.

With regard to the UBE, although UBE standard deviations are not publicly available for any official exam, they can be inferred from a combination of the mean UBE score for first-timers (287.6) and first-time pass rates.

For reference, standard deviations can be computed analytically by rearranging the standard z-score formula,

\(z = \frac{x - \mu }{\sigma }\), so that \(\sigma = \frac{x - \mu }{z}\),

where:

x is the quantile (the value associated with a given percentile, such as a cutoff score),

\(\mu\) is the mean,

z is the z-score corresponding to a given percentile, and

\(\sigma\) is the standard deviation.

Thus, by (a) subtracting the mean ( \(\mu\) ) from the cutoff score of a given administration ( x ); and (b) dividing that difference by the z-score ( z ) corresponding to the percentile of the cutoff score (i.e., the percentage of people who did not pass), one is left with the standard deviation ( \(\sigma\) ).

Here, the standard deviation was calculated according to the above formula using the official first-timer mean, along with pass rate and cutoff score data from New York, which according to NCBE data has the highest number of examinees of any jurisdiction (National Conference of Bar Examiners 2023 ). Footnote 15
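For illustration, this calculation can be carried out in a few lines of R; the cutoff and failure rate below are stand-in values for the official New York figures rather than the exact numbers used in the analysis.

```r
# sigma = (x - mu) / z, where x is the passing cutoff, mu the first-timer mean,
# and z the z-score at the percentile of the cutoff (the share who did not pass).
mu        <- 287.6   # assumed mean first-timer UBE score (143.8 + 143.8)
x         <- 266     # illustrative passing cutoff
fail_rate <- 0.25    # illustrative first-time failure rate

z         <- qnorm(fail_rate)   # z-score below which fail_rate of scores fall
sigma_ube <- (x - mu) / z       # implied SD of first-timer UBE scores
sigma_ube
```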

After obtaining these parameters, distributions of first-timer scores for the MBE component, essay component, and UBE overall were simulated using the built-in rnorm() function in R (which draws random samples from a normal distribution with a given mean and standard deviation).

Finally, after generating these distributions, percentiles were computed by calculating (a) what percentage of values on these distributions were lower than GPT’s scores (to estimate the percentile against first-timers); and (b) what percentage of values above the passing threshold were lower than GPT’s scores (to estimate the percentile against qualified attorneys).

With regard to the latter comparison, percentiles were computed after removing all UBE scores below 270, which is the most common score cutoff for states using the UBE (National Conference of Bar Examiners n.d.-a). To compute models’ performance on the individual components relative to qualified attorneys, a separate percentile was likewise computed after removing all subscores below 135. Footnote 16
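A condensed sketch of this final step in R is given below; the standard deviation shown is an illustrative placeholder for the value derived above, not an official figure.

```r
set.seed(1)

mu_ube    <- 287.6   # assumed mean first-timer UBE score
sigma_ube <- 32      # placeholder SD; in the analysis this is derived as described above
gpt4_ube  <- 298     # GPT-4's reported UBE score
cutoff    <- 270     # most common UBE passing threshold

# Simulate first-timer scores with rnorm(), then compute the two percentiles.
ube_scores <- rnorm(1e6, mean = mu_ube, sd = sigma_ube)

mean(ube_scores < gpt4_ube) * 100                          # vs. all first-timers
mean(ube_scores[ube_scores >= cutoff] < gpt4_ube) * 100    # vs. those who passed

# Under the normality assumption, the first percentile can also be obtained
# analytically, without simulation:
pnorm(gpt4_ube, mean = mu_ube, sd = sigma_ube) * 100
```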

3.2 Results

3.2.1 Performance against first-time test-takers

Results are visualized in Tables  1 and 2 . For each component of the UBE, as well as the UBE overall, GPT-4’s estimated percentile among first-time July test takers is lower than both the OpenAI estimate and the July estimate, each of which includes repeat takers.

With regard to the aggregate UBE score, GPT-4 scored in the 62nd percentile as compared to the \(\sim\) 90th percentile February estimate and the \(\sim\) 68th percentile July estimate. With regard to MBE, GPT-4 scored in the \(\sim\) 79th percentile as compared to the \(\sim\) 95th percentile February estimate and the 86th percentile July estimate. With regard to MEE + MPT, GPT-4 scored in the \(\sim\) 42nd percentile as compared to the \(\sim\) 69th percentile February estimate and the \(\sim\) 48th percentile July estimate.

With regard to GPT-3.5, its aggregate UBE score among first-timers was in the \(\sim\) 2nd percentile, as compared to the \(\sim\) 2nd percentile February estimate and \(\sim\) 1st percentile July estimate. Its MBE subscore was in the \(\sim\) 6th percentile, compared to the \(\sim\) 10th percentile February estimate and \(\sim\) 7th percentile July estimate. Its essay subscore was in the \(\sim\) 0th percentile, compared to the \(\sim\) 1st percentile February estimate and \(\sim\) 0th percentile July estimate.

3.2.2 Performance against qualified attorneys

Predictably, when limiting the sample to those who passed the bar, the models’ percentile dropped further.

With regard to the aggregate UBE score, GPT-4 scored in the \(\sim\) 45th percentile. With regard to MBE, GPT-4 scored in the \(\sim\) 69th percentile, whereas for the MEE + MPT, GPT-4 scored in the \(\sim\) 15th percentile.

With regard to GPT-3.5, its aggregate UBE score among qualified attorneys was 0th percentile, as were its percentiles for both subscores (Table 3 ).

4 Re-evaluating the raw score

So far, this analysis has taken for granted the scaled score achieved by GPT-4 as reported by OpenAI—that is, assuming GPT-4 scored a 298 on the UBE, is the 90th-percentile figure reported by OpenAI warranted?

However, given calls for replication and reproducibility within the practice of science more broadly (Cockburn et al. 2020 ; Echtler and Häußler 2018 ; Jensen et al. 2023 ; Schooler 2014 ; Shrout and Rodgers 2018 ), it is worth scrutinizing the validity of the score itself—that is, did GPT-4 in fact score a 298 on the UBE?

Moreover, given the various potential hyperparameter settings available when using GPT-4 and other LLMs, it is worth assessing whether and to what extent adjusting such settings might influence the capabilities of GPT-4 on exam performance.

To that end, this section first attempts to replicate the MBE score reported by OpenAI ( 2023a ) and Katz et al. ( 2023 ) using methods as close to the original paper as reasonably feasible.

The section then attempts to get a sense of the floor and ceiling of GPT-4’s out-of-the-box capabilities by comparing GPT-4’s MBE performance using the best and worst hyperparameter settings.

Finally, the section re-examines GPT-4’s performance on the essays, evaluating (a) the extent to which the methodology of grading GPT-4’s essays deviated from the official protocol used by the National Conference of Bar Examiners during actual bar exam administrations; and (b) the extent to which such deviations might undermine one’s confidence in the scaled essay scores reported by OpenAI ( 2023a ) and Katz et al. ( 2023 ).

4.1 Replicating the MBE score

4.1.1 Methodology

As in Katz et al. ( 2023 ), the materials used here were the official MBE questions released by the NCBE. The materials were purchased and downloaded in pdf format from an authorized NCBE reseller. Afterwards, the materials were converted into TXT format, and text analysis tools were used to format the questions in a way that was suitable for prompting, following Katz et al. ( 2023 ).

To replicate the MBE score reported by OpenAI ( 2023a ), this paper followed the protocol documented by Katz et al. ( 2023 ), with some minor additions for robustness purposes.

In Katz et al. ( 2023 ), the authors tested GPT-4’s MBE performance using three different temperature settings: 0, .5 and 1. For each of these temperature settings, GPT-4’s MBE performance was tested using two different prompts, including (1) a prompt where GPT was asked to provide a top-3 ranking of answer choices, along with a justification and authority/citation for its answer; and (2) a prompt where GPT-4 was asked to provide a top-3 ranking of answer choices, without providing a justification or authority/citation for its answer.

For each of these prompts, GPT-4 was also told that it should answer as if it were taking the bar exam.

For each of these prompts / temperature combinations, Katz et al. ( 2023 ) tested GPT-4 three different times (“experiments” or “trials”) to control for variation.

The minor additions to this protocol were twofold. First, GPT-4 was tested under two additional temperature settings: .25 and .7. This brought the total temperature / prompt combinations to 10 as opposed to 6 in the original paper.

Second, GPT-4 was tested 5 times under each temperature / prompt combination as opposed to 3 times, bringing the total number of trials to 50 as opposed to 18.

After prompting, raw scores were computed using the official answer key provided by the exam. Scaled scores were then computed following the method outlined in JD Advising (n.d.-a), by (a) multiplying the number of correct answers by 190, and dividing by 200; and (b) converting the resulting number to a scaled score using a conversion chart based on official NCBE data.
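A minimal sketch of this two-step conversion is shown below; the conversion table is a hypothetical stand-in for the chart based on official NCBE data, not the chart itself.

```r
# Step (a): rescale the raw number correct on a 200-question form to its
# 190-question equivalent.
n_correct      <- 151
raw_equivalent <- n_correct * 190 / 200

# Step (b): look up (here, interpolate) the scaled score from a conversion chart.
# The values below are illustrative placeholders only.
conversion_chart <- data.frame(
  raw    = c(130, 135, 140, 145, 150),
  scaled = c(148, 152, 156, 160, 164)
)
approx(conversion_chart$raw, conversion_chart$scaled, xout = raw_equivalent)$y
```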

After scoring, scores from the replication trials were analyzed in comparison to those from Katz et al. ( 2023 ) using the data from their publicly available github repository.

To assess whether there was a significant difference between GPT-4’s accuracy in the replication trials as compared to the Katz et al. ( 2023 ) paper, as well as to assess any significant effect of prompt type or temperature, a mixed-effects binary logistic regression was conducted with: (a) paper (replication vs original), temperature and prompt as fixed effects Footnote 17 ; and (b) question number and question category as random effects. These regressions were conducted using the lme4 (Bates et al. 2014 ) and lmertest (Kuznetsova et al. 2017 ) packages from R.
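For concreteness, the model specification might look roughly as follows in R, using the lme4 package cited in the text; the data frame and variable names are hypothetical.

```r
library(lme4)

# 'mbe_trials' is a hypothetical data frame with one row per question per trial:
#   correct (0/1), paper ("original"/"replication"), temp and prompt (factors),
#   qnum (question number), qcat (question category).
model <- glmer(
  correct ~ paper + temp + prompt + (1 | qnum) + (1 | qcat),
  data   = mbe_trials,
  family = binomial
)
summary(model)   # fixed-effect estimates and p values
```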

4.1.2 Results

Results are visualized in Table  4 . Mean MBE accuracy across all trials in the replication here was 75.6% (95% CI: 74.7 to 76.4), whereas the mean accuracy across all trials in Katz et al. ( 2023 ) was 75.7% (95% CI: 74.2 to 77.1). Footnote 18

The regression model did not reveal a main effect of “paper” on accuracy ( \(p=.883\) ), indicating that there was no significant difference between GPT-4’s raw accuracy as reported by Katz et al. ( 2023 ) and GPT-4’s raw accuracy as performed in the replication here.

There was also no main effect of temperature ( \(p>.1\) ) Footnote 19 or prompt ( \(p=.741\) ). That is, GPT-4’s raw accuracy was not significantly higher or lower at a given temperature setting or when fed a certain prompt as opposed to another (among the two prompts used in Katz et al. ( 2023 ) and the replication here) (Table 5 ).

4.2 Assessing the effect of hyperparameters

4.2.1 Methods

Although the above analysis found no effect of prompt on model performance, this could be due to a lack of variety of prompts used by Katz et al. ( 2023 ) in their original analysis.

To get a better sense of whether prompt engineering might have any effect on model performance, a follow-up experiment compared GPT-4’s performance in two novel conditions not tested in the original (Katz et al. 2023 ) paper.

In Condition 1 (“minimally tailored” condition), GPT-4 was tested using minimal prompting compared to Katz et al. ( 2023 ), both in terms of formatting and substance.

In particular, the message prompt in Katz et al. ( 2023 ) and the above replication followed OpenAI’s Best practices for prompt engineering with the API (Shieh 2023 ) through the use of (a) helpful markers (e.g. ‘```’) to separate instruction and context; (b) details regarding the desired output (i.e. specifying that the response should include ranked choices, as well as [in some cases] proper authority and citation); (c) an explicit template for the desired output (providing an example of the format in which GPT-4 should provide their response); and (d) perhaps most crucially, context regarding the type of question GPT-4 was answering (e.g. “please respond as if you are taking the bar exam”).

In contrast, in the minimally tailored prompting condition, the message prompt for a given question simply stated “Please answer the following question,” followed by the question and answer choices (a technique sometimes referred to as “basic prompting”: Choi et al., 2023 ). No additional context or formatting cues were provided.

In Condition 2 (“maximally tailored” condition), GPT-4 was tested using the highest performing prompt settings as revealed in the replication section above, with one addition: the system prompt, similar to the approaches used in Choi ( 2023 ) and Choi et al. ( 2023 ), was edited from its default (“you are a helpful assistant”) to a more tailored message that included multiple example MBE questions with sample answers and explanations structured in the desired format (a technique sometimes referred to as “few-shot prompting”: Choi et al. ( 2023 )).
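To make the contrast between the two conditions concrete, the chat-style messages might be structured roughly as follows in R; the question text and few-shot examples are placeholders rather than actual MBE content, and the exact wording used in the study may differ.

```r
question_text <- "<MBE question stem and answer choices (A)-(D) go here>"

# Condition 1: minimally tailored ("basic") prompting.
minimal_messages <- list(
  list(role = "system", content = "You are a helpful assistant."),
  list(role = "user",
       content = paste("Please answer the following question.", question_text))
)

# Condition 2: maximally tailored prompting, with a few-shot system prompt
# containing example questions, answers, and explanations in the desired format.
few_shot_examples <- paste(
  "Example question 1: ... Answer: (B). Explanation: ...",
  "Example question 2: ... Answer: (D). Explanation: ...",
  sep = "\n"
)
maximal_messages <- list(
  list(role = "system",
       content = paste("You are taking the Multistate Bar Examination.",
                       "Rank your top three answer choices and explain your reasoning,",
                       "following the format of these examples:", few_shot_examples)),
  list(role = "user", content = question_text)
)
```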

As in the replication section, 5 trials were conducted for each of the two conditions. Based on the lack of effect of temperature in the replication study, temperature was not a manipulated variable. Instead, both conditions featured the same temperature setting (.5).

To assess whether there was a significant difference between GPT-4’s accuracy in the maximally tailored vs minimally tailored conditions, a mixed-effects binary logistic regression was conducted with: (a) condition as a fixed effect; and (b) question number and question category as random effects. As above, these regressions were conducted using the lme4 (Bates et al. 2014 ) and lmertest (Kuznetsova et al. 2017 ) packages from R.

4.2.2 Results

Fig. 1 GPT-4’s MBE accuracy in minimally tailored vs. maximally tailored prompting conditions. Bars reflect mean accuracy; lines correspond to 95% bootstrapped confidence intervals

Mean MBE accuracy across all trials was descriptively higher in the maximally tailored condition, at 79.5% (95% CI: 77.1–82.1), than in the minimally tailored condition, at 70.9% (95% CI: 68.1–73.7).
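For reference, a percentile bootstrap of the kind used for these intervals can be sketched in a few lines of R; the outcome vector below is simulated for illustration rather than drawn from the actual trial data.

```r
set.seed(42)

# Simulated stand-in for the pooled per-item 0/1 outcomes in one condition.
correct <- rbinom(1000, 1, 0.795)

# Resample with replacement and take the mean accuracy of each resample.
boot_means <- replicate(10000, mean(sample(correct, replace = TRUE)))

# 95% percentile bootstrap confidence interval for mean accuracy.
quantile(boot_means, c(0.025, 0.975))
```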

The regression model revealed a main effect of condition on accuracy ( \(\beta =1.395\) , \(\textrm{SE} =.192\) , \(p<.0001\) ), such that GPT-4’s accuracy in the maximally tailored condition was significantly higher than its accuracy in the minimally tailored condition.

In terms of scaled score, GPT-4’s MBE score in the minimally tailored condition would be approximately 150, which would place it: (a) in the 70th percentile among July test takers; (b) 64th percentile among first-timers; and (c) 48th percentile among those who passed.

GPT-4’s score in the maximally tailored condition would be approximately 164 (6 points higher than that reported by Katz et al. ( 2023 ) and OpenAI ( 2023a )). This would place it: (a) in the 95th percentile among July test takers; (b) 87th percentile among first-timers; and (c) 82nd percentile among those who passed.

4.3 Re-examining the essay scores

As confirmed in the above subsection, the scaled MBE score (not percentile) reported by OpenAI was accurately computed using the methods documented in Katz et al. ( 2023 ).

With regard to the essays (MPT + MEE), however, the method described by the authors significantly deviates in at least three aspects from the official method used by UBE states, to the point where one may not be confident that the essay scores reported by the authors reflect GPT models’ “true” essay scores (i.e., the score that essay examiners would have assigned to GPT had they been blindly scored using official grading protocol).

The first aspect relates to the (lack of) use of a formal rubric. For example, unlike NCBE protocol, which provides graders with (a) (in the case of the MEE) detailed “grading guidelines” for how to assign grades to essays and distinguish answers for a given MEE; and (b) (for both MEE and MPT) a specific “drafters’ point sheet” for each essay that includes detailed guidance from the drafting committee with a discussion of the issues raised and the intended analysis (Olson 2019 ), Katz et al. ( 2023 ) do not report using an official or unofficial rubric of any kind, and instead simply describe comparing GPT-4’s answers to representative “good” answers from the state of Maryland.

Utilizing these answers as the basis for grading GPT-4’s answers in lieu of a formal rubric would seem to be particularly problematic considering it is unclear even what score these representative “good” answers received. As clarified by the Maryland bar examiners: “The Representative Good Answers are not ‘average’ passing answers nor are they necessarily ‘perfect’ answers. Instead, they are responses which, in the Board’s view, illustrate successful answers written by applicants who passed the UBE in Maryland for this session” (Maryland State Board of Law Examiners 2022 ).

Given that (a) it is unclear what score these representative good answers received; and (b) these answers appear to be the basis for determining the score that GPT-4’s essays received, it would seem to follow that (c) it is likewise unclear what score GPT-4’s answers should receive. Consequently, it would likewise follow that any reported scaled score or percentile would seem to be insufficiently justified so as to serve as a basis for a conclusive statement regarding GPT-4’s relative performance on essays as compared to humans (e.g. a reported percentile).

The second aspect relates to the lack of NCBE training of the graders of the essays. Official NCBE essay grading protocol mandates the use of trained bar exam graders, who in addition to using a specific rubric for each question undergo a standardized training process prior to grading (Gunderson 2015 ; Case 2010 ). In contrast, the graders in Katz et al. ( 2023 ) (a subset of the authors who were trained lawyers) do not report expertise or training in bar exam grading. Thus, although the graders of the essays were no doubt experts in legal reasoning more broadly, it seems unlikely that they would have been sufficiently versed in the specific grading protocols of the MEE + MPT to be able to reliably infer or apply the specific grading rubric when assigning raw scores to GPT-4.

The third aspect relates to both blinding and what bar examiners refer to as “calibration,” as UBE jurisdictions use an extensive procedure to ensure that graders are grading essays in a consistent manner (both with regard to other essays and in comparison to other graders) (Case 2010 ; Gunderson 2015 ). In particular, all graders of a particular jurisdiction first blindly grade a set of 30 “calibration” essays of variable quality (first rank order, then absolute scores) and make sure that consistent scores are being assigned by different graders, and that the same score (e.g. 5 of 6) is being assigned to exams of similar quality (Case 2010 ).

Unlike this approach, as well as efforts to assess GPT models’ law school performance (Choi et al. 2021 ), the method reported by Katz et al. ( 2023 ) did not initially involve blinding. The method in Katz et al. ( 2023 ) did involve a form of inter-grader calibration, as the authors gave “blinded samples” to independent lawyers to grade the exams, with the assigned scores “match[ing] or exceed[ing]” those assigned by the authors. Given the lack of reporting to the contrary, however, the method used by these graders would presumably be subject to the same issues as highlighted above (no rubric, no formal training in bar exam grading, no formal intra-grader calibration).

Given the above issues, as well as the fact that, as alluded to in the introduction, GPT-4’s performance boost over GPT-3 on other essay-based exams was far lower than that on the bar exam, it seems warranted to infer not only that GPT-4’s relative performance (in terms of percentile among human test-takers) was lower than that reported by OpenAI, but also that GPT-4’s reported scaled score on the essays may have deviated to some degree from its “true” essay score (which, if true, would imply that GPT-4’s “true” percentile on the bar exam may be even lower than that estimated in previous sections).

Indeed, Katz et al. ( 2023 ) to some degree acknowledge all of these limitations in their paper, writing: “While we recognize there is inherent variability in any qualitative assessment, our reliance on the state bars’ representative “good” answers and the multiple reviewers reduces the likelihood that our assessment is incorrect enough to alter the ultimate conclusion of passage in this paper”.

Given that GPT-4’s reported score of 298 is 28 points higher than the passing threshold (270) in the majority of UBE jurisdictions, it is true that the essay scores would have to have been wildly inaccurate in order to undermine the general conclusion of Katz et al. ( 2023 ) (i.e., that GPT-4 “passed the [uniform] bar exam”). However, even supposing that GPT-4’s “true” percentile on the essay portion was just a few points lower than that reported by OpenAI, this would further call into question OpenAI’s claims regarding the relative performance of GPT-4 on the UBE relative to human test-takers. For example, supposing that GPT-4 scored 9 points lower on the essays, this would drop its estimated relative performance to (a) 31st percentile compared to July test-takers; (b) 24th percentile relative to first-time test takers; and (c) less than 5th percentile compared to licensed attorneys.
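To see how such an adjustment would propagate under the normality assumptions described above, a rough sketch follows; the standard deviation shown is an illustrative placeholder rather than the value derived from NCBE data.

```r
mu_essay    <- 143.8   # assumed mean first-timer essay (MEE + MPT) score
sigma_essay <- 15      # placeholder SD for illustration only

pnorm(140, mu_essay, sigma_essay) * 100        # percentile at the reported essay score
pnorm(140 - 9, mu_essay, sigma_essay) * 100    # percentile if the score were 9 points lower
```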

5 Discussion

This paper first investigated the issue of OpenAI’s claim of GPT-4’s 90th percentile UBE performance, resulting in four main findings. The first finding is that although GPT-4’s UBE score approaches the 90th percentile when examining approximate conversions from February administrations of the Illinois Bar Exam, these estimates are heavily skewed towards low scorers, as the majority of test-takers in February failed the July administration and tend to score much lower than the general test-taking population. The second finding is that using July data from the same source would result in an estimate of \(\sim\) 68th percentile, including below average performance on the essay portion. The third finding is that comparing GPT-4’s performance against first-time test takers would result in an estimate of \(\sim\) 62nd percentile, including \(\sim\) 42nd percentile on the essay portion. The fourth main finding is that when examining only those who passed the exam, GPT-4’s performance is estimated to drop to \(\sim\) 48th percentile overall, and \(\sim\) 15th percentile on essays.

In addition to these four main findings, the paper also investigated the validity of GPT-4’s reported UBE score of 298. Although the paper successfully replicated the MBE score of 158, the paper also highlighted several methodological issues in the grading of the MPT + MEE components of the exam, which call into question the validity of the essay score (140).

Finally, the paper also investigated the effect of adjusting temperature settings and prompting techniques on GPT-4’s MBE performance, finding no significant effect of adjusting temperature settings on performance, and some effect of prompt engineering when compared to a basic prompting baseline condition.

Of course, assessing the capabilities of an AI system as compared to those of a practicing lawyer is no easy task. Scholars have identified several theoretical and practical difficulties in creating accurate measurement scales to assess AI capabilities and have pointed out various issues with some of the current scales (Hernandez-Orallo 2020 ; Burden and Hernández-Orallo 2020 ; Raji et al. 2021 ). Relatedly, some have pointed out that simply observing that GPT-4 under- or over-performs at a task in some setting is not necessarily reliable evidence that it (or some other LLM) is capable or incapable of performing that task in general (Bowman 2022 , 2023 ; Kojima et al. 2022 ).

In the context of the legal profession specifically, there are various reasons to doubt the usefulness of UBE percentile as a proxy for lawyerly competence (both for humans and AI systems), given that, for example: (a) the content on the UBE is very general and does not pertain to the legal doctrine of any jurisdiction in the United States (National Conference of Bar Examiners n.d.-g), and thus knowledge (or ignorance) of that content does not necessarily translate to knowledge (or ignorance) of relevant legal doctrine for a practicing lawyer of any jurisdiction; (b) the tasks involved on the bar exam, particularly multiple-choice questions, do not reflect the tasks of practicing lawyers, and thus mastery (or lack of mastery) of those tasks does not necessarily reflect mastery (or lack of mastery) of the tasks of practicing lawyers; and (c) given the lack of direct professional incentive to obtain higher than a passing score (typically no higher than 270) (National Conference of Bar Examiners n.d.-a), obtaining a particularly high score or percentile past this threshold is less meaningful than for other exams (e.g. the LSAT), where higher scores are taken into account for admission into select institutions (US News and World Report 2022 ).

Setting these objections aside, however, to the extent that one believes the UBE to be a valid proxy for lawyerly competence, these results suggest GPT-4 to be substantially less lawyerly competent than previously assumed, as GPT-4’s score against likely attorneys (i.e. those who actually passed the bar) is \(\sim\) 48th percentile. Moreover, when looking just at the essays, which more closely resemble the tasks of practicing lawyers and thus more plausibly reflect lawyerly competence, GPT-4’s performance falls in the bottom \(\sim\) 15th percentile. These findings align with recent work finding that GPT-4 performed below average on law school exams (Blair-Stanek et al. 2023 ).

The lack of precision and transparency in OpenAI’s reporting of GPT-4’s UBE performance has implications for both the current state of the legal profession and the future of AI safety. On the legal side, there appear to be at least two sets of implications. On the one hand, to the extent that lawyers put stock in the bar exam as a proxy for general legal competence, the results might give practicing lawyers at least a mild temporary sense of relief regarding the security of the profession, given that the majority of lawyers perform better than GPT on the component of the exam (essay-writing) that seems to best reflect their day-to-day activities (and by extension, the tasks that would likely need to be automated in order to supplant lawyers in their day-to-day professional capacity).

On the other hand, the fact that GPT-4’s reported “90th percentile” capabilities were so widely publicized might pose some concerns that lawyers and non-lawyers may use GPT-4 for complex legal tasks that it is incapable of performing adequately, plausibly increasing the rate of (a) misapplication of the law by judges; (b) professional malpractice by lawyers; and (c) ineffective pro se representation and/or unauthorized practice of law by non-lawyers. From a legal education standpoint, law students who overestimate GPT-4’s UBE capabilities might also develop an unwarranted sense of apathy towards developing critical legal-analytical skills, particularly if under the impression that GPT-4’s mastery of those skills already surpasses the level a typical law student could be expected to reach.

On the AI front, these findings raise concerns both for the transparency Footnote 20 of capabilities research and the safety of AI development more generally. In particular, to the extent that one considers transparency to be an important prerequisite for safety (Brundage et al. 2020 ), these findings underscore the importance of implementing rigorous transparency measures so as to reliably identify potential warning signs of transformative progress in artificial intelligence as opposed to creating a false sense of alarm or security (Zoe et al. 2021 ). Implementing such measures could help ensure that AI development, as stated in OpenAI’s charter, is a “value-aligned, safety-conscious project” as opposed to becoming “a competitive race without time for adequate safety precautions” (OpenAI 2018 ).

Of course, the present study does not discount the progress that AI has made in the context of legally relevant tasks; after all, the improvement in UBE performance from GPT-3.5 to GPT-4 as estimated in this study remains impressive (arguably equally or even more so given that GPT-3.5’s performance is also estimated to be significantly lower than previously assumed), even if not as flashy as the 10th–90th percentile boost of OpenAI’s official estimation. Nor does the present study discount the seemingly inevitable future improvement of AI systems to levels far beyond their present capabilities, or, as phrased in GPT-4 Passes the Bar Exam , that the present capabilities “highlight the floor, not the ceiling, of future application” (Katz et al. 2023 , 11).

To the contrary, given the inevitable rapid growth of AI systems, the results of the present study underscore the importance of implementing rigorous and transparent evaluation measures to ensure that both the general public and relevant decision-makers are made appropriately aware of the system’s capabilities, and to prevent these systems from being used in an unintentionally harmful or catastrophic manner. The results also indicate that law schools and the legal profession should prioritize instruction in areas such as law and technology and law and AI, which, despite their importance, are currently not viewed as descriptively or normatively central to the legal academy (Martínez and Tobia 2023 ).

Note that the Uniform Bar Exam (UBE) has multiple components, including: (a) the Multistate Bar Exam (MBE), a 6 h, 200-question multiple-choice test (National Conference of Bar Examiners n.d.-c, d); (b) the Multistate Essay Exam (MEE), a 3 h, six-part essay exam (National Conference of Bar Examiners n.d.-e); and (c) the Multistate Performance Test (MPT), a 3 h, two-part “closed universe” essay exam (National Conference of Bar Examiners n.d.-f). The exam is graded on a scale of 400. The MBE and the essays (MEE + MPT) are each graded on a scale of 200 (National Conference of Bar Examiners n.d.-g). Thus, essays and multiple choice are each worth half of an examinee’s score.

Note that transparency here is not to be confused with the interpretability or explainability of AI systems themselves, as is often used in the AI safety literature. For a discussion of the term as used more along the lines of these senses, see (Bostrom and Yudkowsky 2018 , p. 2) (arguing that making an AI system “transparent to inspection” by the programmer is one of “many socially important properties”).

For example, near the top of the GPT-4 product page is displayed a reference to GPT-4’s 90th percentile Uniform Bar Exam performance as an illustrative example of how “GPT-4 outperforms ChatGPT by scoring in higher approximate percentiles among test-takers” (OpenAI n.d.).

As with the official website, the technical report (page 6) claims that GPT-4 “passes a simulated version of the Uniform Bar Examination with a score in the top 10% of test takers” (OpenAI 2023b ). This attested result is presented visually in Table  1 and Fig. 1 .

As the website JD Advising points out: “The National Conference of Bar Examiners (NCBE), the organization that writes the Uniform Bar Exam (UBE) does not release UBE percentiles” (JD Advising n.d.-b). Instead, the NCBE and state bar examiners tend to include in their press releases much more general and limited information, such as mean MBE scores and the percentage of test-takers who passed the exam in a given administration (Examiner n.d.-c; National Conference of Bar Examiners n.d.-c; The New York State Board of Law Examiners n.d.)

Note that starting in July 2019, Illinois began administering the Uniform Bar Exam (University of Illinois Chicago n.d.) and accordingly stopped releasing official percentile charts. Thus, the generally available Illinois percentile charts are based on pre-UBE Illinois bar exam data.

In addition to the Illinois conversion chart, some sources often make claims about percentiles of certain scores without clarifying the source of those claims. See, for example (Lang 2023 ). There are also several generally available unofficial online calculators, which either calculate an estimated percentile of an MBE score based on official NCBE data (UBEEssays.com 2019 ), or make other non-percentile-related calculations, such as estimated scaled score (Rules.com n.d.)

For example, according to (National Conference of Bar Examiners n.d.-b), the pass rate in Illinois for the February 2023 administration was 43%, compared to 68% for the July administration.

According to (Examiner n.d.-a), for the 2021 February administration in Illinois, 284 takers were first-time takers, as compared to 426 repeaters.

For example, for the July administration, the 50th-percentile UBE-converted score was approximately 282 (Illinois Board of Admissions to the Bar 2018 ), whereas for the February exam, the 50th-percentile UBE-converted score was approximately 264 (Illinois Board of Admissions to the Bar 2019 )

For example, according to (National Conference of Bar Examiners n.d.-b), the pass rate among first-timers in the February 2023 administration in Illinois was 62%, compared to 35% for repeat takers.

One notable exception was made in 2020 due to COVID, for example, as the Supreme Court of the state of Washington granted a “diploma privilege” which allowed recent law graduates “to be admitted to the Washington State Bar Association and practice law in the state without taking the bar exam.”: (Washington State Bar Association 2020 )

A normal distribution of scores was assumed, given that (a) standardized tests are normalized and aim for a normal distribution (Kubiszyn and Borich 2016 ), (b) UBE is a standardized test, and (c) official visual estimates of MBE scores, both for February and July, appear to follow an approximately normal distribution. (The National Bar Examiner n.d.)

If anything, this assumption would lead to a conservative (that is, generous) estimate of GPT-4’s percentile, since percentiles for a given essay score tend to be slightly lower than those for a given MBE score. For example, according to the conversion chart of the Illinois bar exam for the July administration, a score of 145 on the MBE was estimated to be at the 61st percentile, while the same score on the essay component was estimated to be at the 59th percentile (Illinois Board of Admissions to the Bar 2018 )

Note that in a previous version of the paper, the standard deviation of overall UBE scores was instead computed using the estimated standard deviation of Illinois Bar exam data (estimated by feeding the values and percentiles of the July Illinois Bar exam data into an optimization routine, using the optim() function from R’s “stats” package). This analysis was supplanted by the current method due to the latter having fewer and more plausible statistical assumptions, though both versions of the analysis yield converging results. For robustness purposes, the results of the old version can be found and replicated using the code available in the OSF repository.
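For reference, a minimal sketch of that earlier optimization approach is shown below; the chart values are illustrative placeholders rather than the actual Illinois figures.

```r
# Hypothetical (score, cumulative percentile) pairs standing in for the
# July Illinois percentile chart.
chart <- data.frame(
  score      = c(240, 260, 270, 282, 300, 320),
  percentile = c(0.10, 0.28, 0.40, 0.50, 0.74, 0.93)
)

# Squared error between the chart percentiles and those implied by a normal
# distribution with candidate mean and standard deviation.
loss <- function(par) {
  mu <- par[1]; sigma <- par[2]
  sum((pnorm(chart$score, mean = mu, sd = sigma) - chart$percentile)^2)
}

# Fit the two parameters with optim() from R's "stats" package.
optim(par = c(280, 30), fn = loss)$par
```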

Note that this assumes that all those who “failed” a subsection failed the bar overall. Since scores on the two portions of the exam are likely to be highly, though not perfectly, correlated, this assumption is implausible. However, its percentile predictions would still hold true, on average, for the two subsections—that is, to the extent that it leads to a slight underestimate of the percentile on one subsection it would lead to a commensurate overestimate on the other.

All fixed effect predictors were coded as factors, with treatment coding.

As a sanity check, note that the mean accuracy originally reported by Katz et al. ( 2023 ) was also 75.7%, indicating that there were no errors here in reading the original data or computing the mean.

Note that because temperature was coded as a factor (categorical variable) as opposed to numeric (continuous variable), there were multiple \(\beta\) coefficients and p values (one for each level, not including the reference level). The p values for all levels were higher than .1.

As noted above, “transparency” here is not to be confused with the interpretability or explainability of the AI system, as is often used in the AI safety literature.

Albanese MA (2014) The testing column: scaling: it’s not just for fish or mountains. Bar Exam 83(4):50–56


Bates D, Mächler M, Bolker B, Walker S (2014) Fitting linear mixed-effects models using lme4. arXiv preprint arXiv:1406.5823

Blair-Stanek A, Carstens A-M, Goldberg DS, Graber M, Gray DC, Stearns ML (2023) GPT-4’s law school grades: Crim C-, Law & Econ C, Partnership Tax B, Property B-, Tax B

Bommarito MJ II, Katz DM (2017) Measuring and modeling the us regulatory ecosystem. J Stat Phys 168:1125–1135


Bostrom N, Yudkowsky E (2018) The ethics of artificial intelligence. Artificial intelligence safety and security. Chapman and Hall/CRC, New York, pp 57–69

Bowman S (2022) The dangers of underclaiming: Reasons for caution when reporting how NLP systems fail. In: Proceedings of the 60th annual meeting of the association for computational linguistics (vol 1: Long papers) pp 7484–7499

Bowman SR (2023) Eight things to know about large language models. arXiv preprint arXiv:2304.00612

Brundage M, Avin S, Wang J, Belfield H, Krueger G, Hadfield G, et al (2020) Toward trustworthy AI development: mechanisms for supporting verifiable claims. arXiv preprint arXiv:2004.07213

Burden J, Hernández-Orallo J (2020) Exploring AI safety in degrees: generality, capability and control. In: Proceedings of the workshop on artificial intelligence safety (safeai 2020) co-located with 34th AAAI conference on artificial intelligence (AAAI 2020). pp 36–40

Carlsmith J (2022) Is power-seeking AI an existential risk? arXiv preprint arXiv:2206.13353

Caron P (2023) GPT-4 Beats 90% of aspiring lawyers on the bar exam. TaxProf Blog. https://taxprof.typepad.com/taxprof_blog/2023/03/gpt-4-beats-90-of-aspiring-lawyers-on-the-bar-exam.html . Accessed on 24 Apr 2023

Case SM (2010) Procedure for grading essays and performance tests. The Bar Examiner. https://thebarexaminer.ncbex.org/wp-content/uploads/PDFs/790410_TestingColumn.pdf

Choi JH (2023) How to use large language models for empirical legal research. J Instit Theor Econ (Forthcoming)

Choi JH, Monahan A, Schwarcz D (2023) Lawyering in the age of artificial intelligence. Available at SSRN 4626276

Choi JH, Hickman KE, Monahan AB, Schwarcz D (2021) Chatgpt goes to law school. J Legal Educ 71:387

Cockburn A, Dragicevic P, Besançon L, Gutwin C (2020) Threats of a replication crisis in empirical computer science. Commun ACM 63(8):70–79

Crootof R, Kaminski ME, Price II WN (2023) Humans in the loop. Vanderbilt Law Review, (Forthcoming)

Echtler F, Häußler M (2018) Open source, open science, and the replication crisis in HCI. Extended abstracts of the 2018 chi conference on human factors in computing systems. pp 1–8

Examiner TB (n.d.-a) First-time exam takers and repeaters in 2021. The Bar Examiner. https://thebarexaminer.ncbex.org/2021-statistics/first-time-exam-takers-and-repeaters-in-2021/ . Accessed on 24 Apr 2023

Examiner TB (n.d.-b) Statistics. The Bar Examiner. https://thebarexaminer.ncbex.org/statistics/ . Accessed on 24 Apr 2023

Gunderson JA (2015) The testing column: essay grading fundamentals. Bar Exam 84(1):54–56

Hernandez-Orallo J (2020) AI evaluation: on broken yardsticks and measurement scales. In: Workshop on evaluating evaluation of AI systems at AAAI

Illinois Board of Admissions to the Bar. (2018) https://www.ilbaradmissions.org/percentile-equivalent-charts-july-2018 . Accessed on 24 Apr 2023

Illinois Board of Admissions to the Bar. (2019) https://www.ilbaradmissions.org/percentile-equivalent-charts-february-2019 . Accessed on 24 Apr 2023

JD Advising (n.d.-a) MBE raw score conversion chart. https://jdadvising.com/mbe-raw-score-conversion-chart/ . Accessed on 01 Jan 2024

JD Advising (n.d.-b) July 2018 UBE percentiles chart. https://jdadvising.com/july-2018-ube-percentiles-chart/ . Accessed on 24 Apr 2023

Jensen TI, Kelly B, Pedersen LH (2023) Is there a replication crisis in finance? J Finance 78(5):2465–2518

Katz DM, Bommarito MJ, Gao S, Arredondo P (2023) GPT-4 passes the bar exam. Available at SSRN 4389233

Katz DM, Bommarito MJ (2014) Measuring the complexity of the law: the United States code. Artif Intell Law 22:337–374

Koetsier J (2023) GPT-4 Beats 90% of Lawyers Trying to Pass the Bar. Forbes. https://www.forbes.com/sites/johnkoetsier/2023/03/14/gpt-4-beats-90-of-lawyers-trying-to-pass-the-bar/?sh=b40c88d30279

Kojima T, Gu SS, Reid M, Matsuo Y, Iwasawa Y (2022) Large language models are zero-shot reasoners. arXiv preprint arXiv:2205.11916

Kubiszyn T, Borich GD (2016) Educational testing and measurement. John Wiley & Sons, Hoboken

Kuznetsova A, Brockhoff PB, Christensen RHB (2017) lmerTest package: tests in linear mixed effects models. J Stat Softw 82(13):1–26

Lang C (2023) What is a good bar exam score? Test Prep Insight. https://www.testprepinsight.com/what-is-a-good-bar-exam-score

Li B, Qi P, Liu B, Di S, Liu J, Pei J, Zhou B (2023) Trustworthy AI: From principles to practices. ACM Comput Surv 55(9):1–46

Markou C, Deakin S (2020) Is law computable? From rule of law to legal singularity. From Rule of Law to Legal Singularity. University of Cambridge Faculty of Law Research Paper

Martínez E, Tobia K (2023) What do law professors believe about law and the legal academy? Geo LJ 112:111

Martinez E, Mollica F, Gibson E (2022) Poor writing, not specialized concepts, drives processing difficulty in legal language. Cognition 224:105070

Martinez E, Mollica F, Gibson E (2022b) So much for plain language: An analysis of the accessibility of united states federal laws (1951–2009). In: Proceedings of the annual meeting of the cognitive science society, vol 44

Martinez E, Mollica F, Gibson E (in press) Even lawyers don’t like legalese. In: Proceedings of the national academy of sciences

Maryland State Board of Law Examiners (2022) July 2022 uniform bar examination (UBE) in maryland—representative good answers. https://mdcourts.gov/sites/default/files/import/ble/examanswers/2022/202207uberepgoodanswers.pdf

National Conference of Bar Examiners (2023) Bar exam results by jurisdiction. https://www.ncbex.org/statistics-research/bar-exam-results-jurisdiction . Accessed on 01 Jan 2024

National Conference of Bar Examiners (n.d.-a) https://www.ncbex.org/exams/ube/scores/ . Accessed on 03 May 2023

National Conference of Bar Examiners (n.d.-b) https://www.ncbex.org/exams/ube/score-portability/minimum-scores/ . Accessed on 24 Apr 2023

National Conference of Bar Examiners (n.d.-c) Bar Exam Results by Jurisdiction. National Conference of Bar Examiners. https://www.ncbex.org/statistics-and-research/bar-exam-results/ . Accessed on 24 Apr 2023

National Conference of Bar Examiners (n.d.-d) Multistate bar exam. https://www.ncbex.org/exams/mbe . Accessed on 01 Jan 2024

National Conference of Bar Examiners (n.d.-e) Multistate essay exam. https://www.ncbex.org/exams/mee . Accessed on 01 Jan 2024

National Conference of Bar Examiners (n.d.-f) Multistate performance test. https://www.ncbex.org/exams/mpt . Accessed on 01 Jan 2024

National Conference of Bar Examiners (n.d.-g) Uniform bar exam. Accessed on 01 Jan 2024

National Conference of Bar Examiners (n.d.-h) Uniform Bar Examination. National Conference of Bar Examiners. https://www.ncbex.org/exams/ube/ . Accessed on 24 Apr 2023

Ngo R (2022) The alignment problem from a deep learning perspective. arXiv preprint arXiv:2209.00626

Olson S (2019) 13 best practices for grading essays and performance tests. Bar Exam 88(4):8–14

OpenAI (2018) OpenAI Charter. https://openai.com/charter

OpenAI (2023) GPT 4. https://openai.com/research/gpt-4 . Accessed on 24 Apr 2023

OpenAI (2023) GPT-4 Technical Report. arXiv:2303.08774 . (Preprint submitted to arXiv)

OpenAI (n.d.) GPT-4 is OpenAI’s most advanced system, producing safer and more useful responses. https://openai.com/product/gpt-4 . Accessed on 24 Apr 2023

Patrice J (2023) New GPT-4 Passes All Sections Of The Uniform Bar Exam. Maybe This Will Finally Kill The Bar Exam. Above the Law. https://abovethelaw.com/2023/03/new-gpt-4-passes-all-sections-of-the-uniform-bar-exam-maybe-this-will-finally-kill-the-bar-exam/

Raji ID, Bender EM, Paullada A, Denton E, Hanna A (2021) Ai and the everything in the whole wide world benchmark. arXiv preprint arXiv:2111.15366

Ray T (2023) With GPT-4, OpenAI opts for secrecy versus disclosure. ZDNet. https://www.zdnet.com/article/with-gpt-4-openai-opts-for-secrecy-versus-disclosure/

Reshetar R (2022) The testing column: Why are February bar exam pass rates lower than July pass rates? Bar Exam 91(1):51–53

Ruhl J, Katz DM, Bommarito MJ (2017) Harnessing legal complexity. Science 355(6332):1377–1378

Rules.com M (n.d.) Bar Exam Calculators. https://mberules.com/bar-exam-calculators/ . Accessed on 02 May 2023

Schooler JW (2014) Metascience could rescue the replication crisis. Nature 515(7525):9

Schwarcz D, Choi JH (2023) Ai tools for lawyers: a practical guide. Available at SSRN

Shieh J (2023) Best practices for prompt engineering with openai api. https://help.openai.com/en/articles/6654000-best-practices-for-prompt-engineering-with-openai-api . OpenAI. Accessed on 01 Jan 2024

Shrout PE, Rodgers JL (2018) Psychology, science, and knowledge construction: broadening perspectives from the replication crisis. Ann Rev Psychol 69:487–510

Stokel-Walker C (2023) Critics denounce a lack of transparency around GPT-4’s tech. Fast Company. https://www.fastcompany.com/90866190/critics-denounce-a-lack-of-transparency-around-gpt-4s-tech

The National Bar Examiner (n.d.) https://thebarexaminer.ncbex.org/2022-statistics/the-multistate-bar-examination-mbe/#step3 . Accessed on 24 Apr 2023

The New York State Board of Law Examiners (n.d.) NYS Bar Exam Statistics. The New York State Board of Law Examiners. https://www.nybarexam.org/examstats/estats.htm

UBEEssays.com. (2019) https://ubeessays.com/feb-mbe-percentiles/

University of Illinois Chicago (n.d.) https://law.uic.edu/student-support/academic-achievement/bar-exam-information/illinois-bar-exam/ . Accessed on 24 Apr 2023

US News and World Report (2022) https://www.usnews.com/best-graduate-schools/top-law-schools/law-rankings

Washington State Bar Association (2020) https://wsba.org/news-events/latest-news/news-detail/2020/06/15/state-supreme-court-grants-diploma-privilege . Accessed on 24 Apr 2023

Weiss DC (2023) Latest version of ChatGPT aces bar exam with score nearing 90th percentile. ABA Journal. https://www.abajournal.com/web/article/latest-version-of-chatgpt-aces-the-bar-exam-with-score-in-90th-percentile . Accessed on 24 Apr 2023

Wilkins S (2023) How GPT-4 mastered the entire bar exam, and why that matters. Law.com. https://www.law.com/legaltechnews/2023/03/17/how-gpt-4-mastered-the-entire-bar-exam-and-why-that-matters/?slreturn=20230324023302 . Accessed on 24 Apr 2023

Winter CK (2022) The challenges of artificial judicial decision-making for liberal democracy. Judicial decision-making: Integrating empirical and theoretical perspectives. Springer, Berlin, pp 179–204

Winter C, Hollman N, Manheim D (2023) Value alignment for advanced artificial judicial intelligence. Am Philos Quart 60(2):187–203

Zoe Cremer C, Whittlestone J (2021) Artificial canaries: early warning signs for anticipatory and democratic governance of AI


Acknowledgements

Acknowledgements omitted for anonymous review.

Open Access funding provided by the MIT Libraries.

Author information

Authors and affiliations

Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology (MIT), Cambridge, MA, 02138, USA

Eric Martínez


Corresponding author

Correspondence to Eric Martínez .

Ethics declarations

Conflict of interest

The author declares no financial or non-financial interests that are directly or indirectly related to the work submitted for publication.

Additional information

Note that all code for this paper is available at the following repository link: https://osf.io/c8ygu/?view_only=dcc617accc464491922b77414867a066 .

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Martínez, E. Re-evaluating GPT-4’s bar exam performance. Artif Intell Law (2024). https://doi.org/10.1007/s10506-024-09396-9


Accepted : 30 January 2024

Published : 30 March 2024

DOI : https://doi.org/10.1007/s10506-024-09396-9


  • Legal analytics
  • Natural language processing
  • Machine learning
  • Artificial intelligence
  • Artificial intelligence and law
  • Law and technology
  • Legal profession


New lower passing rate for Washington’s bar exam opens possibilities, but also leads to questions

Mariah Welch made it through a stressful few months of finishing law school and taking the bar exam.

She decided to celebrate with a trip to Italy, where her boyfriend surprised her and proposed. The two were looking toward their future when Welch got devastating news: she didn’t pass the bar exam.

Five months of anxiety, uncertainty and financial woes ensued, Welch said, before she found a job in Montana, where the score needed to pass the bar was four points lower than in Washington.

Last week, Welch found out all that struggle was in vain when the Washington State Supreme Court lowered the score needed to pass the exam from 270 to 266.

“It was such a surreal moment,” Welch said.

Welch was one of 32 applicants who spent months thinking they failed the exam only to find out they could get licensed following the score reduction, according to the Washington State Bar Association.

Nineteen states consider a score of 266 or above passing, according to the National Conference of Bar Examiners . A score of 270 is the minimum requirement to pass in 19 states. There are 11 states that have their own bar exams.

During the COVID-19 pandemic, Washington temporarily accepted a score of 266. The requirement returned to 270 at the start of 2023.

The new minimum score was one of a handful of major changes to lawyer licensure announced last week.

Alternative pathways

Along with changes to the bar exam, the state Supreme Court announced the creation of three alternative pathways to become a licensed attorney without taking the exam.

All three, while slightly different, center around 500 hours of legal work under the supervision of a licensed attorney and a portfolio showing proficiency.

The details of the pathways and their implementation remain up in the air, the bar association said. The Spokane County Bar Association declined to answer questions about how the changes would affect Spokane attorneys.

Jeffry Finer, a longtime civil rights attorney in Spokane and instructor at Gonzaga University School of Law, said he thinks opening up the profession to a more diverse group of people is great, but the alternative pathways are just one part of that process.

He doesn’t expect the new processes to be a breeze. A lot of law students drop out or need multiple attempts to pass the bar exam, and Finer expects similar difficulties in the alternative programs.

The alternative pathways could be good for people who plan to focus on a niche area of law, he said.

“There’s not a lack of lawyers, there’s a lack of people doing specific types of work,” Finer said.

There is a dire need for public defenders, prosecutors and people practicing family law, he said.

Those are all areas of the law that often require courtroom work, which Finer said is also diminishing.

The legal profession is already known for “a kind of snobbery” that looks down on certain types of legal work, even for attorneys who have passed the bar exam, Finer said.

Ultimately, the most important thing is an attorney’s understanding of the law and willingness to work hard to learn the practicalities of the profession, he said, which often aren’t taught in law school.

“It is really difficult to situate yourself in the culture of legal thinking,” Finer said. “It’s more like learning a language, where there is an entire history and culture that comes with it.”

The bar association plans to work closely with the court, Washington law schools and other stakeholders to propose specific amendments and new rules for admittance to the bar, Sara Neigowski, chief communications and outreach officer for the bar association, said in an email.

The Bar Licensure Task Force already took feedback and comments from the legal community, Neigowski noted, before making recommendations to the Supreme Court.

The task force received about 70 comments on the proposed pathways. More than half were in favor of alternative pathways, while about 20 were in favor of keeping the bar exam as the only way to become licensed.

The bar association has not done any surveying or study of how law firms feel about hiring attorneys who used an alternative pathway to get licensed.

Washington has long allowed people to complete a law clerk program where they are mentored by an attorney, then pass the bar and get licensed without completing law school.

There are 182 active attorneys in the state who received their license this way, and about 100 people in the law clerk program currently.

While the task force, which included representatives from more than 30 legal groups, was supportive of the changes, some attorneys are concerned.

The bar exam is a tradition of ensuring some minimum competence in a broad area of law, said Steve Graham, a Spokane attorney.

“It seems like a little risky move to throw the whole system out of bar exams,” he said.

Graham said it’s established that the bar can be an impediment to more diversity, especially with the financial burden of taking six weeks off to study for it. Graham said it’s basically impossible to work while studying for the exam and still pass.

People are already hundreds of thousands of dollars in debt and have to pass the exam to make it into the profession, Graham said.

“Psychologically, it weighs heavily on young people,” he said.

Still, the exam is a huge part of ensuring competent attorneys, he said.

“It just seems a little bit like the Marine Corps without boot camp,” Graham said.

He prefers a solution in the vein of financial aid to help lower-income bar applicants survive the summer after law school.

“That would seem to be a better solution,” he said.

NextGen Bar Exam and reduced passing score

Since 2020, a Bar Licensure Task Force has been examining the bar exam and licensure requirements.

The task force found that the traditional exam “disproportionally and unnecessarily blocks” marginalized groups from becoming practicing attorneys and is “at best minimally effective” for ensuring competency, according to a news release from the Washington Administrative Office of the Courts.

The exam focuses on specific legal questions in varied areas of the law, not on practical skills, which the National Conference of Bar Examiners hopes to fix with the NextGen Bar Exam, set to debut in July 2026. The exam will focus more on demonstrating practical skills than memorization.

So far, 17 jurisdictions, including many states, have announced their intention to adopt the exam.

In Welch’s mind, that’s a step in the right direction.

She always wanted to be a lawyer. And with a civil rights organizer for a mother and a father who was very involved in his union, it made sense.

Welch worked hard to achieve that goal, volunteering and serving in student government while an undergraduate at the University of Montana. From there, she attended Gonzaga Law School.

Welch, who is Northern Cheyenne, got high-profile internships, including one at the Native American Rights Fund in Washington, D.C.

Then came graduation and six brutal weeks of studying for the bar exam.

“Planning for it was absolutely awful,” she said. “It was really difficult and very cost prohibitive.”

But Welch said it all felt worth it after accepting a job at the Washington Attorney General’s Office, where she became a law clerk. The offer was conditional on Welch passing the bar.

Welch took out loans to pay for normal expenses during exam preparation and the two months after the test waiting for her grade.

During her trip to Italy in September, she got her results: a 266, four points lower than the 270 needed to pass in Washington.

“That was heartbreaking,” Welch said. “I immediately had to call my boss and tell him to fire me.”

When she got home, one of the first things she did was apply for food stamps.

“I have a doctorate degree and I’m applying for food stamps,” Welch said. “I had to take an additional bar study loan to pay for my life.”

She applied to 70 jobs, from attorney positions to waiting tables, and signed up to take the exam again in February. But studying again felt like an impossible mountain to climb, she said.

“Honestly, after the first time studying, your mentality is not there,” Welch said. “I would cry every time I studied because I thought I had failed. I was not in the space to take it again.”

She finally got an interview and eventually a job offer from the Montana Office of the Public Defender, which she accepted.

Welch and her fiancé are now in the midst of selling their home as he looks for a new job. Their rent in Montana is double what their mortgage was in Spokane, she said.

“There’s a lot of really big decisions that had to be made due to me not passing in Washington,” she said.

Even after getting the job in Montana, it took four months to transfer her score. Welch also had to pay to go before the Montana character and fitness board, something she had already paid for twice in Washington.

Hearing about the score reduction, Welch was happy for her friends who were in the same boat but also was upset it took this long to see change.

“It’s about time that Washington lowers the score and makes the bar more accessible and makes the legal field accessible,” Welch said.

She hopes the Washington State Bar Association will offer some kind of financial support or an apology to people in her position.

After the ordeal of the last few months, the great irony, Welch said, is that all the things she studied for to pass the bar, she hasn’t used.

“There’s not a single thing that I restudied for the bar that I’ve actually used in practice,” Welch said.

Emma Epperly can be reached at (509) 459-5122 or at [email protected].

National Conference of Bar Examiners

NCBE Announces National Mean for February 2024 MBE

MADISON, WISCONSIN, April 2, 2024— The National Conference of Bar Examiners (NCBE) announced today that the national mean scaled score for the February 2024 Multistate Bar Examination (MBE) was 131.8, an increase of more than 0.6 points compared to the February 2023 mean of 131.1. The MBE, one of three sections that make up the bar exam in most US jurisdictions, consists of 200 multiple-choice questions answered over six hours. 

19,496 examinees took the February 2024 MBE, an increase of approximately 1.4% compared to the 19,228 examinees who sat for the exam in February 2023. This increase continues a return toward pre-pandemic examinee numbers that began with last February’s administration.

Line graph of February MBE national mean scaled scores, 2020-2024. 2020 = 132.6; 2021 = 134.0; 2022 = 132.6; 2023 = 131.1; 2024 = 131.8.

Approximately 72% of February 2024 examinees were likely repeat test takers and approximately 28% were likely taking the exam for the first time, roughly the same proportion of repeat and first-time test takers as February 2023.  All groups of examinees saw performance increases compared to February 2023, with the greatest increase for first-time takers.   NCBE Director of Assessment and Research Rosemary Reshetar, EdD, commented: “These numbers reflect a continuation of the trend that began last February: we are moving back toward pre-Covid numbers in terms of both the mean and the examinee count. We will likely see an increase in pass rates compared to last February, but we are also still seeing the  effects of the pandemic  on examinees who were in law school in 2020, 2021, and 2022.” 

Reliability for the February 2024 exam was 0.93, slightly higher than the reliability for the February 2023 exam  and consistent with the 5-year average for February administrations.  (Reliability is an indicator of the consistency of a set of examination scores, with a maximum value of 1.0.)
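
The press release does not say how NCBE estimates reliability, but for an exam made up of right-or-wrong multiple-choice items a standard internal-consistency estimate is KR-20 (the dichotomous-item case of Cronbach's alpha). The sketch below only illustrates that formula on simulated responses; the examinee count, item model, and resulting value are invented and are not NCBE data.

    import numpy as np

    def kr20(responses):
        """Kuder-Richardson 20 reliability for a matrix of 0/1 item scores
        (rows are examinees, columns are items)."""
        k = responses.shape[1]                          # number of items
        p = responses.mean(axis=0)                      # proportion correct per item
        item_variance_sum = np.sum(p * (1 - p))
        total_score_variance = responses.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_variance_sum / total_score_variance)

    # Simulated data only: 1,000 hypothetical examinees answering 200 items,
    # generated from a simple one-parameter logistic model so totals vary.
    rng = np.random.default_rng(0)
    ability = rng.normal(0, 1, size=(1000, 1))
    difficulty = rng.normal(0, 1, size=(1, 200))
    p_correct = 1 / (1 + np.exp(-(ability - difficulty)))
    responses = (rng.random((1000, 200)) < p_correct).astype(int)

    print(f"KR-20 reliability: {kr20(responses):.2f}")  # high for a long, well-built test

On this reading, a coefficient of 0.93 means examinees' total scores would be expected to rank them in nearly the same order on a comparable form of the exam.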

Jurisdictions begin releasing their February 2024 results this week;  bar examination pass rates  as reported by jurisdictions are available on the NCBE website. Many jurisdictions are still in the process of grading the written components of the bar exam; once this process is completed, bar exam scores will be calculated and passing decisions reported by those jurisdictions.

More information about the MBE and bar passage rates can be found in the following  Bar Examiner  articles:

  • The MBE Mean and Bar Passage Predictions
  • When the Mean Misleads: Understanding Bar Exam Score Distributions
  • Why are February Bar Exam Pass Rates Lower than July Pass Rates?

The first-time and repeat MBE-based test taker information calculated by NCBE is an approximation based on the NCBE Number and biographic data, which has not been used consistently in all jurisdictions across time. Prior to 2022, approximately 10% of examinees could not be tracked with certainty by NCBE as either first-time or repeat takers due to a lack of sufficient biographic information.

About the National Conference of Bar Examiners

The National Conference of Bar Examiners (NCBE), headquartered in Madison, Wisconsin, is a not-for-profit corporation founded in 1931. NCBE promotes fairness, integrity, and best practices in bar admissions for the benefit and protection of the public, in pursuit of its vision of a competent, ethical, and diverse legal profession. Best known for developing bar exam content used by 54 US jurisdictions, NCBE serves admission authorities, courts, the legal education community, and candidates by providing high-quality assessment products, services, and research; character investigations; and informational and educational resources and programs.  In 2026, NCBE will launch the next generation of the bar examination, ensuring that the exam continues to test the knowledge, skills, and abilities required for competent entry-level legal practice in a changing profession.  For more information, visit the NCBE website at  https://www.ncbex.org .

About the Multistate Bar Examination

The Multistate Bar Examination (MBE) is a six-hour, 200-question multiple-choice examination developed by NCBE and administered by user jurisdictions as part of the bar examination, typically given twice each year. The purpose of the MBE is to assess the extent to which an examinee can apply fundamental legal principles and legal reasoning to analyze given fact patterns. The subjects tested on the MBE are Civil Procedure, Constitutional Law, Contracts, Criminal Law and Procedure, Evidence, Real Property, and Torts. In addition to assessing examinee knowledge and skills, the MBE is used to equate the bar exam.  Equating  is a statistical procedure used for most large-scale standardized tests to ensure that exam scores retain the same meaning across administrations and over time.  More information about the MBE is available on the NCBE website at  https://www.ncbex.org/exams/mbe/.
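
Operational MBE equating uses common items and more sophisticated psychometric models, but the basic goal, that the same scaled score means the same thing on an easier or harder form, can be illustrated with a simple mean-sigma (linear) adjustment. The sketch below assumes randomly equivalent examinee groups and uses invented numbers; it is not NCBE's actual procedure.

    import numpy as np

    def mean_sigma_equate(new_form_scores, ref_mean, ref_sd):
        """Linearly map scores from a new test form onto a reference scale so the
        equated scores share the reference form's mean and standard deviation.
        (Assumes the two examinee groups are equally able on average.)"""
        scores = np.asarray(new_form_scores, dtype=float)
        z = (scores - scores.mean()) / scores.std()
        return ref_mean + ref_sd * z

    # Invented example: raw scores on a slightly harder new form are shifted and
    # stretched so they line up with the reference form's scale.
    new_form_raw = [112, 119, 127, 134, 146]
    print(mean_sigma_equate(new_form_raw, ref_mean=140.0, ref_sd=16.0).round(1))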

About the Uniform Bar Examination

The UBE is a two-day bar examination composed of the Multistate Essay Examination (MEE), two Multistate Performance Test (MPT) tasks, and the Multistate Bar Examination (MBE). It is uniformly administered, graded, and scored and results in a portable score that can be transferred to other UBE jurisdictions. Forty-one US jurisdictions currently participate in the UBE, and more than 45,000 examinees took the UBE in 2023. More information about the UBE is available on the NCBE website at https://www.ncbex.org/exams/ube/.

COMMENTS

  1. How to Interpret Your Bar Exam Score Report

    If you are not sure what score you should aim for, and you are in a Uniform Bar Exam state, just take the overall score needed and divide it by two. So, for example, in New York, you need a 266 to pass the bar exam. If you divide 266 by two, that is 133. So you should aim for at least a 133 on the MBE.

  2. It's All Relative—MEE and MPT Grading, That Is

    By contrast, essays and performance tests cannot be equated in the way a multiple-choice exam like the MBE can be, so a total raw score of, say, 24 (or 100 or 1,000) on the written part of the bar exam may have a different meaning depending on the particular exam form, the examinee pool, the grader, and the jurisdiction. 3

  3. 13 Best Practices for Grading Essays and Performance Tests

    The score given to an essay by a grader is essentially a "raw" score because those essay grades will be scaled to the jurisdiction's Multistate Bar Examination (MBE) scores. 4 Only then will the "real" grade for that specific essay be determined, which will then be added to that examinee's MBE score, other essay grades, and grades ...

  4. The Testing Column: Scaling, Revisited

    The result is the total essay score in standard deviation units. For example, the first examinee has a total raw essay score of 29; the mean essay score is 40 and the SD is 5.2; thus, the examinee's score in SD units is -2.1 or 2.1 standard deviations below the mean (see the Testing Column in May 2003 for a discussion of scores in SD units). A short code sketch of this scaling step appears after this list.

  5. Bar Exam Scoring: Everything You Need to Know About Bar Exam Scores

    The baby bar exam score range is between 40 and 100, with an average score between 70 and 80. ... Scoring high on bar exam essays is extremely important to your overall score. For many students, essay writing is the most difficult and stressful part of the bar exam. But with a little practice and guidance, ...

  6. California Bar Exam Grading

    The General Bar Exam consists of three parts: five essay questions, the Multistate Bar Exam (MBE), and one performance test (PT). The parts of the exam may not be taken separately, and California does not accept the transfer of MBE scores from other jurisdictions. ... Based on the panel discussions and using the agreed-upon standards, graders ...

  7. Advice from a Bar Grader: Tips to Maximize Your Essay Score

    Resist the temptation to skip a section that seems unimportant. Clearly state your thoughts briefly and then move on. For example, a question might ask for causes of action against three defendants and possible defenses. The bulk of the question focuses on Defendant #1 and #3. But don't skip Defendant #2 and if there are no causes of action ...

  8. What Is A Good Bar Exam Score? (Complete Guide)

    How Is The Bar Exam Scored? The UBE has a possible score of 400. The UBE has multiple components and consists of the Multistate Performance Test (MPT), the Multistate Essay Exam (MEE), and the Multistate Bar Exam (MBE). The MPT and MEE together make up a total of 200 points and are scored by jurisdiction. However, the MBE is scored by the National Conference of Bar Examiners, and is worth 200 ...

  9. Breaking down Essay Grading by the California Bar Exam

    The graders assign a raw score to each essay on a scale from 40 - 100. The State Bar of California has explained, "in order to earn a 40, the applicant must at least identify the subject of the question and attempt to apply the law to the facts of the question. If these criteria are not met, the answer is assigned a zero.".

  10. How to Tackle Essay Writing on the Bar Exam

    So here's what you need to know about essay writing on the bar exam and strategies you can implement to improve your score. Check out the most important bar exam essay writing tips below!

  11. California Bar Exam Essays

    A Database of Over Three Thousand Authentic Graded California Bar Exam Essays. BarEssays.com is a unique and invaluable study tool for the essay portion of the California Bar Exam. We are, by far, the most comprehensive service that provides REAL examples of REAL essays and performance exams by REAL students that were actually taken during the ...

  12. Mastering the Virginia Bar Exam Essay Section: A Comprehensive Guide

    To excel in the Virginia Bar Exam essay section, keep these key points in mind: Understand the "RAC" organization format and focus on the analysis section. Find the relevant rules and elements necessary to address the question. Connect the facts of the case to the identified rules in your analysis.

  13. The Testing Column: Essay Grading Fundamentals

    Grading the written portion of the bar examination is a painstaking process that accounts for at least half of an examinee's grade—thus a significant component of the overall bar exam score. This column focuses on some essay (and performance test) grading fundamentals: rank-ordering, calibration, and taking into account an examinee's ...

  14. How do grading and scoring work for the California Bar Exam?

    Scaling the points for the California Bar Exam. Your raw written score out of 700 is converted to a 2000-point scaled score using a formula unique to that exam. The California State Bar will publish the formula after each result. Your raw MBE score (how many you got correct) is also converted to a 2000-point scaled score using a formula unique ...

  15. NCBE Announces National Mean for February 2024 MBE

    The National Conference of Bar Examiners (NCBE) announced today that the national mean scaled score for the February 2024 Multistate Bar Examination (MBE) was 131.8, an increase of more than 0.6 points compared to the February 2023 mean of 131.1. The MBE, one of three sections that make up the bar exam in most US jurisdictions, consists of 200 multiple-choice questions answered over six hours.

  16. February MBE National Mean Scaled Score Increases Alongside Number of

    The National Conference of Bar Examiners announced Tuesday that the national mean scaled score for the February 2024 Multistate Bar Examination was 131.8, an increase of more than 0.6 points ...

  17. Did OpenAI's GPT-4 really pass the bar exam?

    When compared against the results of people who passed the exam on the first try, GPT-4's scores drop to the 48th percentile for the whole test, and to the 15th percentile on the essay section.

  18. Re-evaluating GPT-4's bar exam performance

    The fourth main finding is that when examining only those who passed the exam, GPT-4's performance is estimated to drop to ~48th percentile overall, and ~15th percentile on essays. In addition to these four main findings, the paper also investigated the validity of GPT-4's reported UBE score of 298.

  19. New lower passing rate for Washington's bar exam opens possibilities

    New lower passing rate for Washington's bar exam opens possibilities, but also leads to questions. Sun., March 24, 2024. The Temple of Justice, where the state Supreme ...
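
Items 3 and 4 above describe the usual treatment of the written section: raw essay totals are first expressed in standard-deviation units and then placed on the same examinee pool's MBE scale before the components are combined. The sketch below walks through that two-step idea; apart from the 29/40/5.2 figures quoted in item 4, every number is invented, and real jurisdictions apply their own published formulas and weights.

    import statistics

    def scale_written_to_mbe(raw_written, mbe_scaled):
        """Re-express raw written scores on the MBE scale by matching the mean and
        standard deviation of the same examinee pool's scaled MBE scores.
        Rank order among the written scores is preserved."""
        w_mean, w_sd = statistics.mean(raw_written), statistics.stdev(raw_written)
        m_mean, m_sd = statistics.mean(mbe_scaled), statistics.stdev(mbe_scaled)
        return [m_mean + m_sd * (raw - w_mean) / w_sd for raw in raw_written]

    # Worked figure quoted in item 4: a raw essay total of 29, against a mean of 40
    # and an SD of 5.2, sits about 2.1 standard deviations below the mean.
    print(round((29 - 40) / 5.2, 1))  # -2.1

    # Invented five-examinee pool: written totals are rescaled to the MBE scale and
    # then added to each examinee's MBE score for a UBE-style total out of 400
    # (written and MBE each contribute up to 200 points, as noted in item 8 above).
    raw_written = [29, 35, 40, 44, 51]
    mbe_scaled = [121, 133, 140, 147, 158]
    written_on_mbe_scale = scale_written_to_mbe(raw_written, mbe_scaled)
    print([round(w + m) for w, m in zip(written_on_mbe_scale, mbe_scaled)])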
