Law Meets Statistics
I found this incredibly interesting academic paper (thanks to SCOTUSBlog for the post that eventually led me to it), which uses network theory to analyze the citations to and from the ~28K Supreme Court majority opinions that have ever been issued. Using the resulting model, the authors predict which cases will become important to our body of law going forward and which cases will rise or fall in importance as time passes.
This paper is nearly inaccessible to those without an extensive background in statistics (I found it very overwhelming, and my MBA was concentrated in statistics and econometrics), so if you go and read it, be prepared to get the "gist" of it without understanding all of the details.
Getting down to brass tacks: their model does a much better job than all previous methods (examples include (a) checking whether the case made the front page of the New York Times and (b) counting how many amicus briefs were filed in the case in question) at predicting the probability that a given case will be cited, based on which other cases cite the case in question and which other cases the case in question cites (even summarizing their research is confusing).
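To make that idea concrete, here is a minimal sketch of hub/authority-style scoring on a toy citation graph, in the spirit of Kleinberg's HITS algorithm. This is only an illustration of the core intuition the paper builds on (a case's importance depends both on who cites it and on what it cites); the paper's actual model is more sophisticated, and the case names and citations below are invented for the example.

```python
# Toy citation graph: edges point from the citing case to the cited case.
# All case names and citation links here are hypothetical.
citations = {
    "Case A": ["Case B", "Case C"],
    "Case B": ["Case C"],
    "Case C": [],
    "Case D": ["Case A", "Case C"],
}

cases = list(citations)
auth = {c: 1.0 for c in cases}  # authority: cited by good hubs
hub = {c: 1.0 for c in cases}   # hub: cites good authorities

for _ in range(50):  # power iteration until the scores stabilize
    # Authority score = sum of hub scores of the cases citing this one.
    new_auth = {c: sum(hub[u] for u in cases if c in citations[u]) for c in cases}
    # Hub score = sum of authority scores of the cases this one cites.
    new_hub = {c: sum(new_auth[v] for v in citations[c]) for c in cases}
    # Normalize so the scores don't grow without bound.
    a_norm = sum(v * v for v in new_auth.values()) ** 0.5 or 1.0
    h_norm = sum(v * v for v in new_hub.values()) ** 0.5 or 1.0
    auth = {c: v / a_norm for c, v in new_auth.items()}
    hub = {c: v / h_norm for c, v in new_hub.items()}

for c in sorted(cases, key=auth.get, reverse=True):
    print(f"{c}: authority={auth[c]:.3f}, hub={hub[c]:.3f}")
```

Running this, "Case C" comes out with the highest authority score because it is cited by every other case, which is the sort of signal the paper exploits, at scale, across the entire body of Supreme Court opinions.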
One question I have after a first reading: the study does not compare the citation flow against any measure of how the Court constructs its docket. The Court has complete control over the cases it chooses to hear. Given that, how does docket construction (and, implicitly, the importance each Court places on certain areas of law) influence this analysis? I think that would be an interesting area for follow-up research.