This story is about the implications of the latest developments in legal tech. Before we get to it, I want to ask a question and just leave it there — for us to keep in mind as we plough through the latest wonders of the AI world. The question is, what is the point of our existence as human beings on Earth?
Chatbot Passed the Uniform Bar Exam
Word on the street is that the latest Microsoft-backed AI has passed the Uniform Bar Exam, an exam that lawyers in the United States have to pass in order to get licensed to practice law. Not only did it pass it but its score “fell in the top 10% of test takers.”
- Concerned about your life’s savings as the banking crisis decimates retirement accounts? You’re not alone. Find out how Genesis Precious Metals can help you secure your wealth with a proper self-directed IRA backed by physical precious metals.
The AI software, made by the Microsoft partner OpenAI, is called “GPT-4.” It is an upgrade from GPT-3.5, which is what the recently famous ChatGPT program is based on. In case you are curious, “GPT” stands for “Generative Pre-trained Transformer,” which is a computer language model that uses “deep learning” to produce text.
Deep learning, roughly speaking, means combing through vast amounts of data, algorithmically extracting meaningful characteristics of different types, and then combining them in a way that makes it look like the computer “understands.”
This software falls under the definition of “generative AI” — the type of AI that goes beyond analyzing large amounts of data and producing a summary, and is capable of generating its own “creative” output based on the data it has analyzed.
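To make the “pattern extraction, then generation” idea concrete, here is a deliberately tiny sketch. This is not how GPT-4 actually works — real systems use deep neural networks, not lookup tables — but it captures the same statistical framing: learn which words tend to follow which, then produce new text by repeatedly predicting a plausible next word. The corpus and function names below are made up for illustration.

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Count which word follows which in the training text (pattern extraction)."""
    words = text.split()
    follows = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        follows[prev].append(nxt)
    return follows

def generate(follows, start, length=8, seed=0):
    """Generate text by repeatedly sampling a learned next word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        candidates = follows.get(out[-1])
        if not candidates:  # dead end: no observed continuation
            break
        out.append(rng.choice(candidates))
    return " ".join(out)

corpus = "the court ruled that the law applies to the case"
model = train_bigrams(corpus)
print(generate(model, "the"))
```

The output is fluent-looking word salad stitched from observed patterns — which is precisely the point the critics quoted below make about “computational pattern-matching of words related to other words.”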
Per OpenAI’s GPT-4 Technical Report, the program was tested “on a diverse set of benchmarks, including simulating exams that were originally designed for humans.”
Can I please use this opportunity to express my annoyance with the popularization of the hipster use of the word “humans” instead of “people,” as if there is ever going to be a point in time when robots grow sentient and start participating in society not as a type of technology at the hands of whoever is in charge but as living beings in their own right?
This is of course never going to happen (although the patent owners may pretend). This is nonsense and fiction. But calling people “humans” introduces a new way of viewing ourselves through externalized mechanical eyes. It is yet another magical trick by the crazies in high chairs to disconnect us from our innate personality and our souls. It also helps the tyrants to bring about “robot citizens” and give them “rights.”
The “robot citizens” may even include “financial actors,” to justify the fraud! And when the actual people complain about the absurdity of it based on the fact that robots aren’t actually “human,” they’ll be accused of having a phobia of some sort.
Using “humans” instead of “people” when describing us all just makes the trickery a tad easier because, are these machines or software products people? Obviously they are not. We know they are not people. But are they maybe a little bit human-like? Trying to be human? Wanting to be human? Deserving to be human? Don’t you think they at the very least deserve human-like rights? Etc.
Back to the Topic of AI Passing the Bar Exam
Anyway, according to OpenAI, the company took the necessary precautions to ensure that the AI product didn’t just mechanically reproduce already known correct answers to the already known questions on the bar exam. In their own words: “we did no specific training for these exams.
A minority of the problems in the exams were seen by the model during training; for each exam we run a variant with these questions removed and report the lower score of the two. We believe the results to be representative.”
“Exams were sourced from publicly available materials. Exam questions included both multiple choice and free-response questions; we designed separate prompts for each format, and images were included in the input for questions which required it.
The evaluation setup was designed based on performance on a validation set of exams, and we report final results on held-out test exams. Overall scores were determined by combining multiple-choice and free-response question scores using publicly available methodologies for each exam.”
Important: Our sponsors at Jase are now offering emergency preparedness subscription medications on top of the long-term storage antibiotics they offer. Use promo code “Rucker10” at checkout!
“On a simulated bar exam, GPT-4 achieves a score that falls in the top 10% of test takers. This contrasts with GPT-3.5, which scores in the bottom 10%.” Here are the Uniform Bar Exam scores and the LSAT results, according to the GPT-4 Technical Report:
Uniform Bar Exam (MBE+MEE+MPT)
- GPT-4: 298 / 400 (approximately 90th percentile)
- GPT-4 (no vision): 298 / 400 (approximately 90th percentile)
- GPT-3.5: 213 / 400 (approximately 10th percentile)
LSAT
- GPT-4: 163 (approximately 88th percentile)
- GPT-4 (no vision): 161 (approximately 83rd percentile)
- GPT-3.5: 149 (approximately 40th percentile)
Here is what a mainstream review of the software, written by Dr. Lance Eliot and published on law.com, has to say:
Does the passing of the simulated uniform bar exam imply or prove that GPT-4 is legally capable and fully ready to perform legal tasks on par with human lawyers?
The answer is a resounding No, namely that despite the wink-wink implication or innuendo, all that can be reasonably said is that GPT-4 was able to use its extensive computational pattern-matching of words related to other words in order to successfully derive answers to the presented exams.
My comment: I think it is important to emphasize the fact that the “wink-wink” component is a big part of the AI myth. We need to remember it when it comes to all AI “work,” not just the tasks that people do in their currently still prestigious career paths. And is it possible that when the overlords came up with their conveyors and “systems management,” they just duped us point-blank?! Back to Dr. Eliot’s analysis:
“As I’ve stated in my prior posted piece entitled “Best Ways To Use Generative AI In Your Law Practice,” care needs to be exercised in overstating what generative AI can attain. Furthermore, the comparison of GPT-4 to “human-level performance” smacks of anthropomorphizing of AI. This is a dangerous slippery slope of taking people down a primrose path that current AI is sentient or human-like in abilities.
Generative AI such as GPT-4 is notably handy as an aid for lawyers and can be a huge leg-up in performing legal tasks. That being said, relying solely on generative AI for legal efforts is unsound and improper.
The key takeaway for lawyers is that you ought to be giving serious and deep consideration to leveraging generative AI such as GPT-4. No doubt about that. GPT-4 is even better at aiding lawyers than ChatGPT. I’ve said over and over again that lawyers and law practices using generative AI are going to outdo and outperform attorneys and firms that aren’t using generative AI.”
Let’s talk for a second about generative AI. Here is what Reuters has to say:
“Generative artificial intelligence has become a buzzword this year, capturing the public’s fancy and sparking a rush among Microsoft (MSFT.O) and Alphabet (GOOGL.O) to launch products with technology they believe will change the nature of work.”
“The most famous generative AI application is ChatGPT, a chatbot that Microsoft-backed OpenAI released late last year. The AI powering it is known as a large language model because it takes in a text prompt and from that writes a human-like response.”
“GPT-4, a newer model that OpenAI announced this week, is ‘multimodal’ because it can perceive not only text but images as well. OpenAI’s president demonstrated on Tuesday how it could take a photo of a hand-drawn mock-up for a website he wanted to build, and from that generate a real one.”
Here is my favorite bit: “Cybersecurity researchers have also expressed concern that generative AI could allow bad actors, even governments to produce far more disinformation than before.” Oh no, governments spreading misinformation? Impossible. Never once happened in human history. At least, not our government. Not here and not now, and not against us (phew). A short video explainer by Reuters (narrated by AI?).
Paying Attention to the Upside-Down Language of AI
One of the things to pay attention to in the talk about AI is the use of the word “harm.” What is “harmful language”? On the intuitive level, we know (calls for violence or child abuse, for example, are actually harmful language) — but in the official, robotic context, “harmful language” is whatever they say it is on a given day.
In an imaginary honest society, in a society where the leaders wouldn’t try to mess with language and with people’s heads, the desire to put in tight controls around what can come out of a robot’s mouth could be a neutral and noble goal. It is a robot, after all. But we don’t live in an honest society, and the leaders are already trying to label any “wrongthink” as “hate speech.”
So naturally, as the trend with AI proceeds, and as AI replaces educators and bureaucratic decision making, people facing censorship might be squeezed more and more into the torture of “talking to the hand.” A mechanical hand.
The weaponization of language and automated interactions has been on my mind a lot. In my interview with Dr. Bruce Dooley, we discussed the weaponization of the word “harm” in medicine by the Federation of State Medical Boards. In my 2022 article, “Who is the Terrorist?” I wrote about the notions of harm and disinformation put forward by the DHS last year. And a few years before COVID, I wrote an essay titled, “Love & Automation: The Creepy Touch of a Mechanical Mother”:
“The defenders of efficiency typically try to impose it on other people — while allowing themselves to remain just as human and free-roaming as they wish to be. The famous example is how the likes of Steve Jobs and Bill Gates limit their own children’s access to technology while telling the rest of the world that technology is universally and ubiquitously amazing.
Factory owners gladly impose efficiency on the former independent craftsmen and their descendants. Office managers impose efficiency on the office slaves. Makers of software sell efficiency to the general population. But in their own shoes, the salesmen of efficiency like to look at the sky and smell the flowers.”
What Can Be Done With AI in the Legal Field?
As of this second, AI can possibly do the work that is usually done by paralegals and junior lawyers. It can customize template-based documents, such as contracts and form letters. It can go through vast piles of documents quickly and summarize the case for more senior lawyers to review. It can also write draft legal documents, citing the laws and the precedents that apply.
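The “customize template-based documents” tier is the easiest to picture. Here is a trivial sketch using Python’s standard `string.Template` — the mechanical baseline that even pre-AI legal tech handled, and that generative AI now extends by drafting the clause text itself. The contract wording and party names are invented for illustration.

```python
from string import Template

# A form-letter-style contract skeleton with placeholders to fill in.
contract = Template(
    "This agreement is made on $date between $party_a and $party_b.\n"
    "$party_b agrees to pay $party_a the sum of $amount upon completion."
)

draft = contract.substitute(
    date="March 20, 2023",
    party_a="Acme Widgets LLC",
    party_b="Jane Doe",
    amount="$$1,500",  # "$$" is how string.Template escapes a literal dollar sign
)
print(draft)
```

A generative model goes one step further than this: instead of filling slots in fixed boilerplate, it produces the surrounding language too, which is why the drafts still need review by an actual lawyer.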
According to a Brookings Institution review, in the legal field, AI can take care of some of the most time-consuming tasks:
“Consider one of the most time-consuming tasks in litigation: extracting structure, meaning, and salient information from an enormous set of documents produced during discovery. AI will vastly accelerate this process, doing work in seconds that without AI might take weeks. Or consider the drafting of motions to file with a court.
AI can be used to very quickly produce initial drafts, citing the relevant case law, advancing arguments, and rebutting (as well as anticipating) arguments advanced by opposing counsel. Human input will still be needed to produce the final draft, but the process will be much faster with AI.”
“More broadly, AI will make it much more efficient for attorneys to draft documents requiring a high degree of customization — a process that traditionally has consumed a significant amount of attorney time.
Examples include contracts, the many different types of documents that get filed with a court in litigation, responses to interrogatories, summaries for clients of recent developments in an ongoing legal matter, visual aids for use in trial, and pitches aimed at landing new clients.
AI could also be used during a trial to analyze a trial transcript in real time and provide input to attorneys that can help them choose which questions to ask witnesses.”
One of the first popular AI legal tech products, branded as “the world’s first robot lawyer” was created in 2015 by Josh Browder, the founder of DoNotPay. That particular software was designed to help people write effective responses and letters to fight unjustly issued tickets, cancel subscriptions, and so on. Once the word got out, the company was able to receive significant investor funding.
In January 2023, the company announced that their AI would act as an informal “attorney” in court, helping the client to fight a speeding ticket.
“A program trained with the help of artificial intelligence is set to help a defendant contest his case in a U.S. court next month … Instead of addressing the court, the program, which will run on a smartphone, will supply appropriate responses through an earpiece to the defendant, who can then use them in the courtroom.”
“Since this is the AI’s very first case, DoNotPay is ready to take on the burden of punishment if the AI’s advice does not help the client. Since it is a speeding ticket, DoNotPay will pay for the speeding ticket. If it wins though, it will have a massive victory to its credit.” The AI was supposed to assist the client in court in February 2023. But then it didn’t.
Good morning! Bad news: after receiving threats from State Bar prosecutors, it seems likely they will put me in jail for 6 months if I follow through with bringing a robot lawyer into a physical courtroom. DoNotPay is postponing our court case and sticking to consumer rights:
— Joshua Browder (@jbrowder1) January 25, 2023
And then the company was sued for “practicing law without a license.” “The robot lawyer is facing a proposed class action lawsuit filed by Chicago-based law firm, Edelson and published on the website of the Superior Court of the State of California for the County of San Francisco.”
The way it looks to me, the people-facing, cheaply priced version of AI is seen as a threat by the big boys, and so they want to put a stop to that. I mean, can you imagine a world in which lowly peasants use cheaply priced AI (fast calculator) products to benefit themselves — and possibly even without sending all their personal data to the Central Mother Ship? The outrage.
Another AI product, also branded as “the first artificially intelligent attorney,” was Ross. Ross was an AI product made by Ross Intelligence, a startup founded in 2014 by three Canadian students. The product was based on IBM’s Watson series of AI products for business. Spoiler: the company was shut down in 2020. Here is from Futurism (2016):
“Law firm Baker & Hostetler has announced that they are employing IBM’s AI Ross to handle their bankruptcy practice, which at the moment consists of nearly 50 lawyers. According to CEO and co-founder Andrew Arruda, other firms have also signed licenses with Ross, and they will also be making announcements shortly.”
“Ross, ‘the world’s first artificially intelligent attorney’ built on IBM’s cognitive computer Watson, was designed to read and understand language, postulate hypotheses when asked questions, research, and then generate responses (along with references and citations) to back up its conclusions. Ross also learns from experience, gaining speed and knowledge the more you interact with it.”
But like I said, in 2020, Ross Intelligence was shut down. That seemingly had to do not with any underlying philosophical issues of using AI in the legal field — but with corporate competition. It was more about the fight of the mobs and the question of who gets to benefit from the potential dollar waterfall.
It was shut down after the company’s financing was blocked by a lawsuit filed by Thomson Reuters, which claimed that Ross Intelligence used TR data to build its legal tech AI.
“ROSS Intelligence, a company that sought to innovate legal research through the use of artificial intelligence, and that helped to raise awareness of AI throughout the legal industry, is shutting down its operations, as a lawsuit against it by Thomson Reuters has crippled its ability to raise new financing or explore potential acquisition and left it without sufficient funds to operate.”
“Thomson Reuters sued ROSS in May, alleging that it stole content from Westlaw to build its own competing legal research product. ROSS did this, TR alleged, by “intentionally and knowingly” inducing the legal research and writing company LegalEase Solutions to use its Westlaw account to deliver Westlaw data to ROSS en masse.”
The Owner of the AI Sets the Tone
As usual, the devil is in the detail. If AI technology is used to help regular people accomplish tasks that were previously out of reach for anyone but the rich — and without the abusive data offloading to the Central Mother Ship — it could be a useful thing.
And this will be the selling point during the initial phases of the technology rollout. The bait is supposed to taste good, this is how it always works.
The Apocalypse doesn’t have to taste awful. Get long-term preparedness food that’s actually edible from my new store, Late Prepper. Use promo code “jdr” for 15% off!
But if a collective habit develops for the use of chatbots and coherent-sounding, language-producing fast calculators in our everyday professional lives, and once the sufficient amounts of data have been scooped, the useful stuff will be moved behind a very hefty paywall. And perhaps, at that time, the middle class and even the upper middle class lawyers will get “deprecated,” just like many have been “deprecated” before them. Time will tell!
“Gradually Then Suddenly”: The Conquest
Let’s talk about the concept of the bait. Life works in mysterious ways, and history works in long time spans. The attack on human agency and personality and on our relationship with nature and our own emotional richness started thousands of years ago. Today, we are dealing not just with the intentions and the incessant scamming of Klaus Schwab, the alphabets, and their owners upstairs — but also with the consequence of the scams put forth by the tyrants and tricksters from the past.
Today, we are paying the price not only for the collective imperfect choices of the people underneath the boot of the big tyrants of today (including our own) — but also for the imperfect choices made by tyrant victims of the past, when some people were possibly startled, or bullied, or tricked, or bribed into accepting soul compromises of their time.
And then they passed the compromise on to their children. And they passed it to their children. And so on. See how this works?
Even on a smaller scale, it works “gradually then suddenly.” For example, the U.S. pandemic preparedness model that bit us all in 2020 was set in motion in the early 2000s, during the Bush administration. After being prepared, it sat there behind the bushes (a pun!) for 15 years, waiting to enter the stage. And then it entered the stage with a vengeance. And here we are!
The Condemnation of the Algorithmic Model
All these AI developments may be a compliment to the power of technology but they are also a condemnation of the state of our civilization.
Remember the question I asked in the beginning? I asked, what is the point of our time on Earth? And what are we doing with our civilization if our lives are so mechanical that even a dumb fast calculator can do our “intellectual work” more efficiently than we do? What is our “work”?
Did they — gradually and then suddenly — get us? Have our own “human” cognitive models and angles from which we analyze the world become so algorithmic that a stupid machine can outdo us at the “thinking” task? Not good.
And speaking of law, there was a time in history — before our communities and individual people became invisible pawns under various dominating entities — when even “law and order” wasn’t algorithmic but was based on the subjective and honest desire to do what’s right in the spiritual sense.
And I think on a local level, it still happens sometimes, but the “domination” mentality has poisoned our minds, and the algorithm was put in place to keep a semblance of goodness where the goodness had been undermined.
I think that the existential role of the horrible Great Reset is to make that algorithmic poison so big, so obvious to our eyes that we simply can’t be in denial anymore and have to rebel against the ancient abuse of our souls. Because we are more than a collection of formulas. We are more than algorithms.
We are of spirit and water, we have souls, we are capable of navigating subjectivity and thriving at it, we can feel, touch, love, savor our relationships — and that is the point of us being here.
The very feeling of being alive — the breath and the skin — is why we are here. And when someone is trying to further reduce us to pattern-matching machines, we have the full right to say, “No.” I would like to end this article with a quote from a sci-fi story that I wrote in 2019 (I only added a line about a pandemic to it in 2020):
“In order to gain control over the economy and human bodies, they needed to first gain control over people’s thinking. So they created a strong push to shift all major human activities to the digital domain as digital footprints were initially much easier to track and monetize. They set up breadcrumbs and made the transition look like fun.
Simultaneously, they built strong relationships with some of the most influential citizens and organizations of the time. Tech leaders promised easy surveillance to law enforcement — and free access to education and entertainment to common citizens. Everybody thought they were getting a great deal!
They gave the people previously unseen opportunities to create new worlds — both on the developer side and on the user side — but nobody except the top execs knew that the new worlds came with hidden trackers and treacherous on-off switches that could be activated at any point.”
“Early warnings came from artists who figured out that their work was being used as a bait to attract people to tech platforms. But artistic types were not respected members of society, and their cries were drowned in optimistic speeches about the bright future of everything.
Then came the media. After news companies started crumbling and many journalists found themselves without an income, they realized that the game was rigged. But they, too, were swept out of the way. Some made a bargain and took tech funding, some became “gig economy” workers, and some learned how to code.
Then, at a critical point in time, there was a pandemic of some sort, and powerful technology leaders, including some of the IHT official saints, managed to use their influence on governments to legally mandate digitization of all aspects of life. It was then that unregulated human contact was made illegal and smart wearables and AI assistants became mandatory.”
“By the time lawyers, doctors, bankers, and government officials were personally impacted and practically enslaved on a massive scale, it was too late. Big Tech controlled every aspect of life, tracked everything, and funded every industry. It became the default law enforcement agency and the default news publisher, and thus it had the power to make or break any pundit, academic, or politician.
Everyone — from governments to low-level assistants-to-robots — depended on technology for every life function. Sex and baby permits required impeccable Digital Citizen Scores. No one could even get a low-level job without abiding by algorithms — and most jobs were automated anyway. Municipal councils owed money for smart city maintenance. The grip was total.”
“And while many felt instinctively uneasy about giving up privacy and cognitive autonomy, they also felt alone and helpless. Jobs outside of tech were scarce, competition was harsh, and very few had the luxury to even ponder the big picture. So people kept their heads low and did what they had to do to feed their families — complied, wore mandatory smart masks, and learned how to code if they were allowed.
Developers and other high-level tech industry workers preserved their financial independence and cognitive autonomy the longest — gated coder communities became a fixture on every smart urban hub — but eventually they, too, became obsolete, as AI grew sophisticated enough to produce itself.
Shortly after the institution of biologically compromised governance was deprecated, Big Tech became Interplanetary Holy Tech, and you know the rest.”
Does this sound familiar? If it does, now is a good time for all of us to ponder why we were born on Earth, why we are alive here and now, with all our gifts — and to follow through with the unique, brave and important tasks that we are here to do. With heart.
About the Author
To find more of Tessa Lena’s work, be sure to check out her bio, Tessa Fights Robots.
What Would You Do If Pharmacies Couldn’t Provide You With Crucial Medications or Antibiotics?
The medication supply chain from China and India is more fragile than ever since Covid. The US is not equipped to handle our pharmaceutical needs. We’ve already seen shortages with antibiotics and other medications in recent months and pharmaceutical challenges are becoming more frequent today.
Our partners at Jase Medical offer a simple solution for Americans to be prepared in case things go south. Their “Jase Case” gives Americans emergency antibiotics they can store away while their “Jase Daily” offers a wide array of prescription drugs to treat the ailments most common to Americans.
They do this through a process that embraces medical freedom. Their secure online form allows board-certified physicians to prescribe the needed drugs. They are then delivered directly to the customer from their pharmacy network. The physicians are available to answer treatment related questions.