AI tools like ChatGPT can be incredibly helpful for everyday questions, but they are not reliable for legal strategy, especially in complex WorkSafeBC claims.
A recent WCAT decision demonstrates exactly how dangerous it can be when someone relies on AI-generated legal submissions without knowing the law, the policies, or the system’s strict deadlines.
In that case, the worker used AI to prepare arguments for a prohibited action appeal. The result?
Fake laws, fake cases, incorrect policies, and ultimately, a dismissed appeal. This real-world outcome shows why AI cannot replace a trained WCB lawyer.

A Real WCAT Case Shows the Risks of Relying on AI
In a 2025 WCAT decision (A2501051), the worker submitted arguments that were clearly generated, at least in part, by artificial intelligence. The tribunal specifically addressed this because the errors were so serious. Key issues included:
1. AI Invented Cases That Didn’t Exist
The worker cited two WCAT decisions that AI had fabricated. When the tribunal searched for them, the cases either didn’t exist at all or were unrelated to the issue.
This is a known problem with AI models: they “hallucinate” legal citations.
(WCAT Decision A2501051)
2. AI Misinterpreted or Invented WorkSafeBC Policy
The worker relied on a policy that AI had pulled from outdated or irrelevant sources. The tribunal confirmed that:
- The cited policy was not current
- It did not say what the AI-generated submission claimed
- It had nothing to do with late filing of prohibited action complaints
3. AI Provided Incorrect Legal Arguments About Time Limits
The worker argued that "special circumstances" allowed him to file a prohibited action complaint late. AI confidently generated an incorrect legal interpretation, but:
- Section 49 of the Act does not allow late filing
- WCAT has no jurisdiction to extend the one-year time limit
- The worker’s entire argument had no legal basis
4. The AI Submission Made the Worker’s Case Worse
WCAT was clear: because the submission contained inaccurate information and fabricated legal references, it did not help the worker’s case at all.
5. The Tribunal Warned That AI Can Lead to Costs Against a Party
WCAT noted that AI-generated submissions can waste significant time for decision-makers and other parties. The decision even referenced situations where tribunals ordered costs against parties who relied on inaccurate AI-generated material.
This case is an important reminder: AI is not a lawyer, and bad advice can have irreversible consequences.
AI Cannot Understand the Evidence-Driven Nature of WCB Claims
WorkSafeBC cases rely heavily on:
- Medical reports
- Functional ability information
- Employer evidence
- Job duties
- Timing and deadlines
- Nuanced legal tests
AI cannot evaluate evidence, identify missing documentation, or connect the dots the way a trained lawyer can.
Why Self-Representation Using AI Often Leads to Serious Mistakes
We frequently see workers unintentionally harming their own claims by:
- Missing strict deadlines
- Filing the wrong type of appeal
- Using incorrect terminology
- Submitting irrelevant or harmful information
- Misunderstanding WorkSafeBC policies
- Making arguments based on incorrect interpretations of the Act
AI often produces confident-sounding explanations, but confidence does not equal accuracy.
How Working With a Lawyer Prevents These Problems
A WorkSafeBC lawyer can:
- Interpret the law correctly
- Identify what evidence is actually required
- Understand deadlines that cannot be extended
- Prepare legally sound submissions
- Recognize when a policy or case applies (or does not apply)
- Prevent irreversible errors early in the process
Even one consultation could stop the kinds of mistakes seen in the WCAT example.
Using AI as a Tool, Not a Replacement for Legal Advice
AI can be helpful for:
- Summaries
- Definitions
- Preparing questions to ask a lawyer
But it cannot replace:
- Legal strategy
- Evidence analysis
- Policy interpretation
- Tribunal experience
- Advocacy

The WCAT case illustrates exactly what can happen when someone relies on AI for legal decision-making: the appeal was dismissed without the merits ever being considered.
The WCAT Deputy Registrar wrote:
It appears that the worker’s submission was created, at least partly, with the use of artificial intelligence. It is widely known that large language-based artificial intelligence models can prepare lengthy submissions that sound like they were written by a person with expertise. However, these models can also “hallucinate” legal cases, meaning they make them up. (See, for example, Zhang v. Chen, 2024 BCSC 285 at paragraph 38, or Geismayr v. The Owners, Strata Plan KAS 1970, 2025 BCCRT 217 at paragraph 25). The worker’s submission contains these types of hallucinations. If the worker did not use artificial intelligence to prepare his submission, the only other reasonable explanation is that he deliberately fabricated the policy and cases. I find it more likely he did not fabricate the cases, but that artificial intelligence did.
There is nothing in WCAT’s Manual of Rules of Practice and Procedure right now that prevents parties from relying on artificial intelligence. However, parties have an obligation in WCAT’s Code of Conduct not to put forth information that is known to be untrue. Given the known limitations of artificial intelligence, in my view this obligation includes, at minimum, an obligation to make sure any cases, laws or policies cited in a submission created by artificial intelligence relate to the issue they are being cited for. I accept that artificial intelligence can be an exceptionally helpful tool at times. However, while the technology continues to evolve, at the moment it is not a reliable tool for legal research and it should not be relied on for legal cases.
Tribunal decision-makers have an obligation to provide sufficient reasons for their decisions, but decision-makers at some tribunals (see for example AQ v. BW, 2025 BCCRT 907) have concluded that this duty does not include the obligation to respond to submissions concocted by artificial intelligence which have no basis in law. Therefore, parties who rely on artificial intelligence should be aware that their arguments may not be addressed if they are not based in law.
I also caution that, under the Workers Compensation Act Appeal Regulation and item 16.2 (Costs) of the Manual of Rules of Practice and Procedure, appeal tribunals can award costs to another party where the other party caused costs to be incurred without reasonable cause. Trying to sort through and locate cases and policies that, ultimately, are found not to exist, can take a lot of additional time for both the decision-maker and the other party to the appeal. Someone who uses artificial intelligence can generate a submission with a simple prompt. The people who must review this submission may need to spend considerably more time assessing its accuracy. Tribunals and courts who have the authority to award costs have done just that in cases where parties provide inaccurate submissions prepared by artificial intelligence (see e.g. Simpson v. Hung Long Enterprises Inc., 2025 BCCRT 525).
Perhaps the most significant problem with the worker’s artificially-generated submission is that, because of its inaccuracy, it has not helped the worker’s case.
AI is powerful, but it is not a safe tool for legal submissions. As WCAT made clear, AI-generated arguments may contain fabricated cases, incorrect policies, and misleading interpretations that can damage a worker's claim.
If you are facing a WorkSafeBC decision, denial, or deadline, speaking with a lawyer early can prevent the kind of harm demonstrated in this real case. Gosal & Company is here to help you avoid costly mistakes and protect your benefits.
