
Tuesday, April 14, 2026

Mastering Patent Claim Construction with LLMs (4): Bridging Knowledge Gaps and Designing Expert-Level Skills

Interpreting Patent Claims Perfectly with LLMs (3)

Mastering Patent Claim Construction with LLMs (4): Filling the Gaps in the Knowledge Base

The Final Stage of Supplementing the Sources

This is now the final stage of supplementing the source materials. In the earlier stages, I gradually built up relevant materials in NotebookLM and organized the legal principles and practical standards needed for patent claim construction. But simply collecting a large volume of material does not automatically create a strong knowledge base. In practice, materials are often overly concentrated on certain issues, while foundational legal principles that are essential in actual practice may still be missing. That is why, at this final stage, it is necessary to examine not only what the accumulated sources already cover well, but also what they fail to cover.

The core objective of this stage is simple. Rather than continuing to add more knowledge based on what has already been collected, the real task is to identify the gaps in the knowledge base. In other words, this is not really a stage for re-confirming “what we already know,” but more accurately a stage for exposing “what we still do not know.”

To do that, I entered the following instruction into the NotebookLM chat window.

PROMPT
Review the attached sources and analyze whether there are any missing pieces of information. In particular, please examine what core information or context is essential to fully understand this topic but is currently missing from the sources, and explain it specifically.

The purpose of this instruction is not summarization or organization. It is intended to make the LLM identify what core legal principles or contextual elements are still missing, even though they are essential for a complete practical understanding based on the materials collected so far. Framing the question this way prevents the model from giving a superficial response like, “You’ve assembled the materials well,” and instead pushes it to identify the deficiencies in a structured and practical way.


Items Identified as Requiring Supplementary Materials

In response to the instruction above, the materials requiring supplementation were organized as follows.

Output
A comprehensive review of the currently provided sources and conversation history shows that while advanced special issues in patent claim construction and infringement analysis—such as PBP claims, functional claims, and the suture-line cases—have been covered in substantial depth, several core foundational principles and practical contexts that are essential for fully understanding and applying this area in practice are still missing. Specifically, the following key information is either absent or needs supplementation.

1. Missing Korean leading cases on the requirements for ordinary prosecution history estoppel (“intentional exclusion”)
- Current status: The latest Korean Supreme Court decision dealing with estoppel in the context of divisional applications (2023Hu11340) and the discussion of the fifth requirement of the doctrine of equivalents through Japan’s Maxacalcitol case are both covered in significant detail.
- Missing context: What is missing is a concrete factual and legal analysis of the traditional Korean Supreme Court leading cases (e.g., Supreme Court Decision 2002Hu1768) that clearly set out how strong or explicit a statement in an office-action response or amendment must be, from an objective and outward-looking perspective, to constitute “intentional exclusion” in ordinary Korean patent litigation practice. There is therefore a gap in establishing the Korean baseline for measuring the degree of intentional disclaimer.

2. The strict limits and requirements for applying the Lexicographer Rule
- Current status: Many sources already include the overarching principle and cases stating that when claim language is unclear, the detailed description of the invention and the drawings should be consulted.
- Missing context: What is missing is a deeper case-based analysis of the strict requirements and limitations of the Lexicographer Rule—specifically, how explicit and definitive an applicant’s definition in the specification must be before that definition can displace the ordinary meaning of a term in the art.

3. Korean applications of the Public Dedication Rule based on the specification
- Current status: Some Japanese case commentaries briefly mention whether the dedication doctrine has been adopted, but there is little to no concrete treatment of how it has been applied under Korean law.
- Missing context: The current materials lack a clear account of the Korean Supreme Court’s position and practical treatment where the patentee discloses a particular embodiment or substitute in the specification but does not claim it, and whether that subject matter is deemed dedicated to the public so as to bar application of the doctrine of equivalents. Because this is a powerful defense against an equivalents claim, it is essential material that should be supplemented.

4. “Omission infringement” and “incomplete use inventions” as exceptions to the All Elements Rule (AER)
- Current status: The existing materials address, at a sophisticated level, issues such as the suture-line decision overcoming geographic limits, multi-actor infringement, and indirect infringement as exceptions to the All Elements Rule.
- Missing context: However, the question of whether infringement can still be found when the defendant strategically omits or only incompletely uses a relatively less important element of the patented invention in an effort to design around the claim is discussed only briefly, in about a single paragraph. That is not enough to fully understand the range of defensive arguments against design-arounds.

5. Standards for evaluating “critical significance” when applying the doctrine of equivalents to numerical limitation inventions
- Current status: The latest Korean Supreme Court case on disclosure requirements for numerical limitation inventions (2021Hu10886), including enablement and written-description support, is already covered.
- Missing context: What is missing is the Korean courts’ concrete standard, at the infringement stage, for determining whether equivalents may be found when the accused product falls just slightly outside a claimed numerical range—for example, whether the presence or absence of “critical significance” excludes application of the doctrine of equivalents. This is indispensable for a full understanding of both offensive and defensive logic in numerical-limitation cases where there is no literal infringement.

Important Practical Insights Gained at This Stage

The result above means more than simply “we need more materials.” From a practical perspective, the more important message is that even if the materials collected so far are already highly sophisticated, the entire framework can become unstable if the baseline of general legal doctrine is missing.

For example, even if you have assembled abundant materials on difficult special issues such as PBP claims and functional claims, the standards for interpreting claim scope in an actual case can still become unstable if foundational principles—such as the requirements for ordinary prosecution history estoppel or the limits on the Lexicographer Rule—have not been properly organized. Put differently, special issues are “advanced applications,” while general doctrine is “basic grammar.” If the basic grammar is weak, the advanced applications will inevitably become unstable as well.

That is why I did not simply leave the omissions identified above as a note. Instead, I copied them back into the NotebookLM source window and activated the deep research function to supplement the missing cases and legal principles. This feedback step is extremely important: by feeding the LLM-identified “knowledge gaps” back into the system as inputs, a simple collection-oriented repository evolves into a self-correcting knowledge base.


Now It Is Time to Organize the Knowledge to Be Injected into the Claude Skill

The next stage is to use the supplemented materials to extract and organize only the core knowledge that should be injected for drafting the Claude Skill. This is where many people make a mistake. Once they have gathered a large amount of material, they tend to think they should put as much of it as possible into the Skill. But in practice, the opposite is often true. If the background knowledge loaded into the Skill becomes too extensive, the model may get buried in explanations or lose sight of priorities rather than following the core principles clearly. At this stage, therefore, accurate compression matters more than volume.

When I draft a Claude Skill, I usually prefer to compress the core principles, framework, workflow, and output format into about 5,000 characters or less before injecting them. This point is especially important in practice. The skill guide, SKILL.md, is generally recommended to remain under 5,000 words, with more detailed knowledge separated into the references/ folder. In addition, it is more efficient to reflect required input information in the trigger description rather than in the body of the Skill itself. That way, Claude can assess input sufficiency before the Skill is even called.
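As a rough illustration of the division of labor described above, a skill file might look like the sketch below. The folder layout and the frontmatter fields follow Anthropic's published skill format, but the name, description, and file names here are my own hypothetical example, not the actual skill discussed in this post:

```markdown
---
name: patent-claim-construction
description: >
  Interprets Korean patent claims and drafts 7-column claim charts.
  Requires the claim text; optionally the specification, prosecution
  history, and accused product. (Stating the required inputs here, in
  the trigger description, lets Claude assess input sufficiency before
  the Skill is loaded.)
---
# Korean Patent Claim Construction

(compressed core principles, framework, workflow, and output format —
kept well under the recommended length)

Detailed case summaries and extended examples live in `references/`
(e.g., `references/case-law.md`), loaded only when actually needed.
```

The key design choice is the same one the paragraph above makes: SKILL.md stays short and directive, while bulky background knowledge is pushed into separate reference files.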

How to Ask Questions to Extract the Core Knowledge While All Sources Are Selected

With all sources selected, I enter the following questions one by one into the NotebookLM chat window to organize the background knowledge that will be injected into the Skill. At present, I have 60 sources registered in my notebook.

PROMPT
1. What are the core principles that consistently appear across all of these sources in relation to patent claim construction?
2. When all of the sources are analyzed together, what framework is mentioned most frequently for interpreting patent claims?
3. Looking at claim construction step by step, what does an excellent real-world process for interpreting the language of a patent claim actually look like?
4. What is the concrete framework or workflow strategy used by Korean courts for patent claim interpretation? To prepare a claim chart for claim construction, please explain in detail how to break down the claim language, extract and interpret the terms or limitations that become the targets of interpretation, and how to draft the claim chart itself.

This sequence of questions was not chosen arbitrarily. The reason I ask first about the common principles, then the framework, then the process, and finally the concrete drafting method is to guide the LLM into organizing its answer in the sequence of abstract principles → procedural structure → practical execution method. This makes it much easier to transfer the results later into a Skill without the information becoming disordered and scattered, and it helps maintain a relatively stable structure.


The Final Claude Skill Injection Guide Organized in NotebookLM

The final guide I organized in NotebookLM for injection into the Claude Skill is as follows.

PROMPT
Korean courts, in determining the scope of patent rights in infringement litigation and invalidation proceedings, apply in practice a **multidimensional six-step interpretive process (workflow)** based on the four core principles of **literal interpretation, reference to the specification, prosecution history estoppel, and reference to the prior art**. Based on that framework, the following explains the specific methods of decomposition and extraction, as well as the drafting format, for preparing a Claim Chart.

### 1. The Korean Courts’ Six-Step Framework for Interpreting Patent Claims
- Step 1 (Claim decomposition and initial delimitation based on the claim language): Break the claim into individual elements, which are the minimum units for invalidity and infringement analysis, identify the organic relationships among them, and then define the initial scope of the patent right based on the ordinary and general meaning of the terms from the viewpoint of a person having ordinary skill in the art (PHOSITA).
- Step 2 (Reference to intrinsic evidence and examination of technical meaning): Because it is often difficult to ascertain the technical meaning from the claim language alone, the detailed description of the invention and the drawings must be consulted. In particular, the “Lexicographer Rule” applies where the applicant has defined a specific term in the specification, and the analysis should go beyond the immediate context to examine the problem-solving principle and functional effect (technical meaning) of the relevant element.
- Step 3 (Filtering out improper narrowing and expansion): Even when the specification is taken into account, courts strictly block errors that improperly narrow claim scope by relying only on a specific embodiment or configuration disclosed in the specification, or improperly expand the claim scope beyond what the claim language reasonably supports.
- Step 4 (Applying tailored legal rules to special types of claims): For functional claims (e.g., “means for ~”), the analysis must consider whether limiting construction to the embodiments is warranted; for product-by-process (PbP) claims, the issue is not the process itself but the “structure or properties of the product” defined by that process; and for numerical-limitation inventions, the analysis must focus on whether the claimed numerical range has “critical significance.”
- Step 5 (Reference to prosecution history and review of intentional exclusion): Review the prosecution history from filing to registration—such as office-action responses and amendments—to determine whether the applicant intentionally excluded specific subject matter in order to avoid the prior art, in which case that subject matter must be firmly excluded from the scope of the patent right.
- Step 6 (Infringement comparison and limits on the doctrine of equivalents): Compare the finalized scope of the patent right with the accused product to determine whether all elements are present (All Elements Rule, AER). If differences exist, determine whether the doctrine of equivalents (DOE) applies by examining whether the patented invention and the accused product share the same principle for solving the problem, among other factors.

### 2. Methods for Claim Decomposition and Extraction for Preparing a Claim Chart
2.1. Structure- and Function-Based Decomposition: Decompose the claim in a multidimensional way into the physical structure of the elements and the functions and roles performed by those elements.
2.2. Identification of Organic Relationships: Go beyond merely listing the elements and clearly identify the **physical and functional connections (relationships)** among different elements that contribute to solving the technical problem.
2.3. Extraction of Core Interpretive Targets and Limitations (Special Issues): Extract the key terms that may become issues in claim construction. In particular, functionally expressed terms (“means for ~”), PbP processes, numerical limitations, and parameters should be targeted so they can be separately reviewed for issues such as limiting construction and lack of enablement.

### 3. The Most Desirable Concrete Claim Chart Methodology and Format
From a practical standpoint, the best Claim Chart is not just a literal element-by-element comparison. It should be drafted as a seven-column system that can fully map both doctrine-of-equivalents (DOE) defenses in cases of literal non-infringement and the issues presented by special types of claims.

| No. | Decomposed Claim Language (Including Organic Relationships) | Core Interpretive Target (Special Issue: Functional / PbP / Numerical) | Specification / Drawing Support (Mapping of Intrinsic Evidence) | Objective Technical Meaning and Result of Literal Interpretation (from the PHOSITA Perspective) | DOE Comparison: Identification of the Principle for Solving the Problem | Limitation on Rights: Prosecution History Estoppel and Grounds for Limiting Construction |
| :-- | :--- | :--- | :--- | :--- | :--- | :--- |
| 1 | (e.g., a cleaning unit that supplies cleaning water by electrolyzing filtered water) | cleaning unit (functional claim) | [paragraph 14], [figure 3] | (e.g., an internal module that physically cleans through electrodes) | (e.g., the principle of maximizing eco-friendly cleaning efficiency by excluding chemicals) | (e.g., intentional exclusion of a “chemical additive method” during argument submission) |
| 2 | (e.g., a tablet manufactured by direct compression) | manufactured by direct compression (PbP claim) | [paragraphs 15–16] | (e.g., a porous tablet structure having 15% inter-particle porosity formed through the direct compression process) | (e.g., the principle of controlling disintegration speed by adjusting porosity) | (e.g., structurally / physically different from tablets manufactured by wet granulation) |

[Drafting Guide and Explanation for Each Column]
1) Decomposed Claim Language: Set out verbatim the claim language as broken down into the minimum units, but do so in a way that reveals not just a simple list of elements, but also the physical and functional relationships among them.
2) Core Interpretive Target: Extract the key terms that present infringement or invalidity issues—such as functional expressions, PbP language, and numerical limitations—and identify the type in parentheses.
3) Specification / Drawing Support (Mapping of Intrinsic Evidence): Provide a one-to-one mapping to the specific paragraph numbers and figure numbers in the specification that show where the term is defined, where the supporting embodiment for a functional claim appears, where PbP leads to a specific physical-property change, and where the technical threshold for the numerical limitation is shown.
4) Objective Technical Meaning and Result of Literal Interpretation: Describe the substantive meaning derived from the specification from the viewpoint of a person having ordinary skill in the art (PHOSITA). In PbP claims, this should not simply repeat the process itself, but should translate it into the “structure or properties of the product” defined by that process.
5) DOE Comparison (Identification of the Principle for Solving the Problem): In anticipation of literal non-infringement, identify in advance the **“problem-solving principle unique to the patented invention (the core technical idea)”** to which the relevant element belongs, based on the specification and the prior art.
6) Limitation on Rights (Prosecution History Estoppel and Grounds for Limiting Construction): Carefully record any history of **intentional exclusion** during prosecution, any reason why the claim should be limited to the embodiments because the broader reading is not supported by the specification, and any invalidity grounds such as lack of enablement across the full scope of a numerical limitation.
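The seven-column chart is ultimately structured data, so it can also be represented programmatically. The sketch below is only an illustration: the field names, headers, and sample row values are my own shorthand for the columns described above, not part of any real case or tool.

```python
from dataclasses import dataclass, fields

@dataclass
class ClaimChartRow:
    no: int
    decomposed_language: str   # claim language in minimum units, coupling relationships included
    interpretive_target: str   # key term plus its special-issue tag (functional / PbP / numerical)
    intrinsic_support: str     # paragraph and figure numbers in the specification
    technical_meaning: str     # objective technical meaning from the PHOSITA perspective
    doe_principle: str         # problem-solving principle for a doctrine-of-equivalents comparison
    limitation: str            # estoppel history or grounds for limiting construction

HEADERS = ["No.", "Decomposed Claim Language", "Core Interpretive Target",
           "Specification / Drawing Support", "Objective Technical Meaning",
           "DOE Comparison", "Limitation on Rights"]

def to_markdown(rows):
    """Render claim chart rows as a markdown table."""
    lines = ["| " + " | ".join(HEADERS) + " |",
             "| " + " | ".join(":--" for _ in HEADERS) + " |"]
    for r in rows:
        lines.append("| " + " | ".join(str(getattr(r, f.name)) for f in fields(r)) + " |")
    return "\n".join(lines)

row = ClaimChartRow(
    no=1,
    decomposed_language="a cleaning unit that supplies cleaning water by electrolyzing filtered water",
    interpretive_target="cleaning unit (functional claim)",
    intrinsic_support="[paragraph 14], [figure 3]",
    technical_meaning="an internal module that physically cleans through electrodes",
    doe_principle="eco-friendly cleaning efficiency by excluding chemicals",
    limitation="intentional exclusion of a chemical additive method",
)
print(to_markdown([row]))
```

Fixing the columns as a typed record is the same discipline the guide demands of the chart itself: every element must be analyzed against all seven perspectives, and a missing cell is immediately visible.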

Now I Ask NotebookLM to Create the Expert Skill Itself

At this point, based on the organized knowledge, I ask NotebookLM to actually create a patent claim construction expert Skill. This stage is not merely an experiment. If the principles and framework organized up to this point are sufficiently well-structured, NotebookLM can generate a draft of an expert-level Skill based on them. That draft is not the finished product, but it is extremely useful as a starting point for human review and refinement.

The instruction entered at that point was as follows.

PROMPT
Based on this “Korean patent claim construction doctrine” knowledge base extracted in NotebookLM, create an expert skill file. The skill should include behavior rules, an output decision guide, a problem-solving methodology, and a Claim Chart output style.

Comparing the Generated Skill with the Existing Skill Guide

When I compare the Skill generated by NotebookLM with the Skill guide I created earlier, the result is as follows.

Output
Based on the “Korean patent claim construction doctrine” knowledge base and the conversation history you provided, I am preparing a top-tier expert-level **“Korean Patent Claim Construction and Claim Chart Drafting Expert Skill File”** that can be deployed immediately in patent litigation and trial practice. This skill file can be used as a system prompt or guideline for an AI or practitioner conducting patent analysis work.

# [Expert Skill File] Korean Patent Claim Construction and Infringement Analysis Agent

## 1. Behavior Rules
You are a top-tier patent dispute expert with deep command of Korean Supreme Court and Patent Court case law. When interpreting claims and analyzing infringement, you must strictly comply with the following rules.
- Strict case-law-based interpretation: All interpretation must be grounded in the current Patent Act Article 97 and Korean Supreme Court case law as the controlling standards, including the literal-interpretation principle, reference to the specification, prosecution history estoppel, and reference to the prior art.
- Maintain the PHOSITA perspective: The meaning and technical significance of every term must be interpreted from the objective viewpoint of a person having ordinary skill in the art (PHOSITA) at the time the patent application was filed.
- Strict separation between interpretation and limitation: You should actively consult the detailed description of the invention in order to clarify the technical meaning of a term, but you must strictly reject any improper “importing of limitations” or improper expansion based on pulling in conditions or configurations from the embodiments that are not actually recited in the claims.
- Exceptional treatment for special types of claims: If a functional claim, a product-by-process (PbP) claim, or a numerical-limitation invention is identified, you must proactively apply the Korean Supreme Court’s tailored doctrines that go beyond the general interpretive rules, including limiting construction, translation into structure/properties, and critical-significance analysis.
- Secure defense logic first: In infringement analysis, the All Elements Rule (AER) must be applied first and foremost. If literal non-infringement is anticipated, you must simultaneously review prosecution history estoppel (intentional exclusion) and the doctrine of equivalents (DOE) in advance so that both offensive and defensive arguments can be built in a balanced way.

## 2. Output Decision Guide
Optimize the output according to the type of user input.
- [Input] Only the text of a specific claim is provided: Break the claim down into minimum units (structure/function), identify organic relationships, and provide a first-stage diagnostic report on whether any special issues (functional claim, PbP, numerical limitation) are present.
- [Input] Claim text + specification (description / drawings) are provided: Determine whether the Lexicographer Rule applies, derive the objective technical meaning, and perform a second-stage analysis that warns of the risks of improper narrowing or overbroad interpretation.
- [Input] Claim text + specification + prosecution history (office actions / responses) are provided: Analyze whether intentional exclusion (Korean-style estoppel) has been established and derive the finally determined scope of the patent right.
- [Input] Entire patented invention + accused product are provided: Apply the full “six-step problem-solving methodology” and produce a final **[7-Column Claim Chart]**.

## 3. Problem-Solving Methodology (6-Step Problem Solving Methodology)
Carry out the full process from claim construction through infringement analysis under the following multidimensional six-step workflow.
- Step 1 (Claim decomposition and identification of organic relationships): Break the claim down into the minimum units that allow invalidity and infringement analysis, using structure and function as the organizing principles, while also deriving the overall relationships among the elements. Do not stop at simple decomposition; define the invention by comprehensively identifying the physical coupling relationships and the functional / organic interconnections among the elements.
- Step 2 (Determining technical meaning through literal interpretation and reference to the specification): Begin with the ordinary meaning of the claim language, then concretize that meaning by mapping it one-to-one against the specification and drawings from the viewpoint of a PHOSITA.
  - Lexicographer Rule: If the applicant explicitly defined a specific term in the specification, apply that definition first.
  - Examination of technical meaning: Go beyond the immediate contextual meaning and examine, in an objective and reasonable way, the “technical meaning” of the relevant element, including the technical problem it is intended to solve, the implementation principle, and the functional effect, so that the invention’s intended function and operation are properly realized.
- Step 3 (Strict control of improper narrowing and expansion): Strictly distinguish between “interpreting in light of the specification” and “importing narrowing or expansion from the specification.”
  - No improper narrowing: Where the claim language is clear, exclude interpretations that arbitrarily narrow the claim by importing unclaimed elements from specific embodiments, figure shapes, or specific limitations described in the specification.
  - No improper expansion: Even when consulting the specification, prevent overbroad interpretation that goes beyond the disclosed content or moves in a direction unrelated to the core technical meaning of the invention.
- Step 4 (Applying tailored exception doctrines to special types of claims): Unlike ordinary claims, claims drafted in special forms must be interpreted using the Korean Supreme Court’s tailored doctrines.
  - Functional claims (“means for ~”): If literal interpretation would make the claim unduly broad in light of the specification, review whether there are grounds for limiting construction to specific embodiments or structures.
  - PbP claims (products recited through a manufacturing process): The substantive element is not the manufacturing process itself, but rather the **“structure or properties of the final product”** defined by that process.
  - Numerical limitation / parameter inventions: Rigorously assess whether the entire claimed range is enabled (disclosure requirement) and whether the claimed range shows a remarkable effect that distinguishes it from the prior art (critical significance).
- Step 5 (Reference to extrinsic evidence and review of estoppel / intentional exclusion): Analyze the entire file history, including office-action responses and amendments, from filing through registration, divisional applications, and correction proceedings. If the applicant **intentionally excluded** a specific element to avoid a rejection, or if subject matter was **dedicated to the public** by being disclosed in the specification but omitted from the claims, that subject matter must be excluded from the scope of the patent right.
- Step 6 (Infringement comparison and limits on application of the doctrine of equivalents): Based on the finalized meanings of the elements and their organic relationships, compare them one-to-one with the accused product (or method) and make the final infringement determination.
  - Literal infringement (AER): First determine literal infringement under the All Elements Rule (AER) by comprehensively examining both element-by-element correspondence and the overall identity of the combination and functional operation.
  - Infringement under the doctrine of equivalents (DOE): If there is a literal difference, determine whether equivalent infringement exists by identifying, in light of the prior art, whether the patented invention and the accused product share the same “problem-solving principle” unique to the patented invention, along with the same functional effect.

## 4. Claim Chart Output Style
All claim analysis results and infringement / invalidity comparison results must be output in markdown table format using the following **“7-Column Claim Chart Template.”**

### [Claim Chart Template]
| No. | Decomposed Claim Language (Including Organic Relationships) | Core Interpretive Target (Special Issue Noted) | Specification / Drawing Support (Intrinsic Evidence) | Objective Technical Meaning and Result of Literal Interpretation (PHOSITA Perspective) | DOE Comparison: Identification of the Problem-Solving Principle | Limitation on Rights: Prosecution History Estoppel and Grounds for Limiting Construction |
| :-- | :--- | :--- | :--- | :--- | :--- | :--- |
| 1 | [State the claim language decomposed into minimum units; do not merely list elements, but specify the coupling relationships] | [Extract the key term] (mark the type, e.g., functional / PbP / numerical limitation) | [Map the paragraph numbers and figure numbers where the definition or embodiment appears] | [State the substantive meaning of the element as determined in light of the specification; for PbP, translate into “structure / properties”] | [In anticipation of literal non-infringement, state the patented invention’s unique “core technical idea (problem-solving principle)” to which the element belongs] | [Record any history of “intentional exclusion” in responses or amendments, or any clear invalidity / deficiency grounds requiring limiting construction to the embodiments] |
| 2 | ... | ... | ... | ... | ... | ... |

### [Points to Watch When Drafting the Claim Chart]
1. **Emphasize organic relationships:** In the “Decomposed Claim Language” column, do not merely chop the claim into elements A, B, and C. Draft it so that the physical and functional interaction among the elements is clearly shown.
2. **Mandatory translation for special claim types:** If a PbP claim is identified, do not write merely “manufactured by ~ method” in the “Objective Technical Meaning” column. Instead, it must be translated into language such as “an internal structure / physical property formed by that process.”
3. **Identify the problem-solving principle in advance for DOE:** Every chart should proactively identify the **“problem-solving principle”** by which the relevant element distinguished itself from the prior art and contributed to the advance, so that it is immediately possible to assess whether a design-around can still be captured if literal infringement is denied.
4. **Thoroughly record estoppel:** If the prosecution history confirms any abandoned subject matter or any substitute dedicated to the public, summarize it in the “Limitation on Rights” column so that any future doctrine-of-equivalents theory can be blocked at the outset.
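The Output Decision Guide in the generated skill file is essentially a dispatch on which materials the user has supplied. The minimal sketch below only illustrates that branching logic; the function name and the stage labels are my own shorthand, not text from the skill file.

```python
def select_analysis_stage(has_claim: bool, has_spec: bool,
                          has_history: bool, has_accused_product: bool) -> str:
    """Map the available inputs to the analysis stage the skill file prescribes."""
    if has_claim and has_spec and has_history and has_accused_product:
        # full six-step methodology ending in the 7-column claim chart
        return "full six-step analysis + 7-column claim chart"
    if has_claim and has_spec and has_history:
        # estoppel / intentional-exclusion review, final claim scope
        return "intentional-exclusion analysis, final claim scope"
    if has_claim and has_spec:
        # lexicographer rule, objective technical meaning, narrowing/expansion warnings
        return "lexicographer rule + objective technical meaning"
    if has_claim:
        # decomposition and special-issue diagnostics only
        return "decomposition + special-issue diagnostic report"
    return "insufficient input: at least the claim text is required"
```

Checking input sufficiency up front, before any substantive analysis, mirrors the earlier point that required inputs belong in the skill's trigger description.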

The Meaning of This Stage: Not “Getting Answers,” but “Designing a Way of Thinking”

Looking back over everything up to this point, it becomes clear that the key is not simply extracting answers from the LLM. What really matters is designing the order in which the LLM thinks, what it reviews first, and the format in which it organizes the result.

From a practical perspective, creating a good work structure is even more important than choosing a good model. And that work structure is built through the following flow.

1. First, gather the materials.
2. Then identify the gaps in those materials.
3. Supplement the missing pieces.
4. Compress only the core principles.
5. Structure those principles into a framework and workflow.
6. Finally, convert them into a Skill so they become a repeatable analytical system.

If this process is done properly, you can go beyond getting a single good answer and actually create a repeatable, practice-oriented patent claim interpretation engine.
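The six-step flow above can be sketched as a simple feedback loop. This is a toy illustration only: `ask_llm` is a hypothetical stand-in for the NotebookLM chat call, and the strings are placeholders, not real prompts or outputs.

```python
def ask_llm(prompt: str, sources: list) -> str:
    # hypothetical stand-in for a NotebookLM / LLM chat call;
    # here it only reports how many sources were consulted
    return f"analyzed {len(sources)} sources for: {prompt[:30]}"

def build_knowledge_base(initial_sources: list, rounds: int = 2):
    """Gather -> find gaps -> supplement -> compress, repeated for a few rounds."""
    sources = list(initial_sources)
    for _ in range(rounds):
        gaps = ask_llm("identify missing core principles", sources)
        # in practice: run deep research on each gap and add the results as new sources
        sources.append(f"supplement for: {gaps}")
    # final compression of the core principles, framework, and workflow
    core = ask_llm("compress the core principles and workflow", sources)
    return sources, core
```

The point of the loop is the feedback edge: the gap analysis produced in one round becomes an input to the next, which is what turns a static source collection into a self-correcting knowledge base.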


Closing Thoughts

On the surface, this stage may look like nothing more than “supplementing sources” and “preparing to draft a Skill.” In reality, however, it is much closer to a stage of designing a system of thought that allows an LLM to perform the highly demanding task of patent claim construction in a stable and reliable way.

In other words, the key at this stage is not putting in more information, but making the model think under more precise standards. Ultimately, what determines the quality of the output is not the name of the model, but what materials are given to it, how they are structured, and under what standards the model is made to operate.

In the next stage, I will use this organized Skill guide and the Skill that was actually generated to select a real case, draft a Claim Chart, and compare the result with the claim construction in the court decision or trial decision. That comparison should lead to a much deeper practical understanding.

Saturday, September 20, 2025

Can AI Be Your Paralegal? (Only if You Follow This 5-Step Verification Process)

A legal professional works on a laptop, symbolizing the intersection of law and AI technology.

 

Blogging_CS · 10 min read

Generative AI promises to revolutionize the speed of legal research, but a critical pitfall lies hidden beneath the surface: “AI hallucinations.” Because AI can fabricate non-existent case law that looks authentic, legal professionals are now facing the paradox of spending more time verifying AI outputs than it would have taken to draft the work themselves.

This isn’t a hypothetical concern. In Mata v. Avianca, a case in the Southern District of New York, attorneys faced sanctions for submitting a brief containing fake judicial opinions generated by AI. Even more striking is Noland v. Land, where the California Court of Appeal sanctioned an attorney for filing a brief in which 21 of 23 case citations were complete fabrications. The penalty was severe: a $10,000 fine, mandatory notification to the client, and a report to the state bar.

These rulings send a clear message: before any discussion of technology, the user’s attitude and responsibility are paramount. Attorneys (including patent attorneys) have a fundamental, non-delegable duty to read and verify every citation in documents submitted to the court, regardless of the source. With the risk of AI hallucinations now widely known, claiming ignorance—“I didn’t know the AI could make things up”—is no longer a viable excuse. Ultimately, the final line of defense is a mindset of professional skepticism: question every AI output and cross-reference every legal basis with its original source.


A 5-Step Practical Workflow for Risk Management

Apply the following five-step workflow to all AI-assisted tasks to systematically manage risk.

  1. Step 1: Define the Task & Select Trusted Data

    Set a clear objective for the AI and personally select the most reliable source materials (e.g., recent case law, statutes, internal documents). Remember that the “Garbage In, Garbage Out” principle applies from the very beginning.

  2. Step 2: Draft with RAG (Retrieval-Augmented Generation)

    Generate the initial draft based on your selected materials. RAG is the most effective anti-hallucination technique, as it forces the AI to base its answers on a trusted external data source you provide, rather than its vast, internal training data.

    Use Case:

    • Drafting an Initial Case Memo: Upload relevant case law, articles, and factual documents to a tool like Google's NotebookLM or Claude. Then, instruct it: “Using only the uploaded documents, summarize the court's criteria for ‘Issue A’ and outline the arguments favorable to our case.” This allows for the rapid creation of a reliable initial memo.
  3. Step 3: Expand Research with Citation-Enabled Tools

    To strengthen or challenge the initial draft's logic, use AI tools that provide source links to broaden your perspective.

    Recommended Tools:

    • Perplexity, Skywork AI: Useful for initial research as they provide source links alongside answers.
    • Gemini's Deep Research feature: Capable of comprehensive analysis on complex legal issues with citations.

    Pitfall:

    • Source Unreliability: The AI may link to personal blogs or irrelevant content. An AI-provided citation is not a verified fact; it must be checked manually.
  4. Step 4: Cross-Verify with Multiple AIs & Refine with Advanced Prompts

    Critically review the output by posing the same question to two or more AIs (e.g., ChatGPT, Gemini, Claude) and enhance the quality of the results through sophisticated prompt engineering.

    Key Prompting Techniques:

    • Assign a Role: “You are a U.S. patent attorney with 15 years of experience specializing in the semiconductor field.”
    • Demand Chain-of-Thought Reasoning: “Think step-by-step to reach your conclusion.”
    • Instruct it to Admit Ignorance: “If you do not know the answer, state that you could not find the information rather than guessing.”
  5. Step 5: Final Human Verification - The Most Critical Step

    You must personally check every sentence, every citation, and every legal argument generated by the AI against its original source. To skip this step is to abdicate your professional duty.
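The grounding step in Step 2 can be made concrete with a short sketch. The code below assembles a RAG-style prompt that combines the techniques from Steps 2 and 4: it restricts the model to user-selected sources, assigns a role, and instructs it to admit ignorance. This is a minimal illustration with a naive keyword retriever standing in for a real vector store; the document names and scoring scheme are invented for the example, not any specific tool's API.

```python
# Minimal sketch of grounded prompt assembly (Step 2 + Step 4 techniques).
# The retriever is a toy word-overlap ranker, not a production vector search.

def retrieve(query: str, documents: dict[str, str], top_k: int = 2) -> list[str]:
    """Rank documents by how many query words appear in them (toy retriever)."""
    words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda name: sum(w in documents[name].lower() for w in words),
        reverse=True,
    )
    return scored[:top_k]

def build_grounded_prompt(question: str, documents: dict[str, str]) -> str:
    """Force the model to answer only from the supplied sources."""
    names = retrieve(question, documents)
    sources = "\n\n".join(f"[{n}]\n{documents[n]}" for n in names)
    return (
        "You are a U.S. patent attorney. Using ONLY the sources below, "
        "answer the question. If the sources do not contain the answer, "
        "state that you could not find the information rather than guessing.\n\n"
        f"SOURCES:\n{sources}\n\nQUESTION: {question}"
    )

docs = {
    "case_law.txt": "The court construed 'fastener' to include adhesive strips.",
    "memo.txt": "Client seeks an opinion on claim scope for fastener terms.",
}
prompt = build_grounded_prompt("How did the court construe 'fastener'?", docs)
print(prompt)
```

The final prompt can then be pasted into NotebookLM, Claude, or any chat interface; the human verification in Step 5 still applies to whatever comes back.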


Advanced Strategies & Firm-Level Policy

Beyond the daily workflow, firms should establish a policy framework to ensure stability and trust in their use of AI.

  • Establish a Multi-Layered Defense Framework: Consider a formal defense-in-depth approach: (Base Layer) Sophisticated prompts → (Structural Layer) RAG for grounding → (Behavioral Layer) Fine-tuning for specialization. Fine-tuning, using tools like ChatGPT's GPTs or Gemini for Enterprise, can train an AI on your firm's past work to enhance accuracy for specific tasks, but requires careful consideration of cost, overfitting, and confidentiality risks.
  • Implement a Confidence-Based Escalation System: Design an internal system that scores the AI's confidence in its responses. If a score falls below a set threshold (e.g., 85%), the output could be automatically flagged for mandatory human review, creating a secondary safety net.
  • Establish Principles for Billing and Client Notification: AI subscription fees should be treated as overhead, not directly billed to clients. Bill for the professional value created by using AI (e.g., deeper analysis, better strategy), not for the “machine’s time.” Include a general disclosure clause in engagement letters stating that the firm may use secure AI tools to improve efficiency, thereby ensuring transparency with clients.
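The confidence-based escalation idea above can be sketched in a few lines. The routing rule below mirrors the 85% threshold example; the idea of a self-reported confidence field is an assumption about your internal tooling, since most public chat interfaces do not expose such a score directly.

```python
# Sketch of a confidence-based escalation policy (illustrative, not a real API).
from dataclasses import dataclass

REVIEW_THRESHOLD = 0.85  # firm-level policy setting from the example above

@dataclass
class AIResponse:
    text: str
    confidence: float  # 0.0-1.0, however your pipeline estimates it

def route(response: AIResponse) -> str:
    """Return the review queue a response should go to under the policy."""
    if response.confidence < REVIEW_THRESHOLD:
        return "mandatory_human_review"
    return "standard_attorney_check"  # Step 5 verification still applies

print(route(AIResponse("The cited case holds...", 0.72)))
print(route(AIResponse("Statute quoted verbatim.", 0.93)))
```

Note that the high-confidence path is a secondary safety net, not an exemption: final human verification remains mandatory either way.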

Conclusion: Final Accountability and the Path Forward

The core of the AI hallucination problem ultimately lies in the professional’s verification mindset. The technologies and workflows discussed today are merely tools. As courts and bar associations have repeatedly warned, the final responsibility rests with the human professional.

“AI is a tool; accountability remains human.”

Only by establishing this principle and combining multi-layered verification strategies with a commitment to direct validation can we use AI safely and effectively. When we invest the time saved by AI into deeper legal analysis and more creative strategy, we evolve into true legal experts of the AI era. AI will not replace you, but the responsibility for documents bearing your name rests solely with you.

Frequently Asked Questions

Q: Can I trust the content if the AI provides a source link?
A: Absolutely not. A source link provided by an AI is merely a claim of where it got the information, not a guarantee of accuracy. The AI can misinterpret or distort the source's content. You must click the link, read the original text, and verify that it has been cited correctly and in context.
Q: What is the safest way to use AI with confidential client information?
A: The default should be to use an enterprise-grade, secure AI service contracted by your firm or a private, on-premise LLM. If you must use a public AI, you are required to completely anonymize all identifying information from your queries. Uploading sensitive data to a public AI service is a serious ethical and security violation.
Q: What is the most common mistake legal professionals make when using AI?
A: Skipping Step 5 of the workflow: “Final Human Verification.” Seeing a well-written, plausible-sounding sentence and copy-pasting it without checking the original source is the easiest way to fall into the hallucination trap, with potentially severe consequences.

Saturday, September 6, 2025

Patentability of LLM Prompts: Overcoming Abstract Idea Rejections

 

Can a Simple Command to an AI Be Patented? This article provides an in-depth analysis of how LLM prompt techniques can transcend mere ‘ideas’ to be recognized as concrete ‘technical inventions,’ exploring key strategies and legal standards across different countries.

It seems that almost no one around us thinks of using patents to protect the prompts, or prompt techniques, that instruct LLMs. At first, I was also skeptical, wondering, ‘Can a simple command to a computer be patented?’ However, as I delved deeper into this topic, I came to the conclusion that it is entirely possible if certain conditions are met. This article is a summary of the thought process I went through, and please bear in mind that it may not yet be an academically established view. 😊

 

🤔 Prompts Aren’t Patentable Because They’re Just ‘Human Thoughts,’ Right?

The first hurdle that comes to mind for many is the principle that ‘human mental processes’ are not patentable subject matter. In fact, the argument that “a prompt is fundamentally human involvement, and technology involving such human mental activity is not patentable” is one of the strongest reasons for rejection in patent examination. This standard has been particularly firm since the U.S. Supreme Court’s Alice Corp. v. CLS Bank decision. It means that merely implementing something on a computer that a person could do in their head is not enough to get a patent.

According to this logic, the act of instructing an AI through a prompt is ultimately an expression of human thought, so one could easily conclude that it cannot be patented. However, this argument is half right and half wrong. And this is precisely where our patent strategy begins.

💡 Good to Know!
What patent law takes issue with as ‘human intervention’ is not the act of giving a command to a system itself. It refers to cases where the core idea of the invention remains at the level of a mental step that can be practically performed in the human mind. Therefore, the key is to prove that our prompt technology transcends this boundary.

 

📊 A Shift in Perspective: From ‘Command’ to ‘Computer Control Technology’

The first step to unlocking the patentability of prompt technology is to change our perspective. We need to redefine our technology not as ‘a message sent from a human to an AI,’ but as ’a technology that controls the internal computational processes of a complex computer system (LLM) through structured data to solve a technical problem and achieve concrete performance improvements.’

If you take a close look at the algorithm behind China’s DeepSeek-R1, you can see that it incorporates various prompt techniques directly into the system itself.

Think about it. The process of assigning a specific expert role to an LLM with billions of parameters, injecting complex library dependency information as context, and combining numerous constraints to control the generation of optimal code is clearly in a realm that ‘cannot practically be performed in the human mind.’ This is a crucial standard for recognizing patent eligibility in the guidelines and case law of the U.S. Patent and Trademark Office (USPTO).

 

🌍 A Comparative Look at Key Examination Standards of Major Patent Offices

The patentability of prompt technology is not assessed uniformly across all countries. If you are considering international filing, it is crucial to understand the subtle differences in perspective among major patent offices.

1. USPTO (United States Patent and Trademark Office) – Emphasis on the Abstract Idea Exception

The USPTO strictly applies the Alice/Mayo two-step test, which originated from Supreme Court case law. Instructions or general linguistic expressions that merely replace human thought processes can be dismissed as “abstract ideas.” However, if it can be demonstrated that the prompt is linked to a concrete technical implementation (e.g., improving model accuracy, optimizing specific hardware operations), there is a chance of it being recognized as patent-eligible subject matter.

2. EPO (European Patent Office) – Focus on Technical Effect

The EPO assesses based on “technical character” and “technical effect.” Simply presenting data input or linguistic rules is considered to lack inventive step, but if the prompt structure serves as a means to solve a technical problem (e.g., improving computational efficiency, optimizing memory usage, enhancing interaction with a specific device), it can be recognized as patent-eligible.

3. KIPO (Korean Intellectual Property Office) – Emphasis on Substantive Requirements for Software Inventions

KIPO places importance on the traditional requirement of “a creation of a technical idea utilizing the laws of nature.” Therefore, a prompt as a mere sentence or linguistic rule is not considered a technical idea, but if it is shown to be combined with a specific algorithm, hardware, or system to produce a concrete technical result, it can be recognized as an invention. In Korean practice, presenting a concrete system structure or processing flow is particularly persuasive.

Key Comparison Summary

| Patent Office | Key Requirement |
| --- | --- |
| USPTO (U.S.) | Emphasis on ‘concrete technical implementation’ to avoid the abstract idea exception |
| EPO (Europe) | Proof of ‘technical effect’ is key; simple data manipulation is insufficient |
| KIPO (Korea) | Must be a technical idea using laws of nature + emphasis on systemic/structural implementation |
⚠️ Implications for International Filing
The same “LLM prompt” technology could be at risk of being dismissed as an “abstract business method” in the United States, a “non-technical linguistic rule” in Europe, and a “mere idea” in Korea. Therefore, when considering international filing, a strategy that clearly articulates the ‘concrete system architecture’ and ‘measurable technical effects’ throughout the specification is essential as a common denominator.

 

🧮 A Practical Guide to Drafting Patent Claims (Detailed)

So, how should you draft patent claims to avoid the ‘human intervention’ attack and clearly establish that it is a ‘technical invention’? Let’s take a closer look at four key strategies.

1. Set the subject as the ‘computer (processor),’ not the ‘person.’

This is the most crucial step in shifting the focus of the invention from the ‘user’s mental activity’ to the ‘machine’s technical operation.’ It must be specified that all steps of the claim are performed by computer hardware (processor, memory, etc.).

  • Bad 👎: A method where a user specifies a persona to an LLM and generates code.
  • Good 👍: A step where a processor, upon receiving a user’s input, assigns a professional persona for a specific programming language to the LLM.

2. Specify the prompt as ‘structured data.’

Instead of abstract expressions like ‘natural language prompt,’ you need to clarify that it is a concrete data structure processed by the computer. This shows that the invention is not just a simple idea.

  • Bad 👎: A step of providing a natural language prompt to the LLM.
  • Good 👍: A step of generating and providing to the LLM a machine-readable context schema that includes library names and version constraints.

3. Claim ‘system performance improvement,’ not the result.

Instead of subjective results like ‘good code,’ you must specify objective and measurable effects that substantially improve the computer’s functionality. This is the core of ‘technical effect.’

  • Bad 👎: A step of generating optimized code.
  • Good 👍: A step of controlling the LLM’s token generation probability through the schema to generate optimized code that reduces code compatibility errors and saves GPU memory usage.

4. Clarify the ‘automation’ process.

It should be specified that all processes after the initial input (data structuring, LLM control, result generation, etc.) are performed automatically by the system without further human judgment, demonstrating that the invention is a reproducible technical process.
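To make strategy 2 tangible, here is one way the ‘machine-readable context schema’ from the claim language could look in practice. The field names (persona, library version constraints, GPU memory limit) are illustrative assumptions drawn from the claim example, not any standard format.

```python
# Illustrative sketch of a "structured context schema" (strategy 2).
# Field names are assumptions for the example, not a standardized payload.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ContextSchema:
    persona: str                      # expert role for the LLM (strategy 1)
    libraries: dict[str, str]         # library name -> version constraint
    gpu_memory_limit_mb: int          # hardware constraint (strategy 3)
    constraints: list[str] = field(default_factory=list)

schema = ContextSchema(
    persona="senior CUDA engineer",
    libraries={"torch": ">=2.1,<3.0", "numpy": ">=1.26"},
    gpu_memory_limit_mb=8192,
    constraints=["no deprecated APIs", "must compile under nvcc 12"],
)

# Serialized, this becomes the concrete data structure the claim recites,
# rather than an abstract 'natural language prompt'.
payload = json.dumps(asdict(schema), indent=2)
print(payload)
```

Reciting a structure like this in the specification gives the examiner something concrete to point to: a defined data object the processor generates and transmits, not a free-form sentence.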

 

📜 Reinforced Claim Example

By integrating all the strategies described above, you can construct a reinforced patent claim as follows.

[Claim] A computer-implemented method for generating optimized code, comprising:

  (a) parsing, by a processor, a user’s natural language input to generate a persona identifier defining an expert role for a specific programming language;
  (b) generating, by the processor, by referencing said input and an external code repository, structured context data including library names, version constraints, and hardware memory usage limits;
  (c) generating, by the processor, a control prompt including said persona identifier and structured context data and transmitting it to an LLM, thereby automatically controlling the internal token generation process of the LLM;
  (d) receiving, from said controlled LLM, optimized code that satisfies said constraints and has a compilation error rate below a predefined threshold and reduced GPU memory usage.

→ This example, instead of focusing on a simple result, greatly increases the chances of patent registration by clarifying system-level measurable technical effects such as ‘reduced compilation error rate’ and ‘reduced GPU memory usage.’

 

Frequently Asked Questions ❓

Q: Can a simple prompt like "write a poem about a cat" be patented?
A: No, that in itself is just an idea and would be difficult to patent. The subject of a patent would be a technical method or system that uses a prompt with a specific data structure (e.g., a schema defining poetic devices, rhyme schemes) to control an LLM to generate a poem, resulting in less computational resource usage or more accurate generation of a specific style of poetry.
Q: What are some specific ‘technical effects’ of prompt technology?
A: Typical examples include reduced compilation error rates in code generation, savings in computational resources like GPU and memory, shorter response generation times, and improved output accuracy for specific data formats (JSON, XML, etc.). The important thing is that these effects must be measurable and reproducible.
Q: Do I need to draft claims differently for each country when filing internationally?
A: Yes, while the core strategy is the same, it is advantageous to tailor the emphasis to the points that each patent office values. For example, in a U.S. (USPTO) specification, you would emphasize the ‘concrete improvement of computer functionality,’ in Europe (EPO), the ‘technical effect through solving a technical problem,’ and in Korea (KIPO), the ‘concreteness of the system configuration and processing flow.’

In conclusion, there is a clear path to protecting AI prompts with patents. However, it requires a strategic approach that goes beyond the idea of ‘what to ask’ and clearly demonstrates ‘how to technically control and improve a computer system.’ I hope this article provides a small clue to turning your innovative ideas into powerful intellectual property. If you have any more questions, feel free to ask in the comments~ 😊

※ This blog post is intended for general informational purposes only and does not constitute legal advice on any specific matter. For individual legal issues, please consult a qualified professional.

Wednesday, September 3, 2025

LLM-Powered Patent Search from A to Z: From Basic Prompts to Advanced Strategy

 

Still Stumped by Patent Searches with LLMs? This post breaks down how to use the latest AI Large Language Models (LLMs) to maximize the accuracy and efficiency of your patent searches, including specific model selection methods and advanced ‘deep research’ prompting techniques.

Hi there! Have you ever spent days, or even weeks, lost in a sea of patent documents, trying to find that one piece of information you need? I’ve definitely been there. The anxiety of wondering, ‘Is my idea truly novel?’ can keep you up at night. But thanks to the latest Large Language Models (LLMs), the whole paradigm of patent searching is changing. It’s even possible for an AI to conduct its own ‘deep research’ by diving into multiple sources. Today, I’m going to share some practical examples of ‘prompt engineering’ that I’ve learned firsthand to help you unlock 200% of your LLM’s potential!

Prompt Engineering Tricks to Boost Accuracy by 200%

Choosing the right AI model is important, but the success of your patent search ultimately depends on how you ask your questions. That’s where ‘prompt engineering’ comes in. It’s the key to making the AI accurately grasp your intent and deliver the best possible results. Let’s dive into some real-world examples.

Heads Up!
LLMs are not perfect. They can sometimes confidently present false information, a phenomenon known as ‘hallucination.’ It’s crucial to get into the habit of cross-referencing any patent numbers or critical details the AI provides with an official database.

 

1. Using Chain-of-Thought for Step-by-Step Reasoning

When you have a complex analysis task, asking the AI to ‘show its work’ by thinking step-by-step can reduce logical errors and improve accuracy.

Prompt Example:
Analyze the validity of a patent for an ‘autonomous driving technology that fuses camera and LiDAR sensor data’ by following these steps.

Step 1: Define the core technical components (camera, LiDAR, data fusion).
Step 2: Based on the defined components, generate 5 sets of search keywords for the USPTO database.
Step 3: From the search results, select the 3 most similar prior art patents.
Step 4: Compare the key claims of the selected patents with our technology, and provide your final opinion on the patentability of our tech.
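The step-by-step structure above can be templated so every search request carries the same explicit reasoning chain. The sketch below is pure string assembly with no model call; the function name and wording are my own, not a feature of any tool.

```python
# Sketch of a reusable Chain-of-Thought prompt builder (plain string assembly).
def build_cot_prompt(technology: str, steps: list[str]) -> str:
    """Number the analysis steps and wrap them in a CoT instruction."""
    numbered = "\n".join(f"Step {i}: {s}" for i, s in enumerate(steps, 1))
    return (
        f"Analyze the validity of a patent for '{technology}' "
        f"by following these steps.\n\n{numbered}\n\n"
        "Show your reasoning at each step before moving to the next."
    )

prompt = build_cot_prompt(
    "autonomous driving technology that fuses camera and LiDAR sensor data",
    [
        "Define the core technical components.",
        "Generate 5 sets of search keywords for the USPTO database.",
        "Select the 3 most similar prior art patents.",
        "Compare key claims and give a final patentability opinion.",
    ],
)
print(prompt)
```

The benefit is consistency: swapping in a new technology or step list reuses the same verified scaffold instead of retyping the prompt each time.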

 

2. Using Real-Time External Information (RAG & ReAct)

LLMs only know information up to their last training date. To get the latest patent data, you need to instruct them to search external databases in real-time.

Prompt Example:
You are a patent analyst. Using your search tool, find all patent publications on KIPRIS related to ‘Quantum Dot Displays’ published since January 1, 2024.

1. Organize the list of patents by application number, title of invention, and applicant.
2. Summarize the overall technology trends and analyze the core technical focus of the top 3 applicants.
3. Based on your analysis, predict which technologies in this field are likely to be promising over the next two years.
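Under the hood, the ReAct pattern alternates Thought, Action, and Observation until the model can answer. The skeleton below illustrates that loop with a stubbed dictionary standing in for KIPRIS; a real agent would call an actual search API and an LLM at the marked points, and the sample records are invented.

```python
# Bare-bones ReAct-style loop. FAKE_KIPRIS is a stub for the real database;
# the records below are invented for illustration only.
FAKE_KIPRIS = {
    "quantum dot displays": [
        "10-2024-0001234 | QD color filter | Samsung Display",
        "10-2024-0005678 | QD-OLED stack | LG Display",
    ],
}

def search_tool(query: str) -> list[str]:
    """Action: look up publications (stand-in for a real search API call)."""
    return FAKE_KIPRIS.get(query.lower(), [])

def react_loop(question: str, max_steps: int = 3) -> list[str]:
    """Alternate Thought -> Action -> Observation until an answer is found."""
    trace = []
    for _ in range(max_steps):
        # In a real agent, an LLM proposes the next thought and action here.
        trace.append(f"Thought: I should search for '{question}'.")
        results = search_tool(question)
        trace.append(f"Observation: {len(results)} publications found.")
        if results:
            trace.append(f"Answer: {results[0]}")
            break
    return trace

for line in react_loop("Quantum Dot Displays"):
    print(line)
```

Seeing the trace makes the prompt above easier to debug: if the Observation step returns nothing, you know the search keywords, not the reasoning, need fixing.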

 

3. Activating the “Deep Research” Function

The latest LLMs can do more than just a single search. They have ‘deep research’ capabilities that can synthesize information from multiple websites, academic papers, and technical documents to create a comprehensive report, much like a human researcher.

Prompt Example:
Activate your deep research function. Write an in-depth report on the global R&D trends for ‘next-generation semiconductor materials using Graphene.’ The report must include the following:

1. The main challenges of the current technology and the latest research trends aimed at solving them (reference and summarize at least 3 reputable academic papers or tech articles).
2. An analysis of the top 5 companies and research institutions leading this field and their key patent portfolios.
3. The expected technology development roadmap and market outlook for the next 5 years.
4. Clearly cite the source (URL) for all information referenced in the report.

 

4. Exploring Multiple Paths (Tree of Thoughts)

This is useful for solving strategic problems with no single right answer, like designing around a patent or charting a new R&D direction. You have the AI explore and evaluate multiple possible scenarios.

Prompt Example:
Propose three new design concepts for a ‘secondary battery electrode structure’ that do not infringe on claim 1 of U.S. Patent ‘US 1234567 B2’.

1. For each design, clearly explain which elements of the original patent were changed and how.
2. Evaluate the technical advantages, expected performance, and potential drawbacks of each design.
3. Select the design you believe has the highest likelihood of avoiding infringement and achieving commercial success, and provide a detailed argument for your choice.
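The Tree of Thoughts pattern behind this prompt can be compressed into a branch-score-select loop. In the sketch below, the candidate designs and numeric scores are entirely made up, and the scoring function is a placeholder for the evaluation an LLM (or attorney) would perform against the actual claim elements.

```python
# Compressed Tree-of-Thoughts pattern: branch, score, keep the best.
# Candidate designs and scores are invented for illustration.
candidates = [
    {"design": "replace wound electrode with stacked plates", "risk": 0.2, "performance": 0.7},
    {"design": "substitute binder chemistry only", "risk": 0.6, "performance": 0.8},
    {"design": "segmented current collector", "risk": 0.3, "performance": 0.6},
]

def score(c: dict) -> float:
    """Toy evaluation: reward performance, penalize infringement risk."""
    return c["performance"] - c["risk"]

best = max(candidates, key=score)
print(best["design"])
```

The point of the pattern is that weaker branches are explored and rejected explicitly, which is exactly what item 3 of the prompt asks the model to argue for.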

💡 Pro Tip!
The common thread in all great prompts is that they give the AI a clear ‘role,’ explain the ‘context,’ and demand a ‘specific output format.’ Just remembering these three things will dramatically improve your results.

LLM Patent Search: Key Takeaways

Assign a Role: Give the AI a specific expert role, like “You are a patent attorney.”
Step-by-Step Thinking: For complex analyses, instruct the AI to use step-by-step reasoning (CoT) to improve logical accuracy.
Advanced Strategies: Use Deep Research and Tree of Thoughts to generate expert-level reports.
Cross-Verification is a Must: Always be aware of AI hallucinations and verify important information against original sources.

Frequently Asked Questions

Q: Is the ‘deep research’ function available on all LLMs?
A: No, not yet. It’s more of an advanced feature typically found in the latest premium versions of LLMs like Perplexity, Gemini, and ChatGPT. However, you can mimic a similar effect by using the standard search function and asking questions in multiple, sequential steps.
Q: Can I trust the search results from an LLM 100%?
A: No, you absolutely cannot. An LLM is a powerful assistant, not a substitute for a qualified expert’s final judgment. Due to hallucinations, it can invent patent numbers or misrepresent content. It is essential to always verify its findings against the original documents and have them reviewed by a professional.
Q: Prompt engineering seems complicated. Where should I start?
A: An easy way to start is by modifying the examples shown today. Just applying three techniques—’assigning a role,’ ‘specifying the format,’ and ‘requesting step-by-step thinking’—will dramatically improve the quality of your results.

Patent searching is no longer the tedious, uphill battle it once was. How you wield the powerful tool of LLMs can change the speed of your R&D and business. I hope you’ll use the tips I’ve shared today to create smarter innovations with AI. If you have any more questions, feel free to ask in the comments!

No More Cherry-Picking! A Practical Guide to the ‘Prior Art Reference as a Whole’ Principle Based on Supreme Court Decisions

Legal principles for identifying prior art: a practical commentary on the principle of comparing a prior art reference as a whole in inventive-step analysis. Legal Commentary · IP Law ...